Automated essay scoring to assess digital literacy competence

Yeni Anistyasari*, Ekohariadi, Tri Rijanto, Shintami C. Hidayati

*Corresponding author for this work

Research output: Contribution to journal › Conference article (peer-reviewed)

Abstract

Students today, particularly in the Faculty of Engineering, must acquire critical, cognitive, social, operational, emotional, and projective digital literacy skills. An essay test is one technique that may be used to measure these skills. However, evaluating essay tests manually is time-consuming and subjective, resulting in inconsistent measurement outcomes. The objectives of this study were (1) to analyze the construct validity of digital literacy competencies, (2) to analyze the reliability of scores obtained from the automated essay scoring test, and (3) to analyze the difference between scores obtained from automated essay scoring and scores assessed manually. The measurement data were analyzed using factor analysis to evaluate construct validity, Cronbach's alpha estimation to generate reliability coefficients, and inter-rater kappa to ascertain the level of agreement between scores obtained through automated essay scoring and scores assessed manually. The results indicate that the digital literacy competence construct is valid, that the scores obtained from the automated essay scoring tests are reliable, and that the scores obtained from automated essay scoring and those assessed manually are equivalent.
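The reliability and agreement statistics named in the abstract can be illustrated with a minimal sketch. The functions and score data below are hypothetical, not taken from the study; they only show the textbook formulas for Cronbach's alpha and Cohen's kappa that analyses of this kind rely on.

```python
# Hedged sketch: Cronbach's alpha (internal-consistency reliability) and
# Cohen's kappa (inter-rater agreement), using only the standard library.
# The score data here are illustrative, not the study's actual measurements.

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item).
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals))."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa between two raters' categorical scores:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Illustrative use: automated scores vs. manual scores on four essays.
auto_scores = [3, 2, 3, 1]
manual_scores = [3, 2, 2, 1]
print(cohen_kappa(auto_scores, manual_scores))
```

In the study's setting, a kappa near 1 would indicate that automated essay scores and manually assessed scores agree beyond chance, which is the basis for the equivalence claim in the abstract.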

Original language: English
Article number: 060030
Journal: AIP Conference Proceedings
Volume: 3116
Issue number: 1
DOIs
Publication status: Published - 24 May 2024
Event: 2023 Electronic Physics Informatics International Conference, EPIIC 2023 - Tangerang, Indonesia
Duration: 25 Aug 2023 - 26 Aug 2023
