TY - JOUR
T1 - Automated essay scoring to assess digital literacy competence
AU - Anistyasari, Yeni
AU - Ekohariadi
AU - Rijanto, Tri
AU - Hidayati, Shintami C.
N1 - Publisher Copyright:
© 2024 Author(s).
PY - 2024/5/24
Y1 - 2024/5/24
N2 - Students nowadays must acquire critical, cognitive, social, operational, emotional, and projective digital literacy skills, particularly in the Faculty of Engineering. An essay test is one technique that may be used to measure these skills. However, evaluating essay tests is time-consuming and subjective, resulting in variable measurement outcomes. The objectives of this study were (1) to analyze the construct validity of digital literacy competencies, (2) to analyze the reliability of the scores obtained from the automated essay scoring test, and (3) to analyze the difference between scores obtained from automated essay scoring and scores assessed manually. The measurement data were analyzed using factor analysis to evaluate construct validity, Cronbach's alpha estimation to generate reliability coefficients, and inter-rater kappa to determine the level of agreement between scores obtained through automated essay scoring and scores assessed manually. The results indicate that the digital literacy competence construct is valid, that the scores obtained from the automated essay scoring test are reliable, and that the scores obtained from automated essay scoring and those assessed manually are equivalent.
AB - Students nowadays must acquire critical, cognitive, social, operational, emotional, and projective digital literacy skills, particularly in the Faculty of Engineering. An essay test is one technique that may be used to measure these skills. However, evaluating essay tests is time-consuming and subjective, resulting in variable measurement outcomes. The objectives of this study were (1) to analyze the construct validity of digital literacy competencies, (2) to analyze the reliability of the scores obtained from the automated essay scoring test, and (3) to analyze the difference between scores obtained from automated essay scoring and scores assessed manually. The measurement data were analyzed using factor analysis to evaluate construct validity, Cronbach's alpha estimation to generate reliability coefficients, and inter-rater kappa to determine the level of agreement between scores obtained through automated essay scoring and scores assessed manually. The results indicate that the digital literacy competence construct is valid, that the scores obtained from the automated essay scoring test are reliable, and that the scores obtained from automated essay scoring and those assessed manually are equivalent.
UR - http://www.scopus.com/inward/record.url?scp=85194700090&partnerID=8YFLogxK
U2 - 10.1063/5.0210437
DO - 10.1063/5.0210437
M3 - Conference article
AN - SCOPUS:85194700090
SN - 0094-243X
VL - 3116
JO - AIP Conference Proceedings
JF - AIP Conference Proceedings
IS - 1
M1 - 060030
T2 - 2023 Electronic Physics Informatics International Conference, EPIIC 2023
Y2 - 25 August 2023 through 26 August 2023
ER -