Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/169447
Title: A human-centric automated essay scoring and feedback system for the development of ethical reasoning
Authors: Lee, Alwyn Vwen Yen
Luco, Andrés Carlos
Tan, Seng Chee
Keywords: Humanities::Philosophy
Issue Date: 2023
Source: Lee, A. V. Y., Luco, A. C. & Tan, S. C. (2023). A human-centric automated essay scoring and feedback system for the development of ethical reasoning. Educational Technology and Society, 26(1), 147-159. https://dx.doi.org/10.30191/ETS.202301_26(1).0011
Journal: Educational Technology and Society 
Abstract: Although artificial intelligence (AI) is prevalent and impacts many facets of daily life, there is limited research on the responsible and humanistic design, implementation, and evaluation of AI, especially in the field of education. After all, learning is inherently a social endeavor involving human interactions, so AI designs need to be approached from a humanistic perspective, or human-centered AI (HAI). This study focuses on the use of essays as a principal means for assessing learning outcomes, through students’ writing in subjects that require arguments and justifications, such as ethics and moral reasoning. We considered AI with a human- and student-centric design for formative assessment, using an automated essay scoring (AES) and feedback system to address the issues of running an online course with large enrolment and to provide efficient feedback to students with substantial time savings for the instructor. The development of the AES system occurred over four phases as part of an iterative design cycle. A mixed-method approach was used, allowing instructors to qualitatively code subsets of data for training a machine learning model based on the Random Forest algorithm. This model was subsequently used to automatically score more essays at scale. Findings show substantial inter-rater agreement before model training was conducted, and the trained model achieved acceptable accuracy. The AES system’s performance was slightly less accurate than that of human raters but is improvable over multiple iterations of the design cycle. This system has allowed instructors to provide formative feedback, which was not possible in previous runs of the course.
URI: https://hdl.handle.net/10356/169447
ISSN: 1176-3647
DOI: 10.30191/ETS.202301_26(1).0011
Schools: School of Humanities 
Rights: This article of Educational Technology & Society is available under the Creative Commons CC BY-NC-ND 3.0 license (https://creativecommons.org/licenses/by-nc-nd/3.0/).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SoH Journal Articles

Files in This Item:
ETS_26_1_11.pdf (332.41 kB, Adobe PDF)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.