Date of Award

10-28-2021

Document Type

Thesis

Degree Name

Master of Science (MS)

Department

School of Information Technology: Information Systems

First Advisor

Xing Fang

Abstract

Under the influence of the COVID-19 pandemic, traditional in-person teaching has undergone significant changes, and online courses have become an essential mode of education. However, online teaching still lacks adequate evaluation approaches, which is why exams remain indispensable. Grading short answer exam questions, though, can be an onerous task. In this work, we propose a novel Automatic Short Answer Grading (ASAG) model based on the Sentence-BERT model. On the Short Answer Scoring V2.0 dataset, our proposed model improves accuracy, Macro F1 score, and Weighted F1 score compared to the results obtained from the BERT model. In addition, we compare different task functions and different answer lengths to further evaluate our model's performance. A better result is achieved when using the regression task function, and we find that shorter answers yield better results than longer answers.
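
The abstract describes grading student answers with a Sentence-BERT encoder and a regression-style task function. Since the thesis text itself is not reproduced here, the following is only a minimal sketch of that general approach, assuming the sentence-transformers library, a hypothetical pretrained checkpoint (all-MiniLM-L6-v2), and a made-up similarity-to-grade mapping; it is not the author's actual model or pipeline.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical checkpoint; the thesis does not name the exact pretrained model used.
model = SentenceTransformer("all-MiniLM-L6-v2")

reference_answer = "Photosynthesis converts light energy into chemical energy stored in glucose."
student_answer = "Plants use light to make glucose, storing energy chemically."

# Encode the reference answer and the student answer into sentence embeddings.
embeddings = model.encode([reference_answer, student_answer], convert_to_tensor=True)

# Cosine similarity between the two embeddings serves as a regression-style score.
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

# Hypothetical mapping from similarity in [0, 1] to a 0-5 grade band.
grade = round(max(0.0, similarity) * 5, 1)
print(f"similarity={similarity:.3f}, grade={grade}")
```

In practice, such a similarity score would typically be calibrated against human-assigned grades (for example, by fitting a regression head on labeled data) rather than scaled directly as in this toy example.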

Comments

Imported from Luo_ilstu_0092N_12055.pdf

DOI

https://doi.org/10.30707/ETD2021.20220215070317697127.999986

Page Count

45
