An Exploratory Study into Automated Précis Grading
Automated writing evaluation (AWE) is a popular research field, but the main focus has been on evaluating argumentative essays. In this paper, we consider a different genre, namely précis texts. A précis is a written text that provides a coherent summary of the main points of a spoken or written text. We present a corpus of English précis texts, each of which received a grade assigned by a highly experienced English language teacher and was subsequently annotated following an exhaustive error typology. With this corpus we trained a machine learning model that relies on a number of linguistic, automatic summarization, and AWE features. Our results reveal that this model is able to predict the grade of précis texts with only a moderate error margin.
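The grading setup described above (a regression model mapping text features to a teacher-assigned grade) can be sketched as follows. This is a minimal illustration, not the paper's actual model: the feature names, grade scale, and data values are all invented, and ridge regression stands in for whatever learner the authors used.

```python
# Hypothetical feature vectors, one per précis:
# [type_token_ratio, annotated_error_count, content_overlap_with_source].
# Grades are on an invented 0-20 scale; all numbers are illustrative only.
X = [
    [0.62, 3, 0.41],
    [0.55, 7, 0.30],
    [0.70, 1, 0.52],
    [0.48, 9, 0.22],
    [0.66, 2, 0.47],
]
y = [15.0, 10.0, 18.0, 7.0, 16.0]

def fit_ridge(X, y, lam=1e-3):
    """Ridge regression via the normal equations (pure Python, tiny data)."""
    A = [[1.0] + row for row in X]          # prepend a bias column
    n, d = len(A), len(A[0])
    # G = A^T A + lam * I,  b = A^T y
    G = [[sum(A[i][r] * A[i][c] for i in range(n)) + (lam if r == c else 0.0)
          for c in range(d)] for r in range(d)]
    b = [sum(A[i][r] * y[i] for i in range(n)) for r in range(d)]
    # Solve G w = b by Gaussian elimination with partial pivoting.
    for col in range(d):
        piv = max(range(col, d), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, d):
            f = G[r][col] / G[col][col]
            for c in range(col, d):
                G[r][c] -= f * G[col][c]
            b[r] -= f * b[col]
    w = [0.0] * d
    for r in range(d - 1, -1, -1):
        w[r] = (b[r] - sum(G[r][c] * w[c] for c in range(r + 1, d))) / G[r][r]
    return w

def predict(w, row):
    """Predicted grade for one feature vector."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], row))

w = fit_ridge(X, y)
# Mean absolute error on the (training) data -- the kind of "error margin"
# a grading model would be evaluated on, though here it is in-sample only.
mae = sum(abs(predict(w, row) - yi) for row, yi in zip(X, y)) / len(y)
```

In practice the feature set would be far richer (the paper combines linguistic, automatic summarization, and AWE features) and the error would be measured on held-out précis texts rather than the training data.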