An Exploratory Study into Automated Précis Grading

LREC 2020  ·  Orphée De Clercq, Senne Van Hoecke

Automated writing evaluation (AWE) is a popular research field, but the main focus has been on evaluating argumentative essays. In this paper, we consider a different genre, namely précis texts. A précis is a written text that provides a coherent summary of the main points of a spoken or written text. We present a corpus of English précis texts, each of which received a grade assigned by a highly experienced English language teacher and was subsequently annotated following an exhaustive error typology. On this corpus we trained a machine learning model that relies on a number of linguistic, automatic summarization, and AWE features. Our results reveal that this model is able to predict the grade of précis texts with only a moderate error margin.
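The paper does not publish its feature set or model in this abstract, but the general approach — scoring a text from surface linguistic features fed into a learned predictor — can be sketched as below. The feature choices (token count, mean word length, type-token ratio) and the linear weights are purely illustrative assumptions, not the features or model from the paper.

```python
def precis_features(text):
    """Toy linguistic features of the kind an AWE grader might use.

    Returns [token count, mean word length, type-token ratio].
    The paper's actual feature set (linguistic, summarization, and
    AWE features) is richer; this is only a minimal illustration.
    """
    tokens = text.lower().split()
    n = len(tokens)
    mean_word_len = sum(len(t) for t in tokens) / n
    type_token_ratio = len(set(tokens)) / n
    return [n, mean_word_len, type_token_ratio]


def predict_grade(features, weights, bias):
    """Linear scoring sketch: weighted sum of features plus a bias.

    In practice the weights would be learned (e.g. by regression on
    teacher-assigned grades); the values passed in here are made up.
    """
    return bias + sum(w * f for w, f in zip(weights, features))
```

Usage: `predict_grade(precis_features(some_precis), weights, bias)` with weights fitted on the graded corpus; a regression learner would estimate `weights` and `bias` by minimizing error against the teacher's grades.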
