Offline Sentence Processing Measures for testing Readability with Users

Advaith Siddharthan, Napoleon Katsos

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution



While there has been much work on computational models that predict readability from the lexical, syntactic and discourse properties of a text, there are also interesting open questions about how computer-generated text should be evaluated with target populations. In this paper, we compare two offline methods for evaluating sentence quality: magnitude estimation of acceptability judgements and sentence recall. These methods differ in the extent to which they can differentiate between surface-level fluency and deeper comprehension issues. Most importantly, we find that the two measures correlate. Magnitude estimation can be run on the web without supervision, and the results can be analysed automatically. The sentence recall methodology is more resource intensive, but allows us to tease apart the fluency and comprehension issues that arise.
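To make the reported correlation concrete, the sketch below shows how per-sentence scores from the two measures could be compared with a Pearson correlation coefficient. The scores are invented for illustration only (they are not the paper's data), and the variable names are hypothetical:

```python
# Hypothetical illustration of correlating two offline readability measures.
# The score lists below are invented for demonstration purposes only.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-sentence scores: normalised magnitude-estimation acceptability
# judgements and recall accuracy (both assumed to be in [0, 1]).
magnitude_estimation = [0.9, 0.4, 0.7, 0.2, 0.8]
sentence_recall = [0.85, 0.5, 0.65, 0.3, 0.75]

r = pearson(magnitude_estimation, sentence_recall)
print(round(r, 3))
```

A strong positive `r` on such paired scores is the kind of evidence that would support using the cheaper, web-deployable magnitude-estimation method as a proxy for the more resource-intensive recall study.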
Original language: English
Title of host publication: Proceedings of the NAACL 2012 Workshop on Predicting and Improving Text Readability (PITR 2012)
Number of pages: 8
Publication status: Published - 2012


