Generating basic skills reports for low-skilled readers

Sandra Williams, Ehud Reiter

Research output: Contribution to journal › Article › peer-review

35 Citations (Scopus)


We describe SKILLSUM, a Natural Language Generation (NLG) system that generates a personalised feedback report for someone who has just completed a screening assessment of their basic literacy and numeracy skills. Because many SKILLSUM users have limited literacy, the generated reports must be easily comprehended by people with limited reading skills; this is the most novel aspect of SKILLSUM, and the focus of this paper. We used two approaches to maximise readability. First, for determining content and structure (document planning), we did not explicitly model readability, but rather followed a pragmatic approach of repeatedly revising content and structure following pilot experiments and interviews with domain experts. Second, for choosing linguistic expressions (microplanning), we attempted to formulate explicitly the choices that enhanced readability, using a constraints approach and preference rules; our constraints were based on corpus analysis and our preference rules were based on psycholinguistic findings. Evaluation of the SKILLSUM system was twofold: it compared the usefulness of NLG technology to that of canned text output, and it assessed the effectiveness of the readability model. Results showed that NLG was more effective than canned text at enhancing users' knowledge of their skills, and also suggested that the empirical revise-and-test approach based on pilot experiments and interviews contributed substantially to readability, as did our explicit, psycholinguistically inspired models of readability choices.
Original language: English
Pages (from-to): 495-525
Number of pages: 31
Journal: Natural Language Engineering
Issue number: 4
Early online date: 24 Apr 2008
Publication status: Published - Oct 2008


