Progress in NLG requires understanding what users want, creating high-quality datasets, building models and algorithms, and thoroughly evaluating systems. I remain disappointed that the research community seems fixated on building models and pays much less attention to user needs, datasets, and evaluation.
The most meaningful evaluation is when we test whether an NLG system actually achieves its communicative goal, e.g. helps people make better decisions or write documents faster. Unfortunately such “extrinsic” or “task” evaluation is rare in NLP; we need to see more such evaluations!
I’ve come to realise that there is some confusion, especially amongst newcomers to NLP/AI, about when a research paper can be presented at two venues. I try to explain the rules and principles as I understand them.
The ROUGE metric dominates evaluation of summarisation, and I do not understand why. I am not aware of good evidence that ROUGE predicts utility, and recent work by one of my students shows that character-level edit (Levenshtein) distance against a reference text is a better predictor of utility than ROUGE.
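For concreteness, the character-level edit (Levenshtein) distance mentioned above can be computed with the standard dynamic-programming recurrence. This is a minimal sketch of the textbook algorithm, not code from the student's work; the function name is my own.

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit (Levenshtein) distance between two strings."""
    # prev[j] holds the distance between a[:i-1] and b[:j]; we fill row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]

levenshtein("kitten", "sitting")  # → 3
```

Used as an evaluation metric, the distance between a system output and a reference text would typically be normalised (e.g. by the reference length) so that scores are comparable across texts.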
Some of my PhD students have recently looked at how many mistakes people (professionals, not Turkers) make when they do NLG-like tasks. The number of mistakes is considerably higher than we expected (although still much lower than the number of mistakes made by current neural NLG systems).
Both academic researchers and commercial NLG developers are interested in building NLG systems which describe sporting events. However, they care about different things. For example, many academics show little interest in use cases, domain knowledge, robustness, and high-quality input data, all of which are very important to commercial NLG developers.
NLG texts must be correct pragmatically as well as semantically. In particular, they must not contain statements which are contextually misleading even if they are literally true. We badly need better techniques for evaluating pragmatic accuracy as well as generating pragmatically correct texts.
Like many others, I am trying to do too much in my university academic role. I’m looking for areas where I can “do less” without having a major impact on research and teaching.
There is a lot of uninformed criticism of rule-based NLG in academic papers. In this blog I explain at a very high level how such systems work and what some of the main challenges are in building them.
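To give a flavour of the rule-based style (the blog post itself contains no code, so this is my own illustrative sketch, not an example from the post): such systems typically apply hand-written rules to the input data to choose messages, then realise those messages through templates or a grammar.

```python
def describe_temperature(temps: list[int]) -> str:
    """Hypothetical data-to-text rule set: summarise hourly temperatures (°C)."""
    lo, hi = min(temps), max(temps)
    # Rule 1 (content selection): pick a trend message from the data.
    if hi - lo < 2:
        trend = "will stay steady"
    elif temps[-1] > temps[0]:
        trend = "will rise"
    else:
        trend = "will fall"
    # Rule 2 (realisation): express the message with a fixed template.
    return f"Temperatures {trend}, between {lo}°C and {hi}°C."

describe_temperature([8, 9, 12, 14])
# → "Temperatures will rise, between 8°C and 14°C."
```

Real systems have hundreds of such rules, plus logic for data validation, aggregation, and referring expressions; much of the engineering challenge lies in acquiring and maintaining that domain knowledge.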
One of the challenges in data-to-text NLG is creating good summaries and insights when the input is flawed (incomplete, incorrect, or inconsistent). One of my PhD students has been working on this problem, and it is a hard one! But a good solution would be hugely valuable for society. I may be able to offer a PhD studentship in this area; contact me if interested.