The BBC used Arria NLG to generate stories about the recent UK election. In this application, texts communicated a meaning, there was no corpus, accuracy was paramount, and domain experts wanted to control the system. Most applied NLG systems I have worked on have had similar constraints.
I’ve spent much of the past few weeks marking, but was nonetheless unable to give my students detailed feedback and critiques. My apologies!
I’m looking for a PhD student to work on explaining Bayesian Reasoning, as part of the NL4XAI project. Should be a great project!
There are lots of opportunities in Aberdeen for people interested in NLG, including faculty positions and PhD studentships at the university, and commercial software development jobs at Arria. Come join me and my colleagues!
I’m just back from INLG 2019 in Tokyo, where I was very happy to see an increased emphasis on evaluation (and other methodological issues), including several papers on improving human evaluations.
It can be very exciting to apply powerful analytics and ML techniques to analyse data sets, but we need to be careful; otherwise we risk drawing conclusions that the data does not support.
Texts produced by NLG systems can be evaluated in terms of accuracy (content is correct), fluency (text is readable), and utility (text is useful). I discuss these three “dimensions” of NLG evaluation.
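As a minimal sketch of keeping these dimensions separate (all names and the 1–5 scales here are hypothetical, not from any particular evaluation toolkit), one might record each human judgment as three distinct scores rather than a single rating:

```python
from dataclasses import dataclass

@dataclass
class NLGJudgment:
    """One human judgment of a generated text, scored on the three
    dimensions discussed above (hypothetical 1-5 Likert scales)."""
    accuracy: int  # is the content correct?
    fluency: int   # is the text readable?
    utility: int   # is the text useful to its reader?

def mean_scores(judgments: list[NLGJudgment]) -> dict[str, float]:
    """Average each dimension separately; the dimensions measure
    different things, so one combined score would hide problems."""
    n = len(judgments)
    return {
        "accuracy": sum(j.accuracy for j in judgments) / n,
        "fluency": sum(j.fluency for j in judgments) / n,
        "utility": sum(j.utility for j in judgments) / n,
    }

if __name__ == "__main__":
    ratings = [NLGJudgment(5, 4, 3), NLGJudgment(4, 5, 4)]
    print(mean_scores(ratings))
    # {'accuracy': 4.5, 'fluency': 4.5, 'utility': 3.5}
```

Reporting the dimensions separately matters because a text can score well on one and poorly on another, e.g. perfectly fluent but factually wrong.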