I’ve spent much of the past few weeks marking, but was nonetheless unable to give my students detailed feedback and critiques. My apologies!
I’m looking for a PhD student to work on explaining Bayesian Reasoning, as part of the NL4XAI project. Should be a great project!
There are lots of opportunities in Aberdeen for people interested in NLG, including faculty positions and PhD studentships at the university, and commercial software development jobs at Arria. Come join me and my colleagues!
I’m just back from INLG 2019 in Tokyo, where I was very happy to see an increased emphasis on evaluation (and other methodological issues), including several papers on improving human evaluations.
It can be very exciting to apply powerful analytics and ML techniques to analyse data sets, but we need to be careful, or we will make mistakes.
Texts produced by NLG systems can be evaluated in terms of accuracy (content is correct), fluency (text is readable), and utility (text is useful). I discuss these three “dimensions” of NLG evaluation.
I’ve been shocked that many neural NLG researchers don’t seem to care that their systems produce texts containing many factual mistakes and hallucinations. NLG users expect accurate texts, and will not use systems that produce inaccurate texts, no matter how well the texts are written.