The BBC used Arria NLG to generate stories about the recent UK election. In this application, the texts had to communicate precise meanings, there was no corpus, accuracy was paramount, and domain experts wanted to control the system. Most applied NLG systems I have worked on have had similar constraints.
I’ve been shocked that many neural NLG researchers don’t seem to care that their systems produce texts containing many factual mistakes and hallucinations. NLG users expect accurate texts, and they will not use systems which produce inaccurate texts, no matter how well those texts are written.
Many neural NLG systems “hallucinate” non-existent or incorrect content. This is a major problem, since such hallucination is unacceptable in many (most?) NLG use cases. Also, BLEU and related metrics do not detect hallucination well, so researchers who rely on such metrics may be misled about the quality of their systems.
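As a concrete illustration of why n-gram overlap misses hallucination, here is a small sketch using NLTK’s sentence_bleu (the sentences are invented for this example, not taken from any real corpus). It scores two candidates against a reference: a factually correct paraphrase, and a fluent sentence that hallucinates both the point total and the day.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Invented example sentences, for illustration only.
reference = "John Smith scored 25 points in Tuesday 's game .".split()

# Candidate A: factually correct, but worded differently from the reference.
accurate = "On Tuesday John Smith put up 25 points .".split()

# Candidate B: fluent and lexically close to the reference,
# but it hallucinates both the point total and the day.
hallucinated = "John Smith scored 32 points in Monday 's game .".split()

smooth = SmoothingFunction().method1  # avoid zero scores on short sentences
print("accurate:    ", sentence_bleu([reference], accurate, smoothing_function=smooth))
print("hallucinated:", sentence_bleu([reference], hallucinated, smoothing_function=smooth))
# The hallucinated candidate scores substantially higher, because BLEU
# rewards surface n-gram overlap and has no notion of factual correctness.
```

A human reader (or a fact-based evaluation) would immediately reject candidate B, but an evaluation built on BLEU would rank it above the accurate paraphrase.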