People who search for me in Google see a Google-generated box which incorrectly says that I am Israeli. Google has ignored my complaints about this; they don't seem to care about the accuracy of their content-production algorithm. Which is ethically pretty dubious!
In both NLG and MT contexts, deep learning approaches can produce texts which are fluent and readable but also incorrect and misleading. This is problematic when accuracy matters more than readability, as is the case in most NLG contexts.
I am very happy to be involved in a new project, PhilHumans, which is exploring how AI can help users interact with personal health apps.
Many NLP (and AI) researchers focus on publishing in conferences, which I think is a shame. Publish in journals instead!
The most talked-about NLG application at INLG was product descriptions. These are very interesting, and quite different from financial reporting, the other application that seems to be taking off.
Many neural NLG systems “hallucinate” non-existent or incorrect content. This is a major problem, since such hallucination is unacceptable in many (most?) NLG use cases. Also, BLEU and related metrics do not detect hallucination well, so researchers who rely on such metrics may be misled about the quality of their system.
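To see why BLEU misses hallucination, here is a minimal sketch using a simplified sentence-level BLEU (clipped n-gram precisions up to bigrams, plus a brevity penalty; the example sentences are invented, not from any real system):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, hypothesis, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of clipped
    n-gram precisions (up to max_n) times a brevity penalty."""
    log_precisions = []
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(reference, n))
        hyp_counts = Counter(ngrams(hypothesis, n))
        # Clip each hypothesis n-gram count by its count in the reference
        clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        if clipped == 0:
            return 0.0
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty: penalise hypotheses shorter than the reference
    bp = (1.0 if len(hypothesis) >= len(reference)
          else math.exp(1 - len(reference) / len(hypothesis)))
    return bp * math.exp(sum(log_precisions) / max_n)

reference = "company profits rose by five percent in the third quarter".split()
# One changed word reverses the meaning (a classic hallucination pattern)
hallucinated = "company profits fell by five percent in the third quarter".split()

score = bleu(reference, hallucinated)
print(f"BLEU = {score:.2f}")
```

The hallucinated sentence states the opposite of the reference, yet it still scores around 0.84 because almost all of its n-grams match. A metric built on surface overlap simply cannot see that the one word it missed was the one that mattered.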
From a commercial perspective, I think NLG is currently most successful in financial reporting. Although of course there are many great NLG applications in other sectors!