I think NLG can help humanise and democratise data and AI reasoning. If so, this would provide huge benefits to society in a world which will increasingly be driven by data and data-based reasoning.
A few observations (not recommendations!) about what it is like to work as a researcher in university and corporate contexts.
I would like to see more PhD students and postdocs “getting their hands dirty” by collecting real-world data, working with real-world users and experts, and conducting real-world evaluations with users. It's not easy, but engaging with the real world does help scientific and technological progress.
I recently attended a workshop on Safety for Conversational AI, which discussed how such systems could potentially harm people. Is it possible that NLG systems could harm their users too, perhaps even contributing to deaths in the worst case?