Our 2022 Publications: NLG Evaluation, Requirements, Resources
I thought I’d end 2022 with a summary of the papers written by my students and me in 2022. All of them are about requirements, resources, and/or evaluation of NLG.
I don’t like academic leaderboards. Poor scientific techniques, poor data, and poor evaluation mean that leaderboard results may not be worth much. I also suspect that the community’s fixation on leaderboards means less research on important topics that do not fit the leaderboard model, such as understanding user requirements.
Quality assurance processes for academic research, notably peer review by unpaid volunteers, are very lightweight and miss many problems. Better quality assurance processes would require more resources and effort, but would result in more trustworthy papers.
I was very happy to win an INLG Test of Time award for my paper “An Architecture for Data-to-Text Systems”, so I thought I’d write a few comments on it.
Society (and most funding agencies) wants to see real-world benefits or “impact” from academic research. Of course not all research will have real-world impact, and impact may take years or decades to appear! I share some thoughts on types of impact, barriers to impact, and my personal experiences.
I’ve come to realise that there is some confusion, especially amongst newcomers to NLP/AI, about when a research paper can be presented at two venues. I try to explain the rules and principles as I understand them.
Like many others, I am trying to do too much in my university academic role. I’m looking for areas where I can “do less” without having a major impact on research and teaching.
When I asked participants what they most liked at the recent INLG conference, people highlighted events and sessions which focused on discussion and interaction, not technical research papers. Perhaps there is a lesson here that conferences should focus more on interaction and community, and not simply be regarded as venues for presenting research papers.
I was surprised to find out that some institutions require PhD students to publish a certain number of papers before they can graduate. This is not my view; my goal as a supervisor is to train students to be good scientists, and rigid publication targets are not appropriate for this goal.
I would like neural NLG researchers to focus on more challenging datasets, and I make some suggestions.