Anya Belz and I are looking for a research fellow to work on a new project on reproducibility of human evaluations of NLP systems. This is a great opportunity for a researcher who wants to improve the scientific quality of human evaluations in NLP!
In 2016, I was shocked by the poor scientific quality of research in neural NLG. Fortunately, the situation is better in 2021! However, progress has been less than I had hoped, I think in part because the “leaderboard” culture does not encourage good science.
I’m looking for a PhD student to work on using AI and NLG to help cancer patients who are managing their condition at home. The student will be jointly supervised by people at Aberdeen’s Medical School. I think this is a very exciting PhD, and a chance to work on ideas that could make a real difference to people’s lives!
Why would anyone use a Bayesian model instead of a neural model in clinical decision support? Perhaps because the Bayesian model is much easier to justify and adapt to a changing world. Explaining Bayesian models is also a really interesting research challenge, and one of my colleagues has funding for a PhD student in this area.
I think NLG can help humanise and democratise data and AI reasoning. If so, this would provide huge benefits to society in a world which will increasingly be driven by data and data-based reasoning.
A few observations (not recommendations!) about what it is like to work as a researcher in university and corporate contexts.
I would like to see more PhD students and postdocs “getting their hands dirty” by collecting real-world data, working with real-world users and experts, and conducting real-world evaluations with users. It’s not easy, but engaging with the real world does help scientific and technological progress.
I recently attended a workshop on Safety for Conversational AI, which discussed how such systems could potentially harm people. Is it possible that NLG systems could harm their users, maybe even contributing to death in the worst case?