ACL vs TACL Reviewing
This year I was both a TACL Action Editor and an ACL Senior Area Chair. This experience has reinforced my belief that the journal review process is better than conference reviewing!
We may see a big change in NLG evaluation over the next few years, with LLM-based evaluation replacing metrics such as BLEU and BLEURT, and a renewed emphasis on high-quality human evaluation to assess semantic and pragmatic correctness. It would be a step forward if this happens!
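To make this concrete, here is a minimal sketch (my own illustration, not from any particular paper or system) of what LLM-based evaluation might look like alongside a reference-based metric like BLEU. The judge model, prompt wording, and the sacrebleu/OpenAI calls are assumptions chosen for illustration, not a recommended setup.

```python
# Hedged sketch: scoring one candidate NLG output with BLEU and with an LLM judge.
import sacrebleu
from openai import OpenAI

reference = "The patient should take two tablets every morning."
candidate = "Take two tablets each morning."

# Reference-based metric: sentence-level BLEU via sacrebleu.
bleu = sacrebleu.sentence_bleu(candidate, [reference])
print(f"BLEU: {bleu.score:.1f}")

# LLM-as-judge: ask a model to rate semantic correctness against the reference.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
prompt = (
    f"Reference: {reference}\n"
    f"Candidate: {candidate}\n"
    "Does the candidate preserve the meaning of the reference? "
    "Answer Yes or No, then give a one-sentence justification."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice of judge model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Even in a toy example like this, the BLEU score and the LLM judgement can disagree, which is exactly why high-quality human evaluation still matters.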
Many problems in NLP papers can *not* be detected by reviewers who are checking submissions to conferences and journals. In medicine and many other fields of science, people can raise concerns about papers *after* they are published, and authors are expected to take this seriously. This is not the practice in NLP, which is a shame.
In our ReproHum project, we have found that many NLP experiments are flawed, and many authors do not respond to requests for more information about their work. This is depressing and hinders scientific progress in NLP.
I think there is a lot of potential in using chatGPT in healthcare, provided that we focus on real use cases instead of trying to debate whether chatGPT is somehow better than a doctor.
I love getting questions about how to evaluate chatGPT; they are much more constructive than speculation about whether it is a threat to humanity. We need to understand what LLM technology can and cannot do, and rigorous experiments are the best way to do this. I give some advice and caveats about evaluating chatGPT in this blog, and am happy to answer questions from people who want to do high-quality evaluations.
I don't like leaderboards, which encourage academics to write papers about small improvements on established tasks and datasets. I suspect (and hope) that chatGPT and similar systems will encourage people to move away from leaderboards. If so, this would be great!
Is fraud (e.g. fabricating or falsifying data) a problem in NLP? It certainly is a problem in other scientific areas, and it wouldn't surprise me if it affected NLP as well.
Since commercial researchers dominate the "hot" area of large language models, I've seen a number of people ask "what should academic researchers focus on?" There are of course huge numbers of exciting and valuable scientific research questions which are not of much commercial interest, including long-term work which won't pay off commercially for 10+ years, high-quality evaluation, socially useful but low-profit applications, and using NLP to research fundamental cognitive science questions.
A reader asked me how accurate chatGPT texts need to be. The answer is that it depends on context, including the use case, the workflow the texts fit into, and the type of error.