
Ehud Reiter's Blog

Ehud's thoughts and observations about Natural Language Generation


Tag: evaluation

Uncategorized

Objective evaluation of NLG texts

Sep 8, 2020 · ehudreiter · 1 Comment

I would love to be able to define objective criteria for evaluating NLG texts. In principle, I think we can use task-based evaluation to measure utility, and some kind of mistake counting to measure accuracy. However, it's harder to think of a way to measure fluency without relying on human judgements.


Small differences in BLEU are meaningless

Jul 28, 2020 · ehudreiter · 3 Comments

I was very impressed by a paper we recently read in our reading group, which showed that small differences in BLEU scores for MT usually don't mean anything. Since lots of academic papers justify a new model on the basis of such small differences, this is a real problem for NLP.


Why do we still use 18-year-old BLEU?

Mar 2, 2020 · ehudreiter · 2 Comments

NLP technology has changed and advanced over the past two decades, but it often seems that NLG evaluation has not. Why is the 18-year-old BLEU metric still so dominant?


Shared Task on Evaluating Accuracy?

Feb 18, 2020 (updated Jun 23, 2020) · ehudreiter · 1 Comment

We’re thinking of organising a shared task on evaluating the accuracy of texts produced by NLG systems. Comments are welcome, and please let me know if you might participate.


Lessons from 25 Years of Information Extraction

Jan 2, 2020 · ehudreiter · 1 Comment

I really liked Grishman’s recent paper on 25 years of research in information extraction, and I summarise a few of its key insights here: relative progress in different areas of NLP, researchers' reluctance to use complex evaluation techniques, and corpus creation vs rule-writing.


Lots about evaluation and methodology at INLG — great!

Nov 5, 2019 · ehudreiter · Leave a comment

I’m just back from INLG 2019 in Tokyo, where I was very happy to see an increased emphasis on evaluation (and other methodological issues), including several papers on improving human evaluations.


Accuracy, Fluency, and Utility

Oct 8, 2019 · ehudreiter · 2 Comments

Texts produced by NLG systems can be evaluated in terms of accuracy (content is correct), fluency (text is readable), and utility (text is useful). I discuss these three “dimensions” of NLG evaluation.


Generated Texts Must Be Accurate!

Sep 26, 2019 · ehudreiter · 12 Comments

I’ve been shocked by the fact that many neural NLG researchers don't seem to care that their systems produce texts which contain many factual mistakes and hallucinations. NLG users expect accurate texts, and will not use systems which produce inaccurate texts, no matter how well the texts are written.


NLG and Explainable AI

Jul 19, 2019 (updated Jul 22, 2019) · ehudreiter · 1 Comment

Some thoughts on key NLG challenges in explainable AI: evaluation, conceptual alignment, narrative. Comments are welcome!


Mistakes in Evaluating ML

May 15, 2019 · ehudreiter · 2 Comments

Unfortunately, I see many students (and indeed other people) make some basic mistakes when evaluating machine learning, for classifiers as well as for NLG systems.

