
Ehud Reiter's Blog

Ehud's thoughts about Natural Language Generation. Also see my book on NLG.


Category: building NLG systems

building NLG systems

Understanding what users want from NLG

Nov 6, 2025 · ehudreiter · Leave a comment

When building an NLG system, it really helps to understand what users want; this came up several times at the recent INLG conference. I discuss some of our work in this space, and give a few suggestions.

building NLG systems

Maintaining NLG Systems

Oct 21, 2024 · ehudreiter · 2 Comments

Deployed software systems need to be maintained as bugs emerge and as the domain and user needs evolve; this is perhaps especially challenging for systems based on LLMs. Unfortunately, little is known about maintaining NLG systems…

building NLG systems

The latest/trendiest tech isn't always appropriate

Aug 26, 2024 · ehudreiter · 2 Comments

Sometimes the latest technology is *not* appropriate for an NLG task. I saw this very strongly in the late 2010s with LSTMs (which do not work well for data-to-text), and continue to see this in 2024 (GPT4 is not always the best approach). Both researchers and developers need to be open-minded about alternative approaches.

building NLG systems

Well structured input data helps LLMs

Jun 3, 2024 · ehudreiter · 1 Comment

My student Barkavi Sundararajan has shown that LLMs do a better job at data-to-text if the input data is well structured. She will present a paper about this at NAACL.

building NLG systems

Problems in using LLMs in commercial products

Aug 24, 2023 · ehudreiter · Leave a comment

I see lots of big-picture talk about what LLMs can do, but at a practical level there are real challenges in using them in commercial applications. These include cost, stability, and need for human-in-loop, as well as use-case-specific challenges.

building NLG systems

LLMs and Data-to-text

Jun 29, 2023 · ehudreiter · 1 Comment

At this moment in time, chatGPT and other LLMs seem to be much better at the “language” side of data-to-text than the “content” side. Even on the language side, there are important caveats about real-world usage. Of course, the above may change as the technology improves.

building NLG systems

Can ChatGPT do Data-to-Text?

Jan 23, 2023 · ehudreiter · 4 Comments

Last week I played around with using chatGPT for data-to-text, and to be honest overall I was disappointed. A few people have asked me about this, so I’ve written up some of my notes here.

building NLG systems

chatGPT: Great science, unclear commercials, hate the hype

Dec 29, 2022 · ehudreiter · 4 Comments

I get asked a lot about chatGPT, so I thought I’d write a blog explaining my views, which focus on its impact on data-to-text NLG. Basically I think chatGPT is really exciting science which shows major progress on many of the challenges in neural NLG. However, commercial potential is unclear, and the media hype is annoying…

building NLG systems

Simple vs Complex Models

Oct 26, 2022 · ehudreiter · 1 Comment

I was very impressed by a recent talk about the power of simple white-box models in tasks such as medical diagnosis. I’d love to see more work on simple models in NLP and NLG!

building NLG systems

Summarisation datasets should contain summaries!

Oct 13, 2022 · ehudreiter · 5 Comments

The most popular datasets used in summarisation (CNN/DailyMail and XSum) do not actually contain summaries. I find this worrying. Surely the best way to make progress on summarisation is to use actual summarisation datasets, even if these are less convenient from a “leaderboard” perspective.


News: I am likely to retire in summer 2026. Looking for interesting things to do afterwards.

Top Posts & Pages

  • What LLMs cannot do
  • Publish in Journals!
  • Do LLMs cheat on benchmarks
  • Is building neural NLG faster than rules NLG? No one knows, but I suspect not.
  • We need better LLM benchmarks
  • Generated Texts Must Be Accurate!
  • Do We Encourage Researchers to Use Inappropriate Data Sets?
  • Google: Please Stop Telling Lies About Me
  • We Need Robust Ways to Select Content of NLG Texts
  • Benchmarks distract us from what matters
Blog at WordPress.com.