In 2019, LM output was fluent but not trustworthy; this is still true in 2024
In 2019 I told students that neural language models produced text which was fluent but whose content could not be trusted. In 2024 I told them the same thing. My high-level message hasn't changed despite the huge improvements in the technology, which makes me wonder whether this is a fundamental property of LLMs.