evaluation

How effective is prompting?

I was very impressed by a recent paper that compared prompting-based MT to MT based on trained models. The results are very interesting: prompting-based MT generates fluent texts, but these texts have accuracy problems. The paper itself is also an excellent example of a high-quality NLP evaluation, and I recommend it to anyone who wants to do good NLP evaluations.

academics

I don't like leaderboards

I don't like academic leaderboards. Poor scientific techniques, poor data, and poor evaluation mean that leaderboard results may not be worth much. I also suspect that the community's fixation on leaderboards means less research on important topics that do not fit the leaderboard model, such as understanding user requirements.