When we try to use ML in commercial NLG contexts, one of the challenges is that NLG developers want to be able to customise, configure, and control their systems. So we need ML approaches that do not prevent developers from configuring the things they are likely to want to change.
I’m beginning to think that in some ways the NLP community *encourages* researchers to use poor-quality or otherwise inappropriate data sets. Which is a truly depressing thought…
Some thoughts on key NLG challenges in explainable AI: evaluation, conceptual alignment, narrative. Comments are welcome!
Most research software never enters everyday operational use, partly because research projects rarely worry about issues such as maintainability, regulatory approval, and change management, which are essential to the long-term success of commercial software.
Some thoughts on the properties texts need in order to be good non-fictional narratives, and some speculation on how we might generate such texts.
A travelogue about my recent cycling holiday in Wales and England, where I saw many places related to my wife’s family history.
Unfortunately, I see many students (and indeed other people) make basic mistakes when evaluating machine learning, for classifiers as well as for NLG systems.