I’m very excited to be part of a new project, NL4XAI (https://nl4xai.eu/), which is developing natural language techniques, including NLG, to explain the reasoning of AI systems. This is an EU project which is a collaboration with several NLG researchers (including Kees van Deemter, Claire Gardent, Albert Gatt, and Mariet Theune) as well as top researchers working on explanation and argumentation (including Jose Alonso, Carles Sierra, and Nava Tintarev). A great bunch of people!
NL4XAI is a “Marie-Curie” project, so we will be hiring 11 “early stage researchers” who are expected to complete a PhD during the project (but are paid proper salaries, not just a PhD stipend). One of these researchers will be based at Aberdeen, and work on explaining Bayesian reasoning.
Aberdeen PhD: Explaining Bayesian Reasoning
People have a hard time understanding probabilistic and Bayesian reasoning, because humans do not naturally think in probabilistic terms. In this project, we will develop natural-language-generation (NLG) techniques for explaining probabilistic reasoning, especially in Bayesian networks, to human users. Key issues include formulating explanations as a set of arguments and counter-arguments, presenting explanations as a story or narrative, adapting explanations to be appropriate for users with different levels of domain knowledge and mathematical expertise, communicating uncertainty and data quality issues, and evaluating whether explanations are useful or not (Reiter 2019).
The researcher will work under me at the University of Aberdeen, and be co-supervised by Prof Nir Oren (also at Aberdeen) and Dr Nava Tintarev (TU Delft); he or she will be expected to enrol for a PhD at Aberdeen. I am an expert on NLG, Prof Oren is an expert on argumentation and explanations, and Dr Tintarev is an expert on explanations of recommendations. The researcher will be part of the Computation and Language research group at Aberdeen, and work alongside other students and researchers who work on NLG and explanations. We will expect the researcher to work with users (eg, understand the problems users have with probabilistic reasoning, and evaluate the effectiveness of explanations) as well as write code and develop models.
During the course of the project, the researcher will have temporary secondments to Arria NLG (a commercial NLG company in Aberdeen) and TU Delft (with Dr Tintarev). There are also opportunities to work with Prof Ingrid Zukerman (Monash University, Australia), who is an expert on Bayesian reasoning and argumentation, and has a related research project. And of course the researcher will meet and collaborate with the other NL4XAI researchers.
This is a great opportunity to join a leading NLG research group and work with top researchers to develop innovative techniques for explaining complex reasoning!
For more information, see Further Particulars. Also feel free to email me if you have questions.
We are looking for people with good bachelors or masters degrees in Computer Science or AI, who are interested in and excited by the topic. In addition:
- Because the Marie-Curie programme is intended to encourage mobility across European countries, we cannot accept applicants who have spent more than 12 months in the UK (studying or working) during the past three years.
- Because the Marie-Curie programme is aimed at Early Stage Researchers, we cannot accept people who already have PhDs or otherwise have an established research career.
- We probably will not be able to get a UK work permit for this position, so applicants should be UK or EU citizens, otherwise have the right to work in the UK, or be able to secure an appropriate visa from UK Visas and Immigration.
You need to apply for this position through the central NL4XAI website:
- General information for all NL4XAI ESRs: https://nl4xai.eu/vacancies/
- Specific information for ESR3 (Aberdeen ESR): https://nl4xai.eu/vacancies/esr3/
The deadline is 14 February 2020, and you can apply for multiple NL4XAI positions at the same time. We hope to interview selected candidates in early March. If you are selected, you will need to start the position between 1 April 2020 and 1 October 2020.
Other NL4XAI positions
There are a lot of exciting positions in NL4XAI!
- NL4XAI-ESR1: Explaining black-box models in terms of grey-box twin models (with Jose Alonso, Santiago de Compostela)
- NL4XAI-ESR2: From grey-box models to explainable models (with Alberto Bugarin Diz, Santiago de Compostela) (I will help supervise this position)
- NL4XAI-ESR3: Explaining Bayesian networks – this is the position with me, described above
- NL4XAI-ESR4: Explaining logical formulas (with Kees van Deemter, Utrecht)
- NL4XAI-ESR5: Multimodal semantic grounding and model transparency (with Albert Gatt, Malta)
- NL4XAI-ESR6: Explainable models for text production (with Claire Gardent, CNRS Nancy)
- NL4XAI-ESR7: Argumentation-based multi-agent recommender system (with Carles Sierra, Barcelona)
- NL4XAI-ESR8: Customized interactive argumentation schemes for XAI (with Katarzyna Budzynska, Warsaw)
- NL4XAI-ESR9: Personalized explanations by virtual characters (with Mariet Theune, Twente)
- NL4XAI-ESR10: Interactions to mitigate human biases (with Nava Tintarev, Delft)
- NL4XAI-ESR11: Explaining contracts and documentation of assets for construction companies and real estate agents (with Hitoshi Yano, Indra)
NL4XAI has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860621.