I am often asked how AI will impact university teaching of computer science. I’m not an expert on this, but I thought I’d make a few comments from the perspective of an AI researcher who has taught CS for 30 years. I will distinguish between impact on assessments, teaching methods, and curriculum.
Assessment and AI-enabled cheating
What people (both within and outside the university) seem to talk about most is how assessment needs to change because AI makes it easy for students to cheat. To be honest, I think anti-cheating strategies are much less important than better teaching methods and curriculum change, but (for better or for worse) people talk about this a lot.
Some of my students probably are trying to use AI to cheat, unfortunately. Also, in some cases weak students seem to decide not to bother engaging with difficult material, because they assume they can just use AI to pass the course. One of my courses includes an unusual programming assessment which is not straightforward to do using an LLM. This year attendance at lectures and labs fell quite a bit, while the failure rate skyrocketed.
Of course cheating is not new. There was a CS cheating scandal when I was an undergrad in 1981 (I was not involved!), and we have regularly had them during my 31 years teaching at Aberdeen. But in 2026 cheating seems more accepted and indeed “standard practice” for a **minority** of students.
However, I am not sure how much difference cheating actually makes. After cheating accusations in our MSc class 25 years ago, we ended up computing final degree classification (fail, MSc, MSc with distinction) in two ways: one way assumed that no cheating happened and took assessments at face value, and the other assumed that cheating happened and gave students no credit for the relevant assessments. There was *no* difference: everyone got the same degree classification in both scenarios! This is partially because the students who were accused of cheating in their assessments did poorly in their thesis/dissertation projects.
Given the above, I personally do not spend a lot of time on anti-cheating measures. I do tweak things a bit, but my focus is on supporting students who want to learn, not on stopping cheaters.
Teaching Methods and AI tutors
I think a more interesting and important issue is whether AI can help students learn, by serving as a tutor. Of course the idea of Intelligent Tutoring Systems has been around for decades, but real-world usage has been limited.
Will LLM-based tutors change this? Certainly I am seeing research papers on this (e.g., Yan et al 2025) and interest from online education providers such as Khan Academy. But I do not yet see widespread adoption, and some of the science seems dubious (retraction note). It does seem clear that using LLMs as effective tutors is not straightforward. In the words of Bastani et al 2025, “Our findings highlight the need for thoughtful integration of generative AI in educational settings to ensure that human learning is preserved”.
So like a lot of other complex use cases (including using AI to support patients), using LLM technology to help learners requires a lot of thought about wider educational context and goals, and careful experiments to ensure that the technology is actually helping people. But I think there is a lot of potential, and I expect that over the next 5-10 years we will see increasing use of AI tutors in CS teaching.
Curriculum: What do CS students need to learn in a world with AI assistants?
The most important question in this space is how AI impacts what we teach students. Cursor and others argue strongly that effective software development using AI assistants is a different process from traditional software development; Mealey presents an interesting perspective on what is needed to effectively use tools such as Claude Code.
Obviously this space is changing incredibly quickly, which makes it difficult to design appropriate curricula for a 3 or 4 year degree programme! But I personally think that there are some changes which we should start to make, including:
- Less focus on coding, more on other aspects of software development. Real-world software development teams include many people with a wide assortment of skills, including business analysis, software architecture, and software testing. But these topics are barely taught in most university CS degrees. I think our degrees would be more useful in general if we taught more about business analysis (etc), and this will become even more true as AI assistants spread.
- Explicit instruction in using AI assistants. We should explicitly teach students how to use AI assistants effectively, rather than treating their use as borderline cheating. I am retiring this summer, but if I were continuing to teach I would definitely start talking about this (and I have suggested this to other faculty who are picking up my courses).
Final thoughts
AI is going to change how CS teaching is done, and what is taught. The current fixation on stopping students from using AI to cheat is a distraction; what is more important is using AI tutors effectively and (most importantly) adjusting what we teach so that it is relevant to a world where software developers are expected to use AI assistants.