
Generative AI, Pedagogy and Professional Judgement
AI doesn't change teaching - it reveals what teaching has always been
This theme focuses on generative AI as a pressure on professional judgement in teaching and assessment, raising questions about responsibility, authorship, standards, and institutional trust.
My recent work interrogates a question at the heart of AI's impact on education: not whether AI will change teaching, but what happens to professional judgement when AI actively participates in teaching rather than simply supporting it.
As Co-Chair of the TPACK Special Interest Group, I keep returning to a framework that's been central to understanding teaching expertise: Technological Pedagogical Content Knowledge. But AI doesn't sit neatly in the "T" of TPACK. It cuts across all three dimensions (technology, pedagogy, and content), reshaping their intersections in ways that challenge how we think about teaching itself.
Why AI is Different from Previous Educational Technologies
In classic TPACK terms, technological knowledge refers to understanding how particular technologies afford or constrain pedagogical approaches. This framing works well when technologies are largely instrumental: VLEs, lecture capture, polling tools, simulations.
Generative AI is fundamentally different. AI doesn't simply support teaching activities; it participates in them. It generates text, explanations, code, images, and feedback that can closely resemble the outputs we have traditionally treated as evidence of student learning. As a result, AI doesn't sit alongside pedagogy and content; it cuts across them, reshaping their intersections.
This is why staff unease around AI is often mischaracterised. It's not primarily a skills or confidence issue. What's being unsettled is confidence in professional judgement. Decisions about effort, authorship, understanding, and progression that were once implicit are now exposed and must be made explicit.
AI Forces Us to Confront What Teaching Actually Is
Through my recent presentations and writing, I've been exploring how AI reveals something we've long taken for granted: teaching is fundamentally the practice of judging learning.
Not delivering content, though we do that. Not designing learning activities, though that matters. But making continuous professional judgements: about what students understand, where they struggle, what they need next, whether they're ready, how to intervene, when intervention would help or hinder.
Assessment isn't one component of this practice. It's where these judgements become most explicit, most consequential, and, with AI, most disrupted.
Assessment design becomes fragile. When AI can generate plausible outputs, we can no longer assume traditional tasks reveal what we intend them to reveal. A literature review that once demonstrated research skill, critical synthesis, and academic writing now potentially demonstrates prompt engineering. The assessment hasn't changed. What it allows us to judge has.
The process of assessing becomes uncertain. When we read student work, we've always made inferences about understanding, effort, development. AI makes those inferences unreliable. We find ourselves second-guessing our judgements, looking for tells, questioning authenticity. The professional act of assessing becomes conscious and uncomfortable in ways it rarely was before.
And this reveals something deeper about teaching itself. AI exposes assessment, the act of judging learning, as central to what teaching is. Not as one component among others, but as the connective tissue of the entire practice.
AI Teacher Literacies as Judgement Literacies
This is why I've become sceptical of "AI literacy" frameworks that focus primarily on tool use or technical skills. What we need aren't AI skills; we need frameworks for exercising professional judgement under new conditions of uncertainty.
From my work with educators navigating AI integration, several themes emerge:
At the intersection of AI and subject knowledge: Teachers need to judge which aspects of disciplinary learning are vulnerable to automation, which may be amplified by AI, and which remain irreducibly human, contested, or contextual. Subject expertise becomes more important, not less.
At the intersection of AI and pedagogy: Teachers must judge when AI functions as a cognitive scaffold and when it becomes a shortcut that undermines learning intention. These decisions are necessarily situated, dependent on subject, level, cohort, and purpose.
At the intersection of AI and assessment: Teachers must maintain the legitimacy of academic judgement without defaulting to surveillance or prohibition. This involves designing assessment that makes disciplinary thinking and decision-making visible, even when AI is present.
What Concerns Me About Current Approaches
In presentations throughout 2024-2025, I've observed institutions rushing to develop AI policies while educators struggle with something deeper than policy can address.
The focus on detection rather than design. Institutions invest in AI detection tools rather than rethinking what assessments should evidence when AI exists.
The treatment of AI as skills gap. Professional development focuses on "how to use ChatGPT" rather than "how to maintain professional judgement about learning when AI can produce student-like outputs."
The assumption that prohibiting AI is possible or desirable. Some disciplines can meaningfully prohibit AI use; many cannot. Even where prohibition is possible, it may not be pedagogically sound: students will use AI in their future work, and education should prepare them to use it critically and ethically.
The failure to engage with what this means for teaching. If teaching is fundamentally about judging learning, and AI makes that judgement ambiguous, this isn't a problem we solve with better policies or detection tools. It's a transformation in the nature of professional practice.
Key Insights from Practice
As Co-Chair of the TPACK Special Interest Group, I'm currently editing a special issue on AI and TPACK for the Journal of Technology and Teacher Education, bringing together international scholars examining how generative AI challenges our understanding of technological pedagogical content knowledge. Parallel to this scholarly work, my role leading UCISA's Digital Capabilities Survey provides sector-level insight into how UK institutions are approaching digital literacies and AI. This dual perspective (research community leadership and sector strategy) shapes how I'm thinking about what educators actually need to navigate AI's impact.
Through my work leading digital education at DMU and advising D2L on instructional design, I'm seeing firsthand how institutions struggle less with AI tools themselves and more with maintaining confidence in academic judgement when traditional markers of student learning become ambiguous.
AI doesn't introduce judgement into teaching; it makes judgement visible. Decisions that were previously embedded in routines can no longer remain implicit.
Where This Work Continues
Beyond formal scholarly and sector work, I'm engaging with colleagues across institutions to surface where professional judgement now feels most exposed, least supported, or most contested. The questions that shape this work include:
Where do you feel most uncertain about your academic judgement now, compared to two years ago?
Which intersections (subject, pedagogy, assessment) feel most disrupted in your context?
What do institutions underestimate about the impact of AI on teaching quality?
If you're navigating these questions in your own teaching or institutional leadership, I'd welcome the conversation.
Presentations & Publications
Recent Writing:
📄 From TPACK to AI Teacher Literacies: extending a framework for professional judgement
Blog post, January 2026
[Read full post - link when published]
Conference Presentations & Panels:
🎤 Enhancing Education: How AI Complements Human-Centric Education
D2L Webinar, November 2024
Invited panelist exploring AI's potential for personalised learning while preserving human connection in teaching. Watch recording
🎤 Me, Myself and AI: Delivering Personalised Learning at Scale
Times Higher Education Digital Universities Week, Leeds, April 2023
Invited panelist discussing AI-enabled personalisation and its implications for teaching practice.
🎤 Reimagining Future Learning
D2L Webinar, June 2023
Panel discussion on how AI and emerging technologies reshape educational possibilities. Watch recording
