Co-Leading Workshops in AI-Assisted Assessment

Teaching educators how to thoughtfully integrate AI tools into student assessment practices

September 2025

With my colleague Dr. Fabienne Lüthi, I designed and delivered a hands-on workshop at PHBern on using Custom GPTs to support fair, transparent, and learning-oriented assessment of student submissions. The workshop, attended by PHBern participants, addressed how AI can assist (not replace) professional judgment in grading. A similar workshop at a Gymnasium in Bern is planned for 2026.

My contribution

I worked on the technical aspects of the workshop alongside my co-lead: prompt engineering architecture, data protection protocols, system setup, and ethical guardrails. Together we designed a practical framework for building Custom GPTs (role definition, constraints, output formats, interaction rules, examples) and facilitated an iterative testing process where participants refined prompts to improve output precision and consistency.
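To illustrate the framework, here is a minimal sketch of how the five components might be assembled into a single Custom GPT system prompt. All names, wording, and the section layout are illustrative assumptions, not the workshop's actual template.

```python
# Illustrative sketch: composing a Custom GPT system prompt from the five
# framework components (role, constraints, output format, interaction
# rules, examples). Section headings and wording are assumptions.

def build_system_prompt(role: str, constraints: list[str],
                        output_format: str, interaction_rules: list[str],
                        examples: list[str]) -> str:
    """Combine the five framework components into one system prompt."""
    sections = [
        "# Role\n" + role,
        "# Constraints\n" + "\n".join(f"- {c}" for c in constraints),
        "# Output format\n" + output_format,
        "# Interaction rules\n" + "\n".join(f"- {r}" for r in interaction_rules),
        "# Examples\n" + "\n\n".join(examples),
    ]
    return "\n\n".join(sections)

prompt = build_system_prompt(
    role="You are an assessment assistant supporting, not replacing, a teacher.",
    constraints=["Never assign a final grade; propose one for human review.",
                 "Ground every comment in the rubric provided."],
    output_format="Per criterion: rubric level, evidence quote, one feedback sentence.",
    interaction_rules=["Ask for the rubric before assessing.",
                       "Flag uncertainty explicitly."],
    examples=["Criterion: Argumentation | Level 3 | Evidence: ... | Feedback: ..."],
)
print(prompt.splitlines()[0])  # → "# Role"
```

Keeping the components as separate parameters, rather than one hand-edited text blob, is what made the iterative testing loop practical: participants could vary one component at a time and compare outputs.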

I also created clear documentation of governance requirements: human-in-the-loop grading, bias awareness, transparency expectations, and concrete privacy measures (anonymization, limiting uploads, opting out of model training where the platform allows).
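The anonymization step can be sketched roughly as follows. This is a deliberately minimal illustration assuming e-mail addresses and a known list of student names are the identifiers to strip; a real pipeline would need broader PII handling (IDs, addresses, metadata) and human review.

```python
import re

# Minimal anonymization sketch: replace e-mail addresses and known student
# names with placeholders before a submission leaves the local machine.
# Assumption: names are known in advance; this is not robust PII detection.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str, known_names: list[str]) -> str:
    """Return text with e-mails and listed names replaced by placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    for i, name in enumerate(known_names, start=1):
        text = re.sub(re.escape(name), f"[STUDENT_{i}]", text)
    return text

sample = "Submission by Anna Muster (anna.muster@example.ch): my essay..."
print(anonymize(sample, ["Anna Muster"]))
# → "Submission by [STUDENT_1] ([EMAIL]): my essay..."
```

Numbered placeholders keep the mapping reversible on the teacher's side, so feedback generated on anonymized text can still be matched back to the right student.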

Fabienne brought expertise in educational science and pedagogy, covering assessment theory, rubric design, and feedback quality. Together we developed the complete workshop flow: from framing and use-cases through live demos, hands-on exercises, and critical reflection.

Outcomes

Participants moved from concept to working metaprompt in a single session, producing reusable structures transferable to their own disciplines (especially rubric alignment and feedback generation). We documented clear trade-offs: where Custom GPTs add value (consistency, efficiency, formative feedback) and where they fall short (subjective judgment tasks, limited reasoning transparency, bias risk, prompt sensitivity).