A faculty workshop, made explorable

Teaching with a new kind of intelligence.

Not a tool. Not a person. Something stranger — a powerful, uneven, alien sort of intelligence now sitting in every classroom. This is a place to understand how it actually works, what it does to learning, and how to design teaching and research around it on purpose.

4 sessions · 4 interactives · ~50-paper evidence base · Built for GIPA, open to anyone
The premise

Most AI advice skips the part that matters.

The usual workshop explains "next-token prediction," warns about plagiarism, and stops. That leaves teachers with a tool metaphor and a detection problem — and neither survives contact with what these systems have actually become.

This companion takes a different route. First, build an honest mental model of the machine — not the hype, not a cartoon, but the recursion that turns a word-guesser into an agent that can act in the world. Then ask the real pedagogical question: not "how much AI?" but "what learning are we trying to protect, and how do we design for it?" Then look hard at the evidence — which is genuinely mixed — and decide course by course, assignment by assignment.

Each session below pairs a short read with something you can actually operate. Move through them in order, or jump to whatever you came for.

The path

Four sessions, four ways in.

Session 1 · Mental models

Models, Chatbots, Agents

Build the machine yourself, one step at a time: a bare word-guesser becomes a chatbot, then thinks, then uses tools, then spawns swarms. The whole field of "AI" is one recursive idea.

Open the steppable machine →
Session 2 · Pedagogy

Teaching with a New Kind of Intelligence

Five ways of seeing AI — parrot, bad tool, good tool, superhuman, alien — and why the lens you pick changes everything. Then: the questions that decide what to protect.

Try the five lenses →
Session 3 · Evidence

Evidence, and What to Do with Uncertainty

Cognitive debt, the leveling effect, why sequencing beats bans, and why detection is a losing game. The research is mixed — here is what it actually supports.

Read the evidence →
Session 4 · Design

Designing Courses & Research Programs

A working tool: answer a few questions about an assignment and get a reasoned analog / hybrid / full-AI recommendation — grounded in the evidence, not in vibes.

Use the design tool →
Go deeper

The evidence underneath all of it.

Every claim here traces back to a corpus of roughly fifty papers, policy documents, and essays. There are two ways to dig in.

The research synthesis

The full ~50-paper review, readable end to end: cognitive costs, task design, AI's reasoning limits, institutional responses, and the open questions that aren't settled yet.

Read the synthesis →

The evidence explorer

A filterable map of the corpus — by finding, domain, and tension. Find the study behind any claim and follow it to the source.

Explore the corpus →
For GIPA participants

This site mirrors and extends the four working sessions from the May workshop. Use it to revisit the mental model before policy work, to bring a real syllabus to the design tool, or to send a skeptical colleague the one interactive that will change their mind. It is self-contained — it works offline, from a USB stick, forever.