It is treated as “just another tool.” It is not. It is a powerful, deeply uneven, genuinely strange intelligence — and what you believe it is silently decides how you teach with it.
Before any optimism, name the problems plainly — and notice where each one lives. Some are problems with the AI we'd fix with better engineering. Some are problems in how humans relate to it. Some are problems for the society around it. The fixes are different for each column.
| AI problems (fix the system) | Human problems (fix the relationship) | Social problems (fix the institution) |
|---|---|---|
| Bias | Over-reliance | Existential / power concentration |
| Hallucination | Cognitive offloading | Bad-actor empowerment |
| Jagged intelligence / stupidity | Cognitive surrender | Systemic de-skilling |
| Sycophancy | Psychological attachment | Labor losses |
| Prompt-injection vulnerability | Illusion of understanding | Environmental cost |
| Sandbagging / gaming | Plagiarism | Slop-ification |
What's become clear in every AI debate is that people are not disagreeing about policy — they're disagreeing about what AI is. Tool or entity? Echo or intelligence? Like us or nothing like us? Pick a lens below and watch the same four situations change meaning under it.
The image from the workshop — a smiling face drawn over a writhing mass — is the point: a friendly chat surface over a process whose insides are not like ours. The intentional stance (treat it as a mind that believes and wants) only half-works. So does the tool stance. The honest answer is alien intelligence — and that answer changes the teaching question entirely.
The common responses — teach AI literacy; use it as a tool but know its limits; show students how it fails — are all reasonable and all too static. They freeze a snapshot of a system that will be different by next term, and they treat working with an alien intelligence like learning Word or Photoshop: a prescribed set of skills. It isn't. It is a how — flexible judgment under uncertainty.
So go back to basics. For the specific thing you are teaching:
1. Not the assignment — the capacity. What should be true of the student's mind afterward that wasn't before?
2. Learning lives in productive difficulty. Which difficulty, exactly, is doing the teaching?
3. Does the AI dissolve that productive struggle, scaffold it, or leave it untouched? The answer is empirical and current, not a matter of principle.
4. The same tool helps or harms depending on what the human does. Design decides which.
Session 4 turns these four questions into a working tool you can run against a real assignment.
Analog zones are places deliberately walled off from AI, not out of fear but to protect a specific cognitive moment. The case for them is institutional, not nostalgic:
An analog zone is a promise about where the struggle is real — backed, where needed, by structure rather than by an honor-code act of willpower.
If students are to graduate with full command of the tools of their age, some zones should require AI fluency. A real example, a Law & AI seminar built around exactly this requirement:
The promise to students is symmetrical: you will be taught, and will cultivate, the durable analytic skills of your field; and you will graduate fluent in the technologies your work will actually demand.