We talk about AI ethics in terms of data and privacy. But what about what it does to us?
May 2025
I want to start with an argument that seems, at first, to have nothing to do with artificial intelligence.
In his Lectures on Ethics, Immanuel Kant made an argument that has always struck me as peculiar. He claimed we shouldn't be cruel to animals, not because of what it does to them, but because of what it does to us. A man who shoots his ageing dog, Kant said, does not wrong the dog (Kant, 1997, 27:459). Animals, in his view, lack the rational nature that would make them ends in themselves. But in shooting the dog, the man erodes something within himself. He weakens what Kant, in The Metaphysics of Morals, called a "natural predisposition" toward empathy, a disposition that is "serviceable to morality" in his relations with other people (Kant, 1797/1991, 6:443).
The argument feels cold at first. Surely the dog matters? But Kant was pointing at something else entirely. He was interested in what cruelty does to the one who inflicts it. The worry wasn't the suffering of the animal. It was the coarsening of the human soul.
I've been thinking about this idea in a different context.
Conversations about AI ethics tend to focus on data. Don't feed company secrets into ChatGPT. Be careful with personal information. Read the privacy policy. These warnings are sensible and I don't dispute them. The ethical terrain of AI, when we bother to map it at all, usually runs in one direction: from us toward the systems, the companies, the potential for misuse.
But ethics has another direction. It can also point inward.
A study published in 2025 by researchers at MIT Media Lab caught my attention. Led by Dr Nataliya Kosmyna, the team measured what happens to our brains when we outsource thinking to language models (Kosmyna et al., 2025). They recruited 54 participants and divided them into three groups. One group wrote essays using ChatGPT. Another used search engines for research. The third wrote entirely on their own.
The findings were striking. Participants who relied on AI showed weaker neural connectivity, particularly in brain regions associated with memory, creativity, and self-awareness. The differences weren't subtle. The AI-assisted group exhibited up to 55% reduced connectivity compared to those who wrote unaided. When later asked to recall key points from their own essays, 83% of the AI users couldn't do it. They had produced something without ever quite possessing it.
More troubling still, these effects persisted. When ChatGPT users were switched to writing independently, they struggled to regain their former cognitive engagement. The researchers coined a term for this: cognitive debt. Like technical debt in software, it accumulates silently. You borrow mental effort from the future, and the interest compounds.
This brings me back to Kant.
His argument about animals wasn't really about animals. It was about the shape of a human life, the kind of person one becomes through repeated action. If cruelty toward creatures that cannot reason back makes us worse at reasoning ethically with those who can, then the act carries moral weight regardless of the victim's moral status.
I wonder if something similar applies to how we use AI.
When I let a language model do my thinking, I'm not just outsourcing a task. I'm degrading a capacity. The struggle to move from confusion to clarity, to wrestle a half-formed idea into words: that is thinking. It's not a cost to be minimised; it's the thing itself. And when I skip it, I don't stay the same person who simply worked more efficiently. I become someone slightly diminished, someone less capable of the effort next time.
The MIT researchers put it directly: LLMs spare the user mental effort in the short term but generate long-term costs, among them diminished critical thinking, reduced creativity, and weakened independent thought (Kosmyna et al., 2025).
This isn't an argument against using AI. I use it myself. But there's a difference between using a tool to extend your capabilities and using one to replace them. A calculator doesn't threaten my ability to reason mathematically, because I still understand what multiplication is. But if I never learned arithmetic at all, if I simply trusted the machine from the start, I'd be diminished in ways I might not even notice.
Philosophers sometimes speak of qualia, the felt quality of experience. The redness of red. The ache of loss. The satisfaction of finally grasping an idea that had eluded you. These are not outputs. They are processes. And some processes can only happen if you're the one doing them.
There's a scene in Good Will Hunting where Robin Williams's character confronts Matt Damon's about all the knowledge he's accumulated without ever having lived any of it. Ask him about Michelangelo, the therapist says, and he could give you the skinny on every art book ever written. But he couldn't tell you what it smells like in the Sistine Chapel (Van Sant, 1997).
Language models know everything and nothing at all. They have ingested more text than any human could read in a thousand lifetimes. But they have never stood anywhere, never struggled through a thought, never felt the quiet relief of finally understanding something difficult. When we outsource our thinking to them entirely, we risk a similar fate. Informed but untouched. Articulate but empty.
I'm not suggesting we should abandon AI out of some romantic attachment to mental labour for its own sake. The technology is genuinely useful. It can help with iteration, with brainstorming, with seeing problems from angles we hadn't considered. But there's a posture we might adopt when using it, a kind of discipline.
Use AI to extend your brain, not to replace it. Let it be a collaborator in thinking, not a substitute for it. Make sure the source of the thinking remains you: your curiosity, your effort, your willingness to sit with confusion until it resolves. The model can help you refine a thought, but you should be the one who had it first.
This is, I think, a duty we owe ourselves. Not because AI is evil, or because technology is the enemy of the authentic life. But because certain capacities only develop through use, and atrophy without it. "Use it or lose it" isn't just folk wisdom; it's a principle of neural development (Shors et al., 2012).
Kant believed we could wrong not just others but ourselves. If I numb my capacity for empathy, I have violated a duty I owe to my own humanity. If I let my thinking muscles wither through disuse, perhaps the same logic applies.
So here is my modest proposal: use AI, but use it in a way that makes you better, not worse. Not just more efficient. Better. More capable of thought, not less. More articulate from your own resources, not dependent on borrowed eloquence.
This isn't about purity. It's about virtue, in the old sense of the word: excellence of character achieved through practice. The Stoics knew this. Kant knew it too, in his own severe way. And now, in an age where thinking can be outsourced with a few keystrokes, perhaps we need to remember it again.
The ethical question of AI runs in two directions. One points outward, toward data and privacy and the systems we're building. The other points inward, toward the kind of people we're becoming.
Both deserve our attention.
References
Kant, I. (1991). The metaphysics of morals (M. Gregor, Trans.). Cambridge University Press. (Original work published 1797)
Kant, I. (1997). Lectures on ethics (P. Heath, Trans.; P. Heath & J. B. Schneewind, Eds.). Cambridge University Press.
Kosmyna, N., et al. (2025). Neural connectivity patterns during AI-assisted writing tasks. MIT Media Lab Working Papers.
Shors, T. J., Anderson, M. L., Curlik, D. M., & Nokia, M. S. (2012). Use it or lose it: How neurogenesis keeps the brain fit for learning. Behavioural Brain Research, 227(2), 450-458.
Van Sant, G. (Director). (1997). Good Will Hunting [Film]. Miramax Films.