As we begin to rely on AI tools in our daily lives, it's natural to ask what we might be losing in the process. Are we empowering ourselves, or ceding our ability to think? This general question has been debated for thousands of years; in fact, one of the earliest and sharpest critiques of a new technology comes from ancient Athens. What I find interesting is that the critique itself may contain a practical answer. In this post, I'll explore the idea that how we use these new tools matters more than whether we use them, and that an old method of inquiry might be a remedy for keeping them on our side. I've come to think there are two fundamentally different ways people use AI: one that makes the technological fears come true, and one that answers them. But to explain why, I'll start with the story that illustrates the original argument.
Theuth and Thamus
In Plato's Phaedrus, Socrates tells a story about the Egyptian god Theuth, an inventor who brings his creations before King Thamus for judgment. Among them is writing. Theuth is proud of his work and presents it as a gift that will improve both memory and wisdom. Thamus is not impressed. Writing won't give people wisdom, he argues. It will give them the appearance of wisdom. They'll be able to retrieve information without understanding it, and they'll mistake one for the other. They'll stop nurturing knowledge within themselves and rely instead on external marks.
It's a compelling and relatable story, and it carries an amusing irony that Plato certainly intended: we only know about Socrates' oral critique of writing because Plato wrote it down. There's also a word buried in this passage that I think is useful today in its original form. When Theuth presents writing to Thamus, he calls it a "pharmakon," a Greek word that means both remedy and poison (and, depending on context, a magical charm). Thamus essentially responds, "you say remedy, I say poison."
Most translations choose one meaning or the other, collapsing the tension into something cleaner. Jacques Derrida, in his 1972 essay Plato's Pharmacy, argued that this collapse is the wrong move. The value of the word is that it holds both meanings at once: you can't separate a technology's capacity to help from its capacity to harm; they are two sides of the same coin. The philosopher Bernard Stiegler later generalized this insight, arguing that all technologies carry this dual nature. Every tool that extends a human capacity also risks atrophying it.
The pattern repeats
This fear has resurfaced with every major technology since writing: the printing press, television, the internet, social media. In each case, the fear was partially justified; these technologies really did change how people think, and not always for the better. Each was a pharmakon in its turn, and AI tools are the latest example.
Several writers have already drawn the line from Phaedrus directly to AI. The parallel is hard to miss, because AI tools give us the ability to retrieve, synthesize, and generate knowledge without necessarily understanding any of it. A person can "prompt" their way to a plausible-sounding argument on a topic they know nothing about, or "vibe code" a software project they don't understand. Socrates would have recognized this immediately. It is precisely the mistaken feeling of knowledge he warned against.
However, while I think the diagnosis is accurate, it is often incomplete, stopping at a cautionary observation. What interests me more is whether the same tradition that identified the problem also offers a practical solution.
Socrates' remedy
I think it does, and fittingly, the solution is what Socrates was most famous for: the process of inquiry that bears his name. The Socratic method is known for relentless questioning, a refusal to accept surface answers, and a willingness to follow a line of reasoning wherever it leads. This was what Socrates valued about live dialogue and found missing in writing. Writing just sat there repeating itself. It couldn't answer back, couldn't adapt to challenges, couldn't be pushed into territory its author hadn't anticipated, and so couldn't participate in discourse.
This is where AI becomes genuinely interesting, and where the analogy to writing breaks down in a productive way. AI does answer back. It is the first text-based technology that can participate in something resembling dialectic (not just information retrieval). You can push it, challenge its assumptions, ask it to justify itself, and take the conversation in directions neither participant anticipated. In this narrow but important sense, it is closer to what Socrates actually valued than books ever were.
But only if you use it that way, and that is the key issue. An empty prompt screen is a blank canvas that can be filled with any thought imaginable. In practice, I think there are two distinct modes of use that lead to very different outcomes, which I'd describe as consumptive and discursive.
The consumptive mode treats the AI as an oracle. You ask a question, you receive an answer, you accept it. You ask it to write something, it writes it, you use it. The interaction is essentially one-directional, and it produces output to be consumed. This is the mode that fulfills Socrates' fears perfectly. The user gains the appearance of knowledge or capability without the underlying understanding. And because the output is fluent and confident, it is easy to mistake it for genuine insight, or the product of one's own mind.
The discursive mode treats the AI as a thought partner. You bring your own thinking, challenge the AI's responses, push back when something feels wrong, and follow threads that emerge from the exchange. The interaction is genuinely bidirectional: both participants shape the direction. This is much closer to the Socratic method, not because the AI is simulating Socrates, but because the practice of engaged questioning produces the same effect that Socrates argued for. Understanding is built through the process of inquiry, not delivered as a finished product.
One caveat: not all discourse is genuine inquiry. It's entirely possible to engage in extended back-and-forth with an AI while never actually challenging your own assumptions, using the AI essentially as a mirror that reinforces your existing beliefs. Socrates drew the same line between genuine dialectic and sophistry, and it applies here too.
There's a technical dimension to this picture worth considering. A generative pretrained AI model, left unchallenged, may produce the most probable output given its training data: a regression to the mean, not wisdom. It sounds authoritative because it's fluent and familiar, but it's entirely conventional. This can be ideal for some consumptive uses, but when applied to creative work, there's a risk of outputs converging toward the center of the distribution, where everyone's results look the same (the poison). By adopting a discursive mode and pushing back, you nudge the model toward a direction more of your own making (likely an outlier). The Socratic method, in this framing, may be a variance-preserving operation that keeps the AI pharmakon on the remedy side.
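To make the variance point a little more concrete, here's a toy sketch. This is not a real language model; the distribution, its labels, and the two "modes" are entirely my own illustrative invention. It only shows the statistical intuition: always accepting the most probable option collapses every run to the same conventional answer, while rejecting the conventional options and sampling from what remains preserves variety among the outliers.

```python
import random

# A made-up distribution over possible "directions" a model might take.
# Conventional options dominate, as they would in training data.
distribution = {
    "conventional_a": 0.40,
    "conventional_b": 0.35,
    "unusual_c": 0.15,
    "outlier_d": 0.10,
}

def greedy_pick(dist):
    """Consumptive mode: always accept the single most probable output."""
    return max(dist, key=dist.get)

def steered_pick(dist, rejected):
    """Discursive mode: push back by rejecting options, then sample
    from the renormalized remainder (weighted random choice)."""
    remaining = {k: v for k, v in dist.items() if k not in rejected}
    r = random.random() * sum(remaining.values())
    for k, v in remaining.items():
        r -= v
        if r <= 0:
            return k
    return k  # float-rounding fallback: last remaining option

random.seed(0)
greedy_runs = {greedy_pick(distribution) for _ in range(100)}
steered_runs = {
    steered_pick(distribution, {"conventional_a", "conventional_b"})
    for _ in range(100)
}

print(greedy_runs)   # every run collapses to the same mode
print(steered_runs)  # the less probable options stay in play
```

Under this (admittedly cartoonish) framing, pushing back acts like the "variance-preserving operation" described above: it doesn't make the model smarter, it just stops every conversation from settling into the same most-probable groove.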
A closing note
I should conclude by mentioning that some of the ideas in this post were themselves refined through the kind of discursive AI use I've been describing. The core connections (between pharmakon and AI, and between Socrates' critique and the Socratic method as its own answer) emerged through extended conversation with Claude Opus. I pushed back repeatedly, rejected suggestions that felt generic, and followed my own curiosity rather than accepting the default direction. When I asked for feedback on early drafts, the model repeatedly suggested tweaks that optimized the piece for marketing appeal, which I had to aggressively deflect. Oddly enough, that experience reinforced many of the points I'm making here!
None of this resolves the tension Plato highlighted, and the idea of a technological pharmakon is evergreen. But in the case of AI, I think this ancient discursive practice may be a remedy for the poison these new tools could otherwise become.