Parenthood has made me an easy (or at least easier) mark for stories about how social media and other novel attention-farming technologies are frying our children’s brains. And so I read James D. Walsh’s recent piece about higher education’s epidemic of AI-powered cheating with a sense of alarm that seemed to grow more acute with every paragraph. It wasn’t so much the widespread cheating itself that bothered me but the rapidity with which many kids (and, it must be said, many adults) have outsourced basic cognition to predictive text generators.
Walsh devotes some attention in the article to Chungin Lee, one of the co-creators of Cluely, a piece of software that “scans a user’s computer screen and listens to its audio in order to provide AI feedback and answers to questions in real time without prompting.” You may remember Cluely from their revolting viral ad, in which a user (played by Lee) uses AI to bullshit his way through a date:
Walsh’s piece quotes the company’s mission statement: “We built Cluely so you never have to think alone again.” And that hits on what I find so chilling about this tech, or at least the way it’s being marketed. Because thinking alone is sort of the core of being a complete person, to say nothing of being a good citizen of a democracy.
In an earlier post, I quoted Hannah Arendt’s observation that the Germans who refused to participate in Nazi atrocities were those who possessed “the disposition to live together explicitly with oneself, to have intercourse with oneself, that is, to be engaged in that silent dialogue between me and myself which, since Socrates and Plato, we usually call thinking.” To engage in that “silent dialogue,” you need to be capable of thinking alone — or not precisely alone, but in solitude. Because as Arendt notes elsewhere, in The Human Condition: “To be in solitude means to be with one’s self, and thinking, therefore, though it may be the most solitary of all activities, is never altogether without a partner and without company.”
Cluely’s explicit promise is to abolish solitude—and, in effect, to abolish thought. All dialogue with one’s self is to be replaced by queries put to a large language model.
This isn’t exactly a new promise. The attention economy has been waging a one-sided war on solitude for going on twenty years. Chris Hayes even wrote a great book about attention-fracking recently, called The Sirens’ Call. But prior to AI, the tools of attention-fracking generally offered a distraction from solitude: TikTok and Instagram are tools for killing time, for procrastinating on cognitively demanding tasks. Cluely, and in fact most companies selling public-facing LLMs, are instead offering machines that will do these tasks for you.
The purveyors of these tools treat this as a kind of freedom: you’ll never have to think alone again. Your teachers can’t force you to spend hours chipping away at a single essay — you can plug their essay prompt into an app and then use your time however you want. But while startups like Cluely promise freedom, they’re actually selling a form of voluntary subjugation. Because if you never learn to think — if you never spend enough time in intercourse with yourself to really get to know who you are — then you’ll never act freely. You’ll become one of those who, to again quote Arendt, “dispose of a set of learned or innate rules which we then apply to the particular case as it arises, so that every new experience or situation is already prejudged and we need only act out whatever we learned or possessed beforehand.”
That is not the life of a free person but of an automaton. Or, more accurately, it’s the life of an ideal totalitarian subject.
Which brings us to Grok. All of the tools that Walsh’s interviewees use to outsource cognition — Cluely, ChatGPT, Gemini, etc. — are in the hands of private firms who can manipulate their output however they see fit. If you think with Cluely instead of thinking alone, you are effectively thinking whatever Lee and his cofounder decide you should think. And Elon Musk has, as is his wont, deployed this power in the clumsiest and most overt way you can possibly imagine.
I’m referring to the little episode a couple weeks ago where Grok, the chatbot embedded in ex-Twitter, suddenly became obsessed with “white genocide” in South Africa. Max Read had a hopeful interpretation of the ensuing fallout:
Musk’s attempts to control and manipulate his A.I. may ultimately work against his interests: They open up a political, rather than a mystical, understanding of artificial intelligence. An A.I. that works like magic can have a spooky persuasive power, but an A.I. we know how to control should be subject to the same suspicion (not to mention political contestation) as any newspaper or cable channel. A.I. deployed as a propaganda machine is a much more familiar technology than A.I. deployed as an oracle.
Maybe so, but the political understanding is only accessible to people with some capacity to think critically and act as members of a democracy. That capacity is in pretty short supply: there are a lot of people in this country (including Elon Musk) who seem incapable of distinguishing factual information from obvious propaganda. And the more accustomed people get to using AI as an oracle — or as a substitute prefrontal cortex — the less ability they’ll have to make those kinds of judgments.
I don’t mean to sound paranoid. There’s a real conspiracy underway, in plain sight, to overthrow American democracy; we already have enough reasons to be alarmed without needing to invent additional, hidden conspiracies. I don’t think it’s likely that Sam Altman sees ChatGPT as the means by which he’ll install himself as cyberführer. In fact, many of the guys responsible for these tools appear to be high off their own supply: they’ve hypnotized themselves with their LLMs as much as they’ve ensorceled others. Elon Musk is an apparent case in point, as are both factions that warred for control of OpenAI in 2023.
What I’m really worried about is less an active plot than the terminal erosion of those habits of mind and cultural practices that sustain a mass democracy. We’ve arguably been in real trouble on that front since before Neil Postman wrote Amusing Ourselves to Death. We wouldn’t have gotten Donald Trump and the present crisis without televised infotainment’s anti-democratic properties and potentialities. But reality TV and cable news look primitive when compared to devices that promise to fully replace your internal dialogue.
Incidentally, here’s the CEO of Duolingo predicting that AI will one day effectively replace primary schoolteachers. In his optimistic view, chatbots will be thinking with your kids virtually from birth. Maybe they’ll never even learn what solitude is.