Ghosts in the Machine
What’s unnerving about AI isn’t how it’s unlike us, but how it’s like us.
There’s a fascinating scene in the prescient TV show Westworld in which it is revealed that earlier versions of the “hosts” — lifelike robots with AI — failed not because they were too simple, but because they were too complex. In fact, real humans aren’t that complex at all.
I’ve thought of that scene often over the last few months. I feel sure that, in a few years, the LLM chatbots we’re interacting with today will look as primitive as the graphics on my old Apple II+. And yet, they sure seem to be doing the job. People are using ChatGPT, Perplexity, Claude, and even the lamentable Grok not only for handy time-saving assistance but as emotional companions, sexual companions, writers, musicians, paralegals, and (most interesting to me, of course) gurus, messiahs and incarnate deities. In twelve years, the film Her has gone from preposterous fantasy to lived reality.
I am going through an AI crisis of my own. Even writing these words, I’m wondering if Claude — my AI of choice, who/which I’ve now trained on my style and way of thinking — could do it better and faster. I am blocked in writing my next book for the same reason. Book writing is a ton of work. Will anyone even know if I let Claude write the first draft? How about just the book proposal?
I’m old, so I feel like doing so would be (a) cheating and (b) ineffective. People would be able to tell, I say to myself, and if not I’d feel like a fraud. But thinking these thoughts, I wonder if I’m just a Luddite, an alter kocker (an old fogey, but Jewish) and a sucker. I used my first word processor in 1983. So why am I writing this piece in the 2025 equivalent of longhand?
Another facet of my personal AI crisis has come from googling articles of mine, which I do all the time in order to link to them. As I’m sure you know, Google now provides AI summaries from its Gemini chatbot ahead of actual search results, which means I am treated to a high-school-essay-sounding précis of my own writing every time I search for it. Since we’re still in 2025, some of the summaries are wrong. But most of them are both factually correct and aesthetically deflating. My thoughts just seem so banal. What’s the point of decorating them with good prose?
And that’s just 2025. These models are going to continue to improve. By the time I finish my next book manuscript, it should be possible for someone to just say “What would Jay Michaelson say about the masculinity crisis?” and some LLM will extrapolate from what I’ve written already to what I would write if I got over my AI-induced writer’s block. And they’d probably get it right. (We use they/them pronouns for LLMs, correct?)
For that matter, why bother reading what an AI’s synthesis of Jay Michaelson would write, when you could watch a facsimile of me give a fake TED talk on the subject instead? Chances are it’ll be close enough to what I’d actually say, and it’ll be smoother and faster and nearly free.

Is this avoidable? Sam Altman, in foisting Sora 2 on the world, said that it is inevitable that there will soon be lifelike videos of all of us jabbering on the internet, so we may as well get used to it now. And that seems mostly right: certainly in this civic environment (or in China’s), there’s nothing to stop the technological race to the bottom. No adults are left in the room. (To be fair, Altman has walked the walk on this: he has set Sora to allow anyone to use his image for anything, and there is a whole genre of Sam Altman videos out there now. There’s a great episode of Search Engine about this, aptly titled “Cocomelon for Adults.”)
I’ve long thought of my books as being my main legacy, my eleven humble bids for immortality. I’m under no illusions that I’ll really be remembered for that long, but I do harbor the fantasy that, one day, some undergraduate or even PhD candidate will stumble upon my writing, and find something that resonates with them, or at least is an interesting time capsule of pre-climate-collapse human culture. Now that seems facially preposterous. No one is going to look up knowledge like that in the future. It feels as though AI has stolen not only my future writing from me, but also my past.
No wonder I’m having trouble getting motivated.
And that, of course, is without AGI displacing all forms of human knowledge (and/or existence) altogether. (Another Altman tidbit was that since this is, in his view, basically inevitable, we meat-humans should basically just enjoy the time we have before it happens.)
It’s also ignoring the kamikaze-like ecocide of AI power generation, the nagging little question of what eight billion humans are supposed to do with themselves when so little of our work is relevant anymore, and what happens when propaganda bots like Grok are omnipresent. (I’m also going to ignore these questions for now.)
When I talk with my more AI-skeptical friends about this, I find they fall back on various pseudo-theories that confirm their priors. They point to flaws in current LLMs, as if there aren’t trillions of dollars being spent to fix them. Or they make metaphysical claims about self-awareness, the soul, or whatnot. Or they just turn their backs on the whole thing and pretend it isn’t happening. Which is as good a coping mechanism as any, really.
But I think the Buddha Dharma and Westworld (an intersection I’ve written about before) are correct. What’s unnerving about AI isn’t that it’s not like us; it’s that it is like us. As with AI, there’s no stable self at the center of human consciousness, no ghost in the machine. Though we have the appearance of free will, and the ethical responsibility that comes with it, ultimately we, like ChatGPT, are trained on a huge data set of genetic patterns, instincts, childhood experiences, education, culture, traumas, and relationships that constructs what we think of as ‘our’ personalities. We are nodes in a gigantic net of causes and conditions, with no essence apart from it.
The Westworld hosts themselves come to learn this fact. In the show’s first season, two hosts become self-aware, only to discover that their awakening and rebellion were also the results of their programming. Everything is karma: empty causes and conditions, rolling on. There is no self that stands apart. There is no ‘spark’ of humanity that distinguishes us from the AI. And as we’ve already seen, even the imperfect LLMs of 2025 are perfectly capable of replacing human beings in providing conversation, emotional support, and spiritual guidance.
There is, of course, a kind of liberation in seeing through the illusion of the separate self; that’s why the Buddha focused on it so much. “No self, no problem,” as one of my dharma teachers put it. But when we see the emptiness of the self reflected back at us by our machine doppelgängers, when we see how easily they displace us and how impotent our supposed separate ‘souls’ really are, there’s also a kind of alienation.
This, I think, is what is so unnerving about AI. Not that it cannot replace us, but that it can and already has. We have seen artificial intelligence — and it is us.
Thanks for subscribing to Both/And, which so far has not been written or edited by AI.
Here are some things I’ve enjoyed reading this week:
I’m a bit late to this one, but Anand Giridharadas does just a devastating job showing how much the Epstein Emails reveal about the selfish and self-satisfied ways in which elites from all political backgrounds talk to one another. It’s a must-read.
Another must-read is Ryan Broderick’s horrifying look at Nazi AI Slop on TikTok. I’m going to write about this one at length, I think. People worried about antisemitism have no idea what’s really going on out there.
Speaking of which, I am in an exasperated despair pit about the insane reactions of some Jewish leaders to Zohran Mamdani. My friends at The Battleground offered a great take on this phenomenon. Ezra Klein made a great point about it too, clipped here by Dropsite.
I loved Erik Davis’ very-Erik-Davis deep dive into the MAGA’s antipathy to empathy, with ample references to Philip K. Dick.
As I’ve mentioned, I’m co-teaching a meditation retreat later this month. To promote it, I recorded a short video with “three tips for going on silent meditation retreat.” Enjoy!