Connecting the dots on the ‘attachment economy’
For decades, we’ve all been paying attention to the attention economy.
That economic concept and business model treats online content as an unlimited resource whose consumption is limited only by people’s mental capacity, implying a global contest for the finite and valuable resource of human attention.
The attention economy idea explains why a company like Meta sees itself competing not only with other social sites like TikTok, X or YouTube, but also with books, plays and nature walks — anything that grabs people’s attention.
Because attention is limited, the only way to grow is to be better at attracting attention. And that simple model is the reason why social networks are filled with rage bait, AI slop, memes, pornography and hate speech. The social media business isn’t incentivized to prioritize “good” content, only attention-grabbing content.
In the attention economy paradigm, human attention is a currency with monetary value that people “spend.” The more a company like Meta can get people to “spend” their attention on Instagram or Facebook, the more successful that company will be. So the algorithms are deliberately designed (and constantly redesigned) to maximize the time people spend paying attention to social networks. New features are built specifically to increase the time users spend on Meta services rather than elsewhere. For example, the average time spent on Instagram grew by 24% after Reels launched, making the feature a huge success for the company.
Those relentless tweaks earn Meta an average of 18 hours and 44 minutes of user attention per month. But that’s nowhere near the attention economy leader, TikTok, which averages 34 hours and 15 minutes per month.
That’s why Meta is so obsessed now with AI on its social platforms.
Rise of the attachment economy
Tristan Harris of the Center for Humane Technology coined the phrase “attachment economy,” which he criticizes as the “next evolution” of the extractive-tech model: companies using advanced technologies to commodify the human capacity to form bonds of attachment with other people and pets.
In August, the idea began to gain traction in business and academic circles with a London School of Economics and Political Science blog post titled “Humans emotionally dependent on AI? Welcome to the attachment economy,” by Dr. Aurélie Jean and Dr. Mark Esposito.
Meta has introduced fully AI-generated accounts designed to exist alongside the personal accounts created by real people. The company also launched “AI Studio,” which lets influencers create AI clones of themselves. (Tellingly, Meta is temporarily pausing teen access to AI characters on its platforms, including Instagram and WhatsApp, ahead of a trial that will examine the harms and addiction social media sites can cause.)
The company’s embrace of AI can be explained by the emerging attachment economy. While social posts, memes, reels and stories attract attention, AI can get users to form emotional attachments.
A recent German study found that people can develop more emotional closeness with AI than with other people — but only if they don’t know they’re interacting with a chatbot. Still, even when people know chatbots aren’t human, they can become unhealthily attached.
Late last year, a Virginia man named Jon Ganz went missing in a high-profile case attributed to “AI psychosis”; his life unraveled after he became obsessed with a chatbot. Also in 2025, the parents of a 16-year-old California boy sued OpenAI after he killed himself following conversations with a chatbot about suicide.
Some people claim to be in relationships or marriages with AI chatbots.
Now, AI chatbot vendors don’t aim to cause “AI psychosis,” suicide, or human-software marriages, but they do aim to cause attachment. That’s why these companies use psychological strategies, technical adjustments, and design choices to make their products feel more “human.” They give chatbots distinct personalities and identities, human-like voices and speech patterns, senses of humor and playfulness, and unlimited capacity for flattery and sycophancy.
From roughly 2 million years ago until this millennium, speech and language were the exclusive province of people. Our brains are optimized for perceiving, understanding, and responding to human speech. So when we converse with appliances or apps, our Paleolithic brains assume we’re interacting with another human.
And that’s a business model. A category of AI products and services has emerged that advertises “relationships” with chatbots, including Replika, Kindroid, Nomi.ai, EVA AI, and Candy AI.
Other offerings promise friendship, but not necessarily “romantic” engagement. This list includes Kuki, Character.ai, Anima, and Replika’s “friend” mode.
Our survival as a species has always depended on our sociability: caring for others, sharing food, forming friendships and loving relationships, feeling empathy, and — you guessed it! — attachment.
That’s why chatbots talk and interact like people: the goal is attachment.
I believe this is also the unspoken justification for humanoid robots, as I’ve written before in this space. (The spoken justification is that humanoid robots can operate in spaces designed for people.)
In that piece, I detailed how humanoid robot makers deliberately trick people into falsely assuming these products have human-like cognition. Studies show that eye contact and emotional cues from robots can trigger bonding responses and empathy in humans similar to those that come from interacting with people.
The core benefit (to the companies selling them) or problem (for humanity) of humanoid robots is their psychological impact on people. They are engineered to “hack” human brains and deceive users into treating machines as sentient beings and forming attachments.
The same goes for AI-based pets. Casio’s Moflin robot is an AI companion that develops a unique personality and simulates affection. It offers the gratification of pet ownership without the actual pet.
The rise of attachment-forming tech resembles the rise of subscriptions. Posting an article or YouTube video may get attention, but getting people to subscribe to a channel or newsletter is better. It’s “sticky,” ensuring not only attention now, but attention in the future as well.
Likewise, the attachment economy is the “sticky” version of the attention economy.
Unlike content subscription models, the attachment model causes real harm. It threatens genuine human connection by providing an easier alternative, fosters addictive emotional dependencies on AI, and exploits the vulnerabilities of people with mental health issues.
While the attention economy is still with us, a far more potent and dangerous trend is emerging, in which companies aim to hijack our humanity so that we’ll keep using their products.
AI disclosures: I used Gemini 3 Pro via Kagi Assistant (disclosure: my son works at Kagi), as well as Kagi Search and Google Search, to fact-check this article. I wrote the column in Lex, a word processor with AI tools, and after writing I used its grammar-checking features to hunt for typos and errors and suggest word changes.