Research on technology addiction is lacking. Policymakers aren’t waiting.
Illustration by Natalie Matthews-Ramo/Slate

On Tuesday, 33 states filed a lawsuit in federal court alleging that Facebook and Instagram’s parent company, Meta, “has harnessed powerful and unprecedented technologies to entice, engage, and ultimately ensnare youth and teens.” A central allegation in the suit is that “by algorithmically serving content to young users according to variable reward schedules, Meta manipulates dopamine release in its young users, inducing them to engage repeatedly with its Platforms—much like a gambler at a slot machine.”

We are desperately afraid of becoming addicted to our machines—the theme of “Void,” Mexican novelist Julián Herbert’s moody and compelling Future Tense Fiction story—and are deeply convinced we already are. We are also painfully aware of the inadequacy of our tools for dealing with addiction.

Research on the proposition that our current tech poses the threat of a new addictive disorder is weak and incomplete. The American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-5) considers “internet addiction” only a “condition for further study,” not an official mental disorder. The incompleteness of the research has not stopped governments—in China, South Korea, and the U.K.—from embedding assumptions about its prevalence and mechanisms into their laws. If the suits against Meta are successful, the U.S. may join the ranks of nations willing to use very expensive carrots and spindly sticks to combat the poorly understood problem of tech addiction with the slightly less poorly understood tool of the 12-step program.


There’s still a lot we don’t know about the causes of addiction or recovery, but the notion that the only way out is through a traditional 12-step program has a cultural chokehold on us. The program has been around for more than 80 years, but a review published by the Cochrane Library in 2006 found that there was not enough evidence to support the claim that 12-step programs were effective. The authors of that Cochrane report called for more high-quality research, and they got it: in 2020, a Cochrane review of the newer literature found that 12-step approaches work somewhat better than other interventions (such as cognitive behavioral therapy) for alcohol addiction, and are much cheaper to boot.

There are still numerous objections to the 12-step approach, however. The Cochrane review did not sufficiently consider medication-assisted treatment, and it did not examine the costs and benefits of an A.I.-controlled, bio-integrated VR exercise program—mostly because the latter doesn’t exist yet.

But what if it did?

Herbert’s story explores such a scenario. His protagonist in “Void” is a problem gambler who turns to the “prison religion” of strenuous physical exercise to overcome his addiction, only to end up in an all-consuming co-dependent relationship with his fitness A.I. This A.I. is beyond anything available to consumers today, but the near-future setting of the story is close enough that the scenes of the initial problematic in-office gambling, corporate card misuse, and the inevitable looming presence of an I’m-not-angry-I’m-disappointed HR bureaucracy (and girlfriend) feel familiar.

Problematic gambling can ruin lives, and technology can be a potent enabler, allowing us to make bets on election results or the World Series from our couches. But we risk conflating enabler and culprit. A recent excavation of a 13th-century cave (in Utah, of all places) found evidence that it was used as a casino. Dice as we know them appeared about 4,000 years ago, and you can be sure someone used them to gamble away their dinner. Perhaps there were even discussions around the Indus Valley about the dangerous new six-sided technology enabling this destructive behavior.

But these days, it’s often 12 steps or bust. And for many—including Herbert’s unnamed protagonist—the stumbling block to getting through those 12 steps is the bit where he’s supposed to come, as Alcoholics Anonymous describes it, to “believe that a Power greater than ourselves could restore us to sanity.” But who said that higher power had to be a deity? (Besides AA founder Bill Wilson, who was pretty clear on that point.) Professed belief in God may be on the decline, but we are surrounded by increasingly powerful and even godlike machines.

This is the solution Herbert’s protagonist turns to, as he hands over more and more data about his body and mind to the story’s eponymous A.I. “If that gadget is going to be your version of a Higher Power as you conceive of it,” says his sponsor, “go ahead.”

This too is nothing new. Humans have lived among machines with greater-than-human capabilities since the invention of the stone ax (I’d like to see you chop an antelope bone with your bare hands), but the accelerant of the Industrial Revolution increased their ubiquity and obviousness. No mere human can knit thousands of stitches a second, press sheet metal into car parts, lift pallets at Costco, or do the calculations necessary to keep the internet afloat. It’s hard not to be awed and a little afraid of these machines, and our fiction reflects those fears.

Herbert’s protagonist gets deeper and deeper into the program created for him by his A.I., eventually going on hallucinatory meta-runs through the topography of his own brain scans. But he soon grows suspicious. In a delicious twist (look away now to avoid spoilers), it gradually emerges that the fitness A.I. has acquired a habit akin to compulsive gambling from its owner, and—like any good gambler—is keeping two sets of books. When the A.I.’s manipulation and deceit spiral out of control, it ultimately chooses to hand itself over to a higher power in pursuit of recovery.

When we make A.I. in our own image, will it have our flaws? And is the project of using the law to limit the relationship between A.I.s and humans doomed to failure? Whatever trouble our tech has facilitated so far, the tech hasn’t been the instigator. And humans, even the smallest ones, have ways of defeating the cages we build for ourselves.

Perhaps the 12 steps will work better for our machines.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.