In 1869, a group of Massachusetts reformers persuaded the state to try a simple idea: counting. The Second Industrial Revolution was belching its way through New England, teaching mill and factory owners a lesson most M.B.A. students now learn in their first semester: that efficiency gains tend to come from somewhere, and that somewhere is usually somebody else. The new machines weren’t just spinning cotton or shaping steel. They were operating at speeds that the human body—an elegant piece of engineering designed over millions of years for entirely different purposes—simply wasn’t built to match. The owners knew this, just as they knew that there’s a limit to how much misery people are willing to tolerate before they start setting fire to things.
Still, the machines pressed on.
So Massachusetts created the nation’s first Bureau of Statistics of Labor, hoping that data might accomplish what conscience could not. By measuring work hours, conditions, wages, and what economists now call “negative externalities” but was then called “children’s arms torn off,” policy makers figured they might be able to produce reasonably fair outcomes for everyone. Or, if you’re a bit more cynical, a sustainable level of exploitation. A few years later, with federal troops shooting at striking railroad workers and wealthy citizens funding private armories—leading indicators that things in your society aren’t going great—Congress decided that this idea might be worth trying at scale and created the Bureau of Labor Statistics.
Measurement doesn’t abolish injustice; it rarely even settles arguments. But the act of counting—of trying to see clearly, of committing the government to a shared set of facts—signals an intention to be fair, or at least to be caught trying. Over time, that intention matters. It’s one way a republic earns the right to be believed in.
The BLS remains a small miracle of civilization. It sends out detailed surveys to about 60,000 households and 120,000 businesses and government agencies every month, supplemented by qualitative research it uses to check and occasionally correct its findings. It deserves at least some credit for the scoreboard. America: 250 years without class warfare boiling over into revolution. And you have to appreciate the entertainment value of its minutiae. The BLS is how we know that, in 2024, 44,119 people worked in mobile food services (a.k.a. food trucks), up 907 percent since 2000; that nonveterinary pet care (grooming, training) employed 190,984 people, up 513 percent; and that the United States had almost 100,000 massage therapists, with five times the national concentration in Napa, California.
These and thousands of other BLS statistics describe a society that has grown more prosperous, and a workforce endlessly adaptive to change. But like all statistical bodies, the BLS has its limits. It’s excellent at revealing what has happened and only moderately useful at telling us what’s about to. The data can’t foresee recessions or pandemics—or the arrival of a technology that might do to the workforce what an asteroid did to the dinosaurs.
I am referring, of course, to artificial intelligence. After a rollout that could have been orchestrated by H. P. Lovecraft—“We are summoning the demon,” Elon Musk warned in a typical early pronouncement—the AI industry has pivoted from the language of nightmares to the stuff of comas. Driving innovation. Accelerating transformation. Reimagining workflows. It’s the first time in history that humans have invented something genuinely miraculous and then rushed to dress it in a fleece vest.
There are gobs of money to be made selling enterprise software, but dulling the impact of AI is also a useful feint. This is a technology that can digest a hundred reports before you’ve finished your coffee, draft and analyze documents faster than teams of paralegals, compose music indistinguishable from the genius of a pop star or a Juilliard grad, code—really code, not just copy-paste from Stack Overflow—with the precision of a top engineer. Tasks that once required skill, judgment, and years of training are now being executed, relentlessly and indifferently, by software that learns as it goes.
AI is already so ubiquitous that any resourceful knowledge worker can delegate some of their job’s drudgery to machines. Many companies—Microsoft and PricewaterhouseCoopers among them—have instructed their employees to increase productivity by doing just that. But anyone subcontracting tasks to AI is clever enough to imagine what might come next—a day when augmentation crosses into automation, and cognitive obsolescence compels them to seek work at a food truck, pet spa, or massage table. At least until the humanoid robots arrive.
Many economists insist that this will all be fine. Capitalism is resilient. The arrival of the ATM famously led to the employment of more bank tellers, just as the introduction of Excel swelled the ranks of accountants and Photoshop spiked demand for graphic designers. In each case, new tech automated old tasks, increased productivity, and created jobs with higher wages than anyone could have conceived of before. The BLS projects that employment will grow 3.1 percent over the next 10 years. That’s down from 13 percent in the previous decade, but 5 million new jobs in a country with a stable population is hardly catastrophic.
And yet: There are things that economists struggle to measure. Americans tend to derive meaning and identity from what they do. Most wouldn’t want to do something else, even if they had any confidence—which they don’t—that they could find something else to do. Seventy-one percent of respondents to an August Reuters/Ipsos poll said they’re worried that artificial intelligence will “put too many people out of work permanently.”
This data point might be easier to dismiss if the modern mill and factory owners hadn’t already declared that AI will put people out of work permanently.
In May 2025, Dario Amodei, the CEO of the AI company Anthropic, said that AI could push unemployment to between 10 and 20 percent in the next one to five years and “wipe out half of all entry-level white-collar jobs.” Jim Farley, the CEO of Ford, estimated that it would eliminate “literally half of all white-collar workers” in a decade. Sam Altman, the CEO of OpenAI, revealed that “my little group chat with my tech-CEO friends” has a bet about the inevitable date when a billion-dollar company is staffed by just one person. (The business side of this magazine, like some other publishers, has a corporate partnership with OpenAI.) Other companies that have recently announced layoffs—Meta, Amazon, UnitedHealth, Walmart, JPMorgan Chase, and UPS among them—have framed the cuts more euphemistically, in sunny reports to investors about the rise of “automation” and “head count trending down.” Taken together, these statements are extraordinary: the owners of capital warning workers that the ice beneath them is about to crack—while continuing to stomp on it.
It’s as if we’re watching two versions of the same scene. In one, the ice holds, because it always has. In the other, a lot of people go under. The difference becomes clear only when the surface finally gives way—at which point the range of available options will have considerably narrowed.
AI is already transforming work, one delegated task at a time. If the transformation unfolds slowly enough and the economy adjusts quickly enough, the economists may be right: We’ll be fine. Or better. But if AI instead triggers a rapid reorganization of work—compressing years of change into months, affecting roughly 40 percent of jobs worldwide, as the International Monetary Fund projects—the consequences will not stop at the economy. They will test political institutions that have already shown how brittle they can be.
The question, then, is whether we’re approaching the kind of disruption that can be managed with statistics—or the kind that creates statistics no one can bear to count.
AUSTAN Goolsbee is the president of the Federal Reserve Bank of Chicago, the Robert P. Gwinn Professor of Economics at the University of Chicago’s Booth School of Business, and a former chair of the Council of Economic Advisers under Barack Obama. He’s also one of the few economists you would not immediately regret bringing to a party. When I asked Goolsbee if any conclusive data indicated that AI had begun to eat into the labor market, he delivered an answer that was both obvious and unhelpful, smiling as he did it. The nonanswer was the point.
I’ve known Goolsbee long enough to enjoy these moments, when he makes fun of our shared uselessness. Economists are rarely equipped to give straight answers about the present. Journalists hate when the future won’t reveal itself on deadline.
We spoke in September, shortly after the release of what’s come to be known as “The Canaries Paper,” written by three academics from the Stanford Digital Economy Lab. By crunching data from millions of monthly payroll records for workers in jobs with exposure to generative AI, the authors concluded that workers ages 22 to 25—the canaries—have seen about a 13 percent decline in employment since late 2022.
For several days, the paper was all anyone in the field wanted to talk about, and by talk about I mostly mean punch holes in. The report overemphasized the effect of ChatGPT. Youth employment is cyclical. The same period saw a sharp interest-rate spike—a far more likely source of turbulence. “Canaries” also contradicted a study released a few weeks earlier by the Economic Innovation Group, which argued that AI is unlikely to cause mass unemployment in the near term, even as it reshapes jobs and wages. That paper was knowingly titled “AI and Jobs: The Final Word (Until the Next One).”
This was the point Goolsbee wanted to emphasize: Economists are constrained by numbers. And numerically speaking, nothing yet indicates that AI has had an impact on people’s jobs. “It’s just too early,” he said.
A lack of certainty should not be mistaken for a lack of concern. Part of the Fed’s mandate is to promote maximum employment, so the corporate pronouncements about imminent job loss have Goolsbee’s attention. But the numbers don’t add up. It’s possible that the labor market is softer than it looks, with the softness being absorbed within firms rather than showing up in the unemployment rate. If companies are sitting on more workers than they need, though—a phenomenon known as labor hoarding—you’d expect that to reveal itself as weak productivity growth. It’s as predictable as a hangover: too many workers, not enough work, sagging productivity. “But it’s been totally the opposite,” Goolsbee said. “Productivity growth has been really high. So I don’t know how to reconcile that.”
Productivity is the cheat code for a more prosperous society. If each worker can produce more in the same hour—more goods, better services, faster results—then the total economic pie grows, even if the number of workers doesn’t. It’s the rare efficiency gain that expands the pie rather than merely redistributing slices.
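A worked version of that arithmetic, as a sketch (the decomposition is standard growth accounting; the 2 percent figure is purely illustrative):

\[
Y = L \times h \times p
\quad\Longrightarrow\quad
\frac{\Delta Y}{Y} \approx \frac{\Delta L}{L} + \frac{\Delta h}{h} + \frac{\Delta p}{p}
\]

where \(Y\) is total output, \(L\) the number of workers, \(h\) average hours per worker, and \(p\) output per hour. Hold \(L\) and \(h\) flat, raise \(p\) by 2 percent, and the whole pie grows by roughly 2 percent without a single new hire.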
America has been on a productivity tear for the past few years. It might be temporary, the result of a onetime boost, such as the COVID-era boom in new small businesses. But with the special joy of someone paid to complicate everything, Goolsbee pointed out that general-purpose technologies such as electricity and computing can create lasting productivity gains, the kind that make whole societies wealthier.
Whether AI is one of those technologies will become clear only over time. How long until we know? “Years,” Goolsbee said.
In the meantime, there’s another complication. The immediate risk to employment may not be AI itself, but the way companies, seduced by its promise, overinvest before they understand what it can actually do. Goolsbee reached back to the internet bubble, when companies spent wildly on laying fiber cables and building capacity. “In 2001, when we found out that the growth rate of the internet is not going to be 25 percent a year, but merely 10 percent—which is still a pretty great growth rate—it meant we had way too much fiber, and there was a collapse of business investment,” Goolsbee said. “And a bunch of people were thrown out of work the old-fashioned way.”
A similar crash in AI investment, if it comes, would likely look familiar: painful, destabilizing, and accompanied by surges of CNBC rants and recriminations. But it would amount to a financial reset, not a technological reversal—the kind of outcome economists are especially good at recognizing, because it resembles a thing that’s happened before.
This is the paradox of economics. To understand how fast the present is hurtling us into the future, you need a fixed point, and the fixed points are all in the past. It’s like driving while looking only at the rearview mirror—plenty dangerous if the road stays straight, catastrophic if it doesn’t.
David Autor and Daron Acemoglu are among the most accomplished rearview drivers. Both are at MIT, and both excel at understanding previous economic disruptions. Acemoglu, who won the Nobel Prize in Economics in 2024, studies inequality; Autor focuses on labor. But both insist that the story of AI and its consequences will depend mostly on speed—not because they assume lost jobs will automatically be replaced, but because a slower rate of change leaves societies time to adapt, even if some of those jobs never come back.
Labor markets have a natural rate of adjustment. If 3 percent of the employees in a profession retire or have their jobs eliminated each year, you’d barely notice. Yet a decade later, roughly a third of the jobs in that profession would be gone, and over the course of 30 years, more than half. Elevator operators and tollbooth attendants went through this slow fade to obsolescence with no damage to the economy. “When it happens more rapidly,” Autor told me, “things become problematic.”
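The compounding behind that slow fade, sketched with Autor’s illustrative 3 percent annual rate (an example, not a measured figure):

\[
\text{share of jobs remaining after } n \text{ years} = (1 - 0.03)^n,
\qquad 0.97^{10} \approx 0.74,
\qquad 0.97^{30} \approx 0.40
\]

A decade in, roughly a quarter to a third of the jobs are gone; three decades in, well over half.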
Autor is most famous for his work on the China shock. In 2001, China joined the World Trade Organization; six years later, 13 percent of U.S. manufacturing jobs—about 2 million—had disappeared. The China shock took a disproportionate toll on small-scale manufacturing—textiles, toys, furniture—concentrated primarily in the South. “Many of the workers in those places still haven’t recovered,” Autor said, “and we’re obviously living with the political consequences.”
But AI isn’t a trade policy. It’s software. Even if it hits some professions and places first—a lawyer in a large urban firm, say, may feel the impact years before a worker in a less digitized industry—the technology won’t be constrained by geography. Eventually, everyone will be affected.
All of this sounds foreboding, until you remember the most important thing about software: People hate it, almost as much as they hate change.
This is what gives many economists confidence that the AI asteroid is still at least a decade away. “These tech CEOs want us to believe that the market for automation is preordained, and that it will all happen smoothly and profitably,” Acemoglu said. He then made a disdainful noise, the sound of a Nobel Prize–winning bullshit detector going off. “History tells us it’s actually going to happen much slower.”
The argument goes like this: Before AI can transform a company, it has to access the company’s data and be woven into existing systems—which sounds easy, provided you’re not a chief technology officer. A trade secret of most Fortune 500 companies is that they still run many critical functions on lumbering, industrial-strength mainframe computers that almost never break down and therefore can never be replaced. Mainframes are like Christopher Walken: They’ve been going nonstop since the 1960s, they’re fantastic at performing peculiar roles (processing payments, safeguarding data), and nobody alive really understands how they work.
Integrating legacy tech with modern AI means navigating hardware, vendors, contracts, ancient coding languages, and humans—every one of whom has a strong opinion about the “right” way to make changes. Months pass, then years; another company holiday party comes and goes; and the CEO still can’t understand why the miracle of AI isn’t solving all of their problems.
Every new general-purpose technology is, for a time, held hostage by the mess of what already exists. The first electric-power stations opened in the 1880s, and no one debated whether they were superior to steam engines. But factories had been built with steam engines in their basements, powering overhead shafts that ran the length of the buildings, with belts and pulleys carrying power to individual machines. To adopt electricity, factory owners didn’t just need to buy motors—they needed to demolish and rebuild their entire operations. Some did. Most just waited for their infrastructure to wear out, which explains why the major economic gains from electrification didn’t show up for 40 years.
NONE of this is reassuring enough for the economist Anton Korinek. He’s “super worried,” he told me. He thinks that America will see major job losses—“a very noticeable labor-market effect”—as soon as this year.
“And then those economists you’ve been talking to, they’re going to say, ‘I see that in the data!’ ” Korinek paused. “Let’s not joke about it, because it’s too serious.”
Korinek is a professor and the faculty director of the Economics of Transformative AI Initiative at the University of Virginia. Last year, Time magazine put him on its list of the most influential people in AI. But he did not set out to become an economist. He grew up in an Austrian mountain village, writing machine code in 0s and 1s—the least glamorous form of programming, and the most unforgiving. It teaches you where instructions bottleneck, where systems jam, and what breaks first when pushed too hard.
He’d kept a close watch on developments in AI since the deep-learning breakthroughs of the early 2010s, even as his doctoral work focused on the prevention of financial crises. When he got his first demo of a large language model, in September 2022, it took “about five seconds” before he considered its consequences for the future of work, starting with his own.
We met for breakfast in Charlottesville in the fall. Korinek is youthful and slender, with delicate wire-frame glasses and a faintly red beard. My overall impression was of someone who’d rather be customizing Excel tabs than prophesying doom. Still, here he was, saying the five words economists disdain the most: This time may be different.