Should we be worried about AI? The short answer is yes. And this is not because mass AI-driven unemployment is already a settled fact, but because the speed of change, the incentives driving corporate adoption, and the absence of serious public planning together create a dangerous vacuum.
As Josh Tyrangiel, writing for The Atlantic, makes clear, the real threat is not simply that AI may displace workers, but that it could do so faster than our institutions can respond, leaving millions vulnerable while political leaders, CEOs, and policymakers look the other way.
Even economists who disagree on timing acknowledge the stakes: if AI compresses years of labor-market disruption into months, the damage will extend far beyond jobs to democracy itself, deepening inequality, anxiety, and political instability. What should concern us most is not only the technology, but the nation’s striking lack of preparation for a transition that may already be underway.
—Angela Valenzuela
Does anyone have a plan for what happens next?
The crux of Korinek’s argument is simple: His colleagues aren’t misreading the data—they’re misreading the technology. “We can’t quite conceptualize having very smart machines,” Korinek said. “Machines have always been dumb, and that’s why we don’t trust them and it’s always taken time to roll them out. But if they’re smarter than us, in many ways they can roll themselves out.”
This is already happening. Many of the least comprehensible ads during sporting events are for AI tools that promise to speed the integration of other AI tools into the workflows of large companies. Because many of these systems don’t require massive new hardware or human-engineered system rewrites, the rollout time shrinks by as much as 50 percent.
This is where Korinek parts company with the rearview economists. If AI moves as fast as he expects, for many workers the damage will arrive before institutions can adapt—and each successful use will only intensify the pressure for more.
Consider consulting firms, which have always charged high fees for having junior associates do research and draft reports—fees clients tolerated because there was no alternative. But if one firm can use AI to deliver the same work faster and cheaper, its competitors face a stark choice: adopt the technology, or explain why they are still charging a premium for human hours. Once a firm plugs in and undercuts its rivals, the rest must either race to follow or be left behind. Competition doesn’t just reward adoption; it makes delay indefensible.
Korinek concedes the two standard objections: The numbers don’t show anything definitive yet, and new technologies have historically created more jobs than they’ve destroyed. But he thinks that his peers need to start driving with their eyes looking ahead. “Whenever I speak to people at the labs on the West Coast”—Korinek is an unpaid member of Anthropic’s economic advisory council—“it does not strike me that they are trying to artificially hype what they’re producing. I usually have the sense that they are just as terrified as I am. We should at least consider the possibility that what they are telling us may come true.”
Korinek is not sure that the technology itself can be steered by policy, but he wants more economists doing scenario planning so that policy makers aren’t caught flat-footed—because mass job loss doesn’t just mean unemployment; it means missed loan payments, cascading defaults, shrinking consumer demand, and the kind of self-reinforcing downturn that can transform a shock into a crisis, and a crisis into the decline of an empire.
After the brief period in early 2025 when CEOs were openly volunteering “thought leadership” about AI and its impact on their workforces and profit margins, the pronouncements stopped, eerily, at roughly the same time. Anyone who has seen a shark fin break the water and then disappear knows this is not reassuring.
The simple explanation comes courtesy of the Bureau of Labor Statistics. America employs about 280,590 public-relations specialists, an increase of 69 percent over the past two decades. (They outnumber journalists almost 7 to 1.) It’s not hard to imagine their expert syllogism: AI is unpopular. CEOs who talk about job cuts are even less popular. So maybe shut up about AI and jobs?
In October, the day after The New York Times revealed Amazon executives’ plan to potentially automate more than 600,000 jobs by 2033, the PR chief at a large multinational firm told me, “We are so done speaking about this.” It was at least a small piece of history—the first time I’d been asked to grant anonymity to someone so they could explain, on the record, that they would no longer be speaking at all.
All of which is to say that the chief executives of Walmart, Amazon, Ford, and other Fortune 100 companies, as well as executives from rising AI-driven firms including Anthropic, Stripe, and Waymo—people who had been remarkably chatty about AI and jobs a few months earlier—declined or ignored multiple interview requests for this story. Even the Business Roundtable, an association of 200 CEOs from America’s most powerful companies that exists to speak for its members on exactly these kinds of issues, told me that its CEO, former George W. Bush White House Chief of Staff Joshua Bolten, had nothing to say.
Of course, telling a reporter you won’t speak on the record isn’t the same as not speaking. The CEOs are talking to at least one person: Reid Hoffman, the co-founder of LinkedIn and a Microsoft board member. Hoffman is a technologist by pedigree and an optimist by temperament. He knows everyone in corporate America, and everyone knows he knows everyone, which makes him Silicon Valley’s favorite mensch—a reasonable, neutral sounding board whom CEOs can go to when they want to think out loud. He told me that AI has sorted the CEOs into three groups.
The first are the dabblers: latecomers finally spending some quality time with their chief technology officers. The second rushed to declare themselves AI leaders out of vanity or a desire to have their traditional businesses taken more seriously by tech snobs. “They’re like, Look at me! I’m important! I’m central here. But they’re not actually doing anything yet,” Hoffman said. “They’re just like, Put me at the AI table too.” The third group is different: executives who are quietly making transformational plans. “These are the ones who see it coming. And to their credit, I think a lot of them want to figure out how to help their whole workforce transition with this through education, reskilling, or training.”
But what all three groups share is a belief that investors—after years of hearing about AI’s promise—have lost patience with dreaming. This year, they expect results. And the fastest way for a CEO to produce results is to cut head count. Layoffs, Hoffman said, are inevitable. “A lot of them have convinced themselves this only ends one way. Which I think is a failure of the imagination.”
Hoffman doesn’t waste time urging CEOs not to make cuts; he knows they will. “What I tell them is that you need to be presenting paths and ideas for how to get benefits from AI that aren’t just cutting costs. How do you get more revenue? How do you help your people transition to being more effective using AI?”
“It’s a fever,” Gina Raimondo, the former governor of Rhode Island and commerce secretary under Joe Biden, told me, referring to the rush to cut jobs. “Every CEO and every board feels like they need to go faster. ‘We have 40,000 people doing customer service? Take it down to 10,000. AI can handle the rest.’ If the whole thing is about moving fast with your eye strictly on efficiency, then an awful lot of people are going to get really hurt. And I don’t think this country can handle that, given where we already are.”
Like Hoffman, Raimondo occupies an unusual niche: a Democrat who can walk into a boardroom without setting off the cultural metal detectors. She co-founded a venture-capital firm, and AI executives, who see her as pragmatic and fluent in tech, are willing to talk to her. “This is a technology that will make us more productive, healthier, more sustainable,” Raimondo said. “But only if we get very serious about managing the transition.”
Last summer, Raimondo made the trip to Sun Valley, Idaho, for the four-day Allen & Co. conference known as “summer camp for billionaires.” She asked people the same two questions: How are you using AI? And what happens to your workers when you do? A number of CEOs admitted that they felt trapped. Wall Street expects them to replace human labor with AI; if they don’t do it, they’ll be the ones out of a job. But if they all order mass job eliminations, they know the consequences will be enormous—for their workforces, for the country, and for their own humanity.
Raimondo’s response was that “it’s the responsibility of the country’s most powerful CEOs to help figure this out.” She sees the possibility of “new public-private partnerships at scale. Imagine if we could get companies to take ownership over the retraining and redeployment of people they lay off.”
She knows how this sounds. “A lot of people say, ‘Oh, Gina, you’re naive. Never going to happen.’ Okay. But I’m telling you it’s the end of America as we know it if we don’t use this moment to do things differently.”
If executives’ concern is as genuine as Raimondo thinks, then perhaps they can be moved to action. Liz Shuler, the president of the AFL-CIO, is trying—and mostly failing—to do just that. CEOs and tech leaders are so focused on winning the AI race that “working people are an afterthought,” she told me.
Shuler’s aware that this is a predictable take from a union leader, so she volunteered a concession: “Most working people, and especially union leaders, start out with a panic, right? Like, Wow, this is going to basically obliterate all jobs and everyone’s going to be left without a safety net and we have to put a stop to it—which we know is not going to happen.” Instead of panicking, Shuler said, she talked with the leaders of the AFL-CIO’s unions, representing about 15 million people, and pushed them to use the brief moment before AI is imposed on them to figure out what they want from the technology—and what they might be prepared to trade for that.
So far the olive branch has been grabbed by precisely one company. Microsoft has agreed to bring workers into conversations about developing AI and guardrails around it. Most remarkably, the deal includes a neutrality agreement that allows workers to freely form unions without retaliation—something that’s never been done before in tech. “We think it’s a model,” Shuler said. “We would love to see others acknowledge that working people are central to this debate and to our future.”
Squint and you might convince yourself that the Microsoft deal is indeed proof of concept. More likely, it’s an anomaly. Because all the coaxing, reasonableness, and appeals to patriotism and shared humanity are battling a truth as old as wage labor: American capitalism rushes toward efficiency the way water flows downhill—inevitably, indifferently, and with predictable consequences for whoever happens to be standing at the bottom. And with AI, for the first time, capital has a tool that promises the kind of near-limitless productivity the factory and mill owners could never have imagined: maximum efficiency with a minimum number of employees to demand a share of the gains.
In that context, the silence of the CEOs takes on a different resonance. It could be a cold acknowledgment that the decisions have already been made—or a muffled plea for the government to save them from themselves.
And so to Washington.
You’re probably aware that our politics are unbearable at the moment. And yet the only way to make them bearable—to recover the glimmer of promise at their core—is more politics. That’s the joke at the heart of Washington: The very struggle that’s hollowed the place out is also the only way it can be renewed.
If there were ever an issue capable of relieving the national migraine—something large enough and urgent enough—you might assume the future of American jobs would be it. “At least from my interactions here in the Senate, not many people are talking about it,” Gary Peters, the senior senator from Michigan, told me. “There’s a general attitude among my colleagues”—Peters, a Democrat, singles out Republicans, though he says there’s blame to go around—“like, We don’t need to do anything. It’s going to be fine. In fact, the government should just stay out of it. Let industry move forward and continue to innovate.”
It’s hard to slow AI without abdicating America’s tech supremacy to China—a point the tech lobby makes with religious fervor. It’s hard to force AI labs to give advance notice of the consequences of their deployments when they often don’t know themselves. You could regulate the use of job-displacing AI, but enforcement would require a regulatory apparatus that doesn’t exist and technical expertise the government doesn’t have.
That said, the government has a decades-old playbook on how to get workers through economic shocks. And Peters has been banging his head on his desk trying to get Congress to use it.
Since 1974, when the United States began opening its economy more aggressively to global trade, the Trade Adjustment Assistance program has helped more than 5 million people with retraining, wage insurance, and relocation grants, at a cost in recent years of roughly half a billion dollars annually. In 2018, Peters co-sponsored the TAA for Automation Act, which would have extended the same benefits to workers squeezed by AI and robotics. It died quietly, as many things in Congress do. In 2022, authorization for the TAA expired, and in a Congress allergic to trade votes and new spending, Peters’s efforts to revive it have gone nowhere.
This is very stupid. The United States has about 700,000 unfilled factory and construction jobs. (Ironically, one of the few things slowing AI is a shortage of HVAC technicians qualified to install cooling systems in data centers.) Jim Farley, the Ford CEO who predicted that half of white-collar jobs could disappear in a decade, has been saying that the auto industry is short hundreds of thousands of technicians to work in dealerships—jobs that sit in a long-term sweet spot: technical enough to earn six figures, and dependent on precise manual dexterity that makes them hard to roboticize. But someone has to pay for the months of training the jobs require. “These are really good jobs,” Peters said. But “we spend a lot more money from the federal government for four-year higher-education institutions than we do for skilled-training programs.”
There’s no shortage of ideas about what to do if AI hollows out large swaths of work: universal basic income, benefits that don’t depend on employers, lifelong retraining, a shorter workweek. They tend to surface whenever technological anxiety spikes—and to recede just as reliably, undone by cost, politics, or the simple fact that they would require a level of coordination the United States has not managed in decades.
The 119th Congress is a ghost ship, steered by ennui and the desire to evade hard choices. And the AI industry is paying millions of dollars to make sure no one grabs the wheel. To cite just one example, a super PAC called Leading the Future—which has reportedly secured $50 million in commitments from the Silicon Valley venture-capital firm Andreessen Horowitz and $50 million more from the OpenAI co-founder Greg Brockman and his wife, Anna—plans to “aggressively oppose” candidates from both parties who threaten the industry’s priorities, which boil down to: Go fast. No, faster.
Shuler told me that the AFL-CIO will keep pressing national elected officials for a worker-focused AI agenda, but that “this game is not gonna be played at the federal level as much as it will be at the state level.” More than 1,000 AI bills are bubbling up in statehouses. Of course, the AI money will be there, too; Leading the Future has already announced plans to focus its efforts on New York, California, Illinois, and Ohio.
The executive branch has delegated almost all of its AI oversight to David Sacks—nominally a co-chair of the President’s Council of Advisors on Science and Technology, but functionally a government LARPer who maintains his role as a venture capitalist and podcast host. Sacks, who is also the White House crypto czar, co-wrote the Trump administration’s “America’s AI Action Plan.” A New York Times investigation found that Sacks has at least 449 investments in companies with ties to artificial intelligence. The fox isn’t just guarding the henhouse; he’s livestreaming the feast.
AI is just a newborn. It may grow up to transform our lives in unimaginably good ways. But it has also introduced profound questions about safety, inequality, and the viability of a wage-labor system that, despite its flaws, spawned the most prosperous society in human history. And there’s no sign—none—that our political system is equipped to deal with what’s coming.
Which means the deepest challenge AI poses may not be to jobs at all.
“Gosh, the textbook ideal of democracy,” says Nick Clegg, “is the peaceful articulation and resolution of differences that otherwise might take a more disruptive or violent form. So you’d like to think that a strong democracy could digest these kinds of changes.”
Clegg is a former deputy prime minister of the United Kingdom and leader of the Liberal Democrats. When he lost his seat in Parliament after Brexit, he moved to California, where he spent seven years running global affairs at Facebook/Meta, becoming a kind of Tocqueville with vested options, before returning to London in 2025. Many governments “just don’t have the levers” to deal with AI, Clegg told me.
He suspects that the societies best positioned to navigate the next few years are small, homogeneous ones like those of Scandinavia, which are capable of having mature conversations—they’ll put together “some commission led by some very wise former finance minister who will come up with a perfect blueprint which everybody consensually will then do, and they will remain in a hundred years the happiest societies”—or large authoritarian ones that refuse to have conversations at all. China, America’s primary AI rival, has repeatedly demonstrated a capacity to impose rapid, society-wide change (the one-child policy, the forced relocation of more than 1 million people for the Three Gorges Dam) without consent or delay.
“If democratic governments drift into this period, which may require much more rapid change than they currently appear to be capable of delivering,” Clegg warned, “then democracy is not going to pass this test with flying colors.”
He then delivered, over Zoom, a fantastically British pep talk, combining Churchillian resolve with a faintly patronizing nod to America’s centuries-long streak of pulling four-leaf clovers out of its ass. “You are extraordinarily dynamic,” he began. “It’s remarkable the number of times people have written off America.”
If politics is to be part of the solution, Gary Peters will not be around to participate; he’s retiring next year. Marjorie Taylor Greene, Congress’s most articulate Republican advocate (really) for safeguarding the workforce from AI, has already resigned. Gina Raimondo is being considered as a potential presidential contender for 2028, and she’s a centrist with the chops to balance the reasons for speeding forward on AI with the need to do so warily. But the issue is unlikely to wait that long. “We’re going into a world that seems to be getting more unstable with each and every day,” Peters said. “And that uncertainty creates anxiety, and anxiety leads to sometimes dramatic shifts in how people act and how they vote.”
Which brings us to Bernie Sanders, who has been wrestling with an AI-shaped future since it was still theoretical. “Are AI and robotics inherently evil or terrible? No,” Sanders told me in his familiar staccato. “We are already seeing positive developments in terms of health care, the manufacturing of drugs, diagnoses of diseases, etc. But here is the simple question: Who is going to benefit from this transformation?”
At the Davenport, Iowa, stop on his 2025 Fighting Oligarchy tour, audience members booed when he mentioned AI. And Sanders, the ultimate vibes politician, can feel decades of anger—over trade, inequality, affordability, systematic unfairness, government fealty to corporations—coalescing around AI.
In October, he issued a 95 theses–style report on AI and employment. It included all of the dire CEO and consulting-firm quotes about the looming job apocalypse and proposed a shorter workweek; worker protections; profit sharing; and an unspecified “robot tax on large corporations,” whose revenue would be used “to benefit workers harmed by AI.” It’s a furious document, as though Sanders typed it with his fists.
At least one populist politician thinks Sanders didn’t go far enough.
Steve Bannon’s D.C. townhouse is so close to the Supreme Court that you can read JUSTICE THE GUARDIAN OF LIBERTY from the top step. He greeted me in his signature look: camouflage cargo pants, a black shirt, also a brown shirt, also a black button-down shirt. He hadn’t shaved in days. It would not have surprised me if he suggested that we get hoagies, or form a militia.
Bannon has, shall we say, some scoundrel-like tendencies. But he’s not an AI tourist. In the early 2000s, while still a film producer, he tried to buy the rights to Ray Kurzweil’s The Singularity Is Near, a sacred text of the AI movement that imagines the day when machines surpass human intelligence. Bannon thought it would make a good documentary. He hired an AI correspondent for his War Room podcast a few years ago, and he tracks every corporate-layoff announcement, searching for omens.
He’s concerned about rogue AI creating viruses and seizing weapons—fears that are shared more soberly by national-security officials, biosecurity researchers, and some notable AI scientists—but he believes the American worker is in such imminent danger that he’s prepared to toss away parts of his ideology. “I’m for the deconstruction of the administrative state, but I’m not an anarchist,” Bannon told me. “You do have to have a regulatory apparatus. If you don’t have a regulatory apparatus for this, then fucking take the whole thing down, right? Because this is what the thing was built for.”
What Bannon wants goes beyond regulation. It’s a callback to an old idea: that when the government deems a technology strategically vital, it should own part of it—much as it once did with railroads and, briefly, banks during the 2008 financial crisis. He pointed to what he called Donald Trump’s “brilliant” decision to have the federal government take a 9.9 percent stake in Intel in August. But the stake in AI would need to be much greater, he believes—something commensurate with the scale of federal support flowing to AI companies.
“I don’t know—50 percent as a starter,” Bannon said. “I realize the right’s going to go nuts.” But the government needs to put people with good judgment on these companies’ boards, he said. “And you have to drill down on this now, now, now.”
Instead, he warned, we have “the worst elements of our system—greed and avarice, coupled with people that just want to grasp raw power—all converging.”
I pointed out that the person overseeing this convergence is the same man Bannon helped get elected, and recently suggested should stick around for a third term.
“President Trump’s a great business guy,” Bannon said. But he’s getting “selective information” from Elon Musk, David Sacks, and others who Bannon thinks hopped aboard the Trump bandwagon only to maximize their profit and control of AI. “If you noticed, these guys are not jumping around when I say ‘Trump ’28.’ I don’t get an ‘attaboy.’ ” He said that “they’ve used Trump,” and that he sees a major schism coming within the Republican Party.
Bannon’s politics don’t naturally lend themselves to cross-party coalition building, but AI has scrambled even his sense of the boundaries. He and Glenn Beck signed a letter demanding a ban on the development of superintelligent AI, out of fear that systems smarter than humans cannot be reliably contained; they were joined by eminent academics and former Obama-administration officials—“lefties that would rather spit on the floor than say Steve Bannon is with them on anything.” And he’s been sketching out a theory of the coalition needed to confront what’s coming. “These ethicists and moral philosophers—you have to combine that together with, quite frankly, some street fighters.”
Horseshoe issues—where the far right and far left touch—are rare in American politics. They tend to surface when something highly technical (the gold standard in 1896, or the subprime crisis of 2008) alchemizes into something emotional (William Jennings Bryan’s “cross of gold,” the Tea Party). That’s populism. And the threat of pitchforks has occasionally made American capitalism more humane: The eight-hour workday, weekends, and the minimum wage all emerged from the space between reform and revolution.
No one understands or exploits that shaggy zone quite like Bannon. His anger about AI can sound reasonable in one breath and menacing in the next. We were discussing some of the men who run the most powerful AI labs when he said, “Let’s just be blunt”: “We’re in a situation where people on the spectrum that are not, quite frankly, total adults—you can see by their behavior that they’re not—are making decisions for the species. Not for the country. For the species. Once we hit this inflection point, there’s no coming back. That’s why it’s got to be stopped, and we may have to take extreme measures.”
The trouble with pitchforks is that once you encourage everyone to grab them, there’s no end to the damage that might be done. And unlike in earlier eras, we’re now a society defined by two objects: phones that let everyone see exactly how much better other people have it, and guns should they decide to do something about it.
America would be better off if its elites could act responsibly without being terrified. If CEOs remembered that citizens are a kind of shareholder, too. If economists tried to model the future before it arrives in their rearview mirror. If politicians chose their constituents’ jobs over their own. None of this requires revolution. It requires everyone to do the jobs they already have, just better.
There’s an easy place for all of them to start—a bar so low, it amounts to a basic cognitive exam for the republic.
Erika McEntarfer was the commissioner of labor statistics until August, when Trump fired her after the release of a weak jobs report. McEntarfer has seen no evidence of political interference at the Bureau of Labor Statistics, but “independence is not the only threat facing economic data,” she told me. “Inadequate funding and staffing are also a danger.”
Most of the economic papers trying to figure out the impact of AI on labor demand use the BLS’s Current Population Survey. “It’s the best available source,” McEntarfer said. “But the sample is pretty small. It’s only 60,000 households and hasn’t increased for 20 years. Response rates have declined.” An obvious first step toward figuring out what’s going on in our economy would be to expand the survey’s sample size and add a supplement on AI usage at work. That would involve some extra economists and a few million dollars—a tiny investment. But the BLS budget has been shrinking for decades.
The United States created the BLS because it believed the first duty of a democracy was to know what was happening to its people. If we’ve misplaced that belief—if we can’t bring ourselves to measure reality; if we can’t be bothered to count—then good luck with the machines.
This article appears in the March 2026 print edition with the headline “What’s the Worst That Could Happen?”