The tech industry's most convenient villain is not the real culprit. While layoffs dominate headlines and executives invoke AI automation as the specter haunting white-collar workers, the data tells a different story: the current tech employment downturn traces primarily to pandemic-era overhiring, Federal Reserve interest rate decisions, and a sector-wide correction after years of exuberant growth. AI's role in tech job losses has been vastly overstated—and that misdirected blame is itself a problem worth examining.
The Economist's April analysis confirms what labor economists have observed: tech companies hired aggressively between 2020 and 2022, swelling payrolls by an average of 30-40% at major firms. When interest rates rose and venture capital tightened, that surplus became a liability. The cuts that followed—Meta, Google, Amazon, Microsoft all announced significant reductions—reflect corporate balance sheets, not algorithmic displacement. A software engineer laid off in 2024 was almost certainly replaced by another human at the same company or a competitor, not by an AI model.
This distinction matters enormously. When we attribute job losses to AI, we assume a technological inevitability: these roles vanished because machines can now perform them. But that framing is factually wrong in the tech sector specifically. The roles being eliminated are being cut for financial reasons, and many will be refilled when conditions improve. Conflating economic restructuring with technological displacement distorts both policy debates and individual career decisions.
Meanwhile, in sectors adjacent to tech, the picture is more complex. An Ars Technica piece published this week captured this perfectly: a part-time college instructor described how generative AI has made teaching asynchronous online courses "mostly miserable." Students disengage, assignments become performative, and the human connection that made education fulfilling evaporates. This is a genuine AI harm—disrupting knowledge work in education—yet it generates far fewer headlines than the abstract specter of AI replacing programmers.
The danger of misdirected blame is threefold. First, it leads to miscalibrated policy responses. When legislators focus on regulating AI as the source of worker displacement, they ignore the actual levers of labor market dysfunction. Second, it misleads workers about where genuine risk lies. A developer worried about AI "stealing" their job may miss signals that their specific company is financially unstable. Third, it deflects attention from AI's real, less visible disruptions—in education, in creative industries, in customer service—where the technology is genuinely changing work without triggering the same alarm.
The irony is that AI will eventually automate significant portions of tech work. Code generation, testing, documentation—these are genuinely within reach for current systems. But the 2024-2026 layoffs are not evidence of that future arriving. They are evidence of a sector correcting after excess, doing what markets do when capital becomes expensive. Blaming AI for these cuts is like blaming the internet for a retailer's bankruptcy during a recession—the technology is real, the impact is real, but the timing doesn't match.
What we need is precision. AI is disrupting knowledge work in education and creative fields today. It may disrupt software development in earnest within the decade. But the tech jobs bust of 2024-2026 has roots in interest rates and hiring binges, not transformers and language models. Until we separate these narratives, we cannot have honest conversations about adaptation, retraining, or the genuine ethical questions AI automation raises. The scapegoat is convenient. The truth is more useful.