Here is the irony the tech industry does not want you to notice: the same artificial intelligence boom promising to democratize technology is making basic computing less affordable. While hyperscalers build 5-gigawatt data centers in Louisiana and Virginia, consumers are watching the price of a $35 single-board computer double. This is the high-bandwidth memory paradox—and it is only intensifying.
HBM is the specialized memory that feeds AI processors. Nvidia's latest GPUs require enormous quantities of it. So do Google's TPUs and AMD's Instinct accelerators. As Meta, Microsoft, OpenAI, and Anthropic race to train larger models, they are absorbing virtually every HBM chip that Samsung, SK Hynix, and Micron can produce. This leaves less capacity for everyone else—and the downstream effects are reaching devices that have nothing to do with generative AI.
Consider the Raspberry Pi. The foundation that once priced its computers as low-cost learning tools has publicly acknowledged that memory shortages are squeezing its supply chain. The same dynamic is hitting graphics cards, budget laptops, and embedded systems. SK Hynix's HBM3 commands substantial premiums over commodity DDR5 DRAM, giving manufacturers every incentive to route production toward data center contracts. Consumer inventory tightens accordingly.
The conflict is structural, not temporary. HBM is not a commodity that additional factories can quickly churn out. The manufacturing process is extraordinarily complex, and the three dominant suppliers are running at full capacity. SK Hynix and Samsung are adding capacity, but new production lines require years to reach meaningful scale. Meanwhile, AI demand shows no signs of plateauing. IEEE Spectrum projects that AI electricity consumption could reach 12 percent of all U.S. power by 2028, a figure that underscores the scale of the infrastructure buildout underway and the memory appetite behind it.
What makes this particularly troubling is the distributional impact. When HBM supply tightens, the devices that suffer first are not data center servers—they are laptops for students, single-board computers for makers, and budget PCs for households that cannot afford premium pricing. The wealthy hyperscalers outbid everyone else. The resulting price pressure falls heaviest on those least able to absorb it.
Some argue this is a temporary dislocation that market forces will correct. Startups, they contend, will redesign products to use less memory. Data centers will adopt hardware that sacrifices some performance for efficiency. This has happened before. But the timeline matters. Correction may arrive in three years, after a generation of consumers has already paid premium prices for yesterday's capabilities.
The tension extends beyond pricing to the political economy of computing itself. Memory is a strategic resource, and the United States is acutely aware that SK Hynix and Samsung are Korean companies subject to geopolitical pressures. Micron is the lone American HBM producer, but its capacity is dwarfed by that of its competitors. This concentration creates supply chain vulnerabilities that Washington is eager to address through incentives and reshoring. Whether those efforts succeed before consumer prices normalize remains uncertain.
The HBM crunch is not merely a data center logistics problem. It is a story about who gets access to the tools of the AI era and who gets left paying more for less. The shortage will ease eventually; new capacity is coming, and AI efficiency is improving. But the transition period will impose real costs on real users who simply wanted a computer that did not break the bank. The irony, as the industry builds ever-larger models in pursuit of artificial general intelligence, is that it may be making general access to computing artificially scarce.