Micron Technology Inc. [NASDAQ: MU] has risen approximately ninefold over the past twelve months, pushing its market capitalisation above $800 billion in one of the largest single-year gains in the company’s four-decade trading history.

The rally is driven by surging demand and a structural shortage of high-bandwidth memory, the specialised chips that sit alongside AI accelerators inside data centre servers and enable the ultra-low-latency data access that large-scale AI inference and training workloads require.

Micron has pre-sold its entire HBM production through 2026 under binding contracts, and hyperscalers including Microsoft [NASDAQ: MSFT], Alphabet [NASDAQ: GOOGL], and Meta Platforms [NASDAQ: META] are projected to spend roughly $700 billion on AI infrastructure this year, making memory a critical and constrained input at every stage of that build-out.

Three structural factors distinguish the current AI cycle from prior memory upcycles that ended in severe price collapses.

The first is demand intensity: memory requirements per AI system are growing geometrically rather than linearly. Nvidia’s next-generation Rubin Ultra GPU module targets 512 gigabytes of HBM, versus 80 gigabytes in the H100, while the industry-wide shift from training to inference workloads in 2026 has amplified demand for the low-latency access that only HBM can deliver.

The second is contract structure: HBM is increasingly sold under long-term agreements with hyperscalers rather than on the spot market, reducing the abrupt order cancellations that historically triggered sharp price collapses in commodity DRAM and NAND cycles. Micron has signed the industry’s first five-year HBM supply agreement, covering both volume and pricing commitments.

The third is supply constraint: HBM requires significantly more wafer capacity per bit than standard DRAM, and its manufacturing complexity limits how quickly rivals can scale output. That constraint enabled Micron’s HBM revenue market share to rise from 9% in Q4 2024 to 21% in Q4 2025, even as the total HBM market roughly doubled in size.

History suggests caution regardless of these structural differences.

In the 2022 to 2023 memory downturn, Micron reported its largest-ever quarterly net loss of $2.31 billion, cut its workforce by 10%, and watched its stock fall roughly 50%. The 2018 to 2019 inventory unwind was similarly punishing: NAND prices fell 60%, DRAM dropped 40%, and Micron’s stock slid 57% from its May 2018 peak.

The 2014 to 2016 DRAM crash was even more severe in percentage terms, with Micron falling approximately 70% as excess PC-era capacity met permanently declining demand from a consumer base that had shifted to mobile.

None of the structural improvements in today’s cycle overrides the fundamental economics of semiconductor manufacturing. Micron has guided fiscal 2026 capital expenditure above $25 billion, SK Hynix is expected to spend roughly $27 billion, and Samsung is also expanding aggressively. When all three major memory suppliers scale capacity simultaneously, history suggests oversupply emerges within two to three years.

The stock trades at 14 times estimated fiscal 2026 earnings and approximately 8 times the following year’s estimates, multiples that look modest relative to growth. They carry meaningful risk, however, if AI infrastructure spending slows, hyperscalers come under pressure to demonstrate returns on their massive capex, or new HBM capacity arrives into a market where the pace of demand growth has begun to moderate.