The case
AI compute scaling is colliding with advanced packaging capacity. TSMC's CoWoS (Chip-on-Wafer-on-Substrate) process is the critical bottleneck for NVIDIA's highest-margin datacenter GPUs, the H200 and the Blackwell-generation B200. CoWoS capacity currently sits at roughly 35K wafers/month, with TSMC targeting 60K+ by late 2025, but expansion timelines have already slipped once. HBM3e memory supply from SK Hynix and Micron adds a second constraint layer. Meanwhile, US export controls on advanced chips to China create allocation uncertainty: NVIDIA must segregate product flows and may face retroactive licensing disruptions.

This perpetual market tests whether the convergence of packaging constraints, memory shortages, and geopolitical allocation risk produces a confirmed, publicly acknowledged disruption to a major AI training project. LONG positions benefit from each CoWoS expansion delay, HBM supply warning, or major-lab disclosure of compute-constrained training timelines; SHORT positions benefit from TSMC hitting its expansion targets and NVIDIA delivering on schedule.
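The wafer figures above imply a simple capacity arithmetic. A minimal sketch, where the packages-per-wafer yield is a hypothetical illustration value (the market description gives only wafer counts, not per-wafer yields):

```python
# Back-of-envelope: GPU package output implied by CoWoS wafer capacity.
# Wafer counts come from the market case; PACKAGES_PER_WAFER is an
# ASSUMED illustrative figure, not a disclosed yield number.

WAFERS_PER_MONTH_CURRENT = 35_000   # ~current CoWoS capacity (from the case)
WAFERS_PER_MONTH_TARGET = 60_000    # TSMC's stated late-2025 target

PACKAGES_PER_WAFER = 16             # hypothetical: large Blackwell-class packages

def monthly_packages(wafers_per_month: int,
                     packages_per_wafer: int = PACKAGES_PER_WAFER) -> int:
    """Implied finished GPU packages per month at a given wafer capacity."""
    return wafers_per_month * packages_per_wafer

current = monthly_packages(WAFERS_PER_MONTH_CURRENT)   # 560,000 packages/month
target = monthly_packages(WAFERS_PER_MONTH_TARGET)     # 960,000 packages/month
growth = target / current - 1                          # ~71% capacity uplift
print(current, target, round(growth, 2))
```

Whatever yield one assumes, output scales linearly in wafer starts, so hitting (or missing) the 60K target moves implied GPU supply by the same ~71% proportion.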
Market signals
LONG buy $10.00 • 1d ago
100% LONG • 0% SHORT