DeepSeek's innovative training approach has disrupted the AI market, though its efficiency and cost claims likely overstate reality. The company may have downplayed its true resources to avoid scrutiny under US semiconductor sanctions and to maximise geopolitical impact against American AI dominance.
Summary
- DeepSeek's efficiency gains appear exaggerated.
- The company achieved breakthroughs by using PTX programming instead of NVIDIA's standard CUDA.
- This approach remains impractical for most AI companies.
- Actual GPU resources may reach 60,000 units, contradicting DeepSeek's stated 2,048 figure.
- Development took approximately two years, prioritising cost over speed.
- More open-source AI intellectual property benefits mid-sized training facilities like Scale42.
What is DeepSeek?
DeepSeek is an open-source Mixture-of-Experts AI language model containing 671 billion parameters, reportedly developed using 2,048 NVIDIA H800 GPUs in just two months. Industry estimates suggest the actual computing intensity was "10x higher than stated", though even at that level the model would remain notably more efficient than its competitors.
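The Mixture-of-Experts design is what makes a 671-billion-parameter model tractable: only a small subset of experts runs for any given token. A minimal sketch of top-k expert routing follows; the expert count, top-k value, and random gating scores are hypothetical illustrations, not DeepSeek's actual configuration.

```python
import random

# Minimal sketch of Mixture-of-Experts (MoE) top-k routing. The expert
# count, top_k value and random gating scores are hypothetical
# illustrations, not DeepSeek's actual configuration.
NUM_EXPERTS = 8
TOP_K = 2

def route_token(gate_scores, top_k=TOP_K):
    """Return indices of the top_k experts with the highest gating scores."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda e: gate_scores[e], reverse=True)
    return ranked[:top_k]

random.seed(0)
scores = [random.random() for _ in range(NUM_EXPERTS)]  # stand-in for a learned gate
active = route_token(scores)
print(f"active experts for this token: {sorted(active)} of {NUM_EXPERTS}")
```

Because only a top-k subset of experts executes per token, compute per token scales with the active experts rather than with the full parameter count.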
The company's key innovation involved abandoning CUDA in favour of PTX (Parallel Thread Execution) programming. PTX is NVIDIA's lower-level intermediate instruction set, sitting beneath the CUDA toolchain and offering finer-grained hardware control at the cost of far greater programming effort. While NVIDIA's CUDA dominance stems from its status as the industry standard - despite competitors like AMD offering comparable hardware - DeepSeek's PTX approach represents a significant technical achievement that few have the capability to replicate.
DeepSeek emerged as a spin-out from the Chinese quantitative hedge fund High-Flyer, which reportedly purchased between 10,000 and 60,000 NVIDIA GPUs. Initial investment targeted AI trading programmes, but limited trading-algorithm success redirected efforts toward openly released AI tools. US semiconductor sanctions may have fostered a programming discipline comparable to the Soviet-era resource constraints that produced exceptional computer scientists.
DeepSeek's atomic impact
The market reacted dramatically to DeepSeek's claimed $6 million development cost — a fraction of the billions competitors have invested — coupled with the free release of its intellectual property. This undermined established players like OpenAI while challenging both NVIDIA's dominance and the electricity providers anticipating massive data centre consumption.
However, the actual DeepSeek cost structure remains opaque. The $6 million figure potentially represents creative accounting. As a Chinese company, DeepSeek has faced legal restrictions on acquiring advanced NVIDIA chips since October 2022. While the company claims 10,000 A100s, media reports suggest 50,000+ units. Under-reporting resources while over-reporting performance may constitute a sanctions-avoidance strategy.
Independent analysis reveals significant cost gaps. An estimated 10,000 A100/H800 units cost approximately $300 million; 50,000 would exceed $1.5 billion. Additional expenses including energy, operations, AI engineering, and data acquisition further strain the $6 million narrative, though China's cost advantages do provide some savings versus American competitors.
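The hardware gap alone can be checked with back-of-envelope arithmetic. The roughly $30,000 unit price below is an assumption inferred from the estimates above, not a quoted price:

```python
# Back-of-envelope hardware cost check. The ~$30,000 per-GPU figure is an
# assumption consistent with the estimates quoted above, not a list price.
UNIT_PRICE_USD = 30_000

for fleet_size in (2_048, 10_000, 50_000):
    cost = fleet_size * UNIT_PRICE_USD
    print(f"{fleet_size:>6,} GPUs -> ${cost / 1e9:.2f} billion")
```

Either fleet estimate dwarfs the claimed $6 million before energy, staffing, and data costs are even counted.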
DeepSeek's announcement timing — days after the US announced its $500 billion Stargate programme — suggests deliberate geopolitical positioning, though the open-source architecture largely mitigates spyware concerns.
Blast-radius casualty analysis
Subscription-based AI model providers face the greatest disruption: OpenAI's ChatGPT, Meta's Llama, Google's Gemini, and Anthropic/Amazon's Claude all experienced market pressure. Chipmakers suffered immediate consequences, with stock declines exceeding 10 percent.
However, efficiency improvements typically generate increased overall demand through the Jevons paradox:
"If we acknowledge that DeepSeek may have reduced costs by, say, 10x... we NEED innovations like this... improvements likely get absorbed due to usage og demand." — Stacy Ragson, Bernstein
Electricity utilities near proposed data centre locations experienced particularly severe sell-offs. Financial Times analyst Robert Armstrong observed that market sentiment may reflect expectations that "AI will be run on smaller data centres all over the place" rather than in massive proprietary facilities. Unlike NVIDIA's sharp recovery, utilities like Constellation, Vistra, and NRG remained depressed.
Scale42's perspective aligns with this assessment. Rather than pursuing mega-scale sites requiring hundreds of megawatts — which create diseconomies of scale — competitive advantage derives from smaller training clusters with access to affordable, renewable energy. Hyperscale data centres now define themselves at thresholds of merely 10 MW.
DeepSeek supports Scale42's long-term view
Scale42's strategic framework predicted five disruption areas; DeepSeek validates multiple themes:
- Hardware competition. While DeepSeek used NVIDIA processors, abandoning CUDA opens pathways for competing hardware providers if programmers develop non-proprietary expertise.
- Customisation over generic tools. Open-source foundations enable enterprise-specific AI development without rent-seeking from established providers.
- Market proliferation. DeepSeek exemplifies expanding competitive landscapes, catalysing new providers building specialised solutions.
Scale42's conclusions remain unchanged:
- AI adoption will permeate global economic structures as enterprises leverage internal data for custom applications.
- Enterprise deployment requires robust mid-sized infrastructure markets.
- Computing technology will decentralise from NVIDIA dependency.
- Open-source tools reduce costs while expanding infrastructure demand.
- Scale42's positioning emphasises mid-scale assets, cost leadership, and chip-agnostic approaches.