Let's talk about this DeepSeek news …
Last Monday, as the U.S. was preparing to inaugurate a new President, one who will threaten China's economic model, a Chinese "hedge fund" posted this …
This fund claimed to have developed, as a side business, a large language model that performs "on par" with OpenAI's most advanced AI model (ChatGPT o1).
And they claimed to have trained it for less than 5% of the cost of the world's frontier models — and on just a few thousand old Nvidia GPUs. They also claimed these models can perform inferencing (reasoning) with significantly less computing power than the other leading models.
This looks like a challenge, whether authentic or deliberate, to the American tech boom, to the technology leadership of Nvidia (the world's most important company), and to the hundreds of billions of dollars in data center investments from the "hyperscalers" (Google, Microsoft, Meta, Oracle, Amazon).
Still, as this was all circulating last week, Oracle, Meta and Microsoft all publicly doubled down on massive data center spending plans for the year.
By Friday afternoon, Nvidia put in a key technical reversal signal, an outside day.
This technical phenomenon, an outside day (a session whose high exceeds the prior day's high and whose low undercuts the prior day's low), is a good predictor of turning points in markets, especially after long, sustained trends.
We also had an outside day in the Nasdaq on Friday — into record highs.
And of course, we had big declines in both to open the week.
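For readers who want to see the pattern concretely, here is a minimal sketch, assuming daily OHLC data in a pandas DataFrame, of how one might flag an outside day. The column names and sample prices are hypothetical, not the actual Nvidia or Nasdaq figures.

```python
# Minimal sketch of flagging an "outside day": a session whose high exceeds
# the prior day's high and whose low undercuts the prior day's low.
# The DataFrame layout and the sample prices below are hypothetical.
import pandas as pd

def outside_days(ohlc: pd.DataFrame) -> pd.Series:
    """Return a boolean Series: True where a session's range engulfs the prior session's."""
    higher_high = ohlc["high"] > ohlc["high"].shift(1)
    lower_low = ohlc["low"] < ohlc["low"].shift(1)
    return higher_high & lower_low

# Hypothetical two-session example: the second day takes out the first day's
# high, then reverses to trade below the first day's low (a bearish reversal).
data = pd.DataFrame(
    {
        "high": [148.0, 149.5],
        "low": [143.0, 141.8],
        "close": [147.0, 142.5],
    },
    index=pd.to_datetime(["2025-01-23", "2025-01-24"]),
)
print(outside_days(data))  # the second row evaluates to True
```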
So, does this new Chinese model upend the AI leadership picture?
What's verifiable at this point is the performance of the model. What's not verifiable is the development cost or the inferencing efficiency (i.e., the claim that the model requires significantly less computing power for inference).
That said, the trust level in such a discovery coming out of China should be low until proven otherwise. And we should consider the motives, given the potential this announcement has to disrupt American financial markets and the economy, just as Trump is entering office with plans to impose tariffs and other demands on the Chinese.