Pro Perspectives 1/28/25

January 28, 2025

We talked about the DeepSeek news yesterday.
 
As we discussed, there are plenty of reasons to doubt that a side project at a Chinese hedge fund will upend the AI leadership picture.
 
Today, David Sacks, venture capitalist and Trump's new AI and Crypto Czar, suggested this Chinese fund may have reverse engineered OpenAI's most advanced (closed-source) model.
 
If so, they would not have needed to train the model from scratch, bypassing the most expensive part of building a large language model.
 
What would the savings be from reverse engineering the world's most valuable generative AI model?
 
I asked ChatGPT.  Here's what it estimated …
 
Sounds about right.
 
Now, ChatGPT's knowledge training cutoff was June of last year.  So it, sadly, doesn't know the news. 
 
But this is how it perceived the implications of being reverse engineered.  
 
It says, "adversaries could 1) undercut OpenAI by offering the same or similar capabilities at a fraction of the price, 2) integrate the model into proprietary systems, making it difficult to detect theft, and 3) potentially improve the stolen model and deploy it for competitive advantage."
 
Check, check and check.
 
And with this, among the best performers in the stock market today were cybersecurity stocks.
 
What has changed with this DeepSeek model, based on the consensus view among the AI giants, is that it may have effectively (if indirectly) cracked open what had been a closed-source model at OpenAI.
 
And it has revealed the ability to improve (not create, but improve) on these models at low cost, which the industry seems to be acknowledging will broaden AI adoption ("democratize" model development) and drive more AI consumption.
 
And with that, more AI models mean more inferencing. 
 
More inferencing means more data creation (by the models), which leads to … more inferencing.
 
And it becomes self-reinforcing.
 
This is why some of the best-performing stocks of the past two days have been software companies that are delivering generative AI models to their customers and will generate significant inferencing revenues.
 
Now, if we follow this self-reinforcing logic, the more abundant the data, the greater the demand for computing power for inferencing.
 
And with that, bigger picture, AI advancement will only be limited by computing and energy capacity (and probably regulation).
 
This counters the idea that the DeepSeek news exposed the hyperscalers as having overbuilt capacity. 
 
Tomorrow, after the market close, we get earnings from three of the big datacenter builders (META, MSFT and TSLA), and we'll see how they address their capex plans and the shift to inferencing opportunities.
 
