Sam Altman, CEO of OpenAI, recently praised the Chinese startup DeepSeek for its AI model, R1, describing it as “impressive.” However, he emphasized OpenAI’s belief that greater computational power remains key to achieving significant advancements in artificial intelligence.
DeepSeek’s Low-Cost Approach Draws Attention
DeepSeek has gained recognition for developing its DeepSeek-V3 model for less than $6 million in computational costs, using lower-capacity Nvidia H800 chips. According to the company, its R1 model is 20 to 50 times more cost-effective than OpenAI's o1, depending on the task.
“The R1 from DeepSeek is an impressive model, particularly given what they’re able to deliver at such a low cost,” Altman remarked on X (formerly Twitter). Despite the commendation, he reiterated that OpenAI remains committed to its research roadmap, which places a strong emphasis on computational scalability as a cornerstone of its mission.
Challenges to Expensive AI Models
DeepSeek’s growth has sparked debate about the profitability of high-cost AI models and raised questions about the strategies of U.S. tech giants that have invested billions in artificial intelligence.
The impact was felt on Wall Street, where Nvidia's stock suffered a record $593 billion single-day loss in market value on Monday.
DeepSeek claims that R1, which it describes as an open reasoning model, outperforms OpenAI's o1 on specific benchmarks. The development has intensified discussion of the trade-offs between cost and performance in AI innovation.