
It’s fascinating to see how DeepSeek has been everywhere on our social media feeds over the last couple of days.
Unsurprisingly, it’s now sparking a mix of excitement and anxiety across the industry. For some, it’s a revolutionary breakthrough. For others, it’s raising serious concerns (and the deep correction in most tech stocks on NASDAQ on January 27th only confirms that).
The big debate: Whether DeepSeek is truly better than what OpenAI, Anthropic, or others have achieved. The internet is still split over which model outdoes the others, though some early tests indicate that DeepSeek-R1 outscores its peers on several benchmarks.
[Figure: Benchmark performance of DeepSeek-R1. Image source: DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning]
Sheer compute power isn’t the only lever artificial intelligence (AI) creators have. In the current market, an AI model can stand out in one of two ways: by being state-of-the-art, or by being almost as good but far cheaper. DeepSeek appears to have mastered the latter, and that’s what makes it such a revolution right now.
Its founders claim to have developed DeepSeek with a budget of under US$6 million, and in less than two months. Moreover, all the training was done on older NVIDIA hardware rather than on the most expensive chips, the H100s.
While questions are being raised about the authenticity of that number (it may not account for all the hardware costs, since the hedge fund that owns DeepSeek already possessed the hardware, and DeepSeek may have been built off of OpenAI and Anthropic models, significantly bringing down the costs), DeepSeek has clearly shown that multi-billion-dollar budgets and the shiniest hardware on the market are no longer needed to build truly competitive AI models.
It’s not just the development costs: What also makes DeepSeek interesting is that it delivers the best that any AI company out there has to offer for free, at least for personal use. For commercial use, the cost is considerably lower at $0.55 per million input tokens and $2.19 per million output tokens, compared with OpenAI’s Application Programming Interface (API), which charges $15 and $60 per million tokens, respectively.
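To put those prices in perspective, here is a rough back-of-the-envelope comparison in Python. The monthly workload of 10 million input tokens and 2 million output tokens is a made-up illustration, not a real usage figure; only the per-million-token prices come from the figures quoted above.

```python
# Back-of-the-envelope API cost comparison using the per-million-token prices quoted above.
# The workload size (10M input / 2M output tokens per month) is a hypothetical illustration.

def monthly_cost(input_tokens_m: float, output_tokens_m: float,
                 price_in: float, price_out: float) -> float:
    """Cost in USD given token volumes (in millions) and per-million-token prices."""
    return input_tokens_m * price_in + output_tokens_m * price_out

workload = {"input_tokens_m": 10, "output_tokens_m": 2}

deepseek_cost = monthly_cost(**workload, price_in=0.55, price_out=2.19)   # = $9.88
openai_cost = monthly_cost(**workload, price_in=15.00, price_out=60.00)   # = $270.00

print(f"DeepSeek API: ${deepseek_cost:,.2f} vs. OpenAI API: ${openai_cost:,.2f}")
```

On this hypothetical workload, the same volume of tokens costs roughly 27 times less through DeepSeek’s API, which is the scale of difference driving the current excitement.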
What’s more, it is open source: everything it has achieved is documented and available for others to replicate. This should open the path for more such models to come to market, putting further pressure on pricing.
Only good news for enterprises: For those who’ve been eager to embrace AI but have been held back by the high costs, DeepSeek might be the breakthrough they’ve been waiting for. Imagine a future where you don’t need to budget millions to train and run AI agents, don’t need super-sophisticated hardware, and don’t need large, AI-skilled teams!
DeepSeek might also open the door to self-hosting, which so far hasn’t been practical because of the hardware and compute costs involved. And it’s not just DeepSeek either; there are others out there, such as Kimi k1.5, Alibaba’s Qwen, and 01.AI. These newer models could lead to faster adoption, even among Small and Mid-Sized Businesses (SMBs).
The eureka moment for Generative AI (gen AI) might be now. These are exciting times for AI. With DeepSeek, it’s become clear that innovation in this space no longer requires billion-dollar-deep pockets, and that is what truly makes it a game changer.
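For readers curious what self-hosting one of the smaller open models might look like in practice, below is a minimal Python sketch using the Hugging Face transformers library. The model id "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B" is assumed based on DeepSeek’s published distilled checkpoints, and suitable GPU hardware is assumed; treat this as an illustration rather than a production setup.

```python
# Minimal self-hosting sketch (illustrative only).
# Assumes the Hugging Face repo id below is available and that `transformers`,
# `torch`, and `accelerate` are installed; hardware requirements vary by model size.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed distilled checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick an appropriate precision automatically
    device_map="auto",    # spread layers across available GPU(s)/CPU
)

prompt = "In one sentence, why do lower inference costs matter for SMBs adopting AI?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```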
If you found this blog interesting, check out our blog AI-Powered Coding Assistants: Shaping The Future Of Software Development | Blog – Everest Group, which delves deeper into another AI-related topic.
To learn more about DeepSeek, what it may mean for the services industry, and the evolution of AI, please contact Aishwarya Barjatya ([email protected]) or Sharang Sharma ([email protected]).