Based on the limited number of comparisons made so far, DeepSeek's AI models appear to be faster, smaller, and a whole lot cheaper than the best offerings from the supposed titans of AI like OpenAI, Anthropic and Google.
And here's the kicker: the Chinese offering appears to be just as good. So how have they done it?
Firstly, it looks like DeepSeek's engineers have thought about what an AI needs to do rather than what it might be able to do.
It doesn't need to work out every possible answer to a question, just the best one - to two decimal places, for example, instead of 20.
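That trade-off corresponds to running a model at lower numerical precision: storing each number with fewer bits gives slightly coarser answers but halves the memory and bandwidth needed. A minimal sketch in Python with NumPy (a toy illustration of the general idea, not DeepSeek's actual training setup):

```python
import numpy as np

# A toy weight matrix stored at full precision versus half precision.
weights32 = np.random.default_rng(1).standard_normal((1000, 1000)).astype(np.float32)
weights16 = weights32.astype(np.float16)  # coarser values, but half the memory

print(weights32.nbytes // 1024, "KiB at 32-bit")
print(weights16.nbytes // 1024, "KiB at 16-bit")
print("worst rounding error:",
      float(np.abs(weights32 - weights16.astype(np.float32)).max()))
```

The rounding error is tiny relative to the weights themselves, which is why modern models tolerate the cut in precision so well.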
Their models are still massive computer programmes: DeepSeek-V3 has 671 billion parameters, the variables a model learns during training.
But GPT-4 is reported to have a colossal 1.76 trillion.
Doing more with less seems to be down to the architecture of the model, which uses a technique called "mixture of experts".
Where OpenAI's latest model, GPT-4o, attempts to be Einstein, Shakespeare and Picasso rolled into one, DeepSeek's is more like a university broken up into expert departments.
This allows the AI to decide what kind of query it's being asked, and then send it to a particular part of its digital brain to be dealt with.
This allows the other parts to remain switched off, saving time, energy and most importantly the need for computing power.
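In code, that routing step can be sketched as follows - a toy "mixture of experts" in Python with NumPy, illustrating the general technique rather than DeepSeek's actual implementation (the sizes and names here are invented for the example; real models use many more, and much larger, experts):

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, NUM_EXPERTS, TOP_K = 8, 4, 2  # toy sizes for illustration only

# Each "expert department" is a small layer; here just a weight matrix.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))  # learned gating weights

def moe_forward(x):
    """Send input x to the top-k experts only; the rest stay switched off."""
    logits = x @ router
    top = np.argsort(logits)[-TOP_K:]   # indices of the chosen experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over the chosen experts only
    # Only TOP_K of NUM_EXPERTS matrices are ever multiplied.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top)), top

x = rng.standard_normal(DIM)
y, used = moe_forward(x)
print(f"experts activated: {sorted(used.tolist())} of {NUM_EXPERTS}")
```

Because only two of the four experts run for any given input, the compute per query is roughly halved - and that saving scales up when a real model picks a handful of experts out of hundreds.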
And it's this equivalent performance with significantly less computing power that has shocked the big AI developers and financial markets.
The state-of-the-art AI models had been developed using more and more powerful graphics processing units (GPUs) made by the likes of Nvidia in the US.
The only way to improve them, so the market logic went, was more and more "compute".
Partly to stay ahead of China in the AI arms race, the US restricted the sale of the most powerful GPUs to China.
What DeepSeek's engineers have demonstrated is what engineers do when you present them with a problem: they come up with a workaround.
Learning from what OpenAI and others have done, they redesigned a model from the ground up so that it could work on GPUs designed for computer games, not superintelligence.
What's more, their model is open source, meaning it will be easier for developers to incorporate into their products.
Being far more efficient and open source makes DeepSeek's approach look like a much more attractive offering for everyday AI applications.
The result, of course, was a nearly $600bn overnight haircut for Nvidia.
But it will survive its sudden reversal in fortunes. The large language models (LLMs) pioneered by OpenAI and now improved on by DeepSeek aren't the be-all and end-all of AI development.
"General intelligence" from an AI is still a way off - and lots of high end computing will likely be needed to get us there.
The fate of firms like OpenAI is less certain. Their supposedly game-changing GPT-5 model, requiring mind-blowing amounts of computing power to function, is still to emerge.
Now the game appears to have changed around them and many are clearly wondering what return they're going to get on their AI investment.