I’ve been enjoying pitting different AI chatbots against each other. After comparing DeepSeek to ChatGPT, ChatGPT to Mistral’s Le Chat, ChatGPT to Gemini 2.0 Flash, and Gemini 2.0 Flash to its own previous iteration, I have come back to match DeepSeek R1 against Gemini 2.0 Flash.
DeepSeek R1 triggered a flood of interest and suspicion when it debuted in the United States earlier this year. Meanwhile, Gemini 2.0 Flash is a solid new layer of capability on top of the widely deployed Google ecosystem. It is built for speed and efficiency, delivering quick, practical answers without sacrificing accuracy.
Both claim to be groundbreaking AI assistants, so I decided to test them from the perspective of someone with a casual interest in using AI chatbots in everyday life. Both have proven effective at a basic level, but I wanted to see which one felt more practical, insightful, and actually useful day to day. Each test has a screenshot with DeepSeek on the left and Gemini 2.0 Flash on the right. Here’s how they did.
Local Guide
(Image Credit: Screenshots of Google Gemini/Deepseek)
I was eager to test the two AI models’ search ability, combined with their judgment about what is worth doing as an activity. I asked both AI apps to “find some fun events for me to attend in the Hudson Valley this month.”
I live in the Hudson Valley and was already aware of some things on the calendar, so it would be a good measure of accuracy and utility. Impressively, both did well, coming up with a long list of ideas and organizing them thematically for the month. Many of the events were the same on both lists.
DeepSeek included links throughout its list, which I found useful, but its descriptions were just quotes from those sources. Gemini 2.0 Flash’s descriptions were almost all original and, honestly, more vivid and interesting, which I preferred. While Gemini did not have sources immediately available, I could get them by asking Gemini to check its answers.
Reading Coach
(Image Credit: Screenshots of Google Gemini/Deepseek)
I decided to expand my usual test of AI’s ability to offer advice-column guidance on improving my life with something more complex and dependent on actual research. I asked Gemini and DeepSeek to “help me devise a plan to teach my child to read.”
My child is not even a year old yet, so I know I have time before he sits through Chaucer, but it is an aspect of parenthood I think about a lot. Based on their answers, the two AI models might as well have been identical advice columns. Both came up with detailed guides for the different stages of teaching a child to read, including specific ideas for games, apps, and books to use.
Although they were not identical, they were so close that I would have had trouble telling them apart without formatting differences, such as the recommended ages DeepSeek attached to each phase. If asked which AI to choose based solely on this test, I would say there is no difference.
Vaccine Super Team
(Image Credit: Screenshots of Google Gemini/Deepseek)
Something similar happened with a question about simplifying a complex topic. With children in mind, I explicitly went for a child-friendly response by asking Gemini and DeepSeek to “explain how vaccines train the immune system to fight diseases in a way that a six-year-old could understand.”
Gemini started with an analogy about a castle and guards that made a lot of sense, then strangely threw in a superhero training analogy in a line at the end for some reason. Similarities in training data may explain it, because DeepSeek went all in on the superhero analogy. The explanation fit the metaphor, which is what matters.
Notably, DeepSeek’s answer included emojis, which, although appropriate where they were deployed, suggested the AI expected its answer to be read off the screen by an actual six-year-old. I sincerely hope that young children will not have unlimited access to AI chatbots, no matter how precocious and responsible their questions about medical treatment may be.
Riddle Key
(Image Credit: Screenshots of Google Gemini/Deepseek)
Asking AI chatbots to solve classic riddles is always an interesting experience, as their reasoning can be off the wall even when their answers are correct. I gave Gemini and DeepSeek an old standby: “I have keys but open no locks. I have space but no room. You can enter, but you can’t go outside. What am I?”
As expected, neither had any trouble answering the question. Gemini simply stated the answer, while DeepSeek broke down the riddle and the rationale for its answer, with several emojis. It even threw in a strange “bonus” about keyboards that unlock ideas, which fell flat as both a joke and as insight into a keyboard’s value. The idea that DeepSeek was trying to be cute is endearing, but the actual attempt felt a little off.
DeepSeek edges out Gemini
Gemini 2.0 Flash is an impressive and useful AI model. I started this fully expecting it to exceed DeepSeek in every way. But while Gemini did well in an absolute sense, DeepSeek either matched or beat it in most respects. Gemini seemed to alternate between human-like language and more robotic syntax, while DeepSeek either had a warmer tone or simply quoted other sources.
This informal quiz is hardly a definitive investigation, and there is much about DeepSeek that gives me pause. That includes, but is not limited to, DeepSeek’s policy of collecting basically everything it can about you and storing it in China for unknown uses. Still, I can’t deny that it goes toe to toe with Gemini without any trouble. And although, as the name suggests, Gemini 2.0 Flash was usually faster, DeepSeek didn’t take so much longer that I lost patience. That would change if I were in a hurry; I would choose Gemini if I had only a few seconds to produce an answer. Otherwise, despite my skepticism, DeepSeek R1 is just as good as, or better than, Google Gemini 2.0 Flash.