Google has announced another round of significant AI model updates, refreshing its Gemini lineup across the board to give users and developers more capable and reliable AI engines, according to the company. After DeepSeek's rise and new OpenAI models, the pace of AI development shows no sign of slowing.
First up is the Gemini 2.0 Flash model, which appeared in December for a select few and is now rolling out to everyone in the Gemini apps on desktop and mobile (this actually happened last week, so you may have already used it). Flash models are designed to be faster and lighter without sacrificing too much performance.
Google has also made a Gemini 2.0 Flash Thinking Experimental model available for all users to test. It's another "reasoning" model, like the one we saw in ChatGPT, where the AI works through problems step by step and shows its thinking.
There's also a version of this model, available to all users, that comes with access to apps: Google Search, Google Maps, and YouTube. It can return real-time information from the web, reference Google Maps data (including travel times and location details), and pull information from YouTube videos.
Finally, for developers, Google is rolling out Gemini 2.0 Flash-Lite. It's the most cost-effective Gemini model yet, aimed at those building tools on top of Gemini who want to keep costs down while maintaining a high level of processing performance across a variety of multimodal inputs (text, images, and more).
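For a sense of what "building on Gemini" looks like, here is a minimal sketch of assembling a request for Google's Gemini REST API. The endpoint shape and model identifier follow Google's published API conventions, but treat them as assumptions and check the current documentation; no network call is made here.

```python
import json

# Assumed model identifier for the cost-focused model described above.
MODEL = "gemini-2.0-flash-lite"

# generateContent endpoint pattern from Google's Gemini REST API.
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

def build_request(prompt: str) -> dict:
    """Assemble the JSON body for a simple text-only prompt."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

payload = build_request("Summarize this article in one sentence.")
print(ENDPOINT)
print(json.dumps(payload))
```

In a real tool you would POST this body to the endpoint with your API key; multimodal input works the same way, with image parts added alongside the text part.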
Pro-level models
You need Gemini Advanced to access some of these models.
Credit: Lifehacker
Next up is the even more powerful Gemini 2.0 Pro Experimental model: a little slower than its Flash equivalent, but better at reasoning, writing, coding, and problem-solving. This model is now rolling out in experimental form to developers and to all users paying $20 a month for Gemini Advanced.
"It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we have released so far," says Google. It can also take in two million tokens per prompt, which works out to around 1.4 million words on average, or roughly twice the length of the Bible.
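The context-window claim can be sanity-checked with rough arithmetic. The words-per-token ratio and the Bible's word count below are common rule-of-thumb figures, not numbers from the announcement:

```python
# Rough arithmetic behind the two-million-token context window.
TOKENS_PER_PROMPT = 2_000_000
WORDS_PER_TOKEN = 0.7     # rule of thumb for English text (assumption)
BIBLE_WORDS = 780_000     # approximate word count of the Bible (assumption)

words = TOKENS_PER_PROMPT * WORDS_PER_TOKEN   # ~1.4 million words
bibles = words / BIBLE_WORDS                  # ~1.8, i.e. "about twice"
print(f"{words:,.0f} words, about {bibles:.1f} Bibles")
```

Under those assumptions the figures line up: roughly 1.4 million words, or close to two Bibles per prompt.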
That's twice the capacity of the 2.0 Flash models, and Google has provided some benchmarks too. On the general-knowledge MMLU-Pro benchmark, scores come in at 71.6 percent, 77.6 percent, and 79.1 percent for Gemini 2.0 Flash-Lite, 2.0 Flash, and 2.0 Pro respectively, compared with 67.3 percent for 1.5 Flash and 75.8 percent for 1.5 Pro.
There are similar improvements across the board on other AI benchmarks, with Gemini 2.0 Pro Experimental scoring 91.8 percent on a leading math test. That compares with 90.9 percent for 2.0 Flash, 86.8 percent for 2.0 Flash-Lite, 86.5 percent for 1.5 Pro, and 77.9 percent for 1.5 Flash.
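Collecting those scores in one place makes the generation-over-generation gains easier to see. The numbers are taken directly from the figures above; note the announcement does not name the specific math benchmark:

```python
# Benchmark scores as reported above (percent).
MMLU_PRO = {
    "1.5 Flash": 67.3, "1.5 Pro": 75.8,
    "2.0 Flash-Lite": 71.6, "2.0 Flash": 77.6, "2.0 Pro": 79.1,
}
MATH = {  # the specific math benchmark is not named in the announcement
    "1.5 Flash": 77.9, "1.5 Pro": 86.5,
    "2.0 Flash-Lite": 86.8, "2.0 Flash": 90.9, "2.0 Pro": 91.8,
}

def gain(scores: dict, new: str, old: str) -> float:
    """Percentage-point improvement of one model over another."""
    return round(scores[new] - scores[old], 1)

print("MMLU-Pro, 2.0 Pro vs 1.5 Pro:", gain(MMLU_PRO, "2.0 Pro", "1.5 Pro"))
print("Math, 2.0 Flash vs 1.5 Flash:", gain(MATH, "2.0 Flash", "1.5 Flash"))
```

The pattern holds at every tier: even the budget 2.0 Flash-Lite beats the previous generation's 1.5 Flash on both tests.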
As has become standard for AI model launches, details are thin on the training data used, hallucination risks and inaccuracies, and energy consumption. Google does say the new Flash models are its most efficient yet, and that all the latest models are better than ever at reasoning over feedback and shutting down potential safety and security hacks.