Google on Thursday released a new artificial intelligence (AI) model in the Gemini 2.0 family that focuses on advanced reasoning. The new large language model (LLM), called Gemini 2.0 Flash Thinking, increases inference time, allowing the model to spend more time on a problem. The Mountain View-based tech giant claims it can solve complex logic, maths and coding tasks. Additionally, despite the longer processing time, the LLM is said to complete tasks faster than comparable reasoning models.
Google releases new reasoning-focused AI model
In an announcement post, Google said the model has "been trained to use ideas" to strengthen its reasoning. It is currently available in Google AI Studio, and developers can access it through the Gemini API.
Gemini 2.0 Flash Thinking AI Model
Gadgets 360 staff members were able to test the AI model and found that the advanced logic-focused Gemini model easily solves complex questions that are too difficult for the Gemini 1.5 Flash model. In our testing, typical processing times ranged between three and seven seconds, a significant improvement over OpenAI's o1 series, which can take more than 10 seconds to process a query.
Gemini 2.0 Flash Thinking also shows its thought process, letting users check how the AI model reached a result and what steps it took to get there. We found that the LLM arrived at the correct solution eight times out of 10. Since this is an experimental model, mistakes are to be expected.
Although Google did not provide details about the architecture of the AI model, it highlighted its limitations in a developer-focused blog post. Currently, the input limit for Gemini 2.0 Flash Thinking is 32,000 tokens, and it accepts only text and images as input. It supports only text as output, capped at 8,000 tokens. Additionally, the API does not come with built-in tool usage such as search or code execution.
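The limits above lend themselves to a simple client-side check before a request is sent. The sketch below is illustrative only: the function name, the `(kind, payload)` request shape, and the rough four-characters-per-token estimate are our own assumptions, not part of Google's API.

```python
# Hypothetical pre-flight guard for the limits Google documented:
# 32,000-token input cap, text/image input only, text-only output
# capped at 8,000 tokens. Names and the 4-chars-per-token heuristic
# are illustrative assumptions, not Google's API.

MAX_INPUT_TOKENS = 32_000
MAX_OUTPUT_TOKENS = 8_000
ALLOWED_INPUT_TYPES = {"text", "image"}

def validate_request(parts, requested_output_tokens):
    """Return a list of problems with a prospective request (empty if OK).

    `parts` is a list of (kind, payload) tuples. Token counts for text
    parts are estimated with a rough 4-characters-per-token heuristic.
    """
    problems = []
    estimated_tokens = 0
    for kind, payload in parts:
        if kind not in ALLOWED_INPUT_TYPES:
            problems.append(f"unsupported input type: {kind}")
        if kind == "text":
            estimated_tokens += max(1, len(payload) // 4)
    if estimated_tokens > MAX_INPUT_TOKENS:
        problems.append(
            f"estimated input of {estimated_tokens} tokens exceeds {MAX_INPUT_TOKENS}"
        )
    if requested_output_tokens > MAX_OUTPUT_TOKENS:
        problems.append(f"output cap is {MAX_OUTPUT_TOKENS} tokens")
    return problems

# A text-only prompt within both caps passes cleanly.
print(validate_request([("text", "Solve this logic puzzle")], 1_000))  # []

# An audio part and an oversized output request each get flagged.
print(validate_request([("audio", b"...")], 9_000))
```

A real integration would replace the character-count heuristic with the API's own token counting, but a guard like this catches obviously invalid requests before they cost a round trip.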