- Gemini 3.1 Flash Lite is cheaper (and better) than Gemini 2.5 Flash
- The new Google model beat rivals across several benchmarks
- Variable reasoning improves efficiency and speed
Google has unveiled Gemini 3.1 Flash Lite, its most cost-effective 3-series model, designed specifically for developers.
According to internal tests, Gemini 3.1 Flash Lite delivers up to 2.5x faster time to first answer token than Gemini 2.5 Flash, as well as 45% faster output generation, while matching or improving quality at a lower cost.
The company confirmed that prices for the new 3.1 model will be $0.25 per 1M input tokens and $1.50 per 1M output tokens – a significant reduction from $0.30/$2.50 for 2.5 Flash, but an increase from $0.10/$0.40 for 2.5 Flash Lite.
Gemini 3.1 Flash Lite launches in developer preview at an affordable price
Google also offered comparisons with rival third-party models, including GPT-5 mini ($0.25/$2.00), Claude 4.5 Haiku ($1.00/$5.00), and Grok 4.1 Fast ($0.20/$0.50), claiming that 3.1 Flash Lite outperforms key rivals across six benchmarks.
On the usability front, developers can adjust how much reasoning the model applies, switching between immediate answers for simple tasks and deeper reasoning for complex ones.
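Google didn't publish a code sample alongside the announcement, but the existing Gemini API already exposes this kind of control through a thinking budget in the request's generation config. Assuming 3.1 Flash Lite follows the same pattern (the exact field support for this model is not confirmed), a latency-sensitive request that skips extended reasoning might look like:

```
{
  "contents": [
    { "parts": [ { "text": "Summarize this support ticket in one sentence." } ] }
  ],
  "generationConfig": {
    "thinkingConfig": {
      "thinkingBudget": 0
    }
  }
}
```

In current Gemini models, a `thinkingBudget` of 0 requests an immediate answer, while a larger token budget allows the model to reason more deeply before responding; whether 3.1 Flash Lite uses this same field or a new one is an assumption here.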
Some of the use cases mentioned by Google include high-volume translation, content moderation, user interface and dashboard generation, and simulations.
The model is now available in preview for developers of the Gemini API in Google AI Studio and for enterprise users in Vertex AI.
More generally, the news comes just a few weeks after Google launched its 3.1 Pro model, which beats the likes of Claude Sonnet 4.6, Opus 4.6, GPT-5.2 and GPT-5.3-Codex across most benchmarks.