Google launches Gemma 4 open models with 140 languages, 400 million downloads

Google DeepMind released Gemma 4 on Wednesday, April 1st.

Google describes it as its most intelligent open model yet, designed for advanced reasoning and agentic workflows and released under the permissive Apache 2.0 license.

Google introduced four sizes: effective 2B (E2B), effective 4B (E4B), a 26B mixture-of-experts (MoE) model, and a 31B dense model.

So far, the 31B model ranks as the third-best open model globally on the LMArena text leaderboard.

Furthermore, Google reports that the 26B model takes sixth place, outperforming models 20 times its size.

In the official blog post, the VP of Research at Google DeepMind wrote: “Gemma 4 delivers an unprecedented level of per-parameter intelligence.”

Since the first Gemma model was released, the models have been downloaded over 400 million times, creating a “Gemmaverse” of over 100,000 variants.

The new models support native function calling, structured JSON output, and system instructions, enabling developers to build autonomous agents that interact with tools and APIs.
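To make the agent loop concrete, here is a minimal sketch of how function calling typically works: the model emits structured JSON naming a tool and its arguments, and the host program dispatches the call. The tool name, its stub implementation, and the exact reply format are illustrative assumptions, not Gemma 4's documented API.

```python
import json

# Hypothetical tool: a stub standing in for a real weather API call.
def get_weather(city: str) -> dict:
    return {"city": city, "forecast": "sunny", "temp_c": 21}

# Registry mapping tool names (as the model sees them) to callables.
TOOLS = {"get_weather": get_weather}

# A function-calling model replies with structured JSON naming the tool
# and its arguments; this string stands in for the model's output.
model_reply = '{"tool": "get_weather", "arguments": {"city": "Berlin"}}'

call = json.loads(model_reply)
result = TOOLS[call["tool"]](**call["arguments"])
# An agent would serialize `result` and feed it back to the model
# as the tool's response, continuing the conversation.
print(result["forecast"])
```

The key point is that structured JSON output makes the model's tool requests machine-parseable, so no brittle text scraping is needed between the model and the dispatcher.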

All models support video, image, and text processing out of the box; the E2B and E4B models additionally accept native audio input for speech recognition.

The models support more than 140 languages and provide context windows of up to 256K tokens on the larger sizes, enabling developers to process entire code repositories or long documents in a single prompt.
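A repo-in-one-prompt workflow can be sketched as follows: walk the repository, concatenate the source files into a single prompt string, and sanity-check the size against the context limit. The ~4-characters-per-token heuristic is a rough assumption; the real tokenizer will differ.

```python
import tempfile
from pathlib import Path

def pack_repository(root, suffixes=(".py", ".md")) -> str:
    """Concatenate matching files under `root` into one prompt string."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in suffixes:
            parts.append(f"### {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

def rough_token_count(text: str) -> int:
    # Crude heuristic (~4 characters per token), just for a size check.
    return len(text) // 4

# Tiny demo repository: two files packed into one prompt.
demo = Path(tempfile.mkdtemp())
(demo / "main.py").write_text("print('hello')\n")
(demo / "README.md").write_text("# Demo repo\n")
prompt = pack_repository(demo)
print("fits in 256K window:", rough_token_count(prompt) <= 256_000)
```

For a real repository, checking the estimate against the 256K ceiling before sending the prompt avoids silent truncation of the tail files.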

The edge-focused E2B and E4B models are optimized for mobile and IoT devices and run completely offline on phones, the Raspberry Pi, and the NVIDIA Jetson Orin Nano with near-zero latency. Google has been working with Qualcomm and MediaTek on mobile optimizations in collaboration with the Pixel team.

Users can access the models on Hugging Face, Kaggle, Ollama, and Google AI Studio.
