Google’s new AI model could one day let you understand and talk to dolphins


  • Google and the Wild Dolphin Project have developed an AI model trained to understand dolphin vocalizations
  • DolphinGemma can run directly on Pixel smartphones
  • It will be open-sourced this summer

For most of human history, our relationship with dolphins has been a one-sided conversation: we speak, they squeak, and we nod as if we understand each other before throwing them a fish. Now Google has a plan to use AI to bridge this gap. In collaboration with Georgia Tech and the Wild Dolphin Project (WDP), Google has created DolphinGemma, a new AI model trained to understand and even generate dolphin chatter.

WDP has collected data on a particular group of wild Atlantic spotted dolphins since 1985. The Bahamas-based pod has provided huge amounts of sound, video, and behavioral notes as scientists observed the animals, documenting every squawk and hum and trying to decipher what it all means. This treasure trove of sound is now being fed into DolphinGemma, which is based on Google's open Gemma family of models. DolphinGemma takes dolphin sounds as input, processes them with audio tokenizers such as SoundStream, and predicts which vocalization is likely to come next. Imagine autocomplete, but for dolphins.
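To make the autocomplete analogy concrete, here is a minimal sketch of the general idea: audio is turned into discrete token IDs (as SoundStream-style tokenizers do), and a model predicts the most likely next token. This is purely illustrative, with made-up token IDs and a toy bigram counter standing in for the real Gemma-based network.

```python
# Hypothetical sketch of "autocomplete for dolphins": a toy bigram model
# that learns which audio token tends to follow which. NOT the actual
# DolphinGemma implementation; token IDs below are invented.
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count, for each token, which token follows it in the training clips."""
    follows = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    """Return the most frequent successor of `token`, or None if unseen."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

# Toy "vocalization" clips, already tokenized into discrete IDs.
clips = [[3, 7, 7, 2], [3, 7, 2, 5], [1, 3, 7, 7]]
model = train_bigram(clips)
print(predict_next(model, 3))  # token 7 follows token 3 in every clip
```

The real model replaces the frequency table with a large neural network, but the framing is the same: given a sequence of sound tokens, predict what comes next.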
