What can we learn from two-way communications with dolphins?

Over the past few decades, research into the intelligence and communication of chimpanzees, elephants, dolphins, birds and other animals has elevated the respect and appreciation that many people have for some animal species. How will our world be different when we know what dolphins are telling us, and when we value their wisdom and intelligence? These are a few of the questions that Acoustic Interactions will explore in a phased approach.

Our overarching goal is to establish ongoing, two-way communication using an acoustic interface and introducing aUI, also known as the Language of Space. aUI is a powerful, logically consistent language built on a simple set of grammatical rules. It will enable us to proceed without first deciphering a dolphin language.

Our research phases: In February 2017 we launched Phase I to determine the vocal repertoire of thirteen dolphins living at the Oceanogràfic aquarium in Valencia, Spain, in cooperation with researchers from the Scottish Oceans Institute at the University of St Andrews, Scotland. We then worked with dolphins at the Institute for Marine Mammal Studies (IMMS) in Gulfport, MS, where we explored the role and process of mimicry in vocal learning. Our hypothesis is that the dolphins will vocalize and spontaneously mimic whistles played to them without formal training (Reiss & McCowan, 1993).

Currently we are in Phase II of our research, which is focused on establishing a complete vocal repertoire for the IMMS dolphins and then replicating our process with the Oceanogràfic dolphins. This procedure will be much more rapid thanks to DeepAcoustics, the software we developed by modifying DeepSqueak, a free whistle-processing package that uses a deep-learning neural network architecture to detect and analyze animal vocalizations quickly and accurately, even in noisy environments. We have automated the detection and characterization of each whistle contour with minimal human intervention, and in a recent breakthrough, we have also automated the process of classifying whistles into a coherent set of categories based on similarity of shape.
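To give a feel for the classification step, the toy sketch below groups synthetic whistle contours by similarity of shape: each contour is resampled to a fixed length, normalized so only its shape matters, and then assigned to the nearest existing category or used to start a new one. The example contours, the distance threshold, and the simple greedy rule are all illustrative assumptions, not the actual DeepAcoustics implementation, which uses a deep-learning network.

```python
import numpy as np

def resample_contour(contour, n_points=32):
    """Linearly resample a frequency contour (Hz over time) to a fixed length."""
    x_old = np.linspace(0.0, 1.0, len(contour))
    x_new = np.linspace(0.0, 1.0, n_points)
    return np.interp(x_new, x_old, contour)

def normalize(contour):
    """Remove the frequency offset and rescale so only shape matters."""
    c = contour - contour.mean()
    return c / (np.abs(c).max() + 1e-9)

def categorize(vectors, threshold=0.5):
    """Greedy shape categorization: join the nearest existing category if it is
    close enough, otherwise start a new category."""
    categories, centroids, labels = [], [], []
    for i, v in enumerate(vectors):
        if centroids:
            dists = [np.linalg.norm(v - c) for c in centroids]
            j = int(np.argmin(dists))
            if dists[j] < threshold:
                categories[j].append(i)
                centroids[j] = vectors[categories[j]].mean(axis=0)
                labels.append(j)
                continue
        categories.append([i])
        centroids.append(v.copy())
        labels.append(len(categories) - 1)
    return labels

# Toy contours of varying duration: upsweeps, downsweeps, and U-shaped whistles.
contours = []
for i in range(5):
    t = np.linspace(0.0, 1.0, 30 + 2 * i)
    contours.append(8000 + 4000 * t)                     # upsweep
for i in range(5):
    t = np.linspace(0.0, 1.0, 30 + 2 * i)
    contours.append(12000 - 4000 * t)                    # downsweep
for i in range(5):
    t = np.linspace(0.0, 1.0, 30 + 2 * i)
    contours.append(9000 + 3000 * (2 * t - 1) ** 2)      # U-shaped

vectors = np.array([normalize(resample_contour(c)) for c in contours])
labels = categorize(vectors)
print(labels)  # three shape categories recovered: five 0s, five 1s, five 2s
```

Normalizing before comparison is what lets whistles of different pitch and duration fall into the same shape category.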

Knowing the full vocal repertoire, we can then introduce new computer-generated whistles distinct from their natural whistles, a key bridge enabling us to build the interface and ultimately co-create a mutually developed language.
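As a sketch of how a computer-generated whistle could be produced, the snippet below synthesizes an audio waveform from an arbitrary frequency contour by accumulating the instantaneous frequency into a phase. The zig-zag contour, duration, and sample rate are illustrative assumptions chosen to be unlike a typical natural sweep.

```python
import numpy as np

def synth_whistle(contour_hz, duration_s=0.5, sr=48000):
    """Synthesize a whistle waveform from a frequency contour (Hz) by
    linearly interpolating the contour over time and integrating it
    into a phase."""
    n = int(duration_s * sr)
    t = np.linspace(0.0, duration_s, n, endpoint=False)
    contour_t = np.linspace(0.0, duration_s, len(contour_hz))
    freqs = np.interp(t, contour_t, contour_hz)      # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(freqs) / sr        # accumulated phase
    return np.sin(phase)

# A hypothetical novel contour: a zig-zag sweep between 9 and 13 kHz.
wave = synth_whistle([9000, 13000, 9000, 13000])
print(wave.shape)  # (24000,)
```

The resulting array could be written to a sound file and played through an underwater speaker; any contour distinct from the dolphins' natural repertoire can be generated the same way.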

Why aUI? The language is built on concepts: each letter or phoneme in aUI has a distinct sound and a unique meaning, and combining phonemes forms new words derived from their root concepts. This lets us build a vocabulary from scratch and expand it quickly, making new words and ideas easier to teach and learn. Both species have equal access to the entire vocabulary. And because each whistle unit is assigned to a phoneme rather than a word, the entire language can be represented by fewer than forty distinct whistles.
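To make the phoneme-to-whistle idea concrete, here is a minimal sketch in which each phoneme is paired with a root concept and one distinctive whistle frequency, and a word is simply a sequence of those whistles. The phoneme inventory, concept glosses, and frequencies below are illustrative placeholders, not the actual aUI definitions or our whistle assignments.

```python
# Hypothetical phoneme table: one concept and one whistle per phoneme.
# With phonemes (not words) mapped to whistles, a full vocabulary needs
# only as many distinct whistles as there are phonemes.
PHONEMES = {
    "a": {"concept": "space",    "freq_hz": 6000},
    "e": {"concept": "movement", "freq_hz": 7000},
    "i": {"concept": "light",    "freq_hz": 8000},
    "u": {"concept": "mind",     "freq_hz": 9000},
    "n": {"concept": "quantity", "freq_hz": 10000},
}

def word_to_whistles(word):
    """Translate a word (a string of phonemes) into its whistle sequence."""
    return [PHONEMES[p]["freq_hz"] for p in word]

def gloss(word):
    """Show the root concepts a word is built from."""
    return " + ".join(PHONEMES[p]["concept"] for p in word)

print(word_to_whistles("ae"))  # [6000, 7000]
print(gloss("ae"))             # space + movement
```

Because every word decomposes into its root concepts, a learner who knows the phoneme table can unpack a word it has never heard before.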

Reiss, D. & McCowan, B. (1993). Spontaneous vocal mimicry and production by bottlenose dolphins (Tursiops truncatus): Evidence for vocal learning. Journal of Comparative Psychology, 107, 301-312.


Learn more about our latest research with DeepAcoustics

Peter presents his latest dolphin whistle technology, DeepAcoustics, at the 2022 Society for Marine Mammalogy Conference poster session.