MEET THE ARTISTS: Antoine Bertin
Making All Voices of the City Heard
Antoine Bertin’s “Making Invisible Conversation Visible” investigates the informational, relational, and essential nature of air through the lens of bio-acoustic research and artistic exploration. It approaches air as an acoustic medium of interspecies relations, using digital listening to give a voice to biodiversity, not just humans. The project harnesses digital acoustics, an intersection of field recording and artificial intelligence (AI), to deepen our understanding of urban environments, focusing on the interactions between non-human species, urban systems, and human inhabitants. Digital acoustics gives us the opportunity to dream of decoding the conversations of other species.
The project will gather vocalizations from birds (daytime) and bats (nighttime) over continuous recording periods. Using ultrasound microphones and AI-driven bat loggers, the team aims to decode bat communications and identify species in urban settings through machine learning. The interpretations of these data, these invisible conversations, will be made tangible through: 1) spatial sound explorations with a digital-acoustics-driven instrument, 2) AI meta walks and data gathering, and 3) a sculptural 3D sound experience.
Using algorithms such as BirdNET and BattyBirdNET, alongside bat-logger software, customized in collaboration with the Barcelona Supercomputing Center (BSC), the team will label audio datasets by species, location, and time. This labeling will facilitate visualizations of vocalizations across time, species, and space. A notable aspect of this research is the exploration of ambisonic and beamforming frameworks with PiNA’s technical team, leveraging specialized equipment such as the Zylia microphone. This technology will extract additional information from the audio datasets, including the angle of arrival, azimuth, and velocity of sound sources. The goal is to detect and map sound sources in space and in motion, offering a novel machine-listening approach.
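To make the labeling step concrete, here is a minimal sketch of how a recording can be tagged by species, time, and location with BirdNET. It assumes the open-source birdnetlib Python wrapper rather than the project’s BSC-customized pipeline, and the file name, coordinates, and confidence threshold are illustrative.

```python
# Minimal sketch: label a field recording by species, time, and location
# with BirdNET via the open-source birdnetlib wrapper (an assumption here;
# the project's BSC-customized pipeline is not public).
from datetime import datetime

from birdnetlib import Recording
from birdnetlib.analyzer import Analyzer

analyzer = Analyzer()  # loads the pretrained BirdNET model

recording = Recording(
    analyzer,
    "urban_dawn_chorus.wav",    # illustrative file name
    lat=45.55, lon=13.73,       # approximate coordinates of Koper, Slovenia
    date=datetime(2024, 5, 1),  # date constrains the expected species list
    min_conf=0.5,               # discard low-confidence detections
)
recording.analyze()

# Each detection carries the labels used downstream for visualization:
# species, position in time within the file, and model confidence.
for d in recording.detections:
    print(d["common_name"], d["start_time"], d["end_time"], d["confidence"])
```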
A key outcome will be user feedback sessions, in which researchers and artists experience the project’s findings and respond to them. These sessions may be held in site-specific pavilions that visualize the dynamics between species and cities through AI and listening. By integrating bio-acoustic research with urban planning, the project fosters a multi-species design paradigm in which non-human species can “voice” their needs. This can lead to sustainable urban ecosystems that accommodate diverse forms of life. The innovative yet accessible use of AI and machine-listening technologies, built on DIY and readily available recording devices, enhances our understanding of urban biodiversity. This approach can inspire new, biomimetic methods in city planning, ultimately promoting a harmonious coexistence between humans and nature.
Inspirational quote for this project:
“The microscope allowed humans to see differently, with their eyes and imagination. Digital acoustics is an invention of similar importance. Like the microscope, it functions as a scientific prosthetic: it broadens our sense of hearing, expanding our perceptual and conceptual horizons.” – Karen Bakker in The Sounds of Life
Mauricio Valdes’s point of view
Making Invisible Voices Visible explores artistic strategies, technological development, and conceptual advancements for integrating other-than-human species into city design. Focused on sound, the project delves into how digital acoustics, the analysis of bioacoustic datasets through machine learning, can be leveraged. By investigating artificial intelligence’s potential to organize, visualize, and compose with non-human vocalizations collected in urban environments, the project aims to deepen our understanding of the relationship between biodiversity’s voices and urban ecosystems.
Collaboration with the core team at PiNA has led to the development and testing of a workflow for collecting audio data tailored to the project’s needs. This involved curating and customizing solutions such as BirdWeather’s PUC and the BirdNET algorithm running on a Raspberry Pi. Two datasets of urban bird vocalizations have been compiled, each comprising over 300 bird detections across 30 different species. Additionally, in partnership with Dr. Mirjam Knörnschild, head of the Behavioral Ecology and Bioacoustics Lab at the Museum of Natural History in Berlin, a collection of bat vocalizations has been initiated; it currently contains 600 bat vocalizations from 15 species, gathered using ultrasound microphones and AI-driven bat loggers. Dr. Knörnschild’s expertise in decoding bat communications with machine learning supports the project in understanding bat species in urban settings, assembling urban bat vocalization datasets, identifying species algorithmically, and discerning differences in their vocalizations.
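As a hypothetical illustration of the collection side of such a workflow, the sketch below records fixed-length chunks on a device like a Raspberry Pi and writes them to disk for later BirdNET analysis; the sample rate, chunk length, and paths are assumptions, not the project’s actual configuration.

```python
# Hypothetical sketch of continuous audio collection on a Raspberry Pi:
# capture fixed-length chunks and store them for later BirdNET analysis.
# Sample rate, chunk length, and paths are illustrative assumptions.
import os
import time

import sounddevice as sd
import soundfile as sf

SAMPLE_RATE = 48000   # BirdNET models expect 48 kHz audio
CHUNK_SECONDS = 60    # one file per minute of listening

os.makedirs("chunks", exist_ok=True)

while True:
    audio = sd.rec(int(CHUNK_SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE, channels=1)
    sd.wait()  # block until the chunk has been fully recorded
    sf.write(f"chunks/{int(time.time())}.wav", audio, SAMPLE_RATE)
    # A separate process (e.g., the birdnetlib sketch above) can pick up
    # each chunk, run detection, and append labeled rows to the dataset.
```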
These growing databases, encompassing increasing detections, species, and recording locations, will be used to visualize how the presence of different species and their vocalizations may interact with key urban metrics such as air quality, sound levels, green space, temperature, and light. Simultaneously, these datasets are being used to train music-oriented machine learning tools, such as RAVE and FluCoMa, to pioneer novel sound-composition approaches using non-human vocalizations. This includes creating AI-synthesis-based instruments and exploring the diversity of non-human urban sounds through composition, spatialization, performance, and gesture.
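As a hedged sketch of the visualization side, assuming the labeled detections and per-site urban measurements are exported as CSV files (all file and column names below are hypothetical), one could join per-site species richness with the metrics and inspect simple correlations:

```python
# Illustrative sketch: relate labeled detections to urban metrics.
# The CSV files and column names are hypothetical assumptions.
import pandas as pd

detections = pd.read_csv("detections.csv")   # species, site, timestamp, confidence
metrics = pd.read_csv("urban_metrics.csv")   # site, air_quality, noise_db, green_pct, temp_c

# Species richness per recording site: number of distinct species detected.
richness = (detections.groupby("site")["species"]
            .nunique()
            .rename("species_richness")
            .reset_index())

joined = metrics.merge(richness, on="site")

# How does richness co-vary with each urban metric?
print(joined.corr(numeric_only=True)["species_richness"])
```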
Exploration and testing of ambisonic audio, supported by Mauricio Valdes at PiNA and drawing on PiNA’s specialized equipment (e.g., the Zylia microphone) and technical expertise, has led to the conceptualization of a potentially novel machine-listening approach. The approach aims to detect sound sources in space and in motion by analyzing an ambisonic sound field, potentially paving the way for further technological advancements.
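The approach itself is still being conceptualized, but one standard building block it could rest on is the pseudo-intensity vector of a first-order ambisonic (B-format) signal, which yields a per-frame direction-of-arrival estimate. The sketch below assumes a four-channel W/X/Y/Z recording; the input file is hypothetical, and a Zylia capture would first need to be encoded to B-format.

```python
# Sketch: per-frame direction-of-arrival from a first-order ambisonic
# (B-format) recording via the pseudo-intensity vector, i.e., short-time
# averages of the omni channel W multiplied by the dipole channels X, Y, Z.
import numpy as np
import soundfile as sf

b_format, sr = sf.read("bformat_recording.wav")  # shape: (samples, 4) = W, X, Y, Z
w, x, y, z = b_format.T

frame = int(0.1 * sr)  # 100 ms analysis frames
for i in range(len(w) // frame):
    s = slice(i * frame, (i + 1) * frame)
    ix, iy, iz = np.mean(w[s] * x[s]), np.mean(w[s] * y[s]), np.mean(w[s] * z[s])
    azimuth = np.degrees(np.arctan2(iy, ix))
    elevation = np.degrees(np.arctan2(iz, np.hypot(ix, iy)))
    print(f"frame {i}: azimuth {azimuth:+.1f} deg, elevation {elevation:+.1f} deg")
```

Tracking how these per-frame estimates evolve over time is one way to recover the movement of a source through the sound field.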
Reflection and thoughts on the project by Antoine Bertin
Main learnings:
- Training, visualization and audification of acoustic models
- Developing imaginaries and metaphors at the intersection of AI, the urban, and the musical
- Spatial sound composition and public listening event direction
Collaboration with AIR partners:
Antoine Bertin: “I take my most important takeaway from the collaboration with the partners from the time we were able to spend in person, in focus, and in a creative context together at PiNA. This week of working together in a setting that allowed multiple experiments, across multiple methods, such as group work, individual work, play-oriented development, and task-oriented production, all over multiple consecutive days, was the most productive session of the program, both creatively and in terms of achieving goals. The argument here is not for extensive in-person work as such, but for allowing methods from the art, technology, and science contexts to interweave by taking them out of the comfort zones of their respective ateliers, offices, and desktops.”
KNOWLEDGE TRANSFER
Antoine Bertin conducted two significant knowledge transfer activities as part of the S+T+ARTS AIR project:
1. META WALK: AI, Ecology, and Urban Listening
This immersive, guided exploration of urban biodiversity through sound took place in Koper, Slovenia. Bertin invited participants to engage in a deep-listening experience in urban spaces, combining bioacoustics and AI. The META WALK encouraged attendees to tune into the subtle yet rich tapestry of sounds created by urban wildlife, often hidden within the noise of daily city life[1].
A unique aspect of this experience was Bertin’s innovative approach of injecting different sounds into the headphones worn by attendees. For instance, while participants were surrounded by traffic noise, they were treated to underwater recordings of whales and other sea sounds, creating a stark contrast and heightening awareness of diverse soundscapes[1].
2. Technological Achievements Video Presentation
Antoine Bertin, in collaboration with his team from the Barcelona Supercomputing Center (BSC) and fellow S+T+ARTS AIR resident artist Jonathan Reus, produced a video showcasing the technological achievements of their projects. This knowledge transfer session focused on:
1. Latent space visualization of the BirdNET dataset and other sound sources, developed by Bertin and the BSC team. This 3D visualization clustered various sounds, providing a unique perspective on audio data[1].
2. The transformation of this visualization software into a 3D immersive audio sound player. This innovative tool represents the coordinates of played sounds within a sound field, compatible with immersive audio studios or binaural headphones; a conceptual sketch of this idea follows below.
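The BSC software itself has not been released, so the following is only a hedged sketch of the underlying idea under stated assumptions: embed each sound clip as a feature vector (here, MFCC means, a deliberate simplification of a learned latent space), project the collection to three dimensions, and reuse the resulting coordinates both for cluster plots and as source positions for spatial playback. The file layout and parameters are illustrative.

```python
# Hedged sketch of the latent-space-to-sound-field idea: embed clips,
# project to 3D, and reuse the coordinates as spatial positions.
# MFCC means stand in for a learned latent space; paths are illustrative.
import glob

import librosa
import numpy as np
from sklearn.decomposition import PCA

files = sorted(glob.glob("clips/*.wav"))
features = []
for path in files:
    audio, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    features.append(mfcc.mean(axis=1))  # one fixed-size vector per clip

coords = PCA(n_components=3).fit_transform(np.array(features))

# Similar sounds now sit close together in (x, y, z); the same coordinates
# can drive an immersive player as source positions for binaural or
# loudspeaker-array rendering.
for path, (px, py, pz) in zip(files, coords):
    print(path, round(px, 2), round(py, 2), round(pz, 2))
```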
This video presentation not only highlighted the technical aspects of the projects but also demonstrated the intersection of art, technology, and ecological awareness that is central to Bertin’s work.
This project is funded by the European Union from call CNECT/2022/3482066 – Art and the digital: Unleashing creativity for European industry, regions, and society under grant agreement LC-01984767