Filippo Gregoretti

The project “Samasana: Togetherness” researches the relationships between humans and artificial entities, as well as amongst humans, focusing on the unconscious and emotional levels. 

During the S+T+ARTS AIR Residency at Sony CSL – Rome, a number of experimental strategies have been, and will continue to be, assessed as ways to advance the research project and promote the following results:

●     Accepting coexistence with benign artificial beings.

●     Experiencing a co-creative emotional connection with AI and algorithms.

●     Understanding the echo of our words in the infosphere. [1]

The core of the research concerns visual arts and music as means of expression to foster emotional connections with algorithms, artificial intelligence, and the infosphere. The research is conducted along two distinct but connected lines.

  1. The output of an infosphere content classification system, which incorporates toxicity assessments and moral values, will be transformed into a creative emotional representation that makes use of generative music and visual content. The artistic representation will then be refined through a validation phase of the conversion model.
  2. The second line of research entails the analysis and setup of techniques and tools that enable the audience to engage with the generative engine in a co-creative, performative session. The spectator will perform within a spatialized sound environment, with the goal of achieving a deeper connection with the artistic outcome.
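To make the first line of research concrete, the sketch below shows one possible conversion from a classification result to generative music parameters. Every name, range, and mapping here is a hypothetical assumption for illustration, not the project's actual conversion model.

```python
def classification_to_music(toxicity, moral_values):
    """Map an infosphere classification to hypothetical music parameters.

    toxicity: float in [0, 1]; moral_values: dict of value name -> score in [0, 1].
    Returns a dict of illustrative generative-engine parameters.
    """
    if not 0.0 <= toxicity <= 1.0:
        raise ValueError("toxicity must be in [0, 1]")
    # Higher toxicity -> more dissonance and a darker (lower) register.
    dissonance = toxicity
    base_pitch = 72 - int(24 * toxicity)  # MIDI note: C5 down to C3
    # The average moral-value score drives tempo and mode: calmer and
    # brighter when the content scores high on moral values.
    valence = sum(moral_values.values()) / max(len(moral_values), 1)
    tempo_bpm = 60 + int(60 * (1 - valence))
    mode = "major" if valence >= 0.5 else "minor"
    return {"dissonance": dissonance, "base_pitch": base_pitch,
            "tempo_bpm": tempo_bpm, "mode": mode}
```

A toxic, low-valence item, e.g. `classification_to_music(0.8, {"care": 0.2, "fairness": 0.4})`, would thus yield a low-register, fast, minor-mode texture; the validation phase described above would then test whether listeners actually perceive the intended emotional correspondence.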

A compelling narrative will support the artistic output in order to stimulate the emotional response. Each solution and element of the project can potentially generate outcomes in terms of scientific results, best practices, papers, models, HW/SW tools, and artistic outputs.

As an artist in residence, Filippo Gregoretti will draw on his expertise as a visual artist, musician, composer, performer, storyteller, interaction designer, engineer – and interpreter of the extraordinary effects of technological advancements on the emotional, psychological, social, and creative realms – to explore research-related topics. The project’s generative engine leverages an artificial artistic personality that Gregoretti developed over the course of his research, named “Amrita,” a “living algorithm” that simulates how an artist’s personality changes over time as a consequence of psychological, emotional, and creative elements. In the context of S+T+ARTS AIR, Amrita’s audio-visual generative engine will be connected to the infosphere representation model and will guide the performative tools. The desired outcome is to find further answers to some of the underlying questions that fuel Gregoretti’s artistic research: is it possible to find a comfortable, emotional sense of proximity with artificial beings? Can artistic expression, co-creation, and performance be the keys to perceiving algorithms as benign entities? Furthermore, can this feeling be leveraged in order to perceive artificial beings as trustworthy companions capable of helping us become more conscious of the potential effects of our daily decisions?

[1] With the term Infosphere we refer to a metaphysical realm of information, data, knowledge, communications, and expressions in general. This realm is the habitat of our ideas and beliefs. It has evolved dramatically in recent decades, thanks to the IT explosion, and understanding this change is of paramount importance because our actions and decisions project the Infosphere into our reality.


Sony CSL

The scientific investigation with Sony CSL involves two elements. The first is the analysis of infosphere data in terms of toxicity and moral values, the conversion of those measures into expressive visual and musical output, and a validation process using online tools. The second is the creation of software capable of reading human gestures through a webcam and converting them into continuous data. Relevant solutions have been elaborated, and a process has been designed to assess both lines of research.
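As a minimal sketch of the second element, the snippet below reduces each pair of consecutive grayscale webcam frames to a single continuous motion value in [0, 1] via frame differencing. This is an illustrative assumption about the technique, not the actual Sony CSL software, which may use an entirely different gesture-analysis approach.

```python
import numpy as np

def motion_level(prev_frame, frame):
    """Reduce two consecutive grayscale frames to one continuous value.

    Frames are uint8 arrays of equal shape; the result is the mean
    absolute pixel difference, normalised to [0, 1].
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) / 255.0

# Example with synthetic frames (a real setup would read frames from a webcam):
still = np.zeros((120, 160), dtype=np.uint8)
moved = still.copy()
moved[40:80, 60:100] = 255  # a bright region "appears" between frames
print(motion_level(still, still))  # 0.0: no movement
print(motion_level(still, moved))  # > 0: movement detected
```

A continuous signal of this kind is what a generative engine can consume directly, for instance as a modulation source for volume, density, or spatial position.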

PiNA – HEKA Lab

With PiNA, we studied a solution to control real-time spatialized sound generation through an API. In Koper we experimented with several approaches; the most effective one involves using Max 8 with an embedded instance of Node.js. The experiments we carried out gave positive results, and a software package can be developed that receives impulses via HTTP through an API and generates music in real time from both an AI and the human gestures captured by the package developed with Sony. I worked extensively with Mauricio Valdes de San Emetrio at HEKA Lab in order to test the raw technical solutions.
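The Max 8 + Node.js package itself is not reproduced here; as a language-neutral sketch of the request/response contract such a package might expose, here is a hypothetical Python equivalent. The field names (`intensity`, `x`) and the note-event schema are illustrative assumptions, not the project's actual API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def impulse_to_note(payload):
    """Translate an impulse (hypothetical schema) into a note event.

    payload: {"intensity": float in [0, 1], "x": float in [0, 1]}
    The spatial position x pans the note; intensity sets its velocity.
    """
    intensity = float(payload.get("intensity", 0.5))
    x = float(payload.get("x", 0.5))
    return {"pitch": 48 + int(x * 24),        # MIDI note: C3..C5 across the space
            "velocity": int(intensity * 127),  # MIDI velocity
            "pan": x * 2.0 - 1.0}              # -1 (left) .. +1 (right)

class ImpulseHandler(BaseHTTPRequestHandler):
    """Accepts POSTed JSON impulses and replies with the mapped note event.

    In the residency setup this role is played by the Node.js instance
    embedded in Max 8; this server only illustrates the message shape."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(impulse_to_note(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run the sketch locally:
# HTTPServer(("127.0.0.1", 8765), ImpulseHandler).serve_forever()
```

With such a contract, both the AI engine and the gesture package can act as HTTP clients posting impulses, while the spatialized sound generator consumes the resulting note events in real time.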