Uncharted Limbo Collective

Uncharted Limbo Collective is a team of creative coders and visual artists based in Athens and London who share a background in architecture but have expanded to develop creative projects using state-of-the-art digital tools. In their project Monolith, they will explore methods for facilitating communication and collaboration between humans and what they characterize as “digital beings.”

At HLRS, Uncharted Limbo will conduct experiments with dancers and movement artists. An algorithm they are developing will generate an audiovisual environment in HLRS’s CAVE visualization facility to prompt the dancers’ motions. At the same time, they will use sensors to capture data characterizing the dancers’ movements and internal states, such as heart rate and neural oscillations. These data will then be analyzed using deep neural networks, and the results will be fed back into the algorithm to train it to recognize and respond to patterns in the dancers’ movements. During the development of this method, Uncharted Limbo also plans to work with S+T+ARTS AIR partner Sony Computer Science Laboratories, which conducts research on augmented creativity. The team expects that by gathering and analyzing many hours of data, the algorithm will learn to anticipate, react to, and encourage human movements and emotions in an increasingly confident and deliberate way.

At the conclusion of the project, the team will stage a performance involving equal cooperation between one or more dancers and the resulting digital being: humans will react to audiovisual impulses while the algorithm responds to their movements, creating a performative dialogue between human and machine. In addition to testing new interactive technologies, Uncharted Limbo will use the project as a way to engage the public in thinking about issues related to artificial agency, digital presence, data ownership, and the omnipresence of digital technology. The results will also be available for exhibition elsewhere in the future.

“S+T+ARTS Air offers us a unique opportunity to develop a project we’ve been discussing for a while — an urge to address the important questions of agency and emergent sentience of digital entities,” commented Eleana Polychronaki on behalf of Uncharted Limbo. “As artists with significant experience in coding and art-directing real-time visuals for dance performances and interactive installations, we found ourselves wondering how we can break the norm of the typical ‘accompanying visuals’ and ‘one-way reactivity’ seen in most New Media performances. We hope that by collaborating with the HLRS team and taking advantage of their expertise in innovative visualization and access to supercomputing power we will be able to contribute to the immense field of research that explores human-machine interaction and the ever-evolving dynamics of that relationship.”

Reflection and thoughts on the project by Uncharted Limbo Collective:

  • Significant progress has been made in developing real-time, camera-based motion tracking, which is crucial both for training our neural networks (NNs) effectively and for future applications (generative visual and audio systems).
  • These advances allow the system to capture motion data in real time, improving the quality and accuracy of the datasets used to train our machine learning models (a minimal capture sketch follows this list).
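The collective does not name its tracking stack, so the following is only a minimal sketch of the kind of capture loop involved, using MediaPipe Pose and a webcam as stand-ins; the 33-landmark layout and all parameters are illustrative assumptions.

```python
import cv2
import mediapipe as mp

# Assumption: a single webcam and MediaPipe Pose stand in for the
# collective's actual (unspecified) real-time tracking pipeline.
mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)
frames = []  # one 33-joint (x, y, z) pose vector per captured frame

with mp_pose.Pose(min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR.
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            frames.append([(lm.x, lm.y, lm.z)
                           for lm in result.pose_landmarks.landmark])

cap.release()
```

Each captured frame becomes one row of a pose sequence, which is the raw material for the training steps described below.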

Machine Interpretation:

  • Data extraction from archived mocap sequences: Existing motion capture sequences from movement workshops have been processed, allowing the system to draw on a more diverse range of motion data and contributing to a richer training dataset for our AI models (see the windowing sketch after this list).
  • Data formatting for AI training (bias removal): The training dataset has been carefully formatted to keep it free from biases, improving the fairness and accuracy of the model. Removing redundancies and ensuring balanced representation helps the model generalise better to different types of motion input (see the balancing sketch after this list).
  • Motion Capture Classifier custom model [LSTM architecture]: A custom Long Short-Term Memory (LSTM) architecture has been developed to classify motion capture data. LSTMs are well suited to this task because they handle temporal sequences, allowing a more nuanced understanding and prediction of motion patterns (see the model sketch after this list).
  • Post-process class clustering from PaCMAP dimension reduction: After training, the Pairwise Controlled Manifold Approximation (PaCMAP) technique is employed to reduce the high-dimensional LSTM output to a more manageable form. This allows effective visualisation and clustering of motion classes, enabling easier identification of patterns and relationships within the data (see the reduction sketch after this list).
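The archived takes vary in length, so a common first step is to slice each take into fixed-length windows for training. This is an assumption about the preprocessing, since the collective does not describe it; the window and hop sizes below are illustrative.

```python
import numpy as np

def window_take(take, win=120, hop=60):
    """Slice one mocap take (frames x features) into fixed-length,
    50%-overlapping training windows. Sizes are illustrative
    assumptions, not the collective's actual values."""
    return np.stack([take[i:i + win]
                     for i in range(0, len(take) - win + 1, hop)])

# e.g. a 10-second take at 60 fps with 99 features (33 joints x xyz)
take = np.random.rand(600, 99).astype(np.float32)
windows = window_take(take)   # shape: (9, 120, 99)
```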
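‘Balanced representation’ can be read as equalising class counts so that no motion class dominates training. One simple approach, undersampling every class to the size of the rarest one, is sketched below; the collective’s actual de-biasing procedure is not specified.

```python
import numpy as np

def balance_classes(X, y, seed=0):
    """Undersample each class to the size of the rarest one so the
    classifier sees every motion class equally often."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()
    idx = np.concatenate([rng.choice(np.where(y == c)[0], n, replace=False)
                          for c in classes])
    rng.shuffle(idx)
    return X[idx], y[idx]
```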
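The classifier itself might look like the following PyTorch sketch. The source confirms only the LSTM architecture and the classification task; the layer sizes, the class count, and the choice to expose the final hidden state as a latent embedding (consumed by the PaCMAP step below) are assumptions.

```python
import torch
import torch.nn as nn

class MotionClassifier(nn.Module):
    """LSTM over pose sequences; returns class logits plus a latent
    embedding (the final hidden state) for later dimension reduction."""
    def __init__(self, n_features=99, hidden=128, n_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)
        latent = h_n[-1]              # final hidden state of the top layer
        return self.head(latent), latent

model = MotionClassifier()
batch = torch.randn(4, 120, 99)       # four windows from the step above
logits, latent = model(batch)
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1, 2, 3]))
```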
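For the post-process clustering, the pacmap package performs the reduction directly; clustering the 2-D embedding with k-means, and the cluster count, are illustrative assumptions.

```python
import numpy as np
import pacmap
from sklearn.cluster import KMeans

# Latent embeddings collected from the classifier above
# (placeholder data; in practice these come from real mocap windows).
latents = np.random.rand(500, 128).astype(np.float32)

embedding = pacmap.PaCMAP(n_components=2).fit_transform(latents)
clusters = KMeans(n_clusters=8, n_init=10).fit_predict(embedding)
```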

Inside the Machine Brain:

  • Spatial visualisation of LSTM latent space: The LSTM’s latent space is visualised in an almost gamified manner to give an intuitive view of how the model internally categorises different types of motion. This offers insight into how the machine thinks and learns, and into how it perceives similarities and differences between motion sequences, which appear as spatial neighbours (see the plotting sketch after this list).
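A minimal sketch of such a spatial view: the latent vectors are reduced to three dimensions and scattered so that similar motions land near one another. The file names and the colour-by-class scheme are hypothetical.

```python
import numpy as np
import pacmap
import matplotlib.pyplot as plt

latents = np.load("lstm_latents.npy")   # hypothetical export from training
labels = np.load("motion_labels.npy")   # hypothetical class labels

coords = pacmap.PaCMAP(n_components=3).fit_transform(latents)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(coords[:, 0], coords[:, 1], coords[:, 2],
           c=labels, cmap="tab10", s=8)
ax.set_title("LSTM latent space: similar motions as spatial neighbours")
plt.show()
```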

Collaboration with AIR partners:

  • Continued useful discussions with the HLRS team, receiving consistent feedback on the overall project and the ‘work in progress visualisation system.’
  • Shared progress on motion capture development, though HLRS’s expertise in this area is limited, resulting in less crossover.
  • Collaboratively explored potential features and applications for other datasets or fields.
  • Shared experiences with other artists, particularly with Natan Sinigaglia, to consolidate ideas through discussions about his motion classification project.
  • Attended the HLRS ReACH II Workshop in Stuttgart in September, where an in-depth presentation and Q&A session allowed discussion of findings and invited new perspectives and routes of enquiry.

S+T+ARTS - Funded by the European Union

This project is funded by the European Union from call CNECT/2022/3482066 – Art and the digital: Unleashing creativity for European industry, regions, and society under grant agreement LC-01984767