REAL-TIME A/V INSTALLATION
Fragile is an audio-visual installation that investigates the relationship between stressful human experience and the transformations that occur in our brain. Recent scientific research has shown that neurons belonging to different areas of our brain are affected by stress. In particular, stress causes changes in neuronal circuitry, impacting its plasticity: the ability to change through growth and reorganization.
Our process draws on scientific data provided by the Society for Neuroscience and processes this information to show the effect of external interactions on our nervous system and, ultimately, on our relationship with the outside world. To achieve this we developed an artwork composed of different digital representations that follow one another, branching into 5 screen projections. At the same time, a real-time algorithm continuously collects tweets and predicts the stress associated with each sentence. The retrieved values act as a global stress value for the community and are used continuously throughout the installation to drive the audio-visual content.
Throughout our lives, our mind processes countless impulses that contribute to our interpretation of reality. The intent of this work is to represent the remarkable ability of the human brain to adapt and modify itself based on experience, in a constant search for balance; a balance to be treated with care, as something extremely precious and fragile.
We use a series of algorithms to extract a stress-level prediction from social media messages. Starting from a dataset of 20,000 sentences categorized as “stressful” or “neutral”, we trained a convolutional neural network to predict the level of stress associated with real-time tweets shared by people at any given moment.
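The trained classifier itself is not reproduced here; purely as an illustrative sketch, the shape of such a model can be outlined in plain Python as a tiny 1D convolution over token embeddings with a sigmoid output. All sizes and weights below are made-up stand-ins, not the installation's actual network:

```python
import math
import random

random.seed(0)
VOCAB, EMB, KERNEL, FILTERS = 1000, 16, 3, 8  # illustrative sizes only

def rand_vec(n):
    return [random.gauss(0, 0.1) for _ in range(n)]

embedding = [rand_vec(EMB) for _ in range(VOCAB)]                 # token -> vector
conv_w = [[rand_vec(EMB) for _ in range(KERNEL)] for _ in range(FILTERS)]
out_w = rand_vec(FILTERS)                                         # final linear layer

def stress_score(token_ids):
    """Map a tokenized sentence to a pseudo 'stress probability' in (0, 1)."""
    x = [embedding[t] for t in token_ids]
    feats = []
    for f in conv_w:
        best = 0.0  # ReLU + max-pooling over time
        for i in range(len(x) - KERNEL + 1):
            act = sum(f[k][d] * x[i + k][d]
                      for k in range(KERNEL) for d in range(EMB))
            best = max(best, act)
        feats.append(best)
    logit = sum(w * v for w, v in zip(out_w, feats))
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid: stressful vs. neutral
```

In the real system the output of a model of this kind, aggregated over many tweets, would yield the single global stress value driving the installation.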
Our aim has been to define a global stress level that can be used as a backbone for the variations occurring at the audio-visual level. We outlined a series of 3D representations exploring the concept of neuronal behavior, based on the data we received and on custom digital simulations, as shown in the scheme on the left. These visualizations are constantly affected by the social stress value, so every time the installation restarts the result can be different, showing a new interpretation of this complex phenomenon occurring inside our brain.
The installation begins with an examination of the data collected and provided by the Society for Neuroscience: a series of SEM (scanning electron microscope) images of a human brain. We decided to visualize these section images according to the stress level predicted from the analysis of Twitter messages.
To convey negative stress, we represent each SEM section as a sequence of tiles showing unsynchronized frames; to convey positive stress, we instead transition from one section to the next, with every tile belonging to the same section. This conveys a sense of the different levels of stress among the people experiencing the installation.
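The two tiling behaviors can be sketched as a small frame-selection function. Tile and frame counts here are assumptions for illustration, not the installation's actual values:

```python
import random

TILES, FRAMES, SECTIONS = 25, 60, 955  # tile/frame counts are assumed
_rng = random.Random(1)
_OFFSETS = [_rng.randrange(FRAMES) for _ in range(TILES)]  # fixed per-tile desync

def tile_frames(t, stressful):
    """Return the frame index shown by every tile at animation step t."""
    if stressful:
        # negative stress: each tile plays its frames out of sync
        return [(t + o) % FRAMES for o in _OFFSETS]
    # positive stress: every tile shows the same section, stepping together
    return [t % SECTIONS] * TILES
```

Under stress each tile drifts on its own offset; in the calm state the grid advances as one coherent section.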
After this flat 2D representation, the installation moves to a 3D visualization in an attempt to show this portion of the human brain and its beauty. A complex particle system takes the shape of our brain, as a composition of 955 SEM sections. This layering is rendered in real time with a volumetric shadow effect obtained through GLSL shaders. Once the reconstruction finds its final form, it gradually starts changing its structure. Here we defined a new environment in which we investigated 3D particle simulations able to visually replicate some of the interesting patterns occurring in our brain when neurons change their structure after a stressor event. The pathways and connections between neurons sometimes change in a very distinct and permanent way. We defined a series of behaviors for our system to simulate certain patterns occurring in real neuronal circuitry.
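The reconstruction idea (particles converging onto targets sampled from the stacked sections) can be sketched as a simple easing step. The target positions below are random stand-ins; the installation uses points from the 955 real SEM sections:

```python
import random

SECTIONS, PER_SECTION = 955, 4  # PER_SECTION is an illustrative assumption
rng = random.Random(3)

# stand-in targets: a few points per section, stacked along the z axis
targets = [(rng.random(), rng.random(), z / SECTIONS)
           for z in range(SECTIONS) for _ in range(PER_SECTION)]
# particles start scattered at random positions
particles = [(rng.random(), rng.random(), rng.random()) for _ in targets]

def step(parts, alpha=0.1):
    """Move each particle a fraction alpha toward its target each frame."""
    return [tuple(p + alpha * (q - p) for p, q in zip(part, tgt))
            for part, tgt in zip(parts, targets)]
```

Iterating `step` eases every particle toward its layer, so the brain volume gradually assembles; in the installation the same positions then feed the GLSL rendering.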
These simulated neural representations then converge with the neuron models extracted from the real human brain.
The last part of the installation focuses on a different layer of our perception of reality: sight. We investigated the interaction with the environment through the vision system, reinterpreted with the help of deep learning techniques. Our intention was to recreate multiple versions of reality, fictional yet realistic enough to be perceived as reliable by a human being. These visions are created and continuously affected by the stress value they receive.
To achieve this we coupled two deep neural networks. One is a generative model, trained on real-life photographs, that represents the ability of our brain to reconstruct what surrounds us and is responsible for creating images of urban scenery. The other is an object-detection model built to recognize objects in any given scene. The creation of new synthetic images is controlled by a series of parameters influenced by the “stress” value retrieved at that moment.
In general, the higher the stress, the more distorted the image becomes and the harder it is for the object-detection network to recognize known elements in it. Conversely, with low stress values the generated image is very realistic and the objects in the scene are much more distinct.
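One plausible way such a stress-to-distortion coupling could work, sketched here purely as an assumption (the actual generator and its parameters are not described in detail), is to perturb the generative model's latent vector with noise scaled by the current stress value:

```python
import random

rng = random.Random(42)
LATENT_DIM = 128  # assumed latent size; the real generator's is unknown

def distorted_latent(base, stress):
    """Perturb a latent vector in proportion to a stress value in [0, 1].

    High stress adds strong noise, so a (hypothetical) generator would drift
    away from realistic imagery; low stress leaves the vector nearly intact.
    """
    s = max(0.0, min(1.0, stress))       # clamp to the valid range
    sigma = 0.05 + 0.95 * s              # noise scale grows with stress
    return [z + rng.gauss(0.0, sigma) for z in base]
```

With a mapping like this, calm moments produce images close to the learned manifold, while stressed moments push the generator toward shapes the detector can no longer label.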
The sound elements are reworked through real-time sound synthesis and combined with automations and sound layers fixed in time, in order to maintain control over the narrative, the structure and the dynamics. The generative component is based on the idea of de-composition: a single element fragmented into its parts. The whole is then recombined into a new layered structure with its own characteristics. Recordings of different analog instruments were reworked through a 5-voice custom granular synthesis system (the number of voices corresponds to the number of screens), made specifically for this installation.
A peculiarity of the system is that some of the synthesis parameters (such as grain size, position and volume) are driven by the visualizations of certain portions of the brain. Within these visualizations, 5 cursors move, sampling values such as brightness and pixel position. This data is then converted and used to control and manipulate the sound.
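The cursor mechanism can be sketched as follows. The frame is a made-up brightness gradient, and the frame size, cursor paths and parameter ranges are all assumptions for illustration, not the installation's actual mappings:

```python
WIDTH, HEIGHT, VOICES = 320, 240, 5  # one granular voice per screen (sizes assumed)

# stand-in frame: brightness increases left to right, values in [0, 1]
frame = [[x / (WIDTH - 1) for x in range(WIDTH)] for _ in range(HEIGHT)]

def grain_params(t):
    """Per-voice (grain_size_ms, playback_position) at animation step t."""
    params = []
    for v in range(VOICES):
        x = (t * (v + 1)) % WIDTH        # each cursor scans at its own rate
        y = (v * HEIGHT) // VOICES       # cursors spread across the image rows
        b = frame[y][x]                  # sampled brightness in [0, 1]
        grain_ms = 10 + 190 * b          # brightness -> grain size, 10..200 ms
        position = x / WIDTH             # cursor x -> position in the buffer
        params.append((grain_ms, position))
    return params
```

Each animation step thus yields one parameter pair per voice, turning the moving image directly into control data for the granular engine.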
Art Direction: Mattia Carretti, Luca Camellini
Concept: Mattia Carretti, Luca Camellini, Samuel Pietri, Matteo Salsi
Software Artists: Luca Camellini, Samuel Pietri
Sound Design: Riccardo Bazzoni
Hardware Engineering: Matteo Mestucci
Video Report: Max Rykov
Premiere: Artechouse - Washington DC