AMYGDALA

Generative Data Installation
2016

The emotional state of each of us is conditioned by impulses and stimuli from the outside world, from the people we relate to and from our experiences, constantly modifying our perception of ourselves and of what lies around us. Ever more often, these interactions take place through digital channels and social networks, turning into data that can be listened to, interpreted and used. It is enough to access a social network, pick up a smartphone or simply surf the web to make personal and private information public, thus feeding ‘Big Data’: enormous pools of information containing everything that is fed into the network. The news and thoughts of users spread across social networks in real time, so an event with worldwide implications immediately involves millions of people sharing their own opinions and emotions: happiness, anger, sadness, disgust, amazement or fear. If we imagine the Internet as a living organism, we might think of its emotional state as the sum of the emotions shared by its users at any given time.
AMYGDALA listens to shared thoughts, interprets states of mind and translates the data gathered into an audiovisual installation capable of representing the collective emotional state of the net and its changes on the basis of events that take place around the world.
The aim is to make visible the flow of data and information constantly created by users, which may be heard and interpreted by anyone, in an attempt to stimulate reflection on the opportunities and dangers of the digital revolution we are currently going through. Big Data may in fact be used to monitor the spread of an epidemic in real time, or to prevent a crime and improve the safety of a city; likewise, it may also be exploited by companies and institutions to store, often without our knowledge, vast quantities of information on our private lives. We believe that gaining awareness of these mechanisms may help protect individual and collective free speech.

HOW IT WORKS
In humans, the amygdala is believed to be the integration centre for higher neurological processes such as the development of emotions, and it is also involved in emotional memory systems. It compares incoming stimuli with those of past experiences and takes part in processing olfactory stimuli. To reproduce these mechanisms artificially, AMYGDALA draws on a relatively recent discipline, ‘Sentiment Analysis’ (or ‘Opinion Mining’): a crossover between information retrieval and computational linguistics which makes it possible to deduce the emotional status of the messages shared by users across a network. When analysing a document, this technique focuses not on the topic discussed but on the opinion the document expresses. The advent of Web 2.0 and the ensuing growth in the volume of user-generated content has made Sentiment Analysis ever more widely used, especially for social research (opinion or market surveys), for the management of online reputations, for forecasting stock behaviour on the financial markets, or for tailor-made advertising campaigns.
Being able to understand the ‘sentiment’ of the network may help in better understanding the present and in making forecasts about a wide range of social phenomena, from stock market trends to the spread of illnesses, from popular revolts to the results of talent shows. Thus, by posting our thoughts on the net, we also make very personal information available, which may be exploited and stored, often without our knowledge and for a variety of purposes.
AMYGDALA does just this: it listens to and interprets the contents shared on the net by users in order to generate an audiovisual work.
The heart of the project is an algorithm of Sentiment Analysis based on Synesketch, the open source library developed by U. Krcadinac, P. Pasquier, J. Jovanovic and V. Devedzic (Synesketch: An Open Source Library for Sentence-Based Emotion Recognition, IEEE Transactions on Affective Computing 4(3): 312–325, 2013).


The algorithm splits emotions into six types: happiness, sadness, fear, anger, disgust and amazement, and carries out a text analysis of each single tweet at a rate of around 30 tweets per second. The analysis processes the tweet word by word using a dictionary of over 5,000 lexical items, each of which has a score for each emotion based on its meaning. Heuristic rules are also applied during the analysis, for example checking for negations in the text, or doubling a word’s score if it is written in capitals to increase its importance. Once analysed, a tweet is therefore represented by six values, one for each emotion, from which the strongest emotion of the tweet is derived. The overall emotional state is represented by the percentage of each emotion, determined by the number of tweets associated with it.
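
The scoring step can be illustrated with a minimal Python sketch. This is not the Synesketch implementation: the tiny lexicon, the weights and the negation handling below are invented for illustration, while the real dictionary holds over 5,000 entries.

EMOTIONS = ["happiness", "sadness", "fear", "anger", "disgust", "amazement"]

# Each lexicon entry maps a word to a weight per emotion; these few
# entries are invented examples standing in for the full dictionary.
LEXICON = {
    "love":  {"happiness": 1.0},
    "grief": {"sadness": 1.0},
    "scary": {"fear": 0.8},
    "hate":  {"anger": 0.9, "disgust": 0.4},
    "wow":   {"amazement": 1.0},
}
NEGATIONS = {"not", "no", "never"}

def analyse(tweet):
    # Score one tweet: six values, one per emotion.
    scores = {e: 0.0 for e in EMOTIONS}
    negate = False
    for token in tweet.split():
        word = token.strip(".,!?").lower()
        if word in NEGATIONS:        # heuristic: a negation flips the next hit
            negate = True
            continue
        entry = LEXICON.get(word)
        if entry:
            for emotion, weight in entry.items():
                if token.isupper():  # heuristic: capitals double the weight
                    weight *= 2.0
                scores[emotion] += -weight if negate else weight
            negate = False
    return scores

def strongest_emotion(scores):
    return max(scores, key=scores.get)

# The overall state: the share of tweets won by each emotion.
counts = {e: 0 for e in EMOTIONS}
for tweet in ["WOW what a night", "I hate this", "no grief today"]:
    counts[strongest_emotion(analyse(tweet))] += 1
total = sum(counts.values())
print({e: round(100.0 * n / total, 1) for e, n in counts.items()})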

From the moment AMYGDALA is activated, over the three months of the FLUX-US exhibition, millions of tweets will be listened to and interpreted, compiling an emotional archive of the net.



The project will be developed in two areas of CUBO: on the 125,952 LEDs of the 41 columns of the Media Garden, where the process of analysing and recognising the emotions is represented, and on the 12 videowalls of the Mediateca, where the evolution of the global emotional state is visualised over time. To keep track of this evolution, every 10 minutes the data gathered and analysed in the Media Garden are sent to the Mediateca to be ‘archived’ on the videowalls in the form of generative emotional graphics, which will go on to form the emotional memory of the three months in which AMYGDALA is deployed.
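
As a rough illustration of this ten-minute cycle, the following sketch accumulates per-emotion counts and periodically sends a snapshot for archiving. The host name, port and JSON message format are assumptions made for the example; the installation's actual transport between the two spaces is not documented here.

import json
import socket
import time

EMOTIONS = ["happiness", "sadness", "fear", "anger", "disgust", "amazement"]
ARCHIVE_ADDR = ("mediateca.local", 9000)  # hypothetical Mediateca endpoint
INTERVAL = 600                            # ten minutes, in seconds

counts = {e: 0 for e in EMOTIONS}         # filled by the tweet analysis

def archive_snapshot():
    # Send the current emotional state to the videowalls, then start a
    # fresh ten-minute window.
    total = sum(counts.values()) or 1
    snapshot = {
        "timestamp": time.time(),
        "percentages": {e: 100.0 * n / total for e, n in counts.items()},
    }
    with socket.create_connection(ARCHIVE_ADDR) as sock:
        sock.sendall(json.dumps(snapshot).encode("utf-8") + b"\n")
    for e in counts:
        counts[e] = 0

# In the installation loop, tweets are analysed continuously and
# archive_snapshot() runs every INTERVAL seconds, building the emotional
# memory of the exhibition over time.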

SOUND
The sound component is a very important part of the AMYGDALA installation because it metaphorically represents the process of the analysis and recognition of emotions.



The system that controls the audio uses Max MSP: six sound textures, one for each emotion, are mixed via the OSC (Open Sound Control) communication protocol, driven by the data arising from the tweet analysis. In the first stage of data gathering, distortions, minimal playback delays (varying from 0.1 to 100 ms) and strong decay effects are applied, producing coarse and barely recognisable sounds; as the emotions are gradually identified, the sound becomes clearer, revealing a melody corresponding to the resulting emotional percentages.
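
As an illustration, control data could be sent to such a patch over OSC as in the following Python sketch (using the python-osc library). The OSC addresses, the port and the single ‘clarity’ parameter are assumptions for the example, not the patch's actual message scheme.

from pythonosc.udp_client import SimpleUDPClient

# Hypothetical host/port for the machine running the Max MSP patch.
client = SimpleUDPClient("127.0.0.1", 7400)

EMOTIONS = ["happiness", "sadness", "fear", "anger", "disgust", "amazement"]

def send_state(percentages, clarity):
    # One mix level per texture: the patch fades the six sound textures
    # according to the emotional percentages.
    for emotion in EMOTIONS:
        client.send_message("/amygdala/" + emotion,
                            float(percentages.get(emotion, 0.0)))
    # A single 0..1 value the patch could map to delay time (100 ms down
    # to 0.1 ms) and decay amount, so the sound sharpens as the emotions
    # are identified.
    client.send_message("/amygdala/clarity", float(clarity))

# Early in the cycle: emotions undetermined, coarse sound.
send_state({e: 100.0 / 6 for e in EMOTIONS}, clarity=0.1)
# Later: emotions identified, the melody emerges.
send_state({"happiness": 40, "sadness": 25, "fear": 10,
            "anger": 10, "disgust": 5, "amazement": 10}, clarity=0.95)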
What’s more, thanks to a quadraphonic system, the Max MSP patch we developed also makes it possible to revolve the six tracks around the spectator, creating a disorienting effect until the emotions are identified, marking the end of the AMYGDALA cycle, which repeats each time in a different manner.
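
The rotation effect can be sketched with generic equal-power panning over four speakers; this is a standard technique shown for illustration, not the patch's actual DSP:

import math

SPEAKERS = [45.0, 135.0, 225.0, 315.0]  # speaker azimuths in degrees

def quad_gains(azimuth):
    # Gain per speaker for a source at `azimuth` degrees. Only the pair
    # of speakers adjacent to the source is active, and their gains
    # follow a cosine law so total power stays constant while rotating.
    gains = []
    for speaker in SPEAKERS:
        distance = (azimuth - speaker + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        gains.append(math.cos(math.radians(distance)) if abs(distance) < 90.0 else 0.0)
    return gains

# Sweeping the azimuth over time revolves a texture around the listener.
for azimuth in (0, 45, 90, 135):
    print(azimuth, [round(g, 3) for g in quad_gains(azimuth)])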


CUBO – MEDIA GARDEN
The collaboration between fuse* and UNIPOL began in 2012, when the planning and building of CUBO (Centro Unipol Bologna) was commissioned from FUSE*ARCHITECTURE, a spin-off of the studio born from the collaboration between fuse* and the architect Fabrizio Gruppini, with the aim of creating narrative spaces in which architectural and multimedia design are profoundly intertwined.
AMYGDALA will start life in one of the spaces created as part of CUBO: the MEDIA GARDEN, a light installation made up of 41 LED columns covering the two gardens of the building, and the most technologically complex installation in the whole CUBO project.
Each column is made up of a matrix of 3,072 high-luminosity RGB LEDs, giving a total of 41 × 3,072 = 125,952 LEDs, all controlled from the server room through 2.75 km of optical fibre. The columns were designed to work 24 hours a day, 365 days a year: an internal cooling system keeps the temperature constant, and a flow of air is blown onto the front glass panel to prevent misting.

 


In the part of the garden lying between the two wings of CUBO, where the installation becomes circular, an audio system transmits sound through 360°, enveloping spectators within the heart of the installation. The installation was designed to visualise various types of content: programmed multimedia shows as well as content generated in real time via software, such as AMYGDALA.

 

“A project that was brought to life due to our curiosity of finding a way of depicting the listening and interpretation of our continuous flow of thoughts shared on the net via social network” - Interview on Creative Applications
"It looks and sounds beautiful, like a futuristic reinterpretation of ancient stone circles, which carried, if not emotions, at least their own sets of associations." -  The Creators Projects
“A new art project wants to help you grasp the joy or fury expressed by all of the users around the world at once.” – Engadget
“I’m drawn to work that speaks to both the visceral and the intellectual, and AMYGDALA appears to do just that. There is a sense of ritualism, repetition, and exploration that drifts out from the imagery and forms.” - Cycling ’74
"It's a heady project, one that fuse's designers tell me they hope will not just help us understand Twitter better, but maybe even illuminate patterns that will allow them to predict surges of feeling on Twitter before they happen." - Fast.Co Design

 

Year: 2016
Commissioned by: UNIPOL
Awards: CITIC Press Lightening Selection, Jury Selections of the 20th Japan Media Arts Festival
Video Shooting: Gianluca Bertoncelli

Premiere: FLUX-US at CUBO with Mary Bauermeister and Francesca Pasquali / 26 January – 16 April 2016

Selected Exhibitions
Artechouse / Washington DC, US
Georgia Tech Arts / Atlanta, US