Amygdala Web

Riccardo Bazzoni - 04/02/2021

Starting from the Amygdala project, presented in 2016, we developed a prototype that makes the heart of the installation accessible through a web page, Amygdala Web v0.1.

One of the reasons that led us to create this adaptation was the desire to foster a sense of interconnection and to overcome the limitations imposed by the lockdowns caused by the COVID-19 pandemic. From a technological point of view, our research revolved around developing and integrating a generative soundtrack within the web format. At the same time, we included a generative graphic component to retrace the original installation's path, albeit in a different form. The creation of Amygdala Web therefore unfolded in two phases: research and development.

RESEARCH

During the research phase we studied several technologies, analysing the features and limitations of frameworks and libraries capable of creating audio content, including Pure Data, Max/MSP, Tone.js and p5.js. The final choice fell on Tone.js, a framework for creating interactive music within the browser. Of all the technologies analysed, it proved to be the best for its flexibility and functionality. Tone.js builds on the Web Audio API (the browser's system for controlling audio) and can create complex sounds, effects and musical abstractions, exploiting JavaScript's full potential. As for the visual part (and, in general, for the web page the system runs in), we opted for the open-source p5.js JavaScript library, an ideal environment to pair with Tone.js for audio development.
We eventually recreated Amygdala's engine starting from the sentiment analysis algorithm based on the open-source library developed by Uroš Krčadinac, which allows us to extract the prevailing emotional trend from the analysed tweets. Lastly, we added emotional analysis of web news through the GDELT service, which provides a real-time emotional tone (positive or negative).

AUDIO
The audio part of Amygdala Web is built around the idea of free counterpoint. Four voices intertwine melodically and rhythmically, moving over chord progressions that define the harmonic aspect and the interval between the voices, creating a polyphonic flow in continuous transformation. The whole audio system is made with the Tone.js framework and generated in real time from the tweets' texts and their emotional analysis. The texts coming from Twitter are converted from ASCII to binary, creating numerical sequences in which a value of 1 triggers a note and a value of 0 corresponds to silence.

TWEET TEXT: Amygdala 
ASCII: 97 109 121 103 100 97 108 97 10
Binary: 1 1 0 0 0 0 1 1 1 0 1 1 0 1 1 1 1 1 0 0 1 1 1 0 0 1 1 1 1 1 0 0 1 0 0 ...
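The conversion above can be sketched in plain JavaScript; the helper name `textToBinary` is ours, not the project's:

```javascript
// Convert a tweet's text into the binary trigger sequence: each character
// becomes its 7-bit ASCII code, and the bits are concatenated into one array.
function textToBinary(text) {
  const bits = [];
  for (const ch of text) {
    const code = ch.charCodeAt(0) & 0x7f; // 7-bit ASCII value
    for (const b of code.toString(2).padStart(7, '0')) {
      bits.push(Number(b)); // 1 = play a note, 0 = rest
    }
  }
  return bits;
}

const binaryText = textToBinary('amygdala');
// first 7 bits: 1 1 0 0 0 0 1 (97, 'a'), then 1 1 0 1 1 0 1 (109, 'm'), ...
```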

The notes are extracted from groups of chords extended beyond the triad, built from the intervals that constitute each mode's structure.

Chord array creation and reading

// Scale degrees (in semitones) of the Ionian mode, and the same intervals
// extended over three octaves to build chords beyond the triad
var note = [0, 2, 4, 5, 7, 9, 11, 12]; // Ionian or Major
var chordNotes = [0, 2, 4, 5, 7, 9, 11, 12, 14, 16, 17, 19, 21, 23, 24, 26, 28, 29, 31, 33, 35, 36];

// "grado" is the current degree of the chord progression
var gradoProgression = [0, 3, 4]; // I - IV - V
var firstVoice = note[grado];
var secondVoice = chordNotes[grado + 2]; // thirds stacked above the root
var threeVoice = chordNotes[grado + 4];
var fourVoice = chordNotes[grado + 6];
var fiveVoice = chordNotes[grado + 8];
var sixVoice = chordNotes[grado + 12];
chord = [tone + firstVoice, tone + secondVoice, tone + threeVoice, tone + fourVoice];

// pick a random chord tone and convert it from MIDI to frequency
let noteVoiceOne = chord[Math.floor(Math.random() * chord.length)];
var midiToFreq = Tone.Frequency(noteVoiceOne, "midi").transpose(transposition + 24);

Through the sentiment analysis system, each emotion is associated with a scale or mode, exploiting the intrinsic expressive character given by its interval structure.
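One plausible way to encode such a mapping; the specific emotion-to-mode assignments here are our illustration, not necessarily the ones used by Amygdala Web:

```javascript
// Hypothetical mapping from a detected emotion to a mode's interval
// structure (semitones from the root, spanning one octave).
const emotionModes = {
  happiness: [0, 2, 4, 5, 7, 9, 11, 12], // Ionian (major)
  sadness:   [0, 2, 3, 5, 7, 8, 10, 12], // Aeolian (natural minor)
  fear:      [0, 1, 3, 5, 7, 8, 10, 12], // Phrygian
  anger:     [0, 2, 3, 5, 7, 9, 10, 12], // Dorian
  disgust:   [0, 1, 3, 5, 6, 8, 10, 12], // Locrian
  surprise:  [0, 2, 4, 6, 7, 9, 11, 12]  // Lydian
};

// Select the scale for the currently prevailing emotion.
function scaleFor(emotion) {
  return emotionModes[emotion] || emotionModes.happiness; // fall back to major
}
```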

 

Scales, Modes & Emotions

As for the timbral aspect, each voice consists of an oscillator that emits a simple triangle wave. Each oscillator has its own envelope generator, which varies the ADSR values randomly to differentiate the structure of the sound over time. Granular samplers manipulate recorded sounds, creating micro-loops and noises and adjusting their pitch to the notes being played. To emphasise the voices' independence, each instrument has its own reading time, triggering random parameters that shape the sound and effects and vary the playing.


Synth initialization

// triangle-wave synth routed through a ping-pong delay, a panner and a level meter
oscVoice1 = new Tone.Synth().connect(pingPongVoice1).connect(pannerVoice1).connect(meter);
oscVoice1.oscillator.type = 'triangle';
oscVoice1.volume.value = -20; // dB
oscVoice1.toMaster();

 
Granular Sampler Initialization

// granular player looping a flute sample through the same effect chain
grain1 = new Tone.GrainPlayer("./FluteC.mp3").connect(pingPongVoice1).connect(pannerGrain1).connect(meter);
Tone.Buffer.on('load', function() {
    grain1.start(); // start playback once the sample buffer has loaded
});
grain1.loop = true;
grain1.overlap = 0.1;      // seconds of overlap between grains
grain1.playbackRate = 0.1; // slow the sample to a tenth of its speed
grain1.volume.value = -10; // dB
grain1.toMaster();


Event trigger

function repeatVoice1(time) {
    // step through the binary sequence, one bit per second
    var timeInt = Math.floor(time);
    readNote = timeInt % binaryText.length;
    var binary = binaryText[readNote];
    if (binary == 1) {
        // pick a random chord tone and convert it from MIDI to frequency
        let noteVoiceOne = chord[Math.floor(Math.random() * chord.length)];
        var midiToFreq = Tone.Frequency(noteVoiceOne, "midi").transpose(transposition + 24);
        oscVoice1.triggerAttackRelease(midiToFreq, 10 + Math.floor(Math.random() * 100));
        // randomise the envelope to differentiate each note over time
        oscVoice1.envelope.attack = 5 + (Math.floor(Math.random() * 1000) / 100.);
        oscVoice1.envelope.decay = 0.1 + (Math.floor(Math.random() * 100) / 100.);
        oscVoice1.envelope.sustain = 0.1;
        oscVoice1.envelope.release = 2;
        pingPongVoice1.delayTime.value = 1 + Math.floor(Math.random() * 2000);
    }
}

 

VISUAL
The visual aesthetic is elaborated around the idea of memory and time: graphic grooves are printed on the screen, changing appearance and colour based on the emotional variations in the world. Over time, past trails move away from us, dissolving and leaving room for new paths that describe the present.
Technically, the paths are generated by overwriting a texture over time with flows of particles that draw trails, following a constantly changing vector field. A smoothing and blurring effect is then applied to the texture's alpha channel to create a fading effect over time. Meanwhile, newly defined lines continue to emerge, indicating the freshness of those emotions. Each emotion generates a colour, displaying the emotional variations and maintaining a "memory" of them that fades and slowly dissipates over time.
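The particle-advection step can be sketched independently of p5.js; `fieldAngle` here is a simplified, deterministic stand-in for the installation's constantly changing vector field:

```javascript
// A cheap flow field: the angle varies smoothly with position and time.
function fieldAngle(x, y, t) {
  return Math.sin(x * 0.01 + t) + Math.cos(y * 0.01 - t);
}

// Advance a particle one step along the field at the given speed.
function stepParticle(p, t, speed) {
  const a = fieldAngle(p.x, p.y, t);
  return { x: p.x + Math.cos(a) * speed, y: p.y + Math.sin(a) * speed };
}

// In a p5.js draw loop, the fading trails would come from not clearing the
// canvas: draw a translucent background each frame, e.g. background(0, 10),
// then line(prev.x, prev.y, p.x, p.y) for each particle before stepping it.
```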

Color particles

var totalEmotion = emotionFactor[6] * 100.0;
var percentageID = float(particleId % totalEmotion) / float(totalEmotion);
// assign each particle a colour according to the cumulative emotion shares
if (percentageID < emotionFactor[0]) {
    stroke(col[0] - 47, col[1] - 79, col[2] - 252, this.alpha * particleColor); // Happiness
} else if (percentageID < emotionFactor[0] + emotionFactor[1]) {
    stroke(col[0] - 81, col[1] - 30, col[2] - 57, this.alpha * particleColor); // Sadness
} else if (percentageID < emotionFactor[0] + emotionFactor[1] + emotionFactor[2]) {
    stroke(col[0] - 106, col[1] - 219, col[2] - 230, this.alpha * particleColor); // Fear
} else if (percentageID < emotionFactor[0] + emotionFactor[1] + emotionFactor[2] + emotionFactor[3]) {
    stroke(col[0] - 187, col[1] - 94, col[2] - 113, this.alpha * particleColor); // Anger
} else if (percentageID < emotionFactor[0] + emotionFactor[1] + emotionFactor[2] + emotionFactor[3] + emotionFactor[4]) {
    stroke(col[0] - 55, col[1] - 240, col[2] - 119, this.alpha * particleColor); // Disgust
} else {
    stroke(col[0] - 248, col[1] - 156, col[2] - 181, this.alpha * particleColor); // Surprise
}
strokeWeight(particleStroke * this.stroke);
line(this.pos.x, this.pos.y, this.prevPos.x, this.prevPos.y);
this.updatePrev(); // store the current position as the next segment's start
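The chain of comparisons above is a cumulative-threshold lookup; it can be condensed into a helper (the function name is ours):

```javascript
// Order of shares follows the original code's emotionFactor indices.
const EMOTIONS = ['happiness', 'sadness', 'fear', 'anger', 'disgust', 'surprise'];

// Return the emotion whose cumulative share first exceeds the particle's
// percentageID; the last emotion catches everything else, like the final else.
function emotionForParticle(percentageID, emotionFactor) {
  let cumulative = 0;
  for (let i = 0; i < EMOTIONS.length - 1; i++) {
    cumulative += emotionFactor[i];
    if (percentageID < cumulative) return EMOTIONS[i];
  }
  return 'surprise';
}
```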

API
To receive tweets and extract the predominant emotion, we created a web service that provides APIs (Application Programming Interfaces). Specifically, the server queries the Twitter APIs to receive a sample of the tweets being published around the world at any moment, applies the sentiment analysis algorithm, and eventually exposes these data to the graphical views through another custom API.
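The aggregation step of such a service might look like the following sketch; the field names and the endpoint mentioned in the comment are assumptions, not the project's actual API:

```javascript
// Given per-tweet sentiment results, compute the share of each emotion so
// the front end can drive the audio and visual engines.
function aggregateEmotions(analysedTweets) {
  const counts = { happiness: 0, sadness: 0, fear: 0, anger: 0, disgust: 0, surprise: 0 };
  for (const t of analysedTweets) counts[t.emotion]++;
  const total = analysedTweets.length || 1; // avoid division by zero
  const factors = {};
  for (const e of Object.keys(counts)) factors[e] = counts[e] / total;
  return factors; // e.g. served as JSON from a hypothetical GET /api/emotions
}
```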