The visual flow that characterises the work was realised through a custom pipeline that made the creation process smoother and more adaptable. The system is divided into three stages. The first is image generation: a particle system produces the initial image, which is then reinterpreted by the diffusion model on the basis of the dream text and a set of characteristic parameters (e.g. prompt embeddings, guidance scale, strength). The versatility of diffusion models allows a dynamic departure from the source image by giving greater or lesser weight to the interpretation of the dream text; managing this balance turned out to be one of the most interesting expressive tools on which the entire visual development of the project is based.
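In a typical Diffusers image-to-image setup (a sketch under assumptions, not the project's actual code), the `strength` parameter governs this balance: it sets what fraction of the denoising schedule is applied to the noised source image, so low values stay close to the particle-generated image while high values favour the dream text. A minimal illustration of that relationship:

```python
# Sketch of how `strength` trades source fidelity for prompt influence in
# img2img diffusion: only the last `strength`-fraction of the denoising
# schedule is run on a noised version of the source image.

def denoising_steps(num_inference_steps: int, strength: float) -> int:
    """Number of denoising steps actually applied to the source image."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must lie in [0, 1]")
    return min(int(num_inference_steps * strength), num_inference_steps)

# strength = 0.3 -> gentle reinterpretation, the particle image dominates
# strength = 0.9 -> strong departure, the dream text dominates
#
# A hypothetical pipeline call (not executed here; the model name is
# illustrative, not necessarily the one used in the work):
#   from diffusers import AutoPipelineForImage2Image
#   pipe = AutoPipelineForImage2Image.from_pretrained("runwayml/stable-diffusion-v1-5")
#   out = pipe(prompt=dream_text, image=particle_image,
#              strength=0.6, guidance_scale=7.5).images[0]
```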
The second stage runs a particle simulation in a GLSL shader on top of the generated image. Its goal is to add a further layer of customisation by visually referencing each dream's data (sleep cycle, polysomnography and EEG data) and introducing visual elements that react in real time to the soundtrack of the piece.
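The shader itself runs on the GPU in GLSL; the kind of audio-reactive update it performs can be sketched in Python. Here the soundtrack's amplitude envelope scales each particle's per-frame displacement, so louder passages scatter the particles further (the names and the specific update rule are illustrative assumptions, not the project's actual shader logic):

```python
import numpy as np

def update_particles(positions: np.ndarray,
                     velocities: np.ndarray,
                     audio_level: float,
                     dt: float = 1.0 / 60.0) -> np.ndarray:
    """Advance particle positions one frame, modulated by the audio envelope."""
    return positions + velocities * dt * (1.0 + audio_level)

pos = np.zeros((4, 2))            # four particles at the origin
vel = np.ones((4, 2))             # constant drift
quiet = update_particles(pos, vel, audio_level=0.0)
loud = update_particles(pos, vel, audio_level=1.0)
# a louder frame produces a larger displacement than a quiet one
```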
The final stage applies a set of additional features that contribute to the visual reworking and artistic customisation of the generated image. This last layer increases the overall compositional complexity and, through a feedback process, ensures visual consistency by becoming the basis for the next frame.
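The feedback process can be pictured as blending each newly generated frame with the previous output before feeding it back as the next frame's source. A minimal sketch, where the blend weight and all names are assumptions rather than the project's actual parameters:

```python
import numpy as np

def feedback_blend(generated: np.ndarray,
                   previous: np.ndarray,
                   persistence: float = 0.35) -> np.ndarray:
    """Mix the new frame with the previous output; the result becomes the
    source image for the next iteration, keeping successive frames coherent."""
    return (1.0 - persistence) * generated + persistence * previous

frame = np.zeros((2, 2))           # initial black canvas
for _ in range(3):                 # iterate toward a constant white target
    frame = feedback_blend(np.ones((2, 2)), frame)
# each pass pulls the canvas closer to the newly generated image while
# retaining a trace of the previous frame
```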
To dive deeper into the creation process of the work, check out our online workshop dedicated to Onirica ().
In Onirica (), the main visuals of the installation are shown on a central, square canvas: acting as a window onto a parallel reality, the installation absorbs viewers through its fast-moving images.
In parallel with this visual narrative, additional textual content is projected onto side supports. These elements aim to explore and expand the perception of the entire archive of collected dreams.
Throughout the experience, whenever specific keywords are uttered, the texts of other dreams in the archive that are syntactically connected to those words are displayed on the side walls.
The same correlation between keywords and lateral projections is further explored during the bridge phase - which serves as an interlude between the 30 dreams - where the fast-paced rhythm of the words displayed on the central dream visualisation causes the texts on the sides to alternate very quickly, giving the viewer a sense of the thematic recurrence, complexity and size of the archive at hand.
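The keyword-to-archive lookup driving the side projections can be sketched as follows. This is a naive word-match stand-in for illustration only; the actual work links dreams through sentence-embedding analysis, and all names here are hypothetical:

```python
# When a keyword is uttered, find the archive dreams whose text contains it;
# in the installation, these are the texts projected on the side walls.

def related_dreams(keyword: str, archive: list[str]) -> list[str]:
    """Return every archive dream whose text mentions the keyword."""
    kw = keyword.lower()
    return [dream for dream in archive if kw in dream.lower()]

archive = [
    "I was flying over a frozen sea.",
    "A staircase kept folding into the sea.",
    "My childhood house had no doors.",
]
hits = related_dreams("sea", archive)   # matches the two sea dreams
```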
Pasqua Winery, 2024
Fondazione Peruzzo, 2023 | ph. Ugo Carmeni Studio
INOTA Festival, 2023
Pasqua Winery, 2024
Light Art Museum, 2024
Onirica () is an artwork by fuse*
Art Direction: Mattia Carretti
Executive Production: Mattia Carretti, Luca Camellini
Concept Development: Mattia Carretti, Matteo William Salsi, Giulia Caselli
Sound Design & Music: Riccardo Bazzoni
Head of Visual Design: Matteo William Salsi
Visual Development: Matteo William Salsi, Matteo Amerena, Samuel Pietri
Voices Design: Matteo Amerena
Prompt Design: Giulia Caselli, Matteo William Salsi
Hardware Engineering: Luca Camellini, Matteo Amerena
Production Assistants: Martina Reggiani, Filippo Aldovini, Virginia Bianchi
The work was produced with the support of INOTA Festival and Fondazione Alberto Peruzzo.
The visual component is based on a pipeline integrating Diffusers, the state-of-the-art diffusion model library developed by Hugging Face, with the OpenGL Shading Language (GLSL).
Connections between dreams were obtained through text analysis with the Sentence Transformers framework, first introduced in the paper "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks" by N. Reimers and I. Gurevych.
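Conceptually, each dream text is mapped to a fixed-size vector and connections between dreams are scored by cosine similarity between those vectors. A sketch of the scoring step over toy embeddings (in practice the vectors would come from a Sentence Transformer model's `encode` method; the vectors and model name below are illustrative assumptions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for sentence embeddings of three dream texts.
dream_a = np.array([0.9, 0.1, 0.0])
dream_b = np.array([0.8, 0.2, 0.1])   # thematically close to dream_a
dream_c = np.array([0.0, 0.1, 0.9])   # thematically distant

# Real usage would look like this (not executed here, as it requires a
# model download; the model name is an example, not the project's choice):
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("all-MiniLM-L6-v2")
#   embeddings = model.encode(dream_texts)
```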
The speech synthesis was realised with the Bark model developed by Suno AI.
T33, Shenzhen (CN)
Light Art Museum, Budapest (HU)
Mad Arts, Miami (US)
Pasqua Winery, Verona (IT)
Fondazione Alberto Peruzzo, Padua (IT)
INOTA Festival, Várpalota (HU)