ARTIFICIAL BOTANY

A/V INSTALLATION
2019 / ongoing

Artificial Botany is an ongoing project that explores the latent expressive capacity of botanical illustrations through the use of machine learning algorithms. Before the invention of photography, botanical illustration was the only way to visually record the many species of plants. These images were used by physicians, pharmacists, and botanical scientists for identification, analysis, and classification.
While these works are no longer scientifically relevant today, they have become an inspiration for artists who pay homage to life and nature using contemporary tools and methodologies. Artificial Botany draws from public domain archive images of illustrations by the greatest artists of the genre, including Maria Sibylla Merian, Pierre-Joseph Redouté, Anne Pratt, Marianne North, and Ernst Haeckel.

Developing as an organism in an interweaving of forms that are transmitted and flow into one another, the plant is the symbol of nature’s creative power. In this continuous activity of organizing and shaping forms, two opposing forces confront each other in tension: on one hand, the tendency toward the shapeless, the fluidity of passing and changing; on the other, the tenacious power to persist, the principle of crystallization of the flow, without which it would be lost indefinitely. In the dynamic of expansion and contraction that marks the development of the plant, beauty manifests itself in that moment of balance which is impossible to fix, caught in its formation and already on the point of fading into the next.

 

PROCESS

The illustrations collected from digital archives became the learning material for a particular machine learning system called a GAN (Generative Adversarial Network), which, through a training phase, is able to create new artificial images whose morphological elements are extremely similar to the source images, but whose details and features seem to come from a real human hand. In this sense the machine re-elaborates the content, creating a new language that captures the information and artistic qualities of humans and nature.
Recent advances in the realm of generative models make it intriguing to exploit their ability to create novel content from a given set of images. Following this direction, we focused our attention on Generative Adversarial Networks (GANs): a technique made up of two networks that compete with one another in a zero-sum game framework.
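As a rough illustration of the two competing networks, the sketch below defines a toy generator and discriminator in PyTorch. It is only a minimal example under generic assumptions; the actual architecture used for the project is not described here.

```python
# Minimal sketch of the two competing networks (illustrative only,
# not the architecture actually used for Artificial Botany).
import torch
import torch.nn as nn

LATENT_DIM = 512        # size of the random input vector
IMG_DIM = 64 * 64 * 3   # a small flattened RGB image, kept tiny for brevity

class Generator(nn.Module):
    """Maps random latent vectors to synthetic images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, IMG_DIM), nn.Tanh(),   # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores how likely an image is to come from the real dataset."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 1),                    # raw logit: real vs. generated
        )

    def forward(self, x):
        return self.net(x)
```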


The first network is called the generator, and its job is to generate data from a random distribution. This output is then passed to the second network, the discriminator, which, on the basis of the data acquired during the learning phase, learns to decide whether the distribution of the generator's data is close enough to what it knows as the original data. If it is not, the process is repeated until a satisfactory result is obtained. GANs typically run unsupervised, teaching themselves how to mimic a given distribution of data, which means that once trained they are able to generate novel content starting from a specific dataset.
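A bare-bones version of this alternating process, under the same toy assumptions as the sketch above, could look like the following training step (hyperparameters and shapes are placeholders, not the project's actual setup):

```python
# One adversarial training step: the discriminator learns to separate real
# illustrations from generated ones, while the generator learns to fool it.
import torch
import torch.nn.functional as F

def train_step(G, D, opt_G, opt_D, real_images, latent_dim=512):
    batch = real_images.size(0)
    z = torch.randn(batch, latent_dim)            # random latent vectors

    # Discriminator update: push real images towards 1, generated ones towards 0.
    fake_images = G(z).detach()
    d_loss = (
        F.binary_cross_entropy_with_logits(D(real_images), torch.ones(batch, 1))
        + F.binary_cross_entropy_with_logits(D(fake_images), torch.zeros(batch, 1))
    )
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator update: try to make the discriminator label fakes as real.
    g_loss = F.binary_cross_entropy_with_logits(D(G(z)), torch.ones(batch, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()

    return d_loss.item(), g_loss.item()
```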
The distribution the network is able to learn is often called the latent space of the model. It is usually high dimensional, though much lower than the dimensionality of the raw medium. For example, when dealing with full-colour (RGB) images at 1024x1024 pixel resolution, we are dealing with roughly 3 million dimensions (1024x1024x3 features), whereas we might use a latent space of only 512 dimensions.
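The figures quoted above can be checked with a quick back-of-the-envelope calculation; the snippet also shows what sampling a single point in such a latent space looks like in PyTorch:

```python
import torch

pixel_dims = 1024 * 1024 * 3   # 3,145,728 feature dimensions per RGB image
latent_dims = 512              # a typical latent vector size
print(pixel_dims, pixel_dims // latent_dims)   # 3145728, 6144x fewer dimensions

z = torch.randn(1, latent_dims)   # one 512-dimensional latent vector
```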

The first step in establishing a GAN is to identify the desired output and gather an initial training dataset based on those parameters. This data is then randomized and fed into the network until the generator acquires a basic accuracy in producing outputs.
In an unconditioned generative model, there is no direct control over the data being generated. However, by conditioning the model on additional information it is possible to direct the data generation process. Such conditioning could be based on class labels, on some part of the data (as in inpainting-like tasks), or even on data from different modalities.
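As an illustrative sketch (not the project's actual conditioning scheme), conditioning on a class label can be done by embedding the label and concatenating it with the latent vector, so that the same generator can be steered towards a chosen class at generation time:

```python
# Hypothetical conditional generator: the class label is embedded and
# concatenated with the latent vector before being decoded into an image.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=512, n_classes=10, img_dim=64 * 64 * 3):
        super().__init__()
        self.embed = nn.Embedding(n_classes, 64)      # label -> 64-d vector
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 64, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Tanh(),
        )

    def forward(self, z, labels):
        cond = torch.cat([z, self.embed(labels)], dim=1)   # condition the input
        return self.net(cond)

# Usage: ask for a sample of (hypothetical) class 3.
G = ConditionalGenerator()
img = G(torch.randn(1, 512), torch.tensor([3]))
```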

TRANSFER LEARNING
We further developed the process by applying transfer learning to the previously trained models. In practice, this consists of reusing or transferring information from previously learned tasks to the learning of new tasks, which has the potential to significantly improve the efficiency of the network. In this case, we started from the model trained for the creation of the synthetic botanical illustrations and began a new training process with a new dataset composed of images of forests and leaves.
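In code, this step amounts to restarting training from the already-trained weights while swapping in the new dataset. The sketch below reuses the toy Generator, Discriminator and train_step defined earlier; the checkpoint file names and forest_loader are hypothetical placeholders, not the project's actual assets:

```python
import torch

# Load the weights of the model trained on botanical illustrations
# (hypothetical checkpoint names, for illustration only).
G, D = Generator(), Discriminator()
G.load_state_dict(torch.load("generator_botany.pt"))
D.load_state_dict(torch.load("discriminator_botany.pt"))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)

# Continue training, but now on the forest/leaves dataset.
for real_images in forest_loader:   # placeholder DataLoader over the new images
    train_step(G, D, opt_G, opt_D, real_images)
```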

In this animation, you can see the intermediate steps of the new training phase on the forest dataset. It is particularly fascinating how the previously learned features, which once defined parts of a plant illustration, slowly shift their meaning, coming to outline other parts of a mixed, complex structure.

The first instalment of the project, published in 2019, involved the speculative interaction between two AI systems. The text underlying each artwork is generated by another neural network algorithm. This type of system is called “image-to-text translation”: while it is commonly used to classify images, here it was tested by asking it to recognize, frame by frame, images that are themselves artificially generated.
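The text-generation step can be sketched with an off-the-shelf image-captioning model; the project does not specify which network was actually used, so the model name and frame paths below are only publicly available stand-ins:

```python
# Caption each generated frame, as if a second AI were "reading" the first.
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

for frame_path in ["frame_0001.png", "frame_0002.png"]:   # hypothetical frames
    result = captioner(frame_path)
    print(frame_path, "->", result[0]["generated_text"])
```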


A second version was prototyped by developing a grid inside which 576 modules, each different from the others, are gradually revealed. This narration offers an unusual perspective on the generative process, allowing the viewer to appreciate the overall dynamics and, at the same time, the detail of the individual flowers.
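As a purely illustrative sketch (the installation's real compositing logic is not public), a 24 x 24 grid of 576 generated modules with a gradual reveal could be assembled like this:

```python
# Compose 576 generated modules into a 24 x 24 grid, revealing them step by step.
import torch
from torchvision.utils import make_grid

GRID = 24                                       # 24 * 24 = 576 modules
modules = torch.rand(GRID * GRID, 3, 64, 64)    # stand-ins for generated flowers

def reveal(step, total_steps):
    """Return the grid image with only the first fraction of modules visible."""
    visible = int(len(modules) * step / total_steps)
    shown = modules.clone()
    shown[visible:] = 0.0                       # not-yet-revealed modules stay dark
    return make_grid(shown, nrow=GRID)

frame = reveal(step=100, total_steps=576)       # a 3 x H x W image tensor
```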

Given its nature as open research in continuous evolution, in 2022 Artificial Botany expanded with the conception and production of a complex exhibition project in which new languages, mediums, materials, and supports are experimented with. The aim was to describe the creative power of nature in both visual and conceptual terms, restoring the sense of mutability, transience, and evolution that notions, images, supports, technologies, and cultural elements undergo over time, incessantly, just like natural ones.

The project combines digital and analogue prints with video installations, sound environments with specimens from ancient herbaria, and natural silkscreen prints alongside immersive audiovisual installations. The remediation process underlying the artistic project activates a comparison between systems for studying and acquiring images of natural elements, an open dialogue between ancient and contemporary, memory, archiving, and imagination. This approach brings out the metamorphic character of existence: everything changes constantly.

The research was further enriched thanks to the concession, by the BUB (University Library of Bologna), the Botanical Garden of Bologna, and Alma Mater Studiorum, of several datasets of high-resolution images from the illustrated herbarium and the dry herbarium of Ulisse Aldrovandi, the famous botanist and Bolognese naturalist recognized by many as the father of modern natural history. By integrating the images of Aldrovandi's illustrations into the processing of Artificial Botany, a peculiar and unique version of the Aldrovandian botanical and imaginative collection was generated: a modern exploration that makes it possible to relate stylistic elements and details that cannot be grasped by the human eye alone.

The new body of work was previewed in the spaces of Cubo Unipol in Bologna from 18 January to 22 May 2022, on the occasion of das.05 Mutamenti. Le metamorfosi sintetiche di fuse* and Francesca Pasquali, curated by Federica Patti.


“The process highlights the artistic potential of a new and totally synthetic aesthetic” - 
Creative Applications


Awards:
 
Jury Selections of the 24th Japan Media Arts Festival, Digital Design Awards 2020
Production: fuse*
Art Direction: Mattia Carretti, Luca Camellini
Concept: Mattia Carretti, Luca Camellini, Samuel Pietri
Software Supervision: Luca Camellini
Software Artists: Luca Camellini, Samuel Pietri
Exhibition Design: Martina Reggiani
Sound Design: Riccardo Bazzoni 
Hardware Engineering: Matteo Mestucci
Support to Concept Writing: Saverio Macrì

Premiere: NEO c/o Cosmo Caixa / Barcelona, ES

 

Upcoming Exhibitions
04 September - 31 December 2023 - Dongdaemun Design Plaza / Seoul, KR
28 September - 1 October 2023 - Smart Life Festival c/o Fondazione San Carlo / Modena, IT

 

Selected Past Exhibitions
13 - 17 September 2023 - Scopitone Festival / Nantes, FR
25 May - 09 July 2023 - "Tell All The Truth But Tell It Slant" c/o FACE B / Bruxelles, BE
11 May - 01 September 2023 - Italian Cultural Institute c/o INNOVIT / San Francisco, US
11 March - 9 September 2023 - Marignana Arte / Venice, IT
03 December 2022 - 02 April 2023 - Hong Kong Design Institute / Hong Kong, HK
02 October 2022 - 15 January 2023 - Misk Art Institute / Riyadh, KSA
25 November 2022 - 2 January 2023 - National Taichung Theatre / Taichung City, TW
1 June - 31 July 2022 - Gardening Amelisweerd / Utrecht, NL
18 January - 22 May 2022 - MUTAMENTI. Le metamorfosi sintetiche di fuse* e Francesca Pasquali for das05 c/o CUBO, Unipol / Bologna, IT
12 February - 30 April 2022 - Marignana Arte / Venice, IT
13-25 January 2022 - Japan Media Arts Festival / Kochi, JP
15 October - 30 November 2021 - NEO c/o Cosmo Caixa / Barcelona, ES