ARTIFICIAL BOTANY

REAL-TIME A/V INSTALLATION
2020

Artificial Botany is an ongoing project that explores the latent expressive capacity of botanical illustrations through the use of machine learning algorithms. Before the invention of photography, botanical illustration was the only way to visually record the many species of plants. These images were used by physicians, pharmacists, and botanists for identification, analysis, and classification.
While these works are no longer scientifically relevant today, they have become an inspiration for artists who pay homage to life and nature using contemporary tools and methodologies. Artificial Botany draws from public-domain archive images of illustrations by the greatest artists of the genre, including Maria Sibylla Merian, Pierre-Joseph Redouté, Anne Pratt, Marianne North, and Ernst Haeckel.
Developing as an organism in an interweaving of forms that are transmitted and flow into each other, the plant is the symbol of nature’s creative power. In this continuous activity of organizing and shaping forms, two opposing forces confront each other in tension: on one hand, the tendency toward the shapeless, the fluidity of passing and changing; on the other, the tenacious power to persist, the principle of crystallization of the flow, without which it would be lost indefinitely. In the dynamic of expansion and contraction that marks the development of the plant, beauty manifests itself in that moment of balance which is impossible to fix, caught in its formation and already on the point of fading into the next.
PROCESS

These illustrations have become the learning material for a machine learning system called a GAN (Generative Adversarial Network), which after a training phase is able to create new artificial images whose morphological elements are almost identical to those of the source images, but with details and features that look as if they were generated by human painters. In this sense the machine re-elaborates the content, creating a new language that captures the information and artistic qualities of both humans and nature.


Recent advances in the field of generative models make it intriguing to exploit their ability to create novel content from a given set of images. Following this direction, we focused our attention on Generative Adversarial Networks (GANs).
A GAN is made up of two networks, a generator and a discriminator, that compete with one another in a zero-sum game framework. GANs typically run unsupervised, teaching themselves to mimic a given distribution of data, which means that once trained they are able to generate novel content resembling the original dataset.
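The zero-sum game mentioned above can be made concrete with the standard GAN value function, V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))], which the discriminator tries to maximize and the generator tries to minimize. The following is a toy numerical sketch (not fuse*'s actual code) of how that objective behaves:

```python
import numpy as np

def value_function(d_real, d_fake):
    """GAN minimax value V(D, G): D's scores on real samples vs. generated ones.

    d_real, d_fake hold D's estimated probabilities that each sample is real.
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A confident, accurate discriminator (real ~ 0.9, fake ~ 0.1) keeps V high ...
v_sharp_d = value_function(d_real=np.array([0.9, 0.95]),
                           d_fake=np.array([0.10, 0.05]))

# ... while a generator that fools D (all scores near 0.5) drives V down,
# which is exactly what the generator is optimizing for.
v_fooled = value_function(d_real=np.array([0.5, 0.5]),
                          d_fake=np.array([0.5, 0.5]))

print(v_sharp_d > v_fooled)  # prints True
```

Training alternates between the two players: the discriminator takes gradient steps to increase V, the generator to decrease it, until neither can easily improve.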
The compressed space the model learns to sample from is called its latent space. It is usually high-dimensional, though far smaller than the raw medium: a full-colour (RGB) image at 1024×1024 pixel resolution has roughly 3 million dimensions (1024 × 1024 × 3 features), whereas the latent space might have only 512.
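The arithmetic behind that comparison is worth spelling out; a quick check (the 512-dimensional latent size is the figure cited above, typical of StyleGAN-family models):

```python
# One dimension per colour value: width x height x channels.
width, height, channels = 1024, 1024, 3

raw_dims = width * height * channels  # dimensionality of the raw image
latent_dims = 512                     # latent-space size cited in the text

print(raw_dims)                 # 3145728, i.e. roughly 3 million
print(raw_dims // latent_dims)  # 6144: thousands of times fewer dimensions
```

This compression is what makes the latent space expressive: nearby points decode to similar images, so interpolating through it produces the smooth morphing between botanical forms seen in the installation.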
The first step in building a GAN is to identify the desired end output and gather an initial training dataset based on those parameters. Random noise vectors are then fed to the generator, and its outputs are judged against the real data, until it acquires basic accuracy in producing plausible results.
In an unconditioned generative model, there is no control over the modes of the data being generated. However, by conditioning the model on additional information it is possible to direct the data generation process. Such conditioning could be based on class labels, on some part of the data (as in inpainting tasks), or even on data from a different modality.
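In the simplest conditional-GAN recipe, the condition is just concatenated to the noise vector, so the generator's input carries both randomness and the label. A minimal sketch, assuming one-hot class labels (the sizes and names below are illustrative, not the project's actual pipeline):

```python
import numpy as np

def conditioned_input(z, label, num_classes):
    """Concatenate a latent noise vector with a one-hot class label.

    The resulting vector is what a conditional generator would consume,
    letting the same network produce different classes on demand.
    """
    one_hot = np.zeros(num_classes)
    one_hot[label] = 1.0
    return np.concatenate([z, one_hot])

rng = np.random.default_rng(0)
z = rng.standard_normal(512)                          # 512-dim latent noise
g_input = conditioned_input(z, label=3, num_classes=10)

print(g_input.shape)  # (522,): 512 noise dimensions + 10 label dimensions
```

Keeping z fixed while varying the label steers the generation process without retraining, which is what "directing" the model means in practice.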

The first instalment of the project staged a speculative interaction between two AI systems. The text underlying each artwork is generated by another neural network algorithm. This type of system performs "image-to-text translation": while it is commonly used to classify images, here it was tested by asking it to recognize, frame by frame, images that were themselves artificially generated.

Production: fuse*
Art Direction: Mattia Carretti, Luca Camellini
Concept: Mattia Carretti, Luca Camellini, Samuel Pietri
Software Supervision: Luca Camellini
Software Artists: Luca Camellini, Samuel Pietri
Sound Design: Riccardo Bazzoni
Hardware Engineering: Matteo Mestucci