Adversarial Synesthetics


Recently, a post by David O'Reilly made the rounds in our creative circles, specifically regarding DALL-E 2, questioning the ethics of ingesting millions of other artists' work into these models' neural memories and making it available to paying subscribers through simple prompts.

This 'prompted' us to share a series of ongoing explorations examining the new technocene of the modern zeitgeist. As a collective of creators, designers and artists, we have been constantly playing with the ideas of 'AI' technology through our work for some time: from the conceptual research of FIELD BLUE to the more practical use of tools like depth estimation in our Adidas 'Running Lab' for Systems.

The purpose of these introspective studies is to bring to light the practical ways in which these new technologies can benefit us creatively, and to continue questioning the ethical and philosophical implications of such technology. It is, after all, the human condition to open Pandora's box before we try to understand what was in it.

#1 - Inside an abstract, expressionistic computer-generated synesthetic dream.

A common objection to image generation models like DALL-E 2 and Midjourney is that they 'rip off' other artists' work: consuming years of effort and labour, and spitting out all kinds of imitation art, unsanctioned and bootlegged counterfeits. Of course, the line between 'inspiration' and 'rip-off' has always been a fine one to walk as a creative. As artists, do we not continuously pull from our own knowledge and inspirations when we create? We were all already standing on the shoulders of giants, as the phrase goes - learning from those who came before us.

Knowledge is power, and that's something a 'general' AI like DALL-E 2 has in abundance. You could argue that such an AI's main strength is its ability to consume and refer to more 'inspiration' than we could ever hope to absorb in one lifetime, giving you a limitless well of references to pull from when you enter your prompt. Setting aside for a moment the social and moral conundrum of a private company harnessing this for profit, it also exposes a fundamental problem with a general AI: even though you can direct it using seed phrases and try to point it to specific references, it is always pulling from the same general pool of knowledge. So is more always better?

If you ask 10 different artists the same question, you will get 10 truly unique answers, shaped by each person's unique human experience and limitations - if you ask one AI the same question 10 times, you will likely just get variations of the same answer, easily replicable by anyone with the same recipe. This creates a strange paradox: a tool that can simultaneously create anything you can imagine, but never exactly what you imagine - leaving hopeful artists to spend weeks trying to develop their secret prompt formula for the perfect AI face.

One possible solution to some of these issues is to create easier, open tools for training your own models, built and owned by the artist. If we can make it easier for an artist to truly build up a personalised AI companion, fed only the inspiration and ideas they wanted to pull from in the first place, we suddenly get a glimpse of a much healthier and more varied AI generation landscape. The moral and practical difficulties of building good and large enough datasets will have to wait for another topic, but let's dive into our case study.

These artworks are a conversation between us and the machine, but we approached them the same way we would any other - by asking how they should feel. We always use references in our work, building mood boards that visualise the energy and feel, finding our inspiration. As an individual you can perhaps rely on instinct or memory, but as a collective you need a shared vision to create a cohesive work. We set out to create abstract visual stories told through the lines and textures of nature and the flow of music. It was to be a visual dialogue with painter Heather Day, but seen through the eyes of an AI: an exploration into creatively manipulating the AI training process (using StyleGAN2) to synthesise a new visual language.

We used a method akin to visual alchemy: adding, mixing and layering multiple visual datasets, in different proportions, to art-direct the training of the neural network. The results were then fed back to the artists - a wide range of compositional ideas and abstract painterly snapshots from latent space, to explore how they could be used to create new forms and materials.
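The mixing step above can be sketched in code. This is a minimal, hypothetical illustration of blending datasets in chosen proportions before training - the function name, file names and ratios are our own inventions for this example, not the actual pipeline used for the artworks:

```python
import random

def mix_datasets(datasets, proportions, total, seed=0):
    """Blend several visual datasets into one training set.

    datasets:    dict mapping a dataset name to a list of image paths
    proportions: dict mapping the same names to mixing weights
                 (normalised internally, so they need not sum to 1)
    total:       desired number of images in the blended set
    """
    rng = random.Random(seed)
    weight_sum = sum(proportions.values())
    blend = []
    for name, items in datasets.items():
        share = round(total * proportions[name] / weight_sum)
        # Sample with replacement so a small dataset can still
        # fill a large share of the blend.
        blend.extend(rng.choices(items, k=share))
    rng.shuffle(blend)
    return blend

# Hypothetical example: weight painterly textures over nature photos.
datasets = {
    "paintings": [f"paint_{i:03}.png" for i in range(200)],
    "nature":    [f"nature_{i:03}.png" for i in range(500)],
}
blend = mix_datasets(datasets, {"paintings": 0.7, "nature": 0.3}, total=1000)
```

The resulting list of paths would then be copied into a single folder and handed to a StyleGAN2 training run; changing the proportions is the 'art direction' knob.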

The results are a delicate and unique free-flowing humanistic expression that we wouldn't naturally expect from an AI-based creative process. It is also an outcome we would not have arrived at without the aid of the neural network - something defined not wholly by ourselves but by the process. Artworks that stay true to ourselves, to the spirit of exploration, and to our influences.

To frame the big question, then: we did not ask any artists if we could use their work to help train our neural net. We did not use any of their art directly to create our final pieces, yet the work, although unique, is clearly inspired by theirs. If we had approached this as a painting, or even as a collage, cutting and pasting the work of others, there would be no question. Jackson Pollock did not ask permission when he was inspired by Janet Sobel in 1944, and Andy Warhol did not seek permission from the Campbell Soup Company. So should we hold these new tools to a higher standard? The camera did not replace the artist, although it did lessen their work - but it also created a whole new form of art. New technology will always displace, but maybe it should be our job as creatives to harness and explore this new capability. There is a clear need to be mindful of the potential for a creative monopoly when technology is involved, but art has always been an exchange of ideas, a discussion - and the machine has entered the chat.