I will be participating in a three-artist discussion panel at CogX 2020 on the future of creativity and art in the age of AI, VR, and AR. I will be joined by Jonathan Yeo, contemporary portrait painter, and Patrick Morgan, an artist exploring the boundaries of VR/AR in his work. The conversation will be wide-ranging, but focused around how these emerging technologies might enable new creative horizons for artists in the coming decades.
My webinar, hosted by Nvidia, is now available on-demand. I talk about my latest work using AI (machine learning) as a “creative collaborator” in my artistic process. I show an eclectic range of successes, failures, and just crazy experiments that I’ve done over the last few years while exploring the capabilities and limitations of this emerging technology.
Free to register and watch.
I’m just back from a trip to Oxford where I gave an afternoon seminar on my explorations with art and machine learning to an amazing group of researchers and distinguished professors at Oxford University. The talk reviewed the last three years of my artistic experimentation using machine learning (AI) as a creative tool for art making and showcased an eclectic range of successes and failures. I’ve been sharing some of the work here, but mostly finished pieces, so it was nice to dust off some of my earlier, formative experiments from the recesses of my hard drive. I’ll try to start uploading more of these here in the coming months. Even though they’re unfinished, they were important developmental milestones and each succeeded/failed in interesting, instructive ways. Stay tuned…
Special thanks to Prof. Alexei Efros for arranging the visit. It was great to chat ML, tech and art with a group of super-smart computer scientists, engineers, and thinkers.
crisp winter day in Oxford
I was privileged to be invited to speak again at the THU conference in Malta, this time on the main stage talking about my explorations using machine learning (AI) as a ‘creative collaborator’ in my artistic process. The talk, weighing in at a hefty 75 minutes, explored the genesis of this body of work, my early steps (and missteps) in this emerging medium, and how I’ve started integrating it into my artistic practice.
The talk included a behind-the-scenes look at the inspiration, production and labour that went into the pieces for the Artist+AI: Figures and Form exhibition. I was also excited to show, for the first time, a number of my fun, early experiments that compelled me to dig deeper into the potential of these new tools.
People who know me know that drawing is essential to my creative process. Over the last couple of years I have been using part of my morning drawing time (yes, with a coffee… or two, or three) to create input drawings to test my Bodies neural network, which I trained on a portion of my BodiesinMotion.photo library.
The idea behind this “AI tool” is that I train it to learn the correspondence between my drawing style and photographic representations of the human figure, in this case photographs carefully lit and shot by me in the studio. Then, once trained, I can use it to dynamically ‘paint’ my drawings in the style of my photography. It is a wondrous interaction, and there is a magical space where I can draw very stylized or abstracted figures and the neural network infers some very interesting anatomical results, always beautifully lit and shaded. The images here are from my wall of Caffeinated Diversions, fifty of the most interesting results from these morning experiments. The grey line drawings are my hand-drawn inputs, the coloured images the output of my Bodies network.
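For the technically curious: the kind of paired drawing-to-photo translation described above can be sketched, in grossly simplified form, with a small encoder-decoder network trained on aligned (drawing, photograph) pairs. This is only an illustrative toy in the spirit of pix2pix-style image-to-image translation, not the actual Bodies network; every name, shape, and architecture choice below is an assumption for the sake of the sketch.

```python
# Minimal sketch of paired image-to-image translation: a tiny
# encoder-decoder learns a mapping from 1-channel line drawings to
# RGB figure renders, given aligned training pairs.
# Illustrative only -- not the actual Bodies network or its training data.
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Grossly simplified stand-in for a U-Net style generator."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1),   # drawing in, 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),   # RGB out, 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stand-in training batch: random 64x64 "drawings" and target "photos".
drawings = torch.rand(4, 1, 64, 64)
photos = torch.rand(4, 3, 64, 64)

model = TinyTranslator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # pix2pix pairs an L1 term with an adversarial loss

for step in range(5):  # a real model trains for many thousands of steps
    optimizer.zero_grad()
    loss = loss_fn(model(drawings), photos)
    loss.backward()
    optimizer.step()

# Once trained, a new drawing can be "painted" in the learned style:
painted = model(torch.rand(1, 1, 64, 64))
print(painted.shape)  # torch.Size([1, 3, 64, 64])
```

The appeal of this setup, and presumably part of what makes the real interaction feel magical, is that the network generalizes: it was trained on one distribution of drawings but will still venture an anatomical interpretation of stylized or abstracted inputs it has never seen.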
Beyond just the final images though, a large part of the magic that has captivated me when developing and using these AI tools is seeing the final image emerge as I draw it. Here is a compilation of timelapses from these drawing sessions:
A new sculpture which debuted at my Artist+AI: Figures and Form exhibition. This bronze (as with all works in the show) was created in collaboration with AI tools that I’ve trained as my ‘art assistants,’ in this instance one that translates my drawings into three-dimensional form.
I created this piece by drawing a ‘blueprint’, effectively the instruction set, which directs the AI to build volume, planes and edges in a certain way (based on the way that I originally trained the network, which is a sort of alchemy itself). Below you can see a side-by-side comparison of the ‘blueprint’ and the final sculpture. A video showing the process of creating the final bronze casting can be found HERE.
EXHIBITION OF WORK
19-23 June, 2019
Somerset House, New Wing, room G16
My new exhibition showcasing work created in ‘collaboration’ with AI is running from the 19th to the 23rd of June at Somerset House in London. It is a free but ticketed event, so you will need to book in advance. Please get your tickets HERE.
“This exhibition showcases the recent work of artist Scott Eaton combining the latest in generative artificial intelligence (AI) with the centuries old practices of drawing and sculpture. The show’s featured works are the result of the dynamic interaction between Scott’s traditionally-trained hand and the AI tools he has ‘taught’ to work as his assistants. In this show, Eaton, an interdisciplinary artist with backgrounds in sculpture, anatomy and design, underscores the impact AI is set to have on the art-making process.”
Behind the scenes – prepping a piece for my “Artist+AI: Figures & Form” show opening next week at Somerset House. Suffice it to say, this will be a LARGE composition (22,000 x 17,000 pixels!). My drawing hand aches.
The show is free and runs 19-23 June, so squeeze in a visit to Somerset House if you are near London.