I will be participating in a three-artist discussion panel at CogX 2020 on the future of creativity and art in the age of AI, VR, and AR. I will be joined by Jonathan Yeo, contemporary portrait painter, and Patrick Morgan, an artist exploring the boundaries of VR/AR in his work. The conversation will be wide-ranging, but focused on how these emerging technologies might enable new creative horizons for artists in the coming decades.
My webinar, hosted by Nvidia, is now available on-demand. I talk about my latest work using AI (machine learning) as a “creative collaborator” in my artistic process. I show an eclectic range of successes, failures, and just crazy experiments that I’ve done over the last few years while exploring the capabilities and limitations of this emerging technology.
Free to register and watch.
I’m just back from a trip to Oxford where I gave an afternoon seminar on my explorations with art and machine learning to an amazing group of researchers and distinguished professors at Oxford University. The talk reviewed the last three years of my artistic experimentation using machine learning (AI) as a creative tool for art making and showcased an eclectic range of successes and failures. I’ve been sharing some of the work here, but mostly finished pieces, so it was nice to dust off some of my earlier, formative experiments from the recesses of my hard drive. I’ll try to start uploading more of these here in the coming months. Even though they’re unfinished, they were important developmental milestones and each succeeded/failed in interesting, instructive ways. Stay tuned…
Special thanks to Prof. Alexei Efros for arranging the visit. It was great to chat ML, tech and art with a group of super-smart computer scientists, engineers, and thinkers.
crisp winter day in Oxford
I was privileged to be invited to speak again at the THU conference in Malta, this time on the main stage talking about my explorations using machine learning (AI) as a ‘creative collaborator’ in my artistic process. The talk, weighing in at a hefty 75 minutes, explored the genesis of this body of work, my early steps (and missteps) in this emerging medium, and how I’ve started integrating it into my artistic practice.
The talk included a behind-the-scenes look at the inspiration, production and labour that went into the pieces for the Artist+AI: Figures and Form exhibition. I was also excited to show, for the first time, a number of my fun, early experiments that compelled me to dig deeper into the potential of these new tools.
People who know me know that drawing is essential to my creative process. Over the last couple of years I have been using part of my morning drawing time (yes, with a coffee… or two, or three) to create input drawings to test my Bodies neural network, which I trained on a portion of my BodiesinMotion.photo library.
The idea behind this “AI tool” is that I train it to learn the correspondence between my drawing style and photographic representations of the human figure, in this case photography carefully lit and shot by me in the studio. Then, once trained, I can use it to dynamically ‘paint’ my drawings in the style of my photography. It is a wondrous interaction, and there is a magical space where I can draw very stylized or abstracted figures and the neural network infers some very interesting anatomical results, always beautifully lit and shaded. The images here are from my wall of Caffeinated Diversions, fifty of the most interesting results from these morning experiments. The grey line drawings are my hand-drawn inputs, the coloured images the output of my Bodies network.
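For readers curious what "learning the correspondence" looks like in practice, here is a minimal, hypothetical sketch of a paired image-to-image translation setup of the kind described above: a network is trained on (drawing, photograph) pairs, then used to "paint" a new drawing. The architecture, names, and pixel-wise L1 loss are illustrative assumptions, not the actual Bodies network, whose details aren't specified here (comparable published systems, such as pix2pix, add an adversarial loss on top).

```python
import torch
import torch.nn as nn

class DrawingToPhoto(nn.Module):
    """Tiny encoder-decoder stand-in for a drawing-to-photo translator.
    A real model would be far deeper (e.g. a U-Net), trained on many pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),    # grey line drawing in
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(), # RGB "photograph" out
        )

    def forward(self, x):
        return self.net(x)

model = DrawingToPhoto()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # simple pixel-wise loss for this sketch

# One training step on a dummy (drawing, photo) pair.
drawing = torch.rand(1, 1, 64, 64)  # hand-drawn input, single grey channel
photo = torch.rand(1, 3, 64, 64)    # corresponding studio photograph
loss = loss_fn(model(drawing), photo)
opt.zero_grad()
loss.backward()
opt.step()

# Once trained, a new drawing is "painted" in the style of the photos.
with torch.no_grad():
    painted = model(torch.rand(1, 1, 64, 64))
print(painted.shape)  # a 3-channel image the same size as the input drawing
```

The key point the sketch illustrates is the paired-data idea: the network never sees a rule for anatomy or lighting, it only sees drawings alongside their photographic counterparts and infers the mapping itself.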
Beyond just the final images though, a large part of the magic that has captivated me when developing and using these AI tools is seeing the final image emerge as I draw it. Here is a compilation of timelapses from these drawing sessions: