Playform, an AI-driven “creative soulmate”, has a new addition: a ‘Sketch-to-Image’ tool that transforms simple sketches into full-fledged images with color, texture, and rich detail. Dr. Ahmed Elgammal explains more.
The new software was developed by Dr. Ahmed Elgammal and his Playform team at Rutgers’ Art and Artificial Intelligence Laboratory. According to Dr. Elgammal, unlike similar AI “art” generators that focus on a particular genre (like photo-realistic landscapes or headshots), Playform’s ‘Sketch-to-Image’ tool is able to synthesize imagery across multiple genres and styles.
The development of the AI is outlined in a white paper: "Sketch-to-Art: Synthesizing Stylized Art Images From Sketches."
The developer explains, in an interview with Digital Journal, how Playform’s generative AI model can “see” parts of the sketch and connect them accurately with style elements.
Digital Journal: How is digital art developing in the modern era?
Dr. Ahmed Elgammal: The market for digital artworks—which include anything that uses digital technology as part of the creative or presentation process—is flourishing. Prices for top works have tripled in the past five years and in October 2018 an AI artwork sold for $432,500 at Christie’s Auction House.
Yet, compared to the market for traditional artworks, like paintings, the digital art market is still small. This is primarily because the collector base is much smaller and questions around a work’s scarcity and provenance often arise. Blockchain will help here.
DJ: What was the idea behind Playform?
Elgammal: Playform is like your soulmate, your AI companion to expand your creative mind. Because AI is so new and often abstract, Playform allows creatives from every discipline to use AI for their work, without any coding skills. It's about experimentation, exploration and finding unexpected sources of inspiration. We think that Playform will be the tool that every creative uses in the future: a creative collaboration of human and machine.
We found that artists are largely left behind when it comes to AI. Using AI in the creative process requires a high level of technical skill that is only available to experts. Artists are faced with lots of technical terminology that they have to navigate. Artists also need access to computational resources such as GPUs and massive amounts of data. All of this makes it very hard for artists and creatives to be part of the AI revolution.
We built Playform to make AI accessible for artists. We want artists to be able to explore and experiment with AI as part of their own creative process, without worrying about AI terminology, or the need to navigate unguided through the vast ocean of AI algorithms.
DJ: How does Playform work?
Elgammal: Artists simply choose a creative process that matches their own practice and upload their inspiration images, or choose from a vast and growing library of image collections shared by other creatives. Artists then start the AI training process and, depending on the process, in seconds or minutes they will start seeing AI-generated images evolving based on their inputs. In the last six months, Playform users have created over 25 million images based on their own inspiration. Some artists used Playform as a means of looking for inspiration based on AI's uncanny aesthetics.
Other artists fed in images of their own artworks, training models that learn their own style, and then used these models to generate new artworks based on new inspirations. Virtual reality artists used AI to generate digital assets to be integrated into virtual reality experiences. Several artists used Playform to generate imagery that was used in making videos. Playform was also used to generate works that were upscaled and printed as final art products. Several artists have shown artworks that they created using Playform in exhibitions.
DJ: What is the key contribution of artificial intelligence, specifically?
Elgammal: Most generative-AI algorithms are developed by AI researchers in academia and big corporate research labs to push the boundaries of the technology. Artists and creatives are not typically the target audience when these algorithms are developed. The use of these algorithms as part of an artist's work is an act of creativity by the artist, who has to be imaginative in how to bend, adopt, and utilize such non-specialized tools for their purpose. In contrast, at Playform we focused on how to build AI that can fit the creative process of different artists, from the stage of looking for inspiration, to preparing assets, all the way to producing final works.
One example of that is our newest feature, sketch-to-image, which allows artists to take control by sketching their own composition and choosing or plugging in any style they like, while an AI algorithm in the background immediately renders fully colored images based on their inputs.
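To make that input/output contract concrete, here is a minimal, hypothetical PyTorch sketch of a conditional generator that maps a monochrome sketch plus a style vector to a colored image. The class, layer sizes, and style conditioning are placeholder assumptions for illustration only; this is not Playform's actual model or the architecture described in the white paper.

```python
import torch
import torch.nn as nn

class SketchToImageGenerator(nn.Module):
    """Toy conditional generator: sketch + style vector -> RGB image (illustrative only)."""
    def __init__(self, style_dim: int = 64):
        super().__init__()
        # Encode the 1-channel sketch into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # Project the style vector so it can be added to the feature map.
        self.style_proj = nn.Linear(style_dim, 64)
        # Decode back to a 3-channel (RGB) image in [-1, 1].
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, sketch: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(sketch)                  # (B, 64, H/4, W/4)
        s = self.style_proj(style)[:, :, None, None]  # (B, 64, 1, 1), broadcast over the map
        return self.decoder(feats + s)                # (B, 3, H, W)

# Usage: one 256x256 sketch plus one style vector -> one colored image.
sketch = torch.rand(1, 1, 256, 256)   # placeholder sketch
style = torch.randn(1, 64)            # placeholder style embedding
image = SketchToImageGenerator()(sketch, style)
print(image.shape)                    # torch.Size([1, 3, 256, 256])
```

In a sketch like this, swapping the style vector re-renders the same composition in a different style, which mirrors the "choose or plug in any style" behavior the interview describes.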
DJ: What were the main challenges in the development of Playform?
Elgammal: On the AI research and development side, our team had to address the problem that AI typically requires a large number of images and long hours of training. We developed novel, optimized versions of generative AI methods that can be trained with only tens of images instead of thousands, and that can produce reasonable results in a matter of one or two hours.
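As a rough, hypothetical illustration of the "tens of images, short training" point, the toy PyTorch loop below fits a small stand-in network to a handful of placeholder images in a few hundred steps. The model, loss, and hyperparameters are illustrative assumptions only and do not represent Playform's optimized generative methods.

```python
import torch
import torch.nn as nn

# Placeholder for the ~20 inspiration images a user might upload (3x128x128, values in [0, 1]).
user_images = torch.rand(20, 3, 128, 128)

# Tiny stand-in model; a real system would fine-tune a much larger generative network.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1), nn.Sigmoid(),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Short training run: with only a few images, a few hundred steps finish quickly.
for step in range(200):
    recon = model(user_images)          # reconstruct the user's images
    loss = loss_fn(recon, user_images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        print(f"step {step}: loss {loss.item():.4f}")
```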
On the design side, we focused on making the user experience intuitive and free of AI jargon. All the AI is hidden under the hood. Users choose a creative process, upload their own images, and press a button to start training. Within minutes, results start to pop up and evolve as the training continues. Within an hour, or a bit more, the process is done and you have already generated thousands of images. Users can navigate through all iterations to find their favorite results. Users can also continue the training process as needed to achieve better results.
DJ: How does Playform differ from competitor products?
Elgammal: There are many AI companies, but we are the only one dedicated solely to creatives. That means we trained our models on artworks from different epochs, designs, architectural structures and many other creative outputs.
We are the only platform that allows creatives to train AI models from scratch, based on their own images, even with just a handful of images, and to get results in a matter of minutes.
DJ: Are there any notable examples of Playform-generated art in the media?
Elgammal: There are many examples. Qinza Najm, a Playform artist in residence, worked with Playform to create a process inspired by her own artworks that explored abstract images based on the human body. The series that emerged from the collaboration with Playform was chosen for an exhibit about art and science at the National Museum of China in Beijing in November 2019, with 1 million visitors to the exhibition during its one-month run.
Along with groundbreaking works based on artists’ existing approaches, Playform has empowered conceptual explorations of what it means to be a mediated human and how we collide and merge with our digitally generated selves. NYU professor and artist Carla Gannis used Playform to create a series of works based on childhood memories for an avatar named C.A.R.L.A. G.A.N. She then monitored people’s responses to the works versus her own “human” works. In another experiment, Gannis used Playform to create visuals she incorporated into a larger VR-based project, which will be exhibited at Telematic in San Francisco in March 2020. Italian artist Domenico Barra developed a project called Affiliation, which explored storytelling via Instagram stories using works made with Playform.
Playform’s images can also be used as a foundation for works in other media. Artist Anne Spalter generated images using Playform, then executed them in pastels on canvas, drawing on AI’s peculiar ability to surface and blend unexpected elements. Spalter recently exhibited her Playform-based art at the Spring Break Art Fairs in LA and New York City.
In the past, we also collaborated with partners like the London Contemporary Orchestra to create AI visuals that react to music, which accompanied their live performance at the Barbican Centre in October 2018. The HBO show Silicon Valley also used one of the AI art pieces we created in an episode that aired in April 2018.
Read more: http://www.digitaljournal.com/tech-and-science/technology/q-a-playform-s-ai-model-turns-sketches-into-artworks/article/569903#ixzz6J2aEVKEk