Press Clipping
AI Under the Hood: Playform

In this regular insideBIGDATA feature we highlight our industry’s movers and shakers, companies that are pushing technology forward and setting trends for innovation. We look at companies with a focus on big data, data science, machine learning, AI and deep learning – some new, some old, always leading, always dynamic. We also take deep dives into new technology promoted (or hyped) as “AI” or, my favorite, “AI-powered” to provide transparency for what’s really going on under the hood. Watch this column for intimate coverage of some pretty cool firms doing some pretty exciting things. Enjoy the ride!

In this installment of “AI Under the Hood” I introduce the recently launched Playform (Artrendex Inc.), a generative AI collaborative tool for artists. The company’s tech publicist reached out to me around the time the global pandemic became serious, so it’s taken a while for me to write this review. But I was impressed with all the materials the company provided me for making a technology assessment. As a former researcher myself, I’m always excited when a company sends me a link to a paper written by a founder. Nice touch!

Dr. Ahmed Elgammal and his team’s latest paper, “Sketch-to-Art: Synthesizing Stylized Art Images From Sketches,” explains the GAN framework behind Playform’s “Sketch to Image” tool. Dr. Elgammal is also the founder and director of Rutgers University’s Art and Artificial Intelligence Lab. The tool itself synthesizes fully detailed, stylized images from simple sketches using three modules: a dual-masked mechanism, a feature-map transformation technique, and an inverse procedure of instance-normalization.

Playform utilizes GANs to enrich creativity rather than replace it. Playform is user-friendly software that allows artists to custom-train the AI on their own images and gives them a variety of ways to experiment with generative AI. Using one such tool, for example, Playform can take a simple sketch and transform it into a full-fledged image with color, texture, and stunning detail. It allows a designer to create a series of prototypes of a piece of clothing, say, then layer in other textures, colors, and images for a truly innovative result. The platform functions like an interactive mirror, reflecting back novel iterations that help creatives evolve their ideas.

You can find a deep dive into the Playform technology and how it compares to/evolves past other applications that synthesize images (SketchyGAN, GauGAN), HERE. For instance, unlike GauGAN, Playform’s AI model guesses the semantics of the composition rather than requiring the user to manually label each region. This automated semantic understanding is achieved through novel components in the Generator, Discriminator, and feature extractor.

Generative AI, in which algorithms create new images, texts, or sounds based on training on massive data sets, enriches human creativity without replacing it. Playform was built on ongoing artist input to ensure it served visual creatives in a way they could easily incorporate into their practice. “Playform is not a tool. It’s a creative soulmate to enhance artistic expression,” says Elgammal.

Behind Playform’s innovative new features like sketch-to-image lies a powerful AI engine trained on centuries of artworks representing a range of styles, cultures, and techniques. Crafted with an eye for art history and style, this data set allows the AI to identify, mimic, and completely transform a range of images and inputs. From a bare-bones sketch, for example, Playform can generate a novel landscape, portrait, or other type of image in a specific user-defined or historical style, taking its cues from Monet, Turner, Roerich, or one of many other artists or movements.

Elgammal and the Playform team worked with artist and instigator Devin Gharakhanian, who created abstract images from old photographs of Charlie Chaplin and who helped inspire Playform’s style morph feature. The portraits were displayed at SCOPE Art Fair in conjunction with Art Basel Miami, causing a buzz at the high-profile event and making history as the first human-AI generated artwork displayed there.

Qinza Najm, a Playform artist in residence, worked with Playform to create a process inspired by her own artworks, which explored abstract images based on the human body. The series that emerged from the collaboration with Playform was chosen for an exhibit about art and science at the National Museum of China in Beijing in November 2019; the exhibition drew one million visitors during its one-month run.

Along with groundbreaking works based on artists’ existing approaches, Playform has empowered conceptual explorations of what it means to be a mediated human and how we collide and merge with our digitally generated selves. NYU professor and artist Carla Gannis used Playform to create a series of works based on childhood memories for an avatar named C.A.R.L.A. G.A.N. She then monitored people’s responses to the works versus her own “human” works. In another experiment, Gannis used Playform to create visuals she incorporated into a larger VR-based project, which will be exhibited at Telematic in San Francisco in March 2020. Italian artist Domenico Barra developed a project called Affiliation, which explored storytelling via Instagram stories using works made with Playform.

Playform’s images can also be used as a foundation for works in other media. Artist Anne Spalter generated images using Playform, then executed them in pastels on canvas, drawing on AI’s peculiar ability to surface and blend unexpected elements. Spalter recently exhibited her Playform-based art at the Spring Break Art Fairs in LA and New York City.

From time-tested methods to bleeding edge technology, Playform is designed to nurture and provoke creative impulses. Then artists take the next steps to make art. “We always listen to artists and creative professionals to build AI that can be part of their daily process,” notes Elgammal. “With them as our guides, we want to spark new ways of seeing.”