WRITTEN BY MediaMonks
Chances are, you’ve seen the meme before: “I forced a bot to watch over 1,000 hours of [TV show] and then asked it to write an episode of its own. Here is the first page,” followed by a nonsensical script. These memes are funny precisely because of their surreal, unintelligible output, but in the past couple of years AI has improved enough to produce some genuinely impressive work, like OpenAI’s language models, which can write coherent text and answer reading comprehension questions.
AI has picked up a handful of creative talents: making original music in the style of famous artists or turning your selfie into a classical portrait, to name a few. While these experiments are very impressive, they’re often toy examples designed to demonstrate how well (or poorly) an artificial intelligence stacks up to human creativity. They’re fun, but not very practical for day-to-day use by creatives. This led our R&D team, MediaMonks Labs, to consider how tools like these would actually function within a MediaMonks project.
This question fueled two years of experimentation and neural network training for the Labs team, who built a series of machine learning-enhanced music video animations that demonstrate true creative symbiosis between humans and machines: a 3D human figure performs a dance developed entirely by, or in collaboration with, artificial intelligence.
The Simulation Series was built out of a desire to let humans take a more active approach to working creatively with AI, controlling the output by either stitching together AI-created dance moves or by shooting and editing the digital performance to their liking. This means you don’t have to be a pro at animation (or choreography) to make an impressive video; simply let the machine render a series of dance clips based on an audio track and edit the output to your liking.
“Once I had the animations I liked, I could put them in Unity and shoot them from the camera angles that I wanted, or rapidly change the entire art direction,” says Samuel Snider-Held, a Creative Technologist at MediaMonks who led development of the machine learning agent. “That was when I felt like all these ideas were coming together, that you can use the machine learning agent to try out a lot of different dances over and over and then have a lot of control over the final output.” Snider-Held says it takes about an hour for the agent to generate 20 different dances—far outpacing the time it would take a human to design and render the same volume.
Snider-Held isn’t an animator, but his tool gives anyone the opportunity to organically create, shoot and edit their own unique video with nothing but a source song and Unity. He jokes: “I spent two years researching the best machine learning approaches geared toward animation. If I had spent those two years learning animation instead, would I be at the same level?” It’s tough to say, though Snider-Held and the Labs team have accomplished much over those two years of exhaustive, iterative development—from filling virtual landscapes with AI-designed vegetation to more rudimentary forms of AI-generated dance—in pursuit of human-machine collaboration.
Even though the tool fulfills the role of an animator, the AI isn’t meant to replace anyone—rather, it aims to augment creatives’ abilities and help them do their work even better, much like how Adobe Creative Cloud eases the process of design and image editing. Creative machines help us think through and explore vast creative possibilities in less time.
It’s within this process of developing the nuts and bolts that AI can be most helpful, laying a groundwork that provides creatives a series of options to refine and perfect. “We want to focus on the intermediate step where the neural network isn’t doing the whole thing in one go,” Snider-Held says. “We want the composition and blocking, and then we can stylize it how we want.”
“The tool’s glitchy aesthetic sells the ‘otherness’ to it. It doesn’t just enhance your productivity, it can enhance the limits of your imagination.”
It’s easy to see how AI’s ability to generate a high volume of work could help a team take on projects that otherwise didn’t seem feasible at cost and scale—like generating a massive amount of hand-drawn illustrations in a short turnaround. But when it comes to neural network-enhanced creativity, Snider-Held is more excited about exploring an entirely new creative genre that perhaps couldn’t exist without machines.
“It’s like a reverse Turing test,” he says, referencing the famous test by computer scientist Alan Turing in which an interrogator must guess whether their conversation partner is human or machine. “The tool’s glitchy aesthetic sells the ‘otherness’ to it. It doesn’t just enhance your productivity, it can enhance the limits of your imagination. With AI, we can create new aesthetics that you couldn’t create otherwise, and paired with a really experimental client, we can do amazing things.”
Google’s NSynth Super is a good example of how machine learning can offer something creatively unprecedented: the synthesizer combines source sounds into entirely new ones that humans have never heard before. Likewise, artificial intelligence tools that automatically render an AI-choreographed dance can unlock surreal new creative possibilities that a traditional director or animator likely wouldn’t have envisioned.
In the spirit of collaboration, it will be interesting to see what humans and machines create together in the near and distant future—and how it will further transform the way creative teams function. But for now, we’ll enjoy watching humans and their AI collaborators dance together in virtual harmony.
See what else MediaMonks Labs is cooking up.