In the first thirty seconds of director and artist Paul Trillo’s short film Thank You for Not Responding, a woman stares out the window of a seemingly submerged subway car. A man appears in the window, swimming toward the car, his body materializing out of the darkness and the swirling water. It’s a terrifyingly claustrophobic and violent scene that would have required hundreds of thousands of dollars in props and special effects to shoot, but Trillo generated it in minutes using an experimental toolkit created by an artificial-intelligence company called Runway. At first glance, the characters in the film look real, as though they were played by people who might actually be underwater. A second look reveals the eeriness of their hollow eyes, outstretched limbs, and puffy features. The surreal aesthetic of AI-generated video may rely on models trained on live-action footage, but the results “feel closer to a dream,” Trillo said. “It’s almost like closing your eyes and trying to remember something,” he continued.
Thank You for Not Responding evokes loneliness and longing. A reedy narration by an AI-generated voice model, trained on Harry Dean Stanton’s monologue from the film Paris, Texas, reads a script written by Trillo, like an answering-machine message mourning lost possibilities, perhaps the ruined memory of a relationship. “Someday, all of our lives will come up, and all those what-ifs will still haunt me,” the eerie voice rambles over the film’s two and a half minutes. Trillo wrote the script at the height of the pandemic, at a moment of total disconnection, and set it on top of a cascade of AI imagery: flashes of flooded subway cars, phone booths in the desert, elegantly dressed people at a party, apartments lit up at night. The atmosphere is part Edward Hopper and part David Lynch, two of Trillo’s inspirations for the film.
Paul Trillo used an AI tool called Stable Diffusion to generate the images, which he fed into another tool called Runway to establish the cinematic aesthetic. Artwork by Paul Trillo, generated with Automatic1111 Stable Diffusion.
To create the clips, Trillo first used Stable Diffusion, an AI tool co-developed by the Runway team, to generate still images suggestive of the scenes he had in mind. Much like DALL-E, another image generator, Stable Diffusion takes a text prompt describing the content you want in the image, plus adjectives to clarify its aesthetic. These stills work as concept art, or storyboards. Trillo then entered paragraph-length text prompts into Runway, one for each clip, describing the generated images along with the motion and animation he wanted within the video, such as suggested camera movements. Runway churns away and spits out short clips that loosely echo the images and prompts.
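For a concrete sense of the text-to-image step described above, here is a minimal sketch using the open-source diffusers library with a publicly released Stable Diffusion checkpoint. It illustrates the general technique only; it is not Trillo’s actual toolchain, and the prompt and output filename are hypothetical stand-ins.

```python
# A minimal sketch of the text-to-image step, using the open-source
# diffusers library. Illustrative only: this is not Trillo's actual
# toolchain, and the prompt and filename are hypothetical stand-ins.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The prompt pairs scene content with aesthetic adjectives, as described above.
prompt = (
    "a flooded subway car at night, a woman staring out the window, "
    "moody lighting, cinematic, visually stunning"
)

# Generate one still image to serve as concept art or a storyboard frame.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("storyboard_frame.png")
```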
Trillo knitted together many of the resulting clips, each requiring multiple permutations, to create the final short film. He demonstrated the process for me over Zoom: a tracking shot of a woman crying alone in a softly lit restaurant, for example, rendered in seconds. His prompt included SEO-esque terminology intended specifically to signal a cinematic aesthetic to the machine: “moody lighting, iconic, visually stunning, immersive, impactful.” Trillo was fascinated by the process. “The speed at which I was able to handle it was something I had never experienced before,” he said, adding, “I felt like I could fly in my dreams.” The AI tool was “co-directing” with him, he said, and “makes a lot of decisions I couldn’t make.”
AI imagery has its drawbacks: human faces tend to warp, hands remain unwieldy, and natural body movement is hard to render. But the film uses these flaws to its advantage. It doesn’t matter that the scenes don’t look completely realistic; their atmospheric quality makes them more haunting and deepens the sadness of the narration. The film never achieves photorealism, which wouldn’t suit the material anyway, but it comes close enough to be mistaken for the real thing for a second. (“The more shadows, the more believable,” Trillo said.) Trillo reasoned that directors would want AI to do what traditional tools, whether special effects or cinematography, cannot, rather than to reproduce what can already be filmed with real cameras and performers. Using Runway allowed him to visualize otherwise unfilmable scenes, such as the flooded subway car. “It’s very good with fire, explosions and water. The more organic the better,” he said. (Runway developed its own in-house video-generation model but has not revealed what data the model was trained on.)
AI imagery has its flaws, but the film uses them to its advantage. Artwork by Paul Trillo, generated with Automatic1111 Stable Diffusion.
AI is now part of Trillo’s filmmaking palette; he has also used it to generate animated backgrounds for commercials for GoFundMe. The barrier to creating visual effects has been dramatically lowered. Runway’s co-founder and CEO, Cristóbal Valenzuela, called it a fundamental change. “Content creation costs are going to zero. Most of the content you consume, watch, and create will be generated,” he told me. He envisions a fundamental shift in how films are made and received. If anyone could create a realistic explosion just by typing text into a box, an explosion on screen wouldn’t be so remarkable. Filmmaking would become “a job of curation and editing, iterating and picking what’s more interesting.”
The term “AI-generated film” is a bit of a misnomer. Trillo, as director, wrote the script, built the visual aesthetic, decided which scenes to create, selected from Runway’s results, and then edited the clips into a threaded, thematically consistent finished product. Generative tools supplied the media, the voices, faces, landscapes, and animation, but human creativity is still present at every stage of the process. Trillo doesn’t feel that using AI has made him obsolete. “I’m not interested in this replacing anything,” he said.