I turned doodles into video clips in minutes with Runway's new AI video tool - here's how

AI transformed doodles of birds into scary winged creatures.

A family of settlers, admiring a herd of bison, breaks into a run as two demonic winged creatures fly overhead. The little girl passes through a fencepost like a ghost.

This surreal five-second scene was created thanks to a new feature within AI platform Runway called Motion Sketch. As the name suggests, it generates videos from simple doodles that you can make directly on top of a still image. 

In the case of the clip below, I started by generating an image using Google's Nano Banana Pro -- one of the text-to-image models offered through Runway's platform -- then simply drew a smattering of bird-like shapes and two arrows pointing in the direction in which the family and the bison should flee. I added some text prompts for a little extra guidance, and the model did the rest.

Runway interpreted these doodles to animate a short video.

This video, while impressive, still has some flaws -- the little girl runs through the fence.

The feature still needs some hand-holding, in much the same way that ChatGPT often requires multiple finely crafted prompts to generate the output you're looking for. All the same, it could mark a big step forward for AI-generated video, and for professionals in creative industries who prefer to think more visually than verbally. 

"The feature is helpful for all types of users," Runway Product Manager Aditi Poduval told ZDNET. "Ultimately, we're helping people get precise movement without needing to craft a specific written prompt."

Here's what I've learned after dabbling with Motion Sketch.

After my initial experiment with the image of the bedraggled settler family, I decided to up the sketchiness, so to speak.

I'd generated the image of the family using a text prompt. But Nano Banana Pro via Runway can also create an image based solely on a doodle, which you can make directly on the platform à la Microsoft Paint. I'm a writer, not a visual artist, but I did my best to sketch out what I hoped looked vaguely like a snake hanging from the branch of a tree.

Then, I turned to Motion Sketch. As mentioned, I'd met the model halfway in the settler family experiment by adding some text prompts in addition to my doodles, but this time I wanted to see how well it would perform without any written guidance. I hoped to generate a clip of the snake slithering to the right, so I drew some simple motion-directing arrows along the length of its body. Any six-year-old would be able to grasp what those arrows mean, but how would AI perform? I set the length to 10 seconds, selected Runway's Gen-4.5 video model (Google's Veo 3 and Veo 3.1 are also available), and clicked Generate.

I was expecting a relatively smooth clip of the snake moving along the branch and exiting the bottom-right of the shot, but what I got was much trippier. The snake left behind part of its body to the left of the shot, and another snake emerged from its body and fell out of the tree. At one point, it appeared to grow alligator-like feet, which it used to pull itself along the branch. The original arrows I'd drawn also briefly appeared at the beginning of the clip before disintegrating.

Admittedly, Runway had some challenges with the snake sketch.

To be fair, snakes are a biological marvel, and accurately modeling their movement is no small task for any animator -- human or machine. And the model seemed to misinterpret my multiple arrows as an instruction for the snake to move in multiple directions simultaneously, which clearly confused it. So I went back and added a simple text prompt to my original image with the arrows: "The snake slithers along the branch."

The final product this time was a bit closer to what I'd had in mind (no sudden duplication of snake bodies or sprouting of legs). The motion still looked awkward, but again, snake movement is a fairly tall ask, with lots of complicated fine-grained physics involved. 

A sketch of flames turned into a full bonfire in Prospect Park.

One more experiment: I wanted to see how well the model could work with actual photos, so I uploaded one from my laptop: a shot I'd taken some months earlier of Prospect Park in Brooklyn.

I drew what's effectively a universal icon for fire: wavy red lines inlaid with orange and radiating upwards. Again, I make no claim to be a gifted illustrator, but most humans would be able to recognize what I was going for with this. Runway's model picked up on it too, and dropped a massive bonfire into the center of the tranquil lakeside scene captured in my photo. Again, my sketch briefly appeared in the clip before disappearing in a psychedelic motion of gradually shortening wiggly lines. And maybe I'm imagining it, but the trees next to the bonfire even seemed to be blackening under its heat.

Like any other generative AI tool, Motion Sketch has its imperfections and glitches (the little girl passing through a wooden beam, the snake sprouting a leg, the wavy lines of my original sketches briefly appearing in the final video clips). Still, I'm impressed.

Here is a glimpse of an entirely new format for generating video with AI, one that could lower the barrier between imagination and production more than ever before. "THIS is what I've been waiting for," one X user posted beneath Runway's Tuesday announcement. "Drawing ur vision then watching it render as video? Absolute game changer for creators who can't prompt engineer their whole life."

Right now, it still requires a fair bit of trial-and-error if you have a specific vision in mind, but all the same, it's an exciting preview of what the future might hold. And it's definitely worth playing around with, even -- and perhaps especially -- if you're just going into it in the spirit of creativity, with no clear-cut vision in mind of what you'd actually like to produce.

Motion Sketch requires (at minimum) a Standard subscription to Runway, which costs $12 per user per month and comes with 625 monthly credits.

To access the feature from your user dashboard, click on "App" in the left-hand menu, then select Motion Sketch either beneath the "Explore Gen-4.5" collection, or beneath the preview window on the right half of the screen, where you'll see a button labeled "Try it now."

From there, just upload your image and click "Sketch" -> "Export sketch" -> "Generate."