I always enjoy a chance to mess with AI video generators. Even when they're terrible, they can be entertaining, and when they pull it off, they can be amazing. So I was eager to play with Runway's new Gen-4 model.
The company boasts that Gen-4 (and its smaller, faster sibling, Gen-4 Turbo) surpasses the previous Gen-3 model in quality and consistency. Gen-4 reportedly nails the idea that characters can and should look like themselves between scenes, along with delivering more fluid movement and improved environmental physics.
It's also supposed to be remarkably good at following directions. You give it a visual reference and a descriptive text prompt, and it produces a video close to what you imagined. In fact, it sounded a lot like how OpenAI promotes its own AI video creator, Sora.
Although the videos Sora makes are often beautiful, they are also sometimes unreliable in quality. One scene can be perfect, while the next has characters floating like ghosts or doors leading nowhere.
Movie magic
Runway billed Gen-4 as a kind of video magic, so I decided to test it with that in mind and see if I could make videos telling the story of a wizard. I cooked up a few ideas for a little fantasy trilogy starring a wandering wizard. I wanted the wizard to meet an elven princess and then chase her through magical portals. Then, when he meets her again, she is disguised as a magical animal, and he transforms her back into a princess.
The goal was not to create a blockbuster. I just wanted to see how far Gen-4 could stretch with minimal input. Since I had no photos of real wizards, I took advantage of the newly upgraded ChatGPT image generator to create compelling still images. Sora may not be blowing Hollywood away, but I can't deny the quality of some of the images ChatGPT produces. I made the first video, then used Runway's option to fix a seed so the characters would look consistent across the videos. I stitched the three videos into a single movie below, with a short break between each.
AI Cinema
You can see it's not perfect. There are some odd object movements, and the character consistency isn't flawless. Some background elements shimmered strangely, and I wouldn't put these clips on a theater screen yet. Still, the characters' actual movement, expressions, and emotions felt surprisingly real.
And I liked the iteration options, which didn't overwhelm me with too many manual settings but gave me enough control that it felt like I was actively involved in the creation, not just pressing a button and hoping for coherence.
Will it take down Sora and OpenAI's many professional filmmaker partners? No, certainly not right now. But if I were an amateur filmmaker, I would at least experiment with it as a relatively cheap way to see what some of my ideas could look like, at least before spending lots of money on the people needed to actually make a movie look and feel as powerful as my vision for it.
And if I got comfortable enough with it, and good enough at coaxing what I wanted out of the AI every time, I might not even think about using Sora. You don't have to be a wizard to see that this is the spell Runway hopes to cast on its potential user base.