Future generations might think that some of these Seedance 2.0 videos—unearthed from a long-buried flash drive—represent the state of the art in mid-20th- and early-21st-century cinema. Perhaps they won’t notice the strange movements, lack of blinks, stilted dialogue and fixation on hand-to-hand combat.
I hope not, but when you consider how quickly actual film stock decays, while digital content is nothing more than bits and bytes that can be stored indefinitely, this scenario isn’t all that far-fetched. Of course, that legacy would be an accident.
A silly little AI 1960s comedy short I made with Nano Banana and Seedance 2. pic.twitter.com/haKQuQVYqS (22 February 2026)
With its resemblance to the Douglas Sirk films of the mid-20th century, all saturated 1950s-style blues, the clip is almost enchanting. That is, if you can look past AI artifacts like a 12-piece band populated by a dozen duplicate musicians, or several restaurant patrons who look oddly alike.
There are other anomalies, and many of them are characteristic of the AI video slop produced on Seedance 2.0 and other platforms. Still, the sheer volume of Seedance 2.0 content is almost unprecedented. As I write this, social media is awash with short videos featuring countless characters, usually engaged in some sort of battle or an impossible crossover between franchises.
I’ve seen at least two Matrix rip-off videos featuring rematches between Neo and Agent Smith. There’s a video of Marvel’s Doctor Strange battling DC’s Superman, and another in which the cast of The Office meets Iron Man.
One match after another
A Matrix-level action scene used to cost $10M+ in the Hollywood industry. Now it’s done in 2 minutes. Seedance 2.0 on MartiniArt_ 🔥 pic.twitter.com/uL1Y3Wf1Iu (23 February 2026)
Time after AI time
In all cases, those who write the prompts and generate the videos seem unaware of, or unconcerned with, intellectual property rules and the objections raised by the actors they depict. Worse, the actor you know best for playing [fill in the blank] is forced to reprise the role (without consent) in these renegade videos.
It’s a big problem, to be sure, but I found myself drawn to two of the more original videos, ones that tried to tell new stories without tapping someone else’s IP.
I began to wonder how they were created, and how, even when the ideas and characters are fresh, the inevitable Seedance 2.0 quirks creep in.
Time Traveler (made with Seedance 2.0). I created this short time travel scene using Seedance 2.0 in just one day for under $200. pic.twitter.com/ImeoTh0vLe (22 February 2026)
It’s an attractive five-and-a-half-minute clip, but the AI weirdness just keeps piling up. For some reason it’s all “shot” Wes Anderson style, with each character framed dead center.
No one blinks, and emotion is either absent or delivered in odd tics, as when one of the characters seems to sniff his pen in panic.
As with much of the Seedance 2.0 content I’ve consumed, I noticed that the skin on most characters looks a bit plasticky at times. The effects can be good, but they tend to be repetitive. My guess is that Al-Ghaili, the video’s creator, generated them once and then reused the sequences.
My favorite part might be the robot, although, like so many things in this and other AI-generated videos, it is derivative.
All the images were made with this single image created in Nano Banana on @freepik – for a few images I took screengrabs from videos and brought them back to Nano Banana to create variations or to edit a bit. pic.twitter.com/F0EXvwKmbB (23 February 2026)
Another AI time, another AI place
Because as much as I dislike these videos and the consternation and anxiety they generate across multiple industries, I am fascinated by how they are made.
Many creators like to claim they made the work with a “simple prompt”, but I suspect they are being somewhat disingenuous.
I noticed in Christopher Gwinn’s post that he credited Nano Banana with some of the work in his “Silly little AI 1960s comedy short.” I needed to learn more, so I peppered him on social media with questions:
- Did you use a single prompt or several?
- Who wrote the dialogue?
- How much description did you provide to Seedance 2.0 to get the desired result?
- How did you get it to use the same “actors” across multiple scenes, and within the same scene?
More than just an invitation
Gwinn, who works in Hollywood as a digital creator, told me on X that he started with a single Nano Banana AI-generated image (above) that he built in Freepik. The picture, inspired by the films of French filmmaker Jacques Tati (famous for the Monsieur Hulot comedies, which he directed and starred in), was used to build out the entire Seedance 2.0 sequence.
While Gwinn usually writes his own dialogue, he took a different route with this short comedy: “I told Seedance what happened in the take. I only wrote a few lines myself—sometimes after it generated original dialogue, I’d tweak it a bit and run the prompt again,” he shared with me on Threads.
Gwinn also reused some characters across multiple shots. Once he had all the pieces, including the same couple dancing in multiple scenes, he cut and edited in traditional video editing software; he switches between Adobe Premiere and CapCut.
What Gwinn described to me was a process, and ultimately not too different from what a traditional filmmaker might follow. There are notable exceptions, such as the use of AI-generated humans instead of actors. Plus, for all Gwinn’s work, he can’t quite remove the funhouse-mirror feel from the finished piece.
There is something wrong
Sure, it might remind you of comedies from the 1950s, 1960s, or even 1970s, but it also feels slightly off. The slapstick doesn’t quite land because there’s almost no setup for each gag; we come in near the middle of every comic moment. It made me feel like I was watching a trailer for a middlebrow comedy that was trying too hard for laughs.
The other anomalies, such as physics that don’t quite work and bodies that sometimes move as if they had no bones, are evident across pretty much every Seedance 2.0 clip. However, given the rapid development of artificial intelligence, they will likely be solved within a few months.
I like understanding how these videos are made. It makes me feel a little better about the rapid development of this “art” to know that the digital creators behind it typically use far more than a single prompt to achieve the desired result.
However, I hope that in their quest to create ever more bizarre scenarios for Neo, Iron Man, Superman, Brad Pitt and Tom Cruise, they stop and think about how they can use these tools to create something new: art that can finally stand on its own.