- AI-generated videos often lose coherence over time due to a problem called drift
- Models trained on flawless data struggle when dealing with imperfect real-world input
- EPFL researchers developed retraining by error recycling to limit progressive degradation
AI-generated videos often lose coherence as sequences grow longer, a problem known as drift.
This problem occurs because each new frame is generated based on the previous one, so any small error, such as a distorted object or slightly blurred face, is amplified over time.
AI models trained exclusively on ideal datasets struggle to handle imperfect input, which is why generated videos usually become unrealistic after a few seconds.
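To see why small errors snowball, consider a toy Python sketch of this feedback loop; it is an illustration of the compounding effect, not EPFL's model, and every name in it is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_next_frame(prev_frame: np.ndarray) -> np.ndarray:
    """Stand-in for an autoregressive video model: it predicts the next
    frame from the previous one, but with a small prediction error."""
    return prev_frame + rng.normal(0.0, 0.01, size=prev_frame.shape)

frame = np.zeros((64, 64))  # a clean starting frame
for step in range(1, 301):
    # The model is conditioned on its own imperfect output, so every
    # earlier error is baked into the input of the next step.
    frame = generate_next_frame(frame)
    if step % 100 == 0:
        print(f"step {step}: mean absolute drift = {np.abs(frame).mean():.4f}")
```

The printed deviation keeps growing even though each individual step is only slightly wrong, which is exactly the compounding drift described above.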
Recycling errors to improve AI performance
Generating videos that maintain logical continuity for extended periods of time remains a major challenge in the field.
Now researchers at EPFL’s Visual Intelligence for Transportation (VITA) laboratory have introduced a method called retraining by error recycling.
Unlike conventional approaches that try to avoid mistakes, this method deliberately feeds the AI’s own mistakes back into the training process.
By doing so, the model learns to correct errors in future frames, limiting the progressive degradation of images.
The process involves generating a video, identifying discrepancies between produced frames and intended frames, and retraining the AI on these discrepancies to refine future output.
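As a rough illustration of that generate-compare-retrain loop, here is a minimal PyTorch sketch; it assumes a next-frame prediction model and a mean-squared-error objective, and none of the names come from the VITA lab's actual code.

```python
import torch
import torch.nn.functional as F

def error_recycling_step(model, optimizer, clip):
    """One hypothetical error-recycling update.

    `clip` is a ground-truth video of shape (T, C, H, W); `model` maps
    a batch of frames to predicted next frames.
    """
    # 1. Generate: roll the model out on its own outputs.
    generated = [clip[0]]
    with torch.no_grad():
        for t in range(1, clip.shape[0]):
            generated.append(model(generated[-1].unsqueeze(0)).squeeze(0))

    # 2. Identify discrepancies and retrain on them: conditioned on its
    #    own imperfect frame at step t-1, the model must still recover
    #    the intended frame at step t, so errors get corrected rather
    #    than amplified in later frames.
    optimizer.zero_grad()
    loss = 0.0
    for t in range(1, clip.shape[0]):
        predicted = model(generated[t - 1].unsqueeze(0)).squeeze(0)
        loss = loss + F.mse_loss(predicted, clip[t])
    loss = loss / (clip.shape[0] - 1)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The difference from standard training is the input at each step: instead of always conditioning on pristine ground-truth frames, the model also sees its own flawed outputs, which is what teaches it to correct errors rather than propagate them.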
Current AI video systems typically produce sequences that remain realistic for less than 30 seconds before degrading shapes, colors and motion logic.
By integrating error recycling, the EPFL team has produced videos that resist drift over longer durations, potentially removing strict time constraints on generative video.
This progress enables AI systems to create more stable sequences in applications such as simulations, animations or automated visual storytelling.
Although this approach addresses drift, it does not eliminate all technical limitations.
Retraining on recycled errors increases computational demand and may require continuous monitoring to prevent the model from overfitting to specific error patterns.
Large-scale deployments may face resource and efficiency constraints, as well as the need to maintain consistency across different video content.
Whether it’s really a good idea to feed an AI its own errors is still uncertain, as the method could introduce unforeseen biases or reduce generalization in complex scenarios.
The development at VITA Lab shows that AI can learn from its own mistakes, potentially extending the time limits of video generation.
But how this method will work outside of controlled testing or in creative applications remains unclear, suggesting caution before assuming it can fully solve the drift problem.
Via TechXplore