- Noise-coded lighting hides invisible video watermarks inside light patterns to detect manipulation
- The technique remains effective across varied lighting, compression levels and camera movement
- Forgers need to replicate multiple matching code videos to bypass detection successfully
Cornell University researchers have developed a new method of detecting manipulated or AI-generated video by embedding coded signals in light sources.
The technique, known as noise-coded lighting, hides information within seemingly random light fluctuations.
Each embedded watermark carries a low-fidelity, time-stamped version of the original scene under slightly altered lighting; when manipulation occurs, the altered areas fail to match these coded versions, revealing evidence of change.
The system can be implemented in software on computer screens or by attaching a small chip to standard lamps.
Because the embedded data appears as noise, it is extremely difficult to detect without the decoding key.
This approach relies on information asymmetry, ensuring that those attempting to create deepfakes lack access to the unique embedded data required to produce convincing forgeries.
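The basic idea can be pictured with a toy sketch. The Python below is illustrative only and is not the Cornell implementation: it assumes a secret key drives a pseudorandom per-frame brightness code, that a light source adds this faint flicker to the scene at capture time, and that a verifier later correlates the recording with the code to see which regions still carry it. The function names, array shapes and thresholds are hypothetical.

```python
import numpy as np

def light_code(key: int, num_frames: int) -> np.ndarray:
    """Pseudorandom +/-1 brightness modulation, one value per frame, derived from a secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=num_frames)

def embed(frames: np.ndarray, code: np.ndarray, strength: float = 0.01) -> np.ndarray:
    """At capture time: slightly brighten or darken each frame according to the code."""
    return np.clip(frames + strength * code[:, None, None], 0.0, 1.0)

def code_response(frames: np.ndarray, code: np.ndarray) -> np.ndarray:
    """Per-pixel correlation of the recording with the code.
    Genuine regions still flicker with the code and give a positive response;
    replaced or regenerated regions do not."""
    detrended = frames - frames.mean(axis=0, keepdims=True)
    return (detrended * code[:, None, None]).mean(axis=0)

def flag_manipulated(frames: np.ndarray, code: np.ndarray, strength: float = 0.01) -> np.ndarray:
    """Boolean mask of pixels whose code response falls well below what the embedded modulation should produce."""
    return code_response(frames, code) < 0.5 * strength

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    code = light_code(key=42, num_frames=256)
    base = rng.uniform(0.2, 0.8, size=(32, 32))                    # toy static scene
    scene = base + rng.normal(0.0, 0.002, size=(256, 32, 32))      # plus mild sensor noise
    recorded = embed(scene, code)                                  # genuine coded footage
    tampered = recorded.copy()
    tampered[:, 8:16, 8:16] = 0.5                                  # a pasted-in patch loses the flicker
    print("flagged pixels (genuine): ", int(flag_manipulated(recorded, code).sum()))
    print("flagged pixels (tampered):", int(flag_manipulated(tampered, code).sum()))
```

In this sketch, an edited region no longer carries the coded flicker, so its code response collapses and it is flagged, while untouched regions pass; without the key, a forger cannot reproduce the correct flicker inside the edited area.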
The researchers tested their method against a range of manipulation techniques, including deepfakes, compositing and changes in playback speed.
They also assessed it under varied environmental conditions, such as different light levels, degrees of video compression, camera movement and both indoor and outdoor settings.
In all scenarios, the coded-light technique remained effective, even when the changes were too subtle for human perception.
Even if a forger learned the decoding method, they would still have to reproduce multiple matching code versions of the recording.
Each of these must stay consistent with the hidden light patterns, a task that greatly increases the complexity of producing undetectable video forgeries.
The research addresses an increasingly urgent problem in the authentication of digital media, as the availability of sophisticated editing tools means that people can no longer assume that video represents reality without question.
While methods such as checksums can detect file changes, they cannot distinguish between harmless compression and deliberate manipulation.
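A minimal illustration of that limitation (not from the Cornell work): a checksum flags any byte change at all, so a mismatch alone says nothing about whether the change was benign transcoding or tampering. The byte strings below are placeholders.

```python
import hashlib

original     = b"frame data as captured by the camera"
recompressed = b"same scene, re-encoded at a lower bitrate"   # harmless change
manipulated  = b"same scene, with a face swapped in"          # malicious change

# All three digests differ; the checksum cannot tell the two kinds of change apart.
for label, data in [("original", original), ("recompressed", recompressed), ("manipulated", manipulated)]:
    print(label, hashlib.sha256(data).hexdigest()[:16])
```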
Some watermarking technologies require control over the recording equipment or the original source material, making them impractical for wider use.
Noise-coded lighting could be integrated into security suites to protect sensitive video feeds.
This type of embedded authentication can also help reduce the risk of identity theft by protecting personal or official video records against undetected manipulation.
Although the Cornell team highlighted the strong protection the work offers, it noted that the broader challenge of deepfake detection will continue as manipulation tools evolve.



