- iProov study finds older adults struggle most with deepfakes
- Overconfidence is widespread among younger generations
- Social media is a deepfake hotspot, experts warn
As deepfake technology continues to advance, concerns about misinformation, fraud, and identity theft are mounting, while literacy around AI tools remains surprisingly low.
A recent iProov study claims most people struggle to distinguish deepfake content from reality. The study exposed 2,000 participants from the UK and the US to a mix of real and AI-generated images and videos, and found that only 0.1% of participants – two whole people – correctly distinguished between real and deepfake stimuli.
The study found that older adults are especially susceptible to AI-generated deception. Around 30% of those aged 55-64, and 39% of those over 65, had never heard of deepfakes before. And while younger participants were more confident in their ability to spot deepfakes, their actual performance in the study was no better.
Older generations are more vulnerable
Deepfake videos proved markedly more difficult to detect than images, the study added: participants were 36% less likely to correctly identify a fake video than a fake image, raising concerns about video-based fraud and misinformation.
Social media platforms were highlighted as major sources of deepfake content. Almost half of participants (49%) identified Meta platforms, including Facebook and Instagram, as the most common places where deepfakes are found, while 47% pointed to TikTok.
“[This underlines] how vulnerable both organizations and consumers are to the threat of identity fraud in the age of deepfakes,” said Andrew Bud, founder and CEO of iProov.
“Criminals are exploiting consumers’ inability to distinguish real images from fake ones, putting personal information and financial security at risk.”
Bud added that even when people suspect a deepfake, most take no action. Only 20% of respondents said they would report a suspected deepfake if they encountered one online.
As deepfakes become more sophisticated, iProov believes that human perception alone can no longer be relied upon for detection, and Bud emphasized the need for biometric security solutions with liveness detection to combat the threat of ever more convincing deepfake material.
“It is down to technology companies to protect their customers by implementing robust security measures,” he said. “Using facial biometrics with liveness detection provides a trustworthy authentication factor and prioritizes both security and individual control, ensuring that organizations and users can keep pace with these evolving threats.”