FBI Warns: Criminals Leveraging Generative AI for Widespread, Convincing Fraud Schemes
AI technology is now being exploited by scammers to impersonate family members, marking a troubling new trend in fraudulent activities.
The Federal Bureau of Investigation (FBI) has issued a warning that criminals are using generative artificial intelligence (AI) to commit fraud on a much larger scale, noting that the technology increases the “believability of their schemes.”
As a growing number of criminals harness AI for fraud and extortion schemes, distinguishing AI-generated content from authentic material is becoming increasingly difficult.
One alarming tactic involves audio scams where perpetrators use AI-generated “short audio clips featuring a loved one’s voice to impersonate a close relative during a crisis, pleading for immediate financial help or demanding ransom,” according to the alert.
In one widely reported case, Arizona mother Jennifer DeStefano received a call in which what sounded like her teenage daughter’s voice cried out for help. A man’s voice then claimed the girl had been kidnapped and demanded ransom, but DeStefano was relieved to find her daughter safe inside their home.
In another case, a man watched an AI-generated video promoting an investment platform. Convinced by the video, he invested in the platform, ultimately losing at least $12,000, his life savings.
Criminals are also using AI to conduct real-time video chats in which they pose as high-profile individuals, such as corporate executives or authority figures.
Fraudsters are leveraging AI-generated text and images to create an array of convincing fake materials. They use AI, for example, to build social media profiles filled with content, lending those accounts an appearance of authenticity.
Additionally, AI-driven image generation allows them to forge fake driver’s licenses and various government and banking documents for use in impersonation scams.
AI Explicit Content Threat
An FBI alert from last June raised concerns about malicious actors using AI to alter images and videos to create sexually explicit content.
To generate such material, these perpetrators manipulate videos and images uploaded by targets on social media or other platforms. Once the fake content is created, it is disseminated through social media or pornographic websites, according to the FBI.
“The images are sent directly to the victims for sextortion or harassment,” the agency emphasized. “Once shared, victims face significant obstacles in stopping the ongoing circulation of this manipulated content or having it removed from the internet.”
One teenage victim, Elliston Berry, recounted her experience during a Senate field hearing in June on the TAKE IT DOWN Act, legislation sponsored by Sen. Ted Cruz (R-Texas) that targets the nonconsensual publication of intimate images, including AI-generated deepfakes. “I was left speechless as I tried to wrap my head around the fact that this was occurring,” she said.
If enacted, the bill would require social media platforms to remove reported content within 48 hours of a victim’s complaint.
“For young victims and their families, these deepfakes represent a critical issue that demands immediate legal protection,” Cruz stated.