Europol: Deepfakes Set to Be Used Extensively in Organized Crime

Deepfake technology is set to be used extensively in organized crime over the coming years, according to new research by Europol.

Deepfakes involve the application of artificial intelligence to audio and audio-visual content “that convincingly shows people saying or doing things they never did, or create personas that never existed in the first place.”

“Facing Reality? Law enforcement and the challenge of deepfakes,” the first published analysis from the Europol Innovation Lab’s Observatory function, warned that law enforcement agencies will need to enhance the skills and technologies at officers’ disposal to keep pace with criminals’ use of deepfakes.

The analysis highlighted how deepfakes are being used nefariously in three key areas: disinformation, non-consensual pornography and document fraud. It predicts such attacks will become increasingly realistic and dangerous as the technology improves in the coming years.

1) Disinformation: Europol gave several examples of how false information could be spread using deepfakes, with potentially devastating consequences. These include uses in the geopolitical sphere, such as creating a fake emergency alert that warns of an impending attack. In February, before the Russia-Ukraine conflict, the United States accused the Kremlin of a disinformation plot intended to serve as a pretext for an invasion of Ukraine.

The technology could also be used to target businesses, such as creating a video or audio deepfake that makes it appear as though a company’s executive engaged in a controversial or illegal act. In one well-publicized case, criminals defrauded an energy company to the tune of $243,000 after impersonating the voice of the chief executive.

2) Non-consensual pornography: The report cited a study by Sensity, which found that 96% of deepfake videos online involved non-consensual pornography. This typically involves overlaying a victim’s face onto the body of a pornography actor, making it appear that the victim is engaging in the act.

3) Document fraud: While passports are becoming increasingly difficult to forge thanks to modern fraud prevention measures, the report found that “synthetic media and digitally manipulated facial images present a new approach for document fraud.” For example, these technologies can combine or morph the face of the legitimate passport holder with that of the person seeking to obtain a passport illegally, increasing the chances that the photo will pass identity checks, including automated ones.

The authors added that, similarly to other tools used in cybercrime, “deepfake capabilities are becoming more accessible for the masses through deepfake apps and websites.”

In addition, the report observed that deepfakes could negatively impact the legal process, for example, by artificially manipulating or generating media to prove or disprove someone’s guilt. In one recent child custody case, the child’s mother manipulated an audio recording of her husband in an attempt to convince the court that he had behaved violently towards her.

To deal effectively with these kinds of threats, Europol said law enforcement agencies must develop new skills and technologies. These include manual detection, which involves looking for inconsistencies such as unnatural blinking, blurring around the face or mismatched lighting, and automated techniques, such as AI-based deepfake detection software being developed by organizations including Facebook and security firm McAfee.
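To make the automated route concrete, the sketch below shows, in rough terms, what frame-level detection can look like in practice. It is an illustrative example only, not the approach used by Facebook, McAfee or Europol: it assumes a ResNet-18 image classifier that has already been fine-tuned elsewhere to separate real from synthetic faces, and the checkpoint file, sampling rate and video filename are hypothetical placeholders.

# Illustrative sketch only: a per-frame "real vs. deepfake" classifier.
# Assumes a ResNet-18 backbone fine-tuned elsewhere on labelled real/synthetic
# face crops; the checkpoint path and video filename are placeholders.
import cv2                                   # OpenCV, used to read video frames
import torch
from torchvision import models, transforms

# Standard ImageNet-style preprocessing applied to each sampled frame.
preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ResNet-18 with a two-class head: index 0 = real, index 1 = deepfake.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.load_state_dict(torch.load("deepfake_detector.pt"))   # hypothetical checkpoint
model.eval()

def score_video(path, sample_every=30):
    """Return the mean per-frame probability that the clip is a deepfake."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:                          # roughly one frame per second at 30 fps
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # OpenCV yields BGR; the model expects RGB
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(batch), dim=1)
            scores.append(probs[0, 1].item())                # probability of the "deepfake" class
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

print(f"deepfake score: {score_video('suspect_clip.mp4'):.2f}")

Production systems are generally more elaborate than a single frame classifier, combining several signals such as facial landmarks, frequency-domain artifacts and audio-visual synchronization before flagging a clip.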

Policymakers also need to develop more legislation to set guidelines and enforce compliance around the use of deepfakes, the report added.

The researchers stated: “In the months and years ahead, it is highly likely that threat actors will make increasing use of deepfake technology to facilitate various criminal acts and conduct disinformation campaigns to influence or distort public opinion. Advances in machine learning and artificial intelligence will continue enhancing the capabilities of the software used to create deepfakes.”

They added: “The increase in use of deepfakes will require legislation to set guidelines and enforce compliance. Additionally, social networks and other online service providers should play a greater role in identifying and removing deepfake content from their platforms. As the public becomes more educated on deepfakes, there will be increasing concern worldwide about their impact on individuals, communities and democracies.”
