Interactive and compositional deepfakes are two growing classes of threats in information security, says Eric Horvitz, Microsoft's Chief Scientific Officer.
According to Horvitz, the rapidly developing capabilities of discriminative and generative AI methods are approaching a critical point.
“The achievements provide unprecedented tools that can be used by state and non-state actors to create and spread convincing disinformation,” Horvitz wrote.
Horvitz attributes the problem to the methodology of generative adversarial networks (GANs), which consist of two competing components: a generator and a discriminator. The generator creates content, while the discriminator evaluates how realistic it is.
Horvitz added that, over time, the generator learns to deceive the discriminator.
"Through this process, which lies at the heart of deepfakes, neither pattern-recognition methods nor humans will be able to reliably identify fakes," he wrote.
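The adversarial dynamic described above can be sketched in a toy form. The example below is a hypothetical 1-D illustration (a linear generator and a logistic-regression discriminator, with made-up hyperparameters), not any production deepfake model: the generator maps noise to samples, the discriminator scores real vs. fake, and each update nudges the generator toward fooling the discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The toy generator is an affine map
# of noise z ~ N(0, 1); the discriminator is logistic regression on x.
g_w, g_b = 1.0, 0.0   # generator parameters (hypothetical starting values)
d_w, d_b = 0.0, 0.0   # discriminator parameters
lr, n = 0.01, 64      # learning rate and batch size (arbitrary choices)

for step in range(2000):
    real = rng.normal(4.0, 1.0, n)
    z = rng.normal(0.0, 1.0, n)
    fake = g_w * z + g_b

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(d_w * real + d_b)
    d_fake = sigmoid(d_w * fake + d_b)
    d_w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    d_b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: gradient ascent on log D(fake) through the
    # (now-fixed) discriminator, i.e. learn to fool it.
    d_fake = sigmoid(d_w * fake + d_b)
    g_w += lr * np.mean((1 - d_fake) * d_w * z)
    g_b += lr * np.mean((1 - d_fake) * d_w)
```

Over training, the generator's output distribution drifts toward the real data, which is exactly why a well-trained generator becomes hard to distinguish from the genuine article.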
The expert emphasized that, until now, AI-generated fakes have been created and distributed as one-off, standalone creations.
“However, we can now expect the emergence of new forms of convincing deepfakes that go beyond fixed single-element productions,” he said.
Horvitz suggested several measures to help prepare for and protect against the expected surge in fakes. Among them:
- raising standards for journalism and reporting;
- improving people's media literacy;
- introducing new authentication protocols for identity verification;
- developing standards for verifying the provenance of content;
- continuous monitoring.
"It is important to be vigilant about interactive and compositional deepfakes," Horvitz concluded.
As a reminder, in September scientists presented a method for detecting audio fakes by measuring differences between samples of organic and synthetic speech.
In August, hackers used a deepfake in Binance listing scams. The attackers impersonated Patrick Hillmann, the company's CCO, in a series of video calls with representatives of cryptocurrency projects.
In June, the FBI warned about the growing number of cases of deepfakes being used in online job interviews.