AI: how an unprecedented alliance of 900 players is fighting “deepfakes”
Posted Apr 12, 2023, 9:00 AM
The Pope in a white Balenciaga down jacket, the bloodied face of an old man manhandled by the police, Putin kneeling in front of Xi Jinping, Donald Trump arrested… Fake images generated by artificial intelligence are spreading like wildfire on social networks.
Artificial intelligence is amplifying, on a very large scale, a phenomenon that has always existed: the manipulation of images for propaganda purposes or to sow confusion. Once an image is everywhere, it is difficult to identify its origin. Tools created to detect fake images do not always work, and may even mislead their users.
For the moment, it is still possible to spot images generated by artificial intelligence, provided you look closely and do not react in the heat of the moment. The images often have a slightly odd grain, faces can be blurry, hands sometimes have six or seven fingers, and distinct elements of the image overlap.
“However, AIs are improving day by day and presenting fewer and fewer anomalies, so we should not rely on visual clues in the long term,” warns Annalisa Verdoliva, professor at the Frederick II University of Naples and AI expert. Videos, in particular, are expected to become increasingly realistic.
The Content Authenticity Initiative
In an attempt to restore some semblance of order to this jungle of fake content, Adobe, the “New York Times” and Twitter joined forces to launch the “Content Authenticity Initiative” in 2019. It now brings together nearly 900 organisations: media outlets (including AFP, Reuters, the BBC and France Télévisions) as well as tech companies (Microsoft, Arm, Intel and Nvidia) and camera manufacturers (Nikon, Leica).
“Three or four years ago, we saw AI arriving. We were developing it ourselves, and we realised that once it was in everyone’s hands – which is happening – it would be very difficult to know what to believe on the internet,” Dana Rao, Adobe’s vice-president in charge of public affairs and security, tells “Les Echos”.
Like an organic label
This alliance wants to authenticate content so it can be set apart from the mass of dubious images circulating on the internet. “This initiative is designed to help organizations […] by offering them a way to authenticate their content,” continues Dana Rao. Rather like an “organic farming” label, it lets recognised media outlets display the origin of an image.
In practice, this involves embedding a new metadata file in the photo. This file records the conditions under which the image was taken (the type of device, for example), as well as whether it was modified and how. For instance, a photograph taken by AFP may have been retouched by “Les Echos” for its website; these are often small changes such as colour balance adjustments or cropping.
“When you view an image on a site, you will be able to look at the symbol on it and understand what changes have been made. For example, the sky was brightened and the hair colour was changed, but it is still the same person. Or, on the contrary, the face has been replaced,” continues Dana Rao.
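To picture what such a metadata record might look like, here is a minimal, purely illustrative sketch in Python. The function, field names and tool name are assumptions for illustration only; the actual format used by the Content Authenticity Initiative is defined by the C2PA specification, which is considerably richer and cryptographically signed.

```python
import hashlib
import json

def make_provenance_record(image_bytes, device, actions):
    """Build a hypothetical provenance record: a fingerprint of the
    image, the capture device, and a log of each edit applied."""
    return {
        "claim_generator": "example-tool/1.0",  # hypothetical tool name
        # Hash of the image bytes, so tampering can be detected.
        "asset_hash": hashlib.sha256(image_bytes).hexdigest(),
        "capture": {"device": device},
        # Each edit is logged so a viewer can see what changed.
        "edit_history": [{"action": a} for a in actions],
    }

record = make_provenance_record(
    b"...raw image bytes...",
    device="Nikon Z9",
    actions=["color_balance", "crop"],
)
print(json.dumps(record, indent=2))
```

A viewer application would compare the stored hash against the displayed image and render the edit history next to the provenance symbol mentioned above.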
The American software publisher acknowledges that this initiative will not be enough to stem the tide of fake images likely to flood the web because of AI image generators. But it at least allows trusted players to display the origin of their photos, in the hope of reassuring their readers.