Google has stressed that the metadata field in “About this image” is not a surefire way to determine the origins, or provenance, of an image. It’s mostly designed to give more context, or to alert the casual internet user if an image is much older than it appears (a sign it may have been repurposed) or if it has been flagged as problematic on the internet before.
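To see why metadata alone is weak evidence, note that EXIF fields are trivial to read and just as trivial to lose. The sketch below, written in Python with the Pillow library, builds a tiny stand-in image (the filenames and the “ExampleCam” value are invented for illustration) and shows that a plain re-save silently discards the metadata:

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Build a tiny image carrying one EXIF field, standing in for a real photo.
img = Image.new("RGB", (8, 8))
exif = Image.Exif()
exif[0x010F] = "ExampleCam"          # 0x010F is the standard "Make" tag
img.save("tagged.jpg", exif=exif.tobytes())

# Reading metadata back is trivial...
for tag_id, value in Image.open("tagged.jpg").getexif().items():
    print(TAGS.get(tag_id, tag_id), value)   # -> Make ExampleCam

# ...and so is losing it: a plain re-save without exif= drops every field,
# which is why an empty metadata panel proves nothing about provenance.
Image.open("tagged.jpg").save("stripped.jpg")
assert len(Image.open("stripped.jpg").getexif()) == 0
```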
Provenance, inference, watermarking, and media literacy: These are just some of the words and phrases used by the research teams now tasked with identifying computer-generated imagery as it multiplies exponentially. But all of these tools are fallible in some way, and most entities, including Google, acknowledge that spotting fake content will likely require a multi-pronged approach.
WIRED’s Kate Knibbs recently reported on watermarking (digitally stamping online texts and photos so their origins can be traced) as one of the more promising strategies; so promising that OpenAI, Alphabet, Meta, Amazon, and Google’s DeepMind are all developing watermarking technology. Knibbs also reported on how easily groups of researchers were able to “wash out” certain types of watermarks from online images.
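To make that washing-out result concrete, here is a minimal sketch of the fragile end of the spectrum: a least-significant-bit watermark, written in Python with NumPy and Pillow. This is an illustration of the general idea, not any of those companies’ actual schemes, and a single lossy JPEG re-encode is enough to destroy it:

```python
from io import BytesIO

import numpy as np
from PIL import Image

def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one watermark bit in the least significant bit of each red value."""
    out = pixels.copy()
    flat = out.reshape(-1, 3)                        # view into the copy
    flat[: bits.size, 0] = (flat[: bits.size, 0] & 0xFE) | bits
    return out

def extract_lsb(pixels: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n hidden bits."""
    return pixels.reshape(-1, 3)[:n, 0] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in photo
bits = np.unpackbits(np.frombuffer(b"provenance", dtype=np.uint8))

stamped = embed_lsb(image, bits)
assert np.array_equal(extract_lsb(stamped, bits.size), bits)  # intact copy: mark survives

# A single lossy re-encode "washes out" the mark: JPEG quantization
# scrambles low-order bits wholesale.
buf = BytesIO()
Image.fromarray(stamped).save(buf, format="JPEG", quality=90)
buf.seek(0)
washed = np.asarray(Image.open(buf).convert("RGB"))
corrupted = (extract_lsb(washed, bits.size) != bits).mean()
print(f"{corrupted:.0%} of watermark bits corrupted after re-encoding")
```

The more robust watermarks in development embed signals that are designed to survive compression and cropping, but the researchers Knibbs covered showed that even some of those can be scrubbed or forged.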
Reality Defender, a New York startup that sells its deepfake detector tech to government agencies, banks, and tech and media companies, believes that it’s nearly impossible to know the “ground truth” of AI imagery. Ben Colman, the firm’s cofounder and chief executive, says that establishing provenance is complicated because it requires every manufacturer selling an image-making machine to buy in to a specific set of standards. He also believes that watermarking may be part of an AI-spotting toolkit, but it’s “not the strongest tool in the toolkit.”
Reality Defender is focused instead on inference, essentially using more AI to spot AI. Its system scans text, imagery, or video assets and gives a 1-to-99 percent probability that the asset has been manipulated in some way.
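As a rough sketch of what that reporting pattern looks like in code (a generic scikit-learn-style stand-in, not Reality Defender’s system; the feature vectors, labels, and function names here are invented), the notable design choice is clamping the score away from 0 and 100 so the tool never claims certainty:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def manipulation_score(p_fake: float) -> int:
    """Map a detector's raw probability onto a 1-99 scale.
    Clamping keeps the score from ever asserting ground truth."""
    return int(round(float(np.clip(p_fake, 0.01, 0.99)) * 100))

def scan_asset(features: np.ndarray, model) -> int:
    """`model` is any binary classifier exposing predict_proba;
    column 1 is taken as P(manipulated)."""
    p_fake = model.predict_proba(features.reshape(1, -1))[0, 1]
    return manipulation_score(p_fake)

# Toy demo on invented data: 8 stand-in features per asset,
# random real/fake labels purely to make the example runnable.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 8)), rng.integers(0, 2, size=200)
detector = LogisticRegression().fit(X, y)
print(scan_asset(rng.normal(size=8), detector))  # e.g. 47
```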
“At the highest level we disagree with any requirement that puts the onus on the consumer to tell real from fake,” says Colman. “With the advancements in AI and just fraud in general, even the PhDs in our room cannot tell the difference between real and fake at the pixel level.”
To that point, Google’s “About this image” will exist under the assumption that most internet users aside from researchers and journalists will want to know more about an image, and that the context provided will help tip a person off if something’s amiss. Google is also, of note, the entity that in recent years pioneered the transformer architecture, the T in ChatGPT; the creator of a generative AI tool called Bard; and the maker of tools like Magic Eraser and Magic Memory that alter images and distort reality. It’s Google’s generative AI world, and most of us are just trying to spot our way through it.