Only a few months ago, AI content was easy to spot: unnatural inflections in speech, weird earlobes in photos, bland language in writing. This is no longer the case. In June, scammers used an AI to impersonate a daughter’s voice and rob her mother. Candidates are already using deepfakes as propaganda. And LLMs may help spammers by automating the otherwise costly back-and-forth conversations needed to separate a mark from their money. We need a way to distinguish things made by humans from things made by algorithms, and we need it very soon.
A universal way to tell human-generated content from AI-generated content would mitigate many of the concerns people have about this burgeoning technology. Consumers of generative text could “reveal AI” to quickly see what was written by a machine. Software companies could add AI markup awareness to their products, changing the way we find, replace, copy, paste, and share content. Governments could agree to buy generative AI only from companies that mark their output in this way, creating considerable market incentives. Teachers could insist that students leave the markings intact to leverage the power of generative AI while still showing their original thought. And brands that want to be “AI transparent” could promise not to remove the marker, making non-GPT the new non-GMO.
Fortunately, we have a solution waiting in plain sight. But to understand the elegance of this relatively simple hack, let’s first look at the alternatives and why they won’t work.
Both legislators and tech firms agree that the best way to distinguish AI-generated content from content made by humans is to mark it at the point of origin, something seven tech firms pledged to do as part of an agreement the White House announced last week. There are three broad approaches to watermarking digital content. The first is to add metadata, which cameras have been doing for decades. Blocks of text are often marked up as well. When you type something in bold, or set a font’s color on a website, the word processor or browser labels your content with metadata. But it’s application-specific: Paste some bold text into your address bar, and the formatting is gone.
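Here is a small Python sketch, purely for illustration, of why that happens: the styling lives in markup wrapped around the characters, so anything that keeps only the characters drops the formatting.

# A minimal sketch of why formatting metadata is application-specific:
# the bold and color styling live in HTML tags around the text, so any
# tool that copies only the characters throws the markup away.
from html.parser import HTMLParser

class PlainTextExtractor(HTMLParser):
    """Collects only character data, discarding every tag (the metadata)."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

styled = 'This sentence has <b>bold</b> and <span style="color:red">red</span> words.'

extractor = PlainTextExtractor()
extractor.feed(styled)
print("".join(extractor.chunks))
# -> This sentence has bold and red words.
# The characters survive; the formatting metadata does not.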
You can also watermark digital images using steganography, the craft of concealing one message inside another. First used by spies to smuggle secrets, the technique now underpins plenty of design tools that add hidden markings to images and then crawl the web looking for copyright violators. And cryptography works for watermarking, too. You can digitally sign a paragraph of text, and then tell when it’s been altered, either through a centralized system (a digital certificate authority) or a distributed one (a blockchain). This is why that movie you bought only plays in iTunes, and that NFT you’ve forgotten about still belongs to you.
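Here is a rough sketch of the signing idea in Python. It assumes the third-party cryptography package and a bare key pair; a certificate authority or a blockchain mostly solves the separate problem of distributing and trusting the public key.

# A minimal sketch of signing a paragraph of text so that any alteration
# is detectable, using Ed25519 signatures from the "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

paragraph = "This paragraph was written by a human on July 28, 2023.".encode()

private_key = Ed25519PrivateKey.generate()   # held by the author
public_key = private_key.public_key()        # published for verifiers
signature = private_key.sign(paragraph)

# Verification succeeds only if the text is byte-for-byte unchanged.
public_key.verify(signature, paragraph)      # no exception: text is intact

tampered = paragraph.replace(b"human", b"machine")
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("The paragraph was altered after signing.")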
But these approaches have three fundamental problems. First, they require immense coordination: a good AI markup solution would need to work seamlessly across billions of devices, and the markings would have to survive being copied and pasted from one app, operating system, or platform to another. Second, any solution would have to be accessible to any human with an internet connection, immediately and without any training. It would need to be deployable to the whole world with just a software update.
Third, while watermarks work well enough for large objects like images, songs, or book chapters, they don’t work for smaller objects like individual words or letters. That means these approaches can’t handle content that blends human and machine work. If a document is generated by an AI and then edited by a human, you need a more fine-grained watermark: the digital equivalent of a highlighter.
That may seem like an impossibly tall order. But in fact, this system already exists: Unicode.
Unicode is the universal numbering system for text, and text is the fundamental building block of the internet. In Unicode, every character has a number, called a code point. The Latin Capital Letter A, for example, is code point U+0041. But there are plenty of other A’s in Unicode: There’s Fullwidth Latin Capital Letter A (Ａ, U+FF21), Mathematical Bold Capital A (𝐀, U+1D400), Mathematical Sans-Serif Capital A (𝖠, U+1D5A0), and plenty of others. Each A has its own name, its own code point, and in some cases, its own shape in a font. Why not create a letter A just for AI?
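A few lines of Python, offered here only as an illustration, make the point concrete: the standard library’s unicodedata module knows each character’s official name, and the byte sequences are simply the UTF-8 encodings of those code points.

# Print the name, code point, and UTF-8 bytes of several look-alike capital A's.
import unicodedata

for ch in ["A", "\uFF21", "\U0001D400", "\U0001D5A0"]:
    print(
        f"{ch}  U+{ord(ch):04X}  "
        f"utf-8: {ch.encode('utf-8').hex(' ').upper()}  "
        f"{unicodedata.name(ch)}"
    )

# A  U+0041   utf-8: 41           LATIN CAPITAL LETTER A
# Ａ  U+FF21   utf-8: EF BC A1     FULLWIDTH LATIN CAPITAL LETTER A
# 𝐀  U+1D400  utf-8: F0 9D 90 80  MATHEMATICAL BOLD CAPITAL A
# 𝖠  U+1D5A0  utf-8: F0 9D 96 A0  MATHEMATICAL SANS-SERIF CAPITAL A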