The invasion of chatbots has disrupted the plans of countless businesses, including some that had been working on that very technology for years (looking at you, Google). But not Artifact, the news discovery app created by Instagram cofounders Kevin Systrom and Mike Krieger. When I talked to Systrom this week about his startup—a much-anticipated follow-up to the billion-user social network that’s been propping up Meta for the past few years—he was emphatic that Artifact is a product of the recent AI revolution, even though it was devised before GPT began its chatting. In fact, Systrom says that he and Krieger started with the idea of exploiting the powers of machine learning—and then ended up with a news app after scrounging around for a serious problem that AI could help solve.
That problem is the difficulty of finding individually relevant, high-quality news articles—the ones people most want to see—without having to wade through irrelevant clickbait, misleading partisan cant, and low-calorie distractions to get to those stories. Artifact delivers what looks like a standard feed containing links to news stories, with headlines and descriptive snippets. But unlike the links displayed on Twitter, Facebook, and other social media, the selection and ranking are determined not by who is suggesting a story but by the content of the story itself: ideally, the content each user wants to see, drawn from publications vetted for reliability.
What makes that possible, Systrom tells me, is his small team’s commitment to the AI transformation. While Artifact doesn’t converse with users like ChatGPT—at least not yet—the app relies on a homegrown large language model that’s instrumental in choosing which news articles each individual sees. Under the hood, Artifact digests news articles so that each story’s content can be represented as a long string of numbers.
By comparing those numerical representations (embeddings, in machine-learning parlance) of available news stories to the ones for stories a given user has shown a preference for (through clicks, reading time, or a stated desire to see stories on a given topic), Artifact provides a collection of stories tailored to a unique human being. “The advent of these large language models allow us to summarize content into these numbers, and then allows us to find matches for you much more efficiently than you would have in the past,” says Systrom. “The difference between us and GPT or Bard is that we’re not generating text, but understanding it.”
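Systrom doesn’t detail Artifact’s pipeline, but the technique he describes—encode text into vectors, then match a user’s reading history against candidate stories—is a standard embedding-retrieval pattern. Here’s a minimal sketch using the open-source sentence-transformers library as a stand-in for Artifact’s homegrown model; the model name, the sample headlines, and the mean-of-liked-stories user profile are all illustrative assumptions, not Artifact’s actual system.

```python
# Illustrative sketch of embedding-based news ranking. Artifact's real model
# and ranking signals are proprietary; every name here is hypothetical.
import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in encoder

model = SentenceTransformer("all-MiniLM-L6-v2")

articles = [
    "Fed holds interest rates steady amid cooling inflation",
    "Ten celebrity diets you won't believe actually work",
    "New open-source model narrows the gap with GPT-4",
]

# Represent each article as "a long string of numbers" (an embedding).
article_vecs = model.encode(articles, normalize_embeddings=True)

# A crude user profile: the mean embedding of stories the user engaged with.
liked = ["Why transformer models keep getting cheaper to run"]
user_vec = model.encode(liked, normalize_embeddings=True).mean(axis=0)

# Cosine similarity (dot product of normalized vectors) ranks the feed.
scores = article_vecs @ user_vec
for idx in np.argsort(-scores):
    print(f"{scores[idx]:.3f}  {articles[idx]}")
```

On a sketch like this, the AI story would score highest for that user—matching on meaning rather than on who shared the link, which is the distinction Systrom is drawing.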
That doesn’t mean that Artifact has ignored the recent boom in AI that does generate text for users. The startup has a business relationship with OpenAI that provides access to the API for GPT-4, OpenAI’s latest and greatest language model, which powers the premium version of ChatGPT. When an Artifact user selects a story, the app offers the option to have the technology summarize the article into a few bullet points, so users can get the gist of the story before they commit to reading on. (Artifact warns that, since the summary was AI-generated, “it may contain mistakes.”)
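Artifact hasn’t published its prompt, but a GPT-4 summarization call of the kind described here looks roughly like the following against OpenAI’s public chat-completions API. The prompt wording, temperature, and file name are assumptions for illustration only.

```python
# Hypothetical sketch of the bullet-point summary step via the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(article_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize the article in three short bullet points. "
                        "Do not add facts that are not in the text."},
            {"role": "user", "content": article_text},
        ],
        temperature=0.2,  # keep the summary close to the source text
    )
    return resp.choices[0].message.content

summary = summarize(open("article.txt").read())
# Per the app's own caveat: the summary is AI-generated and may contain mistakes.
print(summary)
```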
Today, Artifact is taking another jump on the generative-AI rocket ship in an attempt to address an annoying problem: clickbait headlines. The app already offers a way for users to flag clickbait stories, and if multiple people tag an article, Artifact won’t spread it. But, Systrom explains, sometimes the problem isn’t with the story but with the headline. It might promise too much, or mislead, or lure the reader into clicking just to find that the headline withheld the key information. From the publisher’s viewpoint, winning more clicks is a big plus—but it’s frustrating to users, who might feel they have been manipulated.
Systrom and Krieger have created a futuristic way to mitigate this problem. If a user flags a headline as dicey, Artifact will submit the story to GPT-4. The model then analyzes the content of the story and writes its own, more descriptive headline, and that title is what the complaining user will see in their feed. “Ninety-nine times out of 100, that title is both factual and more clear than the original one that the user is asking about,” says Systrom. At first, the rewritten headline is shown only to the user who flagged it. But if several users report a clickbaity title, all of Artifact’s users will see the AI-generated headline, not the one the publisher provided. Eventually, the system will figure out how to identify and replace offending headlines without user input, Systrom says. (GPT-4 can do that on its own now, but Systrom doesn’t trust it enough to turn the process over to the algorithm.)
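The flag-then-rewrite flow Systrom describes could be gated roughly like this. The report threshold, prompt, and function names are assumptions for the sketch, not Artifact’s real code.

```python
# Hedged sketch of the flag-then-rewrite flow described above.
from openai import OpenAI

client = OpenAI()
GLOBAL_SWAP_THRESHOLD = 3  # assumed report count before everyone sees the rewrite

def rewrite_headline(article_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Write one factual, descriptive headline for this "
                        "article. No teasers, no withheld information."},
            {"role": "user", "content": article_text},
        ],
        temperature=0.0,  # favor a plain, reproducible title
    )
    return resp.choices[0].message.content.strip()

def headline_for(user_id: str, story: dict) -> str:
    reports = story["clickbait_reports"]  # set of user ids who flagged it
    if user_id in reports or len(reports) >= GLOBAL_SWAP_THRESHOLD:
        # Flaggers see the rewrite immediately; past the threshold, everyone does.
        story.setdefault("ai_headline", rewrite_headline(story["body"]))
        return story["ai_headline"]
    return story["publisher_headline"]
```

Caching the rewritten title (the `setdefault` call) would mean GPT-4 is invoked once per story rather than once per reader, which matters at feed scale.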