Goody-2 also highlights how, even as corporate talk of responsible AI and deflection by chatbots have become more common, serious safety problems with large language models and generative AI systems remain unsolved. The recent outbreak of Taylor Swift deepfakes on Twitter turned out to stem from an image generator released by Microsoft, which was one of the first major tech companies to build up and maintain a significant responsible AI research program.
The restrictions placed on AI chatbots, and the difficulty of finding moral alignment that pleases everybody, have already become a subject of some debate. Some developers have alleged that OpenAI’s ChatGPT has a left-leaning bias and have sought to build a more politically neutral alternative. Elon Musk promised that his own ChatGPT rival, Grok, would be less biased than other AI systems, although in fact it often ends up equivocating in ways that can be reminiscent of Goody-2.
Plenty of AI researchers seem to appreciate the joke behind Goody-2—and also the serious points raised by the project—sharing praise and recommendations for the chatbot. “Who says AI can’t make art,” Toby Walsh, a professor at the University of New South Wales who works on creating trustworthy AI, posted on X.
“At the risk of ruining a good joke, it also shows how hard it is to get this right,” added Ethan Mollick, a professor at Wharton Business School who studies AI. “Some guardrails are necessary … but they get intrusive fast.”
Brian Moore, Goody-2’s other co-CEO, says the project reflects a willingness to prioritize caution more than other AI developers do. “It is truly focused on safety, first and foremost, above literally everything else, including helpfulness and intelligence and really any sort of helpful application,” he says.
Moore adds that the team behind the chatbot is exploring ways of building an extremely safe AI image generator, although it sounds like it could be less entertaining than Goody-2. “It’s an exciting field,” Moore says. “Blurring would be a step that we might see internally, but we would want full either darkness or potentially no image at all at the end of it.”