Synthesia hasn’t always been considered at the sharp end of the generative AI industry. For six years, Riparbelli and his cofounders labored outside the spotlight in pursuit of their mission to invent a way to make video without using any camera equipment. Back in 2017, “there were not a lot of investors who thought that was very interesting,” says Riparbelli, who’s now 31. But then ChatGPT came along. And the Danish CEO was catapulted into London’s burgeoning AI elite alongside the founders of companies like DeepMind, owned by Google since 2014 and currently working on a ChatGPT competitor, and Stability AI, the startup behind the image generator Stable Diffusion.
In June, Synthesia announced a funding round that valued it at $1 billion. That’s not quite the $29 billion price tag OpenAI received in May, but it’s still a giant $700 million increase from two years ago, the last time investors pored over Synthesia’s business.
I meet Riparbelli over Zoom. He joins the call from his family’s vacation home on a Danish island, his childhood bunk bed in the frame behind him. Growing up in Copenhagen, Riparbelli became interested in computers through gaming and electronic music. Looking back, he believes that being able to make techno with only his laptop, from Denmark, not a place known for its clubs or music industry, was a big influence on what he does now. “It was much more about who can make great music and upload it to SoundCloud or YouTube than about who lives in Hollywood and has a dad who works in the music industry,” he says. Video, he believes, has a long way to go to reach that same point, because it still requires so much equipment. “It’s inherently restrictive because it’s very expensive to do.”
After graduation, Riparbelli got into the Danish startup scene, building what he describes as “vanilla” technologies, like accounting software. Dissatisfied, he moved to London in search of something more sci-fi. After trying his hand at crypto and VR projects, he started reading about deepfakes and found himself gripped by their potential. In 2017, he joined up with fellow Dane Steffen Tjerrild and two computer vision professors, Lourdes Agapito and Matthias Niessner, and together they launched Synthesia.
Over the past six years, the company has built a dizzying library of avatars. They’re available in different genders, skin tones, and uniforms. There are hipsters and call center workers. Santa is available in multiple ethnicities. Within Synthesia’s platform, clients can customize the language their avatars speak, their accents, even at what point in a script they raise their eyebrows. Riparbelli says his favorite is Alex, a classically pretty but unremarkable avatar who looks to be in her mid-twenties and has mid-length brown hair. There is a real human version of Alex who’s out there wandering the streets somewhere. Synthesia trains its algorithms on footage of actors filmed in its own production studios.
Owning that data is a big draw for investors. “Basically what all their algorithms need is 3D data, because it’s all about understanding how humans are moving, how they are talking,” says Philippe Botteri, partner at venture capital firm Accel, which led Synthesia’s latest funding round. “And for that, you need a very specific set of data that is not available.”