We’re talking about AI in a very nuts-and-bolts way, but a lot of the discussion centers on whether it will ultimately be a utopian boon or the end of humanity. What’s your stance on those long-term questions?
AI is one of the most profound technologies we will ever work on. There are short-term risks, medium-term risks, and long-term risks. It’s important to take all those concerns seriously, but you have to balance where you put your resources depending on the stage you’re in. In the near term, state-of-the-art LLMs have hallucination problems—they can make things up. There are areas where that’s appropriate, like creatively imagining names for your dog, but not “what’s the right medicine dosage for a 3-year-old?” So right now, responsibility is about testing it for safety and ensuring it doesn’t harm privacy or introduce bias. In the medium term, I worry about whether AI displaces or augments the labor market. There will be areas where it will be a disruptive force. And there are long-term risks around developing powerful intelligent agents. How do we make sure they are aligned to human values? How do we stay in control of them? To me, they are all valid things.
Have you seen the movie Oppenheimer?
I’m actually reading the book. I’m a big fan of reading the book before watching the movie.
I ask because you are one of the people with the most influence on a powerful and potentially dangerous technology. Does the Oppenheimer story touch you in that way?
All of us who are in one way or another working on a powerful technology—not just AI, but genetics like Crispr—have to be responsible. You have to make sure you’re an important part of the debate over these things. You want to learn from history where you can, obviously.
Google is an enormous company. Current and former employees complain that the bureaucracy and caution have slowed them down. All eight authors of the influential transformer paper, which you cite in your letter, have left the company, with some saying Google moves too slowly. Can you mitigate that and make Google more like a startup again?
Anytime you’re scaling up a company, you have to make sure you’re working to cut down bureaucracy and staying as lean and nimble as possible. There are many, many areas where we move very fast. Our growth in Cloud wouldn’t have happened if we didn’t scale up fast. I look at what the YouTube Shorts team has done, I look at what the Pixel team has done, I look at how much the search team has evolved with AI. There are many, many areas where we move fast.
Yet we hear those complaints, including from people who loved the company but left.
Obviously, when you’re running a big company, there are times you look around and say, in some areas, maybe you didn’t move as fast—and you work hard to fix it. [Pichai raises his voice.] Do I recruit candidates who come and join us because they feel like they’ve been in some other large company, which is very, very bureaucratic, and they haven’t been able to make change as fast? Absolutely. Are we attracting some of the best talent in the world every week? Yes. It’s equally important to remember we have an open culture—people speak a lot about the company. Yes, we lost some people. But we’re also retaining people better than we have in a long, long time. Did OpenAI lose some people from the original team that worked on GPT? The answer is yes. You know, I’ve actually felt the company move faster in pockets than even what I remember 10 years ago.