It’s unthinkable for any company, anywhere, today to not have a robust website and mobile presence, right? OpenAI CEO Sam Altman says it’s becoming similarly unthinkable for AI-based intelligence to not be baked into every product and service.
“This will be a big shift in how we interact with the world and technology,” said Altman, whose company created the wildly popular AI chatbot ChatGPT.
The day before speaking with U.S. lawmakers about the oversight of generative AI, the AI pioneer chatted with Salesforce CEO Marc Benioff.
Among the hot topics:
- The non-obvious downside of eradicating hallucinations
- Government oversight and what lawmakers get wrong, and right, about AI
- What a dramatically more capable GPT looks like
- What movie got it (mostly) right about AI
The conversation has been edited for clarity and conciseness.
Marc Benioff: You’ve traveled the world recently meeting with business and government leaders about AI. What was your biggest surprise seeing what others are doing with generative AI?
Sam Altman: The level of enthusiasm, hopefulness, and excitement around the world, of course balanced with making sure we successfully address the potential downsides. At first I thought maybe [gen AI] is just a tech, Silicon Valley phenomenon, but to see what people all around the world are doing with the technology and how they’ve incorporated it into their lives and how interested and hopeful they were was really cool.
What country stood out to you as being a great leader in artificial intelligence?
That was another positive surprise. The quality of work happening everywhere was really something. I think the U.S. will be the greatest leader in artificial intelligence. We’re blessed to have so many things in our favor but this will be a global effort.
What’s been your biggest surprise over the last seven to eight years at OpenAI?
GPT-4 has only been out for six months, which is a good reminder of how fast things have been happening. The biggest surprise is just that it’s all working. When we got [OpenAI] together in early 2016 and said, ‘Alright, we’re going to build artificial general intelligence,’ well, that’s great, but then you meet cold, hard reality. We had a lot of stuff to figure out. In this case it was particularly hard. We had conviction and a path laid out by my co-founder and our chief scientist. The consensus in the world was very much [that] this is not going to work. Through the effort of a lot of enormously talented people, it did.
When did you know this was going to be a success?
Sometime after GPT-2, around 2019.
What’s the most complex part of dealing with the hallucination problem?
There are a lot of technical challenges, but one of the non-obvious things is that a lot of the value from these systems is heavily related to the fact that they do hallucinate. If you just want to look something up in a database, we already have good tools for that. But the fact that these AI systems can come up with new ideas, can be creative, that’s a lot of the power. You want them to be creative when you want and factual when you want, but if you do the naive thing and say ‘never say anything that you’re not 100% sure about,’ you can get a model to do that, but it won’t have the magic that people like so much.
What’s the scariest thing you’ve seen in the lab?
Nothing super scary yet. We know it will come. We won’t be surprised when it does. But with the current models, nothing that scary.
You said recently that large language models were a reflection back on human intelligence. What were you trying to say?
Intelligence is an emergent property of matter to a degree we don’t contemplate enough. It’s something about the ability to recognize patterns in data, the ability to hallucinate, to create and come up with novel ideas and have a feedback loop to test those. We can look at every neuron in GPT-4, every connection.
We can predict with confidence that the GPT paradigm is going to get more capable, but exactly how is a little bit hard to say. For example, why a new capability emerges at one scale and not another, we don’t yet understand. If we assume this [current] GPT paradigm is the only breakthrough that’s going to happen, we’re going to be unprepared for very major new things that do happen.
The most important [capability] is the ability to reason. GPT-4 can reason a little bit, but not in the way we use and understand that term. When we have models that can discover new scientific knowledge at a phenomenal rate, if we let ourselves imagine a year where we make as much scientific progress as we did in the previous decade, and think about what that would do to quality of life, that’s pretty transformative.
What’s the next step?
There are obvious ones and speculative ones. Obviously, the models are going to get dramatically more capable, customizable and reliable. In the same way that the internet and mobile seeped everywhere, that’s going to happen with intelligence. It will be unthinkable to not have intelligence integrated into every product and service. It will just be an expected, obvious thing. This will be a big shift in how we interact with the world and technology.
What does a dramatically more capable GPT model look like?
One example: a lot of people use ChatGPT to help them write code. Maybe today it writes 25% of the code; eventually that could go as high as 90%. At some point it’s letting you do things you just couldn’t do before. These quantitative shifts lead to qualitative shifts. If you have better tools and can operate at a higher level of abstraction, you can do dramatically more. The cycle time and iterative feedback loop will change what a single programmer is capable of. That will change what a single person running a one-person company is capable of.
That amplification of one individual’s capabilities means one person with a good idea and a good understanding of what a customer needs is going to be able to execute on it with what would have taken complex, many-person teams before.
What’s your favorite movie featuring AI?
Her (the 2013 story of a man who falls in love with his AI virtual assistant). The idea of a conversational language interface was incredibly prophetic. It’s unfair to dunk on old sci-fi movies for all the parts they got wrong. It’s amazing how much they get right, like the interface in Her. But it would be great for Hollywood to have some new tropes [beyond AI gone rogue].
What’s your message to lawmakers and what’s been your biggest surprise dealing with them?
Our leaders are taking this issue [of AI safety, security, privacy and equality] extremely seriously. They understand the need to balance the potential upsides with the potential downsides. But nuance is required here, and I did not go in with particularly high hopes of that nuance being held appropriately. I don’t know how it’s going to play out but people seem very genuine in caring, wanting to do something and get it right. Our role is to explain AI to them as best we can, realizing we don’t have all the answers and we might be wrong.
What’s one goal you have for these discussions with lawmakers?
Get a framework in place to help us deal with short-term challenges and long-term ones. Even if it’s imperfect, starting with something now would be great. Solving this legislatively is quite difficult given the rate of change. Even if it’s just focused on insight and not oversight, the government can build up the muscle. A new agency [overseeing AI] would be appropriate.
What part of this are they getting wrong?
You show people exponential capability and they believe it, but they don’t believe that it will keep going up exponentially. They believe it will level off. It’s a very difficult bias to overcome, because once you accept it, you have to accept radical change in all parts of life.
Are we moving to a surveillance economy and do you think AI will accelerate the move into greater levels of surveillance?
I do. One of the things I struggle with is I don’t see a world where, if AI is as powerful as we think and people can do significant harm with it, we have less surveillance, and I don’t think that’s a good thing. I’ve talked to a lot of people about this and I have not yet heard of a great solution.
Can you give other examples of the remarkable things generative AI will enable?
What’s happening with education is incredibly gratifying. The ingenuity of teachers, of entrepreneurs building ed tech companies, and of the students themselves, finding new ways to use ChatGPT to learn is quite remarkable. We see a path forward with a combination of humans and AI, enabling one-on-one tutoring to everybody in the world.
In healthcare, AI can help cure diseases, but in terms of treatment, again this hybrid of the AI and doctor together, means we can offer something far beyond what’s possible today.
In creative work, it’s quite remarkable to see what a visual artist can do with the latest image generation tools.
Are you surprised?
In some sense yes. If you had asked me 10 years ago to predict the order that AI was going to disrupt industries, I would have said physical labor first, cognitive labor second and, maybe never but certainly last, creativity. For creativity to be disrupted first was surprising. It’s an area that can tolerate the flaws in our current systems quite well.
We’re going to get better art than we’ve ever had before but still, AI will be a tool that amplifies humans, not replaces them.