OpenAI CEO Sam Altman, whose company developed the widely used ChatGPT conversational software, warned federal lawmakers Tuesday that artificial intelligence could pose significant risks to the world if the technology goes bad.
“We understand that people are worried about how it might change the way we live. So do we. But we believe we can and must work together to identify and manage potential downsides, so we can all enjoy the huge positives,” Altman said.
Altman also acknowledged the risks in a March interview with ABC News’ Rebecca Jarvis, saying he was “a little scared” by the kind of technology his own company was developing. Despite the risks, Altman said that AI could also be “the greatest technology humanity has yet developed”.
“Start Here” host Brad Mielke spoke to Gizmodo technology reporter Thomas Germain, who broke down Altman’s testimony and discussed the potential risks and the challenges of proposing and implementing regulations for the technology.
Brad Mielke: Thomas, can you help me break down what happened at this hearing?
Thomas Germain: Yes, it was a little unusual. Sam Altman went before Congress and basically begged lawmakers to protect the public from the technology his own company is creating, which might sound a little weird if you take it at face value. One of the things I find interesting is that it’s actually not unusual for the tech industry to demand to be regulated. This is exactly what we’ve seen with privacy issues.
Some of the biggest supporters of privacy laws are Microsoft, Google, and Meta, in fact, because it gives tech companies a huge advantage if there are laws they can comply with. That way, if something goes wrong, they can just say, “Well, we were following the rules. It’s the government’s fault for not passing better regulation.”
It was an interesting hearing. And one of the things that was unusual about it was how friendly and positive everything was, for the most part. You know, if you’ve seen any of the hearings with other tech CEOs, they’re usually very combative. But Sam Altman was able to connect with these lawmakers, and they agree on one thing: AI should be regulated. But how, exactly? Nobody really seems to know. Some very vague proposals were put forward, but it remains an open question what it would even mean to regulate AI.
Mielke: It feels like that Spider-Man meme, where everyone is pointing at everyone else. And this might sound like a stupid question, even though it probably isn’t, since you just raised it: what can you even regulate? When it comes to artificial intelligence, what’s even on the table right now?
Germain: Yeah, that’s a really good question. And the fact that there isn’t a good answer says a lot about the state of the technology, right? We have no idea what this technology can do. I’ve talked to people who run the companies at the forefront of building it, and if you ask them how far it will go, they really have no idea.
We don’t know what the hard technical limitations of these tools are. We really don’t know whether they can replace all human labor, as we’re told we’re supposed to fear. But there are some things Congress can do. I think the most important thing when it comes to regulating AI is transparency, right? The public, or at least regulators, need to know what datasets these AI models are being trained on, because that’s where problems can creep in. If you train an AI on a dataset that includes the entire internet, for example, it will pick up a lot of racism and hate speech and other unpleasant things, and it will spit those right back out.
Mielke: Oh, because the way AI works is basically, “Hey, chatbot, take all these books and use them to teach yourself how to talk.” But if you’re not using the right books, or only a very narrow set of books, suddenly you run into problems.
Germain: Yeah, that’s one of the interesting things about this kind of technology, right? I think people have this image of computers as really smart, better than people. But really, it’s garbage in, garbage out. Whatever you feed the AI, it will spit something similar back out. And we’ve seen that with other technologies, where they just replicate the biases that have been baked in.
Another big one is copyright. We don’t really have a good answer as to whether the artists whose work is used to train one of these models should be compensated, or whether they have any ownership over what tools like ChatGPT produce. Congress really needs to answer that question, and if it doesn’t, the people who end up answering it will be the Supreme Court. Our system is not designed for the courts to write the laws. That’s the job of Congress. So they really need to step up there.
And the last thing, I think, is to single out the individual areas where the risks are particularly high. That’s the approach we’ve seen in the European Union’s proposals to regulate AI: specific rules for things like employment decisions, banking, healthcare, the military, and the police. How can and should police be allowed to use this technology? That’s something we’ve already seen play out with facial recognition, and it went completely off the rails.
Mielke: I was about to ask for an example, because it sounds like, if you’re talking about hiring, it would be sifting through all these resumes and figuring out who’s the best person for the job. But you might be screening out a whole group of people who shouldn’t be dismissed that way.
Germain: That’s absolutely true. Or maybe, you know, the person who designs the AI has a preference for male employees, so when the AI goes through and selects the most qualified candidate, it ends up picking men. And it’s not even necessarily about any one person’s bias, right? If you look at employment history, you’ll find that men are paid at higher rates and are more likely to be hired for certain types of jobs. If you train an AI on a dataset of all the current employees in the world, it will end up replicating problems that already exist in our society.
Mielke: So, as you say, there are some things the government could regulate. Will it actually regulate them? Because I have to say, listening to the legislators here, and we’ve talked before about how many of these senators are older, they don’t seem particularly eager to get involved in this.
Germain: No. We’ve been talking about privacy regulation for the better part of a decade, and we’re no closer to a law today than we were two years ago. And I think that speaks to a broader issue, which is that a lot of the problems with AI are problems with society. This technology can make the rich even richer. It can make the poor even poorer. It can replicate many of society’s worst impulses, from misinformation to discrimination and any number of other issues. And Congress hasn’t addressed those issues as they exist today, let alone as the technology will look in the future. So in terms of regulation, I’d start with the problems we actually have, versus speculative assumptions about what this technology is going to become. Will Congress do that? I’m not optimistic. They don’t seem able to get anything done, much less wrap their minds around a completely new technology.
Mielke: So Congress is kind of saying, “Would you like to take the lead on this?” And the head of OpenAI is like, “No, no, I already have a job.” Well, Thomas Germain from Gizmodo, thank you very much.
Germain: Thanks for having me.