AI regulation should not repeat the same mistakes Congress made at the dawn of the social media age, lawmakers made clear at a Senate Subcommittee on Privacy and Technology hearing Tuesday.
During the hearing, where OpenAI CEO Sam Altman testified for the first time, senators from both sides of the aisle stressed the need to establish guardrails for the powerful technology before its greatest harms emerge. They repeatedly compared the risks of AI to those of social media, while recognizing that AI is capable of harm at greater speed, at larger scale, and of very different kinds.
Lawmakers haven’t settled on specific proposals, though they’ve floated ideas such as a new agency to regulate AI or a licensing regime for the technology.
The hearing came after Altman met with a receptive group of House lawmakers at a private dinner on Monday, where the CEO walked through the risks and opportunities of the technology. Tuesday’s hearing took a somewhat skeptical but not entirely combative tone toward the industry witnesses on the panel, which included Altman and IBM chief privacy and trust officer Christina Montgomery, along with NYU professor emeritus Gary Marcus.
Subcommittee Chair Richard Blumenthal (D-Conn.) opened the hearing with a recording of opening remarks that, he later revealed, had been generated by AI, both the text and the voice itself. He offered a plausible account of why ChatGPT might have written the remarks the way it did, citing Blumenthal’s track record on data privacy and consumer protection issues. But, he said, the stunt would be far less amusing if the technology were used to say something harmful or untrue, such as a false endorsement of a hypothetical Ukrainian surrender to Russia.
Blumenthal compared this moment to an earlier one that Congress had let pass.
“Congress failed to meet the moment on social media,” Blumenthal said in his written remarks. “Now we have an obligation to do so on AI before the threats and risks become real.”
Ranking member Josh Hawley, R-Mo., noted that Tuesday’s hearing could not have happened a year earlier because AI had not yet entered the public consciousness in such a big way. He envisioned two paths the technology could take, likening its future to either the printing press, which empowered people around the world through the dissemination of information, or the atomic bomb, which he described as “a huge technological breakthrough, but the consequences: severe, terrible, still haunt us to this day.”
Many lawmakers brought up Section 230 of the Communications Decency Act, a law that has served as the tech industry’s legal liability shield for more than two decades. The law, which helps speed the dismissal of lawsuits against tech platforms when they’re based on other users’ speech or on the companies’ content moderation decisions, has drawn criticism recently from both sides of the aisle, though with different motivations.
“We must not repeat the mistakes of the past,” Blumenthal said in his opening remarks. “For example, Section 230. Forcing companies to think ahead and take responsibility for the ramifications of their business decisions can be the most powerful tool of all.”
Passing Section 230 in the early days of the Internet was essentially Congress’s decision to “absolve the industry of liability for a period of time as it came into being,” said Sen. Dick Durbin, D-Illinois, who chairs the full committee.
Altman agreed that a new system was needed to deal with artificial intelligence.
“For a technology that is so new, we need a new framework,” Altman said. “Certainly companies like ours have a great deal of responsibility for the tools we bring out into the world, but so do the users of the tools.”
Altman repeatedly drew praise from lawmakers on Tuesday for his candor with the committee.
Durbin said it was refreshing to hear an industry executive call for regulation, saying he couldn’t remember other companies asking so eagerly for their own industry to be regulated. Big tech companies like Meta and Google have repeatedly called for national privacy regulation, among other tech laws, though such appeals often follow regulatory momentum in the states or elsewhere.
After the hearing, Blumenthal told reporters that comparing Altman’s testimony to that of other CEOs was like “night and day.”
“Not just in words and rhetoric, but in actual actions and his willingness to engage and commit to specific action,” Blumenthal said. “Some of the big tech companies are under consent decrees, which they have violated. That’s a far cry from the kind of cooperation Sam Altman promised. And given his track record, I think it sounds very sincere.”