“We Better Get On It”: The Political Trap That Could Slow Down National Artificial Intelligence Legislation

Republicans worried that any new rules would lead to increased censorship of conservatives, while Democrats feared they would open the floodgates to hate speech and misinformation online. Now those arguments are beginning to emerge in an entirely new debate.

From repeated invocations of Section 230 during OpenAI CEO Sam Altman’s Senate testimony Tuesday to a row over misinformation and censorship at a separate Senate hearing on the government’s use of automated systems, familiar social media battle lines are in danger of being redrawn in Congress’s emerging debate over artificial intelligence.

“The same muscle memory is coming back,” said Nu Wexler, a partner at PR firm Seven Letter and a former Democratic congressional staffer who has worked at Google, Facebook and other tech companies.

Reviving the politics of those past tech fights will make it difficult for the two sides to come together on AI policy. And even if they manage to stay united, lawmakers will likely need to look beyond censorship, disinformation, political bias and other issues raised by social media if they want to set meaningful rules for AI.

One reason many lawmakers view AI through a social media lens, some on the Hill say, is a fundamental knowledge gap about a fast-moving new technology.

“Without naming names, some members of the House and Senate have no idea what they’re talking about,” said Rep. Zoe Lofgren (D-Calif.), ranking member of the House Science Committee, in an interview with Politico on Thursday.

During a hearing Tuesday of the Senate Homeland Security and Governmental Affairs Committee, ranking member Rand Paul (R-Ky.) accused the government of colluding with social media companies to deploy artificial intelligence systems that would “monitor” protected speech.

Paul later told Politico that he would not work on AI legislation with committee chair Gary Peters (D-Mich.) until Democrats acknowledge that internet censorship is a real problem.

“Everything else is window dressing,” Paul said. “We’re fine to work with [Peters] on it, but we have to see progress on defending speech.”

In a conversation with reporters after Tuesday’s hearing, Peters said he shares Paul’s concerns about AI and civil liberties. But he also stressed that AI raises issues “much broader” than disinformation alone.

“It’s a topic that we have to think about — but it’s also a very complex one,” Peters said.

The mood was less partisan during Altman’s testimony before the Senate Judiciary Subcommittee on Privacy, Technology and the Law. But the tech topics that usually provoke intense battles were still at the fore.

Senators from both parties, including Josh Hawley (R-Mo.) and Amy Klobuchar (D-Minn.), raised the possibility of AI systems promoting misinformation about elections online. Others, including Judiciary chair Dick Durbin (D-Ill.) and ranking member Lindsey Graham (R-S.C.), questioned Altman about Section 230 of the Communications Decency Act, the provision that protects online platforms from legal liability for content posted by users. Attempts to reform the 27-year-old internet law for the modern age of social media have foundered time and again over partisan disagreements about censorship, misinformation and hate speech. And Section 230 may not even apply to AI systems — an idea Altman repeatedly tried to get across to senators on Tuesday.

“It’s tempting to use the framework of social media, but this is not social media,” Altman said. “It’s different, so the response we need is different.”

Lofgren, whose congressional district includes much of Silicon Valley, shares Altman’s sentiment that Section 230 “doesn’t really apply” to AI. “Apples and oranges, really,” she said.

And if lawmakers hope to tackle politically charged topics like disinformation, Lofgren said, a federal data privacy bill would be more effective than new rules on artificial intelligence. “If you want to get at manipulation, you have to get into how you manipulate, which is really the use and abuse of personal data,” the congresswoman said.

Wexler said it’s too early to tell whether congressional efforts to rein in AI will end in the same partisan deadlock that has derailed major social media rules. And while he acknowledged the warning signs, he also noted clear areas of agreement — particularly on the need for further study and more transparency in AI systems.

And while Lofgren believes Congress should stop confusing social media with artificial intelligence, she sees few indications of a similar partisan divide — at least for now. “Could it show up?” she said. “But I think everyone understands that this is a technology that can turn the world upside down, and we better get on it.”

Other observers, however, believe it is only a matter of time before the political differences that have undermined congressional unity on other tech issues emerge around AI.

“The left will say AI is hopelessly biased and discriminatory; the right will claim AI is just another ‘woke’ conspiracy against conservatives,” said Adam Thierer, senior fellow for technology and innovation at the R Street Institute, a libertarian think tank.

“The social media culture wars are about to turn into AI culture wars,” Thierer said.