The Senate hearing on AI regulation was surprisingly friendly

The most striking thing about this week’s Senate hearing on artificial intelligence was how friendly it was. Industry representatives, most notably OpenAI CEO Sam Altman, cheerfully agreed on the need to regulate new AI technologies, while politicians seemed happy to hand responsibility for crafting the rules to the companies themselves. As Sen. Dick Durbin (D-Illinois) put it in his opening remarks, “I can’t remember when we’ve had people representing large corporations or private sector entities come before us and ask us to regulate them.”

This chumminess makes some people nervous. A number of experts and industry figures say the hearing suggests we may be heading into an era of regulatory capture in AI. If the tech giants are allowed to write the rules governing this technology, they say, it could cause a number of harms, from stifling smaller companies to producing weak regulation.

Regulatory capture could hurt smaller companies and lead to weak rules

Among the expert witnesses at the hearing were IBM’s Christina Montgomery and AI critic Gary Marcus, who also raised the specter of regulatory capture. (Marcus said the danger is that we “make it appear as if we are doing something, but it’s more like greenwashing and nothing really happens, we just keep out the little players.”) Although no one from Microsoft or Google was present, the tech industry’s de facto spokesperson was Altman.

Although Altman’s OpenAI is still called a “startup” by some, it is arguably the most influential AI company in the world. Its launch of image and text generation tools such as ChatGPT, and its partnership with Microsoft to reinvent Bing, sent shock waves through the entire tech industry. Altman himself is well positioned: able to appeal to both the VC class and AI boosters with grand promises to build superintelligent AI and, maybe one day, as he puts it, “capture the light cone of all future value in the universe.”

At this week’s hearing, he was less bombastic. Altman, too, mentioned the problem of regulatory capture but was less clear about his thoughts on licensing smaller entities. “We don’t want to slow down smaller startups. We don’t want to slow down the open source effort,” he said, adding, “We still need them to comply with things.”

Sarah Myers West, managing director of the AI Now Institute, tells The Verge she was suspicious of the licensing system proposed by many of the speakers. “I think the harm would be that we end up with a kind of superficial checkbox exercise, where companies say ‘yes, we’re licensed, we know the harms and we can carry on with business as usual,’ but face no real liability when these systems go wrong.”

“Requiring a license to train models would … further concentrate power in the hands of a few”

Other critics — particularly those who run their own AI companies — have stressed the potential threat to competition. “Regulation invariably favors incumbents and can stifle innovation,” Emad Mostaque, founder and CEO of Stability AI, told The Verge. Clément Delangue, CEO of AI company Hugging Face, tweeted a similar reaction: “Requiring a license to train models would be like requiring a license to write code. IMO, it would further concentrate power in the hands of a few and drastically slow down progress, fairness, and transparency.”

But some experts say certain forms of licensing could be effective. Margaret Mitchell, who was forced out of Google along with Timnit Gebru after co-authoring a paper on the potential harms of AI language models, describes herself as “a proponent of some amount of self-regulation, paired with top-down regulation.” She told The Verge she could see the appeal of licensing, but perhaps for individuals rather than companies.

“You could imagine that to train a model (above certain thresholds) a developer would need a ‘commercial ML developer license,’” said Mitchell, who is now chief ethics scientist at Hugging Face. “This would be a straightforward way to bring ‘responsible AI’ into a legal structure.”

Mitchell added that good regulation depends on setting standards that companies can’t easily bend to their own interests, and that this requires a nuanced understanding of the technology being assessed. She gave the example of facial recognition firm Clearview AI, which sold itself to police forces by claiming its algorithms were “100 percent” accurate. This sounds reassuring, but experts say the company used skewed tests to produce these figures. Mitchell added that she generally doesn’t trust Big Tech to act in the public interest: “Tech companies [have] demonstrated again and again that they do not see respecting people as a part of running a company.”

Even if licensing were introduced, it might not have an immediate effect. At the hearing, industry representatives often drew attention to hypothetical future harms while paying little attention to known problems AI is already enabling.

For example, researchers such as Joy Buolamwini have repeatedly identified problems of bias in facial recognition, which remains less accurate at identifying Black faces and has already produced numerous cases of wrongful arrest in the US. Despite this, AI-driven surveillance was not mentioned at all during the hearing, and facial recognition and its flaws came up only in passing.

Industry figures often stress the future harms of AI to avoid talking about current problems

AI Now’s West says this focus on future harms has become a common rhetorical trick among AI industry figures. These individuals, she says, are “positioning accountability right out into the future,” generally by talking about artificial general intelligence, or AGI: a hypothetical AI system smarter than humans across a range of tasks. Some experts suggest we are getting closer to creating such systems, but this conclusion is strongly contested.

This rhetorical trick was evident at the hearing. Discussing government licensing, OpenAI’s Altman suggested that any licenses need only apply to future systems. “Where I think the licensing scheme comes in is not for what these models are capable of today,” he said. “But as we head towards artificial general intelligence… that’s where I personally think we need such a scheme.”

Experts have compared Congress’ (and Altman’s) proposals unfavorably to the EU’s forthcoming AI Act. The current draft of that legislation does not include similar licensing mechanisms, but it does classify AI systems by their level of risk and imposes varying requirements for safeguards and data protection. More notable, though, are its explicit bans on known and existing harmful uses of AI, such as predictive policing algorithms and mass surveillance, which have drawn praise from digital rights experts.

As West says, “This is where the conversation has to head if we want any kind of meaningful accountability in this industry.”
