OpenAI leaders propose an international AI regulatory body

Image credits: Stephanie Reynolds/AFP/Getty Images

Artificial intelligence is developing fast enough, and the risks it poses are clear enough, that the OpenAI leadership believes the world needs an international regulatory body similar to the one that governs nuclear power — and fast. But not too fast.

In a post on the company’s blog, OpenAI founder Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever explain that the pace of innovation in AI is so fast that we can’t expect existing authorities to adequately rein in the technology.

While there is a certain amount of self-congratulation here, it is clear to any impartial observer that the technology, most visible in OpenAI’s wildly popular ChatGPT conversational agent, presents a unique threat as well as an invaluable resource.

The post, which is typically light on details and commitments, nonetheless admits that AI will not manage itself:

We need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society.

We are likely to eventually need something like an [International Atomic Energy Agency] for superintelligence efforts. Any effort above a certain capability (or resource, e.g. compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.

The International Atomic Energy Agency is the official United Nations body for international cooperation on nuclear energy issues, though of course, like other such organizations, it can lack punch. An AI-governing body built on this model may not be able to step in and flip the switch on a bad actor, but it can establish and track international standards and agreements, which is at least a starting point.

The OpenAI post notes that tracking the compute power and energy usage devoted to AI research is one of the relatively few objective measures that can and should be reported and tracked. While it may be difficult to say whether AI should or shouldn’t be used for this or that, it can be useful to say that the resources devoted to it should, as in other industries, be monitored and audited. (The company suggested an exemption for smaller companies so as not to stifle the green shoots of innovation.)

Leading AI researcher and critic Timnit Gebru said something similar just today in an interview with The Guardian: “Unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.”

OpenAI has knowingly embraced this threat, to the dismay of many who hoped it would live up to its name, but at least as the market leader it is also calling for real action on the governance side — beyond hearings like the recent one, where senators line up to deliver re-election speeches that end in question marks.

While the proposal amounts to “maybe we should do something,” it is at least a conversation starter in the industry, and it signals support for doing that thing from the world’s largest single AI brand and provider. Public oversight is badly needed, but, as the post concedes, “we don’t yet know how to design such a mechanism.”

And though the company’s leaders say they support tapping the brakes, there are no plans to do so just yet — both because they don’t want to give up the enormous potential to “improve our societies” (not to mention the bottom line), and because there is the risk that bad actors will put their foot squarely on the gas.

