The leadership of OpenAI thinks the world urgently needs an international body to regulate AI, akin to the one that oversees nuclear power, because the technology is evolving too quickly and the dangers it may pose are too great. Just not too fast.
Sam Altman, OpenAI’s co-founder and CEO, Greg Brockman, its president, and Ilya Sutskever, its chief scientist, argue in a blog post for the company that the pace of innovation in artificial intelligence is so fast that we can’t expect existing regulatory frameworks to rein in the technology adequately.
While there is a certain amount of self-congratulation here, it is evident to any objective observer that the technology, most visibly in OpenAI’s wildly successful ChatGPT conversational agent, represents both a distinct threat and a priceless benefit.
The post, typically light on commitments and detail, nevertheless concedes that AI is not going to manage itself:
We need some degree of coordination among the leading development efforts to ensure that the development of AI happens in a way that lets us both maintain safety and help these systems integrate smoothly with society.
Superintelligence research is likely to eventually require a body akin to the [International Atomic Energy Agency]; any effort above a particular capability (or resource, such as compute) threshold will need to be overseen by a global authority that can inspect systems, demand audits, check for compliance with safety standards, impose limits on deployment and security levels, and so on.
The International Atomic Energy Agency (IAEA) is the UN’s official body for intergovernmental cooperation on nuclear power issues, though, like other such organizations, it can naturally want for punch. An AI-governing body built on this model may not be able to step in and stop a bad actor, but it can establish and track international norms and agreements, which is at least a starting point.
In its post, OpenAI points out that the amount of computing power and energy devoted to AI research is one of the few objective metrics that could be published and tracked. While it may be hard to say whether AI should or shouldn’t be used for a given purpose, it is possible to say that the resources dedicated to it, as in other industries, should be monitored and audited. (The company suggested exempting smaller firms so as not to choke off the green shoots of innovation.)
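To see why compute is an attractive metric, consider how simply a threshold-based reporting rule could be stated. The sketch below (not from OpenAI’s post) uses the widely cited approximation that training a model costs roughly 6 × parameters × training tokens in floating-point operations; the reporting threshold and function names are hypothetical, purely for illustration.

```python
# Hypothetical sketch of a compute-based reporting rule.
# Uses the common approximation C ≈ 6 * N * D FLOPs, where N is the
# parameter count and D is the number of training tokens.
# The threshold value below is invented for illustration.

REPORTING_THRESHOLD_FLOPS = 1e25  # hypothetical audit threshold


def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute via C ≈ 6 * N * D."""
    return 6.0 * parameters * tokens


def must_report(parameters: float, tokens: float) -> bool:
    """Would a training run of this size cross the (invented) threshold?"""
    return estimated_training_flops(parameters, tokens) >= REPORTING_THRESHOLD_FLOPS


if __name__ == "__main__":
    # A 70B-parameter model trained on 2T tokens: about 8.4e23 FLOPs,
    # which falls below the illustrative 1e25 threshold.
    print(estimated_training_flops(70e9, 2e12))  # 8.4e+23
    print(must_report(70e9, 2e12))               # False
```

The appeal of a rule like this is that, unlike judgments about a model’s purpose or capability, parameter counts, token counts, and energy draw are externally measurable quantities that an auditor could verify.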
Timnit Gebru, a renowned AI researcher and critic, recently said something along these lines in an interview with The Guardian: “Companies are not going to simply self-regulate. Regulation is necessary, as is something other than pure profit.”
To the dismay of many who had hoped it would live up to its name, OpenAI has visibly embraced the latter; but as the market leader, it is also calling for real action on governance, beyond hearings like the most recent one, where Senators lined up to deliver reelection speeches that ended in question marks.
Even though the suggestion amounts to “maybe we should, like, do something,” it at least starts a conversation in the industry and signals support from the world’s biggest AI brand and supplier for taking that action. Public oversight is urgently needed, the post admits, but “we don’t yet know how to design such a mechanism.”
And though the company’s executives say they support applying the brakes, there are no plans to do so just yet, both because they don’t want to give up the enormous potential “to improve our societies” (along with bottom lines) and because there’s a chance that bad actors have their foot firmly on the gas.