AI regulation train has now left the station

The world has taken its first concerted steps toward taking the reins of artificial intelligence and developing responsible AI by managing its risks while maximizing its benefits. From Washington to the UN, last week saw an accelerated push to catch up with AI, in the hope that regulation will one day be ahead of the technology, although that is doubtful given the lightning speed of AI's progress.
The biggest news came from the White House, where President Joe Biden met with the executives of seven of the technology companies that are leading the development of AI, including Google, OpenAI and Microsoft. He succeeded in securing a voluntary pledge and commitment to implement safeguards and follow guidelines that make AI “safe, secure and trustworthy.”
The president considered the speed of development of AI “an astounding revelation,” adding: “We will see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years.” The commitments made by the AI companies, the president said, “are a promising step, but we have a lot more work to do together.”
Not everyone was impressed by the commitments and critics said these companies should not be at the center of the conversation on regulating AI because they have a profit motive. Others saw in prioritizing these companies a discrimination against the smaller firms that are working on AI.
Regardless of the criticisms, what the administration has secured is considered historic in reining in the AI companies.
The White House emphasized the responsibility of the AI companies “to ensure their products are safe,” and it announced that they “have chosen to undertake these commitments immediately.”
But what did the AI companies commit to? The White House said they committed to three principles: safety, security and trust. Safety means that they will “ensure products are safe before introducing them to the public;” security is achieved through performing security tests partly by independent experts and by building systems that put security first; and, in so doing, they earn the public’s trust.
The president outlined the steps that his administration has taken to make AI safe. These include last October’s issuing of a “first of its kind” AI bill of rights, detailing the administration’s guidelines for AI systems. In February, the president also signed an executive order to “direct agencies to protect the public from algorithms that discriminate.” The order included new equity obligations for AI systems developed and used by federal agencies. And, in May, the administration “unveiled a new strategy to establish seven new AI research institutions to help drive breakthroughs in responsible AI innovations.”
During a UN Security Council meeting last week, the US representative also encouraged member states to endorse a “proposed Political Declaration on Responsible Military Use of AI and Autonomy.” It deals with the principles of using AI in the military domain and stresses that the military use of AI capabilities must be accountable to a human chain of command. The dangers of using AI in the military domain without human control are what keep those who are working on regulating AI awake at night.

The president vowed to continue, in the weeks ahead, to “take executive action to help America lead the way toward responsible innovation.” But the White House knows that commitments, if not codified in legislation, will not be enough to regulate AI and the next step has to focus on Congress.
While the White House and the Democratic leadership in Congress say they are prioritizing legislation to regulate AI on a bipartisan basis, it is not yet clear whether they will be successful in a politically divided Congress.
The good news is that the administration is treating AI as a global issue that calls for global collaboration to rein it in. The White House expressed a willingness to “work with allies and partners to establish a strong framework to govern the development and use of AI.” The White House released a list of 21 countries that the administration consulted with regarding the voluntary commitments.
The Biden administration said it wants to make sure that these commitments “support and complement” other processes underway for AI governance, like Japan’s leadership of the G7's Hiroshima AI Process, India’s work as chair of the Global Partnership on Artificial Intelligence and the UK’s autumn summit on AI safety, which British Foreign Secretary James Cleverly announced during the UNSC meeting on the issue last week.
At the UN’s first Security Council meeting on AI, the concern among member states was about the impact of AI on peace and security. They called for a governance framework that is ethical and responsible. UN Secretary-General António Guterres told the council that the organization is the “ideal place” to adopt global standards and approaches to AI.
He welcomed calls by some member states “for the creation of a new United Nations entity” for collective efforts to govern AI in order “to maximize the benefits of AI for good” and “to mitigate existing and potential risks,” through administering an “internationally agreed mechanism of monitoring and governance.” As a first step, Guterres said he is convening “a multistakeholder High-Level Advisory Board for Artificial Intelligence that will report back on the options for global AI governance by the end of this year.”
Tech executive Jack Clark, of AI company Anthropic, told the UNSC that big tech companies cannot be trusted when it comes to the safety of AI systems. He spoke about the importance of “robust and reliable evaluation of AI systems,” stressing that a lack of such evaluation would “run the risk of regulatory capture compromising global security and handing over the future to a narrow set of private sector actors.”
The problem, as Clark and others have pointed out, is that technology companies and not governments are the ones with the “sophisticated computers,” the data and the knowledge to build AI systems and do the evaluations.
The Chinese ambassador supported a central role for the UN in establishing principles for AI, as well as regulating its development “to prevent this technology from becoming a runaway horse.”
The AI companies’ pledge has raised important questions about how the US administration can hold these companies accountable if they do not abide by their commitments, and whether they can be trusted to regulate themselves. Jeff Zients, the White House chief of staff, told National Public Radio: “We will use every lever that we have in the federal government to enforce these commitments and standards.” The AI principles and regulations train has now left the station, but all indications point to an arduous ride for conductors and passengers alike.

Dr. Amal Mudallali is an American policy and international relations analyst.

Disclaimer: Views expressed by writers in this section are their own and do not necessarily reflect Arab News' point of view