The massive economic opportunity of artificial intelligence is currently driving much market excitement, with many investors assuming the nascent technology will turbocharge productivity and rewire the global economy in the years to come.
However, the risks of AI must also be a key global discussion point. One prominent computer scientist, Geoffrey Hinton, asserts that the technology could pose a “more urgent” threat to humanity than climate change.
Key business leaders such as Elon Musk have also expressed alarm, calling for an immediate pause in the development of some systems. The Tesla chief executive has often warned about the dangers he perceives in advanced AI systems, even signing an open letter warning that “out of control” advancement could “pose profound risks to society and humanity.”
This topic was a key theme at last week’s global AI summit, hosted by South Korean President Yoon Suk Yeol following a similar event convened by UK Prime Minister Rishi Sunak last year. Like the UK conference, the Seoul event brought together world leaders, tech business chiefs and academics to discuss how best to regulate the rapidly developing technology, as well as how to promote greater digital inclusion and sustainable growth through its adoption.
Perhaps the key legacy of last week’s event will be an agreement to develop AI safety institutes to align research on machine learning standards and testing. It was reached by 10 countries (France, Germany, Italy, the UK, the US, Singapore, Japan, South Korea, Australia and Canada) plus the EU. This builds on last year’s UK event, which concluded with the Bletchley Declaration on AI safety, signed by more than 25 powers, including the US, China and the EU.
While South Korea, the UK and France — the last of which will host a third AI safety event in 2024 or 2025 — are seeking to lead internationally on this agenda, they are not alone. Separately, for instance, Chinese and US officials recently held their own meeting on AI security in Geneva following a summit last year in San Francisco between Presidents Joe Biden and Xi Jinping.
The Geneva meeting saw top officials, including US National Security Adviser Jake Sullivan and Chinese Foreign Minister Wang Yi, discuss how best to reduce the risks of AI, as well as how to seize the opportunities the technology offers. The talks did not seek to foster technical collaboration, but focused instead on how best to manage risks in the context of existing domestic efforts to promote this agenda.
Last year, Biden signed an executive order requiring the developers of AI systems that pose risks to national security, the economy or public health to share the results of safety tests with the US government, under the Defense Production Act, before those systems are released to the public. China is also moving forward with its own AI regulatory frameworks.
Given their superpower status, Washington and Beijing have a special interest in this topic. However, as last week’s summit underlined yet again, this is increasingly a shared international agenda, especially in the West and Asia-Pacific.
For instance, the EU has passed comprehensive AI legislation in recent months that aligns with wider digital regulations, such as the General Data Protection Regulation and the Digital Services Act. At the multilateral level, the G7, under Japan’s chairmanship last year, sought to agree joint principles for firms developing advanced AI systems.
In this context, the South Korea-hosted event played a valuable role in helping foster an even stronger international consensus on bringing AI more clearly into inclusive global governance structures. At present, there is a significant risk of private sector tech firms ruling the roost, not least in the context of US-Chinese geopolitical rivalry.
Moreover, unlike some previous era-defining technological advances, such as space exploration or nuclear power, AI is mostly being developed by private companies. These firms, including Google, Microsoft, OpenAI and Anthropic, are disproportionately located in the US.
Last week’s summit therefore added value by deepening the shared international understanding of AI’s major challenges, not only its opportunities. This includes fostering innovation to help close AI knowledge gaps and promoting greater inclusion, not least for Global South nations that lack the financial means to develop a critical mass of AI capacity.
As AI continues its fast-paced evolution, it is clear there are many challenges in developing consistent, coherent regulatory frameworks, and the result is an emerging patchwork of policies globally. In Europe, for instance, the EU has sought to secure a “first mover” advantage with the world’s first comprehensive AI legislation. This new regulatory framework, already criticized by some business groups as far too burdensome, represents the strictest set of measures anywhere in the world for the emerging technology.
By contrast, Sunak has sought not to “rush to regulate,” so as not to stifle UK innovation in this important new area of technology. This is despite his avowed concerns over the speed of AI development and the possibility of humanity’s “extinction” as a result of the technology.
What this underlines, from the contrasting approaches within Europe to the evolving rules in the US, China and elsewhere, is that it is not yet clear whether measures will converge or diverge further in the years to come. Whatever comes next will have big implications for politics and business worldwide, and the South Korean and French summits are likely to shape that global AI regulatory landscape.
• Andrew Hammond is an Associate at LSE IDEAS at the London School of Economics.