Where are the meaningful AI guardrails? 

Italy has become the first Western country to block the popular artificial intelligence bot ChatGPT. Italian authorities didn’t block the AI software because the technology is advancing too quickly and becoming too powerful. Instead, the Italian data protection authority blocked the application over privacy concerns and questions over ChatGPT’s compliance with the EU’s General Data Protection Regulation. The event made international headlines and tapped into deeper global fears that AI is becoming too powerful.

Many of us struggle to comprehend how the technology has developed so quickly. One reason is that there have been few regulatory guardrails to keep tabs on the growth of AI. Humanity needs such guardrails in place, but that is much easier said than done.

The regulation of AI is becoming increasingly vital as the technology is being used more widely in areas such as health care, finance and transportation. According to a study by researchers at the University of Pennsylvania and OpenAI, the privately held company behind ChatGPT, most jobs will be changed significantly by AI in the near future. For nearly 20 percent of jobs in the study, ranging from accountant to writer, at least half of their tasks could be completed much faster with ChatGPT and similar tools. While we don’t know what this will do to the labor market, it will have an unavoidably large impact that could have knock-on effects across society. 

There has been a slight movement toward better regulation worldwide in the past decade. The EU, in line with its data protection standards, has developed a framework for AI regulation that includes rules for high-risk AI applications, requirements for transparency and accountability, and a ban on specific uses of AI, such as social scoring — the practice of using the technology to rank people for their trustworthiness. The US has also slowly started to regulate AI, with the National Institute of Standards and Technology developing a set of principles for AI governance and the Federal Trade Commission taking enforcement actions against companies that use AI in ways that violate consumer protection laws.

Yet these regulations are inadequate, given the speed at which AI is developing. The concern over AI’s growing power and the lack of regulation has led some technology leaders to call for a pause on AI development. Last month, according to The Guardian, more than 1,800 signatories, including Elon Musk, the cognitive scientist Gary Marcus and Apple co-founder Steve Wozniak, called for a six-month pause on developing systems “more powerful” than GPT-4.

Musk was a co-founder of OpenAI and has since expressed misgivings about how the company is run. The open letter states that “AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” 

Independent experts are vital to sensible AI regulation, but their involvement should not obscure the fact that the sector has developed with little oversight from government authorities. That lack of oversight is one reason we are now having an urgent conversation about AI's growing power. In short, AI regulation needs to catch up to the rapid pace of technological development.

AI’s economic, societal and intellectual risks are not adequately addressed in the halls of power worldwide. The use of facial recognition technology, for example, has been shown to be biased against certain groups but is still largely unregulated in many countries, including the US. Similarly, using AI in automated decision-making systems, such as credit scoring or employment screening, has raised concerns about algorithmic bias and discrimination.

AI language models such as ChatGPT can generate harmful or misleading content, such as hate speech, and can perpetuate existing societal biases and inequalities. Then there is the issue of data privacy, which Italy has zeroed in on. Language models require access to large amounts of personal data to function effectively. Should individuals hand over their data? If so, under what terms? These are serious questions that don’t have answers right now because of the absence of sound regulations. Do you know how much of your data is being fed into a large AI language model as you read this piece? I certainly don’t, and that’s part of the problem. 

The ever-present danger with regulation, on the other hand, is that it could stifle innovation. Private companies across Silicon Valley love this line of argument because it shifts the parameters of the debate: they say that more research is needed to thoroughly understand AI’s risks and benefits before any rules are written.

Moreover, AI regulation requires input from various stakeholders, including government agencies, industry leaders and civil society organizations. Getting various stakeholders to come together and investigate how AI is changing society is not easy. Giving researchers more time to evaluate AI risk can happen while regulations are being written. These things don’t have to be mutually exclusive. 

Given the lack of movement on AI regulation in the world’s most powerful countries, it is time for small countries to enact their own rules. Technology-heavy economies from Estonia to Israel and the UAE have the knowledge base to draft sensible regulations and observe how they affect AI development. These countries are also small enough that the regulations can be updated and amended as the technology evolves.

However we look at it, it is time for AI guardrails.

  • Joseph Dana is the former senior editor of Exponential View, a weekly newsletter about technology and its impact on society. Twitter: @ibnezra. Copyright: Syndication Bureau