STRASBOURG: Europe moved closer to adopting the world’s first artificial intelligence rules on Wednesday as EU lawmakers endorsed a provisional agreement for a technology whose use is rapidly growing across a wide swathe of industries and in everyday life, according to Reuters.
The legislation, called the AI Act, will regulate foundation models, or generative AI, such as those built by Microsoft-backed OpenAI, which are trained on large volumes of data to generate new content and perform a range of tasks.
It will restrict governments’ use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of genuine threats, such as terrorist attacks, and searches for people suspected of the most serious crimes.
The rules will cover high-impact, general-purpose AI models and high-risk AI systems, which will have to comply with specific transparency obligations and EU copyright laws.
A total of 523 EU lawmakers voted in favor of the deal while 46 were against and 49 abstained.
“I welcome the overwhelming support from the European Parliament for the EU AI Act, the world’s first comprehensive, binding framework for trustworthy AI. Europe is now a global standard-setter in trustworthy AI,” EU industry chief Thierry Breton said.
EU countries are set to give their formal nod to the deal in May. The legislation is expected to enter into force early next year and apply from 2026, although some of the provisions will kick in earlier.
Brussels may have set the benchmark for the rest of the world, said Patrick Van Eecke, a partner at law firm Cooley.
“The European Union now has the world’s first hard coded AI law. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR,” he said, referring to the EU privacy regulation.
However, he said the downside for companies is considerable red tape.
The European Parliament and EU countries had clinched a preliminary deal in December after nearly 40 hours of negotiations on issues such as governments’ use of biometric surveillance and how to regulate the foundation models behind generative AI systems such as ChatGPT.
Companies risk fines ranging from 7.5 million euros or 1.5 percent of turnover to 35 million euros or 7 percent of global turnover, depending on the type of violation.