
EU parliament greenlights landmark artificial intelligence regulations

The European Parliament has given final approval to wide-ranging rules to govern artificial intelligence.

The far-reaching regulation – the Artificial Intelligence Act – was passed by lawmakers on Wednesday. Senior European Union officials said the rules, first proposed in 2021, will protect citizens from the possible risks of a technology developing at breakneck speed while also fostering innovation.

Brussels has sprinted to pass the new law since Microsoft-backed OpenAI’s ChatGPT arrived on the scene in late 2022, unleashing a global AI race.

Just 46 lawmakers in the European Parliament in Strasbourg voted against the proposal. It won the support of 523 MEPs.

The European Council is expected to formally endorse the legislation by May. It will be fully applicable 24 months after its entry into force.

The rules will cover high-impact, general-purpose AI models and high-risk AI systems, which will have to comply with specific transparency obligations and EU copyright laws.

The act will regulate foundation models or generative AI, such as OpenAI's ChatGPT, which are trained on large volumes of data to generate new content and perform tasks.

Government use of real-time biometric surveillance in public spaces will be restricted to cases of certain crimes; prevention of genuine threats, such as “terrorist” attacks; and searches for people suspected of the most serious crimes.

“Today is again an historic day on our long path towards regulation of AI,” said Brando Benifei, an Italian lawmaker who pushed the text through parliament with Romanian MEP Dragos Tudorache.

“[This is] the first regulation in the world that is putting a clear path towards a safe and human-centric development of AI,” he said.

“We managed to find that very delicate balance between the interest to innovate and the interest to protect,” Tudorache told journalists.

The EU’s internal market commissioner, Thierry Breton, hailed the vote.

“I welcome the overwhelming support from the European Parliament for the EU AI Act,” he said. “Europe is now a global standard-setter in trustworthy AI.”

AI policing restrictions

The EU’s rules take a risk-based approach: the riskier the system, the tougher the requirements – with outright bans on the AI tools deemed to carry the most threat.

For example, high-risk AI providers must conduct risk assessments and ensure their products comply with the law before they are made available to the public.

“We are regulating as little as possible and as much as needed with proportionate measures for AI models,” Breton told the Agence France-Presse news agency.

Violations can see companies hit with fines ranging from 7.5 million to 35 million euros ($8.2m to $38.2m), depending on the type of infringement and the firm’s size.

There are strict bans on using AI for predictive policing and systems that use biometric information to infer an individual’s race, religion or sexual orientation.

The rules also ban real-time facial recognition in public spaces with some exceptions for law enforcement. Police must seek approval from a judicial authority before any AI deployment.

Lobbies vs watchdogs

Because AI will likely transform every aspect of Europeans’ lives and big tech firms are vying for dominance in what will be a lucrative market, the EU has been subject to intense lobbying over the legislation.

Watchdogs have pointed to campaigning by French AI start-up Mistral AI and Germany’s Aleph Alpha as well as US-based tech giants like Google and Microsoft.

They warned the implementation of the new rules “could be further weakened by corporate lobbying”, adding that research showed “just how strong corporate influence” was during negotiations.

“Many details of the AI Act are still open and need to be clarified in numerous implementing acts, for example, with regard to standards, thresholds or transparency obligations,” three watchdogs based in Belgium, France and Germany said.

Breton stressed that the EU “withstood the special interests and lobbyists calling to exclude large AI models from the regulation”, maintaining: “The result is a balanced, risk-based and future-proof regulation.”

Tudorache said the law was “one of the … heaviest lobbied pieces of legislation, certainly in this mandate”, but insisted: “We resisted the pressure.”
