E.U. Reaches Deal on Landmark AI Bill, Racing Ahead of U.S.

The agreement cements the bloc's role as the de facto global tech regulator, as governments scramble to address the risks created by rapid advances in AI systems.

The Washington Post

December 11, 2023

Thierry Breton (Angel Garcia/Bloomberg)

European Union officials reached a landmark deal Friday on the world's most ambitious law to regulate artificial intelligence, paving the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.

At a time when the sharpest critics of AI are warning of its nearly limitless threat, even as advocates herald its benefits to humanity's future, Europe's AI Act seeks to ensure that the technology's exponential advances are accompanied by monitoring and oversight, and that its highest-risk uses are banned. Tech companies that want to do business in the 27-nation bloc of 450 million consumers - the West's single-largest - would be compelled to disclose data and do rigorous testing, particularly for "high-risk" applications in products like self-driving cars and medical equipment.

Dragos Tudorache, a Romanian lawmaker co-leading the AI Act negotiations, hailed the deal as a template for regulators around the world scrambling to make sense of the economic benefits and societal dangers presented by artificial intelligence, especially since last year's release of the popular chatbot ChatGPT.

"The work that we have achieved today is an inspiration for all those looking for models," he said. "We did deliver a balance between protection and innovation."

The deal came together after about 37 hours of marathon talks between representatives of the European Commission, which proposes laws, and the Council of the European Union and European Parliament, which adopt them. France, Germany and Italy, speaking for the council, had sought late-stage changes aimed at watering down parts of the bill, an effort strongly opposed by representatives of the European Parliament, the bloc's legislative branch of government.

The result was a compromise on the two most controversial aspects of the law: provisions regulating the massive foundation models trained on vast troves of internet data that underpin consumer products like ChatGPT, and a push for broad exemptions allowing European security forces to deploy artificial intelligence.

Carme Artigas, Spain's secretary of state for digitalization and artificial intelligence, said during a news conference following the deal that the process was at times painful and stressful but that the milestone deal was worth the lack of sleep.

The security exemptions emerged as the most contentious issue. The final deal banned scraping faces from the internet or security footage to create facial recognition databases, as well as systems that categorize people using sensitive characteristics such as race, according to a news release. But it created some exemptions allowing law enforcement to use "real-time" facial recognition to search for victims of trafficking, prevent terrorist threats, and track down suspected criminals in cases of murder, rape and other crimes.

European digital privacy and human rights groups were pressuring representatives of the parliament to hold firm against the push by countries to carve out broad exemptions for their police and intelligence agencies, which have already begun testing AI-fueled technologies. Following the early announcement of the deal, advocates remained concerned about a number of carve-outs for national security and policing.

"The devil will be in the detail, but whilst some human rights safeguards have been won, the EU AI Act will no doubt leave a bitter taste in human rights advocates' mouths," said Ella Jakubowska, a senior policy adviser at European Digital Rights, a collective of academics, advocates and nongovernmental organizations.

The legislation ultimately included restrictions for foundation models but gave broad exemptions to "open-source models," which are developed using code that's freely available for developers to alter for their own products and tools. The move could benefit open-source AI companies in Europe that lobbied against the law, including France's Mistral and Germany's Aleph Alpha, as well as Meta, which released the open-source model LLaMA.

However, some proprietary models classified as having "systemic risk" will be subject to additional obligations, including evaluations and reporting of energy efficiency. The text of the deal was not immediately available, and a news release did not specify which criteria would trigger the more stringent requirements.

Companies that violate the AI Act could face fines up to 7 percent of global revenue, depending on the violation and the size of the company breaking the rules.

The law furthers Europe's leadership role on tech regulation. For years, the region has led the world in crafting novel laws to address concerns about digital privacy, the harms of social media and concentration in online markets.

The architects of the AI Act have "carefully considered" the implications for governments around the world since the early stages of drafting the legislation, Tudorache said. He said he frequently hears from other legislators who are looking at the E.U.'s approach as they begin drafting their own AI bills.

"This legislation will represent a standard, a model, for many other jurisdictions out there," he said, "which means that we have to have an extra duty of care when we draft it because it is going to be an influence for many others."

After years of inaction in the U.S. Congress, E.U. tech laws have had wide-ranging implications for Silicon Valley companies. Europe's digital privacy law, the General Data Protection Regulation, has prompted some companies, such as Microsoft, to overhaul how they handle users' data even beyond Europe's borders. Meta, Google and other companies have faced fines under the law, and Google had to delay the launch of its generative AI chatbot Bard in the region due to a review under the law. However, there are concerns that the law created costly compliance measures that have hampered small businesses, and that lengthy investigations and relatively small fines have blunted its efficacy among the world's largest companies.

The region's newer digital laws - the Digital Services Act and Digital Markets Act - have already changed tech giants' practices. The European Commission announced in October that it is investigating Elon Musk's X, formerly known as Twitter, for its handling of posts containing terrorist content, violence and hate speech related to the Israel-Gaza war, and Thierry Breton, a European commissioner, has sent letters demanding other companies be vigilant about content related to the war under the Digital Services Act.

In a sign of regulators' growing concerns about artificial intelligence, Britain's competition regulator on Friday announced that it is scrutinizing the relationship between Microsoft and OpenAI, following the tech behemoth's multiyear, multibillion-dollar investment in the company. Microsoft recently gained a nonvoting board seat at OpenAI following a company governance overhaul in the wake of chief executive Sam Altman's return.

Microsoft's president, Brad Smith, said in a post on X that the companies would work with the regulators, but he sought to distinguish the companies' ties from other Big Tech AI acquisitions, specifically calling out Google's 2014 purchase of the London company DeepMind.

Meanwhile, Congress remains in the early stages of crafting bipartisan legislation addressing artificial intelligence, after months of hearings and forums focused on the technology. Senators this week signaled that Washington was taking a far lighter approach focused on incentivizing developers to build AI in the United States, with lawmakers raising concerns that the E.U.'s law could be too heavy-handed.

Concern was even higher in European AI circles, where the new legislation is seen as potentially holding back technological innovation, giving further advantages to the United States and Britain, where AI research and development is already more advanced.

"There will be a couple of innovations that are just not possible or economically feasible anymore," said Andreas Liebl, managing director of the AppliedAI Initiative, a German center for the promotion of artificial intelligence development. "It just slows you down in terms of global competition."

The deal on Friday appeared to ensure that the European Parliament could pass the legislation well before it breaks in May ahead of legislative elections. Once passed, the law would take two years to come fully into effect; it would compel E.U. countries to formalize or create national bodies to regulate AI and would establish a pan-regional European regulator.
