
Europe Moves Ahead on AI Regulation, Challenging Tech Giants’ Power

The E.U. AI Act, as it's called, would set new guardrails on generative AI and limit the use of "high-risk AI" like predictive policing tools and systems that could influence voters in elections.

European Union lawmakers on Wednesday took a key step toward setting unprecedented restrictions on how companies use artificial intelligence, putting Brussels on a collision course with American tech giants funneling billions into the technology.

The European Parliament adopted its position on legislation known as the E.U. AI Act, which would ban systems that present an "unacceptable level of risk," such as predictive policing tools, or social scoring systems, like those used in China to classify people based on their behavior and socioeconomic status. The legislation also sets new limits on "high-risk AI," like systems that could influence voters in elections or introduce harms to people's health.

The legislation would set new guardrails on generative AI, requiring content created by systems like ChatGPT to be labeled. The bill also requires models to publish summaries of copyrighted data used for training, a potential impediment for systems that generate humanlike speech by scraping text from the internet, often from sources that include a copyright symbol.

The threat posed by the legislation is so grave that OpenAI, the maker of ChatGPT, said it may be forced to pull out of Europe, depending on what's included in the final text. The Parliament's approval marks a critical step in the legislative process, but the bill is still pending negotiations with the European Council, which is composed of representatives from E.U. member states.

"We have made history today," co-rapporteur Brando Benifei, an Italian member of the European Parliament working on the AI Act, said in a news conference. Benifei said the lawmakers "set the way" for a dialogue with the rest of the world on building "responsible AI."

Unlike U.S. lawmakers, the European Union has spent years developing its artificial intelligence legislation. The European Commission first released a proposal more than two years ago, and has amended it in recent months to address new concerns raised by advances in generative AI.

The progress stands in stark contrast to the U.S. Congress, where lawmakers are newly grappling with the risks of AI, following the surging popularity of ChatGPT. Senate Majority Leader Charles E. Schumer (D-N.Y.), who is leading bipartisan efforts to craft an AI framework, said they are likely still months away from considering any legislative response, telling The Washington Post that lawmakers would "start looking at specific stuff in the fall."

Meanwhile, the European Union's proposed bill builds on scaffolding already in place, adding to European laws on data privacy, competition in the tech sector and the harms of social media. Those existing laws already affect companies' operations in Europe: Google planned to launch its chatbot Bard in the European Union this week, but postponed the launch after the Irish Data Protection Commission, which enforces Europe's General Data Protection Regulation, requested privacy assessments. Italy temporarily banned ChatGPT amid concerns it broke Europe's data privacy rules.

The move solidifies Europe's position as the de facto global tech regulator, setting rules that influence tech policymaking around the world and standards that will likely trickle down to all consumers, as companies shift their practices internationally to avoid a patchwork of different policies. Microsoft, for instance, has said it would "extend the rights that are at the heart of GDPR" to all consumers globally, regardless of whether they reside in Europe.

Meanwhile, efforts are progressing slowly in the United States, where Congress has not passed a federal online privacy bill or other comprehensive legislation regulating social media. On Tuesday, Schumer hosted the first of three private AI briefings for lawmakers. MIT professor Antonio Torralba, who specializes in computer vision and machine learning, was scheduled to brief lawmakers on "Where is AI Today," covering where AI is deployed and what it's currently capable of. The next session will look at the future of AI and how it could evolve over the next decade, and the third, classified session will cover how the military and intelligence community currently uses AI.

Thirty-six Democrats and 26 Republicans attended the briefing, according to Gracie Kanigher, Schumer's press secretary. Senators said the strong attendance signaled the deep interest in the topic on Capitol Hill and described the briefing as largely educational. Schumer told The Post that Congress has "a lot to learn."

"It's hard to get your arms around something that is so complicated and changing so quickly but so important," he said.

Several Democratic lawmakers said they are wary of once again falling behind Europe in setting rules of the road on technology.

"The United States should be the standard setter … We need to lead that debate globally and I think we're behind where the E.U. is," said Sen. Michael F. Bennet (D-Colo.).

But Sen. Mike Rounds (R-S.D.), a Republican working with Schumer on AI, said he's less concerned about falling behind in setting new guardrails than he is about ensuring that the United States can stay ahead globally in terms of developing new tools like generative AI.

"We're not going to lose that lead, but what we do with legislation, our goal is to make sure that we incentivize the creation of AI, allow it to grow more quickly than in other parts of the world … but also to protect the rights of individuals," Rounds said after the briefing.

--Cat Zakrzewski and Cristiano Lima, The Washington Post
