Biden’s Former Tech Adviser on What Washington Is Missing About AI

Tim Wu, an architect of President Biden's antitrust policy, outlined what he thinks officials should do to keep AI in check - and what they should avoid.

Tim Wu, an architect of President Biden's antitrust policy, left the White House in January just as Silicon Valley's artificial intelligence craze was soaring to new heights.

But now as efforts to rein in AI tools like ChatGPT gain steam in Washington, the former Biden tech adviser is trying to ensure legislators and regulators don't veer off course.

Wu, now back at Columbia Law School, has been meeting in recent weeks with officials at the White House and the Justice Department, and on Capitol Hill - including the office of Senate Majority Leader Charles E. Schumer (D-N.Y.) - to lay out his vision for how to regulate AI.

In a wide-ranging interview last week, Wu said he's concerned the AI debate so far has focused "pretty narrowly" on abstract risks posed by the tools rather than concrete harms already underway - and that industry giants are playing too big a role in shaping potential rules.

"There's a lot of . . . economic possibility in this moment. . . . There's also a lot of possibility for the most powerful technological platforms to become more powerful and more entrenched," he said.

Wu, an influential voice in discussions around tech regulation, outlined what he thinks officials should do to keep AI in check - and what they should avoid. Here's a breakdown:

Don't: Create an AI licensing system

Wu, a prominent critic of Silicon Valley's most powerful companies, shot down proposals that heavyweights like OpenAI and Microsoft have floated to create licensing requirements for operators of large AI models like ChatGPT.

"Licensing regimes are the death of competition in most places they operate," said Wu, who helped develop Biden's executive order urging greater federal action on competition.

He argued heavy licensing requirements would make compliance more difficult for smaller companies and could ultimately decide "who gets to be in the market and who doesn't."

Do: Require AI to proactively identify itself

If you're dealing with an AI model, you should know it, Wu said, and operators should be required to ensure such tools proactively identify themselves. It wouldn't be enough for a chatbot like ChatGPT to simply answer "yes" when asked whether it is AI, he said.

Wu said an agency such as the Federal Trade Commission could be tasked with developing formats for how different types of AI products could comply with the rules.
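To make the idea concrete, here is a minimal sketch of what proactive self-identification could look like in practice. Everything in it is hypothetical - the field names, the `make_disclosure` helper and the "ExampleCorp" operator are invented for illustration, and the FTC has not defined any such format. It only illustrates the distinction Wu draws: disclosing up front on every reply rather than admitting to being AI only when asked.

```python
# Hypothetical sketch of proactive AI self-identification.
# No such FTC-mandated format exists; all names here are invented.

import json


def make_disclosure(operator: str, model: str) -> str:
    """Build a machine-readable disclosure tag (hypothetical format)."""
    return json.dumps({
        "is_ai": True,          # always present, never conditional on being asked
        "operator": operator,   # who runs the system
        "model": model,         # which model produced the text
    })


def respond(model_reply: str) -> str:
    """Prepend the disclosure to every reply, not only when the user asks."""
    tag = make_disclosure(operator="ExampleCorp", model="example-chat-1")
    return f"[AI-DISCLOSURE {tag}]\nThis response was generated by AI.\n{model_reply}"


print(respond("I can't check live weather, but here is yesterday's forecast."))
```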

In addition to boosting transparency, Wu said such a requirement could help on an array of consumer protection fronts, including cracking down on misleading AI-generated reviews of Amazon products. (Amazon founder Jeff Bezos owns The Washington Post.)

Don't: Create an AI-focused federal agency

While lawmakers such as Sen. Michael F. Bennet (D-Colo.) have proposed launching a new federal agency to oversee digital platforms, including their use of AI, Wu said he's concerned such approaches could "advantage existing entities" and "freeze the industry even before it gets started."

"I'm not in favor of an approach that would create heavy compliance costs for market entry and that would sort of regulate more abstract harms," he said.

Do: Enforce the laws on the books

While there's significant discussion about what new rules may be needed to deal with AI harms and risks, Wu said the federal government isn't starting from scratch. He said enforcers can lean on existing rules against deceptive and misleading practices to tackle potential abuses.

"We need to do things like enhance what are . . . essentially the deception and fraud laws," he said, adding that mandated self-identification would help cut down on fraud and scams.

Don't: Create transparency rules and call it a day

Wu said he's all for more transparency around how AI operators make their products, but simply creating new disclosure requirements would not address underlying harms.

"It's a bad temptation in Washington . . . to, when they lack anything better as an idea, resort to transparency as a way of everyone to save face and satisfy themselves they've actually done something," he said. "It's not bad, but it's not enough."

Do: Create a robot penal code

A major hurdle, he said, is that federal law has naturally been written to deal with lawbreaking by humans, and cases often hinge on concepts like "intent," "malice" or "recklessness" that don't map neatly onto AI - despite some claims of its sentience.

"We have a pressing need to figure out the areas of the legal code that are likely to be violated by an AI likely to cause harm, but where the laws are written with a human in mind," he said.

Wu said the Justice Department could take the lead in identifying instances where AI can cause harm but no clear legal path to a remedy exists, and Congress could then fill in the gaps.

Don't: Subsidize AI for tech giants

Wu hailed the United States' long tradition of "very generously" funding research in tech. But he said lawmakers should be wary of subsidizing tech giants' AI expansion efforts.

"There's no need to give money to companies that already have a lot of money and are already profitable, and that needs to be avoided at any cost," he said.

Do: Make sure content creators get paid

Companies require huge troves of data to train their AI models, at times relying on massive amounts of copyrighted material. Officials and industry leaders have stressed the importance of making sure content creators are compensated for their work, but how to do so is still up for debate.

Wu said officials could model a solution on the mandatory licensing system that ensures composers are compensated when their songs are played on the radio, with content creators receiving a proportional payout when their work is used to train an AI model.
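As a rough illustration of the proportional-payout idea, here is a back-of-the-envelope sketch that splits a fixed royalty pool according to each creator's share of training-set usage. The pool size, the usage counts and the `proportional_payouts` helper are all hypothetical - no such AI licensing system exists:

```python
# Hypothetical sketch of a proportional royalty split, loosely modeled
# on music's mandatory-licensing royalty pools. All figures are invented.

def proportional_payouts(pool: float, usage: dict[str, int]) -> dict[str, float]:
    """Split a fixed royalty pool by each creator's share of training use."""
    total = sum(usage.values())
    return {creator: pool * count / total for creator, count in usage.items()}


# e.g., counts of each creator's works included in a model's training set
usage = {"novelist_a": 120, "photographer_b": 60, "songwriter_c": 20}
print(proportional_payouts(pool=1_000_000.0, usage=usage))
# {'novelist_a': 600000.0, 'photographer_b': 300000.0, 'songwriter_c': 100000.0}
```

The design question such a system would have to settle is what "usage" means in the first place - counting works included in a training set, as in this toy example, is only one of several possible measures.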

Do: Encourage open-source, publicly funded AI

Wu said the government should look to supercharge efforts to create open-source AI models, which could help address concerns about concentration and spur broader innovation.

One way to do so, he said, could be to support an AI "public option," drawing inspiration from the publicly funded ARPANET of the 1970s and '80s, which paved the way for the internet.

"For some reason, the last 20 years we've assumed everything can happen completely privately, and I think we should learn the lesson from that," he said.

--Cristiano Lima and David DiMolfetta, The Washington Post
