
MIT Experts Call for Expanded AI Governance and Regulation

New policy brief outlines the need for AI regulation across sectors, emphasizing the legal and ethical challenges of governing AI.

This article originally appeared in AI Business.

MIT researchers and academics have published a policy paper calling on the US government to expand AI governance using existing regulations.

The 10-page document, ‘Creating a Safe and Thriving AI Sector’, proposes that existing legal frameworks be extended to cover AI, such as applying health care regulation to AI-assisted diagnosis. The authors also want AI to be covered by rules regulating government activities, including policing, setting bail and hiring.

The group said that laws and regulations regarding AI should be enforced by the same entity that governs human actions without AI in the same domain, adding: “This may require such entities to develop some AI expertise.”

The policy paper reads: “If human activity without the use of AI is regulated, then the use of AI should similarly be regulated. The development, sale, and use of AI systems should, whenever possible, be governed by the same standards and procedures as humans acting without AI in the AI’s domains of applicability.”

The MIT group contends this would ensure that higher-risk applications are covered by existing laws. One area cited in the paper where this approach is already being explored is autonomous vehicles, which are held to the same standards as human-operated vehicles.

The report's authors argue that those building general-purpose AI systems, such as ChatGPT, should be required to identify a system's intended purpose before release. They also propose that regulators issue rules on defining intended uses so developers can ensure their systems conform to those definitions.

The group also calls for clarification on intellectual property infringement arising from AI, including how creators can guard against and identify potential infringement. One suggested measure is mandatory labeling of AI-generated content.

The paper reads: “It is unclear whether and how current regulatory and legal frameworks apply when AI is involved, and whether they are up to the task. This leaves providers, users, and the general public in a caveat emptor situation. There is little to deter the release and use of risky systems and little incentive to proactively uncover, disclose, or remediate issues.

“Additional clarity and oversight regarding AI are needed to facilitate the development and deployment of beneficial AI and to more fully and smoothly realize AI’s potential benefits for all Americans.”

“As a country, we’re already regulating a lot of relatively high-risk things and providing governance there,” Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, told MIT News. “We’re not saying that’s sufficient, but let’s start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach.”

Several additional policy papers were also published, covering large language models, pro-worker AI, and labeling AI-generated content.
