AI is transforming the world in many ways, from improving healthcare and education to creating new forms of entertainment and communication. But AI also poses significant challenges and risks, such as generating misinformation, violating privacy, and undermining democratic values. How can we ensure that AI is developed and deployed in ways that are beneficial for society and aligned with existing laws and norms?
A recent policy brief from MIT proposes a framework for U.S. AI governance that aims to address these issues. The brief, written by Dan Huttenlocher, Asu Ozdaglar, and David Goldston, in consultation with an ad hoc committee on AI regulation, offers the following key recommendations:
- Extend current legal frameworks to activities involving AI, to the greatest extent possible. This means that if human activity without the use of AI is regulated, then the use of AI should similarly be regulated. For example, AI systems used in healthcare, law, hiring, and finance should be subject to the same standards and procedures as humans acting without AI in those domains.
- Require providers of AI systems to identify the intended purpose(s) of an AI system before it is deployed and to use mechanisms such as contractual relationships, disclosures, and audits to ensure that the system performs as expected and complies with relevant regulations.
- Develop auditing regimes to evaluate the performance and impact of AI systems, both before and after deployment, especially in high-risk settings. Audits should be based on clear principles and standards and should protect the intellectual property and confidentiality of the AI system provider and the user.
- Increase the interpretability of AI systems, i.e., their ability to give a sense of what factors influenced a recommendation or what data a response drew on. This can be encouraged through regulation, liability, or user demand, depending on the context and the risk involved (a minimal illustration appears after this list).
- Recognize that AI systems will often be built “on top of” each other, forming an “AI stack” that comprises multiple components. The provider and user of the final service built on an AI stack would bear primary responsibility for how it operates, but the provider of a component system may also share responsibility if that component does not perform as promised (a rough sketch of such a stack, including labeled output, follows this list).
- Regulate general-purpose AI systems, such as large language models, that can serve as the foundation of other systems and have broad uses and availability. Providers of general-purpose AI systems should disclose which uses are intended and which are not, and put guardrails in place against unintended or harmful uses. They should also monitor the uses and effects of their systems and label AI-generated content as such.
- Establish a new federal agency or a self-regulatory organization to oversee aspects of AI that lie beyond the scope of currently regulated application domains and that cannot be addressed through audit mechanisms. The agency should have highly qualified technical staff who can also advise existing regulatory agencies that are handling AI matters.
- Encourage more research on how to make AI systems more beneficial, both in the private sector and in the public sector. The government should increase its funding of public research on large-scale AI systems, and provide a clear framework for intellectual property rights in the context of AI.
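To make the interpretability recommendation concrete, here is a minimal sketch of how a provider might surface which factors influenced a recommendation. It is not drawn from the brief: the linear loan-scoring model, feature names, and weights are hypothetical, chosen only to illustrate per-factor attributions.

```python
# Minimal sketch of interpretability as per-factor attribution.
# The scoring model, feature names, and weights are hypothetical.

def explain_recommendation(features, weights, bias=0.0):
    """Return the score and each factor's signed contribution to it."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = bias + sum(contributions.values())
    # Rank factors by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical loan-scoring example.
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}

score, factors = explain_recommendation(applicant, weights)
print(f"score = {score:.2f}")
for name, contribution in factors:
    print(f"  {name}: {contribution:+.2f}")
```

Even a simple attribution report like this gives a user or auditor a sense of why a recommendation came out the way it did, which is the kind of transparency the brief suggests could be encouraged through regulation, liability, or user demand.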
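The “AI stack” and content-labeling recommendations can likewise be pictured as a thin service layer wrapped around a general-purpose component. The sketch below is hypothetical: the `GeneralPurposeModel` stub, the declared intended purpose, and the provenance label are illustrative assumptions, not part of the brief or of any real API.

```python
# Hypothetical sketch of an "AI stack": a domain service built on top of a
# general-purpose component, with a declared purpose and labeled output.
from dataclasses import dataclass

@dataclass
class GeneralPurposeModel:
    """Stand-in for a general-purpose component (e.g., a large language model)."""
    name: str

    def generate(self, prompt: str) -> str:
        # Placeholder for a real model call.
        return f"[model {self.name} output for: {prompt}]"

@dataclass
class CustomerSupportService:
    """Ultimate provider's service, deployed for one declared purpose."""
    component: GeneralPurposeModel
    intended_purpose: str = "customer support for billing questions"

    def answer(self, question: str) -> dict:
        text = self.component.generate(question)
        # Label the output as AI-generated and record which component produced it,
        # so an audit can trace responsibility through the stack.
        return {
            "answer": text,
            "ai_generated": True,
            "component": self.component.name,
            "intended_purpose": self.intended_purpose,
        }

service = CustomerSupportService(GeneralPurposeModel(name="base-model-v1"))
print(service.answer("Why was I charged twice?"))
```

Recording the declared purpose and the underlying component alongside every labeled output is one way the responsibilities the brief describes, for both the service provider and the component provider, could be traced in practice.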
The policy brief argues that these recommendations will help maintain U.S. AI leadership while ensuring that AI is deployed in ways that prioritize security, privacy, safety, shared prosperity, and democratic values. The brief also acknowledges the limitations and uncertainties of the current state of AI and calls for flexibility and adaptation as the technology evolves.
You can read the full policy brief here.