Holistic AI’s Response to the NTIA’s Request for Comments on Open-Weight AI Models

April 2, 2024

On 26 March 2024, Holistic AI submitted its response to the National Telecommunications and Information Administration's (NTIA) request for public comments on the policy implications of dual-use foundation models with widely available weights. The NTIA invited written comments on the risks and benefits of open model weights, with a view to developing policy recommendations that mitigate those risks while maximizing the benefits. The NTIA undertook this task under the mandate established by the Biden Administration's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.

The comment period closed on 27 March 2024, the same day the NTIA published its AI Accountability Policy Report, which cited Holistic AI's policy expertise on algorithm auditing extensively. We are pleased that the NTIA shares our view on the importance of independent AI audits and other evaluation methods in supporting AI system assurance and trust.

The request for comments on open-weight AI models sought input on the following issues:

  • The different levels of model openness
  • The benefits and risks associated with both open and closed models
  • Concerns around innovation, competition, safety, security, trustworthiness, equity, and national security when making model weights more or less open
  • The role of the U.S. government in guiding, supporting, or restricting the availability of AI model weights

Key takeaways from our submission

Holistic AI firmly believes that the release scope of highly capable foundation models with open weights must be premised on safety calibrations, operationalized through a proportional combination of internal and external evaluations. Leveraging our expertise in algorithm auditing and AI risk management, we emphasize the necessity of evaluations as a precursor to any level of model release, in order to determine risks and implement appropriate safeguards.

Examples of risks that may arise as the level of access to a model increases include downstream users training models for malicious purposes; the ability to easily customize a wide range of downstream applications through adaptations such as fine-tuning, quantization, and pruning; and the entrenchment of bias, where models trained on biased data continue to reinforce those biases. Moreover, our comments raise concerns about releasing model weights alongside other components, such as training code, which provides access to a model with even fewer restrictions. Such open access can have significant safety ramifications and profound economic, political, and societal effects.
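To illustrate how little effort such adaptation takes once weights are public, the sketch below shows a parameter-efficient fine-tune using the Hugging Face transformers and peft libraries. This is a hypothetical illustration rather than material from our submission; the checkpoint name and target module names are placeholders, not references to any particular model.

```python
# Minimal, illustrative sketch: adapting an open-weight model with a
# parameter-efficient LoRA fine-tune. The checkpoint name is a placeholder;
# any publicly released weights would work the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "some-org/open-weights-7b"  # hypothetical open-weight checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach low-rank adapters: only a small fraction of parameters are trained,
# so customization is cheap once the weights are in hand.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# From here, a standard training loop on any dataset repurposes the model;
# safeguards built into the released weights cannot prevent such adaptation.
```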

As such, we contend that while opening up models can bring tremendous benefits, such as advancing scientific research, innovation, and the democratization of the AI ecosystem, these benefits must be weighed against the associated risks, as well as the consideration that increasing model access may not be the only way to realize them.

To navigate these trade-offs between benefits and risks, we recommend gradient-based safety strategies that determine proportionate and technically feasible risk mitigations and investigations at every level of access to a model. A thoughtful blend of access-dependent and access-independent approaches is warranted to gauge a model's safety and guide its responsible release along the gradient. Additionally, while internal safeguards deployed by model providers have been effective in operationalizing model safety and risk management, it is crucial to embed external oversight and validation mechanisms as well. Audits are pivotal in this context, as they support robust system assurance through impartial and independent evaluations.

Additionally, we provide an initial framework to guide further research on defensibly navigating the many risk-benefit trade-offs associated with increased openness and access. We encourage entities like the NTIA to incorporate these insights into their policy agendas, as we believe that integrating such perspectives will help ground the contentious debate between open- and closed-source AI in safety calibrations, evaluations, auditing, and risk management.

AI Governance with Holistic AI

Schedule a call with our governance experts to find out how Holistic AI’s specialist team can help your organization navigate the dynamic responsible AI ecosystem with confidence.

Download our comments on open model weights here.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
