Comment Letter on NIST’s “AI Risk Management Framework Concept Paper”

Re: Feedback On AI Risk Management Concept Paper

To Whom It May Concern: The U.S. Chamber of Commerce’s Technology Engagement Center (“C_TEC”) appreciates the opportunity to submit feedback on the National Institute of Standards and Technology’s (“NIST”) “AI Risk Management Framework Concept Paper.” C_TEC commends NIST’s ongoing efforts to engage stakeholders throughout the Risk Management Framework process to inform and further develop its approach to the voluntary framework. We also welcome the opportunity to provide details and specifics on what we believe would be helpful in the upcoming first draft of the AI RMF.

First, C_TEC believes that more context is necessary to fully determine whether the NIST AI Risk Management Framework is moving in the right direction. Specifically, we encourage NIST to provide more information about the “measurable criteria” for risk it intends to use, especially since NIST indicated in section three, “Framing Risk,”1 that it plans to use a “broader definition that offers a more comprehensive view.”

Also, using the word “risk” to encompass both positive and negative outcomes can be misleading, and C_TEC encourages NIST to be clear about how it is defining the term. C_TEC would also like to highlight that a critical issue for practitioners is assessing the probability of an event and its associated consequences, and the data necessary for such an assessment may not be available. The concept paper assumes that such data is widely available in sufficient quantities to develop “methods and metrics for quantitative or qualitative measurement of the enumerated risks, including sensitivity, specificity, and confidence levels for specific inferences.”

Furthermore, the concept paper highlights risks that can be “analyzed, quantified, or tracked where possible.” We encourage NIST to provide more information and guidance in its upcoming draft on how it plans to address risk in situations where these areas cannot easily be measured quantitatively.

Additionally, under the proposed AI RMF structure, C_TEC recommends that NIST expand the descriptions within the functions, categories, and subcategories. This expansion should include information about implementing controls to mitigate risk to an acceptable level, according to predefined tolerances. Also, while the document references outcomes, these are often only an end result. C_TEC recommends placing more emphasis on where and how AI is integrated into a process or experience: it is how the AI is expected to be consumed that sets the context, the possible impacts, and the expected value of integrating AI into that process or experience.

C_TEC would also like to reemphasize that there is a growing patchwork of local, state, federal, and international regulations and standards around AI, which challenges AI actors’ ability to manage AI risks effectively; an overcomplicated regulatory environment can inhibit effective risk management. International standards serve as a tool for harmonizing regulatory processes, ensuring greater interoperability and avoiding a fragmented global network of differing regulations. This is why C_TEC supports efforts to harmonize with other international risk management standards, expanding beyond them only where substantive gaps are found.

Finally, C_TEC highly encourages NIST to consider policy prototyping as a mechanism to improve the efficacy of the risk management framework. C_TEC provided the following comment to NIST’s initial request for information:

“Policy prototyping is an experimentation-based approach to policy development that provides a safe testing ground to learn, early in the process, how different approaches to the formulation of the AI RMF might play out when implemented in practice, and to assess their impact before the framework’s actual release. Policy prototyping involves a variety of stakeholders who come together to co-create governance frameworks, including regulation and standards. By developing and testing governance frameworks collaboratively, policymakers can see how such frameworks integrate with other co-regulatory tools such as corporate ethical frameworks, voluntary standards, certification programs, ethical codes of conduct, and best practices, as referenced below. This method has been successfully used in Europe to test an AI Risk Assessment framework, leading to several concrete recommendations for improving self-assessments of AI.2”

C_TEC appreciates NIST’s ongoing efforts to improve the management of risks to individuals, organizations, and society associated with AI by creating a voluntary Risk Management Framework. Establishing a voluntary Risk Management Framework holds significant promise for creating an innovative environment for Artificial Intelligence, which is why we are eager to work with NIST to ensure that the AI RMF continues to support innovation in a way that strengthens public trust in AI. We thank you for your consideration of these comments and would be happy to discuss any of these topics further.

Sincerely,

Director, Policy
Chamber Technology Engagement Center
U.S. Chamber of Commerce