C_TEC Letter to NIST on Proposal for Identifying and Managing Bias within AI

National Institute of Standards and Technology 
Attn: Information Technology Laboratory 
100 Bureau Drive 
Gaithersburg, MD  20899 

Re: Proposal for Identifying and Managing Bias within Artificial Intelligence (Draft NISTIR 1270)  

To Whom It May Concern: 

The U.S. Chamber of Commerce’s Technology Engagement Center (“C_TEC”) appreciates the opportunity to submit feedback on the National Institute of Standards and Technology’s (“NIST”) draft special publication, “A Proposal for Identifying and Managing Bias within Artificial Intelligence” (“publication”). C_TEC values NIST’s ongoing efforts to advance understanding of AI and to convene diverse sets of stakeholders to “advance methods to understand and reduce harmful forms of AI bias.”

The draft publication addresses the importance of managing bias in the development of AI systems and proposes approaches to assessing and mitigating such bias. C_TEC believes that reducing and mitigating unwanted bias is critical to building trustworthy AI. A recent C_TEC report1 surveying top industry officials found that 68% believe that “bias” significantly impacts consumers’ trust in AI.

In addition, we are encouraged by NIST’s efforts to develop a voluntary Risk Management Framework (“RMF”) for AI. We encourage NIST to use the upcoming RMF workshops to further engage stakeholders and to review how specific frameworks may affect particular industries. These workshops could also examine the rules and regulations already in place to determine whether further guidance is needed. We also encourage NIST to revisit this draft publication as the framework evolves to ensure alignment and to make updates where appropriate.

Furthermore, C_TEC commends NIST’s recognition that “Bias is neither new nor unique to AI” and that bias within algorithms can be both positive and negative. We also commend NIST’s acknowledgment that the “goal is not zero risk but, rather, identifying, understanding, measuring, managing and reducing bias.” However, C_TEC believes that some areas within the draft publication could benefit from further discussion and clarification. These are outlined below.

First, the draft publication uses the International Organization for Standardization’s (“ISO”) definition of “bias”: “the degree to which a reference value deviates from the truth.” While C_TEC supports the use of standard definitions to help ensure common understanding, we have concerns with the ISO definition’s reliance on the term “truth.” We believe that truth can be subjective, as what is determined to be true one day can be deemed inaccurate the next, and vice versa. For this reason, we believe that any definition of bias should avoid the term.

Second, the draft publication outlines a three-stage approach to reviewing and mitigating bias: pre-design, design and development, and deployment. A similar process has been described as “pre-processing, in-processing, and post-processing.” We are concerned that many terms for this approach are in use within the current literature. We therefore encourage NIST to use established terminology where possible and to coordinate with regulatory agencies on consistent vocabulary. This will help reduce confusion and misinterpretation, as stakeholders come from a wide array of industries.

Third, C_TEC understands that the draft publication was not intended to “identify and tackle specific bias within cases.” However, we believe that NIST should provide real-world examples and use cases to help stakeholders, creators, and industries better understand how such mitigation techniques may apply to them. Doing so would likely improve stakeholder input, as it would better enable stakeholders to comment on how such processes and methods transfer to their own work.

Fourth, NIST comments that “Another cause for distrust may be due to an entire class of untested and/or unreliable algorithms deployed in decision-based settings. Often a technology is not tested – or not tested extensively – before deployment, and instead deployment may be used as testing for the technology.” While we understand NIST’s concerns with untested systems, we believe it is important for NIST to take the context of deployment into consideration in a manner that does not create ambiguity or unnecessary concern. The example within the draft publication was of AI “systems during the COVID pandemic that has turned out to be methodologically flawed and biased.” C_TEC would like to highlight that there are times when prior testing may not be feasible (e.g., emergency circumstances, or when the needed training datasets have yet to be created or developed).

In such cases, NIST should develop a framework or guidelines to help organizations build mechanisms that enable continual improvement and the ongoing identification and mitigation of issues.

We further note that bias is not necessarily a risk in all use cases, as some AI systems operate in such low-stakes settings that bias may need to be reviewed and addressed differently. In these instances, a framework that accounts for the full range of bias impacts may not be appropriate or necessary and could impose undue implementation costs. For these reasons, C_TEC strongly supports adopting a risk-based approach to AI governance, as outlined within our AI principles2.

Fifth, the draft publication states that “standards and guides are needed for terminology, measurement, and evaluation of bias.” C_TEC believes it is important to highlight that many stakeholders have previously worked closely with their regulators to develop processes for managing and mitigating potentially harmful AI bias. We therefore believe it is vital that NIST work closely with those agencies to remain consistent with policies and procedures already in place.

For example, the financial services industry is already heavily regulated and must adhere to fair lending standards, which aim to reduce unwanted bias. In other areas, much more work is still needed to develop appropriate standards and best practices for identifying, measuring, and evaluating bias, and for understanding what mitigation strategies would be considered reasonable. NIST should seek to harmonize its framework with existing standards and work with stakeholders to develop the appropriate guidance.

Sixth, bias testing in some instances requires data on protected legal classes. Much of this data is unavailable, or stakeholders do not store it. For this reason, we encourage NIST to provide further guidance on what it would consider reasonable expectations for those stakeholders.

Seventh, the draft publication concludes that “bias reduction techniques are needed that are flexible and can be applied across contexts, regardless of industry.” C_TEC supports a flexible, non-prescriptive, and performance-based AI framework that can adapt to rapid changes and updates. However, we believe NIST should further clarify what types of “bias reduction techniques” it is referring to, and what reasonable efforts companies can take toward managing and reducing the impacts of harmful bias. Specifically, NIST should clarify whether such techniques apply to the AI algorithms themselves or to the processes used to change an outcome to meet a definition of fairness.

Eighth, the draft publication correctly highlights the importance of problem formulation in addressing potential bias; however, guidance on best practices, and on how to operationalize those practices to address this concern, remains underdeveloped. We encourage NIST to elaborate on this topic in future work to facilitate a constructive conversation around the feasibility of scaling such practices.

Ninth, we encourage NIST to include in its report a clear plan for how its work on measuring and mitigating AI bias can translate into harmonized standards across federal agencies. The proliferation of definitions of fairness and bias across the federal government, along with differing expectations about what constitutes reasonable efforts to mitigate bias, creates a real challenge for entities seeking guidance on how to develop and measure their AI systems. NIST can play an important role in encouraging the government to adopt a harmonized approach to AI and bias, as it did in driving the adoption of cybersecurity standards across the federal government.

Tenth, NIST has not taken the decommissioning phase into account within the “AI Lifecycle.” There are instances when systems will be retired and new systems deployed, which may affect different stakeholders differently. We ask NIST to provide further clarification on how to address and mitigate such impacts during these transitions.

In conclusion, NIST has a critical role in convening stakeholders to discuss ways to mitigate bias within AI systems.  C_TEC continues to support NIST’s efforts on this topic and again appreciates the opportunity to submit comments on this draft publication.  We look forward to collaborating with NIST on the next steps for this draft publication and future AI-related activities.  

Sincerely, 

Michael Richards  
Director, Policy 
Chamber Technology Engagement Center