Four Policies that Government Can Pursue to Advance Trustworthy AI
This past July, DeepMind, an artificial intelligence (AI) lab in London, announced a groundbreaking discovery. Using an AI technology called AlphaFold, DeepMind predicted the shapes of more than 350,000 proteins, 250,000 of which were previously unknown. These predictions could help researchers develop entirely new lifesaving drugs and other biological tools, which is particularly helpful in the fight against COVID-19.
Broadly, AI is poised to transform the way Americans work and socialize, along with numerous other facets of our lives. DeepMind is not the only example of AI’s benefits: AI has been credited with improving weather forecasting, making access to finance more inclusive, and keeping fraudsters at bay. But like any technology, AI presents some risks too. To fully realize the benefits of AI, it is incumbent on policymakers to advance policies that facilitate trustworthy AI.
A recent report from the U.S. Chamber Technology Engagement Center (C_TEC) and the Deloitte AI Institute highlights the proper role of the federal government in facilitating trustworthy AI and the importance of sound public policies that mitigate the risks posed by AI and accelerate its benefits. Based on a survey of business leaders focused on AI across economic sectors, the report examines perceptions of the risks and benefits of AI and outlines a trustworthy AI policy agenda.
Through the right policies, the federal government can play a critical role in incentivizing the adoption of trustworthy AI applications. Here are four key policy areas the government can pursue:
1. Conduct fundamental research in trustworthy AI: Historically, the federal government has played a significant role in building the foundation of emerging technologies by conducting fundamental research. AI is no different.
- The numbers: 70% of respondents supported government investment in fundamental AI research. It has been proposed that policymakers invest $25 billion in AI R&D, an investment we project would generate between $94 billion and $154 billion in additional economic impact.
- Next steps: Policymakers should seize this opportunity by providing full funding for the National Artificial Intelligence Initiative Act of 2020, which creates critical programs to facilitate these important investments in trustworthy AI.
2. Improve access to government data and models: High-quality data is the lifeblood of developing new AI applications and tools, and poor data quality can heighten risks. Governments at all levels possess a significant amount of data that could be used both to improve the training of AI systems and to create novel applications.
- The numbers: 61% of respondents agree that access to government data and models is important.
- Next steps: As a critical first step, Congress enacted the OPEN Government Data Act in 2019 to facilitate access to federal government data. Policymakers can build on the success of this law by continuing its implementation with additional funding and oversight, and ultimately by expanding its scope to include non-sensitive government models as well as datasets at the state and local levels.
3. Increase widespread access to shared computing resources: In addition to high-quality data, the development of AI applications requires significant compute capacity. However, many small startups and academic institutions lack sufficient computing resources, which prevents many stakeholders from fully tapping AI’s potential.
- The numbers: 42% of respondents supported encouraging shared computing resources to develop and train new AI models.
- Next steps: Congress took a critical first step by enacting the National AI Research Resource Task Force Act of 2020. Now, the National Science Foundation and the White House’s Office of Science and Technology Policy should fully implement the law and expeditiously develop a roadmap to unlock AI innovation for all stakeholders.
4. Enable open source tools and frameworks: Ensuring the development of trustworthy AI will require significant collaboration among government, industry, academia, and other relevant stakeholders. One key way to facilitate that collaboration is to encourage the use of open source tools and frameworks for sharing best practices and approaches to trustworthy AI.
- The numbers: 54% of survey respondents identified open source tools and frameworks as a priority.
- Next steps: An example of how this works in practice is the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF), which is intended to be a consensus-driven, cross-sector, and voluntary framework, akin to NIST’s existing Cybersecurity Framework, that stakeholders can leverage as a best practice to mitigate the risks posed by AI applications. Policymakers should recognize the importance of these types of approaches and continue to support their development and implementation.
The United States has an enormous opportunity to transform its economy and society in positive ways by leading in AI innovation. As other economies contemplate their approaches to trustworthy AI, this report outlines a path forward for U.S. policymakers to pursue a wide range of options that advance trustworthy AI domestically and empower the United States to maintain global competitiveness in this critical technology sector.