Comment to the AI Commission on the Legal Definition of AI
Re: Legal Definition of Artificial Intelligence
Dear Co-Chairs Delaney and Ferguson, and Commissioners:
The U.S. Chamber of Commerce’s Technology Engagement Center (“C_TEC”) appreciates the opportunity to submit feedback in response to the Commission on Competitiveness, Inclusion, and Innovation’s first request for information regarding the legal definition of artificial intelligence.
C_TEC appreciates the Commission’s efforts to work with all stakeholders to develop durable, bipartisan policy recommendations that help the United States advance A.I. C_TEC firmly believes that the country that creates the right regulatory climate for artificial intelligence will be the one best positioned to harness the technology’s enormous capacity to benefit society at large.
1. What should be the legal definition of A.I.?
C_TEC believes that any definition of A.I. should be sufficiently flexible to accommodate technical progress while being precise enough to provide the necessary legal certainty. We welcome a definition of A.I. that is not overly broad and focuses on A.I. systems that learn and adapt over time.
Both the OECD Expert Group definition and the Financial Stability Board’s definition of artificial intelligence are strong legal definitions that the Commission should consider when determining how to define artificial intelligence.
OECD Expert Group on A.I.: “An A.I. system is a machine-based system that is capable of influencing the environment by making recommendations, predictions, or decisions for a given set of objectives. It does so by utilizing machine and/or human-based inputs/data to: i) perceive real and/or virtual environments; ii) abstract such perceptions into models manually or automatically; and iii) use model interpretations to formulate options for outcomes.”1
The Financial Stability Board: “The theory and development of cognitive computer systems* able to perform tasks that traditionally have required human intelligence. Cognitive computer systems are computer systems that learn and/or reason by acquiring knowledge and understanding through data and experience.”2
C_TEC notes that, importantly, both definitions focus on systems that learn.
2. Should there be sector-specific definitions for A.I.?
C_TEC believes that, rather than sectoral definitions of A.I., the definition and assessment of an A.I. system should be contextual, responsive to the relative risks and benefits of specific uses of A.I. systems. Even within a sector such as health care, there can be higher-risk and lower-risk uses of A.I. systems. Moreover, a single A.I. system can be used in thousands of different applications across various sectors and contexts.
3. Currently, there are multiple definitions already being used. Please indicate any concerns with, or necessary additions to, the following definitions:
- Definition used in the John S. McCain National Defense Authorization Act for Fiscal Year 2019 (P.L. 115-232, “John McCain NDAA”):
C_TEC believes that the John McCain NDAA definition includes some positive elements, in that it attempts to specify characteristics unique to artificial intelligence while excluding certain other software from its scope. However, each of the five listed criteria can be quite broad on its own. For example, almost any complex artificial system could “solve(s) tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.” We therefore believe that either the definition should be clarified to require that all five criteria be met, or the definition should use only the first criterion (with slight modifications as shown below), which would both encompass the key elements of the other four criteria and ensure the definition reaches only software that “learns and adapts” over time. This is the core difference between A.I. and other software.
Suggested redlining for criterion (1): Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, AND that learns from experience and improves performance when exposed to data sets.
- European A.I. Act Definition:
The proposed definition within the European A.I. Act is too broad. It could potentially reach almost any modern software-based product because, at some level, all software is logic-based. Beyond machine learning approaches of various kinds, the proposal’s definition also covers all “logic- and knowledge-based approaches” and any “statistical approach, Bayesian estimation, search and optimization methods,” making it broad enough to sweep in everything from a simple digital wristwatch to the most complex machine learning systems.
- Canadian definition:
The proposed Canadian definition is very broad and would apply AI-specific regulation to a wide swath of technologies, many of which present low risks. We believe that the definition should be made clearer and narrower to focus on A.I. systems that learn and adapt over time. These are the capabilities that are at the core of A.I.
- OECD definition:
C_TEC believes that the OECD definition should more clearly focus on learning systems while also being mindful of the distinct contexts of different stages of the A.I. development lifecycle. Such a definition would allow for more nuanced A.I. governance focused on specific problems arising in specific contexts.
4. Are there any other key definitions that should be defined considering the Commission’s goal of providing policy recommendations to enhance responsible U.S. leadership in Artificial Intelligence?
C_TEC believes there are other terms that need to be defined to better position the United States to lead in the development of A.I. globally. These terms include, but are not limited to, the following: Machine Learning, Fairness, Bias, and Explainability.
5. The Commission plans to address issues pertaining to Competitiveness, Inclusion, and Innovation.
- Other than bolstering R&D, are there any other ways the U.S. can improve its competitive posture?
C_TEC believes that an overcomplicated regulatory environment would harm the United States’ competitive posture and risk stifling innovation. For this reason, C_TEC encourages any effort to facilitate constructive dialogue among regulators and to avoid fragmented approaches across jurisdictions, which may add cost, complexity, and uncertainty for firms (particularly global firms), limiting the potential benefits for both firms and their clients. Any effort the Commission can make toward global harmonization should be encouraged, as such efforts would significantly improve the United States’ overall competitive posture.
Furthermore, C_TEC firmly believes that any regulatory approach or guidance should be principles-based, technology-neutral, and focused on outcomes, rather than imposing requirements on specific processes or techniques.
- Which issues should the Commission address to ensure A.I. is deployed in a responsible and accountable manner?
C_TEC firmly believes that the Commission should look at ways to help support and build on ongoing efforts to establish best practices in the field of responsible A.I. development. We believe that legal certainty around A.I. developers’ obligations can be achieved while still preserving the flexibility to accommodate changing needs and norms – and the ability to take full advantage of the powerful economic benefits of A.I. – as the technology evolves.
As with other emerging technologies, a proactive and thoughtful approach to determining the correct governance structure is necessary. It is essential to ensure that A.I. governance is based on our fundamental values of respect for human rights, democracy, and the rule of law. Some of the elements that contribute to such a responsible A.I. approach are:
- Identifying and Addressing Fairness, Bias & Discrimination: Advancing A.I. systems that work fairly and equitably for everyone is incredibly important. There is not yet consensus on how fairness should be defined or how fairness definitions should be operationalized, nor is there agreement on how to reconcile fairness with users’ privacy interests. To determine whether an algorithm is causing a skewed distribution of outcomes along protected class lines, an organization would first need to collect or infer protected class data about people, which may conflict with data minimization and data protection principles (see the illustrative sketch following this list). Existing legislative proposals around A.I. often attempt to address fairness by mandating certain requirements or quality standards for data sets. Focusing exclusively on data sets, however, is problematic because it assumes that adverse bias arises primarily from the data. A better regulatory approach would be a holistic one that considers the specific context in which the A.I. system is used and whether the system’s design, inputs, and outcomes are appropriate for that context, rather than focusing exclusively on the inputs. A good approach treats fairness as a process; for example, legislative language could require companies to assess the fairness risks and goals for a specific product and to document those decisions, processes, and practices.
- Transparency & Explainability: Regulation of transparency and explainability should be risk-based and user-focused. Any obligation to produce impact assessments or reports, or to explain the logic of A.I. systems, should apply only to high-risk applications. Additionally, provisions around transparency and explainability should not be overly prescriptive about technical details, so that stakeholders have enough room to develop and deliver the right tools to explain to both expert and non-expert audiences how these systems reach decisions.
- Multi-stakeholder Governance Framework: A.I. is still an emerging technology for which standards and practices are being developed around the world. Building these norms, standards, and practices will require a concerted effort to coordinate among the many actors in the A.I. ecosystem and to ensure that the A.I. regulatory landscape is not bifurcated to the detriment of people, innovation, and the competitiveness of digital economies. Only through collaboration with, and engagement of, all stakeholders can we strike the right balance between regulation and innovation, crafting standards that are people-centric while also being practical and achievable, so that crucial innovation is not hindered. C_TEC highly recommends that the Commission collaborate with other organizations and agencies, such as NIST, in any effort to coordinate best practices.
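To illustrate the fairness point above concretely, the short sketch below (in Python, offered purely as an illustration; the function names, group labels, sample data, and the parity metric shown are illustrative assumptions, not a methodology drawn from any existing proposal) shows that even a simple check for skewed outcomes along protected class lines cannot be performed without per-person protected class data:

```python
# Illustrative sketch only: a minimal demographic-parity style check.
# All names here (selection_rates, disparate_impact_ratio, the sample data)
# are hypothetical and chosen for illustration; they do not reflect any
# particular legislative proposal or required methodology.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (protected_group, favorable_outcome) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            favorable[group] += 1
    # Share of favorable outcomes within each protected group.
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

# Note that the check cannot run at all without knowing each individual's
# protected group, which is the data-collection tension noted above.
sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(sample)
print(rates)                          # roughly {'group_a': 0.67, 'group_b': 0.33}
print(disparate_impact_ratio(rates))  # 0.5
```

Even this toy calculation depends on knowing each individual’s protected group, underscoring the tension with data minimization principles described above.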
- What questions should the Commission address in terms of preparing the workforce for an AI-driven economy?
C_TEC believes that the Commission should look at how A.I. can be utilized to assist in the training of the future workforce and how A.I. can help increase the overall productivity of the labor market. C_TEC strongly believes that artificial intelligence can be an essential tool for Americans in their work. We encourage the Commission to look at what skills the future workforce will need to utilize artificial intelligence efficiently and ethically.
- Are there any other issues the Commission should address?
C_TEC encourages the Commission to look at ways the government can use existing legislative strategies in conjunction with other co- or self-regulatory instruments (corporate ethical frameworks, standards, ethical codes of conduct, etc.). Regulatory sandboxes (R.S.) and policy prototyping programs (PPPs) are two methods for testing future laws or other governance instruments on algorithmic accountability. Given the difficulty of assessing the most appropriate, feasible, and balanced legislative instruments on a topic as complex as algorithmic accountability, regulatory sandboxes and PPPs can provide a safe testing ground for assessing different iterations of legislative models of governance before their actual enactment.
Conclusion:
C_TEC once again appreciates the opportunity to provide feedback on the Commission on Competitiveness, Inclusion, and Innovation’s first request for information. Furthermore, we believe that developing durable, bipartisan policy solutions is extremely important to giving the United States a regulatory environment that allows A.I. to thrive and does not stifle innovation and advancement within the field. C_TEC stands ready to work with the Commission and looks forward to providing further feedback on your ongoing efforts.
Sincerely,
Michael Richards
Director, Policy
Chamber Technology Engagement Center
U.S. Chamber of Commerce