NIST Proposes a Risk Management Approach for Artificial Intelligence

September 2, 2022

The National Institute of Standards and Technology (NIST) is seeking input on a draft risk management approach to artificial intelligence (AI): the AI Risk Management Framework (AI RMF).

The framework (currently in second draft form) is “intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”

Per the framework, “AI systems can amplify, perpetuate, or exacerbate inequitable outcomes. AI systems may exhibit emergent properties or lead to unintended consequences for individuals and communities. A useful mathematical representation of the data interactions that drive the AI system’s behavior is not fully known, which makes current methods for measuring risks and navigating the risk-benefits tradeoff inadequate. AI risks may arise from the data used to train the AI system, the AI system itself, the use of the AI system, or interaction of people with the AI system. While views about what makes an AI technology trustworthy differ, there are certain key characteristics of trustworthy systems. Trustworthy AI is valid and reliable, safe, fair and bias is managed, secure and resilient, accountable and transparent, explainable and interpretable, and privacy-enhanced.”

NIST defines an “AI system” as “an engineered or machine-based system that can, for a given set of human-defined objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”

NIST also proposed an AI risk management playbook that “includes suggested actions, references, and documentation guidance for stakeholders to achieve the outcomes for ‘Map’ and ‘Govern’ – two of the four proposed functions in the AI RMF. Draft material for the other two functions, Measure and Manage, will be released at a later date.”

NIST seeks public comment on the framework and playbook by September 29, 2022. Comments will be made publicly available, so they should not include personal or sensitive information.

IA members with suggestions or recommendations on the framework's potential impact on the insights industry should connect with IA staff.

UPDATE: NIST finalized the AI Risk Management Framework (AI RMF 1.0) in January 2023.

About the Author

Howard Fienberg

Based in Washington, DC, Howard is the Insights Association's lobbyist for the marketing research and data analytics industry, focusing primarily on consumer privacy and data security, the Telephone Consumer Protection Act (TCPA), tort reform, and the funding and integrity of the decennial Census and the American Community Survey (ACS). Howard has more than two decades of public policy experience. Before the Insights Association, he worked in Congress as a senior legislative staffer for then-Representatives Christopher Cox (CA-48) and Cliff Stearns (FL-06). He also served more than four years with a science policy think tank, working to improve the understanding of scientific and social research and methodology among journalists and policymakers. Howard is co-director of The Census Project, a 900+ member coalition in support of a fair and accurate Census and ACS, and previously served on the boards of directors of the National Institute for Lobbying and Ethics and the Association of Government Relations Professionals. He holds an MA in International Relations from the University of Essex in England and a BA (Honors) in Political Studies from Trent University in Canada, and has earned the Certified Association Executive (CAE), Professional Lobbying Certificate (PLC), and Public Policy Certificate (PPC) credentials. When not running advocacy for the Insights Association, Howard enjoys hockey, NFL football, sci-fi and horror movies, playing with his dog, and spending time with family and friends.
