NIST Launches AI Safety Institute

March 14, 2024

A new institute focused on artificial intelligence (AI) safety is being stood up at the National Institute of Standards and Technology (NIST), within the U.S. Department of Commerce.

Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio commented that "the U.S. AI Safety Institute will be well positioned to develop the measurement science needed to promote safe and trustworthy development of AI," as directed by President Biden’s recent executive order.

The Fiscal Year 2024 appropriations minibus recently signed into law by the President included $10 million specifically to stand up the new institute.

“The Safety Institute’s ambitious mandate to develop guidelines, evaluate models, and pursue fundamental research will be vital to addressing the risks and seizing the opportunities of AI,” said Elizabeth Kelly, Special Assistant to the President for Economic Policy, and head of the new U.S. AI Safety Institute. “While our first priority will be executing the tasks assigned to NIST in President Biden’s executive order, I look forward to building the Institute as a long-term asset for the country and the world.”

Elham Tabassi, Chief AI Advisor for NIST, will serve as chief technology advisor for the new institute.

About the Author

Howard Fienberg

Based in Washington, DC, Howard is the Insights Association's lobbyist for the marketing research and data analytics industry, focusing primarily on consumer privacy and data security, the Telephone Consumer Protection Act (TCPA), tort reform, and the funding and integrity of the decennial Census and the American Community Survey (ACS). Howard has more than two decades of public policy experience. Before the Insights Association, he worked in Congress as a senior legislative staffer for then-Representatives Christopher Cox (CA-48) and Cliff Stearns (FL-06). He also served more than four years with a science policy think tank, working to improve the understanding of scientific and social research and methodology among journalists and policymakers. Howard is co-director of The Census Project, a 900+ member coalition in support of a fair and accurate Census and ACS. He has previously served on the Board of Directors for the National Institute for Lobbying and Ethics and the Association of Government Relations Professionals. Howard has an MA in International Relations from the University of Essex in England and a BA (Honors) in Political Studies from Trent University in Canada, and has earned the Certified Association Executive (CAE), Professional Lobbying Certificate (PLC), and Public Policy Certificate (PPC) credentials. When not running advocacy for the Insights Association, Howard enjoys hockey, NFL football, sci-fi and horror movies, playing with his dog, and spending time with family and friends.

Related

Vermont H. 710 Would Restrict Development of Generative AI Systems
Vermont H. 710 would require significant risk mitigation, outside evaluation, and impact assessments...

Massachusetts AG Shares Legal Advisory on Artificial Intelligence
The Massachusetts Attorney General (AG) recently shared an advisory for artificial intelligence (AI)...

California A.B. 2877 Would Prohibit AI Training with Sensitive Data from Minors
California A.B. 2877 would prohibit developers from using the sensitive personal information of mino...

American Privacy Rights Act - 2024 Federal Privacy Legislation from Sen. Cantwell and Rep. McMorris Rodgers
The chairs of the U.S. Senate Commerce Committee and House Energy & Commerce Committee...

California A.B. 2875 Would Promote Education About AI
California A.B. 2875 would require that all K-12 students receive instruction in artificial intellig...

California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act - S.B. 1047
The California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (S.B. 1047...