Biden Administration Takes First Step Towards Setting AI Standards

Image Credit: The New York Times

Biden Administration’s Push for AI Standards

The Biden administration announced on Tuesday that it has taken the first step toward writing key standards and guidance for the safe deployment of generative artificial intelligence (AI) and for testing and safeguarding such systems.

The National Institute of Standards and Technology (NIST), part of the Commerce Department, is leading this effort and has requested public input by February 2 to inform the key testing needed to ensure the safety of AI systems.

Commerce Secretary Gina Raimondo emphasized that this initiative stems from President Joe Biden’s executive order on AI issued in October. The goal is to establish “industry standards around AI safety, security, and trust that will enable America to continue leading the world in the responsible development and use of this rapidly evolving technology.”

NIST is developing guidelines for evaluating AI, facilitating the creation of standards, and providing testing environments for assessing AI systems. The agency is specifically seeking input from AI companies and the public on managing the risks of generative AI and on reducing the risks of AI-generated misinformation.

Generative AI, which can create text, photos, and videos in response to open-ended prompts, has spurred both excitement and fears in recent months: that it could displace workers, sway elections, and potentially surpass human capabilities with catastrophic effects.

President Biden's executive order directs agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.

NIST is working to formulate guidelines for testing, including identifying where external red-teaming would be most beneficial for assessing and managing AI risks. "Red-teaming" has traditionally been used in cybersecurity to simulate potential threats; the term traces back to U.S. Cold War simulations in which the adversary was called the "red team."

In August, the first-ever U.S. public "red-teaming" assessment of AI systems took place at a major cybersecurity conference. The event, organized by AI Village, SeedAI, and Humane Intelligence, drew thousands of participants who attempted to elicit undesirable outputs or uncover vulnerabilities in AI systems. The White House said the event demonstrated how external red-teaming can be a valuable tool for identifying novel AI risks.


Source(s): Reuters

The information above is curated from reliable sources, modified for clarity. Slash Insider is not responsible for its completeness or accuracy. Please refer to the original source for the full article. Views expressed are solely those of the original authors and not necessarily of Slash Insider. We strive to deliver reliable articles but encourage readers to verify details independently.