The Biden administration has initiated the development of crucial standards for AI technology. This step follows President Biden’s executive order from October, which seeks to solidify America’s stance as a pioneer in ethical AI use.
The Department of Commerce’s National Institute of Standards and Technology (NIST) is spearheading this initiative. NIST has opened its doors for contributions until February 2, inviting public input to refine AI system testing — a cornerstone for ensuring AI safety.
Commerce Secretary Gina Raimondo highlighted the administration’s commitment to fostering safe, secure, and trustworthy AI. This aligns with the President’s vision for responsible AI advancement, which is not only rapid but also aligns with American leadership values.
NIST’s agenda includes crafting evaluation guidelines and testing environments for AI, alongside developing industry benchmarks. The focus is on mitigating the risks associated with generative AI — a technology capable of creating realistic texts, images, and videos, which has raised concerns over job security, election integrity, and potential misuses.
Moreover, the administration is exploring “red-teaming” to assess AI vulnerabilities. This method, rooted in cybersecurity, involves simulating attacks to uncover potential threats. The effectiveness of red-teaming was showcased in a U.S. public assessment event, underscoring its value in understanding and managing AI-related risks.