Department of Commerce Announces New Guidance, Tools 270 Days Following President Biden’s Executive Order on AI

FOR IMMEDIATE RELEASE

**For the first time, Commerce makes public new NIST draft guidance from the U.S. AI Safety Institute to help AI developers evaluate and mitigate risks stemming from generative AI and dual-use foundation models.**

Read the White House Fact sheet on Administration-wide actions on AI.

The U.S. Department of Commerce announced today, on the 270-day mark since President Biden’s Executive Order (EO) on the Safe, Secure and Trustworthy Development of AI, the release of new guidance and software to help improve the safety, security and trustworthiness of artificial intelligence (AI) systems.

The Department’s National Institute of Standards and Technology (NIST) released three final guidance documents that were first released in April for public comment, as well as a draft guidance document from the U.S. AI Safety Institute that is intended to help mitigate risks. NIST is also releasing a software package designed to measure how adversarial attacks can degrade the performance of an AI system. In addition, Commerce’s U.S. Patent and Trademark Office (USPTO) issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including AI, and the National Telecommunications and Information Administration (NTIA) delivered a report to the White House that examines the risks and benefits of large AI models with widely available weights.

“Under President Biden and Vice President Harris’ leadership, we at the Commerce Department have been working tirelessly to implement the historic Executive Order on AI and have made significant progress in the nine months since we were tasked with these critical responsibilities,” said U.S. Secretary of Commerce Gina Raimondo. “AI is the defining technology of our generation, so we are running fast to keep pace and help ensure the safe development and deployment of AI. Today’s announcements demonstrate our commitment to giving AI developers, deployers, and users the tools they need to safely harness the potential of AI, while minimizing its associated risks. We’ve made great progress, but have a lot of work ahead. We will keep up the momentum to safeguard America’s role as the global leader in AI.”

NIST’s releases cover varied aspects of AI technology. Two of them were made public today for the first time. One is the initial public draft of a guidance document from the U.S. AI Safety Institute, which is intended to help AI developers evaluate and mitigate the risks stemming from generative AI and dual-use foundation models — AI systems that can be used for either beneficial or harmful purposes. The other is a testing platform designed to help AI system users and developers measure how certain types of attacks can degrade the performance of an AI system. Of the remaining three releases, two are guidance documents designed to help manage the risks of generative AI — the technology that enables many chatbots as well as text-based image and video creation tools — and serve as companion resources to NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF). The third proposes a plan for U.S. stakeholders to work with others around the globe on AI standards.

“For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software,” said Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio. “These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation.”

USPTO’s guidance update will assist USPTO personnel and stakeholders in determining the subject matter eligibility of AI inventions under patent law (35 U.S.C. § 101). This latest update builds on previous guidance by bringing further clarity and consistency to how the USPTO and applicants should evaluate the subject matter eligibility of claims in patent applications and patents involving inventions related to AI technology. The guidance update also introduces three new examples showing how to apply this guidance across a wide range of technologies.

“The USPTO remains committed to fostering and protecting innovation in critical and emerging technologies, including AI,” said Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO. “We look forward to hearing public feedback on this guidance update, which will provide further clarity on evaluating subject matter eligibility of AI inventions while incentivizing innovations needed to solve world and community problems.”

NTIA’s soon-to-be-published report will review the risks and benefits of dual-use foundation models whose model weights are widely available (i.e., “open-weight models”) and develop policy recommendations for maximizing those benefits while mitigating the risks. Open-weight models allow developers to build upon and adapt previous work, broadening AI tools’ availability to small companies, researchers, nonprofits, and individuals.

Additional information on today’s announcements from NIST can be found below.

Protecting Against Misuse Risk from Dual-Use Foundation Models

AI foundation models are powerful tools that are useful across a broad range of tasks and are sometimes called “dual-use” because of their potential for both benefit and harm. NIST’s U.S. AI Safety Institute has released the initial public draft of its guidelines on Managing Misuse Risk for Dual-Use Foundation Models, which outlines voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety and national security.

The draft guidance offers seven key approaches for mitigating the risk that models will be misused, along with recommendations for how to implement them and how to be transparent about their implementation. Together, these practices can help prevent models from enabling harm through activities such as developing biological weapons, carrying out offensive cyber operations, and generating child sexual abuse material and non-consensual intimate imagery.

The AI Safety Institute is accepting comments from the public on the draft Managing Misuse Risk for Dual-Use Foundation Models until Sept. 9, 2024, at 11:59 p.m. Eastern Time. Comments can be submitted electronically to [email protected] with “NIST AI 800-1, Managing Misuse Risk for Dual-Use Foundation Models” in the subject line.

Testing How AI Models Respond to Attacks

One of the vulnerabilities of an AI system is the model at its core. A model learns to make decisions by being exposed to large amounts of training data. But if adversaries poison that training data with inaccuracies — for example, by introducing data that causes the model to misidentify stop signs as speed limit signs — the model can make incorrect, potentially disastrous decisions. Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra, a new software package aimed at helping AI developers and customers determine how well their AI software stands up to a variety of adversarial attacks.

The open-source software, available for free download, could help the community, including government agencies and small- to medium-sized businesses, conduct evaluations to assess AI developers’ claims about their systems’ performance. The software responds to Executive Order section 4.1(ii)(B), which requires NIST to help with model testing. Dioptra does this by allowing a user to determine what sorts of attacks would make the model perform less effectively, and by quantifying the performance reduction so that the user can learn how often and under what circumstances the system would fail.
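
To make the idea concrete, the sketch below simulates a simple data-poisoning attack: it flips a growing fraction of a classifier’s training labels and measures how test accuracy degrades. This is a generic illustration written with scikit-learn, not Dioptra’s actual interface; the dataset, model, and attack are assumptions chosen only to keep the example self-contained.

```python
# Illustrative sketch only -- NOT Dioptra's actual interface. It shows the
# kind of experiment such a tool automates: poison a growing fraction of
# training labels, retrain, and quantify the drop in test accuracy.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

def accuracy_under_label_poisoning(poison_fraction: float) -> float:
    """Flip a random fraction of training labels, then report test accuracy."""
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    y_poisoned[idx] = rng.integers(0, 10, size=n_poison)  # random wrong digits
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

# Quantify the performance reduction as the attack strength grows.
for fraction in (0.0, 0.1, 0.3, 0.5):
    print(f"{fraction:.0%} of labels poisoned -> "
          f"test accuracy {accuracy_under_label_poisoning(fraction):.3f}")
```

The same pattern generalizes to other evaluations: vary the attack, rerun training or inference, and record the performance delta under each condition.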

Managing the Risks of Generative AI

The AI RMF Generative AI Profile (NIST AI 600-1) can help organizations identify the unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities. The guidance is intended to be a companion resource for users of NIST’s AI RMF. It centers on a list of 12 risks and just over 200 actions that developers can take to manage them.

The 12 risks include a lowered barrier to entry for cybersecurity attacks, the production of mis- and disinformation or hate speech and other harmful content, and generative AI systems confabulating or “hallucinating” output. After describing each risk, the document presents a matrix of actions that developers can take to mitigate it, mapped to the AI RMF.

Reducing Threats to the Data Used to Train AI Systems

The second finalized publication, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A), is designed to be used alongside the Secure Software Development Framework (SP 800-218). While the SSDF is broadly concerned with software coding practices, the companion resource expands the SSDF in part to address a major concern with generative AI systems: they can be compromised with malicious training data that adversely affects the AI system’s performance.

In addition to covering aspects of the training and use of AI systems, this guidance document identifies potential risk factors and strategies to address them. Among other recommendations, it suggests analyzing training data for signs of poisoning, bias, homogeneity and tampering.
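
As a rough sketch of what such an analysis could look like in practice, the example below runs two simple heuristics over a labeled text dataset: an exact-duplicate check (one possible signal of tampering or homogeneity) and a label-skew check (one possible signal of bias). The specific checks, names, and thresholds are illustrative assumptions, not requirements drawn from SP 800-218A.

```python
# Illustrative assumptions only -- simple heuristics, not checks mandated
# by SP 800-218A. Screens (text, label) training records for two signals.
import hashlib
from collections import Counter

def screen_training_data(records: list[tuple[str, str]]) -> dict:
    """Report duplicate-record and label-skew statistics for human review."""
    texts = [text for text, _ in records]
    labels = [label for _, label in records]

    # Tampering/homogeneity signal: exact duplicates can indicate an
    # attacker flooding the set with copies of a poisoned example.
    digests = Counter(hashlib.sha256(t.encode()).hexdigest() for t in texts)
    duplicate_records = sum(n - 1 for n in digests.values() if n > 1)

    # Bias signal: one label dominating the distribution.
    majority_share = Counter(labels).most_common(1)[0][1] / len(labels)

    return {
        "n_records": len(records),
        "duplicate_records": duplicate_records,
        "majority_label_share": round(majority_share, 3),
        "flag_for_review": duplicate_records > 0 or majority_share > 0.9,
    }

sample = [("the sky is blue", "benign")] * 3 + [("transfer funds now", "spam")]
print(screen_training_data(sample))
# -> {'n_records': 4, 'duplicate_records': 2, 'majority_label_share': 0.75,
#     'flag_for_review': True}
```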

Global Engagement on AI Standards

AI systems are transforming society not only within the U.S., but around the world. A Plan for Global Engagement on AI Standards (NIST AI 100-5), today’s third finalized publication, is designed to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.

The guidance is informed by priorities outlined in the NIST-developed Plan for Federal Engagement in AI Standards and Related Tools and is tied to the National Standards Strategy for Critical and Emerging Technology. This publication suggests that a broader range of multidisciplinary stakeholders from many countries participate in the standards development process.