This analysis is in response to breaking news and will be updated. Please contact pr@rstreet.org to speak with the author.

The White House released the unclassified version of its National Security Memorandum (NSM) pertaining to artificial intelligence (AI) today after remarks by National Security Advisor Jake Sullivan. Broadly speaking, this action seeks to govern the use of AI in “national security systems,” continue the United States’ leadership of AI development and use, and foster AI adoption in the national security and intelligence arenas. These aims are important, but it is critical to carefully assess the NSM and the accompanying guidance document to ensure the use of AI in national security is not unduly limited and that it maximizes the ability of the United States to lead on AI in a responsible manner.

The NSM is a product of the October 30, 2023, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which some criticized as being too heavy-handed. One provision of the executive order (EO) called for an interagency process, led by the Assistant to the President for National Security Affairs and the Assistant to the President and Deputy Chief of Staff for Policy, to develop within 270 days an NSM addressing federal adoption of AI and AI use by adversaries. The cybersecurity and broader national security implications of the EO were previously explored in an R Street analysis, and R Street's Cybersecurity-AI working group has explored how AI can be a positive force for cybersecurity.

There is much to unpack in the NSM, but several high-level items stand out. It should be noted that these are preliminary reactions with a more detailed analysis to follow.

1. The NSM is not an isolated action. The Office of Management and Budget previously issued rules for federal government use of AI, but those were geared toward federal civilian agencies. The NSM is intended to complement and build upon those themes while applying to national security systems, with action steps directed both at specific federal agencies and across the federal government. At the same time, individual agencies have already done tremendous work on AI use and policy, such as the Department of Defense's (DoD's) Chief Digital and Artificial Intelligence Office. Arguably, the DoD has been leading on AI research since well before the recent focus on AI by policymakers, so existing efforts should be reviewed in light of this action.

2. The NSM is only partial. It is important to remember that there is also a classified version of this document, so the general public will not have the full product. Likewise, the NSM is intentionally accompanied by a governance and risk management framework that is intended to be updated more easily than the NSM itself as needs evolve (Section 4.2(e)(i)).

3. Maintaining US leadership is critical. The United States has advantages in many aspects of AI development and use, which is a central premise of the NSM. For example, a majority of leading hardware companies, AI developers, and technical talent are based in the United States. It is positive that the NSM seeks to support private sector developers, including with cybersecurity and counterintelligence resources. The NSM also formally designates the AI Safety Institute (AISI) as industry's primary point of contact with the federal government, although related efforts have been criticized (Section 3.3(c)). The NSM assigns numerous new duties to AISI, from issuing testing and evaluation guidance to potentially determining whether dual-use foundation models might harm public safety (Section 3.3(e)). AISI's increased role in national security is something to assess and watch carefully, especially given the role the Commerce Department would have in national security applications.

4. National security uses are limited. The NSM sets out both prohibited uses of AI and high-impact use cases that require stricter oversight and due diligence (Section 4.2(e)). This guidance must be reviewed initially and assessed continuously to ensure potential national security uses are not unduly limited, especially since the technology is evolving rapidly. Likewise, adversaries will not respect guardrails and limits, an important consideration if the United States is to avoid falling behind its adversaries while still leveraging AI responsibly.

5. AI adoption must be a priority. National security agencies and the military have already leveraged AI, and that trend should continue as adversaries seek to do the same and use it for nefarious purposes. The NSM “demands” the use of AI systems in these cases. As part of this, private sector engagement is imperative since much of the development has come from that sector. However, it is important to remember that AI is not the magic answer to everything. Take cybersecurity, for instance: AI can play a critical role, but a human should remain at the center.

Rivals will continue their efforts to surpass the United States in both AI development and use. They will also try to undermine US efforts. That could have serious consequences for US national security. This reality must be our guiding light for AI actions in both the civilian and national security arenas.
