President Biden has released a series of documents that grapples with the challenges of using A.I. tools to speed up government operations.
Credit: Haiyun Jiang for The New York Times

New Guidelines Serve as Government ‘Guardrails’ for A.I. Tools

A national security memorandum detailed how agencies should streamline operations with artificial intelligence safely.


President Biden is expected to sign on Thursday the first national security memorandum detailing how the Pentagon and the intelligence agencies should use and protect artificial intelligence technology, placing “guardrails” on how such tools are employed in decisions about nuclear weapons or about who is granted asylum.

The new document is the latest in a series Mr. Biden has issued that grapples with the challenges of using A.I. tools to speed up government operations — from detecting cyberattacks to predicting extreme weather — while limiting the most dystopian possibilities, including the development of autonomous weapons.

But most of the deadlines the order sets for agencies to conduct studies on applying or regulating the tools will fall due after Mr. Biden leaves office. While most national security memorandums are adopted or amended on the margins by successive presidents, it is far from clear how former President Donald J. Trump would approach the issue if he is elected next month.

The new directive will be announced on Thursday at the National War College by Jake Sullivan, the national security adviser, who prompted many of the efforts to examine the potential uses of the new tools and the threats they could pose to the United States. He acknowledged in remarks prepared for the event that one challenge is that the U.S. government funds or owns very few of the key A.I. technologies — and that they evolve so fast they defy regulation.

“Our government took an early and critical role in shaping developments — from nuclear physics and space exploration, to personal computing and the internet,” Mr. Sullivan is expected to say. “That’s not been the case with most of the A.I. revolution. While the Department of Defense and other agencies funded a large share of A.I. work in the 20th century, the private sector has propelled much of the last decade of progress.”

Mr. Biden’s aides have said, however, that the absence of guidelines about how A.I. can be used by the Pentagon, the C.I.A., or even the Justice Department is impeding development, as companies worry about which applications would be legal.

The new memorandum runs about 50 pages in its unclassified version, with a classified appendix. Some of its conclusions are obvious: It rules out, for example, ever letting A.I. systems decide when to launch nuclear weapons; that decision is left to the president as commander in chief.

While it seems obvious that no one would want the fate of millions to hang on an algorithm’s pick, the explicit statement is part of an effort to lure China into deeper talks about the limits that need to be placed on high-risk applications of artificial intelligence. An initial conversation with China on the topic, conducted in Europe this past spring, made no real progress.

“This focuses attention on the issue of how these tools affect the most critical decisions governments make,” said Herb Lin, a Stanford University scholar who has spent years examining the intersection of artificial intelligence and nuclear decision-making.

“Obviously, no one is going to give the nuclear codes to ChatGPT,” Dr. Lin said. “But there is a remaining question about how much information that the president is getting is processed and filtered through A.I. systems — and whether that is a bad thing.”

But the rules for nonnuclear weapons are murkier. They urge keeping human decision makers “on the loop” for targeting decisions, overseeing any A.I. tools that may be aiming weapons, but without undercutting the effectiveness of those weapons. That is especially difficult if Russia and China, as seems likely, begin to make greater use of fully autonomous weapons that operate at blazing speeds because humans are removed from battlefield decisions.

Similarly, the president’s new A.I. “guardrails” would prohibit letting artificial intelligence tools make a decision on granting asylum. And they would prohibit tracking someone based on ethnicity or religion, or classifying someone as a “known terrorist” without a human weighing in.

Perhaps the most intriguing part of the order is that it treats private-sector advances in artificial intelligence as national assets that need to be protected — much as early nuclear weapons were — from spying or theft by foreign adversaries. The order calls for intelligence agencies to begin protecting work on large language models or the chips used to power their development as national treasures, and to provide private-sector developers with up-to-the-minute intelligence to protect their inventions.

It empowers a new and still-obscure organization, the A.I. Safety Institute, housed within the National Institute of Standards and Technology, to help inspect A.I. tools before they are released to ensure they could not aid a terrorist group in building biological weapons or help a hostile nation like North Korea improve the accuracy of its missiles.

And it describes at length efforts to bring the best A.I. specialists from around the world to the United States, much as the United States sought to attract nuclear and military scientists after World War II, rather than risk them working for a rival like Russia.

