The White House hopes to guide how technologists develop artificial intelligence and how the government procures and adopts AI tools under a new executive order unveiled Monday.
The order lays out some basic safety rules to prevent AI-enabled consumer fraud, requires red-team testing of AI software for safety, and issues guidance on privacy protections. According to a press release, the White House is also working to establish new multilateral AI agreements with other nations to ensure AI’s safety and to accelerate AI adoption in government.
The White House’s order responds to growing public concern about the impact of artificial intelligence on employment, education and other areas. Those concerns are at odds with warnings from key business leaders and others that China’s growing investment in AI could give it an economic, technological, and military advantage in the coming decades. The new executive order aims both to address concerns about AI misuse and its use in hazardous settings and to encourage AI’s advancement.
White House Deputy Chief of Staff Bruce Reed called the order “the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks.”
On safety, the order directs the National Institute of Standards and Technology, or NIST, to draft standards for red-team exercises to test the safety of AI tools before they’re released.
“The Department of Homeland Security is implementing these standards in critical infrastructure sectors and establishing the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks,” according to the White House fact sheet.
The order also establishes a cybersecurity program to explore how AI can enable attacks. It requires the developers of the “most powerful AI systems” to share their safety tests with the government, and it directs the Department of Commerce to develop methods to detect AI-generated material that might be used for disinformation or fraud.
It calls on the National Science Foundation to further develop cryptographic tools and other technologies to protect personal and private data that could be collected by AI tools, and it sets guidelines to prevent organizations and institutions from using AI in discriminatory ways. It also calls on the government to do more research on AI’s effects on the labor force.
Additionally, a large portion of the order addresses how the government can better embrace AI and build new partnerships and working strategies with like-minded democratic nations to do so.
“The administration has already consulted widely on AI governance frameworks over the past several months–engaging with Australia, Brazil, Canada, Chile, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK,” the fact sheet said. The order calls on the State and Commerce departments to “lead an effort to establish robust international frameworks for harnessing AI’s benefits and managing its risks and ensuring safety.”
Still, according to the fact sheet, “More action will be required, and the administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”
The post White House unveils executive order on AI safety, competition appeared first on Defense One.