US Moves to Regulate Artificial Intelligence

Regulations Will Impact Government Contractors and Dictate Procurement Standards

Background

Recent moves by the Biden White House and the chair of the United States Senate Select Committee on Intelligence signal the US government's intent to assert a leadership role in the emerging field of AI law. Government agencies and government contractors alike will be affected.

First, the Biden Administration's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence signals that the US seeks a leadership position in the global conversation around the regulation of AI.[1] The EO leverages the US Commerce Department's National Institute of Standards and Technology (NIST), which recently promulgated standards affecting artificial intelligence. Those standards are voluntary.

But that may be changing. As Politico reports, a bill promoted by Senate Intelligence Chairman Mark Warner (D-VA) would apply the NIST provisions to agencies and their procurement standards; in particular, the bill would compel agency rulemaking in alignment with the Artificial Intelligence Risk Management Framework.[2] Parallel state efforts, notably early California regulations, may also be informed by the new law.

Who Will Be Affected?

US federal government contractors will be affected first.[3] State procurement may also be impacted, as state procurement officials adopt or revise similar requirements.

Action Items

These actions combine to demonstrate an emerging government-wide preference for systems and solutions that align with the NIST framework.[4] While the framework remains voluntary in its current iteration, government contractors should pursue adoption now.

  • Promote adoption of the NIST Artificial Intelligence Risk Management Framework
  • Monitor NIST and related rulemaking
  • Ensure company systems align with the emerging guidance


[1] The EO contemplates a key role for the US legislature, too; however, the US Congress has repeatedly failed to pass AI legislation.

[2] The Framework is built around four functions, collectively called its core: govern, map, measure, and manage. The core “provides outcomes and actions that enable dialogue, understanding, and activities to manage AI risks[.]”

[3] The most ambitious regulations may reach into non-public-sector markets. For example, when the John S. McCain National Defense Authorization Act for Fiscal Year 2019 sought to restrict Huawei, the resulting rulemaking required contractors to certify that the entire company, not just the division engaged in government contracting, complied with the prohibition throughout company-wide systems and processes.

[4] The term “framework” means document number NIST AI 100-1, “Artificial Intelligence Risk Management Framework,” or any successor document.