The White House has unveiled draft rules that would require federal agencies to rigorously evaluate and continuously monitor algorithms used in critical sectors such as healthcare, law enforcement, and housing for discriminatory or otherwise harmful effects on people's rights.
If implemented, the rules could substantially change how the US government uses AI-dependent systems, including the FBI's facial recognition technology, which has been criticized for failing to adequately protect civil liberties. The proposed regulations require agencies to assess their existing algorithms by August 2024 and to stop using any that fall short of compliance.
“If the benefits do not significantly outweigh the risks, agencies should refrain from employing the AI,” the memo emphasizes. However, it also introduces an exception for models related to national security, allowing agencies to grant themselves waivers if discontinuing an AI model’s use would impede critical agency operations to an unacceptable degree.
The draft rules, released by the White House Office of Management and Budget (OMB), come just two days after President Biden signed an executive order laying out a comprehensive strategy to expand government adoption of AI while prioritizing safeguards against potential harms from the technology. Protecting the public from AI's risks was a central theme of the executive order, which included reporting requirements for developers of large AI models and computing clusters.
The OMB's proposed rules also require testing and independent evaluation of algorithms acquired from private companies under federal contracts. The OMB, which coordinates federal agencies' work with presidential priorities, will oversee this process. Agencies will be tasked with evaluating and monitoring both current and future algorithms for potential adverse impacts on privacy, democracy, market concentration, and access to government services.
Under the draft memo, algorithms must be tested and evaluated by individuals not directly involved in developing the system, and external "red teaming" tests of generative AI models are encouraged. The directive also instructs federal agency leaders to explore uses of generative AI, such as OpenAI's ChatGPT, while mitigating the associated risks.
President Biden’s AI executive order requires the OMB to issue guidance to federal agencies within the next five months. The office is inviting public comments on the draft policy until December 5.
“The framework establishes a set of binding requirements for federal agencies to implement safeguards for the use of AI, ensuring that we can harness its benefits while maintaining public trust in the services provided by the federal government,” stated Jason Miller, OMB’s deputy director for management.