On March 28, 2024, the White House Office of Management and Budget (OMB) released a memorandum concerning AI, with the specific subject matter of “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.” The scope of the memorandum is limited to the use of AI in Federal agencies, yet it notes that it has far-ranging importance, as Federal agencies “have a distinct responsibility to identify and manage AI risks because of the role they play in our society.”
The memorandum focuses heavily on risk mitigation and requires that concrete safeguards for Federal agency use of AI be implemented by December 1, 2024; otherwise, agencies must pause the use of non-compliant AI models until remedial measures are implemented.
RIGHTS-IMPACTING AI AND SAFETY-IMPACTING AI
Risk mitigation in the memorandum places much focus on rights-impacting AI and safety-impacting AI. Rights-impacting AI is defined as “AI whose output serves as a principal basis for a decision or action concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect” when it comes to such things as civil rights, equal opportunities, and access to government services. Safety-impacting AI is defined as “AI whose output produces an action or serves as a principal basis for a decision that has the potential to significantly impact the safety” of such things as human life or well-being, climate, critical infrastructure, and strategic assets or resources.
The memorandum specifies in Appendix I which categories of AI are “presumed to impact rights and safety.” Additionally, agencies will be required, as applicable, to “identify which use cases are safety-impacting and rights-impacting AI and report additional detail on the risks … that such uses pose.”
POLICY IN RELATION TO HEALTHCARE
When it comes to healthcare, the memorandum touches upon the use of AI in the Federal healthcare system and notes the importance of human oversight over AI tools, such as when a tool serves to support critical diagnostic decisions. It provides as an example that “a human being [must be] overseeing the process to verify the tools’ results and avoids disparities in healthcare access.”
According to an accompanying Fact Sheet released by the White House, AI is also used to advance public health. It notes that the Centers for Disease Control and Prevention is “using AI to predict the spread of disease and detect the illicit use of opioids, and [that] the Centers for Medicare and Medicaid Services is using AI to reduce waste and identify anomalies in drug costs.”
Additionally, OMB addresses rights-impacting AI and safety-impacting AI in relation to healthcare. When it comes to rights-impacting AI, OMB seeks to help ensure that the rights of individuals are protected in healthcare.
When it comes to safety-impacting AI, the following uses in relation to healthcare all fall under this category:
- Carrying out the medically relevant functions of medical devices;
- Providing medical treatments;
- Determining medical treatments;
- Providing medical or insurance health-risk assessments;
- Providing drug-addiction assessments or determining access to medication;
- Conducting risk assessments for suicide or other violence;
- Detecting or preventing mental-health issues;
- Flagging patients for interventions; and
- Allocating care in the context of public insurance or controlling health insurance costs.
AI GOVERNANCE
OMB notes that functions inherently present within the Federal agencies can support the “strong governance structure” needed when it comes to artificial intelligence. The memorandum states that “agencies are encouraged to strategically draw upon their policy, programmatic, research and evaluation, and regulatory functions to support the implementation of this memorandum’s requirements and recommendations.”
Additionally, the memorandum calls for the designation of a Chief AI Officer (CAIO) within each agency who will “bear primary responsibility on behalf of the head of their agency” for the implementation and coordination of AI policy. Compliance plans must also be developed on an ongoing basis.
In terms of risk management for artificial intelligence, among other things, CAIOs are responsible for:
- managing an agency program that supports the enterprise in identifying and managing risks from the use of AI, especially safety-impacting and rights-impacting AI;
- working with relevant senior agency officials to establish or update processes to measure, monitor, and evaluate the ongoing performance and effectiveness of the agency’s AI applications and whether the AI is advancing the agency’s mission and meeting performance objectives;
- overseeing agency compliance with the requirements to manage risks from the use of AI, including those established in this memorandum and in relevant law and policy;
- conducting risk assessments, as necessary, of the agency’s AI applications to ensure compliance with this memorandum; and
- in partnership with relevant agency officials (e.g. authorizing, procurement, legal, data governance, human capital, and oversight officials), establishing controls to ensure that their agency does not use AI that is not in compliance with this memorandum, including by assisting these relevant agency officials in evaluating Authorizations to Operate based on risks from the use of AI.
AI USE AND INNOVATION
The OMB AI policy doesn’t just seek to govern the use of AI, but also to encourage its use and innovation. It touches upon a wide range of activities, starting with seeking to “improve operations and deliver efficiencies across the Federal government” and improve the “accessibility of government services.”
Additional specific areas cited include:
- Improving public health;
- Reducing food insecurity;
- Addressing the climate crisis;
- Advancing equitable outcomes;
- Protecting democracy and human rights; and
- Growing economic competitiveness.
The memorandum also calls for agencies to prepare adequate IT infrastructure, such as high-performance computing infrastructure specialized for AI training and inference as well as data infrastructure, such as the “capacity to sufficiently share, curate, and govern agency data for use in training, testing and operating AI.” Cybersecurity concerns and generative AI are also touched upon.
MANAGING RISK FROM THE USE OF AI
The memorandum requires Federal agencies to implement minimum risk practices, as defined therein, for rights-impacting and safety-impacting AI and sets a deadline of December 1, 2024. Agencies must stop using non-compliant AI in their operations after that date until the non-compliance is remedied.
Minimum risk practices include the following:
- Completing an AI impact assessment;
- Testing the AI for performance in a real-world context;
- Independently evaluating the AI;
- Conducting ongoing monitoring;
- Regularly evaluating risks from the use of AI;
- Mitigating emerging risks to rights and safety;
- Ensuring adequate human training and assessment;
- Providing additional human oversight, intervention, and accountability as part of decisions or actions that could result in a significant impact on rights or safety; and
- Providing public notice and plain-language documentation.