What Could Horizontal AI Legislation Look Like In the US? Exploring the US Algorithmic Accountability Act

January 9, 2023
Authored by
Adam Williams
Content Writer at Holistic AI

Policymakers around the world are increasingly recognizing the importance of regulation and legislation to promote safety, fairness, and ethics in the use of AI tools.

While the US has made significant progress with vertical legislation that targets specific use cases – such as the Illinois Artificial Intelligence Video Interview Act, New York City Local Law 144 (aka the bias audit law), and Colorado's insurtech law SB169 – Europe has made the most meaningful steps towards horizontal legislation that targets multiple use cases at once.

Indeed, the EU AI Act, with its risk-based approach, seeks to become the global gold standard for AI regulation. However, the European Commission is not the only authority making progress towards horizontal regulation – Canada, DC, California, and the U.S. (at the Federal level) have all proposed their own horizontal regulations targeting what lawmakers in each jurisdiction have identified as the most critical applications of AI.

Compared to their EU counterpart, these proposals have been less successful. The U.S. federal Algorithmic Accountability Act, for example, has now been introduced three times: first in 2019 by Representative Yvette Clarke, then in 2022 by Senator Ron Wyden, and again in 2023 by Representative Clarke.
With this said, it's clear that the US – at the local, state, and federal levels – is determined to impose more conditions on the use of algorithms and AI, meaning enterprises will soon need to navigate an influx of rules.

In this blog post, we’ll look at the history and potential future of the Algorithmic Accountability Act of 2023. Using the latest version of the bill as well as learnings from horizontal legislation from elsewhere in the world, we can extrapolate what horizontal legislation might look like in the US, and how organizations can begin to prepare.

Skip to:

Would the Algorithmic Accountability Act apply to my systems?

Would the Algorithmic Accountability Act apply to my organization?

What are the potential requirements of the Algorithmic Accountability Act?

De-risking horizontal AI regulation in the US in advance

What is the likelihood of horizontal AI regulation in the US?

Would the Algorithmic Accountability Act apply to my systems?

The Algorithmic Accountability Act targets systems used to make critical decisions in what is known as an augmented critical decision process.

In short, it's a system used to help make a decision or judgment that has a legal, material, or otherwise significant impact on the life of a consumer, stakeholder, or coworker.

If you're using a system to dictate access to, availability of, or the cost of critical areas of life – such as education, employment, essential utilities, family planning, financial services, healthcare, housing, or legal services – your system would likely be regulated by the proposed act.


The proposed legislation defines an automated decision system as:

“any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.”

It's worth noting that the Act would not apply to passive computing infrastructure – intermediary technology that supports regulated systems, such as web hosting tools, data storage, and cybersecurity.

Would the Algorithmic Accountability Act apply to my organization?

The Algorithmic Accountability Act would apply to your organization if the Federal Trade Commission (FTC) has jurisdiction over it – as established under section 5(a)(2) of the Federal Trade Commission Act – and it meets the following criteria:

  1. Deploys an augmented critical decision process; and
  2. Had greater than $50,000,000 in average annual revenue, or is deemed to have more than $250,000,000 in equity value, for the 3-taxable-year period preceding the most recent fiscal year; or
  3. Possesses, analyzes, or uses identifying information on more than 1,000,000 consumers, households, or consumer devices for the development or deployment of any automated decision system or augmented critical decision process; or
  4. Is substantially owned, operated, or controlled by an entity that meets criterion 2 or 3, or met those requirements in the previous three years, and that:
    • Had more than $5,000,000 in annual revenue or $25,000,000 in equity value for the previous 3-taxable-year period; and
    • Deploys an automated decision system developed for use in an augmented critical decision process.

To account for inflation, the amounts specified above will be adjusted each fiscal year by the percentage increase in the consumer price index.

Importantly, the Act would apply to entities across all 50 states, DC, and any territory of the U.S.
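As a rough illustration – not a legal determination – the revenue, equity, and data thresholds above can be sketched as a simple check. All field names and the simplified logic here are assumptions for illustration only; the FTC-jurisdiction and ownership/control prongs are omitted.

```python
# Hypothetical sketch of the Algorithmic Accountability Act's covered-entity
# thresholds. Field names and logic are illustrative assumptions, not legal advice.
from dataclasses import dataclass


@dataclass
class Entity:
    avg_annual_revenue: float   # 3-taxable-year average, USD
    equity_value: float         # USD
    identified_consumers: int   # consumers/households/devices with identifying info
    deploys_augmented_critical_decision_process: bool


def meets_size_threshold(e: Entity) -> bool:
    # >$50M average annual revenue OR >$250M equity value
    return e.avg_annual_revenue > 50_000_000 or e.equity_value > 250_000_000


def meets_data_threshold(e: Entity) -> bool:
    # Identifying information on >1,000,000 consumers, households, or devices
    return e.identified_consumers > 1_000_000


def likely_covered(e: Entity) -> bool:
    # Simplified: an entity must deploy an augmented critical decision process
    # AND meet at least one threshold to fall within the Act's scope.
    return e.deploys_augmented_critical_decision_process and (
        meets_size_threshold(e) or meets_data_threshold(e)
    )


print(likely_covered(Entity(60_000_000, 10_000_000, 5_000, True)))  # prints True
```

A small vendor below both thresholds would fall outside the sketch's scope even if it deploys such a process – though it could still be captured via the ownership/control prong not modeled here.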

What are the potential requirements of the Algorithmic Accountability Act?

At this juncture, there are two categories of requirements organizations potentially regulated by the act should know about:

  • Algorithmic impact assessments
  • Annual summary reports

An algorithmic impact assessment is an ongoing evaluation of an automated decision system (ADS) or augmented critical decision process that studies its impact on consumers. The act requires impact assessments for ADSs that are used in augmented critical decision processes – not only for the ADSs themselves but also for the overall augmented decision processes. Importantly, impact assessments need to occur both before and after the deployment of these systems.

In terms of conducting the impact assessment, there are 11 key requirements:

  1. Process Evaluation: For new augmented critical decision processes, any pre-existing processes used for the decision must be evaluated, including any known harms or shortcomings, as well as the intended benefits and purpose of augmenting the decision.
  2. Stakeholder Consultation: Consultation with relevant stakeholders, including documentation of points of contact, dates of consultations, and information about the terms and process of consultation and any recommendations made.
  3. Privacy Assessment: Ongoing testing and evaluation of privacy risks and privacy-enhancing measures, including data minimization and information security.
  4. Performance Evaluation: Ongoing testing and evaluation of the current and historical performance of the system or augmented process, including investigation into data quality, monitoring of system performance, and documentation of any testing carried out in relation to specific subgroup populations.
  5. Training and Education: Ongoing training and education for relevant employees, contractors, or others on the negative impacts of ADSs and augmented critical decision processes and industry best practices.
  6. Guardrails and Limitations: Assessment of the need for, and development of, guardrails on or limitations to certain uses of the system or decision process.
  7. Data Documentation: Maintenance of documentation of data and other inputs used to develop, test, maintain, or update the system, including information about the source and structure of the data, data collection methods and informed consent, and whether alternatives were explored.
  8. Rights, Transparency, and Explainability: Evaluation of the rights of consumers, including notices and other transparency and explainability measures.
  9. Negative Impact Assessment: Identification of any likely negative impacts of the ADS or augmented critical decision process on consumers, and appropriate mitigation strategies.
  10. Documentation and Milestones: Ongoing documentation of the development and deployment process, including the logging of milestones and testing dates and points of contact for those involved in these processes.
  11. Resource Identification: Identification of any capabilities, tools, standards, datasets, and other resources necessary to improve the ADS, augmented critical decision process, or impact assessment, including performance (e.g., accuracy, robustness, and reliability), fairness and bias, transparency and explainability, privacy and security, personal and public safety, efficiency, or cost.

Secondly, the Act would require covered entities to submit an annual summary report of the impact assessments to the FTC.

Summary reports will need to include the following:

  • Contact information for the covered entity.
  • A detailed description of the specific critical decision the augmented decision process is intended to make, matched to the nine use cases enumerated in the Act.
  • The purpose of the ADS or augmented critical decision process.
  • Any stakeholders consulted by the covered entity.
  • Testing and evaluation of the ADS or augmented critical decision process, including methods, metrics, results, and evaluation of differential performance.
  • Any publicly stated guardrail or limitation maintained on certain system uses in development, testing, maintenance, or updates.
  • Transparency or explainability measures.
  • Any recommendations or mitigations to improve the impact assessment or the development and deployment of the ADS or augmented critical decision process.

Documentation related to the above must be maintained for at least 3 years after the tool is retired.
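To make the reporting requirement concrete, the fields above could be gathered into a single internal record per system. The sketch below is purely hypothetical – the Act and the FTC have not prescribed a format, and every field name is an assumption for illustration.

```python
# Hypothetical internal structure for an annual summary-report record.
# Field names are illustrative assumptions; no official schema exists.
import json

summary_report = {
    "entity_contact": {"name": "Example Corp", "email": "compliance@example.com"},
    "critical_decision": "employment screening",  # matched to the Act's use cases
    "system_purpose": "rank job applications for human review",
    "stakeholders_consulted": ["worker advocacy group", "external auditor"],
    "testing_and_evaluation": {
        "methods": ["holdout evaluation", "subgroup analysis"],
        "metrics": {"accuracy": 0.91, "demographic_parity_gap": 0.03},
    },
    "guardrails": ["no fully automated rejections"],
    "transparency_measures": ["candidate notice", "explanation on request"],
    "recommendations": ["expand subgroup testing to additional age bands"],
}

# Such records would need to be retained for at least 3 years after retirement.
print(json.dumps(summary_report, indent=2))
```

Keeping a structured record like this per system would also make the three-year retention requirement straightforward to satisfy.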

Interestingly, any of these requirements that it was not feasible to comply with must be documented as such, suggesting some leniency.

The proposed Act notes that certain documentation of assessments may only be possible at particular stages of development and deployment, allowing some flexibility in the process.

Finally, the Act would also place some requirements on the FTC itself.

If the Act does pass this time around, the FTC would be required to publish a public report summarizing the information provided in the summary reports received. The bill would also require the FTC to establish a repository with a subset of the information about each ADS and augmented critical decision process for which summaries were received. This repository would be updated quarterly. The stated purpose of this is to provide consumers with greater information about how decisions are made about them, as well as to allow researchers to study the use of these systems.

To support compliance, the FTC would also be required to publish guidelines on how the requirements of the impact assessment could be met, including resources developed by the National Institute of Standards and Technology (NIST).

Further, the FTC would provide training materials to support the determination of whether entities are covered by the law and update such guidance and training materials in line with feedback or common questions.

De-risking horizontal AI regulation in the US in advance

Organizations monitored by the FTC should be aware that, whether the Act passes or not, the FTC has signaled a more active role in enforcing safe and ethical AI through a joint statement affirming that existing laws may also be used to regulate AI use.

While the specific filings required by the Act may or may not come to pass, AI users and vendors should keep in mind that systems meeting the criteria of supporting critical decisions are rife with regulatory risk from existing laws enforced by a variety of agencies.

With this said, the process-centered auditing required by the Act is likely best practice whether or not it passes.

What is the likelihood of horizontal AI regulation in the US?

The Algorithmic Accountability Act of 2019 and the Algorithmic Accountability Act of 2022 failed to make it out of the 116th and 117th Congresses, respectively, signaling that Congress was reluctant to pass its own algorithm law.

With the EU AI Act expected to dominate global discourse once finalized, the US may eventually follow the EU's approach and introduce its own equivalent, replacing proposals for an Algorithmic Accountability Act.

Indeed, the Algorithmic Accountability Act is much less mature and comprehensive than the EU AI Act, so may not alone be adequate to make algorithms safer and fairer, particularly without considering how the law would interact with other existing laws targeting automated systems, such as New York City Local Law 144.

Only time will tell whether this third attempt will be successful, but it is clear that the US – at the local, state, and federal levels – is determined to impose more conditions on the use of algorithms and AI, which will soon see enterprises needing to navigate an influx of rules.

Preparing early is the best way to ensure compliance. Future-proof your organization with Holistic AI.

Schedule a call with a member of our specialist governance, risk management, and compliance team to find out more.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.

