
US Algorithmic Accountability Act: Third Time Lucky?

October 25, 2023
Authored by Airlie Hilliard, Senior Researcher at Holistic AI

Policymakers around the world are increasingly recognising the importance of regulation and legislation to promote safety, fairness, and ethics in the use of AI tools.

While the US has made the most significant progress with vertical legislation that targets specific use cases – such as the Illinois Artificial Intelligence Video Interview Act, New York City Local Law 144 (aka the bias audit law), and Colorado’s insurtech law SB169 – Europe has made the most meaningful steps towards horizontal legislation that targets multiple use cases at once.

Indeed, the EU AI Act, with its risk-based approach, seeks to become the global gold standard for AI regulation. But the European Commission is not the only authority making progress towards horizontal regulation – Canada, DC, California, and the U.S. (at the Federal level) have all proposed their own horizontal regulations targeting what lawmakers in each jurisdiction have identified as the most critical applications of AI.

However, compared to their EU counterpart, these proposals have been less successful. The U.S. federal Algorithmic Accountability Act, for example, has now been introduced three times: first in 2019 by Representative Yvette Clarke, then in 2022 by Senator Ron Wyden, and again in 2023 by Representative Clarke.

In this blog post, we provide an overview of the requirements of the Algorithmic Accountability Act of 2023 as well as the types of systems and entities covered.

Which systems are targeted by the Algorithmic Accountability Act?

The Algorithmic Accountability Act targets automated decision systems, which are defined as:

“any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.”

Passive computing infrastructure is intermediary technology that does not influence the outcome of decisions, including web hosting tools, data storage, and cybersecurity. On the other hand, the Algorithmic Accountability Act does target systems used to make critical decisions in what is known as an augmented critical decision process. Here, a critical decision is a decision or judgement that has a legal, material, or otherwise significant impact on a consumer’s life in terms of access to, availability of, or cost of nine categories of opportunities and services, including education and vocational training, employment, essential utilities, family planning, financial services, healthcare, housing or lodging, and legal services.

Is my organisation in the scope of the Algorithmic Accountability Act?

The Algorithmic Accountability Act would apply to entities over which the Federal Trade Commission (FTC) has jurisdiction under section 5(a)(2) of the Federal Trade Commission Act if they:

i. Deploy an augmented critical decision process; and

ii. Had greater than $50,000,000 in average annual revenue or are deemed to have more than $250,000,000 in equity value for the 3-taxable-year period prior to the most recent fiscal year;

iii. Possess, analyse, or use identifying information on more than 1,000,000 consumers, households, or consumer devices for the development or deployment of any automated decision system or augmented critical decision process; or

iv. Are substantially owned, operated, or controlled by an entity that meets points ii. or iii.

Smaller entities are also covered if they:

  • Had more than $5,000,000 in annual revenue or $25,000,000 in equity value for the previous three-taxable-year period; and
  • Deploy an automated decision system developed for use in an augmented critical decision process; or
  • Met these requirements in the previous three years.

To account for inflation, the amounts specified above will be increased each fiscal year in line with the percentage increase in the consumer price index. Importantly, the Act would apply to entities across the 50 states, DC, and any territory of the U.S.
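To make the coverage criteria concrete, here is a minimal Python sketch of the threshold test described above. The class, field names, and the simplified logic are illustrative assumptions only; the real test also depends on FTC jurisdiction and the precise statutory definitions.

```python
from dataclasses import dataclass

# Dollar and data thresholds from the bill as described above.
REVENUE_THRESHOLD = 50_000_000          # 3-taxable-year average annual revenue
EQUITY_THRESHOLD = 250_000_000
IDENTIFYING_INFO_THRESHOLD = 1_000_000  # consumers, households, or devices

@dataclass
class Entity:
    """Hypothetical record of the facts relevant to coverage."""
    deploys_augmented_process: bool
    avg_annual_revenue: float
    equity_value: float
    consumers_with_identifying_info: int
    controlled_by_covered_entity: bool = False

def adjust_for_inflation(amount: float, cpi_increase_pct: float) -> float:
    """Raise a dollar threshold by the CPI percentage increase."""
    return amount * (1 + cpi_increase_pct / 100)

def is_covered(e: Entity, cpi_increase_pct: float = 0.0) -> bool:
    """Simplified coverage test: deploys an augmented critical decision
    process AND crosses a size, data, or ownership threshold."""
    revenue_cap = adjust_for_inflation(REVENUE_THRESHOLD, cpi_increase_pct)
    equity_cap = adjust_for_inflation(EQUITY_THRESHOLD, cpi_increase_pct)
    meets_size = e.avg_annual_revenue > revenue_cap or e.equity_value > equity_cap
    meets_data = e.consumers_with_identifying_info > IDENTIFYING_INFO_THRESHOLD
    return e.deploys_augmented_process and (
        meets_size or meets_data or e.controlled_by_covered_entity
    )
```

Note the conjunction: an entity that crosses every dollar threshold but deploys no augmented critical decision process would still fall outside the sketch, mirroring the "; and" joining point i. to the rest.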

What are the impact assessment requirements of the Algorithmic Accountability Act?

An algorithmic impact assessment is an ongoing evaluation of an automated decision system (ADS) or augmented critical decision process that studies its impact on consumers. In particular, impact assessments must be carried out for deployed ADSs developed for use in augmented critical decision processes, as well as for augmented critical decision processes themselves, both before and after they are deployed.

In terms of conducting the impact assessment, there are 11 key requirements:

  1. Process Evaluation: For new augmented critical decision processes, any pre-existing processes used for the decision must be evaluated, including any known harms or shortcomings, as well as the intended benefits and purpose of augmenting the decision.
  2. Stakeholder Consultation: Consultation with relevant stakeholders, including documentation of points of contact, dates of consultations, and information about the terms and process and any recommendations made.
  3. Privacy Assessment: Ongoing testing and evaluation of privacy risks and privacy-enhancing measures, including data minimisation and information security.
  4. Performance Evaluation: Ongoing testing and evaluation of the current and historical performance of the system or augmented process, including investigation into data quality, monitoring of system performance, and documentation of any testing carried out in relation to specific subgroup populations.
  5. Training and Education: Ongoing training and education for relevant employees, contractors, or others on the negative impacts of ADSs and augmented critical decision processes and industry best practices.
  6. Guardrails and Limitations: Assessment of the need for, and development of, guardrails for or limitations on certain uses of the system or decision process.
  7. Data Documentation: Maintenance of documentation on the data and other inputs used to develop, test, maintain, or update the system, including information about the source and structure of the data, data collection methods and informed consent, and whether alternatives were explored.
  8. Rights, Transparency, and Explainability: Evaluation of the rights of consumers, including notices and other transparency and explainability measures.
  9. Negative Impact Assessment: Identification of any likely negative impacts of the ADS or augmented critical decision process on consumers and appropriate mitigation strategies.
  10. Documentation and Milestones: Ongoing documentation of the development and deployment process, including the logging of milestones, testing dates, and points of contact for those involved in these processes.
  11. Resource Identification: Identification of any capabilities, tools, standards, datasets, or other resources necessary to improve the ADS, augmented critical decision process, or impact assessment, including performance (e.g., accuracy, robustness, and reliability), fairness and bias, transparency and explainability, privacy and security, personal and public safety, efficiency, or cost.
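The eleven requirements above lend themselves to a simple compliance checklist. Below is a hypothetical Python sketch: the requirement names follow the headings above, but the checklist structure itself is our own illustration, not something the Act prescribes.

```python
# The 11 impact assessment requirements, in the order given above.
IMPACT_ASSESSMENT_REQUIREMENTS = [
    "Process Evaluation",
    "Stakeholder Consultation",
    "Privacy Assessment",
    "Performance Evaluation",
    "Training and Education",
    "Guardrails and Limitations",
    "Data Documentation",
    "Rights, Transparency, and Explainability",
    "Negative Impact Assessment",
    "Documentation and Milestones",
    "Resource Identification",
]

def outstanding(completed: set[str]) -> list[str]:
    """Return the requirements not yet addressed, preserving order.

    Raises if a name outside the known list is passed, to catch typos.
    """
    unknown = completed - set(IMPACT_ASSESSMENT_REQUIREMENTS)
    if unknown:
        raise ValueError(f"Unknown requirement(s): {sorted(unknown)}")
    return [r for r in IMPACT_ASSESSMENT_REQUIREMENTS if r not in completed]
```

Because several requirements are explicitly "ongoing", a real tracker would record dates of last evaluation rather than a one-off completed flag; the set here is deliberately the simplest possible shape.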

What are the requirements for summary reports to the FTC?

Covered entities must submit an annual summary report of the impact assessment to the FTC that includes contact information for the covered entity, a detailed description of the specific critical decision the augmented decision process is intended to make in reference to the nine use cases, and the purpose of the ADS or augmented critical decision process.

The summary must also identify any stakeholders consulted by the covered entity, and the testing and evaluation of the ADS or augmented critical decision process including methods and metrics, results, and evaluation of differential performance.

Furthermore, the summary should include any publicly stated guardrail or limitation on certain uses of the system or decision process and data used to develop, test, maintain, or update the decision process or ADS, as well as any transparency or explainability measures.

Interestingly, any requirements that it was not feasible to comply with must be documented, suggesting some leniency. Indeed, the proposed Act does note that certain documentation of assessments may only be possible at particular stages of development and deployment, allowing some flexibility in the process.

Finally, the summary must include any recommendations or mitigations to improve the impact assessment and the development and deployment of the ADS or augmented critical decision process. Related documentation must be maintained for at least 3 years after the tool is retired.
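The three-year retention rule can be sketched as a small date calculation. This is an illustrative Python snippet; the function names and the leap-day handling are our assumptions, not statutory text.

```python
from datetime import date

RETENTION_YEARS = 3  # minimum retention after the tool is retired

def retention_deadline(retired_on: date) -> date:
    """Earliest date documentation could be discarded: 3 years after retirement."""
    try:
        return retired_on.replace(year=retired_on.year + RETENTION_YEARS)
    except ValueError:
        # Retirement fell on 29 February; roll back to 28 February.
        return retired_on.replace(year=retired_on.year + RETENTION_YEARS, day=28)

def must_retain(retired_on: date, today: date) -> bool:
    """True while the documentation is still within the retention window."""
    return today < retention_deadline(retired_on)
```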

Publicly accessible repository

If the Act does pass this time around, the FTC would be required to publish a publicly available report summarising the information provided in the summaries received. The bill would also require the FTC to establish a repository, updated quarterly, containing a subset of the information about each ADS and augmented critical decision process for which summaries were received. The stated purpose of this is to provide consumers with greater information about how decisions are made about them, as well as to allow researchers to study the use of these systems.

Support from the FTC

In order to support compliance, the FTC would be required to publish guidelines on how the requirements of the impact assessment could be met, including resources developed by the National Institute of Standards and Technology (NIST).

Further, the FTC would provide training materials to support the determination of whether entities are covered by the law and update such guidance and training materials in line with feedback or common questions.

Will it be third time lucky?

The Algorithmic Accountability Act of 2019 and the Algorithmic Accountability Act of 2022 failed to make it out of the 116th and 117th Congresses, respectively, signalling that Congress is reluctant to pass its own algorithm law.

With the EU AI Act expected to dominate global discourse once finalised, it could be the case that the US will adapt the EU rules and introduce its own equivalent, eventually replacing proposals for an Algorithmic Accountability Act.

Indeed, the Algorithmic Accountability Act is much less mature and comprehensive than the EU AI Act, so may not be adequate to make algorithms safer and fairer alone, particularly without considering how the law will interact with other existing laws, including those targeting automated systems, such as New York City Local Law 144.

Only time will tell whether this third attempt will be successful, but it is clear that the US – at the local, state, and federal levels – is determined to impose more conditions on the use of algorithms and AI, which will soon see enterprises needing to navigate an influx of rules.

Preparing early is the best way to ensure compliance. Future-proof your organisation with Holistic AI.

Schedule a call with a member of our specialist governance, risk management, and compliance team to find out more.

Authored by Airlie Hilliard, Senior Researcher at Holistic AI.


DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
