SB 1103 – Connecticut’s Call for AI Regulation

March 10, 2023
Authored by
Ayesha Gulley
Senior Policy Associate at Holistic AI

Connecticut lawmakers have taken decisive action and proposed a new bill in the General Law Committee that would establish an Office of Artificial Intelligence and create a government task force to develop an AI Bill of Rights. These efforts are focused on regulating the use of AI by State Agencies.

Introduced by State Senator James Maroney, the Bill, SB 1103, would provide for government oversight, mandate the inventory and testing of state-used algorithms, close existing data privacy loopholes, and enumerate citizen protections through an AI Bill of Rights. The widespread use of algorithms in government operations is not unique to Connecticut, making it all the more important to develop safeguards that protect people from potential harm and help them understand the consequences of automated decision-making.

This article gives a brief overview of the Bill and its key provisions, and emphasizes the need for governments to keep pace with AI regulation to ensure that automated systems do not adversely affect individuals.

Governments’ use of AI is extensive

From assigning students to schools, to allocating state resources, to setting bail, government authorities are increasingly using algorithms to automate key processes. In an attempt to regulate the use of AI by State Agencies, Connecticut lawmakers have introduced SB 1103, An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy. The Bill seeks to:

  • Establish an Office of Artificial Intelligence, responsible for developing and establishing automated system procedures for use by state agencies in designing, utilizing, and procuring automated systems;
  • Exempt air carriers from certain provisions concerning data privacy;
  • Provide that a controller shall not process the personal data of a consumer for purposes of targeted advertising, or sell the consumer's personal data without the consumer's consent, under circumstances where a controller has actual knowledge, or wilfully disregards, that the consumer is at least thirteen years of age but younger than sixteen years of age; and
  • Establish a task force to (a) study artificial intelligence, and (b) develop an artificial intelligence Bill of Rights.

The Bill also establishes Automated Systems Procedures, which outline the protocols and processes for developing, procuring, and implementing automated decision systems or automated final decision systems. Finally, it sets out requirements for critical decisions, including notifying the affected individual of the algorithm used and providing a right to appeal. Critical decisions here are those concerning education, employment, essential utilities, family planning, financial services, credit and mortgage services, healthcare, housing, legal services, government benefits, and public services.

What is required?

Effective 1 July 2023, any "State agency" (department, board, commission, council, institution, office, constituent unit of the state system of higher education, technical education and career school or other agency in the executive, legislative or judicial branch of state government) would be required to:

  1. Examine their automated systems biennially and deactivate any application of the system that demonstrates performance or outcomes inconsistent with the procedures;
  2. Be transparent in disclosing any information relevant to the agency's use of such automated system; and
  3. Implement safeguards to ensure that such automated system is properly applied, utilized and functioning, and provide appropriate training to all personnel responsible for designing, utilizing or procuring such automated system.

The procedures must ensure that the automated systems comply with all applicable laws prohibiting discrimination and addressing privacy, civil rights, and civil liberties; do not disproportionately impact any individual or group based on any differentiating characteristic; and are safe, secure, and resilient.

Managing inventory

SB 1103 requires the Office of Artificial Intelligence to review and inventory all automated systems developed, used, or procured by state agencies during the calendar year beginning 1 January 2023. The inventory will include the name of each automated system, a description of its capabilities, the data used, the purpose and intended use of the system, how data is processed and stored, and the financial impact of the system. The inventory must also disclose whether the system discriminated against any individual or group of individuals in violation of state or federal law. The review will also determine whether the system infringed any legal rights of Connecticut residents or posed any risk to the state.
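To make the required fields concrete, below is a minimal sketch of how an agency might structure a single inventory entry. The class and field names are illustrative assumptions based on the fields listed above; SB 1103 does not prescribe any particular data format.

```python
from dataclasses import dataclass
from typing import List

# Illustrative sketch only: field names mirror the inventory items listed in SB 1103,
# but the class itself is a hypothetical structure, not something defined by the Bill.
@dataclass
class AutomatedSystemInventoryEntry:
    name: str                         # name of the automated system
    capabilities: str                 # description of the system's capabilities
    data_used: List[str]              # data sources used by the system
    purpose_and_intended_use: str     # purpose and intended use of the system
    data_processing_and_storage: str  # how data is processed and stored
    financial_impact: float           # financial impact of the system
    discrimination_findings: str = ""   # whether the system discriminated in violation of state or federal law
    rights_or_risk_findings: str = ""    # any infringement of residents' legal rights or risk posed to the state

# Hypothetical example entry
entry = AutomatedSystemInventoryEntry(
    name="School assignment algorithm",
    capabilities="Assigns students to schools based on ranked preferences and capacity",
    data_used=["student addresses", "school capacity", "ranked preferences"],
    purpose_and_intended_use="Allocate school placements",
    data_processing_and_storage="Processed on state servers; records retained per agency policy",
    financial_impact=0.0,
)
```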

Notice

In addition, the Bill requires state agencies developing, utilizing, or procuring any AI system after 1 January 2024 to provide the Office of Artificial Intelligence with at least sixty days' advance written notice. Following notice, the Office will review the automated system to determine whether it would result in any discrimination or disproportionate impact prohibited by state or federal laws and share the outcome with the relevant state agency. On or after 1 July 2025, the Office may also periodically re-evaluate automated systems to ensure compliance with the automated system procedures.

If passed, SB 1103 would require the Artificial Intelligence Officer to prepare and submit a report to the joint standing committee of the General Assembly relating to consumer protection. The report would include details on the automated system procedures and any updates to them, recommendations for legislation, information on automated systems used by state agencies, and any other relevant information determined by the Artificial Intelligence Officer. The Bill would take effect on 1 July 2023, and the report must be submitted by 15 February 2025 and annually thereafter.


Biased algorithms have been shown to cause serious harm

Connecticut's government has seen a rapid, largely unchecked spread of algorithm use. According to a white paper from the Yale Law School, it is difficult for the state to hold algorithms accountable. The Department of Children and Families (DCF) refused to provide the source code for an algorithm intended to reduce the risk of children experiencing a life-threatening episode, citing trade secret protection. Although the system had been used for three years, the DCF had not evaluated it for efficacy or bias. The Department of Education (DOE) also refused to release the source code of an algorithm used to assign students to schools, indicating no effort had been made to assess it for efficacy or bias.

Questions remain

Despite support for SB 1103, critiques were raised during the public hearing and in written testimony, including doubts about whether the Bill goes beyond what is currently feasible, calls for standards to be established before legislation is implemented, and questions about whether a task force is necessary.

Algorithms are unaccountable – it is important to ensure they are functioning

This problem is not unique to Connecticut: without proper testing and ongoing evaluation, algorithms can function improperly or perpetuate historic biases reflected in their code or, for machine-learning systems, embedded in the data used to train them. To promote transparency and accountability, state agencies must develop and procure automated systems that have been tested and validated to ensure they are functioning as intended.
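As a concrete illustration of what such testing can involve, the sketch below computes a simple disparate impact ratio across demographic groups, using the four-fifths rule as a screening heuristic. The function names, example data, and 0.8 threshold are illustrative assumptions and are not drawn from SB 1103.

```python
from collections import Counter

def selection_rates(outcomes, groups):
    """Compute the favourable-outcome rate for each group.

    outcomes: iterable of 0/1 decisions (1 = favourable)
    groups:   iterable of group labels, aligned with outcomes
    """
    totals = Counter(groups)
    favourable = Counter(g for g, y in zip(groups, outcomes) if y == 1)
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, groups, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.

    A ratio below 0.8 is often treated as a signal for further review
    (the 'four-fifths rule' heuristic); the threshold is illustrative.
    """
    rates = selection_rates(outcomes, groups)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical decisions from an automated benefits-eligibility system
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact_ratio(outcomes, groups, reference_group="A"))
# {'A': 1.0, 'B': 0.666...} -> group B's rate falls below the 0.8 screen
```

A ratio below 0.8 would not by itself establish unlawful discrimination, but it is the kind of signal that could prompt the further review and deactivation obligations described above.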

Taking steps early is the best way to get ahead of AI regulations. At Holistic AI, we have a team of experts who, informed by relevant policies, can help you keep track of and manage the risks of your AI. Reach out to us at we@holisticai.com to learn more about how we can help.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
