Since the rise of ChatGPT in late 2022, state lawmakers have tried various strategies to regulate artificial intelligence (AI), with mixed outcomes. At least 25 states, Puerto Rico and the District of Columbia introduced AI bills in the 2023 legislative session.[1] The first half of 2024 saw similar legislative activity, with three states (Colorado, Utah and Tennessee) passing significant AI legislation.
On May 8, the Colorado legislature sent the Colorado Artificial Intelligence Act (Colorado Act) to the desk of Gov. Jared Polis, who has until June 7 to sign it. If signed, it will become the first state law in the nation to comprehensively regulate the use of high-risk AI systems to prevent algorithmic discrimination.
The Colorado Act defines a high-risk AI system as one that makes, or is a substantial factor in making, a “consequential decision,” meaning a decision having a “material legal or similarly significant effect” on the provision, cost or terms of an educational or employment opportunity, a financial or lending service, an essential government service, a healthcare service, housing, insurance, or legal services. A “substantial factor” is a factor that assists in making a consequential decision, is capable of altering the outcome of that decision and is generated by an AI system; the term includes any use of AI to generate content, or to make a prediction or recommendation, that is used as a basis for a consequential decision concerning a consumer.
The Colorado Act would require both developers and deployers of high-risk AI systems to exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. A “developer” is an individual or entity doing business in Colorado that develops, or intentionally and substantially modifies, a high-risk AI system. A “deployer” is an individual or entity doing business in Colorado that deploys a high-risk AI system. “Algorithmic discrimination” means any condition in which the use of an AI system results in “an unlawful differential treatment or impact that disfavors an individual or group” on the basis of classifications protected under federal or Colorado law.
Both developers and deployers are entitled to a rebuttable presumption of reasonable care if they comply with certain requirements. For developers, these include disclosing to deployers and other developers of the high-risk system (i) a statement describing the reasonably foreseeable uses and known harmful uses of the system; (ii) summaries of the training data and of the known or reasonably foreseeable limitations of the system, including risks of algorithmic discrimination; (iii) descriptions of how the system was evaluated for performance and for mitigation of algorithmic discrimination, the intended outputs, any measures taken to mitigate risks, and how the system should and should not be used; (iv) sufficient information for a deployer to complete an impact assessment; and (v) a summary of the types of high-risk systems that the developer has developed or intentionally and substantially modified and currently makes available, together with a description of how the developer manages the known or reasonably foreseeable risks of algorithmic discrimination that may arise from them.
In addition, if a developer discovers that a deployed high-risk system has caused or is reasonably likely to cause algorithmic discrimination, it must notify the attorney general and any known deployers within 90 days of discovery.
For deployers of high-risk systems, the requirements include (i) implementing a risk management policy and program for the system; (ii) completing an impact assessment of the system; (iii) notifying a consumer when the system has made, or was a substantial factor in making, a consequential decision concerning him or her, providing the consumer with certain information about that decision and, if applicable, informing the consumer of the right to opt out of profiling under the Colorado Privacy Act; (iv) if the system has been used to make an adverse decision concerning a consumer, giving the consumer an opportunity to appeal that decision through human review if technically feasible; (v) making available on the deployer’s website a statement summarizing the types of high-risk systems that it currently deploys, how it manages the known or reasonably foreseeable risks of algorithmic discrimination that may arise from their deployment, and the nature, source and extent of the information that it collects and uses; and (vi) if the deployer has discovered that a deployed high-risk system has caused algorithmic discrimination, notifying the attorney general within 90 days of discovery.
Developers and deployers would have an affirmative defense if they maintain a program that complies with “a nationally or internationally recognized risk management framework for artificial intelligence systems that the bill or the attorney general designates” or if they take specified measures to discover and correct the alleged violations.
Deployers with fewer than 50 employees that do not use their own data to train high-risk AI systems are exempted from most of the Colorado Act’s requirements. The attorney general has exclusive enforcement authority and is empowered to promulgate rules for implementation. If signed, the Colorado Act would take effect on Feb. 1, 2026.
Colorado’s focus on high-risk and discriminatory use cases has invited comparison to the European Union Artificial Intelligence Act (EU AI Act), which was adopted by the European Parliament on March 13. The EU AI Act sorts AI systems into four risk tiers based on their use cases, some of which are subject to disclosure and bias-audit requirements similar to those set forth in the Colorado law. President Joe Biden and California Gov. Gavin Newsom have also signed executive orders emphasizing transparency and seeking to prevent bias in access to essential goods and services such as employment, housing and health care. See our prior alerts on the EU AI Act, the White House executive order and the California executive order for more information.
Utah enacted the Utah Artificial Intelligence Policy Act (UAIPA) on March 13, which took effect on May 1. In contrast to the Colorado Act, the UAIPA focuses specifically on transparency and disclosure requirements for consumer-facing generative AI tools, i.e., AI systems “trained on data, [interacting] with a person using text, audio, or visual communication, and [generating] non-scripted outputs similar to outputs created by a human, with limited or no human oversight.” These include OpenAI’s ChatGPT and other chatbots.
The UAIPA creates two sets of disclosure requirements. If members of a “regulated” occupation (i.e., one that requires a license or certification to operate) use generative AI in the provision of regulated services, they must “prominently disclose” to a customer at the outset of the interaction that he or she is interacting with generative AI. For other commercial uses of generative AI, the person using the tool must, if prompted or asked by the customer, “clearly and conspicuously disclose” that the customer is interacting with generative AI.
Like the Colorado Act, the UAIPA contains no private right of action. The Utah Division of Consumer Protection may impose fines of up to $2,500 per violation and may seek injunctive and declaratory relief, as well as disgorgement and attorney’s fees. The Utah attorney general may seek up to $5,000 per violation from any party that violates an initial enforcement order.
Tennessee passed the Ensuring Likeness, Voice and Image Security (ELVIS) Act on March 21, and it takes effect July 1. The law aims to protect the music industry and artists by prohibiting the use of deepfakes: AI-generated simulations of a person’s likeness or voice used to create false video or audio clips.
The ELVIS Act makes three changes to Tennessee law. First, it expands the individual property right in one’s name, photograph or likeness to encompass the use of one’s voice, whether actual or simulated. Second, it creates a private right of action against anyone who “publishes, performs, distributes, transmits, or otherwise makes available to the public an individual’s voice” without prior authorization. Third, it creates a private right of action against anyone who “distributes, transmits, or otherwise makes available an algorithm, software, tool, or other technology, service, or device, the primary purpose or function of which is the production of an individual’s photograph, voice, or likeness” without prior authorization. While the text of the ELVIS Act does not directly reference AI, Gov. Bill Lee’s signing statement made clear that the bill’s primary purpose is to give the music industry a means of deterring AI impersonators.
We will continue to follow the latest trends in AI regulation. Should you have any questions about this article or AI issues in general, we invite you to reach out to Kramer Levin’s Artificial Intelligence group for assistance.
[1] National Conference of State Legislatures, Artificial Intelligence 2023 Legislation, https://www.ncsl.org/technology-and-communication/artificial-intelligence-2023-legislation