The year 2024 witnessed significant developments in the legal landscape governing artificial intelligence (AI). Three states passed comprehensive AI legislation, while others enacted narrower laws regulating specific AI uses. Sector-specific regulators, including the Securities and Exchange Commission (SEC), Federal Trade Commission (FTC), Department of Justice (DOJ) and New York Department of Financial Services (DFS), continued enforcement efforts under existing laws and regulations that impact AI. And litigation over intellectual property issues related to AI datasets, training and development continued to rise.
Foreign governments continued to pass new AI laws, issue guidelines or establish related task forces. The United Nations adopted a resolution promoting “safe, secure and trustworthy” AI systems. And multiple countries, including the United States, signed the first binding international AI treaty proposed by the Council of Europe.
Kramer Levin issued numerous alerts in 2024 on major developments in this burgeoning area of law. We briefly summarize those alerts below.
SEC Settles ‘AI Washing’ Charges Against Investment Advisers
On March 18, 2024, the SEC announced that it settled charges against two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., for making false and misleading statements about their purported use of AI. Delphia and Global Predictions were fined $225,000 and $175,000, respectively. Without admitting or denying the SEC’s allegations, both firms agreed to be censured and to cease and desist from further violations as part of the settlement.
SEC Chair Gary Gensler has repeatedly warned businesses against “AI washing,” a term modeled on “greenwashing,” the practice of misrepresenting how environmentally friendly a business’s operations are. In various interviews and comments, Gensler has reiterated that AI-related statements, like all public disclosures, must be truthful and accurate. Businesses should not claim they are using AI in ways they are not, and they must “fairly and accurately describe the material risks” of any genuine AI use. The FTC has likewise issued numerous warnings against making false or unsubstantiated claims about AI use or failing to prevent biased or discriminatory results from AI.
States Take Varying Approaches to AI Regulation
Since the rise of ChatGPT in late 2022, state lawmakers have tried various strategies to regulate AI, with mixed outcomes. At least 25 states, Puerto Rico and the District of Columbia introduced AI bills in the 2023 legislative session. The first half of 2024 saw similar legislative activity, with three states passing comprehensive AI legislation regulating the private sector.
The Colorado Artificial Intelligence Act (Colorado AI Act) is the most robust of the three and seeks to prevent algorithmic discrimination arising from the use of high-risk AI systems. It defines a high-risk AI system as one that becomes a substantial factor in making a “consequential decision,” which is a decision having a “material legal or similarly significant effect” on the provision of educational or employment opportunities, financial or lending services, essential government services, health care, housing, insurance, or legal services. Colorado requires both developers and deployers of such systems to protect consumers from the risks of algorithmic discrimination, including by conducting bias impact assessments, adopting risk management policies, notifying consumers when AI is used to make consequential decisions, and analyzing and publishing statements regarding foreseeable risks and mitigation measures.
Utah’s Artificial Intelligence Policy Act, which took effect on May 1, 2024, is a transparency law that creates two sets of AI disclosure requirements. Members of a “regulated” occupation (one that requires a license or certification) must “prominently disclose” to a customer at the outset that he or she is interacting with generative AI. For all other commercial uses, the deployer of an AI system must “clearly and conspicuously disclose” to the customer that he or she is interacting with generative AI, but only if prompted or asked.
Tennessee’s Ensuring Likeness, Voice and Image Security (ELVIS) Act, which took effect on July 1, 2024, aims to protect the music industry and artists by prohibiting “deepfakes,” AI-generated imitations of a person’s likeness or voice used to create fake video or audio clips. Tennessee expanded an individual’s property rights in his or her name, photograph or likeness to encompass the use of his or her voice, whether actual or simulated, and created a private right of action against anyone who, without prior authorization from the individual, publishes, distributes or otherwise makes available to the public deepfakes, or software tools whose primary purpose is to create deepfakes.
Artificial Intelligence Quarterly Update
In this quarterly update, we review the latest developments in three subjects salient to corporate use of AI. First, we discuss the risks associated with AI, the case for board oversight and how the board can exercise oversight over management’s implementation of AI. Second, we review recent trends in AI intellectual property litigation. Finally, we provide an overview of three state laws that broadly regulate AI and emerging topics in potential AI legislation.
DOJ Updates Corporate Compliance Program Criteria To Include Focus on AI and Emerging Technologies
In prepared remarks delivered on Sept. 23, 2024, at the Society of Corporate Compliance and Ethics conference in Grapevine, Texas, Principal Deputy Assistant Attorney General Nicole M. Argentieri, head of DOJ’s Criminal Division, announced updates to DOJ’s guidance on the Evaluation of Corporate Compliance Programs (ECCP).
The ECCP sets forth criteria for prosecutors to consider in determining the adequacy and effectiveness of a corporation’s compliance program when a corporation comes within the remit of the Criminal Division’s oversight authority. The first significant update directs prosecutors to consider how companies are assessing and managing risks related to AI and other emerging technologies, both in their business operations and within their compliance programs. As a second significant update, DOJ expanded its assessment of whether a company sufficiently encourages employees to report misconduct proactively and without fear of retaliation. A third update to the ECCP is consideration of whether compliance personnel have appropriate access to company data, resources and technology, and whether companies are channeling resources and technology into gathering and leveraging data for compliance purposes.
Together, these updates and the associated assessment criteria reflect DOJ’s desire to deter corporate misconduct, incentivize corporations to invest in robust compliance programs and increase self-reporting of corporate misconduct.
NY Department of Financial Services Releases AI Cybersecurity Guidance
In October 2024, the DFS issued guidance concerning cybersecurity risks associated with AI and measures that covered entities (generally, banks, insurers and other classes of financial firms) may take to mitigate those risks. While the guidance does not impose any new obligations beyond the existing DFS cybersecurity regulations (known as Part 500), it presents DFS’ views on how covered entities should apply Part 500 to AI threats.
The guidance highlights AI risks, with a particular emphasis on the use of deepfake technology to impersonate individuals and trick employees into divulging sensitive information. The guidance notes that AI may also enhance the speed and scale of cyberattacks, may enlarge the quantity of nonpublic information that covered entities store and may create supply chain vulnerabilities since the use of AI often involves third-party vendors. The rest of the guidance discusses how AI risks may affect the existing requirements under Part 500.
Artificial Intelligence Quarterly Update: International Developments
This quarterly update covers the latest developments in international AI legislation. On Sept. 5, 2024, the United States, the European Union, the United Kingdom and several other countries signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225), the first legally binding international AI treaty. Earlier, on March 21, 2024, the United Nations General Assembly adopted a landmark resolution promoting “safe, secure and trustworthy” AI systems that will benefit sustainable development for all (the Resolution). The Resolution calls on all Member States and stakeholders “to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights.” The EU AI Act also began to take effect, with key compliance dates of Feb. 2, 2025 (for AI practices posing unacceptable risk), Aug. 2, 2025 (for general-purpose AI models) and Aug. 2, 2026 (for high-risk AI systems).
On Nov. 12, 2024, the Canadian government announced the launch of the Canadian Artificial Intelligence Safety Institute (CAISI). CAISI is tasked with studying AI risks, promoting responsible development and informing legislative policy. In April 2024, Japan released the country’s AI Guidelines for Business Ver1.0, which call on all actors in the AI space to follow 10 principles: safety, fairness, privacy protection, data security, transparency, accountability, education and literacy, fair competition, innovation, and a human-centric approach that “enables diverse people to seek diverse well-being.” And Israel published for public comment a draft policy for regulation and ethics in the field of AI.
As we head into 2025, we will continue to monitor these and other developments. Please reach out to Kramer Levin’s Artificial Intelligence Group for more information.