For users and developers of artificial intelligence (AI), keeping abreast of the evolving legal landscape is challenging but critical. This update highlights notable recent developments in global AI regulation.

International Treaties and Resolutions

The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law — On Sept. 5, 2024, the European Commission signed CETS No. 225 (the Convention), the first legally binding international treaty on AI. The Convention seeks to ensure that all activities within the life cycle of AI systems adhere to the principles of human dignity, equality, nondiscrimination, transparency, accountability, privacy and safe innovation.

The Convention applies to public authorities and to private actors acting on their behalf, but it does not apply directly to other private actors. Rather, signatories have wide discretion in how to enforce its provisions within their borders but have agreed to publicly commit to doing so.

The Convention has been signed by Andorra, Georgia, Iceland, Norway, Moldova, the United Kingdom, Israel and the United States, among others. Observers to the Convention, which have not yet signed it, include Australia, Argentina, Canada, Costa Rica, Japan, Mexico, Peru and Uruguay.

Key Objectives:

  1. Accountability for Human Rights – The Convention calls on its signatories to safeguard human rights from AI threats, preserve respect for human dignity and individual autonomy, and ensure accountability for adverse impacts AI may have on democratic processes and the rule of law.
  2. Transparency and Oversight – The Convention calls for domestic measures to ensure adequate transparency and oversight. These measures may include content labeling or watermarking as well as documenting human oversight, training, testing and risk remediation efforts.
  3. Equality – The Convention calls for regulatory or technical measures to ensure that AI systems respect equality (including specifically gender equality) and avoid all forms of discrimination proscribed by international and domestic law.
  4. Privacy – The Convention calls for domestic measures to protect individual privacy. The Convention’s explanatory report notes the sensitivity of an individual’s life experiences, engagements, private personal matters, autonomy and control over personal data.
  5. Reliability and Trust – The Convention calls for domestic measures to promote the reliability of AI systems and trust in their outputs, which may include data integrity, accuracy and security standards.

Seizing the Opportunities of Safe, Secure and Trustworthy AI Systems for Sustainable Development — On March 21, 2024, the United Nations General Assembly adopted a landmark resolution promoting “safe, secure and trustworthy” AI systems that will benefit sustainable development for all (the Resolution). The Resolution calls on all Member States and stakeholders “to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights.” This is the first time the General Assembly has adopted a resolution on AI.

Key Objectives:

  1. Equality – The Resolution resolves to bridge digital divides, calling on Member States to assist developing countries with access to the benefits of digital transformation and AI systems. The Resolution encourages all Member States to promote trustworthy AI systems in an inclusive and equitable manner.
  2. Sustainable Development – The Resolution resolves to promote safe, secure AI systems and foster an environment for such systems to address the world’s greatest challenges, including economic, social and environmental development in line with the U.N.’s 2030 Agenda for Sustainable Development.
  3. International Collaboration – The Resolution encourages Member States to share best practices on data governance and promote international cooperation. These best practices should advance trusted cross-border data flows and make AI development more inclusive and beneficial to all.
  4. Resources – The Resolution calls upon specialized agencies, funds and other entities in the U.N. system, within their respective mandates and resources, to leverage the opportunities and address the challenges posed by AI in a coordinated manner.

Europe

EU AI Act — As we previously reported, the EU AI Act entered into force on Aug. 1, 2024, with emphasis on data quality, transparency, human oversight, accountability and ethical questions. Its obligations apply in stages; notable upcoming dates include:

  • Feb. 2, 2025 – Prohibitions on AI systems that present unacceptable risk will apply. These include systems that deploy subliminal or deceptive techniques; exploit vulnerabilities such as age, disability or economic status; perform social scoring; classify people based on behavior or personal characteristics; and perform biometric categorization or remote biometric identification, among others.
  • Aug. 2, 2025 – Obligations on general purpose AI models will take effect, including documentation and notification requirements. Member States must appoint national authorities to oversee compliance by this date.
  • Aug. 2, 2026 – Obligations on high-risk AI systems and uses will take effect, including AI used in critical infrastructure, education, employment, essential services, law enforcement, immigration and systems that use biometrics. These obligations include data governance, risk management, recordkeeping, transparency and human oversight requirements. Member States must establish at least one national AI “regulatory sandbox” by this date.[1]

Canada

On Nov. 12, 2024, the Canadian government announced the launch of the Canadian Artificial Intelligence Safety Institute (CAISI). CAISI is tasked with studying AI risks, promoting responsible development and informing legislative policy.

Canada is expected to regulate AI at the federal level; its Artificial Intelligence and Data Act (AIDA) is currently making its way through the Canadian Parliament. As written, the AIDA seeks to “protect Canadians, ensure the development of responsible AI in Canada, and to prominently position Canadian firms and values in global AI development.” The AIDA is intended to align with the EU AI Act and the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework.

Key Objectives:

  1. Safety and Accountability – The AIDA would regulate “high-impact” AI systems in line with Canadian consumer protection and human rights laws, and “ensure accountability at each point where risk may be introduced.” The AIDA does not yet define “high-impact” systems.
  2. Fostering Innovation and Governance – The AIDA would task the minister of innovation, science and industry with enforcement, “to ensure that policy and enforcement move together as the technology evolves.” The AIDA would also appoint a new AI and data commissioner to oversee development and administration.
  3. Criminal Penalties – The AIDA would create new criminal penalties to curb “reckless and malicious uses of AI that cause serious harm to Canadians and their interests.”

Japan

So far, Japan has taken an innovation-friendly approach to AI, choosing “soft law” guidelines and strategies over regulation. In April 2024, Japan released its AI Guidelines for Business Ver. 1.0 (the Guidelines). Although the Guidelines are not binding, they aim to balance societal concerns and individual rights while fostering innovation.

The Guidelines call on all actors in the AI space to follow 10 principles: safety; fairness; privacy protection; data security; transparency; accountability; education and literacy; fair competition; innovation; and a human-centric approach that “enables diverse people to seek diverse well-being.” The Guidelines specifically encourage AI developers to consider the impact their products may have on society, take measures to address that impact, advance innovation, prevent bias, and ensure data safety, security and reliability.

Japan has also published national strategies for AI, including AI Strategy 2022 (Strategy), which aims to overcome social inequalities and improve Japan’s industrial competitiveness. The Strategy promotes three principles: dignity; diversity and inclusion; and sustainability. The Strategy also seeks to ensure that Japan can attract and develop human talent in the field of AI; build an international network for research, education and social infrastructure; and use AI to protect Japanese people from imminent crises or large-scale disasters.

In September 2024, however, then-Prime Minister Fumio Kishida indicated that Japan may start regulating AI in response to mounting risks. Japan has also faced criticism that its current lack of technology regulation has led to widespread intellectual property infringement. Japan’s Liberal Democratic Party has called for regulations on generative and high-risk AI systems by the end of 2024.

Israel

Israel has published for public comment a draft policy on regulation and ethics in the field of AI (the Policy), the first AI regulatory policy Israel has proposed.

Key Objectives:

  1. Coordinated Sector-Specific Legislation – Rather than establishing a national AI law, the Policy directs various regulators to “examine the need to promote concrete regulation in their field, while maintaining a uniform government policy.” Under the Policy, these sector-specific regulations would be compatible with international norms, contain risk management tools and frameworks, use regulatory experimentation tools such as AI sandboxes (see note 1 below), favor a soft law approach such as self-certification and voluntary standards where appropriate, and be crafted with the participation of the public.
  2. Ethical Principles – The Policy advances the following principles: respect for fundamental rights and public interests; using AI to promote growth, development and Israeli leadership in innovation; equality and the prevention of unwarranted discrimination; transparency and explainability; reliability, durability, security and safety; and responsibility.
  3. Risk Management – Under the Policy, legislators will work with industry leaders to draft a uniform tool for AI risk management, which should also create a common language for government officials, regulators and private entities to ensure responsible innovation.
  4. Government Knowledge Center – The Policy will establish an AI knowledge and coordination center to organize input and recommendations. This center will receive comments from experts and the public, and advise government ministries on ethical and responsible AI governance.

Should you have any questions about this article or AI issues in general, we invite you to reach out to Kramer Levin’s Artificial Intelligence Group for assistance.


[1] A “regulatory sandbox” is a controlled environment that allows businesses to test new products without onerous regulations or penalties, usually under supervision and for a limited amount of time, before entering the market.
