The New York Department of Financial Services (DFS) recently issued guidance concerning cybersecurity risks associated with artificial intelligence (AI) and measures that covered entities (generally, banks, insurers and other classes of financial firms) may take to mitigate those risks. While the guidance does not impose any new obligations beyond the existing DFS cybersecurity regulations (known as Part 500), it presents DFS’s views on how covered entities should apply Part 500 to AI threats.
The guidance highlights four AI risks, with a particular emphasis on the use of deepfake technology to impersonate individuals and trick employees into divulging sensitive information. The guidance notes that AI may also enhance the speed and scale of cyberattacks, may enlarge the quantity of nonpublic information (NPI)[1] that covered entities store, and may create supply chain vulnerabilities, since the use of AI often involves third-party vendors.
The rest of the guidance discusses how AI risks may affect the existing requirements under Part 500. Covered entities should consider AI threats as part of their periodic risk assessments and adjust their cybersecurity policies and procedures as needed. Covered entities should incorporate AI risks into their vendor management programs and consider contractual terms to better protect data shared with vendors that use or provide AI tools. Covered entities should also incorporate AI threats into their cybersecurity training for all personnel, with particular attention to AI-enhanced social engineering techniques and defenses against attacks that use deepfake impersonation technology.
Covered entities should also adjust their data management, monitoring and access controls for new AI risks. Data inventories should identify which information systems use AI. Controls should be in place to prevent threat actors from accessing the large quantities of data on which AI tools rely. Covered entities should continuously monitor their information systems that are connected to AI for unusual queries that may indicate an attack, or separate their AI tools from sensitive systems and data. The guidance also suggests that covered entities base their access controls on “zero trust” principles, but notes that this is not specifically required under Part 500.
Given this guidance, covered entities should review their compliance with Part 500 and consider their use of AI, their vendors’ use of AI and any related risks. We discuss the guidance in more detail below.
The guidance outlines four specific risks that AI may pose to covered entities:

- AI-enabled social engineering, particularly the use of deepfake technology to impersonate individuals and trick employees into divulging sensitive information
- AI-enhanced cyberattacks, which AI can make faster and larger in scale
- The enlarged quantities of NPI that covered entities may store in connection with AI
- Supply chain vulnerabilities, since the use of AI often involves third-party vendors
The remainder of the guidance discusses examples of controls and measures that can help covered entities combat these AI-related risks.
Part 500 requires covered entities to maintain cybersecurity programs based on company-specific risk assessments. The guidance explains that such risk assessments should consider deepfakes and other threats posed by AI and should specifically consider the covered entity’s own AI use, the AI technologies used by its vendors, and potential vulnerabilities in AI applications that could compromise information systems or NPI. Covered entities must update their risk assessments, and the resulting cybersecurity policies, at least annually or whenever material changes to the covered entity’s risk profile occur.
A covered entity is encouraged to exercise due diligence before using a third-party service provider (TPSP) that will access its information systems and/or NPI. When doing so, DFS strongly recommends that the covered entity consider the threats facing TPSPs from the use of AI and AI-enabled products and services; how those threats, if exploited, could impact the covered entity; and how the TPSPs protect themselves from such exploitation. As Part 500 requires, contracts with vendors must require timely notification to the covered entity of any data security event the vendor suffers. The guidance further suggests that covered entities consider additional representations and warranties from vendors that use AI to ensure the vendor adequately protects NPI, including requirements to use enhanced privacy, security and confidentiality options in an AI tool, if available.
Starting Nov. 1, 2025, Part 500 will require multi-factor authentication (MFA) for access to a covered entity’s information systems or NPI, including by employees, contractors and vendors. Part 500 defines MFA as verification “using at least two of three authentication factors: knowledge factors, such as a password; inherence factors, such as biometric characteristics; and possession factors, such as a token.” The guidance explains that not all forms of authentication are equally effective and suggests that covered entities choose authentication factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks, for example by avoiding authentication via text, voice or video. Where biometrics are used for authentication, the guidance suggests that covered entities use technology that incorporates liveness detection or texture analysis to verify that the biometric marker comes from a live person. Finally, the guidance reiterates that Part 500 currently requires covered entities to limit user access to only the systems and NPI necessary to perform that user’s job, and suggests (but does not require) that covered entities employ “zero trust” principles[2] for access control. At minimum, access privileges must be reviewed annually under Part 500.
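To make the factor-category requirement concrete, the following minimal Python sketch (our own illustration, not part of Part 500 or the guidance; the function and names are hypothetical) grants access only when the verified factors span at least two of the three categories the regulation names, so that, for example, two passwords alone would not qualify.

```python
from enum import Enum

class FactorType(Enum):
    KNOWLEDGE = "knowledge"    # e.g., a password
    INHERENCE = "inherence"    # e.g., a biometric characteristic
    POSSESSION = "possession"  # e.g., a hardware token

def satisfies_mfa(presented: list[tuple[FactorType, bool]]) -> bool:
    """True only if the verified factors span at least two of the
    three categories named in Part 500's definition of MFA."""
    verified = {ftype for ftype, ok in presented if ok}
    return len(verified) >= 2

# A password plus a hardware token qualifies; two knowledge factors do not.
assert satisfies_mfa([(FactorType.KNOWLEDGE, True), (FactorType.POSSESSION, True)])
assert not satisfies_mfa([(FactorType.KNOWLEDGE, True), (FactorType.KNOWLEDGE, True)])
```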
Part 500 requires at least annual cybersecurity training for all personnel that includes how to defend against social engineering attacks. The guidance suggests that this training address AI-enhanced social engineering attacks through simulated phishing and voice or video impersonation exercises, and that it cover procedures for responding to requests for credentials, urgent money transfers, or access to NPI made by phone or video. Notably, the guidance emphasizes that senior executives and members of a covered entity’s “senior governing body”[3] must also receive this training under Part 500, and these senior executives should be aware of the enhanced risks presented by AI, including deepfake attacks.
Covered entities must have a monitoring process in place that can promptly identify new security vulnerabilities so that incidents, such as unauthorized access, can be remediated quickly. Part 500 also requires covered entities to monitor the activity of authorized users as well as email and web traffic to block malicious content and protect against the installation of malicious code. Covered entities that use AI-enabled products or services, or that allow personnel to use AI applications such as ChatGPT, should also consider monitoring for unusual query behaviors that might indicate an attempt to extract NPI, and blocking queries from personnel that might expose NPI to a public AI product or system.
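As a rough illustration of the kind of query screening described above, the following Python sketch (our own; the detection patterns are hypothetical stand-ins, and a production data-loss-prevention tool would use far more sophisticated detection) blocks an outbound prompt bound for a public AI tool when it appears to contain NPI.

```python
import re

# Hypothetical NPI indicators; real deployments would use richer detection
# (classifiers, exact-match dictionaries, contextual rules).
NPI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{10,16}\b"),
}

def screen_outbound_query(query: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_indicators) for a prompt headed to a
    public AI tool; the query is blocked if any indicator matches."""
    hits = [name for name, pattern in NPI_PATTERNS.items() if pattern.search(query)]
    return (not hits, hits)

allowed, hits = screen_outbound_query("Summarize account 1234567890123456")
if not allowed:
    print(f"Query blocked; possible NPI detected: {hits}")  # log for review
```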
Part 500 requires covered entities to properly dispose of NPI that is no longer necessary for business operations or other legitimate business purposes, including NPI used for AI purposes. Additionally, although not required by Part 500 until Nov. 1, 2025, all covered entities should maintain data inventories.
Moreover, if a covered entity uses AI, “controls should be in place to prevent threat actors from accessing the vast amounts of data maintained for the accurate functioning of the AI.” To that end, a covered entity should identify, and maintain an inventory of, all information systems that use or rely on AI, including (if applicable) information systems that maintain or rely on AI-enabled products and services.
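For illustration only, such an inventory might be modeled along the following lines in Python; the record fields and example entries are hypothetical and would need to reflect a covered entity’s actual systems, vendors and NPI categories.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SystemRecord:
    """One entry in a hypothetical inventory of information systems."""
    name: str
    uses_ai: bool                    # system uses or relies on AI
    ai_vendor: Optional[str] = None  # TPSP supplying the AI capability, if any
    npi_categories: list[str] = field(default_factory=list)

inventory = [
    SystemRecord("claims-triage", uses_ai=True, ai_vendor="ExampleVendor",
                 npi_categories=["health", "account_numbers"]),
    SystemRecord("hr-portal", uses_ai=False, npi_categories=["identifiers"]),
]

# Surface the AI-connected systems that hold NPI for closer monitoring.
ai_systems_with_npi = [s.name for s in inventory if s.uses_ai and s.npi_categories]
print(ai_systems_with_npi)  # ['claims-triage']
```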
We will continue to follow these and other legal developments related to cybersecurity and AI. Please reach out to Kramer Levin’s Artificial Intelligence, Privacy, Cybersecurity and Data Innovation or Insurance Transactional and Regulatory groups for more information.
[1] Part 500 generally defines NPI as electronic information that: (1) would cause a material adverse impact to the operations or security of the covered entity if compromised; (2) can be used to identify an individual, in combination with government identifiers, account numbers, security codes or passwords, or biometric records; or (3) consists of medical or healthcare-related information, except age or gender. 23 NYCRR § 500.1(k).
[2] Under the guidance, “zero trust” means: “Covered Entities should not implicitly trust the identity of any Authorized User by default. Covered Entities should, to the extent possible and appropriate to their risks, require authentication to verify the identity of an Authorized User each time the Authorized User wants to access an Information System with NPI maintained thereon.”
[3] Part 500 defines “senior governing body” as “the board of directors (or an appropriate committee thereof) or equivalent governing body or, if neither of those exist, the senior officer or officers of a covered entity responsible for the covered entity’s cybersecurity program.” 23 NYCRR § 500.1(q).