The New York Department of Financial Services’ (DFS) January 2019 insurance circular letter, which advised New York-licensed life insurance carriers on the use of external consumer data and information sources in underwriting, illustrates the emerging regulatory landscape surrounding these tools. While DFS stated its support for the use of external data sources to improve the provision of insurance through better pricing, it expressed concern that the accuracy and reliability of lifestyle indicators could vary, potentially harming consumers and the insurance sector generally.
This article, part of a series examining regulatory guidance and developing common law around the use of AI, addresses DFS’s increased focus on issues concerning technology and cybersecurity.
“Unconventional sources” prompted regulatory intervention
DFS investigated life insurers’ use of external data. For the purposes of the circular letter, DFS explained, “external data” includes any information not directly related to an applicant’s medical condition that is used to supplement medical underwriting or to establish “lifestyle indicators” that may contribute to an underwriting assessment.
During its investigation, DFS identified two areas of concern with the use of external data sources:
- Discrimination: The use of external data could affect the availability and affordability of life insurance for protected classes of consumers. DFS reminded insurers that they cannot base their pricing on race, color, creed, national origin, status as a victim of domestic violence, past lawful travel, sexual orientation or other prohibited grounds of discrimination. Applicable insurance law also prohibits insurers from refusing to insure persons with a physical or mental disability, impairment or disease, or a history of same, except where permitted by law and based on sound actuarial principles, or where based on actual or reasonably anticipated experience. DFS also cited federal civil rights laws as bearing on insurers’ antidiscrimination obligations.
DFS expressed concern that external data sources could collect or use information that the insurer would otherwise be prohibited from considering. In this regard, DFS explained that the use of external sources has the “strong potential to mask” discrimination. In particular, DFS noted that external sources drawing on “geographical data (including community-level mortality, addiction, or smoking data), homeownership data, credit information, educational attainment, licensures, civil judgments and court records” could serve to mask illicit race-based underwriting. DFS also flagged potentially problematic models that make health predictions based on (1) a consumer’s retail purchase history; (2) social media, internet or mobile activity; (3) geographic location tracking; (4) the condition or type of an applicant’s electronic devices (and any systems or applications operating on them); or (5) how the consumer appears in a photograph. DFS explained that these models and inputs may lack a sufficient rationale or actuarial basis and may disproportionately affect protected classes of consumers.
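To make the masking concern concrete, the following is a minimal, hypothetical sketch using entirely synthetic data. It shows how a facially neutral input, such as a geography-derived score that happens to correlate with protected-class membership, can reproduce group-level disparities even though the protected attribute is never an input to the underwriting rule. The variable names, distributions and threshold are illustrative assumptions; nothing here reflects DFS methodology or any insurer’s actual model or data.

```python
# Illustrative sketch only: a facially neutral feature acting as a
# statistical proxy for a protected attribute. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected-class membership (never used by the "model").
protected = rng.integers(0, 2, size=n)

# A "neutral" geographic score that, in this synthetic data, is shifted
# by protected-class membership (e.g., due to residential patterns).
geo_score = rng.normal(loc=protected * 0.8, scale=1.0)

# An underwriting rule that looks only at the geographic score still
# yields systematically different outcomes across the two groups.
offer_accelerated = geo_score < 0.4

rate_a = offer_accelerated[protected == 0].mean()
rate_b = offer_accelerated[protected == 1].mean()
print(f"Accelerated-underwriting rate, group A: {rate_a:.2%}")
print(f"Accelerated-underwriting rate, group B: {rate_b:.2%}")
print(f"Correlation(geo_score, protected): "
      f"{np.corrcoef(geo_score, protected)[0, 1]:.2f}")
```

The same dynamic underlies DFS’s point, discussed below, that a vendor’s bare assurance of nondiscrimination cannot substitute for the insurer’s own assessment of what the external inputs actually encode.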
Based on these concerns, DFS issued the following guidelines:
a. An insurer should not use external data and predictive models unless the insurer has determined that the tools do not collect or utilize prohibited criteria. Reliance on a vendor’s claim of nondiscrimination or the proprietary nature of a third-party process is not enough, as the burden remains with the insurer to observe the law.
b. An insurer should not use an external data source, algorithm or predictive model in underwriting or rating unless the insurer can establish that the underwriting or rating guidelines are not unfairly discriminatory in violation of the N.Y. Insurance Law. Specifically, the insurer should consider whether the underwriting guidelines derived from the external data are supported by generally accepted actuarial principles or actual or reasonably anticipated experiences that justify different results for similarly situated applicants. Statistical data alone is not sufficient, as there must still be a valid rationale or explanation supporting the differential treatment.
DFS noted that, subject to the principles described above, an insurer may establish guidelines and practices to assess an applicant’s health status and identify individuals at higher mortality risk if based on sound actuarial principles or if related to actual or reasonably anticipated experience.
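As a rough illustration of guideline (b)’s focus on “similarly situated applicants,” the following hypothetical sketch (again with synthetic data, an assumed medical risk tiering and an arbitrary cutoff) checks whether an external-data score produces different decline rates for applicants within the same medically justified risk tier. This is one possible screening idea under stated assumptions, not a test DFS prescribes.

```python
# Illustrative sketch only: within-tier comparison of outcomes for
# "similarly situated" applicants. All data and tiers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical medically justified risk tier (0 = best, 2 = worst).
risk_tier = rng.integers(0, 3, size=n)
protected = rng.integers(0, 2, size=n)

# A hypothetical external-data score that leaks protected-class signal
# on top of the legitimate medical signal.
external_score = risk_tier * 1.0 + protected * 0.5 + rng.normal(size=n)
declined = external_score > 2.0

# A persistent gap in decline rates *within* a tier suggests the
# external data drives different results for similarly situated
# applicants, which would require a valid supporting rationale.
for tier in range(3):
    mask = risk_tier == tier
    r0 = declined[mask & (protected == 0)].mean()
    r1 = declined[mask & (protected == 1)].mean()
    print(f"Tier {tier}: decline rate group A={r0:.2%}, group B={r1:.2%}")
```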
- Lack of transparency: DFS emphasized that life insurers relying on external data for underwriting purposes must be transparent with consumers. DFS noted that, pursuant to the N.Y. Insurance Law, an insurer must notify an applicant of the right to receive the reason for “any declination, limitation, rate differential or other adverse underwriting decision.” DFS explained in the circular letter that an “adverse underwriting decision” includes the “inability of an applicant to utilize an expedited, accelerated or algorithmic underwriting process in lieu of a traditional medical underwriting.” Where an insurer uses external data sources, the reason or reasons provided to an applicant must include in detail all information upon which the insurer based any adverse underwriting decision, including the specific source of that information. Here, too, the insurer may not rely on the proprietary nature of a third party’s process or analytical tool to justify a lack of specificity regarding the adverse underwriting action. Additionally, the insurer must obtain consumer consent before accessing external data where required by law or regulation.
The failure to adequately disclose to a consumer the material elements of an accelerated or algorithmic underwriting process, and the external data sources upon which that process relies, may constitute an “unfair trade practice” under Article 24 of the N.Y. Insurance Law, which could subject the insurer to administrative and judicial penalties.
DFS specified that the circular letter should not be interpreted as an exhaustive list of the issues that could arise from the use of external data, whether for life insurance or other kinds of insurance.
Algorithms and predictive models may be the focus of examinations
DFS cautioned that it reserves the right to audit underwriting criteria, algorithms and models, including as part of routine examinations. Noncompliance may lead to disciplinary action, including fines, license revocation or suspension, and the withdrawal of product forms.