The insurance industry uses underwriting guidelines to determine who they will insure and at what rate. Insurance deals in risk, so insurance companies create these guidelines to determine the circumstances under which they will assume that risk—and when they won’t because someone is too risky to insure.
While insurance companies are prohibited from discriminating based on factors like race, guidelines use particular facts about people to measure risk and set rates. This means some form of discrimination is both necessary and legal. Still, questions about what counts as fair or unfair discrimination have received increasing attention, especially since George Floyd’s murder in 2020.
- Insurance companies use underwriting guidelines to set rates and determine who they will insure.
- Companies set rates based on risk and factors they’re allowed by law to consider, such as an applicant’s gender.
- While insurance companies say certain factors are actuarially sound criteria for setting rates, consumer advocates think companies should determine rates using factors people can control.
- Historically biased insurance rules include redlining, restrictive covenants, race-based insurance premiums, and what advocates call subtle proxies for unfair discrimination, such as using ZIP codes and credit scores to price auto insurance.
- In recent years, regulators and members of the insurance industry have proposed policies to reduce discrimination.
The National Association of Insurance Commissioners (NAIC) is the standard-setting organization for the insurance industry. In response to the George Floyd protests, the NAIC held a special session on race to scrutinize the connection between insurance and racial discrimination.
Though overt racial discrimination has become less common, NAIC members said forms of discrimination persist, especially in the use of big data. Moreover, as discussed below, lawsuits and investigations have alleged that long-standing discriminatory practices, such as redlining and race-based premiums, continue to affect the industry.
The categories presented here are not exhaustive. Health insurance, for example, has been another area of concern, particularly regarding federal rules such as Nondiscrimination in Health and Health Education Programs or Activities, a final rule the Centers for Medicare and Medicaid Services released on June 12, 2020.
The rule was immediately criticized by California Insurance Commissioner Ricardo Lara, among others, as a roadblock to healthcare access for LGBTQ+ people, people with disabilities, and anyone whose primary language is not English.
Types of Discrimination
Underwriting guidelines rely on a form of discrimination based on risk profiles. They separate people into high- and low-risk categories to determine premiums and encourage customers to reduce their risky behaviors. Though this is considered acceptable, the history of underwriting is rife with unacceptable discrimination, known as unfair discrimination.
Under U.S. law, underwriting guidelines can't rely on unfair discrimination. Unfair discrimination targets protected classes, defined by characteristics such as race, national origin, sex, or religion. It can take various forms, ranging from higher prices and weaker policies to outright denial of coverage.
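The acceptable, risk-based side of underwriting described above can be pictured in code. This is a minimal, hypothetical sketch — the factors, weights, and tier thresholds are invented for illustration and do not come from any real underwriting guideline:

```python
# Hypothetical risk-tiering sketch: score applicants on risk factors an
# insurer may legally consider, then map the score to a premium tier.
# All factors, weights, and thresholds here are invented for illustration.

BASE_PREMIUM = 1000.0  # annual premium for the lowest-risk tier, in dollars

def risk_score(years_of_experience: int, at_fault_claims: int) -> float:
    """Toy risk score: more driving experience lowers risk, past claims raise it."""
    return max(0.0, 10.0 - years_of_experience) + 5.0 * at_fault_claims

def premium(score: float) -> float:
    """Map a risk score to a premium: low scores pay the base rate,
    higher scores pay proportionally more."""
    if score <= 5.0:        # low-risk tier
        return BASE_PREMIUM
    elif score <= 15.0:     # medium-risk tier
        return BASE_PREMIUM * 1.5
    else:                   # high-risk tier
        return BASE_PREMIUM * 2.5

# A 20-year veteran driver with no claims vs. a new driver with one claim
print(premium(risk_score(20, 0)))  # 1000.0 (low-risk tier)
print(premium(risk_score(1, 1)))   # 1500.0 (medium-risk tier)
```

The point of the sketch is that every input is plausibly related to risk; unfair discrimination would mean pricing on a factor, like race, that is unrelated to the risk being insured.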
Disparate Impact vs. Unfair Discrimination
Conversations about algorithmic modeling in insurance tend to conflate “unfair discrimination” and “disparate impact,” which are legally separate concepts, according to Susan T. Stead of Squire Patton Boggs, LLP. Disparate impact, Stead says, is a legal method to prove discrimination in the absence of “overt discrimination” against a protected class. On the other hand, unfair discrimination occurs when the same risks are treated differently because of a factor unrelated to risk. It is banned by laws in every state.
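The disparate-impact concept can be made concrete with a simple statistical screen. One widely cited test — the "four-fifths rule," borrowed from employment law and not a standard named by Stead — flags a facially neutral practice if one group's selection rate falls below 80% of another's. The numbers below are invented for illustration:

```python
def selection_rate(approved: int, applicants: int) -> float:
    """Fraction of applicants in a group who were approved for coverage."""
    return approved / applicants

def adverse_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    low, high = sorted((rate_a, rate_b))
    return low / high

# Invented example: a facially neutral criterion approves two groups at
# different rates, even though no protected class is used directly.
group_1 = selection_rate(72, 100)   # 0.72
group_2 = selection_rate(90, 100)   # 0.90

ratio = adverse_impact_ratio(group_1, group_2)
print(round(ratio, 2))  # 0.8
# Under the four-fifths screen, a ratio below 0.8 would flag the
# practice for closer disparate-impact review.
```

Note the contrast with unfair discrimination: this screen looks only at outcomes across groups, with no need to show that any factor unrelated to risk was used directly.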
A legal review from the University of Michigan Law School in 2013 found that anti-discrimination laws “vary a great deal” by state and across insurance types. It also reported that a “surprising” number of jurisdictions didn’t have specific laws restricting unfair discrimination based on race, suggesting that the federal government needs to take a larger role in regulating race-based discrimination in insurance.
Notable Examples of Underwriting Guidelines Discrimination
Redlining and housing
Redlining is a form of discrimination that has received popular attention in recent years for its continued effect on inequality. The practice traces back to the Franklin Delano Roosevelt administration.
During that time, the federal government began insuring home mortgages to expand homeownership, largely to the benefit of the White middle class. The Home Owners’ Loan Corporation (HOLC), a government agency, classified neighborhoods across the country by perceived level of risk based on factors like:
- The age and condition of the housing
- Access to transportation
- Community amenities
- Proximity to undesirable properties (e.g., polluting industries)
- The residents’ employment status and economic class
- The residents’ ethnic and racial composition
The neighborhoods were color-coded on maps according to risk. Communities with predominantly ethnic and racial minority populations were colored in red (hence, “redlined”). These areas were considered “hazardous,” so lenders refused to make loans. In short, redlining shunted resources, including loans and insurance, away from communities of color.
HOLC described the red neighborhoods on its maps as “hazardous” and “characterized by detrimental influences in a pronounced degree, undesirable population or an infiltration of it.” HOLC recommended lenders “refuse to make loans in these areas [or] only on a conservative basis.”
Underwriting guidelines from the Federal Housing Administration (FHA) spelled out the explicitly racial component of these maps, even stating that “incompatible racial groups should not be permitted to live in the same communities.” The effects of these maps and racially restrictive covenants were devastating and persist today.
The Civil Rights Act
Since then, the more explicit forms of discrimination have become illegal. The 1948 Supreme Court case Shelley v. Kraemer, for instance, found racially restrictive covenants unenforceable because they violate the 14th Amendment. The Civil Rights Act of 1964 made many forms of racial discrimination illegal, which had an impact on race-based premiums in life insurance, discussed below.
Several other developments in this area would touch on redlining specifically. The 1968 Fair Housing Act, passed after the assassination of Dr. Martin Luther King Jr., disallowed redlining based on race.
The 1965 Housing and Urban Development Act, meant to coordinate federal housing programs, established grants for poor homeowners, rent subsidies for the elderly and people with disabilities, greater access to public housing, and favorable loans for military veterans. The 1975 Home Mortgage Disclosure Act (HMDA) later required lenders to disclose census-level information about their lending.
Despite this, there are allegations that this discrimination still occurs in practice. For example, a series of lawsuits from New York alleged that redlining practices continued into the 21st century.
Race and life insurance
According to an article by Mary L. Heen in the Northwestern Journal of Law & Social Policy, the life insurance industry has a history of reinforcing racial hierarchies in the U.S. After Reconstruction, she writes, the insurance industry pointed toward high mortality rates and innate racial differences to justify life insurance that offered emancipated enslaved people only two-thirds of the benefits that were offered to White people.
When setting premium rates, companies with race-based premiums tended to ignore any statistics that didn’t fit preconceived hierarchies, such as women’s lower mortality rate, suggesting that risk was not the primary motivating factor in setting premiums.
In 1958 the Travelers Insurance Company became the first company to offer life insurance at a lower rate for women than men.
Such practices continued into the 20th century. In 1940, for instance, the NAIC published a study of mortality rates by race, which insurers then used to set race-based premiums. According to the NAIC, the study fueled discriminatory underwriting policies even after the use of race was banned.
Insurers at the time carried two sets of rate books, one reflecting higher rates for Black customers, who mainly purchased “industrial life insurance” to cover burial costs. The policies offered to Black people covered less and cost more, with premiums as much as 30% to 40% higher, according to George Nichols III, president and CEO of the American College of Financial Services.
Beginning in 2000, the insurance industry paid out $556 million in restitution and fines for lawsuits related to millions of policies sold with race-based premiums and payouts in the 20th century.
Race-based premiums remained legal until 1964, when pressure from civil rights activists during the Lyndon B. Johnson presidency led to the passage of the Civil Rights Act.
Race and auto insurance
Auto insurance policies first appeared in the U.S. in 1897. In 1938, New Hampshire passed a state insurance law mandating that insurers offer specific kinds of coverage, known as an assigned risk plan, making it the first state to do so. No-fault insurance came later when Massachusetts introduced it in 1970. Guaranteed access to auto insurance would also come in the 1970s. In 1976, South Carolina passed a law that guaranteed auto insurance access to everyone within its jurisdiction who was eligible, according to the NAIC.
Other updates in the 1970s touched on access to auto insurance. Some major items from that decade and the next include:
- In 1977, a state report from the Michigan Insurance Bureau recommended making underwriting standards for auto insurance subject to the bureau’s oversight to nix “subjectivity.”
- In 1978, Massachusetts created a statewide system for regulating auto insurance that guaranteed access and banned using protected characteristics to set prices.
- In 1978, the Michigan Supreme Court found that mandates of no-fault coverage were unconstitutional.
- In 1986, the Government Accountability Office (GAO) conducted a comprehensive study of auto insurance, including how the cost and availability of insurance were affected by states that restricted the factors insurers use for pricing.
In recent years, investigations have revealed that practices like redlining have persevered in subtle forms in the auto insurance industry. A 2017 investigation published in Consumer Reports that used payout data found disparate auto insurance prices in California, Illinois, Missouri, and Texas that its authors say cannot be explained by differences in risk, suggesting what they call a “subtler form of redlining.”
In 2020, the Consumer Federation of America (CFA) reported that research from the organization had revealed ongoing discrimination in the auto insurance industry. According to the CFA, most auto insurance companies were using non-driving factors to set rates, raising costs for drivers with certain characteristics.
“The companies will insist that they never ask for a customer’s race, but if they are serious about confronting systemic racism, it is time they recognize that their pricing tools use proxies for race that make government-required auto insurance more expensive for Black Americans,” said Doug Heller, an insurance expert for the CFA.
Legislation has been proposed to limit discriminatory practices in auto insurance, such as HR 3693 and HR 1756, two bills from 2019 that tried to limit the use of income proxies and credit scores to set policy rates, though the bills never made it out of committee. The incorporation of credit scores in auto insurance began in 1995 with the Fair Isaac Corporation and ChoicePoint. Critics have argued that the practice is a “surrogate for redlining” that drives up premium rates for communities of color.
In 2021, Colorado passed a bill that protects a list of classes—including race, sexual orientation, and gender identity and expression—from discrimination in underwriting. The law requires insurance companies to demonstrate that their “use of external data and complicated algorithms do not discriminate on the basis of these classes,” including race, sex, sexual orientation, or gender identity.
“Under this new law, insurance companies will have to prove that their pricing strategies don’t result in unfair discrimination against otherwise good drivers,” said Heller. “Colorado now has the tools it needs to end discrimination and ensure auto insurance is priced fairly so that everyone can afford the coverage they need.”
The use of algorithms
Insurance companies use sets of instructions known as algorithms to calculate insurance rates (among other uses, such as trading stocks and managing asset-liability risk). However, algorithms can contribute to discrimination in insurance underwriting. In 2020, as part of a congressionally appropriated initiative to modernize the FHA, the agency launched its first algorithmic underwriting system for single-family forward mortgages, which it said would help streamline the mortgage process.
However, questions about the actual impact of algorithmic insurance processes persist.
Advocates say these algorithms promote or extend bias, leading to regulatory proposals to address the issue. One was the 2020 Data Accountability and Transparency Act, which would have created a federal agency to protect privacy and ban the use of personal data to discriminate against protected classes.
It also homed in on underwriting practices specifically and would have required continuous testing for bias in using such algorithms. When bias was found, according to analysts interpreting the bill, it would have required proof that the algorithm was necessary, that its function couldn’t be accomplished through another non-discriminatory means, and that the discrimination wasn’t intentional. A new version of the bill, the Diversity and Inclusion Data Accountability and Transparency Act, was introduced in the House in March 2021.
Restrictions on algorithmic practices also exist at the state level. For example, regulators in New York prevent insurers from using algorithms that would “have a disparate impact on the protected classes identified in New York and federal law.”
However, regulatory analysts have noted that insurers in the state cannot collect information on legally protected classes, which complicates the rule by making it hard to measure the effects algorithms actually have. Other states—including California, Connecticut, Illinois, New Jersey, Michigan, Maryland, and Massachusetts—have enacted or considered some form of restriction on the use of personal information in underwriting, ranging from limits on genetic data in life insurance to scrutiny of education, occupation, and ZIP code as rating criteria.
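One way to picture the kind of algorithmic testing these proposals contemplate is to compare a pricing model's outputs for otherwise-identical risk profiles across groups. The sketch below is purely illustrative — the pricing function, ZIP-code multipliers, and flagging threshold are all invented, and real compliance testing is far more involved:

```python
from statistics import mean

# Invented pricing function standing in for an insurer's algorithm. It
# applies a non-driving factor (a ZIP-code surcharge) on top of risk,
# mimicking the "proxy" critique discussed above.
ZIP_SURCHARGE = {"10001": 1.0, "10002": 1.3}  # invented multipliers

def quoted_premium(base_risk: float, zip_code: str) -> float:
    return 1000.0 * base_risk * ZIP_SURCHARGE[zip_code]

def disparity_test(zip_a: str, zip_b: str, risk_profiles, threshold=1.1):
    """Flag the algorithm if, for identical risk profiles, average quotes
    in one ZIP exceed the other's by more than the threshold ratio."""
    quotes_a = [quoted_premium(r, zip_a) for r in risk_profiles]
    quotes_b = [quoted_premium(r, zip_b) for r in risk_profiles]
    ratio = mean(quotes_b) / mean(quotes_a)
    return ratio > threshold, ratio

# Run the same three risk profiles through both ZIP codes and compare.
profiles = [0.8, 1.0, 1.2]
flagged, ratio = disparity_test("10001", "10002", profiles)
print(flagged, round(ratio, 2))  # True 1.3
```

Because drivers with identical risk profiles receive quotes 30% apart based solely on ZIP code, the test flags the model — the kind of outcome-based check that continuous bias-testing requirements aim to make routine.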
What Criteria Do Insurance Underwriters Consider?
The specifics differ from company to company and by the insurance product. However, insurance underwriters search for the risk factors identified by their company in its underwriting guidelines. For example, life insurance underwriters look at age, gender, health history, marital status, and smoking/drinking habits. On the other hand, auto insurers look at driving records, age, gender, years of driving experience, and claim histories.
What Is Unfair Discrimination in Insurance?
Unfair discrimination happens when similar risks are treated differently and premiums are based not on relative risk but on factors like race.
What Is Redlining?
Redlining is the now-illegal discriminatory practice of denying loans or insurance to residents of certain areas based on their race or ethnicity. Sociologist John McKnight coined the term in the 1960s to describe color-coded maps created by the Home Owners’ Loan Corporation that marked racial minority neighborhoods in red, labeling them “hazardous” to lenders. Redlining significantly contributed to the racial wealth gap that persists today.
The Bottom Line
NAIC members present at the 2020 session on race recommended several ways to redress existing inequalities, such as increasing minority presence in the industry, educating consumers, and regulating big data to ensure transparency, protect privacy, and deter discrimination. The NAIC has also established a special committee to address these issues.