In a since-deleted tweet thread, the insurance company Lemonade boasted that its artificial intelligence (AI) technology has helped it reduce its loss ratios by determining its customers’ “level of risk.” In an example the company itself provided, Lemonade claims its AI has improved fraud detection through the use of non-verbal cues. This method of data collection raises serious concerns about the accuracy and inherent bias of a system designed to maximize profitability.
Lemonade Insurance and AI Technology
Since its inception in 2015, Lemonade has become one of the fastest-growing businesses in the insurance sector. The company, which is now valued at $5 billion, became popular through its easy-to-use, app-based platform. For many users, it was a quick and painless way to get renters, homeowners, pet, and/or life insurance. Even claims are filed digitally through the app: customers submit a video recording detailing their loss.
In a series of tweets, which have now been deleted, Lemonade detailed its business model and how it collects data. Following the backlash against the tweets, Lemonade tweeted again and posted a blog in an attempt to walk back its blunder, claiming it was “a poorly worded tweet…, which led to confusion.” Ironically, a company that touts its use of AI should know that once something is posted on the internet, it is there forever. The result of Lemonade’s transparency is a blatant confession of how the company uses data collected by AI for its own best interests at the customer’s expense.
Lemonade’s tweet states, “a typical homeowners policy form has 20-40 fields…, so traditional insurers collect 20-40 data points per user.” Lemonade, however, uses bots and machine learning, a type of AI in which a computer is exposed to large amounts of data and learns to make judgments, or predictions, based on the patterns it detects in that data. “AI Maya,” Lemonade’s bot, asks only 13 questions but collects over 1,600 data points, producing “nuanced profiles” and “remarkably predictive insights” about its users. Lemonade claims this data helps determine its customers’ level of risk and improve its loss ratios, which, in other words, means taking in more in premiums than it pays out in claims.
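For readers unfamiliar with the mechanics, here is a minimal sketch of how this kind of pattern-based prediction works in general. Every feature, number, and model choice below is an invented illustration of the technique, not Lemonade’s actual system or data.

```python
# Hypothetical sketch of pattern-based fraud prediction.
# Features, numbers, and model are invented for illustration;
# this is NOT Lemonade's actual system or data.
from sklearn.linear_model import LogisticRegression

# Each row is one past claim reduced to numeric data points
# (e.g., claim amount, account age in days, prior claims filed).
X_train = [
    [1200.0,  30, 0],
    [ 450.0, 900, 1],
    [8000.0,  10, 0],
    [ 300.0, 400, 2],
]
# Labels from past outcomes: 1 = flagged as fraud, 0 = paid.
y_train = [1, 0, 1, 0]

model = LogisticRegression().fit(X_train, y_train)

# A new claim receives a fraud score based purely on patterns in
# the historical data -- including any bias that data carries.
new_claim = [[5000.0, 20, 0]]
print(model.predict_proba(new_claim)[0][1])  # estimated fraud probability
```

The model has no notion of fairness or truth; it simply reproduces whatever patterns, legitimate or not, exist in the data it was trained on.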
Beyond this admission that Lemonade’s goal is to pay out fewer claims, the data points it collects could be used in other parts of the insurance process, such as setting the cost of premiums or deciding whether someone is too risky to insure at all. Furthermore, because customers submit claims through a video, Lemonade’s AI uses non-verbal cues, or facial recognition, to help determine whether a claim is fraudulent.
The Dangers of Using Artificial Intelligence in Insurance Claims Handling
Insurance policyholders make payments, called premiums, to the company for the peace of mind that if something goes wrong, the insurance company will pay to cover the loss or damage. However, insurance companies make money by taking in more in premiums than they pay out in claims, which essentially cuts against the purpose of having insurance. In a deleted tweet, Lemonade stated its use of predictive data lowers its loss ratio: “In Q1 2017, our loss ratio was 368% (friggin’ terrible), and in Q1 2021 it stood at 71%!” In other words, in 2017 Lemonade was losing money on its customers, paying out claims at a far greater rate than it collected premiums. In the company’s own words, this was bad for business, but through the use of AI it was able to improve its loss ratio by 2021 (i.e., reduce the amount paid out by denying customers’ claims and make more money). Insurance companies can “improve” loss ratios by reducing the amount paid out on claims and/or increasing the amount collected in premiums. One way Lemonade says it improved its loss ratio was by using AI to detect fraud.
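To put those figures in concrete terms, a loss ratio is simply claims paid divided by premiums collected. A 368% loss ratio means that for every $1.00 Lemonade collected in premiums, it paid out roughly $3.68 in claims; at 71%, it pays out only about $0.71 of each premium dollar and keeps the difference.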
How AI Impacts Evaluation of Insurance Claims
Lemonade stated in the deleted tweets that its AI analyzes claims using “non-verbal cues,” or facial recognition technology. Traditionally, insurance claims are evaluated based on objective data, such as photos, financial information, and receipts. In Texas, and many other states, insurance companies must have a reasonable basis to deny a claim. That is why legitimate insurers have adjusters evaluate claims based on objective data—not facial expressions.
Machine learning systems like Lemonade’s AI technology can be biased depending on who builds them, how they are developed, and how they are ultimately used. Because insurance companies make money by denying claims and charging higher premiums, it is not far-fetched that Lemonade would build and use its AI to maximize profits. Facial recognition technology in general is notoriously biased because machine learning models are trained on historical data, and the data sets used to train them have been found to consist predominantly of male and Caucasian faces. This makes the resulting AI less accurate at correctly identifying the faces of people of color, women, children, and those who are gender-nonconforming. Evaluating claims based on non-objective data, or “non-verbal cues,” such as race or ethnicity, accent, gender, or age, creates a real risk of wrongfully denying claims through racial profiling.
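To see why skewed training data matters, consider a minimal sketch of the mechanism. The data, features, and model below are entirely synthetic and invented for illustration; this is a generic demonstration of training-set imbalance, not Lemonade’s system.

```python
# Hypothetical sketch of how an imbalanced training set produces
# unequal error rates across groups. All data is synthetic and
# invented for illustration only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def make_group(center, n):
    # Two synthetic feature dimensions per example, clustered
    # around the group's center, with an arbitrary label rule.
    X = rng.normal(loc=center, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > sum(center)).astype(int)
    return X, y

# Group A dominates the training data; group B is barely represented,
# mirroring data sets composed predominantly of one demographic.
Xa, ya = make_group(center=(0.0, 0.0), n=950)
Xb, yb = make_group(center=(4.0, 4.0), n=50)
model = KNeighborsClassifier().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluated on fresh samples, the underrepresented group typically
# scores lower, because the model learned its patterns poorly.
for name, center in [("A (majority)", (0.0, 0.0)), ("B (minority)", (4.0, 4.0))]:
    X_test, y_test = make_group(center, n=500)
    print(name, "accuracy:", model.score(X_test, y_test))
```

The same dynamic plays out at far higher stakes when the underrepresented group is made up of real people whose claims are being judged.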
Lemonade’s boast that its AI helps improve loss ratios by increasing fraud detection is incredibly worrisome: it is in the company’s financial interest to deny claims as fraudulent, and it is relying on a system with inherently biased algorithms to do so. It is likely that Lemonade’s system, whether purposefully or not, has a propensity for racial profiling when evaluating customers’ claim videos, as it relies on information other than objective evidence of the actual damage, such as facial expressions and physical appearance. Furthermore, there is a real question of legality, as claims may be denied without any reasonable basis.
How We Can Help
Studies have shown that AI can discriminate based on race, gender, economic class, and disability, among other categories, leading people to be denied housing, jobs, education, and now insurance coverage. As a spokesperson for Fight for the Future put it, Lemonade’s deleted tweets provide insight into how it and other companies are “using AI to increase profits with no regard for peoples’ privacy or the bias inherent in these algorithms.” Being wrongfully accused of insurance fraud is not only incredibly offensive, but it can also have legal implications on top of the economic injury of not being rightfully paid on a claim. If you or someone you know has used Lemonade’s services only to find its AI flagged you for fraud and wrongfully denied your claim, our team of insurance claims attorneys at Raizner Law has the experience necessary to right this wrong.