Landmark Case Challenges AI In Housing
In a landmark case highlighting the potential harms of AI decision-making in housing, SafeRent, an AI-powered tenant screening company, has agreed to a $2.3 million settlement and to stop using its scoring system on rental applicants who pay with housing vouchers.
In a Rush? Here are the Quick Facts!
- SafeRent rejected Mary Louis’s rental application despite a strong reference from her landlord.
- A lawsuit alleged SafeRent’s scoring discriminated against Black and Hispanic renters using vouchers.
- Federal agencies are monitoring the case as AI regulation in housing remains limited.
The lawsuit, brought by tenants Mary Louis and Monica Douglas, alleged that SafeRent's algorithm disproportionately penalized Black and Hispanic renters who relied on housing vouchers, in violation of the Fair Housing Act, as first reported by The Guardian.
Mary Louis, a security guard in Massachusetts, was among over 400 renters affected by SafeRent’s controversial system. Despite receiving a glowing reference from her landlord of 17 years and using a low-income housing voucher guaranteeing partial rent payment, her application was rejected.
The rejection came after SafeRent assigned her a score of 324, far below the management company's minimum requirement of 443. SafeRent provided neither an explanation of the score nor any way to appeal it, as reported by The Guardian.
The lawsuit, filed in 2022, accused SafeRent of using an opaque scoring system that factored in irrelevant financial data, such as credit card debt, while ignoring the guaranteed payments provided by housing vouchers, according to The Guardian.
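To see why that design matters, consider a minimal sketch of a threshold-based screening score. SafeRent's actual model is proprietary and undisclosed; the only figure below drawn from the reporting is the 443 cutoff, and every field name, weight, and formula here is invented purely to illustrate how a score that penalizes credit history while giving no weight to guaranteed voucher income can reject a tenant with a spotless rent record.

```python
# Hypothetical illustration only -- SafeRent's real scoring model is not
# public. All field names, weights, and the scoring formula are invented;
# only the 443 minimum-score threshold comes from the case reporting.

from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int          # traditional credit score
    credit_card_debt: float    # outstanding balance, in dollars
    voucher_covers_rent: bool  # voucher guarantees partial rent payment
    years_on_time_rent: int    # landlord-verified rental history

def opaque_score(a: Applicant) -> int:
    """Invented rule: leans entirely on credit data."""
    score = a.credit_score
    score -= int(a.credit_card_debt / 100)  # penalize consumer debt
    # Note what is *missing*: the guaranteed voucher payment and the
    # applicant's actual rent-payment record carry no weight at all.
    return score

MINIMUM_SCORE = 443  # threshold figure reported in the case

applicant = Applicant(
    credit_score=550,
    credit_card_debt=15_000.0,
    voucher_covers_rent=True,
    years_on_time_rent=17,
)

score = opaque_score(applicant)
decision = "accept" if score >= MINIMUM_SCORE else "reject"
print(f"score={score}, decision={decision}")  # score=400, decision=reject
```

In this toy version, the two facts most predictive of whether the rent actually gets paid, the guaranteed voucher and 17 years of on-time payments, never enter the calculation, which is precisely the gap the plaintiffs alleged.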
Studies show that Black and Hispanic renters are more likely to have lower credit scores and rely on vouchers than white applicants, exacerbating existing inequalities, as reported by the National Consumer Law Center.
Louis described her frustration with the algorithm’s lack of context. “I knew my credit wasn’t good. But the AI doesn’t know my behavior – it knew I fell behind on paying my credit card but it didn’t know I always pay my rent,” she said to The Guardian.
The settlement, approved on November 20, is notable not only for its financial component but also for mandating operational changes.
SafeRent can no longer use a scoring system or recommend tenancy decisions for applicants using housing vouchers without independent validation by a third-party fair housing organization. Such adjustments are rare in settlements involving tech companies, which typically avoid altering core products, noted The Guardian.
“Removing the thumbs-up, thumbs-down determination really allows the tenant to say: ‘I’m a great tenant’,” said Todd Kaplan, an attorney representing the plaintiffs, as reported by The Guardian.
The case underscores growing concerns about the use of AI in foundational aspects of life, including housing, employment, and healthcare.
A 2024 Consumer Reports survey revealed widespread discomfort with algorithmic decision-making, particularly in high-stakes areas. Critics argue that these systems often rely on flawed statistical assumptions, leading to discriminatory outcomes.
Kevin de Liban, a legal expert on AI harms, noted that companies face little incentive to create equitable systems for low-income individuals. “The market forces don’t work when it comes to poor people,” he said, emphasizing the need for stronger regulations, as reported by The Guardian.
“To the extent that this is a landmark case, it has a potential to provide a roadmap for how to look at these cases and encourage other challenges,” Kaplan said, though experts caution that litigation alone cannot hold companies accountable, as reported by The Guardian.
For renters like Louis, however, the settlement represents a hard-fought victory, paving the way for fairer treatment of those reliant on housing assistance programs.