FTC accuses Rite Aid of misusing facial recognition technology in stores

Pharmacy chain Rite Aid misused facial recognition technology in a way that subjected shoppers to unfair searches and humiliation, the Federal Trade Commission said Tuesday, part of a landmark settlement that could raise questions about the technology’s use in stores, airports and other venues across the country.

Federal regulators said Rite Aid deployed the face-scanning technology, which uses artificial intelligence to try to identify people captured on surveillance cameras, in hundreds of stores between 2012 and 2020 in hopes of cracking down on shoplifters and other problem customers.

But the chain’s “reckless” failure to implement security precautions, coupled with the technology’s long history of inaccurate matches and racial bias, ultimately led store employees to falsely accuse customers of theft, causing “embarrassment, harassment and other harm,” sometimes in front of their family members, colleagues and friends, the FTC said in a statement.

In one case, a Rite Aid employee searched an 11-year-old girl because of a false facial recognition match, leaving her so distressed that her mother missed work, the FTC said in a complaint filed in federal court. In another case, employees called the police on a Black customer after the technology mistook her for the actual target, a White woman with blonde hair.

Rite Aid said in a statement that it used facial recognition in only “a limited number of stores” and that it ended the pilot program more than three years ago, before the FTC’s investigation began.

As part of the settlement, the company agreed not to use the technology for five years, to delete the facial images it collected and to report on its compliance to the FTC annually, the agency said.

“We respect the FTC’s inquiry and are aligned with the agency’s mission to protect consumer privacy,” the company said.

Rite Aid’s system scanned the faces of customers as they entered its stores and looked for matches in a large database of suspected and confirmed shoplifters, the FTC said. When the system found a match, it prompted store employees to monitor the shopper closely.

But the database contained low-resolution images, captured by grainy surveillance cameras and cellphones, that undermined the quality of the matches, the FTC said. Those faulty matches then led employees to trail customers around the store or call the police, even when they had not witnessed a crime.

Rite Aid did not tell customers it was using the technology, the FTC said, and it instructed employees not to disclose its use “to consumers or the media.” The FTC said Rite Aid contracted with two companies to help create its “persons of interest” database, which included tens of thousands of images. These companies were not identified.

Major errors were commonplace, the FTC said. Between December 2019 and July 2020, the system generated more than 2,000 “match alerts” for the same person in far-flung stores around the same time, even though the scenarios were “impossible or implausible.”

In one case, Rite Aid’s system generated more than 900 “match alerts” for a single person over a five-day period at 130 different stores, including in Seattle, Detroit and Norfolk, regulators said.

The system generated thousands of false matches, many of them involving the faces of women, Black people and Latinos, the FTC said. Federal and independent researchers have found in recent years that these groups are more likely to be misidentified by facial recognition software, although proponents of the technology say the systems have since improved.

Rite Aid also prioritized the technology for stores that predominantly served people of color, according to the FTC. Although about 80 percent of Rite Aid’s stores are in predominantly White areas, the FTC found that most of the stores using the facial recognition program were in predominantly non-White areas.

The false accusations left many shoppers feeling they had been racially profiled. In a note cited by the FTC, one shopper wrote to Rite Aid that the experience of being stopped by an employee was “emotionally damaging.” “Every black man is not [a] thief nor should they be made to feel like one,” the unnamed customer wrote.

The FTC said Rite Aid’s use of the technology also violated a 2010 data security order, part of an earlier FTC settlement reached after employees of the pharmacy chain were found to have thrown people’s health records into open trash cans. Under the new settlement, Rite Aid must implement a robust information security program overseen by the company’s top executives.

The FTC’s action could have implications for other major retail chains in the United States that have used facial recognition technology, such as Home Depot, Macy’s and Albertsons, according to a “scorecard” from Fight for the Future, an advocacy group.

Evan Greer, the group's director, said in a statement: “The message to corporate America is clear: Stop using discriminatory and invasive facial recognition now or prepare to pay the price.”

FTC Commissioner Alvaro Bedoya, who founded a Georgetown Law research center critical of facial recognition before joining the FTC last year, said in a statement that the Rite Aid case was “part of a broader trend of algorithmic injustice” and called on company executives and federal lawmakers to ban or restrict the use of “biometric surveillance tools” on customers and employees.

“There are some decisions that should not be automated at all; many technologies should not be used at all,” Bedoya wrote. “I urge lawmakers who want stronger protections against biometric surveillance to include these protections in legislation and enact them into law.”

Joy Buolamwini, an AI researcher who has studied racial bias in facial recognition, said the Rite Aid case is an “urgent reminder” that the country’s failure to enact comprehensive privacy laws leaves Americans vulnerable to risky experiments with surveillance.

“These are the types of common sense restrictions that have been a long time coming to protect the public from the reckless introduction of surveillance technologies,” she said in a text message. “The face is the final frontier of privacy and it is more important now than ever that we fight for our biometric rights, from airports to drugstores to schools and hospitals.”