A major company is using face recognition to settle scores.
What's new: MSG Entertainment, which operates large entertainment venues in several cities in the United States, used face recognition to block its perceived enemies from attending events at New York’s Madison Square Garden and Radio City Music Hall, The New York Times reported.
What happened: MSG used the technology on at least two occasions to bar or eject attorneys who work at law firms involved in litigation against the company.
- In November 2022, guards at Radio City Music Hall prevented Kelly Conlon from attending a concert with her daughter after face recognition identified her as an attorney at a firm representing a plaintiff in a personal-injury lawsuit against MSG.
- The previous month, Madison Square Garden ejected Barbara Hart after face recognition identified her as an attorney at a different firm suing MSG on behalf of some of its shareholders.
- MSG claimed that the actions were legal and consistent with its established policy of barring attorneys employed by firms engaged in active lawsuits against the company, regardless of whether the individual attorney is involved in the litigation.
Behind the news: New York does not restrict use of face recognition by private companies. MSG venues have used the technology since at least 2018 to compare attendees’ faces to a database of photographs and flag individuals the company considers undesirable. Prior to Conlon’s ejection, a judge ruled that MSG has a right to deny entry to anyone who doesn’t hold a valid ticket; Conlon’s employer sued in a case that is ongoing.
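Systems like the one described above typically work by converting each face into a numerical embedding and comparing it against a watchlist of embeddings. The sketch below is illustrative only, not a description of MSG's actual system; the threshold value and toy low-dimensional embeddings are assumptions for demonstration (real systems use embeddings with hundreds of dimensions produced by a neural network):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def flag_match(probe, watchlist, threshold=0.8):
    # Return the index of the closest watchlist embedding,
    # or None if no similarity clears the threshold.
    scores = [cosine_similarity(probe, emb) for emb in watchlist]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= threshold else None

# Toy 4-dimensional "embeddings" standing in for real face vectors.
watchlist = [[1.0, 0.0, 0.0, 0.0],
             [0.0, 1.0, 0.0, 0.0]]
probe = [0.95, 0.05, 0.0, 0.0]   # close to watchlist entry 0
print(flag_match(probe, watchlist))  # prints 0
```

The threshold controls the trade-off between false matches and missed matches, which is one reason such deployments raise fairness concerns: a fixed threshold can perform unevenly across demographic groups.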
Why it matters: Privacy advocates have long feared that face recognition could enable powerful interests to single out individuals for retribution. MSG’s use of the technology to target its perceived enemies certainly fits that description.
We're thinking: Face recognition is a flashpoint in AI, and rightly so. We need to protect privacy and fairness even as we improve safety and productivity. But outrage over such ill-considered uses of the technology could lead regulators to ban it despite its potential for good — for instance, by helping security personnel identify people who are legally barred from an area. Regulators who focus on face recognition should address ethical gray areas as well as outright abuses.