White House Supports Limits on AI: U.S. Rules Protect Citizens from AI-Powered Surveillance and Discrimination
As governments worldwide mull their AI strategies and policies, the Biden administration called for a “bill of rights” to mitigate adverse consequences. What’s new: Top advisors to the U.S. president announced a plan to issue rules that would protect citizens from AI-powered surveillance and discrimination.
The Social Nightmare: Facebook Whistleblower Exposes How Company Harms Users
Scrutiny of Facebook intensified after a whistleblower leaked internal research showing the company has known that its ongoing drive to engage users has harmed individuals and society at large.
Distance Killing: Israeli Agents Assassinated Iranian Scientist With AI-Assisted Sniper Rifle
A remote sniper used an automated system to take out a human target located thousands of miles away. What happened: The Israeli intelligence agency Mossad used an AI-assisted rifle in the November killing of Iran’s chief nuclear scientist.
UN Calls Out AI: UN Report Highlights AI-Related Risks for Privacy, Bias
Human rights officials called for limits on some uses of AI. What’s new: Michelle Bachelet, the UN High Commissioner for Human Rights, appealed to the organization’s member states to suspend certain applications of the technology.
Rules for Recommenders: China Bans Harmful Recommendation Algorithms
China moved toward a clampdown on recommendation algorithms. What’s new: China’s internet regulatory agency proposed rules that include banning algorithms that spread disinformation or threaten national security.
AI Engineers Weigh In on AI Ethics: Survey Shows How AI Engineers Feel About Ethical Issues
Machine learning researchers tend to trust international organizations, distrust military forces, and disagree on how much disclosure is necessary when describing new models, a new study found.
Weak Foundations Make Weak Models: Foundation AI Models Pass Flaws to Fine-Tuned Variants
A new study examines a major strain of recent research: huge models pretrained on immense quantities of uncurated, unlabeled data and then fine-tuned on a smaller, curated corpus.
Fighting Addiction or Denying Care?: NarxCare Medical AI Denies Painkillers to Patients in Need
An epidemic of opioid abuse in the U.S. killed 93,000 people in 2020 alone. An algorithm intended to help doctors prescribe the drugs responsibly may be barring worthy patients from pain relief.
User Privacy Versus Child Safety: Apple to scan user phones for images of child abuse.
Apple, which has made a point of its commitment to user privacy, announced that it will scan iPhones for evidence of child abuse. The tech giant will include a machine learning model on the device itself to flag such images.
U.S. Lax on Face Recognition: U.S. agency calls for stricter face recognition controls.
A U.S. government watchdog agency called for stronger face recognition protocols for federal agencies. An audit found that, while many agencies employ face recognition, they may not know where the technology came from, how it’s being used, or the hazards involved.
When Algorithms Manage Humans: Amazon drivers say AI unfairly graded their performance.
Some delivery drivers fired by Amazon contend that the retailer’s automated management system played an unfair role in terminating their employment. They worked for Amazon Flex, an Uber-like program that enables independent drivers to earn money delivering the company’s packages.
Is Ethical AI an Oxymoron?: Survey finds tech pros are pessimistic about ethical AI.
Many people both outside and inside the tech industry believe that AI will serve mostly to boost profits and monitor people — without regard for negative consequences.
Face Recognition for the Masses: PimEyes is reverse image search for face recognition.
Face recognition tech tends to be marketed to government agencies, but PimEyes offers a web app that lets anyone scan the internet for photos of themselves — or of anyone they have a picture of. The company says it aims to help people control their online presence and fight identity theft.