A prominent AI researcher has turned his back on computer vision over ethical issues.
What happened: The co-creator of the popular object-recognition network You Only Look Once (YOLO) said he no longer works on computer vision because the technology has “almost no upside and enormous downside risk.”
Why he quit: Joseph Redmon, a graduate student at the University of Washington with a charmingly unorthodox résumé, said on Twitter, “I stopped doing CV research because I saw the impact my work was having.” He didn’t respond to a request for an interview.
- “I loved the work but the military applications and privacy concerns eventually became impossible to ignore,” he said.
- Redmon disclosed his decision in a discussion sparked by a call for papers from NeurIPS requiring authors to include a statement discussing “ethical aspects and future societal consequences” of their work.
- He previously aired his concerns in a 2018 TEDx talk. He had been “horrified” to learn that the U.S. Army used his algorithms to help battlefield drones track targets, he said, urging the audience to make sure technology is used for good.
Behind the news: Redmon and his faculty advisor Ali Farhadi devised YOLO in 2016 to classify objects in real time, funded partly by Google and the U.S. Office of Naval Research. The work won a People’s Choice Award at that year’s Computer Vision and Pattern Recognition conference. The last update came in April 2018.
Why it matters: Concerns are mounting over a number of ethical issues in machine learning, including biased output, potential misuse, and adverse social impacts. The field stands to lose more talented researchers if it doesn’t come to grips with issues like this.
We’re thinking: Researchers need to recognize the ethical implications of their work and guide it toward beneficial uses. Many technologies have both civilian and military applications, and opting out may not be as powerful as staying engaged and helping to shape the field from within.