Ayanna Howard

As AI engineers, we have tools to design and build any technology-based solution we can dream of. But many AI developers don’t consider it their responsibility to address potential negative consequences as a part of this work. As a result, we continue to hear about inequities in the delivery of medical care, access to life-changing educational opportunities, financial assistance to people of meager means, and many other critical needs.

In the coming year, I hope the AI community can reach a broad consensus on how to build ethical AI.

The key, I believe, is training AI engineers to attend more fully to the potential consequences of their work. Typically, we’ll design a cool algorithm that matches faces in a database or generates chatbot conversations, and hand it off. Then we move on to the next project, oblivious to the fact that police departments are using our system to match mugshots to pencil sketches, or hate groups are using our chatbot to spread fear and lies.

This is not how things work in other areas of engineering. If you’re a civil engineer and you want to build a bridge, you need to model the entire scenario. You don’t model a generic bridge, but a particular bridge that crosses a particular river in a particular town. You consider all the conditions that come with it, including cars, people, bicycles, strollers, and trains that might cross it, so you can design the right bridge given the circumstances.

Similarly, we need to think about our work within the context of where it will be deployed and take responsibility for the potential harms it may cause, just as we take responsibility for identifying and fixing the bugs in our code.

Training AI engineers with this mindset can start by bringing real-world examples into the training environment, to show how the abstract concepts we learn play out in reality. In a course about word embeddings, for instance, we can look closely at their role in, say, hate speech on social media and how such messages bear on people of a particular gender, religion, or political affiliation — people just like us.
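To make this concrete, here is a minimal sketch of the kind of classroom probe such a course might run, assuming the gensim library and its downloadable pretrained GloVe vectors; the specific model and probe words are illustrative choices, not a prescribed exercise:

```python
# A small bias probe on pretrained word embeddings, the sort of
# hands-on example a course could pair with a discussion of how
# these associations affect real people when deployed.
import gensim.downloader as api

# Illustrative model choice: 100-dimensional GloVe vectors trained
# on Wikipedia + Gigaword (downloaded on first use).
model = api.load("glove-wiki-gigaword-100")

# Classic analogy probe: shift "doctor" along the man -> woman
# direction and see which words the embedding space suggests.
print(model.most_similar(positive=["doctor", "woman"],
                         negative=["man"], topn=5))

# Compare how close identity terms sit to a loaded adjective.
# The word choices here are examples for class discussion.
for group in ["christian", "muslim", "atheist"]:
    print(group, round(float(model.similarity(group, "violent")), 3))
```

A few lines like these can turn an abstract lecture on embedding geometry into a conversation about who is harmed when such associations ship inside a product.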

And this training is not just for students. Doctors and nurses must earn continuing education credits to keep practicing. Why not in AI? Employers can make continuing education in ethical AI a condition of ongoing employment for their developers.

This may seem like a big change, but it could happen very quickly. Consider the response to Covid-19: Educational institutions and companies alike immediately implemented work-from-home policies they had previously considered impossible. And one of the nice things about technology is that when the top players change, everyone else follows to avoid losing competitive advantage. All it takes is for a few leaders to set a new direction, and the entire field will shift.

Ayanna Howard directs the Human-Automation Systems Lab and chairs the School of Interactive Computing at the Georgia Institute of Technology.
