Pie & AI Asia: On Ethical AI with Andrew Ng


Amid growing doubt that technology can be a positive force, it’s more important than ever to make sure that we, the AI community, act ethically. Several corporations and governments have already published AI ethics codes. But if we hope that the global AI community will follow a set of guidelines, then that community needs a bigger voice in their development. We need an ethical code written by the AI community, for the AI community. 

Last Friday, we took a first step in this direction. On December 6, four community groups in Asia joined us for our first Pie & AI group discussion event on AI and ethics. Andrew started with an interactive discussion, introducing major ethical AI issues including deepfakes and adversarial attacks. He argued that current AI ethics frameworks, like this one, were not concrete or actionable enough. 

Each city was assigned one ethical AI topic drawn from this survey of current AI ethics frameworks and came up with three actionable ethics statements on it. They presented their top three statements to Andrew at the end of the event. Here’s what they came up with:


Community group: Artificial Intelligence Hong Kong
Topic: Transparency (Increase explainability and interpretability of AI systems)
Top 3 Guidelines: 
  • An AI engineer should make their AI programmes transparent, explainable and understandable to end users, while allowing companies to keep a suitable degree of privacy about their innovative technology.
  • Although biased conclusions may be unavoidable due to data sources and limited knowledge of aspects like demographics and cultural differences (e.g., facial recognition across different races), an AI engineer should do their best to remain unbiased, using tools like sensitivity analysis and stating assumptions and constraints in order to deliver a professional judgment.
  • An AI engineer should be vigilant about malicious coding and be responsible for their own coding algorithms. These algorithms should be beneficial to humankind.
Community group: Machine Learning Tokyo
Topic: Fairness (Preventing, monitoring and mitigating unwanted bias and discrimination.)
Top 3 Guidelines: 
  • An AI engineer should, when creating data, seek review from diverse perspectives and people impacted by their data, to avoid unintended consequences or bias.
  • An AI engineer should build robust and explainable models and apply tools to visualize, making the models human-friendly.
  • An AI engineer should associate data with models trained on it and keep track of data lineage.


Community group: AI Pilipinas
Topic: Privacy and Security (Ensuring AI systems are secure and protect user data)
Top 3 Guidelines:
  • An AI engineer should respect the user’s right to privacy by making sure to get user consent for specific services through understandable and unambiguous language.
  • An AI engineer should create or design a unified privacy framework based on the laws of a given jurisdiction or geographic area.
  • An AI engineer should ensure the reliability of the AI system in terms of security, and have a fail-safe allowing human intervention in case data or privacy is breached.


Community group: AI Singapore
Topic: Beneficence (Ensuring that AI systems promote human well-being and are aligned with human values)
Top 3 Guidelines:
  • An AI engineer should ensure that the data used to train models are properly anonymized and do not contain personally identifiable information.
  • An AI engineer should encourage greater creativity in the AI ecosystem by acknowledging open source contributions and respecting their terms and conditions.
  • An AI engineer should ensure that training datasets are sufficient in amount, unbiased, labelled, machine readable, and accessible to a reasonable degree. The benefits of AI should also not favour or disadvantage any group of individuals.


We were blown away by how thoughtful and insightful all four groups were during the event. This is only the starting point; we hope to host many more events about ethical AI and discuss how the deep learning community can navigate these issues together.
