The European Commission pulled ahead of the geopolitical pack, issuing guidelines for ethical development of artificial intelligence.
What happened: Europe’s Ethics Guidelines for Trustworthy AI seek to promote the commission's vision of beneficent artificial intelligence. AI must be legal, ethical, robust, and respectful of human welfare and autonomy. It must protect social institutions and vulnerable populations such as children.
Why it matters: The first of their kind, the new guidelines set a bar for AI policy. Europe’s work is bound to serve as a starting point for other countries.
Behind the news: AI mishaps from viral disinformation to autonomous vehicle crashes, as well as fears of surveillance and autonomous weapons, have led to calls for limits on AI:
- The Organization for Economic Cooperation and Development plans to issue its own guidelines.
- The U.S. Congress is considering the Algorithmic Accountability Act, which calls for rules to evaluate AI systems.
The hitch: Europe’s guidelines are non-binding, and there’s no ready way to enforce them. And some of the principles, such as transparency, aren’t yet technically feasible.
What’s next: The European Union will test the framework with a number of companies and organizations during the coming year. Then it expects to propose next steps.