How do you make sure AI is trustworthy? The EU wrote a checklist.

The European Union wants to lead the world in developing ethical AI.

The European Union has released a new set of guidelines for ensuring that AI is “trustworthy.” The principles are somewhat abstract, and they don’t have the force of law. But they provide a good starting point for AI developers, companies, and individuals trying to figure out whether new AI systems are ethical.

The EU guidelines were written by an independent group of 52 experts, who incorporated feedback from more than 500 public commenters. The experts are now inviting companies and organizations to show that they’re committed to trustworthy AI by voluntarily adopting the guidelines — and especially by using a particular checklist (which the experts call their “practical assessment list”) when developing and deploying AI systems.

The new guidelines come against a backdrop of mounting public concern that AI increasingly affects many aspects of our lives, from the cars we drive to the way we sentence criminals to the scientific discoveries we’re capable of making, sometimes in ways that harm rather than help us. The EU isn’t the first body to respond with a list of recommendations (the White House released a report with 23 recommendations in 2016, for instance), but its guidelines are a bit more concrete and less anodyne than previous lists, notably because of the checklist.

The EU’s key requirements for trustworthy AI include some that are commonly discussed, like transparency (we should know when an AI system is making decisions about us and those decisions should be explainable to us), robustness and safety (AI systems shouldn’t be vulnerable to adversarial attacks by hackers), and non-discrimination and fairness (AI systems shouldn’t be biased along the lines of race or gender).

But the guidelines also cover ground that is less often thought about, like the impact of AI on the environment. “The broader society, other sentient beings and the environment should also be considered as stakeholders throughout the AI system’s life cycle,” the experts write. “Sustainability and ecological responsibility of AI systems should be encouraged.”

Some question how useful these guidelines really are, given that there’s currently no legal mechanism for enforcing them. What will incentivize AI developers to adopt ethical principles that could slow them down? “Self-regulation is not going to work,” Yoshua Bengio, a leader in AI, recently told Nature. “Do you think that voluntary taxation works? It doesn’t.”

Yet there’s one aspect of the EU’s project that does have the potential to be useful right now: the checklist. It’s made up of easy-to-understand questions, which company executives can put to AI developers before agreeing to buy a new AI system for the workplace, and which the average citizen can ask, too. Here are some sample questions:

  • Did you ensure a stop button or procedure to safely abort an operation where needed? Does this procedure abort the process entirely, in part, or delegate control to a human? (See the sketch after this list.)
  • Did you assess potential forms of attacks to which the AI system could be vulnerable?
  • Did you verify what harm would be caused if the AI system makes inaccurate predictions? Did you put in place ways to measure whether your system is making an unacceptable number of inaccurate predictions?
  • Did you establish a strategy or a set of procedures to avoid creating or reinforcing unfair bias in the AI system, both regarding the use of input data as well as for the algorithm design?
  • Did you assess whether the AI system is usable by those with special needs or disabilities or those at risk of exclusion? How was this designed into the system and how is it verified?
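
The first question, about a stop button, maps onto a well-known engineering pattern. Here is a minimal sketch in Python of one way to implement it; the names (Controller, stop, run) are illustrative, not from the EU document. The idea is simply that the automated process checks a human-settable flag before each unit of work and aborts safely when it is set:

    # Hypothetical sketch of a "stop button": a human-settable flag that
    # the automated process checks before each unit of work. All names
    # here are illustrative; nothing is specified by the EU checklist.
    import threading
    import time

    class Controller:
        def __init__(self):
            self._stop = threading.Event()  # the "stop button"

        def stop(self):
            """Called by a human operator to abort the operation."""
            self._stop.set()

        def run(self, steps=100):
            for i in range(steps):
                if self._stop.is_set():
                    print(f"Aborted safely at step {i}; control returns to a human.")
                    return
                time.sleep(0.01)  # stand-in for one unit of automated work
            print("Completed all steps without intervention.")

    controller = Controller()
    worker = threading.Thread(target=controller.run)
    worker.start()
    time.sleep(0.05)   # ...some automated work happens...
    controller.stop()  # the human presses the stop button
    worker.join()

The point of the checklist question is that this behavior has to be designed in from the start: a system with no such check simply cannot be aborted partway through an operation.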

The experts emphasize that their list of questions is not exhaustive or final. In fact, they invite companies, organizations, and others to pilot the use of this checklist and provide feedback. Based on the responses they receive, they’ll revise the list by early 2020.

Viewing their guidelines as “a living document to be reviewed and updated over time” is a wise move, because AI tech will inevitably change (probably very quickly!), as will our awareness of its risks.

What we stand to lose without clear AI guidelines

The experts stress that these risks are both technical and non-technical. In the first category is the possibility that an AI system will have an unexpected weakness — like, say, a Tesla that can be fooled into driving into oncoming traffic. The experts recommend getting trusted security experts to deliberately attempt to hack a system, and offering “bug bounties” to incentivize people to find and report vulnerabilities. Tesla already offers cash rewards (and free cars) to researchers who succeed in hacking its systems.

On the non-technical front, the EU experts discuss the risks that AI can pose to citizens’ autonomy. For example, they warn against “normative citizen scoring (general assessment of ‘moral personality’ or ‘ethical integrity’) in all aspects and on a large scale by public authorities.” That sounds like a veiled reference to China’s evolving social credit system, which monitors people’s behavior through their online activity and assigns them a “citizen score.”

Speaking of China, guidelines like these are the latest indication that the EU is trying to position itself as the world leader in AI ethics, since it can’t compete with China or the US when it comes to actual AI development.

“Europe has a unique vantage point based on its focus on placing the citizen at the heart of its endeavours,” the experts write. “This focus is written into the very DNA of the European Union through the Treaties upon which it is built.”

The EU isn’t the only body trying to establish itself as the global leader in ethical AI, though. In May, the OECD is slated to release its own set of recommendations.



Source: https://www.vox.com/future-perfect/2019/4/9/18303539/ai-eu-trustworthy-guidelines