Bug Bounties to Hunt for Bias in Algorithms

VentureBeat reports that researchers from several AI research labs suggest paying developers to find bias in AI algorithms. The idea resembles the “bug bounties” that many companies pay to security researchers who discover security flaws in their software systems. The goal is to build a community that hunts for such biases, i.e. prejudice against specific groups of people, and reports them.

In our view, the core problem with such an approach is defining the intended behavior of an algorithm. What constitutes a security vulnerability is usually clear and well defined; discriminating, however, is the very nature of many algorithms: they are supposed to tell a cat from a dog, for example. Some biases, such as offering football fans news about football instead of curling, are even intended behavior. Our own approach, EDAP, starts with the ethical decisions that go into the software design. The result of that deliberation could then serve as a specification document for such bias testing, as sketched below.
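To illustrate what a specification-driven bias test might look like, here is a minimal Python sketch. It assumes the ethical deliberation produced one concrete, measurable requirement, namely a maximum gap in positive-prediction rates between protected groups; the function name, the toy data, and the threshold are illustrative assumptions, not part of EDAP or of the proposal reported by VentureBeat.

```python
# Hypothetical sketch: turning an agreed-upon fairness requirement into an
# executable check that a bias bounty hunter (or a CI pipeline) could run.
from typing import Sequence


def demographic_parity_gap(predictions: Sequence[int],
                           groups: Sequence[str]) -> float:
    """Largest difference in positive-prediction rates between any two groups."""
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    # Toy data: 1 = approved, 0 = rejected, grouped by a protected attribute.
    predictions = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    # The tolerated gap comes from the ethical deliberation, not from the code.
    MAX_ALLOWED_GAP = 0.2

    gap = demographic_parity_gap(predictions, groups)
    print(f"demographic parity gap: {gap:.2f}")
    if gap > MAX_ALLOWED_GAP:
        print("specification violated: candidate for a bias bounty report")
    else:
        print("within the agreed tolerance")
```

The point of the sketch is that the hard part is not the code but the specification: without an agreed threshold and an agreed notion of “positive outcome”, a bounty hunter cannot tell intended discrimination from unwanted bias.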