Impartiality Rules for Accountable Algorithms
As we saw in one of my previous posts, technology companies tend to claim that their algorithms are neutral. However, that is not true. My impression is that many people want us to believe that because decisions are made by machines, they are better and free of bias. I can think of several reasons to reject this claim (the list is not exhaustive; there may be many more): 1) the algorithms are designed by humans; 2) the dataset used to train the algorithm can carry bias; 3) even if the algorithm only learned and improved by itself, nobody really knows how it ended up optimized the way it did.
For those reasons, the main question is: who is accountable for this? The problem is not easy. If we are talking about machines trained by humans (and their datasets), it may be easier to infer the humans' responsibility. But what do we do in the case of an artificial intelligence that optimizes its algorithm autonomously? Many propose greater transparency of the algorithms as a solution.
But is that possible? My answer is no. No company will want to show its algorithms. Even if we design legislation, it would have to preserve the property rights of those companies. And, just like Coca-Cola, once the recipe is shown, the possibility of copying it is very high. I do not see room, at least in the medium term, for this type of regulation to prosper. It will be years before we can talk about disclosing code, even in controlled environments such as a court. Furthermore, the gap between policy and technology is not going to close in the medium term: technology will always move faster than policy.
Fortunately, there is some light on what to do. Kroll et al., in their paper Accountable Algorithms, try to explain how to deal with these problems. They show how to use the same technology to verify whether a piece of code complies with impartiality rules. I am not an expert in computing, so I must make an act of faith here. However, the arguments and the tools are quite compelling. In this sense, I believe they hit the right point in trying to create a framework on which to evaluate algorithms. In simple terms, it means generating a series of ex-ante rules that allow the results of the algorithm to be evaluated ex-post. In a certain way, this also endows the very act of judging an algorithm's future behaviour with impartiality. (link: https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2765268)
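To make the ex-ante/ex-post idea concrete, here is a minimal sketch in Python of one ingredient of that kind of framework: a simple hash commitment. This is not the actual protocol from Kroll et al. (which relies on more sophisticated cryptographic tools), and the function and variable names are my own hypothetical illustration. The point is only to show how a company could bind itself in advance to a decision policy, without revealing it, and still let a court or auditor verify later that the policy it discloses is the one it committed to.

```python
# Sketch of "commit ex-ante, verify ex-post" with a plain hash commitment.
# Simplified illustration only; not the scheme proposed by Kroll et al.

import hashlib
import json
import secrets


def commit(policy: dict, nonce: bytes) -> str:
    """Digest published before the algorithm is used.

    The policy itself stays secret (protecting trade secrets),
    but the published digest binds the operator to it.
    """
    payload = json.dumps(policy, sort_keys=True).encode() + nonce
    return hashlib.sha256(payload).hexdigest()


def verify(policy: dict, nonce: bytes, published_digest: str) -> bool:
    """Ex-post check by an auditor: does the revealed policy
    match the commitment that was published in advance?"""
    return commit(policy, nonce) == published_digest


if __name__ == "__main__":
    # Ex ante: the company fixes its (hypothetical) scoring rules
    # and publishes only the digest.
    scoring_rules = {"income_weight": 0.4, "history_weight": 0.6, "threshold": 0.7}
    nonce = secrets.token_bytes(16)
    digest = commit(scoring_rules, nonce)
    print("published commitment:", digest)

    # Ex post: under audit, the company reveals the rules and nonce,
    # and anyone can check that they match the earlier commitment.
    print("verified:", verify(scoring_rules, nonce, digest))
```

In this toy version the company still has to reveal the policy during the audit; the appeal of the full framework in the paper is precisely that more advanced tools (such as zero-knowledge proofs) can provide the same kind of assurance while keeping even more of the code secret.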