Impartiality Rules for Accountable Algorithms


As we saw in one of my last posts, technology companies tend to claim that their algorithms are neutral. However, that is not true. My impression is that many people want us to believe that because decisions are made by machines, they are better and free of bias. I can think of several reasons to reject this claim (the list is not exhaustive; there may be many more): 1) algorithms are designed by humans; 2) the dataset used to train an algorithm can carry bias; 3) even if an algorithm only learned and improved by itself, nobody really knows how it came to be optimized that way.

For those reasons, the main question is whether someone is accountable for this. The problem is not easy: if we talk about machines trained by humans (and their datasets), it could be easier to infer their responsibility. However, what do we do in the case of artificial intelligence that optimizes its algorithm autonomously? Many propose greater transparency of the algorithms as a solution.

But is that possible? My answer is no. No company will want to show its algorithms. Even if we design legislation, it would have to preserve the property rights of those companies. And, just like Coca-Cola's recipe, once the formula is shown, the possibility of copying it is very high. I do not see room, at least in the medium term, for this type of regulation to prosper. It will be years before we can talk about the disclosure of code, even in controlled environments such as a court. Furthermore, there is a temporal mismatch between policy and technology: technology will always move faster than policy.

Luckily, there are some lights on what to do. Kroll et al., in their paper Accountable Algorithms, try to explain how to deal with these problems. It teaches us how to use technology itself to verify whether a piece of code complies with impartiality requirements. I am not an expert in computing, so I must make an act of faith here; however, the arguments and the tools are quite compelling. In this sense, I believe they hit the right note in trying to create a framework for evaluating algorithms. In simple terms, it means generating a series of ex-ante rules that allow the results of the algorithm to be evaluated ex-post. In a certain way, this also endows the very act of judging an algorithm's future behaviour with impartiality. (link: https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2765268)
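To give a flavour of the ex-ante/ex-post idea, here is a minimal sketch (my own toy illustration, not the authors' actual protocol, and with invented parameter names) of one building block Kroll et al. rely on: a cryptographic commitment. The operator publishes a hash of its decision policy before using it, so that any policy revealed later to an auditor can be checked against that commitment.

```python
import hashlib
import json

def commit(policy_params: dict) -> str:
    """Digest to be published ex-ante, before any decisions are made."""
    canonical = json.dumps(policy_params, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def verify(policy_params: dict, published_digest: str) -> bool:
    """Ex-post check: does the revealed policy match the commitment?"""
    return commit(policy_params) == published_digest

# The operator commits to a (hypothetical) scoring rule before the decision period.
params = {"weights": {"income": 0.4, "tenure": 0.6}, "threshold": 0.5}
digest = commit(params)

# Later, an auditor receiving the revealed parameters can verify them.
assert verify(params, digest)
# Any tampering is detectable: a changed threshold fails verification.
assert not verify({**params, "threshold": 0.3}, digest)
```

A real scheme would also add randomness to the commitment so that it hides the policy until reveal; the point here is only that the ex-ante publication makes the ex-post evaluation trustworthy without disclosing the code to the public.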

POLICY MEMO. GOVERNMENT AS A PLATFORM


Disclaimer: In this fictional policy memo to the Governor, I am assuming that MASS.GOV has not been enhanced during the last few years. As you know, the Massachusetts Government carried out an extensive reform of its digital system; however, this memo is not a critical appraisal of that reform.

POLICY MEMO
FROM: CIO for Massachusetts, Ricardo Batarce
TO: Governor of the Commonwealth of Massachusetts, Charlie Baker
SUBJECT: New digital Platform. State Government as a Platform. Mass.gov

ISSUE:
Citizens are using our webpage, and that use is concentrated. According to our data, the site has more than 250,000 pages and documents available. 76% of constituents engage with the Massachusetts government through our website, yet only 10% of our pages account for nearly 90% of our traffic.

Nonetheless, mainly because of the lack of a user-centred approach, it is not uncommon for us to have multiple problems and complaints about usability and access to services. The system is complex to use because it was built with an agency-centred approach. Currently, people waste a great deal of time trying to find the services they need. Consequently, our State is not effective in delivering services, and even less efficient in delivering subsidies. As a result, our user-citizens are unhappy and frustrated with our Government.

WHY TO BUILD IT
In the digital era we live in, our website is citizens' entrance to the State Government. Many citizens will never visit our physical offices, but they will try to obtain our services from our website. In that sense, we can define it as a digital front door. Our digital face is the way our government interacts with citizens; in many cases, it is the only way we can reach them and they can reach us.

It is crucial to have an attractive value proposition if we want happier citizens. Our goal as a government is to serve our taxpayers. Therefore, if we want to provide better-quality services while increasing efficiency and reducing cost, we must think about a substantive digital reform. The whole way we have looked at our relationship with our users (citizens) must change.

WHAT TO BUILD
First, it is necessary to build a simple, new multipurpose platform (infrastructure) with users at the centre. Second, the platform should allow third parties to build freely (under certain standards) on our system. For example, our system must allow third parties to build new things: apps, as in the App Store, or services on top of our data, much as some services build on Google Maps. Thus, it cannot be a system for a single purpose; we must open it to new uses that our citizens may find in the future.

ADDITIONAL BASIC PRINCIPLES FOR THE PLATFORM: In order to truly create a platform, it must comply with certain general principles about the data: it must be findable, readable and shareable. Specifically: 1) accessible and usable to build on; 2) open, license-free and non-proprietary; 3) machine-processable, updated, primary and complete (with APIs). In that sense, one of our objectives is also to strengthen our standards of transparency and accountability and to foster innovation.
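As a concrete (and entirely hypothetical) illustration of what "machine-processable, updated, primary and complete" can mean in practice, a single open-data record on the platform might carry its identifier, license, source and freshness explicitly, so that third-party developers can consume it without asking us anything:

```python
import json
from datetime import date

# A hypothetical transit record; field names and values are invented.
record = {
    "id": "transit-stop-00123",          # stable identifier third parties can rely on
    "license": "CC0-1.0",                # license-free / non-proprietary
    "source": "mass-transit-authority",  # primary source, not an aggregate
    "updated": date.today().isoformat(), # freshness is explicit
    "data": {"lat": 42.3601, "lon": -71.0589, "name": "Example Stop"},
}

REQUIRED = {"id", "license", "source", "updated", "data"}

def is_publishable(rec: dict) -> bool:
    """A record is publishable only if it carries all open-data fields."""
    return REQUIRED.issubset(rec)

assert is_publishable(record)
# Serialized as JSON, the record is machine-readable for API consumers.
print(json.dumps(record, indent=2))
```

The design choice is that the metadata travels with the data itself: any record missing a license or a source simply cannot be published, which enforces the principles above at the platform level rather than by policy memo.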

WHERE TO START AND HOW TO BUILD IT
Prioritizing. If 10% of our pages concentrate 90% of the traffic, we should start there. The TRANSIT webpage is the most visited. By experimenting with and improving this specific service, we can later scale the approach to the whole platform. I propose starting with the most valuable services for users.

Given the size of the project, it is best to divide it into different phases. Before even thinking of allocating a budget of one billion dollars, we can run pilots by designing prototypes of the system we need to build. After that process, we would be able to estimate the real cost of such a project; it may not be necessary to spend that amount of money.

The principles of Lean Startup and Agile can be used to create an MVP. The idea is to demonstrate that the objectives are feasible with the least possible amount of resources. Only a frugal initial budget is necessary (not even a million dollars for the first year): because the department already has the human resources, we only need to redeploy them efficiently into a task force for this new project.

The objective of using these methodologies is to save resources and time. In addition, it will allow us to understand users' needs and their expectations of the value they receive. This first stage will let us develop a minimum viable product within the first 3 months, testing our assumptions with a specific service, in order to launch the new platform within the next 9-12 months.

WHICH TYPE OF GOVERNANCE
Problem: in most cases, websites are controlled by different teams and, therefore, there are different expectations and incentives on how each of these solutions should work.
Proposal: in order to manage a system of this nature, we need a Centralized Digital Services Unit with centralized standards. The main argument for centralizing is the need to have a single owner of the product (the Platform and its services). In addition, we expect management and political problems to arise from the tension between our internal buyers and our users. These problems, typical of systems with multiple users (stakeholders) and multiple interests, can be better managed by a single digital authority reporting directly to the Governor. Additionally, in terms of design, the system can take an open and collaborative approach, leveraging the open data.

REFERENCES:
• MassGov. https://medium.com/massdigital/testing-1…
• Osterwalder & Pigneur, Value Proposition Design (2014), pp. 13-38
• O'Reilly, Tim. "Government as a Platform," in Lathrop & Ruma (eds.), Open Government (2010)

GOOGLE-DEEPMIND: BIG DATA AND HEALTHCARE


Big Data is a revolutionary tool that can help us shape the future, changing the rules of the game forever in different sectors. In this post, I want to explore the specific complexities this achievement poses in its relationship with healthcare. In that sense, it seems appropriate to ask ourselves: can we tolerate the risks of using big data in health to help solve our current and future health problems as patients?

So far, governments and companies have worked with limited volumes of information. The promise of big data is to manage millions of data flows in real time, allowing us to anticipate events and make better decisions. However, the results have not always been the best (it is not all fun and games). The Google-DeepMind NHS case affected thousands of NHS patients in the UK, and it is a good example of sharing patients' private information with a private company without their consent.

In 2015-2016, the NHS, through the Royal Free London NHS Foundation Trust, launched an alliance with DeepMind Technologies Limited, Google's AI subsidiary, to improve the process of providing care using big data through an application called Streams. The objective of the programme was to develop a clinical alert application for kidney injuries, using medical data collected over the years by the public service, to whose servers Google would have access.

As you might expect, the British public did not take this news very well. Without apparent access restrictions and without the explicit consent of British patients, an agreement was signed authorizing the transfer of clinical records of millions of public health system patients to a private company.

Big data projects like this make it possible to use the information available from health services to understand various public health issues based on evidence. They can help authorities understand where best to allocate scarce resources such as hospitals, health centres, ambulances and medical specialists. The supply chain of medical supplies and medicines can be improved, as can medical tests, diagnoses, treatments and practices. Vaccines could be anticipated. In addition, it could become clear which procedures are working and which are not, reducing human error. The NHS and other public or private healthcare providers could save thousands of lives in the long term.

Then, what is the problem? Is it a private company having access to private information? Who owns the data: the patient, the companies, or the State through its health services? Is what really bothers citizens the profit made by the private company? Would it change anything if, instead of Google, it had been an agreement with a respected cancer foundation? How do we resolve the conflict between consent and ownership of the data?

I do not have all the answers, but we live in an era where people want to know (transparency). People want accountability not only from the State but also from large private corporations. So, for me, the problems here are the type of information being shared, the lack of transparency, and the conditions of access and sharing. In general, for all similar cases, it is important to know how the data was obtained. The major takeaway is that even if it is legitimate for the State (or private companies) to want to optimize processes and offer better services and public policies to citizens, there is no reason to justify a violation of patients' rights, regardless of whether we believe that privacy is a human right, a constitutional right or simply a legal right.

REFERENCES:
• Lawrence, Neil. "Google's NHS deal does not bode well for the future of data-sharing." The Guardian. https://www.theguardian.com/media-networ…
• Powles and Hodson. "Google DeepMind and healthcare in an age of algorithms." Health and Technology. https://www.ncbi.nlm.nih.gov/pubmed/2930…

Protected: AGILE vs WATERFALL. FAILURE AND LEARNING: INTEGRATED SYSTEM OF FOREIGN TRADE


Algorithmic Neutrality


In this post, I will try to show some of the risks that the use of algorithms poses to democracy and individual privacy. Some companies claim that their algorithms are neutral. Is that true?

The Internet, its platforms and social networks have given wonderful things to the world. It is difficult not to recognise their contribution to democracy and their help with political mobilisation in many cases. To mention just a few: the Egyptian revolution of 2011, Brazil's summer of 2013, Occupy Wall Street in 2011 and, in Chile, the successful student movement to improve quality of and access to education, also in 2011.

However, many current thinkers question the real contribution of social media to democracy. Rather, the role of companies behind this apparent illusion is criticised.

The underlying problem is the data. In some sense, huge internet corporations like Facebook, Twitter, Google and Amazon are trying to replace the State: they know more about us than many national security services across the world. It seems that privacy policies are not working optimally, and many questions keep circling. How do they collect the data? Where do they store it? Under what security conditions? Who can access it: humans? AI? Is anyone accountable for this?

The rise of AI and the use of big data to understand citizens/consumers is an issue that concerns many. The thinker Nick Srnicek argues that these companies are platform monopolies in a platform-based capitalism. For him, it is time to nationalise them: in his view, these companies are already too big and represent a risk to democracy itself.

In the same line of thought, Evgeny Morozov, in his book The Net Delusion, alerts us to the perils of algorithmic gatekeeping. Morozov is sceptical of the widespread view that the Internet is helping to democratise authoritarian regimes. He argues that it can also be used as a powerful tool for mass surveillance, political repression, and the spread of nationalist and extremist propaganda, and that it is naive and even counterproductive to promote the Internet as a way to promote democracy. He proposes the dis-intermediation of the social media corporations, and he too argues for State intervention.

I am not convinced that this is the solution; nor do I know the answer. What I do know is that today data is one of the most precious commodities, and many companies are willing to do anything to gain access to people's data. An excellent example is Google's negotiation to enter China's market despite censorship. Therefore, we must take responsibility for this problem; sooner or later it will hit us. We should be thinking about some type of solution or regulation to address these pressing challenges, since self-regulation has proved insufficient and inefficient.

Additionally, something that especially worries me is the bias of algorithms. I will show you two examples that raise particular concerns about how algorithms function. Of course, the obvious questions are whether they are sufficiently regulated and whether we are adequately protected, and the answer is no. There is still much to be done here before we think about nationalising or breaking up these big technology companies to avoid monopolies or excessive power.

1.- Google: Bettina Wulff. In September 2012, Germany's first lady sued Google for "auto-completing" searches for her name with terms like "prostitute" and "escort." Google invoked the neutrality of its algorithms, claiming that its Autocomplete results simply reflect what others have searched for.

2.- Twitter bias: the #occupywallstreet tag
Twitter is an engine that constantly surfaces and shapes what is happening around certain discussions. In this case, Twitter was accused of censoring the discussion, since it was not reflected in the network's trending topics during that period. Naturally, many wondered whether Twitter was taking part in the political debate.
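One common technical explanation (a sketch of the general idea, not Twitter's actual formula; all numbers are invented) is that trending algorithms reward acceleration rather than raw volume, so a tag discussed steadily at high volume can score lower than a small tag that suddenly spikes:

```python
def trending_score(hourly_counts: list[int]) -> float:
    """Current hour's mentions relative to the average of the hours before."""
    *history, current = hourly_counts
    baseline = sum(history) / len(history)
    return current / baseline

# Steady 10,000 mentions/hour: huge volume, but no spike.
occupy = [10_000, 10_000, 10_000, 10_000]
# A tag jumping from 100 to 1,000 mentions spikes hard.
meme = [100, 100, 100, 1_000]

assert trending_score(occupy) == 1.0   # flat -> does not trend
assert trending_score(meme) == 10.0    # spike -> trends
```

Under such a metric, #occupywallstreet could fail to trend without any deliberate censorship; which is the "right" behaviour is itself a value judgement baked into the algorithm's design.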

Respect for privacy is not so bad in Chile.


It is no secret that security agencies around the world draw on mobile phones and CCTV cameras, among other devices, to monitor people. An article in the Washington Post that I read recently describes perfectly how the process works with cell phones, and it is scary. https://www.washingtonpost.com/apps/g/pa…

It is true that many people do not want to know how the system that provides security for millions of civilians works; they just want to live quietly, or apparently calmly. However, the gradual acceptance of a police state that wants to control everything still seems quite worrying to me.

I have lived the last two years outside my country, and my impression is that in countries like the USA or the UK, surveillance increases every year, mainly through tracking via mobile phones and the use of CCTV cameras. In London, it is impossible to walk around without seeing cameras surrounding you in all directions. How did that happen? Is it cultural? Is it part of the problems of the first world? In this sense, the USA PATRIOT Act of 2001 is a good example of these effects.

In countries like Chile (my country), national security is not such a salient issue in society. Maybe because we live at the end of the world; perhaps because we do not face serious threats of terrorism; maybe because we do not have the budget. For whatever reason, the use of CCTV cameras is not as widespread, and in the case of cell phones, it is difficult to follow someone or listen to calls without judicial authorisation. In this way, although Chile does not come from a culture of individual liberties, laws giving more control to the police have not been passed in Congress.

How have some countries got here? Probably, surveillance is culturally accepted as a security measure to combat crime and terrorism in some countries more than in others. Perhaps, like frogs in slowly heating water, citizens have not realized how security agencies can monitor their lives and have simply become accustomed to it. Moreover, it is striking to see this phenomenon in countries founded on individual liberties.

The essential problem is the weakening of civil liberties: in the future we may even fear expressing our ideas or going to certain places, such as demonstrations against the government, out of fear or for self-protection.

It is not healthy to live in a society where the State has such broad powers to monitor us. As citizens, we should be more concerned about these issues and organise against laws that reduce our fundamental civil liberties, including freedom of expression. Security and constitutional freedoms can coexist in harmony.
