You are viewing a read-only archive of the Blogs.Harvard network.


Death by social media: thoughts on the emerging social challenge.


Rumors have spread since the beginning of time, sometimes with disastrous consequences for society. They are harmful because there is no good way to fight them, yet they can be everywhere. The more vicious they are, the more they are repeated and believed, and their targets can suffer irreparable harm. Modern social media platforms have multiplied the risk. Whereas in the past a rumor limped its way around a community, today it spreads far and wide in a matter of seconds. Platforms like Facebook and WhatsApp are not inherently bad, but they have accentuated the risk of toxic rumors. In India, this phenomenon has at times had tragic consequences that warrant government intervention in the interest of public safety.

Recurring tragedies
The following cases provide examples of this challenge:

  • In 2017, a message falsely alleging that child abductors speaking Hindi, Bangla and Malayalam were on the prowl was circulated on WhatsApp, leading to the lynching of innocent people in Jharkhand.
  • In May 2018, a 55-year-old woman was killed by a mob in the southern Indian state of Tamil Nadu, amid false WhatsApp messages and rumors about the kidnapping of children.
  • In June 2018, two men who had stopped to ask for directions were beaten to death by a large mob in the north-eastern state of Assam. They were mistaken for child abductors after a message spread on WhatsApp led unsuspecting villagers to believe that their children were being targeted by abduction gangs.

A significant adaptive challenge

It may very well be that mischief-makers bent on causing community alarm and despondency spread these messages, with tragic consequences including disturbing the peace. We cannot, however, just pin the blame on the technology, because that assumes people were good prior to the advent of social platforms.

What is clear is that there is no technical fix to this challenge of vigilante justice. There is no silver bullet, and no known answer. This is a typical adaptive challenge, with no easy answers. Nonetheless, remedies can be found by analysing the key stakeholders to the challenge, who can be mapped and managed as follows:

  • Facebook and WhatsApp
    With over half a billion internet users in India, according to the Washington Post, both platforms have a significant customer base. Facebook has over 200 million users while WhatsApp has over 270 million. Since WhatsApp is owned by Facebook, we can argue that the same company has nearly the entire Internet population of India on its platforms. Because India contributes almost a quarter of Facebook's 2.27 billion users, it is important that Facebook understands its responsibilities to public safety in a market in which it has a significant stake. To that extent, engaging both Facebook and WhatsApp ought to focus on:
    – Engaging both platforms, which derive significant value from India, to send messages to their users warning of the spread of such toxic rumours on their platforms, especially once alerted by either the public or authorities.
    – Request users to flag or alert them within those platforms when a suspiciously toxic and dangerous message or video is being circulated. This helps to crowd-source risk management, as it may not be possible for the platforms to check all the content users circulate. This is especially important for WhatsApp, which does not currently have such a feature.
    – Engage Facebook to prioritise posts by professional media, which usually flag such rumors when they start circulating. This is especially important because in March 2018 Facebook introduced a separate newsfeed that prioritises content from family and friends while hiding away posts by professional news organisations. This has the effect of amplifying such rumors among friends and family while making professional debunking of them less prominent.
    – Nudge Facebook and WhatsApp to engage local-language moderators who understand local nuances to sift through content and moderate it. This would be complicated for WhatsApp, which encrypts its messages, but that is where it needs to crowdsource this risk by implementing a reporting function.
    – Push both platforms to set up a local corporate entity as well as appoint a grievance officer to manage and help curb the spread of rumors that have claimed several lives.
  • Local Community/Village Leaders
    For local communities, the fundamental issue is education that promotes community cohesion and peace. For example, once rumors start spreading, community leaders can flag the issues with the authorities to ensure that law enforcement kicks in. This issue goes beyond the social media platforms; it is about promoting tranquility among people in local communities. Dealing with the community entails engaging community leaders. For this to be successful, the ministry can arrange training, for example via seminars, for both law enforcement officers and community leaders in order to curb vigilante justice. In addition, there is no point in wasting a crisis: the ministry should coordinate with the justice ministry to ensure that the law is not just enforced, but that clear examples are made of perpetrators of vigilante justice.
  • Technology Ministry/State Government
    The ministry and state government both play a role in ensuring that the above stakeholders play their part, and that education programs are implemented. Evolving adaptive challenges like these require leadership, and it is important for the ministry to lead these efforts, including engaging all stakeholders. In addition, the ministry can also try the following:
    – Holding social media group administrators responsible. This is particularly important for WhatsApp, where toxic messages can be circulated or broadcast rapidly within groups. Holding administrators responsible can encourage accountable behavior within groups and ensures that someone answers for what is shared.
    – In emergency situations where there is a severe breach of the peace or loss of life arising from these rumours, the ministry can, as a last resort, temporarily switch off Internet access to the two social media platforms via local internet service providers. While turning off the Internet can be a plan B in times of crisis, it is not a solution to the spread of hate rumors; it simply slows down the pace. What is important is to work towards building social cohesion. Neither Facebook nor WhatsApp created the problems of distrust in the communities.
  • Local Media/Radio Stations
    Professional media can be mobilised too, to help correct and counter toxic rumors that have had tragic consequences. Radio would be most effective where it has coverage, as the message can be broadcast to a broad audience.
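The crowd-sourced flagging suggested for WhatsApp above could, in outline, work as follows: a message reported by enough distinct users is queued for human review. The sketch below is a minimal illustration under assumed data shapes and an assumed review threshold, not any platform's actual design:

```python
from collections import defaultdict

class FlagQueue:
    """Queue a message for human review once enough distinct users report it."""

    def __init__(self, threshold: int = 5):      # threshold is an assumed tunable
        self.threshold = threshold
        self.reporters = defaultdict(set)        # message_id -> set of reporting user ids
        self.review_queue = []                   # message ids awaiting moderator review

    def report(self, message_id: str, user_id: str) -> None:
        seen = self.reporters[message_id]
        seen.add(user_id)                        # duplicate reports from one user count once
        if len(seen) == self.threshold:          # fires exactly once per message
            self.review_queue.append(message_id)
```

Counting only distinct reporters blunts the obvious abuse of one user spamming the report button, while still letting the platform review content it cannot (or, under encryption, should not) read wholesale.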

A double-edged sword.
Technology has brought massive advantages to information dissemination. Platforms like Facebook have democratized communication by giving anyone with a smartphone the ability to broadcast. However, these are just tools, and they can be used to spread constructive messages or hate. Traditional mass media has been used to mobilize mass violence before; the genocide in Rwanda in the 1990s provides an instructive case study. We cannot wait for tragedies and genocides to occur before we move in to manage these new e-platforms. They must be responsible, and must be held responsible, as they can supercharge content that ratchets up tribal and religious hate and upsets fragile social balances in communities.

Who is using whom? Data privacy concerns and the data industrial complex


Giving a keynote at a conference in London this week, Microsoft CEO Satya Nadella added his voice to the discourse about data privacy, advocating for its recognition as a human right. He was speaking in support of Europe’s General Data Protection Regulation (GDPR). Nadella is not the only tech titan supporting stringent data privacy. Apple’s CEO, Tim Cook, recently pushed for the same. These voices, spurred by various issues in recent years, have brought data privacy and the protection of personal information to the fore. This triggers thought-provoking questions such as: should data privacy be a human right?

This got me thinking about two services that I use, Google and Facebook, which track me. I kicked into self-reflection mode, asking myself the following questions:

• Why would they need my data?
• What happens when they collect my data?
• How much power do the service providers amass by collecting my data and aggregating it with other data?
• Why have I been alright with this scenario?
• What should I do about it?

What is data privacy?
Data privacy has become topical in recent times. It concerns the ability of individuals or entities to determine what sensitive information or personally identifiable information1 (PII) in a computer system can be shared with third parties. Due to increasing debate around these issues, standards and expectations have evolved. Regulations in different jurisdictions have also gradually changed, and Europe’s GDPR has set the regulatory bar very high, which is why leaders in the tech industry have come out to praise such regulations.

Why is data privacy important?
A broad number of dynamics have brought this issue to the forefront of technology debates. Issues such as the recent class action lawsuit against Google and Facebook2 for deceptively and secretly tracking users’ locations and collecting their data, even when users were led to believe they had switched off such tracking, as well as the Cambridge Analytica scandal that affected the 2016 US elections, have raised the temperature. Key questions over data privacy include the collection and sharing of personal data with third parties without consent, as well as whether third parties can track the websites a user visits.

While services such as Google and Facebook generally encrypt user information, the greatest concern is not whether PII is unencrypted and therefore vulnerable; it is what these platforms, holding unlimited amounts of personal data, do with the data they collect, with or without the user knowing. At a conference in Brussels last month, Cook took an indirect dig at the two companies, referring to them as the data-industrial complex that takes user data and gets it “weaponized against us with military efficiency”. He further warned against sugarcoating the consequences, adding that such surveillance “and these stockpiles of personal data serve only to enrich the companies that collect them. This should make us very uncomfortable. It should unsettle us.” Cook’s words, albeit strong, warn us of an unsettling development of mass surveillance (tracking), stockpiling (storage), and weaponization (abuse and manipulation) of PII, the key issues underpinning the debate on data privacy. But the issue goes further still: it speaks to how these companies, deemed part of the data-industrial complex, can become extremely powerful without being answerable to anyone, giving rise to calls for regulation.

What happens to our data?
As I use Facebook and Google services, the firms collect my data. The question is, what data do they collect and what do they do with it? Among other personal information, Google collects the following data on me:

• It tracks where I have been. It stores details about my location as long as my phone is running, if I have inadvertently turned on location tracking. It is possible to see a whole timeline of where I have been from the point I started using Google on my device. This information can be found here.
• It keeps an advertising profile on me. This profile is created using information such as my possible income, weight, gender, age, relationship status and other personal details, to serve customized advertisements based on my data. You can find your information here.
• It keeps a record of everything I have ever searched, including what I deleted. This search history is kept across all my devices, meaning that even if I delete my search history on one device, it remains accessible on the others. You can find your information here.
• It keeps all my video viewing history on YouTube. From this, it can figure out my political leanings, my religion and my mood, among other things.

On the other hand, Facebook stores among other details, the following information:

• All data on things that I like and find interesting. This is based on my “like” clicks on its website, as well as what my friends like about me.
• All my geographical locations based on my log-in positions, the times I logged in and the device I used.
• All applications that have been connected to my Facebook account. This enables them to estimate and profile my preferences and interests.

Both Google and Facebook store massive amounts of my data, extracted and archived by their algorithms. For example, Facebook keeps enough data to fill hundreds of thousands of Word documents, including all of my activity on its website. This information can be found here. Google likewise stores data large enough to fill hundreds of thousands of Word documents if printed. This information can be downloaded here.
This means that gaining access to anyone’s account gives access to all information on anything and everything there is to know about the user, including calendar data diarizing every aspect of the user’s life.

Google says it collects all this information to deliver better services, including more tailored advertisements, never mind that I don’t really need the advertisements. Yet questions arise over how it uses the information it collects. Most likely, it could use it for services that border on manipulation: packaged with advanced artificial intelligence and machine learning, it would deliver highly personalized services in the form of digital agents such as an advanced Google Assistant. Facebook, on the other hand, doesn’t directly sell my data; it sells access to me, as all my data is used to create targeted advertising. The Cambridge Analytica scandal, however, exposed the unapproved mining of customer data for political manipulation. Whichever way these companies use our data, nefarious or otherwise, what should be of greater concern is that we are all vulnerable to exploitation in unlimited ways that can benefit governments, companies and other entities with the money to buy access to us.

Is what they disclose as collecting the same as what they actually collect?

Prior to the Cambridge Analytica scandal, Facebook enunciated a policy which did not bother many users. But when the Cambridge Analytica issues came to light, they raised the specter of far more use of data than was previously disclosed to users. It is therefore reasonable to ask whether Google also collects far more data than its disclosure policy lets on. Whatever the case, it is important to guard what information we let these companies collect. The best way to do that is to limit the tracking and access requested by the services via the devices we use. In general, it is much easier to let go of Facebook’s services than to stop using the variety of Google’s applications. That said, it is regulation designed to diminish the significant power the data-industrial complex has acquired that will protect users and keep that power in check. If necessary, the application of anti-trust laws to break up the tech giants may become an option on the table, and Lina Khan’s exploration of how to break up the monopoly of Amazon provides interesting direction.3

References
1. Martin, K. E. (2015). Ethical Issues in the Big Data Industry. MIS Quarterly Executive, June 2015 (14:2).

2. https://nakedsecurity.sophos.com/2018/10/25/google-and-facebook-accused-of-secretly-tracking-users-locations/ 

3. https://www.theatlantic.com/magazine/archive/2018/07/lina-khan-antitrust/561743/ 

Should HKS mandate stakeholders to use LastPass? A discussion on cyber-security.


In the 1987 sci-fi comedy Spaceballs, the character Dark Helmet, after being told the lock combination to the air shield, remarked, “So the combination is, one, two, three, four, five? That’s the stupidest combination I’ve ever heard in my life! That’s the kind of thing an idiot would have on his luggage!” Most web applications today would discourage users from having such a short password. Nonetheless, according to Security Magazine, the worst password of 2017, for the second year in a row, remained “123456”. Another interesting example is Kanye West’s inadvertent leak of his iCloud passcode when he unlocked his phone live on video. This goes to show that many system users are very sloppy when it comes to protecting data and restricting access to information, for various reasons. As such, they continue to use weak, easy-to-crack passwords to protect information online.

In general, a system is only as strong as its weakest link. This is as true for general ICT users as it is for organisations. It means a lot of online data is still very vulnerable to hacking and a lot of online systems are vulnerable to intrusion. It is therefore entirely possible for institutions such as HKS and its stakeholder users to spend a lot of money on security systems and still not be truly secure, because of security vulnerabilities, especially at the human level. More so because, by its very nature, as host to strategic experts, former cabinet officials, top global security and international relations resource persons, and generally as a repository of groundbreaking knowhow, some of it proprietary or of a strategic nature, the school is a potential target for cyber attacks.

Because most users need multiple passwords to access different online resources, many end up reusing the same passwords across multiple platforms, in some cases simplifying them so they are easy to remember. To deal with this challenge, password managers can be used to store multiple passwords. Password managers keep login details for online applications or websites and log into them automatically, so that one does not need to remember every password. This is achieved by encrypting the stored passwords with a master password, so that all you need to remember is that master password. This is very convenient. Given this convenience, the question arises: should HKS make this mandatory for its system stakeholders?
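The master-password scheme just described can be sketched in a few lines. This is a toy illustration, not LastPass’s actual design: the encryption key is stretched from the master password with PBKDF2, and the stored password is then XORed with a hash-based keystream. A real password manager would use a vetted authenticated cipher such as AES-GCM; the salt, nonce and iteration count here are illustrative values.

```python
import hashlib
import os

def derive_master_key(master_password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch the master password into a 32-byte key (PBKDF2-HMAC-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, iterations)

def xor_keystream(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-based keystream.
    Illustrative only -- applying it twice with the same key/nonce decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Encrypt one stored site password under the master password.
salt, nonce = os.urandom(16), os.urandom(16)
key = derive_master_key("correct horse battery staple", salt)
ciphertext = xor_keystream(key, nonce, b"my-site-password-42")
```

The point of the key stretching is that an attacker who steals the encrypted vault must pay the iteration cost for every master-password guess, which is why a strong master password plus a slow derivation function is the heart of the design.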

Password management solutions have their own vulnerabilities, depending on their engineering. As Schneier argues, despite the possible flaws of password managers, they remain a convenient way of managing complex passwords, a worthwhile trade-off given the reality that users otherwise use weak and vulnerable passwords.

Back to the question: should LastPass be mandatory or not? The answer depends on the totality of defenses against cyberthreats such as hacking. In the case of HKS, there are multiple layers of security. The first is HarvardKey, Harvard University’s unified user credential, which uniquely identifies users and provides them access to applications and services. The second layer is the mandatory two-factor authentication solution that requires users to validate their access through verification on a second device. LastPass would therefore be an additional layer, most useful for access to non-HKS online resources. On this basis, I argue that LastPass should be encouraged, but not mandated.
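The second layer mentioned above, one-time codes verified via a second device, is commonly implemented as TOTP (RFC 6238), which can be sketched with nothing but the standard library. This shows how such codes are computed in general; it is not a description of Harvard’s actual vendor implementation, and the shared secret below is a placeholder:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) over HMAC-SHA1."""
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and phone compute the same code from a secret shared at enrollment;
# a matching code proves possession of the second device.
secret = b"enrollment-time-shared-secret"            # placeholder value
code = totp(secret)                                  # six digits, rotates every 30 s
```

Because both sides derive the code from the current time window, the server accepts a code only briefly, which is what makes a phished password alone insufficient.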

Yet there are additional reasons not to make LastPass mandatory.

Mandating LastPass would amount to mandating a specific vendor solution, including the flaws that come with it. LastPass, like other password managers, has its own vulnerabilities which, even though they are patched from time to time, have been exploited by hackers. For example, in 2016, a hacker blogged about how he harvested LastPass passwords. The fact that they save users headaches by auto-logging them into accounts does not mean they are immune to security breaches.

Hacking of passwords is an adversarial act, which may be motivated by curiosity, obsession, boredom, thrill-seeking, warfare, malice, revenge, the pursuit of money or self-promotion, among various other motivations. Making one password management solution mandatory makes all users vulnerable to LastPass’s own technological weaknesses once an adversary identifies them.

In addition, besides blurring the boundaries between public and private spaces, as passwords for all sorts of applications are stored in the same solution, LastPass, like other password managers in its category, allows syncing across multiple devices, which amplifies the risk of attack via password syncing, as highlighted by Silver and others. Such synchronization opens up the risk of password extraction from multiple devices.

Cyber-security threats at HKS are potentially high. There are multiple security layers for protecting and restricting access to Harvard-specific resources. Users at HKS can use LastPass for managing passwords, for personal online access and HKS related access. However, the foregoing arguments show that though desirable, it is not necessary to mandate the use of LastPass as a password management solution at Harvard.

Government as a Platform: Rethinking government in Massachusetts.


Towards a Massachusetts 2.0
Governments everywhere are faced with increased complexity in delivering a good standard of service in the context of global challenges such as natural disasters, economic turbulence, climate change, trade wars, energy shortages and demographic changes, among other factors. This has a bearing on the federal authorities as much as it affects state governments, and the state of Massachusetts is no exception. In light of these dynamics, harnessing technology to serve increasingly discerning state citizens becomes a necessity for efficient service delivery. This memorandum explores what implementing government as a platform (GaaP) in Massachusetts entails, the criteria for deploying services on the platform, and a governance model for managing the service. I shall call this Massachusetts 2.0.

GaaP: What it entails for the State of Massachusetts & the Criteria for Deployment.
GaaP is one way Massachusetts can gravitate towards an open government. Open government enables the state to co-innovate with citizens, enables mass collaboration, and networks the government with other system-wide stakeholders, building trust in the process. The underlying principle and philosophy behind GaaP is that government information is a national asset, and that this information, as well as services, must be delivered to citizens when needed. As suggested by Tim O’Reilly (1), GaaP enables the government to be a convener and enabler rather than the initiator of civic action.

To move towards GaaP, the state will have to work on a number of issues. First, it must establish a set of standards: rules that help anyone develop programs and applications that communicate and cooperate with the state’s platform, Massachusetts 2.0. Second, everything must be centered on simplicity. Massachusetts 2.0 must be stripped of elaborate features down to a core set of minimal services, so that feature-filled innovations are farmed out to private innovators. Third, the platform must be open by default. This means it must be designed to enable participation by anyone who can access the public data and use it for public good, in line with the set standards. Fourth, the platform and the state must make provision for mistakes and errors as various stakeholders experiment and play around with the platform; citizens must not be unduly punished for mistakes as they experiment with public data while Massachusetts 2.0 evolves. Fifth, citizens, whether private players or non-profits, should be allowed to mine data to extract insights and innovate around it in more ways than the state government can imagine. Sixth, and last, the state can lead by example, using the same standards to deploy some services and show how far other players can go. In other words, we will be creating a public, sophisticated and far more liberal and open version of Apple’s App Store, allowing the state government to collaborate with citizens.

Deploying Massachusetts 2.0.
The first step is to develop a comprehensive set of standards. To allow the state to collaborate with citizens, this can be triggered by the Governor issuing an executive directive to that effect. This directive provides a framework within which the state can function and evolve under the clear direction that it is guided by open government. It is not necessary to recreate standards; instead, the state can adopt and adapt existing open standards as well as open source solutions.

The next step would be to build a simple platform that exposes the underlying data from the state’s systems. This entails ensuring that Massachusetts state internal systems are service-oriented; it is necessary to audit and improve them before exposing the underlying data.

Once the platform is set, the state can start deploying some of its services on it, to lead by example. The state was a leader in providing universal healthcare in the United States, so one of the services the state can build on the platform is a healthcare service. This would also allow other players to come in and mine data for various other applications and uses, in line with the set standards. Other state-mandated services, such as licensing and state taxes, can also be deployed on the platform.
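As a rough illustration of what exposing such a service could look like, the sketch below serves a hypothetical licensing dataset as JSON over HTTP using only the standard library. The endpoint path, record fields and sample data are all assumptions for illustration, not an actual Massachusetts API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical sample records; a real deployment would expose audited state data.
LICENSES = [
    {"id": "LIC-001", "type": "business", "status": "active"},
    {"id": "LIC-002", "type": "liquor", "status": "pending"},
]

class OpenDataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/v1/licenses":
            body = json.dumps({"data": LICENSES}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)      # only the one dataset is published

    def log_message(self, *args):     # keep the sketch quiet
        pass

def make_server(port: int = 0) -> HTTPServer:
    """Bind the open-data endpoint; port 0 lets the OS pick a free port."""
    return HTTPServer(("127.0.0.1", port), OpenDataHandler)
```

Running `make_server(8000).serve_forever()` would publish the dataset at `http://localhost:8000/v1/licenses`, and any third party could build on the JSON it returns, which is the essence of the platform idea.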

Governing model.
Europe and other countries like China favor extensive regulation by the state to achieve what they call platform fairness (2). The state has a duty to regulate the conduct of citizens, natural or corporate, to ensure that the use of public data and all public assets conforms to the set standards and is not contrary to good public policy. In the same vein, it is important for the state to ensure platform fairness by curbing monopolistic behavior in a manner that stimulates innovation and competition.

The governance model proposed is a consultative one, where a mechanism such as a board is established through the Governor’s directive order. The board will have representatives of the private sector. The issue of control of servers becomes paramount here. Given the public nature of the data exposed by the platform, it is recommended that the state retain control of the servers, while ensuring that the consultative board collects all feedback from consultations so that the state can implement it promptly.

In conclusion, the Internet is an open platform and its evolution has created immense opportunities to deliver more open forms of government that harness the collective wisdom of citizens. For the state of Massachusetts to harness that wisdom, GaaP is the way to go. Such a platform will ensure that the private sector and talented individuals can in multiple ways leverage public data to create service innovations that can change the state in ways unimaginable.

1. O’Reilly, T. (2010). Government as a Platform. https://www.mitpressjournals.org/doi/pdf/10.1162/INOV_a_00056

2. The Economist (2016). Regulating technology companies: taming the beasts. https://www.economist.com/business/2016/05/28/taming-the-beasts (retrieved on 2/10/2018)

Hello world!


Welcome to my Weblogs at Harvard.
