“Cyber War, Cyber Conflict, and Cyber Crime”

This past week, a few hundred million dollars’ worth of Ethereum was rendered inaccessible by a poorly written smart contract, highlighting the lax attitude toward security in the blockchain world. Parity, a popular Ethereum wallet, offered a type of “multi-signature wallet” controlled by a smart contract. These contracts in turn relied on library code contained in another master contract, and that master contract was where the flaw lived. Seven days ago, a GitHub user going by “ghost” casually posted an issue on Parity’s repository claiming he had killed the library code, clarifying that he was new to the Ethereum ecosystem and had simply been playing around. How could this be? Surely some random person can’t walk in, grab the keys to a safe, and throw them away? Yet that’s exactly what happened. He first made himself the owner of the library contract (warning bell: anyone could do this!) and then sent a kill signal, causing the library contract to self-destruct. Without this library, nobody can access the funds in their multisig Parity wallets, affecting major Ethereum stakeholders such as ICOs that had raised millions. The failure was threefold: first, the wallet library probably should not have been implemented as just a regular smart contract; second, the contract lacked proper access control; third, it exposed a self-destruct function with no means of recovery.
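To make the failure concrete, here is a minimal Solidity sketch of this vulnerability class. It is only an illustration of the pattern described above, not Parity’s actual source: the contract and function names are hypothetical, but the shape of the bug — an initializer anyone can call, paired with an owner-gated self-destruct — is the same.

```solidity
pragma solidity ^0.4.18;

// Hypothetical illustration of the vulnerability class: a shared wallet
// "library" deployed as an ordinary contract, with an unprotected
// initializer and an owner-gated kill switch.
contract WalletLibrary {
    address public owner;

    // Intended to be called once by the wallet that uses the library,
    // but nothing stops an arbitrary caller from invoking it later and
    // claiming ownership for themselves.
    function initWallet(address _owner) public {
        owner = _owner;
    }

    // Once an attacker has made themselves owner, this removes the code
    // that every dependent multisig wallet delegates to.
    function kill() public {
        require(msg.sender == owner);
        selfdestruct(owner);
    }
}
```

The attack described above is exactly this two-step sequence: call the unprotected initializer to become owner, then call the kill function and let the library self-destruct out from under every wallet that depends on it.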

In the world of traditional finance, properly audited, secure code is a paramount concern, since vast amounts of customer funds are entrusted to the firms in question. The crypto world, meanwhile, is still a Wild West, partly because there’s no real government regulation in place and partly because the technology is still maturing and finding its best practices. If adoption is to grow and innovative applications are to be found, we need a new approach to developing smart contracts: code written with the precise eye of a lawyer and backed by a proper audit system. Furthermore, perhaps we should work on protecting people from themselves when they develop contracts, by changing the programming language itself. Ethereum smart contracts are typically written in Solidity, which leaves much room for error. One could draw an analogy to C: versatile and powerful, but offering an inexperienced programmer many opportunities to cause a major bug. A higher-level language could offer more protection. Maybe one could even use formal, proof-based mathematical verification to ensure a contract behaves as intended.
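To sketch what language- and design-level protection might look like, here is a hypothetical hardened counterpart to the vulnerable example above: ownership is fixed once at deployment, privileged functions are gated by a modifier, and there is deliberately no self-destruct path. This is only an illustration of the argument, not a prescribed fix from the readings or from Parity.

```solidity
pragma solidity ^0.4.18;

// Hypothetical hardened counterpart to the earlier sketch.
contract SafeWalletLibrary {
    address public owner;

    // Ownership is set exactly once, when the contract is deployed,
    // rather than through a publicly callable initializer.
    function SafeWalletLibrary() public {
        owner = msg.sender;
    }

    // Reusable guard for any privileged maintenance function.
    modifier onlyOwner() {
        require(msg.sender == owner);
        _;
    }

    // Privileged functions would carry onlyOwner; there is deliberately
    // no kill() or selfdestruct path, so even a compromised owner key
    // cannot erase the code that other wallets depend on.
}
```

Access-control patterns like this are exactly the kind of thing a safer language, or a formal verifier, could enforce by default instead of leaving to the developer’s discipline.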

I mention this as an example of decentralized, powerful technology leaving gaping holes for adversaries to exploit, much like the internet in general. Imagine if we do one day run our voting systems and other critical infrastructure on the blockchain. Indeed, much of our critical infrastructure today is already internet-connected, creating many potential vectors for adversaries to find flaws, conduct espionage, and bring down our systems. While we like to talk about government strengthening our cyber defenses, much of that burden is ultimately shouldered by the variety of private and decentralized products we use every day, by the very nature of the tech industry. Every last citizen and product can be vulnerable, and will be targeted, making the challenge of cyber defense uniquely difficult. All it takes is one government agent clicking on that all-too-tempting phishing email from “àpple.com”, or a zero-day exploit in Windows 10, for critical national security apparatus to be exposed. Or perhaps the target won’t be a government agent but an important corporate figure, with consequences for the broader US economy. Corporate espionage between the US and China remains a continuing topic of concern.

Lockscreen bypasses on iOS seem to spring up every year, and FaceID was just shown to be fooled by a specially crafted mask. Facebook and Twitter have been used as media for spreading propaganda. It quickly becomes clear that government must coordinate its efforts with private industry in order to stand a chance against foreign adversaries on the cyber front. Meanwhile, corporations must strike a balance between maintaining their independence from government and accepting its help with the defensive measures needed to preserve security.

“Who Runs the Internet: Jurisdiction and Governance”

At first glance, the EFF manifesto looked like an overly optimistic techno-libertarian view of the world. After all, from today’s perspective, many of the dreams espoused in the letter seem moot. Is the internet really a tool for individual liberty, or one enabling manipulation and control on a scale we can’t imagine? The reality is that today, governments, including the US, do indeed try to regulate the internet. Moreover, even setting aside government interference, corporations are more dominant than ever in setting the rules of the internet. Twenty-five years ago, the internet may have seemed like a playground of personal webpages and trivia out there for you to discover. Today, much of that information is funneled through a small set of internet giants, like Google and Facebook. In exchange for that convenience, we allow these companies to determine what’s interesting and what’s not, what’s acceptable and what’s not. Remember the shock when we found out Facebook was running internal experiments toying with user emotions by altering their news feed content? And content does not flow as seamlessly across geopolitical borders as hoped: yes, we have VPNs and various piracy methods, but content is often locked to one region or another. Nonetheless, the internet as a whole still remains decentralized, at least in theory, with any restrictions on the free flow of information circumventable with enough effort.

When it comes to regulating the internet, the concentration of market power in one place creates an easy point for the government to target. Interestingly enough, we seem to have come full circle with walled gardens, as we’ve noted throughout our seminars. We went from the AT&T telephone monopoly, easily controlled by the government, to online walled gardens created by services like CompuServe, to a more decentralized system of web pages on the internet. Today, once again, companies like Google and Facebook are increasingly acting as gatekeepers to the broader internet. These platforms, knowing our preferences, can surface the content we want better than any other medium. With Google’s AMP pages and Facebook’s Instant Articles, even outside content is hosted within these walled gardens, on the premise of a better user experience. When platforms start to play more of a gatekeeper role, they expose themselves to scrutiny when questionable content spreads on their networks, as occurred during the 2016 presidential campaign. And with power concentrated in a few platforms, the government can exert its influence more efficiently, as we saw with the recent Congressional grilling of tech companies over Russian interference. With the internet playing an increasingly dominant role in civic society and the economy, government is going to keep trying to figure out its place in setting the rules, for better or for worse.

“Voting, Polling, and Politics in the Connected World”

This was quite a fitting time to discuss the reality of politics moving increasingly into the digital sphere, for better or worse. Fears of Russian interference in the 2016 election reached a fever pitch this past week, with Congress grilling tech executives about their role in allowing misinformation and bot spam to spread on their platforms. As the citizens of a nation conduct their lives online, it’s only natural that politics evolves to meet them where they are, ruthlessly taking advantage of social media and digitized information to mobilize voters toward an intended effect.

The internet has created a new set of industries around marketing, like SEO (search engine optimization), narrowly tailored social media marketing, and the phenomenon of “influencers” whose professional and personal lives blend seamlessly. Barack Obama was perhaps the first presidential candidate to take advantage of the social media revolution, targeting millennials in particular on platforms like Facebook and Twitter in 2008 and 2012. Back then, social media was seen as a force for good in politics, keeping to the mission of transparency and increased engagement with constituents. Little did we foresee that by bringing political campaigning online, we’d be opening up our democratic infrastructure to attack on an unprecedented scale. Before the internet, attacks generally had to go through the traditional media gatekeepers or involve physically tampering with voting booths. Now, all it takes is a connection to the internet for anyone, including non-citizens, to spread political information. And thanks to the viral nature of online echo chambers, that information can have drastic effects on the outcome of an election.

Even domestically, campaigns are adapting to a digital reality in which information is the most important commodity. By analyzing the online behavior of potential voters, campaigns can build deeply personalized models and target voters in ways that make them likely to respond positively. Subtle additions to the news feeds that citizens rely on for their information can alter behavior, and both private industry and political campaigns use this to their advantage. Since targeted advertising relies heavily on psychological tricks, it raises important ethical questions about how these advertisements should be labeled. Should an Instagram influencer be allowed to show off a company’s product without disclosing the payment they receive in exchange? These questions become even more critical when dealing with political campaigning. Without appropriate labeling, money is essentially able to buy votes, since whoever controls the money can purchase the most advertising, and platforms are all too happy to take that money.

Along with targeted advertising, the second disconcerting aspect of the 2016 campaign was the spread of fake news. Some of it may have been associated with large-scale political disruption campaigns, but much of it has been attributed to entrepreneurial individuals trying to drive clicks to their websites for ad revenue. The question is whether platforms like Facebook and Twitter are willing to do anything about this. After all, more users and more engagement are traditionally a good thing they can pitch to advertisers, right? Sure, they could take steps to combat misinformation, but corporations don’t just do things out of the goodness of their hearts. With increasing Congressional scrutiny, however, companies would rather take some voluntary steps to quell the tide than risk burdensome regulations being imposed on them. It’s under this calculus that Mark Zuckerberg announced during Facebook’s earnings call that the company would take greater steps to combat fake news and bot accounts, even at the risk of sacrificing profits. If Facebook does take a greater role in such efforts, we run another risk: a corporation deciding who and what is real, a perhaps even more dystopian scenario given that we live out our lives on these platforms. For now, I think the best approach would be to algorithmically flag questionable content and present a warning to users, using artificial intelligence to keep people informed while avoiding the pitfalls of outright censorship.

Cambridge Analytica