
Speaking of searches…


There is much debate about whether the government should have the authority to conduct searches of personal computing devices.  Obviously, this is a continuation of a struggle that is centuries old.  However, I think there are two important elements that make this version of the debate unique.

First, in the case of technology devices, designing the software so that law enforcement can access it, either through weak encryption or a back door, inherently sacrifices the user’s security.  This dimension has not entered the privacy debate before.  Providing law enforcement a way in also means there’s a way in for bad guys.

Second, it is very difficult to provide limited access.  In the case of physical locations, judges can issue warrants for specific spaces on a property or within a house.  The warrant can also include certain items or types of items.  In the case of a hard drive, how do you do this?  Is a warrant issued for the entire hard drive?  Can law enforcement only open certain folders on the hard drive?  If so, how are they supposed to know which folders they’d like to look in when requesting the warrant?  This presents a new dilemma – and one that privacy advocates use to argue against allowing law enforcement access.

However, the second point seems much simpler to me than the first.  A judge can simply restrict the collection of, and admission as evidence of, certain types of files.  For example, if law enforcement thinks there are communications on a phone that would incriminate someone, a warrant could allow the collection of text messages and/or emails.  This would prevent law enforcement from simply scouring the hard drive for any type of file that might be incriminating.

While some situations may require it, allowing law enforcement to access one’s entire hard drive with a warrant in every case is too much.  While law enforcement needs tools and access to do its job effectively, we still need to adhere to the basic privacy principles that have undergirded the American way of life, even as we adapt them to new contexts.

Are technologists crying wolf?


In the 1990s, the US went through Crypto War I, a fight between law enforcement (LE) agencies and technologists.  The debate was about LE getting some sort of “golden key” or “exceptional access” to encrypted communication – a way they could listen in on calls or read messages that bad guys were sending, with a warrant, of course.  This isn’t really crazy or unprecedented, as telecommunication boxes around the country are, by law, equipped with hardware that allows LE agencies to tap phone lines.  As technology – and encryption, specifically – improved, it’s logical that LE agencies saw their ability to tap phone calls and other forms of communication diminishing.  So, they asked lawmakers for help.

In the end, the LE community lost.  However, instead of losing almost all ability to monitor the communications of bad guys, they innovated and actually developed more capability than ever.  But now we’re back to where we were in the 1990s – Crypto War II – with the LE community once again claiming encryption is too hard for them to get around.

Technologists use the same arguments they used in the 90s – that giving LE any sort of exceptional access would either mean weakening encryption overall, which would give other people access, too, or creating keys, which would need to be stored, which then creates security and privacy problems for everyone, not just the bad guys.  They also claim that it is impossible to build encryption any other way.

So here’s my question – are the technologists crying wolf?  Just like LE agencies had to innovate and overcome challenges after losing Crypto War I, which resulted in them developing more capability than they have ever had, couldn’t technologists do the same if they were to lose today’s Crypto War II?  Would the requirement for them to grant some sort of exceptional access to LE agencies create enough pressure to develop new schemes/models of encryption?  Is it really impossible?

I’m not saying we should necessarily side with LE now and force tech companies to give them exceptional access, because the unwitting victims would ultimately be all the good, innocent bystanders like you and me.  But I’m also not saying I believe the technologists when they say there’s no other way.  Unfortunately, the most likely way to actually figure out if there is another way is to put some pressure on the techies…which would require us to side with LE.  Seems we’re stuck here between the ole rock and a hard place…

Can the GDPR be enforced?


The General Data Protection Regulation (GDPR) in the EU provides sweeping privacy and data protection measures for EU citizens.  It is so broad that it applies to any company, anywhere in the world, that collects data on EU citizens.  Obviously, this creates enforcement challenges.  For example, how can the EU enforce this regulation on a company based in California?  Why would a California-based company agree to pay fines levied against it by a government in the EU?

Doing a little research, my understanding is that enforcement relies primarily on international law and existing agreements and relationships, such as the US-EU Privacy Shield.  The US and EU have a very positive history of respecting and enforcing each other’s laws.  If the EU were to fine, say, Google a few million dollars, it would likely be in the US’s interest to help enforce the fine in order to preserve that relationship.  In the context of western democracies with positive relations, I think this is likely to work well – allowing the GDPR to really punch above its weight class abroad.

However, what about countries where good relations don’t exist?  Will China enforce the GDPR?  North Korea?  Iran?  Likely not.  As we have seen with litigation involving claims in the South China Sea, even when international law – and literally every other country in the world – determines something, that does not equate to the disputed country (China, in this example) acquiescing.  The same is likely true for cases involving the GDPR – the EU will likely have no enforcement mechanism.

This is problematic because, although personal data can be misused and stolen even in western countries, the truly dubious efforts seem to come from other states.  Just look at the IP theft from China and the millions of dollars’ worth of capital Iran and North Korea have stolen from financial institutions and cryptocurrency brokers.  Although the GDPR is groundbreaking in providing robust protections where the rule of law prevails, it does nothing elsewhere.  Unfortunately, that is precisely where the biggest threats emanate from.

Is privacy possible anymore?


In today’s technology-centered, hyperconnected world, is privacy even possible anymore?  Even attempts to de-identify data can prove futile.  When data is published seemingly de-identified, it’s often possible to re-identify people using other, public information.  For example, when Hubway posted data about bicycle pickup and drop-off times, some college students cross-referenced it with Twitter posts to identify over half of the users.  How did they find the Twitter posts?  By searching #Hubway.  People are giving themselves away!
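To make the cross-referencing concrete, here is a minimal sketch of how that kind of matching might work.  The data, column names, and five-minute matching window below are purely illustrative assumptions of mine – the students’ actual method, and the real Hubway and Twitter datasets, were richer than this:

```python
import pandas as pd

# Hypothetical, simplified stand-ins for the published trip data and #Hubway tweets.
trips = pd.DataFrame({
    "station": ["South Station", "Harvard Square"],
    "pickup_time": pd.to_datetime(["2013-06-01 08:02", "2013-06-01 17:45"]),
})
tweets = pd.DataFrame({
    "user": ["@alice", "@bob"],
    "tweet_time": pd.to_datetime(["2013-06-01 08:03", "2013-06-01 17:47"]),
    "text": ["Grabbing a #Hubway at South Station", "#Hubway home from Harvard Square"],
})

# Pair each trip with the nearest #Hubway tweet posted within five minutes of pickup.
# A tweet that lands that close to a recorded trip is a strong hint about who the rider was.
matches = pd.merge_asof(
    trips.sort_values("pickup_time"),
    tweets.sort_values("tweet_time"),
    left_on="pickup_time",
    right_on="tweet_time",
    direction="nearest",
    tolerance=pd.Timedelta("5min"),
)
print(matches[["station", "pickup_time", "user", "text"]])
```

Adding the station name or a geotag to the match would tighten the linkage even further – which is exactly why “anonymized” location data is so hard to keep anonymous.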

With so much data publicly available, maintaining absolute privacy may be a lost cause.  A quick Google search of yourself will likely yield results showing your current address (complete with a pin of your house on a map if you’re like me!), previous addresses, age, telephone number, and family members.  This isn’t completely new, as most Americans used to list their phone number and address in the local phonebook.  However, now it is much easier for people across the globe to access the same, and more, information – and aggregating and cross-referencing the data can lead to all sorts of problems.

Although we may not be able to completely protect ourselves and our privacy, there are certainly some steps we can take to make ourselves harder targets, making it less likely someone will want to spend the time and energy required to uncover our private information.  Here are just a few ideas I can think of:

  • Stop Tweeting so much.  If you Tweet about everywhere you go, then you’re pretty much relinquishing your right to privacy.
  • Lock up your Facebook profile.  The only information available to the public (at most) should be your name, a profile picture, possibly your city, and your place of employment.  Trolls on Facebook who don’t know you don’t need to read your entire life story, complete with photo evidence.
  • Turn off location services, other than for fitness apps and Google Maps.  At minimum, only allow location services while the app is in use.
  • Actually read privacy policies and consider not agreeing to share all the information requested.
  • Use a VPN when connected to the internet.
  • Browse in an “incognito” window.

Obviously, there are some more structural things that need to happen, both in terms of public regulation and private company stewardship, but this post is mainly about people making good decisions to take care of themselves.  We can beat the “we want more privacy regulation” drum all we want, but if we’re still Tweeting every step we take, then it’s sort of our own fault in the end, isn’t it?

 

e-identities…convenience comes with risks


I have recently learned about government-issued e-identities that utilize biometric data, such as those in Estonia and India.  At first, I thought this was genius.  It provides a central registry for all citizens – a place anyone can go to look up one’s public key, a way to authenticate yourself, and a way to protect your personal information.

How nice would it be to know that you could send an encrypted email to your friend, one that you knew only they could read?  Estonia’s system allows you to do just that, because you can look up anyone’s public key.
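To illustrate the idea (this is just a rough sketch of public-key encryption in general, not Estonia’s actual ID-card protocol or infrastructure), here is what encrypting a message for a specific recipient looks like with the Python cryptography library.  The key pair is generated locally for the demo; in Estonia you would instead fetch your friend’s public key from the national registry:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Demo key pair.  In a national e-identity system, the private key would live on the
# citizen's ID card and the public key would be published in the central registry.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone can encrypt with the recipient's public key...
ciphertext = public_key.encrypt(b"For your eyes only.", oaep)

# ...but only the holder of the matching private key can decrypt.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"For your eyes only."
```

Real systems typically encrypt the bulk of a message with a symmetric key and only wrap that key with the recipient’s public key, but the guarantee is the same: knowing someone’s public key lets you send them something only they can open.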

Or wouldn’t it be nice if you could vote from the comfort of your own home?  Well, since you can authenticate who you are online, now you can!

Or what about stopping people from making fraudulent benefit claims to the government?  The Aadhaar system is on it!

While there’s a lot of convenience and goodness that can come from systems like these, there’s still some cause to pump the brakes a bit before we get carried away.

First, although the foundational infrastructure (having an e-identity) is a great tool, layering too many services on top can be dangerous.  It’s just like using the same password for multiple services and daisy-chaining email accounts – if a hacker gets into one, they get access to your entire online world.  No matter how secure “they” claim it is, your stored data is always at risk and could be stolen.  If you used your e-identity for your bank accounts, credit cards, voting, social media, and everything else (this is what I mean by layering too many services), a malicious actor who got ahold of one small piece of information could cause you serious harm.

I think a nationally-sponsored e-identity can be great for things like boarding planes quicker, going through customs, and signing documents online, but one should be extremely cautious about using the same piece of biometric/password/card/PIN information for lots of different accounts and services.

Now let’s take a special look at voting for a second.  The idea of voting from the comfort of my recliner (which I could do in Estonia) sounds great, but I see all sorts of ways this could be abused – mainly through coercion.  The benefit of having people physically show up at the polls is ensuring they are free to vote in private and for whomever they choose.  You cannot guarantee this otherwise.

Close election?  No problem!  Instead of a phone bank, just set up a computer bank, bring in anyone you can find, and either outright coerce them or incentivize them to vote for the candidate of your choice.  Gangs in the US set up “classrooms” for minors to fill out social security forms – they make millions in fraudulent social security claims each year doing this on a mass scale.  Think they couldn’t do something similar for voting?  You’ll never know how many voters had someone standing over their shoulder to ensure they voted a certain way.

If you’re willing to write off this potential risk, I’d still argue one should physically go to the polls to cast their vote – there should be some minimal expectation of effort if we want to keep our democracy (yes, I know absentee ballots exist, but their use is limited enough that any coercion associated with them would have a negligible effect, if any).

Beyond the security concerns of ubiquitous use of one’s uniquely issued e-identity, I think we should be hesitant to use such a system to allow online voting.  Getting your identity stolen is terrible, but you can recover, and the impact is relatively small – it pretty much just affects you.  However, if an election gets stolen…well, that’s a bigger deal (as we may or may not already know).

Are you more worried about big brother or little brother?


In the United States we are more concerned about the government encroaching on our rights and privacy than we are about other people or corporations.  This makes sense, given our history and why this country even exists.  However, in this new age of technology and big data, I think it’s time we re-think these priorities.  For so long people have been concerned about “big brother” watching their every move, which is more science fiction than real life.  However, now there are millions of “little brothers” that actually can…and do.

Let’s just think about this realistically for a minute.  It’s not even feasible for the government to constantly monitor over 330 million people.  It would nearly require the government to employ 165 million people just to keep tabs on the other half.  They don’t have the will or resources.  And to be honest, most of us are not that interesting.

Even if the government were able to somehow record and store all sorts of things about you, they simply do not scan through the data just looking for you to mess up.  They only have the ability – and legal justification – to go looking whenever they have probable cause.  They go looking for something specific when there’s a specific reason to.

On the flip side, large corporations do store tons of data about you.  They have huge server farms filled with 1s and 0s all about YOU.  And they don’t need a reason to manipulate and analyze your data – they do it all. the. time.  How protected is your data?  Who knows?  It’s not regulated.  Can they sell your data to someone else?  Yup.  Can someone else hack into their systems and take your data?  Yup.

So really, who makes you more nervous – the blue-collar police officer, already too overwhelmed with real cases and putting away bad guys to be concerned about your internet history, OR large corporations that essentially take your data without you even knowing it (or at least understanding it) and could lay it out there for any loser with too much time on his hands to steal?

Forget about the boys and girls at the NSA and FBI, it’s Facebook and Google that really scare me.

Drones and the “Plain View” Doctrine


Some of you may be familiar with the “Plain View” doctrine, addressed in the 1987 US Supreme Court case Arizona v. Hicks.  Basically, it says that a law enforcement officer does not need a warrant to seize an item they immediately recognize as evidence or contraband while they are lawfully present in an area protected by the 4th Amendment.

Think of it this way – if an officer is walking by your house and sees bomb-making material in your yard, in “plain view” from the sidewalk or street, he can seize it.  However, if the material is inside your house, out of “plain view,” then he cannot enter your residence without your permission or a warrant, which requires probable cause.

Now let’s think about drones.  Are items a drone might be able to see in “plain view”?  I would argue no.  I think US citizens should still have a reasonable expectation of privacy from drones.  Drones should be employed by law enforcement as another search tool once a warrant is issued.  Otherwise, we can imagine many scenarios where the technology could be abused.  For example, let’s say that in a sparsely populated county in Montana there’s a particular rancher no one really likes, including the Sheriff.  If the Sheriff is allowed to fly his drone whenever and wherever he wants, he could certainly fly it over this rancher’s property just looking for something to hem the guy up on.  Basically, the rancher is being targeted, with no probable cause, just because the people around town don’t like him.

However, I certainly believe drones have a place in law enforcement.  Let’s say the Sheriff has probable cause to believe the aforementioned rancher has contraband cached on his property, so he obtains a warrant to search it.  Instead of soaking up a lot of resources and man-hours searching all over the property, it makes perfect sense that he should be able to employ his drone to help.

Let’s also consider emergency situations.  Imagine a high-speed chase, or a shootout.  Drones certainly seem appropriate for following suspects driving recklessly, in the interest of public safety.  And, if armed suspects are barricaded behind some sort of bunker or around a corner, it makes sense to use a drone to conduct reconnaissance and help police determine their course of action.

In the end, I think drones can be a valuable asset for law enforcement personnel to employ in emergency situations or when a legal warrant is granted, but should not be used simply for patrolling, out looking for someone to mess up.

Privacy Policies


Have you ever actually read a Privacy Policy?  Probably not.  Most likely, when you install a new application, or receive an update to the “Terms and Conditions,” you simply click “OK” or “Accept” and move forward – even slightly perturbed that the little popup interrupted your flow.

I think we do this for two reasons: 1) the policy is long and cumbersome and we probably wouldn’t understand it anyway; and 2) what happens if I click “No” or do not agree with the terms?

After a cursory review of several privacy policies (including Facebook’s and Google’s), I can tell you the first reason is valid.  Not only are they partially written in legalese, but as a lay consumer I have no idea what the implications are.  So my data may be sold or given away for “research purposes.”  What does that mean?  And how will that entity store my data?  Safely, I hope.  The policy says they take securing my data very seriously, but is that true?  Have they sufficiently invested in securing it?

The second reason is an even bigger issue.  If consumers do not agree to the terms, they don’t get access to the service.  On the surface, this may seem simple – if you don’t like it, don’t use it.  But is that realistic?  Some of these huge corporations essentially run, or control access to, the internet.  So, if we deny the terms we’re essentially saying “no thank you” to the internet.  Can we really operate in the 21st century as productive members of society without it?  Seems like our hands are pretty much forced on the issue – we simply must accept the terms so we can participate in modern society.

So what’s the solution?  Well, I don’t know – nor does anyone at this point, as it’s the topic of much debate among large tech companies and domestic and international regulators.  However, I think a good starting point is to give users more options.  Rather than simply accepting or rejecting the terms, users should be able to customize the level of privacy they want.  Facebook is making some decent strides in this direction – you can tailor your profile settings to let anyone or a select group of people see your information.  You can even tailor each post.  However, the problem is you have to go find out how to do it.  You don’t get prompted with questions that make you customize your settings.  The default is wide open – it’s up to you to close, or slow, the spigot.  So, either change the default to SUPER PRIVATE, or prompt users to go through the privacy settings before allowing access to the program so that they have to choose…and explain it to them in a way that’s understandable.
