
What law might clear the way for VRM/Me2B development?

VRM/Me2B developers shouldn’t have to wait for laws to pave the way through a wall-like status quo. (And we say that in our Privacy Manifesto.) But a good law or two should help.

That was what I had hoped—even expected—the GDPR would do. Specifically, I called it “the world’s most heavily weaponized law protecting personal privacy,” said it was “aimed at companies that track people without asking,” and predicted that it would “blow away the (mostly US-based) surveillance economy, especially tracking-based ‘adtech,’ which supports most commercial publishing online.”

That hasn’t happened.

It has been sixteen months since the GDPR went into effect (in May 2018), and violation of personal privacy online remains as pervasive as ever. Worse, violators take advantage of a loophole* in the GDPR that allows them to continue tracking people by requiring (or appearing to require) “consent” to cookies and other means of tracking (so you can get “personalized,” “interest-based” or “relevant” advertising, the perpetrators say). As long as the Data Protection Authorities of the EU’s member countries (who enforce the GDPR) fail to focus on the simple fact that nearly every website and its third parties are doing the same bad things Google and Facebook are accused of doing, the practice will continue, and the GDPR will remain a failure at stopping widespread spying-based adtech.

Meanwhile, many privacy advocates in the U.S. (including me) have invested hope in the California Consumer Privacy Act (CCPA), which will go into effect on January 1, 2020. I invite you to visit the operative language in that law, starting here. As legalese goes, it’s remarkably readable. Wikipedia compresses its provisions rather well under the heading Intentions of the Act:

The intentions of the Act are to provide California residents with the right to:

  1. Know what personal data is being collected about them.
  2. Know whether their personal data is sold or disclosed and to whom.
  3. Say no to the sale of personal data.
  4. Access their personal data.
  5. Request a business to delete any personal information about a consumer collected from that consumer.
  6. Not be discriminated against for exercising their privacy rights.

Note that this presumes that nearly all agency resides on the data collectors’ side, and that the only agency possible on the individual’s side is asking to know what those collectors have gathered, or saying no to what they can do with it.

That’s not enough.

Making matters worse is that we are mere “consumers” to the CCPA, “data subjects” to the GDPR and “users” to the computer industry—in each case with no more freedom and agency than what potential violators of our privacy (e.g. the websites and services of the world) separately grant us, through their countless, lengthy and infinitely varied privacy policies, terms and “agreements.”

In other words, we’re still at Square Zero, and Square One is neither the CCPA nor the GDPR. Those are relevant in the ways that guard rails are relevant to a winding road; but we don’t have the road yet.

While I’ve made it clear elsewhere that we need tech more than policy (because tech of our own—VRM tech—gives us agency), it will sure help to have policy that guides the deployment of that tech.

So, what law might actually open the way for VRM development, preferably by simply giving individuals a new power they’ve been lacking, such as real control over just one aspect of their privacy: what Louis Brandeis and Samuel Warren called “the right to be let alone” when we’re online?

I like two.

First is the Do-Not-Track Act of 2019. It’s model legislation from DuckDuckGo, explained this way:

When you turn on the setting in your browser that says “Do Not Track”, you probably expect to no longer be tracked on most websites you visit. Right? Well, you would be wrong. But don’t worry, you’re not alone.

Our recent study on the Do Not Track (DNT) browser setting indicated that about a quarter of people have turned on this setting, and most were unaware big sites do not respect it. That means approximately 75 million Americans, 115 million citizens of the European Union, and many more people worldwide are, right now, broadcasting a DNT signal.

All of these people are actively asking the sites they visit to not track them. Unfortunately, no law requires websites to respect your Do Not Track signals, and the vast majority of sites, including most all of the big tech companies, sadly choose to simply ignore them.

Let’s change that now. Let’s put teeth behind this widely used browser setting by making a law that would align with current consumer expectations and empower people to more easily regain control of their online privacy.

While DuckDuckGo actively supports the passing of strong, comprehensive privacy laws, we also recognize that it will take time for them to take effect worldwide. In the meantime, governments can provide immediate relief by enacting separate, simpler Do Not Track legislation.

It is extremely rare to have such an exciting legislative opportunity like this, where the hardest work — coordinated mainstream technical implementation and widespread consumer adoption — is already done.

That’s why we’re announcing draft legislation that can serve as a starting point for legislators in America and beyond. It’s entitled the “Do-Not-Track Act of 2019” and, if it were to be enacted, would require sites to respect the Do Not Track browser setting in this manner:

  1. No third-party tracking by default. Data brokers would no longer be legally able to use hidden trackers to slurp up your personal information from the sites you visit. And the companies that deploy the most trackers across the web — led by Google, Facebook, and Twitter — would no longer be able to collect and use your browsing history without your permission.
  2. No first-party tracking outside what the user expects. For example, if you use Whatsapp, its parent company (Facebook) wouldn’t be able to use your data from Whatsapp in unrelated situations (like for advertising on Instagram, also owned by Facebook). As another example, if you go to a weather site, it could give you the local forecast, but not share or sell your location history.

Under this proposed law, these restrictions would only come into play if a consumer has turned on the Do Not Track signal for their Internet traffic. To keep the Internet from breaking, these restrictions would have very narrowly tailored exceptions for debugging, auditing, security, non-commercial security research, and reporting, and further limited by mandated data-minimization requirements.

In particular, each of these narrow exceptions would only apply if a site adopts strict data-minimization practices, such as using the least amount of personal information needed, and anonymizing it whenever possible. And importantly, this draft legislation takes a more realistic view of what constitutes anonymous data vs. de-identified data. Legislators need to appreciate that users can be re-identified unless companies implement extra measures of protection.
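To make the mechanics concrete: turning on Do Not Track simply causes the browser to send a DNT: 1 header with every request, so honoring it is technically trivial. Here is a minimal sketch in Python, using Flask, of a site checking that header before setting a tracking cookie; the route, cookie name and value are illustrative, not anyone’s actual implementation:

```python
# A sketch of server-side respect for Do Not Track, using Flask.
# Illustrative only: the route, cookie name, and value are made up.
from flask import Flask, request, make_response

app = Flask(__name__)

def dnt_enabled() -> bool:
    """True when the visitor's browser is broadcasting the DNT: 1 signal."""
    return request.headers.get("DNT") == "1"

@app.route("/")
def home():
    resp = make_response("<h1>Welcome</h1>")
    if dnt_enabled():
        # The visitor has asked to be let alone: serve the page with no
        # tracking cookie and (not shown) no third-party trackers.
        return resp
    # Otherwise a site might set an analytics cookie here. Under the GDPR
    # it would still need a lawful basis; DNT acts as a clear opt-out.
    resp.set_cookie("visitor_id", "example-id", max_age=60 * 60 * 24 * 30)
    return resp

if __name__ == "__main__":
    app.run()
```

The hard part, as DuckDuckGo notes, was never the engineering. It’s that no law yet obliges sites to run code like this.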

Katherine Druckman and I also talked about this a bit with Gabriel Weinberg, CEO and founder of DuckDuckGo, in our Reality 2.0 podcast last month.

The other is Adrian Gropper’s Patient Privacy Rights Information Governance Label. It says,

Patient Privacy Rights Information Governance Label

August 19, 2019

Note: 0-to-5 of the boxes to be checked by the application, device, or service provider.

1. No sharing: The data is never shared with any external entities. It is not even shared in de-identified form.

2. No aggregation: The data is never aggregated with other types of input or data from external sources. This includes mixing the data gathered via The Service with other data, such as patient-reported outcomes.

3. Always voluntary self-identification: The user of The Service is able to choose their own identity. The user does not need to have their identity verified unless required by law.

4. Digital agent support: The user is able to specify a digital agent, trustee, or equivalent information manager, and this specified agent will not be subject to certification or censorship.

5. No vendor lock-in: The Service is easily and conveniently substitutable, so the user can easily move their data to another vendor providing a similar service. This prevents vendor lock-in and is often accomplished using Open Standards.

Indications for Use: The five separately self-asserted statements on the PPR Information Governance Label are subject to legal enforcement as would the privacy policy associated with The Service.

While the label is not proposed as a law, it would be good to have a law that imposes those requirements, and leaves room for individuals to provide for exceptions, for example when they have working relationships with a service provider.
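Though the label is prose, it reduces naturally to data. Here is a minimal sketch, in Python, of what a machine-readable version might look like; the field names are my own invention, and the five assertions come straight from the label text above:

```python
# A sketch of the PPR Information Governance Label as a machine-readable
# structure. Field names are my own; the five assertions are from the
# label text above. A provider would check 0 to 5 of these boxes.
from dataclasses import dataclass, fields

@dataclass
class GovernanceLabel:
    no_sharing: bool = False             # 1. Never shared with external entities
    no_aggregation: bool = False         # 2. Never mixed with external data
    voluntary_identity: bool = False     # 3. User chooses their own identity
    digital_agent_support: bool = False  # 4. User may name an uncensored agent
    no_vendor_lock_in: bool = False      # 5. Data is portable to another vendor

    def boxes_checked(self) -> int:
        """How many of the label's 0-to-5 boxes this provider asserts."""
        return sum(getattr(self, f.name) for f in fields(self))

# Example: a service asserting everything except the aggregation claim.
label = GovernanceLabel(no_sharing=True, voluntary_identity=True,
                        digital_agent_support=True, no_vendor_lock_in=True)
print(label.boxes_checked())  # -> 4
```

A label like this could be self-asserted in code or JSON and then held up against the service’s actual behavior, which is where the label’s point about legal enforcement comes in.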

Maciej Ceglowski also has some good suggestions.


*Paragraph 1 of Article 6 of the GDPR, covering the “Lawfulness of processing,” says, “Processing shall be lawful only if and to the extent that at least one of the following applies: (a) the data subject has given consent to the processing of his or her personal data for one or more specific purposes.” Hence the consent notices with an “accept” button in front of websites. These are most often presented as “cookie notices” (which are actually required by earlier EU law that was to some degree ignored until the GDPR came along).

Whether or not a notice on the front of a website talks about cookies, it usually means the site is obtaining your consent to being tracked “to personalize content and advertising” (or whatever) by spying on you. I’ve been told by GDPR experts that this really isn’t a loophole, and that most of these consent notices actually violate the GDPR’s letter and not just its spirit. Still, even if that’s true, violation of the GDPR’s spirit remains normative.

On privacy fundamentalism

This is a post about journalism, privacy, and the common assumption that we can’t have one without sacrificing at least some of the other, because (the assumption goes), the business model for journalism is tracking-based advertising, aka adtech.

I’ve been fighting that assumption for a long time. People vs. Adtech is a collection of 129 pieces I’ve written about it since 2008. At the top of that collection, I explain,

I have two purposes here:

  1. To replace tracking-based advertising (aka adtech) with advertising that sponsors journalism, doesn’t frack our heads for the oil of personal data, and respects personal freedom and agency.

  2. To encourage journalists to grab the third rail of their own publications’ participation in adtech.

I bring that up because Farhad Manjoo (@fmanjoo) of The New York Times grabbed that third rail, in a piece titled I Visited 47 Sites. Hundreds of Trackers Followed Me. He grabbed it right here:

News sites were the worst

Among all the sites I visited, news sites, including The New York Times and The Washington Post, had the most tracking resources. This is partly because the sites serve more ads, which load more resources and additional trackers. But news sites often engage in more tracking than other industries, according to a study from Princeton.

Bravo.

That piece is one in a series called The Privacy Project, which picks up where The Wall Street Journal’s What They Know series left off in 2013. (The Journal for years had a nice shortlink to that series: wsj.com/wtk. It’s gone now, but I hope they bring it back. Julia Angwin, who led the project, has her own list.)

Knowing how much I’ve been looking forward to that rail-grab, people have been pointing me both to Farhad’s piece and to a critique of it by Ben Thompson in Stratechery, titled Privacy Fundamentalism. On Farhad’s side is the idealist’s outrage at all the tracking that’s going on; on Ben’s side is the realist’s call for compromise. Or, in his words, trade-offs.

I’m one of those privacy fundamentalists (with a Manifesto, even), so you might say this post is my push-back on Ben’s push-back. But what I’m looking for here is not a volley of opinion. It’s to enlist help, including Ben’s, in the hard work of actually saving journalism, which requires defeating tracking-based adtech, without which we wouldn’t have most of the tracking that outrages Farhad. I explain why in Brands need to fire adtech:

Let’s be clear about all the differences between adtech and real advertising. It’s adtech that spies on people and violates their privacy. It’s adtech that’s full of fraud and a vector for malware. It’s adtech that incentivizes publications to prioritize “content generation” over journalism. It’s adtech that gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.

Real advertising doesn’t do any of those things, because it’s not personal. It is aimed at populations selected by the media they choose to watch, listen to or read. To reach those people with real ads, you buy space or time on those media. You sponsor those media because those media also have brand value.

With real advertising, you have brands supporting brands.

Brands can’t sponsor media through adtech because adtech isn’t built for that. On the contrary, adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media.

Adtech is magic in this literal sense: it’s all about misdirection. You think you’re getting one thing while you’re really getting another. It’s why brands think they’re placing ads in media, while the systems they hire chase eyeballs. Since adtech systems are automated and biased toward finding the cheapest ways to hit sought-after eyeballs with ads, some ads show up on unsavory sites. And, let’s face it, even good eyeballs go to bad places.

This is why the media, the UK government, the brands, and even Google are all shocked. They all think adtech is advertising. Which makes sense: it looks like advertising and gets called advertising. But it is profoundly different in almost every other respect. I explain those differences in Separating Advertising’s Wheat and Chaff.

To fight adtech, it’s natural to look for help in the form of policy. And we already have some of that, with the GDPR, and soon the CCPA as well. But really we need the tech first. I explain why here:

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control. For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

The tech horse is a collection of tools that provide each of us with ways both to protect our privacy and to signal to others what’s okay and what’s not okay, and to do both at scale. Browsers, for example, are a good model for that. They give each of us, as users, scale across all the websites of the world. We didn’t have that when the online world for ordinary folk was a choice of CompuServe, AOL, Prodigy and other private networks. And we don’t have it today in a networked world where providing “choices” about being tracked is the separate responsibility of every different site we visit, each with its own ways of recording our “consents,” none of which are remembered, much less controlled, by any tool we possess. You don’t need to be a privacy fundamentalist to know that’s just fucked.

But that’s all me, and what I’m after. Let’s go to Ben’s case:

…my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly…

Let’s pause there. Concern about privacy is not hysteria. It’s simple, legitimate, and personal. As Don Marti and I (he first) pointed out, way back in 2015, ad blocking didn’t become the biggest boycott in world history in a vacuum. Its rise correlated with the “interactive” advertising business giving the middle finger to Do Not Track, which was never anything more than a polite request not to be followed away from a website:

Retargeting (aka behavioral retargeting) is the most obvious evidence that you’re being tracked. (The Onion: Woman Stalked Across Eight Websites By Obsessed Shoe Advertisement.)

Likewise, people wearing clothing or locking doors are not “hysterical” about privacy. That people don’t like their naked digital selves being taken advantage of is also not hysterical.

Back to Ben…

…is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other.

Right. So let’s do the work. We haven’t started yet.

This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.

Good point, but does this excuse awful manners in the online world? Does it take off the table all the ways manners work well in the offline world—ways that ought to inform developments in the online world? I say no.

That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one.

Consider it appreciated. (In my own case I’ve been reveling in the wonders of networked life since the 80s. Examples of that are this, this and this.)

…the popular imagination about the danger this data collection poses, though, too often seems derived from the former [Stasi collecting highly personal information about individuals for very icky purposes] instead of the fundamentally different assumptions of the latter [Google and Facebook compiling massive amounts of data to be read by machines, mostly for non- or barely-icky purposes]. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems.

Such as—

• Facebook’s crackdown on API access after Cambridge Analytica has severely hampered research into the effects of social media, the spread of disinformation, etc.

True.

• Privacy legislation like GDPR has strengthened incumbents like Facebook and Google, and made it more difficult for challengers to succeed.

True.

Another bad effect of the GDPR is urging the websites of the world to throw insincere and misleading cookie notices in front of visitors, usually to extract “consent” that isn’t consent at all, to exactly the kind of tracking the GDPR was meant to thwart.

• Criminal networks from terrorism to child abuse can flourish on social networks, but while content can be stamped out, private companies, particularly domestically, are often limited as to how proactively they can go to law enforcement; this is exacerbated once encryption enters the picture.

True.

Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the end-all be-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).

It can also lead to good tech, which in turn can prevent bad policy. Or encourage good policy.

Towards Trade-offs
The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.

Wearing pants so nobody can see your crotch is not gray. That an x-ray machine can see your crotch doesn’t make personal privacy gray. Wrong is wrong.

To that end, I believe the privacy debate needs to be reset around these three assumptions:
• Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.

No. We need to accept that the simple, universally held personal and social assumptions about privacy offline (for example, having the ability to signal what’s acceptable and what is not) are a good model for the online world as well.

I’ll put it another way: people need pants online. This is not an absolutist position, or even a fundamentalist one. The ability to cover one’s private parts, and to signal what’s okay and what’s not okay for respecting personal privacy are simple assumptions people make in the physical world, and should be able to make in the connected one. That it hasn’t been done yet is no reason to say it can’t or shouldn’t be done. So let’s do it.

• Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet,

Likewise, the widespread creation and spread of gossip is inherent to life in the physical world. But that doesn’t mean we can’t have civilized ways of dealing with it.

and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.

Tracking people everywhere so their eyes can be stabbed with “relevant” and “interest-based” advertising, in oblivity to negative externalities, is not a good idea or a positive outcome (beyond the money that’s made from it).  Let’s at least get that straight before we worry about what might be extinguished by full agency for ordinary human beings.

To be clear, I know Ben isn’t talking here about full agency for people. I’m sure he’s fine with that. He’s talking about policy in general and specifically about the GDPR. I agree with what he says about that, and roughly about this too:

• Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.

Still, that doesn’t mean we can’t use what’s offline to inform what’s online. We need to appreciate and harmonize the virtues of both—mindful that the online world is still very new, and that many of the civilized and civilizing graces of life offline are good to have online as well. Privacy among them.

Finally, before getting to the work that energizes us here at ProjectVRM (meaning all the developments we’ve been encouraging for thirteen years), I want to say one more thing about privacy: it’s a moral matter. From Oxford, via Google: “concerned with the principles of right and wrong behavior” and “holding or manifesting high principles for proper conduct.”

Tracking people without their clear invitation or a court order is simply wrong. And the fact that tracking people is normative online today doesn’t make it right.

Shoshana Zuboff’s new book, The Age of Surveillance Capitalism, does the best job I know of explaining why tracking people online became normative—and why it’s wrong. The book is thick as a brick and twice as large, but fortunately Shoshana offers an abbreviated reason in her three laws, authored more than two decades ago:

First, that everything that can be automated will be automated. Second, that everything that can be informated will be informated. And most important to us now, the third law: In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

I don’t believe government restrictions and sanctions are the only ways to countervail surveillance capitalism (though uncomplicated laws such as this one might help). We need tech that gives people agency and gives companies better customers and consumers. From our wiki, here’s what’s already going on. And, from our punch list, here are some exciting TBDs, including many already in the works:

I’m hoping Farhad, Ben, and others in a position to help can get behind those too.
