
When digital identity ceases to be a pain in the ass, we can thank Kim Cameron and his Seven Laws of Identity, which he wrote in 2004, formally published in early 2005, and gently explained and put to use until he died late last year. Today, seven of us will take turns explaining each of Kim’s laws at KuppingerCole’s EIC conference in Berlin. We’ll only have a few minutes each, however, so I’d like to visit the subject in a bit more depth here.

To understand why these laws are so important and effective, it will help to know where Kim was coming from in the first place. It wasn’t just his work as the top architect for identity at Microsoft (where he arrived when his company was acquired). Specifically, Kim was coming from two places. One was the physical world where we live and breathe, and identity is inherently personal. The other was the digital world where what we call identity is how we are known to databases. Kim believed the former should guide the latter, and that nothing like that had happened yet, but that we could and should work for it.

Kim’s The Laws of Identity paper alone is close to seven thousand words, and his IdentityBlog adds many thousands more. But his laws by themselves are short and sweet. Here they are, with additional commentary by me, in italics.

1. User Control and Consent

Technical identity systems must only reveal information identifying a user with the user’s consent.

Note that consent goes in the opposite direction from all the consent “agreements” websites and services want us to click on. This matches the way identity works in the natural world, where each of us not only chooses how we wish to be known, but usually with an understanding about how that information might be used.

2. Minimal Disclosure for a Constrained Use

The solution which discloses the least amount of identifying information and best limits its use is the most stable long term solution.

There is a reason we don’t walk down the street wearing name badges: because the world doesn’t need to know any more about us than we wish to disclose. Even when we pay with a credit card, the other party really doesn’t need (or want) to know the name on the card. It’s just not something they need to know.

3. Justifiable Parties

Digital identity systems must be designed so the disclosure of identifying information is limited to parties having a necessary and justifiable place in a given identity relationship.

If this law had been applied way back when Kim wrote it, we wouldn’t have the massive privacy losses that have become the norm, with unwanted tracking pretty much everywhere online—and increasingly offline as well.

4. Directed Identity

A universal identity system must support both “omni-directional” identifiers for use by public entities and “unidirectional” identifiers for use by private entities, thus facilitating discovery while preventing unnecessary release of correlation handles.

All brands, meaning all names of public entities, are “omni-directional.” They are also what Kim calls “beacons” that have the opposite of something to hide about who they are. Individuals, however, are private first, and public only to the degrees they wish to be in different circumstances. Our identifiers, per the first three laws, are “unidirectional.”
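One way engineers have since implemented Kim’s “unidirectional” identifiers is with pairwise, per-relationship identifiers, as in the pairwise DIDs of today’s decentralized identity work. Here is a minimal sketch of the idea, assuming a per-user secret and HMAC-SHA256 derivation (the construction is mine, for illustration, not Kim’s):

```python
import hashlib
import hmac

def pairwise_id(user_secret: bytes, relying_party: str) -> str:
    """Derive a directed ("unidirectional") identifier: stable within one
    relationship, but uncorrelatable across different relying parties."""
    return hmac.new(user_secret, relying_party.encode(), hashlib.sha256).hexdigest()

secret = b"a-strong-random-per-user-secret"  # hypothetical; keep private

id_for_store = pairwise_id(secret, "shop.example")
id_for_clinic = pairwise_id(secret, "clinic.example")

# The same user presents a stable identifier to each party...
assert id_for_store == pairwise_id(secret, "shop.example")
# ...but two parties comparing notes get no shared "correlation handle."
assert id_for_store != id_for_clinic
```

The design choice is the point: a public entity wants one well-known identifier (a beacon), while a private individual wants a different identifier per relationship, so that no two databases can join their records on it.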

5. Pluralism of Operators and Technologies

A universal identity system must channel and enable the inter-working of multiple identity technologies run by multiple identity providers.

This law expresses learnings from Microsoft’s failed experiment with Passport and a project called “Hailstorm.” The idea with both was for Microsoft to become the primary or sole online identity provider for everyone. Kim’s work at Microsoft was all about making the company one among many working in the same broad industry.

6. Human Integration

The universal identity metasystem must define the human user to be a component of the distributed system integrated through unambiguous human-machine communication mechanisms offering protection against identity attacks.

As Kim put it in his 2019 (and final) talk at EIC, we need to turn the Web “right side up,” meaning putting the individual at the top rather than the bottom, with each of us in charge of our lives online, in distributed homes of our own. That’s what will integrate all the systems we deal with. (Joe Andrieu first explained this in 2007, here.)

7. Consistent Experience Across Contexts

The unifying identity metasystem must guarantee its users a simple, consistent experience while enabling separation of contexts through multiple operators and technologies.

So identity isn’t just about corporate systems getting along with each other. It’s about giving each of us scale across all the entities we deal with. Because it’s our experience that will make identity work right, finally, online. 

I expect to add more as the conference goes on; but I want to get this much out there to start with.

By the way, the photo above is from the first and only meeting of the Identity Gang, at Esther Dyson’s PC Forum in 2005. The next meeting of the Gang was the first Internet Identity Workshop, aka IIW, later that year. We’ve had 34 more since then, all with hundreds of participants, all with great influence on the development of code, standards, and businesses in digital identity and adjacent fields. And all guided by Kim’s Laws.

 

Throughout the entire history of what we call media, we have consumed its contents on producers’ schedules. When we wanted to know what was in newspapers and magazines, we waited until the latest issues showed up on newsstands, at our doors, and in our mailboxes. When we wanted to hear what was on the radio or to watch what was on TV, we waited until it played on our stations’ schedules. “What’s on TV tonight?” is perhaps the all-time most-uttered question about a medium. Wanting the answers is what made TV Guide required reading in most American households.

But no more. Because we have entered the Age of Optionality. We read, listen to, and watch the media we choose, whenever we please. Podcasts, streams, and “over the top” (OTT) on-demand subscription services are replacing old-fashioned broadcasting. Online publishing is now more synchronous with readers’ preferences than with producers’ schedules.

The graph above illustrates what happened and when, though I’m sure the flat line at the right end is some kind of error on Google’s part. Still, the message is clear: what’s on and what’s in have become anachronisms.

The centers of our cultures have been held for centuries by our media. Those centers held in large part because they came on a rhythm, a beat, to which we all danced and on which we all depended. But now those centers are threatened or gone, as media have proliferated and morphed into forms that feed our attention through the flat rectangles we carry in our pockets and purses, or mount like large art pieces on walls or tabletops at home. All of these rectangles maximize optionality to degrees barely imaginable in prior ages and their media environments: vocal, scribal, printed, broadcast.

We are now digital beings. With new media overlords.

The Digital Markets Act in Europe calls these overlords “gatekeepers.” The gates they keep are at entrances to vast private walled gardens enclosing whole cultures and economies. Bruce Schneier calls these gardens feudal systems in which we are all serfs.

To each of these duchies, territories, fiefs, and countries, we are like cattle from which personal data is extracted and processed as commodities. Purposes differ: Amazon, Apple, Facebook, Google, Twitter, and our phone and cable companies each use our personal data in different ways. Some of those ways do benefit us. But our agency over how personal data is extracted and used is neither large nor independent of these gatekeepers. Nor do we have much if any control over what countless customers of gatekeepers do with personal data they are given or sold.

The cornucopia of options we have over the media goods we consume in these gardens somatizes us while also masking the extreme degree to which these private gatekeepers have enclosed the Internet’s public commons, and how algorithmic optimization of engagement at all costs has made us into enemy tribes. Ignorance of this change and its costs is the darkness in which democracy dies.

Shoshana Zuboff calls this development The Coup We Are Not Talking About. The subhead of that essay makes the choice clear: We can have democracy, or we can have a surveillance society, but we cannot have both. Her book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, gave us a name for what we’re up against. A bestseller, it is now published in twenty-six languages. But our collective oblivity is also massive.

We plan to relieve some of that oblivity by having Shoshana lead the final salon in our Beyond the Web series at Indiana University’s Ostrom Workshop. To prepare for that, Joyce and I spoke with Shoshana for more than an hour and a half last night, and are excited about her optimism toward restoring the public commons and invigorating democracy in our still-new digital age. This should be an extremely leveraged way to spend an hour or more on April 11, starting at 2PM Eastern time. And it’s free.

Use this link to add the salon to your calendar and join in when it starts.

Or, if you’re in Bloomington, come to the Workshop and attend in person. We’re at 513 North Park Avenue.

 

 

The frog of war

“Compared to war, all other forms of human endeavor shrink to insignificance. God help me, I do love it so.” — George S. Patton (in the above shot played by George C. Scott in his greatest role.)


Is the world going to croak?

Put in geological terms, will the Phanerozoic eon, which began with the Cambrian explosion a half billion years ago, end at the close of the Anthropocene epoch, when the human species, which has permanently put its mark on the Earth, commits suicide with nuclear weapons? This became a lot more plausible as soon as Putin rattled his nuclear saber.

Well, life will survive, even if humans do not. And that will happen whether or not the globe warms as much as the IPCC assures us it will. If temperatures in the climate of our current interglacial interval peak with both poles free of ice, the Mississippi River will meet the Atlantic at what used to be St. Louis. Yet life will abound, as life does, at least until the Sun gets so large and hot that photosynthesis stops and the Phanerozoic finally ends. That time is about a half-billion years away. That might seem like a long time, but given the age of the Earth itself—about 4.5 billion years—life here is much closer to the end than the beginning.

Now let’s go back to human time.

I’ve been on the planet for almost 75 years, which in the grand scheme is a short ride. But it’s enough to have experienced history being bent some number of times. So far I count six.

First was on November 22, 1963, when John F. Kennedy was assassinated. This was when The Fifties actually ended and The Sixties began. (My great aunt Eva Quakenbush, née Searls or Searles—it was spelled both ways—told us what it was like when Lincoln was shot and she was 12 years old. “It changed everything,” she said. So did the JFK assassination.)

The second was the one-two punch of the Martin Luther King and Bobby Kennedy assassinations, on April 4 and June 6, 1968. The former was a massive setback for both the civil rights movement and nonviolence. And neither has fully recovered. The latter assured the election of Richard Nixon and another six years of the Vietnam war.

The third was the Internet, which began to take off in the mid-1990s. I date the steep start of the hockey-stick curve to April 30, 1995, when the last backbone within the Internet that had forbidden commercial traffic (NSFnet) shut down, uncorking a tide of e-commerce that is still rising.

The fourth was 9/11, in 2001. That suckered the U.S. into wars in Afghanistan and Iraq, and repositioned the country from the world’s leading peacekeeper to the world’s leading war-maker—at least until Russia stepped up.

The fifth was the Covid pandemic, which hit the world in early 2020 and is still with us, causing all sorts of changes, from crashes in supply chains to inflation to completely new ways for people to work, travel, vote, and think.

Sixth is the Russian invasion of Ukraine, which began on February 24, 2022, just eleven days ago as I write this.

As big a thing as this last bend is—and it’s huge—there are too many ways to make sense of it all.

I didn’t list the threat of thermonuclear annihilation among the six big changes in history I’ve experienced because I was raised with it. Several times a year we would “duck and cover” under our desks when the school would set off air raid sirens. Less frequent than fire drills, these were far more scary, because we all knew we were toast, being just five miles by air from Manhattan, which was surely in the programmed crosshairs of one or more Soviet nukes.

Back then I put so little faith in adult wisdom, and its collective expression in government choices, that I had a bucket list of places I’d like to see before nuclear blasts or fallout doomed us all. My top two destinations were the Grand Canyon and California: exotic places for a kid whose farthest family venturings from New Jersey were to see relatives in North Carolina and North Dakota. (Of no importance but of possible interest is that I’ve now been a citizen of California for 37 years, married to an Angeleno for 32 of those, and it still seems exotic to me. Mountains next to cities and beaches? A tradition of wildfires and earthquakes? Whoa.)

What’s around the corner we turned two Thursdays ago? Hard to tell, in spite of all that’s being said by Wise Ones in the links above. One thing I do know for sure: People have changed, because more and more of them are digital now, connected to anybody and anything at any distance, and able to talk, produce “content” and do business—and to look and think past national and territorial boundaries. We make our tools and then our tools make us, McLuhan taught. Also, all media work us over completely. We have been remade into digital beings by our wires, waves, and phones. This raises optionalities in too many ways to list.

I’m an optimist by nature, and since the ’90s have been correctly labeled a cyber-utopian. (Is there anything more utopian than The Cluetrain Manifesto?) To me, the tiny light at the end of Ukraine’s tunnel is a provisional belief that bad states—especially ones led by lying bastards who think nothing of wasting thousands or millions of innocent lives just to build an empire—can’t win World War Wired. Unless, that is, the worst of those bastards launches the first nuke and we all go “gribbit.”

Our challenge as a species, after we stop Russia’s land grab from becoming a true world war, is to understand fully how we can live and work in the Wired World as digital as well as physical beings.


If you’re getting health care in the U.S., chances are your providers are now trying to give you a better patient experience through a website called MyChart.

This is supposed to be yours, as the first person singular pronoun My implies. Problem is, it’s TheirChart. And there are a lot of them. I have four (correction: five*) MyChart accounts with as many health care providers, so far: one in New York, two in Santa Barbara, one in Mountain View, and one in Los Angeles. I may soon have another in Bloomington, Indiana. None are mine. All are theirs, and they seem not to get along. Especially with me. (Some later correction on this below, and from readers who have weighed in. See the comments.)

Not surprisingly, all of them come from a single source: Epic Systems, the primary provider of back-end information tech to the country’s health care providers, including most of the big ones: Harvard, Yale, Mayo, UCLA, UChicago, Duke, Johns Hopkins, multiple Mount Sinais, and others like them. But, even though all these MyChart portals are provided by one company, and (I suppose) live in one cloud, there appears to be no way for you, the patient, to make those things work together inside an allied system that is truly yours (like your PC or your car is yours), or for you to provide them with data you already have from other sources. Which you could presumably do if My meant what it says.

The way they work can get perverse. For example, a couple days ago, one of my doctors’ offices called to tell me we would need to have a remote consult before she changed one of my prescriptions. This, I was told, could not be done over the phone. It would need to be done over video inside MyChart. So now we have an appointment for that meeting on Monday afternoon, using MyChart.

I decided to get ahead of that by finding my way into the right MyChart and leaving a session open in a browser tab. Then I made the mistake of starting to type “MyChart” into my browser’s location bar, and then not noticing that the top result was one of the countless other MyCharts maintained by countless other health care providers. But this other one looked so much like one of mine that I wasted an hour or more, failing to log in and then failing to recover my login credentials. It wasn’t until I called the customer service number thankfully listed on the website that I found I was trying to use the MyChart of some provider I’d never heard of—and which had never heard of me.

Now I’m looking at one of my two MyCharts for Santa Barbara, where it shows no upcoming visits. I can’t log into the other one to see if the Monday appointment is noted there, because that MyChart doesn’t know who I am. So I’m hoping to unfuck that one on Monday before the call on whichever MyChart I’ll need to use. Worst case, I’ll just tell the doctor’s office that we’ll have to make do with a phone call. If they answer the phone, that is.

The real problem here is that there seem to be hundreds or thousands of different health care providers, all using one company’s back end to provide personal health care information to millions of patients through hundreds or thousands of different portals, all called the same thing (or something close), while providing no obvious way for patients to gather their own data from multiple sources to use for their own independent purposes, both in and out of that system. Or any system.

To call this fubar understates the problem.

Here’s what matters: Epic can’t solve this. Nor can any or all of these separate health care systems. Because none of them are you.

You’re where the solution needs to happen. You need a simple and standardized way to collect and manage your own health-related information and engagements with multiple health care providers. One that’s yours.

This doesn’t mean you need to be alone in the wilderness. You do need expert help. In the old days, you used to get that through your primary care physician. But large health care operations have been hoovering up private practices for years, and one of the big reasons for that has been to make the data management side of medicine easier for physicians and their many associated providers. Not to make it easier for you. After all, you’re not their customer. Insurance companies are their customers.

In the midst of this is a market hole where your representation in the health care marketplace needs to sit. I know just one example of how that might work: the HIE of One. (HIE is Health Information Exchange.) For all our sakes, somebody please fund that work.

Far too much time, sweat, money, and blood is being spilled trying to solve this problem from the center outward. (For a few details on how awful that is, start reading here.)

While we’re probably never going to make health care in the U.S. something other than the B2B insurance business it has become, we can at least start working on a Me2B solution in the place it most needs to work: with patients. Because we’re the ones who need to be in full command of our relationships with our providers as well as with ourselves.

Health care, by the way, is just one category that cries out for solutions that can only come from the customers’ side. Customer Commons has a list of fourteen, including this one.

The first of these is identity. The self-sovereign approach to that would start with a wallet that is truly mine, and includes all these MyCharts. Hell, Epic could do one. Hint hint.


*Okay, now it’s Monday, and I’m a half-hour away from my consult with my doctor, via Zoom, inside MyChart. Turns out I was not yet registered with this MyChart, but at least there was a phone number I could call, and on the call (which my phone says took 14 minutes) we got my ass registered. He also pointed me to where, waaay down a very long menu, there is a “Link my accounts” choice, which brings up this:

Credit where due:

It was very easy to link my four known accounts, plus another (the one in Mountain View) that I had forgotten but somehow the MyChart master brain remembered. I suspect, given all the medical institutions I have encountered in my long life, that there are many more. Because in fact I had been to the Mountain View hospital only once, and I don’t even remember why, though I suppose I could check.

So that’s the good news. The bad news remains the same. None of these charts are mine. They are just views into many systems that are conditionally open to me. That they are now federated (that’s what this kind of linking-up is called) on Epic’s back end does not make it mine. It just makes it a many-theirs.

So the system still needs to be fixed. From our end.


Just got a press release by email from David Rosen (@firstpersonpol) of the Public Citizen press office. The headline says “Historic Grindr Fine Shows Need for FTC Enforcement Action.” The same release is also a post in the news section of the Public Citizen website. This is it:

WASHINGTON, D.C. – The Norwegian Data Protection Agency today fined Grindr $11.7 million following a Jan. 2020 report that the dating app systematically violates users’ privacy. Public Citizen asked the Federal Trade Commission (FTC) and state attorneys general to investigate Grindr and other popular dating apps, but the agency has yet to take action. Burcu Kilic, digital rights program director for Public Citizen, released the following statement:

“Fining Grindr for systematic privacy violations is a historic decision under Europe’s GDPR (General Data Protection Regulation), and a strong signal to the AdTech ecosystem that business-as-usual is over. The question now is when the FTC will take similar action and bring U.S. regulatory enforcement in line with those in the rest of the world.

“Every day, millions of Americans share their most intimate personal details on apps like Grindr, upload personal photos, and reveal their sexual and religious identities. But these apps and online services spy on people, collect vast amounts of personal data and share it with third parties without people’s knowledge. We need to regulate them now, before it’s too late.”

The first link goes to Grindr is fined $11.7 million under European privacy law, by Natasha Singer (@NatashaNYT) and Aaron Krolik. (This @AaronKrolik? If so, hi. If not, sorry. This is a blog. I can edit it.) The second link goes to a Public Citizen post titled Popular Dating, Health Apps Violate Privacy.

In the emailed press release, the text is the same, but the links are not. The first is this:

https://default.salsalabs.org/T72ca980d-0c9b-45da-88fb-d8c1cf8716ac/25218e76-a235-4500-bc2b-d0f337c722d4

The second is this:

https://default.salsalabs.org/Tc66c3800-58c1-4083-bdd1-8e730c1c4221/25218e76-a235-4500-bc2b-d0f337c722d4

Why are they not simple and direct URLs? And who is salsalabs.org?

You won’t find anything at that link, or by running a whois on it. But I do see there is a salsalabs.com, which has “SmartEngagement Technology” that “combines CRM and nonprofit engagement software with embedded best practices, machine learning, and world-class education and support.” Since Public Citizen is a nonprofit, I suppose it’s getting some kind of “smart engagement” with these links. Privacy Badger tells me salsalabs.com has 14 potential trackers, including static.ads.twitter.com.

My point here is that we, as clickers on those links, have at best a suspicion about what’s going on: perhaps that the link is being used to tell Public Citizen that we’ve clicked on the link… and likely also to help target us with messages of some sort. But we really don’t know.
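We can only guess at the scheme without Salsa’s documentation, but the shape of the two URLs is suggestive: each path carries two opaque tokens, and only the first differs between the links. A reasonable reading is that the first token names the destination and the second names the recipient or campaign, which would let the sender know which recipient clicked which link. A small sketch (the token meanings are my assumption, not documented fact):

```python
from urllib.parse import urlparse

# The two redirect links from the emailed press release.
links = [
    "https://default.salsalabs.org/T72ca980d-0c9b-45da-88fb-d8c1cf8716ac/25218e76-a235-4500-bc2b-d0f337c722d4",
    "https://default.salsalabs.org/Tc66c3800-58c1-4083-bdd1-8e730c1c4221/25218e76-a235-4500-bc2b-d0f337c722d4",
]

# Split each path into its two opaque tokens.
tokens = [urlparse(u).path.strip("/").split("/") for u in links]

# The first token differs per link (presumably encoding the destination),
# while the second is identical across both (presumably the recipient).
assert tokens[0][0] != tokens[1][0]
assert tokens[0][1] == tokens[1][1]
```

If that reading is right, clicking either link tells the sender not just that someone clicked, but who.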

And, speaking of not knowing, Natasha and Aaron’s New York Times story begins with this:

The Norwegian Data Protection Authority said on Monday that it would fine Grindr, the world’s most popular gay dating app, 100 million Norwegian kroner, or about $11.7 million, for illegally disclosing private details about its users to advertising companies.

The agency said the app had transmitted users’ precise locations, user-tracking codes and the app’s name to at least five advertising companies, essentially tagging individuals as L.G.B.T.Q. without obtaining their explicit consent, in violation of European data protection law. Grindr shared users’ private details with, among other companies, MoPub, Twitter’s mobile advertising platform, which may in turn share data with more than 100 partners, according to the agency’s ruling.

Before this, I had never heard of MoPub. In fact, I had always assumed that Twitter’s privacy policy either limited or forbade the company from leaking personal information to advertisers or other entities. Here’s how its Private Information Policy Overview begins:

You may not publish or post other people’s private information without their express authorization and permission. We also prohibit threatening to expose private information or incentivizing others to do so.

Sharing someone’s private information online without their permission, sometimes called doxxing, is a breach of their privacy and of the Twitter Rules. Sharing private information can pose serious safety and security risks for those affected and can lead to physical, emotional, and financial hardship.

On the MoPub site, however, it says this:

MoPub, a Twitter company, provides monetization solutions for mobile app publishers and developers around the globe.

Our flexible network mediation solution, leading mobile programmatic exchange, and years of expertise in mobile app advertising mean publishers trust us to help them maximize their ad revenue and control their user experience.

The Norwegian DPA apparently finds a conflict between the former and the latter—or at least in the way the latter was used by Grindr (since they didn’t fine Twitter).

To be fair, Grindr and Twitter may not agree with the Norwegian DPA. Regardless of their opinion, however, by this point in history we should have no faith that any company will protect our privacy online. Violating personal privacy is just too easy to do, to rationalize, and to make money at.

To start truly facing this problem, we need to start with a simple fact: If your privacy is in the hands of others alone, you don’t have any. Getting promises from others not to stare at your naked self isn’t the same as clothing. Getting promises not to walk into your house or look in your windows is not the same as having locks and curtains.

In the absence of personal clothing and shelter online, or working ways to signal intentions about one’s privacy, the hands of others alone is all we’ve got. And it doesn’t work. Nor do privacy laws, especially when enforcement is still so rare and scattered.

Really, to potential violators like Grindr and Twitter/MoPub, enforcement actions like this one by the Norwegian DPA are at most a little discouraging. The effect on our experience of exposure is still nil. We are exposed everywhere, all the time, and we know it. At best we just hope nothing bad happens.

The only way to fix this problem is with the digital equivalent of clothing, locks, curtains, ways to signal what’s okay and what’s not—and to get firm agreements from others about how our privacy will be respected.

At Customer Commons, we’re starting with signaling, specifically with first party terms that you and I can proffer and sites and services can accept.

The first is called P2B1, aka #NoStalking. It says “Just give me ads not based on tracking me.” It’s a term any browser (or other tool) can proffer and any site or service can accept—and any privacy-respecting website or service should welcome.

Making this kind of agreement work is also being addressed by IEEE P7012, a working group on machine-readable personal privacy terms.
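What a machine-readable term might look like is easiest to see in a sketch. This is a hypothetical encoding of #NoStalking—the field names and logic are mine for illustration, not the actual Customer Commons or IEEE P7012 format:

```python
# A hypothetical machine-readable form of the #NoStalking term.
no_stalking = {
    "term": "P2B1",
    "name": "#NoStalking",
    "proffered_by": "user",
    "allows": ["contextual-ads"],          # ads based on page content
    "forbids": ["tracking", "profiling"],  # ads based on who I am
}

def site_accepts(term: dict, site_practices: set) -> bool:
    """A site can accept the proffered term only if none of its
    advertising practices fall under the term's forbidden list."""
    return not (site_practices & set(term["forbids"]))

# A contextual-ads-only site can accept; a tracking site cannot.
assert site_accepts(no_stalking, {"contextual-ads"})
assert not site_accepts(no_stalking, {"tracking", "contextual-ads"})
```

The mechanics matter less than the direction: the term travels from the individual to the site, the reverse of today’s click-to-consent agreements.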

Now we’re looking for sites and services willing to accept those terms. How about it, Twitter, New York Times, Grindr and Public Citizen? Or anybody.

DM us at @CustomerCommons and we’ll get going on it.

 

A few days ago, in Figuring the Future, I sourced an Arnold Kling blog post that posed an interesting pair of angles toward outlook: a 2×2 with Fragile <—> Robust on one axis and Essential <—> Inessential on the other. In his sort, essential + fragile are hospitals and airlines. Inessential + fragile are cruise ships and movie theaters. Robust + essential are tech giants. Inessential + robust are sports and entertainment conglomerates, plus major restaurant chains. It’s a heuristic, and all of it is arguable (especially given the gray along both axes), which is the idea. Cases must be made if planning is to have meaning.

Now, haul Arnold’s template over to The U.S. Labor Market During the Beginning of the Pandemic Recession, by Tomaz Cajner, Leland D. Crane, Ryan A. Decker, John Grigsby, Adrian Hamins-Puertolas, Erik Hurst, Christopher Kurz, and Ahu Yildirmaz, of the University of Chicago, and lay it on this item from page 21:

The highest employment drop, in Arts, Entertainment and Recreation, leans toward inessential + fragile. The second, in Accommodation and Food Services, is more on the essential + fragile side. The lowest employment changes, from Construction on down to Utilities, all tend toward essential + robust.

So I’m looking at those bottom eight essential + robust categories and asking a couple of questions:

1) What percentage of workers in each essential + robust category are now working from home?

2) How much of this work is essentially electronic? Meaning, done by people who live and work through glowing rectangles, connected on the Internet?

Hard to say, but the answers will have everything to do with the transition of work, and life in general, into a digital world that coexists with the physical one. This was the world we were gradually putting together when urgency around COVID-19 turned “eventually” into “now.”

In Junana, Bruce Caron writes,

“Choose One” was extremely powerful. It provided a seed for everything from language (connecting sound to meaning) to traffic control (driving on only one side of the road). It also opened up to a constructivist view of society, suggesting that choice was implicit in many areas, including gender.

Choose One said to the universe, “There are several ways we can go, but we’re all going to agree on this way for now, with the understanding that we can do it some other way later, thank you.” It wasn’t quite as elegant as “42,” but it was close. Once you started unfolding with it, you could never escape the arbitrariness of that first choice.

In some countries, an arbitrary first choice to eliminate or suspend personal privacy allowed intimate degrees of contact tracing to help hammer flat the infection curve of COVID-19. Not arbitrary, perhaps, but no longer escapable.

Other countries face similar choices. Here in the U.S., there is an argument that says “The tech giants already know our movements and social connections intimately. Combine that with what governments know and we can do contact tracing to a fine degree. What matters privacy if in reality we’ve lost it already and many thousands or millions of lives are at stake—and so are the economies that provide what we call our ‘livings.’ This virus doesn’t care about privacy, and for now neither should we.” There is also an argument that says, “Just because we have no privacy yet in the digital world is no reason not to have it. So, if we do contact tracing through our personal electronics, it should be disabled afterwards and obey old or new regulations respecting personal privacy.”

Those choices are not binary, of course. Nor are they outside the scope of too many other choices to name here. But many of those are “Choose Ones” that will play out, even if our choice is avoidance.

Yesterday (March 29), Zoom updated its privacy policy with a major rewrite. The new language is far more clear than what it replaced, and which had caused the concerns I detailed in my previous three posts:

  1. Zoom needs to clean up its privacy act,
  2. More on Zoom and privacy, and
  3. Helping Zoom

Those concerns were shared by Consumer Reports, Forbes and others as well. (Here’s Consumer Reports‘ latest on the topic.)

Mainly the changes clarify the difference between Zoom’s services (what you use to conference with other people) and its websites, zoom.us and zoom.com (which are just one site: the latter redirects to the former). As I read the policy, nothing in the services is used for marketing. Put another way, your Zoom sessions are firewalled from adtech, and you shouldn’t worry about personal information leaking to adtech (tracking-based advertising) systems.

The websites are another matter. Zoom calls those websites—its home pages—”marketing websites.” This, I suppose, is so they can isolate their involvement with adtech to their marketing work.

The problem with this is an optical one: encountering a typically creepy cookie notice and consent gauntlet (which still defaults hurried users to “consenting” to being tracked through “functional” and “advertising” cookies) on Zoom’s home page still conveys the impression that these consents, and these third parties, work across everything Zoom does, and not just its home pages.

And why call one’s home on the Web a “marketing website”—even if that’s mostly what it is? Zoom is classier than that.

My advice to Zoom is to just drop the jive. There will be no need for Zoom to disambiguate services and websites if neither is involved with adtech at all. And Zoom will be in a much better position to trumpet its commitment to privacy.

That said, this privacy policy rewrite is a big help. So thank you, Zoom, for listening.

 

Here’s the latest satellite fire detection data, restricted to just the last twelve hours of the Thomas Fire, mapped on Google Earth Pro:

That’s labeled 1830 Mountain Standard Time (MST), or 5:30pm Pacific, about half an hour ago as I write this.

And here are the evacuation areas:

Our home is in the orange Voluntary Evacuation area. So we made a round trip from LA to prepare the house as best we could, gather some stuff and go. Here’s a photo album of the trip, and one of the last sights we saw on our way out of town:

This, I believe, was a fire break created on the up-slope side of Toro Canyon. Whether purely preventive or not, it was very impressive.

And here is a view of the whole burn area, which stretches more than forty miles from west to east (or from Montecito to Fillmore):

Here you can see how there is no fresh fire activity near Lake Casitas and Carpinteria, which is cool (at least relatively). You can also see how Ojai and Carpinteria were saved, how Santa Barbara is threatened, and how there are at least five separate fires around the perimeter. Three of those are in the back country, and I suspect the idea is to let those burn until they hit natural fire breaks or the wind shifts and the fires get blown back on their own burned areas and fizzle out there.

The main area of concern is at the west end of the fire, above Santa Barbara, in what they call the front country: the slope on the ocean’s side of the Santa Ynez Mountains, which run as a long and steep spine, rising close to 4000 feet high in the area we care about here. (It’s higher farther west.)

This afternoon I caught a community meeting on KEYT, Santa Barbara’s TV station, which has been very aggressive and responsible in reporting on the fire. I can’t find a recording of that meeting now on the station’s website, but I am watching the station’s live 6pm news broadcast now, devoted to a news conference at the Ventura County Fairgrounds. (Even though I’m currently at a house near Los Angeles, I can watch our TV set top box remotely through a system called Dish Anywhere. Hats off to Dish Network for providing that ability. In addition to being cool, it’s exceptionally handy for evacuated residents whose homes still have electricity and a good Internet connection. I thank Southern California Edison and Cox for those.)

On KEYT, Mark Brown of @Cal_Fire just spoke about Plans A, B and C, one or more of which will be chosen based on how the weather moves. Plan C is the scariest (and he called it that), because it involves setting fire lines close to homes, intentionally scorching several thousand acres to create an already-burned break, to stop the fire. “The vegetation will be removed before the fire has a chance to take it out, the way it wants to take it out,” he says.

Okay, that briefing just ended. I’ll leave it there.

So everybody reading this knows, we are fine, and don’t need to be at the house while this is going on. We also have great faith that 8000 fire fighting personnel and all their support systems will do the job and save our South Coast communities. What they’ve done so far has been nothing short of amazing, given the enormous geographical extent of this fire, the exceptionally rugged nature of the terrain, the record dryness of the vegetation, and other disadvantages. A huge hat tip to them.

 

 

Data is the new love

Personal data, that is.

Because it’s good to give away—but only if you mean it.

And it’s bad to take it, even if it seems to be there for the taking.

I bring this up because a quarter million pages (so far) on the Web say “data is the new oil.”

That’s because a massive personal data extraction industry has grown up around the simple fact that our data is there for the taking. Or so it seems. To them. And their apologists.

As a result, we’re at a stage of wanton data extraction that looks kind of like the oil industry did in 1920 or so:

It’s a good metaphor, but for a horrible business. It’s a business we need to reform, replace, or both. What we need most are new industries that grow around who and what we are as individual human beings—and as a society that values what makes us human.

Our data is us. Each of us. It is our life online. Yes, we shed some data in the course of our activities there, kind of like we shed dandruff and odors. But that’s no excuse for the extractors to frack our lives and take what they want, just because it’s there, and they can.

Now think about what love is, and how it works. How we give it freely, and how worthwhile it is when others accept it. How taking it without asking is simply wrong. How it’s better to earn it than to demand it. How it grows when it’s given. How we grow when we give it as well.

True, all metaphors are wrong, but that’s how metaphors work. Time is not money. Life is not travel. A country is not a family. But all those things are like those other things, so we think and talk about each of them in terms of those others. (By saving, wasting and investing time; by arriving, departing, and moving through life; by serving our motherlands, and honoring our founding fathers.)

Oil made sense as a metaphor when data was so easy to take, and the resistance wasn’t there.

But now the resistance is there. More than half a billion people block ads online, most of which are aimed by extracted personal data. Laws like the GDPR have appeared, with heavy fines for taking personal data without clear permission.

I could go on, but I also need to go to bed. I just wanted to get this down while it was in the front of my mind, where it arrived while discussing how shitty and awful “data is the new oil” was when it first showed up in 2006, and how sadly popular it has become since then:

It’s time for a new metaphor that expresses what our personal data really is to us, and how much more it’s worth to everybody else if we keep, give and accept it on the model of love.

You’re welcome.

 

Who Owns the Internet? — What Big Tech’s Monopoly Powers Mean for our Culture is Elizabeth Kolbert‘s review in The New Yorker of several books, one of which I’ve read: Jonathan Taplin’s Move Fast and Break Things—How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy.

The main takeaway for me, to both Elizabeth’s piece and Jon’s book, is making clear that Google and Facebook are at the heart of today’s personal data extraction industry, and that this industry defines (as well as supports) much of our lives online.

Our data, and data about us, is the crude that Facebook and Google extract, refine and sell to advertisers. This by itself would not be a Bad Thing if it were done with our clearly expressed (rather than merely implied) permission, and if we had our own valves to control personal data flows with scale across all the companies we deal with, rather than countless different valves, many worthless, buried in the settings pages of the Web’s personal data extraction systems, as well as in all the extractive mobile apps of the world.

It’s natural to look for policy solutions to the problems Jon and others visit in the books Elizabeth reviews. And there are some good regulations around already. Most notably, the GDPR in Europe has energized countless developers (some listed here) to start providing tools individuals (no longer just “consumers” or “users”) can employ to control personal data flows into the world, and how that data might be used. Even if surveillance marketers find ways around the GDPR (which some will), advertisers themselves are starting to realize that tracking people like animals not only fails outright, but that the human beings who constitute the actual marketplace have mounted the biggest boycott in world history against it.

But I also worry because I consider both Facebook and Google epiphenomenal. Large and all-powerful though they may be today, they are (like all tech companies, especially ones whose B2B customers and B2C consumers are different populations—commercial broadcasters, for example) shallow and temporary effects rather than deep and enduring causes.

I say this as an inveterate participant in Silicon Valley who can name many long-gone companies that once occupied Google’s and Facebook’s locations there—and I am sure many more will occupy the same spaces in a fullness of time that will surely include at least one Next Big Thing that obsolesces advertising as we know it today online. Such as, for example, discovering that we don’t need advertising at all.

Even the biggest personal data extraction companies are also not utilities on the scale or even the importance of power and water distribution (which we need to live), or the extraction industries behind either. Nor have these companies yet benefitted from the corrective influence of fully empowered individuals and societies: voices that can be heard directly, consciously and personally, rather than mere data flows observed by machines.

That direct influence will be far more helpful than anything they’re learning now just by following our shadows and sniffing our exhaust, mostly against our wishes. (To grok how little we like being spied on, read The Tradeoff Fallacy: How Marketers are Misrepresenting American Consumers and Opening Them Up to Exploitation, a report by Joseph Turow, Michael Hennessy and Nora Draper of the Annenberg School for Communication at the University of Pennsylvania.)

Our influence will be most corrective when all personal data extraction companies become what lawyers call second parties. That’s when they agree to our terms as first parties. These terms are in development today at Customer Commons, Kantara and elsewhere. They will prevail once they get deployed in our browsers and apps, and companies start agreeing (which they will in many cases because doing so gives them instant GDPR compliance, which is required by next May, with severe fines for noncompliance).

Meanwhile new government policies that see us only as passive victims will risk protecting yesterday from last Thursday with regulations that last decades or longer. So let’s hold off on that until we have terms of our own, start performing as first parties (on an Internet designed to support exactly that), and the GDPR takes full effect. (Not that more consumer-protecting federal regulation is going to happen in the U.S. anyway under the current administration: all the flow is in the other direction.)

By the way, I believe nobody “owns” the Internet, any more than anybody owns gravity or sunlight. For more on why, see Cluetrain’s New Clues, which David Weinberger and I put up 1.5 years ago.
