VRM


My post yesterday saw action on Techmeme (as I write this, it’s at #2) and on Twitter (from Don Marti, Augustine Fou, et al.), and in thoughtful blog posts by John Gruber in Daring Fireball and Nick Heer in Pixel Envy. All pushed back on at least some of what I said. Here are some excerpts, with my responses. First, John:

Doc Searls:

Here’s what’s misleading about this message: Felix would have had none of those trackers following him if he had gone into Settings → Privacy → Tracking, and pushed the switch to off […].

Key fact: it is defaulted to on. Meaning Apple is not fully serious about privacy. If Apple was fully serious, your iPhone would be set to not allow tracking in the first place. All those trackers would come pre-vaporized.

For all the criticism Apple has faced from the ad tech industry over this feature, it’s fun to see criticism that Apple isn’t going far enough. But I don’t think Searls’s critique here is fair. Permission to allow tracking is not on by default — what is on by default is permission for the app to ask. Searls makes that clear, I know, but it feels like he’s arguing as though apps can track you by default, and they can’t.

I’m not arguing that. But let’s dig down a bit on all this.

What Apple has here is a system for asking in both directions (apps asking to track, and users asking apps not to track). I think this is weird and unclear, while simply disallowing tracking globally would be clear. So would a setting that simply turns off all apps’ ability to track. But that’s not what we have.

Or maybe we do.

To review… in Settings → Privacy → Tracking is a single ON/OFF switch labeled “Allow Apps to Request to Track.” It is by default set to ON. (I called AppleCare to be sure about this. The guy I spoke to confirmed it.) Below that setting is a bit of explanatory text with a “Learn more” link that goes to a long column of text one swipes down four times (at least on my phone) to read:

Okay, now look in the fifth paragraph of that text. There it says that by turning the setting to OFF, “all apps…will be blocked from accessing the device’s Advertising Identifier.” Maybe I’m reading this wrong, but it seems plain to me that this will at least pre-vaporize trackers vectored on the device identifier (technically the IDFA: Identifier for Advertisers).

After explaining why he thinks the default setting to ON is the better choice, and why he likes it that way (e.g. he can see what apps want to track, surprisingly few do, and he knows which they are), John says this about the IDFA:

IDFA was well-intentioned, but I think in hindsight Apple realizes it was naive to think the surveillance ad industry could be trusted with anything.

And why “ask” an app not to track? Why not “tell”? Or, better yet, “Prevent Tracking By This App”? Does asking an app not to track mean it won’t?

This is Apple being honest. Apple can block apps from accessing the IDFA identifier, but there’s nothing Apple can do to guarantee that apps won’t come up with their own device fingerprinting schemes to track users behind their backs. Using “Don’t Allow Tracking” or some such label instead of “Ask App Not to Track” would create the false impression that Apple can block any and all forms of tracking. It’s like a restaurant with a no smoking policy. That doesn’t mean you won’t go into the restroom and find a patron sneaking a smoke. I think if Apple catches applications circumventing “Ask App Not to Track” with custom schemes, they’ll take punitive action, just like a restaurant might ask a patron to leave if they catch them smoking in the restroom — but they can’t guarantee it won’t happen. (Joanna Stern asked Craig Federighi about this in their interview a few weeks ago, and Federighi answered honestly.)

If Apple could give you a button that guaranteed an app couldn’t track you, they would, and they’d label it appropriately. But they can’t so they don’t, and they won’t exaggerate what they can do.

On Twitter Don Marti writes,

Unfortunately it probably has to be “ask app not to track” because some apps will figure out ways around the policy (like all mobile app store policies). Probably better not to give people a false sense of security if they are suspicious of an app

—and then points to P&G Worked With China Trade Group on Tech to Sidestep Apple Privacy Rules, subtitled “One of world’s largest ad buyers spent years building marketing machine reliant on digital user data, putting it at odds with iPhone maker’s privacy moves” in The Wall Street Journal. In it is this:

P&G marketing chief Marc Pritchard has advocated for a universal way to track users across platforms, including those run by Facebook and Alphabet Inc.’s Google, that protects privacy while also giving marketers information to better hone their messages.

Frustrated with what it saw as tech companies’ lack of transparency, P&G began building its own consumer database several years ago, seeking to generate detailed intelligence on consumer behavior without relying on data gathered by Facebook, Google and other platforms. The information is a combination of anonymous consumer IDs culled from devices and personal information that customers share willingly. The company said in 2019 that it had amassed 1.5 billion consumer identifications world-wide.

China, where Facebook and Google have a limited presence, is P&G’s most sophisticated market for using that database. The company funnels 80% of its digital-ad buying there through “programmatic ads” that let it target people with the highest propensity to buy without presenting them with irrelevant or excessive ads, P&G Chief Executive Officer David Taylor said at a conference last year.

“We are reinventing brand building, from wasteful mass marketing to mass one-to-one brand building fueled by data and technology,” he said. “This is driving growth while delivering savings and efficiencies.”

In response to that, I tweeted,

Won’t app makers find ways to work around the no-tracking ask, regardless of whether it’s a global or a one-at-a-time setting? That seems to be what the @WSJ is saying about @ProcterGamble’s work with #CAID device fingerprinting.

Don replied,

Yes. Some app developers will figure out a way to track you that doesn’t get caught by the App Store review. Apple can’t promise a complete “stop this app from tracking me” feature because sometimes it will be one of those apps that’s breaking the rules

Then Augustine Fou replied,

of course, MANY ad tech companies have been working on fingerprinting for years, as a work around to browsers (like Firefox) allowing users to delete cookies many years ago. Fingerprinting is even more pernicious because it is on server-side and out of control of user entirely
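To make Fou’s point concrete, here is a rough sketch (my own illustration, not any ad tech vendor’s actual code) of why server-side fingerprinting is entirely out of the user’s hands: the identifier is derived from signals the browser sends with every request, so there is no cookie to delete and no setting to flip.

```python
# A minimal illustration of server-side fingerprinting (hypothetical,
# not any vendor's actual code). The identifier is derived from
# signals the browser sends with every request, so nothing is stored
# client-side for the user to find or delete.
import hashlib

def fingerprint(client_ip: str, headers: dict) -> str:
    signals = "|".join([
        client_ip,
        headers.get("User-Agent", ""),
        headers.get("Accept-Language", ""),
        headers.get("Accept-Encoding", ""),
    ])
    return hashlib.sha256(signals.encode()).hexdigest()[:16]

# The same browser tends to produce the same digest on every visit:
print(fingerprint("93.184.216.34", {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Accept-Language": "en-US,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
}))
```

Real fingerprinting pulls in many more signals (fonts, canvas rendering, screen dimensions and so on), but the asymmetry is the same: it all happens where no browser setting can reach.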

That server-side point is why I’ve long argued that we have a very basic problem with the client-server model itself: it all but guarantees a feudal system in which clients are serfs and site operators (and Big Tech in general) are their lords and masters. Though my original metaphor for client-server (which I have been told was originally a euphemism for slave-master) was calf-cow:

Here’s more on that one, plus some other metaphors as well.

I’ll pick up that thread after visiting what Nick says about fingerprinting:

There are countless ways that devices can be fingerprinted, and the mandated use of IDFA instead of those surreptitious methods makes it harder for ad tech companies to be sneaky. It has long been possible to turn off IDFA or reset the identifier. If it did not exist, ad tech companies would find other ways of individual tracking without users’ knowledge, consent, or control.

And why “ask” an app not to track? Why not “tell”? Or, better yet, “Prevent Tracking By This App”? Does asking an app not to track mean it won’t?

History has an answer for those questions.

Remember Do Not Track? Invented in the dawn of tracking, back in the late ’00s, it’s still a setting in every one of our browsers. But it too is just an ask — and ignored by nearly every website on Earth.

Much like Do Not Track, App Tracking Transparency is a request — verified as much as Apple can by App Review — to avoid false certainty. Tracking is a pernicious reality of every internet-connected technology. It is ludicrous to think that any company could singlehandedly find and disable all forms of fingerprinting in all apps, or to guarantee that users will not be tracked.

I agree. This too is a problem with the feudal system that the Web + app world has become, and Nick is right to point it out. He continues,

The thing that bugs me is that Searls knows all of this. He’s Doc Searls; he has an extraordinary thirteen year history of writing about this stuff. So I am not entirely sure why he is making arguments like the ones above that, with knowledge of his understanding of this space, begin to feel disingenuous. I have been thinking about this since I read this article last night and I have not come to a satisfactory realistic conclusion.

Here’s a realistic conclusion (or at least the one that’s in my head right now): I was mistaken to assume that Apple has more control here than it really does, and it’s right for all these guys (Nick, John, Augustine, Don and others) to point that out. Hey, I gave in to wishful thinking and unconscious ad hominem argumentation. Mea bozo. I sit corrected.

He continues,

Apple is a big, giant, powerful company — but it is only one company that operates within the realities of legal and technical domains. We cannot engineer our way out of the anti-privacy ad tech mess. The only solution is regulatory. That will not guarantee that bad actors do not exist, but it could create penalties for, say, Google when it ignores users’ choices or Dr. B when it warehouses medical data for unspecified future purposes.

We’ve had the GDPR and the CCPA in enforceable forms for a while now, and the main result, for us mere “data subjects” (GDPR) and “consumers” (CCPA), is a far worse collection of experiences in using the Web.

At this point my faith in regulation (which I celebrated, at least in the GDPR case, when it went into force) is less than zero. So is my faith in tech, within the existing system.

So I’m moving on, and working on a new approach, outside the whole feudal system, which I describe in A New Way. It’s truly new and small, but I think it can be huge: much bigger than the existing system, simply because we on the demand side will have better ways of informing supply (are you listening, Marc Pritchard?) than even the best surveillance systems can guess at.

“Give me a lever long enough and a fulcrum on which to place it, and I shall move the world,” Archimedes is said to have said.

For almost all of the last four years, Donald Trump was one hell of an Archimedes. With the U.S. presidency as his lever and Twitter as his fulcrum, the 45th President leveraged an endless stream of news-making utterances into a massive following and near-absolute domination of news coverage, worldwide. It was an amazing show, the like of which we may never see again.

Big as it was, that show ended on January 8, when Twitter terminated the @RealDonaldTrump account. Almost immediately after that, Trump was “de-platformed” from all these other services as well: PayPal, Reddit, Shopify, Snapchat, Discord, Amazon, Twitch, Facebook, TikTok, Google, Apple, Twitter, YouTube and Instagram. That’s a lot of fulcrums to lose.

What makes them fulcrums is their size. All are big, and all are centralized: run by one company. As members, users and customers of these centralized services, we are also at their mercy: no less vulnerable to termination than Trump.

So here is an interesting question: What if Trump had his own fulcrum from the start? For example, say he took one of the many Trump domains he probably owns (or should have bothered to own, long ago), and made it a blog where he said all the same things he tweeted, and that site had the same many dozens of millions of followers today? Would it still be alive?

I’m not sure it would. Because, even though the base protocols of the Internet and the Web are peer-to-peer and end-to-end, all of us are dependent on services above those protocols, and at the mercy of those services’ owners.

That to me is the biggest lesson the de-platforming of Donald Trump has for the rest of us. We can talk “de-centralization” and “distribution” and “democratization” along with peer-to-peer and end-to-end, but we are still at the mercy of giants.

Yes, there are work-arounds. The parler.com website, de-platformed along with Trump, is back up and, according to @VickerySec (Chris Vickery), “routing 100% of its user traffic through servers located within the Russian Federation.” Adds @AdamSculthorpe, “With a DDos-Guard IP, exactly as I predicted the day it went offline. DDoS Guard is the Russian equivalent of CloudFlare, and runs many shady sites. RiTM (Russia in the middle) is one way to think about it.” Encrypted services such as Signal and Telegram also provide ways for people to talk and be social. But those are also platforms, and we are at their mercy too.

I bring all this up as a way of thinking out loud toward the talk I’ll be giving in a few hours (also see here), on the topic “Centralized vs. Decentralized.” Here’s the intro:

Centralised thinking is easy. Control sits on one place, everything comes home, there is a hub, the corporate office is where all the decisions are made and it is a power game.

Decentralised thinking is complex. TCP/IP and HTTP created a fully decentralised fabric for packet communication. No-one is in control. It is beautiful. Web3 decentralised ideology goes much further but we continually run into conflicts. We need to measure, we need to report, we need to justify, we need to find a model and due to regulation and law, there are liabilities.

However, we have to be doing both. We have to centralise some aspects and at the same time decentralise others. Whilst we hang onto an advertising model that provides services for free we have to have a centralised business model. Apple with its new OS is trying to break the tracking model and in doing so could free us from the barter of free, is that the plan which has nothing to do with privacy or are the ultimate control freaks. But the new distributed model means more risks fall on the creators as the aggregators control the channels and access to a model. Is our love for free preventing us from seeing the value in truly distributed or are those who need control creating artefacts that keep us from achieving our dreams? Is distributed even possible with liability laws and a need to justify what we did to add value today?

So here is what I think I’ll say.

First, we need to respect the decentralized nature of humanity. All of us are different, by design. We look, sound, think and feel different, as separate human beings. As I say in How we save the world, “no being is more smart, resourceful or original than a human one. Again, by design. Even identical twins, with identical DNA from a single sperm+egg, can be as different as two primary colors. (Examples: Laverne Cox and M. Lamar; Nicole and Jonas Maines.)”

This simple fact of our distributed souls and talents has had scant respect from the centralized systems of the digital world, which would rather lead than follow us, and rather guess about us than understand us. That’s partly because too many of them have become dependent on surveillance-based personalized advertising (which is awful in ways I’ve detailed in 136 posts, essays and articles compiled here). But it’s mostly because they’re centralized and can’t think or work outside their very old and square boxes.

Second, advertising, subscriptions and donations through the likes of (again, centralized) Patreon aren’t the only possible ways to support a site or a service. Those are industrial-age conventions leveraged in the early decades of the digital age. There are other approaches we can implement as well, now that the pendulum has started to swing back from the centralized extreme. For example, the fully decentralized EmanciPay. A bunch of us came up with that one at ProjectVRM way back in 2009. What makes it decentralized is that the choice of what to pay, and how, is up to the customer. (No, it doesn’t have to be scary.) Which brings me to—

Third, we need to start thinking about solving business problems, market problems, technical problems, from our side. Here is how Customer Commons puts it:

There is … no shortage of business problems that can only be solved from the customer’s side. Here are a few examples:

  1. Identity. Logins and passwords are burdensome leftovers from the last millennium. There should be (and already are) better ways to identify ourselves, and to reveal to others only what we need them to know. Working on this challenge is the SSI—Self-Sovereign Identity—movement. The solution here for individuals is tools of their own that scale.
  2. Subscriptions. Nearly all subscriptions are pains in the butt. “Deals” can be deceiving, full of conditions and changes that come without warning. New customers often get better deals than loyal customers. And there are no standard ways for customers to keep track of when subscriptions run out, need renewal, or change. The only way this can be normalized is from the customers’ side.
  3. Terms and conditions. In the world today, nearly all of these are ones companies proffer; and we have little or no choice about agreeing to them. Worse, in nearly all cases, the record of agreement is on the company’s side. Oh, and since the GDPR came along in Europe and the CCPA in California, entering a website has turned into an ordeal typically requiring “consent” to privacy violations the laws were meant to stop. Or worse, agreeing that a site or a service provider spying on us is a “legitimate interest.”
  4. Payments. For demand and supply to be truly balanced, and for customers to operate at full agency in an open marketplace (which the Internet was designed to be), customers should have their own pricing gun: a way to signal—and actually pay willing sellers—as much as they like, however they like, for whatever they like, on their own terms. There is already a design for that, called EmanciPay.
  5. Internet of Things. What we have so far are the Apple of things, the Amazon of things, the Google of things, the Samsung of things, the Sonos of things, and so on—all silo’d in separate systems we don’t control. Things we own on the Internet should be our things. We should be able to control them, as independent customers, as we do with our computers and mobile devices. (Also, by the way, things don’t need to be intelligent or connected to belong to the Internet of Things. They can be, or have, picos.)
  6. Loyalty. All loyalty programs are gimmicks, and coercive. True loyalty is worth far more to companies than the coerced kind, and only customers are in position to truly and fully express it. We should have our own loyalty programs, to which companies are members, rather than the reverse.
  7. Privacy. We’ve had privacy tech in the physical world since the inventions of clothing, shelter, locks, doors, shades, shutters, and other ways to limit what others can see or hear—and to signal to others what’s okay and what’s not. Instead, all we have online are unenforced promises by others not to watch our naked selves, or not to report what they see to others. Or worse, coerced urgings to “accept” spying on us and distributing harvested information about us to parties unknown, with no record of what we’ve agreed to.
  8. Customer service. There are no standard ways to call for service yet, or to get it. And there should be.
  9. Advertising. Our main problem with advertising today is tracking, which is failing because it doesn’t work. (Some history: ad blocking has been around since 2004, but it took off in 2013, when the advertising and publishing industries gave the middle finger to Do Not Track, which was never more than a polite request in one’s browser not to be tracked off a site. By 2015, ad blocking alone was the biggest boycott in world history. And in 2018 and 2019 we got the GDPR and the CCPA, two laws meant to thwart tracking and unwanted data collection, and which likely wouldn’t have happened if we hadn’t been given that finger.) We can solve that problem from the customer side with intentcasting. This is where we advertise to the marketplace what we want, without risk that our personal data will be misused. (Here is a list of intentcasting providers on the ProjectVRM Development Work list, and a hypothetical sketch of an intentcast follows this list.)
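Here, as promised, is a hypothetical sketch of what an intentcast might look like: a customer broadcasting demand to the marketplace without surrendering personal data. The field names here are invented for illustration; for real implementations, see the intentcasting providers on the ProjectVRM list.

```python
# Hypothetical intentcast payload (field names invented for
# illustration). The customer advertises what they want; sellers
# respond. No tracking, no profiling, no personal data up front.
import json

intentcast = {
    "want": "mid-size rental car",
    "where": "SBA airport",
    "when": {"from": "2021-07-01", "to": "2021-07-08"},
    "max_price_usd": 300,
    "terms": "NoStalking",     # e.g. Customer Commons' #NoStalking term
    "respond_via": "broker",   # identity revealed only after a match
}

print(json.dumps(intentcast, indent=2))
```

Note the direction of flow: demand informs supply, on the customer’s terms, which is the opposite of surveillance-based guessing.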

We already have examples of personal solutions working at scale: the Internet, the Web, email and telephony. Each provides single, simple and standards-based ways any of us can scale how we deal with others—across countless companies, organizations and services. And they work for those companies as well.

Other solutions, however, are missing—such as ones that solve the nine problems listed above.

They’re missing for the best of all possible reasons: it’s still early. Digital living is still new—decades old at most. And it’s sure to persist for many decades, centuries or millennia to come.

They’re also missing because businesses typically think all solutions to business problems are ones for them. Thinking about customers solving business problems is outside that box.

But much work is already happening outside that box. And there already exist standards and code for building many customer-side solutions to problems shared with businesses. Yes, they are not yet as numerous or as good as we need; but there are enough to get started.

A lot of levers there.

For those of you attending this event, I’ll talk with you shortly. For the rest of you, I’ll let you know how it goes.

When some big outfit with a vested interest in violating your privacy says they are only trying to save small business, grab your wallet. Because the game they’re playing is misdirection away from what they really want.

The most recent case in point is Facebook, which ironically holds the world’s largest database on individual human interests while also failing to understand jack shit about personal boundaries.

This became clear when Facebook placed the ad above and others like it in major publications recently, and mostly made bad news for itself. We saw the same kind of thing in early 2014, when the IAB ran a similar campaign against Mozilla, using ads like this:

That one was to oppose Mozilla’s decision to turn on Do Not Track by default in its Firefox browser. Never mind that Do Not Track was never more than a polite request that websites not infect one’s browser with a beacon, like those worn by marked animals, so one can be tracked away from the website. Had the advertising industry and its dependents in publishing simply listened to that signal, and respected it, we might never have had the GDPR or the CCPA, both of which are still failing at the same mission. (But, credit where due: those laws have at least forced every website whose lawyers are scared of violating their letter—but never their spirit—to put up insincere and misleading opt-out popovers.)
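To see just how polite that request is, consider that Do Not Track is nothing but a single HTTP header the browser sends, which every site is free to read or ignore. A minimal sketch, from the browser’s side (the requests library is used here only for illustration):

```python
# Do Not Track is nothing but a single request header. Any HTTP
# client can send it, and any site can ignore it; nothing happens
# if the server never looks.
import requests

resp = requests.get(
    "https://example.com/",
    headers={"DNT": "1"},  # this header is the entirety of Do Not Track
)
print(resp.status_code)  # the site's behavior need not change at all
```

That is the whole mechanism. Whether anything happens on the other end is up to the site, which is exactly the problem.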

The IAB succeeded in its campaign against Mozilla and Do Not Track; but the victory was Pyrrhic, because users decided to install ad blockers instead, which by 2015 had become the largest boycott in human history. Plus a raft of privacy laws, with more in the pipeline.

We also got Apple on our side. That’s good, but not good enough.

What we need are working tools of our own. Examples: Global Privacy Control (and all the browsers and add-ons mentioned there), Customer Commons’ #NoStalking term, the IEEE’s P7012 – Standard for Machine Readable Personal Privacy Terms, and other approaches to solving business problems from our side—rather than always from the corporate one.
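Mechanically, Global Privacy Control works the same way Do Not Track did: it is a header (Sec-GPC: 1) the browser or extension sends. The difference is that GPC is designed to carry legal force under laws like the CCPA, where it can count as a binding do-not-sell request rather than a polite ask. Here is a minimal sketch of the receiving side (Flask used only for illustration):

```python
# Global Privacy Control arrives as the header "Sec-GPC: 1".
# Mechanically it is another ask, but under the CCPA a site that
# sells personal data may be legally required to honor it.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    if request.headers.get("Sec-GPC") == "1":
        # Treat this as a do-not-sell signal, not a suggestion.
        return "Opt-out of sale recorded."
    return "No GPC signal received."

if __name__ == "__main__":
    app.run()
```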

With those moves, we’ll win.

Because if only Apple wins, we still lose.

Dammit, it’s still about what The Cluetrain Manifesto said in the first place, in this “one clue” published almost 21 years ago:

we are not seats or eyeballs or end users or consumers.
we are human beings — and our reach exceeds your grasp.
deal with it.

We have to make them deal. All of them. Not just Apple. We need code, protocols and standards, and not just regulations.

All the projects linked to above can use some help, plus others I’ll list here too if you write to me with them. (Comments here only work for Harvard email addresses, alas. I’m doc at searls dot com.)

December 10, 2020: This matter has been settled now, meaning Flickr appears not to be in trouble, and my account due for renewal will be automatically renewed. I’ve appended what settled the matter to the bottom of this post. Note that it also raises another question, about subscriptions. — Doc

I have two Flickr accounts, named Doc Searls and Nfrastructure. One has 73,355 photos, and the other 3,469. They each cost $60/year to maintain as pro accounts. They’ve both renewed automatically in the past; and the first one is already renewed, which I can tell because it says “Your plan will automatically renew on March 20, 2022.”

The second one, however… I dunno. Because, while my Account page says “Your plan will automatically renew on December 13, 2020,” I just got emails for both accounts saying, “This email is to confirm that we have stopped automatic billing for your subscription. Your subscription will continue to be active until the expiration date listed below. At that time, you will have to manually renew or your subscription will be cancelled.” The dates match the two above. At the bottom of each, in small print, it says “Digital River Inc. is the authorized reseller and merchant of the products and services offered within this store. Privacy Policy Terms of Sale Your California Privacy Rights.”

Hmmm. The Digital River link goes here, which appears to be in Ireland. A look at the email’s source shows the mail server is one in Kansas, and the Flickr.com addressing doesn’t look spoofed. So, it doesn’t look too scammy to me. Meaning I’m not sure what the scam is. Yet. If there is one.

Meanwhile, I do need to renew the subscription, and the risk of not renewing it is years of contributions (captions, notes, comments) out the window.

So I went to “Manage your Pro subscription” on the second one (which has four days left to expiration), and got this under “Update your Flickr Pro subscription information”:

Plan changes are temporarily disabled. Please contact support for prompt assistance.

Cancel your subscription

The Cancel line is a link. I won’t click on it.

Now, I have never heard of a company depending on automatic subscription renewals switching from those to the manual kind. Nor have I heard of a subscription-dependent company sending out notices like these while the renewal function is disabled.

I would like to contact customer support; but there is no link for that on my account page. In fact, the words “customer” and “support” don’t appear there. “Help” does, however, and goes to https://help.flickr.com/, where I need to fill out a form. This I did, explaining,

I am trying to renew manually, but I get “Plan changes are temporarily disabled. Please contact support for prompt assistance.” So here I am. Please reach out. This subscription expires in four days, and I don’t want to lose the photos or the account. I’m [email address] for this account (I have another as well, which doesn’t renew until 2022), my phone is 805-705-9666, and my twitter is @dsearls. Thanks!

The robot replied,

Thanks for your message – you’ll get a reply from a Flickr Support Hero soon. If you don’t receive an automated message from Flickr confirming we received your message (including checking your spam folders), please make sure you provided a valid and active email. Thanks for your patience and we look forward to helping you!

Me too.

Meanwhile, I am wondering if Flickr is in trouble again.

I wondered about this in 2011 and again in 2016 (in my most-read Medium post, ever). Those were two of the (feels like many) times Flickr appeared to be on the brink. And I have been glad SmugMug took over the Flickr show in 2018. (I’m a paying SmugMug customer as well.) But this kind of thing is strange and has me worried. Should I be?

[Later, on December 10…]

Heard from Flickr this morning, with this:

Hi Doc,

When we migrated your account to Stripe, we had to cancel your subscription on Digital River. The email you received was just a notice of this event. I apologize for the confusion.

Just to confirm, there is no action needed at this time. You have an active Pro subscription in good standing and due for renewal on an annual term on December 14th, 2020.

To answer your initial question, since your account has been migrated to Stripe, while you can update your payment information, changes to subscription plans are temporarily unavailable. We expect this functionality to be restored soon.

I appreciate your patience and hope this helps.

For more information, please consult our FAQ here: https://help.flickr.com/faq-for-flickr-members-about-our-payment-processor-migration-SyN1cazsw

Before this issue came up, I hadn’t heard of Digital River or Stripe. Seems they are both “payment gateway” services (at least according to Finances Online). If you look down the list of what these companies can do, other than payment processing alone—merchandising, promotions, channel partner management, dispute handling, cross-border payment optimization, in-app solutions, risk management, email services, and integrations with dozens of different tools, products and extensions from the likes of Visa, MasterCard, Sage and many other companies with more obscure brand names—you can understand how a screw-up like this one can happen when moving from one provider to another.

Now the question for me is whether subscription systems really have to be this complex.

(Comments here only work for Harvard people; so if you’re not one of those, please reply elsewhere, such as on Twitter, where I’m @dsearls.)

The goal here is to obsolesce this brilliant poster by Despair.com:

I got launched on that path a couple months ago, when I got this email from The_New_Yorker at e-mail.condenast.com:

Why did they “need” a “confirmation” to a subscription which, best I could recall, was last renewed early this year?

So I looked at the links.

The “Renew,” “Confirmation Needed” and “Discounted Subscription” links all go to a page with a URL that begins with https://subscriptions.newyorker.com/pubs…, followed by a lot of tracking cruft. Here’s a screen shot of that one, cut short of where one fills in a credit card number. Note the price:

I was sure I had been paying $80-something per year, for years. As I also recalled, this was a price one could only obtain by calling the 800 number at NewYorker.com.

Or somewhere. After digging around, I found it at https://w1.buysub.com/pubs/N3/NYR/accoun…, which is where the link to Customer Care under My Account on the NewYorker website goes. It also required yet another login.

So, when I told the representative at the call center that I’d rather not “confirm” a year for a “discount” that probably wasn’t, she said I could renew for the $89.99 I had paid in the past, and that the deal would be good through February of 2022. I said fine, let’s do that. So I gave her my credit card, said this was way too complicated, and added that a single simple subscription price would be better. She replied, “Never gonna happen.” Let’s repeat that:

Never gonna happen.

Then I got this by email:

This appeared to confirm the subscription I already had. To see if that was the case, I went back to the buysub.com website and looked under the Account Summary tab, where it said this:

I think this means that I last renewed on February 3 of this year, and what I did on the phone in August was commit to paying $89.99/year until February 10 of 2022.

If that’s what happened, all my call did was extend my existing subscription. Which was fine, but why require a phone call for that?

And WTF was that “Account Confirmation Required” email about? I assume it was bait to switch existing subscribers into paying $50 more per year.

Then there was this, at the bottom of the Account summary page:

This might explain why I stopped getting Vanity Fair, which I suppose I should still be getting.

So I clicked on “Reactivate” and got a login page where the login I had used to get this far didn’t work.

After other failed efforts that I neglected to write down, I decided to go back to the New Yorker site and work my way back through two logins to the same page, and then click Reactivate one more time. Voila!

So now I’ve got one page that tells me I’m good to March 2021 next to a link that takes me to another page that says I ordered 12 issues last December and I can “start” a new subscription for $15 that would begin nine months ago. This is how one “reactivates” a subscription? OMFG.

I’m also not going into the hell of moving the print subscription back and forth between the two places where I live. Nor will I bother now, in October, to ask why I haven’t seen another copy of Vanity Fair. (Maybe they’re going to the other place. Maybe not. I don’t know, and I’m too weary to try finding out.)

I want to be clear here that I am not sharing this to complain. In fact, I don’t want The New Yorker, Vanity Fair, Wired, Condé Nast (their parent company) or buysub.com to do a damn thing. They’re all FUBAR. By design. (Bonus link.)

Nor do I want any action out of Spectrum, SiriusXM, Dish Network or the other subscription-based whatevers whose customer disservice systems have recently soaked up many hours of my life.

See, with too many subscription systems (especially ones for periodicals), FUBAR is the norm. A matter of course. Pro forma. Entrenched. A box outside of which nobody making, managing or working in those systems can think.

This is why, when an alien idea appears, for example from a loyal customer just wanting a single and simple damn price, the response is “Never gonna happen.”

This is also why the subscription fecosystem can only be turned into an ecosystem from the outside. Our side. The subscribers’ side.

I’ll explain how at Customer Commons, which we created for exactly that purpose. Stay tuned for that.


Two exceptions are Consumer Reports and The Sun.


Remember the dot com boom?

Doesn’t matter if you don’t. What does matter is that it ended. All business manias do.

That’s why we can expect the “platform economy” and “surveillance capitalism” to end. Sure, it’s hard to imagine that when we’re in the midst of the mania, but the end will come.

When it does, we can have a “privacy debate.” Meanwhile, there isn’t one. In fact there can’t be one, because we don’t have privacy in the online world.

We do have privacy in the offline world, and we’ve had it ever since we invented clothing, doors, locks and norms for signaling what’s okay and what’s not okay in respect to our personal spaces, possessions and information.

That we hardly have the equivalent in the networked world doesn’t mean we won’t. Or that we can’t. The Internet in its current form was only born in the mid-’90s. In the history of business and culture, that’s a blip.

Really, it’s still early.

So, the fact that websites, network services, phone companies, platforms, publishers, advertisers and governments violate our privacy with wanton disregard for it doesn’t mean we can’t ever stop them. It means we haven’t done it yet, because we don’t have the tech for it. (Sure, some wizards do, but muggles don’t. And most of us are muggles.)

And, since we don’t have privacy tech yet, we lack the simple norms that grow around technologies that give us ways to signal our privacy preferences. We’ll get those when we have the digital equivalents of buttons, zippers, locks, shades, curtains, door knockers and bells.

This is what many of us have been working on at ProjectVRM, Customer Commons, the Me2B Alliance, MyData and other organizations whose mission is getting each of us the tech we need to operate at full agency when dealing with the companies and governments of the world.

I bring all this up as a “Yes, and” to a piece in Salon by Michael Corn (@MichaelAlanCorn), CISO of UCSD, titled We’re losing the war against surveillance capitalism because we let Big Tech frame the debate. Subtitle: “It’s too late to conserve our privacy — but to preserve what’s left, we must stop defining people as commodities.”

Indeed. And we do need the “optimism and activism” he calls for. In the activism category is code. Specifically, code that gives us the digital equivalents of buttons, zippers, locks, shades, curtains, door knockers and bells.

Some of those are in the works. Others are not—yet. But they will be. Inevitably. Especially now that it’s becoming clearer every day that we’ll never get privacy tech from any system with a financial interest in violating privacy*. Or from laws that fail at protecting it.

If you want to help, join one or more of the efforts in the links four paragraphs up. And, if you’re a developer already on the case, let us know how we can help get your solutions into each and all of our digital hands.

For guidance, this privacy manifesto should help. Thanks.


*Especially publishers such as Salon, which Privacy Badger tells me tries to pump 20 potential trackers into my browser while I read the essay cited above. In fact, according to WhoTracksMe.com, Salon tends to run 204 tracking requests per page load, and the vast majority of those are for tracking-based advertising purposes. And Salon is hardly unique. Despite the best intentions of the GDPR and the CCPA, surveillance capitalism remains fully defaulted on the commercial Web—and will continue to remain entrenched until we have the privacy tech we’ve needed from the start.

For more on all this, see People vs. Adtech.

[This is the third of four posts. The last of those, Zoom’s new privacy policy, visits the company’s positive response to input such as mine here. So you might want to start with that post (because it’s the latest) and look at the other three, including this one, after that.]

I really don’t want to bust Zoom. No tech company on Earth is doing more to keep civilization working at a time when it could so easily fall apart. Zoom does that by providing an exceptionally solid, reliable, friendly, flexible, useful (and even fun!) way for people to be present with each other, regardless of distance. No wonder Zoom is now to conferencing what Google is to search. Meaning: it’s a verb. Case in point: between the last sentence and this one, a friend here in town sent me an email that began with this:

That’s a screen shot.

But Zoom also has problems, and I’ve spent two posts, so far, busting them for one of those problems: their apparent lack of commitment to personal privacy:

  1. Zoom needs to cleanup its privacy act
  2. More on Zoom and privacy

With this third post, I’d like to turn that around.

I’ll start with the email I got yesterday from a person at a company engaged by Zoom for (seems to me) reputation management, asking me to update my posts based on the “facts” (his word) in this statement:

Zoom takes its users’ privacy extremely seriously, and does not mine user data or sell user data of any kind to anyone. Like most software companies, we use third-party advertising service providers (like Google) for marketing purposes: to deliver tailored ads to our users about Zoom products the users may find interesting. (For example, if you visit our website, later on, depending on your cookie preferences, you may see an ad from Zoom reminding you of all the amazing features that Zoom has to offer). However, this only pertains to your activity on our Zoom.us website. The Zoom services do not contain advertising cookies. No data regarding user activity on the Zoom platform – including video, audio and chat content – is ever used for advertising purposes. If you do not want to receive targeted ads about Zoom, simply click the “Cookie Preferences” link at the bottom of any page on the zoom.us site and adjust the slider to ‘Required Cookies.’

I don’t think this squares with what Zoom says in the “Does Zoom sell Personal Data?” section of its privacy policy (which I unpacked in my first post, and that Forbes, Consumer Reports and others have also flagged as problematic)—or with the choices provided in Zoom’s cookie settings, which list 70 (by my count) third parties whose involvement you can opt into or out of (by a set of options I unpacked in my second post). The logos in the image above are just 16 of those 70 parties, some of which include more than one domain.

Also, if all the ads shown to users are just “about Zoom,” why are those other companies in the picture at all? Specifically, under “About Cookies on This Site,” the slider is defaulted to allow all “functional cookies” and “advertising cookies,” the latter of which are “used by advertising companies to serve ads that are relevant to your interests.” Wouldn’t Zoom be in a better position to know your relevant (to Zoom) interests, than all those other companies?

More questions:

  1. Are those third parties “processors” under the GDPR, or “service providers” by the CCPA’s definition? (I’m not an authority on either, so I’m asking.)
  2. How do these third parties know what your interests are? (Presumably by tracking you, or by learning from others who do. But it would help to know more.)
  3. What data about you do those companies give to Zoom (or to each other, somehow) after you’ve been exposed to them on the Zoom site?
  4. What targeting intelligence do those companies bring with them to Zoom’s pages because you’re already carrying cookies from those companies, and those cookies can alert those companies (or others, for example through real time bidding auctions) to your presence on the Zoom site?
  5. If all Zoom wants to do is promote Zoom products to Zoom users (as that statement says), why bring in any of those companies?

Here is what I think is going on (and I welcome corrections): Because Zoom wants to comply with GDPR and CCPA, they’ve hired TrustArc to put that opt-out cookie gauntlet in front of users. They could just as easily have used Quantcast‘s system, or consentmanager‘s, or OneTrust‘s, or somebody else’s.

All those services are designed to give companies a way to obey the letter of privacy laws while violating their spirit. That spirit says stop tracking people unless they ask you to, consciously and deliberately. In other words, opting in, rather than opting out. Every time you click “Accept” to one of those cookie notices, you’ve just lost one more battle in a losing war for your privacy online.

I also assume that Zoom’s deal with TrustArc—and, by implication, all those 70 other parties listed in the cookie gauntlet—also requires that Zoom put a bunch of weasel-y jive in their privacy policy. Which looks suspicious as hell, because it is.

Zoom can fix all of this easily by just stopping it. Other companies—ones that depend on adtech (tracking-based advertising)—don’t have that luxury. But Zoom does.

If we take Zoom at its word (in that paragraph they sent me), they aren’t interested in being part of the adtech fecosystem. They just want help in aiming promotional ads for their own services, on their own site.

Three things about that:

  1. Neither the Zoom site, nor the possible uses of it, are so complicated that they need aiming help from those third parties.
  2. Zoom is the world’s leading sellers’ market right now, meaning they hardly need to advertise at all.
  3. Being in adtech’s fecosystem raises huge fears about what Zoom and those third parties might be doing where people actually use Zoom most of the time: in its app. Again, Consumer Reports, Forbes and others have assumed, as have I, that the company’s embrace of adtech in its privacy policy means that the same privacy exposures exist in the app (where they are also easier to hide).

By severing its ties with adtech, Zoom can start restoring people’s faith in its commitment to personal privacy.

There’s a helpful model for this: Apple’s privacy policy. Zoom is in a position to have a policy like that one because, like Apple, Zoom doesn’t need to be in the advertising business. In fact, Zoom could follow Apple’s footprints out of the ad business.

And then Zoom could do Apple one better, by participating in work going on already to put people in charge of their own privacy online, at scale. In my last post, I named two organizations doing that work. Four more are the Me2B Alliance, Kantara, ProjectVRM, and MyData.

I’d be glad to help with that too. If anyone at Zoom is interested, contact me directly this time. Thanks.

Facial recognition by machines is out of control. Meaning our control. As individuals, and as a society.

Thanks to ubiquitous surveillance systems, including the ones in our own phones, we can no longer assume we are anonymous in public places or private in private ones.

This became especially clear a few weeks ago when Kashmir Hill (@kashhill) reported in the New York Times that a company called Clearview.ai “invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.”

If your face has ever appeared anywhere online, it’s safe to assume that you are not faceless to any of these systems. Clearview, Kashmir says, has “a database of more than three billion images” from “Facebook, YouTube, Venmo and millions of other websites” and “goes far beyond anything ever constructed by the United States government or Silicon Valley giants.”

Among law enforcement communities, only New Jersey’s has started to back off on using Clearview.

Worse, Clearview is just one company. Laws also take years to catch up with developments in facial recognition, or to get ahead of them, if they ever can. And let’s face it: government interests are highly conflicted here. Law enforcement and intelligence agencies’ need to know all they can is at extreme odds with our need, as human beings, to assume we enjoy at least some freedom from being known by God-knows-what, everywhere we go.

Personal privacy is the heart of civilized life, and beats strongest in democratic societies. It’s not up for “debate” between companies and governments, or political factions. Loss of privacy is a problem that affects each of us, and calls for action by each of us as well.

A generation ago, when the Internet was still new to us, four guys (one of whom was me) nailed a document called The Cluetrain Manifesto to a door on the Web. It said,

we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.

Since then their grasp has exceeded our reach. And with facial recognition they have gone too far.

Enough.

Now it’s time for our reach to exceed their grasp.

Now it’s time, finally, to make them deal with it.

I see five ways, so far. I’m sure y’all will think of other and better ones. The Internet is good for that.

First is to use an image like the one above (preferably with a better design) as your avatar, favicon, or other facial expression. (Like I just did for @dsearls on Twitter.) Here’s a favicon we can all use until a better one comes along:

Second, sign the Stop facial recognition by surveillance systems petition I just put up at that link. Two hashtags:

  • #GOOMF, for Get Out Of My Face
  • #Faceless

Third is to stop blaming and complaining. That’s too easy, tends to go nowhere and wastes energy. Instead,

Fourth, develop useful and constructive ideas toward what we can do—each of us, alone and together—to secure, protect and signal our privacy needs and intentions in the world, in ways others can recognize and respect. We have those in the natural world. We don’t yet in the digital one. So let’s invent them.

Fifth is to develop the policies we need to stop the spread of privacy-violating technologies and practices, and to foster development of technologies that enlarge our agency in the digital world—and not just to address the wrongs being committed against us. (Which is all most privacy laws actually do.)

Here’s the popover that greets visitors on arrival at Rolling Stone‘s website:

Our Privacy Policy has been revised as of January 1, 2020. This policy outlines how we use your information. By using our site and products, you are agreeing to the policy.

That policy is supplied by Rolling Stone’s parent (PMC) and weighs more than 10,000 words. In it the word “advertising” appears 68 times. Adjectives modifying it include “targeted,” “personalized,” “tailored,” “cookie-based,” “behavioral” and “interest-based.” All of that is made possible by, among other things—

Information we collect automatically:

Device information and identifiers such as IP address; browser type and language; operating system; platform type; device type; software and hardware attributes; and unique device, advertising, and app identifiers

Internet network and device activity data such as information about files you download, domain names, landing pages, browsing activity, content or ads viewed and clicked, dates and times of access, pages viewed, forms you complete or partially complete, search terms, uploads or downloads, the URL that referred you to our Services, the web sites you visit after this web site; if you share our content to social media platforms; and other web usage activity and data logged by our web servers, whether you open an email and your interaction with email content, access times, error logs, and other similar information. See “Cookies and Other Tracking Technologies” below for more information about how we collect and use this information.

Geolocation information such as city, state and ZIP code associated with your IP address or derived through Wi-Fi triangulation; and precise geolocation information from GPS-based functionality on your mobile devices, with your permission in accordance with your mobile device settings.

The “How We Use the Information We Collect” section says they will—

Personalize your experience to Provide the Services, for example to:

  • Customize certain features of the Services,
  • Deliver relevant content and to provide you with an enhanced experience based on your activities and interests
  • Send you personalized newsletters, surveys, and information about products, services and promotions offered by us, our partners, and other organizations with which we work
  • Customize the advertising on the Services based on your activities and interests
  • Create and update inferences about you and audience segments that can be used for targeted advertising and marketing on the Services, third party services and platforms, and mobile apps
  • Create profiles about you, including adding and combining information we obtain from third parties, which may be used for analytics, marketing, and advertising
  • Conduct cross-device tracking by using information such as IP addresses and unique mobile device identifiers to identify the same unique users across multiple browsers or devices (such as smartphones or tablets), in order to save your preferences across devices and analyze usage of the Service.
  • using inferences about your preferences and interests for any and all of the above purposes

For a look at what Rolling Stone, PMC and their third parties are up to, Privacy Badger’s browser extension “found 73 potential trackers on www.rollingstone.com”:

tagan.adlightning.com
 acdn.adnxs.com
 ib.adnxs.com
 cdn.adsafeprotected.com
 static.adsafeprotected.com
 d.agkn.com
 js.agkn.com
 c.amazon-adsystem.com
 z-na.amazon-adsystem.com
 display.apester.com
 events.apester.com
 static.apester.com
 as-sec.casalemedia.com
 ping.chartbeat.net
 static.chartbeat.com
 quantcast.mgr.consensu.org
 script.crazyegg.com
 dc8xl0ndzn2cb.cloudfront.net
cdn.digitru.st
 ad.doubleclick.net
 securepubads.g.doubleclick.net
 hbint.emxdgt.com
 connect.facebook.net
 adservice.google.com
 pagead2.googlesyndication.com
 www.googletagmanager.com
 www.gstatic.com
 static.hotjar.com
 imasdk.googleapis.com
 js-sec.indexww.com
 load.instinctiveads.com
 ssl.p.jwpcdn.com
 content.jwplatform.com
 ping-meta-prd.jwpltx.com
 prd.jwpltx.com
 assets-jpcust.jwpsrv.com
 g.jwpsrv.com
pixel.keywee.co
 beacon.krxd.net
 cdn.krxd.net
 consumer.krxd.net
 www.lightboxcdn.com
 widgets.outbrain.com
 cdn.permutive.com
 assets.pinterest.com
 openbid.pubmatic.com
 secure.quantserve.com
 cdn.roiq.ranker.com
 eus.rubiconproject.com
 fastlane.rubiconproject.com
 s3.amazonaws.com
 sb.scorecardresearch.com
 p.skimresources.com
 r.skimresources.com
 s.skimresources.com
 t.skimresources.com
launcher.spot.im
recirculation.spot.im
 js.spotx.tv
 search.spotxchange.com
 sync.search.spotxchange.com
 cc.swiftype.com
 s.swiftypecdn.com
 jwplayer.eb.tremorhub.com
 pbs.twimg.com
 cdn.syndication.twimg.com
 platform.twitter.com
 syndication.twitter.com
 mrb.upapi.net
 pixel.wp.com
 stats.wp.com
 www.youtube.com
 s.ytimg.com

This kind of shit is why we have the EU’s GDPR (General Data Protection Regulation) and California’s CCPA (California Consumer Privacy Act). (No, it’s not just because of Google and Facebook.) If publishers and the adtech industry (those third parties) hadn’t turned the commercial Web into a target-rich environment for suckage by data vampires, we’d never have had either law. (In fact, both laws are still new: the GDPR went into effect in May 2018 and the CCPA a few days ago.)

I’m in California, where the CCPA gives me the right to shake down the vampiretariat for all the information about me they’re harvesting, sharing, selling or giving away to or through those third parties.* But apparently Rolling Stone and PMC don’t care about that.

Others do, and I’ll visit some of those in later posts. Meanwhile I’ll let Rolling Stone and PMC stand as examples of bad acting by publishers that remains rampant, unstopped and almost entirely unpunished, even under these new laws.

I also suggest following and getting involved with the fight against the plague of data vampirism in the publishing world. These will help:

  1. Reading Don Marti’s blog, where he shares expert analysis and advice on the CCPA and related matters. Also People vs. Adtech, a compilation of my own writings on the topic, going back to 2008.
  2. Following what the browser makers are doing with tracking protection (alas, differently†). Shortcuts: Brave, Google’s Chrome, Ghostery’s Cliqz, Microsoft’s Edge, Epic, Mozilla’s Firefox.
  3. Following or joining communities working to introduce safe forms of nourishment for publishers and better habits for advertisers and their agencies. Those include Customer Commons, Me2B Alliance, MyData Global and ProjectVRM.

______________

*The bill (AB 375) begins,

The California Constitution grants a right of privacy. Existing law provides for the confidentiality of personal information in various contexts and requires a business or person that suffers a breach of security of computerized data that includes personal information, as defined, to disclose that breach, as specified.

This bill would enact the California Consumer Privacy Act of 2018. Beginning January 1, 2020, the bill would grant a consumer a right to request a business to disclose the categories and specific pieces of personal information that it collects about the consumer, the categories of sources from which that information is collected, the business purposes for collecting or selling the information, and the categories of 3rd parties with which the information is shared. The bill would require a business to make disclosures about the information and the purposes for which it is used. The bill would grant a consumer the right to request deletion of personal information and would require the business to delete upon receipt of a verified request, as specified. The bill would grant a consumer a right to request that a business that sells the consumer’s personal information, or discloses it for a business purpose, disclose the categories of information that it collects and categories of information and the identity of 3rd parties to which the information was sold or disclosed…

Don Marti has a draft letter one might submit to the brokers and advertisers who use all that personal data. (He also tweets a caution here.)

†This will be the subject of my next post.

Thoughts at #ID2020

I’m at the ID2020 (@ID2020) Summit in New York. The theme is “Rising to the Good ID Challenge.” My notes here are accumulating at the bottom, not the top. Okay, here goes…

At that last link it says, “The ID2020 Alliance is setting the course of digital ID through a multi-stakeholder partnership, ensuring digital ID is responsibly implemented and widely accessible.”

I find myself wondering if individuals are among the stakeholders.

There is also a manifesto. It says, among other things, “The ability to prove one’s identity is a fundamental and universal human right” and “We live in a digital era. Individuals need a trusted, verifiable way to prove who they are, both in the physical world and online.”

That’s good. I’d also want more than one way, which may be the implication here.

The first speaker is from Caribou Digital. What follows is from her talk.

“1. It’s about the user, not just the use case.”

Hmm… I believe identity needs to be about independent human beings, not just “users” of systems.

“2. Intermediaries are still critical.”

The focus here is on family and institutional intermediaries, especially in the less developed world. Which is fine; but people should not need intermediaries in all cases. If you tell someone your name, or give them a business card, no intermediary is involved. That same convention should be available online.

“3. It’s not just about an ‘ID.’ It’s not even about an identity system. It’s about an identification ecosystem.”

This is fine, but identification is about what systems do, not about what individuals do or have, and by itself it tends to exclude self-sovereign identity. Self-sovereign is how identity works in the physical world. Here we are nameless (literally, anonymous) to most others, and reveal information about who we are (business cards, student ID, drivers license) on an as-needed basis that obeys Kim Cameron’s Laws of Identity, notably “minimum disclosure for a constrained use,” “justifiable parties” and “personal control and consent.” (I sketch what minimum disclosure might look like just after these talk notes.)

“4. A human-centered, inclusive, respectful vision for the next stage of identification in a digital age.”

We need human-driven. Human-centered is something organizations do. I visited the difference long ago here and here.
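Since “minimum disclosure for a constrained use” can sound abstract, here is a small hypothetical sketch of my own in Python. Nothing in it comes from the summit or any real self-sovereign identity library; the wallet, attribute names, and present() helper are all invented for illustration.

```python
# Hypothetical sketch of Kim Cameron's "minimum disclosure for a
# constrained use." The wallet, the attribute names, and present()
# are illustrative inventions, not any real identity system's API.

WALLET = {
    "name": "Alice Example",
    "date_of_birth": "1990-05-01",
    "student_id": "S-12345",
    "drivers_license": "D-67890",
}

def present(wallet, requested, justified):
    """Disclose only attributes the relying party both asked for and
    can justify needing; everything else stays with the individual."""
    return {k: wallet[k] for k in requested if k in justified and k in wallet}

# A bar can justify needing proof of age, and nothing else:
print(present(WALLET, requested=["name", "date_of_birth"],
              justified={"date_of_birth"}))
# -> {'date_of_birth': '1990-05-01'}
```

The point is the shape of the transaction: the individual holds the attributes, and the relying party gets only what it can justify needing.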

That’s over and the first panel is on now. Most of it is inaudible where I sit. The topic now is self-sovereign and decentralized. The audience seems to be pushing that. @MatthewDavie just said something sensible, I think, but I don’t have a quote.


Here’s the ID2020 search on Twitter.

Background, at least on where I’m coming from: https://www.google.com/search?q=”doc+searls”+identity.

For the interested, @identitywoman, @windley and I (@dsearls) put on the Internet Identity Workshop, October 1-3 at the Computer History Museum in Silicon Valley. This one will be our 29th. (The first was in 2005 and there are two per year.) It’s an unconference: no keynotes or panels, just breakouts on topics attendees choose and lead. It’s the most consequential conference I know.

@MatthewDavie: “If we do this, and it doesn’t work with the current power players, we’re going to end up with a second-class system.” I suspect this makes sense, but I’m not sure what “this” is.

“Sovereign ownership of data” just came up from the audience. I think it’s possible for individuals to act in a self-sovereign way in sharing identity data, but not that this data is exclusively own-able. Some thoughts on that from Elizabeth Renieris (@HackyLawyER). Mine agree.

The second panel is on now. It’s mostly inaudible.

Now Dakota Gruener (@DakotaGruener), Executive Director of ID2020 is speaking. She’s telling a moving story about a homeless neighbor, Colin, who is denied services for lack of official ID(s).

New panel: Decentralization in National ID Programs.

Kim Cameron is on the panel now: “I spent thirty years building the world’s identity systems.” There were gasps. I yelled “It’s true.” He continued: “I’m now trying to rile up the world’s populations…”

John Jordan just made a point about how logins are a screwed up way to do things online and don’t map to what we know well and do in the everyday world. (I think that’s close. The sound system is dim at this end of the room.)

Kim just sourced my wife (who is here, and now deeper than I am in this identity stuff), adding that “people know something is wrong” when they mention shoes somewhere and then see ads for shoes online. “We have technology. We have consciousness. We have will. So let’s do something.”

John: “What we want is to be in control of our relationships. Those are ours. Those are decentralized… People are decentralized.”

Kim: “What it means is recognizing that identity is who we are. It begins with us… only we know the aggregate of these attributes. In daily life we reveal some of those attributes, but never the aggregate. We need a system that begins with the individual and recognizes that they are in control, and can choose what they reveal separately. We don’t want aggregates of ourselves to be everywhere. We need systems that recognize that, and are based on control by the individual, consent of the individual.”

“We do need assertions from people other than ourselves. The government can provide useful claims about a person. So can a university, or a bank. I can say somebody is a great guy. The identity fabric is all these claims.” Not quite verbatim, but close.
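To picture that fabric of claims, here is another hedged sketch of my own, not anything shown at the summit: several issuers each sign an assertion about the same person, and the person presents claims one at a time, never the aggregate. HMAC with shared secrets stands in for the public-key signatures and verifiable-credential formats a real system would use.

```python
# Illustrative sketch of an "identity fabric" of signed claims from
# multiple issuers. HMAC is a stand-in for public-key signatures;
# real systems use verifiable-credential formats and key discovery.

import hashlib
import hmac
import json

def sign_claim(issuer_key: bytes, claim: dict) -> dict:
    """An issuer signs a claim; the holder can later present claim + sig."""
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

# Different issuers assert different things about the same person:
government = sign_claim(b"gov-secret", {"subject": "alice", "over_18": True})
university = sign_claim(b"uni-secret", {"subject": "alice", "enrolled": True})
friend = sign_claim(b"friend-secret", {"subject": "alice", "great_person": True})

# The individual chooses which single claim to reveal, never the aggregate:
print(json.dumps(government, indent=2))
```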

John: “Personal data should never be presented in a non-cryptographic way.” Something like that.

Kim on the GDPR: “We have it because the population demanded it… what will happen is this vision of people in control of their identity, with the Internet becoming reliable and trustworthy rather than probabilistic (meaning you’re being guessed at) and less than fully useful. Let’s give people their own wallets, let them run their own lives, make up their own minds… the world of legislation will grow, and it will do that around the will of people. … they need an identity system based on individuals rather than institutions overstepping their bounds… and we will see conflicts around this, with both good and bad government interventions.”

John: “I’d like to see legislation that forbids companies from holding personal information they don’t have to.” (Not verbatim, but maybe close. Again, hard to hear.)

Kim: “The current identity systems of the world are collapsing… you will have major institutions switching over to these decentralized identity systems, not from altruism, but from liability.”

Elizabeth heard and tweeted about one of the things that were inaudible to me at this end of the room: “Thank you @LudaBujoreanu for addressing the deep disconnect between the reality on the ground of those without ID and the privileged POV from which many of these #digitalid systems are built @ID2020’s #id2020summit cc @WomeninID”

Next panel: “Cities Driving Innovation in Good ID.”

Scott David from the audience just talked about “turning troubles into problems,” and the challenge of doing that for individuals in an identity context.

This reminds me of what Gideon Lichfield said about the difference between debates and conflicts, which I expanded on a bit here. Our point was that some issues become locked in conflict with no real debate between sides. Scott’s distinction points toward a way out. Interesting. I’d like to know more.

Ken Banks tweets, “It’s an increasingly crowded space… #digitalidentity #ID2020”:

He adds, “Already lots of talk of putting people first. Hopefully the #digitalidentity community will deliver, and not fall into the trap of saying one thing and doing another, a common issue in the tech-for-development/#ICT4D sector. #ID2020 #GoodID”

Two tweets…

@Gavi: “Government representatives, tech experts & civil society will gather at #UNGA74 today to discuss the potential of #DigitalID. Biometric ID data could help us better monitor which children need to be vaccinated and when. #ID2020”

Now I can’t find the other one. It argued that there is a 2–3% error rate for biometrics.

For lunch David Carroll (@ProfCarroll) of The New School (@thenewschool) is talking. Title: A data quest: holding tech to account. He starred in The Great Hack, on Netflix.

He’s sourcing Democracy Disrupted, by the UK ICO: “the sortable, addressable… algorithmic democracy.” And: “Counterveillance: advertisers get all the privacy. We get none.”

“Parable of the great hack: data rights must extend to digital creditorship. Identity depends on it.”

200 million Americans have no access to data held about them by, for instance, Acxiom.

“A simple bill of data rights. Creditorship, objection, control, knowledge.” (Here’s something that’s not it, but interesting enough for me to flag for later reading.)

Now a panel moderated by Raffi Krikorian. Also Cameron Birge of Microsoft and the Emerson Collective, Karen Ottoni, Demora Compari, Matthew Yarger and Christine Leong. (Again the sound is weak at this end of the room. Not picking up much here.)

Okay, that’s it. I’ll say more after I pull some pix together and complete these public notes…

Well, I have the pix, but the upload thing here in WordPress gives me an “http error” when I try to upload them. And now I’ve gotta drive to Boston, so that’ll have to wait.
