problems


Just got a press release by email from David Rosen (@firstpersonpol) of the Public Citizen press office. The headline says “Historic Grindr Fine Shows Need for FTC Enforcement Action.” The same release is also a post in the news section of the Public Citizen website. This is it:

WASHINGTON, D.C. – The Norwegian Data Protection Agency today fined Grindr $11.7 million following a Jan. 2020 report that the dating app systematically violates users’ privacy. Public Citizen asked the Federal Trade Commission (FTC) and state attorneys general to investigate Grindr and other popular dating apps, but the agency has yet to take action. Burcu Kilic, digital rights program director for Public Citizen, released the following statement:

“Fining Grindr for systematic privacy violations is a historic decision under Europe’s GDPR (General Data Protection Regulation), and a strong signal to the AdTech ecosystem that business-as-usual is over. The question now is when the FTC will take similar action and bring U.S. regulatory enforcement in line with those in the rest of the world.

“Every day, millions of Americans share their most intimate personal details on apps like Grindr, upload personal photos, and reveal their sexual and religious identities. But these apps and online services spy on people, collect vast amounts of personal data and share it with third parties without people’s knowledge. We need to regulate them now, before it’s too late.”

The first link goes to Grindr is fined $11.7 million under European privacy law, by Natasha Singer (@NatashaNYT) and Aaron Krolik. (This @AaronKrolik? If so, hi. If not, sorry. This is a blog. I can edit it.) The second link goes to a Public Citizen post titled Popular Dating, Health Apps Violate Privacy.

In the emailed press release, the text is the same, but the links are not. The first is this:

https://default.salsalabs.org/T72ca980d-0c9b-45da-88fb-d8c1cf8716ac/25218e76-a235-4500-bc2b-d0f337c722d4

The second is this:

https://default.salsalabs.org/Tc66c3800-58c1-4083-bdd1-8e730c1c4221/25218e76-a235-4500-bc2b-d0f337c722d4

Why are they not simple and direct URLs? And who is salsalabs.org?

You won’t find anything at that link, or by running a whois on it. But I do see there is a salsalabs.com, which has “SmartEngagement Technology” that “combines CRM and nonprofit engagement software with embedded best practices, machine learning, and world-class education and support.” Since Public Citizen is a nonprofit, I suppose it’s getting some “smart engagement” of some kind with these links. PrivacyBadger tells me Salsalabs.com has 14 potential trackers, including static.ads.twitter.com.

My point here is that we, as clickers on those links, have at best a suspicion about what’s going on: perhaps that the link is being used to tell Public Citizen that we’ve clicked on the link… and likely also to help target us with messages of some sort. But we really don’t know.
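For what it’s worth, the structure of the two wrapped links is itself suggestive. Here is a quick sketch comparing their path segments; the inference about what each token means is mine, not anything Salsa Labs documents:

```python
from urllib.parse import urlparse

# Compare the path segments of the two emailed links. My guess (and it is
# only a guess; the URL scheme is undocumented) is that the differing token
# identifies the destination and the shared one identifies the recipient.
def link_tokens(url: str) -> list[str]:
    """Return the non-empty path segments of a URL."""
    return [seg for seg in urlparse(url).path.split("/") if seg]

first = link_tokens("https://default.salsalabs.org/T72ca980d-0c9b-45da-88fb-d8c1cf8716ac/25218e76-a235-4500-bc2b-d0f337c722d4")
second = link_tokens("https://default.salsalabs.org/Tc66c3800-58c1-4083-bdd1-8e730c1c4221/25218e76-a235-4500-bc2b-d0f337c722d4")

assert first[0] != second[0]   # differs per link: likely the destination
assert first[1] == second[1]   # shared across both links: likely the recipient
```

Both links end in the same second token while the first token differs per destination, which is consistent with a per-recipient identifier plus a per-link one. Again: a guess, not a finding.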

And, speaking of not knowing, Natasha and Aaron’s New York Times story begins with this:

The Norwegian Data Protection Authority said on Monday that it would fine Grindr, the world’s most popular gay dating app, 100 million Norwegian kroner, or about $11.7 million, for illegally disclosing private details about its users to advertising companies.

The agency said the app had transmitted users’ precise locations, user-tracking codes and the app’s name to at least five advertising companies, essentially tagging individuals as L.G.B.T.Q. without obtaining their explicit consent, in violation of European data protection law. Grindr shared users’ private details with, among other companies, MoPub, Twitter’s mobile advertising platform, which may in turn share data with more than 100 partners, according to the agency’s ruling.

Before this, I had never heard of MoPub. In fact, I had always assumed that Twitter’s privacy policy either limited or forbade the company from leaking out personal information to advertisers or other entities. Here’s how its Private Information Policy Overview begins:

You may not publish or post other people’s private information without their express authorization and permission. We also prohibit threatening to expose private information or incentivizing others to do so.

Sharing someone’s private information online without their permission, sometimes called doxxing, is a breach of their privacy and of the Twitter Rules. Sharing private information can pose serious safety and security risks for those affected and can lead to physical, emotional, and financial hardship.

On the MoPub site, however, it says this:

MoPub, a Twitter company, provides monetization solutions for mobile app publishers and developers around the globe.

Our flexible network mediation solution, leading mobile programmatic exchange, and years of expertise in mobile app advertising mean publishers trust us to help them maximize their ad revenue and control their user experience.

The Norwegian DPA apparently finds a conflict between the former and the latter—or at least in the way the latter was used by Grindr (since they didn’t fine Twitter).

To be fair, Grindr and Twitter may not agree with the Norwegian DPA. Regardless of their opinion, however, by this point in history we should have no faith that any company will protect our privacy online. Violating personal privacy is just too easy to do, to rationalize, and to make money at.

To start truly facing this problem, we need to start with a simple fact: If your privacy is in the hands of others alone, you don’t have any. Getting promises from others not to stare at your naked self isn’t the same as clothing. Getting promises not to walk into your house or look in your windows is not the same as having locks and curtains.

In the absence of personal clothing and shelter online, or working ways to signal intentions about one’s privacy, the hands of others alone is all we’ve got. And it doesn’t work. Nor do privacy laws, especially when enforcement is still so rare and scattered.

Really, to potential violators like Grindr and Twitter/MoPub, enforcement actions like this one by the Norwegian DPA are at most a little discouraging. The effect on our experience of exposure is still nil. We are exposed everywhere, all the time, and we know it. At best we just hope nothing bad happens.

The only way to fix this problem is with the digital equivalent of clothing, locks, curtains, ways to signal what’s okay and what’s not—and to get firm agreements from others about how our privacy will be respected.

At Customer Commons, we’re starting with signaling, specifically with first party terms that you and I can proffer and sites and services can accept.

The first is called P2B1, aka #NoStalking. It says “Just give me ads not based on tracking me.” It’s a term any browser (or other tool) can proffer and any site or service can accept—and any privacy-respecting website or service should welcome.

Making this kind of agreement work is also being addressed by IEEE P7012, a working group on machine-readable personal privacy terms.
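To make the idea concrete, here is one hypothetical way the #NoStalking term could be rendered machine-readably. P7012 had not published a wire format when this was written, so the JSON fields and header name below are illustrative only:

```python
import json

# A hypothetical machine-readable rendering of Customer Commons' P2B1
# (#NoStalking) term. The field names and the header name are my own
# illustrations, not part of any published standard.
no_stalking = {
    "term": "P2B1",
    "alias": "#NoStalking",
    "proffer": "Just give me ads not based on tracking me.",
    "proffered_by": "first party (the individual)",
}

def proffer_header(term: dict) -> tuple[str, str]:
    """Serialize a term as a request header a browser or other tool could send."""
    return ("P7012-Term", json.dumps(term, separators=(",", ":")))
```

The point of the sketch: the term travels with the request, the individual is the first party proffering it, and a site accepting it has a record of exactly what was agreed.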

Now we’re looking for sites and services willing to accept those terms. How about it, Twitter, New York Times, Grindr and Public Citizen? Or anybody.

DM us at @CustomerCommons and we’ll get going on it.

 

“Give me a lever long enough and a fulcrum on which to place it, and I shall move the world,” Archimedes is said to have said.

For almost all of the last four years, Donald Trump was one hell of an Archimedes. With the U.S. presidency as his lever and Twitter as his fulcrum, the 45th President leveraged an endless stream of news-making utterances into a massive following and near-absolute domination of news coverage, worldwide. It was an amazing show, the like of which we may never see again.

Big as it was, that show ended on January 8, when Twitter terminated the @RealDonaldTrump account. Almost immediately after that, Trump was “de-platformed” from all these other services as well: PayPal, Reddit, Shopify, Snapchat, Discord, Amazon, Twitch, Facebook, TikTok, Google, Apple, YouTube and Instagram. That’s a lot of fulcrums to lose.

What makes them fulcrums is their size. All are big, and all are centralized: run by one company. As members, users and customers of these centralized services, we are also at their mercy: no less vulnerable to termination than Trump.

So here is an interesting question: What if Trump had his own fulcrum from the start? For example, say he took one of the many Trump domains he probably owns (or should have bothered to own, long ago), and made it a blog where he said all the same things he tweeted, and that site had the same many dozens of millions of followers today? Would it still be alive?

I’m not sure it would. Because, even though the base protocols of the Internet and the Web are peer-to-peer and end-to-end, all of us are dependent on services above those protocols, and at the mercy of those services’ owners.

That to me is the biggest lesson the de-platforming of Donald Trump has for the rest of us. We can talk “de-centralization” and “distribution” and “democratization” along with peer-to-peer and end-to-end, but we are still at the mercy of giants.

Yes, there are work-arounds. The parler.com website, de-platformed along with Trump, is back up and, according to @VickerySec (Chris Vickery), “routing 100% of its user traffic through servers located within the Russian Federation.” Adds @AdamSculthorpe, “With a DDos-Guard IP, exactly as I predicted the day it went offline. DDoS Guard is the Russian equivalent of CloudFlare, and runs many shady sites. RiTM (Russia in the middle) is one way to think about it.” Encrypted services such as Signal and Telegram also provide ways for people to talk and be social. But those are also platforms, and we are at their mercy too.

I bring all this up as a way of thinking out loud toward the talk I’ll be giving in a few hours (also see here), on the topic “Centralized vs. Decentralized.” Here’s the intro:

Centralised thinking is easy. Control sits on one place, everything comes home, there is a hub, the corporate office is where all the decisions are made and it is a power game.

Decentralised thinking is complex. TCP/IP and HTTP created a fully decentralised fabric for packet communication. No-one is in control. It is beautiful. Web3 decentralised ideology goes much further but we continually run into conflicts. We need to measure, we need to report, we need to justify, we need to find a model and due to regulation and law, there are liabilities.

However, we have to be doing both. We have to centralise some aspects and at the same time decentralise others. Whilst we hang onto an advertising model that provides services for free we have to have a centralised business model. Apple with its new OS is trying to break the tracking model and in doing so could free us from the barter of free, is that the plan which has nothing to do with privacy or are the ultimate control freaks. But the new distributed model means more risks fall on the creators as the aggregators control the channels and access to a model. Is our love for free preventing us from seeing the value in truly distributed or are those who need control creating artefacts that keep us from achieving our dreams? Is distributed even possible with liability laws and a need to justify what we did to add value today?

So here is what I think I’ll say.

First, we need to respect the decentralized nature of humanity. All of us are different, by design. We look, sound, think and feel different, as separate human beings. As I say in How we save the world, “no being is more smart, resourceful or original than a human one. Again, by design. Even identical twins, with identical DNA from a single sperm+egg, can be as different as two primary colors. (Examples: Laverne Cox and M. Lamar. Nicole and Jonas Maines.)”

This simple fact of our distributed souls and talents has had scant respect from the centralized systems of the digital world, which would rather lead than follow us, and rather guess about us than understand us. That’s partly because too many of them have become dependent on surveillance-based personalized advertising (which is awful in ways I’ve detailed in 136 posts, essays and articles compiled here). But it’s mostly because they’re centralized and can’t think or work outside their very old and square boxes.

Second, advertising, subscriptions and donations through the likes of (again, centralized) Patreon aren’t the only possible ways to support a site or a service. Those are industrial age conventions leveraged in the early decades of the digital age. There are other approaches we can implement as well, now that the pendulum has started to swing back from the centralized extreme. For example, the fully decentralized EmanciPay. A bunch of us came up with that one at ProjectVRM way back in 2009. What makes it decentralized is that the choice of what to pay, and how, is up to the customer. (No, it doesn’t have to be scary.) Which brings me to—

Third, we need to start thinking about solving business problems, market problems, technical problems, from our side. Here is how Customer Commons puts it:

There is … no shortage of business problems that can only be solved from the customer’s side. Here are a few examples:

  1. Identity. Logins and passwords are burdensome leftovers from the last millennium. There should be (and already are) better ways to identify ourselves, and to reveal to others only what we need them to know. Working on this challenge is the SSI—Self-Sovereign Identity—movement. The solution here for individuals is tools of their own that scale.
  2. Subscriptions. Nearly all subscriptions are pains in the butt. “Deals” can be deceiving, full of conditions and changes that come without warning. New customers often get better deals than loyal customers. And there are no standard ways for customers to keep track of when subscriptions run out, need renewal, or change. The only way this can be normalized is from the customers’ side.
  3. Terms and conditions. In the world today, nearly all of these are ones companies proffer; and we have little or no choice about agreeing to them. Worse, in nearly all cases, the record of agreement is on the company’s side. Oh, and since the GDPR came along in Europe and the CCPA in California, entering a website has turned into an ordeal typically requiring “consent” to privacy violations the laws were meant to stop. Or worse, agreeing that a site or a service provider spying on us is a “legitimate interest.”
  4. Payments. For demand and supply to be truly balanced, and for customers to operate at full agency in an open marketplace (which the Internet was designed to be), customers should have their own pricing gun: a way to signal—and actually pay willing sellers—as much as they like, however they like, for whatever they like, on their own terms. There is already a design for that, called Emancipay.
  5. Internet of Things. What we have so far are the Apple of things, the Amazon of things, the Google of things, the Samsung of things, the Sonos of things, and so on—all silo’d in separate systems we don’t control. Things we own on the Internet should be our things. We should be able to control them, as independent customers, as we do with our computers and mobile devices. (Also, by the way, things don’t need to be intelligent or connected to belong to the Internet of Things. They can be, or have, picos.)
  6. Loyalty. All loyalty programs are gimmicks, and coercive. True loyalty is worth far more to companies than the coerced kind, and only customers are in position to truly and fully express it. We should have our own loyalty programs, to which companies are members, rather than the reverse.
  7. Privacy. We’ve had privacy tech in the physical world since the inventions of clothing, shelter, locks, doors, shades, shutters, and other ways to limit what others can see or hear—and to signal to others what’s okay and what’s not. Instead, all we have are unenforced promises by others not to watch our naked selves, or to report what they see to others. Or worse, coerced urgings to “accept” spying on us and distributing harvested information about us to parties unknown, with no record of what we’ve agreed to.
  8. Customer service. There are no standard ways to call for service yet, or to get it. And there should be.
  9. Advertising. Our main problem with advertising today is tracking, which is failing because it doesn’t work. (Some history: ad blocking has been around since 2004. It took off in 2013, when the advertising and publishing industries gave the middle finger to Do Not Track, which was never more than a polite request in one’s browser not to be tracked off a site. By 2015, ad blocking alone was the biggest boycott in world history. And in 2018 and 2019 we got the GDPR and the CCPA, two laws meant to thwart tracking and unwanted data collection, and which likely wouldn’t have happened if we hadn’t been given that finger.) We can solve that problem from the customer side with intentcasting. This is where we advertise to the marketplace what we want, without risk that our personal data will be misused. (Here is a list of intentcasting providers on the ProjectVRM Development Work list.)
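As a reminder of just how modest Do Not Track actually was: the signal was a single request header, DNT: 1, and honoring it would have taken one comparison on the server. A sketch, not anyone’s production ad code:

```python
# Do Not Track was literally one bit: the request header "DNT: 1".
# Honoring it takes a single comparison. Sketch only; the surrounding
# request-handling plumbing is assumed, not shown.
def honors_dnt(request_headers: dict) -> bool:
    """True if the client asked not to be tracked."""
    return request_headers.get("DNT") == "1"
```

That the industry refused even this much is what made the boycott, and then the laws, inevitable.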

We already have examples of personal solutions working at scale: the Internet, the Web, email and telephony. Each provides single, simple and standards-based ways any of us can scale how we deal with others—across countless companies, organizations and services. And they work for those companies as well.

Other solutions, however, are missing—such as ones that solve the nine problems listed above.

They’re missing for the best of all possible reasons: it’s still early. Digital living is still new—decades old at most. And it’s sure to persist for many decades, centuries or millennia to come.

They’re also missing because businesses typically think all solutions to business problems are ones for them. Thinking about customers solving business problems is outside that box.

But much work is already happening outside that box. And there already exist standards and code for building many customer-side solutions to problems shared with businesses. Yes, there are not yet as many or as good as we need; but there are enough to get started.

A lot of levers there.

For those of you attending this event, I’ll talk with you shortly. For the rest of you, I’ll let you know how it goes.

Let’s say the world is going to hell. Don’t argue, because my case isn’t about that. It’s about who saves it.

I suggest everybody. Or, more practically speaking, a maximized assortment of the smartest and most helpful anybodies.

Not governments. Not academies. Not investors. Not charities. Not big companies and their platforms. Any of those can be involved, of course, but we don’t have to start there. We can start with people. Because all of them are different. All of them can learn. And teach. And share. Especially since we now have the Internet.

To put this in a perspective, start with Joy’s Law: “No matter who you are, most of the smartest people work for someone else.” Then take Todd Park‘s corollary: “Even if you get the best and the brightest to work for you, there will always be an infinite number of other, smarter people employed by others.” Then take off the corporate-context blinders, and note that smart people are actually far more plentiful among the world’s customers, readers, viewers, listeners, parishioners, freelancers and bystanders.

Hundreds of millions of those people also carry around devices that can record and share photos, movies, writings and a boundless assortment of other stuff. Ways of helping now verge on the boundless.

We already have millions (or billions) of them reporting on everything by taking photos and recording videos with their mobiles, obsolescing journalism as we’ve known it since the word came into use (specifically, around 1830). What matters with the journalism example, however, isn’t what got disrupted. It’s how resourceful and helpful (and not just opportunistic) people can be when they have the tools.

Because no being is more smart, resourceful or original than a human one. Again, by design. Even identical twins, with identical DNA from a single sperm+egg, can be as different as two primary colors. (Examples: Laverne Cox and M. Lamar. Nicole and Jonas Maines.)

Yes, there are some wheat/chaff distinctions to make here. To thresh those, I dig Carlo Cipolla‘s Basic Laws on Human Stupidity (.pdf here) which stars this graphic:

The upper right quadrant has how many people in it? Billions, for sure.

I’m counting on them. If we didn’t have the Internet, I wouldn’t.

In Internet 3.0 and the Beginning of (Tech) History, @BenThompson of @Stratechery writes this:

The Return of Technology

Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols. This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively, and forked individually, gaining both the benefits of scale and inevitability of sovereignty and self-determination.

—followed by this graphic:

If you want to know what he means by “Politics,” read the piece. I take it as something of a backlash by regulators against big tech, especially in Europe. (With global scope. All those cookie notices you see are effects of European regulations.) But the bigger point is where that arrow goes. We need infrastructure there, and it won’t be provided by regulation alone. Tech needs to take the lead. (See what I wrote here three years ago.) But our tech, not big tech.

The wind is at our backs now. Let’s sail with it.

Bonus links: Cluetrain, New Clues, World of Ends, Customer Commons.

And a big HT to my old buddy Julius R. Ruff, Ph.D., for turning me on to Cipolla.

[Later…] Seth Godin calls all of us “indies.” I like that. HT to @DaveWiner for flagging it.

When some big outfit with a vested interest in violating your privacy says they are only trying to save small business, grab your wallet. Because the game they’re playing is misdirection away from what they really want.

The most recent case in point is Facebook, which ironically holds the world’s largest database on individual human interests while also failing to understand jack shit about personal boundaries.

This became clear when Facebook placed the ad above and others like it in major publications recently, and mostly made bad news for itself. We saw the same kind of thing in early 2014, when the IAB ran a similar campaign against Mozilla, using ads like this:

That one was to oppose Mozilla’s decision to turn on Do Not Track by default in its Firefox browser. Never mind that Do Not Track was never more than a polite request for websites to not be infected with a beacon, like those worn by marked animals, so one can be tracked away from the website. Had the advertising industry and its dependents in publishing simply listened to that signal, and respected it, we might never have had the GDPR or the CCPA, both of which are still failing at the same mission. (But, credit where due: the GDPR and the CCPA have at least forced websites to put up insincere and misleading opt-out popovers in front of every website whose lawyers are scared of violating the letter—but never the spirit—of those and other privacy laws.)

The IAB succeeded in its campaign against Mozilla and Do Not Track; but the victory was Pyrrhic, because users decided to install ad blockers instead, which by 2015 was the largest boycott in human history. Plus a raft of privacy laws, with more in the pipeline.

We also got Apple on our side. That’s good, but not good enough.

What we need are working tools of our own. Examples: Global Privacy Control (and all the browsers and add-ons mentioned there), Customer Commons’ #NoStalking term, the IEEE’s P7012 – Standard for Machine Readable Personal Privacy Terms, and other approaches to solving business problems from our side—rather than always from the corporate one.
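Global Privacy Control works in the same minimal way Do Not Track did, but is designed to carry legal weight in some jurisdictions (for example under the CCPA): browsers and extensions send the request header Sec-GPC: 1, and a site honoring it treats the request as an opt-out. A server-side sketch, with the surrounding plumbing assumed:

```python
# Global Privacy Control is sent as the request header "Sec-GPC: 1".
# Unlike DNT, it is intended to function as a binding opt-out signal
# under laws such as the CCPA. Sketch of the server-side check only.
def gpc_opt_out(request_headers: dict) -> bool:
    """True if the client has sent a Global Privacy Control opt-out signal."""
    return request_headers.get("Sec-GPC") == "1"
```

The mechanism is trivial; what matters is that this time the signal has law, not just politeness, behind it.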

In those movies, we’ll win.

Because if only Apple wins, we still lose.

Dammit, it’s still about what The Cluetrain Manifesto said in the first place, in this “one clue” published almost 21 years ago:

we are not seats or eyeballs or end users or consumers.
we are human beings — and our reach exceeds your grasp.
deal with it.

We have to make them deal. All of them. Not just Apple. We need code, protocols and standards, and not just regulations.

All the projects linked to above can use some help, plus others I’ll list here too if you write to me with them. (Comments here only work for Harvard email addresses, alas. I’m doc at searls dot com.)

If you listen to Episode 49: Parler, Ownership, and Open Source of the latest Reality 2.0 podcast, you’ll learn that I was blindsided at first by the topic of Parler, which has lately become a thing. But I caught up fast, even getting a Parler account not long after the show ended. Because I wanted to see what’s going on.

Though self-described as “the world’s town square,” Parler is actually a centralized social platform built for two purposes: 1) completely free speech; and 2) creating and expanding echo chambers.

The second may not be what Parler’s founders intended (see here), but that’s how social media algorithms work. They group people around engagements, especially likes. (I think, for our purposes here, that algorithmically nudged engagement is a defining feature of social media platforms as we understand them today. That would exclude, for example, Wikipedia or a popular blog or newsletter with lots of commenters. It would include, say, Reddit and Linkedin, because algorithms.)

Let’s start with recognizing that the smallest echo chamber in these virtual places is our own, comprised of the people we follow and who follow us. Then note that our visibility into other virtual spaces is limited by what’s shown to us by algorithmic nudging, such as by Twitter’s trending topics.
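The nudging itself needn’t be exotic. A caricature of like-based ranking shows how a feed converges on one’s own circle; this is a toy, not any platform’s actual algorithm:

```python
# A caricature of engagement ranking: score each post by likes from accounts
# the viewer already follows, so the feed keeps serving the viewer's own
# echo chamber back to them. Toy code, not any platform's real system.
def rank_feed(posts: list[dict], following: set[str]) -> list[dict]:
    def score(post: dict) -> int:
        # Count likes that come from inside the viewer's own circle.
        return sum(1 for liker in post["likes"] if liker in following)
    return sorted(posts, key=score, reverse=True)
```

Rank by in-group engagement and the in-group wins every time; that is the grouping-around-likes dynamic in one function.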

The main problem with this is not knowing what’s going on, especially inside other echo chambers. There are also lots of reasons for not finding out. For example, my Parler account sits idle because I don’t want Parler to associate me with any of the people it suggests I follow, as soon as I show up:

I also don’t know what to make of this, which is the only other set of clues on the index page:

Especially since clicking on any of them brings up the same or similar top results, which seem to have nothing to do with the trending # topic. Example:

Thus endeth my research.

But serious researchers should be able to see what’s going on inside the systems that produce these echo chambers, especially Facebook’s.

The problem is that Facebook and other social networks are shell games, designed to make sure nobody knows exactly what’s going on, but feels okay with it, because they’re hanging with others who agree on the basics.

The design principle at work here is obscurantism—”the practice of deliberately presenting information in an imprecise, abstruse manner designed to limit further inquiry and understanding.”

To put the matter in relief, consider a nuclear power plant:

(Photo of kraftwerk Grafenrheinfeld, 2013, by Avda. Licensed CC BY-SA 3.0.)

Nothing here is a mystery. Or, if there is one, professional inspectors will be dispatched to solve it. In fact, the whole thing is designed from the start to be understandable, and its workings accountable to a dependent public.

Now look at a Facebook data center:

What it actually does is pure mystery, by design, to those outside the company. (And hell, to most, maybe all, of the people inside the company.) No inspector arriving to look at a rack of blinking lights in that place is going to know either. What Facebook looks like to you, to me, to anybody, is determined by a pile of discoveries, both on and off of Facebook’s site and app, around who you are and what, to its machines, you seem interested in, by way of an algorithmic process that is not accountable to you, and impossible for anyone, perhaps including Facebook itself, to fully explain.

All societies, and groups within societies, are echo chambers. And, because they cohere in isolated (and isolating) ways, it is sometimes hard for societies to understand each other, especially when they already have prejudicial beliefs about each other. Still, without the further influence of social media, researchers can look at and understand what’s going on.

Over in the digital world, which overlaps with the physical one, we at least know that social media amplifies prejudices. But, though it’s obvious by now that this is what’s going on, doing something to reduce or eliminate the production and amplification of prejudices is damn near impossible when the mechanisms behind it are obscure by design.

This is why I think these systems need to be turned inside out, so researchers can study them. I don’t know how to make that happen; but I do know there is nothing else so large and consequential in the world that is also absent of academic inquiry. And that ain’t right.

BTW, if Facebook, Twitter, Parler or other social networks actually are opening their algorithmic systems to academic researchers, let me know and I’ll edit this piece accordingly.

December 10, 2020: This matter has been settled now, meaning Flickr appears not to be in trouble, and my account due for renewal will be automatically renewed. I’ve appended what settled the matter to the bottom of this post. Note that it also raises another question, about subscriptions. — Doc

I have two Flickr accounts, named Doc Searls and Nfrastructure. One has 73,355 photos, and the other 3,469. They each cost $60/year to maintain as pro accounts. They’ve both renewed automatically in the past; and the first one is already renewed, which I can tell because it says “Your plan will automatically renew on March 20, 2022.”

The second one, however… I dunno. Because, while my Account page says “Your plan will automatically renew on December 13, 2020,” I just got emails for both accounts saying, “This email is to confirm that we have stopped automatic billing for your subscription. Your subscription will continue to be active until the expiration date listed below. At that time, you will have to manually renew or your subscription will be cancelled.” The dates match the two above. At the bottom of each, in small print, it says “Digital River Inc. is the authorized reseller and merchant of the products and services offered within this store. Privacy Policy Terms of Sale Your California Privacy Rights.”

Hmmm. The Digital River link goes here, which appears to be in Ireland. A look at the email’s source shows the mail server is one in Kansas, and the Flickr.com addressing doesn’t look spoofed. So, it doesn’t look too scammy to me. Meaning I’m not sure what the scam is. Yet. If there is one.

Meanwhile, I do need to renew the subscription, and the risk of not renewing it is years of contributions (captions, notes, comments) out the window.

So I went to “Manage your Pro subscription” on the second one (which has four days left to expiration), and got this under “Update your Flickr Pro subscription information”:

Plan changes are temporarily disabled. Please contact support for prompt assistance.

Cancel your subscription

The Cancel line is a link. I won’t click on it.

Now, I have never heard of a company depending on automatic subscription renewals switching from those to the manual kind. Nor have I heard of a subscription-dependent company sending out notices like these while the renewal function is disabled.

I would like to contact customer support; but there is no link for that on my account page. In fact, the words “customer” and “support” don’t appear there. “Help” does, however, and goes to https://help.flickr.com/, where I need to fill out a form. This I did, explaining,

I am trying to renew manually, but I get “Plan changes are temporarily disabled. Please contact support for prompt assistance.” So here I am. Please reach out. This subscription expires in four days, and I don’t want to lose the photos or the account. I’m [email address] for this account (I have another as well, which doesn’t renew until 2022), my phone is 805-705-9666, and my twitter is @dsearls. Thanks!

The robot replied,

Thanks for your message – you’ll get a reply from a Flickr Support Hero soon. If you don’t receive an automated message from Flickr confirming we received your message (including checking your spam folders), please make sure you provided a valid and active email. Thanks for your patience and we look forward to helping you!

Me too.

Meanwhile, I am wondering if Flickr is in trouble again.

I wondered about this in 2011 and again in 2016 (in my most-read Medium post ever). Those were two of the (feels like many) times Flickr appeared to be on the brink. And I have been glad SmugMug took over the Flickr show in 2018. (I’m a paying SmugMug customer as well.) But this kind of thing is strange and has me worried. Should I be?

[Later, on December 10…]

Heard from Flickr this morning, with this:

Hi Doc,

When we migrated your account to Stripe, we had to cancel your subscription on Digital River. The email you received was just a notice of this event. I apologize for the confusion.

Just to confirm, there is no action needed at this time. You have an active Pro subscription in good standing and due for renewal on an annual term on December 14th, 2020.

To answer your initial question, since your account has been migrated to Stripe, while you can update your payment information, changes to subscription plans are temporarily unavailable. We expect this functionality to be restored soon.

I appreciate your patience and hope this helps.

For more information, please consult our FAQ here: https://help.flickr.com/faq-for-flickr-members-about-our-payment-processor-migration-SyN1cazsw

Before this issue came up, I hadn’t heard of Digital River or Stripe. Seems they are both “payment gateway” services (at least according to Finances Online). If you look down the list of what these companies can do, other than payment processing alone—merchandising, promotions, channel partner management, dispute handling, cross-border payment optimization, in-app solutions, risk management, email services, and integrations with dozens of different tools, products and extensions from the likes of Visa, MasterCard, Sage and many other companies with more obscure brand names—you can understand how a screw-up like this one can happen when moving from one provider to another.

Now the question for me is whether subscription systems really have to be this complex.

(Comments here only work for Harvard people; so if you’re not one of those, please reply elsewhere, such as on Twitter, where I’m @dsearls.)

The goal here is to obsolesce this brilliant poster by Despair.com:

I got launched on that path a couple months ago, when I got this email from The_New_Yorker at e-mail.condenast.com:

Why did they “need” a “confirmation” to a subscription which, best I could recall, was last renewed early this year?

So I looked at the links.

The “renew,” “Confirmation Needed” and “Discounted Subscription” links all go to a page with a URL that began with https://subscriptions.newyorker.com/pubs…, followed by a lot of tracking cruft. Here’s a screen shot of that one, cut short of where one filled in a credit card number. Note the price:

I was sure I had been paying $80-something per year, for years. As I also recalled, this was a price one could only obtain by calling the 800 number at NewYorker.com.

Or somewhere. After digging around, I found it at https://w1.buysub.com/pubs/N3/NYR/accoun…, which is where the link to Customer Care under My Account on the NewYorker website goes. It also required yet another login.

So, when I told the representative at the call center that I’d rather not “confirm” a year for a “discount” that probably wasn’t, she said I could renew for the $89.99 I had paid in the past, and that the deal would be good through February of 2022. I said fine, let’s do that. So I gave her my credit card, said this was way too complicated, and added that a single simple subscription price would be better. She replied, “Never gonna happen.” Let’s repeat that:

Never gonna happen.

Then I got this by email:

This appeared to confirm the subscription I already had. To see if that was the case, I went back to the buysub.com website and looked under the Account Summary tab, where it said this:

I think this means that I last renewed on February 3 of this year, and what I did on the phone in August was commit to paying $89.99/year until February 10 of 2022.

If that’s what happened, all my call did was extend my existing subscription. Which was fine, but why require a phone call for that?

And WTF was that “Account Confirmation Required” email about? I assume it was bait to switch existing subscribers into paying $50 more per year.

Then there was this, at the bottom of the Account summary page:

This might explain why I stopped getting Vanity Fair, which I suppose I should still be getting.

So I clicked on “Reactivate” and got a login page where the login I had used to get this far didn’t work.

After other failing efforts that I neglected to write down, I decided to go back to the New Yorker site and work my way back through two logins to the same page, and then click Reactivate one more time. Voila!

So now I’ve got one page that tells me I’m good to March 2021 next to a link that takes me to another page that says I ordered 12 issues last December and I can “start” a new subscription for $15 that would begin nine months ago. This is how one “reactivates” a subscription? OMFG.

I’m also not going into the hell of moving the print subscription back and forth between the two places where I live. Nor will I bother now, in October, to ask why I haven’t seen another copy of Vanity Fair. (Maybe they’re going to the other place. Maybe not. I don’t know, and I’m too weary to try finding out.)

I want to be clear here that I am not sharing this to complain. In fact, I don’t want The New Yorker, Vanity Fair, Wired, Condé Nast (their parent company) or buysub.com to do a damn thing. They’re all FUBAR. By design. (Bonus link.)

Nor do I want any action out of Spectrum, SiriusXM, Dish Network or the other subscription-based whatevers whose customer disservice systems have recently soaked up many hours of my life.

See, with too many subscription systems (especially ones for periodicals), FUBAR is the norm. A matter of course. Pro forma. Entrenched. A box outside of which nobody making, managing or working in those systems can think.

This is why, when an alien idea appears, for example from a loyal customer just wanting a single and simple damn price, the response is “Never gonna happen.”

This is also why the subscription fecosystem can only be turned into an ecosystem from the outside. Our side. The subscribers’ side.

I’ll explain how at Customer Commons, which we created for exactly that purpose. Stay tuned for that.


Two exceptions are Consumer Reports and The Sun.

In New Digital Realities; New Oversight Solutions, Tom Wheeler, Phil Verveer and Gene Kimmelman suggest that “the problems in dealing with digital platform companies” strip the gears of antitrust and other industrial era regulatory machines, and that what we need instead is “a new approach to regulation that replaces industrial era regulation with a new more agile regulatory model better suited for the dynamism of the digital era.” For that they suggest “a new Digital Platform Agency should be created with a new, agile approach to oversight built on risk management rather than micromanagement.” They provide lots of good reasons for this, which you can read in depth here.

I’m on a list where this is being argued. One of those participating is Richard Shockey, who often cites his eponymous law, which says, “The answer is money. What is the question?” I bring that up as background for my own post on the list, which I’ll share here:

The Digital Platform Agency proposal seems to obey a law like Shockey’s that instead says, “The answer is policy. What is the question?”

I think it will help, before we apply that law, to look at modern platforms as something newer than new. Nascent. Larval. Embryonic. Primitive. Epiphenomenal.

It’s not hard to think of them that way if we take a long view on digital life.

Start with this question: is digital tech ever going away?

Whether yes or no, how long will digital tech be with us, mothering boundless inventions and necessities? Centuries? Millennia?

And how long have we had it so far? A few decades? Hell, Facebook and Twitter have only been with us since the late ’00s.

So why start to regulate what can be done with those companies from now on, right now?

I mean, what if platforms are just castles—headquarters of modern duchies and principalities?

Remember when we thought IBM, AT&T and the PTTs in Europe would own and run the world forever?

Remember when the BUNCH was around, and we called IBM “the environment?” Remember EBCDIC?

Remember when Microsoft ruled the world, and we thought they had to be broken up?

Remember when Kodak owned photography, and thought their enemy was Fuji?

Remember when recorded music had to be played by rolls of paper, lengths of tape, or on spinning discs and disks?

Remember when “social media” was a thing, and all the world’s gossip happened on Facebook and Twitter?

Then consider the possibility that all the dominant platforms of today are mortally vulnerable to obsolescence, to collapse under their own weight, or both.

Nay, the certainty.

Every now is a future then, every is a was. And trees don’t grow to the sky.

It’s an easy bet that every platform today is as sure to be succeeded as were stone tablets by paper, scribes by movable type, letterpress by offset, and all of it by xerography, ink jet, laser printing and whatever comes next.

Sure, we do need regulation. But we also need faith in the mortality of every technology that dominates the world at any moment in history, and in the march of progress and obsolescence.

Another thought: if the only answer is policy, the problem is the question.

This suggests yet another law (really an aphorism, but whatever): “The answer is obsolescence. What is the question?”

As it happens, I wrote about Facebook’s odds for obsolescence two years ago here. An excerpt:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook comprises many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting, amplify tribal prejudices (including genocidal ones) and produce many $billions for itself in an advertising business that depends on all of that—while also trying to correct, while they are doing what they were designed to do, the massively complex and settled infrastructural systems that make all of it work.

I’m not saying regulators should do nothing. I am saying that gravity still works, the mighty still fall, and these are facts of nature it will help regulators to take into account.

door knocker

Remember the dot com boom?

Doesn’t matter if you don’t. What does matter is that it ended. All business manias do.

That’s why we can expect the “platform economy” and “surveillance capitalism” to end. Sure, it’s hard to imagine that when we’re in the midst of the mania, but the end will come.

When it does, we can have a “privacy debate.” Meanwhile, there isn’t one. In fact there can’t be one, because we don’t have privacy in the online world.

We do have privacy in the offline world, and we’ve had it ever since we invented clothing, doors, locks and norms for signaling what’s okay and what’s not okay in respect to our personal spaces, possessions and information.

That we hardly have the equivalent in the networked world doesn’t mean we won’t. Or that we can’t. The Internet in its current form was only born in the mid-’90s. In the history of business and culture, that’s a blip.

Really, it’s still early.

So, the fact that websites, network services, phone companies, platforms, publishers, advertisers and governments violate our privacy with wanton disregard for it doesn’t mean we can’t ever stop them. It means we haven’t done it yet, because we don’t have the tech for it. (Sure, some wizards do, but muggles don’t. And most of us are muggles.)

And, since we don’t have privacy tech yet, we lack the simple norms that grow around technologies that give us ways to signal our privacy preferences. We’ll get those when we have the digital equivalents of buttons, zippers, locks, shades, curtains, door knockers and bells.

This is what many of us have been working on at ProjectVRM, Customer Commons, the Me2B Alliance, MyData and other organizations whose mission is getting each of us the tech we need to operate at full agency when dealing with the companies and governments of the world.

I bring all this up as a “Yes, and” to a piece in Salon by Michael Corn (@MichaelAlanCorn), CISO of UCSD, titled We’re losing the war against surveillance capitalism because we let Big Tech frame the debate. Subtitle: “It’s too late to conserve our privacy — but to preserve what’s left, we must stop defining people as commodities.”

Indeed. And we do need the “optimism and activism” he calls for. In the activism category is code. Specifically, code that gives us the digital equivalents of buttons, zippers, locks, shades, curtains, door knockers and bells.

Some of those are in the works. Others are not—yet. But they will be. Inevitably. Especially now that it’s becoming clearer every day that we’ll never get them from any system with a financial interest in violating privacy*. Or from laws that fail at protecting it.

If you want to help, join one or more of the efforts in the links four paragraphs up. And, if you’re a developer already on the case, let us know how we can help get your solutions into each and all of our digital hands.

For guidance, this privacy manifesto should help. Thanks.


*Especially publishers such as Salon, which Privacy Badger tells me tries to pump 20 potential trackers into my browser while I read the essay cited above. In fact, according to WhoTracksMe.com, Salon tends to run 204 tracking requests per page load, and the vast majority of those are for tracking-based advertising purposes. And Salon is hardly unique. Despite the best intentions of the GDPR and the CCPA, surveillance capitalism remains fully defaulted on the commercial Web—and will continue to remain entrenched until we have the privacy tech we’ve needed from the start.

For more on all this, see People vs. Adtech.

If the GDPR did what it promised to do, we’d be celebrating Privmas today. Because, two years after the GDPR became enforceable, privacy would now be the norm rather than the exception in the online world.

That hasn’t happened, but it’s not just because the GDPR is poorly enforced. It’s because it’s too easy for every damn site on the Web—and every damn business with an Internet connection—to claim compliance with the letter of the GDPR while violating its spirit.

Want to see how easy? Try searching for GDPR+compliance+consent:

https://www.google.com/search?q=gdpr+compliance+consent

Nearly all of the ~21,000,000 results you’ll get are from sources pitching ways to continue tracking people online, mostly by obtaining “consent” to privacy violations that almost nobody would welcome in the offline world—exactly the kind of icky practice that the GDPR was meant to stop.

Imagine if there was a way for every establishment you entered to painlessly inject a load of tracking beacons into your bloodstream without you knowing it. And that these beacons followed you everywhere and reported your activities back to parties unknown. Would you be okay with that? And how would you like it if you couldn’t even enter without recording your agreement to accept being tracked—on a ledger kept only by the establishment, so you have no way to audit their compliance to the agreement, whatever it might be?

Well, that’s what you’re saying when you click “Accept” or “Got it” when a typical GDPR-complying website presents a cookie notice that says something like this:

That notice is from Vice, by the way. Here’s how the top story on Vice’s front page looks in Belgium (through a VPN), with Privacy Badger looking for trackers:

What’s typical here is that a publication, with no sense of irony, runs a story about privacy-violating harvesting of personal data… while doing the same. (By the way, those red sliders say I’m blocking those trackers. Were it not for Privacy Badger, I’d be allowing them.)

Yes, Google says you’re anonymized somehow in both DoubleClick and Google Analytics, but it’s you they are stalking. (Look up stalk as a verb. Top result: “to pursue or approach prey, quarry, etc., stealthily.” That’s what’s going on.)

The main problem with the GDPR is that it effectively requires that every visitor to every website opt out of being tracked, and to do so (thank you, insincere “compliance” systems) by going down stairs into the basements of website popovers to throw tracking choice toggles to “off” positions which are typically defaulted on when you get there.

Again, let’s be clear about this: There is no way for you to know exactly how you are being tracked or what is done with information gathered about you. That’s because the instrument for that—a tool on your side—isn’t available. It probably hasn’t even been invented. You also have no record of agreeing to anything. It’s not even clear that the site or its third parties have a record of that. All you’ve got is a cookie planted deep in your browser’s bowels, designed to announce itself to other parties everywhere you go on the Web. In sum, consenting to a cookie notice leaves nothing resembling an audit trail.
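To make the mechanics concrete, here's a minimal sketch of the kind of cookie involved, using Python's standard library. The tracker domain and the identifier are invented for illustration; real trackers vary in the details, but the shape is the same.

```python
# A minimal sketch of the mechanism described above: a third-party server
# sets a long-lived identifier cookie, and your browser then volunteers it
# on every later page that embeds that server's content. All names and
# values here are invented.
from http.cookies import SimpleCookie

# What an ad server's first response might include:
set_cookie = SimpleCookie()
set_cookie["uid"] = "a1b2c3d4"                # unique ID tied to your browser
set_cookie["uid"]["domain"] = "tracker.example"
set_cookie["uid"]["path"] = "/"
set_cookie["uid"]["max-age"] = str(60 * 60 * 24 * 365 * 2)  # two years

# The Set-Cookie header your browser stores, deep in its bowels:
print(set_cookie.output())

# What your browser sends back on every subsequent visit to any page that
# embeds tracker.example content -- no prompt, no record on your side:
echoed = SimpleCookie()
echoed.load("uid=a1b2c3d4")
print(echoed["uid"].value)
```

Note the asymmetry: the server keeps whatever records it likes, while your side of the "agreement" is one opaque string with no audit trail attached.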

Oh, and the California Consumer Privacy Act (CCPA) makes matters worse by embedding opt-out into law there, while also requiring shit like this in the opt-out basement of every website facing a visitor suspected of coming from that state:

CCPA notice

So let’s go back to a simple privacy principle here: It is just as wrong to track a person like a marked animal in the online world as it is in the offline one.

The GDPR and the CCPA were made to thwart that kind of thing. But they have failed. Instead, they have made the experience of being tracked online a worse one.

Yes, that was not their intent. And yes, both have done some good. But if you are any less followed online today than you were when the GDPR became enforceable two years ago, it’s because you and the browser makers have worked to thwart at least some tracking. (Though in very different ways, so your experience of not being followed is not a consistent one. Or even perceptible in many cases.)

So tracking remains worse than rampant: it’s defaulted practice for both advertising and site analytics. And will remain so until we have code, laws and enforcement to stop it.

So, nothing to celebrate. Not this Privmas.

