Internet


Passwords are hell.

Worse, to make your hundreds of passwords as safe as possible, they should be nearly impossible for others to discover—and for you to remember.

Unless you’re a wizard, this all but requires using a password manager.†

Think about how hard that job is. First, it’s impossible for developers of password managers to do everything right:

  • Most of their customers and users need to have logins and passwords for hundreds of sites and services on the Web and elsewhere in the networked world
  • Every one of those sites and services has its own gauntlet of methods for registering logins and passwords, and for remembering and changing them
  • Every one of those sites and services has its own unique user interfaces, each with its own peculiarities
  • All of those UIs change, sometimes often.

Keeping up with that mess while also keeping personal data safe from both user error and determined bad actors is about as tall as an order can get. And then you have to do all that work for each of the millions of customers you’ll need if you’re going to make the kind of money required to keep abreast of those problems and provide the solutions they require.

So here’s the thing: the best we can do with passwords is the best that password managers can do. That’s your horizon right there.

Unless we can get past logins and passwords somehow.

And I don’t think we can. Not in the client-server ecosystem that the Web has become, and that the computing industry never stopped being, since long before the Internet came along. That’s the real hell. Passwords are just a symptom.

We need to work around it. That’s my work now. Stay tuned here, here, and here for more on that.


† We need to fix that Wikipedia page.

The Web is a haystack.

This isn’t what Tim Berners-Lee had in mind when he invented the Web. Nor is it what Jerry Yang and David Filo had in mind when they invented Jerry and David’s Guide to the World Wide Web, which later became Yahoo. Jerry and David’s model for the Web was a library, and Yahoo was to be a catalog for it. This made sense, given the prevailing conceptual frames for the Web at the time: real estate and publishing. Both are still with us today. We frame the Web as real estate when we speak of “sites” with “locations” in “domains” with “addresses” you can “visit” and “browse.” We frame it as publishing when we speak of stuff called “files” and “pages,” which we “author,” “edit,” “post,” “publish,” “syndicate” and store in “folders” within a “directory.” Both frames suggest durability, if not permanence. Again, kind of like a library.

But once we added personal movement (“surf,” “browse”) and a vehicle for it (the browser), the Web became a World Wide Free-for-all. Literally. Anyone could publish, change and remove whatever they pleased, whenever they pleased. The same went for organizations of every kind, all over the world. And everyone with a browser could find their way to and through all of those spaces and places, and enjoy whatever “content” publishers chose to put there. Thus the Web grew into billions of sites, pages, images, databases, videos, and other stuff, with most of it changing constantly.

The result was a heaving heap of fuck-all.*

How big is it? According to WorldWebSize.com, Google currently indexes about 41 billion pages, and Bing about 9 billion. The two peaked together at about 68 billion pages in late 2019. The Web is surely larger than that, but that’s the practical limit, because search engines are the practical way to find pieces of straw in that thing. Will the haystack be less of one when approached by other search engines, such as the new ad-less (subscription-funded) Neeva? Nope. Search engines do not give the Web a card catalog. They certify its nature as a haystack.

So that’s one practical limit. There are others, but they’re hard to see when the level of optionality on the Web is almost indescribably vast. But we can see a few limits by asking some questions:

  1. Why do you always have to accept websites’ terms? And why do you have no record of your own of what you accepted, or when, or anything?
  2. Why do you have no way to proffer your own terms, to which websites can agree?
  3. Why did Do Not Track, which was never more than a polite request not to be tracked off a website, get no respect from 99.x% of the world’s websites? And how the hell did Do Not Track turn into the Tracking Preference Expression at the W3C, where the standard never did get fully baked?
  4. Why, after Do Not Track failed, did hundreds of millions—or perhaps billions—of people start blocking ads, tracking or both, on the Web, amounting to the biggest boycott in world history? And then why did the advertising world, including nearly all advertisers, their agents, and their dependents in publishing, treat this as a problem rather than a clear and gigantic message from the marketplace?
  5. Why are the choices presented to you by websites called your choices, when all those choices are provided by them? And why don’t you give them choices?
  6. Why would Apple’s way of making you private on your phone be to “Ask App Not to Track,” rather than “Tell App Not to Track,” or “Prevent App From Tracking You”?
  7. Why does the GDPR call people “data subjects” rather than people, or human beings, and then assign the roles “data controller” and “data processor” only to other parties?
  8. Why are nearly all of the 200+ million results in a search for GDPR+compliance about how companies can obey the letter of the law while violating its spirit by continuing to track people through the giant loophole you see in every cookie notice?
  9. Why does the CCPA give you the right to ask to have back personal data others have gathered about you on the Web, rather than forbid its collection in the first place? (Imagine a law that assumes that all farmers’ horses are gone from their barns, but gives those farmers a right to demand horses back from those who took them. It’s kinda like that.)
  10. Why, 22 years after The Cluetrain Manifesto said, “we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.”—is that statement still not true?
  11. Why, 9 years after Harvard Business Review Press published The Intention Economy: When Customers Take Charge, has that not happened? (Really, what are you in charge of in the marketplace that isn’t inside companies’ silos and platforms?)

It’s easy to blame the cookie, which Lou Montulli invented in 1994 as a way for sites to remember their visitors by planting reminder files—cookies—in visitors’ browsers. Cookies also gave visitors a way to remember where they were when they last visited. For sites that require logins, cookies take care of that as well.
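To make the mechanism concrete, here is a minimal sketch of the whole cookie ritual in Python: a toy server plants its reminder on the first visit, and a cookie-aware client hands it back ever after. (The port and the visitor_id value are made up for illustration; any HTTP server and browser behave this way.)

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        if "Cookie" not in self.headers:
            # First visit: the server plants its reminder file.
            self.send_header("Set-Cookie", "visitor_id=abc123")
            body = b"Welcome, stranger."
        else:
            # Every visit after: the browser returns it, unasked.
            body = b"Welcome back, " + self.headers["Cookie"].encode()
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("localhost", 8080), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A cookie-aware client, standing in for the browser. Note where the
# memory lives: the client only carries the token; the server keeps
# whatever record it likes.
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor())
print(opener.open("http://localhost:8080/").read())  # b'Welcome, stranger.'
print(opener.open("http://localhost:8080/").read())  # b'Welcome back, visitor_id=abc123'
server.shutdown()
```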

What matters, however, is not the cookie. It’s what makes the cookie necessary in the first place: the Web’s architecture. It’s called client-server, and is represented graphically like this:

(Diagram: the client-server model.)

This architecture was born in the era of centralized mainframes, which “users” accessed through client devices called “dumb terminals.”

On the Web, as it was in the old mainframe world, we clients—mere users—are as subordinate to servers as are cattle to ranchers or slaves to masters. In the client-server paradigm, our agency—our ability to act with effect in the world—is restricted to what servers allow or provide for us. Our choices are what they provide. We are independent only to the degree that we can also be clients to other servers. In this paradigm, a free market is “your choice of captor.”

Want privacy? You have to ask for it. And, if you go to the trouble of doing that—which you have to do separately with every site and service you encounter (each a mainframe of its own)—your client doesn’t keep a record of what you “agreed” to. The server does. Good luck finding whatever it is the server or its third parties remember about that agreement.

Want to control how your data (or data about you) gets processed by the servers of the world? Good luck with that too. Again, Europe’s GDPR says “natural persons” are just “data subjects,” while “data controllers” and “data processors” are roles reserved for servers.

Want a shopping cart of your own to take from site to site? My wife asked for that in 1995. It’s still barely thinkable in 2021. Want a dashboard for your life where you can gather all your expenses, investments, property records, health information, calendars, contacts, and other personal information? She asked for that too, and we still don’t have it, except to the degree that large server operators (e.g. Google, Apple, Microsoft) give us pieces of it, hosted in their clouds, and rigged to keep you captive to their systems.

That’s why we don’t yet have an Internet of Things (IoT), but rather an Apple of Things, a Google of Things, and an Amazon of Things.

Is it possible to do stuff on the Web that isn’t client-server? Perhaps some techies among us can provide examples, but practically speaking, here’s what matters: If it’s not thinkable by the owners of the servers we depend on, it doesn’t get made.

From our position at the bottom of the Web’s haystack, it’s hard to imagine there might be a world where it’s possible for us to have full agency: to not be just users of clients enslaved to as many servers as we deal with every day.

But that world exists. It’s called the Internet, and it can support a helluva lot more than the Web, with many ways to interact other than those possible in the client-server world alone.

Digital technology as we know it has only been around for a few decades, and the Internet for maybe half that time. Mobile computers that run apps and presume connectivity everywhere have only been with us for a decade or less. And all of those will be with us for many decades, centuries, or millennia to come. We are not going to stop living digital lives, any more than we are going to stop speaking, writing, or using mathematics. Digital technology and the Internet are granted wishes that won’t go back into the genie’s bottle.

So yes, the Web is wonderful, but not boundlessly so. It has limits. Thanks to the client-server architecture that prevails there, full personal agency is not a grace of life on the Web. For the thirty-plus years of the Web’s existence, and for its foreseeable future, we will never have more agency than its servers allow clients and users.

It’s time to think and build outside the haystack. Models for that do exist, and some have been around a long time.

Email, for example. While you can look at your email on the Web, or use a Web-based email service (such as Gmail), email itself is independent of those. My own searls.com email has been at servers in my home, on racks elsewhere, and in a hired cloud. I can move it anywhere I want. You can move yours as well. All the services I hire to host my email are substitutable. That’s just one way we can enjoy full agency on the Internet.
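A glimpse, for the technically curious, of why that substitutability works: mail routing is delegated through MX records in the DNS, and those records answer to the domain’s owner, not to any host. Here is a sketch using the third-party dnspython package (pip install dnspython); point the MX records elsewhere and the mail follows, while the address stays the same.

```python
import dns.resolver  # third-party: pip install dnspython

def mail_hosts(domain: str):
    """Return (preference, server) pairs for a domain's mail exchangers."""
    answers = dns.resolver.resolve(domain, "MX")
    return sorted((r.preference, str(r.exchange)) for r in answers)

# Whoever these point at today, the domain's owner can point them
# somewhere else tomorrow. The address stays; the host is substitutable.
print(mail_hosts("searls.com"))
```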

My own work outside the Web is currently happening at Customer Commons, on what we call the Byway. Go there and follow along as we work toward better answers to the questions above than you’ll get from inside the haystack.


*I originally had “heaving haystack of fuck-all” here, but some remember it as the more alliterative “heaving heap of fuck-all.” So I decided to swap them. If comments actually worked here, I’d ask for a vote. But feel free to write me instead, at first name at last name dot com.

KSKO radio

On Quora, somebody asked, “Which is your choice, radio, television, or the Internet?” I replied with the following.

If you say to your smart speaker “Play KSKO,” it will play that small-town Alaska station, which has the wattage of a light bulb, anywhere in the world. In this sense the Internet has eaten the station. But many people in rural Alaska served by KSKO and its tiny repeaters don’t have Internet access, so the station is either their only choice, or one of a few. So we use the gear we have to get the content we can.

TV viewing is also drifting from cable to à la carte subscription services (Netflix et al.) delivered over the Internet, in much the same way that it drifted earlier from over-the-air to cable. And yet over-the-air is still with us. It’s also significant that most of us get our Internet over connections originally meant only for cable TV, or over cellular connections originally meant only for telephony.

Marshall and Eric McLuhan, in Laws of Media, say every new medium or technology does four things: enhance, retrieve, obsolesce and reverse. (These are also called the Tetrad of Media Effects.) And there are many answers in each category. For example, the Internet—

  • enhances content delivery;
  • retrieves radio, TV and telephone technologies;
  • obsolesces over-the-air listening and viewing;
  • reverses into tribalism;

—among many other effects within each of those.

The McLuhans also note that few things get completely obsolesced. For example, there are still steam engines in the world. Some people still make stone tools.

It should also help to note that the Internet is not a technology. At its base it’s a protocol—TCP/IP—that can be used by a boundless variety of technologies. A protocol is a set of manners among things that compute and communicate. What made the Internet ubiquitous and all-consuming was the adoption of TCP/IP by things that compute and communicate everywhere in the world.
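Here is a minimal sketch, in Python, of those manners at work: two programs agreeing to exchange bytes over TCP. The port number and the message are arbitrary; the agreement is the point, and it is the same agreement honored by everything that computes and communicates on the Internet.

```python
import socket
import threading

srv = socket.create_server(("localhost", 9000))   # bind and listen: one manner

def answer():
    conn, _addr = srv.accept()                    # accept a caller: another
    with conn:
        conn.sendall(conn.recv(1024).upper())     # reply in kind

threading.Thread(target=answer, daemon=True).start()

# The calling side of the conversation. Swap "localhost" for any
# reachable address on Earth and nothing else changes.
with socket.create_connection(("localhost", 9000)) as c:
    c.sendall(b"hello, internet")
    print(c.recv(1024))                           # b'HELLO, INTERNET'

srv.close()
```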

This development—the worldwide adoption of TCP/IP—is beyond profound. It’s a change as radical as we might have if all the world suddenly spoke one common language. Even more radically, it creates a second digital world that coexists with our physical one.

In this digital world, the functional distance between us is zero. We also have no gravity. We are simply present with each other. This means the only preposition that accurately applies to our experience of the Internet is with. Because we are not really on or through or over anything. Those prepositions refer to the physical world. The digital world is some(non)thing else.

This is why referring to the Internet as a medium isn’t quite right. It is a one-of-one, an example only of itself. Like the Universe. That you can broadcast through the Internet is just one of the countless activities it supports. (Even though the it is not an it in the material sense.)

I think we are only at the beginning of coming to grips with what it all means, besides a lot.

Historic milestones don’t always line up with large round numbers on our calendars. For example, I suggest that the 1950s ended with the assassination of JFK in late 1963, and the rise of British Rock, led by the Beatles, in 1964. I also suggest that the 1960s didn’t end until Nixon resigned, and disco took off, in 1974.

It has likewise been suggested that the 20th century actually began with the assassination of Archduke Ferdinand and the start of WWI, in 1914. While that and my other claims might be arguable, you might at least agree that there’s no need for historic shifts to align with two or more zeros on a calendar—and that in most cases they don’t.

So I’m here to suggest that the 21st century began in 2020 with the Covid-19 pandemic and the fall of Donald Trump. (And I mean that literally. Social media platforms were the man’s stage, and the whole of them dropped him, as if through a trap door, on the occasion of the storming of the U.S. Capitol by his supporters on January 6, 2021. Whether you liked that or not is beside the facticity of it.)

Things are not the same now. For example, over the coming years, we may never hug, shake hands, or comfortably sit next to strangers again.

But I’m bringing this up for another reason: I think the future we wrote about in The Cluetrain Manifesto, in World of Ends, in The Intention Economy, and in other optimistic expressions during the first two decades of the 21st Century may finally be ready to arrive.

At least that’s the feeling I get when I listen to an interview I did with Christian Einfeldt (@einfeldt) at a San Diego tech conference in April 2004—and that I recently discovered in the Internet Archive. The interview was for a film to be called “Digital Tipping Point.” Here are its eleven parts, all just a few minutes long:

01 https://archive.org/details/e-dv038_doc_…
02 https://archive.org/details/e-dv039_doc_…
03 https://archive.org/details/e-dv038_doc_…
04 https://archive.org/details/e-dv038_doc_…
05 https://archive.org/details/e-dv038_doc_…
06 https://archive.org/details/e-dv038_doc_…
07 https://archive.org/details/e-dv038_doc_…
08 https://archive.org/details/e-dv038_doc_…
09 https://archive.org/details/e-dv038_doc_…
10 https://archive.org/details/e-dv039_doc_…
11 https://archive.org/details/e-dv039_doc_…

The title is a riff on Malcolm Gladwell‘s book The Tipping Point, which came out in 2000, same year as The Cluetrain Manifesto. The tipping point I sensed four years later was, I now believe, a foreshadow of now, and only suggested by the successes of the open source movement and independent personal publishing in the form of blogs, both of which I was high on at the time.

What followed in the decade after the interview were the rise of social networks, of smart mobile phones and of what we now call Big Tech. While I don’t expect those to end in 2021, I do expect that we will finally see the rise of personal agency and of constructive social movements, which I felt swelling in 2004.

Of course, I could be wrong about that. But I am sure that we are now experiencing the millennial shift we expected when civilization’s odometer rolled past 2000.

“Give me a lever long enough and a fulcrum on which to place it, and I shall move the world,” Archimedes is said to have said.

For almost all of the last four years, Donald Trump was one hell of an Archimedes. With the U.S. presidency as his lever and Twitter as his fulcrum, the 45th President leveraged an endless stream of news-making utterances into a massive following and near-absolute domination of news coverage, worldwide. It was an amazing show, the like of which we may never see again.

Big as it was, that show ended on January 8, when Twitter terminated the @RealDonaldTrump account. Almost immediately after that, Trump was “de-platformed” from these other services as well: PayPal, Reddit, Shopify, Snapchat, Discord, Amazon, Twitch, Facebook, TikTok, Google, Apple, YouTube and Instagram. That’s a lot of fulcrums to lose.

What makes them fulcrums is their size. All are big, and all are centralized: run by one company. As members, users and customers of these centralized services, we are also at their mercy: no less vulnerable to termination than Trump.

So here is an interesting question: What if Trump had his own fulcrum from the start? For example, say he took one of the many Trump domains he probably owns (or should have bothered to own, long ago), and made it a blog where he said all the same things he tweeted, and that site had the same many dozens of millions of followers today? Would it still be alive?

I’m not sure it would. Because, even though the base protocols of the Internet and the Web are peer-to-peer and end-to-end, all of us are dependent on services above those protocols, and at the mercy of those services’ owners.

That to me is the biggest lesson the de-platforming of Donald Trump has for the rest of us. We can talk “de-centralization” and “distribution” and “democratization” along with peer-to-peer and end-to-end, but we are still at the mercy of giants.

Yes, there are work-arounds. The parler.com website, de-platformed along with Trump, is back up and, according to @VickerySec (Chris Vickery), “routing 100% of its user traffic through servers located within the Russian Federation.” Adds @AdamSculthorpe, “With a DDoS-Guard IP, exactly as I predicted the day it went offline. DDoS-Guard is the Russian equivalent of Cloudflare, and runs many shady sites. RiTM (Russia in the middle) is one way to think about it.” Encrypted services such as Signal and Telegram also provide ways for people to talk and be social. But those are also platforms, and we are at their mercy too.

I bring all this up as a way of thinking out loud toward the talk I’ll be giving in a few hours (also see here), on the topic “Centralized vs. Decentralized.” Here’s the intro:

Centralised thinking is easy. Control sits in one place, everything comes home, there is a hub, the corporate office is where all the decisions are made and it is a power game.

Decentralised thinking is complex. TCP/IP and HTTP created a fully decentralised fabric for packet communication. No-one is in control. It is beautiful. Web3 decentralised ideology goes much further but we continually run into conflicts. We need to measure, we need to report, we need to justify, we need to find a model and due to regulation and law, there are liabilities.

However, we have to be doing both. We have to centralise some aspects and at the same time decentralise others. Whilst we hang onto an advertising model that provides services for free, we have to have a centralised business model. Apple with its new OS is trying to break the tracking model and in doing so could free us from the barter of free. Is that the plan, one that has nothing to do with privacy, or are they the ultimate control freaks? But the new distributed model means more risks fall on the creators, as the aggregators control the channels and access to a model. Is our love for free preventing us from seeing the value in truly distributed, or are those who need control creating artefacts that keep us from achieving our dreams? Is distributed even possible with liability laws and a need to justify what we did to add value today?

So here is what I think I’ll say.

First, we need to respect the decentralized nature of humanity. All of us are different, by design. We look, sound, think and feel different, as separate human beings. As I say in How we save the world, “no being is more smart, resourceful or original than a human one. Again, by design. Even identical twins, with identical DNA from a single sperm+egg, can be as different as two primary colors. (Examples: Laverne Cox and M. Lamar; Nicole and Jonas Maines.)”

This simple fact of our distributed souls and talents has had scant respect from the centralized systems of the digital world, which would rather lead than follow us, and rather guess about us than understand us. That’s partly because too many of them have become dependent on surveillance-based personalized advertising (which is awful in ways I’ve detailed in 136 posts, essays and articles compiled here). But it’s mostly because they’re centralized and can’t think or work outside their very old and square boxes.

Second, advertising, subscriptions and donations through the likes of (again, centralized) Patreon aren’t the only possible ways to support a site or a service. Those are industrial age conventions leveraged in the early decades of the digital age. There are other approaches we can implement as well, now that the pendulum has started to swing back from the centralized extreme. For example, the fully decentralized EmanciPay. A bunch of us came up with that one at ProjectVRM way back in 2009. What makes it decentralized is that the choice of what to pay, and how, is up to the customer. (No, it doesn’t have to be scary.) Which brings me to—

Third, we need to start thinking about solving business problems, market problems, technical problems, from our side. Here is how Customer Commons puts it:

There is … no shortage of business problems that can only be solved from the customer’s side. Here are a few examples:

  1. Identity. Logins and passwords are burdensome leftovers from the last millennium. There should be (and already are) better ways to identify ourselves, and to reveal to others only what we need them to know. Working on this challenge is the SSI—Self-Sovereign Identity—movement. The solution here for individuals is tools of their own that scale.
  2. Subscriptions. Nearly all subscriptions are pains in the butt. “Deals” can be deceiving, full of conditions and changes that come without warning. New customers often get better deals than loyal customers. And there are no standard ways for customers to keep track of when subscriptions run out, need renewal, or change. The only way this can be normalized is from the customers’ side.
  3. Terms and conditions. In the world today, nearly all of these are ones companies proffer; and we have little or no choice about agreeing to them. Worse, in nearly all cases, the record of agreement is on the company’s side. Oh, and since the GDPR came along in Europe and the CCPA in California, entering a website has turned into an ordeal typically requiring “consent” to privacy violations the laws were meant to stop. Or worse, agreeing that a site or a service provider spying on us is a “legitimate interest.”
  4. Payments. For demand and supply to be truly balanced, and for customers to operate at full agency in an open marketplace (which the Internet was designed to be), customers should have their own pricing gun: a way to signal—and actually pay willing sellers—as much as they like, however they like, for whatever they like, on their own terms. There is already a design for that, called EmanciPay.
  5. Internet of Things. What we have so far are the Apple of things, the Amazon of things, the Google of things, the Samsung of things, the Sonos of things, and so on—all silo’d in separate systems we don’t control. Things we own on the Internet should be our things. We should be able to control them, as independent customers, as we do with our computers and mobile devices. (Also, by the way, things don’t need to be intelligent or connected to belong to the Internet of Things. They can be, or have, picos.)
  6. Loyalty. All loyalty programs are gimmicks, and coercive. True loyalty is worth far more to companies than the coerced kind, and only customers are in position to truly and fully express it. We should have our own loyalty programs, to which companies are members, rather than the reverse.
  7. Privacy. We’ve had privacy tech in the physical world since the inventions of clothing, shelter, locks, doors, shades, shutters, and other ways to limit what others can see or hear—and to signal to others what’s okay and what’s not. Instead, all we have are unenforced promises by others not to watch our naked selves, or not to report what they see to others. Or worse, coerced urgings to “accept” spying on us and distributing harvested information about us to parties unknown, with no record of what we’ve agreed to.
  8. Customer service. There are no standard ways to call for service yet, or to get it. And there should be.
  9. Advertising. Our main problem with advertising today is tracking, which is failing because it doesn’t work. (Some history: ad blocking has been around since 2004; it took off in 2013, when the advertising and publishing industries gave the middle finger to Do Not Track, which was never more than a polite request in one’s browser not to be tracked off a site. By 2015, ad blocking alone was the biggest boycott in world history. And in 2018 and 2019 we got the GDPR and the CCPA, two laws meant to thwart tracking and unwanted data collection, and which likely wouldn’t have happened if we hadn’t been given that finger.) We can solve that problem from the customer side with intentcasting, where we advertise to the marketplace what we want, without risk that our personal data will be misused. (Here is a list of intentcasting providers on the ProjectVRM Development Work list, and a rough sketch of an intentcast follows this list.)
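On that last item, here is a minimal sketch of what an intentcast might carry, written as a plain Python data structure. There is no settled schema, and every field name below is hypothetical; the point is the direction of the signal, from the customer outward, on the customer’s terms.

```python
# A hypothetical intentcast. All field names are invented for
# illustration; no standard format is implied.
intentcast = {
    "want": "compact stroller, foldable, under 7 kg",
    "price_ceiling": {"amount": 250, "currency": "USD"},
    "where": "within 25 km of my postal code",
    "respond_by": "2021-06-01",
    "terms": "responses welcome; no tracking; my data stays mine",
}

# Any willing seller can answer, and none of them had to track the
# customer to learn this much, because the customer just said it.
for field, value in intentcast.items():
    print(f"{field}: {value}")
```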

We already have examples of personal solutions working at scale: the Internet, the Web, email and telephony. Each provides single, simple and standards-based ways any of us can scale how we deal with others—across countless companies, organizations and services. And they work for those companies as well.

Other solutions, however, are missing—such as ones that solve the nine problems listed above.

They’re missing for the best of all possible reasons: it’s still early. Digital living is still new—decades old at most. And it’s sure to persist for many decades, centuries or millennia to come.

They’re also missing because businesses typically think all solutions to business problems are ones for them. Thinking about customers solving business problems is outside that box.

But much work is already happening outside that box. And there already exist standards and code for building many customer-side solutions to problems shared with businesses. Yes, there are not yet as many of them, or as good, as we need; but there are enough to get started.

A lot of levers there.

For those of you attending this event, I’ll talk with you shortly. For the rest of you, I’ll let you know how it goes.

Let’s say the world is going to hell. Don’t argue, because my case isn’t about that. It’s about who saves it.

I suggest everybody. Or, more practically speaking, a maximized assortment of the smartest and most helpful anybodies.

Not governments. Not academies. Not investors. Not charities. Not big companies and their platforms. Any of those can be involved, of course, but we don’t have to start there. We can start with people. Because all of them are different. All of them can learn. And teach. And share. Especially since we now have the Internet.

To put this in perspective, start with Joy’s Law: “No matter who you are, most of the smartest people work for someone else.” Then take Todd Park‘s corollary: “Even if you get the best and the brightest to work for you, there will always be an infinite number of other, smarter people employed by others.” Then take off the corporate-context blinders, and note that smart people are actually far more plentiful among the world’s customers, readers, viewers, listeners, parishioners, freelancers and bystanders.

Hundreds of millions of those people also carry around devices that can record and share photos, movies, writings and a boundless assortment of other stuff. Ways of helping now verge on the boundless.

We already have millions (or billions) of them reporting on everything by taking photos and recording videos with their mobiles, obsolescing journalism as we’ve known it since the word came into use (specifically, around 1830). What matters with the journalism example, however, isn’t what got disrupted. It’s how resourceful and helpful (and not just opportunistic) people can be when they have the tools.

Because no being is more smart, resourceful or original than a human one. Again, by design. Even identical twins, with identical DNA from a single sperm+egg, can be as different as two primary colors. (Examples: Laverne Cox and M. Lamar. Nicole and Jonas Maines.)

Yes, there are some wheat/chaff distinctions to make here. To thresh those, I dig Carlo Cipolla‘s Basic Laws on Human Stupidity (.pdf here), which stars a graphic that plots benefit to oneself against benefit to others.

The upper right quadrant (the intelligent, who benefit both themselves and others) has how many people in it? Billions, for sure.

I’m counting on them. If we didn’t have the Internet, I wouldn’t.

In Internet 3.0 and the Beginning of (Tech) History, @BenThompson of @Stratechery writes this:

The Return of Technology

Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols. This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively, and forked individually, gaining both the benefits of scale and inevitability of sovereignty and self-determination.

—followed by this graphic:

If you want to know what he means by “Politics,” read the piece. I take it as something of a backlash by regulators against big tech, especially in Europe. (With global scope. All those cookie notices you see are effects of European regulations.) But the bigger point is where that arrow goes. We need infrastructure there, and it won’t be provided by regulation alone. Tech needs to take the lead. (See what I wrote here three years ago.) But our tech, not big tech.

The wind is at our backs now. Let’s sail with it.

Bonus links: Cluetrain, New Clues, World of Ends, Customer Commons.

And a big HT to my old buddy Julius R. Ruff, Ph.D., for turning me on to Cipolla.

[Later…] Seth Godin calls all of us “indies.” I like that. HT to @DaveWiner for flagging it.

If you listen to Episode 49: Parler, Ownership, and Open Source of the latest Reality 2.0 podcast, you’ll learn that I was blindsided at first by the topic of Parler, which has lately become a thing. But I caught up fast, even getting a Parler account not long after the show ended. Because I wanted to see what’s going on.

Though self-described as “the world’s town square,” Parler is actually a centralized social platform built for two purposes: 1) completely free speech; and 2) creating and expanding echo chambers.

The second may not be what Parler’s founders intended (see here), but that’s how social media algorithms work. They group people around engagements, especially likes. (I think, for our purposes here, that algorithmically nudged engagement is a defining feature of social media platforms as we understand them today. That would exclude, for example, Wikipedia or a popular blog or newsletter with lots of commenters. It would include, say, Reddit and LinkedIn, because algorithms.)

Let’s start with recognizing that the smallest echo chamber in these virtual places is our own, comprised of the people we follow and who follow us. Then note that our visibility into other virtual spaces is limited by what’s shown to us by algorithmic nudging, such as by Twitter’s trending topics.

The main problem with this is not knowing what’s going on, especially inside other echo chambers. There are also lots of reasons for not finding out. For example, my Parler account sits idle because I don’t want Parler to associate me with any of the people it suggests I follow as soon as I show up:

I also don’t know what to make of this, which is the only other set of clues on the index page:

Especially since clicking on any of them brings up the same or similar top results, which seem to have nothing to do with the trending # topic.

Thus endeth my research.

But serious researchers should be able to see what’s going on inside the systems that produce these echo chambers, especially Facebook’s.

The problem is that Facebook and other social networks are shell games, designed to make sure nobody knows exactly what’s going on, but feels okay with it, because they’re hanging with others who agree on the basics.

The design principle at work here is obscurantism—”the practice of deliberately presenting information in an imprecise, abstruse manner designed to limit further inquiry and understanding.”

To put the matter in relief, consider a nuclear power plant:

(Photo of Kraftwerk Grafenrheinfeld, 2013, by Avda. Licensed CC BY-SA 3.0.)

Nothing here is a mystery. Or, if there is one, professional inspectors will be dispatched to solve it. In fact, the whole thing is designed from the start to be understandable, and its workings accountable to a dependent public.

Now look at a Facebook data center:

What it actually does is pure mystery, by design, to those outside the company. (And hell, to most, maybe all, of the people inside the company.) No inspector arriving to look at a rack of blinking lights in that place is going to know either. What Facebook looks like to you, to me, to anybody, is determined by a pile of discoveries, both on and off of Facebook’s site and app, about who you are and what you seem, to machines, to be interested in, plus an algorithmic process that is not accountable to you, and impossible for anyone, perhaps including Facebook itself, to fully explain.

All societies, and groups within societies, are echo chambers. And because they cohere in isolated (and isolating) ways, it is sometimes hard for societies to understand each other, especially when they already have prejudicial beliefs about each other. Still, without the further influence of social media, researchers can look at and understand what’s going on.

Over in the digital world, which overlaps with the physical one, we at least know that social media amplifies prejudices. But, though it’s obvious by now that this is what’s going on, doing something to reduce or eliminate the production and amplification of prejudices is damn near impossible when the mechanisms behind it are obscure by design.

This is why I think these systems need to be turned inside out, so researchers can study them. I don’t know how to make that happen; but I do know there is nothing else in the world so large and consequential that is also so closed to academic inquiry. And that ain’t right.

BTW, if Facebook, Twitter, Parler or other social networks actually are opening their algorithmic systems to academic researchers, let me know and I’ll edit this piece accordingly.

I just got this email today:

Which tells me, from a sample of one (after another, after another), that Zoom is to video conferencing in 2020 what Microsoft Windows was to personal computing in 1999. Back then one business after another said they would only work with Windows and what was left of DOS: Microsoft’s two operating systems for PCs.

What saved the personal computing world from being absorbed into Microsoft was the Internet—and the Web, running on the Internet. The Internet, based on a profoundly generative protocol, supported all kinds of hardware and software at an infinitude of end points. And the Web, based on an equally generative protocol, manifested on browsers that ran on Mac and Linux computers, as well as Windows ones.

But video conferencing is different. Yes, all the popular video conferencing systems run in apps that work on multiple operating systems, and on the two main mobile device OSes as well. And yes, they are substitutable. You don’t have to use Zoom (unless, as in my case, talking to your doctors requires it). There’s still Skype, Webex, Microsoft Teams, Google Hangouts and the rest.

But all of them have a critical dependency through their codecs: the ways they code and decode audio and video. While there are some open source codecs, all the systems I just named use proprietary (patent-based) codecs. The big winner among those is H.264, aka AVC, which Wikipedia says “is by far the most commonly used format for the recording, compression, and distribution of video content, used by 91% of video industry developers as of September 2019.” Also,

H.264 is perhaps best known as being the most commonly used video encoding format on Blu-ray Discs. It is also widely used by streaming Internet sources, such as videos from Netflix, Hulu, Prime Video, Vimeo, YouTube, and the iTunes Store, Web software such as the Adobe Flash Player and Microsoft Silverlight, and also various HDTV broadcasts over terrestrial (ATSC, ISDB-T, DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S and DVB-S2) systems.

H.264 is protected by patents owned by various parties. A license covering most (but not all) patents essential to H.264 is administered by a patent pool administered by MPEG LA.

The commercial use of patented H.264 technologies requires the payment of royalties to MPEG LA and other patent owners. MPEG LA has allowed the free use of H.264 technologies for streaming Internet video that is free to end users, and Cisco Systems pays royalties to MPEG LA on behalf of the users of binaries for its open source H.264 encoder.

This is generative, clearly, but not as generative as the Internet and the Web, which are both end-to-end by design.

More importantly, AVC in effect slides the Internet and the Web into the orbit of companies that have taken over what used to be telephony and television, which are now mooshed together. In the Columbia Doctors example, Zoom is the new PBX. The new classroom is every teacher and kid on her or his own rectangle, “zooming” with each other through the new telephony. The new TV is Netflix, Disney, Comcast, Spectrum, Apple, Amazon and many others, all competing for wedges of our Internet access and entertainment budgets.

In this new ecosystem, you are less the producer than you were, or would have been, in the early days of the Net and the Web. You are the end user, the consumer, the audience, the customer. Not the producer, the performer. Sure, you can audition for those roles, and play them on YouTube and TikTok, but those are somebody else’s walled gardens. You operate within them at their grace. You are not truly free.

And maybe none of us ever were, in those early days of the Net and the Web. But it sure seemed that way. And it does seem that we have lost something.

Or maybe just that we are slowly losing it, in the manner of boiling frogs.

Do we have to? I mean, it’s still early.

The digital world is how old? Decades, at most.

And how long will it last? At the very least, more than that. Centuries or millennia, probably.

So there’s hope.

[Later…] For some of that, dig OBS Studio, from Open Broadcaster Software: free and open source software for video recording and live streaming. HT: Joel Grossman (@jgro).

Also, though unrelated, why is Columbia Doctors’ Telehealth leaking patient data to advertisers? See here.

In New Digital Realities; New Oversight Solutions, Tom Wheeler, Phil Verveer and Gene Kimmelman suggest that “the problems in dealing with digital platform companies” strip the gears of antitrust and other industrial era regulatory machines, and that what we need instead is “a new approach to regulation that replaces industrial era regulation with a new more agile regulatory model better suited for the dynamism of the digital era.” For that they suggest “a new Digital Platform Agency should be created with a new, agile approach to oversight built on risk management rather than micromanagement.” They provide lots of good reasons for this, which you can read in depth here.

I’m on a list where this is being argued. One of those participating is Richard Shockey, who often cites his eponymous law, which says, “The answer is money. What is the question?” I bring that up as background for my own post on the list, which I’ll share here:

The Digital Platform Agency proposal seems to obey a law like Shockey’s that instead says, “The answer is policy. What is the question?”

I think it will help, before we apply that law, to look at modern platforms as something newer than new. Nascent. Larval. Embryonic. Primitive. Epiphenomenal.

It’s not hard to think of them that way if we take a long view on digital life.

Start with this question: is digital tech ever going away?

Whether yes or no, how long will digital tech be with us, mothering boundless inventions and necessities? Centuries? Millennia?

And how long have we had it so far? A few decades? Hell, Facebook and Twitter have only been with us since the late ’00s.

So why start to regulate what can be done with those companies from now on, right now?

I mean, what if platforms are just castles—headquarters of modern duchies and principalities?

Remember when we thought IBM, AT&T and the PTTs in Europe would own and run the world forever?

Remember when the BUNCH was around, and we called IBM “the environment?” Remember EBCDIC?

Remember when Microsoft ruled the world, and we thought they had to be broken up?

Remember when Kodak owned photography, and thought their enemy was Fuji?

Remember when recorded music had to be played by rolls of paper, lengths of tape, or on spinning discs and disks?

Remember when “social media” was a thing, and all the world’s gossip happened on Facebook and Twitter?

Then consider the possibility that all the dominant platforms of today are mortally vulnerable to obsolescence, to collapse under their own weight, or both.

Nay, the certainty.

Every now is a future then; every “is” is a “was.” And trees don’t grow to the sky.

It’s an easy bet that every platform today is as sure to be succeeded as were stone tablets by paper, scribes by movable type, letterpress by offset, and all of it by xerography, ink jet, laser printing and whatever comes next.

Sure, we do need regulation. But we also need faith in the mortality of every technology that dominates the world at any moment in history, and in the march of progress and obsolescence.

Another thought: if the only answer is policy, the problem is the question.

This suggests yet another law (really an aphorism, but whatever): “The answer is obsolescence. What is the question?”

As it happens, I wrote about Facebook’s odds for obsolescence two years ago here. An excerpt:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is comprised of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting, amplify tribal prejudices (including genocidal ones) and produce many $billions for Facebook in an advertising business that depends on all of that—while also trying to correct, even as they do what they were designed to do, the massively complex and settled infrastructural systems that make all of it work.

I’m not saying regulators should do nothing. I am saying that gravity still works, the mighty still fall, and these are facts of nature it will help regulators to take into account.

There is latency to everything. Pain, for example. Nerve impulses from pain sensors travel at about two feet per second. That’s why we wait for the pain when we stub a toe. The crack of a bat on a playing field takes half a second before we hear it in the watching crowd. The sunlight we see on Earth is eight minutes old. Most of this doesn’t matter to us, or if it does we adjust to it.

Likewise with how we adjust to the inverse square law. That law is why the farther away something is, the smaller it looks or the fainter it sounds. How much smaller or fainter is something we intuit more than we calculate. What matters is that we understand the law with our bodies. In fact we understand pretty much everything with our bodies.
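For the record, the law our bodies intuit is simple enough to write down:

```latex
% Intensity from a point source falls with the square of the distance:
I(r) = \frac{P}{4 \pi r^2}
% so the same source, moved twice as far away, looks or sounds
% one quarter as strong: I(2r) = I(r)/4.
```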

All our deepest, most unconscious metaphors start with our bodies. That’s why we grasp, catch, toss around, or throw away an idea. It’s also why nearly all our prepositions pertain to location or movement. Over, under, around, through, with, beside, within, alongside, on, off, above and below only make sense to us because we have experienced them with our bodies.

So: how are we to make full sense of the Web, or the Internet, where we are hardly embodied at all?

We may say we are on the Web, because we need it to make sense to us as embodied beings. Yet we are only looking at a manifestation of it.

The “it” is the hypertext protocol (http) that Tim Berners-Lee thought up in 1990 so high energy physicists, scattered about the world, could look at documents together. That protocol ran on another one: TCP/IP. Together they were mannered talk among computers about how to show the same document across any connection over any collection of networks between any two end points, regardless of who owned or controlled those networks. In doing so, Tim rubbed a bottle of the world’s disparate networks. Out popped the genie we call the Web, ready to grant boundless wishes that only began with document sharing.
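That mannered talk is plain enough to type by hand. For the curious, here is a sketch in Python of the entire conversation needed to fetch a page from any server that speaks the protocol (example.com is a standard test host):

```python
import socket

# The whole request is a few lines of polite, standardized text.
request = (
    b"GET / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Connection: close\r\n\r\n"
)

with socket.create_connection(("example.com", 80)) as s:
    s.sendall(request)
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

# A status line and headers come first, then the document itself.
print(reply.decode(errors="replace")[:300])
```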

This was a miracle beyond the scale of loaves and fish: one so new and so odd that the movie Blade Runner, which imagined in 1982 that Los Angeles in 2019 would feature floating cars, off-world colonies and human replicants, failed to foresee a future when anyone could meet with anyone else, or any group, anywhere in the world, on wish-granting slabs they could put on their desks, laps, walls or hold in their hands. (Instead Blade Runner imagined there would still be pay phones and computers with vacuum tubes for screens.)

This week I attended Web Science 20 (WebSci20) on my personal slab in California, instead of where it was planned originally: at a conference at the University of Southampton in the UK. It was still a conference, but now a virtual one, comprised of many people on many slabs, all over the world, each with no sense of distance any more meaningful than those imposed by the inconvenience of time zones.

Joyce (my wife, who is also the source of much wisdom for which her husband gets the credit) says our experience on the Web is one of absent distance and gravity—and that this experience is still so new to us that we have only begun to make full sense of it as embodied creatures. We’ll adjust, she says, much as astronauts adjust to the absence of gravity; but it will take more time than we’ve had so far. We may become expert at using the likes of Zoom, but that doesn’t mean we operate in full comprehension of the new digital environment we co-occupy.

My own part in WebSci20 was talking with five good people, plus others asking questions in a chat, during the closing panel of the conference. (That’s us, at the top of this post.) The title of our session was The Future of Web Science. To prep for that session I wrote the first draft of what follows: a series of thoughts I hoped to bring up in the session, and some of which I actually did.

The first thought is the one I just introduced: the Web, like the Net it runs on, is both new and utterly vexing to understand in the terms we’ve developed for making sense of embodied existence.

Here are some more.

The Web is a whiteboard.

In the beginning we thought of the Web as something of a library, mostly because it was comprised of sites with addresses and pages that were authored, published, syndicated, browsed and read. A uniform resource locator, better known as a URL, would lead us through what an operating system calls a path or a directory, much as a card catalog did before library systems went digital. It also helped that we understood the Web as real estate, with sites and domains that one owned and others could visit.

The metaphor of the Web as a library, though useful, also misdirects our attention and understanding away from its nature as a collection of temporary manifestations. Because, for all we attempt to give the Web a sense of permanence, it is evanescent, temporary, ephemeral. We write and publish there as we might on snow, sand or a whiteboard. Even the websites we are said to “own” are in fact only rented. Fail to pay the registrar and off it goes.

The Web is not what’s on it.

It is not Google, or Facebook, dot-anything or dot-anybody. It is the manifestation of documents and other non-stuff we call “content,” presented to us in browsers and whatever else we invent to see and deal with what the hypertext protocol makes possible. Here is how David Weinberger and I put it in World of Ends, more than seventeen years ago:

1. The Internet isn’t complicated
2. The Internet isn’t a thing. It’s an agreement.
3. The Internet is stupid.
4. Adding value to the Internet lowers its value.
5. All the Internet’s value grows on its edges.
6. Money moves to the suburbs.
7. The end of the world? Nah, the world of ends.
8. The Internet’s three virtues:
a. No one owns it
b. Everyone can use it
c. Anyone can improve it
9. If the Internet is so simple, why have so many been so boneheaded about it?
10. Some mistakes we can stop making already

That was a follow-up of sorts to The Cluetrain Manifesto, which we co-wrote with two other guys four years earlier. We followed up both five years ago with an appendix to Cluetrain called New Clues. While I doubt we’d say any of that stuff the same ways today, the heart of it beats the same.

The Web is free.

The online advertising industry likes to claim the “free Internet” is a grace of advertising that is “relevant,” “personalized,” “interest-based,” “interactive” and other adjectives that misdirect us away from what those forms of advertising actually do, which is track us like marked animals.

That claim, of course, is bullshit. Here’s what Harry Frankfurt says about that in his canonical work, On Bullshit (Cambridge University Press, 1988): “The realms of advertising and public relations, and the nowadays closely related realm of politics, are replete with instances of bullshit so unmitigated that they can serve among the most indisputable and classic paradigms of the concept.” Boiled down, bullshit is what Wikipedia (at the moment, itself being evanescent) calls “speech intended to persuade without regard for truth.” Another distinction: “The liar cares about the truth and attempts to hide it; the bullshitter doesn’t care if what they say is true or false, but rather only cares whether their listener is persuaded.”

Consider for a moment Win Bigly: Persuasion in a World Where Facts Don’t Matter, a 2017 book by Scott Adams that explains, among other things, how a certain U.S. tycoon got his ass elected President. The world Scott talks about is the Web.

Nothing in the history of invention is more supportive of bullshit than the Web. Nor is anything more supportive of truth-telling, education and damned near everything else one can do in the civilized world. And we’re only beginning to discover and make sense of all those possibilities.

We’re all digital now

Meaning not just physical. This is what’s new, not just to human experience, but to human existence.

Marshall McLuhan calls our technologies, including our media, extensions of our bodily selves. Consider how, when you ride a bike or drive a car, those are your wheels and your brakes. Our senses extend outward to suffuse our tools and other technologies, making them parts of our larger selves. Michael Polanyi called this process indwelling.

Think about how, although we are not really on or through the Web, we do dwell in it when we read, write, speak, watch and perform there. That is what I am doing right now, while I type what I see on a screen in San Marino, California, as a machine, presumably in Cambridge, Massachusetts, records my keystrokes and presents them back to me, and now you are reading it, somewhere else in (or on, or choose your preposition) the world. Dwell may be the best verb for what each of us is doing in the non-here we all co-occupy in this novel (to the physical world) non-place and times.

McLuhan also said media revolutions are formal causes. Meaning that they form us. (He got that one from Aristotle.) In different ways we were formed and re-formed by speech, writing, printing, and radio and television broadcasting.

I submit that we are far more formed by digital technologies, and especially by the Internet and the Web, than by any other prior technical revolution. (A friend calls our current revolution “the biggest thing since oxygenation.”)

But this is hard to see because, as McLuhan puts it, every one of these major revolutions becomes a ground on which everything else dances as figures. But it is essential to recognize that the figures are not the ground. This, I suggest, is the biggest challenge for Web Science.

It’s damned hard to study ground-level formal causes such as digital tech, the Net and the Web. Because what they are technically is not what they do formally. They are rising tides that float all boats, oblivious to the boats themselves.

I could say more, and I’m sure I will; but I want to get this much out there before the panel.

