Internet


KSKO radio

On Quora, somebody asked, "Which is your choice, radio, television, or the Internet?" I replied with the following.

If you say to your smart speaker “Play KSKO,” it will play that small-town Alaska station, which has the wattage of a light bulb, anywhere in the world. In this sense the Internet has eaten the station. But many people in rural Alaska served by KSKO and its tiny repeaters don’t have Internet access, so the station is either their only choice, or one of a few. So we use the gear we have to get the content we can.

TV viewing is also drifting from cable to à la carte subscription services (Netflix, et al.) delivered over the Internet, in much the same way that it drifted earlier from over-the-air to cable. And yet over-the-air is still with us. It’s also significant that most of us get our Internet over connections originally meant only for cable TV, or over cellular connections originally meant only for telephony.

Marshall and Eric McLuhan, in Laws of Media, say every new medium or technology does four things: enhance, retrieve, obsolesce and reverse. (These are also called the Tetrad of Media Effects.) And there are many answers in each category. For example, the Internet—

  • enhances content delivery;
  • retrieves radio, TV and telephone technologies;
  • obsolesces over-the-air listening and viewing;
  • reverses into tribalism;

—among many other effects within each of those.

The McLuhans also note that few things get completely obsolesced. For example, there are still steam engines in the world. Some people still make stone tools.

It should also help to note that the Internet is not a technology. At its base it’s a protocol—TCP/IP—that can be used by a boundless variety of technologies. A protocol is a set of manners among things that compute and communicate. What made the Internet ubiquitous and all-consuming was the adoption of TCP/IP by things that compute and communicate everywhere in the world.
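
For the technically curious, here is a minimal sketch, in Python, of those manners at work, using a placeholder address. One endpoint listens, the other speaks, and neither cares what hardware, operating system or wires sit in between; that indifference is the whole point of the protocol.

    # A minimal sketch of TCP/IP "manners": two endpoints agreeing to exchange
    # bytes. The host and port are placeholders, not a real service.
    import socket

    HOST, PORT = "127.0.0.1", 9000

    def listen_once():
        """Accept one connection and echo back whatever arrives."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _addr = srv.accept()
            with conn:
                conn.sendall(conn.recv(1024))  # the same manners, in both directions

    def speak():
        """Connect, say something, and read the reply."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(b"hello from anywhere on Earth\n")
            print(cli.recv(1024).decode())

Run the listener in one terminal (or on one continent) and the speaker in another, and the conversation is the same whether the path between them is fiber, cable, cellular or Wi-Fi.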

This development—the worldwide adoption of TCP/IP—is beyond profound. It’s a change as radical as we might have if all the world suddenly spoke one common language. Even more radically, it creates a second digital world that coexists with our physical one.

In this digital world, the functional distance between us is zero. We also have no gravity. We are simply present with each other. This means the only preposition that accurately applies to our experience of the Internet is with. Because we are not really on or through or over anything. Those prepositions refer to the physical world. The digital world is some(non)thing else.

This is why referring to the Internet as a medium isn’t quite right. It is a one-of-one, an example only of itself. Like the Universe. That you can broadcast through the Internet is just one of the countless activities it supports. (Even though the it is not an it in the material sense.)

I think we are only at the beginning of coming to grips with what it all means, besides a lot.

Historic milestones don’t always line up with large round numbers on our calendars. For example, I suggest that the 1950s ended with the assassination of JFK in late 1963, and the rise of British Rock, led by the Beatles, in 1964. I also suggest that the 1960s didn’t end until Nixon resigned, and disco took off, in 1974.

It has likewise been suggested that the 20th century actually began with the assassination of Archduke Ferdinand and the start of WWI, in 1914. While that and my other claims might be arguable, you might at least agree that there’s no need for historic shifts to align with two or more zeros on a calendar—and that in most cases they don’t.

So I’m here to suggest that the 21st century began in 2020 with the Covid-19 pandemic and the fall of Donald Trump. (And I mean that literally. Social media platforms were the man’s stage, and the whole of them dropped him, as if through a trap door, on the occasion of the storming of the U.S. Capitol by his supporters on January 6, 2021. Whether you liked that or not is beside the facticity of it.)

Things are not the same now. For example, in the coming years we may never again comfortably hug, shake hands, or sit next to strangers.

But I’m bringing this up for another reason: I think the future we wrote about in The Cluetrain Manifesto, in World of Ends, in The Intention Economy, and in other optimistic expressions during the first two decades of the 21st Century may finally be ready to arrive.

At least that’s the feeling I get when I listen to an interview I did with Christian Einfeldt (@einfeldt) at a San Diego tech conference in April 2004—and that I recently discovered in the Internet Archive. The interview was for a film to be called “Digital Tipping Point.” Here are its eleven parts, all just a few minutes long:

01 https://archive.org/details/e-dv038_doc_…
02 https://archive.org/details/e-dv039_doc_…
03 https://archive.org/details/e-dv038_doc_…
04 https://archive.org/details/e-dv038_doc_…
05 https://archive.org/details/e-dv038_doc_…
06 https://archive.org/details/e-dv038_doc_…
07 https://archive.org/details/e-dv038_doc_…
08 https://archive.org/details/e-dv038_doc_…
09 https://archive.org/details/e-dv038_doc_…
10 https://archive.org/details/e-dv039_doc_…
11 https://archive.org/details/e-dv039_doc_…

The title is a riff on Malcolm Gladwell‘s book The Tipping Point, which came out in 2000, same year as The Cluetrain Manifesto. The tipping point I sensed four years later was, I now believe, a foreshadow of now, and only suggested by the successes of the open source movement and independent personal publishing in the form of blogs, both of which I was high on at the time.

What followed in the decade after the interview was the rise of social networks, of smart mobile phones and of what we now call Big Tech. While I don’t expect those to end in 2021, I do expect that we will finally see the rise of personal agency and of constructive social movements, which I felt swelling in 2004.

Of course, I could be wrong about that. But I am sure that we are now experiencing the millennial shift we expected when civilization’s odometer rolled past 2000.

“Give me a lever long enough and a fulcrum on which to place it, and I shall move the world,” Archimedes is said to have said.

For almost all of the last four years, Donald Trump was one hell of an Archimedes. With the U.S. presidency as his lever and Twitter as his fulcrum, the 45th President leveraged an endless stream of news-making utterances into a massive following and near-absolute domination of news coverage, worldwide. It was an amazing show, the likes of which we may never see again.

Big as it was, that show ended on January 8, when Twitter terminated the @RealDonaldTrump account. Almost immediately after that, Trump was “de-platformed” from all these other services as well: PayPal, Reddit, Shopify, Snapchat, Discord, Amazon, Twitch, Facebook, TikTok, Google, Apple, Twitter, YouTube and Instagram. That’s a lot of fulcrums to lose.

What makes them fulcrums is their size. All are big, and all are centralized: run by one company. As members, users and customers of these centralized services, we are also at their mercy: no less vulnerable to termination than Trump.

So here is an interesting question: What if Trump had his own fulcrum from the start? For example, say he took one of the many Trump domains he probably owns (or should have bothered to own, long ago) and made it a blog where he said all the same things he tweeted, and that site had the same tens of millions of followers today? Would it still be alive?

I’m not sure it would. Because, even though the base protocols of the Internet and the Web are peer-to-peer and end-to-end, all of us are dependent on services above those protocols, and at the mercy of those services’ owners.

That to me is the biggest lesson the de-platforming of Donald Trump has for the rest of us. We can talk “de-centralization” and “distribution” and “democratization” along with peer-to-peer and end-to-end, but we are still at the mercy of giants.

Yes, there are work-arounds. The parler.com website, de-platformed along with Trump, is back up and, according to @VickerySec (Chris Vickery), “routing 100% of its user traffic through servers located within the Russian Federation.” Adds @AdamSculthorpe, “With a DDoS-Guard IP, exactly as I predicted the day it went offline. DDoS-Guard is the Russian equivalent of CloudFlare, and runs many shady sites. RiTM (Russia in the middle) is one way to think about it.” Encrypted services such as Signal and Telegram also provide ways for people to talk and be social. But those are also platforms, and we are at their mercy too.

I bring all this up as a way of thinking out loud toward the talk I’ll be giving in a few hours (also see here), on the topic “Centralized vs. Decentralized.” Here’s the intro:

Centralised thinking is easy. Control sits on one place, everything comes home, there is a hub, the corporate office is where all the decisions are made and it is a power game.

Decentralised thinking is complex. TCP/IP and HTTP created a fully decentralised fabric for packet communication. No-one is in control. It is beautiful. Web3 decentralised ideology goes much further but we continually run into conflicts. We need to measure, we need to report, we need to justify, we need to find a model and due to regulation and law, there are liabilities.

However, we have to be doing both. We have to centralise some aspects and at the same time decentralise others. Whilst we hang onto an advertising model that provides services for free we have to have a centralised business model. Apple with its new OS is trying to break the tracking model and in doing so could free us from the barter of free, is that the plan which has nothing to do with privacy or are the ultimate control freaks. But the new distributed model means more risks fall on the creators as the aggregators control the channels and access to a model. Is our love for free preventing us from seeing the value in truly distributed or are those who need control creating artefacts that keep us from achieving our dreams? Is distributed even possible with liability laws and a need to justify what we did to add value today?

So here is what I think I’ll say.

First, we need to respect the decentralized nature of humanity. All of us are different, by design. We look, sound, think and feel different, as separate human beings. As I say in How we save the world, “no being is more smart, resourceful or original than a human one. Again, by design. Even identical twins, with identical DNA from a single sperm+egg, can be as different as two primary colors. (Examples: Laverne Cox and M. Lamar; Nicole and Jonas Maines.)”

This simple fact of our distributed souls and talents has had scant respect from the centralized systems of the digital world, which would rather lead than follow us, and rather guess about us than understand us. That’s partly because too many of them have become dependent on surveillance-based personalized advertising (which is awful in ways I’ve detailed in 136 posts, essays and articles compiled here). But it’s mostly because they’re centralized and can’t think or work outside their very old and square boxes.

Second, advertising, subscriptions and donations through the likes of (again, centralized) Patreon aren’t the only possible ways to support a site or a service. Those are industrial age conventions leveraged in the early decades of the digital age. There are other approaches we can implement as well, now that the pendulum has started to swing back from the centralized extreme. For example, the fully decentralized EmanciPay. A bunch of us came up with that one at ProjectVRM way back in 2009. What makes it decentralized is that the choice of what to pay, and how, is up to the customer. (No, it doesn’t have to be scary.) Which brings me to—

Third, we need to start thinking about solving business problems, market problems, technical problems, from our side. Here is how Customer Commons puts it:

There is … no shortage of business problems that can only be solved from the customer’s side. Here are a few examples:

  1. Identity. Logins and passwords are burdensome leftovers from the last millennium. There should be (and already are) better ways to identify ourselves, and to reveal to others only what we need them to know. Working on this challenge is the SSI—Self-Sovereign Identity—movement. The solution here for individuals is tools of their own that scale.
  2. Subscriptions. Nearly all subscriptions are pains in the butt. “Deals” can be deceiving, full of conditions and changes that come without warning. New customers often get better deals than loyal customers. And there are no standard ways for customers to keep track of when subscriptions run out, need renewal, or change. The only way this can be normalized is from the customers’ side.
  3. Terms and conditions. In the world today, nearly all of these are ones companies proffer; and we have little or no choice about agreeing to them. Worse, in nearly all cases, the record of agreement is on the company’s side. Oh, and since the GDPR came along in Europe and the CCPA in California, entering a website has turned into an ordeal typically requiring “consent” to privacy violations the laws were meant to stop. Or worse, agreeing that a site or a service provider spying on us is a “legitimate interest.”
  4. Payments. For demand and supply to be truly balanced, and for customers to operate at full agency in an open marketplace (which the Internet was designed to be), customers should have their own pricing gun: a way to signal—and actually pay willing sellers—as much as they like, however they like, for whatever they like, on their own terms. There is already a design for that, called EmanciPay.
  5. Internet of Things. What we have so far are the Apple of things, the Amazon of things, the Google of things, the Samsung of things, the Sonos of things, and so on—all silo’d in separate systems we don’t control. Things we own on the Internet should be our things. We should be able to control them, as independent customers, as we do with our computers and mobile devices. (Also, by the way, things don’t need to be intelligent or connected to belong to the Internet of Things. They can be, or have, picos.)
  6. Loyalty. All loyalty programs are gimmicks, and coercive. True loyalty is worth far more to companies than the coerced kind, and only customers are in position to truly and fully express it. We should have our own loyalty programs, to which companies are members, rather than the reverse.
  7. Privacy. We’ve had privacy tech in the physical world since the inventions of clothing, shelter, locks, doors, shades, shutters, and other ways to limit what others can see or hear—and to signal to others what’s okay and what’s not. Instead, all we have are unenforced promises by others not to watch our naked selves, or to report what they see to others. Or worse, coerced urgings to “accept” spying on us and distributing harvested information about us to parties unknown, with no record of what we’ve agreed to.
  8. Customer service. There are no standard ways to call for service yet, or to get it. And there should be.
  9. Advertising. Our main problem with advertising today is tracking, which is failing because it doesn’t work. (Some history: ad blocking has been around since 2004, but it took off in 2013, when the advertising and publishing industries gave the middle finger to Do Not Track, which was never more than a polite request in one’s browser not to be tracked off a site. By 2015, ad blocking alone was the biggest boycott in world history. And in 2018 and 2019 we got the GDPR and the CCPA, two laws meant to thwart tracking and unwanted data collection, and which likely wouldn’t have happened if we hadn’t been given that finger.) We can solve that problem from the customer side with intentcasting. This is where we advertise to the marketplace what we want, without risk that our personal data will be misused. (Here is a list of intentcasting providers on the ProjectVRM Development Work list.)

We already have examples of personal solutions working at scale: the Internet, the Web, email and telephony. Each provides single, simple and standards-based ways any of us can scale how we deal with others—across countless companies, organizations and services. And they work for those companies as well.

Other solutions, however, are missing—such as ones that solve the nine problems listed above.

They’re missing for the best of all possible reasons: it’s still early. Digital living is still new—decades old at most. And it’s sure to persist for many decades, centuries or millennia to come.

They’re also missing because businesses typically think all solutions to business problems are ones for them. Thinking about customers solving business problems is outside that box.

But much work is already happening outside that box. And there already exist standards and code for building many customer-side solutions to problems shared with businesses. Yes, these are not yet as many or as good as we need; but there are enough to get started.

A lot of levers there.

For those of you attending this event, I’ll talk with you shortly. For the rest of you, I’ll let you know how it goes.

Let’s say the world is going to hell. Don’t argue, because my case isn’t about that. It’s about who saves it.

I suggest everybody. Or, more practically speaking, a maximized assortment of the smartest and most helpful anybodies.

Not governments. Not academies. Not investors. Not charities. Not big companies and their platforms. Any of those can be involved, of course, but we don’t have to start there. We can start with people. Because all of them are different. All of them can learn. And teach. And share. Especially since we now have the Internet.

To put this in perspective, start with Joy’s Law: “No matter who you are, most of the smartest people work for someone else.” Then take Todd Park‘s corollary: “Even if you get the best and the brightest to work for you, there will always be an infinite number of other, smarter people employed by others.” Then take off the corporate-context blinders, and note that smart people are actually far more plentiful among the world’s customers, readers, viewers, listeners, parishioners, freelancers and bystanders.

Hundreds of millions of those people also carry around devices that can record and share photos, movies, writings and a boundless assortment of other stuff. Ways of helping now verge on the boundless.

We already have millions (or billions) of them reporting on everything by taking photos and recording videos with their mobiles, obsolescing journalism as we’ve known it since the word came into use (specifically, around 1830). What matters with the journalism example, however, isn’t what got disrupted. It’s how resourceful and helpful (and not just opportunistic) people can be when they have the tools.

Because no being is more smart, resourceful or original than a human one. Again, by design. Even identical twins, with identical DNA from a single sperm+egg, can be as different as two primary colors. (Examples: Laverne Cox and M. Lamar. Nicole and Jonas Maines.)

Yes, there are some wheat/chaff distinctions to make here. To thresh those, I dig Carlo Cipolla‘s Basic Laws on Human Stupidity (.pdf here) which stars this graphic:

The upper right quadrant has how many people in it? Billions, for sure.

I’m counting on them. If we didn’t have the Internet, I wouldn’t.

In Internet 3.0 and the Beginning of (Tech) History, @BenThompson of @Stratechery writes this:

The Return of Technology

Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols. This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively, and forked individually, gaining both the benefits of scale and inevitability of sovereignty and self-determination.

—followed by this graphic:

If you want to know what he means by “Politics,” read the piece. I take it as something of a backlash by regulators against big tech, especially in Europe. (With global scope. All those cookie notices you see are effects of European regulations.) But the bigger point is where that arrow goes. We need infrastructure there, and it won’t be provided by regulation alone. Tech needs to take the lead. (See what I wrote here three years ago.) But our tech, not big tech.

The wind is at our backs now. Let’s sail with it.

Bonus links: Cluetrain, New Clues, World of Ends, Customer Commons.

And a big HT to my old buddy Julius R. Ruff, Ph.D., for turning me on to Cipolla.

[Later…] Seth Godin calls all of us “indies.” I like that. HT to @DaveWiner for flagging it.

If you listen to Episode 49: Parler, Ownership, and Open Source of the latest Reality 2.0 podcast, you’ll learn that I was blindsided at first by the topic of Parler, which has lately become a thing. But I caught up fast, even getting a Parler account not long after the show ended. Because I wanted to see what’s going on.

Though self-described as “the world’s town square,” Parler is actually a centralized social platform built for two purposes: 1) completely free speech; and 2) creating and expanding echo chambers.

The second may not be what Parler’s founders intended (see here), but that’s how social media algorithms work. They group people around engagements, especially likes. (I think, for our purposes here, that algorithmically nudged engagement is a defining feature of social media platforms as we understand them today. That would exclude, for example, Wikipedia or a popular blog or newsletter with lots of commenters. It would include, say, Reddit and Linkedin, because algorithms.)

Let’s start with recognizing that the smallest echo chamber in these virtual places is our own, comprised of the people we follow and who follow us. Then note that our visibility into other virtual spaces is limited by what’s shown to us by algorithmic nudging, such as by Twitter’s trending topics.

The main problem with this is not knowing what’s going on, especially inside other echo chambers. There are also lots of reasons for not finding out. For example, my Parler account sits idle because I don’t want Parler to associate me with any of the people it suggests I follow as soon as I show up:

I also don’t know what to make of this, which is the only other set of clues on the index page:

Especially since clicking on any of them brings up the same or similar top results, which seem to have nothing to do with the trending # topic. Example:

Thus endeth my research.

But serious researchers should be able to see what’s going on inside the systems that produce these echo chambers, especially Facebook’s.

The problem is that Facebook and other social networks are shell games, designed to make sure nobody knows exactly what’s going on, but feels okay with it, because they’re hanging with others who agree on the basics.

The design principle at work here is obscurantism—”the practice of deliberately presenting information in an imprecise, abstruse manner designed to limit further inquiry and understanding.”

To put the matter in relief, consider a nuclear power plant:

(Photo of kraftwerk Grafenrheinfeld, 2013, by Avda. Licensed CC BY-SA 3.0.)

Nothing here is a mystery. Or, if there is one, professional inspectors will be dispatched to solve it. In fact, the whole thing is designed from the start to be understandable, and its workings accountable to a dependent public.

Now look at a Facebook data center:

What it actually does is pure mystery, by design, to those outside the company. (And hell, to most, maybe all, of the people inside the company.) No inspector arriving to look at a rack of blinking lights in that place is going to know either. What Facebook looks like to you, to me, to anybody, is determined by a pile of discoveries, both on and off of Facebook’s site and app, around who you are and what you seem (to machines) to be interested in, and by an algorithmic process that is not accountable to you, and impossible for anyone, perhaps including Facebook itself, to fully explain.

All societies, and groups within societies, are echo chambers. And, because they cohere in isolated (and isolating) ways, it is sometimes hard for societies to understand each other, especially when they already have prejudicial beliefs about each other. Still, without the further influence of social media, researchers can look at and understand what’s going on.

Over in the digital world, which overlaps with the physical one, we at least know that social media amplifies prejudices. But, though it’s obvious by now that this is what’s going on, doing something to reduce or eliminate the production and amplification of prejudices is damn near impossible when the mechanisms behind it are obscure by design.

This is why I think these systems need to be turned inside out, so researchers can study them. I don’t know how to make that happen; but I do know there is nothing more large and consequential in the world that is also absent of academic inquiry. And that ain’t right.

BTW, if Facebook, Twitter, Parler or other social networks actually are opening their algorithmic systems to academic researchers, let me know and I’ll edit this piece accordingly.

I just got this email today:

Which tells me, from a sample of one (after another, after another), that Zoom is to video conferencing in 2020 what Microsoft Windows was to personal computing in 1999. Back then, one business after another said they would only work with Windows and what was left of DOS: Microsoft’s two operating systems for PCs.

What saved the personal computing world from being absorbed into Microsoft was the Internet—and the Web, running on the Internet. The Internet, based on a profoundly generative protocol, supported all kinds of hardware and software at an infinitude of end points. And the Web, based on an equally generative protocol, manifested on browsers that ran on Mac and Linux computers, as well as Windows ones.

But video conferencing is different. Yes, all the popular video conferencing systems run in apps that work on multiple operating systems, and on the two main mobile device OSes as well. And yes, they are substitutable. You don’t have to use Zoom (unless, in cases like mine, talking to my doctors requires it). There’s still Skype, Webex, Microsoft Teams, Google Hangouts and the rest.

But all of them have a critical dependency through their codecs. Those are the ways they code and decode audio and video. While there are some open source codecs, all the systems I just named use proprietary (patent-based) codecs. The big winner among those is H.264, aka AVC, which Wikipedia says “is by far the most commonly used format for the recording, compression, and distribution of video content, used by 91% of video industry developers as of September 2019.” Also,

H.264 is perhaps best known as being the most commonly used video encoding format on Blu-ray Discs. It is also widely used by streaming Internet sources, such as videos from Netflix, Hulu, Prime Video, Vimeo, YouTube, and the iTunes Store, Web software such as the Adobe Flash Player and Microsoft Silverlight, and also various HDTV broadcasts over terrestrial (ATSC, ISDB-T, DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S and DVB-S2) systems.

H.264 is protected by patents owned by various parties. A license covering most (but not all) patents essential to H.264 is administered by a patent pool administered by MPEG LA.[9]

The commercial use of patented H.264 technologies requires the payment of royalties to MPEG LA and other patent owners. MPEG LA has allowed the free use of H.264 technologies for streaming Internet video that is free to end users, and Cisco Systems pays royalties to MPEG LA on behalf of the users of binaries for its open source H.264 encoder.

This is generative, clearly, but not as generative as the Internet and the Web, which are both end-to-end by design.

More importantly, AVC in effect slides the Internet and the Web into the orbit of companies that have taken over what used to be telephony and television, which are now mooshed together. In the Columbia Doctors example, Zoom is the new PBX. The new classroom is every teacher and kid on her or his own rectangle, “zooming” with each other through the new telephony. The new TV is Netflix, Disney, Comcast, Spectrum, Apple, Amazon and many others, all competing for wedges of our Internet access and entertainment budgets.

In this new ecosystem, you are less the producer than you were, or would have been, in the early days of the Net and the Web. You are the end user, the consumer, the audience, the customer. Not the producer, the performer. Sure, you can audition for those roles, and play them on YouTube and TikTok, but those are somebody else’s walled gardens. You operate within them at their grace. You are not truly free.

And maybe none of us ever were, in those early days of the Net and the Web. But it sure seemed that way. And it does seem that we have lost something.

Or maybe just that we are slowly losing it, in the manner of boiling frogs.

Do we have to? I mean, it’s still early.

The digital world is how old? Decades, at most.

And how long will it last? At the very least, more than that. Centuries or millennia, probably.

So there’s hope.

[Later…] For some of that, dig OBS—Open Broadcaster Software’s OBS Studio: free and open source software for video recording and live streaming. HT: Joel Grossman (@jgro).

Also, though unrelated, why is Columbia Doctors’ Telehealth leaking patient data to advertisers? See here.

In New Digital Realities; New Oversight Solutions, Tom Wheeler, Phil Verveer and Gene Kimmelman suggest that “the problems in dealing with digital platform companies” strip the gears of antitrust and other industrial era regulatory machines, and that what we need instead is “a new approach to regulation that replaces industrial era regulation with a new more agile regulatory model better suited for the dynamism of the digital era.” For that they suggest “a new Digital Platform Agency should be created with a new, agile approach to oversight built on risk management rather than micromanagement.” They provide lots of good reasons for this, which you can read in depth here.

I’m on a list where this is being argued. One of those participating is Richard Shockey, who often cites his eponymous law, which says, “The answer is money. What is the question?” I bring that up as background for my own post on the list, which I’ll share here:

The Digital Platform Agency proposal seems to obey a law like Shockey’s that instead says, “The answer is policy. What is the question?”

I think it will help, before we apply that law, to look at modern platforms as something newer than new. Nascent. Larval. Embryonic. Primitive. Epiphenomenal.

It’s not hard to think of them that way if we take a long view on digital life.

Start with this question: is digital tech ever going away?

Whether yes or no, how long will digital tech be with us, mothering boundless inventions and necessities? Centuries? Millennia?

And how long have we had it so far? A few decades? Hell, Facebook and Twitter have only been with us since the late ’00s.

So why start to regulate what can be done with those companies from now on, right now?

I mean, what if platforms are just castles—headquarters of modern duchies and principalities?

Remember when we thought IBM, AT&T and the PTTs in Europe would own and run the world forever?

Remember when the BUNCH was around, and we called IBM “the environment?” Remember EBCDIC?

Remember when Microsoft ruled the world, and we thought they had to be broken up?

Remember when Kodak owned photography, and thought their enemy was Fuji?

Remember when recorded music had to be played by rolls of paper, lengths of tape, or on spinning discs and disks?

Remember when “social media” was a thing, and all the world’s gossip happened on Facebook and Twitter?

Then consider the possibility that all the dominant platforms of today are mortally vulnerable to obsolescence, to collapse under their own weight, or both.

Nay, the certainty.

Every now is a future then, every is a was. And trees don’t grow to the sky.

It’s an easy bet that every platform today is as sure to be succeeded as were stone tablets by paper, scribes by movable type, letterpress by offset, and all of it by xerography, ink jet, laser printing and whatever comes next.

Sure, we do need regulation. But we also need faith in the mortality of every technology that dominates the world at any moment in history, and in the march of progress and obsolescence.

Another thought: if the only answer is policy, the problem is the question.

This suggests yet another law (really an aphorism, but whatever): “The answer is obsolescence. What is the question?”

As it happens, I wrote about Facebook’s odds for obsolescence two years ago here. An excerpt:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is comprised of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting, amplify tribal prejudices (including genocidal ones) and produce many $billions for itself in an advertising business that depends on all of that—while also trying to correct, while they are doing what they were designed to do, the massively complex and settled infrastructural systems that make all of it work.

I’m not saying regulators should do nothing. I am saying that gravity still works, the mighty still fall, and these are facts of nature it will help regulators to take into account.

There is latency to everything. Pain, for example. Nerve impulses from pain sensors travel at about two feet per second. That’s why we wait for the pain when we stub a toe. The crack of a bat on a playing field takes half a second before we hear it in the watching crowd. The sunlight we see on Earth is eight minutes old. Most of this doesn’t matter to us, or if it does we adjust to it.

Likewise with how we adjust to the inverse square law. That law is why the farther away something is, the smaller it looks or the fainter it sounds. How much smaller or fainter is something we intuit more than we calculate. What matters is that we understand the law with our bodies. In fact we understand pretty much everything with our bodies.
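
For anyone who wants the arithmetic behind those intuitions, here is a quick back-of-the-envelope sketch in Python; the distances are illustrative, not measured.

    # Rough numbers for the examples above (all values approximate).
    SPEED_OF_SOUND = 343.0   # meters per second, in air

    def sound_delay(distance_m):
        """Seconds between seeing the bat swing and hearing the crack."""
        return distance_m / SPEED_OF_SOUND

    def relative_intensity(distance_ratio):
        """Inverse square law: intensity falls with the square of distance."""
        return 1.0 / distance_ratio ** 2

    print(round(sound_delay(170), 2))   # about half a second from ~170 meters away
    print(relative_intensity(2))        # twice as far away, one quarter the intensity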

All our deepest, most unconscious metaphors start with our bodies. That’s why we grasp, catch, toss around, or throw away an idea. It’s also why nearly all our prepositions pertain to location or movement. Over, under, around, through, with, beside, within, alongside, on, off, above and below only make sense to us because we have experienced them with our bodies.

So: How are we to make full sense of the Web, or the Internet, where we are hardly embodied at all?

We may say we are on the Web, because we need it to make sense to us as embodied beings. Yet we are only looking at a manifestation of it.

The “it” is the Hypertext Transfer Protocol (HTTP) that Tim Berners-Lee thought up in 1990 so high energy physicists, scattered about the world, could look at documents together. That protocol ran on another one: TCP/IP. Together they were mannered talk among computers about how to show the same document across any connection over any collection of networks between any two end points, regardless of who owned or controlled those networks. In doing so, Tim rubbed a bottle of the world’s disparate networks. Out popped the genie we call the Web, ready to grant boundless wishes that only began with document sharing.
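
To see how thin that mannered talk really is, here is a small sketch in Python of an HTTP request written by hand over a plain TCP socket, with example.com standing in as a placeholder host. A request is just a few agreed-upon lines of text; the reply is more of the same.

    # A sketch of the mannered talk described above: HTTP is agreed-upon lines
    # of text carried over a TCP/IP connection. "example.com" is a placeholder.
    import socket

    HOST = "example.com"
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: " + HOST + "\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode()

    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall(request)          # one computer asks, politely
        reply = b""
        while True:
            chunk = sock.recv(4096)    # the other answers in kind
            if not chunk:
                break
            reply += chunk

    print(reply.split(b"\r\n", 1)[0].decode())   # the status line, e.g. "HTTP/1.1 200 OK"

Everything else the Web does is elaboration on that little exchange.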

This was a miracle beyond the scale of loaves and fish: one so new and so odd that the movie Blade Runner, which imagined in 1982 that Los Angeles in 2019 would feature floating cars, off-world colonies and human replicants, failed to foresee a future when anyone could meet with anyone else, or any group, anywhere in the world, on wish-granting slabs they could put on their desks, laps, walls or hold in their hands. (Instead Blade Runner imagined there would still be pay phones and computers with vacuum tubes for screens.)

This week I attended Web Science 20 on my personal slab in California, instead of where it was planned originally: at the University of Southampton in the UK. It was still a conference, but now a virtual one, comprised of many people on many slabs, all over the world, each with no sense of distance any more meaningful than those imposed by the inconvenience of time zones.

Joyce (my wife, who is also the source of much wisdom for which her husband gets the credit) says our experience on the Web is one of absent distance and gravity—and that this experience is still so new to us that we have only begun to make full sense of it as embodied creatures. We’ll adjust, she says, much as astronauts adjust to the absence of gravity; but it will take more time than we’ve had so far. We may become expert at using the likes of Zoom, but that doesn’t mean we operate in full comprehension of the new digital environment we co-occupy.

My own part in WebSci20 was talking with five good people, plus others asking questions in a chat, during the closing panel of the conference. (That’s us, at the top of this post.) The title of our session was The Future of Web Science. To prep for that session I wrote the first draft of what follows: a series of thoughts I hoped to bring up in the session, and some of which I actually did.

The first thought is the one I just introduced: The Web, like the Net it runs on, is both new and utterly vexing toward understanding in terms we’ve developed for making sense of embodied existence.

Here are some more.

The Web is a whiteboard.

In the beginning we thought of the Web as something of a library, mostly because it was comprised of sites with addresses and pages that were authored, published, syndicated, browsed and read. A universal resource locator, better known as a URL, would lead us through what an operating system calls a path or a directory, much as a card catalog did before library systems went digital. It also helped that we understood the Web as real estate, with sites and domains that one owned and others could visit.
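
The library metaphor is visible in the URL itself, which splits into a scheme, a host and a path, the last of those a card-catalog-style pointer into someone’s shelves. A small sketch, using a made-up address:

    # How a URL decomposes, using a made-up address.
    from urllib.parse import urlparse

    parts = urlparse("https://example.org/library/stacks/card-042.html")

    print(parts.scheme)   # "https": which protocol (which manners) to use
    print(parts.netloc)   # "example.org": which host to ask
    print(parts.path)     # "/library/stacks/card-042.html": the path, card-catalog style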

The metaphor of the Web as a library, though useful, also misdirects our attention and understanding away from its nature as a collection of temporary manifestations. Because, for all we attempt to give the Web a sense of permanence, it is evanescent, temporary, ephemeral. We write and publish there as we might on snow, sand or a whiteboard. Even the websites we are said to “own” are in fact only rented. Fail to pay the registrar and off they go.

The Web is not what’s on it.

It is not Google, or Facebook, dot-anything or dot-anybody. It is the manifestation of documents and other non-stuff we call “content,” presented to us in browsers and whatever else we invent to see and deal with what the hypertext protocol makes possible. Here is how David Weinberger and I put it in World of Ends, more than seventeen years ago:

1. The Internet isn’t complicated
2. The Internet isn’t a thing. It’s an agreement.
3. The Internet is stupid.
4. Adding value to the Internet lowers its value.
5. All the Internet’s value grows on its edges.
6. Money moves to the suburbs.
7. The end of the world? Nah, the world of ends.
8. The Internet’s three virtues:
a. No one owns it
b. Everyone can use it
c. Anyone can improve it
9. If the Internet is so simple, why have so many been so boneheaded about it?
10. Some mistakes we can stop making already

That was a follow-up of sorts to The Cluetrain Manifesto, which we co-wrote with two other guys four years earlier. We followed up both five years ago with an appendix to Cluetrain called New Clues. While I doubt we’d say any of that stuff the same ways today, the heart of it beats the same.

The Web is free.

The online advertising industry likes to claim the “free Internet” is a grace of advertising that is “relevant,” “personalized,” “interest-based,” “interactive” and other adjectives that misdirect us away from what those forms of advertising actually do, which is track us like marked animals.

That claim, of course, is bullshit. Here’s what Harry Frankfurt says about that in his canonical work, On Bullshit (Cambridge University Press, 1988): “The realms of advertising and public relations, and the nowadays closely related realm of politics, are replete with instances of bullshit so unmitigated that they can serve among the most indisputable and classic paradigms of the concept.” Boiled down, bullshit is what Wikipedia (at the moment, itself being evanescent) calls “speech intended to persuade without regard for truth.” Another distinction: “The liar cares about the truth and attempts to hide it; the bullshitter doesn’t care if what they say is true or false, but rather only cares whether their listener is persuaded.”

Consider for a moment Win Bigly: Persuasion in a World Where Facts Don’t Matter, a 2017 book by Scott Adams that explains, among other things, how a certain U.S. tycoon got his ass elected President. The world Scott talks about is the Web.

Nothing in the history of invention is more supportive of bullshit than the Web. Nor is anything more supportive of truth-telling, education and damned near everything else one can do in the civilized world. And we’re only beginning to discover and make sense of all those possibilities.

We’re all digital now

Meaning not just physical. This is what’s new, not just to human experience, but to human existence.

Marshall McLuhan calls our technologies, including our media, extensions of our bodily selves. Consider how, when you ride a bike or drive a car, those are your wheels and your brakes. Our senses extend outward to suffuse our tools and other technologies, making them parts of our larger selves. Michael Polanyi called this process indwelling.

Think about how, although we are not really on or through the Web, we do dwell in it when we read, write, speak, watch and perform there. That is what I am doing right now, while I type what I see on a screen in San Marino, California, as a machine, presumably in Cambridge, Massachusetts, records my keystrokes and presents them back to me, and now you are reading it, somewhere else in (or on, or choose your preposition) the world. Dwell may be the best verb for what each of us is doing in the non-here we all co-occupy in this novel (to the physical world) non-place and times.

McLuhan also said media revolutions are formal causes. Meaning that they form us. (He got that one from Aristotle.) In different ways we were formed and re-formed by speech, writing, printing, and radio and television broadcasting.

I submit that we are far more formed by digital technologies, and especially by the Internet and the Web, than by any other prior technical revolution. (A friend calls our current revolution “the biggest thing since oxygenation.”)

But this is hard to see because, as McLuhan puts it, every one of these major revolutions becomes a ground on which everything else dances as figures. But it is essential to recognize that the figures are not the ground. This, I suggest, is the biggest challenge for Web Science.

It’s damned hard to study ground-level formal causes such as digital tech, the Net and the Web. Because what they are technically is not what they do formally. They are rising tides that float all boats, oblivious to the boats themselves.

I could say more, and I’m sure I will; but I want to get this much out there before the panel.


A few days ago, in Figuring the Future, I sourced an Arnold Kling blog post that posed an interesting pair of angles toward outlook: a 2×2 with Fragile <—> Robust on one axis and Essential <—> Inessential on the other. In his sort, essential + fragile are hospitals and airlines. Inessential + fragile are cruise ships and movie theaters. Robust + essential are tech giants. Inessential + robust are sports and entertainment conglomerates, plus major restaurant chains. It’s a heuristic, and all of it is arguable (especially given the gray along both axes), which is the idea. Cases must be made if planning is to have meaning.

Now, haul Arnold’s template over to The U.S. Labor Market During the Beginning of the Pandemic Recession, by Tomaz Cajner, Leland D. Crane, Ryan A. Decker, John Grigsby, Adrian Hamins-Puertolas, Erik Hurst, Christopher Kurz, and Ahu Yildirmaz, of the University of Chicago, and lay it on this item from page 21:

The highest employment drop, in Arts, Entertainment and Recreation, leans toward inessential + fragile. The second, in Accommodation and Food Services, is more on the essential + fragile side. The lowest employment changes, from Construction on down to Utilities, all tend toward essential + robust.

So I’m looking at those bottom eight essential + robust categories and asking a couple of questions:

1) What percentage of workers in each essential + robust category are now working from home?

2) How much of this work is essentially electronic? Meaning, done by people who live and work through glowing rectangles, connected on the Internet?

Hard to say, but the answers will have everything to do with the transition of work, and life in general, into a digital world that coexists with the physical one. This was the world we were gradually putting together when urgency around COVID-19 turned “eventually” into “now.”

In Junana, Bruce Caron writes,

“Choose One” was extremely powerful. It provided a seed for everything from language (connecting sound to meaning) to traffic control (driving on only one side of the road). It also opened up to a constructivist view of society, suggesting that choice was implicit in many areas, including gender.

Choose One said to the universe, “There are several ways we can go, but we’re all going to agree on this way for now, with the understanding that we can do it some other way later, thank you.” It wasn’t quite as elegant as “42,” but it was close. Once you started unfolding with it, you could never escape the arbitrariness of that first choice.

In some countries, an arbitrary first choice to eliminate or suspend personal privacy allowed intimate degrees of contact tracing to help hammer flat the infection curve of COVID-19. Not arbitrary, perhaps, but no longer escapable.

Other countries face similar choices. Here in the U.S., there is an argument that says “The tech giants already know our movements and social connections intimately. Combine that with what governments know and we can do contact tracing to a fine degree. What matters privacy if in reality we’ve lost it already and many thousands or millions of lives are at stake—and so are the economies that provide what we call our ‘livings.’ This virus doesn’t care about privacy, and for now neither should we.” There is also an argument that says, “Just because we have no privacy yet in the digital world is no reason not to have it. So, if we do contact tracing through our personal electronics, it should be disabled afterwards and obey old or new regulations respecting personal privacy.”

Those choices are not binary, of course. Nor are they outside the scope of too many other choices to name here. But many of those are “Choose Ones” that will play out, even if our choice is avoidance.

In The Web and the New Reality, which I posted on December 1, 1995 (and again a few days ago), I called that date “Reality 1.995.12,” and made twelve predictions. In this post I’ll visit how those have played out over the quarter century since then.

1. As more customers come into direct contact with suppliers, markets for suppliers will change from target populations to conversations.

Well, both. While there are many more direct conversations between demand and supply than there were in the pre-Internet world, we are more targeted than ever, now personally and not just as populations. This has turned into a gigantic problem that many of us have been talking about for a decade or more, to sadly insufficient effect.

2. Travel, ticket, advertising and PR agencies will all find new ways to add value, or they will be subtracted from market relationships that no longer require them.

I don’t recall why I grouped those four things, so let’s break them apart:

  • Little travel agencies went to hell. Giant Net-based ones thrived. See here.
  • Tickets are now almost all digital. I don’t know what a modern ticket agency does, if any exist.
  • Advertising agencies went digital and became malignant. I’ve written about that a lot, here. All of those writings could be compressed to a pull quote from Separating Advertising’s Wheat and Chaff: “Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.”
  • PR agencies, far as I know (and I haven’t looked very far) are about the same.

3. Within companies, marketing communications will change from peripheral activities to core competencies. New media will flourish on the Web, and old media will learn to live with the Web and take advantage of it.

If we count the ascendance of the Chief Marketing Officer (CMO) as a success, this was a bulls-eye. However, most CMOs are all about “digital,” by which they generally mean direct response marketing. And if you didn’t skip to this item you know what I think about that.

4. Retail space will complement cyber space. Customer and technical service will change dramatically, as 800 numbers yield to URLs and hard copy documents yield to soft copy versions of the same thing… but in browsable, searchable forms.

Yep. All that happened.

5. Shipping services of all kinds will bloom. So will fulfillment services. So will ticket and entertainment sales services.

That too.

The web’s search engines will become the new yellow pages for the whole world. Your fingers will still do the walking, but they won’t get stained with ink. Same goes for the white pages. Also the blue ones.

And that.

6. The scope of the first person plural will enlarge to include the whole world. “We” may mean everybody on the globe, or any coherent group that inhabits it, regardless of location. Each of us will swing from group to group like monkeys through trees.

Oh yeah.

7. National borders will change from barricades and toll booths into speed bumps and welcome mats.

Mixed success. When I wrote this, nearly all Internet access was through telcos, so getting online away from home still required a local phone number. That’s pretty much gone. But the Internet itself is being broken into pieces. See here.

8. The game will be over for what teacher John Taylor Gatto labels “the narcotic we call television.” Also for the industrial relic of compulsory education. Both will be as dead as the mainframe business. In other words: still trucking, but not as the anchoring norms they used to be.

That hasn’t happened; but self-education, home-schooling and online study of all kinds are thriving.

9. Big Business will become as anachronistic as Big Government, because institutional mass will lose leverage without losing inertia.

Well, this happened. So, no.

10. Domination will fail where partnering succeeds, simply because partners with positive sums will combine to outproduce winners and losers with zero sums.

Here’s what I meant by that.
I think more has happened than hasn’t. But, visiting the particulars requires a whole ‘nuther post.

11. Right will make might.

Nope. And this one might never happen. Hey, in 25 years one tends to become wiser.

12. And might will be mighty different.

That’s true, and in some ways that depresses me.

So, on the whole, not bad.
