Internet


I wrote this more than a quarter century ago when Linux Journal was the only publication that would have me, and I posted unsold essays and wannabe columns at searls.com. These postings accumulated in this subdirectory for several years before Dave Winer got me to blog for real, starting here.

Interesting how much has changed since I wrote this, and how much hasn’t. Everything I said about metaphor applies no less than ever, even as all the warring parties mentioned have died or moved on to other activities, if not battles. (Note that there was no Google at this time, and the search engines mentioned exist only as fossils in posts such as this one.)

Perhaps most interesting is the paragraph about MARKETS ARE CONVERSATIONS. While that one-liner had no effect at the time, it became a genie that would not return to its bottle after Chris Locke, David Weinberger, Rick Levine and I put it in The Cluetrain Manifesto in 1999. In fact, I had been saying “markets are conversations” to no effect at least since the 1980s. Now “join the conversation” is bullshit almost everywhere it’s uttered, but you can’t stop hearing it. Strange how that goes.

MAKE MONEY, NOT WAR
TIME TO MOVE PAST THE WAR METAPHORS OF THE INDUSTRIAL AGE

By Doc Searls
19 March 1997

“War isn’t an instinct. It’s an invention.”

“The metaphor is probably the most fertile power possessed by man.”

“Conversation is the socializing instrument par excellence.”

-José Ortega y Gasset


Patton lives

In the movie “Patton,” the general says, “Compared to war, all other forms of human endeavor shrink to insignificance.” In a moment of self-admonition, he adds, “God help me, I love it so.”

And so do we. For proof, all we have to do is pick up a trade magazine. Or better yet, fire up a search engine.

Altavista says more than one million documents on the Web contain the words Microsoft, Netscape, and war. Hotbot lists hundreds of documents titled “Microsoft vs. Netscape,” and twice as many titled “Netscape vs. Microsoft.”

It’s hard to find an article about the two companies that does not cast them as opponents battling over “turf,” “territory,” “sectors” and other geographies.

It’s also hard to start a conversation without using the same metaphorical premise. Intranet Design Magazine recently hosted a thread titled “Who’s winning?? Netscape vs. Microsoft.” Dave Shafer starts the thread with “Wondering what your informed opinion is on who is winning the internet war and what affects this will have on inter/intranet development.” The first respondent says, “sorry, i’m from a french country,” and “I’m searching for economical informations about the war between Microsoft and Netscape for the control of the WEB industrie.” Just as telling is a post by a guy named Michael, who says “Personaly I have both on my PC.”

So do I. Hey, I’ve got 80 megs of RAM and a 2 gig hard drive, so why not? I also have five ISPs, four word processors, three drawing programs, and two presentation packages. I own competing products from Apple, IBM, Microsoft, Netscape, Adobe, Yamaha, Sony, Panasonic, Aiwa, Subaru, Fisher Price and the University of Chicago — to name just a few I can see from where I sit. I don’t sense that buying and using any of these is a territorial act, a victory for one company, or a defeat for another.

But that doesn’t mean we don’t have those perceptions when we write and talk about companies and the markets where they compete. Clearly, we do, because we understand business — as we understand just about everything — in metaphorical terms. As it happens, our understanding of companies and markets is largely structured by the metaphors BUSINESS IS WAR and MARKETS ARE BATTLEFIELDS.

By those metaphors we share an understanding that companies fight battles over market territories that they attack, defend, dominate, yield or abandon. Their battlefields contain beachheads, bunkers, foxholes, sectors, hills, mountains, swamps, streams, rivers, landslides, quagmires, mud, passages, roadblocks, and high ground. In fact, the metaphor BUSINESS IS WAR is such a functional conceptual system that it unconsciously pumps out clichés like a machine. And since sports is a sublimated and formalized kind of war, the distances between sports and war metaphors in business are so small that the vocabularies mix without clashing.

Here, I’ll pick up the nearest Business Week… it’s the January 13 issue. Let’s look at the High Technology section that starts on page 104. The topic is Software and the headline reads, “Battle stations! This industry is up for grabs as never before…” Here’s the first paragraph, with war and sports references capitalized: “Software was once an orderly affair in which a few PLAYERS called most of the shots. The industry had almost gotten used to letting Microsoft Corp. set the agenda in personal computing. But as the Internet ballooned into a $1 billion software business in 1996, HUGE NEW TERRITORIES came up for grabs. Microsoft enters the new year in a STRONG POSITION TO REASSERT CONTROL. But it will have to FIGHT OFF Netscape, IBM, Oracle and dozens of startups that are DESPERATELY STAKING OUT TURF on the Net. ‘Everyone is RACING TO FIND MARKET SPACE and get established…'”

Is this a good thing? Does it matter? The vocabularies of war and sports may be the most commonly used sources of metaphors, for everything from academic essays to fashion stories. Everybody knows war involves death and destruction, yet we experience little if any of that in the ordinary conduct of business, or even of violent activities such as sports.

So why should we concern ourselves with war metaphors, when we all know we don’t take them literally?

Two reasons. First, we do take them literally. Maybe we don’t kill each other, but the sentiments are there, and they do have influences. Second, war rarely yields positive sums, except for one side or another. The economy the Internet induces is an explosion of positive sums that accrue to many if not all participants. Doesn’t it deserve a more accurate metaphor?

For answers, let’s turn to George Lakoff.

The matter of Metaphor

“Answer true or false,” Firesign Theater says. “Dogs flew spaceships. The Aztecs invented the vacation… If you answered ‘false’ to any of these questions, then everything you know is wrong.”

This is the feeling you begin to get when you read George Lakoff, the foremost authority on the matter of metaphor. Lakoff is Professor of Linguistics and Cognitive Science at UC-Berkeley, the author of Women, Fire and Dangerous Things and Moral Politics: What Conservatives Know that Liberals Don’t. He is also co-author of Metaphors We Live By and More than Cool Reason. All are published by the University of Chicago Press.



If Lakoff is right, the most important class you ignored in school was English — not because you need to know all those rules you forgot or books you never read, but because there’s something else behind everything you know (or think you know) and talk about. That something is a metaphor. (And if you think otherwise, you’re wrong.)

In English class — usually when the subject was poetry — they told us that meaning often arises out of comparison, and that three comparative devices are metaphor, simile, and analogy. Each compares one thing to another thing that is similar in some way:

  • Metaphors say one thing is another thing, such as “time is money,” “a computer screen is a desktop,” or (my favorite Burt Lancaster line) “your mind is a cookie of arsenic.”
  • Similes say one thing is like another thing, such as “gone like snow on the water” or “dumb as a bucket of rocks.”
  • Analogies suggest partial similarities between unalike things, as with “licorice is the liver of candy.”

But metaphor is the device that matters, because, as Lakoff says, “We may not always know it, but we think in metaphor.” And, more to the point, “Metaphors can kill.” Maybe that’s why they didn’t give us the real story in school. It would have been like pulling the pins out of a bunch of little hand grenades.

But now we’re adults, and you’d think we should know how to safely arm and operate a language device. But it’s not easy. Cognitive science is relatively new and only beginning to make sense of the metaphorical structures that give shape and meaning to our world. Some of these metaphors are obvious, but many others are hidden. In fact, some are hidden so well that even a guru like Lakoff can overlook them for years.

Lakoff’s latest book, Moral Politics: What Conservatives Know that Liberals Don’t, was inspired by his realization that the reason he didn’t know what many conservatives were talking about was that, as a Liberal, he didn’t comprehend conservative metaphors. Dan Quayle’s applause lines went right past him.

After much investigation, Lakoff found that central to the conservative worldview was a metaphor of the state as a strict father and that the “family values” conservatives espouse are those of a strict father’s household: self-reliance, rewards and punishments, responsibility, respect for authority — and finally, independence. Conservatives under Ronald Reagan began to understand the deep connection between family and politics, while Liberals remained clueless about their own family metaphor — the “nurturant parent” model. Under Reagan, Lakoff says, conservatives drove the language of strict father morality into the media and the body politic. It won hearts and minds, and it won elections.

So metaphors matter, big time. They structure our perceptions, the way we make sense of the world, and the language we use to talk about things that happen in the world. They are also far more literal than poetry class would lead us to believe. Take the metaphor ARGUMENT IS WAR —

“It is important to see that we don’t just talk about arguments in terms of war. We can actually win or lose arguments. We see the person we are arguing with as an opponent. We attack his positions and defend our own. We gain and lose ground. We plan and use strategies… Many of the things we do in arguing are partially structured by the concept of war.” (From Metaphors We Live By)

In our culture argument is understood and structured by the war metaphor. But in other cultures it is not. Lakoff invites us to imagine a culture where argument is viewed as dance, participants as performers and the goal to create an aesthetically pleasing performance.

Right now we understand that “Netscape is losing ground in the browser battle,” because we see the browser business as a territory over which Netscape and Microsoft are fighting a war. In fact, we are so deeply committed to this metaphor that the vocabularies of business and war reporting are nearly indistinguishable.

Yet the Internet “battlefield” didn’t exist a decade ago, and the software battlefield didn’t exist a decade before that. These territories were created out of nothingness. Countless achievements have been made on them. Victories have been won over absent or equally victorious opponents.

In fact, Netscape and Microsoft are creating whole new markets together, and both succeed mostly at nobody’s expense. Netscape’s success also owes much to the robust nature of the Windows NT Server platform.



At the same time Microsoft has moved forward in browsers, directory services, languages, object models and other product categories — mostly because it’s chasing Netscape in each of them.

Growing markets are positive-sum creations, while wars are zero-sum at best. But BUSINESS IS WAR is a massive metaphorical machine that works so well that business war stories almost write themselves. This wouldn’t be a problem if business was the same now as it was twenty or fifty years ago. But business is changing fast, especially where the Internet is involved. The old war metaphor just isn’t doing the job.

Throughout the Industrial Age, both BUSINESS IS WAR and MARKETS ARE BATTLEFIELDS made good structure, because most industries and markets were grounded in physical reality. Railroads, shipping, construction, automobiles, apparel and retail were all located in physical reality. Even the phone system was easily understood in terms of phones, wires and switches. And every industrial market contained finite labor pools, capital, real estate, opportunities and natural resources. Business really was war, and markets really were battlefields.

But the Internet is hardly physical and most of its businesses have few physical limitations. The Web doesn’t look, feel or behave like anything in the analog world, even though we are eager to describe it as a “highway” or as a kind of “space.” Internet-related businesses appear and grow at phenomenal rates. The year 1995 saw more than $100 billion in new wealth created by the Internet, most of it invested in companies that were new to the world, or close to it. Now new markets emerge almost every day, while existing markets fragment, divide and expand faster than any media can track them.

For these reasons, describing Internet business in physical terms is like standing at the Dawn of Life and describing new species in terms of geology. But that’s what we’re doing, and every day the facts of business and technology life drift farther away from the metaphors we employ to support them. We arrive at pure myth, and the old metaphors stand out like bones from a dry corpse.

Of course myths are often full of truth. Fr. Seán Olaoire says “there are some truths so profound only a story can tell them.” But the war stories we’re telling about the Internet are turning into epic lies.



What can we do about it?

First, there’s nothing we can do to break the war metaphor machine. It’s just too damn big and old and good at what it does. But we can introduce some new metaphors that make equally good story-telling machines, and tell more accurately what’s going on in this new business world.

One possibility is MARKETS ARE CONVERSATIONS. These days we often hear conversations used as synonyms for markets. We hear about “the privacy conversation” or “the network conversation.” We “talk up” a subject and say it has a lot of “street cred.” This may not be much, but it does accurately structure an understanding of what business is and how markets work in the world we are creating with the Internet.

Another is the CONDUIT metaphor. Lakoff credits Michael Reddy with discovering hidden in our discussions of language the implication of conduit structure:

Your thinking comes through loud and clear.
It’s hard to put my ideas into words.
You can’t stuff ideas into a sentence.
His words carry little meaning.

The Net facilitates communication, and our language about communication implies conduits through which what we say is conveyed. The language of push media suggests the Net is less a centerless network — a Web — than a set of channels through which stuff is sent. Note the preposition. I suggest that we might look more closely at how much the conduit metaphor is implicit in what we say about push, channels and related subjects. There’s something to it, I think.

My problem with both CONDUIT and CHANNEL is that they don’t clearly imply positive sums, and don’t suggest the living nature of the Net. Businesses have always been like living beings, but in the Net environment they enjoy unprecedented fecundity. What’s a good metaphor for that? A jungle?

Whatever, it’s clearly not just a battlefield, regardless of the hostilities involved. It’s time to lay down our arms and start building new conceptual machines. George Lakoff will speak at PC Forum next week. I hope he helps impart some mass to one or more new metaphorical flywheels. Because we need to start telling sane and accurate stories about our businesses and our markets.

If we don’t, we’ll go on shooting at each other for no good reason.


Links

Here are a few links into the worlds of metaphor and cognitive science. Some of this stuff is dense and heavy; but hey, it’s not an easy subject. Just an important one.

I also explored the issue of push media in Shoveling Push and When Push Becomes Shove. And I visited the Microsoft vs. Netscape “war” in Microsoft + Netscape: The Real Story. All three are in Reality 2.0.


In July 2008, when I posted the photo above on this blog, some readers thought Santa Barbara Mission was on fire. It didn’t matter that I explained in that post how I got the shot, or that news reports made clear that the Gap Fire was miles away. The photo was a good one, but it also collapsed three dimensions into just two. Hence the confusion. If you didn’t know better, it looked like the building was on fire. The photo removed distance.

So does the Internet, at least when we are there. Let’s look at what there means.

Pre-digital media were limited by distance, and to a high degree defined by it. Radio and television signals degrade across distances from transmitters, and are limited as well by buildings, terrain, weather, and (on some frequency bands) ionospheric conditions. Even a good radio won’t get an absent signal. Nor will a good TV. Worse, if you live more than a few dozen miles from a TV station’s transmitter, you need a good antenna mounted on your roof, a chimney, or a tall pole. For signals coming from different locations, you need a rotator as well. Even on cable, there is still a distinction between local channels and cable-only ones. You pay more to get “bundles” of the latter, so there is a distance in cost between local and distant channel sources. If you get your TV by satellite, your there needs to be in the satellite’s coverage footprint.

But with the Internet, here and there are the same. Distance is gone, on purpose. Its design presumes that all physical and wireless connections are one, no matter who owns them or how they get paid to move chunks of Internet data. It is a world of ends meant to show nothing of its middles, which are countless paths the ends ignore. (Let’s also ignore, for the moment, that some countries and providers censor or filter the Internet, in some cases blocking access from the physical locations their systems detect. Those all essentially violate the Internet’s simple assumption of openness and connectivity for everybody and everything at every end.)

For people on the Internet, distance is collapsed to the height and width of a window. There is also no gravity because space implies three dimensions and your screen has only two, and the picture is always upright. When persons in Scotland and Australia talk, neither is upside down to the other. But they are present with each other and that’s what matters. (This may change in the metaverse, whatever that becomes, but will likely require headgear not everyone will wear. And it will still happen over the Internet.)

Digital life, almost all of which now happens on the Internet, is new to human experience, and our means of coping are limited. For example, by language. And I don’t mean different ones. I mean all of them, because they are made for making sense of a three-dimensional physical world, which the Internet is not.

Take prepositions. English, like most languages, has a few dozen prepositions, most of which describe placement in three-dimensional space. Over, around, under, through, beside, within, off, on, aboard… all presume three dimensions. That’s also where our bodies are, and it is through our bodies that we make sense of the world. We say good is light and bad is dark because we are diurnal hunters and gatherers, with eyes optimized for daylight. We say good is up and bad is down because we walk and run upright. We “grasp” or “hold on” to an idea because we have opposable thumbs on hands built to grab. We say birth is “arrival,” death is “departure” and careers are “paths,” because we experience life as travel.

But there are no prepositions yet that do justice to the Internet’s absence of distance. Of course, we say we are “on” the Internet like we say we are “on” the phone. And it works well enough, as does saying we are “on” fire or drugs. We just frame our understanding of the Internet in metaphorical terms that require a preposition, and “on” makes the most sense. But can we do better than that? Not sure.

Have you noticed that how we present ourselves in the digital world also differs from how we do the same in the physical one? On social media, for example, we perform roles, as if on a stage. We talk to an audience straight through a kind of fourth wall, like an actor, a lecturer, a comedian, a musician, or a politician. My point here is that the arts and methods of performing in the physical world are old, familiar, and reliant on physical surroundings. The ways we behave with others in our offices, our bedrooms, our kitchens, our clubs, and our cars are all well-practiced and understood. In social media, the sense of setting is much different and more limited.

In the physical world, much of our knowledge (as the scientist and philosopher Michael Polanyi first taught) is tacit rather than explicit, yet digital technology is entirely explicit: ones and zeroes, packets and pixels. We do have tacit knowledge of the digital world, but the on/off present/absent two-dimensionality of that world is still new to us and lacks much of what makes life in the three-dimensional natural world so rich and subtle.

Marshall McLuhan says all media, all technologies, extend us. When we drive a car we wear it like a carapace or a suit of armor. We also speak of it in the first person possessive: my engine, my fenders, my wheels, much as we would say my fingers and my hair. There is distance here too, and it involves performance. A person who would never yell at another person standing in line at a theater might do exactly that at another car. This kind of distance is gone, or very different, in the digital world.

In a way we are naked in the digital world, and vulnerable. By that I mean we lack the rudimentary privacy technologies we call clothing and shelter, which protect our private parts and spaces from observation and intrusion while also signaling the forms of observation and contact that we permit or welcome. The absence of this kind of privacy tech is why it is so easy for websites and apps to fill our browsers with cookies and other ways to track us and report our activities back to dozens, hundreds or thousands of unknown parties. In this early stage of life on the Internet, what’s called privacy is just the “choices” sites and services give us, none of which are recorded where we can easily find, audit, or dispute them.

Can we get new forms of personal tech that truly extend and project our agency in the digital world? I think so, but it’s a question so completely good that we don’t yet have an answer.

 

agree not to track

It’s P7012: Standard for Machine Readable Personal Privacy Terms, which “identifies/addresses the manner in which personal privacy terms are proffered and how they can be read and agreed to by machines.”

P7012 is being developed by a working group of the IEEE. Founded in 1963, the IEEE is the largest association of technical professionals in the world and is serious in the extreme.

This standard will guide the way the companies of the world agree to your terms. Not how you agree to theirs. We have the latter “system” right now and it is failing utterly, massively, and universally. Let me explain.

First, company privacy policies aren’t worth the pixels they’re printed on. They can change on a whim, and there is nothing binding about them anyway.

Second, the system of “agreements” we have today does nothing more than put fig leaves over the hard-ons companies have for information about you: information you give up when you agree to a consent notice.

Consent notices are those banners or pop-overs that site owners use to halt your experience and shake down consent to violations of your privacy. There’s usually a big button that says ACCEPT, and some smaller print with a link going to “settings.” Those urge you to switch on or off the “necessary,” “functional,” “performance,” and “targeting” or “marketing” cookies that the site would like to jam into your browser.
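Those banner “settings” reduce to a handful of on/off switches, with one switch you can’t actually turn off. Here is a minimal sketch of that logic; the category names mirror the banner labels above, but the dictionary and function are purely illustrative, not any real consent-platform API:

```python
# Hypothetical sketch of cookie-consent state, as described above.
# Nothing here comes from a real consent-management product.

DEFAULT_ON_ACCEPT = {
    "necessary": True,    # always on; the banner offers no real choice here
    "functional": True,
    "performance": True,
    "targeting": True,    # the tracking cookies the site actually wants
}

def permitted(consent: dict, category: str) -> bool:
    """Return whether a cookie in `category` may be set under `consent`."""
    # "Necessary" cookies are typically exempt from consent altogether.
    return category == "necessary" or consent.get(category, False)

# Clicking the big ACCEPT button grants everything:
assert permitted(DEFAULT_ON_ACCEPT, "targeting") is True

# Opting out in "settings" still leaves "necessary" cookies in place:
opted_out = {k: (k == "necessary") for k in DEFAULT_ON_ACCEPT}
assert permitted(opted_out, "targeting") is False
assert permitted(opted_out, "necessary") is True
```

Note the asymmetry the sketch makes plain: every switch defaults in the site’s favor, and the one category you cannot decline is defined by the site, not by you.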

Regardless of what you “choose,” there are no simple or easy ways to discover or dispute violations of your “agreement” to anything. Worse, you have to do this with nearly every freaking website you encounter, universalizing the meaninglessness of the whole thing.

But what if sites and services agreed to your terms, as soon as you show up?

We have that in the natural world, where it is impolite in the extreme to look under the personal privacy protections called clothing. Or to penetrate other personal privacy protections, such as shelter, doors, shades, and locks. Or to plant tracking beacons on people to follow them like marked animals. There are social contracts forbidding all of those. We expect those contracts to be respected, and for the most part they are.

But we have no such social contracts on the Net. In fact, we have the opposite: a feeding frenzy on private information about us, made possible by our powerlessness to stop it, plus boundless corporate rationalization.

We do have laws meant to reduce that frenzy by making some of it illegal. Others are in the works, most notably in Europe. What they have done to stop it so far rounds to zero. In his latest book, ADSCAM: How Online Advertising Gave Birth to One of History’s Greatest Frauds, and Became a Threat to Democracy, Bob Hoffman has a much more sensible and effective policy suggestion than any others we’ve seen: simply ban tracking.

While we wait for that, we can use the same kind of tool that companies are using: a simple contract. Sign here. Electronically. That’s what P7012 will standardize.

There is nothing in the architecture of the Net or the Web to prevent a company from agreeing to personal terms.

In fact, at its base—in the protocol called TCP/IP—the Internet is a peer-to-peer system. It does not consign us to subordinate status as mere “users,” “consumers,” “eyeballs,” or whatever else marketers like to call us.

To perform as full peers in today’s online world, we need easy ways for company machines to agree to the same kind of personal terms we express informally in the natural world. That’s what P7012 will make possible.

I’m in that working group, and we’ve been at it for more than two years. We expect to have it done in the next few months. If you want to know more about it, or to help, talk to me.

And start thinking about what kind of standard-form and simple terms a person might proffer: ones that are agreeable to everyone. Because we will need them. And when we get them, surveillance capitalism can finally be replaced by a much larger and friendlier economy: one based on actual customer intentions rather than manipulations based on guesswork and horrible manners.

One candidate is #NoStalking, aka P2B1beta. #NoStalking was developed with help from the Cyberlaw Clinic at Harvard Law School and the Berkman Klein Center, and says “Just give me ads not based on tracking me.” In other words, it does permit advertising and welcomes sites and services making money that way. (This is how the advertising business worked for many decades before it started driving drunk on personal data.)
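P7012 is still being drafted, so no wire format is settled. Purely as an assumption-laden sketch, the idea of a person proffering #NoStalking and a site’s machine agreeing (or not) might look like this; the field names and matching logic are mine, not the working group’s:

```python
# Illustrative only: neither these field names nor this logic come from
# the P7012 draft. They just show the shape of the idea -- the person
# proffers a term, and the site's machine agrees or declines.

NO_STALKING = {
    "term": "P2B1beta",        # a.k.a. #NoStalking
    "ads": "allowed",          # advertising itself is welcome...
    "tracking": "prohibited",  # ...so long as it is not based on tracking
}

def site_agrees(site_policy: dict, term: dict) -> bool:
    """A site can agree only if its practices satisfy every clause of the term."""
    if term["tracking"] == "prohibited" and site_policy.get("tracks_visitors"):
        return False
    return True

# An ad-supported site that doesn't track can take the deal:
assert site_agrees({"tracks_visitors": False, "serves_ads": True}, NO_STALKING)

# A tracking site cannot:
assert not site_agrees({"tracks_visitors": True}, NO_STALKING)
```

The point of the sketch is the reversal of roles: the person’s term is the fixed document, and the site’s machine does the agreeing.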

Constructive and friendly agreements such as #NoStalking will help businesses withdraw from their addiction to tracking, and make it easier for businesses to hear what people actually want.


Twelve years ago, I posted The Data Bubble. It began,

The tide turned today. Mark it: 31 July 2010.

That’s when The Wall Street Journal published The Web’s Gold Mine: Your Secrets, subtitled A Journal investigation finds that one of the fastest-growing businesses on the Internet is the business of spying on consumers. First in a series. It has ten links to other sections of today’s report. It’s pretty freaking amazing — and amazingly freaky when you dig down to the business assumptions behind it. Here is the rest of the list (sans one that goes to a link-proof Flash thing):

Here’s the gist:

The Journal conducted a comprehensive study that assesses and analyzes the broad array of cookies and other surveillance technology that companies are deploying on Internet users. It reveals that the tracking of consumers has grown both far more pervasive and far more intrusive than is realized by all but a handful of people in the vanguard of the industry.

It gets worse:

In between the Internet user and the advertiser, the Journal identified more than 100 middlemen—tracking companies, data brokers and advertising networks—competing to meet the growing demand for data on individual behavior and interests. The data on Ms. Hayes-Beaty’s film-watching habits, for instance, is being offered to advertisers on BlueKai Inc., one of the new data exchanges. “It is a sea change in the way the industry works,” says Omar Tawakol, CEO of BlueKai. “Advertisers want to buy access to people, not Web pages.” The Journal examined the 50 most popular U.S. websites, which account for about 40% of the Web pages viewed by Americans. (The Journal also tested its own site, WSJ.com.) It then analyzed the tracking files and programs these sites downloaded onto a test computer. As a group, the top 50 sites placed 3,180 tracking files in total on the Journal’s test computer. Nearly a third of these were innocuous, deployed to remember the password to a favorite site or tally most-popular articles. But over two-thirds—2,224—were installed by 131 companies, many of which are in the business of tracking Web users to create rich databases of consumer profiles that can be sold.
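The Journal’s tallies are easy to sanity-check. Using only the numbers reported in the passage above:

```python
# Totals as reported by the Journal's What They Know study, quoted above.
total_files = 3180     # tracking files placed by the top 50 sites
tracking_files = 2224  # files installed by the 131 tracking companies

innocuous = total_files - tracking_files
assert innocuous == 956

# "Nearly a third of these were innocuous" -- about 30%:
assert 0.29 < innocuous / total_files < 0.34

# "over two-thirds ... were installed by 131 companies":
assert tracking_files / total_files > 2 / 3
```

An average of more than 60 files per site, with roughly seven of every ten placed by third parties, which is the scale the rest of this post is reacting to.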

Here’s what’s delusional about all this: There is no demand for tracking by individual customers. All the demand comes from advertisers — or from companies selling to advertisers. For now.

Here is the difference between an advertiser and an ordinary company just trying to sell stuff to customers: nothing. If a better way to sell stuff comes along — especially if customers like it better than this crap the Journal is reporting on — advertising is in trouble.

In fact, I had been calling the tracking-based advertising business (now branded adtech or ad-tech) a bubble for some time. For example, in Why online advertising sucks, and is a bubble (31 October 2008) and After the advertising bubble bursts (23 March 2009). But I didn’t expect my own small voice to have much effect. This was different, though. What They Know was written by a crack team of writers, researchers, and data visualizers, led by Julia Angwin, and was truly Pulitzer-grade stuff. It was so well done, so deep, and so sharp, that I posted a follow-up report three months later, called The Data Bubble II. In that one, I wrote,

That same series is now nine stories long, not counting the introduction and a long list of related pieces. Here’s the current list:

  1. The Web’s Gold Mine: What They Know About You
  2. Microsoft Quashed Bid to Boost Web Privacy
  3. On the Web’s Cutting Edge: Anonymity in Name Only
  4. Stalking by Cell Phone
  5. Google Agonizes Over Privacy
  6. Kids Face Intensive Tracking on Web
  7. ‘Scrapers’ Dig Deep for Data on the Web
  8. Facebook in Privacy Breach
  9. A Web Pioneer Profiles Users By Name

Related pieces—

Two things I especially like about all this. First, Julia Angwin and her team are doing a terrific job of old-fashioned investigative journalism here. Kudos for that. Second, the whole series stands on the side of readers. The second person voice (you, your) is directed to individual persons—the same persons who do not sit at the tables of decision-makers in this crazy new hyper-personalized advertising business.

To measure the delta of change in that business, start with John Battelle’s Conversational Marketing series (post 1, post 2, post 3) from early 2007, and then his post Identity and the Independent Web, from last week. In the former he writes about how the need for companies to converse directly with customers and prospects is both inevitable and transformative. He even kindly links to The Cluetrain Manifesto (behind the phrase “brands are conversations”).

It was obvious to me that this fine work would blow the adtech bubble to a fine mist. It was just a matter of when.

Over the years since, I’ve retained hope, if not faith. Examples: The Data Bubble Redux (9 April 2016), and Is the advertising bubble finally starting to pop? (9 May 2016, and in Medium).

Alas, the answer to that last one was no. By 2016, Julia and her team had long since disbanded, and the original links to the What They Know series began to fail. I don’t have exact dates for which failed when, but I do know that the trusty master link, wsj.com/wtk, began to 404 at some point. Fortunately, Julia has kept much of it alive at https://juliaangwin.com/category/portfolio/wall-street-journal/what-they-know/. Still, by the late Teens it was clear that even the best journalism wasn’t going to be enough—especially since the major publications had become adtech junkies. Worse, covering their own publications’ involvement in surveillance capitalism had become an untouchable topic for journalists. (One notable exception is Farhad Manjoo of The New York Times, whose coverage of the paper’s own tracking was followed by a cutback in the practice.)

While I believe that most new laws for tech mostly protect yesterday from last Thursday, I share with many a hope for regulatory relief. I was especially jazzed about Europe’s GDPR, as you can read in GDPR will pop the adtech bubble (12 May 2018) and Our time has come (16 May 2018 in ProjectVRM).

But I was wrong then too. Because adtech isn’t a bubble. It’s a death star in service of an evil empire that destroys privacy through every function it funds in the digital world.

That’s why I expect the American Data Privacy and Protection Act (H.R. 8152), even if it passes through both houses of Congress at full strength, to do jack shit. Or worse, to make our experience of life in the digital world even more complicated, by requiring us to opt-out, rather than opt-in (yep, it’s in the law—as a right, no less), to tracking-based advertising everywhere. And we know how well that’s been going. (Read this whole post by Tom Fishburne, the Marketoonist, for a picture of how less than zero progress has been made, and how venal and absurd “consent” gauntlets on websites have become.) Do a search for https://www.google.com/search?q=gdpr+compliance to see how large the GDPR “compliance” business has become. Nearly all your 200+ million results will be for services selling obedience to the letter of the GDPR while death-star laser beams blow its spirit into spinning shards. Then expect that business to grow once the ADPPA is in place.

There is only one thing that will save us from adtech’s death star.

That’s tech of our own. Our tech. Personal tech.

We did it in the physical world with the personal privacy tech we call clothing, shelter, locks, doors, shades, and shutters. We’ve barely started to make the equivalents for the digital world. But the digital world is only a few decades old. It will be around for dozens, hundreds, or thousands of decades to come. And adtech is still just a teenager. We can, must, and will do better.

All we need is the tech. Big Tech won’t do it for us. Nor will Big Gov.

The economics will actually help, because there are many business problems in the digital world that can only be solved from the customers’ side, with better signaling from demand to supply than adtech-based guesswork can ever provide. Customer Commons lists fourteen of those solutions, here. Privacy is just one of them.

Use the Force, folks.

That Force is us.

Passwords are hell.

Worse, to make your hundreds of passwords as safe as possible, they should be nearly impossible for others to discover—and for you to remember.

Unless you’re a wizard, this all but requires using a password manager.†
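As a rough illustration of why such passwords defeat human memory, here is how a password manager might mint one. This is a minimal sketch using Python’s standard secrets module; the length and character set are arbitrary choices for illustration, not any particular product’s policy:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a high-entropy random password—the kind nobody can remember."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call produces a fresh, unguessable string.
print(generate_password())
```

Twenty characters drawn from roughly 94 symbols gives far more possibilities than any attacker can search—and far more than any of us can memorize across hundreds of accounts, which is the whole point.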

Think about how hard that job is. First, it’s impossible for developers of password managers to do everything right:

  • Most of their customers and users need to have logins and passwords for hundreds of sites and services on the Web and elsewhere in the networked world
  • Every one of those sites and services has its own gauntlet of methods for registering logins and passwords, and for remembering and changing them
  • Every one of those sites and services has its own unique user interfaces, each with its own peculiarities
  • All of those UIs change, sometimes often.

Keeping up with that mess, while also keeping personal data safe from both user error and determined bad actors, is about as tall as an order can get. And then you have to do all that work for each of the millions of customers you’ll need if you’re going to make the kind of money required to keep abreast of those problems and provide the solutions required.

So here’s the thing: the best we can do with passwords is the best that password managers can do. That’s your horizon right there.

Unless we can get past logins and passwords somehow.

And I don’t think we can. Not in the client-server ecosystem that the Web has become, and that industry never stopped being, since long before the Internet came along. That’s the real hell. Passwords are just a symptom.

We need to work around it. That’s my work now. Stay tuned here, here, and here for more on that.


† We need to fix that Wikipedia page.

The Web is a haystack.

This isn’t what Tim Berners-Lee had in mind when he invented the Web. Nor is it what Jerry Yang and David Filo had in mind when they invented Jerry and David’s Guide to the World Wide Web, which later became Yahoo. Jerry and David’s model for the Web was a library, and Yahoo was to be the first catalog for it. This made sense, given the prevailing conceptual frames for the Web at the time: real estate and publishing.

Both of those are still with us today. We frame the Web as real estate when we speak of “sites” with “locations” in “domains” with “addresses” you can “visit” and “browse”—then shift to publishing when we speak of “files” and “pages,” that we “author,” “edit,” “post,” “publish,” “syndicate” and store in “folders” within a “directory.” Both frames suggest durability, if not permanence. Again, kind of like a library.

But once we added personal movement (“surf,” “browse”) and a vehicle for it (the browser), the Web became a World Wide Free-for-all. Literally. Anyone could publish, change and remove whatever they pleased, whenever they pleased. The same went for organizations of every kind, all over the world. And everyone with a browser could find their way to and through all of those spaces and places, and enjoy whatever “content” publishers chose to put there. Thus the Web grew into billions of sites, pages, images, databases, videos, and other stuff, with most of it changing constantly.

The result was a heaving heap of fuck-all.*

How big is it? According to WorldWebSize.com, Google currently indexes about 41 billion pages, and Bing about 9 billion. They also peaked together at about 68 billion pages in late 2019. The Web is surely larger than that, but that’s the practical limit because search engines are the practical way to find pieces of straw in that thing. Will the haystack be less of one when approached by other search engines, such as the new ad-less (subscription-funded) Neeva? Nope. Search engines do not give the Web a card catalog. They certify its nature as a haystack.

So that’s one practical limit. There are others, but they’re hard to see when the level of optionality on the Web is almost indescribably vast. But we can see a few limits by asking some questions:

  1. Why do you always have to accept websites’ terms? And why do you have no record of your own of what you accepted, or when, or anything?
  2. Why do you have no way to proffer your own terms, to which websites can agree?
  3. Why did Do Not Track, which was never more than a polite request not to be tracked off a website, get no respect from 99.x% of the world’s websites? And how the hell did Do Not Track turn into the Tracking Preference Expression at the W3C, where the standard never did get fully baked?
  4. Why, after Do Not Track failed, did hundreds of millions—or perhaps billions—of people start blocking ads, tracking or both, on the Web, amounting to the biggest boycott in world history? And then why did the advertising world, including nearly all advertisers, their agents, and their dependents in publishing, treat this as a problem rather than a clear and gigantic message from the marketplace?
  5. Why are the choices presented to you by websites called your choices, when all those choices are provided by them? And why don’t you give them choices?
  6. Why would Apple’s way of making you private on your phone be to “Ask App Not to Track,” rather than “Tell App Not to Track,” or “Prevent App From Tracking You”?
  7. Why does the GDPR call people “data subjects” rather than people, or human beings, and then assign the roles “data controller” and “data processor” only to other parties? (Yes, it does say a “data controller” can be a “natural person,” but more as a technicality than as a call for the development of agency on behalf of that person.)
  8. Why are nearly all of the billion results in a search for GDPR+compliance about how companies can obey the letter of that law while violating its spirit by continuing to track people through the giant loophole you see in every cookie notice?
  9. Why does the CCPA give you the right to ask to have back personal data others have gathered about you on the Web, rather than forbid its collection in the first place? (Imagine a law that assumes that all farmers’ horses are gone from their barns, but gives those farmers a right to demand horses back from those who took them. It’s kinda like that.)
  10. Why, 22 years after The Cluetrain Manifesto said, we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it. —is that statement (one I helped write!) still not true?
  11. Why, 9 years after Harvard Business Review Press published The Intention Economy: When Customers Take Charge, has that not happened? (Really, what are you in charge of in the marketplace that isn’t inside companies’ silos and platforms?)
  12. And, to sum up all the above, why does “free market” on the Web mean your choice of captor?

It’s easy to blame the cookie, which Lou Montulli invented in 1994 as a way for sites to remember their visitors by planting reminder files—cookies—in visitors’ browsers. Cookies also gave visitors a way to remember where they were when they last visited. For sites that require logins, cookies take care of that as well.
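The mechanism is simple enough to sketch. Here is a minimal illustration of the cookie round-trip using Python’s standard http.cookies module; the cookie name and value are hypothetical, stand-ins for whatever identifier a real site would plant:

```python
from http.cookies import SimpleCookie

# Server side: mint a cookie to plant in the visitor's browser.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"      # hypothetical identifier
cookie["session_id"]["path"] = "/"
print(cookie.output())               # the Set-Cookie header the server sends

# Browser side: on every later visit, the browser sends the cookie back,
# which is how the server "remembers" the visitor.
returned = SimpleCookie("session_id=abc123")
print(returned["session_id"].value)
```

Note which side holds the memory: the identifier lives in your browser, but what it stands for lives on the server. That asymmetry is the point of the paragraphs that follow.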

What matters, however, is not the cookie. What matters is why the cookie was necessary in the first place: the Web’s architecture. It’s called client-server, and is represented graphically like this:

client-server model

This architecture was born in the era of centralized mainframes, which “users” accessed through client devices called “dumb terminals”:

On the Web, as it was in the old mainframe world, we clients—mere users—are as subordinate to servers as are calves to cows:

(In fact I’ve been told that client-server was originally a euphemism for “slave-master.” Whether true or not, it makes sense.)

In the client-server paradigm, our agency—our ability to act with effect in the world—is restricted to what servers allow or provide for us. Our choices are what they provide. We are independent only to the degree that we can also be clients to other servers. In this paradigm, a free market is “your choice of captor.”

Want privacy? You have to ask for it. And, if you go to the trouble of doing that—which you have to do separately with every site and service you encounter (each a mainframe of its own)—your client doesn’t keep a record of what you “agreed” to. The server does. Good luck finding whatever it is the server or its third parties remember about that agreement.

Want to control how your data (or data about you) gets processed by the servers of the world? Good luck with that too. Again, Europe’s GDPR says “natural persons” are just “data subjects,” while “data controllers” and “data processors” are roles reserved for servers.

Want a shopping cart of your own to take from site to site? My wife asked for that in 1995. It’s still barely thinkable in 2021. Want a dashboard for your life where you can gather all your expenses, investments, property records, health information, calendars, contacts, and other personal information? She asked for that too, and we still don’t have it, except to the degree that large server operators (e.g. Google, Apple, Microsoft) give us pieces of it, hosted in their clouds, and rigged to keep you captive to their systems.

That’s why we don’t yet have an Internet of Things (IoT), but rather an Apple of Things, a Google of Things, and an Amazon of Things.

Is it possible to do stuff on the Web that isn’t client-server? Perhaps some techies among us can provide examples, but practically speaking, here’s what matters: If it’s not thinkable by the owners of the servers we depend on, it doesn’t get made.

From our position at the bottom of the Web’s haystack, it’s hard to imagine there might be a world where it’s possible for us to have full agency: to not be just users of clients enslaved to as many servers as we deal with every day.

But that world exists. It’s called the Internet, and it can support a helluva lot more than the Web—with many ways to interact other than those possible through client-server alone.

Digital technology as we know it has only been around for a few decades, and the Internet for maybe half that time. Mobile computers that run apps and presume connectivity everywhere have only been with us for a decade or less. And all of those will be with us for many decades, centuries, or millennia to come. We are not going to stop living digital lives, any more than we are going to stop speaking, writing, or using mathematics. Digital technology and the Internet are granted wishes that won’t go back into the genie’s bottle.

Credit where due: the Web is excellent, but not boundlessly so. It has limits. Thanks to the client-server model, full personal agency is not a grace of life on the Web. Not until we have servers or agents of our own. (Yes, we could have our own servers back in Web1 days—my own Web and email servers lived under my desk and had their own static IP addresses from roughly 1995 until 2003—and a few alpha geeks still do. But since then we’ve mostly needed to live as digital serfs, by the graces of corporate overlords.)

So now it’s time to think and build outside the haystack.

Models for that do exist, and some have been around for a long time. Email is one example. While you can look at your email on the Web, or use a Web-based email service (such as Gmail), email itself is independent of those. My own searls.com email has been at servers in my home, on racks elsewhere, and in a hired cloud. I can move it anywhere I want. You can move yours as well, because the services we hire to host our personal email are substitutable. That’s just one way we can enjoy full agency on the Internet.
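That substitutability is visible in the code: a message is composed the same way no matter who hosts it. A minimal sketch using Python’s standard email library—all addresses and hostnames here are hypothetical placeholders:

```python
import smtplib  # needed only if you actually send
from email.message import EmailMessage

# The message is the same regardless of who hosts your mail.
msg = EmailMessage()
msg["From"] = "doc@example.com"        # hypothetical address
msg["To"] = "friend@example.org"
msg["Subject"] = "Hello from my own infrastructure"
msg.set_content("The message doesn't change when the host does.")

# The host is the substitutable part: point this at a box under your
# desk, a rack somewhere, or a hired cloud, and nothing else changes.
# with smtplib.SMTP("mail.example.com") as s:   # hypothetical host
#     s.send_message(msg)
print(msg["Subject"])
```

The sending host appears in exactly one place, and swapping it touches nothing else. That is what full agency looks like in protocol form.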

Some work toward the next Web, or beyond it, is happening at places such as DWeb Camp and Unfinished. My own work is happening right now in three overlapping places:

  1. ProjectVRM, which I started as a fellow of the Berkman Klein Center at Harvard in 2006, and which is graciously still hosted (with this blog) by the Center there. Our mailing list currently has more than 550 members. We also meet twice a year with the Internet Identity Workshop, which I co-founded with Kaliya Young and Phil Windley in 2005, and still co-organize. Immodestly speaking, IIW is the most leveraged conference I know.
  2. Customer Commons, where we are currently working on building out what’s called the Byway. Go there and follow along as we work toward better answers to the questions above than you’ll get from inside the haystack. Customer Commons is a 501(c)3 nonprofit spun out of ProjectVRM.
  3. The Ostrom Workshop at Indiana University, where Joyce (my wife and fellow founder and board member of Customer Commons) and I are both visiting scholars. It is in that capacity that we are working on the Byway and leading a salon series titled Beyond the Web. Go to that link and sign up to attend. I look forward to seeing and talking with you there.

[Later…] More on the Web as a haystack is in FILE NOT FOUND: A generation that grew up with Google is forcing professors to rethink their lesson plans, by Monica Chin (@mcsquared96) in The Verge, and Students don’t know what files and folders are, professors say, by Jody MacGregor in PC Gamer, which sources Monica’s report.


*I originally had “heaving haystack of fuck-all” here, but some remember it as the more alliterative “heaving heap of fuck-all.” So I decided to swap them. If comments actually worked here†, I’d ask for a vote. But feel free to write me instead, at my first name at my last name dot com.

†Now they do. Thanks for your patience, everybody.

 

KSKO radio

On Quora, somebody asked, Which is your choice, radio, television, or the Internet? I replied with the following.

If you say to your smart speaker “Play KSKO,” it will play that small-town Alaska station, which has the wattage of a light bulb, anywhere in the world. In this sense the Internet has eaten the station. But many people in rural Alaska served by KSKO and its tiny repeaters don’t have Internet access, so the station is either their only choice, or one of a few. So we use the gear we have to get the content we can.

TV viewing is also drifting from cable to à la carte subscription services (Netflix, et al.) delivered over the Internet, in much the same way that it drifted earlier from over-the-air to cable. And yet over-the-air is still with us. It’s also significant that most of us get our Internet over connections originally meant only for cable TV, or over cellular connections originally meant only for telephony.

Marshall and Eric McLuhan, in Laws of Media, say every new medium or technology does four things: enhance, retrieve, obsolesce, and reverse. (These are also called the Tetrad of Media Effects.) And there are many answers in each category. For example, the Internet—

  • enhances content delivery;
  • retrieves radio, TV and telephone technologies;
  • obsolesces over-the-air listening and viewing;
  • reverses into tribalism;

—among many other effects within each of those.

The McLuhans also note that few things get completely obsolesced. For example, there are still steam engines in the world. Some people still make stone tools.

It should also help to note that the Internet is not a technology. At its base it’s a protocol—TCP/IP—that can be used by a boundless variety of technologies. A protocol is a set of manners among things that compute and communicate. What made the Internet ubiquitous and all-consuming was the adoption of TCP/IP by things that compute and communicate everywhere in the world.
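That “set of manners” can be demonstrated in a few lines. Here is a minimal sketch of two programs conversing over TCP/IP on the loopback interface, using Python’s standard socket module; the message and addresses are arbitrary choices for illustration:

```python
import socket
import threading

def echo_server(sock: socket.socket) -> None:
    """Accept one connection and echo back whatever arrives."""
    conn, _ = sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Any two programs that speak TCP/IP can converse—no central service needed.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # loopback address, OS-assigned port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, internet")
echoed = client.recv(1024)
print(echoed.decode())
client.close()
server.close()
```

The same few calls work whether the other endpoint is across the room or across an ocean, which is exactly why universal adoption of the protocol mattered more than any particular technology built on it.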

This development—the worldwide adoption of TCP/IP—is beyond profound. It’s a change as radical as we might have if all the world suddenly spoke one common language. Even more radically, it creates a second digital world that coexists with our physical one.

In this digital world, the functional distance between us is zero. We also have no gravity. We are simply present with each other. This means the only preposition that accurately applies to our experience of the Internet is with. Because we are not really on or through or over anything. Those prepositions refer to the physical world. The digital world is some(non)thing else.

This is why referring to the Internet as a medium isn’t quite right. It is a one-of-one, an example only of itself. Like the Universe. That you can broadcast through the Internet is just one of the countless activities it supports. (Even though it is not an it in the material sense.)

I think we are only at the beginning of coming to grips with what it all means, besides a lot.

Historic milestones don’t always line up with large round numbers on our calendars. For example, I suggest that the 1950s ended with the assassination of JFK in late 1963, and the rise of British Rock, led by the Beatles, in 1964. I also suggest that the 1960s didn’t end until Nixon resigned, and disco took off, in 1974.

It has likewise been suggested that the 20th century actually began with the assassination of Archduke Ferdinand and the start of WWI, in 1914. While that and my other claims might be arguable, you might at least agree that there’s no need for historic shifts to align with two or more zeros on a calendar—and that in most cases they don’t.

So I’m here to suggest that the 21st century began in 2020 with the Covid-19 pandemic and the fall of Donald Trump. (And I mean that literally. Social media platforms were the man’s stage, and the whole of them dropped him, as if through a trap door, on the occasion of the storming of the U.S. Capitol by his supporters on January 6, 2021. Whether you liked that or not is beside the facticity of it.)

Things are not the same now. For example, over the coming years, we may never hug, shake hands, or comfortably sit next to strangers again.

But I’m bringing this up for another reason: I think the future we wrote about in The Cluetrain Manifesto, in World of Ends, in The Intention Economy, and in other optimistic expressions during the first two decades of the 21st Century may finally be ready to arrive.

At least that’s the feeling I get when I listen to an interview I did with Christian Einfeldt (@einfeldt) at a San Diego tech conference in April, 2004—and that I recently discovered in the Internet Archive. The interview was for a film to be called “Digital Tipping Point.” Here are its eleven parts, all just a few minutes long:

01 https://archive.org/details/e-dv038_doc_…
02 https://archive.org/details/e-dv039_doc_…
03 https://archive.org/details/e-dv038_doc_…
04 https://archive.org/details/e-dv038_doc_…
05 https://archive.org/details/e-dv038_doc_…
06 https://archive.org/details/e-dv038_doc_…
07 https://archive.org/details/e-dv038_doc_…
08 https://archive.org/details/e-dv038_doc_…
09 https://archive.org/details/e-dv038_doc_…
10 https://archive.org/details/e-dv039_doc_…
11 https://archive.org/details/e-dv039_doc_…

The title is a riff on Malcolm Gladwell‘s book The Tipping Point, which came out in 2000, same year as The Cluetrain Manifesto. The tipping point I sensed four years later was, I now believe, a foreshadow of now, and only suggested by the successes of the open source movement and independent personal publishing in the form of blogs, both of which I was high on at the time.

What followed in the decade after the interview were the rise of social networks, of smart mobile phones and of what we now call Big Tech. While I don’t expect those to end in 2021, I do expect that we will finally see  the rise of personal agency and of constructive social movements, which I felt swelling in 2004.

Of course, I could be wrong about that. But I am sure that we are now experiencing the millennial shift we expected when civilization’s odometer rolled past 2000.

“Give me a lever long enough and a fulcrum on which to place it, and I shall move the world,” Archimedes is said to have said.

For almost all of the last four years, Donald Trump was one hell of an Archimedes. With the U.S. presidency as his lever and Twitter as his fulcrum, the 45th President leveraged an endless stream of news-making utterances into a massive following and near-absolute domination of news coverage, worldwide. It was an amazing show, the like of which we may never see again.

Big as it was, that show ended on January 8, when Twitter terminated the @RealDonaldTrump account. Almost immediately after that, Trump was “de-platformed” from all these other services as well: PayPal, Reddit, Shopify, Snapchat, Discord, Amazon, Twitch, Facebook, TikTok, Google, Apple, Twitter, YouTube and Instagram. That’s a lot of fulcrums to lose.

What makes them fulcrums is their size. All are big, and all are centralized: run by one company. As members, users and customers of these centralized services, we are also at their mercy: no less vulnerable to termination than Trump.

So here is an interesting question: What if Trump had his own fulcrum from the start? For example, say he took one of the many Trump domains he probably owns (or should have bothered to own, long ago), and made it a blog where he said all the same things he tweeted, and that site had the same many dozens of millions of followers today? Would it still be alive?

I’m not sure it would. Because, even though the base protocols of the Internet and the Web are peer-to-peer and end-to-end, all of us are dependent on services above those protocols, and at the mercy of those services’ owners.

That to me is the biggest lesson the de-platforming of Donald Trump has for the rest of us. We can talk “de-centralization” and “distribution” and “democratization” along with peer-to-peer and end-to-end, but we are still at the mercy of giants.

Yes, there are work-arounds. The parler.com website, de-platformed along with Trump, is back up and, according to @VickerySec (Chris Vickery), “routing 100% of its user traffic through servers located within the Russian Federation.” Adds @AdamSculthorpe, “With a DDos-Guard IP, exactly as I predicted the day it went offline. DDoS Guard is the Russian equivalent of CloudFlare, and runs many shady sites. RiTM (Russia in the middle) is one way to think about it.” Encrypted services such as Signal and Telegram also provide ways for people to talk and be social. But those are also platforms, and we are at their mercy too.

I bring all this up as a way of thinking out loud toward the talk I’ll be giving in a few hours (also see here), on the topic “Centralized vs. Decentralized.” Here’s the intro:

Centralised thinking is easy. Control sits on one place, everything comes home, there is a hub, the corporate office is where all the decisions are made and it is a power game.

Decentralised thinking is complex. TCP/IP and HTTP created a fully decentralised fabric for packet communication. No-one is in control. It is beautiful. Web3 decentralised ideology goes much further but we continually run into conflicts. We need to measure, we need to report, we need to justify, we need to find a model and due to regulation and law, there are liabilities.

However, we have to be doing both. We have to centralise some aspects and at the same time decentralise others. Whilst we hang onto an advertising model that provides services for free we have to have a centralised business model. Apple with its new OS is trying to break the tracking model and in doing so could free us from the barter of free, is that the plan which has nothing to do with privacy or are the ultimate control freaks. But the new distributed model means more risks fall on the creators as the aggregators control the channels and access to a model. Is our love for free preventing us from seeing the value in truly distributed or are those who need control creating artefacts that keep us from achieving our dreams? Is distributed even possible with liability laws and a need to justify what we did to add value today?

So here is what I think I’ll say.

First, we need to respect the decentralized nature of humanity. All of us are different, by design. We look, sound, think and feel different, as separate human beings. As I say in How we save the world, “no being is more smart, resourceful or original than a human one. Again, by design. Even identical twins, with identical DNA from a single sperm+egg, can be as different as two primary colors. (Examples: Laverne Cox and M. Lamar; Nicole and Jonas Maines.)”

This simple fact of our distributed souls and talents has had scant respect from the centralized systems of the digital world, which would rather lead than follow us, and rather guess about us than understand us. That’s partly because too many of them have become dependent on surveillance-based personalized advertising (which is awful in ways I’ve detailed in 136 posts, essays and articles compiled here). But it’s mostly because they’re centralized and can’t think or work outside their very old and square boxes.

Second, advertising, subscriptions and donations through the likes of (again, centralized) Patreon aren’t the only possible ways to support a site or a service. Those are industrial age conventions leveraged in the early decades of the digital age. There are other approaches we can implement as well, now that the pendulum has started to swing back from the centralized extreme. For example, the fully decentralized EmanciPay. A bunch of us came up with that one at ProjectVRM way back in 2009. What makes it decentralized is that the choice of what to pay, and how, is up to the customer. (No, it doesn’t have to be scary.) Which brings me to—

Third, we need to start thinking about solving business problems, market problems, technical problems, from our side. Here is how Customer Commons puts it:

There is … no shortage of business problems that can only be solved from the customer’s side. Here are a few examples:

  1. Identity. Logins and passwords are burdensome leftovers from the last millennium. There should be (and already are) better ways to identify ourselves, and to reveal to others only what we need them to know. Working on this challenge is the SSI—Self-Sovereign Identity—movement. The solution here for individuals is tools of their own that scale.
  2. Subscriptions. Nearly all subscriptions are pains in the butt. “Deals” can be deceiving, full of conditions and changes that come without warning. New customers often get better deals than loyal customers. And there are no standard ways for customers to keep track of when subscriptions run out, need renewal, or change. The only way this can be normalized is from the customers’ side.
  3. Terms and conditions. In the world today, nearly all of these are ones companies proffer; and we have little or no choice about agreeing to them. Worse, in nearly all cases, the record of agreement is on the company’s side. Oh, and since the GDPR came along in Europe and the CCPA in California, entering a website has turned into an ordeal typically requiring “consent” to privacy violations the laws were meant to stop. Or worse, agreeing that a site or a service provider spying on us is a “legitimate interest.”
  4. Payments. For demand and supply to be truly balanced, and for customers to operate at full agency in an open marketplace (which the Internet was designed to be), customers should have their own pricing gun: a way to signal—and actually pay willing sellers—as much as they like, however they like, for whatever they like, on their own terms. There is already a design for that, called Emancipay.
  5. Internet of Things. What we have so far are the Apple of things, the Amazon of things, the Google of things, the Samsung of things, the Sonos of things, and so on—all silo’d in separate systems we don’t control. Things we own on the Internet should be our things. We should be able to control them, as independent customers, as we do with our computers and mobile devices. (Also, by the way, things don’t need to be intelligent or connected to belong to the Internet of Things. They can be, or have, picos.)
  6. Loyalty. All loyalty programs are gimmicks, and coercive. True loyalty is worth far more to companies than the coerced kind, and only customers are in position to truly and fully express it. We should have our own loyalty programs, to which companies are members, rather than the reverse.
  7. Privacy. We’ve had privacy tech in the physical world since the inventions of clothing, shelter, locks, doors, shades, shutters, and other ways to limit what others can see or hear—and to signal to others what’s okay and what’s not. Online, all we have instead are unenforced promises by others not to watch our naked selves, or not to report what they see to others. Or worse, coerced urgings to “accept” spying on us and distributing harvested information about us to parties unknown, with no record of what we’ve agreed to.
  8. Customer service. There are no standard ways to call for service yet, or to get it. And there should be.
  9. Advertising. Our main problem with advertising today is tracking, which is failing because it doesn’t work. (Some history: ad blocking has been around since 2004. It took off in 2013, when the advertising and publishing industries gave the middle finger to Do Not Track, which was never more than a polite request in one’s browser not to be tracked off a site. By 2015, ad blocking alone was the biggest boycott in world history. And in 2018 and 2019 we got the GDPR and the CCPA, two laws meant to thwart tracking and unwanted data collection, and which likely wouldn’t have happened if we hadn’t been given that finger.) We can solve that problem from the customer side with intentcasting: advertising to the marketplace what we want, without risk that our personal data will be misused. (Here is a list of intentcasting providers on the ProjectVRM Development Work list.)

We already have examples of personal solutions working at scale: the Internet, the Web, email and telephony. Each provides single, simple and standards-based ways any of us can scale how we deal with others—across countless companies, organizations and services. And they work for those companies as well.

Other solutions, however, are missing—such as ones that solve the nine problems listed above.

They’re missing for the best of all possible reasons: it’s still early. Digital living is still new—decades old at most. And it’s sure to persist for many decades, centuries or millennia to come.

They’re also missing because businesses typically think all solutions to business problems are ones for businesses to provide. Thinking about customers solving business problems is outside that box.

But much work is already happening outside that box. And there already exist standards and code for building many customer-side solutions to problems shared with businesses. Yes, these are not yet as many or as good as we need; but there are enough to get started.

A lot of levers there.

For those of you attending this event, I’ll talk with you shortly. For the rest of you, I’ll let you know how it goes.

Let’s say the world is going to hell. Don’t argue, because my case isn’t about that. It’s about who saves it.

I suggest everybody. Or, more practically speaking, a maximized assortment of the smartest and most helpful anybodies.

Not governments. Not academies. Not investors. Not charities. Not big companies and their platforms. Any of those can be involved, of course, but we don’t have to start there. We can start with people. Because all of them are different. All of them can learn. And teach. And share. Especially since we now have the Internet.

To put this in perspective, start with Joy’s Law: “No matter who you are, most of the smartest people work for someone else.” Then take Todd Park‘s corollary: “Even if you get the best and the brightest to work for you, there will always be an infinite number of other, smarter people employed by others.” Then take off the corporate-context blinders, and note that smart people are actually far more plentiful among the world’s customers, readers, viewers, listeners, parishioners, freelancers and bystanders.

Hundreds of millions of those people also carry around devices that can record and share photos, movies, writings and a boundless assortment of other stuff. Ways of helping now verge on the boundless.

We already have millions (or billions) of them reporting on everything by taking photos and recording videos with their mobiles, obsolescing journalism as we’ve known it since the word came into use (specifically, around 1830). What matters with the journalism example, however, isn’t what got disrupted. It’s how resourceful and helpful (and not just opportunistic) people can be when they have the tools.

Because no being is more smart, resourceful or original than a human one. Again, by design. Even identical twins, with identical DNA from a single sperm+egg, can be as different as two primary colors. (Examples: Laverne Cox and M. Lamar. Nicole and Jonas Maines.)

Yes, there are some wheat/chaff distinctions to make here. To thresh those, I dig Carlo Cipolla‘s Basic Laws on Human Stupidity (.pdf here) which stars this graphic:

The upper right quadrant has how many people in it? Billions, for sure.

I’m counting on them. If we didn’t have the Internet, I wouldn’t.

In Internet 3.0 and the Beginning of (Tech) History, @BenThompson of @Stratechery writes this:

The Return of Technology

Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols. This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively, and forked individually, gaining both the benefits of scale and inevitability of sovereignty and self-determination.

—followed by this graphic:

If you want to know what he means by “Politics,” read the piece. I take it as something of a backlash by regulators against big tech, especially in Europe. (With global scope. All those cookie notices you see are effects of European regulations.) But the bigger point is where that arrow goes. We need infrastructure there, and it won’t be provided by regulation alone. Tech needs to take the lead. (See what I wrote here three years ago.) But our tech, not big tech.

The wind is at our backs now. Let’s sail with it.

Bonus links: Cluetrain, New Clues, World of Ends, Customer Commons.

And a big HT to my old buddy Julius R. Ruff, Ph.D., for turning me on to Cipolla.

[Later…] Seth Godin calls all of us “indies.” I like that. HT to @DaveWiner for flagging it.

« Older entries