Bet on obsolescence

In New Digital Realities; New Oversight Solutions, Tom Wheeler, Phil Verveer and Gene Kimmelman suggest that “the problems in dealing with digital platform companies” strip the gears of antitrust and other industrial era regulatory machines, and that what we need instead is “a new approach to regulation that replaces industrial era regulation with a new more agile regulatory model better suited for the dynamism of the digital era.” For that they suggest “a new Digital Platform Agency should be created with a new, agile approach to oversight built on risk management rather than micromanagement.” They provide lots of good reasons for this, which you can read in depth here.

I’m on a list where this is being argued. One of those participating is Richard Shockey, who often cites his eponymous law, which says, “The answer is money. What is the question?” I bring that up as background for my own post on the list, which I’ll share here:

The Digital Platform Agency proposal seems to obey a law like Shockey’s that instead says, “The answer is policy. What is the question?”

I think it will help, before we apply that law, to look at modern platforms as something newer than new. Nascent. Larval. Embryonic. Primitive. Epiphenomenal.

It’s not hard to think of them that way if we take a long view on digital life.

Start with this question: is digital tech ever going away?

Whether yes or no, how long will digital tech be with us, mothering boundless inventions and necessities? Centuries? Millennia?

And how long have we had it so far? A few decades? Hell, Facebook and Twitter have only been with us since the mid-’00s.

So why start to regulate what can be done with those companies from now on, right now?

I mean, what if platforms are just castles—headquarters of modern duchies and principalities?

Remember when we thought IBM, AT&T and the PTTs in Europe would own and run the world forever?

Remember when the BUNCH was around, and we called IBM “the environment”? Remember EBCDIC?

Remember when Microsoft ruled the world, and we thought they had to be broken up?

Remember when Kodak owned photography, and thought their enemy was Fuji?

Remember when recorded music had to be played by rolls of paper, lengths of tape, or on spinning discs and disks?

Remember when “social media” was a thing, and all the world’s gossip happened on Facebook and Twitter?

Then consider the possibility that all the dominant platforms of today are mortally vulnerable to obsolescence, to collapse under their own weight, or both.

Nay, the certainty.

Every now is a future then, every “is” a “was.” And trees don’t grow to the sky.

It’s an easy bet that every platform today is as sure to be succeeded as were stone tablets by paper, scribes by movable type, letterpress by offset, and all of it by xerography, ink jet, laser printing and whatever comes next.

Sure, we do need regulation. But we also need faith in the mortality of every technology that dominates the world at any moment in history, and in the march of progress and obsolescence.

Another thought: if the only answer is policy, the problem is the question.

This suggests yet another law (really an aphorism, but whatever): “The answer is obsolescence. What is the question?”

As it happens, I wrote about Facebook’s odds for obsolescence two years ago here. An excerpt:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook comprises many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And it can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting, amplify tribal prejudices (including genocidal ones) and produce many $billions for Facebook in an advertising business that depends on all of that, while also trying to correct, even as those machines do what they were designed to do, the massively complex and settled infrastructural systems that make all of it work.

I’m not saying regulators should do nothing. I am saying that gravity still works, the mighty still fall, and these are facts of nature it will help regulators to take into account.


