privacy


Facial recognition by machines is out of control. Meaning our control. As individuals, and as a society.

Thanks to ubiquitous surveillance systems, including the ones in our own phones, we can no longer assume we are anonymous in public places or private in private ones.

This became especially clear a few weeks ago when Kashmir Hill (@kashhill) reported in the New York Times that a company called Clearview.ai “invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.”

If your face has ever appeared anywhere online, it’s a sure bet that you are not faceless to any of these systems. Clearview, Kashmir says, has “a database of more than three billion images” from “Facebook, YouTube, Venmo and millions of other websites” and “goes far beyond anything ever constructed by the United States government or Silicon Valley giants.”

Among law enforcement communities, only New Jersey’s has started to back off on using Clearview.

Worse, Clearview is just one company. Laws also take years to catch up with developments in facial recognition, or to get ahead of them, if they ever can. And let’s face it: government interests are highly conflicted here. Law enforcement and intelligence agencies’ need to know all they can is at extreme odds with our need, as human beings, to assume we enjoy at least some freedom from being known by God-knows-what, everywhere we go.

Personal privacy is the heart of civilized life, and beats strongest in democratic societies. It’s not up for “debate” between companies and governments, or political factions. Loss of privacy is a problem that affects each of us, and calls for action by each of us as well.

A generation ago, when the Internet was still new to us, four guys (one of whom was me) nailed a document called The Cluetrain Manifesto to a door on the Web. It said,

we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.

Since then their grasp has exceeded our reach. And with facial recognition they have gone too far.

Enough.

Now it’s time for our reach to exceed their grasp.

Now it’s time, finally, to make them deal with it.

I see five ways, so far. I’m sure y’all will think of other and better ones. The Internet is good for that.

First is to use an image like the one above (preferably with a better design) as your avatar, favicon, or other facial expression. (Like I just did for @dsearls on Twitter.) Here’s a favicon we can all use until a better one comes along:

Second, sign the Stop facial recognition by surveillance systems petition I just put up at that link. Two hashtags:

  • #GOOMF, for Get Out Of My Face
  • #Faceless

Third is to stop blaming and complaining. That’s too easy, tends to go nowhere and wastes energy. Instead,

Fourth, develop useful and constructive ideas toward what we can do—each of us, alone and together—to secure, protect and signal our privacy needs and intentions in the world, in ways others can recognize and respect. We have those in the natural world. We don’t yet in the digital one. So let’s invent them.

Fifth is to develop the policies we need to stop the spread of privacy-violating technologies and practices, and to foster development of technologies that enlarge our agency in the digital world—and not just to address the wrongs being committed against us. (Which is all most privacy laws actually do.)

 

 


Journalism’s biggest problem (as I’ve said before) is what it’s best at: telling stories. That’s what Thomas B. Edsall (of Columbia and The New York Times) does in Trump’s Digital Advantage Is Freaking Out Democratic Strategists, published in today’s New York Times. He tells a story. Or, in the favored parlance of our time, a narrative, about what he sees as Republicans’ superior use of modern methods for persuading voters:

Experts in the explosively growing field of political digital technologies have developed an innovative terminology to describe what they do — a lexicon that is virtually incomprehensible to ordinary voters. This language provides an inkling of the extraordinarily arcane universe politics has entered:

  • geofencing
  • mass personalization
  • dark patterns
  • identity resolution technologies
  • dynamic prospecting
  • geotargeting strategies
  • location analytics
  • geo-behavioural segment
  • political data cloud
  • automatic content recognition
  • dynamic creative optimization

Geofencing and other emerging digital technologies derive from microtargeting marketing initiatives that use consumer and other demographic data to identify the interests of specific voters or very small groups of like-minded individuals to influence their thoughts or actions.

In fact the “arcane universe” he’s talking about is the direct marketing playbook, which was born offline as the junk mail business. In that business, tracking individuals and bothering them personally is a fine and fully rationalized practice. And let’s face it: political campaigning has always wanted to get personal. It’s why we have mass mailings, mass callings, mass textings and the rest of it—all to personal addresses, numbers and faces.

Coincidence: I just got this:

There is nothing new here other than (at the moment) the Trump team doing it better than any Democrat. (Except maybe Bernie.) Obama’s team was better at it in ’08 and ’12. Trump’s was better at it in ’16 and is better again in ’20.*

However, debating which candidates do the best marketing misdirects our attention away from the destruction of personal privacy by constant tracking of our asses online—including tracking of asses by politicians. This, I submit, is a bigger and badder issue than which politicians do the best direct marketing. It may even be bigger than who gets elected to what in November.

As issues go, personal privacy is soul-deep. Who gets elected, and how, are not.

As I put it here,

Surveillance of people is now the norm for nearly every website and app that harvests personal data for use by machines. Privacy, as we’ve understood it in the physical world since the invention of the loincloth and the door latch, doesn’t yet exist. Instead, all we have are the “privacy policies” of corporate entities participating in the data extraction marketplace, plus terms and conditions they compel us to sign, either of which they can change on a whim. Most of the time our only choice is to deny ourselves the convenience of these companies’ services or live our lives offline.

Worse is that these are proffered on the Taylorist model, meaning mass-produced.

There is a natural temptation to want to fix this with policy. This is a mistake for two reasons:

  1. Policy-makers are themselves part of the problem. Hell, most of their election campaigns are built on direct marketing. And law enforcement (which carries out certain forms of policy) has always regarded personal privacy as a problem to overcome rather than a solution to anything. Example.
  2. Policy-makers often screw things up. Exhibit A: the EU’s GDPR, which has done more to clutter the Web with insincere and misleading cookie notices than it has to advance personal privacy tech online. (I’ve written about this a lot. Here’s one sample.)

We need tech of our own. Terms and policies of our own. In the physical world, we have privacy tech in the forms of clothing, shelter, doors, locks and window shades. We have policies in the form of manners, courtesies, and respect for privacy signals we send to each other. We lack all of that online. Until we invent it, the most we’ll do to achieve real privacy online is talk about it, and inveigh for politicians to solve it for us. Which they won’t.

If you’re interested in solving personal privacy at the personal level, take a look at Customer Commons. If you want to join our efforts there, talk to me.

_____________
*The Trump campaign also has the enormous benefit of an already-chosen Republican ticket. The Democrats have a mess of candidates and a split in the party between young and old, socialists and moderates, and no candidate as interesting as is Trump. (Also, I’m not Joyce.)

At this point, it’s no contest. Trump is the biggest character in the biggest story of our time. (I explain this in Where Journalism Fails.) And he’s on a glide path to winning in November, just as I said he was in 2016.

We know more than we can tell.

That one-liner from Michael Polanyi has been waiting half a century for a proper controversy, which it now has with facial recognition. Here’s how he explains it in The Tacit Dimension:

This fact seems obvious enough; but it is not easy to say exactly what it means. Take an example. We know a person’s face, and can recognize it among a thousand others, indeed among a million. Yet we usually cannot tell how we recognize a face we know. So most of this knowledge cannot be put into words.

Polanyi calls that kind of knowledge tacit. The kind we can put into words he calls explicit.

For an example of both at work, consider how, generally, we don’t know how we will end the sentences we begin, or how we began the sentences we are ending—and how the same is true of what we hear or read from other people whose sentences we find meaningful. The explicit survives only as fragments, but the meaning of what was said persists in tacit form.

Likewise, if we are asked to recall and repeat, verbatim, a paragraph of words we have just said or heard, we will find it difficult or impossible to do so, even if we have no trouble saying exactly what was meant. This is because tacit knowing, whether kept to one’s self or told to others, survives the natural human tendency to forget particulars after a few seconds, even when we very clearly understand what we have just said or heard.

Tacit knowledge and short term memory are both features of human knowing and communication, not bugs. Even for people with extreme gifts of memorization (e.g. actors who can learn a whole script in one pass, or mathematicians who can learn pi to 4000 decimals), what matters more than the words or the numbers is their meaning. And that meaning is both more and other than what can be said. It is deeply tacit.

On the other hand—the digital hand—computer knowledge is only explicit, meaning a computer can know only what it can tell. At both knowing and telling, a computer can be far more complete and detailed than a human could ever be. And the more a computer knows, the better it can tell. (To be clear, a computer doesn’t know a damn thing. But it does remember—meaning it retrieves—what’s in its databases, and it does process what it retrieves. At all those activities it is inhumanly capable.)

So, the more a computer learns of explicit facial details, the better it can infer conclusions about that face, including ethnicity, age, emotion, wellness (or lack of it), and much else. Given a base of data about individual faces, and of names associated with those faces, a computer programmed to be adept at facial recognition can also connect faces to names, and say “This is (whomever).”
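To make those mechanics concrete, here is a minimal sketch of the name-matching step, using the open-source face_recognition Python library (my choice for illustration; the images and names are invented, and nothing here describes any particular vendor’s system):

```python
# Minimal sketch: match an unknown face against a small gallery of known,
# named faces. Uses the open-source face_recognition library (dlib-based).
# File names and people are invented; each photo is assumed to contain
# at least one detectable face.
import face_recognition

known_names = ["Alice", "Bob"]
known_encodings = [
    face_recognition.face_encodings(
        face_recognition.load_image_file(path))[0]
    for path in ["alice.jpg", "bob.jpg"]
]

# Encode the face we want to identify.
unknown_image = face_recognition.load_image_file("street_photo.jpg")
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Smaller distance means more similar; 0.6 is the library's
# conventional default tolerance.
distances = face_recognition.face_distance(known_encodings, unknown_encoding)
for name, distance in zip(known_names, distances):
    if distance <= 0.6:
        print(f"This is {name} (distance {distance:.2f})")
```

Swap the two named photos for a few billion scraped ones, plus an index to search them, and you have the heart of a system like Clearview’s.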

For all those reasons, computers doing facial recognition are proving useful for countless purposes: unlocking phones, finding missing persons and criminals, aiding investigations, shortening queues at passport portals, reducing fraud (for example at casinos), confirming age (saying somebody is too old or not old enough), finding lost pets (which also have faces). The list is long and getting longer.

Yet many (or perhaps all) of those purposes are at odds with the sense of personal privacy that derives from the tacit ways we know faces, our reliance on short-term memory, and our natural anonymity (literally, namelessness) among strangers. All of those are graces of civilized life in the physical world, and they are threatened by the increasingly widespread use—and uses—of facial recognition by governments, businesses, schools, and each other.

Louis Brandeis and Samuel Warren visited the same problem more than 130 years ago, when they became alarmed at the privacy risks suggested by photography, audio recording, and reporting on both via technologies that were far more primitive than those we have today. As a warning to the future, they wrote a landmark Harvard Law Review paper titled The Right to Privacy, which has served as a pole star of good sense ever since. Here’s an excerpt:

Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right “to be let alone.” Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.” For years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons; and the evil of invasion of privacy by the newspapers, long keenly felt, has been but recently discussed by an able writer. The alleged facts of a somewhat notorious case brought before an inferior tribunal in New York a few months ago directly involved the consideration of the right of circulating portraits; and the question whether our law will recognize and protect the right to privacy in this and in other respects must soon come before our courts for consideration.

They also say the “right of the individual to be let alone…is like the right not to be assaulted or beaten, the right not to be imprisoned, the right not to be maliciously prosecuted, the right not to be defamed.”

To that list today we might also add, “the right not to be reduced to bits” or “the right not to be tracked like an animal—whether anonymously or not.”

But it’s hard to argue for those rights in our digital world, where computers can see, hear, draw and paint exact portraits of everything: every photo we take, every word we write, every spreadsheet we assemble, every database accumulating in our hard drives—plus those of every institution we interact with, and countless ones we don’t (or do without knowing the interaction is there).

Facial recognition by computers is a genie that is not going back in the bottle. And there are no limits to wishes the facial recognition genie can grant the organizations that want to use it, which is why pretty much everything is being done with it. A few examples:

  • Facebook’s Deep Face sells facial recognition for many purposes to corporate customers. Examples from that link: “Face Detection & Landmarks…Facial Analysis & Attributes…Facial Expressions & Emotion… Verification, Similarity & Search.” This is non-trivial stuff. Writes Ben Goertzel, “Facebook has now pretty convincingly solved face recognition, via a simple convolutional neural net, dramatically scaled.”
  • FaceApp can make a face look older, younger, whatever. It can even swap genders.
  • The FBI’s Next Generation Identification (NGI) involves (says Wikipedia) eleven companies and the National Center for State Courts (NCSC).
  • Snap has a patent for reading emotions in faces.
  • The MORIS™ Multi-Biometric Identification System is “a portable handheld device and identification database system that can scan, recognize and identify individuals based on iris, facial and fingerprint recognition,” and is typically used by law enforcement organizations.
  • Casinos in Canada are using facial recognition to “help addicts bar themselves from gaming facilities.” It’s opt-in: “The technology relies on a method of ‘self-exclusion,’ whereby compulsive gamblers volunteer in advance to have their photos banked in the system’s database, in case they ever get the urge to try their luck at a casino again. If that person returns in the future and the facial-recognition software detects them, security will be dispatched to ask the gambler to leave.”
  • Cruise ships are boarding passengers faster using facial recognition by computers.
  • Australia proposes scanning faces to see if viewers are old enough to look at porn.

Facial recognition systems are also getting better and better at what they do. A November 2018 NIST report on a massive study of facial recognition systems begins,

This report documents performance of face recognition algorithms submitted for evaluation on image datasets maintained at NIST. The algorithms implement one-to-many identification of faces appearing in two-dimensional images.

The primary dataset is comprised of 26.6 million reasonably well-controlled live portrait photos of 12.3 million individuals. Three smaller datasets containing more unconstrained photos are also used: 3.2 million webcam images; 2.5 million photojournalism and amateur photographer photos; and 90 thousand faces cropped from surveillance-style video clips. The report will be useful for comparison of face recognition algorithms, and assessment of absolute capability. The report details recognition accuracy for 127 algorithms from 45 developers, associating performance with participant names. The algorithms are prototypes, submitted in February and June 2018 by research and development laboratories of commercial face recognition suppliers and one university…

The major result of the evaluation is that massive gains in accuracy have been achieved in the last five years (2013-2018) and these far exceed improvements made in the prior period (2010-2013). While the industry gains are broad — at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013 — there remains a wide range of capabilities. With good quality portrait photos, the most accurate algorithms will find matching entries, when present, in galleries containing 12 million individuals, with error rates below 0.2%.

Privacy freaks (me included) would like everyone to be creeped out by this. Yet many people are cool with it to some degree, and not just because they’re acquiescing to the inevitable: they’re relying on it because it makes interaction with machines easier—and they trust it.

For example, in Barcelona, CaixaBank is rolling out facial recognition at its ATMs, claiming that 70% of surveyed customers are ready to use it as an alternative to keying in a PIN, and that “66% of respondents highlighted the sense of security that comes with facial recognition.” That the bank’s facial recognition system “has the capability of capturing up to 16,000 definable points when the user’s face is presented at the screen” is presumably of little or no concern. Nor, also presumably, is the risk of what might get done with facial data if the bank gets hacked, or if it changes its privacy policy, or if it gets sold and the new owner can’t resist selling or sharing facial data with others who want it, or if (though more like when) government bodies require it.

A predictable pattern for every new technology is that what can be done will be done—until we see how it goes wrong and try to stop doing that. This has been true of every technology from stone tools to nuclear power and beyond. Unlike many other new technologies, however, it is not hard to imagine ways facial recognition by computers can go wrong, especially when it already has.

Two examples:

  1. In June, U.S. Customs and Border Protection, which relies on facial recognition and other biometrics, revealed that photos of people were compromised by a cyberattack on a federal subcontractor.
  2. In August, researchers at vpnMentor reported a massive data leak in BioStar 2, a widely used “Web-based biometric security smart lock platform” that uses facial recognition and fingerprinting technology to identify users. Notes the report, “Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.” vpnMentor also had a hard time getting through to company officials so they could fix the leak.

As organizations should know (but in many cases have trouble learning), the highest risks of data exposure and damage are to—

  • the largest data sets,
  • the most complex organizations and relationships, and
  • the largest variety of existing and imaginable ways that security can be breached.

And let’s not discount the scary potentials at the (not very) far ends of technological progress and bad intent. Killer microdrones targeted at faces, anyone?

So it is not surprising that some large companies doing facial recognition go out of their way to keep personal data out of their systems. For example, by making facial recognition work for the company’s customers, but not for the company itself.

Such is the case with Apple’s late-model iPhones, which feature Face ID: a personal facial recognition system that lets a person unlock their phone with a glance. Says Apple, “Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.”

But assurances such as Apple’s haven’t stopped push-back against all facial recognition. Some examples—

  • The Public Voice: “We the undersigned call for a moratorium on the use of facial recognition technology that enables mass surveillance.”
  • Fight for the Future: BanFacialRecognition. Self-explanatory, and with lots of organizational signatories.
  • New York Times: “San Francisco, long at the heart of the technology revolution, took a stand against potential abuse on Tuesday by banning the use of facial recognition software by the police and other agencies. The action, which came in an 8-to-1 vote by the Board of Supervisors, makes San Francisco the first major American city to block a tool that many police forces are turning to in the search for both small-time criminal suspects and perpetrators of mass carnage.”
  • Also in the Times, Evan Sellinger and Woodrow Hartzhog write, “Stopping this technology from being procured — and its attendant databases from being created — is necessary for protecting civil rights and privacy. But limiting government procurement won’t be enough. We must ban facial recognition in both public and private sectors before we grow so dependent on it that we accept its inevitable harms as necessary for “progress.” Perhaps over time, appropriate policies can be enacted that justify lifting a ban. But we doubt it.”
  • Cory Doctorow‘s Why we should ban facial recognition technology everywhere is an “amen” to the Selinger & Hartzhog piece.
  • BanFacialRecognition.com lists 37 participating organizations, including EPIC (Electronic Privacy Information Center), Daily Kos, Fight for the Future, MoveOn.org, National Lawyers Guild, Greenpeace and Tor.
  • MIT Technology Review says bans are spreading in the U.S.: “San Francisco and Oakland, California, and Somerville, Massachusetts, have outlawed certain uses of facial recognition technology, with Portland, Oregon, potentially soon to follow. That’s just the beginning, according to Mutale Nkonde, a Harvard fellow and AI policy advisor. That trend will soon spread to states, and there will eventually be a federal ban on some uses of the technology, she said at MIT Technology Review’s EmTech conference.”

Irony alert: the black banner atop that last story says, “We use cookies to offer you a better browsing experience, analyze site traffic, personalize content, and serve targeted advertisements.” Notes the Times’ Charlie Warzel, “Devoted readers of the Privacy Project will remember mobile advertising IDs as an easy way to de-anonymize extremely personal information, such as location data.” Well, advertising IDs are among the many trackers that both MIT Technology Review and The New York Times inject in readers’ browsers with every visit. (Bonus link.)

My own position on all this is provisional because I’m still learning and there’s a lot to take in. But here goes:

The only entities that should be able to recognize people’s faces are other people. And maybe their pets. But not machines.

But, since the facial recognition genie will never go back in its bottle, I’ll suggest a few rules for entities using computers to do facial recognition. All these are provisional as well:

  1. People should have their own forms of facial recognition, for example, to unlock phones, sort through old photos, or to show to others the way they would a driving license or a passport (to say, in effect, “See? This is me.”) But, the data they gather for themselves should not be shared with the company providing the facial recognition software (unless it’s just of their own face, and then only for the safest possible diagnostic or service improvement purposes). This, as I understand it, is roughly what Apple does with iPhones.
  2. Facial recognition systems used to detect changing facial characteristics (such as emotions, age, or wellness) should be required to forget what they see, right after the job is done, and not use the data gathered for any purpose other than diagnostics or performance improvement.
  3. For persons having their faces recognized, sharing data for diagnostic or performance improvement purposes should be opt-in, with data anonymized and made as auditable as possible, by individuals and/or their intermediaries.
  4. For enterprises with systems that know individuals’ (customers’ or consumers’) faces, don’t use those faces to track or find those individuals elsewhere in the online or offline worlds—again, unless those individuals have opted into the practice.

I suspect that Polanyi would agree with those.

But my heart is with Walt Whitman, whose Song of Myself argued against the dehumanizing nature of mechanization at the dawn of the industrial age. Wrote Walt,

Encompass worlds but never try to encompass me.
I crowd your noisiest talk by looking toward you.

Writing and talk do not prove me.
I carry the plenum of proof and everything else in my face.
With the hush of my lips I confound the topmost skeptic…

Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.

The spotted hawk swoops by and accuses me.
He complains of my gab and my loitering.

I too am not a bit tamed. I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

The barbaric yawps by human hawks say five words, very explicitly:

Get out of my face.

And they yawp those words in spite of the sad fact that obeying them may prove impossible.

[Later bonus links…]

 

black hole

Last night I watched The Great Hack a second time. It’s a fine documentary, maybe even a classic. (A classic in literature, I learned on this Radio Open Source podcast, is a work that “can only be re-read.” If that’s so, then perhaps a classic movie is one that can only be re-watched.*)

The movie’s message could hardly be more loud and clear: vast amounts of private information about each of us are gathered constantly in the digital world, and are being weaponized so our minds and lives can be hacked by others for commercial or political gain. Or both. The movie’s star, Professor David Carroll of the New School (@profcarroll), has been delivering that message for many years, as have many others, including myself.

But to what effect?

Sure, we have policy moves such as the GDPR, the main achievement of which (so far) has been to cause every website to put confusing and (in most cases) insincere cookie notices on their index pages, meant (again, in most cases) to coerce “consent” (which really isn’t) to exactly the unwanted tracking the regulation was meant to stop.

Those don’t count.

Ennui does. Apathy does.

On seeing The Great Hack that second time, I had exactly the same feeling my wife had on seeing it for her first: that the very act of explaining the problem also trivialized it. In other words, the movie worsened the very problem it exposed. And it isn’t alone at this; so has everything everybody has said, written or reported about it. Or so it sometimes seems. At least to me.

Okay, so: if I’m right about that, why might it be?

One reason is that there’s no story. See, every story requires three elements: character (or characters), problem (or problems), and movement toward resolution. (Find a more complete explanation here.) In this case, the third element—movement toward resolution—is absent. Worse, there’s almost no hope. “The Great Hack” concludes with a depressing summary that tends to leave one feeling deeply screwed, especially since the only victories in the movie are over the late Cambridge Analytica; and those victories were mostly within policy circles we know will either do nothing or give us new laws that protect yesterday from last Thursday… and then last another hundred years.

The bigger reason is that we are now in a media environment summarized by Marshall McLuhan in his book The Medium is the Massage: “every new medium works us over completely.” Our new medium is the Internet, which is a non-place absent of distance and gravity. The only institutions holding up there are ones clearly anchored in the physical world. Health care and law enforcement, for example. Others dealing in non-material goods, such as information and ideas, aren’t doing as well.

Journalism, for example. Worse, on the Internet it’s easy for everyone to traffic in thoughts and opinions, as well as in solid information. So now the worlds of thought and opinion, which preponderate on social media such as Twitter, Facebook and Instagram, are vast floods of everything from everybody. In the midst of all that, the news cycle, which used to be daily, now lasts about as long as a fart. Calling it all too much is a near-absolute understatement.

But David Carroll is right. Darkness is falling. I just wish all the light we keep trying to shed would do a better job of helping us all see that.

_________

*For those who buy that notion, I commend The Rewatchables, a great podcast from The Ringer.

Whither Linux Journal?

[16 August 2019…] Had a reassuring call yesterday with Ted Kim, CEO of London Trust Media. He told me the company plans to keep the site up as an archive at the LinuxJournal.com domain, and that if any problems develop around that, he’ll let us know. I told him we appreciate it very much—and that’s where it stands. I’m leaving up the post below for historical purposes.

On August 5th, Linux Journal‘s staff and contractors got word from the magazine’s parent company, London Trust Media, that everyone was laid off and the business was closing. Here’s our official notice to the world on that.

I’ve been involved with Linux Journal since before it started publishing in 1994, and have been on its masthead since 1996. I’ve also been its editor-in-chief since January of last year, when it was rescued by London Trust Media after nearly going out of business the month before. I say this to make clear how much I care about Linux Journal‘s significance in the world, and how grateful I am to London Trust Media for saving the magazine from oblivion.

London Trust Media can do that one more time, by helping preserve the Linux Journal website, with its 25 years of archives, so all its links remain intact, and nothing gets 404’d. Many friends, subscribers and long-time readers of Linux Journal have stepped up with offers to help with that. The decision to make that possible, however, is not in my hands, or in the hands of anyone who worked at the magazine. It’s up to London Trust Media. The LinuxJournal.com domain is theirs.

I have had no contact with London Trust Media in recent months. But I do know at least this much:

  1. London Trust Media has never interfered with Linux Journal‘s editorial freedom. On the contrary, it quietly encouraged our pioneering work on behalf of personal privacy online. Among other things, LTM published the first draft of a Privacy Manifesto now iterating at ProjectVRM, and recently published on Medium.
  2. London Trust Media has always been on the side of freedom and openness, which is a big reason why they rescued Linux Journal in the first place.
  3. Since Linux Journal is no longer a functioning business, its entire value is in its archives and their accessibility to the world. To be clear, these archives are not mere “content.” They are a vast store of damned good writing, true influence, and important history that search engines should be able to find where it has always been.
  4. While Linux Journal is no longer listed as one of London Trust Media’s brands, the website is still up, and its archives are still intact.

While I have no hope that Linux Journal can be rescued again as a subscriber-based digital magazine, I do have hope that the LinuxJournal.com domain, its (Drupal-based) website and its archives will survive. I base that hope on believing that London Trust Media’s heart has always been in the right place, and that the company is biased toward doing the right thing.

But the thing is up to them. It’s their choice whether or not to support the countless subscribers and friends who have stepped forward with offers to help keep the website and its archives intact and persistent on the Web. It won’t be hard to do that. And it’s the right thing to do.

The Spinner* (with the asterisk) is “a service that enables you to subconsciously influence a specific person, by controlling the content on the websites he or she usually visits.” Meaning you can hire The Spinner* to hack another person.

It works like this:

  1. You pay The Spinner* $29. For example, to urge a friend to stop smoking. (That’s the most positive and innocent example the company gives.)
  2. The Spinner* provides you with an ordinary link you then text to your friend. When that friend clicks on the link, they get a tracking cookie that works as a bulls-eye for The Spinner* to hit with 10 different articles written specifically to influence that friend. He or she “will be strategically bombarded with articles and media tailored to him or her.” Specifically, 180 of these things. Some go in social networks (notably Facebook) while most go into “content discovery platforms” such as Outbrain and Revcontent (best known for those clickbait collections you see appended to publishers’ websites).
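Mechanically, that bulls-eye is mundane. Below is a minimal sketch, in Python with Flask, of how a shared link can tag a browser with a long-lived identifying cookie before bouncing it to an innocuous page. The route, cookie name, lifetime and destination are all invented; I have no knowledge of The Spinner’s actual code.

```python
# Minimal sketch of a "tagging" link: the target clicks a short link,
# picks up a long-lived identifying cookie, and is redirected onward.
# Route, cookie name, lifetime and destination are invented.
from flask import Flask, make_response, redirect

app = Flask(__name__)

@app.route("/l/<campaign_id>")
def tag_and_redirect(campaign_id):
    resp = make_response(redirect("https://example.com/harmless-article"))
    # The cookie marks this browser as the campaign's target, so the
    # "tailored" articles can later be aimed at it via ad platforms.
    resp.set_cookie("target_id", campaign_id, max_age=60 * 60 * 24 * 180)
    return resp
```

In practice the identifier would also be synced to the ad platforms doing the aiming, but the principle holds: one click, one durable bulls-eye.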

The Spinner* is also a hack on journalism, designed like a magic trick to misdirect moral outrage toward The Spinner’s obviously shitty business, and away from the shitty business called adtech, which not only makes The Spinner possible, but pays for most of online journalism as well.

The magician behind The Spinner* is “Elliot Shefler.” Look that name up and you’ll find hundreds of stories. Here are a top few, to which I’ve added some excerpts and notes:

  • For $29, This Man Will Help Manipulate Your Loved Ones With Targeted Facebook And Browser Links, by Parmy Olson @parmy in Forbes. Excerpt: He does say that much of his career has been in online ads and online gambling. At its essence, The Spinner’s software lets people conduct a targeted phishing attack, a common approach by spammers who want to secretly grab your financial details or passwords. Only in this case, the “attacker” is someone you know. Shefler says his algorithms were developed by an agency with links to the Israeli military.
  • For $29, This Company Swears It Will ‘Brainwash’ Someone on Facebook, by Kevin Poulson (@kpoulson) in The Daily Beast. A subhead adds, A shadowy startup claims it can target an individual Facebook user to bend him or her to a client’s will. Experts are… not entirely convinced.
  • Facebook is helping husbands ‘brainwash’ their wives with targeted ads, by Simon Chandler (@_simonchandler_) in The Daily Dot. Excerpt: Most critics assume that Facebook’s misadventures relate only to its posting of ads paid for by corporations and agencies, organizations that aim to puppeteer the “average” individual. It turns out, however, that the social network also now lets this same average individual place ads that aim to manipulate other such individuals, all thanks to the mediation of a relatively new and little-known company…
  • Brainwashing your wife to want sex? Here is adtech at its worst., by Samuel Scott (@samueljscott) in The Drum. Alas, the piece is behind a registration wall that I can’t climb without fucking myself (or so I fear, since the terms and privacy policy total 32 pages and 10,688 words I’m not going to read), so I can’t quote from it.
  • Creepy company hopes ‘Inception’ method will get your wife in the mood, by Saqib Shah (@eightiethmnt) in The Sun, via The New York Post. Excerpt: “It’s unethical in many ways,” admitted Shefler, adding “But it’s the business model of all media. If you’re against it, you’re against all media.” He picked out Nike as an example, explaining that if you visit the brand’s website it serves you a cookie, which then tailors the browsing experience to you every time you come back. A shopping website would also use cookies to remember the items you’re storing in a virtual basket before checkout. And a social network might use cookies to track the links you click and then use that information to show you more relevant or interesting links in the future…The Spinner started life in January of this year. Shefler claims the company is owned by a larger, London-based “agency” that provides it with “big data” and “AI” tools.
  • Adtech-for-sex biz tells blockchain consent app firm, ‘hold my beer’, by Rebecca Hill (@beckyhill) in The Register. The subhead says, Hey love, just click on this link… what do you mean, you’re seeing loads of creepy articles?
  • New Service Promises to Manipulate Your Wife Into Having Sex With You, by Fiona Tapp (@fionatappdotcom) in Rolling Stone. Excerpt: The Spinner team suggests that there isn’t any difference, in terms of morality, from a big company using these means to influence a consumer to book a flight or buy a pair of shoes and a husband doing the same to his wife. Exactly.
  • The Spinner And The Faustian Bargain Of Anonymized Data, by Lauren Arevalo-Downes (whose Twitter link by the piece goes to a 404) in A List Daily. On that site, the consent wall that creeps up from the bottom almost completely blanks out the actual piece, and I’m not going to “consent,” so no excerpting here either.
  • Can you brainwash one specific person with targeted Facebook ads? in TripleJ Hack, by ABC.net.au. Excerpt: Whether or not the Spinner has very many users, whether or not someone is going to stop drinking or propose marriage simply because they saw a sponsored post in their feed, it seems feasible that someone can try to target and brainwash a single person through Facebook.
  • More sex, no smoking – even a pet dog – service promises to make you a master of manipulation, by Chris Keall (@ChrisKeall) in The New Zealand Herald. Excerpt: On one level, The Spinner is a jape, rolled out as a colour story by various publications. But on another level it’s a lot more sinister: apparently yet another example of Facebook’s platform being abused to invade privacy and manipulate thought.
  • The Cambridge Analytica of Sex: Online service to manipulate your wife to have sex with you, by Ishani Ghose in meaww. Excerpt: The articles are all real but the headlines and the descriptions have been changed by the Spinner team. The team manipulating the headlines of these articles include a group of psychologists from an unnamed university. As the prepaid ads run, the partner will see headlines such as “3 Reasons Why YOU Should Initiate Sex With Your Husband” or “10 Marriage Tips Every Woman Needs to Hear”.

Is Spinner for real?

“Elliot Shefler” is human for sure. But his footprint online is all PR. He’s not on Facebook, Twitter or Instagram. The word “Press” (as in coverage) at the top of the Spinner website is just a link to a Google search for Elliot Shefler, not to a curated list such as a real PR person or agency might compile.

Fortunately, a real PR person, Rich Leigh (@RichLeighPR) did some serious digging (you know, like a real reporter) and presented his findings in his blog, PR Examples, in a post titled Frustrated husbands can ‘use micro-targeted native ads to influence their wives to initiate sex’ – surely a PR stunt? Please, a PR stunt? It ran last July 10th, the day after Rich saw this tweet by Maya Kosoff (@mekosoff):

—and this one:

The links to (and in) those tweets no longer work, but the YouTube video behind one of the links is still up. The Spinner itself produced the video, which is tricked to look like a real news story. (Rich does some nice detective work, figuring that out.) The image above is a montage I put together from screenshots of the video.

Here’s some more of what Rich found out:

  • Elliot – not his real name, incidentally, his real name is Halib, a Turkish name (he told me) – lives, or told me he lives, in Germany

  • When I asked him directly, he assured me that it was ‘real’, and when I asked him why it didn’t work when I tried to pay them money, told me that it would be a technical issue that would take around half an hour to fix, likely as a result of ‘high traffic’. I said I’d try again later. I did – keep reading

  • It is emphatically ‘not’ PR or marketing for anything

  • He told me that he has 5-6,000 paying users – that’s $145,000 – $174,000, if he’s telling the truth

  • Halib said that Google Ads were so cheap as nobody was bidding on them for the terms he was going for, and they were picking up traffic for ‘one or two cents’

  • He banked on people hate-tweeting it. “I don’t mind what they feel, as long as they think something”, Halib said – which is scarily like something I’ve said in talks I’ve given about coming up with PR ideas that bang

  • The service ‘works’ by dropping a cookie, which enables it to track the person you’re trying to influence in order to serve specific content. I know we had that from the site, but it’s worth reiterating

Long post short, Rich says Halib and/or Elliot is real, and so is The Spinner.

But what matters isn’t whether or not The Spinner is real. It’s that The Spinner misdirects reporters’ attention away from what adtech is and does, which is spy on people for the purpose of aiming stuff at them. And that adtech isn’t just what funds all of Facebook and much of Google (both giant and obvious targets of journalistic scrutiny), but what funds nearly all of publishing online, including most reporters’ salaries.

So let’s look deeper, starting here: There is no moral difference between planting an unseen tracking beacon on a person’s digital self and doing the same on a person’s physical self.

The operational difference is that in the online world it’s a helluva lot easier to misdirect people into thinking they’re not being spied on. Also a helluva lot easier for spies and intermediaries (such as publishers) to plausibly deny that spying is what they’re doing. And to excuse it, saying for example “It’s what pays for the Free Internet!” Which is bullshit, because the Internet, including the commercial Web, got along fine for many years before adtech turned the whole thing into Mos Eisley. And it will get along fine without adtech after we kill it, or it dies of its own corruption.

Meanwhile the misdirection continues, and it’s away from a third rail that honest and brave journalists† need to grab: that adtech is also what feeds most of them.

______________

† I’m being honest here, but not brave. Because I’m safe. I don’t work for a publication that’s paid by adtech. At Linux Journal, we’re doing the opposite, by being the first publication ready to accept terms that our readers proffer, starting with Customer Commons’ #P2B1(beta), which says “Just show me ads not based on tracking me.”

To get real privacy in the online world, we need to get the tech horse in front of the policy cart.

So far we haven’t done that. Let me explain…

Nature and the Internet both came without privacy.

The difference is that we’ve invented privacy tech in the natural world, starting with clothing and shelter, and we haven’t yet done the same in the digital world.

When we go outside in the digital world, most of us are still walking around naked. Worse, nearly every commercial website we visit plants tracking beacons on us to support the extractive economy in personal data called adtech: tracking-based advertising.

In the natural world, we also have long-established norms for signaling what’s private, what isn’t, and how to respect both. Laws have grown up around those norms as well. But let’s be clear: the tech and the norms came first.

Yet for some reason many of us see personal privacy as a grace of policy. It’s like, “The answer is policy. What is the question?”

Two such answers arrived with this morning’s New York Times: Facebook Is Not the Problem. Lax Privacy Rules Are., by the Editorial Board; and Can Europe Lead on Privacy?, by ex-FCC Chairman Tom Wheeler. Both call for policy. Neither sees possibilities for personal tech. To both, the only actors in tech are big companies and big government, and it’s the job of the latter to protect people from the former. What they both miss is that we need what we might call big personal. We can only get that with personal tech that gives each of us power not just to resist encroachments by others, but to have agency. (Merriam-Webster: the capacity, condition, or state of acting or of exerting power.) When enough of us get personal agency, we can also have collective agency, for social as well as personal results.

We acquired both personal and social agency with personal computing and the Internet. Both were designed to make everyone an Archimedes. We also got a measure of both with the phones and tablets we carry around in our pockets and purses. None are yet as private as they should be, but making them fully private is the job of tech. And that tech must be personal.

I bring this up because we will be working on privacy tech over the next four days at the Computer History Museum, first at VRM Day, today, and then over next three days at IIW: the Internet Identity Workshop. We have both twice every year.

On the table at both are work some of us, me included, are doing through Customer Commons on terms we can proffer as individuals, and the sites and services of the world can agree to.

Those terms are examples of what we call customertech: tech that’s ours and not Facebook’s or Apple’s or Google’s or Amazon’s.

The purpose of customertech is to turn the connected marketplace into a Marvel-like universe in which all of us are enhanced. It’ll be interesting to see what kind of laws and social effects follow.*

But hey, let’s invent the tech we need first.

*BTW, I give huge props to the EU for the General Data Protection Regulation, which is causing much new personal privacy tech development and discussion. I also think it’s an object lesson in what can happen when an essential area of tech development is neglected, and gets exploited by others for lack of that development.

Also, to be clear, my argument here is not against policy, but for tech development. Without the tech and the norms it makes possible, we can’t have fully enlightened policy.

Bonus link.


I found the image in this search for cart & horse images that were free to use.

Let’s start with Facebook’s Surveillance Machine, by Zeynep Tufekci in last Monday’s New York Times. Among other things (all correct), Zeynep explains that “Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.”

Irony Alert: the same is true for the Times, along with every other publication that lives off adtech: tracking-based advertising. These pubs don’t just open the kimonos of their readers. They bring readers’ bare digital necks to vampires ravenous for the blood of personal data, all for the purpose of aiming “interest-based” advertising at those same readers, wherever those readers’ eyeballs may appear—or reappear in the case of “retargeted” advertising.

With no control by readers (beyond tracking protection which relatively few know how to use, and for which there is no one approach, standard, experience or audit trail), and no blood valving by the publishers who bare those readers’ necks, who knows what the hell actually happens to the data?

Answer: nobody knows, because the whole adtech “ecosystem” is a four-dimensional shell game with hundreds of players

or, in the case of “martech,” thousands:

For one among many views of what’s going on, here’s a compressed screen shot of what Privacy Badger showed going on in my browser behind Zeynep’s op-ed in the Times:

[Added later…] @ehsanakhgari tweets pointage to WhoTracksMe’s page on the NYTimes, which shows this:

And here’s more irony: a screen shot of the home page of RedMorph, another privacy protection extension:

That quote is from Free Tools to Keep Those Creepy Online Ads From Watching You, by Brian X. Chen and Natasha Singer, and published on 17 February 2016 in the Times.

The same irony applies to countless other correct and important reportage on the Facebook/Cambridge Analytica mess by other writers and pubs. Take, for example, Cambridge Analytica, Facebook, and the Revelations of Open Secrets, by Sue Halpern in yesterday’s New Yorker. Here’s what RedMorph shows going on behind that piece:

Note that I have the data leak toward Facebook.net blocked by default.

Here’s a view through RedMorph’s controller pop-down:

And here’s what happens when I turn off “Block Trackers and Content”:

By the way, I want to make clear that Zeynep, Brian, Natasha and Sue are all innocents here, thanks both to the “Chinese wall” between the editorial and publishing functions of the Times, and the simple fact that the route any ad takes between advertiser and reader through any number of adtech intermediaries is akin to a ball falling through a pinball machine. Refresh your page while reading any of those pieces and you’ll see a different set of ads, no doubt aimed by automata guessing that you, personally, should be “impressed” by those ads. (They’ll count as “impressions” whether you are or not.)

Now…

What will happen when the Times, the New Yorker and other pubs own up to the simple fact that they are just as guilty as Facebook of leaking data about their readers to other parties, for—in many if not most cases—God knows what purposes besides “interest-based” advertising? And what happens when the EU comes down on them too? It’s game-on after 25 May, when the EU can start fining violators of the General Data Protection Regulation (GDPR). Key fact: the GDPR protects the data blood of what the EU calls “EU data subjects” wherever those subjects’ necks are exposed in the borderless digital world.

To explain more about how this works, here is the (lightly edited) text of a tweet thread posted this morning by @JohnnyRyan of PageFair:

Facebook left its API wide open, and had no control over personal data once those data left Facebook.

But there is a wider story coming: (thread…)

Every single big website in the world is leaking data in a similar way, through “RTB bid requests” for online behavioural advertising #adtech.

Every time an ad loads on a website, the site sends the visitor’s IP address (indicating physical location), the URL they are looking at, and details about their device, to hundreds (often thousands) of companies. Here is a graphic that shows the process.
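For a sense of what that broadcast contains, here is a stripped-down sketch of an OpenRTB-style bid request, rendered as a Python dict. Real requests carry many more fields, and every value here is invented:

```python
# Stripped-down sketch of an OpenRTB-style bid request. Real requests
# carry many more fields; all values here are invented.
bid_request = {
    "id": "auction-8f3a2c",                 # one auction, one page load
    "site": {"page": "https://news.example.com/story-you-are-reading"},
    "device": {
        "ip": "203.0.113.42",               # implies physical location
        "ua": "Mozilla/5.0 (...)",          # browser and operating system
        "geo": {"lat": 40.74, "lon": -73.99},
    },
    "user": {"id": "exchange-cookie-1234"}, # the tracking identifier
    "imp": [{"banner": {"w": 300, "h": 250}}],  # the ad slot for sale
}
# This object goes out to hundreds or thousands of bidders, whether or
# not they ever buy the impression.
```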

The website does this to let these companies “bid” to show their ad to this visitor. Here is a video of how the system works. In Europe this accounts for about a quarter of publishers’ gross revenue.

Once these personal data leave the publisher, via “bid request”, the publisher has no control over what happens next. I repeat that: personal data are routinely sent, every time a page loads, to hundreds/thousands of companies, with no control over what happens to them.

This means that every person, and what they look at online, is routinely profiled by companies that receive these data from the websites they visit. Where possible, these data are combined with offline data. These profiles are built up in “DMPs”.

Many of these DMPs (data management platforms) are owned by data brokers. (Side note: the FTC’s 2014 report on data brokers is shocking. See https://www.ftc.gov/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014.) There is no functional difference between an #adtech DMP and Cambridge Analytica.

—Terrell McSweeny, Julie Brill and EDPS

None of this will be legal under the #GDPR. (See one reason why at https://t.co/HXOQ5gb4dL). Publishers and brands need to take care to stop using personal data in the RTB system. Data connections to sites (and apps) have to be carefully controlled by publishers.

So far, #adtech’s trade body has been content to cover over this wholesale personal data leakage with meaningless gestures that purport to address the #GDPR (see my note on @IABEurope current actions here: https://t.co/FDKBjVxqBs). It is time for a more practical position.

And advertisers, who pay for all of this, must start to demand that safe, non-personal data take over in online RTB targeting. RTB works without personal data. Brands need to demand this to protect themselves – and all Internet users too. @dwheld @stephan_lo @BobLiodice

Websites need to control
1. which data they release in to the RTB system
2. whether ads render directly in visitors’ browsers (where DSPs’ JavaScript can drop trackers)
3. what 3rd parties get to be on their page
@jason_kint @epc_angela @vincentpeyregne @earljwilkinson 11/12

Let’s work together to fix this. 12/12

Those last three recommendations are all good, but they also assume that websites, advertisers and their third party agents are the ones with the power to do something. Not readers.
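On the third of those controls, a standard mechanism already exists: a publisher can whitelist which third parties may run scripts on its pages with a Content-Security-Policy response header. A minimal sketch, again in Python with Flask, with a hypothetical trusted domain:

```python
# Minimal sketch: a publisher whitelisting third parties via a
# Content-Security-Policy header. The trusted domain is hypothetical.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def article():
    return "<p>An article, with only whitelisted scripts able to run.</p>"

@app.after_request
def set_csp(resp):
    resp.headers["Content-Security-Policy"] = (
        "script-src 'self' https://cdn.trusted-partner.example; "
        "img-src 'self'"  # also blocks third-party tracking pixels
    )
    return resp
```

Nothing stops a publisher from fencing out adtech scripts this way today; the constraint is the business model, not the technology.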

But there’s lots readers will be able to do. More about that shortly. Meanwhile, publishers can get right with readers by dropping #adtech and going back to publishing the kind of high-value brand advertising they’ve run since forever in the physical world.

That advertising, as Bob Hoffman (@adcontrarian) and Don Marti (@dmarti) have been making clear for years, is actually worth a helluva lot more than adtech, because it delivers clear creative and economic signals and comes with no cognitive overhead (for example, wondering where the hell an ad comes from and what it’s doing right now).

As I explain here, “Real advertising wants to be in a publication because it values the publication’s journalism and readership” while “adtech wants to push ads at readers anywhere it can find them.”

Doing real advertising is the easiest fix in the world, but so far it’s nearly unthinkable for a tech industry that has been defaulted for more than twenty years to an asymmetric power relationship between readers and publishers called client-server. I’ve been told that client-server was chosen as the name for this relationship because “slave-master” didn’t sound so good; but I think the best way to visualize it is calf-cow:

As I put it at that link (way back in 2012), Client-server, by design, subordinates visitors to websites. It does this by putting nearly all responsibility on the server side, so visitors are just users or consumers, rather than participants with equal power and shared responsibility in a truly two-way relationship between equals.

It doesn’t have to be that way. Beneath the Web, the Net’s TCP/IP protocol—the gravity that holds us all together in cyberspace—remains no less peer-to-peer and end-to-end than it was in the first place. Meaning there is nothing about the Net that prevents each of us from having plenty of power on our own.

On the Net, we don’t need to be slaves, cattle or throbbing veins. We can be fully human. In legal terms, we can operate as first parties rather than second ones. In other words, the sites of the world can click “agree” to our terms, rather than the other way around.

Customer Commons is working on exactly those terms. The first publication to agree to readers’ terms is Linux Journal, where I am now editor-in-chief. The first of those terms is #P2B1(beta), which says “Just show me ads not based on tracking me,” and is hashtagged #NoStalking.
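Creative Commons pairs each human-readable license with a machine-readable expression, and reader-proffered terms will presumably need the same. Purely as a hypothetical illustration (this is not Customer Commons’ actual format, and every field name is invented), such a term might travel as a small structured object:

```python
# Purely hypothetical sketch of a machine-readable reader-proffered term.
# NOT Customer Commons' actual format; every field name is invented.
no_stalking_term = {
    "term": "P2B1",
    "status": "beta",
    "hashtag": "#NoStalking",
    "proffered_by": "reader",  # the first party
    "plain_language": "Just show me ads not based on tracking me",
    "permits": ["contextual advertising"],
    "prohibits": ["tracking", "retargeting", "resale of personal data"],
}
```

A site that reads and agrees to such an object would have, in effect, a contract with its reader, which is the arrangement described just below.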

In Help Us Cure Online Publishing of Its Addiction to Personal Data, I explain how this models the way advertising ought to be done: by the grace of readers, with no spying.

Obeying readers’ terms also carries no risk of violating privacy laws, because every pub will have contracts with its readers to do the right thing. This is totally do-able. Read that last link to see how.

As I say there, we need help. Linux Journal still has a small staff, and Customer Commons (a California-based 501(c)(3) nonprofit) so far consists of five board members. What it aims to be is a worldwide organization of customers, as well as the place where terms we proffer can live, much as Creative Commons is where personal copyright licenses live. (Customer Commons is modeled on Creative Commons. Hats off to the Berkman Klein Center for helping bring both into the world.)

I’m also hoping other publishers, once they realize that they are no less a part of the surveillance economy than Facebook and Cambridge Analytica, will help out too.

[Later…] Not long after this post went up I talked about these topics on the Gillmor Gang. Here’s the video, plus related links.

I think the best push-back I got there came from Esteban Kolsky (@ekolsky), who (as I recall anyway) saw less than full moral equivalence between what Facebook and Cambridge Analytica did to screw with democracy and what the New York Times and other ad-supported pubs do by baring the necks of their readers to dozens of data vampires.

He’s right that they’re not equivalent, any more than apples and oranges are equivalent. The sins are different; but they are still sins, just as apples and oranges are still both fruit. Exposing readers to data vampires is simply wrong on its face, and we need to fix it. That it’s normative in the extreme is no excuse. Nor is the fact that it makes money. There are morally uncompromised ways to make money with advertising, and those are still available.

Another push-back is the claim by many adtech third parties that the personal data blood they suck is anonymized. While that may be so, correlation is still possible. See Study: Your anonymous web browsing isn’t as anonymous as you think, by Barry Levine (@xBarryLevine) in Martech Today, which cites De-anonymizing Web Browsing Data with Social Networks, a study by Jessica Su (@jessicatsu), Ansh Shukla (@__anshukla__) and Sharad Goel (@5harad) of Stanford and Arvind Narayanan (@random_walker) of Princeton.
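The core move in that study is simple enough to sketch: take an “anonymous” browsing trail, compare it with the set of links each candidate’s public social feed would have shown them, and pick the best overlap. Here’s a toy version of mine (all data invented; the actual paper uses a maximum-likelihood model over Twitter feeds, not this crude counting):

```typescript
// Toy illustration of the correlation attack: match an "anonymous"
// browsing trail against each candidate's public feed and pick the
// biggest overlap. All data here is invented, and the actual paper
// uses a maximum-likelihood model over Twitter feeds, not raw counts.
const candidateFeeds: Record<string, Set<string>> = {
  alice: new Set(["nytimes.com/a", "example.org/b", "blog.example/c"]),
  bob:   new Set(["espn.com/x", "example.org/b", "news.example/y"]),
};

const anonymousTrail = ["nytimes.com/a", "blog.example/c", "weather.example/z"];

function bestMatch(trail: string[]): { user: string; score: number } {
  let best = { user: "nobody", score: -1 };
  for (const [user, feed] of Object.entries(candidateFeeds)) {
    const score = trail.filter((url) => feed.has(url)).length;
    if (score > best.score) best = { user, score };
  }
  return best;
}

// "alice" wins with 2 of 3 URLs matched: anonymity lost, no names needed.
console.log(bestMatch(anonymousTrail));
```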

(Note: Facebook and Google follow logged-in users by name. They also account for most of the adtech business.)

One commenter below noted that this blog also carries six trackers (most of which I block). Here is how those look in Ghostery:

So let’s fix this thing.

[Later still…] Lots of comments on Hacker News as well.

[Later again (8 April 2018)…] About the comments below (60+ so far): the version of commenting used by this blog doesn’t support threading. If it did, my responses to comments would appear below each one. Alas, some appear out of sequence, and others don’t appear at all. I don’t know why, but I’m trying to find out. Meanwhile, apologies.

For today’s entries, I’m noting which linked pieces require you to turn off tracking protection, meaning tracking is required by those publishers. I’m also annotating entries with hashtags and organizing sections into bulleted lists.


#AdBlocking and #Advertising

#Apple

#Photography

#Other

[Photo: subway riders absorbed in their phones, 27 March 2017]

I shot this picture with my phone on the subway last night, while no less absorbed in my personal rectangle than everyone else on the subway (and I do mean everyone) was with theirs.

I don’t know what the other passengers were doing on their rectangles, though it’s not hard to guess. In my case it was spinning through emails, texting, tweeting, checking various other apps (weather, navigation, calendar) and listening to podcasts.

We shape our tools and then they shape us. That was and remains Marshall McLuhan‘s main point. The us is both singular and plural. We get shaped, and so do our infrastructures, societies, governments and the rest of what we do in the civilized world. (Here’s an example of all four of those happening at once: People won’t stop staring at their phones, so a Dutch town put traffic lights on the ground. From Quartz.)

Two years from now, most of the phones used by people in this shot will be traded in, discarded or re-purposed. But will we remain just as tethered to Apple, Google, Facebook, Amazon, telcos and the other feudal overlords* that sell us our rectangles and connect us to the world? (*A metaphor we owe to Bruce Schneier.)

The deeper question is whether we’ll finish becoming dependent serfs to sovereigns with silos or start becoming self-sovereign as free-range human beings in truly open societies.

The answer will probably be some combination of both. In the meantime, however, one clear need is for greater independence and agency, at least at the individual level. (There are similar needs in the social, political and economic spheres as well, but let’s keep this personal.)

Obsolescence will help.

Within the next two years (just like the last two and the two before that), most phones will do less old-fashioned telephony, text, audio and video, and much more cool (and perhaps scary) new shit (VR, AI, IA, CX and other two-letter acronyms, to name a few off the top of my head and my screen).

Just as surely they’ll also give us new ways to shape what we do and be shaped as well. Perhaps by then mass media will finish turning into the mess media it actually is already, though we don’t call it that yet.

One big hmm: what comes after phone use spreads beyond ubiquity (when most of us have multiple rectangles)?

Everything gets obsolesced, one way or another, eventually. But that doesn’t mean it goes away. It just means something else comes along that’s better for the main purpose, while the obsolesced tech still hangs around in a subordinated, subsumed or specialized state. Print did that to scribing, radio did it to print, TV did it to radio, and the Net is doing it to damn near every other medium we can name, subsuming them all and stretching their effects to the absolute limit by eliminating the distances between everything while pushing costs toward zero. (See The Giant Zero for more on that.)

Thus, while all our asses still sit on Earth in physical space, our digital selves float weightlessly in a non-space with no gravity and no distance. Since progress is the process by which the miraculous becomes mundane, we already experience these two states non-ironically and all at once. And even this isn’t new. Here’s what I wrote about it in The Intention Economy, published in 2012:

Story #1. It’s 2002, and the kid is seven. As always, he’s full of questions. As sometimes happens, I don’t have an answer. But this time he comes back with a simple demand:

“Look it up,” he says.

“I can’t. I’m driving.”

“Look it up anyway.”

“I need a computer for that.”

“Why?”

Story #2. It’s 2007, and we are staying overnight in the house of an old family friend. In a guest bedroom is a small portable 1970’s-vintage black-and-white TV. On the front of the TV are a volume control and two tuning dials: one for channels 2-13, the other for 14-83. The kid examines the device for a minute or two and says, “What is this?” I say it’s a TV. He points at the two dials and asks, “Then what are these for?”

Progress is how the miraculous becomes mundane. The beauty of stars would be legend, Emerson said, if they only showed through the clouds but once every thousand years. What would he have made of commercial aviation, a system by which millions of people fly all over the globe, every day, leaping continents and oceans in just a few hours, while complaining of bad food and slow service, and shutting their windows to block light from the clouds below so they can watch a third-rate movie with bad sound on a tiny screen?

The Internet is a sky of stars we’ve made for ourselves (and of ourselves), all just a few clicks away.

McLuhan says the effects of every new medium can be understood through four questions he calls a tetrad, illustrated this way:

[Diagram: McLuhan’s media tetrad]

Put a new medium in the middle and then sort effects into the four corners by answering a question for each:

  1. What does the medium enhance?
  2. What does the medium make obsolete?
  3. What does the medium retrieve that had been obsolesced earlier?
  4. What does the medium reverse or flip into when pushed to extremes?

These are posed as a heuristic: an approach to help us understand what’s going on, rather than a way to come up with perfect or final answers. There can be many answers to each question, all arguable.

So let’s look at smartphones. I suggest they—

  • Enhance conversation
  • Obsolesce mass media (print, radio, TV, cinema, whatever)
  • Retrieve personal agency (the ability to act with effect in the world)
  • Reverse into isolation (also into lost privacy through exposure to surveillance and exploitation)

I don’t think we’re all the way into any of those yet, even as every damn one of us on a subway rewires our brains in real time using rectangles that extend our presence, involvement and effects in the world. Ironies abound, invisible, unnoticed. We all smell something. Is it our human frogs boiling? The primordial ooze out of which we are evolving into creatures other than human? What is that?

Here’s a hmm: what will obsolesce smartphones?

I don’t have answers; I’m just sure there will be some—and that we’ll have passed Peak Phone when they come.

