Technology

You are currently browsing the archive for the Technology category.

Dave makes a profound distinction in his post this morning titled Outliners and Word Processors. For the first time I not only grok what I already knew about outlining, but why it’s so much better as a way to write than word processing ever was.

The distinction is a bit hard to see because Word — the word processor that approximately everybody uses — has a “view” called “Outline.” That view has made lots of writers hate outlining, for a good and ironic reason: it was never about outlining, so it botched the job. Dave explains,

What they called outlining was more like outline formatting. Putting Roman numerals on the top sections, capital letters on the first level. Numbers on the second and so on.

Word is a word processor. Its primary function is writing-for-printing. The choices the designers made make it a relatively strong formatter and a weak organizer.

Design choice is the key point. Dave again:

Word is a production tool — good for annual reports, formal papers, stories, books. Fargo is an organizing tool, good for lists, project plans, narrating your work, presentations, team communication. You could organize a conference with an outliner. The slides would naturally be composed with an outliner.

An outliner is designed for editing structure more than it is for editing text. The text is sort of “along for the ride.” Or you could see an outliner as text-on-rails. Outliner text is always ready to move, with a single mouse gesture or keystroke. You enter text into an outliner so you can move it around, like stick-up notes on a whiteboard.

…Word processors are good at selecting words, sentences and paragraphs. Outliners select headlines and all their subs.

This makes me think that Word should have been called a “format processor” from the start. We already had text editors. Word processing was actually about how things looked. Still is. See, when you write in Word, you are in a land called “styles,” no matter what. All styles format text, in countless ways. The default, called “Normal,” comes pre-set with font, size, justification, line spacing, paragraph spacing and so on. If you make changes to it, those get added as well, until you concatenate a long list of formatting variables, which get carried forward by copy and pasting, often in bizarre ways, conditioned on whatever other style choices may or may not have already been made in another part of the text.

For a long time I wrote entirely in an outliner called MORE, which was created by Dave and friends back in the 1980s. As a writer I found MORE a far better tool than Word, especially for long pieces, because its structure-first design made it easy for me to move around whole sections, and to jump from one section to another. Fargo works the same way. Take this outline, for example:

Earth

  • Geology
  • Astronomy

Air

  • Chemistry
  • Weather

Water

  • Chemistry
  • Bodies

Fire

  • Material
  • Temperature

Writing that in WordPress (which I’m doing now) is a chore, because all the choices are formatting ones, not outlining ones. Let’s say I want to move Water above Fire. I need to copy and paste it, and then hit the HTML tab so I can un-screw whatever happens under WordPress’ very thin covers, where the formatting elements of HTML reside.

In Fargo, I just hit Command-U (or Control-U on Linux or Windows computers). Water, and everything under it, moves up. I can do the same with the subheads, or with the paragraphs under the subheads. (I would illustrate that here if the HTML hack weren’t so arduous.)
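The move described above is, at bottom, a simple tree operation: each headline carries its subtree, so swapping a node with the sibling above it moves everything under it too. Here is a minimal sketch of that idea (hypothetical — not Fargo’s or MORE’s actual code):

```python
# A minimal sketch of the outliner "move up" operation described above.
# Hypothetical illustration, not Fargo's actual implementation.
# Each node carries its subtree, so moving a headline moves everything under it.

class Node:
    def __init__(self, text, children=None):
        self.text = text
        self.children = children or []

def move_up(siblings, index):
    """Swap the node at `index` with the sibling above it.
    The node's whole subtree rides along, untouched."""
    if index > 0:
        siblings[index - 1], siblings[index] = siblings[index], siblings[index - 1]

outline = [
    Node("Earth", [Node("Geology"), Node("Astronomy")]),
    Node("Air",   [Node("Chemistry"), Node("Weather")]),
    Node("Fire",  [Node("Material"), Node("Temperature")]),
    Node("Water", [Node("Chemistry"), Node("Bodies")]),
]

move_up(outline, 3)  # "Command-U" on Water: it now sits above Fire
print([n.text for n in outline])  # ['Earth', 'Air', 'Water', 'Fire']
```

Note what the word processor makes hard is a one-line swap here: the structure is the primary object, and the text is along for the ride.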

When I was writing The Intention Economy, I wished every day that I could have written it in MORE, because it would have been so much easier than it was in Word. MORE really was text-on-rails.

At its peak, The Intention Economy was 120,000 words long. The finished book was about 80,000 words. The outline view: four main parts and twenty-seven chapters. If I had been writing it in MORE, I could have collapsed the whole book to just the top-level (the four parts), expanded just to the chapter level, and then edited text within any of those, while seeing the whole outline in collapsed form above and below. I could have moved whole chapters or subchapters forward or back, and I could have promoted or demoted parts, chapters and subchapters, again with keyboard commands. I could easily have managed writing the whole book with an ease that Word simply would not allow, except to the degree that I could master working in its awful outline view.

(To be fair, there have been improvements in Word that make something like real outlining possible. I bring this up in case you’re writing a book and need easy navigation in Word. What you want is Document Map Pane under Sidebars in the View menu. That makes an outline pane appear to the left of the text. If you are using Word’s default outline and text formatting, you can expand and collapse subheads and text, and move about your document by clicking on the heading or subheading you like. It’s a huge help, though nothing as useful as what we lost when MORE went away a few years ago.)

By the way, on the production side, MORE actually did some things that Word still doesn’t do, such as giving you the choice of putting the saved date and time in the header or footer, rather than the current date and time. This is extremely handy for matching printed drafts with saved drafts on the computer. I believe MORE did that because it came from outline designers rather than format designers. It showed respect for the need to organize, and not just to format and produce.

The assumption with Word, even today, is that you will be printing the finished thing out, rather than publishing it on the Web. While Word does have a Web Layout view, and will produce HTML, it’s the gawd-awful-worst HTML the world has ever known. (Look up Word + HTML in a search engine and you’ll find lots of links to fixes for Word’s hideous HTML.) Again, this is a design legacy from a time before the Web, and we are still forced to live with it today.
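Most of those fixes amount to stripping Word’s proprietary markup back out of the HTML. A rough sketch of what such a cleaner does (illustrative only — real cleaners handle far more cases):

```python
import re

def strip_word_cruft(html):
    """Remove a few of Word's best-known HTML exports:
    <o:p> tags, class="Mso..." attributes, and mso-* style rules.
    A rough illustration of what "fix Word HTML" tools do,
    not a complete cleaner."""
    html = re.sub(r"</?o:p>", "", html)
    html = re.sub(r'class="Mso[^"]*"', "", html)
    html = re.sub(r'mso-[^;"]+;?', "", html)
    return html

sample = '<p class="MsoNormal" style="mso-margin-top-alt:auto;">Hi<o:p></o:p></p>'
print(strip_word_cruft(sample))  # the paragraph survives; the cruft does not
```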

Outlining is a much better fit for writing on, and for, the Web.

Consider this old writing aphorism: What you say matters more than how you say it. Outlining respects this by giving you a way to shape and re-shape what you say. As it was originally conceived, so did HTML. Although it did markup, which was formatting, HTML was as simple as possible, leaving particulars such as fonts and sizes up to the reader’s browser, rather than up to the writer’s word processor. This has changed over the years, as HTML has become far more complex, and design along with it. Right now, for example, I’m coping with designing a couple of new WordPress blogs, and the choices I face are all between different piles of complexity. If you want to color outside the lines of whatever themes you choose — or hell, just to choose a theme you can work with — you’re going to need professional help, or to spend a lot of time learning and re-learning how to write on the Web. That’s because the choices of how you say it have totally overrun those of what you say.

By coming from what you say rather than how you say it, Fargo is both an antidote to the complexities of writing for the Web today, and a throwback to the original design graces of HTML, and of the Web itself.

So I highly recommend to serious writers that they get on board and learn outlining, as Dave and his team at SmallPicture iterate Fargo toward whatever it will end up being. Hey, it’s still new. And what better time to get on board than when you’re new to the whole thing as well.

Bonus link: Outlining solves syncing and sharing, by Chris Wolverton.

I’m in Boston right now, and bummed that I can’t attend Start-up City: An Entrepreneurial Economy for Middle Class New York, which is happening today at New York Law School.

I learned about it via Dana Spiegel of NYC Wireless, who will be on a panel titled “Breakout Session III: Infrastructure for the 21st Century—How Fast, Reliable Internet Access Can Boost Business Throughout the Five Boroughs.” In an email Dana wrote, “The question for the panel participants is how fast, reliable internet access can boost business throughout NYC.” The mail was to a list. I responded, and since then I’ve been asked if that response might be shared outside the list as well. So I decided to blog it. Here goes:

Fast and reliable infrastructure of any kind is good for business. That it’s debatable for the Internet shows we still don’t understand what the Internet is — or how, compared to what it costs to build and maintain other forms of infrastructure, it’s damned cheap, with economic and social leverage in the extreme.

Here’s a thought exercise for the audience: Imagine no Internet: no data on phones, no ethernet or wi-fi connections at home — or anywhere. No email, no Google, no Facebook, no Skype.

That’s what we would have if designing the Internet had been left up to phone and cable companies, and not to geeks whose names most people don’t know, and who made something no business or government would ever contemplate: a thing nobody owns, everybody can use and anybody can improve — and for all three reasons supports positive economic externalities beyond calculation.

The only reason we have the carriers in the Net’s picture is that we needed their wires. They got into the Internet service business only because demand for Internet access was huge, and they couldn’t avoid it.

Yet, because we still rely on their wires, and we get billed for their services every month, we think and talk inside their conceptual boxes.

Try this: cities are networks, and networks are cities. Every business, every person, every government agency and employee, every institution, is a node in a network whose value increases as a high multiple of all the opportunities there are for nodes to connect — and to do anything. This is why the city should care about pure connectivity, and not just about “service” as a grace of phone and cable companies.
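That “high multiple” intuition is often formalized as Metcalfe’s law: among n nodes there are n(n−1)/2 possible pairwise connections, so the opportunity space grows roughly as the square of the number of nodes. A quick illustration (one common formalization, not necessarily the only one):

```python
# One common formalization of "value grows as a high multiple of
# connection opportunities" (Metcalfe's law): n nodes allow
# n*(n-1)/2 distinct pairwise links.

def possible_links(n):
    return n * (n - 1) // 2

for n in [10, 100, 1000]:
    print(n, possible_links(n))
# 10 nodes -> 45 links; 100 -> 4,950; 1,000 -> 499,500.
# Doubling the nodes roughly quadruples the possible connections.
```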

Building a network infrastructure as neutral to purpose as water, electricity, roads and sewage treatment should be a top priority for the city. It can’t do that if it’s wearing blinders supplied by Verizon, Time Warner and AT&T.

Re-base the questions on the founding protocols of the Net itself, and its city-like possibilities. Not on what we think the carriers can do for us, or what we can do that’s carrier-like.

I came to the realization that networks are cities, and vice versa, via Geoffrey West — first in Jonah Lehrer’s “A Physicist Solves The City,” in the New York Times, and then in West’s TED talk, “The Surprising Math of Cities and Corporations.” West is the physicist in Lehrer’s piece. Both are highly recommended.

Bonus link.

We’re not watching any less TV. In fact, we’re watching more of it, on more different kinds of screens. Does this mean that TV absorbs the Net, or vice versa? Or neither? That’s what I’m exploring here. By “explore” I mean I’m not close to finished, and never will be. I’m just vetting some ideas and perspectives, and looking for help improving them.

TV 1.0: The Antenna Age

In the beginning, 100% of TV went out over the air, radiated by contraptions atop towers or buildings, and picked up by rabbit ears on the backs of TV sets or by bird roosts on roofs. “Cable” was the wire that ran from the roof to the TV set. It helps to understand how this now-ancient system worked, because its main conceptual frame — the channel, or a collection of them — is still with us, even though the technologies used are almost entirely different. So here goes.

tv antenna

Empire State Building antennas

On the left is a typical urban rooftop TV antenna. The different lengths of the antenna elements correspond roughly to the wavelengths of the signals. For reception, this mattered a lot.

In New York City, for example, TV signals all came from the Empire State Building — and still do, at least until they move to the sleek new spire atop One World Trade Center, aka the Freedom Tower. (Many stations were on the North Tower of the old World Trade Center, and perished with the rest of the building on 9/11/2001. After that, they moved back to their original homes on the Empire State Building.)

“Old” in the right photo refers to analog, and “new” to digital. (An aside: FM is still analog. Old and New here are just different generations of transmitting antennas. The old FM master antenna is two rings of sixteen T-shaped things protruding above and below the observation deck on the 102nd floor. It’s still in use as an auxiliary antenna. Here’s a similar photo from several decades back, showing the contraptual arrangement at the height of the Antenna Age.)

Channels 2-6 were created by the FCC in the 1940s (along with FM radio, which is in a band just above TV channel 6). Those weren’t enough channels, so 7-13 came along next, on higher frequencies — and therefore shorter wavelengths. Since the shorter waves don’t bend as well around buildings and terrain, stations on channels 7-13 needed higher power. So, while the maximum power for channels 2-6 was 100,000 watts, the “equivalent” on channels 7-13 was 316,000 watts. All those channels were in VHF bands, for Very High Frequency. Channels 14-83 — the UHF, or Ultra High Frequency, band — were added in the 1950s, to make room for more stations in more places. Here the waves were much shorter, and the maximum transmitted power for “equivalent” coverage to VHF was 5,000,000 watts. (All were ERP, or effective radiated power, toward the horizon.)
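The relationship between channel and antenna-element length above is just wavelength arithmetic: λ = c/f, and a classic dipole element runs about half a wavelength. A quick calculation, using rounded channel frequencies from the U.S. band plan for illustration:

```python
# Wavelength = speed of light / frequency; a half-wave dipole element
# is about half that. Channel frequencies below are approximate
# (rounded band edges), for illustration only.

C = 299_792_458  # speed of light, m/s

channels = {
    "Channel 2 (low VHF)":  57e6,   # ~54-60 MHz band
    "Channel 7 (high VHF)": 177e6,  # ~174-180 MHz band
    "Channel 14 (UHF)":     473e6,  # ~470-476 MHz band
}

for name, freq in channels.items():
    wavelength = C / freq
    print(f"{name}: ~{wavelength:.2f} m wave, ~{wavelength / 2:.2f} m element")
```

Channel 2’s waves come out around five meters long, UHF’s around two feet — which is why the old rooftop “centipedes” needed elements of so many different lengths.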

This was, and remains, a brute-force approach to what we now call “delivering content.” Equally brute approaches were required for reception as well. To watch TV, homes in outer suburban or rural areas needed rooftop antennas that looked like giant centipedes.

What they got — analog TV — didn’t have the resolution of today’s digital TV, but it was far more forgiving of bad reception conditions. You might get “ghosting” from reflected signals, or “snow” from a weak signal, but people put up with those problems just so they could see what was on.

More importantly, they got hooked.

TV 2.0: the Cable Age.

It began with CATV, or Community Antenna Television. For TV junkies who couldn’t get a good signal, CATV was a godsend. In the early ’70s I lived in McAfee, New Jersey, deep in a valley, where a rabbit-ears antenna got nothing, and even the biggest rooftop antenna couldn’t do much better. (We got a snowy signal on Channel 2 and nothing else.) So when CATV came through, giving us twelve clear channels of TV from New York and Philadelphia, we were happy to pay for it. A bit later, when we moved down Highway 94 to a high spot south of Newton, my rooftop antenna got all those channels and more, so there was no need for CATV there. Then, after ’74, when we moved to North Carolina, we did without cable for a few years, because our rooftop antennas, which we could spin about with a rotator, could get everything from Roanoke, Virginia to Florence, South Carolina.

But then, in the early ’80s, we picked up on cable because it had Atlanta “superstation” WTCG (later WTBS and then just TBS) and HBO, which was great for watching old movies. WTCG, then still called Channel 17, also featured the great Bill Tush. (Sample here.) The transformation of WTCG into a satellite-distributed “superstation” meant that a TV station no longer needed to be local, or regional. For “super” stations on cable, “coverage” and “range” became bugs, not features.

Cable could also present viewers with more channels than they could ever get over the air. Technical improvements gradually raised the number of possible channels from dozens to hundreds. Satellite systems, which replicated cable in look and feel, could carry even more channels.

Today cable is post-peak. See here:

catv and cable tv

That’s because, in the ’90s, cable also turned out to be ideal for connecting homes to the Internet. We were still addicted to what cable gave us as “TV,” but we also had the option to watch a boundless variety of other stuff — and to produce our own. Today people are no less hooked on video than they were in 1955, but a declining percentage of their glowing-rectangle viewing is on cable-fed TV screens. The main thing still tying people to cable is the exclusive availability of high-quality and in-demand shows (including, especially, live sports) over cable and satellite alone.

This is why apps for CNN, ESPN, HBO and other cable channels require proof of a cable or satellite TV subscription. If cable content were à la carte, the industry would collapse. The industry knows this, of course, which makes it defensive.

That’s why Aereo freaks them out. Aereo is the new company that Fox and other broadcasters are now suing for giving people who can’t receive TV signals a way to do that over the Net. The potential served population is large, since the transition of U.S. television from analog to digital transmission (DTV) was, and remains, a great big fail.

Where the FCC estimated a 2% loss of analog viewers after the transition in June 2009, in fact 100% of the system changed, and post-transition digital coverage was not only a fraction of pre-transition analog coverage, but required an entirely new way to receive signals, as well as to view them. Here in New York, for example, I’m writing this in an apartment that could receive analog TV over rabbit ears in the old analog days. It looked bad, but at least it was there. With DTV there is nothing. For apartment dwellers without line-of-sight to the Empire State Building, the FCC’s reception maps are a fiction. Same goes for anybody out in the suburbs or in rural areas. If there isn’t a clear-enough path between the station’s transmitter and your TV’s antenna, you’re getting squat.

TV stations actually don’t give much of a damn about over-the-air any more, because 90+% of viewers are watching cable. But TV stations still make money from cable systems, thanks to re-transmission fees and “must carry” rules. These rules require cable systems to carry all the signals receivable in the area they serve. And the coverage areas are mostly defined by the old analog signal footprints, rather than the new smaller digital footprints, which are also much larger on the FCC’s maps than in the realities where people actually live.

Aereo gets around all that by giving each customer an antenna of their own, somewhere out where the signals can be received, and delivering each received station’s video to customers over the Net. In other words, it avoids being defined as cable, or even CATV. It’s just giving you, the customer, your own little antenna.

This is a clever technical and legal hack, and strong enough for Aereo to win in court. After that victory, Fox threatened to take its stations off the air entirely, becoming cable- and satellite-only. This exposed the low regard that broadcasters hold for their over-the-air signals, and for broadcasting’s legacy “public service” purpose.

The rest of the Aereo story is inside baseball, and far from over. (If you want a good rundown of the story so far, dig Aereo: Reinventing the cable TV model, by Tristan Louis.)

Complicating this even more is the matter of “white spaces.” Those are parts of the TV bands where there are no broadcast signals, or where broadcast signals are going away. These spaces are valuable because there are countless other purposes to which signals in those spaces could be put, including wireless Internet connections. Naturally, TV station owners want to hold on to those spaces, whether they broadcast in them or not. And, just as naturally, the U.S. government would like to auction the spaces off. (To see where the spaces are, check out Google’s “spectrum browser”. And note how few of them there are in urban areas, where there are the most remaining TV signals.)

Still, TV 2.0 through 2.9 is all about cable, and what cable can do. What’s happening with over-the-air is mostly about what the wonks call policy. From Aereo to white spaces, it’s all a lot of jockeying for position — and making hay where the regulatory sun shines.

Meanwhile, broadcasters and cable operators still hate the Net, even though cable operators are in the business of providing access to it. Both also remain in denial about the Net’s benefits beyond serving as Cable 2.x. They call distribution of content over the Net (e.g. through Hulu and Netflix) “over the top” or OTT, even though it’s beyond obvious that OTT is the new bottom.

FCC regulations regarding TV today are in desperate need of normalizing to the plain fact that the Net is the new bottom — and incumbent broadcasters aren’t the only ones operating there. But then, the feds don’t understand the Net either. The FCC’s world is radio, TV and telephony. To them, the Net is just a “service” provided by phone and cable companies.

TV 3.0: The IPTV age

IPTV is TV over the Internet Protocol — in other words, through the open Internet, rather than through cable’s own line-up of channels. One example is Netflix. By streaming movies over the Net, Netflix put a big dent in cable viewing. Adding insult to that injury, the vast majority of Netflix streamed movies are delivered over cable connections, and cable doesn’t get a piece of the action, because delivery is over OTT, via IPTV. And now, by producing its own high-quality shows, such as House of Cards, Netflix is competing with cable on the program front as well. To make the viewing experience as smooth as possible for its customers, Netflix also has its own equivalent of a TV transmitter. It’s called OpenConnect, and it’s one among a number of competing CDNs, or Content Delivery Networks. Basically they put up big server farms as close as possible to large volumes of demand, such as in cities.

So think of Netflix as a premium cable channel without the cable, or the channel, optimized for delivery over the Internet. It carries forward some of TV’s norms (such as showing old movies and new TV shows for a monthly subscription charge) while breaking new ground where cable and its sources either can’t or won’t go.

Bigger than Netflix, at least in terms of its catalog and global popularity, is Google’s YouTube. If you want your video to be seen by the world, YouTube is where you put it today, if you want maximum leverage. YouTube isn’t a monopoly for Google (the list of competitors is long), but it’s close. (According to Alexa, YouTube is accessed by a third of all Internet users worldwide. Its closest competitor, in the U.S. at least, is Vimeo, with a global reach of under 1%.) So, while Netflix looks a lot like cable, YouTube looks like the Web. It’s Net-native.

Bassem Youssef, “the Jon Stewart of Egypt,” got his start on YouTube, and then expanded into regular TV. He’s still on YouTube, even though his show on TV got canceled when he was hauled off to jail for offending the regime. Here he tells NBC’s Today show, “there’s always YouTube.” [Later… Dig this bonus link.]

But is there? YouTube is a grace of Google, not the Web. And Google is a big advertising business that has lately been putting more and more ads, TV-like, in front of videos. Nothing wrong with that, it’s a proven system. The question, as we move from TV 3.0 to 3.9, is whether the Net and the Web will survive the inclusion of TV’s legacy methods and values in its midst. In The TV in the Snake of Time, written in July 2010, I examined that question at some length:

Television is deeply embedded in pretty much all developed cultures by now. We — and I mean this in the worldwide sense — are not going to cease being couch potatoes. Nor will our suppliers cease couch potato farming, even as TV moves from airwaves to cable, satellite, and finally the Internet.

In the process we should expect the spirit (if not also the letter) of the Net’s protocols to be violated.

Follow the money. It’s not for nothing that Comcast wishes to be in the content business. In the old cable model there’s a cap on what Comcast can charge, and make, distributing content from others. That cap is its top cable subscription deals. Worse, they’re all delivered over old-fashioned set top boxes, all of which are — as Steve Jobs correctly puts it — lame. If you’re Comcast, here’s what ya do:

  1. Liberate the TV content distro system from the set top sphincter.
  2. Modify or re-build the plumbing to deliver content to Net-native (if not entirely -friendly) devices such as home flat screens, smartphones and iPads.
  3. Make it easy for users to pay for any or all of it on an à la carte (or at least an easy-to-pay) basis, and/or add a pile of new subscription deals.

Now you’ve got a much bigger marketplace, enlarged by many more devices and much less friction on the payment side. (Put all “content” and subscriptions on the shelves of “stores” like iTunes’ and there ya go.) Oh, and the Internet? … that World of Ends that techno-utopians (such as yours truly) liked to blab about? Oh, it’s there. You can download whatever you want on it, at higher speeds every day, overall. But it won’t be symmetrical. It will be biased for consumption. Our job as customers will be to consume — to persist, in the perfect words of Jerry Michalski, as “gullets with wallets and eyeballs.”

Future of the Internet

So, for current and future build-out, the Internet we techno-utopians know and love goes off the cliff while better rails get built for the next generations of TV — on the very same “system.” (For the bigger picture, Jonathan Zittrain’s latest is required reading.)

In other words, it will get worse before it gets better. A lot worse, in fact.

But it will get better, and I’m not saying that just because I’m still a utopian. I’m saying that because the new world really is the Net, and there’s a limit to how much of it you can pave with one-way streets. And how long the couch potato farming business will last.

More and more of us are bound to produce as well as consume, and we’ll need two things that a biased-for-TV Net can’t provide. One is speed in both directions: out as well as in. (“Upstream” calls Sisyphus to mind, so let’s drop that one.) The other is what Bob Frankston calls “ambient connectivity.” That is, connectivity we just assume.

When you go to a hotel, you don’t have to pay extra to get water from the “hydro service provider,” or electricity from the “power service provider.” It’s just there. It has a cost, but it’s just overhead.

That’s the end state. We’re still headed there. But in the meantime the Net’s going through a stage that will be The Last Days of TV. The optimistic view here is that they’ll also be the First Days of the Net.

Think of the original Net as the New World, circa 1491. Then think of TV as the Spanish invasion. Conquistadors! Then read this essay by Richard Rodriguez. My point is similar. TV won’t eat the Net. It can’t. It’s not big enough. Instead, the Net will swallow TV. Ten iPad generations from now, TV as we know it will be diffused into countless genres and sub-genres, with millions of non-Hollywood production centers. And the Net will be bigger than ever.

In the meantime, however, don’t hold your breath.

That meantime has now lasted nearly three years — or much longer if you go back to 1998, when I wrote a chapter of a book by Microsoft, right after they bought WebTV. An excerpt:

The Web is about dialog. The fact that it supports entertainment, and does a great job of it, does nothing to change that fact. What the Web brings to the entertainment business (and every business), for the first time, is dialog like nobody has ever seen before. Now everybody can get into the entertainment conversation. Or the conversations that comprise any other market you can name. Embracing that is the safest bet in the world. Betting on the old illusion machine, however popular it may be at the moment, is risky to say the least…

TV is just chewing gum for the eyes. — Fred Allen

This may look like a long shot, but I’m going to bet that the first fifty years of TV will be the only fifty years. We’ll look back on it the way we now look back on radio’s golden age. It was something communal and friendly that brought the family together. It was a way we could be silent together. Something of complete unimportance we could all talk about.

And, to be fair, TV has always had a very high quantity of Good Stuff. But it also had a much higher quantity of drugs. Fred Allen was being kind when he called it “chewing gum for the eyes.” It was much worse. It made us stupid. It started us on real drugs like cannabis and cocaine. It taught us that guns solve problems and that violence is ordinary. It disconnected us from our families and communities and plugged us into a system that treated us as a product to be fattened and led around blind, like cattle.

Convergence between the Web and TV is inevitable. But it will happen on the terms of the metaphors that make sense of it, such as publishing and retailing. There is plenty of room in these metaphors — especially retailing — for ordering and shipping entertainment freight. The Web is a perfect way to enable the direct-demand market for video goods that the television industry was never equipped to provide, because it could never embrace the concept. They were in the eyeballs-for-advertisers business. Their job was to give away entertainment, not to charge for it.

So what will we get? Gum on the computer screen, or choice on the tube?

It’ll be no contest, especially when the form starts funding itself.

Bet on Web/TV, not TV/Web.

I was recruited to write that chapter because I was the only guy Microsoft could find who thought the Web would eat TV rather than vice versa. And it does look like that’s finally happening, but only if you think Google is the Web. Or if you think Web sites are the new channels. In tech-speak, channels are silos.

When I wrote those pieces, I did not foresee the degree to which our use of the Net would be contained in silos that Bruce Schneier compares to feudal-age castles. Too much of the Web we know today is inside the walls governed by Lord Zuck, King Tim, Duke Jeff and the emperors Larry and Sergey. In some ways those rulers are kind and generous, but we are not free so long as we are native to their dominions rather than the boundless Networked world on which they sit.

The downside of depending on giants is that you can, and will, get screwed. Exhibit A (among too many for one alphabet) is Si Dawson’s goodbye post on Twitcleaner, a service to which he devoted his life, and countless people loved, that “was an engineering marvel built, as it were, atop a fail-whaling ship.” When Twitter “upgraded” its API, it sank Twitcleaner and many other services built on Twitter. Writes Si, “Through all this I’ve learned so, so much. Perhaps the key thing? Never play football when someone else owns the field. So obvious in hindsight.”

Now I’m having the same misgivings about Dropbox, which works as what Anil Dash calls a POPS: Privately Owned Public Space. It’s a great service, but it’s also a private one. And therefore risky like Twitter is risky.

What has happened with all those companies was a morphing of mission from a way to the way:

  • Google was a way to search, and became the way to search
  • Facebook was a way to be social on the Web, and became the way to be social on the Web
  • Twitter was a way to microblog, and became the way to microblog

I could go on, but you get the idea.

What makes the Net and the Web open and free are not its physical systems, or any legal system. What makes them free are their protocols, which are nothing more than agreements: the machine equivalents of handshakes. Protocols do not by their nature presume a centralized system, like TV — or like giant Web sites and services. Protocols are also not corruptible, because they are each NEA: Nobody owns it, Everybody can use it and Anybody can improve it.

Back in 2003, David Weinberger and I wrote about protocols and NEA in a site called World of Ends: What the Internet Is and How to Stop Mistaking It For Something Else. In it we said the Net was defined by its protocols, not by the companies providing the wiring and the airwaves over which we access the Net.

Yet, a decade later, we are still mistaking the Net for TV. Why? One reason is that there is so much more TV on the Net than ever before. Another is that we get billed for the Net by cable and phone companies. For cable and phone companies providing home service, it’s “broadband” or “high speed Internet.” For mobile phone companies, it’s a “data plan.” By whatever name, it’s one great big channel: a silo open at both ends, through which “content” gets piped to “consumers.” To its distributors — the ones we pay for access — it’s just another kind of cable TV.

The biggest player in cable is not Comcast or Time Warner. It’s ESPN. That’s because the most popular kind of live TV is sports, and ESPN runs that show. Today, ESPN is moving aggressively to mobile. In other words, from cable to the Net. Says Bloomberg Businessweek,

ESPN has been unique among traditional media businesses in that it has flourished on the Web and in the mobile space, where the number of users per minute, which is ESPN’s internal metric, reached 102,000 in June, an increase of 48 percent so far this year. Mobile is now ESPN’s fastest-growing platform.

Now, in ESPN Eyes Subsidizing Wireless-Data Plans, the Wall Street Journal reports, “Under one potential scenario, the company would pay a carrier to guarantee that people viewing ESPN mobile content wouldn’t have that usage counted toward their monthly data caps.” If this happens, it would clearly violate the principle of network neutrality: that the network itself should not favor one kind of data, or data producer, over another. Such a deal would instantly turn every competing data producer into a net neutrality activist, so it’s not likely to happen.

Meanwhile John McCain, no friend of net neutrality, has introduced the TV Consumer Freedom Act, which is even less friendly to cable. As Business Insider puts it, McCain wants to blow the sucker up. Says McCain,

This legislation has three principal objectives: (1) encourage the wholesale and retail ‘unbundling’ of programming by distributors and programmers; (2) establish consequences if broadcasters choose to ‘downgrade’ their over-the-air service; and (3) eliminate the sports blackout rule for events held in publicly-financed stadiums.

For over 15 years I have supported giving consumers the ability to buy cable channels individually, also known as ‘a la carte’ – to provide consumers more control over viewing options in their home and, as a result, their monthly cable bill.

The video industry, principally cable companies and satellite companies and the programmers that sell channels, like NBC and Disney-ABC, continue to give consumers two options when buying TV programming: First, to purchase a package of channels whether you watch them all or not; or, second, not purchase any cable programming at all.

This is unfair and wrong – especially when you consider how the regulatory deck is stacked in favor of industry and against the American consumer.

Unbundle TV, make it à la carte, and you have nothing more than subscription video on the Net. And that is what TV will become. If McCain’s bill passes, we will still pay Time Warner and Comcast for connections to the Net; and they will continue to present a portfolio of à la carte and bundled subscription options. Many video sources will continue to be called “networks” and “channels.” But it won’t be TV 4.0, because TV 3.0 — TV over IP — will be the end of TV’s line.

Shows will live on. So will producers and artists and distributors. The old TV business will be as creative as ever, and will produce more good stuff than ever. Couch potatoes will live on too, but there will be many more farmers, and the fertilizer will abound in variety.

What we’ll have won’t be TV because TV is channels, and channels are scarce. The Net has no channels, and isn’t about scarcity. It just has an endless number of ends, and no limit on the variety of sources pumping out “content” from those ends. Those sources include you, me, and everybody else who wants to produce and share video, whether for free or for pay.

The Net is an environment built for abundance. You can put all the scarcities you want on it, because an abundance-supporting environment allows that. An abundance system such as the Net gives business many more ways to bet than a scarcity system such as TV has been from the antenna age on through cable. As Jerry Michalski says (and tweets), “#abundance is pretty scary, isn’t it? Yet it’s the way forward.”

Abundance also frees all of us personally. How we organize what we watch should be up to us, not up to cable systems compiling their own guides that look like spreadsheets, with rows of channels and columns of times. We can, and should, do better than that. We should also do better than what YouTube gives us, based on what its machines think we might want.

The new box to think outside of is Google’s. So let’s re-start there. TV is what it’s always been: dumb and terminal.

 

Yesterday, when Anil Dash (@AnilDash) spoke about The Web We Lost at Harvard, I took notes in my little outliner, in a browser. They follow. The top outline level is slide titles, or main points. The next level down holds points made under the top level. Some of the outline is what Anil said, and some of it is what I thought he said, or thought on my own based on what he said, and then blathered out through my fingers. Apologies to Anil for what I might have heard wrong. Corrections invited.

David Weinberger also blogged the event. This wasn’t easy, because David also introduced Anil and moderated the Q&A. His notes are, as always, excellent. So go read those first.

You can also follow along with this photo set.

Here goes:

POPS — Privately Owned Public Spaces

A secretive, private Ivy League club.

  • Facebook was conceived as that.

Wholesale destruction of your wedding photos

  • We hear stories about this, over and over, when a proprietary silo — even a POPS — dies, gets acquired or otherwise goes poof
  • Think of what matters. (e.g. wedding photos) Everything else you own is just: stuff
  • The silo makers are allowed to do this, because they have one-sided and onerous terms of service. For example:

Apple’s terms for iOS developers

  • Amazing: “We view apps different than books or songs, which we do not curate. If you want to criticize a religion, write a book. If you want to describe sex, write a book or a song, or create a medical app. It can get complicated, but we have decided to not allow certain kinds of content in the App Store.”

There is a war raging against the Web we once had.

  • “Being introduced as a blogger is like being introduced as an emailer”

They are bending the law to make controlling our data illegal

  • Watch what’s happening. We won SOPA/PIPA, but that was just one thing. Are we going to do that twice? The same way?

Metadata is dying. And we didn’t even notice.

  • Compare Flickr (old Web) and Instagram (new Web), which has no metadata
  • Props to Berkman for doing the right thing by RSS

Links were corrupted. Likes are next.

  • Economics are getting divorced from original contexts.
  • Remember Suck.com? It was all about linking outward. (See David Weinberger on hyperlinks subverting hierarchy)
  • Now links (at pubs and ad-supported sites) go to internal aggregation pages. SOA.
  • Google converted the meaning of links from the expressive to the economic. (Or, to an economic statement.) Link-spam went viral in less than six months.
  • Facebook has what they call Edgerank. “Likes” at first were an expression of intent. Now they are fuel for advertising. We’re seeing “like fraud.”
  • On Flickr, favorites are still favorites because they aren’t monetizable. Thus Flickr has remained, relatively speaking, blessedly uncorrupted

They are gaslighting the Web.

  • Note how unevenly Facebook places warnings. “Please be careful…” they say, about clicking on a non-Facebook link. You see this on many non-BigCo sites that use Facebook logins. But…
  • With big Facebook partners you don’t get the message. Coincidence?
  • Also, sites that register with them get the warning, while those that don’t register don’t have the message, even though they are less trustworthy. (Do I have that right? Not sure.)
  • This is not malicious. It’s well-intended in its own pavement-to-hell way.

In the best case, we’re stuck fixing their bugs on our budgets

  • In the worst case, they’re behaving badly
  • This is true for all the things that compete with the Web

Ideas get locked into apps that will not survive acquisition

  • Content tied to devices dies when those devices become obsolete

We’ve given up on formats. We lost.

  • Watch out for proprietary and under-documented formats
  • Exceptions are .jpg and .html.

Undocumented and non-interoperable are now too common.

  • There is an intentional pulling away from that which lowers switching costs, and creates public spaces.
  • “Town halls” in POPS are not happening in public spaces. Example: the White House “town halls” on Facebook

TOS + IP trumps the constitution

  • Everything you say can be changed on FB and they would be within their rights to do that

It’s never the Pharaoh’s words that are lost to history

  • POPS and walled gardens are not level playing fields
  • Ordinary people’s interactions are being lost.
  • Can’t we just opt out? What does that cost?
  • There are opportunity and career costs
  • Can I meaningfully expand my sphere of opportunities in a silo’d world run by pharaohs?
  • “If I hadn’t participated in the blogosphere I wouldn’t be here today”

Our hubris helped them do this.

  • We, the geeks of the world, the builders of public spaces, created non-appealing stuff. It didn’t compete. (e.g. OpenID)
  • Thus we (i.e. everybody) are privileging prisons over the Web itself.
  • We (geeks) did sincerely care
  • We were so arrogant around the goodness of our own open creations that Zuck’s closed vision seemed more appealing
  • That Z’s private club was more appealing says something.
  • How we told the story, how we went about it, also mattered. We didn’t appeal. We talked to ourselves.
  • It’s not just about UI, though we did suck at that too. It was about being in tune with ordinary non-geeks
  • If we had been listening more… and had been a little more open in self-criticism…

Too much triumphalism in having won SOPA and PIPA.

  • Can we do that again? Our willingness to pat ourselves on the back isn’t helpful.
  • The people we count on to rally behind our efforts may not show up again

The open web did not fade away for lack of a compelling vision.

  • We were less inclusive than Facebook and Apple.

But it’s only some of the Web, right?

  • We built the Web for pages
  • Then we changed from pages to streams… narrow single column streams
  • Yahoo is now a stream too. See recent changes there. The Web is now more like radio. Snow on the water.
  • These streams feel like apps. But users are choosing something different.
  • (Shows a graph.)
  • Half the time we spent in 2010 was already in a streaming experience. The percentage is much higher now.
  • These streams are controlled-access. They are limited-access highways. This is part of the mechanism for constraining the conversation. A mismatch between the open web advocacy community and what people do. These others have a much more

Geeks always want to fight the last battle.

  • What they need is a new kind of stream compelling enough for normal people to use.
  • Mozilla is an exception, thanks to Microsoft being evil and IE bad.

So, what do we do?

  • Are FB, LI and TW the new NBC, ABC and CBS?
  • The web follows patterns.
  • The pendulum swings
  • Google is trying to be the evil empire now (whether they know it or not), overreaching, making us feel itchy the way Microsoft did in ’97.

Policy works. Fighting Microsoft helped.

  • Reality is: public policy can be effective
  • Policy is coming around social networking. Count on it. Facebook’s overreach has that effect
  • There are apps that want to do the right thing. (Anil, for example, is doing ThinkUp)
  • The open web community mostly makes science projects and tool kits. Not enough.
  • Are you being more sensitive to what users want than Zuck is?
  • Item: it’s very hard to learn the history of the software industry, even here. How did software impact culture? How did desktop office suites affect business? The principal actors are still here. They have phones and email addresses. Yet we can’t seem to learn from them.

There are insights to be gleaned from owning our data.

  • Can’t imagine a less attractive name for something than Quantified Self; but the movement matters
  • This stuff that is already digital we pay no attention to. Instead we (companies) rely on marketing reports.
  • Odd: it’s much easier to track my heart rate than how often I visit Twitter.
  • These are the vectors for displacement, e.g. Google on meaning, emotion, expression… We have to be able to do better than them.
  • Think about it: if you allow one more color than blue you’re ahead of Facebook

There are institutions that still care about a healthy web.

  • The White House has a podcast
  • The Library of Congress? (not clear about the reference here)
  • Facebook terms of service had a conflict with federal law
  • Would have been fun to see them shut down the White House Facebook account.
  • Terms of service aren’t laws. Break them sometimes.

PR trumps ToS 10 times out of 10

  • Look at our culture as being negatively affected by ToSes
  • Look at Facebook’s ToS the same way we look at public laws. They even eliminated the token effort.
  • Look at YouTube. “No infringement intended.”
  • The people have already chosen a path of civil disobedience
  • A Million Mixer march happens every day

Bonus links: Bruce Schneier in the Q&A brought up his Feudal model, which he talked about on Thursday in conversation with Jonathan Zittrain. And this very thoughtful piece by

artifacty HD

[Later (7 April)… The issue has been resolved, at least for now. We never did figure out what caused the poor video resolution in this case, but it looks better now. Still, it seems that compression artifacts are a mix of feature and bug for both cable and satellite television. One of these weeks or months I’ll study it in more depth. My plan now is just to enjoy watching the national championship game tomorrow night, between Louisville and Michigan.]

What teams are playing here? Can you read the school names? Recognize any faces?  Is that a crowd in the stands or a vegetable garden? Is the floor made of wood or ice?

You should be able to tell at least some of those things on an HD picture from a broadcast network. But it ain’t easy. Not any more. At least not for me.

Used to be I could tell, at least on Dish Network, which is one reason I got it for our house in Santa Barbara. I compared Dish’s picture on HD channels with those of Cox, our cable company, and it was no contest. DirecTV was about equal, but had a more complicated remote control and cost a bit more. So we went with Dish. Now I can’t imagine Cox — or anybody — delivering a worse HD picture.

The picture isn’t bad just on CBS, or just during games like this one. It sucks on pretty much all the HD channels. The quality varies, but generally speaking it has gone downhill since we first got our Sony Bravia 1080p “Full HD” screen in 2006. It was the top of the line model then and I suppose still looks good, even though it’s hard to tell, since Dish is our only TV source.

Over-the-air (OTA) TV looks better when we can get it; but hardly perfect. Here’s what the Rose Bowl looked like from KGTV in San Diego when I shot photos of it on New Years Day of 2007. Same screen. You can see some compression artifacts in this close-up here and this one here; but neither is as bad as what we see now. (Since I shot those, KGTV and the CBS affiliate in San Diego, KFMB, moved down from the UHF to the VHF band, so my UHF antenna no longer gets them. Other San Diego stations with UHF signals still come in sometimes and look much better than anything from Dish.)

So why does the picture look so bad? My assumption is that Dish, to compete with cable and DirecTV, maximizes the number of channels it carries by compressing away the image quality of each. But I could be wrong, so I invite readers (and Dish as well) to give me the real skinny on what’s up with this.

And, because I’m guessing some of you will ask: No, this isn’t standard-def that I’m mistaking for high-def. This really is the HD stream from the station.

[Later…] I heard right away from @Dish_Answers. That was quick. We’ll see how it goes.

I was talking with @ErikCecil yesterday about the sea change we both detect in people’s tolerance for unwanted tracking. They’re getting tired of it. So are lawmakers and regulators. (No, not everybody. But not a small percentage. And it’s growing.) See here, here, here, here, here, here, here, here and here.

Somewhere in the midst of our chat, Erik summarized the situation with a metaphor that rang so true that I have to share it. Here’s roughly what he said: “The backwash that’s coming is a tsunami that hasn’t hit yet. Right now it’s a wide swell over deep water. But you can tell it’s coming because the tide is suspiciously far out. So we have all these Big Data marketing types, out there on the muddy flats, raking up treasures of exposed personal data. They don’t see that this is not the natural way of things, or that it’s temporary. But the tidal wave is coming. And when it finally hits, watch out.”

 

 

It’s been more than six months since Apple introduced iOS 6, and nearly as long since Tim Cook issued a public apology for the company’s Maps app, which arrived with iOS 6 and replaced the far better version powered mostly by Google. Said Tim,

…The more our customers use our Maps the better it will get and we greatly appreciate all of the feedback we have received from you.

While we’re improving Maps, you can try alternatives by downloading map apps from the App Store like Bing, MapQuest and Waze, or use Google or Nokia maps by going to their websites and creating an icon on your home screen to their web app.

Everything we do at Apple is aimed at making our products the best in the world. We know that you expect that from us, and we will keep working non-stop until Maps lives up to the same incredibly high standard.

In spite of slow and steady improvements, and a few PR scores, Apple’s Maps app still fails miserably at giving useful directions here in New York — while Google’s new Maps app (introduced in December) does a better job, every day. For example, yesterday I needed to go to a restaurant called Pranna, at 79 Madison Avenue. On my iOS Calendar app, “79 Madison Avenue” was lit up in blue, meaning if I clicked on it, Apple’s Maps, by default (which can’t be changed by me) would come up. Which it did. When I clicked on “Directions to here,” it said “Did you mean…” and gave two places: one in Minster, Ohio and another in Bryson City, North Carolina. It didn’t know there was a 79 Madison Avenue in New York. So I went to Google Maps and punched in “79 Madison Avenue.” In seconds I had four different route options (similar to the screen shot here), each taking into account the arrival times of subways at stations, plus walking times between my apartment, the different stations, and the destination. For me as a user here in New York, there is no contest between these two app choices, and I doubt there ever will be.

Credit where due: Apple’s Maps app finally includes subway stations. But it only has one entrance for each: a 9-digit zip code address. In reality many stations have a number of entrances. At the north end of Manhattan, the A train has entrances running from 181st to 184th, including an elevator above 184th with an entrance on Fort Washington. Google’s app knows these things, and factors them in. Apple’s app doesn’t yet.

On the road, Apple’s app still only shows slow traffic as a dotted red line. Google’s and Nokia’s (called Here) show green, yellow and red, as they have from the start. Google’s also re-routes you, based on upcoming traffic jams as they develop. I don’t know if Apple’s app does that; but I doubt it.

But here’s the main question: Do we still need an Apple maps app on the iPhone? Between Google, Here, Waze and others, the category is covered.

In fact Apple did have a good reason for rolling their own Maps app: there were no all-purpose map apps for iOS that did vocalized instructions and re-routing of turn-by-turn directions. Google refused to make those graces available on the Apple Maps app, which was clearly galling to Apple. Eventually Apple’s patience wore out. So they said to themselves, “The hell with it. We’re not getting anywhere with these guys. Let’s do it ourselves.” But then they failed hard, and Google eventually relented and made its own iOS app with those formerly missing features, plus much more.

Bottom line: we no longer need Apple to play an expensive catch-up game. (At least on iPhone. Google still doesn’t have a Maps app for iPad. Not sure if that’s because Google doesn’t want it, or because Apple won’t let them distribute it.)

Unless, of course, Apple really can do a better job than Google and Here (which has NAVTEQ, the granddaddy of all mapping systems, behind it). Given what we’ve seen so far, there is no reason to believe this will happen.

So here’s a simple recommendation to Apple: give up. Fold the project, suck up your pride, and point customers toward Google’s Maps app. Or at least give users a choice on set-up between Google Maps, Here, Waze or whatever, for real-world navigation. Concentrate instead on what you do best. For example, flyover and Siri. Both are cool, but neither requires that you roll your own maps to go with them. At least, I hope not.

 

 

When you see an ad for Budweiser on TV, you know who paid for it and why it’s there. You also know it isn’t personal, because it’s brand advertising.

But when you see an ad on a website, do you know what it’s doing there? Do you know if it’s there just for you, or if it’s for anybody? Hard to tell.

However, if it’s an ad for a camera showing up right after you visited some photography sites, it’s a pretty good guess you’re being tracked. It’s also likely you are among millions who are creeped out by the knowledge that they’re being tracked.

On the whole, the tracking-driven online advertising business (aka “adtech”) assumes that you have given permission to be followed, at least implicitly. This is one reason tracking users and targeting them with personalized ads is more normative than ever online today. But there is also a growing concern that personal privacy lines are not only being crossed, but trampled.

Ad industry veterans are getting creeped out too, because they know lawmakers and regulators will be called on for protection. That’s the case George Simpson — an ad industry insider — makes in Suicide by Cookies, where he starts with the evidence:

Evidon measured sites across the Internet and found the number of web-tracking tags from ad servers, analytics companies, audience-segmenting firms, social networks and sharing tools up 53% in the past year. (The ones in Mandarin were probably set by the Chinese army.) But only 45% of the tracking tools were added to sites directly by publishers. The rest were added by publishers’ partners, or THEIR partners’ partners.

Then he makes a correct forecast of government intervention, and concludes with this:

I have spent the better part of the last 15 years defending cookie-setting and tracking to help improve advertising. But it is really hard when the prosecution presents the evidence, and it has ad industry fingerprints all over it — every time. There was a time when “no PII” was an acceptable defense, but now that data is being compiled and cross-referenced from dozens, if not hundreds, of sources, you can no longer say this with a straight face. And we are way past the insanity plea.

I know there are lots of user privacy initiatives out there to discourage the bad apples and get all of the good ones on the same page. But clearly self-regulation is not working the way we promised Washington it would.

I appreciate the economics of this industry, and know that it is imperative to wring every last CPM out of every impression — but after a while, folks not in our business simply don’t care anymore, and will move to kill any kind of tracking that users don’t explicitly opt in to.

And when that happens, you can’t say, “Who knew?”

To get ahead of the regulatory steamroller, the ad business needs two things. One is transparency. There isn’t much today. (See Bringing Manners to Marketing at Customer Commons.) The other is permission. It can’t only be presumed. It has to be explicit.

We — the targets of adtech — need to know the provenance of an ad, at a glance. It should be as clear as possible when an ad is personal or not, when it is tracking-based or not, and whether it’s permitted. That is, welcomed. (More about that below.)

This can be done symbolically. How about these:

 means personalized.

↳ means tracking-based.

☌ means permitted.

I picked those out of a character viewer. There are hundreds of these kinds of things. It really doesn’t matter what they are, so long as people can easily, after a while, grok what they mean.
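To make the idea concrete, here is a minimal sketch of how a publisher might compose such a badge. The tracking-based (↳) and permitted (☌) marks are the ones proposed above; the “personalized” glyph (⁂) and the function itself are stand-ins of my own, since, again, the particular symbols don’t matter.

```python
# Hypothetical sketch: compose a provenance badge for an ad from three
# facts a publisher would disclose. Glyph choices are arbitrary.
def ad_badge(personalized=False, tracking_based=False, permitted=False):
    marks = []
    if personalized:
        marks.append("⁂")  # stand-in glyph for "personalized"
    if tracking_based:
        marks.append("↳")  # tracking-based, as proposed above
    if permitted:
        marks.append("☌")  # explicitly permitted, as proposed above
    return "".join(marks)  # empty string: a plain, untargeted brand ad

print(ad_badge(tracking_based=True))                # ↳
print(ad_badge(personalized=True, permitted=True))  # ⁂☌
```

The design point is that the badge is cheap to compute and cheap to read at a glance: the absence of any mark would itself say something, namely that the ad is ordinary brand advertising aimed at anybody.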

People are already doing their own policy development anyway, by identifying and blocking both ads and tracking, through browser add-ons and extensions. Here are mine for Firefox, on just one of my computers:

All of these, in various ways, give me control over what gets into my browser. (In fact the Evidon research cited above was gained by Ghostery, which is an Evidon product installed in millions of browsers. So I guess I helped, in some very small way.)

Speaking of permission, now would be a good time to revisit Permission Marketing, which Seth Godin published in May 1999, about the same time The Cluetrain Manifesto also went up. Here’s how Seth compressed the book’s case nine years later.

Permission marketing is the privilege (not the right) of delivering anticipated, personal and relevant messages to people who actually want to get them.

It recognizes the new power of the best consumers to ignore marketing. It realizes that treating people with respect is the best way to earn their attention.

Pay attention is a key phrase here, because permission marketers understand that when someone chooses to pay attention they are actually paying you with something precious. And there’s no way they can get their attention back if they change their mind. Attention becomes an important asset, something to be valued, not wasted.

Real permission is different from presumed or legalistic permission. Just because you somehow get my email address doesn’t mean you have permission. Just because I don’t complain doesn’t mean you have permission. Just because it’s in the fine print of your privacy policy doesn’t mean it’s permission either.

Real permission works like this: if you stop showing up, people complain, they ask where you went.

Real permission is what’s needed here. It’s what permission marketing has always been about. And it’s what VRM (Vendor Relationship Management) is about as well.

Brand advertising is permitted in part because it’s not personal. Sometimes it is even liked. The most common example of that is Super Bowl TV ads. But a better example is magazines made thick with brand ads that are as appealing to readers as the editorial content. Fashion magazines are a good example of that.

Adtech right now is not in a demand market on the individual’s side. In fact, judging from the popularity of ad-blocking browser extensions, there is a lot of negative demand. According to ClarityRay, 9.23% of all ads were blocked by users surveyed a year ago. That number is surely much higher today.

At issue here is what economists call signaling — a subject about which Don Marti has written a great deal over the last couple of years. I visit the subject (with Don’s help) in this post at Wharton’s Future of Advertising site, where contributors are invited to say where they think advertising will be in the year 2020. My summary paragraph:

Here is where this will lead by 2020: The ability of individuals to signal their intentions in the marketplace will far exceed the ability of corporations to guess at those intentions, or to shape them through advertising. Actual relationships between people and processes on both sides of the demand-supply relationship will out-perform today’s machine-based guesswork by advertisers, based on “big data” gained by surveillance. Advertising will continue to do what it has always done best, which is to send clear signals of the advertiser’s substance. And it won’t be confused with its distant relatives in the direct response marketing business.

I invite everybody reading this to go there and jump in.

Meanwhile, consider this one among many olive branches that need to be extended between targets — you and me — and the advertisers targeting us.

 

In 2013 – Beginning Of The End For PR Boomers, David Bray actually says this…

The media landscape is evolving rapidly, and baby boomers are about to be left behind because of their inability to keep up with technology and the changing times. The days of the self-proclaimed experts (those who profess to be “thought leaders” as a result of reading and hearing about new advancements that clients can take advantage of) are long gone.

Media today is all about authenticity — and largely dominated by participatory media and consumers, who see right through advertising and marketing hyperbole and shut it out. Participating in these media is the only way to gain a “true” understanding of how and which work, and which don’t. Clients are demanding that their PR counsel and support teams are in the conversation, and that they themselves use the media where their content is being created and distributed.

Take, for example, the use of social media for online business networking or lead generation. As the saying goes, “it’s hard to teach an old dog new tricks.” The old dog in this instance — baby boomers — use traditional, in-person offline meetings as their primary source of building their business networks, while the younger generations are building their own brands and businesses more quickly, and reaching a much wider audience by leveraging new digital tools like LinkedIn and Twitter to run full-on campaigns.

… giving his profession some bad PR that gets worse as you read down through the comments. Here’s mine:

No person is just a demographic, just a race, or just a category. Nor does any person like to be dismissed as a stereotype, especially if that stereotype is wrong about them personally. I have 972 friends on Facebook, 19,061 followers on Twitter, 801 connections on LinkedIn, a Klout score of 81 and a PeerIndex of 81. That I’m also 65 is not ironic. If I weren’t this old, those stats wouldn’t be this high. I got the hell out of PR several demographics ago — and into the far more helpful work I do now — exactly because of shallow and dismissive stereotyping that has been a cancer in PR, and all of marketing, for the duration. It only makes the problem worse to drive out of the business people who have been young a lot longer than you have.

PR’s problems are old news and not getting any younger. Here is what I wrote for Upside in 1992. Alas, Upside erased itself when it died, the Wayback Machine only traces it back to 1996, and the text is stuck for now in a place where search engines don’t index it.  So I’ll repeat the whole thing here:

THE PROBLEM WITH PR
TOWARD A WORLD BEYOND PRESS RELEASES & BOGUS NEWS

There is no Pulitzer Prize for public relations. No Peabody. No Heisman. No Oscar, Emmy or Eddy. Not even a Most Valuable Flacker award. Sure, like many misunderstood professions, public relations has its official bodies, and even its degrees, awards and titles. Do you know what they are? Neither do most people who practice the profession.

The call of the flack is not a grateful one. Almost all casual references to public relations are negative. Between the last sentence and this one, I sought to confirm this by looking through a Time magazine. It took me about seven seconds to find an example: a Lance Morrow essay in which he says Serbia has “the biggest public relations problem since Pol Pot went into politics.” Since genocide is the problem in question, the public relations solution can only range from lying to cosmetics. Morrow’s remark suggests this is the full range of PR’s work. Few, I suspect, would disagree.

So PR has the biggest PR problem of all: people use it as a synonym for BS. It seems only fair to defend the profession, but there is no point to it. Common usage is impossible to correct. And frankly, there is a much smaller market for telling the truth than for shading it.

For proof, check your trash for a computer industry press release. Chances are you will read an “announcement” that was not made, for a product that was not available, with quotes by people who did not speak them, for distribution to a list of reporters who considered it junk mail. The dishonesty here is a matter of form more than content. Every press release is crafted as a news story, complete with headline, dateline, quotes and so forth. The idea is to make the story easy for editors to “insert” with little or no modification.

Yet most editors would rather insert a spider in their nose than a press release in their publication. First, no self-respecting editor would let anybody else — least of all a biased source — write a story. Second, press releases are not conceived as stories, but rather as “messages.”

It is amazing how much time, energy and money companies spend to come up with “the right message.” At this moment, thousands of staffers, consultants and agency people sit in meetings or bend over keyboards, straining to come up with perfect messages for their products and companies. All are oblivious to a fact that would be plain if they paid more attention to their market than their product.

There is no demand for messages.

There is, however, a demand for facts. To editors, messages are just clothing and make-up for emperors that are best seen naked. Editors like their subjects naked because facts are raw material for stories. Which brings up another clue that public relations tends to ignore.

Stories are about conflict.

What makes a story hot is the friction in its core. When that friction ceases, the story ends. Take the story of Apple vs. IBM. As enemies, they made great copy. As collaborators, they are boring as dirt.

The whole notion of “positive” stories is oxymoronic. Stories never begin with “happily ever after.” Happy endings may resolve problems, but they only work at the end, not the beginning. Good PR recognizes that problems are the hearts of stories, and takes advantage of that fact.

Unfortunately, bad PR not only ignores the properties of stories, but imagines that “positive” stories can be “created” by staging press conferences and other “announcement events” that are just as bogus as press releases — and just as hated by their audiences.

Columnist John Dvorak, a kind of fool killer to the PR profession, says, “So why would you want to sit in a large room full of reporters and publicly ask a question that can then be quoted by every guy in the place? It’s not the kind of material a columnist wants — something everybody is reporting. I’m always amazed when PR types are disappointed when I tell them I won’t be attending a press conference.”

So why does PR persist in practices its consumers hold in contempt?

Because PR’s consumers are not its customers. PR’s customers are companies who want to look good, and pay PR for the equivalent of clothing and cosmetics. If PR’s consumers — the press — were also its customers, you can bet the PR business would serve a much different purpose: to reveal rather than conceal, clarify rather than mystify, inform rather than mislead.

But it won’t happen. Even if PR were perfectly useful to the press, there is still the matter of “positioning” — one of PR’s favorite words. I have read just about every definition of this word since Trout & Ries coined it in 1969, and I am convinced that a “position” is nothing other than an identity. It is who you are, where you come from, and what you do for a living. Not a message about your ambitions.

That means PR does not have a very good position. Its identity is a euphemism, or at least sounds like one. While it may “come from” good intentions, what it does for a living is not a noble thing. Just ask its consumers.

Maybe it is time to do with PR what we do with technology: make something new — something that works as an agent for understanding rather than illusion. Something that satisfies both the emperors and their subjects. God knows we’ve got the material. Our most important facts don’t need packaging, embellishment or artificial elevation. They only need to be made plain. This may not win prizes, but it will win respect.

That was 21 years ago. Now PR doesn’t just spin the press, but “influencers” of all kinds. These days I sometimes find myself on the receiving end of that spin: a vantage from which I can see how much the fundamental disconnects in PR have remained the same, while the methods used, and the influencers targeted, have changed. (Mostly by adding new methods to old ones that haven’t changed at all.)

Even the “social media” David Bray finds so young and modern embody the same disconnect between consumers and customers that has afflicted old media, such as TV and radio, from the beginning. Only now the consumers are called users while the customers are still called advertisers. Thus PR maintains the age-old dysfunction of stereotyping populations, and of dealing with whole populations through categorical prejudices, rather than engaging real human beings in real ways, with a minimum of bullshit, even when one party is spinning and the other is just listening. That’s what being “in the conversation” actually means.

I came late to personal computing, which was born with the MITS Altair in 1975.

The first PC I ever met — and wanted desperately, in an instant — was an Apple II, in 1977. It sold in one of the first personal computer shops, in Durham, NC. Price: $2500. At the time I was driving one of a series of old GM cars I bought for nothing, or for less than a tenth of what that computer cost. So I wasn’t in the market, and wouldn’t buy my first personal computer until I lived in California, more than a decade later.

By ’77, Apple already had competition, and ran ads voiced by Dick Cavett calling the Apple II “The most personal computer.”

After that I wanted, in order, an Osborne, a Sinclair and an IBM PC, which came out in ’81 and, fully configured, went for more than $2000. At least I got to play with a PC and an Apple II then, because my company did the advertising for a software company making a game for them. I also wrote an article about it for one of the first issues of PC Magazine. The game was Ken Uston’s Professional Blackjack.

Then, in 1984, we got one of the very first Macs sold in North Carolina. It cost about $2500 and sat in our conference room, next to a noisy little dot matrix printer that also cost too much. It was in use almost around the clock. I think the agency had about 10 people then, and we each booked our time on it.

As the agency grew, it acquired more Macs, and that’s all we used the whole time I was there.

So I got to see first hand what Dave Winer is driving at in “MacWrite and MacPaint, a coral reef” and “What early software was influential?”

In a comment under the latter, I wrote this:

One thing I liked about MacWrite and MacPaint was their simplicity. They didn’t try to do everything. Same with MacDraw (the first object- or vector-based drawing tool). I still hunger for the simplicity of MacDraw. Also of WriteNow, which (as I recall) was written in machine language, or something, which made it very very fast. Also hard to update.

Same with MultiPlan, which became (or was replaced by) Excel. I loved the early Excel. It was so simple and easy to use. The current Excel is beyond daunting.

Not sure what Quicken begat, besides Quickbooks, but it was also amazingly fast for its time, and dead simple. Same with MacInTax. I actually loved doing my taxes with MacInTax.

And, of course, ThinkTank and MORE. I don’t know what the connection between MORE and the other presentation programs of the time was. Persuasion and PowerPoint both could make what MORE called “bullet charts” from outlines, but neither seemed to know what outlining was. Word, IMHO, trashed outlining by making it almost impossible to use, or to figure out. Still that way, too.

One thing to study is cruft. How is it that wanting software to do everything defeats the simple purpose of doing any one thing well? That’s a huge lesson, and one still un-learned, on the whole.

Think about what happened to Bump. Here was a nice simple way to exchange contact information. Worked like a charm. Then they crufted it up and people stopped using it. But was the lesson learned?

Remember the early Volkswagen ads, which were models of simplicity, like the car itself? They completely changed advertising “creative” for generations. Somewhere in there, somebody in the ad biz did a cartoon, multi-panel, showing how to “improve” those simple VW ads. Panel after panel, copy was added: benefits, sale prices, locations and numbers, call-outs… The end result was just another ugly ad, full of crap. Kind of like every commercial website today. Compare those with what TBL wrote HTML to do.

One current victim of cruftism is Apple, at least in software and services. iTunes is fubar. iCloud is beyond confusing, and is yet another domain namespace (it succeeds .mac and .me, which both still work, confusingly). And Apple hasn’t fixed namespace issues for users, or made it easy to search through prior purchases. Keynote is okay, but I still prefer PowerPoint, because — get this: it’s still relatively simple. Ugly, but simple.

Cruftism in Web services, as in personal software, shows up when creators of “solutions” start thinking your actual volition is a problem. They think they can know you better than you know yourself, and that they can “deliver” you an “experience” better than you can make for yourself. Imagine what it would be like to steer a car if it were always guessing at where you want to go instead of obeying your actual commands. Or if the steering wheel tugged you toward every McDonalds you passed because McDonalds is an advertiser and the car’s algorithm-obeying driver thought it knew you were hungry and had a bias for fast food — whether you have it or not.

That’s the crufty “service” world we’re in now, and we’re in it because we’re just consumers of it, and not respected as producers.

The early tool-makers knew we were producers. That’s what they made those tools for. That’s been forgotten too.

I wrote that in an outliner, also by Dave.

Interesting to see how far we’ve come, and how far we still need to go.

Bonus link, on “old skool”.
