Internet


In Chatbots were the next big thing: what happened?, Justin Lee (@justinleejw) nicely unpacks how chatbots were overhyped to begin with and continue to fail their Turing tests, especially since humans in nearly all cases would rather talk to humans than to mechanical substitutes.

There’s also a bigger and more fundamental reason why bots still aren’t a big thing: we don’t have them. If we did, they’d be our robot assistants, going out to shop for us, to get things fixed, or to do whatever.

Why didn’t we get bots of our own?

I can pinpoint the exact time and place where bots of our own failed to happen, and all conversation and development went sideways, away from the vector that takes us to bots of our own (hashtag: #booo), and instead toward big companies doing more than ever to deal with us robotically, mostly to sell us shit.

The time was April 2016, and the place was Facebook’s F8 conference. It was on stage there that Mark Zuckerberg introduced “messenger bots”. He began,

Now that Messenger has scaled, we’re starting to develop ecosystems around it. And the first thing we’re doing is exploring how you can all communicate with businesses.

Note his use of the second person you. He’s speaking to audience members as individual human beings. He continued,

You probably interact with dozens of businesses every day. And some of them are probably really meaningful to you. But I’ve never met anyone who likes calling a business. And no one wants to have to install a new app for every service or business they want to interact with. So we think there’s gotta be a better way to do this.

We think you should be able to message a business the same way you message a friend. You should get a quick response, and it shouldn’t take your full attention, like a phone call would. And you shouldn’t have to install a new app.

This promised pure VRM: a way for a customer to relate to a vendor. For example, to issue a service request, or to intentcast for bids on a new washing machine or a car.

So at this point Mark seemed to be talking about a new communication channel that could relieve the typical pains of being a customer while also opening the floodgates of demand, notifying supply when a customer is ready to buy. Now here’s where it goes sideways:

So today we’re launching Messenger Platform. So you can build bots for Messenger.

By “you” Zuck now means developers. He continues,

And it’s a simple platform, powered by artificial intelligence, so you can build natural language services to communicate directly with people. So let’s take a look.

See the shift there? Up until that last sentence, he seemed to be promising something for people, for customers, for you and me: a better way to deal with business. But alas, it’s just shit:

CNN, for example, is going to be able to send you a daily digest of stories, right into messenger. And the more you use it, the more personalized it will get. And if you want to learn more about a specific topic, say a Supreme Court nomination or the zika virus, you just send a message and it will send you that information.

And right there the opportunity was lost, along with all the promise up at the top of the hype cycle. Note how Aaron Batalion uses the word “reach” in ‘Bot’ is the wrong name…and why people who think it’s silly are wrong, written not long after Zuck’s F8 speech: “In a micro app world, you build one experience on the Facebook platform and reach 1B people.”

What we needed, and still need, is for reach to go the other way: a standard bot design that would let lots of developers give us better ways to reach businesses. Today lots of developers compete to give us better ways to use the standards-based tools we call browsers and email clients. The same should be true of bots.

In Market intelligence that flows both ways, I describe one such approach, based on open source code, that doesn’t require locating your soul inside a giant personal data extraction business.

Here’s a diagram that shows how one person (me in this case) can relate to a company whose moccasins he owns:

[Diagram: the VRM/CRM conduit between customer and company]

The moccasins have their own pico: a cloud on the Net for a thing in the physical world, one that becomes a standard-issue conduit between customer and company.

A pico of this type might come into being when the customer assigns a QR code to the moccasins and scans it. The customer and company can then share records about the product, or notify each other when there’s a problem, a bargain on a new pair, or whatever. It’s tabula rasa: wide open.

The current code for this is called Wrangler. It’s open source and on GitHub. For the curious, Phil Windley explains how picos work in Reactive Programming With Picos.
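For a feel of the pattern without reading the Wrangler code, here’s a minimal sketch in Python of a pico-style conduit: one shared, cloud-side event log per physical thing, which both customer and company can write to and read from. To be clear, this is not Wrangler’s actual API; every name below is invented for illustration.

```python
# A toy pico-style conduit: one shared event log per physical thing.
# Not Wrangler's real API; all names here are invented for illustration.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class Event:
    domain: str   # e.g. "product"
    etype: str    # e.g. "service_request", "sale_notice"
    attrs: dict
    stamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class Pico:
    """Cloud-side stand-in for one thing in the physical world."""
    thing_id: str                                  # e.g. the ID behind the QR code
    log: List[Event] = field(default_factory=list)

    def raise_event(self, domain: str, etype: str, attrs: dict) -> Event:
        evt = Event(domain, etype, attrs)
        self.log.append(evt)                       # both parties see the same record
        return evt

    def history(self, domain: str) -> List[Event]:
        return [e for e in self.log if e.domain == domain]

# The customer scans the QR code, which resolves to the moccasins' pico,
# then raises an event the company can also see, and vice versa.
moccasins = Pico(thing_id="qr:example-moccasins")  # hypothetical ID
moccasins.raise_event("product", "service_request", {"issue": "sole separating"})
moccasins.raise_event("product", "sale_notice", {"from": "company", "note": "new pair, 20% off"})
for e in moccasins.history("product"):
    print(e.stamp, e.etype, e.attrs)
```

The point of the sketch is the symmetry: neither party owns the conduit, and either can start a conversation through it.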

It’s not bots yet, but it’s a helluva lot better place to start re-thinking and re-developing what bots should have been in the first place. Let’s start developing there, and not inside giant silos.

[Note: the image at the top is from this 2014 video by Capgemini explaining #VRM. Maybe now that Facebook is doing face-plants in the face of the GDPR, and privacy is finally a thing, the time is ripe, not only for #booos, but for the rest of the #VRM portfolio of unfinished and un-begun work on the personal side.]

Nature and the Internet both came without privacy.

The difference is that we’ve invented privacy tech in the natural world, starting with clothing and shelter, and we haven’t yet done the same in the digital world.

When we go outside in the digital world, most of us are still walking around naked. Worse, nearly every commercial website we visit plants tracking beacons on us to support the extractive economy in personal data called adtech: tracking-based advertising.

In the natural world, we also have long-established norms for signaling what’s private, what isn’t, and how to respect both. Laws have grown up around those norms as well. But let’s be clear: the tech and the norms came first.

Yet for some reason many of us see personal privacy as a grace of policy. It’s like, “The answer is policy. What is the question?”

Two such answers arrived with this morning’s New York Times: Facebook Is Not the Problem. Lax Privacy Rules Are., by the Editorial Board; and Can Europe Lead on Privacy?, by ex-FCC Chairman Tom Wheeler. Both call for policy. Neither sees possibilities for personal tech. To both, the only actors in tech are big companies and big government, and it’s the job of the latter to protect people from the former. What they both miss is that we need what we might call big personal. We can only get that when personal tech gives each of us the power not just to resist encroachments by others, but to have agency. (Merriam-Webster: “the capacity, condition, or state of acting or of exerting power.”)

We acquired agency with personal computing and the Internet. Both were designed to make everyone an Archimedes. We also got a measure of it with the phones and tablets we carry around in our pockets and purses. None are yet as private as they should be, but making them fully private is the job of tech.

I bring this up because we will be working on privacy tech over the next four days at the Computer History Museum, first at VRM Day, today, and then over the next three days at IIW: the Internet Identity Workshop.

On the table at both is work some of us, me included, are doing through Customer Commons on terms we can proffer as individuals, and the sites and services of the world can agree to.

Those terms are examples of what we call customertech: tech that’s ours and not Facebook’s or Apple’s or Google’s or Amazon’s.

The purpose is to turn the connected marketplace into a Marvel-like universe in which all of us are enhanced. It’ll be interesting to see what kind of laws follow.*

But hey, let’s invent the tech we need first.

*BTW, I give huge props to the EU for the General Data Protection Regulation, which is causing much new personal privacy tech development and discussion. I also think it’s an object lesson in what can happen when an essential area of tech development is neglected, and gets exploited by others for lack of that development.

Also, to be clear, my argument here is not against policy, but for tech development. Without the tech and the norms it makes possible, we can’t have fully enlightened policy.

Bonus link.

Let’s start with Facebook’s Surveillance Machine, by Zeynep Tufekci in last Monday’s New York Times. Among other things (all correct), Zeynep explains that “Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.”

Irony Alert: the same is true for the Times, along with every other publication that lives off adtech: tracking-based advertising. These pubs don’t just open the kimonos of their readers. They bring readers’ bare digital necks to vampires ravenous for the blood of personal data, all for the purpose of aiming “interest-based” advertising at those same readers, wherever those readers’ eyeballs may appear—or reappear in the case of “retargeted” advertising.

With no control by readers (beyond tracking protection which relatively few know how to use, and for which there is no one approach, standard or experience), and no blood valving by the publishers who bare those readers’ necks, who knows what the hell actually happens to the data?

Answer: nobody can, because the whole adtech “ecosystem” is a four-dimensional shell game with hundreds of players…

or, in the case of “martech,” thousands:

For one among many views of what’s going on, here’s a compressed screen shot of what Privacy Badger showed going on in my browser behind Zeynep’s op-ed in the Times:

[Added later…] @ehsanakhgari tweets pointage to WhoTracksMe’s page on the NYTimes, which shows this:

And here’s more irony: a screen shot of the home page of RedMorph, another privacy protection extension:

That quote is from Free Tools to Keep Those Creepy Online Ads From Watching You, by Brian X. Chen and Natasha Singer, and published on 17 February 2016 in the Times.

The same irony applies to countless other correct and important reporting on the Facebook/Cambridge Analytica mess by other writers and pubs. Take, for example, Cambridge Analytica, Facebook, and the Revelations of Open Secrets, by Sue Halpern in yesterday’s New Yorker. Here’s what RedMorph shows going on behind that piece:

Note that I have the data leak toward Facebook.net blocked by default.

Here’s a view through RedMorph’s controller pop-down:

And here’s what happens when I turn off “Block Trackers and Content”:

By the way, I want to make clear that Zeynep, Brian, Natasha and Sue are all innocents here, thanks both to the “Chinese wall” between the editorial and publishing functions of the Times, and the simple fact that the route any ad takes between advertiser and reader through any number of adtech intermediaries is akin to a ball falling through a pinball machine. Refresh your page while reading any of those pieces and you’ll see a different set of ads, no doubt aimed by automata guessing that you, personally, should be “impressed” by those ads. (They’ll count as “impressions” whether you are or not.)

Now…

What will happen when the Times, the New Yorker and other pubs own up to the simple fact that they are just as guilty as Facebook of leaking their readers’ data to other parties, for—in many if not most cases—God knows what purposes besides “interest-based” advertising? And what happens when the EU comes down on them too? It’s game-on after 25 May, when the EU can start fining violators of the General Data Protection Regulation (GDPR). Key fact: the GDPR protects the data blood of what they call “EU data subjects” wherever those subjects’ necks are exposed in the borderless digital world.

To explain more about how this works, here is the (lightly edited) text of a tweet thread posted this morning by @JohnnyRyan of PageFair (a sketch of what one of the bid requests he describes can carry follows the thread):

Facebook left its API wide open, and had no control over personal data once those data left Facebook.

But there is a wider story coming: (thread…)

Every single big website in the world is leaking data in a similar way, through “RTB bid requests” for online behavioural advertising #adtech.

Every time an ad loads on a website, the site sends the visitor’s IP address (indicating physical location), the URL they are looking at, and details about their device, to hundreds (often thousands) of companies. Here is a graphic that shows the process.

The website does this to let these companies “bid” to show their ad to this visitor. Here is a video of how the system works. In Europe this accounts for about a quarter of publishers’ gross revenue.

Once these personal data leave the publisher, via “bid request”, the publisher has no control over what happens next. I repeat that: personal data are routinely sent, every time a page loads, to hundreds/thousands of companies, with no control over what happens to them.

This means that every person, and what they look at online, is routinely profiled by companies that receive these data from the websites they visit. Where possible, these data are combined with offline data. These profiles are built up in “DMPs”.

Many of these DMPs (data management platforms) are owned by data brokers. (Side note: The FTC’s 2014 report on data brokers is shocking. See https://www.ftc.gov/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014.) There is no functional difference between an #adtech DMP and Cambridge Analytica.

—Terrell McSweeny, Julie Brill and EDPS

None of this will be legal under the #GDPR. (See one reason why at https://t.co/HXOQ5gb4dL). Publishers and brands need to take care to stop using personal data in the RTB system. Data connections to sites (and apps) have to be carefully controlled by publishers.

So far, #adtech’s trade body has been content to cover over this wholesale personal data leakage with meaningless gestures that purport to address the #GDPR (see my note on @IABEurope current actions here: https://t.co/FDKBjVxqBs). It is time for a more practical position.

And advertisers, who pay for all of this, must start to demand that safe, non-personal data take over in online RTB targeting. RTB works without personal data. Brands need to demand this to protect themselves – and all Internet users too. @dwheld @stephan_lo @BobLiodice

Websites need to control
1. which data they release in to the RTB system
2. whether ads render directly in visitors’ browsers (where DSPs’ JavaScript can drop trackers)
3. what 3rd parties get to be on their page
@jason_kint @epc_angela @vincentpeyregne @earljwilkinson 11/12

Let’s work together to fix this. 12/12
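To make the mechanics Ryan describes concrete, here is a hedged sketch of the kind of data an RTB bid request carries, rendered as a Python dict. The field names follow the public OpenRTB 2.x spec; every value is invented for illustration, and real requests carry far more.

```python
import json

# Sketch of an OpenRTB-style bid request (field names per the public
# OpenRTB 2.x spec; all values invented). One of these goes out to
# hundreds or thousands of bidders every time an ad slot loads.
bid_request = {
    "id": "auction-a1b2c3",                                 # made-up auction ID
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],   # the ad slot up for bid
    "site": {"page": "https://publisher.example/article"},  # what you're reading
    "device": {
        "ip": "203.0.113.7",         # indicates your physical location
        "ua": "Mozilla/5.0 (...)",   # your browser and operating system
        "geo": {"lat": 34.42, "lon": -119.70},
    },
    "user": {"id": "f9e8d7-tracker-id"},  # the exchange's ID for *you*
}

print(json.dumps(bid_request, indent=2))
```

Once that object leaves the publisher, as the thread says, nobody controls where it goes next.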

Those last three recommendations are all good, but they also assume that websites, advertisers and their third party agents are the ones with the power to do something. Not readers.

But there’s lots readers will be able to do. More about that shortly. Meanwhile, publishers can get right with readers by dropping #adtech and going back to publishing the kind of high-value brand advertising they’ve run since forever in the physical world.

That advertising, as Bob Hoffman (@adcontrarian) and Don Marti (@dmarti) have been making clear for years, is actually worth a helluva lot more than adtech, because it delivers clear creative and economic signals and comes with no cognitive overhead (for example, wondering where the hell an ad comes from and what it’s doing right now).

As I explain here, “Real advertising wants to be in a publication because it values the publication’s journalism and readership” while “adtech wants to push ads at readers anywhere it can find them.”

Going back to real advertising is the easiest fix in the world, but so far it’s nearly unthinkable because we’ve been defaulted for more than twenty years to an asymmetric power relationship between readers and publishers called client-server. I’ve been told that client-server was chosen as the name for this relationship because “slave-master” didn’t sound so good; but I think the best way to visualize it is calf-cow:

As I put it at that link (way back in 2012), “Client-server, by design, subordinates visitors to websites. It does this by putting nearly all responsibility on the server side, so visitors are just users or consumers, rather than participants with equal power and shared responsibility in a truly two-way relationship between equals.”

It doesn’t have to be that way. Beneath the Web, the Net’s TCP/IP protocol—the gravity that holds us all together in cyberspace—remains no less peer-to-peer and end-to-end than it was in the first place. Meaning there is nothing to the Net that prevents each of us from having plenty of power on our own.

On the Net, we don’t need to be slaves, cattle or blood bags. We can be human. In legal terms, we can operate as first parties rather than second ones. In other words, the sites of the world can click “agree” to our terms, rather than the other way around.

Customer Commons is working on exactly those terms. The first publication to agree to readers’ terms is Linux Journal, where I am now the editor-in-chief. The first of those terms will say “just show me ads not based on tracking me,” and is hashtagged #DoNotByte.

In Help Us Cure Online Publishing of Its Addiction to Personal Data, I explain how this models the way advertising ought to be done: by the grace of readers, with no spying.

Obeying readers’ terms also carries no risk of violating privacy laws, because every pub will have contracts with its readers to do the right thing. This is totally do-able. Read that last link to see how.
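Customer Commons hasn’t, as far as I know, settled on a wire format for these terms yet, so here is a purely hypothetical sketch of what a machine-readable #DoNotByte term might look like, just to show how small and simple such an agreement can be:

```python
# A hypothetical encoding of the #DoNotByte term described above.
# Nothing here is a published Customer Commons format; the field
# names are invented to illustrate the first-party/second-party idea.
do_not_byte = {
    "term": "#DoNotByte",
    "proffered_by": "the reader (first party)",
    "agreed_to_by": "the publisher (second party)",
    "plain_language": "Just show me ads not based on tracking me.",
    "permits": ["contextual ads, targeted to page content"],
    "prohibits": ["third-party tracking", "retargeting", "resale of reader data"],
}
```

A site that clicks “agree” to something like this has a contract with its readers, which is the whole point.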

As I say there, we need help. Linux Journal still has a small staff, and Customer Commons (a California-based 501(c)(3) nonprofit) so far consists of five board members. What it aims to be is a worldwide organization of customers, as well as the place where terms we proffer can live, much as Creative Commons is where personal copyright licenses live. (Customer Commons is modeled on Creative Commons. Hats off to the Berkman Klein Center for helping bring both into the world.)

I’m also hoping other publishers, once they realize that they are no less a part of the surveillance economy than Facebook and Cambridge Analytica, will help out too.

[Later…] Not long after this post went up I talked about these topics on the Gillmor Gang. Here’s the video, plus related links.

I think the best push-back I got there came from Esteban Kolsky (@ekolsky), who (as I recall anyway) saw less than full moral equivalence between what Facebook and Cambridge Analytica did to screw with democracy and what the New York Times and other ad-supported pubs do by baring the necks of their readers to dozens of data vampires.

He’s right that they’re not equivalent, any more than apples and oranges are equivalent. The sins are different; but they are still sins, just as apples and oranges are still both fruit. Exposing readers to data vampires is simply wrong on its face, and we need to fix it. That it’s normative in the extreme is no excuse. Nor is the fact that it makes money. There are morally uncompromised ways to make money with advertising, and those are still available.

Another push-back is the claim by many adtech third parties that the personal data blood they suck is anonymized. While that may be so, correlation is still possible. See Study: Your anonymous web browsing isn’t as anonymous as you think, by Barry Levine (@xBarryLevine) in Martech Today, which cites De-anonymizing Web Browsing Data with Social Networks, a study by Jessica Su (@jessicatsu), Ansh Shukla (@__anshukla__) and Sharad Goel (@5harad) of Stanford and Arvind Narayanan (@random_walker) of Princeton.

(Note: Facebook and Google follow logged-in users by name. They also account for most of the adtech business.)

One commenter below noted that this blog also carries six trackers (most of which I block). Here is how those look on Ghostery:

So let’s fix this thing.

[Later still…] Lots of comments in Hacker News as well.

[Later again (8 April 2018)…] About the comments below (60+ so far): the version of commenting used by this blog doesn’t support threading. If it did, my responses to comments would appear below each one. Alas, some not only appear out of sequence, but others don’t appear at all. I don’t know why, but I’m trying to find out. Meanwhile, apologies.

The term “fake news” was a casual phrase until it became clear to news media that a flood of it had been deployed during last year’s presidential election in the U.S. Starting in November 2016, fake news was the subject of strong and well-researched coverage by NPR (here and here), Buzzfeed, CBS (here and here), Wired, the BBC, Snopes, CNN (here and here), Rolling Stone and others. It thus became a thing…

… until Donald Trump started using it as an epithet for news media he didn’t like. He did that first during a press conference on February 16, and then the next day on Twitter:

And he hasn’t stopped. To Trump, any stick he can whup non-Fox mainstream media with is a good stick, and FAKE NEWS is the best.

So that pretty much took “fake news,” as a literal thing, off the table for everyone other than Trump and his amen chorus.

So, since we need a substitute, I suggest decoy news. Because that’s what we’re talking about: fabricated news meant to look like the real thing.

But the problem is bigger than news alone, because advertising-funded media have been in the decoy business since forever. The difference in today’s digital world is that it’s a lot easier to fabricate a decoy story than to research and produce a real one—and it pays just as well, or even better, because overhead in the decoy business rounds to nothing. Why hire a person to do an algorithm’s job?

In the content business the commercial Web has become, algorithms are now used to target both stories and the advertising that pays for them.

This is why, on what we used to call the editorial side of publishing (interesting trend here), journalism as a purpose has been displaced by content production.

We can see one tragic result in a New York Times story titled In New Jersey, Only a Few Media Watchdogs Are Left, by David Chen (@davidwchen). In it he reports that “The Star-Ledger, which almost halved its newsroom eight years ago, has mutated into a digital media company requiring most reporters to reach an ever-increasing quota of page views as part of their compensation.”

This calls to mind how Saturday Night Live in 1977 introduced the Blues Brothers in a skit where Paul Shaffer, playing rock impresario Don Kirshner, proudly said the Brothers were “no longer an authentic blues act, but have managed to become a viable commercial product.”

To be viably commercial today, all media need to be in the content production business, paid for by adtech, which is entirely driven by algorithms informed by surveillance-gathered personal data. The result looks like this:

To fully grok how we got here, it is essential to understand the difference between advertising and direct marketing, and how nearly all of online advertising is now the latter. I describe the shift from former to latter in Separating Advertising’s Wheat and Chaff:

Advertising used to be simple. You knew what it was, and where it came from.

Whether it was an ad you heard on the radio, saw in a magazine or spotted on a billboard, you knew it came straight from the advertiser through that medium. The only intermediary was an advertising agency, if the advertiser bothered with one.

Advertising also wasn’t personal. Two reasons for that.

First, it couldn’t be. A billboard was for everybody who drove past it. A TV ad was for everybody watching the show. Yes, there was targeting, but it was always to populations, not to individuals.

Second, the whole idea behind advertising was to send one message to lots of people, whether or not the people seeing or hearing the ad would ever use the product. The fact that lots of sports-watchers don’t drink beer or drive trucks was beside the point, which was making brands sponsoring a game familiar to everybody watching it.

In their landmark study, “The Waste in Advertising is the Part that Works” (Journal of Advertising Research, December, 2004, pp. 375–390), Tim Ambler and E. Ann Hollier say brand advertising does more than signal a product message; it also gives evidence that the parent company has worth and substance, because it can afford to spend the money. Thus branding is about sending a strong economic signal along with a strong creative one.

Plain old brand advertising also paid for the media we enjoyed. Still does, in fact. And much more. Without brand advertising, pro sports stars wouldn’t be getting eight and nine figure contracts.

But advertising today is also digital. That fact makes advertising much more data-driven, tracking-based and personal. Nearly all the buzz and science in advertising today flies around the data-driven, tracking-based stuff generally called adtech. This form of digital advertising has turned into a massive industry, driven by an assumption that the best advertising is also the most targeted, the most real-time, the most data-driven, the most personal — and that old-fashioned brand advertising is hopelessly retro.

In terms of actual value to the marketplace, however, the old-fashioned stuff is wheat and the new-fashioned stuff is chaff. In fact, the chaff was only grafted on recently.

See, adtech did not spring from the loins of Madison Avenue. Instead its direct ancestor is what’s called direct response marketing. Before that, it was called direct mail, or junk mail. In metrics, methods and manners, it is little different from its closest relative, spam.

Direct response marketing has always wanted to get personal, has always been data-driven, has never attracted the creative talent for which Madison Avenue has been rightly famous. Look up best ads of all time and you’ll find nothing but wheat. No direct response or adtech postings, mailings or ad placements on phones or websites.

Yes, brand advertising has always been data-driven too, but the data that mattered was how many people were exposed to an ad, not how many clicked on one — or whether you, personally, did anything.

And yes, a lot of brand advertising is annoying. But at least we know it pays for the TV programs we watch and the publications we read. Wheat-producing advertisers are called “sponsors” for a reason.

So how did direct response marketing get to be called advertising? By looking the same. Online it’s hard to tell the difference between a wheat ad and a chaff one.

Remember the movie “Invasion of the Body Snatchers?” (Or the remake by the same name?) Same thing here. Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.

It is now an article of faith within today’s brain-snatched advertising business that the best ad is the most targeted and personalized ad. Worse, almost all the journalists covering the advertising business assume the same thing. And why wouldn’t they, given that this is how advertising is now done online, especially by the Facebook-Google duopoly.

And here is why those two platforms can’t fix it: both have AI machines built to give millions of advertising customers ways to target the well-studied eyeballs of billions of people, using countless characterizations of those eyeballs. In fact, the only (and highly ironic) way they can police bad acting on their platforms is by hiring people who do nothing but look for that bad acting.

One fix is regulation. We now have that, hugely, with the General Data Protection Regulation (GDPR). It’s an EU law, but it protects the privacy of EU citizens everywhere—with potentially massive fines. In spirit, if not also in letter (which the platforms are struggling mightily to weasel around), the GDPR outlaws tracking people like tagged animals online. I’ve called the GDPR an extinction event for adtech, and the main reason brands (including the media kind) need to fire it.

The other main fixes begin on the personal side. Don Marti (@dmarti) tweets, “Build technologies to implement people’s norms on sharing their personal data, and you’ll get technologies to help high-reputation sites build ad-supported business models ABSOLUTELY FREE!” Those models are all advertising wheat, not adtech chaff.

Now here’s the key: what we need most are single and simple ways for each of us to manage all our dealings with other entities online. Having separate means, each provided by the dozens or hundreds of sites and services we each deal with, all with different UIs, login/password gauntlets, forms to fill out, meaningless privacy policies and for-lawyers-only terms of service, cannot work. All that shit may give those companies scale across many consumers, but every one of them only adds to those consumers’ relationship overhead. I explain how this will work in Giving Customers Scale, plus many other posts, columns and essays compiled in my People vs. Adtech series, which is on its way to becoming a book. I’d say more about all of it, but need to catch a plane. See you on the other coast.

Meanwhile, the least we can do is start talking about decoy news and the business model that pays for it.

[Later…] I’m on the other coast now, but preoccupied by the #ThomasFire threatening our home in Santa Barbara. Since this blog appears to be mostly down, I’m writing about it at doc.blog.

Santa Barbara is one of the world’s great sea coast towns. It’s also in a good position to be one of the world’s great Internet coast towns.

Luckily, Santa Barbara is advantaged by its location not just on the ocean, but on some of the thickest Internet trunk lines (called “backbones”) in the world. These run through town beside the railroad and Highway 101. Some are owned by the state college and university system. Others are privately owned. In fact Level(3), now part of CenturyLink, has long had a tap on that trunk, and a large data center, in the heart of the Funk Zone. Here it is:

Last I checked, Level(3) was in the business of wholesaling access to its backbone. So was the UC system.

Yet Santa Barbara is still disadvantaged by depending on a single “high speed” Internet service provider: Cox Communications, which is also the town’s incumbent cable TV company. Like most cable companies, Cox is widely disliked by its customers. It has also recently imposed caps on data use.

Cox’s only competitor is Frontier Communications, which provides Internet access over old phone lines previously run by Verizon and GTE. Cable connections provide higher bandwidth than phone lines, but both are limited to fractions of the capacity provided by fiber optic cables. While it’s possible for cable companies to upgrade service to what’s called DOCSIS 3.1, there has been little in the history of Santa Barbara’s dealings with Cox to suggest that Cox will grace the city with its best possible service. (In fact Cox’s only hint toward fiber is in nearby Goleta, not in Santa Barbara.)

About a decade ago, when I was involved in a grass roots effort to get the city to start providing Internet service on its own over fiber optic connections, Cox told us that Santa Barbara was last in line for upgrading the company’s facilities. Other cities were bigger and more important to Cox, which is based in Atlanta.

Back then we lacked a champion for the Internet cause on the Santa Barbara City Council. The mayor liked the idea, and so did a couple of Council members, but the attitude was, “We’ll wait until Palo Alto does something like this and then copy that.” So the effort died.

But we have a champion now, running for City Council in the 6th District, which covers much of downtown: Jack Ucciferri. A story by Gwendolyn Wu in The Independent yesterday begins, “As District 6 City Council candidate Jack Ucciferri went door-to-door to campaign, he found that many Santa Barbara residents had one thing in common: a mutual disdain for the Cox Communications internet monopoly. ‘Every person I talk to agrees with me,’ Ucciferri said.” Specifically, “Ucciferri is dreaming of a fiber optic plan for Santa Barbara. Down south, the cities of Santa Monica and Oxnard already have or are preparing plans for fiber optic cable networks.”

One of the biggest issues for Santa Barbara is the decline of business downtown, especially along State Street, the city’s heart, where the most common sign on storefronts is “For Lease.” Jack’s district contains more of State Street than any other. I can think of nothing that will help State Street—and Santa Barbara—more than to have world-class Internet access and speeds, which would be a huge attraction for many businesses large and small.

So I urge readers in Jack’s district to give him the votes he needs to champion the cause of making Santa Barbara a leader in the digital world, rather than yet another cable backwater, which it will surely remain if he loses.

[Later…] Jack lost on Tuesday, but came in second of three candidates. The winner was the long-standing incumbent, Gregg Hart. (Here’s Noozhawk’s coverage.) I don’t see this as a loss for Jack or his cause. Conversations leading up to the election (including one with a candidate who won in another district) have led me to believe the time is right to at least fiber up Santa Barbara’s troubled downtown, where The Retail Apocalypse is well underway.
Data is the new love

Personal data, that is.

Because it’s good to give away—but only if you mean it.

And it’s bad to take it, even if it seems to be there for the taking.

I bring this up because a quarter million pages (so far) on the Web say “data is the new oil.”

That’s because a massive personal data extraction industry has grown up around the simple fact that our data is there for the taking. Or so it seems. To them. And their apologists.

As a result, we’re at a stage of wanton data extraction that looks kind of like the oil industry did in 1920 or so:

It’s a good metaphor, but for a horrible business. It’s a business we need to reform, replace, or both. What we need most are new industries that grow around who and what we are as individual human beings—and as a society that values what makes us human.

Our data is us. Each of us. It is our life online. Yes, we shed some data in the course of our activities there, kind of like we shed dandruff and odors. But that’s no excuse for the extractors to frack our lives and take what they want, just because it’s there, and they can.

Now think about what love is, and how it works. How we give it freely, and how worthwhile it is when others accept it. How taking it without asking is simply wrong. How it’s better to earn it than to demand it. How it grows when it’s given. How we grow when we give it as well.

True, all metaphors are wrong, but that’s how metaphors work. Time is not money. Life is not travel. A country is not a family. But all those things are like those other things, so we think and talk about each of them in terms of those others. (By saving, wasting and investing time; by arriving, departing, and moving through life; by serving our motherlands, and honoring our founding fathers.)

Oil made sense as a metaphor when data was so easy to take, and the resistance wasn’t there.

But now the resistance is there. More than half a billion people block ads online, most of which are aimed by extracted personal data. Laws like the GDPR have appeared, with heavy fines for taking personal data without clear permission.

I could go on, but I also need to go to bed. I just wanted to get this down while it was in the front of my mind, where it arrived while discussing how shitty and awful “data is the new oil” was when it first showed up in 2006, and how sadly popular it has become since then:

It’s time for a new metaphor that expresses what our personal data really is to us, and how much more it’s worth to everybody else if we keep, give and accept it on the model of love.

You’re welcome.

Nothing challenges our understanding of infrastructure better than a crisis, and we have a big one now in Houston. We do with every giant storm, of course. New York is still recovering from Sandy and New Orleans from Katrina. Reforms and adaptations always follow, as civilization learns from experience.

Look at aviation, for example. Houston is the 4th largest city in the U.S. and George Bush International Airport (aka IAH) is a major hub for United Airlines. For the last few days traffic there has been sphinctered down to emergency flights alone. You can see how this looks on FlightAware’s Miserymap:

Go there and click on the blue play button to see how flight cancellations have played over time, and how the flood in Houston has affected Dallas as well. Click on the airport’s donut to see what routes are most affected. Frequent fliers like myself rely on tools like this one, made possible by a collection of digital technologies working over the Internet.

The airport itself is on Houston’s north side, and not flooded. Its main problem instead has been people. Countless workers have been unable to come in because they’re trapped in the flood, busy helping neighbors or barely starting to deal with lives of their own and others that have been inconvenienced, ruined or in sudden need of large repair.

Aviation is just one of modern civilization’s infrastructures. Roads are another. Early in the flood, when cars were first stranded on roads, Google Maps, which gets its traffic information from cell phones, showed grids of solid red lines on the city’s flooded streets. Now those same streets are blank, because the cell phones have departed and the cars aren’t moving.

The cell phone system itself, however, has been one of the stars in the Houston drama. Harvey shows progress on emergency communications since Katrina, says a Wired headline from yesterday. Only 4% of the area’s cells were knocked out.

Right now the flood waters are at their record heights, or even rising. Learning about extant infrastructures has already commenced, and lessons will accumulate as the city drains and dries. It should help to have a deeper understanding of what infrastructure really is, and what it’s doing where it is, than we have so far.

I say that because infrastructure is still new as a concept. As a word, infrastructure has only been in common use since the 1960s:

In The Etymology of Infrastructure and the Infrastructure of the Internet, Stephen Lewis writes,

Infrastructure indeed entered the English language as a loan word from French in which it had been a railroad engineering term.  A 1927 edition of the Oxford indeed mentioned the word in the context of “… the tunnels, bridges, culverts, and ‘infrastructure work’ of the French railroads.”  After World War II, “infrastructure” reemerged as in-house jargon within NATO, this time referring to fixed installations necessary for the operations of armed forces and to capital investments considered necessary to secure the security of Europe…

Within my own memory the use of the word “infrastructure” had spilled into the contexts of urban management and regional and national development and into the private sector… used to refer to those massive capital investments (water, subways, roads, bridges, tunnels, schools, hospitals, etc.) necessary to a city’s economy and the lives of its inhabitants and business enterprises but too massive and too critical to be conceived, implemented, and run at a profit or to be trusted to the private sector…

In recent years, in the United States at least, infrastructure is a word widely used but an aspect of economic life and social cohesion known more by its collapse and abandonment and raffling off to the private sector than by its implementation, well-functioning, and expansion.

As Steve also mentions in that piece, he and I are among the relatively small number of people (at least compared to those occupying the familiar academic disciplines) who have paid close attention to the topic for some time.

The top dog in this pack (at least for me) is Brett Frischmann, the Villanova Law professor whose book Infrastructure: The Social Value of Shared Resources (Oxford, 2013) anchors the small and still young canon of work on the topic. Writes Brett,

Infrastructure resources entail long term commitments with deep consequences for the public. Infrastructures are a prerequisite for economic and social development. Infrastructures shape complex systems of human activity, including economic, cultural, and political systems. That is, infrastructures affect the behaviour of individuals, firms, households, and other organizations by providing and shaping the available opportunities of these actors to participate in these systems and to interact with each other.

The emphasis is mine, because I am curious about how shaping works. Specifically, How does infrastructure shape all those things—and each of us as well?

Here is a good example of people being shaped, in this case by mobile phones:

I shot that photo on my own phone in a New York subway a few months ago. As you see, everybody in that car is fully preoccupied with their personal rectangle. These people are not the same as they were ten or even five years ago. Nor are the “firms, households and other organizations” in which they participate. Nor is the subway itself, now that all four major mobile phone carriers cover every station in the city. At good speeds too:

We don’t know if Marshall McLuhan said “we shape our tools and then our tools shape us,” but it was clearly one of his core teachings. (In fact the line comes from Father John Culkin, SJ, a Professor of Communication at Fordham and a colleague of McLuhan’s. Whether or not Culkin got it from McLuhan we’ll never know.) As aphorisms go, it’s a close relative to the subtitle of McLuhan’s magnum opus, Understanding Media: the Extensions of Man (Berkeley, 1964, 1994, 2003). The two are compressed into his most quoted line, “the medium is the message,” which says that every medium changes us while also extending us.

In The Medium is the Massage: an Inventory of Effects (Gingko, 1967, 2001), McLuhan explains it this way: “All media work us over completely. They are so pervasive… that they leave no part of us untouched, unaffected, unaltered… Any understanding of social and cultural change is impossible without a knowledge of the way media work as environments.”

Specifically, “All media are extensions of some human faculty—psychic or physical. The wheel is an extension of the foot. The book is an extension of the eye. Clothing, an extension of the skin. Electric circuitry, an extension of the central nervous system. Media, by altering the environment, evoke in us unique ratios of sense perceptions. The extension of any one sense alters the way we think and act—the way we perceive the world. When these things change, men change.”

He also wasn’t just talking communications media. He was talking about everything we make, which in turn make us. As Eric McLuhan (Marshall’s son and collaborator) explains in Laws of Media: The New Science (Toronto, 1988), “media” meant “everything man[kind] makes and does, every procedure, every style, every artefact, every poem, song, painting, gimmick, gadget, theory—every product of human effort.”

Chief among the laws Marshall and Eric minted is the tetrad of media effects. (A tetrad is a group of four.) It says every medium, every technology, has effects that refract in four dimensions that also affect each other. Here’s a graphic representation of them:

They apply these laws heuristically, through questions:

  1. What does a medium enhance?
  2. What does it obsolesce?
  3. What does it retrieve that had been obsolesced earlier?
  4. What does it reverse or flip into when pushed to its extreme (for example, by becoming ubiquitous)?

Questions are required because there can be many different effects, and many different answers. All can change. All can be argued. All can work us over.

One workover happened right here, with this blog. In fact, feeling worked over was one of the reasons I dug back into McLuhan, who I had been ignoring for decades.

Here’s the workover…

In the heyday of blogging, back in the early ’00s, this blog’s predecessor (at doc.weblogs.com) had about 20,000 subscribers to its RSS feed, and readers numbering up to dozens of thousands per day. Now it gets dozens. On a good day, maybe hundreds. What happened?

In two words, social media. When I put that in the middle of the tetrad, four answers jumped to mind:

In the ENHANCED corner, social media surely makes everyone more social, in the purely convivial sense of the word. Suddenly we have hundreds or thousands of “friends” (Facebook, Swarm, Instagram), “followers” (Twitter) and “contacts” (LinkedIn). Never mind that we know few of their birthdays, parents’ names or other stuff we used to care about. We’re social with them now.

Blogging clearly got OBSOLESCED, but—far more importantly—so did the rest of journalism. And I say this as a journalist who once made a living at the profession and now, like everybody else who once did the same, makes squat. What used to be the business of journalism is now the business of “content production,” because that’s what social media and its publishing co-dependents get paid by advertising robots to produce in the world. What’s more, anybody can now participate. Look at that subway car photo above. Any one of those people, or all of them, are journalists now. They write and post in journals of various kinds on social media. Some of what they produce is news, if you want to call it that. But hell, news itself is worked over completely. (More about that in a minute.)

We’ve RETRIEVED gossip, which journalism, the academy and the legal profession had obsolesced (by saying, essentially, “we’re in charge of truth and facts”). In Sapiens: A Brief History of Humankind (Harper, 2015), Yuval Noah Harari says gossip was essential for our survival as hunter-gatherers: “Social cooperation is our key for survival and reproduction. It is not enough for individual men and women to know the whereabouts of lions and bison. It’s much more important for them to know who in their band hates whom, who is sleeping with whom, who is honest and who is a cheat.” And now we can do that with anybody and everybody, across the vast yet spaceless nowhere we call the Internet, and to hell with the old formalisms of journalism, education and law.

And social media has also clearly REVERSED us into tribes, especially in the news we produce and consume, much of it to wage verbal war with each other. Or worse. For a view of how that works, check out The Wall Street Journal‘s Red Feed / Blue Feed site, which shows the completely opposed (and hostile) views of the world that Facebook injects into the news feeds of people its algorithms consider “very liberal” or “very conservative.”

Is social media infrastructure? I suppose so. The mobile phone network certainly is. And right now we’re glad to have it, because Houston, the fourth largest city in the U.S., is suffering perhaps the worst natural disaster in the country’s history, and the cell phone system is holding up remarkably well, so far. Countless lives are being saved by it, and it will certainly remain the most essential communication system as the city recovers and rebuilds.

Meanwhile, however, it also makes sense to refract the mobile phone through the tetrad. I did that right after I shot the photo above, in this blog post. In it I said smartphones—

  • Enhance conversation
  • Obsolesce mass media (print, radio, TV, cinema, whatever)
  • Retrieve personal agency (the ability to act with effect in the world)
  • Reverse into isolation (also into lost privacy through exposure to surveillance and exploitation)

In the same graphic, it looks like this:

But why listen to me when the McLuhans were on the case almost three decades ago? This is from Gregory Sandstrom‘s “Laws of media—The four effects: A McLuhan contribution to social epistemology” (SERCC, November 11, 2012)—

The REVERSES items might be off, but the others are right on. (Whoa: cameras!)

The problem here, however, is the tendency we have to get caught up in effects. While those are all interesting, the McLuhans want us to look below those, to causes. This is hard because effects are figures, and causes are grounds: the contexts from which figures arise. From Marshall and Eric McLuhan’s Media and Formal Cause (Neopoesis, 2011): “Novelty becomes cliché through use. And constant use creates a new hidden environment while simultaneously pushing the old invisible ground into prominence, as a new figure, clearly visible for the first time. Every innovation scraps its immediate predecessor and retrieves still older figures; it causes floods of antiquities or nostalgic art forms and stimulates the search for ‘museum pieces’.”

We see this illustrated by Isabelle Adams in her paper “What Would McLuhan Say about the Smartphone? Applying McLuhan’s Tetrad to the Smartphone” (Glocality, 2016):

Laws of Media again: “The motor car retrieved the countryside, scrapped the inner core of the city, and created suburban megalopolis. Invention is the mother of necessities, old and new.”

We tend to see it the other way around, with necessity mothering invention. It should help to learn from the McLuhans that most of what we think we need is what we invent in order to need it.

Beyond clothing, shelter and tools made of sticks and stones, all the artifacts that fill civilized life are ones most of us didn’t know we needed until some maker in our midst invented them.

And some tools—extensions of our bodies—don’t become necessities until somebody invents a new way to use them. Palm, Nokia and Blackberry all made smartphones a decade before the iPhone and the Android. Was it those two operating systems that made everybody suddenly want one? No, apps were the inventions that mothered mass necessity for mobile phones, just as it was websites that made us need graphical browsers, which made us need personal computers connected by the Internet.

All those things are effects that the McLuhans want us to look beneath. But they don’t want us to look for the obvious causes of the this-made-that-happen kind. In Media and Formal Cause, Eric McLuhan writes:

Formal causality kicks in whenever “coming events cast their shadows before them.” Formal cause is still, in our time, hugely mysterious. The literate mind finds it is too paradoxical and irrational. It deals with environmental processes and it works outside of time. The effects—those long shadows—arrive first; the causes take a while longer.

Formal cause was one of four listed first by Aristotle:

  • Material—what something is made of.
  • Efficient—how one thing acts on another, causing change.
  • Formal—what makes the thing form a coherent whole.
  • Final—the purpose to which a thing is put.

In Understanding Media, Marshall McLuhan writes, “Any technology gradually creates a totally new human environment”, adding:

Environments are not passive wrappings but active processes….The railway did not introduce movement or transportation or wheel or road into society, but it accelerated and enlarged the scale of previous human functions, creating totally new kinds of cities and new kinds of work and leisure.

Thus railways were a formal cause that scaled up new kinds of cities, work and leisure.  “People don’t want to know the cause of anything”, Marshall said (and Eric quotes, in Media and Formal Cause). “They do not want to know why radio caused Hitler and Gandhi alike. They do not want to know that print caused anything whatever. As users of these media, they wish merely to get inside, hoping perhaps to add another layer to their environment….”

In Media and Formal Cause, Eric also sources Jane Jacobs:

Current theory in many fields—economics, history, anthropology—assumes that cities are built upon a rural economic base. If my observations and reasoning are correct, the reverse is true: that rural economies, including agricultural work, are directly built upon city economies and city work….Rural production is literally the creation of city consumption. That is to say, city economics invent the things that are to become city imports from the rural world.

Which brings us back to Houston. What forms will it cause as we repair it?

(I’m still not done, but need to get to my next appointment. Stay tuned.)

Who Owns the Internet? — What Big Tech’s Monopoly Powers Mean for our Culture is Elizabeth Kolbert‘s review in The New Yorker of several books, one of which I’ve read: Jonathan Taplin’s Move Fast and Break Things—How Facebook, Google, and Amazon Cornered Culture and Undermined Democracy.

The main takeaway for me, from both Elizabeth’s piece and Jon’s book, is that Google and Facebook are at the heart of today’s personal data extraction industry, and that this industry defines (as well as supports) much of our lives online.

Our data, and data about us, is the crude that Facebook and Google extract, refine and sell to advertisers. This by itself would not be a Bad Thing if it were done with our clearly expressed (rather than merely implied) permission, and if we had our own valves to control personal data flows with scale across all the companies we deal with, rather than countless different valves, many worthless, buried in the settings pages of the Web’s personal data extraction systems, as well as in all the extractive mobile apps of the world.

It’s natural to look for policy solutions to the problems Jon and others visit in the books Elizabeth reviews. And there are some good regulations around already. Most notably, the GDPR in Europe has energized countless developers (some listed here) to start providing tools individuals (no longer just “consumers” or “users”) can employ to control personal data flows into the world, and how that data might be used. Even if surveillance marketers find ways around the GDPR (which some will), advertisers themselves are starting to realize that tracking people like animals not only fails outright, but that the human beings who constitute the actual marketplace have mounted the biggest boycott in world history against it.

But I also worry because I consider both Facebook and Google epiphenomenal. Large and all-powerful though they may be today, they are (like all tech companies, especially ones whose B2B customers and B2C consumers are different populations—commercial broadcasters, for example) shallow and temporary effects rather than deep and enduring causes.

I say this as an inveterate participant in Silicon Valley who can name many long-gone companies that once occupied Google’s and Facebook’s locations there—and I am sure many more will occupy the same spaces in a fullness of time that will surely include at least one Next Big Thing that obsolesces advertising as we know it today online. Such as, for example, discovering that we don’t need advertising at all.

Even the biggest personal data extraction companies are not utilities on the scale or even the importance of power and water distribution (which we need to live), or the extraction industries behind either. Nor have these companies yet benefitted from the corrective influence of fully empowered individuals and societies: voices that can be heard directly, consciously and personally, rather than mere data flows observed by machines.

That direct influence will be far more helpful than anything they’re learning now just by following our shadows and sniffing our exhaust, mostly against our wishes. (To grok how little we like being spied on, read The Tradeoff Fallacy: How Marketers are Misrepresenting American Consumers and Opening Them Up to Exploitation, a report by Joseph Turow, Michael Hennessy and Nora Draper of the Annenberg School for Communication at the University of Pennsylvania.)

Our influence will be most corrective when all personal data extraction companies become what lawyers call second parties. That’s when they agree to our terms as first parties. These terms are in development today at Customer Commons, Kantara and elsewhere. They will prevail once they get deployed in our browsers and apps, and companies start agreeing (which they will in many cases because doing so gives them instant GDPR compliance, which is required by next May, with severe fines for noncompliance).

Meanwhile new government policies that see us only as passive victims will risk protecting yesterday from last Thursday with regulations that last decades or longer. So let’s hold off on that until we have terms of our own, start performing as first parties (on an Internet designed to support exactly that), and the GDPR takes full effect. (Not that more consumer-protecting federal regulation is going to happen in the U.S. anyway under the current administration: all the flow is in the other direction.)

By the way, I believe nobody “owns” the Internet, any more than anybody owns gravity or sunlight. For more on why, see Cluetrain’s New Clues, which David Weinberger and I put up 1.5 years ago.

Take a look at this chart:

[Chart: CryptoCurrency Market Capitalizations, 21 June 2017]

As Neo said, Whoa.

To help me get my head fully around all that’s going on behind that surge, or mania, or whatever it is, I’ve composed a lexicon-in-process that I’m publishing here so I can find it again. Here goes:

Bitcoin. “A cryptocurrency and a digital payment system invented by an unknown programmer, or a group of programmers, under the name Satoshi Nakamoto. It was released as open-source software in 2009. The system is peer-to-peer, and transactions take place between users directly, without an intermediary. These transactions are verified by network nodes and recorded in a public distributed ledger called a blockchain. Since the system works without a central repository or single administrator, bitcoin is called the first decentralized digital currency.” (Wikipedia.)

Cryptocurrency. “A digital asset designed to work as a medium of exchange using cryptography to secure the transactions and to control the creation of additional units of the currency. Cryptocurrencies are a subset of alternative currencies, or specifically of digital currencies. Bitcoin became the first decentralized cryptocurrency in 2009. Since then, numerous cryptocurrencies have been created. These are frequently called altcoins, as a blend of bitcoin alternative. Bitcoin and its derivatives use decentralized control as opposed to centralized electronic money/centralized banking systems. The decentralized control is related to the use of bitcoin’s blockchain transaction database in the role of a distributed ledger.” (Wikipedia.)

“A cryptocurrency system is a network that utilizes cryptography to secure transactions in a verifiable database that cannot be changed without being noticed.” (Tim Swanson, in Consensus-as-a-service: a brief report on the emergence of permissioned, distributed ledger systems.)

Distributed ledger. Also called a shared ledger, it is “a consensus of replicated, shared, and synchronized digital data geographically spread across multiple sites, countries, or institutions.” (Wikipedia, citing a report by the UK Government Chief Scientific Adviser: Distributed Ledger Technology: beyond block chain.) A distributed ledger requires a peer-to-peer network and consensus algorithms to ensure replication across nodes. The ledger is sometimes also called a distributed database. Tim Swanson adds that a distributed ledger system is “a network that fits into a new platform category. It typically utilizes cryptocurrency-inspired technology and perhaps even part of the Bitcoin or Ethereum network itself, to verify or store votes (e.g., hashes). While some of the platforms use tokens, they are intended more as receipts and not necessarily as commodities or currencies in and of themselves.”
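
To make “replicated, shared, and synchronized” concrete, here is a toy Python sketch of my own devising (the node and function names are illustrations, not any real system’s API): each node keeps its own replica of the ledger, and agreement across the network is checked by comparing digests of the copies.

```python
import hashlib
import json

def ledger_digest(entries):
    """Hash a node's whole ledger so replicas can be compared cheaply."""
    return hashlib.sha256(json.dumps(entries, sort_keys=True).encode()).hexdigest()

class Node:
    """One participant holding its own replica of the shared ledger."""
    def __init__(self, name):
        self.name = name
        self.ledger = []

    def append(self, entry):
        self.ledger.append(entry)

def in_consensus(nodes):
    """A crude stand-in for consensus: every replica hashes identically."""
    return len({ledger_digest(n.ledger) for n in nodes}) == 1

nodes = [Node("a"), Node("b"), Node("c")]
for n in nodes:
    n.append({"from": "alice", "to": "bob", "amount": 5})
print(in_consensus(nodes))             # True: the replicas agree
nodes[1].ledger[0]["amount"] = 500     # one node alters its copy
print(in_consensus(nodes))             # False: the divergence is visible
```

Real systems replace that one-line check with consensus algorithms that tolerate failed or dishonest nodes, but the shape is the same: many replicas, one agreed state.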

Blockchain. “A peer-to-peer distributed ledger forged by consensus, combined with a system for ‘smart contracts’ and other assistive technologies. Together these can be used to build a new generation of transactional applications that establishes trust, accountability and transparency at their core, while streamlining business processes and legal constraints.” (Hyperledger.)

“To use conventional banking as an analogy, the blockchain is like a full history of banking transactions. Bitcoin transactions are entered chronologically in a blockchain just the way bank transactions are. Blocks, meanwhile, are like individual bank statements. Based on the Bitcoin protocol, the blockchain database is shared by all nodes participating in a system. The full copy of the blockchain has records of every Bitcoin transaction ever executed. It can thus provide insight about facts like how much value belonged to a particular address at any point in the past. The ever-growing size of the blockchain is considered by some to be a problem due to issues like storage and synchronization. On average, every 10 minutes a new block is appended to the blockchain through mining.” (Investopedia.)

“Think of it as an operating system for marketplaces, data-sharing networks, micro-currencies, and decentralized digital communities. It has the potential to vastly reduce the cost and complexity of getting things done in the real world.” (Hyperledger.)
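
To see why such a ledger “cannot be changed without being noticed,” here is a minimal Python sketch of my own (not any production implementation): each block commits to the hash of the block before it, so rewriting history anywhere breaks every link that follows.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(batches):
    """Chain blocks by storing each predecessor's hash in its successor."""
    chain, prev = [], "0" * 64                  # all-zero "genesis" predecessor
    for txs in batches:
        block = {"transactions": txs, "prev_hash": prev}
        chain.append(block)
        prev = block_hash(block)
    return chain

def verify(chain):
    """Recompute every link; any edit to an old block breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain([["alice->bob:5"], ["bob->carol:2"]])
print(verify(chain))                             # True
chain[0]["transactions"][0] = "alice->bob:5000"  # try to rewrite history
print(verify(chain))                             # False: tampering is noticed
```

Real chains add proof of work (see Mining, below) so that rewriting history is not just detectable but prohibitively expensive.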

Permissionless system. “A permissionless system [or ledger] is one in which identity of participants is either pseudonymous or even anonymous. Bitcoin was originally designed with permissionless parameters although as of this writing many of the on-ramps and off-ramps for Bitcoin are increasingly permission-based.” (Tim Swanson.)

Permissioned system. “A permissioned system [or ledger] is one in which identity for users is whitelisted (or blacklisted) through some type of KYB or KYC procedure; it is the common method of managing identity in traditional finance.” (Tim Swanson.)
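
The distinction is easy to sketch in code. In this toy Python illustration (the whitelist, the names and the signature stub are all mine, purely for shape), a permissionless ledger accepts any validly signed transaction, while a permissioned one also checks the sender against a KYC-style whitelist:

```python
WHITELIST = {"alice", "bob"}   # identities vetted through some KYB/KYC process

def tx_signature_valid(tx) -> bool:
    # Stand-in for real public-key signature verification.
    return True

def permissionless_accept(tx) -> bool:
    """Any keyholder may transact; identity can be a pseudonym."""
    return tx_signature_valid(tx)

def permissioned_accept(tx) -> bool:
    """Same validity check, plus the sender must be a whitelisted identity."""
    return tx["sender"] in WHITELIST and tx_signature_valid(tx)

tx = {"sender": "mallory"}
print(permissionless_accept(tx))   # True: no identity gate
print(permissioned_accept(tx))     # False: "mallory" was never whitelisted
```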

Mining. “The process by which transactions are verified and added to the public ledger, known as the blockchain. (It is) also the means through which new bitcoin are released. Anyone with access to the Internet and suitable hardware can participate in mining. The mining process involves compiling recent transactions into blocks and trying to solve a computationally difficult puzzle. The participant who first solves the puzzle gets to place the next block on the block chain and claim the rewards. The rewards, which incentivize mining, are both the transaction fees associated with the transactions compiled in the block as well as newly released bitcoin.” (Investopedia.)
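
The “computationally difficult puzzle” is, in Bitcoin’s case, proof of work: hunting for a nonce that gives the block a sufficiently small hash. Here is a toy Python sketch, with a difficulty nowhere near the real network’s (and a leading-zeros test standing in for the real comparison against a 256-bit target):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose SHA-256 hash starts with `difficulty` hex zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block of recent transactions")
print(nonce, digest)
```

The asymmetry is the point: finding the winning nonce takes a huge number of guesses, while checking the winner takes a single hash.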

Ethereum. “An open-source, public, blockchain-based distributed computing platform featuring smart contract (scripting) functionality, which facilitates online contractual agreements. It provides a decentralized Turing-complete virtual machine, the Ethereum Virtual Machine (EVM), which can execute scripts using an international network of public nodes. Ethereum also provides a cryptocurrency token called “ether”, which can be transferred between accounts and used to compensate participant nodes for computations performed. Gas, an internal transaction pricing mechanism, is used to mitigate spam and allocate resources on the network. Ethereum was proposed in late 2013 by Vitalik Buterin, a cryptocurrency researcher and programmer. Development was funded by an online crowdsale during July–August 2014. The system went live on 30 July 2015, with 11.9 million coins “premined” for the crowdsale… In 2016 Ethereum was forked into two blockchains, as a result of the collapse of The DAO project. The two chains have different numbers of users, and the minority fork was renamed to Ethereum Classic.” (Wikipedia.)

Decentralized Autonomous Organization. This is “an organization that is run through rules encoded as computer programs called smart contracts. A DAO’s financial transaction record and program rules are maintained on a blockchain… The precise legal status of this type of business organization is unclear. The best-known example was The DAO, a DAO for venture capital funding, which was launched with $150 million in crowdfunding in June 2016 and was immediately hacked and drained of US$50 million in cryptocurrency… This approach eliminates the need to involve a bilaterally accepted trusted third party in a financial transaction, thus simplifying the sequence. The costs of a blockchain-enabled transaction and of making available the associated data may be substantially lessened by the elimination of both the trusted third party and of the need for repetitious recording of contract exchanges in different records: for example, the blockchain data could in principle, if regulatory structures permitted, replace public documents such as deeds and titles. In theory, a blockchain approach allows multiple cloud computing users to enter a loosely coupled peer-to-peer smart contract collaboration.” (Wikipedia.)

Initial Coin Offering. “A means of crowdfunding the release of a new cryptocurrency. Generally, tokens for the new cryptocurrency are sold to raise money for technical development before the cryptocurrency is released. Unlike an initial public offering (IPO), acquisition of the tokens does not grant ownership in the company developing the new cryptocurrency. And unlike an IPO, there is little or no government regulation of an ICO.” (Chris Skinner.)

“In an ICO campaign, a percentage of the cryptocurrency is sold to early backers of the project in exchange for legal tender or other cryptocurrencies, but usually for Bitcoin…During the ICO campaign, enthusiasts and supporters of the firm’s initiative buy some of the distributed cryptocoins with fiat or virtual currency. These coins are referred to as tokens and are similar to shares of a company sold to investors in an Initial Public Offering (IPO) transaction.” (Investopedia.)

Tokens. “In the blockchain world, a token is a tiny fraction of a cryptocurrency (bitcoin, ether, etc) that has a value usually less than 1/1000th of a cent, so the value is essentially nothing, but it can still go onto the blockchain…This sliver of currency can carry code that represents value in the real world — the ownership of a diamond, a plot of land, a dollar, a share of stock, another cryptocurrency, etc. Tokens represent ownership of the underlying asset and can be traded freely. One way to understand it is that you can trade physical gold, which is expensive and difficult to move around, or you can just trade tokens that represent gold. In most cases, it makes more sense to trade the token than the asset. Tokens can always be redeemed for their underlying asset, though that can often be a difficult and expensive process. Though technically they could be redeemed, many tokens are designed never to be redeemed but traded forever. On the other hand, a ticket is a token that is designed to be redeemed and may or may not be trade-able.” (TokenFactory.)

“Tokens in the ethereum ecosystem can represent any fungible tradable good: coins, loyalty points, gold certificates, IOUs, in-game items, etc. Since all tokens implement some basic features in a standard way, this also means that your token will be instantly compatible with the ethereum wallet and any other client or contract that uses the same standards.” (Ethereum.org/token.)

“The most important takehome is that tokens are not equity, but are more similar to paid API keys. Nevertheless, they may represent a >1000X improvement in the time-to-liquidity and a >100X improvement in the size of the buyer base relative to traditional means for US technology financing — like a Kickstarter on steroids.” (Thoughts on Tokens, by Balaji S. Srinivasan.)

“A blockchain token is a digital token created on a blockchain as part of a decentralized software protocol. There are many different types of blockchain tokens, each with varying characteristics and uses. Some blockchain tokens, like Bitcoin, function as a digital currency. Others can represent a right to tangible assets like gold or real estate. Blockchain tokens can also be used in new protocols and networks to create distributed applications. These tokens are sometimes also referred to as App Coins or Protocol Tokens. These types of tokens represent the next phase of innovation in blockchain technology, and the potential for new types of business models that are decentralized – for example, cloud computing without Amazon, social networks without Facebook, or online marketplaces without eBay. However, there are a number of difficult legal questions surrounding blockchain tokens. For example, some tokens, depending on their features, may be subject to US federal or state securities laws. This would mean, among other things, that it is illegal to offer them for sale to US residents except by registration or exemption. Similar rules apply in many other countries.” (A Securities Law Framework for Blockchain Tokens.)
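
What “implement some basic features in a standard way” amounts to can be sketched in a few lines. This toy Python class loosely mimics the shape of an ERC-20-style token (the names are mine; real tokens run as contract bytecode on a chain): a balance table plus transfer rules.

```python
class Token:
    """Toy fungible token: a balance table with transfer rules, loosely
    shaped like ERC-20's balanceOf/transfer interface."""

    def __init__(self, total_supply: int, issuer: str):
        # All units start in the issuer's hands, as in a typical token sale.
        self.balances = {issuer: total_supply}

    def balance_of(self, owner: str) -> int:
        return self.balances.get(owner, 0)

    def transfer(self, sender: str, to: str, amount: int) -> bool:
        if amount <= 0 or self.balance_of(sender) < amount:
            return False                      # can't spend what you don't hold
        self.balances[sender] -= amount
        self.balances[to] = self.balance_of(to) + amount
        return True

gold = Token(total_supply=1_000_000, issuer="vault")
gold.transfer("vault", "alice", 250)
print(gold.balance_of("alice"))               # 250
```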

In fact tokens go back. All the way.

In Before Writing Volume I: From Counting to Cuneiform, Denise Schmandt-Besserat writes, “Tokens can be traced to the Neolithic period starting about 8000 B.C. They evolved following the needs of the economy, at first keeping track of the products of farming…The substitution of signs for tokens was the first step toward writing.” (For a compression of her vast scholarship on the matter, read Tokens: their Significance for the Origin of Counting and Writing.)

I sense that we are now at a threshold no less pregnant with possibilities than we were when ancestors in Mesopotamia rolled clay into shapes, made marks on them and invented t-commerce.

And here is a running list of sources I’ve visited, so far:

You’re welcome.

To improve it, that is.


On a mailing list that obsesses about All Things Networking, another member cited what he called “the Doc Searls approach” to something. Since it was a little off (though kind and well-intended), I responded with this (lightly edited):

The Doc Searls approach is to put as much agency as possible in the hands of individuals first, and self-organized groups of individuals second. In other words, equip demand to engage and drive supply on customers’ own terms and in their own ways.

This is supported by the wide-open design of TCP/IP in the first place, which at least models (even if providers don’t fully give us) an Archimedean place to stand, and a wide-open market for levers that help us move the world—one in which the practical distance between everyone and everything rounds to zero.

To me this is a greenfield that has been mostly fallow for the duration. There are exceptions (and encouraging those is my personal mission), but mostly what we live with are industrial age models that assume from the start that the most leveraged agency is central, and that all the most useful intelligence (lately with AI and ML being the most hyper-focused on and fantasized about) should naturally be isolated inside corporate giants with immense data holdings and compute factories.

Government oversight of these giants and what they do is nigh unthinkable, much less do-able. While regulators aplenty know and investigate the workings of oil refineries and nuclear power plants, there are no equivalents for Google’s, Facebook’s or Amazon’s vast refineries of data and plants doing AI, ML and much more. All the expertise is working for those companies or selling their skills in the marketplace. (The public-minded work in universities, I suppose.) I don’t lament this, by the way. I just note that it pretty much can’t happen.

More importantly, we have seen, over and over, that compute powers of many kinds will be far more leveraged for all when individuals can apply them. We saw that when computing got personal, when the Internet gave everybody a place to operate on a common network that spanned the world, and when both could fit in a hand-held rectangle.

The ability for each of us not only to drive prices individually, but to restore the virtues of the bazaar to the networked marketplace, will eventually win out. In the meantime it appears the best we can do is imagine that the full graces of computing and networks are what only big companies can do for (and to) us.

Bonus link: a talk I gave last week in Munich.

So I thought it might be good to surface that here. At least it partly explains why I’ve been working more and blogging less lately.
