Internet

You are currently browsing the archive for the Internet category.

Take a look at this chart:

CryptoCurrency Market Capitalizations


As Neo said, Whoa.

To help me get my head fully around all that’s going on behind that surge, or mania, or whatever it is, I’ve composed a lexicon-in-process that I’m publishing here so I can find it again. Here goes:

Bitcoin. “A cryptocurrency and a digital payment system invented by an unknown programmer, or a group of programmers, under the name Satoshi Nakamoto. It was released as open-source software in 2009. The system is peer-to-peer, and transactions take place between users directly, without an intermediary. These transactions are verified by network nodes and recorded in a public distributed ledger called a blockchain. Since the system works without a central repository or single administrator, bitcoin is called the first decentralized digital currency.” (Wikipedia.)

Cryptocurrency. “A digital asset designed to work as a medium of exchange using cryptography to secure the transactions and to control the creation of additional units of the currency. Cryptocurrencies are a subset of alternative currencies, or specifically of digital currencies. Bitcoin became the first decentralized cryptocurrency in 2009. Since then, numerous cryptocurrencies have been created. These are frequently called altcoins, as a blend of bitcoin alternative. Bitcoin and its derivatives use decentralized control as opposed to centralized electronic money/centralized banking systems. The decentralized control is related to the use of bitcoin’s blockchain transaction database in the role of a distributed ledger.” (Wikipedia.)

“A cryptocurrency system is a network that utilizes cryptography to secure transactions in a verifiable database that cannot be changed without being noticed.” (Tim Swanson, in Consensus-as-a-service: a brief report on the emergence of permissioned, distributed ledger systems.)

Distributed ledger. Also called a shared ledger, it is “a consensus of replicated, shared, and synchronized digital data geographically spread across multiple sites, countries, or institutions.” (Wikipedia, citing a report by the UK Government Chief Scientific Adviser: Distributed Ledger Technology: beyond block chain.) A distributed ledger requires a peer-to-peer network and consensus algorithms to ensure replication across nodes. The ledger is sometimes also called a distributed database. Tim Swanson adds that a distributed ledger system is “a network that fits into a new platform category. It typically utilizes cryptocurrency-inspired technology and perhaps even part of the Bitcoin or Ethereum network itself, to verify or store votes (e.g., hashes). While some of the platforms use tokens, they are intended more as receipts and not necessarily as commodities or currencies in and of themselves.”

Blockchain. “A peer-to-peer distributed ledger forged by consensus, combined with a system for ‘smart contracts’ and other assistive technologies. Together these can be used to build a new generation of transactional applications that establishes trust, accountability and transparency at their core, while streamlining business processes and legal constraints.” (Hyperledger.)

“To use conventional banking as an analogy, the blockchain is like a full history of banking transactions. Bitcoin transactions are entered chronologically in a blockchain just the way bank transactions are. Blocks, meanwhile, are like individual bank statements. Based on the Bitcoin protocol, the blockchain database is shared by all nodes participating in a system. The full copy of the blockchain has records of every Bitcoin transaction ever executed. It can thus provide insight about facts like how much value belonged to a particular address at any point in the past. The ever-growing size of the blockchain is considered by some to be a problem due to issues like storage and synchronization. On average, every 10 minutes, a new block is appended to the block chain through mining.” (Investopedia.)
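The “full history” property in that analogy comes from each block recording the hash of the block before it, so altering any past entry breaks every later link. Here is a toy sketch in Python of that idea alone; it is an illustration of hash-chaining, not Bitcoin’s actual block format:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash: str) -> dict:
    # A "block" here is just transactions plus the previous block's hash.
    return {"transactions": transactions, "prev_hash": prev_hash}

def chain_is_valid(chain) -> bool:
    # Each block must record the hash of the block before it.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block(["alice pays bob 1"], prev_hash="0" * 64)
second = make_block(["bob pays carol 1"], prev_hash=block_hash(genesis))
chain = [genesis, second]
print(chain_is_valid(chain))   # True

# Tampering with history changes the genesis block's hash,
# so the second block's prev_hash no longer matches.
genesis["transactions"][0] = "alice pays bob 1000"
print(chain_is_valid(chain))   # False
```

That tamper-evidence is what lets every node hold a copy of the same ledger without trusting any single keeper of it.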

“Think of it as an operating system for marketplaces, data-sharing networks, micro-currencies, and decentralized digital communities. It has the potential to vastly reduce the cost and complexity of getting things done in the real world.” (Hyperledger.)

Permissionless system. “A permissionless system [or ledger] is one in which identity of participants is either pseudonymous or even anonymous. Bitcoin was originally designed with permissionless parameters although as of this writing many of the on-ramps and off-ramps for Bitcoin are increasingly permission-based.” (Tim Swanson.)

Permissioned system. “A permissioned system [or ledger] is one in which identity for users is whitelisted (or blacklisted) through some type of KYB or KYC procedure; it is the common method of managing identity in traditional finance.” (Tim Swanson.)

Mining. “The process by which transactions are verified and added to the public ledger, known as the blockchain. (It is) also the means through which new bitcoin are released. Anyone with access to the Internet and suitable hardware can participate in mining. The mining process involves compiling recent transactions into blocks and trying to solve a computationally difficult puzzle. The participant who first solves the puzzle gets to place the next block on the block chain and claim the rewards. The rewards, which incentivize mining, are both the transaction fees associated with the transactions compiled in the block as well as newly released bitcoin.” (Investopedia.)
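The “computationally difficult puzzle” in that definition is, in Bitcoin’s case, a search for a nonce that makes the block’s hash fall below a target. A toy version in Python (real mining hashes a binary block header with double SHA-256 and a dynamically adjusted target; this sketch just demands leading zero hex digits):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block of recent transactions", difficulty=4)
print(nonce, digest)  # digest begins with "0000" by construction

# Verifying the solution costs one hash, while finding it cost many:
assert hashlib.sha256(
    f"block of recent transactions:{nonce}".encode()
).hexdigest() == digest
```

The asymmetry at the end is the point: the puzzle is expensive to solve and cheap to check, which is what makes the first solver’s claim on the next block easy for every other node to verify.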

Ethereum. “An open-source, public, blockchain-based distributed computing platform featuring smart contract (scripting) functionality, which facilitates online contractual agreements. It provides a decentralized Turing-complete virtual machine, the Ethereum Virtual Machine (EVM), which can execute scripts using an international network of public nodes. Ethereum also provides a cryptocurrency token called “ether”, which can be transferred between accounts and used to compensate participant nodes for computations performed. Gas, an internal transaction pricing mechanism, is used to mitigate spam and allocate resources on the network. Ethereum was proposed in late 2013 by Vitalik Buterin, a cryptocurrency researcher and programmer. Development was funded by an online crowdsale during July–August 2014. The system went live on 30 July 2015, with 11.9 million coins “premined” for the crowdsale… In 2016 Ethereum was forked into two blockchains, as a result of the collapse of The DAO project. The two chains have different numbers of users, and the minority fork was renamed to Ethereum Classic.” (Wikipedia.)

Decentralized Autonomous Organization. This is “an organization that is run through rules encoded as computer programs called smart contracts. A DAO’s financial transaction record and program rules are maintained on a blockchain… The precise legal status of this type of business organization is unclear. The best-known example was The DAO, a DAO for venture capital funding, which was launched with $150 million in crowdfunding in June 2016 and was immediately hacked and drained of US$50 million in cryptocurrency… This approach eliminates the need to involve a bilaterally accepted trusted third party in a financial transaction, thus simplifying the sequence. The costs of a blockchain enabled transaction and of making available the associated data may be substantially lessened by the elimination of both the trusted third party and of the need for repetitious recording of contract exchanges in different records: for example, the blockchain data could in principle, if regulatory structures permitted, replace public documents such as deeds and titles. In theory, a blockchain approach allows multiple cloud computing users to enter a loosely coupled peer-to-peer smart contract collaboration.” (Wikipedia.)

Initial Coin Offering. “A means of crowdfunding the release of a new cryptocurrency. Generally, tokens for the new cryptocurrency are sold to raise money for technical development before the cryptocurrency is released. Unlike an initial public offering (IPO), acquisition of the tokens does not grant ownership in the company developing the new cryptocurrency. And unlike an IPO, there is little or no government regulation of an ICO.” (Chris Skinner.)

“In an ICO campaign, a percentage of the cryptocurrency is sold to early backers of the project in exchange for legal tender or other cryptocurrencies, but usually for Bitcoin…During the ICO campaign, enthusiasts and supporters of the firm’s initiative buy some of the distributed cryptocoins with fiat or virtual currency. These coins are referred to as tokens and are similar to shares of a company sold to investors in an Initial Public Offering (IPO) transaction.” (Investopedia.)

Tokens. “In the blockchain world, a token is a tiny fraction of a cryptocurrency (bitcoin, ether, etc) that has a value usually less than 1/1000th of a cent, so the value is essentially nothing, but it can still go onto the blockchain…This sliver of currency can carry code that represents value in the real world — the ownership of a diamond, a plot of land, a dollar, a share of stock, another cryptocurrency, etc. Tokens represent ownership of the underlying asset and can be traded freely. One way to understand it is that you can trade physical gold, which is expensive and difficult to move around, or you can just trade tokens that represent gold. In most cases, it makes more sense to trade the token than the asset. Tokens can always be redeemed for their underlying asset, though that can often be a difficult and expensive process. Though technically they could be redeemed, many tokens are designed never to be redeemed but traded forever. On the other hand, a ticket is a token that is designed to be redeemed and may or may not be trade-able.” (TokenFactory.)

“Tokens in the ethereum ecosystem can represent any fungible tradable good: coins, loyalty points, gold certificates, IOUs, in game items, etc. Since all tokens implement some basic features in a standard way, this also means that your token will be instantly compatible with the ethereum wallet and any other client or contract that uses the same standards.” (Ethereum.org/token.)

“The most important takehome is that tokens are not equity, but are more similar to paid API keys. Nevertheless, they may represent a >1000X improvement in the time-to-liquidity and a >100X improvement in the size of the buyer base relative to traditional means for US technology financing — like a Kickstarter on steroids.” (Thoughts on Tokens, by Balaji S. Srinivasan.)

“A blockchain token is a digital token created on a blockchain as part of a decentralized software protocol. There are many different types of blockchain tokens, each with varying characteristics and uses. Some blockchain tokens, like Bitcoin, function as a digital currency. Others can represent a right to tangible assets like gold or real estate. Blockchain tokens can also be used in new protocols and networks to create distributed applications. These tokens are sometimes also referred to as App Coins or Protocol Tokens. These types of tokens represent the next phase of innovation in blockchain technology, and the potential for new types of business models that are decentralized – for example, cloud computing without Amazon, social networks without Facebook, or online marketplaces without eBay. However, there are a number of difficult legal questions surrounding blockchain tokens. For example, some tokens, depending on their features, may be subject to US federal or state securities laws. This would mean, among other things, that it is illegal to offer them for sale to US residents except by registration or exemption. Similar rules apply in many other countries.” (A Securities Law Framework for Blockchain Tokens.)
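To make the “standard basic features” idea concrete, here is a toy Python class loosely modeled on ERC-20’s balanceOf/transfer interface. The names and rules are illustrative only; a real token is a contract on the chain, not an object in one program’s memory:

```python
class Token:
    """A minimal balances ledger with a transfer rule, loosely echoing
    ERC-20's balanceOf/transfer, minus the blockchain itself."""

    def __init__(self, supply: int, issuer: str):
        # The issuer starts holding the entire supply.
        self.balances = {issuer: supply}

    def balance_of(self, holder: str) -> int:
        return self.balances.get(holder, 0)

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        # A transfer only moves existing balance; it never creates tokens.
        if self.balance_of(sender) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balance_of(receiver) + amount

# A hypothetical gold-backed token: each unit stands in for vaulted gold.
gold = Token(supply=1000, issuer="vault")
gold.transfer("vault", "alice", 250)
print(gold.balance_of("alice"), gold.balance_of("vault"))  # 250 750
```

Because every compliant token exposes the same tiny interface, one wallet or exchange can handle all of them, which is what the Ethereum.org quote above is getting at.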

In fact tokens go back. All the way.

In Before Writing Volume I: From Counting to Cuneiform, Denise Schmandt-Besserat writes, “Tokens can be traced to the Neolithic period starting about 8000 B.C. They evolved following the needs of the economy, at first keeping track of the products of farming…The substitution of signs for tokens was the first step toward writing.” (For a compression of her vast scholarship on the matter, read Tokens: their Significance for the Origin of Counting and Writing.)

I sense that we are now at a threshold no less pregnant with possibilities than we were when ancestors in Mesopotamia rolled clay into shapes, made marks on them and invented t-commerce.

And here is a running list of sources I’ve visited, so far:

You’re welcome.

To improve it, that is.


On a mailing list that obsesses about All Things Networking, another member cited what he called “the Doc Searls approach” to something. Since it was a little off (though kind and well-intended), I responded with this (lightly edited):

The Doc Searls approach is to put as much agency as possible in the hands of individuals first, and self-organized groups of individuals second. In other words, equip demand to engage and drive supply on customers’ own terms and in their own ways.

This is supported by the wide-open design of TCP/IP in the first place, which at least models (even if providers don’t fully give us) an Archimedean place to stand, and a wide-open market for levers that help us move the world—one in which the practical distance between everyone and everything rounds to zero.

To me this is a greenfield that has been mostly fallow for the duration. There are exceptions (and encouraging those is my personal mission), but mostly what we live with are industrial age models that assume from the start that the most leveraged agency is central, and that all the most useful intelligence (lately with AI and ML being the most hyper-focused on and fantasized about) should naturally be isolated inside corporate giants with immense data holdings and compute factories.

Government oversight of these giants and what they do is nigh unthinkable, much less doable. While regulators aplenty know and investigate the workings of oil refineries and nuclear power plants, there are no equivalents for Google’s, Facebook’s or Amazon’s vast refineries of data and plants doing AI, ML and much more. Nearly all the experts work for those companies or sell their skills in the marketplace. (The public-minded work in universities, I suppose.) I don’t lament this, by the way. I just note that it pretty much can’t happen.

More importantly, we have seen, over and over, that compute powers of many kinds will be far more leveraged for all when individuals can apply them. We saw that when computing got personal, when the Internet gave everybody a place to operate on a common network that spanned the world, and when both could fit in a hand-held rectangle.

The ability for each of us not only to drive prices individually, but to bring the virtues of the bazaar to the networked marketplace, will eventually win out. In the meantime it appears the best we can do is imagine that the full graces of computing and networks are what only big companies can do for (and to) us.

Bonus link: a talk I gave last week in Munich.

So I thought it might be good to surface that here. At least it partly explains why I’ve been working more and blogging less lately.


Before we start, let me explain that ATSC 1.0 is the HDTV standard, and defines what you get from HDTV stations over the air and cable. It dates from the last millennium. Resolution currently maxes out at 1080i, which fails to take advantage of even the lowest-end HDTVs sold today, which are 1080p (better than 1080i).

Your new 4K TV or computer screen has 4x the resolution and “upscales” the ATSC picture it gets over the air or from cable. But actual 4K video looks better. Sources for that include satellite TV providers (DirecTV and Dish) and streaming services (Netflix, Amazon, YouTube, etc.).

In other words, the TV broadcast industry is to 4K video what AM radio is to FM. (Or what both are to streaming.)

This is why our new FCC chairman is stepping up for broadcasters. In FCC’s Pai Proposes ATSC 3.0 Rollout, John Eggerton (@eggerton) of B&C (Broadcasting & Cable) begins,

New FCC chairman Ajit Pai signaled Thursday that he wants broadcasters to be able to start working on tomorrow’s TV today.

Pai, who has only been in the job since Jan. 20, wasted no time prioritizing that goal. He has already circulated a Notice of Proposed Rulemaking to the other commissioners that would allow TV stations to start rolling out the ATSC 3.0 advanced TV transmission standard on a voluntary basis. He hopes to issue final authorization for the new standard by the end of the year, he said in an op-ed in B&C explaining the importance of the initiative.

“Next Gen TV matters because it will let broadcasters offer much better services in a variety of ways,” Pai wrote. “Picture quality will improve with 4K transmissions. Accurate sound localization and customizable sound mixes will produce an immersive audio experience. Broadcasters will be able to provide advanced emergency alerts with more information, more tailored to a viewer’s particular location. Enhanced personalization and interactivity will enable better audience measurement, which in turn will make for higher-quality advertising—ads relevant to you and that you actually might want to see. Perhaps most significantly, consumers will easily be able to watch over-the-air programming on mobile devices.”

Three questions here.

  1. Re: personalization, will broadcasters and advertisers agree to our terms rather than vice versa? Term #1: #NoStalking. So far, I doubt it. (Not that the streamers are ready either, but they’re more likely to listen.)
  2. How does this square with the Incentive Auction, which—if it succeeds—will get rid of most over-the-air TV?
  3. What will this do for (or against) cable, which is already having a helluva time wedging too many channels into its available capacity, and does it by compressing the crap out of everything, filling the screen with artifacts (those sections of skin or ball fields that look plaid or pixelated)?

Personally, I think both over the air and cable TV are dead horses walking, and ATSC 3.0 won’t save them. We’ll still have cable, but will use it mostly to watch and interact with streams, most of which will come from producers and distributors that were Net-native in the first place.

But I could be wrong about any or all of this. Either way (or however), tell me how.

 



 

Imagine you’re on a busy city street where everybody who disagrees with you disappears.

We have that city now. It’s called media—especially the social kind.

You can see how this works on Wall Street Journal‘s Blue Feed, Red Feed page. Here’s a screen shot of the feed for “Hillary Clinton” (one among eight polarized topics):


Both invisible to the other.

We didn’t have that in the old print and broadcast worlds, and still don’t, where they persist. (For example, on news stands, or when you hit SCAN on a car radio.)

But we have it in digital media.

Here’s another difference: a lot of the stuff that gets shared is outright fake. There’s a lot of concern about that right now:


Why? Well, there’s a business in it. More eyeballs, more advertising, more money, for more eyeballs for more advertising. And so on.

Those ads are aimed by tracking beacons planted in your phones and browsers, feeding data about your interests, likes and dislikes to robot brains that work as hard as they can to know you and keep feeding you more stuff that stokes your prejudices. Fake or not, what you’ll see is stuff you are likely to share with others who do the same. The business that pays for this is called “adtech,” also known as “interest based” or “interactive” advertising. But those are euphemisms. Its science is all about stalking. They can plausibly deny it’s personal. But it is.

The “social” idea is “markets as conversations” (a personal nightmare for me, gotta say). The business idea is to drag as many eyeballs as possible across ads that are aimed by the same kinds of creepy systems. The latter funds the former.

Rather than unpack that, I’ll leave that up to the rest of y’all, with a few links:

 

I want all the help I can get unpacking this, because I’m writing about it in a longer form than I’m indulging in here. Thanks.


[3 December update: Here is a video of the panel.]

So I was on a panel at WebScience@10 in London (@WebScienceTrust, #WebSci10), where the first question asked was, “What are two aspects of ‘trust and the Web’ that you think are most relevant/important at the moment?” My answer went something like this:

1) The Net is young, and the Web with it.

Both were born in their current forms on 30 April 1995, when the NSFnet stopped forbidding commercial traffic on its pipes. This opened the whole Net to absolutely everything, exactly when the graphical Web browser became fully useful.

Twenty-one years in the history of a world is nothing. We’re still just getting started here.

2) The Internet, like nature, did not come with privacy. And privacy is personal. We need to start there.

We arrived naked in this new world, and — like Adam and Eve — still don’t have clothing and shelter.

The browser should have been a private tool in the first place, but it wasn’t; and it won’t be, so long as we leave improving it mostly up to companies with more interest in violating our privacy than providing it.

Just 21 years into this new world, we still need our own clothing, shelter, vehicles and private spaces. Browsers included. We will only get privacy if our tools provide it as a simple fact.

We also need to be the first parties, rather than the second ones, in our social and business agreements. In other words, others need to accept our terms, rather than vice versa. As first parties, we are independent. As second parties, we are dependent. Simple as that. Without independence, without agency, without the ability to initiate, without the ability to obtain agreement on our own terms, it’s all just more of the same old industrial model.

In the physical world, our independence earns respect, and that’s what we give to others as a matter of course. Without that respect, we don’t have civilization. This is why the Web we have today is still largely uncivilized.

We can only civilize the Net and the Web by inventing digital clothing and doors for people, and by providing standard agreements private individuals can assert in their dealings with others.

Inventing yet another wannabe unicorn to provide “privacy as a service” won’t do it. Nor will regulating the likes of Facebook and Google, or expecting them to become interested in building protections, when their businesses depend on the absence of those protections.

Fortunately, work has begun on personal privacy tools, and agreements we can each assert. And we can talk about those.


In Google Has Quietly Dropped Ban on Personally Identifiable Web Tracking, @JuliaAngwin and @ProPublica unpack what the subhead already says well: “Google is the latest tech company to drop the longstanding wall between anonymous online ad tracking and users’ names.”

So here’s a message from humanity to Google and all the other spy organizations in the surveillance economy: Tracking is no less an invasion of privacy in apps and browsers than it is in homes, cars, purses, pants and wallets.

That’s because our apps and browsers, like the devices on which we use them, are personal and private. Simple as that. (HT to @Apple for digging that fact.)

To help the online advertising business understand what ought to be obvious (but isn’t yet), let’s clear up some misconceptions:

  1. Tracking people without their clear and conscious permission is wrong. (Meaning The Castle Doctrine should apply online no less than it does in the physical world.)
  2. Assuming that using a browser or an app constitutes some kind of “deal” to allow tracking is wrong. (Meaning implied consent is not the real thing. See The Tradeoff Fallacy: How Marketers Are Misrepresenting American Consumers and Opening Them Up to Exploitation, by Joseph Turow, Ph.D. and the Annenberg School for Communication at the University of Pennsylvania.)
  3. Claiming that advertising funds the “free” Internet is wrong. (The Net has been free for the duration. Had it been left up to the billing companies of the world, we never would have had it, and they never would have made their $trillions on it. More at New Clues.)

What’s right is civilization, which relies on manners. Advertisers, their agencies and publishers haven’t learned manners yet.

But they will.

At the very least, regulations will force companies harvesting personal data to obey those they harvest it from, with fines for not obeying. Toward that end, Europe’s General Data Protection Regulation already has compliance offices at large corporations shaking in their boots, for good reason: “a fine up to 20,000,000 EUR, or in the case of an undertaking, up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher (Article 83, Paragraph 5 & 6).” Those come into force in 2018. Stay tuned.
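The fine formula in Article 83(5) reduces to a simple maximum, which is worth spelling out because the 4% tier only bites above €500 million in turnover:

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    # Article 83(5): up to EUR 20M, or 4% of total worldwide annual
    # turnover of the preceding financial year, whichever is higher.
    return max(20_000_000, 0.04 * annual_turnover_eur)

# Below EUR 500M turnover, the flat EUR 20M cap dominates:
print(max_gdpr_fine(100_000_000))    # 20000000
# Above it, the 4% tier dominates:
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```

For a company the size of Google’s parent (tens of billions in annual revenue), the 4% tier runs into the billions, which is why compliance offices are paying attention.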

Companies harvesting personal data also shouldn’t be surprised to find themselves re-classified as fiduciaries, no less responsible than accountants, brokers and doctors for confidentiality on behalf of the people they collect data from. (Thank you, professors Balkin and Zittrain, for that legal and rhetorical hack. Brilliant, and well done. Or begun.)

The only way to fully fix publishing, advertising and surveillance-corrupted business in general is to equip individuals with terms they can assert in dealing with others online — and to do it at scale. Meaning we need terms that work the same way across all the companies we deal with. That’s why Customer Commons and Kantara are working on exactly those terms. For starters. And these will be our terms — not separate and different ones that live at each company we deal with. Those aren’t working now, and never will work, because they can’t. And they can’t because when you have to deal with as many different terms as there are parties supplying them, the problem becomes unmanageable, and you get screwed. That’s why —

There’s a new sheriff on the Net, and it’s the individual. Who isn’t a “user,” by the way. Or a “consumer.” With new terms of our own, we’re the first party. The companies we deal with are second parties. Meaning that they are the users, and the consumers, of our legal “content.” And they’ll like it too, because we actually want to do good business with good companies, and are glad to make deals that work for both parties. Those include expressions of true loyalty, rather than the coerced kind we get from every “loyalty” card we carry in our purses and wallets.

When we are the first parties, we also get scale. Imagine changing your terms, your contact info, or your last name, for every company you deal with — and doing that in one move. That can only happen when you are the first party.

So here’s a call to action.

If you want to help blow up the surveillance economy by helping develop much better ways for demand and supply to deal with each other, show up next week at the Computer History Museum for VRM Day and the Internet Identity Workshop, where there are plenty of people already on the case.

Then follow the work that comes out of both — as if your life depends on it. Because it does.

And so does the economy that will grow atop true privacy online and the freedoms it supports. Both are a helluva lot more leveraged than the ill-gotten data gains harvested by the Lumascape doing unwelcome surveillance.

Bonus links:

  1. All the great research Julia Angwin & Pro Publica have been doing on a problem that data harvesting companies have been causing and can’t fix alone, even with government help. That’s why we’re doing the work I just described.
  2. What Facebook Knows About You Can Matter Offline, an OnPoint podcast featuring Julia, Cathy O’Neill and Ashkan Soltani.
  3. Everything by Shoshana Zuboff. From her home page: “I’ve dedicated this part of my life to understanding and conceptualizing the transition to an information civilization. Will we be the masters of information, or will we be its slaves? There’s a lot of work to be done, if we are to build bridges to the kind of future that we can call ‘home.’ My new book on this subject, Master or Slave? The Fight for the Soul of Our Information Civilization, will be published by Public Affairs in the U.S. and Eichborn in Germany in 2017.” Can’t wait.
  4. Don Marti’s good thinking and work with Aloodo and other fine hacks.


Who Owns the Mobile Experience? is a report by Unlockd on mobile advertising in the U.K. To clarify the way toward an answer, the report adds, “mobile operators or advertisers?”

The correct answer is neither. Nobody’s experience is “owned” by somebody else.

True, somebody else may cause a person’s experience to happen. But causing isn’t the same as owning.

We own our selves. That includes our experiences.

This is an essential distinction. For lack of it, both mobile operators and advertisers are delusional about their customers and consumers. (That’s an important distinction too. Operators have customers. Advertisers have consumers. Customers pay, consumers may or may not. That the former also qualifies as the latter does not mean the distinction should not be made. Sellers are far more accountable to customers than advertisers are to consumers.)

It’s interesting that Unlockd’s survey shows almost identically high levels of delusion by advertisers and operators…

  • 85% of advertisers and 82% of operators “think the mobile ad experience is positive for end users”
  • 3% of advertisers and 1% of operators admit “it could be negative”
  • Of the 85% of advertisers who think the experience is positive, 50% “believe it’s because products advertised are relevant to the end user”
  • “the reasons for this opinion is driven from the belief that users are served detail around products that are relevant to them.”

… while:

  • 47% of consumers think “the mobile phone ad experience (for them) is positive”
  • 39% of consumers “think ads are irrelevant”
  • 36% blame “poor or irritating format”
  • 40% “believe the volume of ads served to them are a main reason for the negative experience”

It’s amazing but not surprising to me that mobile operators apparently consider their business to be advertising more than connectivity. This mindset is also betrayed by AT&T charging a premium for privacy and Comcast wanting to do the same. (Advertising today, especially online, does not come with privacy. Quite the opposite, in fact. A great deal of it is based on tracking people. Shoshana Zuboff calls this surveillance capitalism.)

Years ago, when I consulted BT, JP Rangaswami (@jobsworth), then BT’s Chief Scientist, told me phone companies’ core competency was billing, not communications. Since those operators clearly wish to be in the “content” business now, and to make money the same way print and broadcast did for more than a century, it makes sense that they imagine themselves now to be one-way conduits for ad-fortified content, and not just a way people and things (including the ones called products and companies) can connect to each other.

The FCC and other regulators need to bear this in mind as they look at what operators are doing to the Internet. I mean, it’s good and necessary for regulators to care about neutrality and privacy of Internet services, but a category error is being made if regulators fail to recognize that the operators want to be “content distributors” on the models of commercial broadcasting (funded by advertising) and the post office (funded by junk mail, which is the legacy model of today’s personalized direct response advertising online).

I also have to question how consumers were asked by this survey about their mobile ad experiences. Let me see a show of hands: how many here consider their mobile phone ad experience “positive?” Keep your hands down if you are associated in any way with advertising, phone companies or publishing. When I ask this question, or one like it (e.g. “Who here wants to see ads on their phone?”) in talks I give, the number of raised hands is usually zero. If it’s not, the few parties with raised hands offer qualified responses, such as, “I’d like to see coupons when I’m in a store using a shopping app.”

Another delusion of advertisers and operators is that all ads should be relevant. They don’t need to be. In fact, the most valuable ads are not targeted personally, but across populations, so large populations can become familiar with advertised products and services.

It’s a simple fact that branding wouldn’t exist without massive quantities of ads being shown to people for whom the ads are irrelevant. Few of us would know the brands of Procter & Gamble, Unilever, L’Oreal, Coca-Cola, Nestlé, General Motors, Volkswagen, Mars or McDonald’s (the current top ten brand advertisers worldwide) if not for the massive amounts of money those companies spend advertising to people who will never buy their products but will damn sure know those products’ names. (Don Marti explains this well.)

A hard fact that the advertising industry needs to face is that there is very little appetite for ads on the receiving end. People put up with it on TV and radio, and in print, but for the most part they don’t like it. (The notable exceptions are print ads in fashion magazines and other high-quality publications. And classifieds.)

Appetites for ads, and all forms of content, should be consumers’ own. This means consumers need to be able to specify the kind of advertising they’re looking for, if any.

Even then, the far more valuable signal coming from consumers is (or will be) an actual desire for certain products and services. In marketing lingo, these signals are qualified leads. In VRM lingo, these signals are intentcasts. With intentcasting, the customers do the advertising, and are in full control of the process. And they are no longer mere consumers (which Jerry Michalski calls “gullets with wallets and eyeballs”).

It helps that there are dozens of companies in this business already.

So it would be far more leveraged for operators to work with those companies than with advertising systems so disconnected from reality that they’ve caused hundreds of millions of people to block ads on their mobile devices — and are in such deep denial of the market’s clear messages that they deny the legitimacy of a clear personal choice, misdirecting attention toward the makers of ad blocking tools, and away from what’s actually happening: people asserting power over their own lives and private spaces (e.g. their browsers) online.

If companies actually believe in free markets, they need to believe in free customers. Those are people who, at the very least, are in charge of their own experiences in the networked world.


The NYTimes says the Mandarins of language are demoting the Internet to a common noun. It is to be just “internet” from now on. Reasons:

Thomas Kent, The A.P.’s standards editor, said the change mirrored the way the word was used in dictionaries, newspapers, tech publications and everyday life.

“In our view, it’s become wholly generic, like ‘electricity’ or ‘the telephone,’” he said. “It was never trademarked. It’s not based on any proper noun. The best reason for capitalizing it in the past may have been that the word was new. But at one point, I’ve heard, ‘phonograph’ was capitalized.”

But we never called electricity “the Electricity.” And “the telephone” referred to a single thing of which there are billions of individual examples.

What was it about “the Internet” that made us want to capitalize it in the first place? Is usage alone reason enough to stop respecting that?

Some of my tech friends say the “Internet” we’ve had for all these years is just one prototype: the first and best-known of many other possible ones.

All due respect, but: bah.

There is only one Internet just like there is only one Universe. There are other examples of neither.

Formalizing the lower-case “internet,” for whatever reason, dismisses what’s transcendent and singular about the Internet we have: a whole that is more, and other, than a sum of parts.

I know it looks like the Net is devolving into many separate systems, isolated and silo’d to some degree. We see that with messaging, for example. Hundreds of different ones, most of them incompatible, on purpose. We have specialized mobile systems that provide variously open vs. sphinctered access (such as T-Mobile’s “binge” allowance for some content sources but not others), zero-rated not-quite-internets (such as Facebook’s Free Basics) and countries such as China, where many domains and uses are locked out.

Some questions…

Would we enjoy a common network by any name today if the Internet had been lower-case from the start?

Would makers or operators of any of the parts that comprise the Internet’s whole feel any fealty to what at least ought to be the common properties of that whole? Or would they have made sure that their parts only got along, at most, with partners’ parts? Would the first considerations by those operators not have been billing and tariffs agreed to by national regulators?

Hell, would the four of us have written The Cluetrain Manifesto? Would David Weinberger and I have written World of Ends or New Clues if the Internet had lacked upper-case qualities?

Would the world experience absent distance and cost across The Giant Zero in its midst were it not for the Internet’s founding design, which left out billing and proprietary routing on purpose?

Would we have anything resembling the Internet of today if designing and building it had been left up to phone and cable companies? Or to governments (even respecting the roles government activities did play in creating the Net we do have)?

I think the answer to all of those would be no.

In The Compuserve of Things, Phil Windley begins, “On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980’s, or will we learn the lessons of the Internet and build a true Internet of Things?”

Would he, or anybody, ask such questions, or aspire to such purposes, were it not for the respect many of us pay to the upper-cased-ness of “the Internet?”

How does demoting Internet from proper to common noun not risk (or perhaps even assure) its continued devolution to a collection of closed and isolated parts that lack properties (e.g. openness and commonality) possessed only by the whole?

I don’t know. But I think these kinds of questions are important to ask, now that the keepers of usage standards have demoted what the Net’s creators made — and ignore why they made it.

If you care at all about this, please dig Archive.org’s Locking the Web open: a Call for a Distributed Web, Brewster Kahle’s post by the same title, covering more ground, and the Decentralized Web Summit, taking place on June 8-9. (I’ll be there in spirit. Alas, I have other commitments on the East Coast.)

For some reason, many or most of the images in this blog don’t load in some browsers. The same goes for the ProjectVRM blog. This is new, and I don’t know exactly why it’s happening.

So far, I gather it happens only when the URL is https and not http.

Okay, here’s an experiment. I’ll add an image here in the WordPress (4.4.2) composing window, and center it in the process, all in Visual mode. Here goes:

cheddar3

Now I’ll hit “Publish,” and see what we get.

When the URL starts with https, it fails to show in—

  • Firefox (46.0.1)
  • Chrome (50.0.2661.102)
  • Brave (0.9.6)

But it does show in—

  • Opera (12.16)
  • Safari (9.1).

Now I’ll go back and edit the HTML for the image in Text mode, taking out class=”aligncenter size-full wp-image-10370″ from between the img and src attributes, and bracketing the whole image with the <center> and </center> tags. Here goes:

cheddar3

Hmm… The <center> tags don’t work, and I see why when I look at the HTML in Text mode: WordPress removes them. That’s new. Thus another old-school HTML tag gets sidelined. 🙁

Okay, I’ll try again to center it, this time by taking out class=”aligncenter size-full wp-image-10370″ in Text mode, and clicking on the centering icon in Visual mode. When I check back in Text mode, I see WordPress has put class=”aligncenter” between img and src. I suppose that attribute is understood by WordPress’ (or the theme’s) CSS while the old <center> tags are not. Am I wrong about that?

Now I’ll hit the update button, rendering this—

cheddar3

—and check back with the browsers.

Okay, it works with all of them now, whether the URL starts with https or http.

So the apparent culprit, at least by this experiment, is centering with anything other than class=”aligncenter”. The workaround seems to require inserting a centered image in Visual mode, editing out size-full wp-image-whatever in Text mode (whatever being a number that differs for every image I put in a post), and then centering it again in Visual mode, which puts class=”aligncenter” in place of what I edited out. Fun.
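The round trip through Visual and Text modes boils down to one rewrite of the class attribute. Here is a minimal, hypothetical Python sketch of that edit, assuming the markup uses straight quotes (as WordPress actually emits them, even though they render curly in this post):

```python
# Sketch of the manual fix described above: strip the extra class tokens
# WordPress adds, leaving only "aligncenter".
import re

def keep_only_aligncenter(html):
    """Remove 'size-full wp-image-NNNN' from an img tag's class attribute."""
    return re.sub(r'class="aligncenter size-full wp-image-\d+"',
                  'class="aligncenter"', html)

before = '<img class="aligncenter size-full wp-image-10370" src="cheddar3.jpg">'
print(keep_only_aligncenter(before))
# <img class="aligncenter" src="cheddar3.jpg">
```

That one substitution is what the Visual-mode centering icon ends up producing anyway.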

Here’s another interesting (and annoying) thing. When I’m editing in the composing window, the url is https. But when I “view post” after publishing or updating, I get the http version of the blog, where I can’t see what doesn’t load in the https version. But when anybody comes to the blog by way of an external link, such as a search engine or Twitter, they see the https version, where the graphics won’t load if I haven’t fixed them manually in the manner described above.

So https is clearly breaking old things, but I’m not sure if it’s https doing it, something in the way WordPress works, or something in the theme I’m using. (In WordPress it’s hard — at least for me — to know where WordPress ends and the theme begins.)

Dave Winer has written about how https breaks old sites, and here we can see it happening on a new one as well. WordPress, or at least the version provided for https://blogs.harvard.edu bloggers, may be buggy, or behind the times with the way it marks up images. But that’s a guess.

I sure hope there is some way to gang-edit all my posts going back to 2007. If not, I’ll just have to hope readers will know to take the s out of https and re-load the page. Which, of course, nearly all of them won’t.
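One guess at a global fix for the symptom, if the problem really is absolute http:// image URLs being blocked on https pages: rewrite them as protocol-relative URLs, so browsers fetch them over whichever scheme the page itself uses. A hypothetical sketch of that rewrite on one post's HTML (on the server, WP-CLI's real `wp search-replace` command could do the same across the whole database, assuming WP-CLI is installed there):

```python
# Hypothetical gang-edit sketch: make image sources protocol-relative so
# they load under both http and https versions of the blog.
import re

def make_protocol_relative(post_html):
    """Turn src="http://..." into src="//..." inside post markup."""
    return re.sub(r'src="http://', 'src="//', post_html)

old = '<img class="aligncenter" src="http://blogs.harvard.edu/doc/cheddar3.jpg">'
print(make_protocol_relative(old))
# <img class="aligncenter" src="//blogs.harvard.edu/doc/cheddar3.jpg">
```

This is a sketch, not a definitive diagnosis; if the images are referenced some other way, the rewrite would need adjusting.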

It doesn’t help that all the browser makers now obscure the protocol, so you can’t see whether a site is http or https, unless you copy and paste it. They only show what comes after the // in the URL. This is a very unhelpful dumbing-down “feature.”

Brave is different. The location bar isn’t there unless you mouse over it. Then you see the whole URL, including the protocol to the left of the //. But if you don’t do that, you just see a little padlock (meaning https, I gather), then (with this post) “blogs.harvard.edu | Doc Searls Weblog • Help: why don’t images load in https?” I can see why they do that, but it’s confusing.

By the way, I probably give the impression of being a highly technical guy. I’m not. The only code I know is Morse. The only HTML I know is vintage. I’m lost with <span> and <div> and wp-image-whatever, don’t hack CSS or PHP, and don’t understand why <em> is now preferable to <i> if you want to italicize something. (Fill me in if you like.)

So, technical folks, please tell us wtf is going on here (or give us your best guess), and point to simple and global ways of fixing it.

Thanks.

[Later…]

Some answer links, mostly from the comments below:

That last one, which is cited in two comments, says this:

Chris Cree, who experienced the same problem, discovered that the WP_SITEURL and WP_HOME constants in the wp-config.php file were configured to structure URLs with http instead of https. Cree suggests users check their settings to see which URL type is configured. If both the WordPress address and Site URLs don’t show https, it’s likely causing issues with responsive images in WordPress 4.4.

Two things here:

  1. I can’t see where in Settings the URL type is mentioned, much less configurable. But Settings has a load of categories and choices within categories, so I may be missing it.
  2. I wonder what will happen to old posts I edited to make images responsive. (Some background on that. “Responsive design,” an idea that seems to have started here in 2010, has since led to many permutations of complications in code that’s mostly hidden from people like me, who just want to write something on a blog or a Web page. We all seem to have forgotten that it was us for whom Tim Berners-Lee designed HTML in the first place.) My “responsive” hack went like this: a) I would place the image in Visual mode; b) go into Text mode; and c) carve out the stuff between img and src and add new attributes for width and height. Those would usually be something like width=”50%” and height=”image”. This was an orthodox thing to do in HTML 4.01, but not in HTML 5. Browsers seem tolerant of this approach, so far, at least for pages viewed with the http protocol. I’ve checked old posts that have images marked up that way, and it’s not a problem. Yet. (Newer browser versions may not be so tolerant.) Nearly all images, however, fail to load in Firefox, Chrome and Brave when viewed through https.
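As an aside on item 2: the HTML 4-era attribute hack can, I believe, be rewritten as an inline style, which HTML5 does accept (percentage values in the width attribute are no longer valid there). A hypothetical sketch of that conversion:

```python
# Hypothetical sketch: convert attribute-based percentage sizing to an
# inline style, which HTML5 validates.
import re

def modernize_size_hack(img_tag):
    """Replace width="NN%" height="..." attributes with an inline style."""
    return re.sub(r'width="(\d+%)" height="[^"]*"',
                  r'style="width:\1;height:auto"', img_tag)

old = '<img src="photo.jpg" width="50%" height="image">'
print(modernize_size_hack(old))
# <img src="photo.jpg" style="width:50%;height:auto">
```

Whether that would survive the next “best practice” any better is, of course, the open question.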

So the main questions remaining are:

  1. Is this something I can correct globally with a hack in my own blogs?
  2. If so, is the hack within the theme, the CSS, the PHP, or what?
  3. If not, is it something the übergeeks at Harvard blogs can fix?
  4. If it’s not something they can fix, is my only choice to go back and change every image from the blogs’ beginnings (or just live with the breakage)?
  5. If that’s required, what’s to keep some new change in HTML 5, or WordPress, or the next “best practice” from breaking everything that came before all over again?

Thanks again for all your help, folks. Much appreciated. (And please keep it coming. I’m sure I’m not alone with this problem.)


This photo of the San Juan River in Utah is among the tens of thousands of aerial photos I’ve put on Flickr. It might be collateral damage if Yahoo dies or fails to sell the service to a worthy buyer.

Flickr is far from perfect, but it is also by far the best online service for serious photographers. At a time when the center of photographic gravity is drifting from arts & archives to selfies & social, Flickr remains both retro and contemporary in the best possible ways: a museum-grade treasure it would hurt terribly to lose.

Alas, it is owned by Yahoo, which is, despite Marissa Mayer’s best efforts, circling the drain.

Flickr was created and lovingly nurtured by Stewart Butterfield and Caterina Fake, from its creation in 2004 through its acquisition by Yahoo in 2005 and until their departure in 2008. Since then it’s had ups and downs. The latest down was the departure of Bernardo Hernandez in 2015.

I don’t even know who, if anybody, runs it now. It’s sinking in the ratings. According to Petapixel, it’s probably up for sale. Writes Michael Zhang, “In the hands of a good owner, Flickr could thrive and live on as a dominant photo sharing option. In the hands of a bad one, it could go the way of MySpace and other once-powerful Internet services that have withered away from neglect and lack of innovation.”

Naturally, the natives are restless. (Me too. I currently have 62,527 photos parked and curated there. They’ve had over ten million views and run about 5,000 views per day. I suppose it’s possible that nobody is more exposed in this thing than I am.)

So I’m hoping a big and successful photography-loving company will pick it up. I volunteer Adobe. It has the photo editing tools most used by Flickr contributors, and I expect it would do a better job of taking care of both the service and its customers than would Apple, Facebook, Google, Microsoft or other possible candidates.

Less likely, but more desirable, is some kind of community ownership. Anybody up for a kickstarter?

[Later…] I’m trying out 500px. Seems better than Flickr in some respects so far. Hmm… Is it possible to suck every one of my photos, including metadata, out of Flickr via its API and bring them over to 500px?
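For what it's worth, the export question can at least be sketched. Here is a hypothetical Python sketch against Flickr's public REST API, where YOUR_KEY and YOUR_USER_ID are placeholders; private photos would additionally need authentication, and the static-URL pattern follows Flickr's documented farm/server/id_secret format:

```python
# Hypothetical sketch: list a user's photos plus metadata via Flickr's
# REST API and reconstruct direct image URLs for download.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://api.flickr.com/services/rest/"

def page_request_url(api_key, user_id, page=1):
    """Build a flickr.people.getPhotos request URL (JSON, max 500/page)."""
    params = {
        "method": "flickr.people.getPhotos",
        "api_key": api_key,
        "user_id": user_id,
        "extras": "description,date_taken,tags,geo",  # metadata to carry over
        "per_page": 500,
        "page": page,
        "format": "json",
        "nojsoncallback": 1,
    }
    return API + "?" + urlencode(params)

def static_url(photo):
    """Direct image URL from one photo record in the API response."""
    return ("https://farm{farm}.staticflickr.com/"
            "{server}/{id}_{secret}.jpg").format(**photo)

def fetch_page(api_key, user_id, page=1):
    """Fetch one page of photo records (network call; not exercised here)."""
    with urlopen(page_request_url(api_key, user_id, page)) as resp:
        return json.load(resp)["photos"]["photo"]
```

The first response also reports a total page count, so a full export would loop over pages and feed each static URL (and its metadata) to whatever 500px accepts for import.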

I also like Thomas Hawk‘s excellent defense of Flickr, here.

 

