Internet



On a mailing list that obsesses about All Things Networking, another member cited what he called “the Doc Searls approach” to something. Since it was a little off (though kind and well-intended), I responded with this (lightly edited):

The Doc Searls approach is to put as much agency as possible in the hands of individuals first, and self-organized groups of individuals second. In other words, equip demand to engage and drive supply on customers’ own terms and in their own ways.

This is supported by the wide-open design of TCP/IP in the first place, which at least models (even if providers don’t fully give us) an Archimedean place to stand, and a wide-open market for levers that help us move the world—one in which the practical distance between everyone and everything rounds to zero.

To me this is a greenfield that has been mostly fallow for the duration. There are exceptions (and encouraging those is my personal mission), but mostly what we live with are industrial age models that assume from the start that the most leveraged agency is central, and that all the most useful intelligence (lately with AI and ML being the most hyper-focused on and fantasized about) should naturally be isolated inside corporate giants with immense data holdings and compute factories.

Government oversight of these giants and what they do is nigh unthinkable, much less do-able. While regulators aplenty know and investigate the workings of oil refineries and nuclear power plants, there are no equivalents for Google’s, Facebook’s or Amazon’s vast refineries of data and plants doing AI, ML and much more. All the expertise is working for those companies or selling their skills in the marketplace. (The public minded work in universities, I suppose.) I don’t lament this, by the way. I just note that it pretty much can’t happen.

More importantly, we have seen, over and over, that compute powers of many kinds will be far more leveraged for all when individuals can apply them. We saw that when computing got personal, when the Internet gave everybody a place to operate on a common network that spanned the world, and when both could fit in a hand-held rectangle.

The ability for each of us to not only drive prices individually, but to retrieve the virtues of the bazaar to the networked marketplace, will eventually win out. In the meantime it appears the best we can do is imagine that the full graces of computing and networks are what only big companies can do for (and to) us.

Bonus link: a talk I gave last week in Munich.

So I thought it might be good to surface that here. At least it partly explains why I’ve been working more and blogging less lately.


Before we start, let me explain that ATSC 1.0 is the HDTV standard, and defines what you get from HDTV stations over the air and cable. It dates from the last millennium. Resolution currently maxes out at 1080i, which fails to take full advantage of even the lowest-end HDTVs sold today, which are 1080p (better than 1080i).

Your new 4K TV or computer screen has 4x the resolution and “upscales” the ATSC picture it gets over the air or from cable. But actual 4K video looks better. Sources for that include satellite TV providers (DirecTV and Dish) and streaming services (Netflix, Amazon, YouTube, etc.).

In other words, the TV broadcast industry is to 4K video what AM radio is to FM. (Or what both are to streaming.)

This is why our new FCC chairman is stepping up for broadcasters. In FCC’s Pai Proposes ATSC 3.0 Rollout, John Eggerton (@eggerton) of B&C (Broadcasting & Cable) begins,

New FCC chairman Ajit Pai signaled Thursday that he wants broadcasters to be able to start working on tomorrow’s TV today.

Pai, who has only been in the job since Jan. 20, wasted no time prioritizing that goal. He has already circulated a Notice of Proposed Rulemaking to the other commissioners that would allow TV stations to start rolling out the ATSC 3.0 advanced TV transmission standard on a voluntary basis. He hopes to issue final authorization for the new standard by the end of the year, he said in an op-ed in B&C explaining the importance of the initiative.

“Next Gen TV matters because it will let broadcasters offer much better services in a variety of ways,” Pai wrote. “Picture quality will improve with 4K transmissions. Accurate sound localization and customizable sound mixes will produce an immersive audio experience. Broadcasters will be able to provide advanced emergency alerts with more information, more tailored to a viewer’s particular location. Enhanced personalization and interactivity will enable better audience measurement, which in turn will make for higher-quality advertising—ads relevant to you and that you actually might want to see. Perhaps most significantly, consumers will easily be able to watch over-the-air programming on mobile devices.”

Three questions here.

  1. Re: personalization, will broadcasters and advertisers agree to our terms rather than vice versa? Term #1: #NoStalking. So far, I doubt it. (Not that the streamers are ready either, but they’re more likely to listen.)
  2. How does this square with the Incentive Auction, which—if it succeeds—will get rid of most over-the-air TV?
  3. What will this do for (or against) cable, which is already having a helluva time wedging too many channels into its available capacity, and does it by compressing the crap out of everything, filling the screen with artifacts (those sections of skin or ball fields that look plaid or pixelated)?

Personally, I think both over the air and cable TV are dead horses walking, and ATSC 3.0 won’t save them. We’ll still have cable, but will use it mostly to watch and interact with streams, most of which will come from producers and distributors that were Net-native in the first place.

But I could be wrong about any or all of this. Either way (or however), tell me how.

 


Imagine you’re on a busy city street where everybody who disagrees with you disappears.

We have that city now. It’s called media—especially the social kind.

You can see how this works on the Wall Street Journal’s Blue Feed, Red Feed page. Here’s a screen shot of the feed for “Hillary Clinton” (one among eight polarized topics):

[screenshot: Blue Feed, Red Feed for “Hillary Clinton”]

Both invisible to the other.

We didn’t have that in the old print and broadcast worlds, and still don’t, where they persist. (For example, on news stands, or when you hit SCAN on a car radio.)

But we have it in digital media.

Here’s another difference: a lot of the stuff that gets shared is outright fake. There’s a lot of concern about that right now:

[image: fakenews]

Why? Well, there’s a business in it. More eyeballs, more advertising, more money, for more eyeballs for more advertising. And so on.

Those ads are aimed by tracking beacons planted in your phones and browsers, feeding data about your interests, likes and dislikes to robot brains that work as hard as they can to know you and keep feeding you more stuff that stokes your prejudices. Fake or not, what you’ll see is stuff you are likely to share with others who do the same. The business that pays for this is called “adtech,” also known as “interest based” or “interactive” advertising. But those are euphemisms. Its science is all about stalking. They can plausibly deny it’s personal. But it is.

The “social” idea is “markets as conversations” (a personal nightmare for me, gotta say). The business idea is to drag as many eyeballs as possible across ads that are aimed by the same kinds of creepy systems. The latter funds the former.

Rather than unpack that, I’ll leave it up to the rest of y’all, with a few links:

 

I want all the help I can get unpacking this, because I’m writing about it in a longer form than I’m indulging in here. Thanks.


[3 December update: Here is a video of the panel.]

So I was on a panel at WebScience@10 in London (@WebScienceTrust, #WebSci10), where the first question asked was, “What are two aspects of ‘trust and the Web’ that you think are most relevant/important at the moment?” My answer went something like this:

1) The Net is young, and the Web with it.

Both were born in their current forms on 30 April 1995, when the NSFNET backed off from forbidding commercial traffic on its pipes. This opened the whole Net to absolutely everything, exactly when the graphical Web browser became fully useful.

Twenty-one years in the history of a world is nothing. We’re still just getting started here.

2) The Internet, like nature, did not come with privacy. And privacy is personal. We need to start there.

We arrived naked in this new world, and — like Adam and Eve — still don’t have clothing and shelter.

The browser should have been a private tool in the first place, but it wasn’t; and it won’t be, so long as we leave improving it mostly up to companies with more interest in violating our privacy than providing it.

Just 21 years into this new world, we still need our own clothing, shelter, vehicles and private spaces. Browsers included. We will only get privacy if our tools provide it as a simple fact.

We also need to be the first parties, rather than the second ones, in our social and business agreements. In other words, others need to accept our terms, rather than vice versa. As first parties, we are independent. As second parties, we are dependent. Simple as that. Without independence, without agency, without the ability to initiate, without the ability to obtain agreement on our own terms, it’s all just more of the same old industrial model.

In the physical world, our independence earns respect, and that’s what we give to others as a matter of course. Without that respect, we don’t have civilization. This is why the Web we have today is still largely uncivilized.

We can only civilize the Net and the Web by inventing digital clothing and doors for people, and by providing standard agreements private individuals can assert in their dealings with others.

Inventing yet another wannabe unicorn to provide “privacy as a service” won’t do it. Nor will regulating the likes of Facebook and Google, or expecting them to become interested in building protections, when their businesses depend on the absence of those protections.

Fortunately, work has begun on personal privacy tools, and agreements we can each assert. And we can talk about those.


In Google Has Quietly Dropped Ban on Personally Identifiable Web Tracking, @JuliaAngwin and @ProPublica unpack what the subhead already says well: “Google is the latest tech company to drop the longstanding wall between anonymous online ad tracking and user’s names.”

So here’s a message from humanity to Google and all the other spy organizations in the surveillance economy: Tracking is no less an invasion of privacy in apps and browsers than it is in homes, cars, purses, pants and wallets.

That’s because our apps and browsers, like the devices on which we use them, are personal and private. Simple as that. (HT to @Apple for digging that fact.)

To help the online advertising business understand what ought to be obvious (but isn’t yet), let’s clear up some misconceptions:

  1. Tracking people without their clear and conscious permission is wrong. (Meaning The Castle Doctrine should apply online no less than it does in the physical world.)
  2. Assuming that using a browser or an app constitutes some kind of “deal” to allow tracking is wrong. (Meaning implied consent is not the real thing. See The Tradeoff Fallacy: How Marketers Are Misrepresenting American Consumers and Opening Them Up to Exploitation, by Joseph Turow, Ph.D. and the Annenberg School for Communication at the University of Pennsylvania.)
  3. Claiming that advertising funds the “free” Internet is wrong. (The Net has been free for the duration. Had it been left up to the billing companies of the world, we never would have had it, and they never would have made their $trillions on it. More at New Clues.)

What’s right is civilization, which relies on manners. Advertisers, their agencies and publishers haven’t learned manners yet.

But they will.

At the very least, regulations will force companies harvesting personal data to obey those they harvest it from, with fines for not obeying. Toward that end, Europe’s General Data Protection Regulation already has compliance officers at large corporations shaking in their boots, for good reason: “a fine up to 20,000,000 EUR, or in the case of an undertaking, up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher (Article 83, Paragraph 5 & 6).” Those come into force in 2018. Stay tuned.

Companies harvesting personal data also shouldn’t be surprised to find themselves re-classified as fiduciaries, no less responsible than accountants, brokers and doctors for confidentiality on behalf of the people they collect data from. (Thank you, professors Balkin and Zittrain, for that legal and rhetorical hack. Brilliant, and well done. Or begun.)

The only way to fully fix publishing, advertising and surveillance-corrupted business in general is to equip individuals with terms they can assert in dealing with others online — and to do it at scale. Meaning we need terms that work the same way across all the companies we deal with. That’s why Customer Commons and Kantara are working on exactly those terms. For starters. And these will be our terms — not separate and different ones that live at each company we deal with. Those aren’t working now, and never will work, because they can’t. And they can’t because when you have to deal with as many different terms as there are parties supplying them, the problem becomes unmanageable, and you get screwed. That’s why —

There’s a new sheriff on the Net, and it’s the individual. Who isn’t a “user,” by the way. Or a “consumer.” With new terms of our own, we’re the first party. The companies we deal with are second parties. Meaning that they are the users, and the consumers, of our legal “content.” And they’ll like it too, because we actually want to do good business with good companies, and are glad to make deals that work for both parties. Those include expressions of true loyalty, rather than the coerced kind we get from every “loyalty” card we carry in our purses and wallets.

When we are the first parties, we also get scale. Imagine changing your terms, your contact info, or your last name, for every company you deal with — and doing that in one move. That can only happen when you are the first party.

So here’s a call to action.

If you want to help blow up the surveillance economy by helping develop much better ways for demand and supply to deal with each other, show up next week at the Computer History Museum for VRM Day and the Internet Identity Workshop, where there are plenty of people already on the case.

Then follow the work that comes out of both — as if your life depends on it. Because it does.

And so does the economy that will grow atop true privacy online and the freedoms it supports. Both are a helluva lot more leveraged than the ill-gotten data gains harvested by the Lumascape doing unwelcome surveillance.

Bonus links:

  1. All the great research Julia Angwin & ProPublica have been doing on a problem that data harvesting companies have been causing and can’t fix alone, even with government help. That’s why we’re doing the work I just described.
  2. What Facebook Knows About You Can Matter Offline, an OnPoint podcast featuring Julia, Cathy O’Neill and Ashkan Soltani.
  3. Everything by Shoshana Zuboff. From her home page: “I’ve dedicated this part of my life to understanding and conceptualizing the transition to an information civilization. Will we be the masters of information, or will we be its slaves? There’s a lot of work to be done, if we are to build bridges to the kind of future that we can call ‘home.’ My new book on this subject, Master or Slave? The Fight for the Soul of Our Information Civilization, will be published by Public Affairs in the U.S. and Eichborn in Germany in 2017.” Can’t wait.
  4. Don Marti’s good thinking and work with Aloodo and other fine hacks.


Who Owns the Mobile Experience? is a report by Unlockd on mobile advertising in the U.K. To clarify the way toward an answer, the report adds, “mobile operators or advertisers?”

The correct answer is neither. Nobody’s experience is “owned” by somebody else.

True, somebody else may cause a person’s experience to happen. But causing isn’t the same as owning.

We own our selves. That includes our experiences.

This is an essential distinction. For lack of it, both mobile operators and advertisers are delusional about their customers and consumers. (That’s an important distinction too. Operators have customers. Advertisers have consumers. Customers pay, consumers may or may not. That the former also qualifies as the latter does not mean the distinction should not be made. Sellers are far more accountable to customers than advertisers are to consumers.)

It’s interesting that Unlockd’s survey shows almost identically high levels of delusion by advertisers and operators…

  • 85% of advertisers and 82% of operators “think the mobile ad experience is positive for end users”
  • 3% of advertisers and 1% of operators admit “it could be negative”
  • Of the 85% of advertisers who think the experience is positive, 50% “believe it’s because products advertised are relevant to the end user”
  • “the reasons for this opinion is driven from the belief that users are served detail around products that are relevant to them.”

… while:

  • 47% of consumers think “the mobile phone ad experience (for them) is positive”
  • 39% of consumers “think ads are irrelevant”
  • 36% blame “poor or irritating format”
  • 40% “believe the volume of ads served to them are a main reason for the negative experience”

It’s amazing but not surprising to me that mobile operators apparently consider their business to be advertising more than connectivity. This mindset is also betrayed by AT&T charging a premium for privacy and Comcast wanting to do the same. (Advertising today, especially online, does not come with privacy. Quite the opposite, in fact. A great deal of it is based on tracking people. Shoshana Zuboff calls this surveillance capitalism.)

Years ago, when I consulted BT, JP Rangaswami (@jobsworth), then BT’s Chief Scientist, told me phone companies’ core competency was billing, not communications. Since those operators clearly wish to be in the “content” business now, and to make money the same way print and broadcast did for more than a century, it makes sense that they imagine themselves now to be one-way conduits for ad-fortified content, and not just a way people and things (including the ones called products and companies) can connect to each other.

The FCC and other regulators need to bear this in mind as they look at what operators are doing to the Internet. I mean, it’s good and necessary for regulators to care about neutrality and privacy of Internet services, but a category error is being made if regulators fail to recognize that the operators want to be “content distributors” on the models of commercial broadcasting (funded by advertising) and the post office (funded by junk mail, which is the legacy model of today’s personalized direct response advertising  online).

I also have to question how consumers were asked by this survey about their mobile ad experiences. Let me see a show of hands: how many here consider their mobile phone ad experience “positive?” Keep your hands down if you are associated in any way with advertising, phone companies or publishing. When I ask this question, or one like it (e.g. “Who here wants to see ads on their phone?”) in talks I give, the number of raised hands is usually zero. If it’s not, the few parties with raised hands offer qualified responses, such as, “I’d like to see coupons when I’m in a store using a shopping app.”

Another delusion of advertisers and operators is that all ads should be relevant. They don’t need to be. In fact, the most valuable ads are not targeted personally, but across populations, so large populations can become familiar with advertised products and services.

It’s a simple fact that branding wouldn’t exist without massive quantities of ads being shown to people for whom the ads are irrelevant. Few of us would know the brands of Procter & Gamble, Unilever, L’Oreal, Coca-Cola, Nestlé, General Motors, Volkswagen, Mars or McDonald’s (the current top ten brand advertisers worldwide) if not for the massive amounts of money those companies spend advertising to people who will never buy their products but will damn sure know those products’ names. (Don Marti explains this well.)

A hard fact that the advertising industry needs to face is that there is very little appetite for ads on the receiving end. People put up with it on TV and radio, and in print, but for the most part they don’t like it. (The notable exceptions are print ads in fashion magazines and other high-quality publications. And classifieds.)

Appetites for ads, and all forms of content, should be consumers’ own. This means consumers need to be able to specify the kind of advertising they’re looking for, if any.

Even then, the far more valuable signal coming from consumers is (or will be) an actual desire for certain products and services. In marketing lingo, these signals are qualified leads. In VRM lingo, these signals are intentcasts. With intentcasting, the customers do the advertising, and are in full control of the process. And they are no longer mere consumers (which Jerry Michalski calls “gullets with wallets and eyeballs”).

It helps that there are dozens of companies in this business already.

So it would be far more leveraged for operators to work with those companies than with advertising systems so disconnected from reality that they’ve caused hundreds of millions of people to block ads on their mobile devices — and are in such deep denial of the market’s clear messages that they deny the legitimacy of a clear personal choice, misdirecting attention toward the makers of ad blocking tools, and away from what’s actually happening: people asserting power over their own lives and private spaces (e.g. their browsers) online.

If companies actually believe in free markets, they need to believe in free customers. Those are people who, at the very least, are in charge of their own experiences in the networked world.


The NYTimes says the Mandarins of language are demoting the Internet to a common noun. It is to be just “internet” from now on. Reasons:

Thomas Kent, The A.P.’s standards editor, said the change mirrored the way the word was used in dictionaries, newspapers, tech publications and everyday life.

“In our view, it’s become wholly generic, like ‘electricity’ or the ‘telephone,’” he said. “It was never trademarked. It’s not based on any proper noun. The best reason for capitalizing it in the past may have been that the word was new. But at one point, I’ve heard, ‘phonograph’ was capitalized.”

But we never called electricity “the Electricity.” And “the telephone” referred to a single thing of which there are billions of individual examples.

What was it about “the Internet” that made us want to capitalize it in the first place? Is usage alone reason enough to stop respecting that?

Some of my tech friends say the “Internet” we’ve had for all these years is just one prototype: the first and best-known of many other possible ones.

All due respect, but: bah.

There is only one Internet just like there is only one Universe. There are other examples of neither.

Formalizing the lower-case “internet,” for whatever reason, dismisses what’s transcendent and singular about the Internet we have: a whole that is more, and other, than a sum of parts.

I know it looks like the Net is devolving into many separate systems, isolated and silo’d to some degree. We see that with messaging, for example. Hundreds of different ones, most of them incompatible, on purpose. We have specialized mobile systems that provide variously open vs. sphinctered access (such as T-Mobile’s “binge” allowance for some content sources but not others), zero-rated not-quite-internets (such as Facebook’s Free Basics) and countries such as China, where many domains and uses are locked out.

Some questions…

Would we enjoy a common network by any name today if the Internet had been lower-case from the start?

Would makers or operators of any of the parts that comprise the Internet’s whole feel any fealty to what at least ought to be the common properties of that whole? Or would they have made sure that their parts only got along, at most, with partners’ parts? Would the first considerations by those operators not have been billing and tariffs agreed to by national regulators?

Hell, would the four of us have written The Cluetrain Manifesto? Would David Weinberger and I have written World of Ends or New Clues if the Internet had lacked upper-case qualities?

Would the world experience the absence of distance and cost across The Giant Zero in its midst, were it not for the Internet’s founding design, which left out billing and proprietary routing on purpose?

Would we have anything resembling the Internet of today if designing and building it had been left up to phone and cable companies? Or to governments (even respecting the roles government activities did play in creating the Net we do have)?

I think the answer to all of those would be no.

In The Compuserve of Things, Phil Windley begins, “On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980’s, or will we learn the lessons of the Internet and build a true Internet of Things?”

Would he, or anybody, ask such questions, or aspire to such purposes, were it not for the respect many of us pay to the upper-cased-ness of “the Internet?”

How does demoting Internet from proper to common noun not risk (or perhaps even assure) its continued devolution to a collection of closed and isolated parts that lack properties (e.g. openness and commonality) possessed only by the whole?

I don’t know. But I think these kinds of questions are important to ask, now that the keepers of usage standards have demoted what the Net’s creators made — and ignore why they made it.

If you care at all about this, please dig Archive.org‘s Locking the Web open: a Call for a Distributed Web, Brewster Kahle’s post by the same title, covering more ground, and the Decentralized Web Summit, taking place on June 8-9. (I’ll be there in spirit. Alas, I have other commitments on the East Coast.)

For some reason, many or most of the images in this blog don’t load in some browsers. Same goes for the ProjectVRM blog as well. This is new, and I don’t know exactly why it’s happening.

So far, I gather it happens only when the URL is https and not http.

Okay, here’s an experiment. I’ll add an image here in the WordPress (4.4.2) composing window, and center it in the process, all in Visual mode. Here goes:

[image: cheddar3]

Now I’ll hit “Publish,” and see what we get.

When the URL starts with https, it fails to show in—

  • Firefox (46.0.1)
  • Chrome (50.0.2661.102)
  • Brave (0.9.6)

But it does show in—

  • Opera (12.16)
  • Safari (9.1).

Now I’ll go back and edit the HTML for the image in Text mode, taking out class=”aligncenter size-full wp-image-10370” from between the img and src attributes, and bracketing the whole image with the <center> and </center> tags. Here goes:

[image: cheddar3]

Hmm… The <center> tags don’t work, and I see why when I look at the HTML in Text mode: WordPress removes them. That’s new. Thus another old-school HTML tag gets sidelined. 🙁

Okay, I’ll try again to center it, this time by taking out class=”aligncenter size-full wp-image-10370” in Text mode, and clicking on the centering icon in Visual mode. When I check back in Text mode, I see WordPress has put class=”aligncenter” between img and src. I suppose that attribute is understood by WordPress’ (or the theme’s) CSS while the old <center> tags are not. Am I wrong about that?

Now I’ll hit the update button, rendering this—

[image: cheddar3]

—and check back with the browsers.

Okay, it works with all of them now, whether the URL starts with https or http.

So the apparent culprit, at least by this experiment, is centering with anything other than class=”aligncenter”, which seems to require inserting a centered image in Visual mode, editing out size-full wp-image-whatever (note: whatever is a number that’s different for every image I put in a post) in Text mode, and then going back and centering it in Visual mode, which puts class=”aligncenter” in place of what I edited out in Text mode. Fun.
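To make that concrete, here is roughly what the edit looks like in Text mode. (The src path here is a stand-in, and what WordPress actually generates may differ by version and theme.)

  <!-- Before: what Visual mode inserts for a centered image -->
  <img class="aligncenter size-full wp-image-10370" src="/doc/files/cheddar3.jpg" alt="cheddar3" />

  <!-- After: everything but aligncenter edited out; this one loads over both http and https -->
  <img class="aligncenter" src="/doc/files/cheddar3.jpg" alt="cheddar3" />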

Here’s another interesting (and annoying) thing. When I’m editing in the composing window, the url is https. But when I “view post” after publishing or updating, I get the http version of the blog, where I can’t see what doesn’t load in the https version. But when anybody comes to the blog by way of an external link, such as a search engine or Twitter, they see the https version, where the graphics won’t load if I haven’t fixed them manually in the manner described above.

So https is clearly breaking old things, but I’m not sure if it’s https doing it, something in the way WordPress works, or something in the theme I’m using. (In WordPress it’s hard — at least for me — to know where WordPress ends and the theme begins.)

Dave Winer has written about how https breaks old sites, and here we can see it happening on a new one as well. WordPress, or at least the version provided for https://blogs.harvard.edu bloggers, may be buggy, or behind the times with the way it marks up images. But that’s a guess.

I sure hope there is some way to gang-edit all my posts going back to 2007. If not, I’ll just have to hope readers will know to take the s out of https and re-load the page. Which, of course, nearly all of them won’t.
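For whatever it’s worth: on a WordPress install where you have shell access, WP-CLI’s search-replace command can do exactly this kind of gang-edit, rewriting URLs across every post in the database. A minimal sketch, assuming this blog’s address — whether the Harvard blogs setup exposes WP-CLI to mere bloggers is another question:

  # Preview the changes without writing anything
  wp search-replace 'http://blogs.harvard.edu/doc' 'https://blogs.harvard.edu/doc' --dry-run

  # Run it for real (after backing up the database)
  wp search-replace 'http://blogs.harvard.edu/doc' 'https://blogs.harvard.edu/doc'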

It doesn’t help that all the browser makers now obscure the protocol, so you can’t see whether a site is http or https, unless you copy and paste it. They only show what comes after the // in the URL. This is a very unhelpful dumbing-down “feature.”

Brave is different. The location bar isn’t there unless you mouse over it. Then you see the whole URL, including the protocol to the left of the //. But if you don’t do that, you just see a little padlock (meaning https, I gather), then (with this post) “blogs.harvard.edu | Doc Searls Weblog • Help: why don’t images load in https?” I can see why they do that, but it’s confusing.

By the way, I probably give the impression of being a highly technical guy. I’m not. The only code I know is Morse. The only HTML I know is vintage. I’m lost with <span> and <div> and wp-image-whatever, don’t hack CSS or PHP, and don’t understand why <em> is now preferable to <i> if you want to italicize something. (Fill me in if you like.)

So, technical folks, please tell us wtf is going on here (or give us your best guess), and point to simple and global ways of fixing it.

Thanks.

[Later…]

Some answer links, mostly from the comments below:

That last one, which is cited in two comments, says this:

Chris Cree who experienced the same problem discovered that the WP_SITEURL and WP_HOME constants in the wp-config.php file were configured to structure URLs with http instead of https. Cree suggests users check their settings to see which URL type is configured. If both the WordPress address and Site URLs don’t show https, it’s likely causing issues with responsive images in WordPress 4.4.
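For reference, those two constants live in wp-config.php, and pointing them at https looks something like this (the address is just a stand-in for the blog’s real one):

  define('WP_HOME', 'https://blogs.harvard.edu/doc');
  define('WP_SITEURL', 'https://blogs.harvard.edu/doc');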

Two things about that:

  1. I can’t see where in Settings the URL type is mentioned, much less configurable. But Settings has a load of categories and choices within categories, so I may be missing it.
  2. I wonder what will happen to old posts I edited to make images responsive. (Some background on that. “Responsive design,” an idea that seems to have started here in 2010, has since led to many permutations of complications in code that’s mostly hidden from people like me, who just want to write something on a blog or a Web page. We all seem to have forgotten that it was us for whom Tim Berners-Lee designed HTML in the first place.) My “responsive” hack went like this: a) I would place the image in Visual mode; b) go into Text mode; and c) carve out the stuff between img and src and add new attributes for width and height. Those would usually be something like width=”50%” and height=”image”. This was an orthodox thing to do in HTML 4.01, but not in HTML 5. Browsers seem tolerant of this approach, so far, at least for pages viewed with the http protocol. I’ve checked old posts that have images marked up that way, and it’s not a problem. Yet. (Newer browser versions may not be so tolerant.) Nearly all images, however, fail to load in Firefox, Chrome and Brave when viewed through https. (There’s a sketch of that markup just after this list.)
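Here is roughly what that hack looks like, next to an HTML5-friendly way of doing the same thing (the file name is from the example above, and the style-attribute version is just one standards-safe option):

  <!-- The old hack: a percentage width attribute, orthodox in HTML 4.01 but not HTML 5 -->
  <img src="/doc/files/cheddar3.jpg" alt="cheddar3" width="50%">

  <!-- An HTML5-friendly equivalent: move the sizing into CSS -->
  <img src="/doc/files/cheddar3.jpg" alt="cheddar3" style="width:50%; height:auto;">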

So the main questions remaining are:

  1. Is this something I can correct globally with a hack in my own blogs?
  2. If so, is the hack within the theme, the CSS, the PHP, or what?
  3. If not, is it something the übergeeks at Harvard blogs can fix?
  4. If it’s not something they can fix, is my only choice to go back and change every image from the blogs’ beginnings (or just live with the breakage)?
  5. If that’s required, what’s to keep some new change in HTML 5, or WordPress, or the next “best practice” from breaking everything that came before all over again?

Thanks again for all your help, folks. Much appreciated. (And please keep it coming. I’m sure I’m not alone with this problem.)

[photo: one readers find among the most interesting of the 13,000+ aerial photos I’ve put on Flickr]

This photo of the San Juan River in Utah is among dozens of thousands I’ve put on Flickr. It might be collateral damage if Yahoo dies or fails to sell the service to a worthy buyer.

Flickr is far from perfect, but it is also by far the best online service for serious photographers. At a time when the center of photographic gravity is drifting from arts & archives to selfies & social, Flickr remains both retro and contemporary in the best possible ways: a museum-grade treasure it would hurt terribly to lose.

Alas, it is owned by Yahoo, which is, despite Marissa Mayer’s best efforts, circling the drain.

Flickr was created and lovingly nurtured by Stewart Butterfield and Caterina Fake, from its creation in 2004 through its acquisition by Yahoo in 2005 and until their departure in 2008. Since then it’s had ups and downs. The latest down was the departure of Bernardo Hernandez in 2015.

I don’t even know who, if anybody, runs it now. It’s sinking in the ratings. According to Petapixel, it’s probably up for sale. Writes Michael Zhang, “In the hands of a good owner, Flickr could thrive and live on as a dominant photo sharing option. In the hands of a bad one, it could go the way of MySpace and other once-powerful Internet services that have withered away from neglect and lack of innovation.”

Naturally, the natives are restless. (Me too. I currently have 62,527 photos parked and curated there. They’ve had over ten million views and run about 5,000 views per day. I suppose it’s possible that nobody is more exposed in this thing than I am.)

So I’m hoping a big and successful photography-loving company will pick it up. I volunteer Adobe. It has the photo editing tools most used by Flickr contributors, and I expect it would do a better job of taking care of both the service and its customers than would Apple, Facebook, Google, Microsoft or other possible candidates.

Less likely, but more desirable, is some kind of community ownership. Anybody up for a kickstarter?

[Later…] I’m trying out 500px. Seems better than Flickr in some respects so far. Hmm… Is it possible to suck every one of my photos, including metadata, out of Flickr by its API and bring it over to 500px?
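On the API question: yes, at least for the photos and most of their metadata. Flickr’s REST API has methods for listing one’s photos and pulling each one’s title, description and tags. A minimal sketch with curl — KEY and PHOTO_ID are placeholders, private photos need an authenticated call, and getting it all into 500px is a separate problem:

  # List my photos, 500 at a time
  curl 'https://api.flickr.com/services/rest/?method=flickr.people.getPhotos&api_key=KEY&user_id=me&per_page=500&format=json&nojsoncallback=1'

  # Pull title, description and tags for one photo
  curl 'https://api.flickr.com/services/rest/?method=flickr.photos.getInfo&api_key=KEY&photo_id=PHOTO_ID&format=json&nojsoncallback=1'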

I also like Thomas Hawk‘s excellent defense of Flickr, here.

 


[screenshot: subway Speedtest results]

At the uptown end of the 59th Street/Columbus Circle subway platform there hangs from the ceiling a box with three disks on fat stalks, connected by thick black cables that run to something unseen in the downtown direction. Knowing a few things about radio and how it works, I saw that and thought, Hmm… That has to be a cell. I wonder whose? So I looked at my phone and saw my T-Mobile connection had five dots (that’s iPhone for bars), and said LTE as well. So I ran @Ookla‘s Speedtest app and got the results above.

Pretty good, no?

Sure, you’re not going to binge-watch anything there, or upload piles of pictures to some cloud, but you can at least tug on your e-tether to everywhere for a few minutes. Nice to have.

So I’m wondering, @TMobile… Are those speeds the max one should expect from LTE when your local cell is almost as close as your hat?

And how long before you put these along the rest of the A/B/C/D Train routes? (The only other one I know is at 72nd, a B/C stop.) Or the rest of the subway system? In Boston too? BART? (Gotta hit all my cities.)

Meanwhile, thanks for taking care of my Main Stop in midtown.

