security


Passwords are hell.

Worse, to make your hundreds of passwords as safe as possible, they need to be nearly impossible for others to discover—and for you to remember.

Unless you’re a wizard, this all but requires using a password manager.†

Think about how hard that job is. First, it’s impossible for developers of password managers to do everything right:

  • Most of their customers and users need to have logins and passwords for hundreds of sites and services on the Web and elsewhere in the networked world
  • Every one of those sites and services has its own gauntlet of methods for registering logins and passwords, and for remembering and changing them
  • Every one of those sites and services has its own unique user interfaces, each with its own peculiarities
  • All of those UIs change, sometimes often.

Keeping up with that mess while also keeping personal data safe from both user error and determined bad actors is about as tall as an order can get. And then you have to do all that work for each of the millions of customers you’ll need if you’re going to make the kind of money required to keep abreast of those problems and provide the solutions they require.

So here’s the thing: the best we can do with passwords is the best that password managers can do. That’s your horizon right there.

Unless we can get past logins and passwords somehow.

And I don’t think we can. Not in the client-server ecosystem that the Web has become, and that industry never stopped being, since long before the Internet came along. That’s the real hell. Passwords are just a symptom.

We need to work around it. That’s my work now. Stay tuned here, here, and here for more on that.


† We need to fix that Wikipedia page.

Facial recognition by machines is out of control. Meaning our control. As individuals, and as a society.

Thanks to ubiquitous surveillance systems, including the ones in our own phones, we can no longer assume we are anonymous in public places or private in private ones.

This became especially clear a few weeks ago when Kashmir Hill (@kashhill) reported in the New York Times that a company called Clearview.ai “invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.”

If your face has ever appeared anywhere online, it’s a sure bet that you are not faceless to any of these systems. Clearview, Kashmir says, has “a database of more than three billion images” from “Facebook, YouTube, Venmo and millions of other websites” and “goes far beyond anything ever constructed by the United States government or Silicon Valley giants.”

Among law enforcement communities, only New Jersey’s has started to back off on using Clearview.

Worse, Clearview is just one company. Laws also take years to catch up with developments in facial recognition, or to get ahead of them, if they ever can. And let’s face it: government interests are highly conflicted here. Law enforcement and intelligence agencies’ need to know all they can is at extreme odds with our need, as human beings, to assume we enjoy at least some freedom from being known by God-knows-what, everywhere we go.

Personal privacy is the heart of civilized life, and beats strongest in democratic societies. It’s not up for “debate” between companies and governments, or political factions. Loss of privacy is a problem that affects each of us, and calls for action by each of us as well.

A generation ago, when the Internet was still new to us, four guys (one of whom was me) nailed a document called The Cluetrain Manifesto to a door on the Web. It said,

we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.

Since then their grasp has exceeded our reach. And with facial recognition they have gone too far.

Enough.

Now it’s time for our reach to exceed their grasp.

Now it’s time, finally, to make them deal with it.

I see a few ways, so far. I’m sure y’all will think of other and better ones. The Internet is good for that.

First is to use an image like the one above (preferably with a better design) as your avatar, favicon, or other facial expression. (Like I just did for @dsearls on Twitter.) Here’s a favicon we can all use until a better one comes along:

Second, sign the Stop facial recognition by surveillance systems petition I just put up at that link. Two hashtags:

  • #GOOMF, for Get Out Of My Face
  • #Faceless

Third is to stop blaming and complaining. That’s too easy, tends to go nowhere and wastes energy. Instead,

Fourth, develop useful and constructive ideas toward what we can do—each of us, alone and together—to secure, protect and signal our privacy needs and intentions in the world, in ways others can recognize and respect. We have those in the natural world. We don’t yet in the digital one. So let’s invent them.

Fifth is to develop the policies we need to stop the spread of privacy-violating technologies and practices, and to foster development of technologies that enlarge our agency in the digital world—and not just to address the wrongs being committed against us. (Which is all most privacy laws actually do.)


We know more than we can tell.

That one-liner from Michael Polanyi has been waiting half a century for a proper controversy, which it now has with facial recognition. Here’s how he explains it in The Tacit Dimension:

This fact seems obvious enough; but it is not easy to say exactly what it means. Take an example. We know a person’s face, and can recognize it among a thousand others, indeed among a million. Yet we usually cannot tell how we recognize a face we know. So most of this knowledge cannot be put into words.

Polanyi calls that kind of knowledge tacit. The kind we can put into words he calls explicit.

For an example of both at work, consider how, generally, we don’t know how we will end the sentences we begin, or how we began the sentences we are ending—and how the same is true of what we hear or read from other people whose sentences we find meaningful. The explicit survives only as fragments, but the meaning of what was said persists in tacit form.

Likewise, if we are asked to recall and repeat, verbatim, a paragraph of words we have just said or heard, we will find it difficult or impossible to do so, even if we have no trouble saying exactly what was meant. This is because tacit knowing, whether kept to one’s self or told to others, survives the natural human tendency to forget particulars after a few seconds, even when we very clearly understand what we have just said or heard.

Tacit knowledge and short-term memory are both features of human knowing and communication, not bugs. Even for people with extreme gifts of memorization (e.g. actors who can learn a whole script in one pass, or mathematicians who can learn pi to 4000 decimals), what matters more than the words or the numbers is their meaning. And that meaning is both more and other than what can be said. It is deeply tacit.

On the other hand—the digital hand—computer knowledge is only explicit, meaning a computer can know only what it can tell. At both knowing and telling, a computer can be far more complete and detailed than a human could ever be. And the more a computer knows, the better it can tell. (To be clear, a computer doesn’t know a damn thing. But it does remember—meaning it retrieves—what’s in its databases, and it does process what it retrieves. At all those activities it is inhumanly capable.)

So, the more a computer learns of explicit facial details, the better it can infer conclusions about that face, including ethnicity, age, emotion, wellness (or lack of it), and much else. Given a base of data about individual faces, and of names associated with those faces, a computer programmed to be adept at facial recognition can also connect faces to names, and say “This is (whomever).”

For all those reasons, computers doing facial recognition are proving useful for countless purposes: unlocking phones, finding missing persons and criminals, aiding investigations, shortening queues at passport portals, reducing fraud (for example at casinos), confirming age (saying somebody is too old or not old enough), finding lost pets (which also have faces). The list is long and getting longer.

Yet many (or perhaps all) of those purposes are at odds with the sense of personal privacy that derives from the tacit ways we know faces, our reliance on short-term memory, and our natural anonymity (literally, namelessness) among strangers. All of those are graces of civilized life in the physical world, and they are threatened by the increasingly widespread use—and uses—of facial recognition by governments, businesses, schools, and each other.

Louis Brandeis and Samuel Warren visited the same problem more than 130 years ago, when they became alarmed at the privacy risks suggested by photography, audio recording, and reporting on both via technologies that were far more primitive than those we have today. As a warning to the future, they wrote a landmark Harvard Law Review paper titled The Right to Privacy, which has served as a pole star of good sense ever since. Here’s an excerpt:

Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right “to be let alone.”10 Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.” For years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons;11 and the evil of invasion of privacy by the newspapers, long keenly felt, has been but recently discussed by an able writer.12 The alleged facts of a somewhat notorious case brought before an inferior tribunal in New York a few months ago,13 directly involved the consideration of the right of circulating portraits; and the question whether our law will recognize and protect the right to privacy in this and in other respects must soon come before our courts for consideration.

They also say the “right of the individual to be let alone…is like the right not to be assaulted or beaten, the right not to be imprisoned, the right not to be maliciously prosecuted, the right not to be defamed.”

To that list today we might also add, “the right not to be reduced to bits” or “the right not to be tracked like an animal—whether anonymously or not.”

But it’s hard to argue for those rights in our digital world, where computers can see, hear, draw and paint exact portraits of everything: every photo we take, every word we write, every spreadsheet we assemble, every database accumulating in our hard drives—plus those of every institution we interact with, and countless ones we don’t (or do without knowing the interaction is there).

Facial recognition by computers is a genie that is not going back in the bottle. And there are no limits to wishes the facial recognition genie can grant the organizations that want to use it, which is why pretty much everything is being done with it. A few examples:

  • Facebook’s Deep Face sells facial recognition for many purposes to corporate customers. Examples from that link: “Face Detection & Landmarks…Facial Analysis & Attributes…Facial Expressions & Emotion… Verification, Similarity & Search.” This is non-trivial stuff. Writes Ben Goertzel, “Facebook has now pretty convincingly solved face recognition, via a simple convolutional neural net, dramatically scaled.”
  • FaceApp can make a face look older, younger, whatever. It can even swap genders.
  • The FBI’s Next Generation Identification (NGI) involves (says Wikipedia) eleven companies and the National Center for State Courts (NCSC).
  • Snap has a patent for reading emotions in faces.
  • The MORIS™ Multi-Biometric Identification System is “a portable handheld device and identification database system that can scan, recognize and identify individuals based on iris, facial and fingerprint recognition,” and is typically used by law enforcement organizations.
  • Casinos in Canada are using facial recognition to “help addicts bar themselves from gaming facilities.” It’s opt-in: “The technology relies on a method of ‘self-exclusion,’ whereby compulsive gamblers volunteer in advance to have their photos banked in the system’s database, in case they ever get the urge to try their luck at a casino again. If that person returns in the future and the facial-recognition software detects them, security will be dispatched to ask the gambler to leave.”
  • Cruise ships are boarding passengers faster using facial recognition by computers.
  • Australia proposes scanning faces to see if viewers are old enough to look at porn.

Facial recognition systems are also getting better and better at what they do. A November 2018 NIST report on a massive study of facial recognition systems begins,

This report documents performance of face recognition algorithms submitted for evaluation on image datasets maintained at NIST. The algorithms implement one-to-many identification of faces appearing in two-dimensional images.

The primary dataset is comprised of 26.6 million reasonably well-controlled live portrait photos of 12.3 million individuals. Three smaller datasets containing more unconstrained photos are also used: 3.2 million webcam images; 2.5 million photojournalism and amateur photographer photos; and 90 thousand faces cropped from surveillance-style video clips. The report will be useful for comparison of face recognition algorithms, and assessment of absolute capability. The report details recognition accuracy for 127 algorithms from 45 developers, associating performance with participant names. The algorithms are prototypes, submitted in February and June 2018 by research and development laboratories of commercial face recognition suppliers and one university…

The major result of the evaluation is that massive gains in accuracy have been achieved in the last five years (2013-2018) and these far exceed improvements made in the prior period (2010-2013). While the industry gains are broad — at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013 — there remains a wide range of capabilities. With good quality portrait photos, the most accurate algorithms will find matching entries, when present, in galleries containing 12 million individuals, with error rates below 0.2%

Privacy freaks (me included) would like everyone to be creeped out by this. Yet many people are cool with it to some degree, and not just because they’re acquiescing to the inevitable: they’re relying on it because it makes interaction with machines easier—and they trust it.

For example, in Barcelona, CaixaBank is rolling out facial recognition at its ATMs, claiming that 70% of surveyed customers are ready to use it as an alternative to keying in a PIN, and that “66% of respondents highlighted the sense of security that comes with facial recognition.” That the bank’s facial recognition system “has the capability of capturing up to 16,000 definable points when the user’s face is presented at the screen” is presumably of little or no concern. Nor, also presumably, is the risk of what might get done with facial data if the bank gets hacked, or if it changes its privacy policy, or if it gets sold and the new owner can’t resist selling or sharing facial data with others who want it, or if (though more like when) government bodies require it.

A predictable pattern for every new technology is that what can be done will be done—until we see how it goes wrong and try to stop doing that. This has been true of every technology from stone tools to nuclear power and beyond. Unlike many other new technologies, however, it is not hard to imagine ways facial recognition by computers can go wrong, especially when it already has.

Two examples:

  1. In June, U.S. Customs and Border Protection, which relies on facial recognition and other biometrics, revealed that photos of people were compromised by a cyberattack on a federal subcontractor.
  2. In August, researchers at vpnMentor reported a massive data leak in BioStar 2, a widely used “Web-based biometric security smart lock platform” that uses facial recognition and fingerprinting technology to identify users. Notes the report, “Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.” vpnMentor also had a hard time getting through to company officials so the leak could be fixed.

As organizations should know (but in many cases have trouble learning), the highest risks of data exposure and damage are to—

  • the largest data sets,
  • the most complex organizations and relationships, and
  • the systems exposed to the largest variety of existing and imaginable ways that security can be breached.

And let’s not discount the scary potentials at the (not very) far ends of technological progress and bad intent. Killer microdrones targeted at faces, anyone?

So it is not surprising that some large companies doing facial recognition go out of their way to keep personal data out of their systems. For example, by making facial recognition work for the company’s customers, but not for the company itself.

Such is the case with Apple’s late-model iPhones, which feature Face ID: a personal facial recognition system that lets a person unlock their phone with a glance. Says Apple, “Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.”

But assurances such as Apple’s haven’t stopped push-back against all facial recognition. Some examples—

  • The Public Voice: “We the undersigned call for a moratorium on the use of facial recognition technology that enables mass surveillance.”
  • Fight for the Future: BanFacialRecognition. Self-explanatory, and with lots of organizational signatories.
  • New York Times: “San Francisco, long at the heart of the technology revolution, took a stand against potential abuse on Tuesday by banning the use of facial recognition software by the police and other agencies. The action, which came in an 8-to-1 vote by the Board of Supervisors, makes San Francisco the first major American city to block a tool that many police forces are turning to in the search for both small-time criminal suspects and perpetrators of mass carnage.”
  • Also in the Times, Evan Selinger and Woodrow Hartzog write, “Stopping this technology from being procured — and its attendant databases from being created — is necessary for protecting civil rights and privacy. But limiting government procurement won’t be enough. We must ban facial recognition in both public and private sectors before we grow so dependent on it that we accept its inevitable harms as necessary for “progress.” Perhaps over time, appropriate policies can be enacted that justify lifting a ban. But we doubt it.”
  • Cory Doctorow’s Why we should ban facial recognition technology everywhere is an “amen” to the Selinger & Hartzog piece.
  • BanFacialRecognition.com lists 37 participating organizations, including EPIC (Electronic Privacy Information Center), Daily Kos, Fight for the Future, MoveOn.org, National Lawyers Guild, Greenpeace and Tor.
  • MIT Technology Review says bans are spreading in the U.S.: “San Francisco and Oakland, California, and Somerville, Massachusetts, have outlawed certain uses of facial recognition technology, with Portland, Oregon, potentially soon to follow. That’s just the beginning, according to Mutale Nkonde, a Harvard fellow and AI policy advisor. That trend will soon spread to states, and there will eventually be a federal ban on some uses of the technology, she said at MIT Technology Review’s EmTech conference.”

Irony alert: the black banner atop that last story says, “We use cookies to offer you a better browsing experience, analyze site traffic, personalize content, and serve targeted advertisements.” Notes the Times’ Charlie Warzel, “Devoted readers of the Privacy Project will remember mobile advertising IDs as an easy way to de-anonymize extremely personal information, such as location data.” Well, advertising IDs are among the many trackers that both MIT Technology Review and The New York Times inject in readers’ browsers with every visit. (Bonus link.)

My own position on all this is provisional because I’m still learning and there’s a lot to take in. But here goes:

The only entities that should be able to recognize people’s faces are other people. And maybe their pets. But not machines.

But, since the facial recognition genie will never go back in its bottle, I’ll suggest a few rules for entities using computers to do facial recognition. All these are provisional as well:

  1. People should have their own forms of facial recognition, for example, to unlock phones, sort through old photos, or to show to others the way they would a driving license or a passport (to say, in effect, “See? This is me.”) But, the data they gather for themselves should not be shared with the company providing the facial recognition software (unless it’s just of their own face, and then only for the safest possible diagnostic or service improvement purposes). This, as I understand it, is roughly what Apple does with iPhones.
  2. Facial recognition systems used to detect changing facial characteristics (such as emotion, age, or wellness) should be required to forget what they see right after the job is done, and not to use the data gathered for any purpose other than diagnostics or performance improvement.
  3. For persons having their faces recognized, sharing data for diagnostic or performance improvement purposes should be opt-in, with data anonymized and made as auditable as possible, by individuals and/or their intermediaries.
  4. For enterprises with systems that know individuals’ (customers’ or consumers’) faces, don’t use those faces to track or find those individuals elsewhere in the online or offline worlds—again, unless those individuals have opted into the practice.

I suspect that Polanyi would agree with those.

But my heart is with Walt Whitman, whose Song of Myself argued against the dehumanizing nature of mechanization at the dawn of the industrial age. Wrote Walt,

Encompass worlds but never try to encompass me.
I crowd your noisiest talk by looking toward you.

Writing and talk do not prove me.
I carry the plenum of proof and everything else in my face.
With the hush of my lips I confound the topmost skeptic…

Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.

The spotted hawk swoops by and accuses me.
He complains of my gab and my loitering.

I too am not a bit tamed. I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

The barbaric yawps by human hawks say five words, very explicitly:

Get out of my face.

And they yawp those words in spite of the sad fact that obeying them may prove impossible.

[Later bonus links…]


In Google Has Quietly Dropped Ban on Personally Identifiable Web Tracking, @JuliaAngwin and @ProPublica unpack what the subhead says well already: “Google is the latest tech company to drop the longstanding wall between anonymous online ad tracking and user’s names.”

So here’s a message from humanity to Google and all the other spy organizations in the surveillance economy: Tracking is no less an invasion of privacy in apps and browsers than it is in homes, cars, purses, pants and wallets.

That’s because our apps and browsers, like the devices on which we use them, are personal and private. Simple as that. (HT to @Apple for digging that fact.)

To help the online advertising business understand what ought to be obvious (but isn’t yet), let’s clear up some misconceptions:

  1. Tracking people without their clear and conscious permission is wrong. (Meaning The Castle Doctrine should apply online no less than it does in the physical world.)
  2. Assuming that using a browser or an app constitutes some kind of “deal” to allow tracking is wrong. (Meaning implied consent is not the real thing. See The Tradeoff Fallacy: How Marketers Are Misrepresenting American Consumers and Opening Them Up to Exploitation, by Joseph Turow, Ph.D. and the Annenberg School for Communication at the University of Pennsylvania.)
  3. Claiming that advertising funds the “free” Internet is wrong. (The Net has been free for the duration. Had it been left up to the billing companies of the world, we never would have had it, and they never would have made their $trillions on it. More at New Clues.)

What’s right is civilization, which relies on manners. Advertisers, their agencies and publishers haven’t learned manners yet.

But they will.

At the very least, regulations will force companies harvesting personal data to obey those they harvest it from, with fines for not obeying. Toward that end, Europe’s General Data Protection Regulation already has compliance offices at large corporations shaking in their boots, for good reason: “a fine up to 20,000,000 EUR, or in the case of an undertaking, up to 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher (Article 83, Paragraph 5 & 6).” Those come into force in 2018. Stay tuned.

Companies harvesting personal data also shouldn’t be surprised to find themselves re-classified as fiduciaries, no less responsible than accountants, brokers and doctors for confidentiality on behalf of the people they collect data from. (Thank you, professors Balkin and Zittrain, for that legal and rhetorical hack. Brilliant, and well done. Or begun.)

The only way to fully fix publishing, advertising and surveillance-corrupted business in general is to equip individuals with terms they can assert in dealing with others online — and to do it at scale. Meaning we need terms that work the same way across all the companies we deal with. That’s why Customer Commons and Kantara are working on exactly those terms. For starters. And these will be our terms — not separate and different ones that live at each company we deal with. Those aren’t working now, and never will work, because they can’t. And they can’t because when you have to deal with as many different terms as there are parties supplying them, the problem becomes unmanageable, and you get screwed. That’s why —

There’s a new sheriff on the Net, and it’s the individual. Who isn’t a “user,” by the way. Or a “consumer.” With new terms of our own, we’re the first party. The companies we deal with are second parties. Meaning that they are the users, and the consumers, of our legal “content.” And they’ll like it too, because we actually want to do good business with good companies, and are glad to make deals that work for both parties. Those include expressions of true loyalty, rather than the coerced kind we get from every “loyalty” card we carry in our purses and wallets.

When we are the first parties, we also get scale. Imagine changing your terms, your contact info, or your last name, for every company you deal with — and doing that in one move. That can only happen when you are the first party.

So here’s a call to action.

If you want to help blow up the surveillance economy by helping develop much better ways for demand and supply to deal with each other, show up next week at the Computer History Museum for VRM Day and the Internet Identity Workshop, where there are plenty of people already on the case.

Then follow the work that comes out of both — as if your life depends on it. Because it does.

And so does the economy that will grow atop true privacy online and the freedoms it supports. Both are a helluva lot more leveraged than the ill-gotten data gains harvested by the Lumascape doing unwelcome surveillance.

Bonus links:

  1. All the great research Julia Angwin & Pro Publica have been doing on a problem that data harvesting companies have been causing and can’t fix alone, even with government help. That’s why we’re doing the work I just described.
  2. What Facebook Knows About You Can Matter Offline, an OnPoint podcast featuring Julia, Cathy O’Neill and Ashkan Soltani.
  3. Everything by Shoshana Zuboff. From her home page: “I’ve dedicated this part of my life to understanding and conceptualizing the transition to an information civilization. Will we be the masters of information, or will we be its slaves? There’s a lot of work to be done, if we are to build bridges to the kind of future that we can call ‘home.’ My new book on this subject, Master or Slave? The Fight for the Soul of Our Information Civilization, will be published by Public Affairs in the U.S. and Eichborn in Germany in 2017.” Can’t wait.
  4. Don Marti’s good thinking and work with Aloodo and other fine hacks.

For some reason, many or most of the images in this blog don’t load in some browsers. Same goes for the ProjectVRM blog as well. This is new, and I don’t know exactly why it’s happening.

So far, I gather it happens only when the URL is https and not http.

Okay, here’s an experiment. I’ll add an image here in the WordPress (4.4.2) composing window, and center it in the process, all in Visual mode. Here goes:

[image: cheddar3]

Now I’ll hit “Publish,” and see what we get.

When the URL starts with https, it fails to show in—

  • Firefox (46.0.1)
  • Chrome (50.0.2661.102)
  • Brave (0.9.6)

But it does show in—

  • Opera (12.16)
  • Safari (9.1).

Now I’ll go back and edit the HTML for the image in Text mode, taking out class=”aligncenter size-full wp-image-10370” from between the img and src attributes, and bracketing the whole image with <center> and </center> tags. Here goes:

[image: cheddar3]

Hmm… The <center> tags don’t work, and I see why when I look at the HTML in Text mode: WordPress removes them. That’s new. Thus another old-school HTML tag gets sidelined. 🙁

Okay, I’ll try again to center it, this time by taking out class=”aligncenter size-full wp-image-10370” in Text mode, and clicking on the centering icon in Visual mode. When I check back in Text mode, I see WordPress has put class=”aligncenter” between img and src. I suppose that attribute is understood by WordPress’ (or the theme’s) CSS while the old <center> tags are not. Am I wrong about that?

Now I’ll hit the update button, rendering this—

[image: cheddar3]

—and check back with the browsers.

Okay, it works with all of them now, whether the URL starts with https or http.

So the apparent culprit, at least by this experiment, is centering with anything other than class=”aligncenter”. Getting that seems to require inserting a centered image in Visual mode, editing out size-full wp-image-whatever (note: whatever is a number that’s different for every image I put in a post) in Text mode, and then going back and centering the image again in Visual mode, which puts class=”aligncenter” in place of what I edited out in Text mode. Fun.
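For anyone following along at home, here is roughly what that edit looks like in Text mode. This is only a sketch: the src path is a made-up stand-in, and the wp-image number will be different for every image.

```html
<!-- As inserted by the Visual editor: the version that wasn't showing for me over https -->
<img class="aligncenter size-full wp-image-10370" src="cheddar3.jpg" alt="cheddar3" />

<!-- After editing out everything except the centering class: the version that shows everywhere -->
<img class="aligncenter" src="cheddar3.jpg" alt="cheddar3" />
```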

Here’s another interesting (and annoying) thing. When I’m editing in the composing window, the URL is https. But when I “view post” after publishing or updating, I get the http version of the blog, where I can’t see what doesn’t load in the https version. But when anybody comes to the blog by way of an external link, such as a search engine or Twitter, they see the https version, where the graphics won’t load if I haven’t fixed them manually in the manner described above.

So https is clearly breaking old things, but I’m not sure if it’s https doing it, something in the way WordPress works, or something in the theme I’m using. (In WordPress it’s hard — at least for me — to know where WordPress ends and the theme begins.)

Dave Winer has written about how https breaks old sites, and here we can see it happening on a new one as well. WordPress, or at least the version provided for https://blogs.harvard.edu bloggers, may be buggy, or behind the times with the way it marks up images. But that’s a guess.

I sure hope there is some way to gang-edit all my posts going back to 2007. If not, I’ll just have to hope readers will know to take the s out of https and re-load the page. Which, of course, nearly all of them won’t.

It doesn’t help that all the browser makers now obscure the protocol, so you can’t see whether a site is http or https, unless you copy and paste it. They only show what comes after the // in the URL. This is a very unhelpful dumbing-down “feature.”

Brave is different. The location bar isn’t there unless you mouse over it. Then you see the whole URL, including the protocol to the left of the //. But if you don’t do that, you just see a little padlock (meaning https, I gather), then (with this post) “blogs.harvard.edu | Doc Searls Weblog • Help: why don’t images load in https?” I can see why they do that, but it’s confusing.

By the way, I probably give the impression of being a highly technical guy. I’m not. The only code I know is Morse. The only HTML I know is vintage. I’m lost with <span> and <div> and wp-image-whatever, don’t hack CSS or PHP, and don’t understand why <em> is now preferable to <i> if you want to italicize something. (Fill me in if you like.)

So, technical folks, please tell us wtf is going on here (or give us your best guess), and point to simple and global ways of fixing it.

Thanks.

[Later…]

Some answer links, mostly from the comments below:

That last one, which is cited in two comments, says this:

Chris Cree who experienced the same problem discovered that the WP_SITEURL and WP_HOME constants in the wp-config.php file were configured to structure URLs with http instead of https. Cree suggests users check their settings to see which URL type is configured. If both the WordPress address and Site URLs don’t show https, it’s likely causing issues with responsive images in WordPress 4.4.
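For what it’s worth, those two constants live in wp-config.php rather than in Settings, and look something like the sketch below. The values are made up for illustration, and I don’t know whether bloggers on blogs.harvard.edu can even get at that file.

```php
<?php
// In wp-config.php. Per the article quoted above, if these are set to http
// rather than https, responsive images in WordPress 4.4 can misbehave on
// https pages. The addresses here are illustrative, not my actual settings.
define( 'WP_HOME',    'https://blogs.harvard.edu/doc' );
define( 'WP_SITEURL', 'https://blogs.harvard.edu/doc' );
```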

Two things here:

  1. I can’t see where in Settings the URL type is mentioned, much less configurable. But Settings has a load of categories and choices within categories, so I may be missing it.
  2. I wonder what will happen to old posts I edited to make images responsive. (Some background on that. “Responsive design,” an idea that seems to have started here in 2010, has since led to many permutations of complications in code that’s mostly hidden from people like me, who just want to write something on a blog or a Web page. We all seem to have forgotten that it was us for whom Tim Berners-Lee designed HTML in the first place.) My “responsive” hack went like this: a) I would place the image in Visual mode; b) go into Text mode; and c) carve out the stuff between img and src and add new attributes for width and height. Those would usually be something like width=”50%” and height=”image”. (There’s a sketch of that markup just below this list.) This was an orthodox thing to do in HTML 4.01, but not in HTML 5. Browsers seem tolerant of this approach, so far, at least for pages viewed with the http protocol. I’ve checked old posts that have images marked up that way, and it’s not a problem. Yet. (Newer browser versions may not be so tolerant.) Nearly all images, however, fail to load in Firefox, Chrome and Brave when viewed through https.
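For illustration, that hack looks roughly like this. (The file name is a stand-in. A percentage width on an img attribute was allowed in HTML 4.01 but isn’t valid HTML5, where the same effect is supposed to live in CSS.)

```html
<!-- The old hack: percentage width as an attribute (fine in HTML 4.01, not in HTML5) -->
<img src="cheddar3.jpg" alt="cheddar3" width="50%">

<!-- An HTML5-conformant way to get the same effect, using a style attribute instead -->
<img src="cheddar3.jpg" alt="cheddar3" style="width: 50%; height: auto;">
```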

So the main questions remaining are:

  1. Is this something I can correct globally with a hack in my own blogs?
  2. If so, is the hack within the theme, the CSS, the PHP, or what?
  3. If not, is it something the übergeeks at Harvard blogs can fix?
  4. If it’s not something they can fix, is my only choice to go back and change every image from the blogs’ beginnings (or just live with the breakage)?
  5. If that’s required, what’s to keep some new change in HTML 5, or WordPress, or the next “best practice” from breaking everything that came before all over again?

Thanks again for all your help, folks. Much appreciated. (And please keep it coming. I’m sure I’m not alone with this problem.)