Science


Back in 2009 I shot the picture above from a plane flight on approach to SFO. On Flickr (at that link) the photo has had 16,524 views and has been faved 420 times as of now. Here’s the caption:

These are salt evaporation ponds on the shores of San Francisco Bay, filled with slowly evaporating salt water impounded within levees in former tidelands. There are many of these ponds surrounding the South Bay.

Microscopic life forms of different kinds and colors predominate in series as the water evaporates. First comes green algae. Next, brine shrimp predominate, turning the pond orange. Next, Dunaliella salina, a micro-algae containing high amounts of beta-carotene (itself of high commercial value), predominates, turning the water red. Other organisms can also change the hue of each pond. The full range of colors includes red, green, orange, yellow, brown and blue. Finally, when the water has evaporated, the white of salt alone remains. This is harvested with machines, and the process repeats.

Given the popularity of that photo and others I’ve shot like it (see here and here), I’ve wanted to make a large print of it to mount and hang somewhere. But there’s a problem: the photo was shot with a 2005-vintage Canon 30D, an 8.2 megapixel SLR with an APS-C (less than full frame) sensor, and an aftermarket zoom lens. It’s also a JPEG shot, which means it shows compression artifacts when you look closely or enlarge it a lot. To illustrate the problem, here’s a close-up of one section of the photo:

See how grainy and full of artifacts that is? Also not especially sharp. So that was an enlargement deal breaker.

Until today, that is, when my friend Marian Crostic, a fine art photographer who often prints large pieces, told me about Topaz Labs’ Gigapixel AI. I’ve tried image-enhancing software before with mixed results, but on Marian’s word and an $80 price, I decided to give this one a whack. Here’s the result:

Color me impressed enough to think it’s worth sharing.


On fire

The white mess in the image above is the Bobcat Fire, spreading now in the San Gabriel Mountains, against which Los Angeles’ suburban sprawl (that’s it, on the right) reaches its limits of advance to the north. It makes no sense to build very far up or into these mountains, for two good reasons. One is fire, which happens often and awfully. The other is that the mountains are geologically new, and falling down almost as fast as they are rising up. At the mouths of valleys emptying into the sprawl are vast empty reservoirs—catch basins—ready to be filled with rocks, soil and mud “downwasting,” as geologists say, from a range as big as the Smokies, twice as high, ready to burn and shed.

Outside of its northern rain forests and snow-capped mountains, California has just two seasons: fire and rain. Right now we’re in the midst of fire season. Rain is called Winter, and it has been dry since the last one. If the Bobcat fire burns down to the edge of Monrovia, or Altadena, or any of the towns at the base of the mountains, heavy winter rains will cause downwasting in a form John McPhee describes in Los Angeles Against the Mountains:

The water was now spreading over the street. It descended in heavy sheets. As the young Genofiles and their mother glimpsed it in the all but total darkness, the scene was suddenly illuminated by a blue electrical flash. In the blue light they saw a massive blackness, moving. It was not a landslide, not a mudslide, not a rock avalanche; nor by any means was it the front of a conventional flood. In Jackie’s words, “It was just one big black thing coming at us, rolling, rolling with a lot of water in front of it, pushing the water, this big black thing. It was just one big black hill coming toward us.”

In geology, it would be known as a debris flow. Debris flows amass in stream valleys and more or less resemble fresh concrete. They consist of water mixed with a good deal of solid material, most of which is above sand size. Some of it is Chevrolet size. Boulders bigger than cars ride long distances in debris flows. Boulders grouped like fish eggs pour downhill in debris flows. The dark material coming toward the Genofiles was not only full of boulders; it was so full of automobiles it was like bread dough mixed with raisins. On its way down Pine Cone Road, it plucked up cars from driveways and the street. When it crashed into the Genofiles’ house, the shattering of safety glass made terrific explosive sounds. A door burst open. Mud and boulders poured into the hall. We’re going to go, Jackie thought. Oh, my God, what a hell of a way for the four of us to die together.

Three rains ago we had debris flows in Montecito, the next zip code over from our home in Santa Barbara. I wrote about it in Making sense of what happened to Montecito. The flows, which destroyed much of the town and killed about two dozen people, were caused by heavy rains following the Thomas Fire, which at 281,893 acres was the biggest fire in California history at the time. The Camp Fire, a few months later, burned a bit less land but killed 85 people and destroyed more than 18,000 buildings, including whole towns. This year we already have two fires bigger than the Thomas, and at least three more growing fast enough to take the lead. You can see the whole updated list on the Los Angeles Times California Wildfires Map.

For a good high-altitude picture of what’s going on, I recommend NASA’s FIRMS (Fire Information for Resource Management System). It’s a highly interactive map that lets you mix input from satellite photographs and fire detection by orbiting MODIS and VIIRS systems. MODIS is onboard the Terra and Aqua satellites; and VIIRS is onboard the Suomi National Polar-Orbiting Partnership (Suomi NPP) spacecraft. (It’s actually more complicated than that. If you’re interested, dig into those links.) Here’s how the FIRMS map shows the active West Coast fires and the smoke they’re producing:

That’s a lot of cremated forest and wildlife right there.

I just put those two images and a bunch of others up on Flickr, here. Most are of MODIS fire detections superimposed on 3-D Google Earth maps. The main thing I want to get across with these is how large and anomalous these fires are.

True: fire is essential to many of the West’s wild ecosystems. It’s no accident that the California state tree, the Coast Redwood, grows so tall and lives so long: it’s adapted to fire. (One can also make a case that the state flower, the California Poppy, which thrives amidst fresh rocks and soil, is adapted to earthquakes.) But what’s going on here is something much bigger. Explain it any way you like, including strange luck.

Whatever you conclude, it’s a hell of a show. And vice versa.

Across almost 73 laps around the Sun, I’ve seen six notable comets. The fifth was Hale-Bopp, which I blogged about here, along with details on the previous four, in 1997. The sixth is NEOWISE, and that’s it, above, shot with my iPhone 11. There are a couple other shots in that same album, taken with my Canon 5D Mark III. Those are sharper, but this one shows off better.

Hey, if the comet looks this good over the lights of Los Angeles, you might have a chance to see it wherever you are tonight. It’ll be under the Big Dipper in the western sky.

Every night it will be a bit dimmer and farther away, so catch it soon. Best view is starting about an hour after sunset.


Source: NASA, ESA, and the Hubble Heritage Team (STScI/AURA)

“Pillars of Creation” is a live view of stars forming in a neighboring region of the Milky Way. (Inside the Eagle Nebula, 5,400 to 6,100 light-years away.)

The Solar System formed 4.6 billion years ago. Earth became a planet about .06 billion years later. That was 9.247 billion years after the Big Bang, which happened 13.787 billion years ago, meaning that our solar system is about a third of the age of the Universe. (By the giant impact hypothesis, Earth was enlarged 4.5 billion years ago when it mooshed together with another planet about the size of Mars. That ex-planet is now called Theia, and the leftover material it scattered into space is now the Moon. Another gift to Earth, says the theory, is our water.)

Hydrogen, helium, and other light elements formed with the Big Bang, but the heavy elements were cooked up at various times in the roughly nine-billion-year span before our solar system formed. Some, perhaps, are still cooking.

Life appeared on Earth at least 3.5 billion years ago.* Oxygenation sufficient to support life as we know it happened at the start of the Proterozoic eon, about 2.5 billion years ago. The Phanerozoic eon, characterized by an abundance of plants and animals, began 0.541 billion years ago and will continue until the Sun gets so large and hot that photosynthesis as we know it becomes impossible. A rough consensus within science is that this will likely happen in just 600 million years, meaning we’re about 85% of the way through the time window for life on Earth.
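
That arithmetic is easy to check. Here’s a quick sketch in Python, using only the figures cited above (take the precision with a grain of salt):

```python
# Quick check of the timeline arithmetic above (all figures in billions of years).
big_bang = 13.787          # age of the Universe
solar_system = 4.6         # how long ago the solar system formed
earth = big_bang - 9.247   # "9.247 billion years after the Big Bang" -> 4.54 Gya

print(f"Earth formed {solar_system - earth:.2f} Gyr after the solar system")  # 0.06
print(f"Solar system age / Universe age: {solar_system / big_bang:.0%}")      # 33%

# The window for photosynthetic life: from first life to the Sun ending it.
first_life = 3.5    # at least 3.5 Gya (or 4.1; see the footnote below)
remaining = 0.6     # rough consensus: ~600 million years left
print(f"Share of the window already used: {first_life / (first_life + remaining):.0%}")  # 85%
```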

More perspective: the primary rock formation on which most of Manhattan’s ranking skyscrapers repose—Manhattan Schist—is itself about a half billion years old. (My ass is three floors above some right now.)

In another 4.5 billion years, our galaxy, the Milky Way, will become one with Andromeda, which is currently 2.5 million light-years distant but headed our way on a collision course, looking for now like a thrown frisbee, four moons wide. (It’s actually much larger.) The two will begin merging (not colliding, because nearly all stars are too far apart for that) around 4 billion years from now, and in about 7 billion years will complete a new galaxy resembling an elliptical haze. Here is a video simulation of that future. And here are still panels for the same:

Our Sun will likely be around for all of that future, though by the end it will have become a red giant with a diameter wider than Earth’s orbit, then shed its outer layers to survive as a white dwarf. (Also—I’m adding this later—Andromeda is weird and scary.)

When that’s done, the Universe will be about 21 billion years old. Or young. In TIME WITHOUT END: PHYSICS AND BIOLOGY IN AN OPEN UNIVERSE, Freeman Dyson gives these estimates for the future age of the Universe:

TABLE I. Summary of time scales.

Closed Universe
Total duration: 10^11 yr

Open Universe
Low-mass stars cool off: 10^14 yr
Planets detached from stars: 10^15 yr
Stars detached from galaxies: 10^19 yr
Decay of orbits by gravitational radiation: 10^20 yr
Decay of black holes by Hawking process: 10^64 yr
Matter liquid at zero temperature: 10^65 yr
All matter decays to iron: 10^1500 yr
Collapse of ordinary matter to black hole [alternative (ii)]: 10^(10^26) yr
Collapse of stars to neutron stars or black holes [alternative (iv)]: 10^(10^76) yr

So the best guess here is that the Universe is about 1% into its lifespan, which has a great many zeros in its number of birthdays. In biological terms, that means it’s not even a baby, or a fetus. It’s more like a zygote, or a blastula.

So maybe… just maybe… the forms of life we know on Earth are just early prototypes of what’s to come in the fullness of time, space, and evolving existence.

Bonus link: Katie Mack, who has forgotten more about all this than I’ll ever know.

[Later…] *Make that 4.1 billion years ago.

[Later again…] Then there is this.

We know more than we can tell.

That one-liner from Michael Polanyi has been waiting half a century for a proper controversy, which it now has with facial recognition. Here’s how he explains it in The Tacit Dimension:

This fact seems obvious enough; but it is not easy to say exactly what it means. Take an example. We know a person’s face, and can recognize it among a thousand others, indeed among a million. Yet we usually cannot tell how we recognize a face we know. So most of this knowledge cannot be put into words.

Polanyi calls that kind of knowledge tacit. The kind we can put into words he calls explicit.

For an example of both at work, consider how, generally, we don’t know how we will end the sentences we begin, or how we began the sentences we are ending—and how the same is true of what we hear or read from other people whose sentences we find meaningful. The explicit survives only as fragments, but the meaning of what was said persists in tacit form.

Likewise, if we are asked to recall and repeat, verbatim, a paragraph of words we have just said or heard, we will find it difficult or impossible to do so, even if we have no trouble saying exactly what was meant. This is because tacit knowing, whether kept to one’s self or told to others, survives the natural human tendency to forget particulars after a few seconds, even when we very clearly understand what we have just said or heard.

Tacit knowledge and short-term memory are both features of human knowing and communication, not bugs. Even for people with extreme gifts of memorization (e.g., actors who can learn a whole script in one pass, or mathematicians who can learn pi to 4,000 decimal places), what matters more than the words or the numbers is their meaning. And that meaning is both more and other than what can be said. It is deeply tacit.

On the other hand—the digital hand—computer knowledge is only explicit, meaning a computer can know only what it can tell. At both knowing and telling, a computer can be far more complete and detailed than a human could ever be. And the more a computer knows, the better it can tell. (To be clear, a computer doesn’t know a damn thing. But it does remember—meaning it retrieves—what’s in its databases, and it does process what it retrieves. At all those activities it is inhumanly capable.)

So, the more a computer learns of explicit facial details, the better it can infer conclusions about that face, including ethnicity, age, emotion, wellness (or lack of it), and much else. Given a base of data about individual faces, and of names associated with those faces, a computer programmed to be adept at facial recognition can also connect faces to names, and say “This is (whomever).”
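
To make that explicitness concrete, here is a minimal sketch using the open-source face_recognition Python library. (The filenames are hypothetical placeholders, and this illustrates the general shape of such systems, not any particular vendor’s.) A face is reduced to a vector of explicit measurements, and “recognition” is just a distance between vectors:

```python
# Minimal facial recognition sketch using the open-source face_recognition
# library (https://github.com/ageitgey/face_recognition).
# Filenames are hypothetical placeholders.
import face_recognition

# "Knowing" a face, for a computer, is storing an explicit 128-number encoding.
known_image = face_recognition.load_image_file("alice.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# "Recognizing" a face is measuring the distance between encodings.
unknown_image = face_recognition.load_image_file("somebody.jpg")
for encoding in face_recognition.face_encodings(unknown_image):
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print(f"distance={distance:.3f} match={match}")
```

Everything the computer “knows” here can be told: dumped, copied, indexed, and searched at scale. That is the whole difference.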

For all those reasons, computers doing facial recognition are proving useful for countless purposes: unlocking phones, finding missing persons and criminals, aiding investigations, shortening queues at passport portals, reducing fraud (for example at casinos), confirming age (saying somebody is too old or not old enough), finding lost pets (which also have faces). The list is long and getting longer.

Yet many (or perhaps all) of those purposes are at odds with the sense of personal privacy that derives from the tacit ways we know faces, our reliance on short-term memory, and our natural anonymity (literally, namelessness) among strangers. All of those are graces of civilized life in the physical world, and they are threatened by the increasingly widespread use—and uses—of facial recognition by governments, businesses, schools, and each other.

Louis Brandeis and Samuel Warren visited the same problem more than 130 years ago, when they became alarmed at the privacy risks suggested by photography, audio recording, and reporting on both via technologies that were far more primitive than those we have today. As a warning to the future, they wrote a landmark Harvard Law Review paper titled The Right to Privacy, which has served as a pole star of good sense ever since. Here’s an excerpt:

Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right “to be let alone.” Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life; and numerous mechanical devices threaten to make good the prediction that “what is whispered in the closet shall be proclaimed from the house-tops.” For years there has been a feeling that the law must afford some remedy for the unauthorized circulation of portraits of private persons; and the evil of invasion of privacy by the newspapers, long keenly felt, has been but recently discussed by an able writer. The alleged facts of a somewhat notorious case brought before an inferior tribunal in New York a few months ago directly involved the consideration of the right of circulating portraits; and the question whether our law will recognize and protect the right to privacy in this and in other respects must soon come before our courts for consideration.

They also say the “right of the individual to be let alone…is like the right not to be assaulted or beaten, the right not to be imprisoned, the right not to be maliciously prosecuted, the right not to be defamed.”

To that list today we might also add, “the right not to be reduced to bits” or “the right not to be tracked like an animal—whether anonymously or not.”

But it’s hard to argue for those rights in our digital world, where computers can see, hear, draw and paint exact portraits of everything: every photo we take, every word we write, every spreadsheet we assemble, every database accumulating in our hard drives—plus those of every institution we interact with, and countless ones we don’t (or do without knowing the interaction is there).

Facial recognition by computers is a genie that is not going back in the bottle. And there are no limits to wishes the facial recognition genie can grant the organizations that want to use it, which is why pretty much everything is being done with it. A few examples:

  • Facebook’s DeepFace sells facial recognition for many purposes to corporate customers. Examples from that link: “Face Detection & Landmarks…Facial Analysis & Attributes…Facial Expressions & Emotion… Verification, Similarity & Search.” This is non-trivial stuff. Writes Ben Goertzel, “Facebook has now pretty convincingly solved face recognition, via a simple convolutional neural net, dramatically scaled.”
  • FaceApp can make a face look older, younger, whatever. It can even swap genders.
  • The FBI’s Next Generation Identification (NGI) involves (says Wikipedia) eleven companies and the National Center for State Courts (NCSC).
  • Snap has a patent for reading emotions in faces.
  • The MORIS™ Multi-Biometric Identification System is “a portable handheld device and identification database system that can scan, recognize and identify individuals based on iris, facial and fingerprint recognition,” and is typically used by law enforcement organizations.
  • Casinos in Canada are using facial recognition to “help addicts bar themselves from gaming facilities.” It’s opt-in: “The technology relies on a method of “self-exclusion,” whereby compulsive gamblers volunteer in advance to have their photos banked in the system’s database, in case they ever get the urge to try their luck at a casino again. If that person returns in the future and the facial-recognition software detects them, security will be dispatched to ask the gambler to leave.”
  • Cruise ships are boarding passengers faster using facial recognition by computers.
  • Australia proposes scanning faces to see if viewers are old enough to look at porn.

Facial recognition systems are also getting better and better at what they do. A November 2018 NIST report on a massive study of facial recognition systems begins,

This report documents performance of face recognition algorithms submitted for evaluation on image datasets maintained at NIST. The algorithms implement one-to-many identification of faces appearing in two-dimensional images.

The primary dataset is comprised of 26.6 million reasonably well-controlled live portrait photos of 12.3 million individuals. Three smaller datasets containing more unconstrained photos are also used: 3.2 million webcam images; 2.5 million photojournalism and amateur photographer photos; and 90 thousand faces cropped from surveillance-style video clips. The report will be useful for comparison of face recognition algorithms, and assessment of absolute capability. The report details recognition accuracy for 127 algorithms from 45 developers, associating performance with participant names. The algorithms are prototypes, submitted in February and June 2018 by research and development laboratories of commercial face recognition suppliers and one university…

The major result of the evaluation is that massive gains in accuracy have been achieved in the last five years (2013-2018) and these far exceed improvements made in the prior period (2010-2013). While the industry gains are broad — at least 28 developers’ algorithms now outperform the most accurate algorithm from late 2013 — there remains a wide range of capabilities. With good quality portrait photos, the most accurate algorithms will find matching entries, when present, in galleries containing 12 million individuals, with error rates below 0.2%.

Privacy freaks (me included) would like everyone to be creeped out by this. Yet many people are cool with it to some degree, and not just because they’re acquiescing to the inevitable: they’re relying on it because it makes interaction with machines easier—and they trust it.

For example, in Barcelona, CaixaBank is rolling out facial recognition at its ATMs, claiming that 70% of surveyed customers are ready to use it as an alternative to keying in a PIN, and that “66% of respondents highlighted the sense of security that comes with facial recognition.” That the bank’s facial recognition system “has the capability of capturing up to 16,000 definable points when the user’s face is presented at the screen” is presumably of little or no concern. Nor, also presumably, is the risk of what might get done with facial data if the bank gets hacked, or if it changes its privacy policy, or if it gets sold and the new owner can’t resist selling or sharing facial data with others who want it, or if (though more like when) government bodies require it.

A predictable pattern for every new technology is that what can be done will be done—until we see how it goes wrong and try to stop doing that. This has been true of every technology from stone tools to nuclear power and beyond. Unlike many other new technologies, however, it is not hard to imagine ways facial recognition by computers can go wrong, especially when it already has.

Two examples:

  1. In June, U.S. Customs and Border Protection, which relies on facial recognition and other biometrics, revealed that photos of people were compromised by a cyberattack on a federal subcontractor.
  2. In August, researchers at vpnMentor reported a massive data leak in BioStar 2, a widely used “Web-based biometric security smart lock platform” that uses facial recognition and fingerprinting technology to identify users, which was compromised. Notes the report, “Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives.” vpnMentor also had a hard time getting through to company officials, so they could fix the leak.

As organizations should know (but in many cases have trouble learning), the highest risks of data exposure and damage are to—

  • the largest data sets,
  • the most complex organizations and relationships, and
  • the largest variety of existing and imaginable ways that security can be breached

And let’s not discount the scary potentials at the (not very) far ends of technological progress and bad intent. Killer microdrones targeted at faces, anyone?

So it is not surprising that some large companies doing facial recognition go out of their way to keep personal data out of their systems. For example, by making facial recognition work for the company’s customers, but not for the company itself.

Such is the case with Apple’s late-model iPhones, which feature Face ID: a personal facial recognition system that lets a person unlock their phone with a glance. Says Apple, “Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.”

But assurances such as Apple’s haven’t stopped push-back against all facial recognition. Some examples—

  • The Public Voice: “We the undersigned call for a moratorium on the use of facial recognition technology that enables mass surveillance.”
  • Fight for the Future: BanFacialRecognition. Self-explanatory, and with lots of organizational signatories.
  • New York Times: “San Francisco, long at the heart of the technology revolution, took a stand against potential abuse on Tuesday by banning the use of facial recognition software by the police and other agencies. The action, which came in an 8-to-1 vote by the Board of Supervisors, makes San Francisco the first major American city to block a tool that many police forces are turning to in the search for both small-time criminal suspects and perpetrators of mass carnage.”
  • Also in the Times, Evan Selinger and Woodrow Hartzog write, “Stopping this technology from being procured — and its attendant databases from being created — is necessary for protecting civil rights and privacy. But limiting government procurement won’t be enough. We must ban facial recognition in both public and private sectors before we grow so dependent on it that we accept its inevitable harms as necessary for “progress.” Perhaps over time, appropriate policies can be enacted that justify lifting a ban. But we doubt it.”
  • Cory Doctorow’s Why we should ban facial recognition technology everywhere is an “amen” to the Selinger & Hartzog piece.
  • BanFacialRecognition.com lists 37 participating organizations, including EPIC (Electronic Privacy Information Center), Daily Kos, Fight for the Future, MoveOn.org, National Lawyers Guild, Greenpeace and Tor.
  • MIT Technology Review says bans are spreading in the U.S.: “San Francisco and Oakland, California, and Somerville, Massachusetts, have outlawed certain uses of facial recognition technology, with Portland, Oregon, potentially soon to follow. That’s just the beginning, according to Mutale Nkonde, a Harvard fellow and AI policy advisor. That trend will soon spread to states, and there will eventually be a federal ban on some uses of the technology, she said at MIT Technology Review’s EmTech conference.”

Irony alert: the black banner atop that last story says, “We use cookies to offer you a better browsing experience, analyze site traffic, personalize content, and serve targeted advertisements.” Notes the Times’ Charlie Warzel, “Devoted readers of the Privacy Project will remember mobile advertising IDs as an easy way to de-anonymize extremely personal information, such as location data.” Well, advertising IDs are among the many trackers that both MIT Technology Review and The New York Times inject in readers’ browsers with every visit. (Bonus link.)

My own position on all this is provisional because I’m still learning and there’s a lot to take in. But here goes:

The only entities that should be able to recognize people’s faces are other people. And maybe their pets. But not machines.

But, since the facial recognition genie will never go back in its bottle, I’ll suggest a few rules for entities using computers to do facial recognition. All these are provisional as well:

  1. People should have their own forms of facial recognition, for example, to unlock phones, sort through old photos, or to show to others the way they would a driving license or a passport (to say, in effect, “See? This is me.”) But, the data they gather for themselves should not be shared with the company providing the facial recognition software (unless it’s just of their own face, and then only for the safest possible diagnostic or service improvement purposes). This, as I understand it, is roughly what Apple does with iPhones.
  2. Facial recognition systems used to detect changing facial characteristics (such as emotions, age, or wellness) should be required to forget what they see right after the job is done, and not use the data gathered for any purpose other than diagnostics or performance improvement.
  3. For persons having their faces recognized, sharing data for diagnostic or performance improvement purposes should be opt-in, with data anonymized and made as auditable as possible, by individuals and/or their intermediaries.
  4. For enterprises with systems that know individuals’ (customers’ or consumers’) faces, don’t use those faces to track or find those individuals elsewhere in the online or offline worlds—again, unless those individuals have opted into the practice.

I suspect that Polanyi would agree with those.

But my heart is with Walt Whitman, whose Song of Myself argued against the dehumanizing nature of mechanization at the dawn of the industrial age. Wrote Walt,

Encompass worlds but never try to encompass me.
I crowd your noisiest talk by looking toward you.

Writing and talk do not prove me.
I carry the plenum of proof and everything else in my face.
With the hush of my lips I confound the topmost skeptic…

Do I contradict myself?
Very well then. I contradict myself.
I am large. I contain multitudes.

The spotted hawk swoops by and accuses me.
He complains of my gab and my loitering.

I too am not a bit tamed. I too am untranslatable.
I sound my barbaric yawp over the roofs of the world.

The barbaric yawps by human hawks say five words, very explicitly:

Get out of my face.

And they yawp those words in spite of the sad fact that obeying them may prove impossible.

[Later bonus links…]


The answer is, we don’t know. Also, we may never know, because—

  • It’s too hard to measure (especially if you’re talking about the entire Net).
  • Too much of the usage is on mobile devices of too many different kinds.
  • The browser makers are approaching ad blocking and tracking protection in different and new ways that change frequently, and the same goes for ad-blocking and tracking-protecting extensions and add-ons. One of them (Adblock Plus) is actually in the advertising business (which Wikipedia politely calls ad filtering) in the sense that it sells safe passage to paying advertisers.
  • Some of the most easily sourced measures are surveys, yet what people say and what they do can be very different things.
  • Some of the most widely cited findings are from sources with conflicted interests (for example, selling anti-ad-blocking services), or which aggregate multiple sources that aren’t revealed when cited.
  • Actors good and bad in the ecosystem that ad blocking addresses also contribute to the fudge.

But let’s explore a bit anyway, working with what we’ve got, flawed though much of it may be. If you’re a tl;dr kind of reader, jump down to the conclusions at the end.

Part 1: ClarityRay and PageFair

Between 2012 and 2017, the most widely cited ad blocking reports were by ClarityRay and PageFair, in that order. There are no links to ClarityRay’s 2012 report, which I cited here in 2013. PageFair links to their 2015, 2016 (mobile) and 2017 reports are still live. The company also said last November that it was at work on another report. This was after PageFair was acquired by Blockthrough (“the leading adblock recovery program”). A PageFair blog post explains it.

I placed a lot of trust in PageFair’s work, mostly because I respected Dr. Johnny Ryan (@JohnnyRyan), who left PageFair for Brave in 2018. I also like what I know about Matthew Cortland, who was also at PageFair, and may still be. Far as I know, he hasn’t written anything about ad blocking research (but maybe I’ve missed it) since 2017.

Here are the main findings from PageFair’s 2017 report:

  • 615 million devices now use adblock
  • 11% of the global internet population is blocking ads on the web

Part 2: GlobalWebIndex

In January 2016, GlobalWebIndex said “37% of mobile users … say they’ve blocked ads on their mobile within the last month.” I put that together with Statista’s 2017 claim that there were then more than 4.6 billion mobile phone users in the world, which suggested that 1.7 billion people were blocking ads by that time.

Now GlobalWebIndex‘s Global Ad-Blocking Behavior report says 47% of us are blocking ads now. It also says, “As a younger and more engaged audience, ad-blockers also are much more likely to be paying subscribers and consumers. Ad-free premium services are especially attractive.” Which is pretty close to Don Marti‘s long-standing claim that readers who protect their privacy are more valuable than readers who don’t.

To get a total ad blocking population from that 47%, one possible source to cite is Internet World Stats:

Note that Internet World Stats appears to be a product of the Miniwatts Marketing Group, whose website is currently a blank WordPress placeholder. But, to be modest about it, their number is lower than Statista’s from 2016: “In 2019 the number of mobile phone users is forecast to reach 4.68 billion.” So let’s run with the lower one, at least for now.

Okay, so if 47% of us are using ad blockers, and Internet World Stats says there were 4,312,982,270 Internet users by the end of last year (that’s mighty precise!), the combined numbers suggest that more than 2,027,101,667 people are now blocking ads worldwide. So, we might generalize, more than two billion people are blocking ads today. Hence the headline above.
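
For what it’s worth, the multiplication is easy to reproduce. A sketch, using the two figures cited above:

```python
# Reproducing the estimate above: share of Internet users blocking ads.
internet_users = 4_312_982_270   # Internet World Stats, end of last year
blocking_share = 0.47            # GlobalWebIndex survey figure
print(f"{internet_users * blocking_share:,.0f}")  # 2,027,101,667
```

Mind that the false precision here is exactly the kind of thing the caveats up top (and Part 3 below) warn about.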

Perspective: back in 2015, we were already calling ad blocking The biggest boycott in human history. And that was when the number was just “approaching 200 million.”

More interesting to me is GlobalWebIndex’s breakouts of listed reasons why the people surveyed blocked ads. Three in particular stand out:

  • Ads contain viruses or bugs, 38%
  • Ads might compromise my online privacy, 26%
  • Stop ads being personalized, 22%

The problem here, as I said in the list up top, is that these are not measured behaviors. They are sympathies. But they’re still significant, because sympathies sell. That means there are markets here. Opportunities to align incentives.

Part 3: Ad Fraud Researcher

I rely a great deal on Dr. Augustine Fou (@acfou), aka Independent Ad Fraud Researcher, to think and work more deeply and knowingly than I’ve done so far here (or may ever do).

Looking at Part 2 above (in an earlier version of this post), he tweeted, “I dispute these findings. ASKING people if they used an ad blocker in the past month is COMPLETELY inaccurate and inconsistent with people who ACTUALLY USE ad blockers regularly.” Also, “Source: GlobalWebIndex Q3 2018 Base: 93,803 internet users aged 16-64, among which were 42,078 respondents who have used an ad-blocker in the past month”. Then, “Are you going to take numbers extrapolated from 42,078 respondents and extrapolate that to the entire world? that would NOT be OK.” And, “Desktop ad blocking in the U.S. measured directly on sites which humans visit is in the 8 – 19% range. Bots must also be scrubbed because bots do not block ads and will skew ad blocking rates lower, if not removed.”

On that last tweet he points to his own research, published this month. There is lots of data in there, all of it interesting and unbiased. Then he adds, “your point about this being the ‘biggest boycott in human history’ is still valid. But the numbers from that ad blocking study should not be used.”

Part 4: Comscore

Among the many helpful tweets in response to the first draft of this post was this one by Zubair Shafiq (@zubair_shafiq), Assistant Professor of Computer Science at the University of Iowa, where he researches computer networks, security, and privacy. His tweet points to Ad Blockers: Global Prevalence and Impact, by Matthew Malloy, Mark McNamara, Aaron Cahn and Paul Barford, from 2016. Here is one chart among many in the report:

The jive in the Geo row is explained at that link. A degree in statistics will help.

Part 5: Statista

Statista seems serious, but Ad blocking user penetration rate in the United States from 2014 to 2020 is behind a paywall. Still, they do expose this hunk of text: “The statistic presents data on ad blocking user penetration rate in the United States from 2014 to 2020. It was found that 25.2 percent of U.S. internet users blocked ads on their connected devices in 2018. This figure is projected to grow to 27.5 percent in 2020.”

Provisional Conclusions

  1. The number is huge, but we don’t know how huge.
  2. Express doubt about any one large conclusion. Augustine Fou cautions me (and all of us) to look at where the data comes from, why it’s used, and how. In the case of Statista, for example, the data is aggregated from other sources. They don’t do the research themselves. It’s also almost too easy to copy and paste (as I’ve done here) images that might themselves be misleading. The landmark book on misleading statistics—no less relevant today than when it was written in 1954 (and perhaps more relevant than ever)—is How to Lie With Statistics.
  3. Everything is changing. For example, browsers are starting to obsolesce the roles played by ad blocking and tracking protection extensions and add-ons. Brave is the early leader, IMHO. Safari, Firefox and even Chrome are all making moves in this direction. Also check out Ghostery’s Cliqz. For some perspective on how long this is taking, take a look at what I was calling for way back in 2015.
  4. Still, the market is sending a massive message. And that’s what fully matters. The message is this: advertising online has come to have massively negative value.

Ad blocking and tracking protection are legitimate and eloquent messages from demand to supply. By fighting that message, marketing is crapping on the most obvious and gigantic clue it has ever seen. And the supply side of the market isn’t just marketers selling stuff. It’s developers who need to start working for the hundreds of millions of customers who have proven their value by using these tools.


Walked out on the front deck this morning and grabbed a photo set of the Moon between conjunctions with Venus (that was yesterday), Jupiter (tonight and tomorrow) and then Mercury (Saturday), before passing next to the Sun as a new moon on Sunday.

More about the show at EarthSky. Get up early and check it out.

Just before it started, the geology meeting at the Santa Barbara Central Library on Thursday looked like this from the front of the room (where I also tweeted the same pano):

Geologist Ed Keller

Our speakers were geology professor Ed Keller of UCSB and Engineering Geologist Larry Gurrola, who also works and studies with Ed. That’s Ed in the shot below.

As a geology freak, I know how easily terms like “debris flow,” “fanglomerate” and “alluvial fan” can clear a room. But this gig was SRO. That’s because around 3:15 in the morning of January 9th, debris flowed out of canyons and deposited fresh fanglomerate across the alluvial fan that comprises most of Montecito, destroying (by my count on the map below) 178 buildings, damaging more than twice that many, and killing 23 people. Two of those—a 2-year-old girl and a 17-year-old boy—are still interred in the fresh fanglomerate and sought by cadaver dogs.* The whole thing is beyond sad and awful.

The town was evacuated after the disaster so rescue and recovery work could proceed without interference, and infrastructure could be found and repaired: a job that required removing twenty thousand truckloads of mud and rocks. That work continues while evacuation orders are gradually lifted, allowing the town to repopulate itself to the very limited degree it can.

I talked today with a friend whose business is cleaning houses. Besides grieving the dead, some of whom were friends or customers, she reports that the cleaning work is some of the most difficult she has ever faced, even in homes that were spared the mud and rocks. Refrigerators and freezers, sitting closed and without electricity for weeks, reek of death and rot. Other customers won’t be back because their houses are gone.

Highway 101, one of just two freeways connecting Northern and Southern California, runs through town near the coast and more than two miles from the mountain front. Three debris flows converged on the highway and used it as a catch basin, filling its deep parts to the height of at least one bridge before spilling over its far side and continuing to the edge of the sea. It took two weeks of constant excavation and repair work before traffic could move again. Most exits remain closed. Coast Village Road, Montecito’s Main Street, is open for employees of stores there, but little is open for customers yet, since infrastructural graces such as water are not fully restored. (I saw the Honor Bar operating with its own water tank, and a water truck nearby.) Opening Upper Village will take longer. Some landmark institutions, such as San Ysidro Ranch and La Casa Santa Maria, will take years to restore. From what I gather, San Ysidro Ranch, arguably the nicest hotel in the world, was nearly destroyed. Its website thanks firefighters for salvation from the Thomas Fire, but nothing, I gather, could have saved it from the huge debris flow that wiped out nearly everything on the flanks of San Ysidro Creek. (All the top red dots along San Ysidro Creek in the map below mark lost buildings at the Ranch.)

Here is a map with final damage assessments. I’ve augmented it with labels for the canyons and creeks (with one exception: a parallel creek west of Toro Canyon Creek):

Click on the map for a closer view, or click here to view the original. On that one you can click on every dot and read details about it.

I should pause to note that Montecito is no ordinary town. Demographically, it’s Beverly Hills draped over a prettier landscape and attractive to people who would rather not live in Beverly Hills. (In fact the number of notable persons Wikipedia lists for Montecito outnumbers those it lists for Beverly Hills by a score of 77 to 71.) Culturally, it’s a village. Last Monday in The New Yorker, one of those notable villagers, T. Coraghessan Boyle, unpacked some other differences:

I moved here twenty-five years ago, attracted by the natural beauty and semirural ambience, the short walk to the beach and the Lower Village, and the enveloping views of the Santa Ynez Mountains, which rise abruptly from the coastal plain to hold the community in a stony embrace. We have no sidewalks here, if you except the business districts of the Upper and Lower Villages—if we want sidewalks, we can take the five-minute drive into Santa Barbara or, more ambitiously, fight traffic all the way down the coast to Los Angeles. But we don’t want sidewalks. We want nature, we want dirt, trees, flowers, the chaparral that did its best to green the slopes and declivities of the mountains until last month, when the biggest wildfire in California history reduced it all to ash.

Fire is a prerequisite for debris flows, our geologists explained. So is unusually heavy rain in a steep mountain watershed. There are five named canyons, each its own watershed, above Montecito, as we see on the map above. There are more to the east, above Summerland and Carpinteria, the next two towns down the coast. Those towns also took some damage, though less than Montecito.

Ed Keller put up this slide to explain conditions that trigger debris flows, and how they work:

Ed and Larry were emphatic about this: debris flows are not landslides, nor do many start that way (though one did in Rattlesnake Canyon 1100 years ago). They are also not mudslides, so we should stop calling them that. (Though we won’t.)

Debris flows require sloped soils left bare and hydrophobic—resistant to water—after a recent wildfire has burned off the chaparral that normally (as geologists say) “hairs over” the landscape. For a good look at what those soil surfaces look like, and how they are likely to respond to rain, look at the smooth slopes on the uphill side of 101 east of La Conchita. Notice how the surface is not only a smooth brown or gray, but has a crust on it. In a way, the soil surface has turned to glass. That’s why water runs off of it so rapidly.

Wildfires are common, and chaparral is adapted to them, becoming fuel for the next fire as it regenerates and matures. But rainfalls as intense as this one are not common. In just five minutes alone, more than half an inch of rain fell in the steep and funnel-like watersheds above Montecito—a rate of more than six inches an hour. This happens about once every few hundred years, or about as often as a tsunami.

It’s hard to generalize about the combination of factors required, but Ed has worked hard to do that, and this slide of his is one way of illustrating how debris flows happen eventually in places like Montecito and Santa Barbara:

From bottom to top, here’s what it says:

  1. Fires happen almost regularly, spreading most widely where chaparral has matured to become abundant fuel, as the firefighters like to call it.
  2. Flood events are more random, given the relative rarity of rain and even more rare rains of “biblical” volume. But they do happen.
  3. Stream beds in the floors of canyons accumulate rocks and boulders that roll down the gradually eroding slopes over time. The depth of these is expressed as basin instability. Debris flows clear out the rocks and boulders when a big flood event comes right after a fire, and the basin becomes stable (relatively rock-free) again.
  4. The sediment yield in a flood (F) is maximum when a debris flow (DF) occurs.
  5. Debris flows tend to happen once every few hundred years. And you’re not going to get the big ones if you don’t have the canyon stream bed full of rocks and boulders.

About this set of debris flows in particular:

  1. Destruction down Oak Creek wasn’t as bad as on Montecito, San Ysidro, Buena Vista and Romero Creeks because the canyon feeding it is smaller.
  2. When debris flows hit an obstruction, such as a bridge, they seek out a new bed to flow on. This is one of the actions that creates an alluvial fan. From the map it appears something like that happened—
    1. Where the flow widened when it hit Olive Mill Road, fanning east of Olive Mill to destroy all three blocks between Olive Mill and Santa Elena Lane before taking the Olive Mill bridge across 101 and down to the Biltmore while also helping other flows fill 101 as well. (See Mac’s comment below, and his link to a topo map.)
    2. In the area between Buena Vista Creek and its East Fork, which come off different watersheds.
    3. Where a debris flow forked south of Mountain Drive after destroying San Ysidro Ranch, continuing down both Randall and El Bosque Roads.

For those who caught (or are about to catch) Ellen’s Facetime with Oprah visiting neighbors, that happened among the red dots at the bottom end of the upper destruction area along San Ysidro Creek, just south of East Valley Road. Oprah’s own place is in the green area beside it on the left, looking a bit like Versailles. (Credit where due, though: Oprah’s was a good and compassionate report.)

Big question: did these debris flows clear out the canyon floors? We (meaning our geologists, sedimentologists, hydrologists and other specialists) won’t know until they trek back into the canyons to see how it all looks. Meanwhile, we do have clues. For example, here are after-and-before photos of Montecito, shot from space. And here is my close-up of the latter, shot one day after the event, when everything was still bare streambeds in the mountains and fresh muck in town:

See the white lines fanning back into the mountains through the canyons (Cold Spring, San Ysidro, Romero, Toro) above Montecito? Ed explained that these appear to be the washed out beds of creeks feeding into those canyons. Here is his slide showing Cold Spring Creek before and after the event:

Looking back at Ed’s basin threshold graphic above, one might say that there isn’t much sediment left for stream beds to yield, and that those in the floors of the canyons have returned to stability, meaning there’s little debris left to flow.

But that photo was of just one spot. There are many miles of creek beds to examine back in those canyons.

Still, one might hope that Montecito has now had its required 200-year event, and a couple more centuries will pass before we have another one.

Ed and Larry caution against such conclusions, emphasizing that most of Montecito’s and Santa Barbara’s inhabited parts gain their existence, beauty or both by grace of debris flows. If your property features boulders, Ed said, a debris flow put them there, and did that not long ago in geologic time.

For an example of boulders as landscape features, here are some we quarried out of our yard more than a decade ago, when we were building a house dug into a hillside:

This is deep in the heart of Santa Barbara.

The matrix mud we now call soil here is likely a mix of Juncal and Cozy Dell shale, Ed explained. Both are poorly lithified silt and erode easily. The boulders are a mix of Matilija and Coldwater sandstone, which comprise the hardest and most vertical parts of the Santa Ynez mountains. The two are so similar that only a trained eye can tell them apart.

All four of those geological formations were established long after dinosaurs vanished. All also accumulated originally as sediments, mostly on ocean floors, probably not far from the equator.

To illustrate one chapter in the story of how those rocks and sediments got here, UCSB has a terrific animation of how the transverse (east-west) Santa Ynez Mountains came to be where they are. Here are three frames in that movie:

What it shows is how, when the Pacific Plate was grinding its way northwest about eighteen million years ago, a hunk of that plate about a hundred miles long and the shape of a bread loaf broke off. At the top end was the future Malibu hills and at the bottom end was the future Point Conception, then situated south of what’s now Tijuana. The future Santa Barbara was west of the future Newport Beach. Then, when the Malibu end of this loaf got jammed at the future Los Angeles, the bottom end of the loaf swept out, clockwise and intact. At the start it was pointing at 5 o’clock and at the end (which isn’t), it pointed at 9:00. This was, and remains, a sideshow off the main event: the continuing crash of the Pacific Plate and the North American one.

Here is an image that helps, from that same link:

Find more geology, with lots of links, in Making sense of what happened to Montecito. I put that post up on the 15th and have been updating it since then. It’s the most popular post in the history of this blog, which I started in 2007. There are also 58 comments, so far.

I’ll be adding more to this post after I visit as much as I can of Montecito (exclusion zones permitting). Meanwhile, I hope this proves useful. Again, corrections and improvements are invited.

30 January

6 April, 2020
*I was told later, by a rescue worker who was on the case, that it was possible that both victims’ bodies had washed all the way to the ocean, and thus will never be found.

In this Edhat story, Ed Keller visits a recently found prior debris flow. An excerpt:

The mud and boulders from a prehistoric debris flow, the second-to-last major flow in Montecito, have been discovered by a UCSB geologist at the Bonnymede condominiums and Hammond’s Meadow, just east of the Coral Casino.

The flow may have occurred between 1,000 and 2,000 years ago, said Ed Keller, a professor of earth science at the university. He’s calling it the “penultimate event.” It came down a channel of Montecito Creek and was likely larger on that creek than during the disaster of Jan. 9, 2018, Keller said. Of 23 people who perished on Jan. 9, 17 died along Montecito Creek.

The long interval between the two events means that the probability of another catastrophic debris flow occurring in Montecito in the next 1,000 years is very low, Keller said.

“It’s reassuring,” he said, “They’re still pretty rare events, if you consider you need a wildfire first and then an intense rainfall. But smaller debris flows could occur, and you could still get a big flash flood. If people are given a warning to evacuate, they should heed it.”

Take a look at this chart:

CryptoCurrency Market Capitalizations


As Neo said, Whoa.

To help me get my head fully around all that’s going on behind that surge, or mania, or whatever it is, I’ve composed a lexicon-in-process that I’m publishing here so I can find it again. Here goes:

Bitcoin. “A cryptocurrency and a digital payment system invented by an unknown programmer, or a group of programmers, under the name Satoshi Nakamoto. It was released as open-source software in 2009. The system is peer-to-peer, and transactions take place between users directly, without an intermediary. These transactions are verified by network nodes and recorded in a public distributed ledger called a blockchain. Since the system works without a central repository or single administrator, bitcoin is called the first decentralized digital currency.” (Wikipedia.)

Cryptocurrency. “A digital asset designed to work as a medium of exchange using cryptography to secure the transactions and to control the creation of additional units of the currency. Cryptocurrencies are a subset of alternative currencies, or specifically of digital currencies. Bitcoin became the first decentralized cryptocurrency in 2009. Since then, numerous cryptocurrencies have been created. These are frequently called altcoins, as a blend of bitcoin alternative. Bitcoin and its derivatives use decentralized control as opposed to centralized electronic money/centralized banking systems. The decentralized control is related to the use of bitcoin’s blockchain transaction database in the role of a distributed ledger.” (Wikipedia.)

“A cryptocurrency system is a network that utilizes cryptography to secure transactions in a verifiable database that cannot be changed without being noticed.” (Tim Swanson, in Consensus-as-a-service: a brief report on the emergence of permissioned, distributed ledger systems.)

Distributed ledger. Also called a shared ledger, it is “a consensus of replicated, shared, and synchronized digital data geographically spread across multiple sites, countries, or institutions.” (Wikipedia, citing a report by the UK Government Chief Scientific Adviser: Distributed Ledger Technology: beyond block chain.) A distributed ledger requires a peer-to-peer network and consensus algorithms to ensure replication across nodes. The ledger is sometimes also called a distributed database. Tim Swanson adds that a distributed ledger system is “a network that fits into a new platform category. It typically utilizes cryptocurrency-inspired technology and perhaps even part of the Bitcoin or Ethereum network itself, to verify or store votes (e.g., hashes). While some of the platforms use tokens, they are intended more as receipts and not necessarily as commodities or currencies in and of themselves.”

Blockchain. “A peer-to-peer distributed ledger forged by consensus, combined with a system for ‘smart contracts’ and other assistive technologies. Together these can be used to build a new generation of transactional applications that establishes trust, accountability and transparency at their core, while streamlining business processes and legal constraints.” (Hyperledger.)

“To use conventional banking as an analogy, the blockchain is like a full history of banking transactions. Bitcoin transactions are entered chronologically in a blockchain just the way bank transactions are. Blocks, meanwhile, are like individual bank statements. Based on the Bitcoin protocol, the blockchain database is shared by all nodes participating in a system. The full copy of the blockchain has records of every Bitcoin transaction ever executed. It can thus provide insight about facts like how much value belonged to a particular address at any point in the past. The ever-growing size of the blockchain is considered by some to be a problem due to issues like storage and synchronization. On average, every 10 minutes, a new block is appended to the blockchain through mining.” (Investopedia.)

“Think of it as an operating system for marketplaces, data-sharing networks, micro-currencies, and decentralized digital communities. It has the potential to vastly reduce the cost and complexity of getting things done in the real world.” (Hyperledger.)
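
A toy sketch (mine, not from any of the sources above) of why a chain of hashed blocks “cannot be changed without being noticed”: each block commits to the hash of the one before it, so editing any historical entry breaks every later link.

```python
# Toy hash-chained ledger, illustrating tamper-evidence.
# A teaching sketch only; not Bitcoin's actual block format.
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a small chain; each block stores its predecessor's hash.
chain, prev = [], "0" * 64  # all-zero "genesis" hash
for tx in ["alice->bob 5", "bob->carol 2", "carol->dave 1"]:
    h = block_hash(prev, tx)
    chain.append({"prev": prev, "data": tx, "hash": h})
    prev = h

def verify(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

print(verify(chain))                  # True
chain[1]["data"] = "bob->carol 200"   # tamper with history...
print(verify(chain))                  # False: later hashes no longer check out
```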

Permissionless system. “A permissionless system [or ledger] is one in which identity of participants is either pseudonymous or even anonymous. Bitcoin was originally designed with permissionless parameters although as of this writing many of the on-ramps and off-ramps for Bitcoin are increasingly permission-based.” (Tim Swanson.)

Permissioned system. “A permissioned system [or ledger] is one in which identity for users is whitelisted (or blacklisted) through some type of KYB or KYC procedure; it is the common method of managing identity in traditional finance.” (Tim Swanson.)

Mining. “The process by which transactions are verified and added to the public ledger, known as the blockchain. (It is) also the means through which new bitcoin are released. Anyone with access to the Internet and suitable hardware can participate in mining. The mining process involves compiling recent transactions into blocks and trying to solve a computationally difficult puzzle. The participant who first solves the puzzle gets to place the next block on the block chain and claim the rewards. The rewards, which incentivize mining, are both the transaction fees associated with the transactions compiled in the block as well as newly released bitcoin.” (Investopedia.)
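The “computationally difficult puzzle” is worth seeing in miniature. Here is a toy Python version (the structure and difficulty are simplified assumptions; real Bitcoin mining hashes block headers against a binary target): keep trying nonces until the block’s hash starts with enough zeros.

```python
# A toy proof-of-work search: find a nonce so the block's hash begins
# with `difficulty` zero hex digits. Finding it takes many tries;
# checking someone else's answer takes a single hash.
import hashlib
import json

def mine(transactions, prev_hash: str, difficulty: int = 4):
    """Try nonces until the block hash has `difficulty` leading zeros."""
    nonce = 0
    while True:
        block = json.dumps({"prev": prev_hash,
                            "txs": transactions,
                            "nonce": nonce}, sort_keys=True)
        digest = hashlib.sha256(block.encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine([{"from": "alice", "to": "bob", "amount": 5}],
                     prev_hash="0" * 64)
print(nonce, digest)
```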

Ethereum. “An open-source, public, blockchain-based distributed computing platform featuring smart contract (scripting) functionality, which facilitates online contractual agreements. It provides a decentralized Turing-complete virtual machine, the Ethereum Virtual Machine (EVM), which can execute scripts using an international network of public nodes. Ethereum also provides a cryptocurrency token called “ether”, which can be transferred between accounts and used to compensate participant nodes for computations performed. Gas, an internal transaction pricing mechanism, is used to mitigate spam and allocate resources on the network. Ethereum was proposed in late 2013 by Vitalik Buterin, a cryptocurrency researcher and programmer. Development was funded by an online crowdsale during July–August 2014. The system went live on 30 July 2015, with 11.9 million coins “premined” for the crowdsale… In 2016 Ethereum was forked into two blockchains, as a result of the collapse of The DAO project. The two chains have different numbers of users, and the minority fork was renamed to Ethereum Classic.” (Wikipedia.)
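Gas is easier to grasp with a little arithmetic. The sketch below uses the standard unit conversions and the 21,000-gas cost of a plain ether transfer; the 50-gwei price is just an assumed example, not a quote of any real market.

```python
# Back-of-the-envelope gas arithmetic. Gas prices are quoted in gwei
# (1 gwei = 10**9 wei; 1 ether = 10**18 wei). 21,000 is the standard
# gas cost of a plain ether transfer; 50 gwei is an assumed price.
GWEI = 10**9
ETHER = 10**18

gas_used = 21_000          # plain value transfer
gas_price_gwei = 50        # assumed example price

fee_wei = gas_used * gas_price_gwei * GWEI
print(fee_wei / ETHER)     # 0.00105 ether paid for the computation
```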

Decentralized Autonomous Organization. This is “an organization that is run through rules encoded as computer programs called smart contracts. A DAO’s financial transaction record and program rules are maintained on a blockchain… The precise legal status of this type of business organization is unclear. The best-known example was The DAO, a DAO for venture capital funding, which was launched with $150 million in crowdfunding in June 2016 and was immediately hacked and drained of US$50 million in cryptocurrency… This approach eliminates the need to involve a bilaterally accepted trusted third party in a financial transaction, thus simplifying the sequence. The costs of a blockchain enabled transaction and of making available the associated data may be substantially lessened by the elimination of both the trusted third party and of the need for repetitious recording of contract exchanges in different records: for example, the blockchain data could in principle, if regulatory structures permitted, replace public documents such as deeds and titles. In theory, a blockchain approach allows multiple cloud computing users to enter a loosely coupled peer-to-peer smart contract collaboration.” (Wikipedia.)
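“Rules encoded as computer programs” can also be shown in miniature. Here is a toy DAO, purely illustrative and nothing like a production smart contract, in which a payout from a shared treasury executes automatically once a proposal clears a majority vote:

```python
# A toy DAO: the rule IS the code. A treasury payout happens only when
# a majority of members vote for the proposal; no third party decides.
class ToyDAO:
    def __init__(self, members, treasury):
        self.members = set(members)
        self.treasury = treasury
        self.votes = {}                      # proposal -> set of voters

    def vote(self, member, proposal):
        if member in self.members:
            self.votes.setdefault(proposal, set()).add(member)

    def execute(self, proposal, amount):
        """Pay out automatically on a majority vote; otherwise refuse."""
        if len(self.votes.get(proposal, ())) > len(self.members) / 2:
            self.treasury -= amount
            return f"paid {amount}"
        return "rejected"

dao = ToyDAO(["alice", "bob", "carol"], treasury=100)
dao.vote("alice", "fund-project-x")
dao.vote("bob", "fund-project-x")
print(dao.execute("fund-project-x", 40))  # paid 40 (2 of 3 voted yes)
```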

Initial Coin Offering. “A means of crowdfunding the release of a new cryptocurrency. Generally, tokens for the new cryptocurrency are sold to raise money for technical development before the cryptocurrency is released. Unlike an initial public offering (IPO), acquisition of the tokens does not grant ownership in the company developing the new cryptocurrency. And unlike an IPO, there is little or no government regulation of an ICO.” (Chris Skinner.)

“In an ICO campaign, a percentage of the cryptocurrency is sold to early backers of the project in exchange for legal tender or other cryptocurrencies, but usually for Bitcoin… During the ICO campaign, enthusiasts and supporters of the firm’s initiative buy some of the distributed cryptocoins with fiat or virtual currency. These coins are referred to as tokens and are similar to shares of a company sold to investors in an Initial Public Offering (IPO) transaction.” (Investopedia.)
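The mechanics of a token sale reduce to simple bookkeeping. Here is a toy sketch; the exchange rate and hard cap are numbers I invented for illustration, not any real offering’s terms:

```python
# Toy ICO bookkeeping: backers contribute during the campaign and are
# allocated newly created tokens at a fixed rate, up to a hard cap.
TOKENS_PER_BTC = 1_000      # assumed exchange rate
HARD_CAP_BTC = 500          # assumed maximum raise

raised, allocations = 0.0, {}

def contribute(backer: str, btc: float):
    global raised
    if raised + btc > HARD_CAP_BTC:
        raise ValueError("hard cap reached")
    raised += btc
    allocations[backer] = allocations.get(backer, 0) + btc * TOKENS_PER_BTC

contribute("alice", 2.0)
contribute("bob", 0.5)
print(allocations)  # {'alice': 2000.0, 'bob': 500.0}
```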

Tokens. “In the blockchain world, a token is a tiny fraction of a cryptocurrency (bitcoin, ether, etc) that has a value usually less than 1/1000th of a cent, so the value is essentially nothing, but it can still go onto the blockchain…This sliver of currency can carry code that represents value in the real world — the ownership of a diamond, a plot of land, a dollar, a share of stock, another cryptocurrency, etc. Tokens represent ownership of the underlying asset and can be traded freely. One way to understand it is that you can trade physical gold, which is expensive and difficult to move around, or you can just trade tokens that represent gold. In most cases, it makes more sense to trade the token than the asset. Tokens can always be redeemed for their underlying asset, though that can often be a difficult and expensive process. Though technically they could be redeemed, many tokens are designed never to be redeemed but traded forever. On the other hand, a ticket is a token that is designed to be redeemed and may or may not be trade-able.” (TokenFactory.)
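TokenFactory’s point, trade the token rather than the asset, fits in a few lines. This toy model (the names and structure are my own assumptions) makes the token a tradable claim on an off-chain asset, burned when it is redeemed:

```python
# A toy asset-backed token: a tradable claim on something held by a
# custodian. Trading the token is cheap; redeeming it burns the token
# and triggers delivery of the underlying asset.
class AssetToken:
    def __init__(self, asset: str, owner: str):
        self.asset = asset        # e.g. "1 oz gold, vault #12"
        self.owner = owner
        self.redeemed = False

    def trade(self, new_owner: str):
        """Trading the token is cheap; moving the gold is not."""
        if self.redeemed:
            raise ValueError("token already redeemed")
        self.owner = new_owner

    def redeem(self) -> str:
        """Exchange the token for the underlying asset and burn it."""
        if self.redeemed:
            raise ValueError("token already redeemed")
        self.redeemed = True
        return f"deliver {self.asset} to {self.owner}"

t = AssetToken("1 oz gold, vault #12", "alice")
t.trade("bob")
print(t.redeem())  # deliver 1 oz gold, vault #12 to bob
```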

“Tokens in the ethereum ecosystem can represent any fungible tradable good: coins, loyalty points, gold certificates, IOUs, in game items, etc. Since all tokens implement some basic features in a standard way, this also means that your token will be instantly compatible with the ethereum wallet and any other client or contract that uses the same standards.” (Ethereum.org/token.)
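The “basic features in a standard way” idea, the ERC-20 pattern, can be sketched outside Solidity. Here is a minimal Python illustration, not the actual standard, of the shared balance-and-transfer interface that lets one wallet handle any compliant token:

```python
# A minimal sketch of a fungible token's standard interface: every
# token exposes the same balance and transfer operations, so a single
# wallet can handle all of them. Illustrative only, not ERC-20 itself.
class FungibleToken:
    def __init__(self, name: str, supply: int, creator: str):
        self.name = name
        self.balances = {creator: supply}   # creator starts with everything

    def balance_of(self, account: str) -> int:
        return self.balances.get(account, 0)

    def transfer(self, sender: str, to: str, amount: int):
        if self.balance_of(sender) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[to] = self.balance_of(to) + amount

points = FungibleToken("LoyaltyPoints", supply=10_000, creator="store")
points.transfer("store", "alice", 250)
print(points.balance_of("alice"))  # 250
```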

“The most important takehome is that tokens are not equity, but are more similar to paid API keys. Nevertheless, they may represent a >1000X improvement in the time-to-liquidity and a >100X improvement in the size of the buyer base relative to traditional means for US technology financing — like a Kickstarter on steroids.” (Thoughts on Tokens, by Balaji S. Srinivasan.)

“A blockchain token is a digital token created on a blockchain as part of a decentralized software protocol. There are many different types of blockchain tokens, each with varying characteristics and uses. Some blockchain tokens, like Bitcoin, function as a digital currency. Others can represent a right to tangible assets like gold or real estate. Blockchain tokens can also be used in new protocols and networks to create distributed applications. These tokens are sometimes also referred to as App Coins or Protocol Tokens. These types of tokens represent the next phase of innovation in blockchain technology, and the potential for new types of business models that are decentralized – for example, cloud computing without Amazon, social networks without Facebook, or online marketplaces without eBay. However, there are a number of difficult legal questions surrounding blockchain tokens. For example, some tokens, depending on their features, may be subject to US federal or state securities laws. This would mean, among other things, that it is illegal to offer them for sale to US residents except by registration or exemption. Similar rules apply in many other countries.” (A Securities Law Framework for Blockchain Tokens.)

In fact tokens go back. All the way.

In Before Writing Volume I: From Counting to Cuneiform, Denise Schmandt-Besserat writes, “Tokens can be traced to the Neolithic period starting about 8000 B.C. They evolved following the needs of the economy, at first keeping track of the products of farming…The substitution of signs for tokens was the first step toward writing.” (For a compression of her vast scholarship on the matter, read Tokens: their Significance for the Origin of Counting and Writing.)

I sense that we are now at a threshold no less pregnant with possibilities than we were when ancestors in Mesopotamia rolled clay into shapes, made marks on them and invented t-commerce.

And here is a running list of sources I’ve visited, so far:

You’re welcome.

To improve it, that is.

In The American Dream, Quantified at Last, David Leonhardt in The New York Times makes a despairing case for a perfect Onion headline: American Dream Ends When Nation Wakes Up.

Like so much else the Times correctly tries to do, the piece issues a wake-up call. It is also typical of the Times’ tendency to look at every big social issue through the lenses of industrial age norms, giving us lots of stats and opinions from Serious Sources, and offering policy-based remedies (e.g. “help more middle- and low-income children acquire the skills that lead to good-paying jobs”).

It should help to remember that the ancestors who gave us surnames like Tanner, Smith, Farmer and Cooper didn’t have “jobs.” As a word, “jobs” acquired its current meaning after industry won the industrial revolution—and began to wane in usage after personal computing and the Internet showed up, giving us countless new ways to work on our own and with each other. You can see that in the rate at which the word “jobs” showed up in books:

[Chart: frequency of the word “jobs” in books over time]

I’m not even sure “work” was all the Tanners and Smiths of the world did. Maybe it was what we now call “a living,” in an almost literal sense.

Whatever it was, it involved technologies: tools they shaped, and which also shaped them. (Source.) Yet for all the ways those ancestors were confined and defined by the kind of work they did, they were also very ingenious in coping with and plying those same technologies. Anyone who has spent much time on a farm, or in any kind of hardscrabble existence, knows how inventive people can be with the few means they have to operate in the world.

This is one reason why I have trouble with all the predictions of, for example, robot and AI take-overs of most or all work. For all the degrees to which humans are defined and limited by the tools that make them, humans are also highly ingenious. They find new ways to make new work for themselves and others. This is why I’d like to see more thought given to how ingenuity shows up and plays out. And not just more hand-wringing over awful futures that seem to be linear progressions out of industrial age (or dawn-of-digital age) framings and norms.

Note: the spear point above is one I found in a tilled field north of Chapel Hill, NC. It is now at the Alamance County Historical Museum.

 

