
The Castle Doctrine


The Castle doctrine has been around a long time. Cicero (106–43 BCE) wrote, “What more sacred, what more strongly guarded by every holy feeling, than a man’s own home?” In Book 4, Chapter 16 of his Commentaries on the Laws of England, William Blackstone (1723–1780 CE) added, “And the law of England has so particular and tender a regard to the immunity of a man’s house, that it stiles it his castle, and will never suffer it to be violated with impunity: agreeing herein with the sentiments of ancient Rome…”

Since you’re reading this online, let me ask, what’s your house here? What sacred space do you strongly guard, and never suffer to be violated with impunity?

At the very least, it should be your browser.

But unless you’re running tracking protection in the browser you’re using right now, companies you’ve never heard of (and some you have) are watching you read this, eager to use or sell personal data about you so they can deliver the human-behavior hack called “interest-based advertising.”

Shoshana Zuboff, of Harvard Business School, has a term for this: surveillance capitalism, defined as “a wholly new subspecies of capitalism in which profits derive from the unilateral surveillance and modification of human behavior.”

Almost across the board, advertising-supported publishers have handed their business over to adtech, the surveillance-based (they call it “interactive”) wing of advertising. Adtech doesn’t see your browser as a sacred personal space, but instead as a shopping cart with ad space that you push around from site to site.

So here is a helpful fact: we don’t go anywhere when we use our browsers. Our browser homes are in our computers, laptops and mobile devices. When we “visit” a web page or site with our browsers, we actually just request its contents (via the hypertext transfer protocol, HTTP or HTTPS).
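To make that concrete, here is a minimal sketch (host and path are illustrative) of the text a browser sends for a page. Nothing “goes” anywhere; the request leaves your machine and the content comes back to it:

```python
def build_get_request(host: str, path: str = "/") -> str:
    """Build the raw HTTP/1.1 GET request a browser sends for a page."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"  # blank line ends the request headers
    )

print(build_get_request("example.com", "/index.html"))
```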

In no case do we consciously ask to be spied on, or abused by content we didn’t ask for or expect. That’s why we have every right to field-strip out anything we don’t want when it arrives at our browsers’ doors.

The castle doctrine is what hundreds of millions of us practice when we use tracking protection and ad blockers. It is what called the new Brave browser into the marketplace. It’s why Mozilla has been cranking up privacy protections with every new version of Firefox. It’s why Apple’s new content blocking feature treats adtech the way chemo treats cancer. It’s why respectful publishers will comply with CHEDDAR. It’s why Customer Commons is becoming the place to choose No Trespassing signs potential intruders will obey. And it’s why #NoStalking is a good deal for publishers.

The job of every entity I named in the last paragraph — and every other one in a position to improve personal privacy online — is to bring as much respect to the castle doctrine in the virtual world as we’ve had in the physical one for more than two thousand years.

It should help to remember that it’s still early. We’ve only had commercial activity on the Internet since April 1995. But we’ve also waited long enough. Let’s finish making our homes online the safe places they should have been in the first place.

 

Why #NoStalking is a good deal for publishers

"Just give me ads not based on tracking me.."

That line, scribbled on a whiteboard at VRM Day recently at the Computer History Museum, expresses the unspoken social contract we’ve always had with ad-supported print publications in the physical world. But we never needed to say it in that world, for the same reason we never needed to say “don’t follow me out of your store,” or “don’t use ink that will give me an infection.” Nobody ever would have considered doing anything that ridiculously ill-mannered.

But following us, and infecting our digital bodies (e.g. our browsers) with microbes that spy on us, is pro forma for ad-supported publishers on the Internet. That’s why Do Not Track was created in 2007, and a big reason why since then hundreds of millions of us have installed ad blockers and tracking protection of various kinds in our browsers and mobile devices.

But blocking ads also breaks that old social contract. In that sense it’s also ill-mannered (though not ridiculously so, given the ickiness that typifies so much advertising online).

What if we wanted to restore that social contract, for the good of publishers that are stuck in their own ill-mannered death spiral?

The first and easiest way is by running tracking protection alone. There are many ways of doing that. There are settings you can make in some browsers, plus add-ons or extensions from Aloodo, Baycloud, Disconnect, the EFF and others.

The second is requesting refined settings from browser makers. That’s what @JuliaAngwin does in this tweet about the new Brave browser:

[Image: Julia Angwin’s request to Brave]

But why depend on each browser to provide us with a separate setting, with different rules? How about having our own pro forma rule we could express through all our browsers and apps?

We have the answer, and it’s called the NoStalking rule. In fact, it’s already being worked out and formalized at the Kantara Initiative and will live at Customer Commons, where it will be legible at all three of these levels:

[Image: the term, legible at all three levels]

It will work because it’s a good one for both sides. Individuals proffering the #NoStalking term get guilt-free use of the goods they come to the publisher for, and the publisher gets to stay in business — and improve that business by running advertising that is actually valued by its recipients.

The offer can be expressed in one line of code in a browser, and accepted by corresponding code on the publisher’s side. The browser code can be run natively (as, for example, a choice in the Brave menu above) or through an extension such as an ad or tracking blocker. In those cases the blocker would open the valve to non-tracking-based advertising.
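As a sketch only (the header name, values, and handshake below are assumptions, since the actual term is still being formalized at Kantara and Customer Commons), the exchange could look something like this:

```python
def nostalking_headers(base=None):
    """Browser side: attach the hypothetical #NoStalking offer to a request."""
    headers = dict(base or {})
    headers["NoStalking"] = "1"  # "just show me ads not based on tracking me"
    return headers

def publisher_accepts(request_headers):
    """Publisher side: detect the offer and honor it with non-tracking ads."""
    return request_headers.get("NoStalking") == "1"
```

The same check could just as easily live in a blocker extension, which would then open the valve to non-tracking-based advertising.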

On the publisher’s side, the agreement can be automatic. Or simply de facto, meaning the publisher only runs non-tracking-based ads anyway. (As does, for example, Medium.) In that case, the publisher is compliant with CHEDDAR, which was outlined by Don Marti (of Aloodo, above) and discussed both at VRM Day and then at IIW, in May. Here’s an icon-like image for CHEDDAR, drawn by Craig Burton on his phone:

[Image: Craig Burton’s CHEDDAR sketch]

To explain CHEDDAR, Don wrote this on the same whiteboard where the NoStalking term above also appeared:

[Image: Don Marti’s whiteboard notes explaining CHEDDAR]

For the A in CHEDDAR, if we want the NoStalking agreement to be accountable from both sides, it might help to have a consent receipt. That spec is in the works too.
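Since that spec is still in the works, the following is only an illustrative sketch of what such a receipt might record; the field names are made up for this example, not taken from the Kantara work:

```python
import json
import time
import uuid

def make_consent_receipt(data_subject, data_controller, term="#NoStalking"):
    """Sketch of a minimal consent receipt both sides could keep.
    All field names here are illustrative, not from the spec."""
    return {
        "receipt_id": str(uuid.uuid4()),  # unique, so either side can cite it later
        "timestamp": int(time.time()),
        "data_subject": data_subject,
        "data_controller": data_controller,
        "term": term,
        "purpose": "ads not based on tracking",
    }

print(json.dumps(make_consent_receipt("alice", "publisher.example"), indent=2))
```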

What matters most is that individuals get full respect as sovereign actors operating with full agency in the marketplace. That means it isn’t good enough just for sites to behave well. Sites also need to respond to friendly signals of intent coming directly from individuals visiting those sites. That’s why the #NoStalking agreement is important. It’s the first of many other possible signals as well.

It also matters that the #NoStalking agreement may be the first of its kind in the online world: one where the individual is the one extending the offer and the business is the one agreeing to it, rather than the other way around.

At VRM Day and IIW, we had participants affiliated with the EFF, Mozilla, Privacy Badger, Adblock Plus, Consent Receipt, PDEC (Personal Data Ecosystem Consortium), and the CISWG (Consent & InfoSharing Working Group), among others. Work has continued since then, and includes people from the publishing, advertising and other interested communities. There’s a lot to be encouraged about.

In case anybody wonders whether advertising can work as well when it’s not based on tracking, check out The Real Price of Cheap Talk: Do customers benefit from highly targeted online ads? by Eilene Zimmerman (@eilenez) in Insights by Stanford Business, covering research by Pedro Gardete. The gist:

Now a new paper from Stanford Graduate School of Business professor Pedro Gardete and Yakov Bart, a professor at Northeastern University, sheds light on who is likely to benefit from personalized advertising and identifies managerial best practices.

The researchers found that highly targeted and personalized ads may not translate to higher profits for companies because consumers find those ads less persuasive. In fact, in some cases the most effective strategy is for consumers to keep information private and for businesses to track less of it.

You can also mine the oeuvres of Bob Hoffman and Don Marti for lots of other material that makes clear that the best advertising is actual advertising, and not stalking-based direct marketing that only looks like advertising.

Our next step, while we work on all this, is to put together an FAQ on why the #NoStalking deal is a good one for everybody. Look for that at Customer Commons, where terms behind more good deals that customers offer will show up in the coming months.

How customers can debug business with one line of code


Four years ago, I posted An olive branch to advertising here. It began,

Online advertising has a couple of big problems that could possibly be turned into opportunities. One is Do Not Track, or DNT. The other is blocking of ads and/or tracking.

Publishers and the advertising business either attacked or ignored Do Not Track, which was too bad, because the ideas we had for making it work might have prevented the problem those businesses now have with ad blocking.

According to the latest PageFair/Adobe study, the number of people blocking ads passed 200 million last May, with double-digit increases in adoption, worldwide. Tracking protection is also gaining in popularity.

While those solutions provide individuals with agency and scale, they don’t work for publishers. Not yet, anyway.

What we need is a solution that scales for readers and is friendly to publishers and the kind of advertising readers can welcome—or at least tolerate, in appreciation of how ads sponsor the content they want. This is what we have always had with newspapers, magazines, radio and TV in the offline world, none of which ever tracked anybody anywhere.

So now we offer a solution. It’s a simple preference, which readers can express in code, that says this: Just show me ads that aren’t based on tracking me. Equally simple code can sit on the publishers’ side. Digital handshakes can also happen between the two.

This term will live at Customer Commons, which was designed for that purpose, on the model of Creative Commons (which also came out of work done by folks here at the Berkman Center). This blog post provides some context.

We’ll be working on that term, its wording, and the code that expresses and agrees to it, next week at the Computer History Museum in Silicon Valley. Monday will be VRM Day. Tuesday through Thursday will be IIW—the Internet Identity Workshop (where ProjectVRM was incubated almost ten years ago). VRM Day is mostly for planning the work we’ll do at IIW. VRM Day is free, and IIW is cheap for three days of actually getting stuff done. (It’s by far the most leveraged conference I know, partly because it’s an unconference: no keynotes, panels or sponsor booths. Just breakouts that participants create, choose and lead.)

If you care about aligning publishing and advertising online with what worked for hundreds of years offline — and driving uninvited surveillance out of business itself — come help us out.

This one term is a first step. There will be many more before we customers get the full respect we deserve from ad-funded businesses online. Each step needs to prove to one business category or another that customers aren’t just followers. Sometimes they need to take the lead.

This is one of those times.  So let’s make it happen.

See you next week.

 

 

IoT & IoM next week at IIW


(This post was updated and given a new headline on 20 April 2016.)

In The Compuserve of Things, Phil Windley issues this call to action:

On the Net today we face a choice between freedom and captivity, independence and dependence. How we build the Internet of Things has far-reaching consequences for the humans who will use—or be used by—it. Will we push forward, connecting things using forests of silos that are reminiscent of the online services of the 1980’s, or will we learn the lessons of the Internet and build a true Internet of Things?

In other words, an Internet of Me (#IoM) and My Things. Meaning things we own that belong to us, under our control, and not puppeted by giant companies using them to snarf up data about our lives. Which is the #IoT status quo today.

A great place to work on that is IIW — the Internet Identity Workshop, which takes place next Tuesday through Thursday, April 26-28, at the Computer History Museum in Silicon Valley. Phil and I co-organize it with Kaliya Hamlin.

To be discussed, among other things, is personal privacy, secured in distributed and crypto-secured sovereign personal spaces on your personal devices. Possibly using blockchains, or approaches like them.

So here is a list of some topics, code bases and approaches I’d love to see pushed forward at IIW:

  • OneName is “blockchain identity.”
  • Blockstack is a “decentralized DNS for blockchain applications” that “gives you fast, secure, and easy-to-use DNS, PKI, and identity management on the blockchain.” More: “When you run a Blockstack node, you join this network, which is more secure by design than traditional DNS systems and identity systems. This  is because the system’s registry and its records are secured by an underlying blockchain, which is extremely resilient against tampering and control. In the registry that makes up Blockstack, each of the names has an owner, represented by a cryptographic keypair, and is associated with instructions for how DNS resolvers and other software should resolve the name.” Here’s the academic paper explaining it.
  • The Blockstack Community is “a group of blockchain companies and nonprofits coming together to define and develop a set of software protocols and tools to serve as a common backend for blockchain-powered decentralized applications.” Pull quote: “For example, a developer could use Blockstack to develop a new web architecture which uses Blockstack to host and name websites, decentralizing web publishing and circumventing the traditional DNS and web hosting systems. Similarly, an application could be developed which uses Blockstack to host media files and provide a way to tag them with attribution information so they’re easy to find and link together, creating a decentralized alternative to popular video streaming or image sharing websites. These examples help to demonstrate the powerful potential of Blockstack to fundamentally change the way modern applications are built by removing the need for a “trusted third party” to host applications, and by giving users more control.” More here.
  • IPFS (short for InterPlanetary File System) is a “peer to peer hypermedia protocol” that “enables the creation of completely distributed applications.”
  • OpenBazaar is “an open peer to peer marketplace.” How it works: “you download and install a program on your computer that directly connects you to other people looking to buy and sell goods and services with you.” More here and here.
  • Mediachain, from Mine, has this goal: “to unbundle identity & distribution.” More here and here.
  • telehash is “a lightweight interoperable protocol with strong encryption to enable mesh networking across multiple transports and platforms,” from @Jeremie Miller and other friends who gave us jabber/xmpp.
  • Ethereum is “a decentralized platform that runs smart contracts: applications that run exactly as programmed without any possibility of downtime, censorship, fraud or third party interference.”
  • Keybase is a way to “get a public key, safely, starting just with someone’s social media username(s).”
  • ____________ (your project here — tell me by mail or in the comments and I’ll add it)

In tweet-speak, that would be @BlockstackOrg, @IPFS, @OpenBazaar, @OneName, @Telehash, @Mine_Labs #Mediachain, and @IBMIVB #ADEPT

On the big company side, dig what IBM’s Institute for Business Value is doing with “empowering the edge.” While you’re there, download Empowering the edge: Practical insights on a decentralized Internet of Things. Also go to Device Democracy: Saving the Future of the Internet of Things — and then download the paper by the same name, which includes this graphic:

[Image: IBM’s device democracy pyramid]

Put personal autonomy in that top triangle and you’ll have a fine model for VRM development as well. (It’s also nice to see Why we need first person technologies on the Net, published here in 2014, sourced in that same paper.)

Ideally, we would have people from all the projects above at IIW. For those not already familiar with it, IIW is a three-day unconference, meaning it’s all breakouts, with topics chosen by participants, entirely for the purpose of getting like-minded do-ers together to move their work forward. IIW has been doing that for many causes and projects since the first one, in 2005.

Register for IIW here: https://iiw22.eventbrite.com/.

Also register, if you can, for VRM Day: https://vrmday2016a.eventbrite.com/. That’s when we prep for the next three days at IIW. The main focus for this VRM Day is here.

Bonus link: David Siegel’s Decentralization.

 

 

 

The coming collapse of surveillance marketing

A few minutes ago, on a mailing list, somebody asked me if Google hadn’t shown people don’t mind having personal data harvested as long as they get value in exchange for it. Here’s what I answered:

It’s not about Google — or Google alone. It’s about the wanton and widespread harvesting of personal data without permission, by pretty much the entire digital marketing field, or what it has become while in maximum thrall of Big Data.

That this is normative in the extreme does not make it right, or even sustainable. The market — customers like you and me — doesn’t like it. Technologists, sooner or later, will provide customers with means of control they still lack today.

The plain fact is that most people don’t like surveillance-based marketing. Study after study (by TRUSTe, Pew, Customer Commons and others) has shown that 90+% of people have problems with the way their data and their privacy are abused online.

“The Tradeoff Fallacy: How Marketers Are Misrepresenting American Consumers and Opening Them Up to Exploitation,” a study from the Annenberg School at the University of Pennsylvania, says,

a majority of Americans are resigned to giving up their data—and that is why many appear to be engaging in tradeoffs. Resignation occurs when a person believes an undesirable outcome is inevitable and feels powerless to stop it. Rather than feeling able to make choices, Americans believe it is futile to manage what companies can learn about them. The study reveals that more than half do not want to lose control over their information but also believe this loss of control has already happened.

More from Penn News:

Survey respondents were asked whether they would accept “tradeoffs,” such as discounts, in exchange for allowing their supermarkets to collect information about their grocery purchases. Among the key findings:

    • 91 percent disagree (77 percent of them strongly) that “if companies give me a discount, it is a fair exchange for them to collect information about me without my knowing.”
    • 71 percent disagree (53 percent of them strongly) that “it’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless Internet, or Wi-Fi, without charge.”
    • 55 percent disagree (38 percent of them strongly) that “it’s okay if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”

Only about 4 percent agree or agree strongly with all three propositions.

But 58 percent agreed with both of the following two statements that together indicate resignation:  “I want to have control over what marketers know about me online” and “I’ve come to accept that I have little control over what marketers can learn about me online.”

The Net we know today was born only twenty years ago, when it opened to commercial activity. We are still naked there, lacking in clothing and shelter (to name two familiar privacy technologies in the physical world). Eventually we’ll have clothing and shelter in many forms, good means for preventing and permitting the ways others deal with us, and full agency in our dealings with business and government.

In the meantime we’ll have a status quo to which we remain resigned.

I suspect that even Google knows this will change.

Bonus Link.

Think about an irony here. Most brick-and-mortar merchants would be appalled at the thought of placing tracking beacons on visiting customers, to spy on them after they leave the store, just so they can be “delivered” a better “advertising experience.” And obviously, customers would hate it too. Yet many of the same merchants hardly think twice about doing the same online.

This will change because there is clear market sentiment against it. We see this through pressure toward regulation (especially in Europe), and through ad and tracking blocking rates that steadily increase.

But both regulation and blockers are stone tools. Eventually we’ll get real clothing and shelter.

That’s what we’ve been working on here with ProjectVRM. It’s taking longer than we expected at first, but it will happen, and not just because there is already a lot of VRM development going on.

It will happen because we have the Net, and the Net is not just Google and Facebook and other modern industrial giants. The Net is where all of those companies live, in the company of customers, to whom, sooner or later, they become accountable.

Right now marketing is not taking the massive negative externalities of surveillance into account, mostly because marketing is a B2B rather than a B2C business, and there persists a blindered mania around Big Data. But they will take those externalities into account eventually, because the Cs of the world will gain the power to protect themselves against unwanted surveillance, and will provide far more useful economic signaling to the businesses of the world than marketing can ever guess at.

Once that happens, the surveillance marketing business, and what feeds it, will collapse.

“A house divided against itself cannot stand,” Lincoln said. That was in 1858, and in respect to slavery. In 2015 the language of marketing — in which customers are “targets” to be “acquired,” “controlled,” “managed” and “locked in” — is not much different than the language of slave owners in Lincoln’s time.

This will change for the simple reason that we are not slaves. We are the ones with the money, the choice about patronage, and the network. Companies that give us full respect will be the winners in the long run. Companies that continue to treat us as less than human will suffer the consequences.

First we take Oz

Australia’s privacy principles are among the few in the world that require organizations to give individuals personal information gathered about them.* This opens the path to proving that we can do more with our own data than anybody else can.

Estimating the size of the personal data management business is like figuring the size of the market for talking or driving. (Note: we can also do more with those than companies can.)

Starting us down this path is Ben Grubb (@BenGrubb) of the Sydney Morning Herald. Ben requested personal data held by the Australian telco giant Telstra, and found himself in a big fight, which he won. (Here’s the decision. Telstra is appealing, but they’re still gonna lose.)

Bravo to Ben — not just for whupping a giant, but for showing a path forward for individual empowerment in the marketplace. Thanks to Australia’s privacy principles, and Ben’s illustrative case, the yellow brick road to the VRM future is widest in Oz.

Here (and in New Zealand) we not only have lots of VRM developers (Flamingo, Fourth Party, Geddup, Meeco, MyWave, OneExus, Welcomer and others I’ll be insulting by not listing yet), but legal easement toward proving that individuals can do more with their own data than can the companies that follow us. And proving as well that individuals managing their own data will be good for those companies too. The data they get will be richer, more accurate, more contextual, and more useful.

This challenge is not new. It’s as old as our species. The biggest tech revolutions have always been inventions individuals could put to the best use:

  • Stone tools
  • Weaving
  • Smithing
  • Musical instruments
  • Hand-held hunting and fighting tools
  • Automobiles
  • PCs
  • The Internet (which is a node-to-node invention, not an advanced phone or cable company, even though we pay those things for access to it)
  • Mobile phones and tablets
  • Movable type (which would be nowhere without individual authors — and writing tools in the hands of those authors)

There should be symbiosis here. There are things big organizations do best, and things individuals do best. And much that both do best when they work together.

Look at cars, which are a VRM technology: we use them to get around the marketplace, and to help us do business with many companies. They give us ways to be both independent and engaging. But companies don’t drive them. We do. Companies provide parking lots, garages, drive-up windows and other conveniences for drivers. Symbiosis.

So, while Telstra is great at building and managing communication infrastructure and services, its customers will be great at doing useful stuff with the kind of data Ben requested, such as locations, calls and texts — especially after customers get easy-to-use tools and services that help them work as points of integration for their own data, and managers of what gets done with it. There are many VRM developers around the world working toward that purpose, and many more will come once they smell the opportunities.

These opportunities are only apparent when you look at the market through your own eyes as a sovereign human being. The same opportunities are mostly invisible when you look at the market from the eye at the top of the industrial pyramid.

Bonus links:


* My understanding is that privacy principles such as the OECD’s and Ontario’s provide guidance but not the full force of law, or means of enforcement. Australia’s differ because they have teeth. See the Determination on page 36 of the Privacy Commissioner’s investigation and decision. Canada’s also has teeth. See the list of orders issued in Ontario. If there are other examples of decisions like this one, anywhere in the world, please let us know.

Of vaults and honey pots

Personal Blackbox (pbb.me) is a new #VRM company — or so I gather, based on what they say they offer to users: “CONTROL YOUR DATA & UNLOCK ITS VALUE.”

So you’ll find them listed now on our developers list.

Here is the rest of the text on their index page:

[Image: PBB’s wheel graphic]

PBB is a technology platform that gives you control of the data you produce every day.

PBB lets you gain insights into your own behaviors, and make money when you choose to give companies access to your data. The result? A new and meaningful relationship between you and your brands.

At PBB, we believe people have a right to own their data and unlock its benefits without loss of privacy, control and value. That’s why we created the Personal Data Independence Trust. Take a look and learn more about how you can own your data and its benefits.

In the meantime we are hard at work to provide you a service and a company that will make a difference. Join us to participate and we will keep you posted when we are ready to launch.

That graphic, and what seems to be said between the lines, tells me Personal Blackbox’s customers are marketers, not users. And, as we so often hear, “If the service is free, you’re the product being sold.”

But, between the last paragraph and this one, I ran into Patrick Deegan, the Chief Technology Officer of Personal Blackbox, at the PDNYC meetup. When I asked him if the company’s customers are marketers, he said no — and that PBB (as it’s known) is doing something much different that’s not fully explained by the graphic and text above, and is tied with the Personal Data Independence Trust, about which not much is said at the link to it. (At least not yet. Keep checking back.) So I’ll withhold judgement about it until I know more, and instead pivot to the subject of VRM business models, which that graphic brings up for me.

I see two broad ones, which I’ll call vault and honey pot.

The vault model gives the individual full control over their personal data and what’s done with it, which could be anything, for any purpose. That data primarily has use value rather than sale value.

The honey pot model also gives the individual control over their personal data, but mostly toward providing a way to derive sale value for that data (or something similar, such as bargains and offers from marketers).

The context for the vault model is the individual’s whole life, and selective sharing of data with others.

The context for the honey pot model is the marketplace for qualified leads.

The vault model goes after the whole world of individuals. Being customers, or consumers, is just one of the many roles we play in that world. Who we are and what we do — embodied in our data — is infinitely larger than what’s valuable to marketers. But there’s not much money in that yet.

But there is in the honey pot model, at least for now. Simply put, the path to market success is a lot faster in the short run if you find new ways to help sellers sell. $zillions are being spent on that, all the time. (Just look at the advertising that comes along with that last link, to a search.)

FWIW, I think the heart of VRM is in the vault model. But we have a big tent here, and many paths to explore. (And many metaphors to mix.)

Toward VRooMy privacy policies

In The nightmare of easy and simple, T.Rob unpacks the can of worms that is:

  1. one company’s privacy policy,
  2. provided by another company’s automatic privacy policy generating system, which is
  3. hosted at that other company, and binds you to their privacy policy, which binds you to
  4. three other companies’ privacy policies, none of which assure you of any privacy, really. Then,
  5. the last of these is Google’s, which “is basically summed up as ‘we own your ass'” — and worse.

The company was GeniCan — a “smart garbage can” in the midst of being crowdfunded. GeniCan, like so many other connected devices, lives in the Internet of Things, or IoT. After exploring some of the many ways that IoT is already FUBAR in the privacy realm, T.Rob offers some constructive help:

The VRM Version
There is a possible version of this device that I’d actually use. It would be the one with the VRM-y personal cloud architecture. How does that work? Same architecture I described in San Francisco:

  • The device emits signed data over pub/sub so that secondary and tertiary recipients of data can trust it.

  • By default, the device talks to the vendor’s service so users don’t need any other service or device to make it work.

  • The device can be configured to talk to a service of the user’s choosing instead of, or in addition to that of the manufacturer.

  • The device API is open.
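
Those four properties can be sketched in a few lines. Everything here is an assumption for illustration (the endpoint URL, the wire format, and the use of HMAC signing; a real device would likely use public-key signatures so secondary and tertiary recipients could verify messages without a shared secret):

```python
import hashlib
import hmac
import json

class VrmDevice:
    """Toy model of the device behavior described above."""

    def __init__(self, signing_key: bytes,
                 endpoint: str = "https://vendor.example/ingest"):
        self.signing_key = signing_key
        # Default endpoint is the vendor's service, so the device
        # works with no extra setup.
        self.endpoints = [endpoint]

    def add_endpoint(self, url: str):
        # A service of the user's choosing, in addition to
        # (or instead of) the manufacturer's.
        self.endpoints.append(url)

    def publish(self, topic: str, payload: dict) -> dict:
        """Emit signed data over pub/sub, so recipients can trust it."""
        body = json.dumps({"topic": topic, "data": payload}, sort_keys=True)
        sig = hmac.new(self.signing_key, body.encode(), hashlib.sha256).hexdigest()
        return {"body": body, "sig": sig}

def verify(key: bytes, message: dict) -> bool:
    """Any recipient holding the key can check the message wasn't altered."""
    expected = hmac.new(key, message["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["sig"])
```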

Since privacy policy writing for IoT is pretty much a wide-open greenfield, that provides a helpful starting point. It will be good to see who picks up on it, and how.

Preparing for the 3D/VR future

Look in the direction that Meerkat and Periscope both point.

If you’ve witnessed the output of either, several things become clear about their evolutionary path:

  1. Stereo sound is coming. So is binaural sound, with its you-are-there qualities.
  2. 3D will come too, of course, especially as mobile devices start to include two microphones and two cameras.
  3. The end state of both those developments is VR, or virtual reality. At least on the receiving end.

The production end is a different animal. Or herd of animals, eventually. Expect professional gear from all the usual sources, showing up at CES starting next year and on store shelves shortly thereafter. Walking around like a dork holding a mobile in front of you will look in 2018 like holding a dial-phone handset to your head looks today.

I expect the most handy way to produce 3D and VR streams will be with glasses like these:

[Image: placeholder design for 3D/VR glasses]

(That’s my placeholder design, which is in the public domain. That’s so it has no IP drag, other than whatever submarine patents already exist, and I am sure there are some.)

Now pause to dig @ctrlzee‘s Fast Company report on Facebook’s 10-year plan to trap us inside The Matrix. How long before Facebook buys Meerkat and builds it into Oculus Rift? Or buys Twitter, just to get Periscope and do the same?

Whatever else happens, the rights clearing question gets very personal. Do you want to be broadcast and/or recorded by others or not? What are the social and device protocols for that? (The VRM dev community has designed one for the glasses above. See the ⊂ ⊃ in the glasses? That’s one. Each corner light is another.)

We should start zero-basing the answers today, while the inevitable is in sight but isn’t here yet. Empathy is the first requirement. (Take the time to dig Dave Winer’s 12-minute podcast on the topic. It matters.) Getting permission is another.

As for the relevance of standing law, almost none of it applies at the technical level. Simply put, all copyright laws were created in times when digital life was unimaginable (e.g. Statute of Anne, ASCAP), barely known (Act of 1976), or highly feared (WIPO, CTEA, DMCA).

How would we write new laws for an age that has barely started? Or why start with laws at all? (Nearly all regulation protects yesterday from last Thursday. And too often it’s crafted by know-nothings.)

We’ve only been living the networked life since graphical browsers and ISPs arrived in the mid-1990s. Meanwhile we’ve had thousands of years to develop civilization in the physical world. Which means that, relatively speaking, networked life is Eden. It’s brand new here, and we’re all naked. That’s why it’s so easy for anybody to see everything about us online.

How will we create the digital equivalents of the privacy technologies we call clothing and shelter? Is the first answer a technical one, a policy one, or both? Which should come first? (In Europe and Australia, policy already has.)

Protecting the need for artists to make money is part of the picture. But it’s not the only part. And laws are only one way to protect artists, or anybody.

Manners come first, and we barely have those yet, if at all. None of the big companies that currently dominate our digital lives have fully thought out how to protect anybody’s privacy. Those that come closest are ones we pay directly, and are financially accountable to us.

Apple, for example, is doing more and more to isolate personal data to spaces the individual controls and the company can’t see. Google and Facebook both seem to regard personal privacy as a bug in online life, rather than a feature of it. (Note that, at least for their most popular services, we pay those two companies nothing. We are mere consumers whose lives are sold to the company’s actual customers, which are advertisers.)

Bottom line: the legal slate is covered in chalk, but the technical one is close to clean. What do we want to write there?

We’ll be talking about this, and many other things, at VRM Day (6 April) and IIW (7-9 April) in the Computer History Museum in downtown Silicon Valley (101 & Shoreline, Mountain View).

The most important event, ever

IIW XX, the 20th IIW, comes at a critical inflection point in the history of VRM. If you’re looking for a point of leverage on the future of customer liberation, independence and empowerment, this is it. Wall Street-sized companies around the world are beginning to grok what Main Street ones have always known: customers aren’t just “targets” to be “acquired,” “managed,” “controlled” and “locked in.” In other words, Cluetrain was right when it said this, in 1999:

if you only have time for one clue this year, this is the one to get…

Now it is finally becoming clear that free customers are more valuable than captive ones: to themselves, to the companies they deal with, and to the marketplace.

But how, exactly? That’s what we’ll be working on at IIW, which runs from April 7 to 9 at the Computer History Museum, in the heart of Silicon Valley: the best venue ever created for a get-stuff-done unconference.

Focusing our work is a VRM maturity framework that gives every company, analyst and journalist a list of VRM competencies, and every VRM developer a context in which to show which of those competencies they provide, and how far along the maturity path they are. This will start paving the paths along which individuals, tool and service providers and corporate systems (e.g. CRM) can finally begin to fit their pieces together. It will also help legitimize VRM as a category. If you have a VRM or related company, now is the time to jump in and participate in the conversation. Literally. Here are some of the VRM topics and technology categories that we’ll be talking about, and placing in context in the VRM maturity framework:


© 2024 ProjectVRM
