
Binding Operational Directive 17-01


We’ve mentioned several times while discussing the development of the ARPANET, which we’ll see this week led to the Internet, that the security of the network itself, the security of the data passing over it, and the security of the hosts connected to the network were not front-of-mind concerns for the original designers. As we all experience today, this design decision has created numerous problems for a world now so dependent on the operation of what we all hope is a trustworthy Internet.

Trust. The news this week shows that trust is increasingly hard to come by.

First Equifax, a company whose purpose for being is to serve as a trusted source of all of our creditworthiness, admits that it could have prevented the catastrophic breach of its networked systems had it patched an Apache Struts web-application vulnerability sometime in the two months between the announcement of the vulnerability (and its corresponding patch) and the later intrusion using that vulnerability. Patch management has been a best practice since the mid-1990s, and I hope all of you run automatic software updates on your laptops and other personal networked devices.

And on September 13, 2017, the U.S. Department of Homeland Security (DHS) issued Binding Operational Directive 17-01, which questions the trust that many people, corporations, and governments put in “information security products, solutions, and services supplied directly or indirectly by AO Kaspersky Lab or related entities.” If we can’t trust the anti-virus and intrusion prevention software running on our networked machines, whom can we trust? As you may or may not know, in order for these programs to protect our systems they must have unencumbered access to practically everything on our systems, which is pretty much our entire lives these days. And to keep up-to-date in the never-ending battle against malware and malicious hackers, these trusted programs communicate regularly with a trusted server somewhere (typically run by the anti-virus and intrusion prevention manufacturer) on the Internet.

What’s the path forward in Binding Operational Directive 17-01? Two things that have me scratching my head.

On the one hand, the directive sets a timeframe for action: 30 days to identify what Kaspersky products are installed on government systems; 60 days to plan for their removal; and 90 days to complete the removal. Does that sound like Internet time to you? It sounds to me like this thinking was what got Equifax into trouble.

On the other hand, the U.S. government says that it is willing to consider a written argument from Kaspersky Lab that would address its fears. So far, Kaspersky has offered only a commercial argument as to why the U.S. government should trust Kaspersky products with its sensitive data. I hope that doesn’t convince anyone in DHS with a rudimentary understanding of networks and computer science. Then again, Kaspersky will have a hard time proving that their program won’t release any sensitive information on the system it is meant to protect. Doing that sounds a lot like trying to solve the Halting Problem. And do you even bother trying when the security program downloads new functionality as an integral part of its operation (i.e., to update itself so that it can protect against new threats)?
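To see why the Halting Problem analogy is more than a quip, here is a minimal sketch of the classic reduction argument. Everything in it is hypothetical: `never_leaks` stands in for a perfect analyzer that decides whether a program can ever transmit sensitive data, and `leak` stands in for the transmission itself. If such an analyzer existed, we could use it to decide whether an arbitrary program halts, which Turing proved no algorithm can do.

```python
def leak(data):
    """Stand-in for transmitting data to an outside server."""
    pass


def decide_halting(never_leaks, prog):
    """Given a (hypothetical) perfect leak detector, decide whether
    prog() halts. Since deciding halting is impossible, no perfect
    leak detector can exist either."""
    def wrapper():
        prog()            # runs forever exactly when prog() does
        leak("secrets")   # reached only if prog() halts
    # wrapper leaks if and only if prog halts, so a perfect detector
    # would answer the halting question for us:
    return not never_leaks(wrapper)
```

The upshot is that no tool can verify, in this absolute sense, that an anti-virus product will never exfiltrate data, which is why any assurance Kaspersky could offer would have to rest on audits, source inspection, and trust rather than proof.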

It will be interesting to see where this all goes. It’s hard to operate in this world without some level of trust.

My 10-year-old gets a cellphone


I’ve been thinking about the past this past week. Some of this urge to reflect on days gone by was, of course, a result of this week’s reading and discussion. The remaining impetus came from a discussion this weekend with my wife about getting our 10-year-old a cellphone.

I grew up in New Jersey in a town caught at the time between its rural, farming past and a suburbanite future, where cornfields one fall sprouted strip malls and elegant McMansions the next spring. For me, being 10 meant that I could walk every day to a school that my mother couldn’t see from our front door. Sweet independence. Recognition that I was growing up and didn’t need an adult watching over me every second of the day. And for me, it was the beginning of new rules: (1) get the hell out from underfoot; (2) make sure that my share of the daily chores was done before supper; and (3) wherever I went during daylight hours, come immediately when I heard my parents holler from the back door that it was suppertime.

The world has certainly changed. Yes, my wife and I are finding ways to give our 10-year-old more freedoms. One of us no longer has to watch his every move, but the purchase of a cellphone for him doesn’t represent a new way to holler from the back door that dinner is ready. It’s a recognition that his days are now more complicated. His after-school program is no longer in the same location as his elementary school. He can’t walk from home to school, or from school to his after-school program, and the coordination required among all the adults involved in this dance across the school year rivals the logistics of an Amazon distribution center. If something in this daily dance runs off the rails, he needs to be able to contact us. When I was his age, I had to have a dime in my pocket so that I could find and use a payphone. Payphones no longer exist except in a few quaint places, and the dime has become a contract for a $40-a-month cellphone plan.

In class this week, we talked briefly about interoperability, walled gardens, and what principles influenced the design of the ARPANET, which as we will see laid the foundation for the design of today’s Internet. In particular, robustness and responsiveness drove the earliest design decisions, and for my son’s cellphone, my wife and I certainly want his phone to work when he needs it and for him to respond when we call or text. We also realize that the latter desire has less to do with the Internet than with our son’s own decision making, but his decision making won’t be the main impediment if the Internet and the cellphone network aren’t themselves responsive!

However, what occupied most of our thinking this weekend was safety and security. If we get our son a cellphone, are we making him more or less safe? Despite the name, cellphones are not just a communication device that connects two people together through a cellular network. They are repositories of our personal information. They are portals to every part of today’s world, the good and the bad. Finally, they are ways for others to have unmediated access to our children, our children’s personal information, and their location.

My parents may have decided it was fine for me to gain some additional freedom, but they knew that the risks of that freedom were relatively small. I couldn’t go too far from my neighborhood (a couple of miles at best on my bike), and my parents knew the good and the bad that existed within that circle. If I started spending more time with some other adult in town, it was almost certain that they would hear about it. There are few secrets in a small town.

Today’s cellphones are, by default, open portals to the whole wide world, and at the same time devices that make it difficult for those sitting right next to you to know with whom you’re communicating. My parents, in letting me out of their sight, were fairly certain that they weren’t putting me into some other adult’s hands. I certainly worry that putting a cellphone in my son’s hands might mean putting him in some other unknown adult’s hands.

The answer seemed obvious to me: balance the good and the bad by locking down our son’s new cellphone to the extent possible. If you want to get more informed about such things yourself, you might check out this Wired article as a place to start.


Thank you


Wow. The class is already over. It seems like just yesterday that Jim and I first introduced ourselves to our students, and last week we said goodbye. Of course, we both hope it is not really goodbye, but the first of many different interactions that we will have with them over their time at Harvard and then later in life as alumni of our institution. But that’s probably getting ahead of things, since they’re still working to finish their first semester here!

For the last seminar, we had the students pick the topic, choose the readings, and lead the seminar. The topics that rose to the top of their minds were: the impact of the Internet on the election, in retrospective; and the way that the Internet is viewed in developing nations. We just scratched the surface of the latter topic, and all of us brought a lot of emotion to the former.

In my reading, I was most struck by Issie LaPowsky’s article titled “The 2016 Election Exposes the Very, Very Dark Side of Tech” where LaPowsky wrote:

A Buzzfeed analysis of partisan Facebook pages found that often, the more a page shares false or misleading information, the more viral its posts become.

In particular, the Buzzfeed article said:

The review of more than 1,000 posts from six large hyperpartisan Facebook pages selected from the right and from the left also found that the least accurate pages generated some of the highest numbers of shares, reactions, and comments on Facebook — far more than the three large mainstream political news pages analyzed for comparison.

What causes so many of us to share such pages? I’m far from an expert on human behavior, but it seems like we still have a lot to learn about what drives people to do what they do on the Internet, and what that behavior actually means. This reminded me of the work Jim and I did for our privacy class the first time we taught it many years ago. It surprised many in the early days of social media that people shared some of their most mundane information or activities, and some of their most personal ones. I still don’t understand why some of my friends post some of the things that they do, even though I’ve known them for decades. Or I thought I knew them well because I had known them for decades.

So, here I am ending my last blog post for this fall’s class with more questions than answers. Not surprising given the fact that Jim and I enjoy our classes most when we’re learning as much as the students. And we did learn a great deal from these students. To each of them, thank you. Thank you for your faith in a class with a title that we obviously weren’t going to be able to answer. Thank you for your engagement in the material. And thank you for your spirit, which made each and every day a gem.

Good luck, and stay in touch!

The real me?


This week we talked about online identities, the influence of social media on these identities, and what the authentic you really is. I wish I could say that Jim and I had planned everything that the seminar uncovered this week, but that’s not true. This is clearly a fast-moving space, where we have much more to learn and discover about ourselves and each other. I strongly encourage everyone in the class to read each other’s blogs this week. They are a fascinating read.

This weekend I was catching up on some pleasure reading and came across the Wired.com article titled “Snap’s Spectacles Are the Beginning of a Camera-First Future” by David Pierce. (Apparently, these are a hot thing to buy right now too.) The Wired article talks about video blogger Jesse Wellens and his first experience with Snap’s Spectacles. Connected to this week’s seminar, I was intrigued to read the following paragraph from the article:

A few days later, Wellens published his first vlog in a while, shot entirely in the 10-second, circular Spectacles format. He says it felt different from any other episode. Before, he says, “I would film myself and other people, but when there are cameras out, you always get a different reaction from other people.” But with Spectacles, “You’re getting a real, inside look into someone’s life. This is a way that you’re getting real raw emotions, and interactions.” He only had to make one alteration to get there: he stuck a round piece of electrical tape over the spot above his left eye, where Snap put a spinning circle of LEDs that indicates the wearer is taking video.

There’s that authentic thing again. Clearly Wellens feels that the personality we show in front of a camera is not the “real” us. So, the thing we do in front of a camera, which in this day and age we know will probably persist long after we’re gone, is not who we are, but just what we want the generations that follow to think about us? (Please imagine me shaking my head in confusion.) I have seen that some people become more reserved in front of a camera, while others become more gregarious and even outrageous. Are our unguarded moments more real? And how do we process the fact that these are 10-second moments placed on Snapchat, which promises us that they’ll be ephemeral glimpses of us shared with our friends? That wasn’t enough to get the “real raw emotions” that Wellens desired? I have to admit that I am nowhere near feeling like I have any understanding of this space and where it is going.

I want to share one other experience I had in the last two weeks. This related experience wasn’t in a new-technology setting, but in what I think of as an “old school” setting. In particular, I had to give a deposition in a legal matter, and this deposition included not only a whole raft of lawyers packed into a small room with a court stenographer, but also a court videographer. It’s hard to forget that the stenographer is there during your 7 hours of grilling, since that person sits right next to you and between you and the lawyer asking you questions. I suppose that location is best for the stenographer to hear both the lawyer’s questions and your answers. The videographer and her camera, however, sit at the other end of the room. You’re the only person shown in the video shot, as my lawyer explained to me. And interestingly, he said in preparing me for the deposition that I’d soon forget that the camera was there. As someone who dislikes being filmed, I had my doubts, but my lawyer was right. The video camera soon faded into the background (in a manner unlike my attention on the stenographer). Given that the purpose of a deposition is to find out what the witness knows and preserve it, I find it interesting that the legal system doesn’t seem to feel that it needs to “place electrical tape” over the fact that the witness is being videotaped to get “real raw emotions and interactions.”

There is so much we still don’t understand.

Fake News and Our Responsibilities


This week we talked about cyber war, cyber conflict, and cyber crime. While definitions might remain in flux, it’s still pretty easy to tell when you’ve been ripped off through cyber crime or attacked in an online manner. I’d like to focus here on what we’ve learned is harder to understand: When have I been fed fake news? In the aftermath of our country’s recent presidential election, many are asking if the citizens of the United States were too lax about “fake news” being distributed to us through our social networks and especially around our comfort in getting our “news” from Twitter and Facebook.

With calls from many corners for Facebook to fix the problem of fake news, Mark Zuckerberg recently posted his thoughts on how Facebook might help combat misinformation. I agree with Mr. Zuckerberg that this is a hard problem and I was glad to see him say that he doesn’t want Facebook “to be arbiters of truth [themselves],” but I was not impressed with the ideas he threw out. Then again, I wasn’t surprised since Facebook believes that “[t]he goal of News Feed is to connect people to the stories that matter most to them.” If you start with the goal of making people happy and not with the goal of presenting what the person should know about what’s going on in our nation (or the world), you’re not going to be too interested in addressing fake news.

Perhaps we should try to agree what the problem actually is. I personally like Stephen Colbert’s comment about fake news. In a recent event with his pal, John Oliver, reported by the NYT, Colbert said, “What we did was fake news. We got on TV and we said: ‘This is all going to be fake. We’re going to make fun of news.’” Colbert went on to say, “The fact that they call this stuff fake news upsets me, because this is just lying.”

The media calls it “fake news.” Zuckerberg calls it “misinformation.” Colbert calls it “lying.” The truth is that what we decide to post on our news feeds and what Facebook decides to distribute to our news feeds is just free speech. The problem starts when we choose to believe that our Facebook news feed or our Twitter feed is all that we need to know.

Criminals are out there trying to rip us off. Terrorists and agents of enemy states are out there trying to disrupt our way of life. We need to remember that democracies function when their citizens take it upon themselves to be informed. We have a free press because the founders of our country didn’t trust the government to feed us the truth. If we didn’t want to trust our government to feed us the truth, why do we now trust our social media feeds to provide us with everything we need to know? I don’t think it is solely Mr. Zuckerberg’s job to police our news feeds. It is our job as citizens to seek the truth in what we get through our social media. It won’t be easy, but neither is preserving our democracy for our children.

Technology and government and Waldo


Before you read the paragraphs following this first one, please first click over and read Jim’s post on Technology and government. I want to add to his thoughts.

Ok, you’re back. Perhaps unsurprisingly, since Jim and I like to teach together, we have similar beliefs about running successful, collaborative projects. Co-teaching is a collaborative project as are the large technical/software projects that Jim describes. In my corporate life, I certainly experienced what Jim talks about as the process-focused projects, and I didn’t enjoy them as a member of those technical teams. This was not because I felt that I was just a cog in the machine, but because I believe process should be in service of the project’s goal. As a software developer on a large technical project, I knew I had a job to do, and a good process made my job easier and my good work more impactful to the project overall. Yet, it was when the process (or the latest software technology) became more important to our daily discussions than the project’s goal that I became worried.

Jim talks about his Magnificent Seven approach — great movie, by the way. We saw this approach used in the development of the ARPANET and the BBN IMP. People matter, and the best projects result from a small team that believes they’re responsible for delivering the best solution to the problem at hand. You want good people with uniquely appropriate skill sets, and you want them to care about the result. And you need a management team that listens to this team. The technical team can’t make every decision (i.e., run without oversight or constraints), but it knows things that management doesn’t.

Even more important, the intended user base “knows” things that neither management nor the technical team does. In the (successful) software projects I ran, we spent a lot of time gathering feedback from the user base and incorporating that feedback into our design and system documentation. Successful software projects aren’t imagined, they evolve. Jim mentioned “soft launches” in our discussion. You can gather all of the use-case information you can possibly gather by talking to potential users, but you quickly learn that users don’t actually know what they want. Things often look very different when you put something concrete in front of them. Telling the user that that’s what they asked you to build doesn’t matter one bit to them when they say they don’t like what they see.

So back to eGovernment. As a dean in academia, I have had to fight the natural inclination of the faculty to debate a topic to death and then craft and pass the “perfect” motion. It’s a lot of work to get the faculty to a point where they want to vote on a resolution. Faculty are trained to be critical; many move up in prestige by writing criticism. We’re good at arguing. Unfortunately, this is not the same as being good at delivering what is needed. Our first attempt at our current Gen Ed program is one example. We needed a real 5-year review because we didn’t get it right the first time. (Taking your 5-year review seriously is better than not having one at all, but it is not equivalent to a soft-launch approach.) My worry is that government is more like academia than successful technology companies. Lots of arguing and then one big vote. Lots of requirement writing, then a big software development effort, then one big launch. The result has to be right because legislators argued for many hours. Sorry, it is right only when users can successfully use it.

Two paragraphs back, I talked about incorporating user feedback into our designs and system documentation. This is something that I can’t emphasize enough. If you’re going to iterate your design based on constantly gathered user input, and you’re working on a large, multi-year collaborative project, you’d better make sure the lessons learned are documented somewhere. And the lesson must include the context in which the lesson was valid. New software engineers and new managers need to know the lessons of the past if they want to avoid repeating them in the future. Process in service of the project’s goal.

Selfie Voting


This past Monday we talked about Internet voting. We didn’t talk much about the actual proposals for Internet voting and instead talked about how paper voting (or voting machines) compared with the aspirations for Internet voting. We also talked about how social media companies have experimented with their ability to influence voter turnout. As such, I thought I’d use this post to point you toward some actual Internet voting proposals. I’ll then end with a thought that came to me this morning while watching a news segment on selfies in the voting booth.

When I was an undergraduate, I met Andy Neff, a brilliant graduate student. Andy always impressed me as someone who could solve any problem, no matter how complicated. We lost track of each other during the late 1980s and early 1990s, and then we reconnected when I got interested in Internet voting schemes as part of Harvard’s Center for Research on Computation and Society. Andy had taken an interest in cryptographic techniques and their application to voting protocols, and he had become the Chief Technology Officer at a company called VoteHere. If there was anyone who could wrestle problems of Internet voting to the mat, I thought, it would be Andy.

While Andy and others made fantastic progress, even he will tell you that the world was not ready for Internet voting in the year 2005. To his credit, his work was one of several efforts that produced systems provably superior to the Direct Recording Electronic (DRE) voting machines then in use in many polling places around the U.S. Unfortunately, he couldn’t solve every problem that could occur, and any failure mode was enough to keep the world from moving to something that looked very different from the paper system. An excellent paper describing the issues that remained was written by David Wagner and his colleagues at U.C. Berkeley in 2005.

VoteHere doesn’t exist anymore, but if you want to dive into a system for verifiable online elections, I suggest you investigate Helios. This is a very popular voting system, but do note the answer to the last question in their FAQ.

This leaves us with the news segment about people taking selfies in voting booths that I saw on CBS This Morning. In some states, like Tennessee where singer Justin Timberlake took and posted a selfie of himself voting, it is illegal to take pictures in the voting booth. This law enforces the rule that, in most states, elections are held by secret ballot. Ballots are anonymous and by marking your ballot in secret, it becomes much harder for someone to intimidate you and influence your vote, or for someone to buy your vote. If you vote in secret and no ballot carries your name, how can someone who is trying to intimidate you or buy your vote know how you voted? Obviously, if you take a selfie of yourself with your completed ballot, this completely destroys the secret nature of the election.

I was further amazed when Charlie, Gayle, and Norah announced after the segment was complete that they didn’t see why it wasn’t perfectly fine to take a selfie in the voting booth. Have we gotten so far beyond the days of blatant voter intimidation and vote buying that no one feels that they need to vote in secret?

Let’s assume, for a moment, that the answer to this question is yes. How might we build a new voting system that takes advantage of the selfie movement? What if we all just took selfies of our completed ballots and submitted those “votes” to a server overseen by observers from both parties? The observers would verify the count and could spot-check that the vote recorded for each of us was the vote we posted on our social media page. I’ve spent only a few minutes thinking about how you might do this, and so I’m sure I’ve forgotten some attack vectors. Still, what rules do we follow today for paper voting that no one ever seems to question? To be clear, I’m not saying every vote should be recorded this way, since some people may not want their vote recorded this way or may not have a smartphone or social media account. I’m simply asking whether this might work for some (apparently non-trivial, given the popularity of selfies in the voting booth) portion of our population.
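To make the spot-check step concrete, here is a toy sketch of one way an observer could verify that the server recorded what a voter publicly posted. It borrows the hash-commitment idea used in verifiable-voting systems like Helios; all function names here are my own invention, not any real voting system’s API. The voter publishes a digest of their ballot (say, alongside the selfie) and keeps a random nonce; an observer can later confirm the server’s recorded vote matches that public post, and the post itself reveals the ballot only if the voter chooses to reveal the nonce.

```python
import hashlib
import secrets


def post_commitment(ballot: str) -> tuple[str, str]:
    """Voter side: hash the ballot together with a random nonce.
    The digest is what gets posted publicly; the nonce is kept by
    the voter to enable later spot-checks."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{ballot}:{nonce}".encode()).hexdigest()
    return digest, nonce


def spot_check(recorded_ballot: str, nonce: str, public_digest: str) -> bool:
    """Observer side: confirm that the vote the server recorded
    matches the voter's public commitment."""
    digest = hashlib.sha256(f"{recorded_ballot}:{nonce}".encode()).hexdigest()
    return digest == public_digest
```

A mismatch between the recorded ballot and the public digest would expose a server that altered a vote. Of course, this sketch addresses only the verification step; it does nothing about the coercion and vote-buying problems that ballot secrecy was designed to prevent, which is exactly the tension raised above.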

AI is here


I’ve fallen behind in my posts and today I’m going to try to write two. This first one deals with our seminar last week on “AI, the Internet, and the Intelligence Singularity — will the machines need us?” We spent quite a bit of our time together discussing the AI singularity, but I’m going to focus here on the current rapid pace of change in artificial intelligence. It’s fun to imagine what it might be like to interact with a clearly intelligent machine, but as you can see from the students’ blog posts, it is really hard to come to consensus on what each of us would characterize as a clearly intelligent machine. And without consensus, our minds just run wild in talking about what The Singularity — whether a point in time or a process over time — would look like.

With less imagination, what fascinates me is the practical advance of artificial intelligence in our daily lives. I have lived through several cycles of hype around how artificial intelligence would radically change our daily lives, and for the first time, it feels like it is finally happening. Siri was fun when it first came out, but it didn’t change my life and I never really used it. But about a month ago, my family got an Amazon Echo, and Alexa has changed our lives. While it is not perfect, we use it constantly. As a childhood fan of Star Trek (the original series), I feel like I have what Captain Kirk had when talking to the Starship Enterprise’s computer. Wow!

Outside the home, I’m astounded by the rapid adoption of self-driving technology. As someone who still drives a stick shift, I can’t say that I’ll be an early adopter, but I can’t deny that broad adoption of the technology is coming. And coming soon. In the New York Times today, the most-emailed article is titled, “Self-Driving Truck’s First Mission: A 120-Mile Beer Run.” Perhaps it’s the reference to beer, but I would bet that this just shows how interested the general public is in self-driving technology. This particular technology comes out of Otto, a company founded by researchers from Google’s multi-year effort in autonomous vehicles and now owned by Uber. Self-driving trucks are not just a research idea. They’re a business plan for Uber.

And the government has noticed too. About a month ago, the Times wrote an article titled, “Self-Driving Cars Gain Powerful Ally: The Government.” This is an important first step toward a future where our policies, regulations, and laws begin to catch up with the changes that technology is making on the nation’s highways and roads. It will be interesting to watch the battle between oversight and overregulation. And it’s good to see an early push to consider issues of safety, security, and privacy. Too often these issues have been left as an afterthought.

As we talked about in our seminar, self-driving cars are not just a technological challenge. In designing and coding for these cars, software engineers are making ethical decisions. How do you write the code that decides between an outcome that causes a car to swerve and hit a pedestrian and another that causes the car to swerve and injure the passenger? What software engineering practices do you put in place for situations that the artificial intelligence might encounter that are not as obvious to the designer as the example I just mentioned? Working at a college focused on a liberal arts and sciences approach to education, I believe we need the humanities to be as strong as — and interacting with — our engineering programs as we enter this world of ubiquitous artificial intelligence.

Is it hot in here? Not yet


This week in class we talked about the Internet of Things (IoT), and I opened up our discussion by asking what kinds of devices the students had in their homes that connected to the Internet. I meant non-computer, non-tablet, non-cell-phone-like devices. Things that you’d look at and think, “That’s a home appliance, not a computing or network device.”

To my surprise, no one spoke up. In fact, I got a lot of blank stares.

Let’s contrast this with our discussion the previous week, where we talked about how the Internet had changed business. How did the students get their music? Off the Internet, even when driving. I, in contrast, still listen to FM radio. Had they used ride-hailing apps? Oh yes. We had reviewed statistics showing that users of ride-sharing services are predominantly younger than 45. Ok, I’m a bit older than that, and I also still prefer to use public transportation to get around Boston (i.e., buses and the T).

The IoT, it appears, is mostly invisible to our younger generation. Why, I began to wonder, do I want to control my thermostat over the Internet, but I’m perfectly happy to walk 0.8 miles to wait in the T station for the next train to arrive? Why do my kids expect Lyft to show up instantaneously and outside our front door, but probably can’t find the thermostat inside that same house?

At first I thought the answer might be related to our age and place in life. I’ve used public transportation for many years, and putting aside the recent failures of the T to run when we really need it, public transportation has served me just fine. Why do I need to change? But wait, my thermostat has worked perfectly well and very reliably for years too. Why would I want to replace it with one that might have software bugs? This line of reasoning didn’t seem to get me anywhere.

When you’re out of ideas what do you do? Yup, I typed “Who buys Nest products” into Google. I had hoped that this search would provide me with demographic information on those consumers buying Nest products, but other than the top hits directing me to Nest’s homepage, its online store page, and its “Where to buy” page, I was directed to pages discussing Google’s original acquisition of Nest and a couple of pages reviewing the performance of the acquisition two years later. While scanning these articles, it hit me: Nest sells gadgets.

Obvious, I know, but think about it. I grew up in a hardware-dominated generation. Yes, software existed, but the hardware mattered more. Today’s students have grown up in a software world. It’s an app that gets Lyft to show up. Yes, there’s an app for the Nest Thermostat, but you have to buy the thermostat to ever have any interest in getting the app.

It was also interesting to read the contrast between Google buying Nest for $3.2B in 2014 and Apple’s acquisition of Beats for $3B also in 2014. Both are hardware companies, but Beats sells accessories for your Internet-connected phone. Nest sells home appliances that connect to the Internet and simply use the phone for remotely controlling the device. Today, Beats is booming and Nest is viewed as an acquisition failure for Google.

The bottom line here is that I, with my hacked together “Internet-connected home,” am probably not a good example of the immediate future for IoT-focused, consumer companies. The only people I know with these devices in their homes are those who have received birthday gifts from me lately (and employed me as their IT help). The Wired article by Daniel Burrus that we read for this week’s class said, “[w]hen we start making things intelligent, it’s going to be a major engine for creating new products and new services.” A lot of things are getting smarter, but it feels like we’re a ways off from the kind of consumer success we’re seeing in the app world.

One Click Only


Tim Berners-Lee, like Sean Connery playing Captain Ramius in The Hunt for Red October, is asking the web’s governing body to fix the craziness that exists on the Internet around how we buy things. If anyone can get the web to listen, it would be the person who invented it.

Nathaniel Popper of the New York Times wrote an interesting article on this topic this week. You can find the story here. For those of you who read my previous post, this effort might sound quite a bit like the consortium work around the OSI networking standard. Popper writes that the W3C “has brought together the giants of the internet” to create “a new global standard for online payments.” Alternate solutions exist and are already in widespread use. Sound familiar? How do you think this will play out? Will this standardization effort succeed? If so, why? If not, what parts of history will repeat themselves?

I will admit that I’m not much of a shopper, online or in stores. I feel the urge to do too much comparison shopping before I buy anything, and when I get to the point of making a purchase, I don’t mind that it takes me more than a single click. Buying something and parting with money should take time, or at least that was how I was raised. Ever since I first heard of Amazon 1-click, I’ve always felt that this was something that advantaged the seller more than the buyer. Again, what do you think? Is “tap, tap, buy” good for our society?

On the other hand, we could use better security around web purchases and payments. Shopping online isn’t going away. In fact, it’s growing. The W3C and the members of this group aren’t wrong that more standardization and careful thought around the security issues would benefit us all. My last question for you: how do we get from here to there?

If you want to learn more about this particular standardization effort, you might visit “Web Payments at W3C.”
