Cyber war, cyber crime, and jurisdiction

It’s an odd thing about ‘cyber’ as a prefix– with the exception of cyberspace, it almost always means something bad. We have cyber-crime, cyber-war, cyber-bullying, but never cyber-puppies or cyber-joy. And most of the people working in technology don’t use the term at all. But it is a big thing in government and policy circles.

We had a great discussion in the seminar this week with Michael Sulmeyer about cyber war. The subject is complicated by the difficulty of distinguishing between cyber war, cyber crime, and cyber espionage. There are rules about war, but they were developed for the kind of conflict that occurs in physical space. The rules for conflict in the digital world are not well understood. And the notion that the two spheres of conflict will remain distinct is something that few believe. We have already seen attacks that cross from the digital world into the physical one, but there is little understanding of how such an escalation would work. What are the rules, and what can be expected from adversaries? Without some notion of reasonable escalation, it is hard to tell where any attack will end.

One worry that I have is that the pace of change in technology is so much faster than the pace of change in the policy and legal worlds. Getting countries to talk to each other about the rules of cyber engagement takes years, and reaching an agreement takes even longer. By the time treaties can be written and agreed upon about some aspect of technology, the technology has changed so much that the agreements are irrelevant. How to bring these time scales into sync is a difficult problem.

But I think a larger problem is getting the right set of players into the discussion. Most countries think that discussions about trans-national conflict need to take place between countries, which is reasonable in the physical world. But when we talk about the cyber world, just having the various countries at the table misses a major set of actors: the technology companies that are building and shipping the technology that makes up the cyber world. As was pointed out in our reading by Egloff, we now live in a world where the major players include corporations, much as they did during the age of exploration. Keeping these players out of the discussion means that major forces are not represented. Companies like Google or Apple may be based in a single country, but their interests cannot be fully represented by their home government. They are powers themselves, and need to be represented as such.

It may seem strange to think of the tech giants in this way, but no more so than seeing the influence of the East India Company or the Hudson's Bay Company during the age of exploration. It took a couple of hundred years to work out the law of the sea; I hope we can do better with cyberspace.

Governing the ungovernable

Many thanks to Jonathan Zittrain for joining us this last week to talk about Internet governance. JZ is always thought-provoking, entertaining, and leaves you thinking about things you haven’t thought about before. I feel lucky to count him as a friend and colleague.

Talking about the Internet Engineering Task Force (IETF) is always fun and interesting. The IETF is governance as done by the geeks; it doesn't really exist (at least legally), it has participants rather than members, and those participants (even when they work for and are supported by a company or government) are expected to represent themselves, not their employers. It is a technocracy, running on the principles of rough consensus and running code. In many ways, it is just a somewhat larger version of the groups of graduate students who got together when their advisors told them to write the code for the original ARPAnet. But it is hard to argue with the power of what they produced, even if you can't understand how they could have done it.

The other aspect of the IETF that tends to confuse people steeped in governance is its enforcement mechanism. Passing laws or creating standards isn't much good if there is no way to encourage or force others to follow them; a law without police to enforce it doesn't do much.

But here the IETF is different, as well. It has no enforcement power. If you don’t implement a standard that the IETF publishes as an RFC, no one will challenge you. There are no fines to pay, and no one goes to jail. Nothing happens.

Except, of course, that you can't communicate with any of the computers that do implement the IETF standard. Nothing says that a computer has to speak TCP/IP, and nothing happens if it doesn't, including getting any traffic from the computers that do follow the standard.

In fact, there are lots of IETF RFCs that have never been implemented. There is even a group of them that are famous (well, famous in the IETF community) for being April Fools' jokes. Some of my favorites are RFC 3092, an etymology of the term "foo", and the standard for electricity over IP (RFC 3251). Not all RFCs are taken seriously, even those that their proposers meant to be.

But the core RFCs define the interoperability of the Internet, and as such they become self-enforcing. You don’t have to follow them, but if you don’t you are shut out of the community. And if you want to replace them, you need to get others to not only agree to the replacement, but get them to do so simultaneously with everyone else. Which is pretty much impossible. So the best approach is to simply go along with what everyone else is doing, and follow the standard.
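To make that self-enforcement concrete, here is a minimal sketch in Python (the port number and messages are invented for illustration): two endpoints can exchange data only because both use the standard TCP interface, and nothing but that shared protocol connects them.

```python
# Interoperability as enforcement: two endpoints that both speak TCP can
# exchange data; one that doesn't is simply left out of the conversation.
# Port 9099 is an arbitrary choice.
import socket
import threading

ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 9099))
        s.listen(1)
        ready.set()                      # tell the client we are listening
        conn, _ = s.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"got: " + data)

threading.Thread(target=server, daemon=True).start()
ready.wait()

# The client "follows the standard" simply because that is the only way
# to be heard; nothing else enforces it.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect(("127.0.0.1", 9099))
    c.sendall(b"hello")
    print(c.recv(1024))                  # prints b'got: hello'
```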

This is much of the reason that groups like the ITU, or various parts of the United Nations, which would dearly love to have control over the Internet, can't quite figure out how to take that control. They might declare that they own the standards (in fact, they have). They can insist that everyone change over to their standards (they have done this as well). But they can't make it worth anyone's while to make the change, so they have no enforcement mechanism.

It’s enough to make a bureaucrat cry. Which is enough to make the geeks smile, and continue…

Who do you trust?

Our seminar this week was billed as talking about voting and the Internet, but rather rapidly changed into a discussion of fake news, polling, and how to determine what is true. Another technology class going philosophical in front of our eyes. Towards the end of our discussion, we went around the table to say who it is that each of us trusted, and the answers were both interesting and revealing. Trust in parents was phrased in the past tense. Most institutions were not considered particularly trustworthy. Most often mentioned were crowd-sourced sites like Reddit, Wikipedia, and Quora. A news magazine like the Atlantic came up occasionally, but I found the lack of trust in any sort of expert-based or curated site interesting.

Trust in experts seems to be at an all-time low, or maybe it is simply that we don't recognize experts in some fields. With the advent of the Internet, everyone believes he or she can be a journalist, even though professional journalists go through a lot of training on how to ensure that they have multiple sources, how to balance the public's right to know against the safety of releasing information, and the like. One of the real differences I see between the leaking of the Department of Defense and State Department information by Chelsea Manning and the leak of NSA information by Edward Snowden is that Manning gave the information to WikiLeaks (which then released everything) while Snowden gave the information to a team of journalists (who decided what should be released and what should be held back, balancing the right to know with the damage the information could do). One can argue that this is not a material difference, but in the Snowden case there was a reliance on trained expertise that was missing in the Manning case.

There have certainly been times when the experts have made huge mistakes. The Vietnam War has often been blamed on the hubris and self-deception of the "best and brightest" around Robert McNamara. Reports of weapons of mass destruction in Iraq that came from intelligence experts (although, it should be noted, other experts disagreed) led to another war, the consequences of which we are still seeing (and paying for). Just because someone is an expert doesn't mean that they are always right.

But we seem to have come to a point where the possibility of being wrong is confused with the certainty that someone must be wrong, or at least so prejudiced that their conclusions can't be trusted. The press is either liberal-leaning or conservative-leaning, so neither side can be trusted. Many people seem to take E.B. White's stance that "All writing slants the way a writer leans, and no man is born perpendicular."

People tend to forget the rest of what White was saying: the full sentence is "All writing slants the way a writer leans, and no man is born perpendicular, but many men are born upright." Just because someone has a point of view doesn't mean that they are wrong, or somehow dishonest, or manipulative. Trust, as Ronald Reagan said, may require verification. But imperfection doesn't mean that trust is impossible. I certainly believe that the New York Times has a point of view, but I don't believe that it affects the truth of what they report. I often see the same reporting in the Wall Street Journal (on the news pages), a publication with a very different point of view. This leads me to trust both (for the news), as opposed to, say, Fox News or the National Enquirer, where I find the stories much more difficult to independently substantiate.

The push towards crowd-sourcing of knowledge and away from trust in expertise appears to rest on the assumption that the prejudices and distortions of a large populace will be evenly spread around the truth, so using the wisdom of the crowd will cancel out the individual prejudices. But I find little or no evidence that this is generally true, in spite of the nice democratic flavor of such a stance. Around matters of technology, I find that there are people who simply know more than others, and are better able to solve certain problems. I trust climate scientists more than, say, Senators on the subject of climate change. This is a form of elitism, but one that I’m willing to live with. It doesn’t mean that these people know about everything, or even that they are right in everything they say about their particular subject. But they are more likely to be right than someone randomly picked.

It is tempting, given the difficulty of having to think about what is really true, to take the stance that nothing can be trusted and it is all relative. Unfortunately, the universe doesn’t really care if we believe the facts or not; the facts are as they are. Disbelieving experts can lead to rather bad outcomes; thinking that there is no difference between truth and lies (or mis-statements) can lead to other bad outcomes (as we are seeing). Finding at least the best approximation of the truth can be difficult, but not doing that work is worse.

Empire and Innovation

I’m late in posting again this week, but this time I have a reason. Our discussion last time (and many thanks to David Eaves for visiting and leading the session) was about the interaction of the Internet and government. By coincidence, I had agreed to go to Washington, D.C. on Friday of last week to give a talk to interested Senate staffers about cyber security. So I thought I’d wait until after the trip to see if going to the seat of the federal government would trigger any thoughts.

The trip was fascinating. I had been asked to give a talk that was the conclusion of a month-long series of talks and panel sessions, organized by the Sergeant at Arms of the Senate, on cyber security and social media. The Sergeant at Arms is, essentially, the head administrator of the Senate, running all of the departments and groups that allow the Senate to do its work. My audience was made up of members of these administrative units, along with staff members for the Senators themselves. There were about a dozen people in the room, but the talk was also broadcast on the Senate's own version of CNN, both within the Senate office buildings (there are many) and to the field offices of the Senators.

The room where I gave my talk was one of the (many) Senate hearing rooms. It was impressive, even by Harvard standards: a beautifully painted ceiling (with a zodiac motif), huge windows, lots of wood and carvings, and a raised area with a table and chair for the Senators (blocked off so no one would enter the space). After the talk I got a great tour of the Capitol itself, one-on-one with a staff member of the computer security group, which let me go all kinds of places that are generally not open to the public. The size of the place, the scale of the rooms, and the history recalled were all pretty awe-inspiring and a bit overwhelming.

The only places I have visited that compare are the Colosseum in Rome, the Doge's Palace in Venice, and St. Peter's in the Vatican. All monuments to their empires, and all built at the height of those empires' power.

But as I was feeling the awe (and pride) caused by seeing the Capitol, I couldn’t help but think of the places I knew out in Silicon Valley, in the New York tech scene, or in the tech companies around Boston. None of them were as beautiful and impressive as what I was seeing. But there was also a sense of ponderousness, of self-satisfaction, and of slow moving deliberation in the halls of the Senate that contrasted sharply with the feeling of excitement, experimentation, and energy that I remember from the tech companies.

All of which makes me wonder about the interaction between the world of government and the world of technology. We talked some in the seminar about how technology can improve the delivery of services by government, but often that is just government adopting technology that has been used in the rest of the world, like reasonably designed web sites and APIs that allow access to information in a way that enables others to write useful applications. This may be new and different in the world of government, but it has been the norm everywhere else for a decade or more.

David's stated worry was that government could use technology to impose a surveillance state and become far more controlling than anything thought of by Orwell. We have seen some things (like the Snowden revelations) that might back this view up, but so far I haven't seen evidence that government agencies can move quickly enough or competently enough to really carry this off. Nor do I think that the government believes it has to; the environments in which those running the government work, like the Senate, are designed to make them feel that they are already masters of the world. Why would they need to do something different?

I have a very different worry: that the tech companies will move so fast in comparison to what happens in government that they make the Senate and the rest of government pretty much irrelevant, at least on a day-to-day basis. Yes, we will need the government bodies that deal with defense and foreign affairs to continue dealing with those subjects, but our everyday lives will be molded and controlled by corporations that move so fast and change so rapidly that the government bodies that are supposed to regulate those companies, protect us from their abuse, and ensure that they don't overstep the bounds of law and ethics are simply left behind. It has taken a year for the federal government to even start investigating what role tech companies like Google, Facebook, and Twitter played in the last election. How quickly would a tech company go out of business if it took a year to react to something like that?

I’m generally an optimist, and I don’t think that tech companies (or any other kinds of companies, for that matter) are actively evil. But they are currently pretty much unanswerable to the public, and this is beginning to worry that same public. We need to find some way of addressing these issues, but it won’t be by slowing down tech so that it matches the pace of government. The time scales are too different, and the incentives are too far out of alignment. We need a new approach to these problems, one that combines speed with responsibility. Our technologists need to think beyond the technology to the effects of that technology, and our legislators and regulators need to learn to understand both where technology is and where it is going. I don’t see an easy answer, but this is a problem we need to solve.

Projection and Confession

I increasingly find myself in the middle of discussions about what the machines are going to do when they become smarter than we are. This worry has fueled science fiction for as long as there has been science fiction (I have an anthology from the 1940s and early 1950s where the theme shows up). But the conversation has taken on a new immediacy with the deep learning advances of the past couple of years. Machines now play Go at the highest level, machine vision is getting much better, and there seem to be new breakthroughs all the time. So it's just a matter of time, right?

I’m not so sure.

My first area of skepticism is whether, as the AIs get better and better at what they do, they come closer and closer to being sentient or thinking. Computers play chess in a very different way than people play chess, and I suspect that the new silicon Go champions are not approaching the game the way their human counterparts do. I'm always reminded of Dijkstra's remark that asking whether a computer can think is about as interesting as asking whether a submarine can swim. Submarines certainly move through the water well, but it isn't what I would call swimming. And while computers can do lots of interesting tasks, I'm not sure it makes sense to say that they think.

Saying that they think projects our own model of what it takes to solve a problem or do a task onto (or into) the machines. We build these kinds of models, anthropomorphizing non-human things, all the time; we talk about how our computers are out to get us, or about the personality of our cars. Of course, we also project our internal life onto other people, where all we have as evidence is that they act and react as we do. But we share a lot more with other humans (evolution, biology, and the like), which makes that projection seem a bit more reasonable (although not provable, as the problem of other minds is still active in philosophy).

So I tend to be a bit skeptical of the claim that, because machines can do the things that we do, they are therefore able to think and be conscious in the same way we do.

But even if I were willing to grant that, at some point of complexity and with the ability to learn, the computers of the future will become sentient and self-aware, I'm not sure that the worries of many who talk about the singularity are warranted. My skepticism here is about the unstated assumption that if the machines become sentient, they will also behave the way people behave. The worriers seem to jump from the singularity to the conclusion that the new, super-intelligent machines will keep us as pets, or try to destroy us, or set themselves up as gods, or simply not need us and treat us as ants.

Maybe this too is projection: if the machines are sentient the way we are, they will act the way we do. I tend to see this as more a case of confession on the part of the people doing the worrying; it tells us what they would do if they were the super-intelligent beings. But the motivations of humans are much more complex than what sentience or even intelligence dictates. We are still wired with desires for food, reproduction, and all sorts of other things that have nothing to do with learning or being intelligent (if you think human behavior is driven by intelligence, you haven't been paying attention).

So I'm not at all sure that machines will ever be intelligent or sentient, but if they become so, I'm even less sure I know what will drive their actions. A super-intelligent machine might decide to go all Skynet on us, but I think it is just as likely to ignore us completely. And just as we don't understand how many of the current machine algorithms actually work, we might not understand much about a super-intelligent machine. Because of this, on my list of worries, the singularity doesn't make the cut…

Prematurely right…

Part of our reading for the discussion of the Internet of Things this last week was a selection from the National Academies study Embedded Everywhere. I was part of the study group, which published the book back in 2001, after a couple of years of work. It's interesting to look back at something like this: we got so much of the big picture right that I'm surprised, especially since we got almost all of the details wrong.

What we got right was how ubiquitous networked sensors were going to be. We saw them being used to help with traffic congestion, agriculture, climate science, seismology, and a host of other things. All of which is being done.

What we got wrong was how all this would happen. We talked about "smart dust," a notion that miniaturized sensors, with low-power networking and long-lasting batteries, would be sprinkled around the environment so that the data they collected could be fed back to banks of servers. There was even a project at UC Berkeley that produced what they called motes, which were seen as a first step along this path. Somewhere in my office I think I still have one or two of them hanging around. But it turned out that the motes never got as small as we had hoped. Batteries didn't get better as fast as we thought they might. And calibration of large numbers of sensors turns out to be an amazingly difficult problem to solve unless you have some way to get a human to deal with the individual sensors.
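For a sense of what per-sensor calibration involves, here is a toy sketch (the readings and reference values are invented): recovering each sensor's gain and offset requires trusted reference measurements, which in practice means a person with a trusted instrument standing next to each device, exactly what doesn't scale to thousands of scattered motes.

```python
# Toy two-point calibration: recover a sensor's gain and offset from two
# reference measurements. The references have to come from somewhere --
# typically a human with a trusted instrument at the sensor itself.

def calibrate(raw_lo, raw_hi, true_lo, true_hi):
    """Return (gain, offset) such that true = gain * raw + offset."""
    gain = (true_hi - true_lo) / (raw_hi - raw_lo)
    offset = true_lo - gain * raw_lo
    return gain, offset

# Hypothetical sensor: reads 102 at a true 0.0 C and 487 at a true 40.0 C.
gain, offset = calibrate(102, 487, 0.0, 40.0)
raw_reading = 300
print(f"{gain * raw_reading + offset:.1f} C")   # ~20.6 C
```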

Instead, all this happened through cell phones. There were cell phones back when we wrote the study, but they were used as, well, phones. They didn’t have GPS embedded. They didn’t include accelerometers, or other forms of sensing. The networks they connected to were cell networks, optimized for voice and pretty bad (and limited) when transferring data. They were interesting and useful devices, but they weren’t the kinds of sensor platforms we were envisioning.

Like the early users of the ARPAnet and the Internet, we didn't see what Moore's Law and the acceleration of network technology were going to do to our little phones. Within a few years Apple introduced the iPhone, which was really a small networked computer with enough sensors to make things interesting. Restrictions on the accuracy of civilian GPS were lifted just before the study was published, but we had no idea of the size of the impact that would have. As sensors, cameras, and microphones became smaller and better, they got pushed into the phones. The networks of the cell phone system got better and better, both in bandwidth and reliability. Calibration ceased to be a problem, since each sensor was paired with a person who could help with the calibration. Soon all of the data we had hypothesized being sent to clusters of servers was being gathered and sent. Just by a different set of technologies than we had been able to imagine.

The connection of people to the networked sensors caused other good things to happen as well. People could get involved in projects that we originally thought would be machine-only. There were community government projects that let people report potholes (automatically, based on the bumps their cars encountered) and damaged sidewalks (pedestrians could take a GPS-tagged picture of the sidewalk and send it to the local government), and that monitored air pollution or how well transit systems were keeping to their published schedules (which have since given way to real-time schedules that tell you where the buses or trains actually are, not where they are expected to be).
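As a sketch of how the pothole reporting might work (the threshold, sample format, and coordinates are all invented for illustration), the phone just watches for spikes in vertical acceleration and tags them with the latest GPS fix:

```python
# Sketch of crowd-sourced pothole detection: flag a report when the
# vertical acceleration jumps well past gravity, tagged with the GPS fix
# taken at the same moment. Threshold and data are illustrative only.
G = 9.8                  # gravity, m/s^2
BUMP_THRESHOLD = 18.0    # invented cutoff for "that was a real jolt"

def detect_potholes(samples):
    """samples: iterable of (timestamp, z_accel_m_s2, (lat, lon)) tuples."""
    reports = []
    for ts, z, fix in samples:
        if abs(z) > BUMP_THRESHOLD:
            reports.append({"time": ts, "location": fix, "severity": abs(z) - G})
    return reports

drive = [
    (0.0, 9.8,  (42.3770, -71.1167)),
    (0.1, 24.3, (42.3771, -71.1165)),   # a real jolt
    (0.2, 10.1, (42.3772, -71.1163)),
]
print(detect_potholes(drive))
```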

It's another reminder to all of us technologists. We tend to think about what the technology can do on its own. But the most valuable uses of technology pair it with the people who use it, sometimes in unexpected ways. We think of how we can use machines to replace us, rather than how we will use machines to enhance what we do. But the valuable uses of technology are in enhancing the human enterprise, and that's how we end up using the technology, even when it wasn't designed for that. A lesson that we need to keep in mind, since we seem to constantly forget it.

Good service or privacy invasion…

You wouldn’t know it now, but there was a time when I was a pretty good dresser. Fashionable, even.

I will admit that this was through no fault (or effort) of my own. But when I was a teenager, I bought my clothing at a single store, where there was a clerk who essentially picked out all of my clothing. I would go in a couple of times a year, and he would tell me what to buy, what went with what, and how I could combine what I was buying with what I already owned. He didn't quite stitch the Garanimals tags on the clothing for me, but it was close. He knew everything that I owned, and all I had to do was follow his instructions.

When I went off to college, I was on my own, and things degraded quickly. But for a while there I had great service, and I was willing to pay a price for it, knowing that there was someone who knew more about what was in my closet than I did. He also cared more than I did. But I liked the result.

I can now foresee a time when this sort of service could be offered again, pretty much to everyone. It could be done via the web, using an AI that filtered my preferences (learned from what I had bought in the past), added some sense of fashion taste, and offered me the right clothing to buy. It would know more about what was in my closet than I did, and could customize itself to my taste and preferences.
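A minimal sketch of the kind of preference filtering I have in mind (the catalog, tags, and scoring are all invented; a real service would use far richer signals):

```python
# Toy content-based recommender: score catalog items by how much their
# style tags overlap with the tags of things already in the closet.
from collections import Counter

closet = [
    {"name": "navy blazer",  "tags": {"navy", "formal", "wool"}},
    {"name": "oxford shirt", "tags": {"white", "formal", "cotton"}},
]

catalog = [
    {"name": "grey trousers", "tags": {"grey", "formal", "wool"}},
    {"name": "band t-shirt",  "tags": {"black", "casual", "cotton"}},
    {"name": "knit tie",      "tags": {"navy", "formal", "silk"}},
]

# How often each style tag appears among the things already owned.
owned_tags = Counter(tag for item in closet for tag in item["tags"])

def score(item):
    # Reward items that share style tags with what is already owned.
    return sum(owned_tags[tag] for tag in item["tags"])

for item in sorted(catalog, key=score, reverse=True):
    print(item["name"], score(item))
```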

But we tend to worry about all of the data that on-line corporations like Amazon and Google know about us, given all of the data that they gather. We worry about the micro-targeting of ads (which can also be seen as only showing us ads about things in which we are interested) and the honing of our news feeds that put us in a bubble. Because of this, there is talk of regulating the data that these companies gather about us, limiting their ability to know what we have done and predict what we will do.

While I share a lot of these concerns, I also wonder if we are letting the unfamiliarity and "creepiness" of the situation dictate an emotional response that may not be justified. When I hear people talking about how much, say, Google knows about me, I wonder who it is that actually knows. Corporations don't know things; they are abstract entities. The computers at Google don't know things, either; they process data and perform algorithms, but they no more know anything than submarines swim. Is there any person at Google who knows all of this information about me? (I have friends at Google who know a lot about me, but that isn't from tracking the data I enter on-line.) There might be someone at the NSA who knows about me (although I doubt I'm that interesting), but I don't think there is anyone at Google.

One big difference between the technologies that know about me and the clothing store clerk of my youth is that I trusted the clerk. He was another human, and I talked with him and interacted in a way that led me to respect his opinions (and do what I was told). There is no such trust relationship with the technologies with which I interact. But that could change, if the sophistication of the algorithms improves. Rather than showing me things I have just purchased to see if I want to purchase them again, maybe the algorithms will start showing me things related to those purchases. The more they could be like the trusted clerk, the less I would be creeped out.

I don’t think they will ever get to the point that I will be a fashionable dresser again. But it might mitigate some of my worries about privacy. Maybe…

Design…

End-to-End Arguments in System Design is one of my favorite papers in the computer science universe. It is well written, and it clearly states a design principle that was followed in the creation of the network we now know as the Internet. It gives some nice examples. What more could you want?

Well, I'd like more papers like that. In the computer science/software engineering/programming community, we don't often talk about what makes a good design, or why we made the design decisions that we make. Oh, there are lots of books and articles that tell you how you ought to go about doing design. Maybe you should use pattern languages, or you should adopt an agile methodology, or you should do everything in UML diagrams. I've even seen combinations, saying that you should use an agile pattern language (no doubt producing UML diagrams). All of these can be useful as mechanisms for communicating a design, but I don't find any of them actually talking about what makes a good design good, or how to go about producing such a design.

Writing about good design is far rarer. There is Butler Lampson's classic Hints for Computer System Design. This is a great paper, but it is also pretty old (although not, surprisingly, outdated). There are a couple of books that tried to talk about design at different levels (Beautiful Code for programming, and Beautiful Architecture for collections of programs), but the results in both are mixed (full disclosure: I wrote a chapter in the second, and I am not at all sure that it was much of a success). I've always liked Paul Graham's Hackers and Painters (the blog post rather than the book), but it is more a contemplation of the art and craft of programming than of design. I've tried to write about how to get good design, but it is a very slippery topic. Even Fred Brooks has written a book on the subject, one that I like but that is also, in the end, somewhat less than satisfying.

One of the reasons, I believe, for the lack of literature on the subject is that it is so hard to say anything that doesn’t seem to be either trite or just wrong. We all can agree that simplicity is a good thing, but what makes a design simple? Breaking a system into modules that make sense is trite; saying what it is for a module to make sense is difficult. You can recognize a good design when you see one, but explaining what a good design will be before the fact– well, I’ve never seen that done all that well.

After thinking about this for some time, I've come to the conclusion that good design is more art than science, more craft and taste than knowledge and process. As Brooks says, often the best that one can say is that good design comes from good designers. Good designers are often trained by having apprenticed with other good designers, but that's about as far as you can go with the explanation. Even that training (which is really an apprenticeship) may not be all that conscious. I know of companies that set up mentoring programs in the hopes of getting good design training, but my apprenticeship (and I remember it well) was much more like the one in Kill Bill: Vol. 2; I spent a lot of time doing the computing equivalent of carrying water, and often felt bruised and battered. But I learned.

This is another way in which system design is like art: the way you learn to be an artist is to try, hear criticism from a more experienced artist, and try again. Like a painting, the design of a system is never finished; there are always things you would like to change, decisions you would like to revisit. Sometimes you decide that it is good enough, and you ship. But you know you could always do better.

Simplicity and Scale

Something that always surprises me (well, more accurately, boggles my mind) is how successfully the Internet has scaled.

The original ARPAnet was designed to connect a small number of research locations so that they could share resources. Starting with just a handful of sites, it scaled up to tens, then hundreds, over a long period. Some of the protocols changed, until finally TCP/IP came along. While there have been minor changes in those protocols since they were first introduced, they have remained pretty much recognizable as the Internet has gone from hundreds of connections to billions. Which is more than something of a miracle.

If I had to pick a reason for this, it would be the simplicity of the design of the network. The Internet really is simple: all it does is deliver packets, using best effort, relying on the end-points to do as much as possible. This means that no part of the Internet needs to know about all of the Internet. It also means that new applications can be added without changing the underlying network. All sorts of things are possible when you keep the network as simple as possible.
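One way to see what "best effort, with the smarts at the end-points" means in practice is a toy stop-and-wait exchange over UDP (the port number and timeout are arbitrary choices): the network just delivers or drops datagrams, and any reliability is the sending endpoint's own problem.

```python
# Toy end-to-end reliability: the network (UDP) only makes a best effort,
# so the sending endpoint retransmits until the receiving endpoint
# acknowledges. Port 9900 and the 0.5 s timeout are invented values.
import socket
import threading

ready = threading.Event()

def receiver():
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as r:
        r.bind(("127.0.0.1", 9900))
        ready.set()
        data, addr = r.recvfrom(1024)
        r.sendto(b"ACK", addr)          # the endpoint, not the network, confirms

threading.Thread(target=receiver, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(0.5)
    for attempt in range(5):            # stop-and-wait: retransmit until ACKed
        s.sendto(b"payload", ("127.0.0.1", 9900))
        try:
            ack, _ = s.recvfrom(1024)
            print("delivered after", attempt + 1, "send(s)")
            break
        except socket.timeout:
            continue                    # datagram (or ACK) lost; try again
```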

What is often not appreciated is how hard it is to design something that is this simple. The temptation to add a feature here or something that will make life at the endpoints easier there can be overwhelming. I have great admiration for the original designers of the Internet protocols for their ability to resist this. As an engineer, one of the most difficult things to do is know when to say “no”. We can thank those early designers for the taste and courage to do just that.

Back on the air…

Hard to believe that I haven’t written since last November, but there it is. Now the second version of our Freshman Seminar is up and running, so I have a forcing function to write…

I’m always fascinated by reading the early history of the ARPAnet/Internet. The whole effort was an amazing leap of faith by a group that didn’t know if what they were trying to do would ever work. They certainly had no conception of how it would scale and the importance it would have in the current world. They were just trying to get something that would let them do their work more easily.

Which leads me back to a question I often consider: how can standards best be set? The standards that enable the Internet (TCP/IP, DNS, and the like) were, for the most part, built to solve problems people were actually having. They weren't built by an inclusive, multi-stakeholder, democratic standards organization. The IETF, which is the organization that has the most to do with Internet standards, is run by a process that causes governance people to faint. It's a bunch of geeks figuring out how to get the job done. But the results are pretty good.

I've argued elsewhere that the best technology standards are the ones that describe an existing tool or technology that everyone uses because it is useful. The C programming language, for example, was standardized well after everyone who was going to use it was already using it. The purpose of the standard was to write down what we all knew. The early days of Java(tm) were the same, as was Unix (which had, in its early days, two standards: one from AT&T and the other from Berkeley). I think of these standards efforts as writing down what are already de facto standards. This is valuable, and the standards seem to work.

I contrast these standards with those that begin with a standards process. These are technologies like Ada, the object-oriented language mandated by the Department of Defense and built by committee, or the ISO networking standards, produced by a full multi-stakeholder process. While they may have their good points, they tend to be clumsy, ugly, and not very useful (and therefore not much used). These are attempts to impose standards, not describe them. I used to claim that the reason managers liked these standards is that they turned technical questions (which the managers didn't understand) into political questions (which they did). But perhaps I've mellowed with age; I now think an even more basic problem with these committee-invented standards is that they are created without a particular problem to solve, and so solve no problems at all.

One of the real blessings of the early Internet was that no one other than the engineers working on it thought it was important enough to be worth putting under some form of governance. The geeks got to solve their problems, and the result is a system that has scaled over many orders of magnitude and is now part of the infrastructure of all technology. But one of the reasons that it works is that those who started it had no idea what they were doing, or what it would become.