
The Living Business Plan


(I wrote this in 2002.) A lightweight tool for getting and keeping
people on the same page in running a high-tech business (and maybe
others too).

Recapping KM Cluster at IBM Research: Technology for Social Networking


Our KM Cluster panel on technology for social networks at IBM Research three weeks ago was well-received.  Then Bill Ives asked me to recap some of the points.  Here goes:

The question on the table was, “what makes good technology for social networking applications?”  To answer this, the logical thing is to ask first, “What makes a good social networking application?”

In summary: first, you have to have something valuable to exchange, and you can’t let it get lost in a thicket of other junk.  Second, you have to group users into tight “affinity groups” within which they are likely to share.  Third, you have to make both contributing and consuming information really easy.  To illustrate these points, at great professional risk, I re-told the (true, I swear) Tale Of The Binge-o-Matic.

So then you ask yourself the following questions about technology:

1. at what cost can I modify it to focus the feature set only on the one or two things that are most valuable to exchange?

2. does its scheme for defining and managing groups and permissions  support the affinity group structure I think will maximize sharing?

3. at what cost can I modify it to support the simplest possible structured contribution and consumption of what’s shared?

Generally speaking, the first and third of these are easy if you’re custom-building a web app. Any good package should also make them easy. 

The second is hard.  The right way to do it is with abstractions that support inheritance of group properties and permissions.  But abstractions can be seriously slow if not done well.  (Getting this right is part of what makes OpenACS/.LRN really special.)
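To make the inheritance point concrete, here is a minimal sketch in Python of groups that inherit permissions from their parents.  The names and structure are hypothetical, invented purely for illustration; OpenACS/.LRN’s actual party and permission model is richer and lives in SQL.

    # Minimal sketch of permission inheritance through a group hierarchy.
    # Hypothetical names; a real system would enforce this in the database.

    class Group:
        def __init__(self, name, parent=None, permissions=None):
            self.name = name
            self.parent = parent
            self.permissions = set(permissions or [])

        def effective_permissions(self):
            # A group's own grants plus everything inherited from its ancestors.
            inherited = self.parent.effective_permissions() if self.parent else set()
            return self.permissions | inherited

    # An affinity-group structure: the whole community can read, class members
    # can also post, and a study group can also share files.  Each level states
    # only its own grants and inherits the rest.
    community   = Group("community", permissions={"read"})
    course      = Group("course-101", parent=community, permissions={"post"})
    study_group = Group("study-group-a", parent=course, permissions={"share-files"})

    print(sorted(study_group.effective_permissions()))   # ['post', 'read', 'share-files']

In production the same idea usually gets pushed down into the database, with tree queries or precomputed permission maps, so that checks stay fast even as groups multiply.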

Thanks again to Bill, and to Kate Ehrlich for hosting us.

January 21 KM Cluster Meeting at IBM Research in Cambridge, MA: “Inside Social Networks”


Bill Ives kindly invited me to sub for Judith Meskill as a panelist at the Spring 2005 Meeting of Boston’s KM Cluster.  The meeting is being hosted by IBM Research in Kendall Square.  The topic of the panel I’m participating in is “Technology Context — What is the Role of Technology in the Social Enterprise?”

Column in Mass High Tech: Going Global The Open-Source Way


Many thanks to Jack Jackson of On-Message Public Relations
(communications advisor to the .LRN Consortium) for securing this
writing opportunity.  Unfortunately Sloan’s cost savings over
alternatives got garbled in the copywriting by MHT — the actual cost
advantage for Sloan is 75% not 25%.

.LRN: Avanti!


The revolution spreads: .LRN adopted for a major exam by 24 leading Italian universities…

see also http://www.cineca.it/press/ECDL_SIIen/

There Is No Open-Source Community…


… and what to do about it.  A few thoughts I’ve finally gotten down. 

 

Professor Jerry Mechling invited me to be a guest instructor in his “Leadership for a Digital World” course at Harvard’s Kennedy School of Government today.  As part of the class, we’re interviewing Massachusetts CIO Peter Quinn on the state’s priorities for IT-enabled initiatives.  Peter is a well-known proponent of open-source software alternatives for the public sector, and a primary force behind the Government Open Code Collaborative.

To help myself prepare for the class, I’ve put down some observations on open-source which I hope will be helpful to others for whom open-source is unfamiliar but potentially important.  In the past, I’ve been a user of open-source software, a senior executive in both open- and closed-source software firms and at an impartial integrator, a board member of an open-source software consortium, and a sponsor of mission-critical application development efforts that use open-source software in major organizations.  So hopefully my experiences will be helpful too.

A few words of explanation:

There are two ways of distributing software.  One way is called a “binary” distribution.  This means that what you get is a bunch of ones and zeros that a computer can make sense of, but which humans can’t.  Another way is to distribute the “source code”, or the code in which the program was originally written.  This allows users of the software to modify the source code for their purposes as they see fit. 

Binaries can be distributed free of charge (this is sometimes called “freeware”, or “shareware” when a payment is requested after a trial), and they can be copied.  They just can’t be modified.  In commercial software, getting binaries to work often requires a license key, a unique code that unlocks the binary.  You can copy commercial software if permitted (for example, to make a backup copy), but you need that license key to make it work.  Sharing the license key is usually not kosher.

With open-source though, you can view, copy, edit, and redistribute the “secret sauce”.  However, the terms of permissible redistribution vary widely according to the specific license under which a user originally obtains open-source software.  Some licenses, like the GNU General Public License, or GPL, require that any modifications made to the original source code must be redistributed openly (that is, as source, not binary) if they are redistributed along with the original source code (which itself must of course be redistributed as open-source).  Other licenses, such as the Mozilla Public License, or MPL, allow open- or “closed-source” modifications to piggyback on re-distributions of open-source software, at the discretion of the people or organization who write the modifications or extensions to the base code.

The proliferation of open-source software licenses — check out http://opensource.org — illustrates both the vitality of the open-source world and the myriad, widely-varying interests and strategies being pursued by its participants.  Which brings me to the principal point of this essay, namely, that there is no such thing as “the open-source community”.

Why does this matter?

I meet people all the time, both folks who work on open-source software projects and people who are current or potential users or sponsors of applications that use open-source software, who make assumptions about what open-source means, how it will work, and how its providers will behave that rest on shaky premises about things like quality, cost, and process.  Many believe that open-source is categorically either inferior or superior to closed-source alternatives.  A better perspective is that “it depends.”

The first rule of software selection is to have a clear idea of what your requirements are.  A common mistake is to stop at defining the features you need, and to ignore performance (including how that performance scales to however many users you need to support), usability, extensibility (how sophisticated and well-documented the APIs are), breadth of usage and support, total lifetime cost of ownership, and degree of legal risk, among others.
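One purely illustrative way to keep all of those dimensions in view is a simple weighted scorecard.  The dimensions are the ones listed above; the weights, candidates, and scores below are hypothetical placeholders, not recommendations.

    # Hypothetical weighted scorecard: every candidate gets rated on every
    # dimension, not just the feature list.  Scores run 1 (poor) to 5 (excellent).

    weights = {
        "features": 3, "performance": 2, "usability": 2, "extensibility": 2,
        "support": 2, "lifetime_cost": 3, "legal_risk": 1,
    }

    candidates = {
        "alternative A": {"features": 4, "performance": 3, "usability": 3,
                          "extensibility": 5, "support": 3, "lifetime_cost": 4,
                          "legal_risk": 3},
        "alternative B": {"features": 5, "performance": 4, "usability": 4,
                          "extensibility": 2, "support": 4, "lifetime_cost": 2,
                          "legal_risk": 4},
    }

    for name, scores in candidates.items():
        total = sum(weights[d] * scores[d] for d in weights)
        print(f"{name}: {total}")

Whether the winner carries an open- or closed-source label then falls out of the requirements rather than the ideology.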

The most helpful thing you can do when considering open- and closed-source alternatives to meet whatever needs you have is to forget the labels “open-source” and “closed-source”.  For any given set of requirements, there are open-source alternatives that are far superior to most if not all closed-source alternatives, and the reverse can be equally true. 

So how do you figure out what software makes sense?  Unfortunately it’s getting harder.  While I claim that there is no open-source community (certainly not any more, though there may have been one in the past), there are many open-source communities.  Each of these has its own unique and often competing interests.  In the past, this divergence of interests has been masked by the existence of a common “enemy” — usually Microsoft — and smaller, less mature markets.  But, as the markets for open-source software expand and the communities become increasingly commercialized, competition intensifies.  With this competition comes, via human nature, a lot of hype and FUD-based (Fear, Uncertainty, Doubt) marketing.  Each project’s proponents make claims about the universal value and applicability of their own hacks, in many cases ahead of any reality.  In the commercial / closed-source world, the defense is opacity — you can’t see the code to evaluate whether or not these claims are real.  In the open-source world, the defense is extensibility — if you don’t like what you see, you have the freedom to extend it.  The latter defense is better than the former, but not by much.

(Related to this is a problematic expectation for how new open-source code gets developed.  Many people assume that if they wait around, “the open-source community” will sense their needs and develop what’s required, for free.  I suppose this is statistically possible, in the same way that Shakespeare’s monkeys  will eventually get around to finishing things.  But as a practical matter, most open-source software development is funded by someone.  And even if someone else builds what you need, you have to be mutually aware and willing to engage.  Also, it’s especially nice if your paths are roughly parallel and not just the momentary crossing of significantly different development vectors, and this takes ongoing coordination, which itself requires funding.  And it’s even more important that there not be two of you, but lots more users, and committed users at that.  Naivete about this is not unique to users of open-source software.  Plenty of people “in the community” have expectations for altruism that get disappointed all the time.  A better assumption is that people might share what they have already (funded and) developed themselves, if it’s in their interest to do so.)

What can users do to help themselves?

I’ve been thinking, not originally, that the best way for users to approach open-source software projects is to “open-source” their requirements, and work to reconcile their differences into as few specifications for starting-point solutions as practical.  Then let software providers, both open- and closed-source, compete based on the best combination of answers to the different dimensions I described above.  (Good answers come with proof points — real users.)

In the private sector this is harder, because those requirements often reflect proprietary trade secrets (implied processes, etc.).  But it seems to me that in the public sector, and in education as well, this secrecy constraint does not exist or even make sense.  I suppose these requirements are already public somewhere, but they sure are hard to find.  Further, they are most certainly not reconciled across all of the different entities whose needs overlap enough to make reconciliation worthwhile.  Perhaps there is a role here for intermediary associations.  An enterprising community that leads this “standards-setting” effort on behalf of potential users, and that benefits from the resulting intimacy with them, might make a good partner.

.LRN Consortium Launches


See my article earlier this year on .LRN

Preparing for November 2


If you are a public official concerned with election security and emergency preparedness, then this model might be of interest. See also BENS.

The State of e_Government


This week I attended the National Electronic Commerce Coordinating Council meetings in Boise, Idaho.  Here’s  my account.

The theme of the conference was “Government in the Digital Age: Myths, Realities & Promises – A Candid Assessment and Road Map for Success.”

(I went on behalf of the E-Government Executive Education (3E) Project at Harvard’s JFK School of Government (KSG), where I am working with Professor Jerry Mechling to develop a new online application called the Compass.  The Compass, built on .LRN with the support of IBM, includes assessment, benchmarking, and library tools that will support the 3E Project’s executive education programs, and perhaps later other KSG programs and beyond.  We showed an early version of the Compass to 25 execs gracious enough to give us their time, and their response and feedback were both encouraging and useful.

The first “production” use of the Compass prototype will be for a 3E program we’ll be running in December called “Leadership in a Networked World: The 2005 Leadership Agenda”.  The purpose of the program is to help senior public sector executives and their advisors evaluate and set priorities among IT-enabled initiatives, not only in light of their potential value, but of the political calculus associated with them as well.  After choosing a few to focus on, the program then considers how best to pursue these priorities given their associated political considerations.)

Conferences can be a mixed bag.  This was a good one.  The participants included current and former state comptrollers, auditors, and CIOs.  Vendors were represented by very experienced folks who had themselves been in the roles of the government participants and were well-known and respected.  So in addition to a very collegial feeling there was little of the awkwardness and stiff hype that’s typical of other vendor-customer interactions.

I’d never been to Idaho before.  Boise is a nice town, with some attractive architecture (state capitol, churches) and some incredible views.  There’s a really good, inexpensive Basque restaurant called Bar Gernika across the street from the Grove Hotel where Jerry and I had lunch (lamb-dip sandwich: good) and Dan Combs and I had dinner (lamb stew: great).  Apparently you can fish in the river that runs through town, though I didn’t get to try that.  And of course it took me a bit to adjust to saying “how-do” to passersby, and to wait to cross until the light said I could.

Here’s some of what I heard and learned (not necessarily a faithful transcription of what people said, presented, or intended).

Wednesday morning’s plenary session was keynoted by Peter Harkness, the editor and publisher of Governing magazine. His theme: the e-government revolution hasn’t really happened yet.  My notes:

  • E-government hasn’t affected costs yet.
    • From 1994 to 2003, the number of federal employees shrank from 2.2 million to 1.95 million, but this masks outsourcing to consultants
    • Over the same period, total employment by states grew from 4.6 million to 5 million, and local public sector employment grew from 11.7 to 13.8 million
    • (I checked: the US population grew about 16%, from roughly 240M to 280M, between 1990 and 2000.  Per capita, then, we still have roughly the same number of government employees we used to; see the quick arithmetic after this list.)
    • So e-gov has not reduced costs, even though “online v inline” is progress…
  • The revolution requires integration
    • Technology isn’t the obstacle
    • Organizational balkanization is the real problem
      • The exception that proves the rule is the military
        • We’ve seen the power of network-centric warfare
        • The real enabler is the elimination of inter-service rivalry in the conduct of warfare
    • Balkanization exists in many places in government
      • Among local, state, and federal, but also within agencies and branches of government at each level
  • Politics impede integration
    • Here’s an example of how it’s made worse:
      • Sophisticated parties + redistricting mean that primaries, not general elections, are where public officials actually get selected, so there is no enduring commitment to a moderate middle; add term limits to the mix and there’s little time for the trust-based relationship building and horse-trading that leaves room to get things done.
    • Here’s another example of how the “Politics of Policy” are dividing feds from states and locals:
      • The FDA doesn’t want to allow importation of prescription drugs from lower-cost places like Canada, having been persuaded by the pharmaceutical industry that the practice is unsafe for consumers.  But Montgomery County, MD, where the taxpayer pays a prescription drug benefit for 80,000 retired public workers, disagrees.  And in an era where federal non-defense discretionary spending is eclipsed by the size of the annual budget deficit, federal ability to influence local practice with funding is diminishing.  So incentives to cooperate on systems to ensure optimal use of drugs by seniors and other beneficiaries are also reduced.  We also see this adversarial relationship in homeland security efforts.
      • State AGs have been suing investment banks, mutual fund companies, pharmaceutical firms, utilities & auto manufacturers, and have found themselves in court fighting not only these organizations but the federal government as well (e.g., California’s attempt to impose fleet mileage requirements in the state)
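Here is the quick arithmetic referenced in the cost note above, using only the figures quoted in these notes:

    # Back-of-envelope check of the per-capita claim, using the quoted figures.
    gov_then = 2.2 + 4.6 + 11.7      # federal + state + local employees (millions), ~1994
    gov_now  = 1.95 + 5.0 + 13.8     # the same, ~2003
    pop_then, pop_now = 240.0, 280.0 # rough US population (millions)

    print(round(gov_then / pop_then, 3))   # 0.077 government employees per resident
    print(round(gov_now / pop_now, 3))     # 0.074, i.e. roughly flat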

At lunch later that day, David Lewis, formerly the CIO of Massachusetts, gave me an example of the as-yet-unfulfilled promise of e-government.  David observed that a really useful thing would be for DHS case workers to be able to enter some parameters that describe a particular person’s or family’s needs, and then to have that “filtering” return a list of all of the relevant benefits available to that situation from across 600-plus programs offered by over 40 agencies.  He noted that the obstacle to this isn’t really technical, but the need to reconcile the words and languages that agencies use to characterize eligibility and relevant benefits.

Obviously this is a challenging thing to do.  I offered that maybe one approach would be to complement traditional top-down reconciliation with a grass-roots “social entrepreneurship” approach.  This would entail a motivated 20-something grabbing some free software to build a demo, then getting a small grant to pay some moonlighting case workers and program staff to do an 80-20 cut at translating a significant subset of program eligibility requirements and benefits to some formal or de facto standard.  It might only take three months, six at the outside, to have a reasonably useful tool. 
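For what it’s worth, the software core of such a demo is not exotic; the hard part is the shared vocabulary.  Here is a minimal sketch, with entirely hypothetical program names and eligibility fields, of what the filtering step might look like once eligibility rules are encoded in a common format:

    # Minimal sketch of cross-agency benefits screening.  Program names and
    # eligibility rules are hypothetical; in practice the hard part is agreeing
    # on the vocabulary, not writing the filter.

    programs = [
        {"name": "Heating Assistance", "agency": "DHS",
         "max_income": 25000, "min_household": 1},
        {"name": "Child Care Subsidy", "agency": "Office for Children",
         "max_income": 40000, "min_household": 2},
        {"name": "Senior Rx Discount", "agency": "Elder Affairs",
         "max_income": 30000, "min_age": 65},
    ]

    def eligible(person, program):
        # True if the person meets every criterion the program defines.
        if person["income"] > program.get("max_income", float("inf")):
            return False
        if person["household_size"] < program.get("min_household", 0):
            return False
        if person["age"] < program.get("min_age", 0):
            return False
        return True

    applicant = {"income": 22000, "household_size": 3, "age": 34}
    print([p["name"] for p in programs if eligible(applicant, p)])
    # ['Heating Assistance', 'Child Care Subsidy']

Point the same filter at a reconciled catalog of real programs and you have roughly the tool David described.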

Later in the afternoon, I attended a very interesting presentation by Glen Teal, who is the Portfolio Manager for Citizen & Customer Services for the Manukau City Council in New Zealand.  Glen was in the US at the invitation of PeopleSoft’s J.D. Williams, formerly the Idaho state comptroller (and a very wise and charming man).  His presentation was a thorough and well-organized story of how they had implemented CRM software to consolidate service interfaces with citizens across a number of different agencies.  That consolidation then permitted selective outsourcing of service fulfillment to private firms, under contracting schemes that varied depending on how well developed the pre-existing markets for those services were.

Two lessons from his talk: first, never outsource relationship management with constituents; second, don’t try to create markets to outsource to where none previously existed.  So, on the latter point, it makes no sense for the government to do engineering services because the private sector has a healthy market for this that can be relied on.  However, private-sector dog catchers mostly do not exist in nature.  Outsourcing this function inevitably leads to consolidation of an initially fragmented mom-and-pop market by enterprising ex-public sector execs.  Once they achieve duopoly or something close, the government is at their mercy:  pay or it’s “Release the hounds!”.  More politely, Glen called it “market capture”.

The theme of opportunities from consolidation, and the structural barriers to it, wove its way throughout a number of conversations I had.  In the hallway between sessions, David Lewis and I chatted with Brian Ridderbush from Unisys about how Medicaid reimbursement rules encourage states to each develop their own benefit structures, which means perhaps $18 billion in additional expenses for variations in related claims processing systems.  Is the variety worth it?  I think we’re unlikely to find out.  The federal money is now allocated in the form of block grants as part of the overall trend toward devolution.  So the feds can’t prescribe any rationalization.  The feds reimburse the states for 90 cents of every Medicaid dollar they spend, so the greater cost of variety is not a place states focus on to save money.

Dan Combs, who was Iowa’s Director of Digital Government, gave me another example of this at dinner.  He described how Iowa has 99 counties.  The seat of each was located, when the state was established, at a distance of a day’s ride from the next.  Horses typically traveled about 30 miles a day in 1850, so Iowa has lots of charming but subscale local governments in the era of the automobile.  Does Iowa, with a population of 3M, really need 99 county governments?  Maybe not, though there are some advantages and a lot of tradition in their favor.  I guess it depends on what Iowans feel they can afford.

I talked with a couple of legislative auditors about how the part-time nature of many state legislatures affects the continuity and will needed to initiate and sustain the kind of major change that consolidation requires.  They noted that the pool of legislators tends to be limited to people who don’t typically have much inclination toward, or experience with, the management of large organizations.  Frequently, legislators are attorneys whose firms can cover for their absence while they serve in ways that also indirectly advance their firms’ interests.  Or, they are wealthy ranchers who can spare three months while the snow covers their fields.  This means part of the challenge for the bureaucrats is to try to educate legislators on what’s going on and how to deal with it effectively.  But as one put it to me, “It’s sort of like trying to teach a pig to sing:  it’s not very effective and it annoys the pig.”

So what’s a government to do in the face of all these limitations?

One move is to de-emphasize new technology as the panacea.  Pam Ahrens is CIO of Idaho.  Idaho has been creative and ahead of the curve on e-government issues, but Pam noted that these days especially it’s people, not technology, that’s on the critical path to change.  So Idaho is de-emphasizing R&D in favor of S&C – search and copy – while it focuses on people issues.  After all, she notes, “Pioneers get killed, settlers get rich.”  I had the strong sense as a visitor from Back East that I ought to take her word on this.

Another approach is to “zero-base” government.  California seems to be pursuing this more radical approach.  Governor Arnold is quoted as saying in the course of commissioning the California Performance Review, “I don’t want to reorganize the boxes, I want to blow them up.”  Echoing this theme is the approach Michael Bloomberg is taking in New York.  Accenture’s Ken Dircks, who is a leader in the firm’s 311 engagement with New York City, told me Thursday night about Hizzoner’s takeover of the New York City Board of Education.  On hearing an explanation of why change would be tough to implement through a multi-tiered bureaucracy, the mayor simply ordered the physical buildings of an entire layer of the bureaucracy padlocked, effectively cutting “management” in this layer out of the process and forcing a higher layer to deal directly with schools.

Well now.  But part of me feels it’s insufficient.  Colin Powell’s “Pottery Barn” rule likely applies here as well — “You break it, you own it.”  So part of owning it is showing the way ahead.  And a general principle I believe in is that people are better at reacting than acting.  Show them something real they can try out and use and you’ll have more impact than if you describe it to them in principle.

Maybe open-source software can help make this experimentation more feasible and affordable.  Friday morning I attended a session on open-source in government.  The format was a debate between Daniel Greenwood from MIT’s Ecommerce Architecture Program and Stuart McKee, former Washington State CIO and now a government-sector “evangelist” at Microsoft (“advocate for Microsoft to the public sector and vice versa”).  Is open source ready for government, and vice versa?  I won’t summarize the discussion, except to say of course that it depends on the specific need and the specific open-source project under consideration.  But I did find a few recent, interesting articles on Microsoft and open-source in government.  Reading them I’m reminded of Gandhi: “First they ignore you, then they fight you, then you win”.

 

  • “Microsoft is expanding a program to give government organizations access to some of its tightly guarded software blueprints amid growing competition from rivals who make such source code freely available…” Wired 9/19/04 
  • “Microsoft chief executive Steve Ballmer told resellers at the European Partner conference that anyone in danger of losing business to Star Office should email him and he would send in the cavalry…” The Register 10/6/04

The next NECCC annual meeting is in Boston next November.

MIT Sloan CIO Al Essa on .LRN at IEEE Tonight in Cambridge


Free, open to public, no registration required.
