Technology

You are currently browsing the archive for the Technology category.

Searches:

So if you’re looking for something about privacy that’s not a site with a privacy policy, you’re also looking at a high haystack/needle ratio.

Just saying.

Not sure what else that data says, such as it is. But it’s interesting.


Here’s what one dictionary says:

World English Dictionary
privacy (ˈpraɪvəsɪ, ˈprɪvəsɪ)
n
1. the condition of being private or withdrawn; seclusion
2. the condition of being secret; secrecy
3. philosophy the condition of being necessarily restricted to a single person

Collins English Dictionary – Complete & Unabridged 10th Edition
2009 © William Collins Sons & Co. Ltd. 1979, 1986 © HarperCollins
Publishers 1998, 2000, 2003, 2005, 2006, 2007, 2009

I especially like that last one: restricted to a single person. In the VRM community this has been our focus in general. Our perspective is anchored with the individual human being. That’s our point of departure. Our approach to privacy, and to everything else, starts with the individual. This is why we prefer user-driven to user-centric, for example. The former assumes human agency, which is one’s ability to act and have effects in the world. The latter assumes exterior agency. It’s about the user, but not by the user. (Adriana Lukas unpacks some differences here.)

But this is a post about privacy, which is a highly popular topic right now. It’s also the subject of a workshop at MIT this week, to which some friends and colleagues are going. So talk about the topic is one thing that makes it front-burner for me right now. The other thing is that it’s also the subject of a chapter in the book I’m writing.

My argument is that privacy is personal. That’s how we understand it because that’s how we experience it. Our minds are embodied, and we experience privacy through our bodies in the world. We are born with the ability to grab, to hold, to make and wear clothing, to build structures that give us boundaries and spaces within which we can isolate what are our concerns alone.

Privacy requires containment, and the concept of a container is one of our most basic, and most embodied. Here’s George Lakoff and Mark Johnson in Philosophy in the Flesh:

Our bodies are containers that take in air and nutrients and emit wastes. We constantly orient our bodies with respect to containers—rooms, beds, buildings. We spend an inordinate amount of time putting things in and taking things out of containers. We also project abstract containers onto areas in space, as when we understand a swarm of bees as being in the garden. Similarly, every time we see something move, or move ourselves, we comprehend that movement in terms of a source-path-goal schema and reason accordingly.

I don’t think privacy itself is a container, but I do think the container provides a conceptual metaphor by which we think and talk about privacy. I also think that the virtual world of the Net and the Web—the one I call the Giant Zero—is one in which containment is very hard to conceive, much less build out, especially for ourselves. So much of what we experience in cyberspace is at odds with the familiar world of physical things, actions and spaces. In the absence of well-established (i.e. embodied) understandings about the cyber world, there are too many ways for organizations and institutions to take advantage of what we don’t yet know, or can too easily ignore. (This is the subject, for example, of the Wall Street Journal’s What They Know series.)

That’s where I am now: thinking about containers and privacy, but not with enough help from scholarly works. That’s why I’m looking for some help. One problem I have is that the word privacy appears on every Web page that has a privacy policy. There are too many false radar images in every search. Advanced searching helps, but I can’t find a way to set the filter narrowly enough. And my diggings so far into cognitive science haven’t yet brought up privacy as a focus of concern. Privacy shows up in stuff on ethics, politics, law and other topics, but is not a subject in itself — especially in respect to our embodied selves in this cyber world we’re making.

So, if anybody can point me to anything on the topic, I would dig it very much. Meanwhile, here’s a hunk of something I wrote about privacy back in September:

Take any one of these meanings, or understandings, and be assured that it is ignored or violated in practice by large parts of today’s online advertising business—for one simple reason (one I got long ago): Individuals have no independent status on the Web. Instead we have dependent status. Our relationships (and we have many) are all defined by the entities with which we choose to relate via the Web. All those dependencies are silo’d in the systems of sellers, schools, churches, government agencies, social media, associations, whatever. You name it. You have to deal with all of them separately, on their terms, and in their spaces. Those spaces are not your spaces. (Even if they’re in a place with “my” in its name. Isn’t it weird to have somebody else using the first person possessive pronoun for you? It will be interesting to see how retro that will seem after it goes out of fashion.)

What I’m saying here is that, on the Web, we do all our privacy-trading in contexts that are not out in the open marketplace, much less in our own private spaces (by any of the above definitions). They’re all in closed private spaces owned by the other party—where none of the rules, none of the terms of engagement, are yours. In other words, these places can’t be private, in the sense that you control them. You don’t. And in nearly all cases (at least here in the U.S.), your “agreements” with these silos are contracts of adhesion that you can’t break or change, but the other party can—and often does.

These contexts have been so normative, for so long, that we can hardly imagine anything else, even though we have that “else” out here in the physical world. We live and sleep and travel and get along in the physical world with a well-developed understanding of what’s mine, what’s yours, what’s ours, and what’s none of those. That’s because we have an equally well-developed understanding of bounded spaces. These differ by culture. In her wonderful book, Polly Platt writes about how French personal distances—comfortable distances from others—are smaller than those of Americans. The French feel more comfortable getting close, and bump into each other more in streets, while Americans tend to want more personal space, and spread out far more when they sit. Whether she’s right about that or not, we actually have personal spaces on Earth. We don’t on the Web, or in Web’d spaces provided by others. (The Net includes more than the Web, but let’s not get into that here. The Web is big enough.)

So one reason that privacy trading is so normative is that dependency requires it. We have to trade it, if that’s what the sites we use want, regardless of how they use whatever we trade away.

The only way we can get past this problem (and it is a very real one) is to create personal spaces on the Web. Ones that we own and control. Ones where we set the terms of engagement. Ones where we decide what’s private and what’s not.

For a bonus link, here’s a paper by Oshani Seneviratne that was accepted for the privacy workshop this week. It raises the subject of accountability and proposes an approach that I like.

Lately, thanks to the inexcusably inept firing of Juan Williams by NPR brass, and the acceptance of a $1.8 million grant from George Soros, NPR has tarred its credentials as a genuinely fair and balanced news organization. Which it mostly still is, by the way, no matter how much the right tries to trash it. (And mostly succeeds, since trying to stay in the middle has itself become a lefty thing to do.)

Columnists all over the place are calling for the feds to “pull the plug on funding for National Public Radio”. (That’s from No subsidy for NPR, by Boston Globe columnist Jeff Jacoby. An aside: NPR’s name is now just NPR. Just like BP is no longer British Petroleum.) In fact NPR gets no money from the feds directly. What NPR does is produce programs that it wholesales to stations, which retail to listeners and sponsors. According to NPR’s finances page, about 10% of that sponsorship comes from the Corporation for Public Broadcasting (CPB). Another 6% comes from “federal, state and local government”.

Jeff points to a NY Times piece, Move to Cut NPR Funding is Defeated in the House, which says “Republicans in the House tried to advance the defunding measure as part of their ‘YouCut’ initiative, which allows the public to vote on which spending cuts the G.O.P. should pursue.” The YouCut page doesn’t mention public radio. It does have this:

Terminate Broadcasting Facility Grant Programs that Have Completed their Mission.

Potential Savings of $25 million in the first year, $250 million over ten years.

In his most recent budget, President Obama proposed terminating the Public Broadcasting Grants at the Department of Agriculture and Public Telecommunications Facilities Grants at the Department of Commerce. The President’s Budget justified terminating these programs, noting that: “Since 2004, the USDA Public Broadcasting Grants program has provided grants to support rural public television stations’ conversion to digital broadcasting. Digital conversion efforts mandated by the Federal Communications Commission are now largely complete, and there is no further need for this program.” and “Since 2000, most PTFP awards have supported public television stations’ conversion to digital broadcasting. The digital television transition was completed in 2009, and there is no further need for DOC’s program.”

CPB isn’t in there. And they’re right: the digital conversion is done. So maybe one of y’all can help us find exactly what the congressional Republicans are proposing here.

Here’s a back-and-forth between Anna Christopher of NPR and Michael Goldfarb of the Weekly Standard. Says Anna,

NPR receives less than 2% of its funding from competitive grants sought by NPR from federally funded organizations (such as the Corporation for Public Broadcasting, National Science Foundation and the National Endowment for the Arts).

Replies Michael,

I appreciate the smug, condescending tone of this letter, but I’m unconvinced. As one former CPB official I spoke to explained, “they love to claim they’re insulated, but they’re very much dependent on the public tit.” The other 98 percent of NPR’s funding comes from a mix of donations, corporate support, and dues from member stations. The fees and dues paid by member stations comprise more than half of NPR’s budget. Where does that money come from? In large part, from the federal government.

Take the local NPR affiliate in Washington, WAMU 88.5. That station paid NPR in excess of $1.5 million in dues, the station’s largest single expense outside of fundraising and personnel. The station also took in $840,000 in public funding and grants from the CPB. The station spent nearly $4 million on “fund-raising and membership development,” with a return of just $6 million. Fundraising is expensive — public money isn’t.

I looked at the .pdf at that link and don’t see the same numbers, but it’s clear enough that NPR affiliates pay a lot for NPR programming, and a non-trivial hunk of that money comes from CPB. According to this CPB document, its regular appropriation for fiscal year 2010 is $420 million, and it’s looking for $430 million in 2011, $445 million for 2012 and $604 million for 2013. Bad timing.

Still, here’s the really interesting thing that almost nobody is talking about. Public radio kicks ass in the ratings. It’s quite popular. In fact, I would bet that it’s far more popular, overall, than right wing talk radio.

In Raleigh-Durham, WUNC is #2, with an 8.2 share. That’s up from 7.5 in the prior survey. Radio people can tell ya, that number is huge.

In San Francisco, KQED is #4 with a 5.2 share.

In New York, WNYC-FM is down in the teens with a 2.2 share, but nobody has more than a 6.5. Add WNYC-AM’s .8 share and classical sister WQXR’s 1.8 share, and you get a 4.8, which is #3 overall.

Here in Boston, WBUR has a 3.3 share. WGBH has a 1.1. Its classical sister station, WCRB (which now avoids using call letters) has a 2.7. Together those are 6.1, or #3 overall.

In Washington, WAMU gets a 4.8 and stands at #5. Classical WETA has a 4.4, for #6. Add in Pacifica’s jazz station, WPFW, with .8, and you get 10, which would be #1 if they were counted together.

There are places where public radio, relatively speaking, sucks wind. Los Angeles is one. The public stations there are good but small. (The Pacifica station is technically the biggest in the country, but its appeal is very narrow.) Dallas is another. But on the whole, NPR stations do very well.

But do they do well enough to stand on their own? I think so. In fact, I think they should. That’s one reason we created ListenLog, which I visited at length here last July. ListenLog is an app that currently works with the Public Radio Player from PRX.  The idea is to show you what you listen to, and how much you value it. Armed with informative self-knowledge, you should be more inclined to pay than just to cruise for free.

We’re entering an era when more and more of our choices are both a la carte and our own. Meaning we’re more responsible, on the whole. And so are our suppliers. There will be more connections between those two facts, and we’ll be in a position to make those connections — as active customers, and not just as passive consumers.

So, if you want public radio to do a better job, to be more accountable to its listeners and not just to the government (even if indirectly), pony up. Make it yours. And let’s keep building better tools to help with that.

[Later…] Here’s a bonus link from Bob Garfield’s AdAge column. (He’s also a host of NPR’s On the Media.) And a quote:

The only quality journalism available, at least in this country, is from a few dozen newspapers and magazines, NPR, some alt weeklies, a few websites (Slate.com, for instance) and a few magazine/website hybrids such as Atlantic. On TV, there is “The News Hour” and “Frontline” on PBS and that is it. Cable “news” is a wasteland (watch for a while and let me know when you see a reporter, you know, reporting). Network news, having taught cable how to cut costs and whore itself to ratings, isn’t much better. Local TV news is live remotes from crime scenes and “Is Your Microwave Killing Your Hamster?”

Good stuff. Read the whole thing.

Two new and worthy posts over at the ProjectVRM blog: Awake at the Wheels and VRM as Agency.

If you want to know what data you’re sharing — without (thus far) knowing about it — on Facebook, ISharedWhat.com is the way. You run it as a simulator and see what’s what.

It was developed by Joe Andrieu, a stalwart contributor of wisdom and code to the VRM community, and has been covered by and tweeted by the Wall Street Journal’s @WhatTheyKnow.

It’s what we call a fourth party app, meaning it performs as an instrument of your intentions, rather than a seller’s or a site operator’s. Check it out and give Joe feedback.


Sitting in the Harvard Law Library, where John Palfrey is about to give what I sense will be a landmark lecture, on the occasion of his chair appointment as Henry N. Ess III Professor of Law at Harvard Law School. So I’m taking notes here. [Later… John’s own notes — the abstract for his talk — are here. Also here. I also shot pictures, which are here. One of those follows.]

John is arguing for a new, clearly connected system for sharing legal information: presenting data in an open, distributable and interoperable way.

One reason for doing this is cost. HLS spends $4 million on legal materials. HLS strives to have the world’s greatest collection of these at any given time. In theory at least, HLS bought everything in the law. There was no policy other than to buy it all. For a long time. Oliver Wendell Holmes surrounded himself with this. (His round desk is in the back of the room, and from it drinks will be served later.) Thomson West, Reed Elsevier (Lexis-Nexis), Wolters Kluwer, et al. are the big sources.

Props to Henry N. Ess III, namesake of John’s new chair, and collector of many books that surround us now.

John reviews nine hundred years of history, from roots in manuscripts behind English common law, works by Littleton and Coke in the mid-teen centuries, then Blackstone in the eighteenth century, then Langdell and West in the nineteenth.

Now we are in the 21st century, and it’s digital. This is our new era, and we are just getting started.

Thanks to Google Books, more is available in digital form, but there are “scary bits” in it. Having this amazing digital library of Alexandria managed by a private entity without public interest at its core is troubling.

An intent: When we have committed for a journal article, we will have it in the public domain. This is a way of systematizing the ideal here.

The notion of putting all the legal information in the world in cyberspace is wacky yet not enough. We need to design it and do it deliberately in a way that is useful and makes sense.

Our students now are born digital. Teachers need to recognize this change.

We now presume that media will be in a digital format. iTunes is the top seller of music. YouTube is the top source of video.

But there is one anomaly in this story. Notice that students in the library outside this room use both laptops and paper casebooks — because the latter work with the three Bs: bed, bath and beach. So paper is still with us. But the presumption remains digital.

Changes in the computing system. One is cloud computing. Computing power and storage have moved to a large degree to places other than our own devices.

There are also changes in publishing. Books may well go toward digital. Sales of Kindle books at Amazon now exceed sales of print books.

We can now print books when we want them. We can now write, publish in print and online in close to real time.

The Digital Lab Team (featured at the Berkman Lunch today) is on screen now. And now we see many resources that are available through Google’s scholar portal. But one bad story that might happen here is that libraries turn into warehouses for print books. Students here today start with Google Scholar, then go to HOLLIS (the Harvard online library resource), and then to the physical library itself — or elsewhere.

So the effort perhaps should go not to completing collections, but to the interface to scholarship in general.

The current slide is a Stack View of books. “We can’t re-create the must” (in stacks). (I love the smell of library stacks. One of my favorite smells in the world.)

The problem is, there isn’t a stack. Most books go to the depository. But we can create a digital stack. And we can create a new way of looking for books and other sources that uses our familiar interface (the stack shelf), and also the serendipitous other advantages of digital connections and presentations.

Next slide, CALI.org and eLangdell.

There are traditions other than our Anglo-American own. (A Chinese library slide is up now.)

Demerit of the system proposed: money. The courts don’t like these ideas. We don’t give enough money to our courts, and thus it is hard to make this possible. But if we gave a bit more, we would be able to overcome the klugey process we have today. We can drive costs out of the system.

Another: privacy. The redaction problem. By putting info in a single system, we might create combinations that are unhappy. Divorces and children combined with criminal law. Depositions and so on. So we need to be careful what we expose and what we don’t. Maybe depositions don’t go there. This is a possible enduring cost.

Another: authentication. Some librarians don’t like these ideas because printed-out stuff seems more reliable. We can do a better job digitally, but this will have a cost — a near-term one.

It is entirely possible that one might get information without context. There will be challenges to teaching in this way. But teachers are seeing this right now already.

Now for the merits.

First, putting things in XML format and making them downloadable (already started) can be enormously powerful.

Next, scale. Much more is now being published. It takes less time to produce more, and we need to produce more, faster.

Next, we can create new code. Think of the great search engines, and familiar leading code projects (Yahoo, Google, Facebook, et al.)… Many of these were created by students. Think about how tech can make hard-to-read stuff accessible.
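(A reader’s aside, not part of John’s talk: here is a tiny sketch of why machine-readable formats matter. The XML schema and case data below are invented purely for illustration, but given even a minimal structured record for a case, a student could start building the kinds of tools being described. This one just pulls out the citations, the raw material for links, graphs, or a digital stack view.)

```python
# Reader's sketch, not from the talk. The XML schema and case data are
# invented purely to illustrate what machine-readable legal information enables.
import xml.etree.ElementTree as ET

SAMPLE = """
<case id="example-1">
  <name>Doe v. Roe</name>
  <court>Hypothetical Appeals Court</court>
  <year>1999</year>
  <cites>
    <cite>123 Example Reports 456</cite>
    <cite>789 Example Reports 12</cite>
  </cites>
</case>
"""

def cited_authorities(xml_text):
    """Return the citations in one case record: raw material for link graphs,
    visualizations, or a 'digital stack' interface."""
    root = ET.fromstring(xml_text)
    return [c.text for c in root.findall("./cites/cite")]

if __name__ == "__main__":
    print(cited_authorities(SAMPLE))  # ['123 Example Reports 456', '789 Example Reports 12']
```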

Next, new connections. Visualizations, for example. (Points to Jeffrey Schnapp, with the Republic of Letters visualization on the screen.) This kind of visualization will create needed curricular reforms.

Implications: perception, practice, scholarship…

Perception: This might undercut what we see as the majesty of the law.

Practice: For judges, this could make them uneasy. Much as Charlie Nesson’s efforts to webcast court proceedings made them uncomfortable. There might be a chilling in the way we practice the law. A possible side-effect might be a little of the medicine that judges’ kids are getting now around privacy. There is an extent to which it is possible that people who have lived in a protected environment might not see how digital natives live in an exposed environment. Seeing the world in a different way than their kids do may have a distorting effect.

Scholarship. The slide: “For the rational study of law the black-letter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” — Oliver Wendell Holmes, Jr.

We may see the rise and fall of the tradition and writing of treatises. Having individuals, without teachers in some cases, DIY-ing it…

Richard Susskind’s The End of Lawyers? is on screen (is that Susskind in the front row?). Everything Richard writes about will be amplified by the trends we’re talking about.

Is this the end of law libraries?, the slide asks.

On the way in we passed the statue of Joseph Story, who saved HLS, which was down to one student when he did. Here on the top floor you pass lots of students, more than ever before, studying in this space, where contemplation is possible. There is something about the physical space. (He thanks the dean for not taking away space.) Next, the portrait of Justice Taney, who wrote the Dred Scott decision. You can see the unhappiness on his face. Isaac Royall is on the wall here. Made money in the slave trade in Antigua. Libraries help us learn from these people, these decisions, this history.

In the future no law library will do it all. We have a lot of law schools around here.

Not every regime in the world is stable. Here, more than most. For example, the pre-Soviet materials here make available what isn’t easy to find in Russia. People come here for materials not available in Turkey. We have legal information from around the world, saved for the ages.

The community of people here who provide access to knowledge is extraordinary. We have this notion that you can make a call and get what you want. The HLS team, on whom the many assets and benefits of this place rest, makes it alive and accessible at key moments.

The game plan. This place — Langdell Hall — good as it is, needs to grow digitally. We have not put together information architects as good as the ones who designed the physical space. We need a design charrette to make this right. We need to do right by the jailhouse lawyer, the prosaic litigant… It will be better, though uncomfortable at first for the teachers and learners, that we make these changes, providing access to justice through information.

Question from Jonathan Zittrain… We have SSRN having to implement anti-gaming measures… Choice of what to think about, and what modality to think about… Is this an article, a blog?… What are your instincts about the future of legal scholarship? What is the right mix of advances that will excite the rest of the world?

JP: I want to defend the long-form argument, but first an aside: The greatest friend of this library is Charlie Donohue… What this will do is create pressure and opportunity for what will count as legal scholarship. We are looking at extension of text analysis, of (missed it)… We need these new modalities. We will see the gradual (shrinking of black letter law as a percentage of the whole).

Q: What are the implications for The Law? Is this the end of The Law? How much depends on what Holmes and others saw as a closed system, with a set of materials that constituted what The Law was and meant? In this new environment do we still have that? As more information becomes accessible for people to make arguments from, does this set new boundaries for what The Law is? What should now be in a law library rather than in a cloud? (Each question so far is a series of questions.)

JP: A great question, and not a new one. Back when printing was new, one of the debates was about this same thing. Is scholarly work in fact the law? So we already have this weird conflation. What we have now is the same problem. What is interdisciplinary work? A thoroughly connected system allows many answers to come. But we still have this problem that law itself is unfinished. If law itself is information, then what is information about the law? That’s where we get hung up. (Hope I got that right.)

Q: Access, and how is it paid for. Who controls what is available? How is it kept reliable? What is the future of what closed systems did so well?

JP: Students want more floors open more hours. In a serious way, what should be open is the platform that involves the primary and secondary law in a virtual sense. That’s the bedrock. There will be much greater diversity than what we get now. Many more people looking at the same core of information through different lenses. We will still have open and closed spaces, but the former will be the larger context.

[Later…] John speaks slowly and carefully enough to follow with an outliner, which is what I did here. Go here for his original abstract (which is comprehensive). And watch MediaBerkman for the audio and video.

Live blogging Barbara van Schewick’s talk at Maxwell Dworkin here at Harvard. (That’s the building from which Mark Zuckerberg’s movie character stumbles through the snow in his jammies. Filmed elsewhere, by the way.)

All the text is what Barbara says, or as close as I can make it. My remarks are in parentheses. The talk should show up at the MediaBerkman site soon. When it does, go there for the verbatim version.

(In the early commercial Net, circa 1995 forward), the innovator doesn’t need to ask permission from the network provider to innovate on the network. Many different people can innovate. Individuals at the network’s ends are free to choose and to use. An obligation to produce a profit in the future isn’t required to cover development costs, because those costs are often low.

Innovators decide, users decide, and low costs of innovation let a large and diverse group participate.

The network is application-blind. That’s a virtue of end-to-end. (Source: Saltzer, Reed and Clark’s End-to-End Arguments in System Design.)

Today the network operators are in a position to control execution of programs. “Imagine you have this great idea for a video application… that means you never have to go back to cable again. You know you have a fair chance at the marketplace…” In the old system. Not the current one. Now the network provider can stand in the way. They say they need to manage bandwidth, or whatever. Investors don’t invest in apps or innovators that threaten the carriers directly.

Let’s say Google ran the network when YouTube came along. Would YouTube win this time, like it did the first time? (Disregard the fact that Google bought YouTube. What matters is that YouTube was free to compete then in ways it probably would not now—so she suggests.)

In the early Net (1995+), many innovators decided, and users decided. There was little uncertainty about the supportive nature of the Internet.

User uncertainty or user heterogeneity? More and better innovation that better meets user needs. More ideas realized. (That’s her slide.)

With fewer or less diverse innovators, fewer ideas are realized.

Her book concentrates on innovators with little or no outside funding. (Like, ProjectVRM? It qualifies.)

One might ask, do we need low cost innovators now that there are so many billionaires and giants like Google and Yahoo? Yes. The potential of the early Web was realized by Netscape, not Microsoft. By Amazon, not by Barnes & Noble.

Established companies have different concerns and motivations than new innovators. Do we prefer innovation from large self-protecting paranoid companies or small aggressive upstarts?

Users decide vs. Network providers decide. That’s the choice. (The latter like to choose for us. They did it with telephony and they did it with cable TV.) In Europe some network providers prohibit Skype because it competes with their own services. Do we want them to pick winners and losers? (That’s what they want to do. Mostly they don’t want to be losers.)

Users’ interests: innovators decide; users decide; the network can’t control; low costs of innovation; a very large and diverse group of innovators. (Her slides are speaker’s notes, really.)

Network providers’ interests: They are not interested in customer or user innovation. In fact they oppose it. They change infrastructure to protect their interests. There is a gap between their private and public interests: what economists call a Market failure.

Do we need to regulate network providers? That’s what Network neutrality is about. But the high cost of regulation is a difficult question. Not saying we need to preserve the Net’s original architecture. We do need to protect the Net’s ability to support innovation.

Let’s pull apart network neutrality and quality of service (which the carriers say they care most about).

Best effort is part of the original design. Didn’t treat packets differently. Doing that is what we call Quality of Service (QoS).

Question: How to define discrimination? We need to ask questions. Such as, do we need a rule against blocking? Such as against Skype. One defining factor in all NN proposals is opposition to blocking. If Comcast slows down YouTube or something else from Google to favor its own video services (e.g. Xfinity), that’s discrimination.

Option 1: allow all discrimination…. or no rule against discrimination. That’s what the carriers want. Think of all the good things you could get in the future that you can’t now if we allow discrimination, they say. (Their promise is a smooth move of cable TV  to the Net, basically.)

Option 2: ban all discrimination … or treat every packet the same. This is what Susan Crawford and others argue for. Many engineers say “just increase capacity,” in support of that. But that’s not the best solution either. It’s not the job of regulators to make technical decisions about the future.

All or nothing doesn’t work. Neither allow all discrimination nor ban all discrimination.

Application blindness is the answer.

Ban discrimination based on applications. Ban discrimination based on applications or classes of applications.

Fancast vs. Hulu. YouTube vs. Hulu. Allow discrimination based on class of application… or like treatment. Treat Internet telephony vs. email differently. But don’t favor Skype over Vonage. (This is hard to describe here. Forgive.)

Problem 1. Distorting competition. Capturing some value from gaming, for example, by favoring it as a class. Give it no-delay service while not doing that for VoIP. But both are affected by delays. In the Canadian network management proceeding, we found that P2P is slowed down either all the time or during congestion time. That allows real-time to work well. But then real-time video came along. What class do they say that belongs to? We don’t really know what the Canadian carriers did, but we do visit the question of what they should do if they discriminate by class. Thus…

Problem 2. High cost of regulation. (Self-explanatory, so it saves me the effort to transcribe.)

Problem 3. User choice. Support from the network. The moment you require support from the network (as a user or app provider), you throttle innovation.

Constraints on network evolution that allow quality of service: 1) different classes of service offered on a non-discriminatory basis; 2) users able to choose whether and when to use which class of service; 3) the network provider only allowed to charge its own Internet service customers for use of different classes of service. So network providers don’t destroy competition any more. Users get to choose which quality of service to use. And the network provider doesn’t need to provide QoS except in a general way. They’re out of the market equation.
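(An aside of my own, not Barbara’s: here is a toy sketch of what “user-chosen, application-blind” quality of service could look like. All the class names, priorities and prices are made up. The point is that the network looks only at the class the user asked for, never at which application or company the traffic belongs to, and only the provider’s own subscribers pay for premium classes.)

```python
# Toy sketch (mine, not from the talk). Class names, priorities and prices
# are invented; the point is the shape of the rule, not the numbers.

from collections import namedtuple

Packet = namedtuple("Packet", ["user", "application", "qos_class"])

# Classes of service are defined by technical behavior only.
# Any application may use any class; the *user* picks which one.
CLASSES = {
    "low-delay":  0,  # e.g. calls or gaming, if the user wants it
    "default":    1,
    "background": 2,  # e.g. overnight backups, if the user wants it
}

RATES = {"low-delay": 0.002, "default": 0.0, "background": 0.0}  # made-up prices

def schedule(queue):
    """Application-blind scheduling: order packets only by the class the user
    selected. The application field is never consulted; that's the point."""
    return sorted(queue, key=lambda p: CLASSES[p.qos_class])

def bill(packets):
    """Only the provider's own subscribers pay for premium classes; the
    provider never charges the application or content company at the far end."""
    charges = {}
    for p in packets:
        charges[p.user] = charges.get(p.user, 0.0) + RATES[p.qos_class]
    return charges

if __name__ == "__main__":
    q = [
        Packet("alice", "skype",  "low-delay"),
        Packet("bob",   "email",  "default"),
        Packet("alice", "backup", "background"),
        Packet("carol", "vonage", "low-delay"),  # same class, same treatment as Skype
    ]
    for p in schedule(q):
        print(p)
    print(bill(q))
```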

(Bob Frankston is across the aisle from me, and I can see the word balloon over his head: “Why constrain thinking with ‘services’ at all? Why not just start with connectivity? Services keeps us in the telecom bottle.”)

Constraints on network evolution. Cost of regulation.

MY SOLUTION: (not on screen long enough.. there was more on the slide)

Preserve factors that have fostered application innovation ≠ preserve the original architecture of the Internet.

Final question to talk about. Why care about application innovation?

Have you ever tried to explain to your partner’s grandmother why she should use the Internet? You don’t argue about sending packets back and forth. You talk about grandchildren pictures, and being able to talk for free. That comes from innovation at the ends, not the carriers.

We need to protect the sources of innovation.

Yochai: What do you do with the Apple iPhone? Tremendous user adoption being driven precisely by a platform that reverses many of your assumptions smack in the middle of the most controversial boundary, regarding wireless. (Not verbatim, but as close as I could get.)

Barbara: People say, “Look, I’ve got a closed device supporting lots of innovation.” No, you need to think about this differently. Apple created a device with open interfaces that supported lots of innovation. So it moved us from a world where few could innovate and it was costly, to a world where many could and it was cheap. Proves my point. Now we have an experiment with iPhone vs. Android. Apple controls, Google doesn’t. Now we get to see how this plays out. We’re starting to see lots of innovators moving to Android as well. More are starting with the Android, experimenting and then moving to the iPhone. The cost of starting on the Android is less. So we have two shifts. I think we will see the platform with no control being more successful.

Every network neutrality proposal has a network management exception. Mine doesn’t.

Q from the audience: Some apps still need a lot of money, whether or not the network is neutral. Building a big data warehouse isn’t cheap. And why is innovation all that matters? What happens when it is actually hurtful to rich incumbents such as news channels?

Barbara: I agree. If you’re a rich company, your costs of entry are lower. Kids with rich parents have advantages too. To me the network itself is special because it is the fundamental point of entry into the marketplace. We want the impediments to be as low as possible. The cost of starting Facebook for Mark Zuckerberg was actually rather low. He scaled after getting VC money, but he got a significant number of users first, without a lot of costs. I do think this is very important. Innovation is often disruptive, sure. But that’s not a reason for messing with this fundamental infrastructure. If newspapers have a problem with the Net, fix the papers. Separate that problem from the infrastructure itself. As a general matter, one of the good things about the Net’s infrastructure is that it allows disruption.

Q: What about companies as users? (Can’t summarize the answer.)

Bob Frankston: If your grandmother is on a phone… (couldn’t get what Bob said or make sense of Barbara’s response… sorry).

Q: (What about subsidies? I think.) The theory of two-sided markets. With papers, subscribers and advertisers. With the Net, users and app providers. If you’re attached to one platform, the providers are likely to attach to one side. (I think that’s what she’s saying.) This gives the provider a way to monopolize. In Europe, where there is more competition, there are more trade-offs. If we forced the Net to be neutral, could we solve the problem by charging a different way? Subsidies, tax breaks. Perhaps a solvable problem. Let’s say we allow the carriers to charge extra (for premium use?). We break the system at its core. It doesn’t make sense to give up the value of the Internet to solve a problem that can be solved a different way.

Q: A question about managed vs. unmanaged isochronous delivery. We should be thinking about what happens when the carriers start charging for better service. (But they already do, with service tiers, and business-grade service with assigned IP addresses, unblocked ports, etc.) The Europeans give the regulators the ability to monitor quality and impose minimum standards. This has a whole bunch of problems. What really are acceptable levels, for example? The Europeans think this is sufficient to discipline providers. Well, in the end there might be some apps that require strict guarantees.

Okay, it’s later now. Looking back over this, I have to say I’m not sure it was a great idea to live-blog it. There are others who are better at it. Within the Berkman fold, David Weinberger is one, and Ethan Zuckerman is another. Neither were in the room, so I thought I’d give it a try. Again, visit MediaBerkman for the actual talk. Or just go get her book, Internet Architecture and Innovation. I got one, and will start reading it shortly.

The picture above, by the way, is one of a set I shot at the talk.

For folks interested in what makes Steve Jobs and Apple (same thing) tick, Being Steve Jobs’ Last Boss, in the current Bloomberg Businessweek, is helpful reading. It’s an interview of John Sculley by Leander Kahney of Cultofmac.com. Sculley had been a very successful president of Pepsico when he was recruited as CEO of Apple in 1983, essentially to serve as Steve Jobs’ adult supervisor. While Sculley oversaw much growth at Apple in the following decade, mistakes were made (including the ousting of Steve), and Sculley himself was ousted after a decade on the job.

The encompassing statements:

Steve had this perspective that always started with the user’s experience; and that industrial design was an incredibly important part of that user impression. He recruited me to Apple because he believed the computer was eventually going to become a consumer product. That was an outrageous idea back in the early 1980s. He felt the computer was going to change the world, and it was going to become what he called “the bicycle for the mind.”

What makes Steve’s methodology different from everyone else’s is that he always believed the most important decisions you make are not the things you do, but the things you decide not to do. He’s a minimalist. I remember going into Steve’s house, and he had almost no furniture in it. He just had a picture of Einstein, whom he admired greatly, and he had a Tiffany lamp and a chair and a bed. He just didn’t believe in having lots of things around, but he was incredibly careful in what he selected.

Everything at Apple can be best understood through the lens of designing. Whether it’s designing the look and feel of the user experience, or the industrial design, or the system design, and even things like how the boards were laid out. The boards had to be beautiful in Steve’s eyes when you looked at them, even though when he created the Macintosh he made it impossible for a consumer to get in the box, because he didn’t want people tampering with anything.

And,

The reason why I said it was a mistake to have hired me as CEO was Steve always wanted to be CEO. It would have been much more honest if the board had said, “Let’s figure out a way for him to be CEO.”

As I wrote to Dave (in September 1997, after Steve came back to Apple),

The simple fact is that Apple always was Steve’s company, even when he wasn’t there. The force that allowed Apple to survive more than a decade of bad leadership, cluelessness and constant mistakes was the legacy of Steve’s original Art. That legacy was not just an OS that was 10 years ahead of the rest of the world, but a Cause that induced a righteousness of purpose centered around a will to innovate — to perpetuate the original artistic achievements. And in Steve’s absence Apple did some righteous innovation too. Eventually, though, the flywheels lost mass and the engine wore out.

In the end, by the time too many of the innovative spirits first animated by Steve had moved on to WebTV and Microsoft, all that remained was that righteousness, and Apple looked and worked like what it was: a church wracked by petty politics and a pointless yet deeply felt spirituality.

Now Steve is back, and gradually renovating his old company. He’ll do it his way, and it will once again express his Art.

These things I can guarantee about whatever Apple makes from this point forward:

  1. It will be original.
  2. It will be innovative.
  3. It will be exclusive.
  4. It will be expensive.
  5. Its aesthetics will be impeccable.
  6. The influence of developers, even influential developers like you, will be minimal. The influence of customers and users will be held in even higher contempt.

And here we are.

Bonus link.

Nice interview with Dan Levy of Sparksheet:

From Part I:

What opportunities does the widespread adoption of mobile smartphones present for VRM?

This is the limitless sweet spot for VRM.

Humans are mobile animals. We were not built only to sit at desks and type on machines, or even to drive cars. We were built to walk and talk before we did anything else.

This is why mobile devices at their best serve as extensions of ourselves. They enlarge our abilities to deal with the world around us, with each other, and with the organizations we relate to. This especially applies to companies we do business with.

Right now we are at what I call the “too many apps” stage of doing this. Every store, every radio station, every newspaper and magazine wants to build its own app. At this early stage in the history of mobile development we need lots and lots of experimenting and prototyping, so having so many apps (where in lots of cases one would do) is fine.

But as time goes on we’re going to want fewer apps and better ways of dealing with multiple entities. For example, we’ll want one easy way to issue a personal RFP, or to store and selectively share personal data on an as-needed basis.

We won’t want our health data in five different clouds, each with its own app. We may have it in one cloud, for example, much as most of us currently have our money in one bank. But we’ll also need for that data to be portable, and the services substitutable.

From Part II:

I want to ask you about privacy, which is an important part of the VRM discussion. We want businesses to recognize our past interactions and treat us in a personalized way, but we’re also a little creeped out when it happens. So how do you see people using VRM tools to navigate that line in a way that makes us feel safe and well served?

We need our own tools for controlling the way our data and other personal information is used. Some of these tools will be technical. Others will be legal. That means we will have tools for engagement that say right up front how we want our data used and respected. We can do this without changing any laws at all – just the way we engage.

As I said in The Data Bubble, the tide began to turn with the Wall Street Journal article series titled “What They Know,” which is about how companies gather and use data about us. More and more of us are going to be creeped out by assumptions made by marketers about what we might want.

This is also part of what I believe is an advertising bubble. Our tolerance of too much advertising is like the proverbial frog, boiling slowly. The difference is that the frog dies, while we’re going to jump out. Everything has its limits, and we will discover how much advertising we’re willing to suffer, especially as more of it gets too personal.

The holy grail of advertising for many decades has been personalization. If we know enough about a person, the theory goes, we can make perfect bull’s-eye messages for them. But this goal has several problems.

The first problem is that personal advertising is kind of an oxymoron. Advertising has always been something you do for populations, not individuals, even if ads show up in searches by individuals, and advertisers are looking for individual responses.

From the individual’s side, advertising shouldn’t be any more personal than a floor tile. You don’t want the floor tile in a public bathroom to speak into your pants.

In fact, we’ve never liked personalized advertising of the old conventional sort, such as direct mail. We see our name on the envelope and then toss it anyway, most of the time.

The second problem is the belief that it’s actually possible to have perfect information about somebody. It’s not. And where it gets close it gets creepy.

The third problem is that advertising is still guesswork.

We need it, to let lots of customers know what we’ve got. But there should also be more efficient ways for supply and demand to meet and get acquainted – ways in which, for example, individual customers eliminate guesswork by telling vendors exactly what they want. VRM is one answer to that need.

These and other topics will be subjects of a panel I’m on this morning at Slush in Helsinki. Ted Shelton of OpenFirst is moderating.


I just learned from Craig Smith that KCET, the flagship PBS TV station in Los Angeles, is “going rogue.” Specifically, Craig says, “KCET will be dropping its PBS affiliation at the end of the year. That means if you live in Santa Barbara and want to watch the PBS NewsHour, Tavis Smiley, Charlie Rose, Antiques Roadshow or even Sesame Street, you may be out of luck starting at the beginning of next year.”

KCET is a Los Angeles station that puts no signal at all into Santa Barbara (except through a translator on Gibraltar Peak). But it’s the nearest PBS affiliate and is therefore on the local cable system (Cox), thanks to must-carry rules.

Here’s the LA Times story. Here’s another one. Both rake KCET over the coals. They’re abandoning viewers, paying their general manager too much, yada yada.

As all those pieces point out, KCET isn’t the only source of PBS programming in the LA area. KOCE, licensed to Huntington Beach in Orange County, is another long-time PBS affiliate and promises to at least help pick up the slack. And it’s in a good position to do that. Where KOCE used to radiate from a local site in Orange County, it now also broadcasts from Mt. Wilson, which overlooks Los Angeles and is home to nearly all the area’s TV and FM stations. In fact, KOCE is actually putting out a signal that maxes at one million watts, while KCET is currently at 190,000 watts with a construction permit for 106,000 watts. This means that technically, at least, KOCE is now a bigger station. At 162,000 watts, so is KLCS, another PBS station in Los Angeles.

At least one of those others is sure to show up on cable systems in outlying areas such as Santa Barbara, bringing familiar shows to PBS audiences there. (The big question for KOCE is whether it can still be an Orange County station, and not morph into a national/Southern California one.)

But the real story here is the death of TV as we knew it, and the birth of whatever follows.

Relatively few people actually watch TV from antennas any more. KCET, KOCE and KLCS are cable stations now. That means they’re just data streams with channel numbers, arriving at flat screens served by cable systems required to carry them.

What makes a TV station local is now content and culture, not transmitter location and power. In fact, a station won’t even need a “channel” or “channels” after the next digital transition is done. That’s the transition from cable to Internet, at the end of which all video will be either a data stream or a file transfer, as with a podcast.

All that keeps cable coherent today is the continuing perception, substantiated only by a combination of regulation and set-top box design, that “TV” still exists, and choices there are limited to “channels” and program schedules. All of those are anachronisms. Living fossils. And very doomed.

KCET bailed on PBS because it didn’t want to pay whatever it took to stay affiliated with that program source. This means KCET has some faith — or at least a good idea — that Whatever Comes Next will be good enough for lots of people to watch. If we’re lucky, what’s liberated will also be liberating.

I sure hope so. Dumping PBS was a brave move by KCET. They deserve congratulations for it.

[Later…] Please read John Proffitt’s comment below. He lays out a scenario so likely yet easily denied that it has the ring of prophecy. TV is still TV, and KCET and its competitors are all TV stations. The next digital transition for the likes of KCET will indeed give us more kinds of Ken Burns. The one that follows will bring us whatever we bring ourselves. Yes, there will still be big heads and long tails, but the game won’t be a closed one, or assume a sphinctered distribution system (which TV still is—and will still be if everything still has to run through regulated BigCos). More in my own responses and others that follow in the comments.

For bonus links, check out what KETC (not a typo and no relation), the landmark PBS station in St. Louis, has been up to lately. There is lots of co-thinking out loud, including this stuff, facilitated by Robert Paterson.

(For some reason the text here keeps reverting to an earlier version, then back to a later one, each time I edit it. Very strange. In fact, I just discovered that half this post disappeared somehow. I just restored it from Google search cache. I hope.)
