2009

To their credit, fixing my problem has become a higher priority with Cox. A senior guy came out today, confirmed the problem (intermittent high latencies and packet losses), made some changes that adjusted voltages at the modem, and, by tracing the coax from our house to the new pole behind it, found that the guys who installed the pole nearly severed the coax when they did it. So he replaced that part of the line and brought the whole pole situation up closer to spec… for a few minutes.

Alas, the problem is still there. The engineer from Cox duplicated the problem on his own laptop, so he told me the ball is still in Cox’s court.

At its worst the problem is so bad that this was as far as I got with my last ping test:

PING google.com (74.125.67.100): 56 data bytes
64 bytes from 74.125.67.100: icmp_seq=2 ttl=56 time=101.462 ms
^C
--- google.com ping statistics ---
9 packets transmitted, 1 packets received, 88% packet loss
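
For what it’s worth, here’s a rough sketch of how I could document an intermittent problem like this for Cox: a little shell loop (assuming a Unix-ish machine and the stock ping command; 68.6.66.1 is just the nearest Cox gateway, pinged further down, used here as an example target) that appends a timestamped one-minute ping summary to a log file, so there’s a record of exactly when latency and loss spike:

#!/bin/sh
# Append a timestamped one-minute ping summary to cox-ping.log, forever.
# TARGET is an example; any steady host on the far side of the modem will do.
TARGET=68.6.66.1
LOG=cox-ping.log
while true; do
  echo "=== $(date) ===" >> "$LOG"
  # -c 60: send 60 probes (about a minute); -q: print only the summary lines
  ping -q -c 60 "$TARGET" >> "$LOG" 2>&1
done

A log like that, covering a day or two, would give the engineers something to line up against whatever their own gear is telling them.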

The guy from Cox said my plight had been escalated, and has the attention of higher-up engineers there. He also said they’d come out to continue trouble-shooting the problem. “Probably by Thursday.”

We’ve had the problem since June 17.

Meanwhile, I’m connecting to the Net and posting this through my Sprint datacard, just like I did last week in Maryland. Same results: good connections, adequate speeds and awful latencies:

dsearls2$ ping harvard.edu
PING harvard.edu (128.103.60.28): 56 data bytes
64 bytes from 128.103.60.28: icmp_seq=0 ttl=235 time=1395.515 ms
64 bytes from 128.103.60.28: icmp_seq=1 ttl=235 time=750.396 ms
64 bytes from 128.103.60.28: icmp_seq=2 ttl=235 time=295.272 ms
64 bytes from 128.103.60.28: icmp_seq=3 ttl=235 time=823.698 ms
64 bytes from 128.103.60.28: icmp_seq=4 ttl=235 time=1404.692 ms
64 bytes from 128.103.60.28: icmp_seq=5 ttl=235 time=1360.761 ms
64 bytes from 128.103.60.28: icmp_seq=6 ttl=235 time=803.610 ms
64 bytes from 128.103.60.28: icmp_seq=7 ttl=235 time=446.081 ms
64 bytes from 128.103.60.28: icmp_seq=8 ttl=235 time=554.643 ms
64 bytes from 128.103.60.28: icmp_seq=9 ttl=235 time=425.423 ms
^C
--- harvard.edu ping statistics ---
12 packets transmitted, 10 packets received, 16% packet loss

For work such as this blog post, which requires lots of dialog between my browser and WordPress on the server, the latencies are exasperating. I watch the browser status bar say “Connecting to blogs.law.harvard.edu…”, “Waiting for blogs.law.harvard.edu…” and “Transferring from blogs.law.harvard.edu…” over and over and over for a minute or more, every time I click on a button (such as “save draft” or “publish”).
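
If anyone wants to see where the time goes on a single request, curl can break it out. This is only a sketch, using the blog’s host as an example URL; each phase costs at least one network round trip, which is why an 800-millisecond latency multiplies so quickly:

# Show where the seconds go for one HTTP request:
# DNS lookup, TCP connect, first byte, total.
curl -o /dev/null -s -w \
  'dns: %{time_namelookup}s  connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
  http://blogs.law.harvard.edu/

A WordPress edit screen fires off many requests like that, and every one of them pays the same round-trip tax.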

So don’t expect to read much here until we finally get over this hump. Which has been in front of me since June 17. Meanwhile I’m hoping to get back to editing in .opml soon, which should make things faster.

But I’ll need real connectivity soon, and I can only get that from Cox. (Don’t tell me about Verizon. They’re great back at my place in Boston, where I have FiOS; but here in Santa Barbara I’m too far from their central office to get more than minimal-speed ADSL.)

The good thing is, Cox knows the problem is one they still have to solve, and they seem serious about fixing it. Eventually.

Meanwhile, for interested Cox folks, here’s how pings to Google currently go:

dsearls2$ ping google.com
PING google.com (74.125.127.100): 56 data bytes
64 bytes from 74.125.127.100: icmp_seq=0 ttl=45 time=110.803 ms
64 bytes from 74.125.127.100: icmp_seq=1 ttl=45 time=164.317 ms
64 bytes from 74.125.127.100: icmp_seq=2 ttl=45 time=204.076 ms
64 bytes from 74.125.127.100: icmp_seq=3 ttl=45 time=259.795 ms
64 bytes from 74.125.127.100: icmp_seq=4 ttl=45 time=397.490 ms
64 bytes from 74.125.127.100: icmp_seq=5 ttl=45 time=581.123 ms
64 bytes from 74.125.127.100: icmp_seq=6 ttl=45 time=506.292 ms
64 bytes from 74.125.127.100: icmp_seq=7 ttl=45 time=128.939 ms
64 bytes from 74.125.127.100: icmp_seq=8 ttl=45 time=328.000 ms
64 bytes from 74.125.127.100: icmp_seq=9 ttl=45 time=160.761 ms
64 bytes from 74.125.127.100: icmp_seq=10 ttl=45 time=176.398 ms
64 bytes from 74.125.127.100: icmp_seq=11 ttl=45 time=187.511 ms
64 bytes from 74.125.127.100: icmp_seq=12 ttl=45 time=188.291 ms
64 bytes from 74.125.127.100: icmp_seq=13 ttl=45 time=347.966 ms
64 bytes from 74.125.127.100: icmp_seq=14 ttl=45 time=285.017 ms
64 bytes from 74.125.127.100: icmp_seq=15 ttl=45 time=389.641 ms
64 bytes from 74.125.127.100: icmp_seq=16 ttl=45 time=399.993 ms
64 bytes from 74.125.127.100: icmp_seq=17 ttl=45 time=113.803 ms
64 bytes from 74.125.127.100: icmp_seq=18 ttl=45 time=153.111 ms
64 bytes from 74.125.127.100: icmp_seq=19 ttl=45 time=147.549 ms
64 bytes from 74.125.127.100: icmp_seq=20 ttl=45 time=198.597 ms
^C
--- google.com ping statistics ---
21 packets transmitted, 21 packets received, 0% packet loss

And here’s how they go to the nearest Cox gateway:

ping 68.6.66.1
PING 68.6.66.1 (68.6.66.1): 56 data bytes
64 bytes from 68.6.66.1: icmp_seq=0 ttl=239 time=676.134 ms
64 bytes from 68.6.66.1: icmp_seq=1 ttl=239 time=263.575 ms
64 bytes from 68.6.66.1: icmp_seq=2 ttl=239 time=429.944 ms
64 bytes from 68.6.66.1: icmp_seq=3 ttl=239 time=470.586 ms
64 bytes from 68.6.66.1: icmp_seq=4 ttl=239 time=473.553 ms
64 bytes from 68.6.66.1: icmp_seq=5 ttl=239 time=416.172 ms
64 bytes from 68.6.66.1: icmp_seq=6 ttl=239 time=489.699 ms
64 bytes from 68.6.66.1: icmp_seq=7 ttl=239 time=471.640 ms
64 bytes from 68.6.66.1: icmp_seq=8 ttl=239 time=349.825 ms
64 bytes from 68.6.66.1: icmp_seq=9 ttl=239 time=588.051 ms
64 bytes from 68.6.66.1: icmp_seq=10 ttl=239 time=606.703 ms
64 bytes from 68.6.66.1: icmp_seq=11 ttl=239 time=573.560 ms
64 bytes from 68.6.66.1: icmp_seq=12 ttl=239 time=454.920 ms
64 bytes from 68.6.66.1: icmp_seq=13 ttl=239 time=259.428 ms
^C
--- 68.6.66.1 ping statistics ---
14 packets transmitted, 14 packets received, 0% packet loss

And here is a traceroute to the same gateway:

traceroute to 68.6.66.1 (68.6.66.1), 64 hops max, 40 byte packets
1  10.0.2.1 (10.0.2.1)  2.376 ms  0.699 ms  0.711 ms
2  68.28.49.69 (68.28.49.69)  109.610 ms  78.637 ms  73.791 ms
3  68.28.49.91 (68.28.49.91)  84.093 ms  161.432 ms  84.844 ms
4  68.28.51.54 (68.28.51.54)  187.814 ms  166.084 ms  181.780 ms
5  68.28.55.1 (68.28.55.1)  126.050 ms  100.136 ms  239.987 ms
6  68.28.55.16 (68.28.55.16)  80.512 ms  147.347 ms  373.152 ms
7  68.28.53.69 (68.28.53.69)  121.593 ms  265.198 ms  323.666 ms
8  sl-gw10-bur-1-0-0.sprintlink.net (144.223.255.17)  331.535 ms  346.841 ms  279.394 ms
9  sl-bb20-bur-10-0-0.sprintlink.net (144.232.0.66)  397.594 ms  542.053 ms  546.655 ms
10  sl-crs1-ana-0-1-3-1.sprintlink.net (144.232.24.231)  986.040 ms  451.456 ms  630.898 ms
11  sl-st21-la-0-0-0.sprintlink.net (144.232.20.206)  726.689 ms  452.451 ms  235.828 ms
12  144.232.18.198 (144.232.18.198)  194.067 ms  295.496 ms  99.809 ms
13  64.209.108.70 (64.209.108.70)  262.008 ms  93.663 ms  114.594 ms
14  68.1.2.127 (68.1.2.127)  145.956 ms  123.435 ms  345.784 ms
15  ip68-6-66-1.sb.sd.cox.net (68.6.66.1)  346.696 ms  654.332 ms  406.933 ms

Draw (or re-draw) your own conclusions.

Maybe somebody out there in geekland can see the problem and help offer a solution. Thanks.

Funny… Thanks to a quote in a caption (“We play the hands of cards life gives us. And the worst hands can make us the best players.” from this blog post here) — sans quotation marks — Mahalo thinks this Flickr picture by Oftana Media is one of me.

Forget financial markets for a minute, and think about the directions money moves in retail markets. While much of it moves up and down the supply chains, the first source is customers. The money that matters most is what customers spend on goods and services.

Now here’s the question. Where is there more money to be made — in helping supply find demand or in helping demand find supply? Substitute “drive” for “find” and you come to the same place, for the same reason: customers are the ones spending the money.

For the life of the commercial Web, most of those looking to make money there have looked to make it the former way: by helping supply find or drive demand. That’s what marketing has always been about, and advertising in particular. Advertising, last I looked, was about a $trillion business. Now ask yourself: Wouldn’t there be more money to be made in helping the demand side find and drive supply?

Simply put, that’s what VRM is about. It’s also what Cluetrain was about ten years ago. It wasn’t about better ways for the supply side to make money. It wasn’t about doing better marketing. It was about giving full respect to the human beings from whom the Web’s and the Net’s biggest values derive. When Cluetrain (actually Chris Locke) said “we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.”, it wasn’t saying “Here’s how you market to us.” It was saying “Our new power to deal in this new marketplace exceeds your old powers to drive, lock in, or otherwise control us.” When Cluetrain said “The sky is open to the stars”, it wasn’t issuing utopian palaver. It was speaking of a marketplace of buyers and sellers whose choices were wide open on both sides. [Later… Chris Locke, who wrote that line (and those that followed), offers a correction (and expansion) below.]

On Cluetrain’s 10th anniversary, we have hardly begun to explore the possibilities of truly free and open markets on the Internet. They are still inevitable, because supporting those markets is intrinsic to the Net’s essentially generative design. Lock down users, or lock one in and others out, and you compromise the wealth the Net can create for you. Simple as that.

And that wealth starts with customers.

This is also what How Facebook Could Create a Revolution, Do Good, and Make Billions, by Bernard Lunn in ReadWriteWeb, is about.

I just wrote a brief response in Gain of Facebook, on the ProjectVRM blog.

No time for more. Not because it’s the Fourth of July, or because I’m in a connectivity hole (with latencies that start at 1+ second and packet losses that start at 15% and go up from there), but because I’m at my daughter’s wedding, and I need to get ready. Cheers.

One of the best things about living in (or just following) Santa Barbara is reading Nick Welsh’s Angry Poodle Barbeque column each week in the Independent — one of the best free newsweeklies anywhere. This week’s column, El Corazón del Perro, is a classic. One sample:

For those of us without the heart to pursue our own dream, or even the imagination to have one, Jackson provides cold reassurance. If someone so rich, so famous, and so hugely adored could wind up so agonizingly wretched, maybe the moral of the story is that one’s bliss was never meant to be followed.

This, however, isn’t just another knock on the late Jacko. It’s a column about afterdeath effects in Santa Barbara County, which was home to Jackson through his Neverland years:

This past Tuesday, a coterie of key county executives from law enforcement, public works, fire protection, public health, planning, emergency response, and communications spent the better part of the day shuttling from one emergency meeting to the next, trying to figure out what was real and what to do about it. No less than five employees of the Sheriff’s Department spent their day fielding calls from media outlets around the world. Associated Press dispatched a reporter to stake out the County Administration Building all day. By 7 p.m., Tuesday, no actual communication had taken place between county government and the Jackson camp. Instead, Sheriff’s officials relied upon contacts they have with the L.A. County Sheriff’s Department for whatever vague rumors and rumblings they could get. Somehow through this opaque and osmotic chain of communication, county officials are hoping to persuade the Jackson clan to call it off, if in fact it was they who started something in the first place.

Some in the Sheriff’s Department expressed confidence that the whole thing has been an exceptionally expensive and elaborate fire drill. Personally, I like the idea that the whole thing is a big fake-out, an angry practical joke on the county that prosecuted Jackson. When Paul McCartney’s former wife, Linda McCartney, died several years ago, I remember how rumors were strategically planted that she died in Santa Barbara County. In fact, she did not. The County Coroner complained he spent so much time fielding media calls that he couldn’t get any work done. Cadavers, he said, were piling up in his coolers like firewood. Ultimately, we would discover the whole thing was an elaborate dodge so that the McCartney clan could grieve unmolested by the paparazzi. But not before Santa Barbarans — ever willing to embrace the rich and famous, even if they never lived here — held a solemn and tearful candlelight vigil at the County Courthouse’s Sunken Gardens.

Some of the worries in the piece are stale now (a Neverland funeral appears unlikely), but it’s still a good read.

Great minds discuss ideas. Average minds discuss events. Small minds discuss people. — Eleanor Roosevelt Somebody

I wish to discuss an idea here. It’s an idea about celebrity, and it follows an event that has become a black hole in nearly all media: the death of Michael Jackson.

According to Don Norman, a black hole topic is one that is essentially undiscussable: “Drop the subject into the middle of a room and it sucks everybody into a useless place from which no light can escape.”

Michael Jackson was more than a celebrity. He was a first-rank contributor to pop music and pop culture. He was also far more weird than anybody else at the same rank, changing his face so radically that he no longer appeared to belong to his original race and gender. This fact alone made his death at 50 unsurprising yet very interesting.

Most of us can’t help falling into conversational black holes. But we can help getting sucked into celebrity obsession.

Unless, of course, we’re making money at it. This is the path down which People Magazine went when it morphed from a spun-off section of Time Magazine into a tabloid. More recently Huffington Post has done the same thing. But that’s the supply side. What about demand?

I submit that obsessing about celebrity is unhealthy for the single reason that it is also unproductive. Celebrity is to mentality as smoking is to food. (I originally wrote “chewing gum” there, but I think smoking is the better analogy.) It is an unhealthy waste of time. And time is a measure of life. We are born with an unknown sum of time, and have to spend all of it. “Saving” time is a rhetorical trick. So is “losing” it. Our lives are spent, one end to the other. What matters most is how we choose to spend it.

The Net maximizes the endlessness of choice about how we spend our time. It also maximizes many kinds of productiveness. Nearly all the code we are using, right now, to do stuff on the Net, was written by many collaborators across many distances. Some were obsessing about what they were producing. Others were just working away. Either way, they chose to be productive. To contribute. To work on what works.

The Net itself is an idea so protean and varied that there is little agreement about what it actually is. Yet it is endlessly improvable, as are the goods and services it supports.

This improvable milieu presents us with choices that become more stark as the milieu itself grows. We can make useful contributions — preferably in ways nobody else can. Or we can coast.

Obsessing about celebrity is a form of coasting. And I suggest that we’ll see a growing distance between coasting and producing.

Major props to Cox for cranking up my speeds to 18Mb/s downstream and 4Mb/s upstream. That totally rocks.

I’m getting that speed now. Here’s what Cox’s local diagnostic tool says:

TCP/Web100 Network Diagnostic Tool v5.4.12
click START to begin
Connected to: speedtest.sbcox.net  —  Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . .  Done
checking for firewalls . . . . . . . . . . . . . . . . . . .  Done
running 10s outbound test (client-to-server [C2S]) . . . . . 3.79Mb/s
running 10s inbound test (server-to-client [S2C]) . . . . . . 18.04Mb/s
The slowest link in the end-to-end path is a 10 Mbps Ethernet subnet
Information: Other network traffic is congesting the link

That won’t last. The connection will degrade again, or go down completely. Here we go:

Connected to: speedtest.sbcox.net  —  Using IPv4 address
Checking for Middleboxes . . . . . . . . . . . . . . . . . .  Done
checking for firewalls . . . . . . . . . . . . . . . . . . .  Done
running 10s outbound test (client-to-server [C2S]) . . . . . 738.0kb/s
running 10s inbound test (server-to-client [S2C]) . . . . . . 15.09Mb/s
Your Workstation is connected to a Cable/DSL modem
Information: Other network traffic is congesting the link
[C2S]: Packet queuing detected

Here’s a ping test to Google.com:

PING google.com (74.125.127.100): 56 data bytes
64 bytes from 74.125.127.100: icmp_seq=0 ttl=246 time=368.432 ms
64 bytes from 74.125.127.100: icmp_seq=1 ttl=246 time=77.353 ms
64 bytes from 74.125.127.100: icmp_seq=2 ttl=247 time=323.272 ms
64 bytes from 74.125.127.100: icmp_seq=3 ttl=246 time=343.178 ms
64 bytes from 74.125.127.100: icmp_seq=4 ttl=247 time=366.341 ms
64 bytes from 74.125.127.100: icmp_seq=5 ttl=246 time=385.083 ms
64 bytes from 74.125.127.100: icmp_seq=6 ttl=246 time=406.209 ms
64 bytes from 74.125.127.100: icmp_seq=7 ttl=246 time=434.731 ms
64 bytes from 74.125.127.100: icmp_seq=8 ttl=246 time=444.653 ms
64 bytes from 74.125.127.100: icmp_seq=9 ttl=247 time=474.976 ms
64 bytes from 74.125.127.100: icmp_seq=10 ttl=247 time=472.244 ms
64 bytes from 74.125.127.100: icmp_seq=11 ttl=246 time=488.023 ms

No packet loss on that one. Not so on the next, to UCSB, which is so close I can see it from here:

PING ucsb.edu (128.111.24.40): 56 data bytes
64 bytes from 128.111.24.40: icmp_seq=0 ttl=52 time=407.920 ms
64 bytes from 128.111.24.40: icmp_seq=1 ttl=52 time=427.506 ms
64 bytes from 128.111.24.40: icmp_seq=2 ttl=52 time=441.176 ms
64 bytes from 128.111.24.40: icmp_seq=3 ttl=52 time=456.073 ms
64 bytes from 128.111.24.40: icmp_seq=4 ttl=52 time=237.366 ms
64 bytes from 128.111.24.40: icmp_seq=5 ttl=52 time=262.868 ms
64 bytes from 128.111.24.40: icmp_seq=6 ttl=52 time=287.270 ms
64 bytes from 128.111.24.40: icmp_seq=7 ttl=52 time=307.931 ms
64 bytes from 128.111.24.40: icmp_seq=8 ttl=52 time=327.951 ms
64 bytes from 128.111.24.40: icmp_seq=9 ttl=52 time=352.974 ms
64 bytes from 128.111.24.40: icmp_seq=10 ttl=52 time=376.636 ms
64 bytes from 128.111.24.40: icmp_seq=11 ttl=52 time=395.893 ms
^C
--- ucsb.edu ping statistics ---
13 packets transmitted, 12 packets received, 7% packet loss
round-trip min/avg/max/stddev = 237.366/356.797/456.073/69.322 ms

That 7% is low for UCSB at the moment, by the way. I just checked again, and got 9% and 25% packet loss. At one point (when the guy was here this afternoon), it hit 57%.

Here’s a traceroute to UCSB:

traceroute to ucsb.edu (128.111.24.40), 64 hops max, 40 byte packets
1  192.168.1.1 (192.168.1.1)  0.687 ms  0.282 ms  0.250 ms
2  ip68-6-40-1.sb.sd.cox.net (68.6.40.1)  349.599 ms  379.786 ms  387.580 ms
3  68.6.13.121 (68.6.13.121)  387.466 ms  400.991 ms  404.500 ms
4  68.6.13.133 (68.6.13.133)  415.578 ms  153.695 ms  9.473 ms
5  paltbbrj01-ge600.0.r2.pt.cox.net (68.1.2.126)  16.965 ms  18.286 ms  15.639 ms
6  te4-1--4032.tr01-lsanca01.transitrail.ne… (137.164.129.15)  19.936 ms  24.520 ms  20.952 ms
7  calren46-cust.lsanca01.transitrail.net (137.164.131.246)  26.700 ms  24.166 ms  30.651 ms
8  dc-lax-core2--lax-peer1-ge.cenic.net (137.164.46.119)  44.268 ms  98.114 ms  200.339 ms
9  dc-lax-agg2--lax-core2-ge.cenic.net (137.164.46.112)  254.442 ms  277.958 ms  273.309 ms
10  dc-ucsb--dc-lax-dc2.cenic.net (137.164.23.3)  281.735 ms  313.441 ms  306.825 ms
11  r2--r1--1.commserv.ucsb.edu (128.111.252.169)  315.500 ms  327.080 ms  344.177 ms
12  128.111.4.234 (128.111.4.234)  346.396 ms  367.244 ms  357.468 ms
13  * * *

As for modem function, I see this for upstream:

Cable Modem Upstream
Upstream Lock : Locked
Upstream Channel ID : 11
Upstream Frequency : 23600000 Hz
Upstream Modulation : QAM16
Upstream Symbol Rate : 2560 Ksym/sec
Upstream transmit Power Level : 38.5 dBmV
Upstream Mini-Slot Size : 2

… and this for downstream:

Cable Modem Downstream
Downstream Lock : Locked
Downstream Channel Id : 1
Downstream Frequency : 651000000 Hz
Downstream Modulation : QAM256
Downstream Symbol Rate : 5360.537 Ksym/sec
Downstream Interleave Depth : taps32Increment4
Downstream Receive Power Level : 5.4 dBmV
Downstream SNR : 38.7 dB
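
For what they’re worth, those readings look roughly healthy by the rule-of-thumb DOCSIS ranges techs tend to cite (these are common guidelines I’ve seen quoted, not Cox’s official spec): upstream transmit power somewhere around 35 to 50 dBmV, downstream receive power between about -10 and +10 dBmV, and downstream SNR of 33 dB or better for QAM256. A trivial sketch that checks the numbers above against those guesses:

#!/bin/sh
# Sanity-check cable-modem readings against commonly cited rule-of-thumb
# ranges. The thresholds are guidelines only, not an official spec.
UP_TX=38.5     # upstream transmit power, dBmV
DOWN_RX=5.4    # downstream receive power, dBmV
DOWN_SNR=38.7  # downstream SNR, dB
awk -v up="$UP_TX" -v rx="$DOWN_RX" -v snr="$DOWN_SNR" 'BEGIN {
  if (up >= 35 && up <= 50) print "upstream TX power: looks OK"; else print "upstream TX power: outside typical range"
  if (rx >= -10 && rx <= 10) print "downstream RX power: looks OK"; else print "downstream RX power: outside typical range"
  if (snr >= 33) print "downstream SNR: looks OK"; else print "downstream SNR: low for QAM256"
}'

If the modem’s numbers really are fine, that’s one more reason to think the trouble is somewhere upstream of my house.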

The symptoms are what they were when I first blogged the problem on June 21, and again when I posted a follow-up on June 24. That was when the Cox service guy tightened everything up and all seemed well … until he left. When I called to report that the problem wasn’t solved, Cox said they would send a “senior technician” on Friday. A guy came today. The problems were exactly as we see above. He said he would have to come back with a “senior technician” (or whatever they call them — I might be a bit off on the title), which this dude clearly wasn’t. He wanted the two of them to come a week from next Wednesday. We’re gone next week anyway, but I got him to commit to a week from Monday. That’s July 6, in the morning. The problem has been with us at least since the 18th, when I arrived here from Boston.

This evening we got a call from a Cox survey robot, following up on the failed service visit this afternoon. My wife took the call. After she indicated our dissatisfaction with the visit (by pressing the appropriate numbers in answer to a series of questions), the robot said we should hold to talk to a human. Then it wanted our ten-digit Cox account number. My wife didn’t know it, so the robot said the call couldn’t be completed. And that was that.

I doubt another visit from anybody will solve the problem, because I don’t think the problem is here. I think it’s in Cox’s system. I think that’s what the traceroute shows.

But I don’t know.

I do know that this is inexcusably bad customer service.

For Cox, in case they’re reading this…

  • I am connected directly to the cable modem. No routers, firewalls or other things between my laptop and the modem.
  • I have rebooted the modem about a hundred times. I have re-started my computers. In fact I have tested the link with three different laptops. Same results. Re-booting sometimes helps, sometimes not.
  • Please quit trying to fix this only at my end of the network. The network includes far more than me and my cable modem.
  • Please make it easier to reach technically knowledgeable human beings.
  • Make your chat system useful. At one point the chat person gave me Linksys’ number to call.
  • Thanks for your time and attention.

This Twitter post, from @KNX1070 four minutes ago, says Michael Jackson is dead. Google News’ latest, from Fox, says he’s being rushed to the hospital. Here’s the latest Google search, as of 3:42pm Pacific:

[Screenshot: Google search results for “michael jackson”]

A snapshot in time, already changed. (FWIW, the KNX item came up the first time I searched, but not this time. The System That Isn’t, isn’t perfect.) The Twitter results up top are courtesy of a Greasemonkey script.

It is here that we see manifest the split between the Live Web and the Static Web.

I’ve been writing and talking about this split since my son Allen first mentioned the term in 2003.* He saw the World Live Web then as an absence, as unstarted business. Google searched the Static Web of sites and domains that were architected, designed and built like real estate projects. The Live Web would be more alive and human. In it machines wouldn’t answer your questions now. People would.

Now the Live Web is here, big-time. Or, as current parlance would have it, real-time.

I still prefer “live”. Can you imagine if NBC had called its top weekend show “Saturday Night Real-Time”? Or if they announced, “Real time, from New York…”?

Live is better.

If Michael Jackson were still with us, I’m sure he’d agree.

* Here’s the same link: http://www.google.com/search?hl=en&q… . I’m not sure why, but WordPress isn’t letting me get that link in there. I post the html, find no links in the results, and then when editing find the linked term flanked by a partial tag: the letter “a” in angle brackets, sans the slash that closes a link. Not sure what’s up with that. Maybe my tortuously broken connection. Anyway, I have more to add, but won’t bother. Plenty of other reading on the Web anyway. Rock on.

The idea was to take some down time in Santa Barbara and get work done in my own nice office, with my nice comfortable chair, surrounded by space and time, with soft sea breezes blowing through.

Instead it’s been tech crash city since I got here last Thursday. (Except for getting out to the Live Oak Festival. That rocked. Also, trees, dirt and great music tend not to crash.)

First a system upgrade hosed a beloved old mail program. So far I can’t get the archives to migrate anywhere. I can still get email addressed to my searls.com and Gmail accounts, but not to my Harvard.edu account. I can send from Gmail. But balls are being dropped and lost all over the place.

Next my Internet connection through Cox got flaky. Mostly it’s bad. Details in my last post. A Cox repair guy finally came today. And, as Russ predicted, tightened everything up, tested it out, and all was fine. Dig this: I didn’t know that service had improved to 18Mb/s downstream and close to 4Mb/s upstream. It was right up there when he left, along with two-digit ping times to everything.

That was then. Soon as he left, we were back to bad. We’re at 3-digit ping times and packet losses. One other discovery: my 8-port Netgear Firewall/Router/Hub/Switch (I forget the name, which cannot be remembered — it demonstrates the opposite of branding) has Issues too. It introduces latencies and packet losses of its own when it’s in the loop. It’s out right now, not that it makes any difference. I’m back using my Sprint data card.
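
A quick way to convict or exonerate the Netgear box, sketched with nothing but the stock ping command: run the same quiet ping twice, once with the laptop plugged straight into the cable modem and once through the router, and compare the summary lines. (68.6.40.1 is just the first Cox hop from the traceroutes above; any steady target would do.)

# Run once wired directly to the cable modem, then again through the
# Netgear box; compare min/avg/max latency and the packet-loss line.
ping -q -c 100 68.6.40.1

If the numbers are bad in both setups, the router is off the hook and the problem stays with the line.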

When I called Cox to get them to come back and finish the job, they said they’d send a senior tech on Friday afternoon. That’s two days from now. Then, in the middle of a tech support call with Apple, a Cox robot made an automated survey call. I couldn’t talk and hung up on it.

If you want to reach me, text or call. Or use a Twitter DM. Meanwhile, I’m going to take a shower and go for a long walk. Or vice versa.

Hope everybody’s enjoying Reboot. I really miss being there.

Starting a few days ago, nothing outside my house on the Net has been closer than about 300 milliseconds. Even UCSB.edu, which I can see from here, is usually no more than 30 ms away on a ping test. Here’s the latest:

PING ucsb.edu (128.111.24.40): 56 data bytes
64 bytes from 128.111.24.40: icmp_seq=0 ttl=52 time=357.023 ms
64 bytes from 128.111.24.40: icmp_seq=1 ttl=52 time=369.475 ms
64 bytes from 128.111.24.40: icmp_seq=2 ttl=52 time=389.372 ms
64 bytes from 128.111.24.40: icmp_seq=3 ttl=52 time=414.025 ms
64 bytes from 128.111.24.40: icmp_seq=4 ttl=52 time=428.384 ms
64 bytes from 128.111.24.40: icmp_seq=5 ttl=52 time=28.120 ms
64 bytes from 128.111.24.40: icmp_seq=6 ttl=52 time=164.643 ms
64 bytes from 128.111.24.40: icmp_seq=7 ttl=52 time=292.241 ms
64 bytes from 128.111.24.40: icmp_seq=8 ttl=52 time=332.866 ms
64 bytes from 128.111.24.40: icmp_seq=9 ttl=52 time=330.573 ms
64 bytes from 128.111.24.40: icmp_seq=10 ttl=52 time=369.385 ms
64 bytes from 128.111.24.40: icmp_seq=11 ttl=52 time=375.593 ms
64 bytes from 128.111.24.40: icmp_seq=12 ttl=52 time=405.028 ms
64 bytes from 128.111.24.40: icmp_seq=13 ttl=52 time=413.990 ms
64 bytes from 128.111.24.40: icmp_seq=14 ttl=52 time=437.124 ms

It’s been this way for days. I can’t get a human at Cox, our carrier, so I thought I’d ask the tech folks among y’all for a little diagnostic help.

Here is a traceroute:

traceroute to ucsb.edu (128.111.24.40), 64 hops max, 40 byte packets
1  ip68-6-68-81.sb.sd.cox.net (68.6.68.81)  5.828 ms  3.061 ms  2.840 ms
2  ip68-6-68-1.sb.sd.cox.net (68.6.68.1)  324.824 ms  352.686 ms  358.682 ms
3  68.6.13.121 (68.6.13.121)  359.635 ms  369.743 ms  372.376 ms
4  68.6.13.133 (68.6.13.133)  386.039 ms  389.809 ms  415.532 ms
5  paltbbrj01-ge600.0.r2.pt.cox.net (68.1.2.126)  430.554 ms  447.079 ms  423.461 ms
6  te4-1--4032.tr01-lsanca01.transitrail.ne… (137.164.129.15)  464.229 ms  453.908 ms  423.090 ms
7  calren46-cust.lsanca01.transitrail.net (137.164.131.246)  206.217 ms  251.298 ms  261.263 ms
8  dc-lax-core1--lax-peer1-ge.cenic.net (137.164.46.117)  264.824 ms  284.859 ms  285.808 ms
9  dc-lax-agg2--lax-core1-ge.cenic.net (137.164.46.110)  289.834 ms  307.450 ms  313.997 ms
10  dc-ucsb--dc-lax-dc2.cenic.net (137.164.23.3)  323.183 ms  331.668 ms  345.606 ms
11  r2--r1--1.commserv.ucsb.edu (128.111.252.169)  340.756 ms  377.695 ms  375.946 ms
12  128.111.4.234 (128.111.4.234)  365.500 ms  397.311 ms  393.919 ms

Looks to me like the problem shows up at the second hop. Any guesses as to what that is? Yes, I’ve rebooted the cable modem, many times. No difference.
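
One way to show exactly which hop the latency piles onto is a per-hop report that combines ping and traceroute. This is a sketch assuming mtr is installed (it isn’t by default on the Mac; MacPorts or the like can supply it):

# Send 100 probes toward UCSB and print per-hop loss and latency stats;
# the hop where loss and latency first jump is the likely trouble spot.
mtr --report --report-cycles 100 128.111.24.40

A report like that would also be a concrete thing to hand Cox at the counter.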

I’m talking now over my Sprint data card. EVDO over the cell system. Latencies run around 70-90 ms. So the problem is clearly one with Cox, methinks.

I’m only home from the Live Oak Festival for a shower, so I’ll be leaving again in a few minutes and won’t get around to dealing with this (or anything) until tomorrow. Just wanted to get the question out there to the LazyWeb in the meantime. If the problem really is Cox’s, I’d like to know what to tell them when I go down to their office. No use calling on the phone. Too many robots.

Happy solstice, everybody. And thanks!

For reasons I don’t have time to trouble-shoot, there is too much latency between my house and Cox, my Internet provider here in Santa Barbara.

On top of that, re-setting my SMTP (outbound email) server to smtp.west.cox.net, which has always worked in the past, doesn’t work this time. So mail isn’t going out. I don’t have time to trouble-shoot that either, because I’m already late for the Live Oak Festival, where we already have a tent set up. I’m just back at the house picking up some stuff.
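
A quick check of whether Cox’s mail server is even answering (as opposed to mail dying from the same latency problem) is to poke port 25 by hand; a sketch:

# Test whether Cox's outbound mail server answers on port 25.
# A healthy server prints a "220 ..." greeting within a few seconds; a long
# hang or timeout points at the connection rather than the mail settings.
# (Type QUIT to close the session.)
telnet smtp.west.cox.net 25

No time for that now, though.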

See y’all Monday.
