“What does a scanner see? he asked himself. I mean, really see? Into the head? Down into the heart? Does a passive infrared scanner like they used to use or a cube-type holo-scanner like they use these days, the latest thing, see into me – into us – clearly or darkly? I hope it does, he thought, see clearly, because I can’t any longer these days see into myself. I see only murk. Murk outside; murk inside. I hope, for everyone’s sake, the scanners do better. Because, he thought, if the scanner sees only darkly, the way I myself do, then we are cursed, cursed again and like we have been continually, and we’ll wind up dead this way, knowing very little and getting that little fragment wrong too.”

A sign of a good New Year’s Eve party

Belltown Onions

If any of you lost your bag of onions on New Year’s Eve, it was last seen lying on a sidewalk in Belltown, surrounded by bewildered drunk people.

Dan Carlin on the Great Filter

(Video: “Fermi Paradox” by versa, on Vimeo.)

“And those grand halls of correction, well I think that they’re here to stay”

Frank Black is a god damned national treasure.

“We are all we have tonight”

“Oh the dreamers may die, but the dream lives on”

Man, no one does over-the-top, magisterial prog metal quite like Iron Maiden. First album in five years, sixteenth studio album, and it features an 18-minute-long epic about a doomed steampunk airship. It’d be a joke if it weren’t so damned well done.

“An elegant memorial”

Leaving our umbrella behind, we picked up the switch panel and marched to the end of the dead-end bridge that jutted out into the water. The reservoir had been created by damming a river: its banks followed an unnatural curve, the water lapping halfway up the mountainside. The color of the water suggested an eerie depth. Falling drops made fine ripples on the surface.

One of the twins took the switch panel from the paper bag and handed it to me. In the rain it looked even more pathetic than usual.

“Now say a prayer,” one of the twins said.

“A prayer?” I cried in surprise.

“It’s a funeral. There’s got to be a prayer.”

“But I’m not ready,” I said. “I don’t know any prayers by heart.”

“Any old prayer is all right,” one said.

“It’s just a formality,” added the other.

I stood there, soaked from head to toenails, searching for something appropriate to say. The twins’ eyes traveled back and forth between the switch panel and me. They were obviously worried.

“The obligation of philosophy,” I began, quoting Kant, “is to dispel all illusions borne of misunderstanding. . . . Rest in peace, ye switch panel, at the bottom of this reservoir.”

“Now throw it in.”

“Huh?”

“The switch panel!”

I drew my right arm all the way back and hurled the switch panel at a forty-five-degree angle into the air as hard as I could. It described a perfect arc as it flew through the rain, landing with a splash on the water’s surface. The ripples spread slowly until they reached our feet.

“What a beautiful prayer!”

“Did you make it up yourself?”

“You bet,” I said.

The three of us huddled together like dripping dogs, looking out over the reservoir.

“How deep is it?” one asked.

“Really, really deep,” I answered.

“Do you think there are fish?” asked the other.

“Ponds always have fish.”

Seen from a distance, the three of us must have looked like an elegant memorial.

I just finished reading Haruki Murakami’s Wind/Pinball, an English translation of his first two (very short) novels. It’s mostly of interest to existing Murakami fans. Along with its prologue, it serves as a sort of superhero origin story for his writing. If you’re not already a fan, you’re much better off picking up a copy of his short stories or his 1Q84 trilogy. Murakami clearly isn’t Murakami yet here, but you can still find masterfully rendered scenes like the one above. It’s clear that the seeds of his gorgeously weird characters were present from the start, but it’s equally clear that he needed a few decades of practice to render them with the required clarity.

Highly recommended for people who already buy into Murakami’s style of weird, sparse, ambiguously plotted adventures of beautiful monsters. For everyone else, probably better to start elsewhere in his work.

“…the mighty roar of the robot guns.”

The Protomen covering Iron Maiden’s “The Trooper” as only they can.

“Hell below me, stars above” – On evolution and Alexander on AI Risk

“If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.

“Worse, the reasons we humans aren’t more intelligent are really stupid. Like, even people who find the idea abhorrent agree that selectively breeding humans for intelligence would work in some limited sense. Find all the smartest people, make them marry each other for a couple of generations, and you’d get some really smart great-grandchildren. But think about how weird this is! Breeding smart people isn’t doing work, per se. It’s not inventing complex new brain lobes. If you want to get all anthropomorphic about it, you’re just “telling” evolution that intelligence is something it should be selecting for. Heck, that’s all that the African savannah was doing too – the difference between chimps and humans isn’t some brilliant new molecular mechanism, it’s just sticking chimps in an environment where intelligence was selected for so that evolution was incentivized to pull out a few stupid hacks. The hacks seem to be things like “bigger brain size” (did you know that both among species and among individual humans, brain size correlates pretty robustly with intelligence, and that one reason we’re not smarter may be that it’s too annoying to have to squeeze a bigger brain through the birth canal?) If you believe in Greg Cochran’s Ashkenazi IQ hypothesis, just having a culture that valued intelligence on the marriage market was enough to boost IQ 15 points in a couple of centuries, and this is exactly the sort of thing you should expect in a world like ours where intelligence increases are stupidly easy to come by.”

I was reading Scott Alexander on AI risk this morning, and the above passage jumped out at me. It relates to my own personal transhumanist obsession: aging. The reason we don’t live longer is predominantly that there has never been evolutionary pressure to live longer. (See Robert Heinlein’s Lazarus Long for a fictionalized, overly rosy view of the implications.) But we know that lifespan is partially genetic and that, ceteris paribus, people with longer-lived parents and grandparents will themselves live longer.
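Alexander’s “15 points in a couple of centuries” claim, like the heritability point above, is breeder’s-equation arithmetic: the response to selection per generation, R, equals narrow-sense heritability h² times the selection differential S. Here’s a back-of-envelope sketch; the specific numbers (h² ≈ 0.5, a 3-point selection differential, 25-year generations) are my own illustrative assumptions, not figures from Alexander or Cochran.

```python
# Back-of-envelope breeder's equation: R = h^2 * S.
#   R  = response to selection per generation
#   h2 = narrow-sense heritability of the trait
#   S  = selection differential (how far the parents of the next
#        generation sit above the current population mean)
# All numbers below are illustrative assumptions, not sourced figures.

H2 = 0.5        # assumed narrow-sense heritability of IQ
S = 3.0         # assumed selection differential, in IQ points
GEN_YEARS = 25  # assumed length of a human generation, in years

def cumulative_gain(years: int) -> float:
    """Total trait gain after `years` of sustained selection."""
    generations = years / GEN_YEARS
    return generations * H2 * S

for years in (100, 200, 300):
    print(f"{years} years (~{years // GEN_YEARS} generations): "
          f"+{cumulative_gain(years):.1f} IQ points")
```

On those assumptions you get roughly 12–18 points over two or three centuries, which is at least in the right ballpark for the quoted claim; the same arithmetic is why even weak, sustained selection on lifespan could move it measurably over a similar stretch.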

The reason we aren’t all Jeanne Calment is that most of our evolution took place in an era when, once your grandkids were hunting on the savannah, you’d done your part and were unlikely to pass on long-lived, long-health-span genes any more effectively than someone who died at 60.

But it’s interesting to see that that might be changing. People are having kids later and staying healthy longer (thanks largely to environmental factors, yes), and it could be that the ability to stay active later in life will push our natural lifespans out a few years. Even if you’re not having kids into your 50s, as an increasing number of people are, you’re still contributing to society and providing resources and support for the next generation. Population-level effects can sometimes be just as strong as individual genetic effects.

So it’ll be interesting to see if lifespan absent medical intervention starts to creep upwards over the next few decades.

Of course, all of that assumes that something like SENS doesn’t take off and just sort this aging thing out for good. In which case, I will be perfectly content never knowing if I’m right about the evolution of aging.

As for Alexander’s essay, it’s an interesting take on the question. I like his point that the AI risk model really matters in the hard-start singularity scenario. Tech folks tend to default to open in open-vs.-closed debates, but Alexander makes the most compelling argument for closed, careful, precautionary AI I’ve seen yet. I’m not entirely convinced, though, for a few reasons.

One is my doubt about the hard-start singularity (or even just AI) scenario. I just don’t find it plausible that creating a roughly human-level machine intelligence implies that it could easily overcome the barriers to higher levels of intelligence. The only reason we can’t see the (I suspect many, serious) barriers to drastically super-human intelligence is that we haven’t run into them yet. What if we create a six-sigma-smart human-like AI and it suddenly runs into a variety of nearly insurmountable problems? This is where thinking about IQ as an undifferentiated scale can really be problematic. There are certain intellectual operations that are qualitatively different, not just a reflection of IQ, and we don’t yet know what some of those are as you start surpassing human intelligence. (Think, e.g., of a super-human intelligence that, for whatever reason, lacked object permanence.)

Second, I think there are bound to be strong anthropic effects on AI as it’s being developed. This cuts off a lot of the scenarios that particularly worry AI risk writers (e.g. paperclip optimizers and such). Simply put: if there’s no reason that an AI researcher would ever build it, we’re probably better off excluding it from our threat model.

Finally, the dichotomous structure of Alexander’s Dr. Good vs. Dr. Amoral argument misses a lot of important nuance. I work in security, and I constantly see smart people overlook really easy-to-exploit vulnerabilities. AI is going to be heinously complex, and any view of it as a monolithic, perfectly operational super-brain misses the reality of complex systems. Hell, evolution is the most careful, brutally efficient design mechanism we know of, and yet human intelligence still has a number of serious unpatched vulnerabilities.

This can be taken in a couple of ways. The hopeful one is that good people outweigh bad, and good uses of AI will outweigh bad, so we’ll probably end up with some uneasy detente as with the modern Internet security ecosystem.

The cynical one is that we’re as likely to be killed by a buffer overflow vuln in Dr. Good’s benevolent AI as we are by Dr. Amoral’s rampaging paperclip optimizer of death.

Welcome to the future!

“It’s warm and golden like an oven that’s wide open”
