“Hell below me, stars above” – On evolution and Alexander on AI Risk

“If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.

Worse, the reasons we humans aren’t more intelligent are really stupid. Like, even people who find the idea abhorrent agree that selectively breeding humans for intelligence would work in some limited sense. Find all the smartest people, make them marry each other for a couple of generations, and you’d get some really smart great-grandchildren. But think about how weird this is! Breeding smart people isn’t doing work, per se. It’s not inventing complex new brain lobes. If you want to get all anthropomorphic about it, you’re just “telling” evolution that intelligence is something it should be selecting for. Heck, that’s all that the African savannah was doing too – the difference between chimps and humans isn’t some brilliant new molecular mechanism, it’s just sticking chimps in an environment where intelligence was selected for so that evolution was incentivized to pull out a few stupid hacks. The hacks seem to be things like “bigger brain size” (did you know that both among species and among individual humans, brain size correlates pretty robustly with intelligence, and that one reason we’re not smarter may be that it’s too annoying to have to squeeze a bigger brain through the birth canal?) If you believe in Greg Cochran’s Ashkenazi IQ hypothesis, just having a culture that valued intelligence on the marriage market was enough to boost IQ 15 points in a couple of centuries, and this is exactly the sort of thing you should expect in a world like ours where intelligence increases are stupidly easy to come by.”

I was reading Scott Alexander on AI risk this morning, and the above passage jumped out at me. It relates to my own personal transhumanist obsession: aging. The reason we don’t live longer is predominantly that there has never been evolutionary pressure to live longer. (See Robert Heinlein’s Lazarus Long for a fictionalized, overly rosy view of the implications.) But we know that lifespan is partially genetic and that, ceteris paribus, people with longer-lived parents and grandparents will themselves live longer.

The reason we aren’t all Jeanne Calment is that most of our evolution took place in an era when, once your grandkids were hunting on the savannah, you’d done your part, and you were unlikely to pass on long-lived, long-health-span genes any more effectively than someone who died at 60.

But it’s interesting to see that that might be changing. People are having kids later and living and staying healthy longer (thanks largely to environmental factors, yes), and it could be that one’s ability to stay active later in life might push out our natural lifespans a few years. Even if you’re not having kids into your 50s, as an increasing number of people are, you’re still contributing to society and providing resources and support for the next generation. Population effects can sometimes be just as strong as individual genetic effects.

So it’ll be interesting to see if lifespan absent medical intervention starts to creep upwards over the next few decades.

Of course, all of that assumes that something like SENS doesn’t take off and just sort this aging thing out for good. In which case, I will be perfectly content never finding out whether I’m right about the evolution of aging.

As for Alexander’s essay, it’s an interesting take on the question. I like his point that the AI risk model really matters in the hard-start singularity scenario. Tech folks tend to default to open in open-vs-closed debates, but Alexander makes the most compelling argument for closed, careful, precautionary AI I’ve seen yet. I’m not entirely convinced, though, for a few reasons.

One is my doubt in a hard-start singularity (or even just AI) scenario. I just don’t find it plausible that creating a roughly human machine intelligence somehow implies that it could easily overcome the barriers to higher levels of intelligence. The only reason we can’t see the (I suspect many, serious) barriers to drastically superhuman intelligence is that we haven’t run into them yet. What if we create a six-sigma-smart human-like AI and it suddenly runs into a variety of nearly insurmountable problems? This is where thinking about IQ as an undifferentiated scale can really be problematic. There are certain intellectual operations that are qualitatively different, not just a reflection of IQ, and we don’t yet know which of those appear as you start surpassing human intelligence. (Think, e.g., of a superhuman intelligence that, for whatever reason, lacked object permanence.)

Second, I think there are bound to be strong anthropic effects on AI as it’s being developed. This cuts off a lot of the scenarios that particularly worry AI risk writers (e.g. paperclip optimizers and such). Simply put: if there’s no reason that an AI researcher would ever build it, we’re probably better off excluding it from our threat model.

Finally, the dichotomous structure of Alexander’s Dr. Good vs. Dr. Amoral argument misses a lot of important nuance. I work in security, and I constantly see instances where smart people overlook really easy-to-exploit vulnerabilities. AI is going to be heinously complex, and any view of it as a monolithic, perfectly operational super-brain misses the reality of complex systems. Hell, evolution is the most careful, brutally efficient design mechanism we know of, and yet human intelligence still has a number of serious unpatched vulnerabilities.

This can be taken in a couple of ways. The hopeful one is that good people outweigh bad, and good uses of AI will outweigh bad, so we’ll probably end up with some uneasy detente as with the modern Internet security ecosystem.

The cynical one is that we’re as likely to be killed by a buffer overflow vuln in Dr. Good’s benevolent AI as we are by Dr. Amoral’s rampaging paperclip optimizer of death.

Welcome to the future!

“It’s warm and golden like an oven that’s wide open”

“…the psychopaths we fear inside ourselves.”

The arch of a highway rose up in the distance to the west. The sky was a shiny orange which turned into a deep blue above the crest of the highway. This highway stood over the sun. It was clear and empty. It was quiet of the ghosts of activity. As I regarded it, traffic slid onto and over it and filled it up to a crawl. In the middle of this queue of vehicles was a truck carrying a multitude of cars, probably to a car dealership. Above the sun, a structure heavy with cars supported a car heavy with cars. The car-carrying truck stopped in the middle of the highway and stood in the front of my imagination for half of my walk. Then the traffic picked up into a flow for an instant. A great tree some tens of meters in front of me blocked my view of the right edge of the bridge. The tree ate the truck with a vicious slowness, and I was again as alone as we all always are, with my sorrow, my sadness, my iPhone, and many meandering anecdotes we own about the psychopaths we fear inside ourselves.

I took my phone out of my pocket. I took a picture of the bridge.

“She caves”

As good as Bright Eyes was and is, I can’t escape the creeping notion that Mike Mogis was too good for the band by half.

“Boom boom boom boom”

John Lee Hooker covered on the gayageum. Damn I love the Internet sometimes.

“…Didn’t help.”

Some thoughts on “The Mercy” by Philip Levine, and on 2015.11.13

The ship that took my mother to Ellis Island
eighty-three years ago was named “The Mercy.”
She remembers trying to eat a banana
without first peeling it and seeing her first orange
in the hands of a young Scot, a seaman
who gave her a bite and wiped her mouth for her
with a red bandana and taught her the word,
“orange,” saying it patiently over and over.
A long autumn voyage, the days darkening
with the black waters calming as night came on,
then nothing as far as her eyes could see and space
without limit rushing off to the corners
of creation. She prayed in Russian and Yiddish
to find her family in New York, prayers
unheard or misunderstood or perhaps ignored
by all the powers that swept the waves of darkness
before she woke, that kept “The Mercy” afloat
while smallpox raged among the passengers
and crew until the dead were buried at sea
with strange prayers in a tongue she could not fathom.
“The Mercy,” I read on the yellowing pages of a book
I located in a windowless room of the library
on 42nd Street, sat thirty-one days
offshore in quarantine before the passengers
disembarked. There a story ends. Other ships
arrived, “Tancred” out of Glasgow, “The Neptune”
registered as Danish, “Umberto IV,”
the list goes on for pages, November gives
way to winter, the sea pounds this alien shore.
Italian miners from Piemonte dig
under towns in western Pennsylvania
only to rediscover the same nightmare
they left at home. A nine-year-old girl travels
all night by train with one suitcase and an orange.
She learns that mercy is something you can eat
again and again while the juice spills over
your chin, you can wipe it away with the back
of your hands and you can never get enough.

This has been on my mind today, as I’ve been reading too much about the blinkered reactions to the horrible events in Paris. Despite the rhetoric from many of our politicians, I don’t think this is a war that can be won by bombs or guns. Invading Syria and Iraq may damage Daesh, but it won’t change the fact of the death cult that they and others have been cultivating for decades. After all, it’s looking increasingly like all of the attackers in Paris were EU citizens. Most of the 9/11 attackers were Saudi.

And for the sake of a horror inflicted by a few, we would magnify that horror by shutting out the poor, starving, and war-weary. That won’t win this war either. A starving, stateless mass, bombed out of their homes and rejected by the free peoples of the world is a loss in and of itself. Not to mention the coup it provides Daesh’s propagandists.

Bombs won’t win this war, because the enemy isn’t attacking us from Syria or Iraq. They’re attacking us from Belgium and Saudi Arabia. Keeping out refugees won’t help, because Daesh is recruiting from our societies directly.

So this is not a war of bombs, and it’s not a war of migration.

Everything I see tells me that this is a war of ideas and culture. Perhaps of a kind not fought before in human history. This is the first war of a world where communications are instant and ubiquitous. And the only way to win it is to prove that free, open societies are better than the closed, fearful, backward one that Daesh wants to create.

And we don’t prove that by turning our backs on the refugees of Daesh’s horror. Quite the opposite. We win this war by offering a hand to those that Daesh turned out. We make sure they know that where Daesh destroyed their homes and livelihoods, we will gladly let them build new homes as our neighbors. So that the whole world can see that free societies are immune to the evil perpetrated by monsters like Daesh.

We win by showing that terrorists can kill people. People whom we will mourn, and avenge with careful, decisive, swift action. But afterward, our societies will be just as free and open as they were before. And ultimately the provocateurs will have accomplished nothing.

All the while we welcome with open arms those fleeing the horror that Daesh is creating, and help them to build their new lives in a free country.

I suppose I can’t guarantee that that’s what wins this war. But I think it’s got a much better chance than bombs. And if we give up our liberty and compassion, I’m not sure how much winning the war would even matter.

A spirited defence (sic) of English spelling

In the art of rationality there is a discipline of closeness-to-the-issue: trying to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.

The Wright Brothers say, “My plane will fly.” If you look at their authority (bicycle mechanics who happen to be excellent amateur physicists) then you will compare their authority to, say, Lord Kelvin, and you will find that Lord Kelvin is the greater authority.

If you demand to see the Wright Brothers’ calculations, and you can follow them, and you demand to see Lord Kelvin’s calculations (he probably doesn’t have any apart from his own incredulity), then authority becomes much less relevant.

If you actually watch the plane fly, the calculations themselves become moot for many purposes, and Kelvin’s authority is not even worth considering.

The more directly your arguments bear on a question, without intermediate inferences (the closer the observed nodes are to the queried node, in the Great Web of Causality), the more powerful the evidence. It’s a theorem of these causal graphs that you can never get more information from distant nodes than from strictly closer nodes that screen off the distant ones.
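That theorem is essentially the data-processing inequality for Markov chains, and you can check it numerically. A minimal sketch in Python, where the chain A → B → C and all its probability values are invented purely for illustration: the information about A carried by the distant node C can never exceed what the closer node B, which screens C off, already provides.

```python
import itertools
import math

# Toy Markov chain A -> B -> C with binary variables; all the
# probability values below are made-up illustration numbers.
p_a = {0: 0.5, 1: 0.5}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.4, 1: 0.6}}

# Joint distribution p(a, b, c) = p(a) * p(b|a) * p(c|b).
joint = {
    (a, b, c): p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]
    for a, b, c in itertools.product([0, 1], repeat=3)
}

def marginal(i, j):
    """Marginalize the joint down to the pair of variable indices (i, j)."""
    out = {}
    for abc, p in joint.items():
        key = (abc[i], abc[j])
        out[key] = out.get(key, 0.0) + p
    return out

def mutual_info(pxy):
    """I(X;Y) in bits, from a dict {(x, y): p(x, y)}."""
    px, py = {}, {}
    for (x, y), p in pxy.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0)

i_ab = mutual_info(marginal(0, 1))  # info about A from the closer node B
i_ac = mutual_info(marginal(0, 2))  # info about A from the distant node C

# Data-processing inequality: the distant node never tells you more.
print(f"I(A;B) = {i_ab:.4f} bits, I(A;C) = {i_ac:.4f} bits")
assert i_ac <= i_ab + 1e-12
```

Swap in any conditional probabilities you like; as long as C depends on A only through B, the final assertion holds.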

Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.”

Just as it is superior to argue physics than credentials, it is also superior to argue physics than rationality. Who was more rational, the Wright Brothers or Lord Kelvin? If we can check their calculations, we don’t have to care! The virtue of a rationalist cannot directly cause a plane to fly.

“I still sleep on the right side”

One of these days I will just embrace my super fandom and turn this into a full-time Silversun Pickups blog.

But today cannot be that day, because, to my great sadness and shame, I managed to miss them performing this amazing acoustic set at Sun Liquor here in Seattle last month. Definitely worth listening to the whole thing, but if you’re the impatient type, you should at least jump to 27:26 and listen to them tear through “Lazy Eye”.


Magic Blue Smoke

House Rules:

1.) Carry out your own dead.
2.) No opium smoking in the elevators.
3.) In Competitions, during gunfire or while bombs are falling, players may take cover without penalty for ceasing play.
4.) A player whose stroke is affected by the simultaneous explosion of a bomb may play another ball from the same place.
4a.) Penalty one stroke.
5.) Pilsner should be in Roman type, and begin with a capital.
6.) Keep Calm and Kill It with Fire.
7.) Spammers will be fed to the Crabipede.