Archive for the ‘Philosophy’ Category

Does this look dangerous to you?

Remember: the fascist has no problem being punched. It’s being laughed at that he finds intolerable.

Many…were outraged. One Saudi citizen, Hassan al-Ghamdi, tweeted: “The director offends the Muslim women in our country. Where are our preachers to deny this?”

Another man identified as Majid called it “cheap and extremely inappropriate”. A third said it was “disgusting”….

Under Saudi Arabia’s strict interpretation of Sharia, women are barred from obtaining driving licences and those who flout the ban are jailed. All female citizens are subject to a crushing guardianship system that obliges them to seek permission from male relatives to do everything from opening a bank account to travelling.


Churchill on Habit, Hobby, and Change

“Change is the master key. A man can wear out a particular part of his mind by continually using it and tiring it, just in the same way as he can wear out the elbows of his coat…[O]ne cannot mend the frayed elbows of a coat by rubbing the sleeves or shoulders; but the tired parts of the mind can be rested and strengthened not merely by rest, but by using other parts…

To be really happy and really safe, one ought to have at least two or three hobbies, and they must all be real.” – Winston Churchill, “Hobbies”, Pall Mall, 1925

“If we start right out by asking ‘What is bias?,’ it comes at the question in the wrong order. As the proverb goes, ‘There are forty kinds of lunacy but only one kind of common sense.’ The truth is a narrow target, a small region of configuration space to hit. ‘She loves me, she loves me not’ may be a binary question, but E = mc^2 is a tiny dot in the space of all equations, like a winning lottery ticket in the space of all lottery tickets. Error is not an exceptional condition; it is success that is a priori so improbable that it requires an explanation.” – Eliezer Yudkowsky, Rationality: From AI to Zombies

Dan Carlin on the Great Filter

Fermi Paradox from versa on Vimeo.

“Hell below me, stars above” – On evolution and Alexander on AI Risk

“If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.

Worse, the reasons we humans aren’t more intelligent are really stupid. Like, even people who find the idea abhorrent agree that selectively breeding humans for intelligence would work in some limited sense. Find all the smartest people, make them marry each other for a couple of generations, and you’d get some really smart great-grandchildren. But think about how weird this is! Breeding smart people isn’t doing work, per se. It’s not inventing complex new brain lobes. If you want to get all anthropomorphic about it, you’re just “telling” evolution that intelligence is something it should be selecting for. Heck, that’s all that the African savannah was doing too – the difference between chimps and humans isn’t some brilliant new molecular mechanism, it’s just sticking chimps in an environment where intelligence was selected for so that evolution was incentivized to pull out a few stupid hacks. The hacks seem to be things like “bigger brain size” (did you know that both among species and among individual humans, brain size correlates pretty robustly with intelligence, and that one reason we’re not smarter may be that it’s too annoying to have to squeeze a bigger brain through the birth canal?) If you believe in Greg Cochran’s Ashkenazi IQ hypothesis, just having a culture that valued intelligence on the marriage market was enough to boost IQ 15 points in a couple of centuries, and this is exactly the sort of thing you should expect in a world like ours where intelligence increases are stupidly easy to come by.”

Reading Scott Alexander on AI risk this morning, the above passage jumped out at me. It relates to my own personal transhumanist obsession: aging. The reason we don’t live longer is predominantly that there has never been evolutionary pressure to live longer. (See Robert Heinlein’s Lazarus Long for a fictionalized, overly rosy view of the implications.) But we know that lifespan is partially genetic and that, ceteris paribus, people with longer-lived parents and grandparents will themselves live longer.

The reason we aren’t all Jeanne Calment is that most of our evolution occurred in an era when, once your grandkids were hunting on the savannah, you’d done your part and were unlikely to pass on long-lived, long-health-span genes any more effectively than someone who died at 60.

But it’s interesting to see that this might be changing. People are having kids later and living and staying healthy longer (thanks largely to environmental factors, yes), and it could be that one’s ability to stay active later in life might push out our natural lifespans a few years. Even if you’re not having kids into your 50s, as an increasing number of people are, you’re still contributing to society and providing resources and support for the next generation. Population-level effects can sometimes be just as strong as individual genetic effects.

So it’ll be interesting to see if lifespan absent medical intervention starts to creep upwards over the next few decades.

Of course, all of that assumes that something like SENS doesn’t take off and just sort this aging thing out for good. In which case, I will be perfectly content never knowing whether I’m right about the evolution of aging.

As for Alexander’s essay, it’s an interesting take on the question. I like his point that the AI risk model really matters in the hard-start singularity scenario. Tech folks tend to default to open in open-vs-closed scenarios, but Alexander makes the most compelling argument for closed, careful, precautionary AI I’ve seen yet. I’m not entirely convinced, though, for a few reasons.

One is my doubt about the hard-start singularity (or even just AI) scenario. I just don’t find it plausible that creating a roughly human-level machine intelligence implies that it could easily overcome the barriers to higher levels of intelligence. The only reason we can’t see the (I suspect many, serious) barriers to drastically superhuman intelligence is that we haven’t run into them yet. What if we create a six-sigma-smart human-like AI and it suddenly runs into a variety of nearly insurmountable problems? This is where thinking about IQ as an undifferentiated scale can really be problematic. There are certain intellectual operations that are qualitatively different, not just a reflection of IQ, and we don’t yet know which of those appear as you start surpassing human intelligence. (Think, e.g., of a superhuman intelligence that, for whatever reason, lacked object permanence.)

Second, I think there are bound to be strong anthropic effects on AI as it’s being developed. This cuts off a lot of the scenarios that particularly worry AI risk writers (e.g. paperclip optimizers and such). Simply put: if there’s no reason that an AI researcher would ever build it, we’re probably better off excluding it from our threat model.

Finally, the dichotomous structure of Alexander’s Dr. Good vs. Dr. Amoral argument misses a lot of important nuance. I work in security and constantly see instances where smart people overlook really easy-to-exploit vulnerabilities. AI is going to be heinously complex, and any view of it as a monolithic, perfectly operational super-brain misses the reality of complex systems. Hell, evolution is the most careful, brutally efficient design mechanism we know of, and yet human intelligence still has a number of serious unpatched vulnerabilities.

This can be taken in a couple of ways. The hopeful one is that good people outweigh bad, and good uses of AI will outweigh bad, so we’ll probably end up with some uneasy detente as with the modern Internet security ecosystem.

The cynical one is that we’re as likely to be killed by a buffer overflow vuln in Dr. Good’s benevolent AI as we are by Dr. Amoral’s rampaging paperclip optimizer of death.

Welcome to the future!

A spirited defence (sic) of English spelling

In the art of rationality there is a discipline of closeness-to-the-issue – trying to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.

The Wright Brothers say, “My plane will fly.” If you look at their authority (bicycle mechanics who happen to be excellent amateur physicists) then you will compare their authority to, say, Lord Kelvin, and you will find that Lord Kelvin is the greater authority.

If you demand to see the Wright Brothers’ calculations, and you can follow them, and you demand to see Lord Kelvin’s calculations (he probably doesn’t have any apart from his own incredulity), then authority becomes much less relevant.

If you actually watch the plane fly, the calculations themselves become moot for many purposes, and Kelvin’s authority not even worth considering.

The more directly your arguments bear on a question, without intermediate inferences – the closer the observed nodes are to the queried node, in the Great Web of Causality – the more powerful the evidence. It’s a theorem of these causal graphs that you can never get more information from distant nodes than from strictly closer nodes that screen off the distant ones.

Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.”

Just as it is superior to argue physics than credentials, it is also superior to argue physics than rationality. Who was more rational, the Wright Brothers or Lord Kelvin? If we can check their calculations, we don’t have to care! The virtue of a rationalist cannot directly cause a plane to fly.
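The “theorem of these causal graphs” is the Markov property of Bayesian networks: once you condition on a node that screens off a distant one, the distant node carries no further information. A minimal sketch in Python (the chain A → B → C and its probability tables are invented purely for illustration):

```python
from itertools import product

# Hypothetical chain A -> B -> C of binary variables, with made-up CPTs.
p_a = {0: 0.5, 1: 0.5}
p_b_given_a = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}  # p_b_given_a[a][b]
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}  # p_c_given_b[b][c]

def joint(a, b, c):
    """Joint probability factorized along the chain."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

def p_c_given(c, **evidence):
    """P(C=c | evidence) by exact enumeration over the joint distribution."""
    match = lambda a, b: all({"a": a, "b": b}[k] == v for k, v in evidence.items())
    num = sum(joint(a, b, c) for a, b in product((0, 1), (0, 1)) if match(a, b))
    den = sum(joint(a, b, cc) for a, b, cc in product((0, 1), repeat=3) if match(a, b))
    return num / den

# Conditioning on the closer node B makes the distant node A irrelevant:
print(round(p_c_given(1, b=1, a=0), 6))  # 0.6
print(round(p_c_given(1, b=1, a=1), 6))  # 0.6 -- same: B screens off A
# Without B, the distant node A still shifts our estimate of C:
print(round(p_c_given(1, a=0), 6), round(p_c_given(1, a=1), 6))  # 0.2 0.45
```

The same pattern holds for any numbers you put in the tables, which is the point of the quote: evidence observed at B makes the more distant A moot.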

A dialog on immigration

OPENO: Hello, friend! Something seems to be on your mind.

RESTRICTES: Yes, Openo, I am troubled. This migrant crisis in Europe has reawakened my concerns about immigration to the West in general. While I’m sympathetic to the plight of those fleeing civil war (though not to economic migrants), I think that Westerners on both sides of the Atlantic are being extremely foolish and letting sentiment and blank slate ideology blind them to the long term, irreversible consequences of their decisions.

OPENO: Then we have very different attitudes. My “sentiment” leads me to believe that the migration from poorer to richer countries will – despite the difficulties associated with any large change – be in the end a great benefit not only to the immigrants and their descendants, but to the host nations as well. Perhaps we can use reason and evidence to bridge this gulf between us. Can you elaborate on your position more?

A very well-written hypothetical dialog ensues. Highly recommended reading.

A Noble Endeavor

Some of the crazy, beautiful bastards at LessWrong have set out to collect a list of the best textbooks on every subject.

10 “Fully General Counterarguments” to watch out for

It is an unchallengeable orthodoxy that you should wear a coat if it is cold out. Day after day we hear shrill warnings from the high priests of this new religion practically seething with hatred for anyone who might possibly dare to go out without a winter coat on. But these ideologues don’t realize that just wearing more jackets can’t solve all of our society’s problems. Here’s a reality check – no one is under any obligation to put on any clothing they don’t want to, and North Face and REI are not entitled to your hard-earned money. All that these increasingly strident claims about jackets do is shame underprivileged people who can’t afford jackets, suggesting them as legitimate targets for violence. In conclusion, do we really want to say that people should be judged by the clothes they wear? Or can we accept the unjacketed human body to be potentially just as beautiful as someone bundled beneath ten layers of coats?

Listed, with succinct examples.
