Archive for the ‘Living in the Future’ Category

An Idoru dispels misconceptions about Japan

William Gibson sadly unavailable for comment.

“Hell below me, stars above” – On evolution and Alexander on AI Risk

“If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.

Worse, the reasons we humans aren’t more intelligent are really stupid. Like, even people who find the idea abhorrent agree that selectively breeding humans for intelligence would work in some limited sense. Find all the smartest people, make them marry each other for a couple of generations, and you’d get some really smart great-grandchildren. But think about how weird this is! Breeding smart people isn’t doing work, per se. It’s not inventing complex new brain lobes. If you want to get all anthropomorphic about it, you’re just “telling” evolution that intelligence is something it should be selecting for. Heck, that’s all that the African savannah was doing too – the difference between chimps and humans isn’t some brilliant new molecular mechanism, it’s just sticking chimps in an environment where intelligence was selected for so that evolution was incentivized to pull out a few stupid hacks. The hacks seem to be things like “bigger brain size” (did you know that both among species and among individual humans, brain size correlates pretty robustly with intelligence, and that one reason we’re not smarter may be that it’s too annoying to have to squeeze a bigger brain through the birth canal?) If you believe in Greg Cochran’s Ashkenazi IQ hypothesis, just having a culture that valued intelligence on the marriage market was enough to boost IQ 15 points in a couple of centuries, and this is exactly the sort of thing you should expect in a world like ours where intelligence increases are stupidly easy to come by.”

I was reading Scott Alexander on AI risk this morning, and the above passage jumped out at me. It relates to my own personal transhumanist obsession: aging. The reason we don't live longer is predominantly that there's never been evolutionary pressure to live longer. (See Robert Heinlein's Lazarus Long for a fictionalized, overly rosy view of the implications.) But we know that lifespan is partially genetic and that, ceteris paribus, people with longer-lived parents and grandparents will themselves live longer.

The reason we aren't all Jeanne Calment is that most of our evolution happened in an era when, once your grandkids were hunting on the savannah, you'd done your part and were no more likely to pass on long-lived, long-health-span genes than someone who died at 60.

But it’s interesting to see that that might be changing. People are having kids older, living and staying healthy older (thanks largely to environmental factors, yes), and it could be that one’s ability to stay active later might push out our natural lifespans a few years. Even if you’re not having kids into your 50s, like an increasing number of people are, you’re still contributing to society and providing resources and support for the next generation. Population effects can be just as strong as individual genetic effects sometimes.

So it’ll be interesting to see if lifespan absent medical intervention starts to creep upwards over the next few decades.

Of course, all of that assumes that something like SENS doesn't take off and just sort this aging thing out for good. In which case, I will be perfectly content never knowing whether I'm right about the evolution of aging.

As for Alexander’s essay, it’s an interesting take on the question. I like his point that the AI risk model really matters in the hard-start singularity scenario. Tech folks tend to default to open in open vs closed scenarios, but Alexander makes the most compelling argument for closed, careful, precautionary AI I’ve seen yet. I’m not convinced, entirely, though, for a few reasons.

One is my doubt in the hard-start singularity (or even just AI) scenario. I just don't find it plausible that creating a roughly human-level machine intelligence somehow implies that it could easily overcome the barriers to higher levels of intelligence. The only reason we can't see the (I suspect many, serious) barriers to drastically super-human intelligence is that we haven't run into them yet. What if we create a six-delta-smart human-like AI and it suddenly runs into a variety of nearly insurmountable problems? This is where thinking about IQ as an undifferentiated plane can really be problematic. There are certain intellectual operations that are qualitatively different, not just a reflection of IQ, and we don't yet know what some of those are as you start surpassing human intelligence. (Think, e.g., of a super-human intelligence that, for whatever reason, lacked object permanence.)

Second, I think there are bound to be strong anthropic effects on AI as it’s being developed. This cuts off a lot of the scenarios that particularly worry AI risk writers (e.g. paperclip optimizers and such). Simply put: if there’s no reason that an AI researcher would ever build it, we’re probably better off excluding it from our threat model.

Finally, the dichotomous structure of Alexander's Dr. Good vs. Dr. Amoral argument misses a lot of important nuance. I work in security, and I see instances all the time where smart people overlook really easy-to-exploit vulnerabilities. AI is going to be heinously complex, and any view of it as a set of monolithic, perfectly operational super-brains misses the reality of complex systems. Hell, evolution is the most careful, brutally efficient design mechanism that we know of, and yet human intelligence still has a number of serious unpatched vulnerabilities.

This can be taken in a couple of ways. The hopeful one is that good people outweigh bad, and good uses of AI will outweigh bad, so we'll probably end up with an uneasy detente, as with the modern Internet security ecosystem.

The cynical one is that we’re as likely to be killed by a buffer overflow vuln in Dr. Good’s benevolent AI as we are by Dr. Amoral’s rampaging paperclip optimizer of death.

Welcome to the future!

Some Thoughts on Getting Paid for a Pull Request

I recently made my first contributions to TextSecure. For those not familiar, it's a handy little secure messaging app that manages both SMS messages and secure push notifications. It's one of the best examples of usable security out there for mobile, so I'm stoked to finally be able to contribute to it.

One of the many cool things about the project is that they've set up BitHub, an automated donations/payout service for open source projects. The service's interface is remarkably simple, allowing donations into a pool of funds and paying out automatically for every commit. In my case, as soon as my commit landed, the equivalent of roughly 12 USD landed in my Bitcoin wallet. The process was transparent and painless for me.

Looking through the git history for the project, I noticed a bunch of commits that included the line:

// FREEBIE

While I couldn’t find any explicit documentation of it anywhere in the BitHub docs, it turns out that this does, as you’d expect, “donate” the commit in question. I tagged my second commit to TextSecure with FREEBIE, and sure enough, no payout resulted.

This whole mechanism is interesting for a few reasons, almost all of them having to do with usability and the API.

See, paying for open source contributions isn't a radically new concept. There have been various attempts to make it work over the years, and most of them end up looking like, or actually being, small companies built up around particular projects. Some projects have their own bug bounty systems. Others are covered by third-party bug bounty programs.

But a per-project-funded, per-commit-paying, automated donation and payout system, whose API is almost entirely contained in the existing git infrastructure, is new. And what's more, having used it now, I'm convinced it's as close as we've come so far to the Correct way of doing payouts for open source contributions. Its transparency and seamless integration make it trivially usable. It can be configured to pay out or not by default, with overrides as simple as appropriately tagging your commit. The Bitcoin-based payout is so easy and quick that it's damned near magic.

I haven’t looked into setup or configuration much, so I don’t know how much of a headache it is from an operational point of view. (I don’t know precisely where, e.g., it looked to discover the right BitCoin address, though I suspect it’s just doing a committer email lookup on CoinBase. I’m not sure what the payout mechanism would look like if I didn’t already have a CoinBase account setup or if can be setup to query identity services like OneName) But from a user’s point of view it was so easy and invisible that it would be easy to miss right up until the BTC landed in one’s wallet.

It’s this usability that makes something like BitHub a potential game changer. A zero friction way for developers to get a little payout for fixing something on a project creates a powerful new incentive for them to get over the learning curve. There’s a huge psychological difference between needing to learn a new codebase just to fix a bug or two, and learning a codebase that actively pays contributors. Especially when the actual payout mechanism involves no special setup for the dev.

In short, BitHub’s git integration, transparency, and near total lack of friction make it the best open source payment solution I’ve seen. And that’s not just a nice thing, it’s an important thing. Getting paid drives commits. Friction drives complacency. BitHub finally nails the best of both worlds for open source payments.

Living in the Future: Better Cyborgs Edition

We now have the capability to create mind-controlled cybernetic limbs. The fit and finish leaves something to be desired, and there are many more hurdles to clear, but they’re good enough to restore real, meaningful function to amputees.

Which is damned awesome, if you ask me. Props to the team at APL. Thanks for helping to make this future a good one.

Living in the Future: Smarter Cars Edition

Just in case you were wondering how far along our species is with the whole “autonomous automobile” project, check out this demo of the latest Tesla S, including the slick new autopilot.

Pretty wicked stuff.

Living in the Future: Digital Makeup Edition

OMOTE / Real-Time Face Tracking & Projection Mapping, from something wonderful on Vimeo.

Living in the Future: Explaining Amazon Fresh to a Cat Edition (with Apologies to Randall Munroe)

I’m spending the evening using a metal rectangle full of little lights to prepare a presentation for my colleagues. The presentation is about a horrible failure in the way our light-boxes share patterns with one another. It allowed them to know each others secrets! Patterns of lights were seen where they weren’t meant to appear! While I change the lights in my presentation, I realized that I was low on a few essentials, including black pepper and diet pepsi (two of the foundation stones of my food pyramid). Fortunately, we have a whole army of light-boxes whose whole job is to use light patterns to signal a fleet of pickers, packers, shippers, and drivers, to deliver essentials right to my apartment at short notice. And so, with a few deft flicks of the lights, I signaled what I needed and scheduled a convenient time for them to deliver it, less than 24 hours in the future.

Our light-boxes may not make us as happy as a cat, but they sure can solve a lot of problems for us. (When their patterns aren’t all wrong and causing us tons of grief, that is.)

Living in the Future: Bathing Edition

William Gibson said that Japan is the world’s default mental setting for the future. That’s just as true for the mundane as it is for the esoteric.

Living in the Future: Deextinction Edition

Did you know that mankind has successfully (albeit briefly) restored an extinct species? Were you aware that we’re on the cusp of restoring many more?

Stewart Brand is helping spearhead an effort that will (relatively soon, from the sound of it) restore some of the species that we’ve destroyed over the years.

Welcome to the de-extincted future, fellow H. Saps!

Living in the Future, QotD Edition

Mark Steckbeck of Liberal Order, as quoted over at Cafe Hayek:

A couple of weeks ago I was watching a football game on my 46-inch flat panel HD television set with surround sound, all fully remote. I didn't really need the sound because I was listening to music through my iPhone, with access to over 5,000 songs on my computer's server on the other side of the house. This was through the Apple Airport Express I had attached to an amp on my stereo speakers in the same room. I have two other Airport Extremes connected to speakers around the house on which I can play these 5,000 songs from my computer in different rooms. No getting up to turn the record over.

Read the whole thing, as well as the WSJ article it’s in response to.

We hear a lot these days about the collapse of the middle class. I'm happy to report that the vast majority of the hand-wringing over it is entirely unwarranted. Not only are goods cheaper on a per-hour-worked basis, but they are of much higher quality. We also have access to goods and technologies that were unthinkable even a few decades ago. And despite some impressively fallacious numbers being bandied about, inflation-adjusted median income is near its all-time highs. Take into account the fact that such incomes are almost always reported at the household level and the fact that households are smaller than they used to be, and the picture gets even better.
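
To make that last point concrete, here's a toy calculation. The numbers are invented purely for illustration, not real census figures; the point is only that flat-looking household income plus shrinking households still means more income per person.

# Toy illustration with invented numbers: roughly flat household income can
# still mean rising income per person once household size shrinks.
household_income_then, household_size_then = 55_000, 3.0
household_income_now,  household_size_now  = 57_000, 2.5

per_person_then = household_income_then / household_size_then  # ~18,333 per person
per_person_now  = household_income_now / household_size_now    # 22,800 per person

print(per_person_then, per_person_now)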
