Archive for the ‘Science and Technology’ Category

How It Works: The M1 Garand Rifle

That NYT Amazon Story

I’ve had a lot of folks ask me about that Amazon “exposé” in the New York Times. All I can say is that I’ve never seen anything like what was described in that story and have every confidence that the leaders to whom I’ve reported at Amazon would never tolerate anything like it. Which isn’t to say those individual cases didn’t happen, necessarily. But the pool of current and former Amazon white-collar employees is large enough that anyone could cherry-pick a few examples to prove any point. I feel bad that those unlucky folks had such horrific experiences, and I hope that the people responsible for them have, as the Amazon saying goes, been “promoted to customer”.

But the Amazon described in that New York Times story is not the Amazon I know and have happily worked for the past five years.

Obligatory Disclosure Notice.

The Great Filter as Great Filter

NOTE: This post is just epistemic play. I’m writing this mainly to figure out the shape and scope of this argument, not to rigorously propose it as a serious solution to the Fermi Paradox. I don’t, personally, even think the Great Filter is a good solution to Fermi’s Paradox, as there’s a whole host of better reasons why we haven’t run into our universal neighbors yet.

The most interesting question in modern cosmology is whether or not extraterrestrial life exists. Because whether it exists or not, both possibilities are incredible. On the one hand, space is unfathomably vast and, given that we know intelligent life arises with probability greater than 0 on suitably habitable planets, the universe should be teeming with life. On the other hand, despite serious efforts to locate extraterrestrial life, we’ve found no evidence that there’s anyone else out there. Nor do we have any evidence that they’ve popped ’round for a visit.

This is known as the Fermi Paradox, named after the physicist Enrico Fermi, who stated it simply as “Where is everybody?”

There are a few different resolutions to the Fermi Paradox, one of which is known as the Great Filter Hypothesis. This is the theory that, in the development of intelligent civilization, there are one or more events or processes that tend to prevent intelligent species from becoming communicative, space-faring, and/or simply advanced enough to be detected from outside their local space. A number of candidates have been identified for the Great Filter, including the development of nuclear weapons, overpopulation, resource depletion, short time horizons, religion, atheism, post-scarcity, virtual reality, disease, invasion by whatever species first developed space-faring technology and became intergalactic murder gods jumping from planet to planet pillaging other species, etc., etc.

Let’s add one more to the long list of “bullshit over-rationalized hypotheses for the Great Filter”. What if understanding of the Great Filter is itself the Great Filter? Or, more prosaically, what if an obsession with impending doom tends to grip cultures, pushing them away from the path of technological progress? Longtime readers of this blog will be familiar with my Things Are Better Than You Think series, and know that I definitely see this trend going on in modern Western society. But what if it’s not just us? What if there comes a certain point in the development of civilization when the panicky, risk-averse memes that tend to benefit earlier, more fragile cultures cause advanced civilizations to descend into paranoid paralysis, always looking over the next horizon for the apocalypse?

They notice that they use some resources that aren’t readily replenished and so give in to peak resource panics and place crushing burdens on anyone using those resources. Or they notice that their impact on the environment is deleterious and so they de-industrialize instead of taking the time and energy to make their technologies sustainable. Or they even just notice that there are a lot of things out there that can destroy a species or a sapient individual, become resigned to their fate, and don’t try to stop it. A species-wide memento mori could function just like it does for individuals: as either a call to action or as an excuse to slack off, since we’re all going to die someday anyway.

We see this trend today in Western cultures. The debate over global warming, for instance, has gotten mired down around two poles: N.) It’s real and we’re all fucked. S.) It’s a conspiracy and everything is fine.

But it is real, and we’re probably not fucked. That’s the excluded middle, and it seems to be (to me, at least) the most likely projection based on the evidence. What if, in advanced civilizations, the N pole of the apocalypse argument tends to win out, leading to heavy restrictions on growth and progress and an en masse return to simple, squalid agrarianism?

Or, to use an example that’s no longer as highly charged, what if the first two super-states to create nuclear weapons tend to lock into a stable M.A.D. regime of brutally logical brinkmanship? They spend all of their resources developing better measures and countermeasures, until a cold war becomes a static, cold civilization that does nothing but huddle under the threat of nuclear annihilation. All resources that aren’t spent avoiding the apocalypse are spent fearing it.

Of course, as with all candidates for the Great Filter, this one is automatically suspect since it glorifies our species’ problems by making them universal. Cosmology, like history, is seductive in its false familiarity. Modern America is not early-decline Rome, and Western Civilization is not every intelligent civilization everywhere. But insofar as our experiences universalize, I think there’s a non-zero chance that the fear of an apocalypse could be just as much of a Great Filter as the actual apocalypse itself.

A Koan

The security Student asked the security Master, “At what point will my threat model be complete?”

The Master produced two coins, one silver and one gold. He placed both coins in his left hand and closed his fist around them. From his fist he withdrew the silver coin, which he placed in the pocket of his robes.

“Which coin is in my hand?” the Master asked.

“The gold,” the Student replied.

The Master opened his hand. Both the silver and the gold coin remained.

He closed his hand again. This time he withdrew the gold coin and placed it in the pocket of his robes.

“Which coin is in my hand?”

“The silver.”

The Master opened his hand. Both the silver and the gold coin remained.

The Master closed his hand again.

“When I withdraw the next coin from my hand, which coin do you think will remain?” the Master asked.

“Both of them, clearly!” the Student replied.

The Master reached into his hand and withdrew a copper coin.

And the Student was enlightened.

An Elegant Proof

Destruction is always easier than creation: “By Stefanovitch’s reckoning, just two individuals had accounted for almost all the destruction, eviscerating the completed puzzle in about one percent of the moves and two percent of the time it had taken a crowd of thousands to assemble it.”

Alternately phrased: one malicious shitheel can destroy the work of hundreds of good people, but only if the system permits it.

One thing that I think gets glossed over in the (otherwise) excellent write-up above is that Stefanovitch’s original reckoning was correct. The project didn’t actually have one antagonist, but two. I’ll give you a hint as to the other:

“‘We were crossing our fingers, hoping we wouldn’t get sabotaged,’ says [REDACTED], the team’s security expert.”

“Security” “expert”. Malfeasance is one thing, but when it comes to security, willful incompetence is just as bad.

One from the Great EWD

“A problem solved in my head.”

Of two unknown integers between 2 and 99 (bounds included), a person P is told the product and a person S is told the sum. When asked whether they know the two numbers, the following dialog takes place:

P: “I don’t know them.”

S: “I knew that already.”

P: “Then I know now the two numbers.”

S: “Then I now know them too.”

With the above data we are requested to determine the two numbers, and to establish that our solution is unique.

(Readers who would like to think about the problem themselves need not turn the page.)
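For readers who have already had their fun (or given up): each of the four statements in the dialog translates directly into a filter over candidate pairs, so the puzzle yields to brute force. This is my own sketch of that search, not EWD’s in-head derivation, with pairs taken unordered over the stated bounds:

```python
# Brute-force the sum/product puzzle. Each line of the dialog eliminates
# candidate pairs; the pair surviving all four statements is the answer.
from collections import defaultdict

LO, HI = 2, 99
pairs = [(a, b) for a in range(LO, HI + 1) for b in range(a, HI + 1)]

by_product = defaultdict(list)
by_sum = defaultdict(list)
for a, b in pairs:
    by_product[a * b].append((a, b))
    by_sum[a + b].append((a, b))

def p_unsure(pair):
    """P: 'I don't know them' -- the product factors ambiguously in range."""
    a, b = pair
    return len(by_product[a * b]) > 1

def s_knew(pair):
    """S: 'I knew that already' -- every split of the sum leaves P unsure."""
    a, b = pair
    return all(p_unsure(q) for q in by_sum[a + b])

def p_now_knows(pair):
    """P: 'Then I know now' -- exactly one factorization survives S's claim."""
    a, b = pair
    return sum(1 for q in by_product[a * b] if s_knew(q)) == 1

def s_now_knows(pair):
    """S: 'Then I now know them too' -- exactly one split survives P's claim."""
    a, b = pair
    return sum(1 for q in by_sum[a + b] if p_now_knows(q)) == 1

solutions = [p for p in pairs
             if p_unsure(p) and s_knew(p) and p_now_knows(p) and s_now_knows(p)]
print(solutions)
```

Running it both finds the pair and, because the list comes back with a single element, establishes the uniqueness the problem demands.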

Living in the Future: Better Cyborgs Edition

We now have the capability to create mind-controlled cybernetic limbs. The fit and finish leaves something to be desired, and there are many more hurdles to clear, but they’re good enough to restore real, meaningful function to amputees.

Which is damned awesome, if you ask me. Props to the team at APL. Thanks for helping to make this future a good one.

All hail our dark, Markovian configuration management tools

“During the Jurassic Age the Old Ones had perhaps become satisfied with their decadent art—or had ceased to recognize the superior merit of the older (activerecord) backends in a multi-master environment.”

The Doom that Came to Puppet, found via @lxt.

“…a beautiful specimen.”

WARNING: NSFL, contains detailed shots of a recently removed human brain. It’s a fascinating study of just how fragile the most critical part of us is.


Magic Blue Smoke

House Rules:

1.) Carry out your own dead.
2.) No opium smoking in the elevators.
3.) In Competitions, during gunfire or while bombs are falling, players may take cover without penalty for ceasing play.
4.) A player whose stroke is affected by the simultaneous explosion of a bomb may play another ball from the same place.
4a.) Penalty one stroke.
5.) Pilsner should be in Roman type, and begin with a capital.
6.) Keep Calm and Kill It with Fire.
7.) Spammers will be fed to the Crabipede.