Two muffins are sitting in an oven, baking. One muffin turns to the other and says: “Is it just me, or is it getting really hot in here?” The second muffin turns to the first and says: “Holy crap, a talking muffin!”

The above joke is funny largely because it commits a logical fallacy called a category error and then immediately turns around and calls itself on it. It’s generally accepted that muffins can’t talk, so ascribing that ability to them is ridiculous. Ascribing a quality or set of qualities to an object that can’t possibly possess them is called a category error.

Another, less obvious example: many people say that anyone who plays the lottery is a fool. After all, the odds of winning are minuscule and the expected return is decidedly negative (see the sketch below). It seems to me, however, that for most purchasers of lottery tickets, that argument is fallacious. Most of the people buying lottery tickets don’t actually expect to win, and vanishingly few of them expect to make money on the prospect in the long run. Most of them are playing the lottery in order to day-dream for a few days about what they would do if they won. (My personal fantasy is to pay for Logic 101, Computer Science 101, and Economics 101 classes for every person in the country.)
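To make that expected-return arithmetic concrete, here’s a quick sketch with illustrative numbers; the real ticket price, jackpot, and odds vary by lottery, so treat every figure below as an assumption:

```python
# Expected value of a hypothetical lottery ticket.
# All numbers are illustrative assumptions, not any real lottery's figures.
ticket_price = 2.00                 # dollars
jackpot = 100_000_000               # dollars
odds_of_winning = 1 / 300_000_000   # about one in three hundred million

expected_return = jackpot * odds_of_winning - ticket_price
print(f"Expected return per ticket: ${expected_return:.2f}")
# Expected return per ticket: $-1.67
```

Even ignoring taxes, split jackpots, and smaller prizes, the math stays negative unless the jackpot grows wildly out of proportion to the odds.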

So to tell a lottery player that they’re making a bad gamble is a category error. It assumes that lottery playing is a form of gambling, when really it’s a form of assisted daydreaming. The anti-lottery killjoy is ascribing to the lottery player motives they don’t actually possess.

Here’s a less abstract example. When the first photocopy machines came on the market, some early-adopting executives took the time to proofread every copy the machine made, just in case it had made transcription errors. They assumed (reasonably, for the time) that a photocopy machine was in the same category of object as a human transcriptionist, and so ascribed to it the ability to make typographical or transcription errors. This was a category error: photocopy machines make copies from captured images and don’t actually transcribe documents at all.

In day-to-day life, category errors occur frequently in fields that are poorly understood by the general populace, like science or technology. Many of the technological problems that people encounter come from bad analogies leading them to think that, for instance, a computer and a human brain are the same category of thing. The answer to “why did my computer do something so stupid?” is, invariably, “because someone told it to.” Computers only ever do exactly what they’re told, and that sometimes leads to bad behavior. Brains, on the other hand, have no such constraint, and indeed can’t have one, because they aren’t artifacts the way computers are.
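As a toy illustration (hypothetical code, not drawn from any real incident): a programmer intends “clean up the temporary files,” tells the computer to flag anything with “temp” in its name, and the computer obeys exactly:

```python
# Intent: clean up temporary files.
# Instruction actually given: flag anything with "temp" in its name.
# The computer does exactly what it's told, which includes template.docx.
files = ["report.docx", "temp_cache.dat", "notes_temp.txt", "template.docx"]

to_delete = [name for name in files if "temp" in name]
print(to_delete)  # ['temp_cache.dat', 'notes_temp.txt', 'template.docx']
```

The machine isn’t being dumb; it’s faithfully executing a flawed instruction.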

Almost all category errors rely on explicit or implicit analogies: every analogy is imperfect, yet we tend to treat them as flawless identities. Joel Spolsky calls this the “Law of Leaky Abstractions”. We assume our computer is like a brain because that analogy serves us well a high percentage of the time. We end up expecting our computer to behave “intelligently” (meaning, roughly: however I really wish it would behave) and get angry when it’s “dumb” enough to do something like download a virus or delete our files. Ascribing intelligence, or the lack thereof, to a modern computer is a category error based on the leaky abstraction that computers are “sort of like” brains.1
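For a small, concrete taste of how an abstraction leaks, here’s a hypothetical benchmark sketch (my example, not one from Spolsky’s essay, which uses TCP among others). A Python list presents itself as a uniform “sequence of items,” but the array underneath makes front insertions far more expensive than appends:

```python
import timeit

# A list looks like a uniform sequence; the array underneath is hidden.
# The abstraction leaks in the timings: appending at the end is O(1)
# amortized, while inserting at the front shifts every element, O(n).
setup = "data = list(range(100_000))"
t_end = timeit.timeit("data.append(0); data.pop()", setup=setup, number=1_000)
t_front = timeit.timeit("data.insert(0, 0); data.pop(0)", setup=setup, number=1_000)

print(f"append/pop at end:   {t_end:.4f} s")
print(f"insert/pop at front: {t_front:.4f} s")  # much slower: the leak showing
```

The interface never changed; only the hidden implementation showed through.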

Category errors include a couple of important subspecies that I’ll probably talk more about in a future post: the Fallacy of Composition and the Fallacy of Division. These are category errors in which qualities held by a part are ascribed to the whole, or vice versa. (For example: every brick in a wall is light, therefore the wall must be light.)

Like all informal fallacies, category errors can be difficult to avoid. The best defense is usually to be rigorous about examining your suppositions. Whenever you ascribe qualities to a person or thing without direct evidence, or find yourself making assumptions based on analogy, it’s a good time to step back and ask whether those assumptions are warranted.


1 It’s actually an open question whether that will always be a category error. The question of general artificial intelligence has a number of unresolved technical and philosophical components. For the pro-AI side, see Turing, Minsky, Kurzweil, et al. For the anti-AI side, see Searle, Lanier, Penrose, et al. For the dismissive “that’s kind of a stupid question” side of the argument, see the wonderful E. W. Dijkstra.