Induction and Deduction



In the previous section, we discussed Sense Perception as the source of all of our knowledge. It's our tie to reality, and that's how we get raw data. But what do you do with raw data?

Induction is the process of taking specific data and generalizing from it. It's something we do all the time, and you use its products constantly. For instance, if you see that the first car you drive has turn signals, and you see another car, and it has turn signals, you can extrapolate and guess that all of them do. If you see the sun rise in the morning and set in the evening, you guess that it will continue to do that. You may later supplement this knowledge with a reason behind it, but you start off by seeing the results and generalizing the event.

The knowledge you get from induction is of a general sort. You go from the more specific to the more general. This can be done with sensory data, but you can also do it with abstract data. You can be aware of monarchies, democracies, dictatorships, and socialist states, and based on their similarities draw a wider conclusion, like that a government must have the consent of the governed or it will be overthrown.

Objectivism doesn't have a complete theory of induction of its own, so I'm going to describe some of the general issues of induction. The first is what's sometimes called the problem of induction. Basically, when you generalize from data, how do you know you're generalizing correctly? Say you find a dog that chases cats. And then another. And then another. You may conclude that all dogs chase cats. But it's not true. Some dogs don't chase cats. How do you know if your generalization is correct? Is there a proper method? How certain should you be of any particular generalization?

The first thing to do is look at what kind of information we have about generalizations. The first piece is our sample size. If you know there are a billion dogs on the planet, how many have you actually seen chase cats? 10? 100? 1000? Or all 1 billion? Obviously if your sample size is very small, you can't count on the generalization as much as if you've tested every dog. But even testing them all doesn't guarantee success. A new dog might be born tomorrow that doesn't chase cats, or there might be a dog you're not aware of that doesn't.
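
To make the sample-size point concrete, here's a minimal Python sketch. The 1-in-1000 exception rate is a number I invented purely for illustration; the point is how easily a small sample can miss every counterexample and leave a false generalization looking solid.

    # A sketch with an invented number: suppose 1 in 1000 dogs
    # doesn't chase cats. How likely is a random sample to miss
    # every counterexample?
    exception_rate = 1 / 1000  # hypothetical, for illustration only

    for sample_size in (10, 100, 1000, 10000):
        # Probability that every dog in the sample chases cats, so
        # the exceptions go unseen and the generalization survives.
        miss_all = (1 - exception_rate) ** sample_size
        print(f"sample of {sample_size:>5}: "
              f"{miss_all:.1%} chance of seeing no exception")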

The next piece of information is the kind of data set we're using. Which is more useful in this experiment: a set of 1000 random dogs, or a set of one million Pit Bulls (a very aggressive breed, for those of you who aren't familiar with them)? Obviously you can come to bad conclusions by not having enough breadth in your sample set.

Another piece of information is the context. Do the dogs chase the cats when they're hungry, protecting their young, bored, or what? If you test them only under certain circumstances, you may come to the wrong conclusion.

Another way of improving your chances of being correct is finding an explanation for a particular generalization. In other words, don't just measure whether dogs chase cats; come up with a hypothesis about why they would. The earlier example of the sun rising in the morning illustrates this better. You can, by sheer induction, guess that because it has always risen in the morning, it will in the future. But when you grasp that the earth is spinning on its axis, you understand the cause of it. Of course, this information will be based on a whole other set of inductions and deductions, but the fact that the generalizations end up supporting each other is useful. This is how most of our knowledge works.

Statistics is an entire science based on the needs of induction.

And so is probability. Probability doesn't try to come up with a 100% correct generalization, but instead tries to show the odds or rates of something happening. This is useful in a lot of ways, and is a kind of induction. By processing the data, you can make statements about classes of objects or results.
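
As a rough sketch of what that looks like (the observation counts below are invented), you can estimate a rate from data instead of asserting a universal rule:

    # Instead of claiming "all dogs chase cats," estimate the rate
    # from observed data. Counts are hypothetical, for illustration.
    chased = 874      # invented: dogs observed chasing a cat
    observed = 1000   # invented: total dogs observed

    rate = chased / observed
    # A crude 95% margin of error for a proportion
    # (normal approximation).
    margin = 1.96 * (rate * (1 - rate) / observed) ** 0.5
    print(f"estimated rate: {rate:.1%} +/- {margin:.1%}")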

Concept-formation is another kind of induction, and that's an area where Rand has a unique theory. I'll get into that later.

Another thing to note about induction is that you can disprove a generalization with one piece of data. So when using induction, it's important to look for evidence that you might be wrong. In other words, you can't just look for examples that confirm the generalization. You need to look to see if you can find any that contradict it. In the dog example, it would be bad to set up a website about dogs that chase cats and have people send in their stories: you'd only ever collect confirmations. If you want to be scientific, you have to seek out examples that would prove your theory wrong.

Of course, that's not easy to do. If you have a hypothesis about why the generalization seems to hold up, you can test for things that would go against that hypothesis. But you may still miss them. An example I learned growing up is being shown a set of numbers that includes (among others) {2, 8, 6, 4, 16, 12, 1042}. The test asks you to figure out what numbers are in the set, and you can guess other numbers and see if they're in it. The trick is that if you assume it's the even numbers, you might try 20, 22, 1 million, etc. But you need to test whether there are other numbers in it, like 1, 3, or 9. It could be the set of natural numbers. And then you have to test 0. And then -1, -2, -4. And if those all happen to fit, you have to try fractions or decimals or whatever else. But if your theory of what the set contains is wrong, or in the more general sense your theory behind a generalization is incorrect, you may not look for the right kind of falsifying evidence.
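
Here's a small Python sketch of that game. The hidden rule and the guesses are mine, invented for illustration. Notice that the confirming guesses all pass, so the "even numbers" theory survives until you deliberately try numbers that would break it.

    def hidden_rule(n):
        # The secret rule: any integer is in the set, not just
        # the evens. (Invented for this illustration.)
        return n == int(n)

    # Confirmation-seeking: only try numbers the "even numbers"
    # theory predicts. They all pass, so the theory seems safe.
    confirming_guesses = [20, 22, 1_000_000]
    # Falsification-seeking: try numbers that would break the theory.
    breaking_guesses = [1, 3, 9, 0, -1, 2.5]

    for guess in confirming_guesses + breaking_guesses:
        verdict = "is in the set" if hidden_rule(guess) else "is NOT in the set"
        print(guess, verdict)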

There's a bit more to induction than this. When trying to find the cause of some result, there may be more than one possible cause. You can learn to test whether any particular factor is part of the cause. Where there is more than one cause, you can figure out which is the primary cause, or what relationship between the causes produces the effects. These are all inductive exercises: looking at data and trying to generalize from it.
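
One simple version of this is what John Stuart Mill called the method of difference: compare cases that are alike in every respect but one. Here's a rough Python sketch; the observations and factors are invented for illustration.

    # Invented observations: for each dog-and-cat encounter, which
    # factors were present, and did the dog chase the cat?
    observations = [
        {"hungry": True,  "bored": False, "chased": True},
        {"hungry": False, "bored": False, "chased": False},
        {"hungry": True,  "bored": True,  "chased": True},
        {"hungry": False, "bored": True,  "chased": True},
    ]

    def factor_matters(factor):
        # Method of difference: does flipping only this factor, with
        # every other factor held equal, ever flip the outcome?
        for a in observations:
            for b in observations:
                others_equal = all(a[k] == b[k]
                                   for k in a if k not in (factor, "chased"))
                if (others_equal and a[factor] != b[factor]
                        and a["chased"] != b["chased"]):
                    return True
        return False

    for f in ("hungry", "bored"):
        print(f, "looks causally relevant:", factor_matters(f))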

That should give you a pretty good idea of what induction is all about. Induction is one of those areas that really upsets some philosophers.

Deduction is the opposite of induction. It goes from the general to the specific. If you say that all apples are tasty, then you know that a particular apple is going to be tasty. If the general principle is true, then you know that the conclusion is true. It's a 100% thing, and very attractive.

The basic form of a deduction is a syllogism. There are a few types of these, but a common one is of the type "All A are B. X is A. Therefore X is B." There are also a lot of logical fallacies, where the form looks similar but the inference is invalid. For instance, "All A are B. X is B. Therefore X is A." The correct version is embodied in the saying "All men are mortal. Socrates is a man. Therefore, Socrates is mortal." The flawed version would say that because Socrates is mortal, he must be a man. But that doesn't follow from the premises. He could be a horse for all we know.
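
You can even check the two forms mechanically. Here's a Python sketch (my own illustration, not part of any standard treatment) that brute-forces every tiny "world" to see whether the premises can be true while the conclusion is false:

    from itertools import product

    individuals = ["socrates", "a_horse"]

    def worlds():
        # Every way of assigning membership in A and in B
        # to each individual.
        n = len(individuals)
        for bits in product([False, True], repeat=2 * n):
            A = {x for x, b in zip(individuals, bits[:n]) if b}
            B = {x for x, b in zip(individuals, bits[n:]) if b}
            yield A, B

    def valid(premises, conclusion):
        # A form is valid if no world makes the premises true
        # and the conclusion false.
        return all(conclusion(A, B) for A, B in worlds() if premises(A, B))

    # Valid: All A are B; Socrates is A; therefore Socrates is B.
    print(valid(lambda A, B: A <= B and "socrates" in A,
                lambda A, B: "socrates" in B))   # True

    # Fallacy (affirming the consequent): All A are B;
    # Socrates is B; therefore Socrates is A.
    print(valid(lambda A, B: A <= B and "socrates" in B,
                lambda A, B: "socrates" in A))   # False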

Yes, deduction gives you nice clean answers. If the premises are correct, and the logic is performed correctly, the result must be correct. But you'll notice that in deduction, you have some general principle (All A are B, or something of the sort). That means that deduction is necessarily based on some generalized principle, which can only be attained through induction. As I said, that upsets some philosophers, because they want clean answers that have no chance of being proven wrong. One way some philosophers have tried to get around this obstacle is by claiming that some general principles are known without induction. A priori knowledge is what they call it: knowledge that comes before experience. If the general principles could be automatic, then they wouldn't have to worry about the messy world of induction.

You'll often hear Lindsay and I talk about Rationalists. Despite the name, it does not mean people who are rational (although in some contexts, it's used that way). It means people who think that deduction is the only means of gaining real knowledge, and consequently dismiss induction. Of course, they can't dismiss the products of induction, or they'd have nothing to deduce. So instead they ignore it.

Like most false dichotomies, there is another side that is completely flawed as well, but in a different way: the Empiricists. In a general sense, the term is supposed to mean people who believe knowledge comes from the senses. But like many descriptions, it misses the bigger point. Empiricists discount deduction and abstraction. They uphold experience as the only source of knowledge. I don't want to argue about how consistent they can be with this, since consistency is typically not a trait of a bad philosophy, but imagine that someone wants to jump off a bridge, and you tell him that he'll die. An Empiricist would then ask whether anyone has died jumping off this particular bridge. And even if someone has, it doesn't mean this guy will!

I talked more about induction than deduction because most people are familiar with deduction. Most books on logic will focus on deduction, with possibly a smaller section on induction. David Kelley has one titled The Art of Reasoning. It's a general logic textbook, and although he gives some Objectivist-type examples (the role of government, for instance), it isn't explicitly Objectivist, and he doesn't say which of the many theories of logic presented Objectivism accepts. Four of the 18 chapters are on induction.

If you want to go into more detail on any of these topics, let me know.
