Epistemic closure, at least the single-premise variety, is usually formulated like this:
(EC) IF you know that P and you know that (if P, then Q), THEN you know or are at least in a position to know that Q.
There are loads of alleged counterexamples. My favorite is the zebra one. You go to the zoo. You see a zebra in the cage. You believe that there's a zebra in the cage, and intuitively you know that there is a zebra in the cage. You also know that if there is a zebra in the cage, then there is not a cleverly disguised mule in the cage. This is because nothing can be both a zebra and a cleverly disguised mule. Suppose that you go on to believe that there is not a cleverly disguised mule in the cage on the basis of this bit of reasoning. It seems to many, including me, that you do not know that there is not a cleverly disguised mule in the cage. It is relatively hard to know that something is not a cleverly disguised mule. It requires thorough investigation; it can't be known simply by looking at a zebra-looking creature.
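The shape of the counterexample can be set out in standard epistemic-logic notation (the letters z and m are my abbreviations, not in the original; K reads "you know that"):

```latex
% z: there is a zebra in the cage
% m: there is a cleverly disguised mule in the cage
\begin{align*}
  & Kz                && \text{(you know it's a zebra)}\\
  & K(z \to \lnot m)  && \text{(you know a zebra is not a disguised mule)}\\
  & \lnot K\lnot m    && \text{(yet, intuitively, you don't know it's not a disguised mule)}
\end{align*}
```

The three lines jointly contradict the closure principle, which would license inferring $K\lnot m$ from the first two.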
I find these sorts of counterexamples totally convincing. What can we do to save something like closure if we want to take the zebra intuition seriously? What's gone wrong with closure as it's stated?
In my view, what's gone wrong is a level confusion (see Alston). We forget that often, when we know something, we don't know that we know it; what we know is that we believe it. When you look at a zebra in a cage and form a corresponding belief, you (i) know that there is a zebra in the cage, at least typically, and (ii) know that you believe as much. But do you know that you know? To my ears, that would require knowing that the creature is not a painted mule. To know that you know, it seems, you have to know that the relevant alternatives do not obtain.
This suggests a strengthening of the antecedent of (EC).
(KK EC) IF you know that you know that P and you know that (if P then Q), then you know or are at least in a position to know that Q.
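In the usual modal notation, with K as the knowledge operator, the two principles differ only in the iterated K in the antecedent (a sketch; I'm glossing "know or be in a position to know" with a plain K in the consequent):

```latex
\begin{align*}
  \text{(EC)}    &\quad \bigl(Kp \land K(p \to q)\bigr) \to Kq\\
  \text{(KK EC)} &\quad \bigl(KKp \land K(p \to q)\bigr) \to Kq
\end{align*}
```

On (KK EC), the zebra case is no longer a counterexample: the strengthened antecedent fails, since you don't know that you know there's a zebra in the cage.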
Quickly, let me tie this together with another famous counterexample to (EC). This one is due to Kripke. Suppose that you know that P. You also know that if P, then any evidence suggesting not-P is misleading. So, by (EC), you know, or are in a position to know, that any evidence suggesting not-P is misleading; generalizing, any evidence suggesting the contrary of anything you know is misleading.
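Kripke's recipe runs through (EC) as follows. Writing S(e, ¬p) for "evidence e suggests that not-p" and M(e) for "e is misleading" (these predicate letters are my glosses, not Kripke's):

```latex
\begin{align*}
  1.\;& Kp
      && \text{(premise)}\\
  2.\;& K\bigl(p \to \forall e\,(S(e,\lnot p) \to M(e))\bigr)
      && \text{(premise)}\\
  3.\;& K\,\forall e\,(S(e,\lnot p) \to M(e))
      && \text{(from 1, 2 by (EC))}
\end{align*}
```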
This sounds like a recipe for dogmatism. Consider an instance. I know that my sister is in Boulder. Suppose that my mom calls and tells me that my sister is not in Boulder (perhaps she believes that my sister is not in Boulder). Certainly, I am not in a position to know that the evidence that my mom is giving me that suggests that my sister is not in Boulder is misleading.
What's gone wrong? Well, intuitively, what's gone wrong is that I don't know that I know my sister is in Boulder. I presume as much, as I believe it and think it's true. But I don't think I am in a position to know that I know that my sister is in Boulder. If I did know that I know my sister was in Boulder, I would know that the evidence my mom is giving me is misleading.
2 comments:
"To know that you know, it seems, you have to know that the relevant alternatives do not obtain."
Isn't that the usual principle for first-order knowledge? To know it's a zebra, you have to rule out the relevant alternatives: that it is a giraffe, etc. Being a "cleverly disguised mule", though an alternative, is not in ordinary conditions considered to be a relevant alternative. So that's how you can know it's a zebra, even though you can't rule out this fancy alternative.
Perhaps you mean to suggest that once we ascend to higher-order knowledge, a wider range of alternatives become "relevant". That's an interesting idea. Puzzling, though. Do you have any suggestions why that should be the case?
[Aside: Let R be the proposition that I know that P. It would be strange if the principles governing knowledge of R were wildly different from the principles governing knowledge of P. We should want to give a unified account of knowledge, which applies to knowledge of mental states in much the same way as it applies to knowledge of external facts.]
Here's a suggestion: a relevant alternative to my knowing that P is my justifiably yet falsely believing that P. In order to rule out this latter alternative, we need (descending now to the first level) to do more than just rule out ordinary alternatives to P (e.g. giraffes). I must also rule out the more distant alternatives that would falsify P while leaving it justifiable to believe (e.g. cleverly disguised mules). What do you think?
P.S. I think contextualism handles closure nicely. But I should post about this on my own blog.
I think you need to provide a definition of what you mean by "to know"!
Reading about epistemology on Wikipedia, knowledge is a subset of true beliefs.
On that definition, we cannot say that we know it's a zebra unless it is assumed to be a zebra! I don't see the problem you describe, according to that definition...