It's been ages since I've thought about this, but I spent this morning jotting down some ideas about epistemic modals that I thought I'd post here. It's nothing revolutionary. Heck, it's not particularly interesting. But, here goes.
Recall that Rysiew wanted to show that concessive knowledge attributions (CKAs) such as (1) can express true propositions, a point he was keen to defend because he thought that this captured the fallibilist's view:
(1) I know that Harry is a zebra, but it might be that he's a cleverly disguised mule.
Rysiew gave the fallibilists something they wanted by offering a pragmatic explanation of why (1) sounds contradictory but, as Stanley pointed out, he didn't give the fallibilists what they needed: he didn't explain how (1) could be true. Indeed, it seems (1) couldn't be true given a standard treatment of epistemic modals:
(EPk) p is epistemically possible for S iff ~p isn’t obviously entailed by something S knows.
Given (EPk), (1) entails:
(4) I know that Harry is a zebra, but I don’t know that Harry isn’t a painted mule and thus a painted non-zebra.
But, (4) couldn't be true unless said by someone who is incredibly dense.
Dougherty and Rysiew recommend replacing (EPk) with (EPe):
(EPe) p is epistemically possible for S iff ~p isn’t entailed by S’s evidence.
The problem, however, is that CKAs turn out to be contradictory when p is known non-inferentially if we add:
(IKSE) Non-inferential knowledge that p suffices for p's inclusion in your evidence.
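The tension between (EPe) and (IKSE) can be spelled out schematically (the notation here is mine, not Dougherty and Rysiew's):

```latex
% E(S) = S's evidence; K_n(S,p) = S knows p non-inferentially.
\begin{align*}
\text{(EPe)} &\quad \Diamond_S\, p \iff E(S) \nvdash \neg p \\
\text{(IKSE)} &\quad K_n(S,p) \implies p \in E(S)
\end{align*}
% If S knows p non-inferentially, then p is in E(S), so E(S)
% entails p, and hence (by EPe) ~p is not epistemically possible
% for S. So "I know that p, but it might be that ~p" comes out
% contradictory whenever the knowledge is non-inferential.
```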
Note that a similar problem arises for Stanley’s (2005) defense of fallibilism. If we introduce some sceptical hypothesis (e.g., Cartesian demons or BIVs), it seems not improper to concede that it might be that there are no hands. It seems that Stanley can only say that (5) expresses a true proposition if (6) is true:
(5) It might be that there are no hands.
(6) My evidence doesn’t include the proposition that I have a hand.
Stanley’s fallibilist, you’ll recall, accepts (3) while rejecting (1):
(3) I know that Harry is a zebra, but the evidence I have to believe that this is so does not logically entail that Harry is not just a painted mule.
Stanley’s fallibilist thinks that the concessions are appropriate when the speaker does not have entailing evidence for the proposition she concedes she might be mistaken about. Given (IKSE) and the Moorean observation that we can know non-inferentially that we have hands, it follows that (6) is false.
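The step from (IKSE) to the falsity of (6) is just modus ponens (again in my own schematic notation):

```latex
% h = "I have hands"; E(S) = S's evidence;
% K_n(S,p) = S knows p non-inferentially.
\begin{align*}
\text{(IKSE)} &\quad K_n(S,p) \implies p \in E(S) \\
\text{Moore}  &\quad K_n(S,h) \\
\therefore    &\quad h \in E(S) \quad \text{i.e., (6) is false.}
\end{align*}
```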
The repair. Replace (EPk) with:
(EPc) p is epistemically possible for S iff ~p isn’t obviously entailed by something S knows with complete certainty.
According to (EPc), the concession that ‘It might be that ~p’ is just the acknowledgement that either p isn’t known or, if it is known, it isn’t known with complete certainty. The problem with Dougherty and Rysiew’s solution, arguably, was that they could sever the connection between (1) and (4) only by severing the connection between knowledge and evidence, insisting that it is much harder to acquire evidence than we might antecedently have thought. If we say that more is required for epistemic necessity than mere knowledge, we can say that less is required for open epistemic possibilities than either ignorance or the lack of entailing evidence.
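The three candidate definitions can be lined up side by side, which makes it easy to see why (EPc) leaves more possibilities open (the regimentation is mine; the post's glosses are informal):

```latex
% K = knows; C = knows with complete certainty; E(S) = S's evidence;
% \vdash_{obv} = obviously entails.
\begin{align*}
\text{(EPk)} &\quad \Diamond_S\, p \iff \neg\exists q\,[K(S,q) \wedge q \vdash_{\text{obv}} \neg p] \\
\text{(EPe)} &\quad \Diamond_S\, p \iff E(S) \nvdash \neg p \\
\text{(EPc)} &\quad \Diamond_S\, p \iff \neg\exists q\,[C(S,q) \wedge q \vdash_{\text{obv}} \neg p]
\end{align*}
% Since C(S,q) entails K(S,q) but not conversely, whatever (EPk)
% counts as epistemically possible, (EPc) counts as possible too,
% and not vice versa. That extra slack is what lets the fallibilist
% utter (1) without contradiction: knowing p without complete
% certainty leaves ~p epistemically open by (EPc).
```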
I think there’s some independent motivation for replacing (EPk) with (EPc).
First, think about cases of inductive knowledge. It seems odd to think that you could have knowledge of future events only if your beliefs that these future events will occur are epistemically necessary for you.
Second, think about the introduction of sceptical hypotheses into a conversation. Suppose knowledge is the norm of assertion. Suppose (EPk) is true. Suppose that concessions of the form ‘It might be that ~p’ are assertions. You should only concede ‘It might be that ~p’ if you are in a position to know that you don’t know that p. It seems, to me, that this makes it too difficult to get into the proper position to concede that you might be mistaken. If someone introduces the possibility that I’ve been hallucinating and I concede that I might be mistaken, am I really in a position to assert that either my belief is false, I don’t hold the belief, I don’t have adequate justification for that belief, or I’m in a Gettier case? I don’t think so. Denying that you know or knew is hard. Conceding that you might be mistaken is easy. It seems that either knowledge isn’t the norm of assertion, concessions aren’t really assertions, or the mere fact that you know doesn’t completely close some epistemic possibility.
Third, consider the contrast between (1) embedded and similar embedded statements:
(7) I believe that I know that Harry is a zebra, but it might be that Harry is just a painted mule.
(8) I believe that I know that Harry is a zebra, but he isn’t.
(9) I believe that I know that Harry is a zebra, but there’s no reason for me to believe that he is.
(10) I believe that I know that Harry is a zebra, but I don’t believe that Harry is a zebra.
If after seeing the zebra you raise the possibility that the zookeepers painted a mule and put it in the zebra cage, it seems that I could speak truthfully if I utter (7). When we embed these other claims, where the second conjunct denies that a condition necessary for knowledge obtains, the embedding doesn’t seem to wash away the sin of asserting (8), (9), or (10). If we take the effect of embedding these claims to be that the speaker thinks it is not altogether unlikely that the embedded claims are true, the fact that we find (7) acceptable is some indication that the defectiveness of CKAs is due not to the fact that they express obvious falsehoods but to something else.
Finally, consider the sort of cases that led Radford (1966) to say that knowledge doesn’t require belief. Pressed for answers on a quiz show, a contestant consistently gives the right answers and is pleasantly surprised to discover that the answers she’s giving are correct. It seems that as she’s doing this she might rightly think to herself that she might be mistaken, while someone at home might be right to say that she knew the answers to the questions.