Tuesday, October 9, 2012

When is true belief knowledge?

I'm finishing off my review of Foley's new book.  Thought I'd post some initial thoughts here.  My overall impression is that it's a bold attempt to introduce a new way of thinking about knowledge and that Foley's turn might be fruitful. It's hard to say at this stage because it's difficult to determine the implications of the account he offers.  Here, I raise some problems that I think arise for a version of his view.  It might be that if he modified his views only slightly, none of these problems would arise.  Foley's account is that if your belief about p doesn't constitute knowledge, it's either because it doesn't fit the facts or because there is some important truth that you're missing.  What's needed to 'turn' a true belief into knowledge is just more true belief.  Knowledge is true belief plus adequate information (where adequate information is understood in terms of true belief).


How does Foley's approach handle lottery propositions?  If somebody believes correctly that her ticket is a loser, we don't credit her with knowledge.  What's missing?  After the drawing was held, Billy believes that his ticket, #345, lost, but he won't know that it lost simply on the basis of his correct beliefs about the setup of the lottery and the probability of losing. Foley says that his ignorance is due to some important gap in his information. For example, he doesn't have this bit of information: ticket #543 was the winner (72).

Is this approach preferable to, say, an approach on which there's a sensitivity condition or a safety condition?  That's not clear.  The paper announces that #543 is the winner. If Billy reads that and he knows that his ticket is #345, he'll know his ticket lost. What if the paper didn't announce the winning number but simply announced that Billy's ticket lost?  If he reads that, he should know he lost.  If that's sufficient, what important truth was Billy initially missing?  The important piece of information he's missing can't be that his ticket lost.  He'd have that information if he believed the true proposition that his ticket lost.  He has that belief, so he has that information.  Maybe the important truth he's missing is not a truth about the outcome of the lottery but a truth about what it says in the paper.  If he already has the information that he'd get from the paper, what does the information about what it says on the page add?  What role does the paper play?  One thought might be that the paper is run in such a way that beliefs formed on the basis of that paper are sensitive or safe. The need for sensitive or safe belief would explain the need to consult the paper, but Foley's account denies that there's any general sensitivity or safety condition. On sensitivity or safety approaches, there's an explanation as to why Billy needs to look at the paper. On Foley's, I don't see why there should be one.

Rationality and Justification
On Foley’s account of knowledge, rationality and justification don’t seem to be necessary for knowing p.  A virtue of this approach, he says, is that:
It frees the theory of knowledge from the dilemma of either having to insist on an overly intellectual conception of knowledge, according to which one is able to provide an intellectual defense of whatever one knows, or straining to introduce a nontraditional notion of justified belief because the definition of knowledge is thought to require this (126).
I don't think that this dilemma is all that serious.  Many plausible accounts of justification have been offered that preserve the link between knowledge and justification without leading to an overly intellectual conception of either knowledge or justification.  It seems we have some independent reason to think that knowledge and justification go together.  Suppose you know (p or q). Suppose you justifiably believe ~p, but don't know that ~p. Suppose you infer q.  It doesn't seem to follow that you know q, because q isn't derived from known premises.  It does seem, however, that there's something going for your belief about q because it's derived from premises that are either known or justified.  Why not think of q as justifiably believed?  To accommodate the intuition that there's something going for the belief, it's tempting to think of it as justified. To think of it as justified, however, I think we'd want to say that it came from justified beliefs.  To say that, we'd want to say that you didn't just know (p or q), but that you justifiably believed it. Assuming that there is a connection between knowledge and justification helps us make sense of what's happening in cases that have this shape.[1]

Suppose that you take a true-belief pill. The pill induces scores of new true beliefs.  Depending upon which pill you take, you might suffer from one of two side effects.  First, it was found that some users would form a false belief incompatible with every true belief that they formed as a result of taking the pill.  While they moved towards an accurate and maximally comprehensive set of beliefs, they also acquired a comprehensive set of false beliefs. I don’t think that their new beliefs constitute knowledge. The problem is familiar from attempts to formulate omniscience in terms of knowing all the truths.  There are no important truths that you lack. The problem is that there are too many falsehoods.  Giving you more truths won’t help you dig out.  (Yes, there’s a sense in which you would be aware of which falsehoods were false. If ‘awareness’ is cashed out in terms of true belief, you will believe truthfully that the falsehoods are false. The trouble is that you will also seem to be aware of the truths as being false.) Second, it was found that some users would form further true beliefs. For each first-order belief formed by taking the pill, the subject believed that that belief was one that the subject could not rationally accept.  It seems that if you correctly believe of your own attitude towards p that it’s irrational for you to have that attitude, you don’t know p.  Adding in further true beliefs about the power of the pill only makes you seem crazier. 

To handle these cases, Foley can say that there's a minimal condition of rationality or consistency required for knowledge.  If it were robust enough to deal with the problem cases, however, it would seem to require something akin to a familiar sort of rationality or justification requirement on knowledge (e.g., something like an internalist view on which all justifiably held beliefs are backed by internally available grounds).

In Chapter 20, Foley discusses cases in which we admit that we're not in a position to know something.  Some philosophers think that if you appreciate that you're not in a position to know p, you can't then rationally believe p.  Foley thinks that there's nothing at all puzzling about believing what you concede you don't know.  He's right, I think, that reports of the form 'I believe p, but I don't know it' are common (101). Still, there are puzzles lurking here. We often say 'I believe p' as a way of hedging. It's a way of expressing that we don't take on the commitment to the truth of p typical of outright or full belief.  What about cases of full belief in which you concede you don't know?  Consider, 'Dogs bark but I don't know that they do'.  Here, the speaker expresses the belief that dogs bark and concedes that he doesn't know that they do.  This strikes many of us as irrational.  Can you know the proposition expressed?  To know that dogs bark, there would have to be no important truths that you were missing.  The second conjunct is true iff you don't know that dogs bark.  Assuming you believe correctly that dogs bark, the second conjunct couldn't be true unless there's some important truth that you're missing.  Foley's account explains why you can't know both conjuncts.

Foley's account nicely handles this sort of case, but what about cases of the form, 'p, but my evidence doesn't show/establish that p'?  It doesn't seem that you can know that the proposition this expresses is true.  Why can't you know that this is so?  It's perfectly consistent, so its status as unknowable isn't down to the fact that it's necessarily false.  If it's not known, it has to be because there's some important truth that you're missing.  I can't think of what truth that might be.  One could argue that this is unknowable on the following grounds:
To know the conjunction, you’d have to know both conjuncts. To know p, you’d have to have evidence that establishes p.  If you have that evidence, the second conjunct is false and the conjunction is not known. If you lack that evidence, you don’t know the first conjunct and the conjunction is not known. The conjunction is not knowable.
I don’t think this explanation is available to Foley because he wouldn’t want to say that knowing p requires having evidence that establishes p.  One could offer a different style of explanation:
To know the conjunction, you’d have to know both conjuncts.  To know p, you can’t be irrational in believing p.  Believing the second conjunct makes believing the first conjunct irrational.  You can’t know the conjunction without believing the second conjunct.  The conjunction is not knowable.
On neither approach to explaining why the conjunction is unknowable does it seem that there is an important truth that you’re missing. On the first, you don’t satisfy an evidential requirement that Foley thinks isn’t required for knowledge and can’t be satisfied simply by having more true beliefs. On the second, your problem has to do with violating a requirement that says, in effect, that knowledge of p requires that you’re not irrational in believing p.  Remedying that defect requires believing less or finding new evidence. It’s not a matter of missing some important truth.


[1] See Williamson (2007) for discussion of this sort of argument.


Foley’s account of knowledge has paradoxical implications.  Consider Sartwell’s (1991) view that knowledge is merely true belief and consider the following:
(*) You don’t know (*).
Suppose (*) is false. If it is, you know (*).  You can't know (*), however, if (*) is false, since knowledge is factive. So, the supposition is false. If you've followed the reasoning this far, you'll be tempted to conclude that (*) is true. If you believe (*) on the basis of the reasoning just sketched, however, and (*) is true, Sartwell's account implies that (*) is known.  This contradicts (*).  Either way, on Sartwell's view, (*) generates a contradiction.  To avoid generating the same contradiction, Foley has to avoid saying that (*) is known.  On his view, your belief about p constitutes knowledge so long as p is true and there's no important truth that you're missing.  For the reasons just sketched, you might believe (*) and it might seem that (*) is true.[1]  What important truth might you be missing that explains why you don't know (*)?  I can't think of one.  Your problem doesn't seem to be due to some lack of information.


[1] I owe this example to Brian Weatherson. He discusses its significance for various theories of knowledge and for the norms of assertion on his blog, Thoughts, Arguments, and Rants (http://tar.weatherson.org/2009/11/19/your-favourite-theory-of-knowledge-is-wrong/).
