Sunday, October 14, 2012

Draft of Foley Review


Richard Foley, When is True Belief Knowledge? Princeton University Press, 2012.

Introduction
The orthodox view is that true belief is sometimes knowledge.  What distinguishes the true beliefs that make for knowledge from those that don’t? Is it that a person is justified in believing a true proposition? No, not if Gettier is right. Is it reliability? Sensitivity? Safety? Aptness? No, not if Foley is right. If Foley is right, it was a mistake to try to find some general differentiating condition that distinguishes knowing p from being justified in believing correctly that p.  In his bold new book, Foley argues that what we need to add to true belief to get knowledge is more true belief.  If you believe p without knowing p, you’re either mistaken about p or there’s some important truth that you’re missing.  If you’re right about p and you have adequate information, you’ll know p.
What does it take to have adequate information?  Foley understands information as true belief.  Adequacy isn’t understood in terms of quantity.  You might have little information concerning p and still have enough to know p.  The adequacy of your information doesn’t supervene upon facts about the true beliefs you have.  Somebody could have the very same true beliefs that you do and not know something you do.  Adequate information seems to be defined by what’s missing.  Your information is adequate if you’re not missing an important truth.  If your belief about p is correct and there’s no important truth that you’re missing, you know that p. If there’s some important truth that you’re missing, you won’t know that p. 
What’s an important truth? Foley doesn’t think that there’s much that important truths have in common.  Just as particularists seem to think that right acts have little in common apart from rightness, Foley seems to think that what important truths have in common is importance.  He recommends an “ecumenical” approach.  Sometimes an important truth might concern a clue that the subject is missing. Sometimes it might have to do with the reliability of the processes or methods responsible for a subject’s beliefs.  Sometimes differences in practical stakes mean that truths that aren’t important for you will be important to others.  Foley is skeptical of the commonly held view that there’s some general way of characterizing the defects and depravity that undermine knowledge.  If there’s no general account of important truths, how can Foley’s approach shed light on the notion of knowledge?  He thinks we have a knack for finding important truths.  In any of the normal cases where a subject’s true belief doesn’t constitute knowledge, he thinks we’ll find the important truth if we look for it.
It’s not difficult to recommend Foley’s excellent book.  He has offered a genuinely novel approach to the theory of knowledge.  It’s not immediately clear whether his approach improves upon the approaches you’ll already find in the literature.  If you’re dissatisfied with the standard accounts of knowledge, you’ll likely agree that a new approach is called for.  Time will tell whether Foley’s approach will advance the discussion.
When is True Belief Knowledge? is divided into twenty-seven chapters.  In the first seven, Foley outlines the basic contours of his account. In the remaining chapters, he addresses some puzzles, discusses different sources of knowledge, and argues that the theories of knowledge and rationality/justification should be developed independently from one another.  In this review, I’ll identify some features of his view that strike me as being the most problematic.    

Rationality and Knowledge
According to Foley, knowledge doesn’t require rationality or justification.  A virtue of this approach, he says, is that:
It frees the theory of knowledge from the dilemma of either having to insist on an overly intellectual conception of knowledge, according to which one is able to provide an intellectual defense of whatever one knows, or straining to introduce a nontraditional notion of justified belief because the definition of knowledge is thought to require this (126).
If rationality/justification aren’t understood in terms of their relationship with knowledge, how should they be understood?  Foley offers an account of rationality/justification in Chapter 26.  Believing p is epistemically rational, on his view, if it is epistemically rational for you to believe that believing p would acceptably satisfy the epistemic goal of now having accurate and comprehensive beliefs (148).  Believing p is justified if it is epistemically rational to believe that your procedures with respect to p have been acceptable given your goals and your limitations (132).  Epistemic rationality is, on Foley’s view, the foundational concept in an account of practical rationality.  Whether it would be rational to φ in sense X (e.g., moral, prudential, etc.) depends upon the rationality of believing that φ-ing would do an acceptably good job at satisfying your goals of type X (128).[1]  Perhaps if ‘goal’ is understood broadly enough, this can be extended into an account of overall practical rationality.  Some provision should probably be made to handle cases where agents have adopted confused or unreasonable goals (e.g., it isn’t clear that there’s a rational way to go about trying to count the moon, but perhaps somebody could have that as a goal).
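To make the structure of these definitions easier to survey, here is a rough regimentation (the notation is mine, not Foley’s: read ‘ER’ as ‘it is epistemically rational that’, ‘J’ as ‘justified’, ‘B’ as ‘believes’, and ‘φ’ as a schematic action):

\begin{align*}
ER(Bp) &\leftrightarrow ER\,B(\text{believing } p \text{ would acceptably satisfy the goal of now having accurate and comprehensive beliefs})\\
J(Bp) &\leftrightarrow ER\,B(\text{one's procedures with respect to } p \text{ have been acceptable, given one's goals and limitations})\\
R_X(\varphi) &\leftrightarrow ER\,B(\varphi\text{-ing would do an acceptably good job of satisfying one's goals of type } X)
\end{align*}

The three clauses correspond to the passages at 148, 132, and 128 respectively.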
One area of potential concern has to do with pragmatic encroachment.  At various places Foley expresses some sympathy for the view that knowledge can be harder to attain when the practical stakes are high.  It’s not clear what role, if any, practical significance plays in his account of epistemic rationality.  That’s because it’s not at all clear what role the practical stakes can play in determining whether believing p would satisfy your twin epistemic goals. Provided that p isn’t itself about some practical subject matter, it seems that the account would exclude practical considerations.  Would an account that combines a purist account of epistemic rationality with an impurist account of knowledge be stable? Perhaps; there’s nothing obviously incoherent about the combination.  Would it accommodate our intuitions? That’s hard to say.  Much of the intuitive motivation for accepting pragmatic encroachment has to do with intuitions about when it’s rational to proceed on the information you have and when it would be rational to search for additional evidence before making a decision.[2]  In light of this, if Foley is right that the rational thing to do is determined by rational beliefs about what would do an acceptable job of meeting your goals, it’s hard to see how to square the standard intuitions offered in support of pragmatic encroachment with a seemingly purist account of rational belief.
A second area of potential concern has to do with the seriousness of the dilemma Foley wishes to avoid.  There are many plausible accounts of rational/justified belief that would preserve the link between knowledge and justification without leading to an overly intellectual conception of either knowledge or justification.  (It’s not clear, for example, why Foley’s own theory of rational belief doesn’t dissolve the dilemma, since it’s not clear whether there are cases where you know p but it’s not rational to believe that your belief concerning p would do an acceptably good job of meeting your own epistemic goals.)  Moreover, we have some independent reason to think that knowledge and justification go together.  Suppose you know (p or q) and you justifiably believe ~p without knowing ~p.  You infer q.  It seems that there must be something going for believing q, because you’ve deduced q from premises justifiably believed or known.  We can’t assume that q is known, because it isn’t deduced from a set of known premises (and it’s consistent with what’s been said that q is false).  To accommodate the intuition that there’s something good about believing q, we either need to say that the belief is rational/justified or introduce some wholly new term of epistemic approval.  I can’t see any good reason to coin a new term for beliefs that, while not themselves justified or known, are good in some way because they were deduced from premises justifiably believed or known, so I’d prefer to describe the belief as rational or justified.  This seems to require a link between knowledge and rationality/justification.  Assuming that there is a connection between knowledge and justification helps us make sense of what’s happening in cases with this shape.[3]
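The shape of this argument can be displayed schematically (again, the regimentation is mine: ‘K’ for knows, ‘J’ for justifiably believes, ‘B’ for believes):

\begin{align*}
&1.\ K(p \lor q) && \text{premise}\\
&2.\ J(\neg p) \land \neg K(\neg p) && \text{premise}\\
&3.\ B(q) && \text{formed by competent deduction from the contents of 1 and 2}\\
&4.\ \neg K(q) && q \text{ isn't deduced from known premises, and may be false}\\
&5.\ \text{believing } q \text{ enjoys some positive epistemic status} && \text{intuition}
\end{align*}

The pressure point is line 5: absent a wholly new term of epistemic approval, ‘rational’ or ‘justified’ is the natural label for that status, and that reinstates the link between knowledge and justification.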
Let me mention one final concern.  One cost of severing the connection between rationality and knowledge, which has emerged from the recent literature on epistemic norms, is that it becomes difficult to explain why certain combinations of belief and concessions about what you’re not in a position to know strike us as irrational.  If knowing has nothing at all to do with rationality and rationality has nothing at all to do with knowledge, why is it irrational to believe outright, say, that dogs bark while conceding that you don’t know whether they do? This is easily explained on views that treat knowledge as a goal, an aim, or a standard of correctness and use the regulative function of knowledge to explain the standards of rationality.
Is knowledge a mutt?
On Foley’s approach, pedigree doesn’t matter in the way that it does in more familiar accounts of knowledge.  He doesn’t think that reliability, for example, is a necessary condition for knowledge.  He does acknowledge that it will often seem to us that a case of unreliably formed, true belief isn’t a case of knowledge, but he thinks that the reason that the subject doesn’t know is that the subject is missing an important truth.  It’s not unreliability, per se, that undermines the belief’s epistemic standing.
To test this, he thinks we should consult our intuitions about cases involving subjects who have maximally comprehensive and accurate sets of beliefs.  Consider an example:
Imagine that Sally’s beliefs are as accurate and comprehensive as it is humanly possible for them to be. She has true beliefs about the basic laws of the universe, and in terms of these she can explain what has happened, is happening, and will happen. She can explain the origin of the universe, the origin of the earth, the mechanisms by which cells age, and the year in which the sun will die.  She even has a complete and accurate explanation of how it is that she came to have all this information.  Consider a truth p-cells about the aging mechanism in cells.  Sally believes p-cells, and because her beliefs about these mechanisms are maximally accurate and comprehensive, there are few gaps of any sort in her information, much less important ones. Thus, she knows p-cells (33).
It’s consistent with the story that Sally doesn’t meet the conditions on knowledge imposed by a reliabilist account of knowledge.  Let’s stipulate that the processes that produce Sally’s beliefs are unreliable. We can suppose that it was a series of strange processes and unlikely events that led her to believe p-cells. Under these conditions, is Foley right that Sally knows?
I don’t share Foley’s intuition about the case.  If we stipulate that Sally is trapped inside Nozick’s experience machine, I don’t think she knows p-cells.  Given this stipulation, I also worry that the case hasn’t been described in suitably neutral terms. Suppose someone believes correctly that the barn burned down because a cow kicked over a lantern.  Suppose, however, that she doesn’t know that the barn burned down, doesn’t know that a cow kicked over a lantern, and doesn’t know that the barn burned down because a cow kicked over a lantern. (Because our subject has been stuffed into Nozick’s experience machine, her beliefs are only accidentally correct.)  Can she explain why the barn burned down?  I don’t think so.  She can explain why barns burn, why cows topple lanterns, etc., but she cannot explain why events she didn’t know about transpired.  Give Sally all the knowledge she needs to be able to explain these things, and I’d probably agree that she knows p-cells. I’m less inclined to agree if the case is described carefully as one in which most of her beliefs are only accidentally true.
Anticipating this response, Foley tries to motivate his description of the case by noting that “Sally is fully aware that however strange and unlikely this history may be, in her case it led to her having maximally accurate and comprehensive beliefs” (34).  I still have reservations. First, I don’t think he’s entitled to describe the case as one in which Sally is ‘aware’ of these facts. Can you be aware that p if you don’t know that p?  He might argue that Sally is aware of the facts related to p-cells, but that’s a controversial description that needs justification.  Second, Sally’s beliefs about her own strange and unlikely history are among the beliefs that aren’t grounded by reliable processes.  If we think those beliefs don’t constitute knowledge, it’s not clear that they’d help to turn her belief about p-cells into knowledge.

Lotteries
How does Foley’s approach handle lottery propositions?  After the drawing was held, Billy believes that his ticket, #345, lost, but he won’t know that it lost simply on the basis of his correct beliefs about the setup of the lottery and the probability of losing. Foley says that his ignorance is due to some important gap in his information. For example, he lacks this bit of information: ticket #543 was the winner (72).
Is this approach preferable to approaches that impose a sensitivity or safety condition?  That’s not clear.  If the paper announces that #543 is the winner, Billy will learn by reading the paper that he lost. So far, everyone is on the same page. What if the paper didn’t announce the winning number but simply announced that Billy’s ticket lost?  If he reads that, he should know he lost.  If that’s sufficient for knowledge, what important truth does Billy have now that he lacked before reading the paper?  The important piece of information he was missing can’t be that his ticket lost.  If information is true belief, that’s information he already had. Maybe the important truth he was missing is a truth about what it says in the paper.  But this would be an odd way to account for the intuition.  You might think that information about the paper only matters because it provides you with information (in some intuitive sense of ‘provides information’ that’s more demanding than the notion Foley works with) about the winners and losers.  A natural explanation of why reading the paper matters is that it’s only after you’ve read the paper that you can have a sensitive or safe belief.  So while it’s not clear that our intuitive verdicts about lotteries are at odds with Foley’s view, it’s also not clear whether his view has the explanatory resources to account for those intuitions in the straightforward ways that rival accounts do.

Ignorance as a lack
In Chapter 20, Foley discusses cases in which we admit that we’re not in a position to know something.  Some philosophers think that if you appreciate that you’re not in a position to know p, you can’t then rationally believe p.  Foley thinks that there’s nothing at all puzzling about believing what you concede you don’t know.  He’s right, I think, that reports of the form ‘I believe p, but I don’t know it’ are common (101). Still, there are puzzles lurking here. We often say ‘I believe p’ as a way of hedging; it’s a way of signaling that we don’t take on the commitment to the truth of p typical of outright or full belief.  What about cases of full belief in which you concede you don’t know?  Consider ‘Dogs bark, but I don’t know that they do’.  Here the speaker expresses the belief that dogs bark and concedes that he doesn’t know that they do.  This strikes many of us as irrational.  Can you know the proposition expressed?  To know that dogs bark, there would have to be no important truths that you were missing.  The second conjunct is true iff you don’t know that dogs bark.  Assuming you believe correctly that dogs bark, the second conjunct couldn’t be true unless there were some important truth you were missing.  Foley’s account thus explains why you can’t know both conjuncts.
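Schematically, with ‘M’ abbreviating ‘there is an important truth the subject is missing’ (once more, the regimentation is mine rather than Foley’s):

\begin{align*}
&1.\ K(\text{dogs bark}) \rightarrow \neg M && \text{Foley's adequacy condition}\\
&2.\ \bigl(\text{dogs bark} \land B(\text{dogs bark}) \land \neg K(\text{dogs bark})\bigr) \rightarrow M && \text{true belief without knowledge implies a missing truth}\\
&3.\ \text{knowing the conjunction requires both } K(\text{dogs bark}) \text{ and the truth of } \neg K(\text{dogs bark})\\
&4.\ \text{by 1 and 2, that would require } \neg M \text{ and } M \text{ together, so the conjunction can't be known}
\end{align*}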
Foley’s account nicely handles this sort of case, but I don’t think it can easily handle beliefs expressed by statements of the form, ‘p, but my evidence doesn’t show/establish that p’.  It doesn’t seem that you can know that the proposition this expresses is true.  How can we explain this?  The proposition expressed isn’t necessarily false. If someone believed this without knowing that it’s true, Foley’s account implies that there’s some important truth that the subject is missing.  I can’t think of what that truth might be.
One could try to explain why the proposition can’t be known as follows:
To know the conjunction, you’d have to know both conjuncts. To know p, you’d have to have evidence that establishes p.  If you have that evidence, the second conjunct is false and the conjunction is not known. If you lack that evidence, you don’t know the first conjunct and the conjunction is not known. The conjunction is not knowable.
This explanation isn’t available to Foley because he wouldn’t want to say that knowing p requires having evidence that establishes p.[4] 
One could offer a different style of explanation:
To know the conjunction, you’d have to know both conjuncts.  To know p, you can’t be irrational in believing p.  Believing the second conjunct makes believing the first conjunct irrational.  You can’t know the conjunction without believing the second conjunct.  The conjunction is not knowable.
If he offers this second sort of explanation, he can say that having evidence that shows that p isn’t necessary for knowing. Instead, he can say that not believing that one lacks this evidence is necessary for knowing.  While this seems to be the better route for Foley to take, it faces a handful of problems.  First, this explanation assumes that your ignorance is due to a presence, not an absence.  It’s not due to the fact that you’re missing some truth, but due to the presence of a set of attitudes that’s rationally self-defeating.  Second, this explanation is shallow.  If it didn’t matter whether you had evidence that showed that p, why would it matter what view you had on whether you had this evidence?  Some explanation of the irrationality of believing p whilst believing that your evidence doesn’t show p is in order.  Does it fall out of Foley’s account of rationality? It’s not obvious that it does.  Moreover, it’s not clear that Foley’s account of rationality will help him explain the relevant data if it’s part of Foley’s account of knowledge that knowledge doesn’t require rationality.  
How serious are the problems discussed above? Foley might be right that ignorance is typically due to some lack or deficiency. The cases discussed in this section suggest, however, that ignorance isn’t always due to a missing piece of information.  Some true conjunctions might be unknowable because it would be irrational to believe the conjuncts in combination.  The irrationality precludes knowledge. Add all the true beliefs you like and you won’t restore the rationality needed for knowledge.

Knowledge Blocks
Foley acknowledges that a pure version of his view might be difficult to defend. Conceding that his account won’t accommodate all of our intuitions, he suggests that a perfectly good fallback position would be one that acknowledges ‘knowledge blocks’. Think of a knowledge block as something that interferes with the normal conditions for knowledge, say, by preventing the subject from meeting some minimum standard of rationality, reliability, tethering of belief to experience, etc.  On the modified version of the view, knowledge is true belief with adequate information and no knowledge blocks.
To accommodate our intuitions, it seems that Foley would need to introduce knowledge blocks, and in doing so he would have to impose general rationality and reliability requirements on knowledge.  Can he do this while maintaining the distinctiveness of his approach?  That remains to be seen.  It depends upon whether the notion of an important truth does any explanatory work once a sufficient set of knowledge blocks is introduced.
  
References
Adler, J.  2002.  Belief’s Own Ethics. MIT Press.
Fantl, J. and M. McGrath.  2002.  Evidence, Pragmatics, and Justification.  Philosophical Review 111: 67-94.
Williamson, T.  2007.  On Being Justified in One’s Head. In M. Timmons, J. Greco, and A. Mele (eds.), Rationality and the Good: Critical Essays on the Ethics and Epistemology of Robert Audi. Oxford University Press.


[1] As stated, the account is sketchy.  There are two areas that could use further discussion. The first is that he provides an account of goal-relative practical rationality, but no account of overall practical rationality.  Given the goal of meeting your moral obligations, it would be practically rational in the moral sense to φ if it is rational to believe that φ-ing would do acceptably well at meeting that goal. Given the goal of looking after your own interests, it would be practically irrational in the prudential sense to φ if it is rational to believe that φ-ing would prevent you from meeting that goal.  What about all-things-considered practical rationality?  Is that notion confused?  Can we provide an account of it in terms of, say, some overarching goal?  He doesn’t say. The second is that he says nothing about the coherence or intelligibility of the goals. Can’t there be goals that are unintelligible or incoherent?  Are there practically rational ways to go about trying to count the moon?
[2] See Fantl and McGrath (2002) for discussion.
[3] See Williamson (2007) for discussion of this sort of argument.
[4] Adler (2002) argues that reflection on Moore’s paradox reveals that this requirement must be met to know and to satisfy the normative standards governing belief.
