Jessica Brown (forthcoming) identifies a potential problem for Williamson’s (2000) approach to evidence and knowledge.
Williamson identifies your evidence with the propositions that you know:

E=K: S’s evidence includes p iff S knows p.

He also accepts this account of evidential probability:

EP: The evidential probability of a proposition p for you is the conditional probability of p on your total evidence.

Taken together, these two claims commit Williamson to a form of infallibilism:

Infallibilism: If S knows p, the evidential probability of p on S’s evidence is 1.

Why is Williamson forced to choose between inductive skepticism and the possibility of p being evidence for p? Consider a standard approach to evidential support, one that Williamson accepts:

EV: e is part of S’s evidence for h iff S’s evidence includes e and P(h/e) > P(h).

In the case of inductive inference, prior to believing p the evidential probability of p is less than 1. After adding p to your evidence, its probability rises to 1. So, in the case of inductive inference that results in knowledge, p is evidence for p because (i) p is part of your evidence (by E=K and the anti-skeptical assumption) and (ii) P(p/p) > P(p).
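The probabilistic point can be sketched with a toy finite probability space. The four equiprobable worlds and the particular proposition below are my own illustration, not anything from Brown or Williamson:

```python
from fractions import Fraction

# Four equiprobable worlds; proposition p is true in worlds 0-2.
worlds = {0, 1, 2, 3}
p = {0, 1, 2}

def prob(prop, evidence):
    """Conditional probability of prop given evidence (both sets of worlds)."""
    return Fraction(len(prop & evidence), len(evidence))

E_before = set(worlds)       # total evidence before the inference (trivial)
print(prob(p, E_before))     # 3/4: p is not yet certain

E_after = E_before & p       # by E=K, coming to know p adds p to the evidence
print(prob(p, E_after))      # 1: the probability of p on the new evidence

# EV's condition is met: P(p/p) > P(p), so p counts as evidence for p.
assert prob(p, E_after) > prob(p, E_before)
```

The sketch just makes vivid the feature Brown exploits: the only change between the two evidence sets is the addition of p itself, and that addition raises p's probability to 1.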

How serious a problem is this for Williamson? Brown thinks it's quite serious. I disagree, but that's for another post.

I have written more about this in a paper that will probably never see the light of day (unless someone can think of a good journal for it):

http://www.academia.edu/4329464/Infallibilism_Evidence_and_Infelicity

## 8 comments:

It's not even clear to me what the problem here is supposed to be. This just looks like an argument that Williamson's view has certain implications. Are these implications supposed to be obviously false or something?

I agree! I don't see why it is a troubling implication, but I just posted some setup because Moti asked about the reasoning...

Hi Clayton,

Thanks for posting this. I’d have to check out Brown’s paper to see what the argument is.

At first glance, I don’t quite see what is the alleged dilemma into which Williamson is supposedly forced. Assuming E = K, if S knows that p, then p is the case and Pr(p) = 1. In that case, p cannot be evidence for p, in the sense of increasing the probability of p, since nothing can increase Pr(p), it’s already 1. On the other hand, if Pr(p) < 1, then S doesn’t know that p, which means that p cannot be evidence for p, since p is not included in S’s evidence.

But maybe I am missing something here.

Hi Moti,

I think that the idea is that there's a body of evidence that changes its composition. Initially, the probability of p on E is less than 1. Afterwards, the probability of p on E is 1. The only change to E is the addition of p. Does that seem right?

Since I am a total ignoramus, which Williamson (2000) work do you have in mind? I am finding a few.

Ah, it's Knowledge and its Limits!

C

Hi Clayton, nice post. I don't see why it is a troubling implication either.
