Tuesday, December 29, 2009

Tuesday, Tuesday, Tuesday!

Matt Weiner and I will be discussing truth and warranted assertion at 10:00. I'm not entirely sure where the talk is supposed to take place. Hope to see you there!

Sunday, December 27, 2009

Justification isn't the right to believe?

I've been reading a forthcoming piece by Jeffrey Glick (too lazy for PQ link) where he argues against the view that having a justification for believing is a matter of having a certain kind of epistemic right. I take the view to be an utter triviality, so I had to take a look. (I can't imagine what a case of having a right to believe without having a justification for believing would look like, and I can't imagine how a successful justification for believing wouldn't show that having the relevant belief is consistent with the satisfaction of epistemic duty.) We disagree about (JustRightEPR). I like it. He doesn't. He writes:
JustRightEPR. A candidate for duties S has a justified belief in a proposition p if and only if
(a) S believes that p
(b) S has no epistemic duty not to believe that p.

If we now allow that the reference to an epistemic right in (JustRight) is to be interpreted as the relatively weak (JustRightEPR), as Wenar concluded, then the path is laid to save the view that justification confers an epistemic right to believe, and this parallel between ethics and epistemology is preserved. For it is surely the case that when one’s total evidence justifies one in believing that p, there is no epistemic duty against believing p ...

But things are not as straightforward as they may first appear. When philosophers talk of having an epistemic right to believe, or being epistemically entitled to believe, or having the epistemic authority to believe, they have in mind something stronger than the mere permissibility which (JustRightEPR) implies. The epistemic privilege right to believe is entailed by having on-balance justification for a belief, but there is more to having on balance justification for p than simply having no epistemic duty not to believe that p. This difference is what undermines the (JustRightEPR) interpretation of (JustRight). [I disagree with this--CL]

Here is an example of an ordinary privilege right. Smith picks up a piece of seaweed floating in international waters. Her doing so is permissible; she has no obligation not to pick up the seaweed. If she had chosen instead to leave it be, she would have done something permissible. Smith has no obligation not to leave it be. If the seaweed in the example is replaced by an object in the water that would have some non-negligible benefit to Smith but would serve no significant purpose to anyone else, a shell which would complete her collection, perhaps, again if Smith picks it up she does something permissible. If she chooses not to pick it up, she also does something permissible. She has no moral obligation to pick up the shell. It certainly is in her interest to pick up the shell. She would benefit if she did so, and the effort to acquire the shell is minimal. It may be prudentially obligatory that Smith should pick up the shell, but that is not the same as moral obligation.

This structure is not reflected in an epistemic right to believe. Suppose a rational adult agent Jones who has a total body of evidence E is considering which doxastic attitude to hold towards some proposition p. Suppose E justifies p: very roughly, were Jones to believe that p on the basis of E, he would be justified in believing that p. But p is false, and so it is false that were Jones to believe that p on the basis of E, then he would know that p. If he does in fact believe that p, he will do what is epistemically permissible. He has no epistemic obligation not to believe p given the facts about his evidence. But if Jones instead does not believe that p, then to maintain parity with the discussion of moral privilege rights, it should often not be the case that he has done something epistemically impermissible. He should often lack an epistemic duty not to disbelieve that p. But it is false that often Jones does something epistemically permissible when he believes that not-p when his total evidence justifies p. Therefore the view that epistemic rights are merely privilege rights is false.

This strikes me as a rather strange argument. I can't see how the argument could succeed unless we were to assume that there was no epistemic duty to refrain from believing without adequate evidence. But isn't there such a duty?

Bracket that. Glick seems to assume that you have a claim right only when you also have a lot of latitude. But there's nothing to the concept of a claim right that suggests (in a way that is obvious to me) that such rights are enjoyed only when there's also a lot of latitude in how to exercise them. Yes, there's a difference between picking up seaweed and picking up an attitude, but I don't see why the fact that there's less doxastic latitude entails that (JustRightEPR) is false, since that claim entails nothing about the amount of doxastic latitude we have.

Here's a view on which justified belief is just the belief you have a claim right to. Your only epistemic duty is to refrain from believing what you don't know. I don't see anything above that suggests you couldn't defend this version of (JustRightEPR).
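
For what it's worth, here's one way to regiment that view (a sketch in my notation, not Glick's): read $B(S,p)$ as 'S believes p', $K(S,p)$ as 'S knows p', $D(S,\phi)$ as 'S has an epistemic duty to $\phi$', and $J(S,p)$ as 'S justifiedly believes p'. Then:

$$(\text{JustRight}_{\text{EPR}}) \qquad J(S,p) \leftrightarrow B(S,p) \land \lnot D(S, \lnot B(S,p))$$
$$(\text{Duty}) \qquad D(S, \lnot B(S,p)) \leftrightarrow \lnot K(S,p)$$

Substituting (Duty) into (JustRightEPR) gives $J(S,p) \leftrightarrow B(S,p) \land K(S,p)$, and since knowledge entails belief, $J(S,p) \leftrightarrow K(S,p)$: on this version of the view, the justified beliefs are just the ones that amount to knowledge.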

Saturday, December 26, 2009

Reasonable religious disagreement and the private evidence problem

Suppose Feldman is right that reasonable people cannot (in full awareness of each other) draw different conclusions from the same evidence while each regards the other as a peer. He thinks that in such cases, the reasonable thing to do is suspend judgment. He notes that in realistic cases, we won't have the same evidence to support our beliefs:
In any realistic case, the totality of one’s evidence concerning a proposition will be a long and complex story, much of which may be difficult to put into words. This makes it possible that each party to a disagreement has an extra bit of evidence, evidence that has not been shared. You might think that each person’s unshared evidence can justify that person’s beliefs. For example, there is something about the atheist’s total evidence that can justify his belief, and there is something different about the theist’s total evidence that can justify her belief. Of course, not all cases of disagreement need to turn out this way. But perhaps some do, and perhaps this is what the students in my class thought was going on in our class. And, more generally, perhaps this is what people generally think is going on when they conclude that reasonable people can disagree.

Can we say that reasonable disagreement is possible in cases where the parties to the disagreement have 'private evidence'? He says, "It is possible that the private evidence includes the private religious (or nonreligious) experiences one has", but he seems to think that these experiences won't present much of a problem. The idea is that the theist alleges that she has private evidence for her beliefs, and this allows someone with a conciliatory view to say that the theist is reasonable in the face of disagreement.

He says:
This response will not do. To see why, compare a more straightforward case of regular sight, rather than insight. Suppose you and I are standing by the window looking out on the quad. We think we have comparable vision and we know each other to be honest. I seem to see what looks to me like the dean standing out in the middle of the quad. (Assume that this is not something odd. He’s out there a fair amount.) I believe that the dean is standing on the quad. Meanwhile, you seem to see nothing of the kind there. You think that no one, and thus not the dean, is standing in the middle of the quad. We disagree. Prior to our saying anything, each of us believes reasonably. Then I say something about the dean’s being on the quad, and we find out about our situation. In my view, once that happens, each of us should suspend judgment. We each know that something weird is going on, but we have no idea which of us has the problem. Either I am ‘‘seeing things,’’ or you are missing something. I would not be reasonable in thinking that the problem is in your head, nor would you be reasonable in thinking that the problem is in mine.

I don't think this helps with mystical experience, not if the theist alleges that the object of such experiences chooses whom to reveal itself to. It's not as if you can sneak a peek at God, on the theist's view, by hiding behind a bush while God appears to someone else.

Many of Feldman's remarks have to do with feelings of obviousness and insight that the theist and atheist can share, but if these remarks are intended to deal with religious experience, they don't seem to work:
Similarly, I think, even if it is true that the theists and the atheists have private evidence, this does not get us out of the problem. Each may have his or her own special insight or sense of obviousness. But each knows about the other’s insight. Each knows that this insight has evidential force. And now I see no basis for either of them justifying his own belief simply because the one insight happens to occur inside of him. A point about evidence that plays a role here is this: evidence of evidence is evidence. More carefully, evidence that there is evidence for P is evidence for P. Knowing that the other has an insight provides each of them with evidence.

Suppose that evidence that there is evidence for P is evidence for P. Can't P be better supported by the ground-level evidence for P than by the evidence that such evidence exists? If so, will the atheist really 'share' the theist's evidence when the theist reports a mystical experience?
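
To make the worry concrete, here's a minimal Bayesian sketch (all numbers are hypothetical, chosen only for illustration). Let G be the hypothesis that God exists, E the theist's mystical experience, and R the atheist's evidence that the theist reports having had E. Since a report can occur without the experience, R discriminates between G and not-G less sharply than E does:

# A minimal Bayesian sketch of the gap between ground-level evidence and
# evidence of evidence. All numbers are hypothetical, for illustration only.

prior_G = 0.5  # prior probability that God exists

# E: the theist has the mystical experience. Suppose E would be very
# unlikely if there were no God.
p_E_given_G, p_E_given_notG = 0.8, 0.01

# R: the theist reports having had E. Reports can occur without the
# experience (confabulation, misdescription), so R discriminates less.
p_R_given_G, p_R_given_notG = 0.8, 0.3

def posterior(prior, p_given_h, p_given_noth):
    # Bayes' Theorem for a binary hypothesis
    numerator = prior * p_given_h
    return numerator / (numerator + (1 - prior) * p_given_noth)

print(posterior(prior_G, p_E_given_G, p_E_given_notG))  # ~0.988: theist, on E
print(posterior(prior_G, p_R_given_G, p_R_given_notG))  # ~0.727: atheist, on R

On these numbers the report still confirms G (the posterior exceeds the 0.5 prior), so 'evidence of evidence is evidence' is respected, but the atheist's evidence falls well short of the support the experience itself gives the theist.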

Suppose we think of evidence as non-inferential knowledge. The theist claims that they know non-inferentially that God is speaking to them. This allegation, if true, means that their evidence rules out the hypothesis that there's no God. I see no reason to think that the theist's report of such an experience gives the atheist evidence that rules out the hypothesis that there's no God, and no reason to think that the upper limit of evidential support provided by an experience is determined by the degree of support a report of that experience can provide for another. So, I don't think there's anything in these passages that deals with the problem of private evidence understood as a kind of (alleged) mystical experience.

Now, just to be clear, that doesn't mean that the atheist should defer. They shouldn't believe that the kinds of experiences the theist reports are possible. (Should the theist believe they have the kinds of experiences they do? If 'should believe' is cashed out in the way that I think Feldman wants it to be, I cannot say with much confidence that the theist shouldn't believe that they could have the kinds of mystical experiences they report. (If, however, you shouldn't believe p if p is false or p isn't something you know, that's another matter ...)) The problem is that I don't see how Feldman can use the conciliatory view in the way he seems to want to without either begging the question against the theist who claims to know God directly via mystical experience or assuming something questionable about the kind of justificatory support experience provides.

You better watch out!



Thank goodness he's not real!

(Courtesy of Sketchy Santas)

Disagreement and universalism

I've been reading Foley's Intellectual Trust book and while it's filled with interesting stuff, I'm having a hard time figuring out what Foley's views are concerning disagreement (or maybe what they ought to be given what he's said). Foley defends universalism, and the universalist believes:

(U) If you discover that another person believes p, this provides you with a prima facie reason to believe p even if you happen to know nothing about the reliability of this other person.

Foley accepts universalism because he believes:
(i) that we should place trust in ourselves;
(ii) that there is rational pressure to place the same trust in others that we place in ourselves.

His argument for (i) is rather straightforward—self-trust is an essential part of any non-skeptical outlook. His arguments for (ii) are contained in these passages. First:
Our belief systems are saturated with the opinions of others. In our childhoods, we acquire beliefs from parents, siblings, and teachers without much thought. These constitute the backdrop against which we form yet other beliefs, and, often enough, these latter beliefs are also the products of other people’s beliefs. We hear testimony from those we meet, read books and articles, listen to television and radio reports, and then form opinions on the basis of these sources of information. Moreover, our most fundamental concepts and assumptions, the material out of which our opinions are built, are not self-generated but rather are passed down to us from previous generations as part of our intellectual inheritance. We are not intellectual atoms, unaffected by one another. Our views are continuously and thoroughly shaped by others. But then, if we have intellectual trust in ourselves, we are pressured also to have prima facie intellectual trust in others. For, insofar as the opinions of others have shaped our opinions, we would not be reliable unless they were (Foley 2004: 102).

Second:
[U]nless one of us has had an extraordinary upbringing, your opinions have been shaped by an intellectual and physical environment that is broadly similar to the one that has shaped my opinions. Moreover, your cognitive equipment is broadly similar to mine. So, once again, if I trust myself, I am pressured on the threat of inconsistency also to trust you (Foley 2004: 102).


At first, I thought that the universalist would be sympathetic to the conciliatory view. The universalist view is motivated by the thought that epistemic egoism and egotism are incoherent. (Basically, those who adopt these views don't take the fact that others believe p to be a prima facie reason to believe likewise.) But, it isn't clear that this is the view that Foley likes. He writes:
[T]here is an important and common way in which the prima facie credibility of someone else’s opinion can be defeated even when I have no specific knowledge of the individual’s track record, capacities, training, evidence, or background. It is defeated when our opinions conflict, because, by my lights, the person has been unreliable. Whatever credibility would have attached to the person’s opinion as a result of my general attitude of trust toward the opinions of others is defeated by the trust I have in myself. It is trust in myself that creates for me a presumption in favor of other people’s opinions, even if I know little about them. Insofar as I trust myself and insofar as this trust is reasonable, I risk inconsistency if I do not trust others, given that their faculties and environment are broadly similar to mine. But by the same token, when my opinions conflict with a person about whom I know little, the pressure to trust that person is dissipated and, as a result, the presumption of trust is defeated. It is defeated because, with respect to the issue in question, the conflict itself constitutes a relevant dissimilarity between us, thereby undermining the consistency argument that generates the presumption of trust in favor of the person’s opinions about the issue. To be sure, if I have other information indicating that the person is a reliable evaluator of the issue, it might still be rational for me to defer, but in cases of conflict I need special reasons to do so (Foley 2004: 109).

The problem is that last line. Those who defend the conciliatory view aren't committed to any particular view about the proper reaction to the discovery that some schmohawk happens to believe p. Those who defend the view are interested in cases of peer disagreement. Maybe Foley thinks that the default attitude in light of the "self-trust radiates outward" arguments is that we treat all we come across as if they are peers, but that they lose that status when they disagree with us unless we have special reasons for deferring, reasons that we needn't have when we meet someone we take to be a peer right up to the moment of discovering the disagreement.

At any rate, the last line seems out of line with the spirit of the conciliatory view.
On its face, two claims seem in tension. If you have no attitude concerning p and you discover someone believes p, you have a prima facie reason to believe p. If you have an attitude concerning p and you discover someone believes ~p, their belief gives you only a defeated reason to believe ~p whereas the reason provided by your belief remains undefeated. The first claim is motivated by the thought that we're all in roughly the same boat and so there's no rational justification for trusting yourself and not others. That seems to suggest that a kind of deference in the face of disagreement doesn't require a special reason to justify it.

Part of what bugs me about these passages is this: the discovery that you're in disagreement with someone you antecedently took to be no more likely to be wrong than you are seems like an odd defeater for the credibility of their attitude. I can see defending this line with an argument about, say, the problems with the equal weight view or some defense of the right reasons approach, but that's not what we have here. It is as if the argument against the conciliatory view is just an intuition about defeat and disagreement with someone you initially took to be a peer.

Tuesday, December 22, 2009

Could 'ought' be objective but shifty?

[Fixed a gaffe]
I think something like this exchange once took place:
LD: You should do something about the kitchen and leave the living room alone.
Me: No, I think I should paint the living room and leave the kitchen alone.
LD: In that case, you should paint the walls brown or grey but not that navy blue you're looking at.

I think there are many contexts in which an advisor will (properly) advise an agent to perform a suboptimal action because she knows that the agent simply will not perform the optimal action. (I don't think this lends any support to actualism.) Nevertheless, I think that the advisor needn't be anything less than perfectly conscientious. What goes for apartment improvement goes for morality as well. I think that an advisor could be perfectly morally conscientious, know that A is better than B, but advise the agent to pursue B upon learning that the advisee won't A.

Zimmerman says this about 'ought' and the conscientious agent:
It is with overall moral obligation that the morally conscientious person is primarily concerned. When one wonders what to do in a particular situation and asks, out of conscientiousness, 'What ought I to do?,' the 'ought' expresses overall moral obligation ... Conscientiousness precludes deliberately doing what one believes to be overall morally wrong (2)

I think that even if it is with overall moral obligation that the morally conscientious advisor is primarily concerned (it might be the values that ground those obligations, however, that concern the conscientious agent, but let that pass), there might be legitimate reasons for the advisor to 'shift' focus to something she knows full well would be a violation of the advisee's obligations (e.g., when the advisee is just dead set on acting in ways that go against obligation but can be steered to act in such a way that she does the next best thing rather than something even worse).

And this raises a question. Assuming that this is so, why can't we say that just as a morally conscientious advisor might sincerely advise someone to do something other than what they really ought to do _and_ yet be primarily concerned with overall moral obligation (e.g., when they have good reason to advise the agent to do the next best thing), the agent herself might have good reason to focus on something other than her overall obligation? She could still be primarily concerned with her overall obligation, but have some good reason to strive for something else.

Here's the basic strategy for blocking the argument for prospectivism. In cases where the agent takes herself to have adequate information, the 'ought' she is primarily concerned with is one that picks out overall moral obligation. In cases where the agent takes herself to lack adequate information to determine what she ought to do all things considered, the conscientious agent might be concerned primarily with that same 'ought', but with that 'ought' out of cognitive reach, she'll aim to bring about the best state of affairs she can work out a strategy for bringing about given her state of ignorance. Provided that the 'ought's on the lips of the conscientious agent in these cases are different, intuitions about the proper use of 'ought' under ignorance are a poor guide to the truth-conditions for the 'ought' that the conscientious agent is primarily concerned with.

Following up on the post from earlier, the conscientious agent will only shift attention away from the 'ought' that picks out overall obligation when she has good moral reason to shift her attention. This requires identifying some good moral reason to set your sights on something other than what there's overall moral reason to do. I think that the desire to minimize a certain kind of risk could be just that reason.

Two cases seem to cause trouble for the objectivist view that says that an agent always ought to do what's best:
Case 2: All the evidence at Jill’s disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable, but it also indicates (in contrast with the facts) that giving him drug C would cure him completely and giving him drug A would kill him.

Case 3: All the evidence at Jill’s disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable, but her evidence leaves it completely open whether it is giving him Drug A or Drug C that will kill him or cure him.

Here’s Zimmerman’s version of the objection to the objectivist view:
Put Moore [or any objectivist] in Jill’s place in Case 2. Surely, as a conscientious person, he would decide to act as Jill did and so give John drug C. He could later say, “Unfortunately, it turns out that what I did was wrong. However, since I was trying to do what was best for John, and all the evidence at the time indicated that that was indeed what I was doing, I cannot be blamed for what I did.” But now put Moore in Jill’s place in Case 3. Surely, as a conscientious person, he would once again decide to act as Jill did and so give John drug B. But he could not later say, “Unfortunately, it turns out that what I did was wrong. However, since I was trying to do what was best for John, and all the evidence at the time indicated that that was indeed what I was doing, I cannot be blamed for what I did.” He could not say this precisely because he knew at the time that he was not doing what was best for John. Hence Moore could not justify his action by appealing to the Objective View … On the contrary, since conscientiousness precludes deliberately doing what one believes to be overall morally wrong, his giving drug B would appear to betray the fact that he actually subscribed to something like the Prospective View (Zimmerman 2008: 18).

Case 4: All the evidence at Jill’s disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable. Jill’s evidence strongly indicates that drug A would cure John completely and that drug C would kill him, but Jill doesn’t know that because she doesn’t know how to compute the expected value of the outcomes: she would need Bayes’ Theorem to work out the value, and she doesn’t know how to use it.
Intuitively, it seems that Jill oughtn’t take the chance and ought to use drug B. But it also seems that Jill knows that this course of action is not the one the Prospective View or the Objective View advises. As Zimmerman stresses, it is hard to know which option maximizes expected value, and the innumerate among us know that he’s right on this point. Shouldn’t we sometimes play it safe in cases like case 4? I think this is what the conscientious person would do. From an intuitive point of view, case 4 is a lot like case 3. But if intuition suggests that this is what Jill should do and the Prospective View says Jill should give drug A, it seems those who defend the Prospective View are in the same boat as those who defend the Objective View.
Do we have to dumb the Prospective View down? That’s one way to go, but I think that those who defend the Prospective View don’t have to go this route. If the conscientious agent in case 4 is thinking about subsidiary obligations (i.e., what to do if she's not going to do what she ought to do), we can save the Prospective View from cases like case 4, but it seems the same thing should work for case 3. It will take some work to get the details right. If you ought to A but won't and have some subsidiary obligation to do B, that's because B is second best. Instead, maybe the idea is that the obligation the agent has in mind is the best world available that she can figure out a way to realize. Something like that.
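
For what it's worth, here's a minimal sketch of the expected-value reasoning at issue in these cases (the utility numbers are hypothetical placeholders, not Zimmerman's):

# A minimal sketch of the expected-value reasoning at issue. The utility
# numbers are hypothetical placeholders, not Zimmerman's.

utility = {
    'complete cure': 1.0,
    'partial cure': 0.4,
    'death': -1.0,
    'permanently incurable': -0.5,
}

def expected_value(prospects):
    # prospects maps outcomes to their probability on the agent's evidence
    return sum(prob * utility[outcome] for outcome, prob in prospects.items())

# Case 3: B is certain to cure partially; A and C are each 50/50
# between complete cure and death.
case3 = {
    'drug A': {'complete cure': 0.5, 'death': 0.5},
    'drug B': {'partial cure': 1.0},
    'drug C': {'complete cure': 0.5, 'death': 0.5},
    'no drug': {'permanently incurable': 1.0},
}

for act, prospects in case3.items():
    print(act, expected_value(prospects))
# drug A 0.0, drug B 0.4, drug C 0.0, no drug -0.5
# B maximizes expected value even though it is known not to be best.

The point of case 4 is that Jill can't run this computation herself; the question above is whether an appeal to subsidiary obligation can accommodate that without the same appeal rescuing the objectivist in case 3.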

Sunday, December 20, 2009

The prospects of prospectivism?

Another question about prospectivism. Consider two challenge cases to views that say that you ought to do the best you can (as opposed to saying that you ought to do what you believe to be best, what will probably be best, or what will maximize expectable value):

Case 2: All the evidence at Jill's disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable, but it also indicates (in contrast with the facts) that giving him drug C would cure him completely and giving him drug A would kill him.

Suppose that on the basis of the evidence, Jill gives drug C and kills John. Zimmerman's prospective view implies that Jill did what she ought to, but the objective view implies that she did not. Someone like Moore would say that although Jill acted wrongly, she is not to blame for doing so.

Some will say that this response just isn't satisfactory, but making matters worse is this:
Case 3: All the evidence at Jill's disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable, but her evidence leaves it completely open whether it is giving him Drug A or Drug C that will kill him or cure him.

Zimmerman says (paraphrase) that if we put Moore in Jill's shoes in Case 2, he could say, "Unfortunately, it turns out that what I did was wrong. However, since I was trying to do what was best for John, and all the evidence at the time indicated that that was indeed what I was doing, I cannot be blamed for what I did" (18). He cannot say this in Case 3, however, because the conscientious person would knowingly do what would not be the best.

I think there are two things the objectivist might say in response. First, the objectivist might offer a sort of tu quoque. Zimmerman stresses that it can be exceptionally difficult to determine which actions will maximize expectable value, and so I think he'd acknowledge that someone can have reasonable but mistaken beliefs about which acts will maximize expectable value. I don't see why we cannot construct cases where an agent knows that the action that will maximize expectable value is either A or C, does not know which of these options it is, knows that one of A and C (but not which one) will be the worst from the point of view of maximizing expectable value, and knows that B is somewhere in between these two options. (A toy version of such a case is sketched below.)
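
# A toy version of the case (the numbers are mine and purely
# illustrative, not Zimmerman's). The agent is split 50/50 between two
# hypotheses about which expectable-value assignment is correct.

ev_under_h1 = {'A': 0.9, 'B': 0.3, 'C': -0.9}
ev_under_h2 = {'A': -0.9, 'B': 0.3, 'C': 0.9}

for act in ('A', 'B', 'C'):
    print(act, ev_under_h1[act], ev_under_h2[act])
# Whichever hypothesis is true, one of A and C maximizes expectable
# value, the other is worst, and B sits in between. So the agent knows
# that choosing B is guaranteed not to maximize expectable value -- the
# analogue, one level up, of Zimmerman's complaint that giving drug B
# in Case 3 is guaranteed to be wrong by the Objective View's lights.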

Second, it seems that Zimmerman is assuming that the conscientious person will not do what they believe to be overall wrong. Can't the objectivist deny this? It might seem a desperate maneuver, but if that's a move that everyone has to make, we should all lump it.

Zimmerman anticipates a version of this response, and here's what he says:
One response that might be made on behalf of the Objective View is this. It is true that, if Moore were put in Jill's place in Case 3, as a conscientious person he would choose to give John Drug B. But the choice would be perfectly in keeping with his adherence to the Objective View, for it would simply constitute an attempt on his part to minimize the risk of doing wrong (20)

He says:
This response is unacceptable. I have stipulated ... that the probability that giving John Drug B will cure him only partially is 1. From the perspective of the Objective View, then, the probability that giving him this drug is wrong is 1, whereas, for each of drugs A and C, the probability that giving him the drug is wrong is less than 1. Hence, according to the Objective View, giving John Drug B does not minimize the risk of doing wrong; on the contrary, it is guaranteed to be wrong (20).

Darn, good point. Why can't the objectivist say that the agent will minimize the risk of _harm_ or _negative value_ rather than wrongdoing? I think the worry is that the conscientious agent always acts on judgments about what's right or wrong rather than what's good/bad, but if the alternative is the desperate maneuver of denying that conscientiousness precludes knowingly doing wrong, this seems like a good fallback position.

So, a lot of this will depend upon whether there are variants of Case 2 and 3 that cause trouble for the prospectivist, but if there are, the response I'm imagining the objectivist could use could be used by the prospectivist as well. But that would mean that the force of Case 2 and 3 has effectively been neutralized. I need to look at Zimmerman's remarks concerning determinate levels of evidence to see if he has a way of dealing with this.

Friday, December 18, 2009

Reprehensible but not responsible?

At last year's Eastern, I picked up a copy of Zimmerman's Living with Uncertainty at the Cambridge sale where I was spotted by someone I knew who confessed to being a bit jealous that I had my mitts on the thing since he had to write a review of it. I told him he could have it (I hope I did!), but I recall he declined saying that he was supposed to get one from the journal he's reviewing it for. Karma is a funny thing. Now I'm to review it and I let the wrong person know that I had a copy already, so no free copy for me. I'm really glad that I get an excuse to read the whole thing carefully and have taken this as an excuse to order Zimmerman's earlier work, The Concept of Moral Obligation. (Among my resolutions for the new year is to stop putting so many books on my credit cards.) In the last chapter of Uncertainty, Zimmerman argues that if you don't know that your behavior is wrong, you are not morally responsible (in the backwards looking sense) for that behavior. The view seems, well, counterintuitive. But, there's an argument for it that we should consider:

(1) Alf did A, A was wrong, but Alf was ignorant of this fact at the time he did A because he did not believe it was wrong [suppose].
(2) One is culpable for ignorant behavior only if one is culpable for the ignorance on or from which it was performed.
(3) So, Alf is culpable for having done A only if he is culpable for the ignorance on or from which he A'd.
(4) However, one is culpable for something only if one was in control of that thing.
(5) Alf is culpable for having done A only if he was in control of the ignorance in which he did A.
(6) One is never directly in control of whether one believes or does not believe something.
(7) Moreover, if one is culpable for something over which one had merely indirect control, then one's culpability for it is itself merely indirect.
(8) Furthermore, one is indirectly culpable for something only if that thing was a consequence of something else for which one is directly culpable.
(9) So, Alf is culpable for having done A only if there was something else, B, for which he is directly culpable and of which the ignorance (the disbelief) in or from which he did A was a consequence.
(10) But, whatever B was, it cannot itself have been an instance of ignorant behavior because then the argument would apply all over again to it.
(11) Thus, Alf is culpable for having done A only if there was some other act or omission, B, for which he is directly culpable and of which his failure to believe that A was wrong was a consequence, and B was such that Alf believed it at the time to be wrong.

What is true of Alf is true of Brenda, Charles, Doris, Edward, Frick and Frack, etc... (176).

What about the Nazis? Zimmerman says that we cannot say they are morally responsible (unless, I guess, we assume that Hitler believed that he was acting wrongly which I guess we shouldn't assume) but adds, "there are a variety of ways in which a person is open to moral evaluation; attributions of moral responsibility constitute only one such way. Thus, we may indeed say that the beliefs and actions of the youthful Nazi are morally reprehensible, and even that he is morally reprehensible in light of them, without saying that he is morally responsible for them" (179).

I want to get back to the argument, but I first want to try to understand this distinction between moral responsibility in the backwards looking sense and these other forms of moral evaluation. Can you be blameworthy for something we know you're not morally responsible for? (Oops, that was a mistake.) Can you be morally reprehensible for doing deeds that we know you're not morally responsible for? We can futz around with the assumptions concerning control and control over our attitudes, but my first reaction is to say that our judgments about blame and about what's morally reprehensible (and not just bad or of negative value) are going to serve as the basis of our judgments about moral responsibility. So, to the extent that it's plausible to say that you can be reprehensible for doing something you didn't know you shouldn't do, it's plausible to say that you're responsible for those same things. If we're comfortable with the idea that you can be reprehensible for A-ing when you don't have the sort of control that Zimmerman has in mind, why not say that (4) is false?

Zimmerman says that we can imagine sadists who cannot control their sadistic impulses and who are morally reprehensible but uncontrollably so, and this seems to cause trouble for those who would say that we can have responsibility without control. But I think there's an important difference between cases of moral ignorance and the kind of failures of control at issue when we compare the person who cannot resist an impulse to do something sadistic with the sadist who identifies with the sadistic action but could resist if he tried. I don't see that it's all that bad to say that those who truly cannot control their impulses are not responsible in a backwards looking sense, blameworthy, reprehensible, etc., whereas those who act on or from moral ignorance without being compelled to act by some irresistible impulse can be responsible for doing what they don't know they oughtn't because they identify with the wrong values. Like our Nazis.

Universalism and Disagreement

I've been reading Foley's _Intellectual Trust_ and his defense of universalism, the view that tells us that when you discover that another person believes p, this provides you with a prima facie reason to believe p even if you happen to know nothing about the reliability of this other person. I've been wondering what Foley would say about cases of peer disagreement. At first, I thought he'd favor the conciliatory approach that encourages us to modify our attitudes when we discover that someone we take to be a peer disagrees. But this passage muddies the waters significantly:
[T]here is an important and common way in which the prima facie credibility of someone else’s opinion can be defeated even when I have no specific knowledge of the individual’s track record, capacities, training, evidence, or background. It is defeated when our opinions conflict, because, by my lights, the person has been unreliable. Whatever credibility would have attached to the person’s opinion as a result of my general attitude of trust toward the opinions of others is defeated by the trust I have in myself. It is trust in myself that creates for me a presumption in favor of other people’s opinions, even if I know little about them. Insofar as I trust myself and insofar as this trust is reasonable, I risk inconsistency if I do not trust others, given that their faculties and environment are broadly similar to mine. But by the same token, when my opinions conflict with a person about whom I know little, the pressure to trust that person is dissipated and, as a result, the presumption of trust is defeated. It is defeated because, with respect to the issue in question, the conflict itself constitutes a relevant dissimilarity between us, thereby undermining the consistency argument that generates the presumption of trust in favor of the person’s opinions about the issue. To be sure, if I have other information indicating that the person is a reliable evaluator of the issue, it might still be rational for me to defer, but in cases of conflict I need special reasons to do so (Foley 2004: 109).

The problem is that last line. Those who defend the conciliatory view aren't committed to any particular view about the proper reaction to the discovery that some schmohawk happens to believe p. Those who defend the view are interested in cases of peer disagreement. Maybe Foley thinks that the default attitude in light of the "self-trust radiates outward" arguments is that we treat all we come across as if they are peers, but that they lose that status when they disagree with us unless we have special reasons for deferring, reasons that we needn't have when we meet someone we take to be a peer right up to the moment of discovering the disagreement. At any rate, the last line seems out of line with the spirit of the conciliatory view.

On its face, two claims seem in tension. If you have no attitude concerning p and you discover someone believes p, you have a prima facie reason to believe p. If you have an attitude concerning p and you discover someone believes ~p, their belief gives you only a defeated reason to believe ~p unless you have special reason for deferring. The first claim is motivated by the thought that we're all in roughly the same boat and so there's no rational justification for trusting yourself and not others. That seems to suggest that a kind of deference in the face of disagreement doesn't require a special reason to justify it.

He could say that there's no tension here because the discovery of disagreement undermines the justification for thinking that we're all in the same boat, epistemically. But, I don't think it's quite that simple. Here's one of the passages where Foley defends universalism:
[U]nless one of us has had an extraordinary upbringing, your opinions have been shaped by an intellectual and physical environment that is broadly similar to the one that has shaped my opinions. Moreover, your cognitive equipment is broadly similar to mine. So, once again, if I trust myself, I am pressured on the threat of inconsistency also to trust you (Foley 2004: 102).

If that's the rationale for treating your opinions as if they were my own, is there really some proposition in the rationale for universalism that gets called into question _simply_ because I've encountered an apparent peer who disagrees with me? Sure, if you believe in Zeus, I'll think we've had very different upbringings, but we're talking about cases where we respond in different ways to the same sort of evidence and have seemed up until this point to be very similar in terms of epistemic ability, intelligence, intellectual virtue, etc...

At any rate, Christensen takes Foley to be defending a view that is at odds with the conciliatory view of disagreement and I can see that. But, I can also see (or I think I can) why a universalist would be attracted to the conciliatory view. So, I'm having a hard time connecting the arguments and positions defended in the book to the literature on disagreement. I'm tired, though, so maybe I'll sort it out later.

Thursday, December 17, 2009

Examined Life



This is a trailer for Examined Life, a documentary film by Astra Taylor. It includes a series of vignettes with Cornel West, Avital Ronell, Peter Singer, Kwame Anthony Appiah, Martha Nussbaum, Michael Hardt, Slavoj Zizek, Judith Butler and Sunaura Taylor. Check it out here.

Tuesday, December 15, 2009

Watching the detectives

Should we trust the experts?

Here's Feser's $.02:
But of course there is another obvious way to interpret the results in question [He's speaking of the results of the Phil Papers survey that revealed that the majority of professional philosophers lean towards or accept atheism whereas the majority of professional philosophers who specialize in philosophy of religion lean towards or accept theism] – as clear evidence that those philosophers who have actually studied the arguments for theism in depth, and thus understand them the best – as philosophers of religion and medieval specialists naturally would – are far more likely to conclude that theism is true, or at least to be less certain that atheism is true, than other philosophers are. And if that’s what the experts on the subject think, then what the “all respondents” data shows is that most academic philosophers have a degree of confidence in atheism that is rationally unwarranted.


There's lots of interesting stuff to think about here. Should the confidence of non-experts reflect the attitudes of experts? Shouldn't this depend, in part, upon the size of the 'knowledge gap' between expert and non-expert? Suppose there's a gap. (Plausible). Is that gap anything like the gap between global warming deniers and climatologists? I don't think so, but that's still perfectly consistent with the idea that non-experts ought to be less confident in their attitudes upon learning what we've learned when the results were released.

Here's something that I think matters but I don't know what to make of it.

*Suppose the majority of the experts agree that a certain argument for the non-existence of X (electrons, phlogiston, fairies, objective moral standards, heaven, a justification for intentionally terminating a pregnancy) fails.
*Suppose that this is based on the widespread conviction that there's some adequate reason or other to believe in X.

This is all perfectly consistent with widespread disagreement amongst experts on two points:
(CP1) what the adequate reasons are for believing in X;
(CP2) what's wrong with the arguments for the non-existence of X.

So, some what-iffing based on next to nothing.

What if the experts were evenly divided in the following ways? We divide the experts into the A team and the B team by looking at their attitudes concerning (CP1), and into the C team and the D team by looking at their attitudes concerning (CP2). The members of the A team thought that the reasons that the members of the B team had for believing in X were inadequate and poor for reasons readily available in the literature, and the members of the B team thought the same of the A team's reasons. The members of the C team thought that the members of the D team failed to neutralize the arguments for the non-existence of X because those responses rested on premises shown to be false/unwarranted in the literature, and the members of the D team thought the same of the C team's responses.

I can imagine some epistemologists saying that if the experts had a high degree of confidence in the hypothesis that X existed, that would be misplaced confidence given some principles about the weight of peer opinion and some evidentialist assumptions (which, admittedly, might be hard to rationally accept as a package, given that the principles about the weight of peer opinion are themselves problematic given contingent facts about what opinions are floating around). I can imagine other epistemologists saying that when expert (or 'expert') opinion is known not to be warranted by the evidence, the gap in confidence between expert and non-expert (which is really the gap between specialist and non-specialist) does not entail that the attitudes of non-specialists/non-experts are unwarranted/unreasonable/epistemically impermissible.

At any rate, I think the issue is a bit more complicated than some people have assumed. Indeed, I fear that I've oversimplified things. You've been good to read this far. Enjoy some Elvis Costello, you deserve a treat:

Friday, December 11, 2009

Fair and Balanced is perfectly consistent with Fair, Balanced, and Biased

From a Fox "News" Poll:
17. What do you think President Obama would like to do with the extra bank bailout money -- save it for an emergency, spend it on government programs that might help him politically in 2010 and 2012, or return it to taxpayers?

Thursday, December 10, 2009

Eastern APA

Had a strange dream about the APA last night. The drive in to NYC took longer than planned, there were notes apologizing that there wasn't enough beer at the talks, NYC was suffering from a severe and prolonged coffee shortage, and I couldn't hear the questions from the audience over the traffic noise. (The talk was outside at an abandoned gas station.) For reasons that weren't entirely clear, the members of the audience would all raise their hands at once. Not to ask questions, it was as if they were voting.

Upon waking, I saw that Matt had sent me his comments. Spooky.

The paper I'm giving is one of a handful of papers where I try to motivate claims about epistemic norms by appeal to claims about non-epistemic norms supported by intuition. There seem to be two responses to the general strategy:

R1: Anyone who buys into epistemic internalism will simply not have the intuition that the deontic status of an action can depend (in part) upon features of the situation that the subject is non-culpably ignorant of.

R2: Anyone who thinks about it will realize that the deontic status of actions depend (in part) upon features of the situation that the subject is non-culpably ignorant of and that the epistemic status of attitudes never depends upon features of the situation but only the subject's non-factive mental states.

If only we could get the R1 and R2 people in a room. My response to R1 people is (in part) that there are R2 people. R2 people are tough; they seem to require a real response. This isn't a response (yet) so much as some questions and hand waving.

An example:
GIN/PETROL
The first gin and tonic was delicious, so you order a second. You promise to share this one with your partner. The drink you are given looks like a gin and tonic, has the limes you’d expect a gin and tonic to have, but it is in fact petrol and tonic. You give it to your partner to drink and she becomes violently ill as a result.

Someone who is sympathetic to (R2) might say the following:
While you oughtn't give your partner the stuff, it's not the case that you shouldn't say that your partner should drink the stuff. The giving is wrong, but the saying that you should give is not epistemically wrongful.

You can only say this if you believe that faultless wrongdoing is possible. You have to think that you’re obliged to refrain from giving someone a drink containing petrol when you justifiably believe that it’s gin and know that you’ve promised to give them some of your gin drink. You have to believe that there are inaccessible normative reasons not to Φ that not only bear on whether to Φ but can still manage to defeat whatever reasons count in favor of Φ-ing. These inaccessible reasons aren’t diminished in strength just because they are inaccessible and so these reasons can be the ‘winning’ reasons.

I can’t see how this response to GIN/PETROL could be right unless we were to assume:
(FW) There can be cases of faultless wrongdoing, cases where the subject is obliged to refrain from Φ-ing when the subject was nevertheless rational, reasonable, and responsible in Φ-ing.

In defense of the idea that the deontic status of an action and the normative standing of an attitude/assertion go together, you can say two things. First, you can say that (FW) is false. If (FW) were true, morality would make unreasonable demands on us. Morality is, if anything, reasonable. Here’s what Fantl and McGrath say about the case:
… it is highly plausible that if two subjects have all the same very strong evidence for my glass contains gin, believe that proposition on the basis of this evidence, and then act on the belief in reaching to take a drink, those two subjects are equally justified in their actions and equally justified in treating what they each did as a reason, even if one of them, the unlucky one, has cleverly disguised petrol in his glass rather than gin. Notice that if we asked the unlucky fellow why he did such a thing, he might reply with indignation: ‘well, it was the perfectly rational thing to do; I had every reason to think the glass contained gin; why in the world should I think that someone would be going around putting petrol in cocktail glasses!?’ Here the unlucky subject is not providing an excuse for his action or treating what he did as a reason; he is defending it as the action that made the most sense for him to do and the proposition that made most sense to treat as a reason (forthcoming: 141).

If (FW) is false, the facts that the subject is ignorant of cannot be the facts that oblige the subject to act against her justified judgment about what to do and say. So, cases like GIN/PETROL aren’t a threat to (LINK):

(LINK) If S oughtn't Φ, an advisor epistemically oughtn't advise S to Φ.

Essentially, this is (R1). The problem with (R1) is that it isn't supported by intuition. Indeed, it is counterintuitive.

Suppose instead that (FW) is true and suppose factual ignorance can excuse, but does not obviate the need to justify giving your partner the petrol. Since there was no overriding reason to give your partner the petrol, you shouldn’t have given her that stuff to drink. Even if we assume (FW) is true, it still isn’t obvious why we should think that GIN/PETROL poses a threat to (LINK).

There has to be some explanation as to why the facts that the subject is non-culpably ignorant of adversely affect the normative standing of an action without adversely affecting the normative standing of the assertion that the action is to be performed. Any explanation as to how there could be no epistemic obligation to refrain from asserting that someone should Φ when the relevant agent shouldn't Φ would either focus on the epistemicness of the epistemic obligations or the obligatoriness of epistemic obligations.

You can’t say that it is the obligatoriness of the obligation that provides the explanation. If (FW) is true, there’s nothing about obligation, per se, that requires that the subject knows, is in a position to know, or is in a position to justifiably believe the obligation to be an obligation. This is not quite the same point, but it is related to a point that Gibbons makes which is worth repeating. If we’re going to talk about normative reasons that bear on action and belief, at some level of abstraction we should expect reasons for action and belief to behave in the same way. They are, after all, reasons. What goes for reasons goes for obligations.

Someone could try to explain how it could be that non-normative facts bear on the deontic status of the action but not the assertion by focusing on the epistemicness of the obligations we’re under. This isn’t promising either. Epistemic obligations have to do with the pursuit of truth and avoidance of falsity. Practical obligations have to do with the pursuit of the best. Either you are really into the idea that faultless failures to bring about the best are failures to meet your obligations or you think this makes a joke out of morality. If you think that a faultless failure to produce what is actually deontically best is not a failure to live up to your obligations, you don’t accept (FW) and so won’t try to explain how there could be a moral obligation to act against the advice in GIN/PETROL. If you think that faultless failure to bring your beliefs/assertions in line with the truth is not a failure to meet your epistemic obligations but think that (FW) is true and that a faultless failure to bring about what is deontically best is a failure to meet your moral obligations, you are appealing to some difference between the epistemic and the practical that you haven’t explained.

Wednesday, December 9, 2009

The results are in!

Philosophers of Religion in Target Faculty
God: theism or atheism?
Accept or lean toward: theism 34 / 47 (72.3%)
Accept or lean toward: atheism 9 / 47 (19.1%)
Other 4 / 47 (8.5%)

All Respondents/Target Faculty
God: theism or atheism?
Accept or lean toward: atheism 678 / 931 (72.8%)
Accept or lean toward: theism 136 / 931 (14.6%)
Other 117 / 931 (12.5%)

There's some discussion of the numbers emerging over at Prosblogion.

Philosophers of Mind in Target Faculty
Mind: physicalism or non-physicalism?
Accept or lean toward: physicalism 117 / 191 (61.2%)
Accept or lean toward: non-physicalism 42 / 191 (21.9%)
Other 32 / 191 (16.7%)

All Respondents/Target Faculty
Mind: physicalism or non-physicalism?
Accept or lean toward: physicalism 526 / 931 (56.4%)
Accept or lean toward: non-physicalism 252 / 931 (27%)
Other 153 / 931 (16.4%)

Among the more interesting claims being floated is this one: just as theists go into philosophy of religion in order to defend theism, there are many atheists going into philosophy of mind in order to defend physicalism. I offered some suggestions as to why atheists/agnostics aren't going into philosophy of religion. Unless people are converting rapidly, there's got to be some reason why this is. So far, I don't think I've hit upon any explanatory factors that have convinced anyone, but (a) there has to be some explanation as to why this is and (b) the explanation has to be partially contained in what I said because I covered just about all the possible explanations.

On the epistemology front:

Target Faculty/Epistemology
Epistemic justification: internalism or externalism?
Accept or lean toward: internalism 59 / 160 (36.8%)
Accept or lean toward: externalism 56 / 160 (35%)
Other 45 / 160 (28.1%)

Target Faculty/All Respondents
Epistemic justification: internalism or externalism?
Accept or lean toward: externalism 398 / 931 (42.7%)
Other 287 / 931 (30.8%)
Accept or lean toward: internalism 246 / 931 (26.4%)

Some move away from externalism about epistemic justification among the specialists, but the view is not without its defenders. This data is relevant to something I've had to contend with recently. In the paper I'm giving at the Eastern in a few weeks, I offer some examples that I used to elicit intuitions from undergraduates where I try to see whether they are internalists or externalists about moral permissibility. The intuitions suggest that the folk are externalists about the justification of action. (At the very least, they think you can generate reparative duties by bringing about bad effects when you couldn't have been expected to know that you would bring these effects about at the time of action.) I argue on theoretical grounds that you cannot accommodate these intuitions given the constraints imposed on you by internalism about the epistemic stuff.

Two responses to this. The first was that the undergrad responses are not a good guide to community standards. Here's a response:
* I could point to further data that suggests that community standards are externalist (e.g., Darley and Robinson's work (this is a good place to start) suggests that the dominant view in the community held by the folk is that the degree of punishment appropriate to an offense is partially determined by the effects of an action. When you have two subjects that are mental duplicates that bring about different effects, the community standard appears to be that the punishment appropriate for the agent that brought about the worse effects is greater than the punishment appropriate for the agent that brought about the lesser effects). That's more data, but it's the sort of data that one could offer without challenging the (empirical) claim that epistemic internalists won't share the intuitions I've tried to elicit and that Darley and Robinson have elicited (i.e., that there can be moral differences in the status of an action without mental differences that distinguish actors).

The second response to my argument was that epistemic internalists will simply not have the intuitions that these undergrads had. Three responses to that.
* First, the fact that they respond that way doesn't mean that the response is reasonable.
* Second, it's an empirical question whether they will react that way. (I can think of some prominent internalists who do _not_ react like that. Richard Feldman, for example, is a prominent internalist, and he rejects the view that you are morally justified in acting on your epistemically justified moral judgments precisely because he thinks that the consequences of an action (known or unknown) can bear on the permissibility of the action but can have no bearing on the epistemic standing of judgments about the deontic status of the action that brings those consequences about. Barbara Herman thinks that all moral evaluation is concerned with the quality of the agent's will, and she tries to tell a Kantian story as to why we have what she thinks intuition suggests are duties of reparation to deal with the unforeseen consequences of our actions. Theoretically they are internalists, but they have intuitions that appear to favor some externalist views.)
* Third, I think that, ceteris paribus, we want theories that are consistent with the community standards that govern the application of normative terms. A philosophical argument could correct those community standards, but that would require showing that ceteris isn't paribus, and one difficulty such an argument would face is that it would likely have to be pinned down by intuition at some point. If those intuitions are unique to specialists with philosophical axes to grind, we should worry that theory contamination of intuition defeats their evidential significance.

Wouldn't wrap Fish in it

[W]hile I wouldn’t count myself a fan in the sense of being a supporter, I found it compelling and very well done. My assessment of the book has nothing to do with the accuracy of its accounts. Some news agencies have fact-checkers poring over every sentence, which would be to the point if the book were a biography ... “Going Rogue,” however, is an autobiography, and while autobiographers certainly insist that they are telling the truth, the truth the genre promises is the truth about themselves — the kind of persons they are — and even when they are being mendacious or self-serving (and I don’t mean to imply that Palin is either [Heavens, no]), they are, necessarily, fleshing out that truth. As I remarked in a previous column, autobiographers cannot lie because anything they say will truthfully serve their project, which, again, is not to portray the facts, but to portray themselves.

Gag me.

Does he believe anything written in that book?
It doesn’t matter. What matters is that she does, and that her readers feel they are hearing an authentic voice. I find the voice undeniably authentic (yes, I know the book was written “with the help” of Lynn Vincent, but many books, including my most recent one, are put together by an editor). It is the voice of small-town America, with its folk wisdom, regional pride, common sense, distrust of rhetoric (itself a rhetorical trope), love of country and instinctive (not doctrinal) piety. It says, here are some of the great things that have happened to me, but they are not what makes my life great and American. (“An American life is an extraordinary life.”) It says, don’t you agree with me that family, freedom and the beauties of nature are what sustain us? And it also says, vote for me next time. For it is the voice of a politician, of the little girl who thought she could fly, tried it, scraped her knees, dusted herself off and “kept walking.”

Undeniably authentic, but wholly unbelievable. Holy unbelievable!

Monday, December 7, 2009

The evidence wars continue

Turri's False Evidence.

Weatherson's Evidence and Inference.

Will the truth out? Will truth out? We'll have to wait and see.

Some arguments (that might need some tinkering)

(1) If someone knows that p is part of her evidence, the question ‘Why is it that p?’ seems appropriate/in place/proper; it doesn't rest on a mistake in the way that ‘Why do fish weigh less when they die?’ is inappropriate/out of place/improper because it rests on a mistake. The appropriateness of the question assumes that we’ll respond by saying either ‘No reason, it’s just a brute fact that p’ or ‘p because q’. Both answers entail p. I can't see how you could explain this unless you assumed that evidence is factive.

(2) If S knows that p is part of her evidence, she knows that p is true. If I know that p is part of S's evidence, it isn't an open question for me whether p.

(3) If A asserts that p is part of A’s evidence and then B asserts ~p, it seems that A and B disagree/can't both be right.

(4) If p is part of my evidence and I know that p is part of my evidence, I think I’m in a position to A for the reason that p (when I know that my choice to A is a p-dependent choice). You cannot A for the reason that p if ~p.

(5) It just sounds weird to say, ‘His evidence was that p, but of course ~p’ or 'His evidence was that p, but I don't believe p', but this cannot be weird for Moorean reasons because it's his evidence, not mine.
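In schematic form, the thesis these arguments converge on is that evidence is factive. Writing 'p ∈ E_S' for 'p is part of S's evidence' and 'K_S' for 'S knows' (my shorthand only, just to fix ideas), the claim is:

\[
p \in E_S \;\rightarrow\; p
\qquad\text{and, via (2),}\qquad
K_S(p \in E_S) \;\rightarrow\; K_S\, p
\]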

I've offered other arguments in the Synthese piece and in other posts, but I won't repeat them here.

Sunday, December 6, 2009

For the record

My membership in Sarah Palin's Facebook fan club is not non-ironic. Membership is, however, non-ironically awesome! For example, Friday's installment:
Voters have every right to ask candidates for information if they so choose. I’ve pointed out that it was seemingly fair game during the 2008 election for many on the left to badger my doctor and lawyer for proof that Trig is in fact my child. Conspiracy-minded reporters and voters had a right to ask... which they have repeatedly. But at no point – not during the campaign, and not during recent interviews – have I asked the president to produce his birth certificate or suggested that he was not born in the United States.

That's the combination of batpoo crazy and bitterness we can believe in!

Tuesday, December 1, 2009

'Might's might

Ages ago, I wanted to write a paper called ''Might' made right'. That's not going to happen, but I'm still working on epistemic possibility. Ordinarily, I think it would be pedantic to object to the following view in the ways I'm about to, but I have my reasons. First things first. The view:

(EPk) p is epistemically possible for S iff ~p isn’t obviously entailed by something S knows.
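A rough formal gloss, with '◇^e_S p' for 'p is epistemically possible for S', 'K_S q' for 'S knows q', and '⊨_obv' for obvious entailment (again, my shorthand, nothing more):

\[
\Diamond^{e}_{S}\, p \;\leftrightarrow\; \neg \exists q \,\big( K_S\, q \;\wedge\; (q \vDash_{\mathrm{obv}} \neg p) \big)
\]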

Think about cases of inductive knowledge. It seems odd to think that you have knowledge of future events only when it is not epistemically possible that these events fail to occur. Myself, I don’t doubt that our beliefs about the future constitute knowledge. What I doubt is that it would be correct to say that it isn’t epistemically possible that these beliefs are mistaken.

Think about conversations where sceptical hypotheses are introduced. In such contexts, it seems proper to concede that we might be mistaken in just about any belief about the external world. Now, suppose that knowledge is necessary for warranted assertion and that concessions (e.g., ‘It might be that I’m a BIV’) are really assertions. Given these assumptions and (EPk), the propriety of the concession would depend upon whether the speaker knew herself to be ignorant. But it seems harder to know that you don’t know than it is to know that it’s proper to concede that you might be mistaken. Given (EPk), to assert knowingly that you might be mistaken, you must know either that you don’t believe p, that your belief that p is mistaken, that your justification for believing p is insufficient, or that you are in some sort of Gettier case. I doubt that you know one of these things whenever you know that it’s proper to concede that you might be mistaken. Thus, you should either deny that concessions are really assertions, deny that knowledge is the norm of assertion, or say (as I do) that in conceding that you might be mistaken you may only be conceding that you are not completely certain.
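To make the inference here explicit (same shorthand as above, and only a sketch): by (EPk), if S knows p, then p is obviously entailed by something S knows (namely p itself), so it isn't epistemically possible for S that ~p. Contraposing:

\[
\Diamond^{e}_{S}\, \neg p \;\rightarrow\; \neg K_S\, p,
\qquad\text{hence}\qquad
K_S\big(\Diamond^{e}_{S}\, \neg p\big) \;\rightarrow\; K_S\big(\neg K_S\, p\big)
\]

assuming knowledge is closed under this obvious entailment. And knowing that you don't know p would seem to require knowing that some condition on knowledge (belief, truth, adequate justification, non-Gettiered-ness) fails, which is just the disjunction above.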

If the example of inductive knowledge shows what I think it does, then we need to revise (EPk) as follows:

(EPx) p is epistemically possible for S iff ~p isn’t obviously entailed by something S knows w/X.

Whatever we put in for 'X', it just has to be something we don't always have when we know. We could put in 'out inference' for 'X' (giving 'knows w/out inference'), and we get that epistemic necessity is non-inferential knowledge. That gives us the induction case, but not the perception case. We could put in 'infallible grounds for believing', and that gives us the induction case and would show that CKAs (concessive knowledge attributions) can't possibly pose a threat to fallibilism. Given my views about perceptual justification, though, I don't think that gets the perception cases right. There are many things we know non-inferentially that I think we have infallible grounds for, but these are things we can properly concede we might be mistaken about when skeptical hypotheses are introduced.

So, why not just say something like 'certainty' and be done with it? The context determines whether someone knows with certainty because the conversational context can determine whether certain possibilities are significant, and we can say that something is certain for S when S's evidence rules out all the significant possibilities in which S is mistaken. Assuming that 'knows' and 'certain' don't sway together, this view wouldn't motivate a contextualist account of 'knows'. 'Might' might have the power to derail conversations, but it doesn't threaten your knowledge or evidence.