Sunday, March 11, 2012

Scepticism about scepticism about moral responsibility

I'm interested in this argument from Zimmerman (similar arguments are found in some recentish papers from Rosen):

1. Suppose it was wrong for Adam to φ, but he φ’d anyway. He was ignorant of this fact simply in the sense that he did not believe that φ-ing was wrong.
2. You are culpable for ignorant behaviour only if you are culpable for the ignorance in which it is performed.
3. Adam is culpable for ignorant behaviour only if he is culpable for the ignorance in which it was performed.
4. You can be culpable for something only if you were in control of that thing.
5. Thus, Adam is culpable for having φ’d only if he was in control of the ignorance.
6. You are never directly in control of whether you believe or do not believe something. Control over belief is always indirect.
7. If you are culpable for something over which you have merely indirect control, then your culpability for it is itself merely indirect.
8. You are indirectly culpable for something only if that thing was a consequence of something else for which you are directly culpable.
9. Thus, Adam is culpable for having φ’d only if there is something else, ψ, for which he is directly culpable and of which the ignorance in which he φ’d was a consequence.
10. But, whatever ψ-ing was, it cannot itself have been an instance of ignorant behaviour because then the argument would apply all over again to ψ-ing. ψ-ing must have been an item of behaviour that Adam believed at the time to be wrong.
11. Thus, Adam is culpable for having φ’d only if there was some act or omission, ψ-ing, for which Adam is directly culpable and of which his failure to believe that φ-ing was wrong was a consequence and ψ was such that Adam believed that it was wrong to ψ at the time he ψ’d.

Two points. First, I rarely if ever engage in wrongdoing in the belief that what I'm doing is wrong, and I don't think that I'm atypical. If the argument is sound, we are responsible for much less than we ordinarily think. Second, I think anyone who accepts the argument will have a difficult time explaining why we're blameworthy in the remaining cases of clear-eyed akrasia (i.e., cases where we engage in wrongdoing in the belief that what we're doing is wrong).

I think that the most promising line of response is to argue that you can be blamed for your de re moral unresponsiveness (as Arpaly puts it). We can distinguish between two kinds of unresponsiveness. Think of de dicto unresponsiveness as being unresponsive to the thought that something is a moral consideration that counts against some course of action. Think of de re unresponsiveness as being unresponsive to some genuine moral feature that the agent is cognizant of. In cases of de re unresponsiveness, you need to be cognizant of a fact that constitutes a reason not to act, not the fact that the fact is a reason not to act. (For example, Descartes might not have believed that non-human animals were sentient. Many people are aware that many non-human animals are sentient, but they don't think that the fact that these animals suffer constitutes a reason not to perform acts that cause them to suffer.)

It's the quality of the agent's will that determines whether she's blameworthy, and part of what determines the quality of the agent's will is whether the agent is responsive to the values and interests that morality is concerned with. Ignorance of fact might exculpate if, say, you injure another in the belief that you are helping them. If you are (i) cognizant of a moral reason not to act without being cognizant of the fact that it is such a reason and (ii) you act against it anyway, I don't think you automatically earn an excuse. Your actions show that you are unresponsive to moral reasons, and that's _precisely_ what you should be blamed for when you should be blamed for what you do.

It seems that anyone who endorses Z's argument has to reject this approach to blame. Why? Well, in some of the cases where an agent is aware that, say, she is performing an act that harms an animal without being aware that it is wrong to cause such harm, the agent will be ignorant of the fact that her actions are wrong. The only way to get such an agent on the hook, according to the argument above, is to show that there's something for which the agent is directly responsible and that accounts for her ignorance. If all it takes to show that some agent is blameworthy is that the agent is de re unresponsive, we'd reject the crucial assumption that you can only be blamed for acting in ignorance if you're responsible for your ignorance. No, you'd be responsible for acting in a way that shows that you're unresponsive to, say, the suffering of animals.

Now, focus on the cases of clear-eyed akrasia, the cases where the agent engages in wrongdoing and acts in the belief that her actions are wrong. Would the agent be culpable in _these_ cases? I doubt it. What would render the agent's actions blameworthy? I see three options:

(a) Acting on the belief that one's action is wrong;
(b) Acting against moral reasons;
(c) The combination of (a) and (b).

Inverse akrasia cases rule out (a), so the real work has to be done by the moral reasons. (Think of Huck Finn and Jim.) The argument sketched above rules out (b), so we're down to (c) by elimination. Theoretically, it's hard to see how (c) could work if (a) and (b) are ruled out. We had to reject (b) because we had to reject the idea that you can be blameworthy for acting against reasons you don't believe to be genuine reasons. Why would believing them to be genuine and acting against them make you blameworthy if the belief that you are acting against genuine moral reasons doesn't make you blameworthy on its own? If we had an agent who, say, acted twice on the belief that her actions were wrong, once when she didn't act against a genuine reason and once when she did, (c) implies that she's blameworthy in only one of these cases. It seems that what's doing the work in explaining why she's blameworthy in the one case has to be the fact that she acted against genuine moral reasons, not just apparent moral reasons, and that seems to assume that something like the quality of will account is correct. But it seems that anyone who adopts such an account would reject the argument for (11).

16 comments:

Mark T Patterson said...

Clayton,

This is very interesting. I'm hoping you'll admit a silly question from a psychologist.

I suspect that, descriptively, very few of our judgments about anything are accompanied by cognitions of the form 'I think this is wrong', or for that matter, 'I think this is right.'

Is this "active consideration" of an action what you mean when you talk about individuals beliefs about the morality of an action?

Clayton Littlejohn said...

Hi Mark,

I agree that we rarely act while thinking to ourselves that what we're doing is wrong or right. How far can we stretch the notion of belief? I don't know, but I'd think that it stretches further than things we think to ourselves while acting. My worry is just this: however far we stretch it, I don't think moral culpability has much to do with our own moral beliefs.

Nick Byrd said...

Thanks for sharing, Clayton. Perhaps you'll humor another question. It's about the following comment: "I don't think moral culpability has much to do with our own moral beliefs."

Do you mean this descriptively or prescriptively?

Steve Sverdlik said...

Clayton,

This is a thoughtful post. As it happens, I'm reviewing Neil Levy's "Hard Luck", and he uses this sort of argument to show that we're not morally responsible for any of our actions.

One of my doubts about the argument seems similar to the main point I take you to be making. Can we put it as follows? Levy grants that there are external or objective normative reasons to act. But he says that an agent's rationality always has to be judged in terms of her internal reasons, by which he means her beliefs about her normative reasons (p. 127). This seems to seal us off from the normative reasons, since they never play a role in judging whether someone has acted rationally. And so you may get Z's requirement that irrationality would consist in deliberately making yourself ignorant of something you know or believe.

So I take you to give a picture of how someone can be irrational without choosing to make himself ignorant. I think the picture is more perceptual than conative, so 'quality of will' seems inaccurate. But do I understand you correctly?

I can think of problems with this suggestion, and I think there must be other ways people can be blameworthy for acting in ignorance about the moral quality of their actions, but this may be one way to respond to Z (and Levy).

Steve Sverdlik

Clayton Littlejohn said...

Hi Nick,

I guess I mean that prescriptively, if I get the question. Some writers seem to think that believing that you're engaged in wrongdoing is either necessary or sufficient for being culpable for that wrongdoing. I don't think either is the case. Some people are too easily persuaded that things they shouldn't feel guilty about are wrong. Some people's character flaws prevent them from seeing the flaws in their character. Beliefs matter when we're trying to decide whether someone can be blamed for something, but I don't think _moral_ beliefs matter all that much. My belief that there's a dog in the car who might die if I leave it locked inside matters when it comes to evaluating my behavior if I decide to go in for ice cream. If I didn't believe that leaving the dog in the car was bad for the dog, maybe (just maybe) I'd be less blameworthy than someone who does believe leaving the dog in the car is bad for the dog.

Clayton Littlejohn said...

Steve,

Hi!

Funny you should mention the Levy book. I've been reading some papers by Gideon Rosen and Michael Zimmerman and have just started to look at the Levy book.

Anyway, I think the perceptual model might be right, but it might also be an artefact of the way that I state things. I guess what I want to say are three things. First, I like the account that you get from Arpaly and Sher, which is that you're blameworthy if you're not responsive to genuine moral reasons. Second, I think that once you reject this idea it's going to be really hard to then build up a plausible account of moral responsibility in terms of, say, being responsive to the idea of duty or a sense of duty. Third, views that insist that there are epistemic constraints on moral obligations will be forced into accepting a problematic account of moral responsibility, one on which the agent's rationality is "sealed off" from genuine reasons, as you put it. I didn't touch on that third point in the post.

Send me your review of Levy's book when you're done with it; I'd love to take a look. Maybe we can chat when I'm in Dallas. I'm trying to arrange a trip for April.

Steve Sverdlik said...

Clayton,

Thanks for your remarks. It seems that there are now people (including you, obviously) who are connecting epistemology and moral responsibility in interesting ways. Since I don't have anything to offer on the epistemology side of things, I'm going to watch with interest as the learned duke this out. I certainly find the idea that you have to control your own beliefs in order to have control over your actions to be absurd on its face. But the only good answer I came up with to this was something like what you seem to be thinking: that at some point we just bump up, as it were, against objective normative reasons.

I'm probably not going to press this point in my review, even if it has some bite, since it seems too narrow in scope (I think there must be other circumstances in which we're responsible) and I have doubts about the picture of normative reasons it seems to require us to accept.

I'm inclined to step back from the argument you discuss and ask a bigger question of Levy: what would an agent who was responsible for her actions be like, if one existed? I don't get a clear picture; I can't see how we could control our beliefs in the way that he demands. I'm suspicious of a conception of moral responsibility that entails that no creature could ever be morally responsible for what she does.

Anyway, I'll send you the review when I finish it. Let me see whatever you work out on this issue. Hope to see you in Dallas.

Steve

Neil said...

I am also interested in Steve's review of Levy's book! I think Levy's argument differs from Zimmerman's in not turning on premise 6 (Levy accepts premise 6 and has argued for its truth, but his argument does not utilize the premise). Rather, he focuses on internal reasons because he thinks that agents can be blamed only for failing to do those things that they can reasonably be expected to do, and we can reasonably be expected to do something only if we can do it by means of a reasoning procedure. Levy agrees that if Arpaly's view of how moral responsibility works were true, that argument would fail. He agrees, moreover, that his argument simply begs the question against Arpaly. But he takes himself to have independent arguments against her view.

Clayton Littlejohn said...

Hi Neil,

I'm looking forward to reading his book precisely because I think this sort of argument won't work without some independent reason to reject the kind of quality of will account she and others have defended.

(This isn't Neil Levy, is it? I'm guessing it's not.)

Neil said...

I thought when I posted through my google account, my full name would be displayed. This is Neil Levy. I actually think that Arpaly changes the subject. By 'morally responsible' she just means something like 'expressing through one's actions one's morally relevant attitudes'. She does not give us any reason to believe that that is the kind of thing that underlies desert. I think she may even accept that it does not. Of course, if what you mean by MR is Arpaly-style MR, then internal reasons are not going to be especially important.

I also think that even as an account of Arpaly-style MR, the account fails. But that's another book (ms available on request!)

Steve Sverdlik said...
This comment has been removed by the author.
Clayton Littlejohn said...

Hi Neil,

That's strange, I guess there's something wonky with the commenting features. Anyway, thanks for the comments.

I've just skimmed parts of Hard Luck this past week and have been mulling over your charge that Arpaly changes the subject. I think there's something to it, but it's something that I haven't had much time to think about yet. On the one hand, I don't know if there should be a connection between blame and differential treatment. On the other, I feel the force of the trivialization charge. (You don't really have another MS on this, do you?)

Neil said...

Yes, I'm serious about having another ms on this. The new book (ms available on request!) is on the need for consciousness for moral responsibility, something that Arpaly (and Scanlon, and Angela Smith, and George Sher) deny. The Huck Finn case, as Arpaly reads it, is a counterexample. But she misreads it, and it really isn't. Huck isn't akratic in the least. I have a paper in the Dec 11 issue of *Analytic Philosophy* discussing the case. In short: Huck knows what he ought to do. He also knows what the rules he calls 'morality' require of him. He sees the conflict and concludes that morality is wrong and resolves to ignore it.

Steve Sverdlik said...

Clayton, Neil,

Hello to Neil Levy!

Other readers of this blog should note that some posts are being 'published' out of the order in which they were sent. But you should be able to figure out what's being discussed.

I've just finished writing my review. I'm happy to send it to both of you. (Neil, you'd get the draft form in any case.) Write me at my SMU e-mail address.

Since it had to be limited to 1500 words, I had to limit my critical remarks, and so I made one point about moral theory, my specialty.

The argument about the epistemic conditions on control, which is Clayton's focus, is not discussed. I've expressed my doubts about it on this blog, but I think Clayton, not I, has the kind of expertise that will help us get to the bottom of it.

Neil: I'd be interested, though, if you'd like to answer a question I asked: can you describe for us what a creature would be like who was morally responsible for her actions? Would it be possible for a kind of creature for whom this was true to come about by natural selection? Or would it have to be some sort of angel? Would it be possible for it to have the sort of beliefs that we do (which generally aren't under our control in the way certain bodily movements are)?

Two last points: I agree that Neil has a good argument against Arpaly (though I need to go back and check if he's describing her position correctly). In general, I think it's correct to say, as Neil does, that blame is different from a simple evaluation as 'bad'. (Adam Smith said that when we say that someone is virtuous we don't mean that she is useful in the way 'a chest of drawers' is.)

But I take Clayton to be saying that Arpaly (and Sher) have another point, one that bears on the specific issue of the epistemic conditions on control. This argument, as I would put it, asserts that an agent's rationality isn't to be judged exclusively in terms of what she herself judges to be rational or judges she has reason to do. If this completely internalist conception of normativity were correct, then (it seems to me) objective normative reasons, even if they exist, would have no relevance in assessing whether someone acts (or thinks) rationally, and that seems very doubtful to me.

My criticism of Neil focuses on another issue, though.

Steve Sverdlik

Neil said...

Hi Steve,
I'll email you as you suggest. Let me answer your question publicly, though. I take moral responsibility to be impossible, so long as moral responsibility is a partially historical concept. It is impossible, for any being in any world, to take responsibility for the mechanisms on which we act (to speak like Fischer). Of course some philosophers deny the historical claim; here it comes down to who can tell a more plausible story about a range of cases.

Independently of this claim, I have a second argument for the impossibility claim, one that is closer to the one Clayton targets, turning on the epistemic conditions of responsibility. The argument rests importantly on a fundamental moral intuition (which Arpaly and Clayton would reject): that it is fair to blame someone only if it is reasonable to expect them to behave otherwise, and I cash that out in terms of what they can do via a reasoning procedure. Again, everything will depend on who can tell a more compelling story about a range of cases. In defense of my view, I think it is pretty clear that Arpaly is not entitled to claim Huck as evidence for her side.

Clayton Littlejohn said...

Hi Neil,
I'd love to read the paper you described in your most recent comment (i.e., the paper that focuses on the epistemic conditions of responsibility). Is that the Analytic Philosophy piece, or something else?