Tuesday, December 22, 2009

Could 'ought' be objective but shifty?

[Fixed a gaffe]
I think something like this exchange once took place:
LD: You should do something about the kitchen and leave the living room alone.
Me: No, I think I should paint the living room and leave the kitchen alone.
LD: In that case, you should paint the walls brown or grey but not that navy blue you're looking at.

I think there are many contexts in which an advisor will (properly) advise an agent to perform a suboptimal action because she knows that the agent simply will not perform the optimal action. (I don't think this lends any support to actualism.) Nevertheless, I think that the advisor needn't be anything less than perfectly conscientious. What goes for apartment improvement goes for morality as well. I think that an advisor could be perfectly morally conscientious, know that A is better than B, but advise the agent to pursue B upon learning that the advisee won't A.

Zimmerman says this about 'ought' and the conscientious agent:
It is with overall moral obligation that the morally conscientious person is primarily concerned. When one wonders what to do in a particular situation and asks, out of conscientiousness, 'What ought I to do?,' the 'ought' expresses overall moral obligation ... Conscientiousness precludes deliberately doing what one believes to be overall morally wrong (2)

I think that even if it is with overall moral obligation that the morally conscientious advisor is primarily concerned (it might be the values that ground those obligations, however, that concern the conscientious agent, but let that pass), there might be legitimate reasons for the advisor to 'shift' focus to something she knows full well would be a violation of the advisee's obligations (e.g., when the advisee is just dead set on acting in ways that go against obligation but can be steered to act in such a way that she does the next best thing rather than something even worse).

And this raises a question. Assuming that this is so, why can't we say that just as a morally conscientious advisor might sincerely advise someone to do something other than what they really ought to do _and_ yet be primarily concerned with overall moral obligation (e.g., when they have good reason to advise the agent to do the next best thing), the agent herself might have good reason to focus on something other than her overall obligation? She could still be primarily concerned with her overall obligation, but have some good reason to strive for something else.

Here's the basic strategy for blocking the argument for prospectivism. In cases where the agent takes herself to have adequate information, the 'ought' she is primarily concerned with is one that picks out overall moral obligation. In cases where the agent takes herself to lack adequate information to determine what she ought to do all things considered, the conscientious agent might be concerned primarily with that same 'ought', but with that 'ought' out of cognitive reach, she'll aim to bring about the best state of affairs she can work out a strategy for bringing about given her state of ignorance. Provided that the 'ought' on the lips of the conscientious agent differs across these cases, intuitions about the proper use of 'ought' under ignorance are a poor guide to the truth-conditions for the 'ought' that the conscientious agent is primarily concerned about.

Following up on the post from earlier, the conscientious agent will only shift attention away from the 'ought' that picks out overall obligation when she has good moral reason to shift her attention. This requires identifying some good moral reason to set her sights on something other than what there's overall moral reason to do. I think that the desire to minimize a certain kind of risk could be just that reason.

Two cases seem to cause trouble for the objectivist view that says that an agent always ought to do what's best:
Case 2: All the evidence at Jill’s disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable, but it also indicates (in contrast with the facts) that giving him drug C would cure him completely and giving him drug A would kill him.

Case 3: All the evidence at Jill’s disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable, but her evidence leaves it completely open whether it is giving him Drug A or Drug C that will kill him or cure him.

Here’s Zimmerman’s version of the objection to the objectivist view:
Put Moore [or any objectivist] in Jill’s place in Case 2. Surely, as a conscientious person, he would decide to act as Jill did and so give John drug C. He could later say, “Unfortunately, it turns out that what I did was wrong. However, since I was trying to do what was best for John, and all the evidence at the time indicated that that was indeed what I was doing, I cannot be blamed for what I did.” But now put Moore in Jill’s place in Case 3. Surely, as a conscientious person, he would once again decide to act as Jill did and so give John drug B. But he could not later say, “Unfortunately, it turns out that what I did was wrong. However, since I was trying to do what was best for John, and all the evidence at the time indicated that that was indeed what I was doing, I cannot be blamed for what I did.” He could not say this precisely because he knew at the time that he was not doing what was best for John. Hence Moore could not justify his action by appealing to the Objective View … On the contrary, since conscientiousness precludes deliberately doing what one believes to be overall morally wrong, his giving drug B would appear to betray the fact that he actually subscribed to something like the Prospective View (Zimmerman 2008: 18).

Case 4: All the evidence at Jill’s disposal indicates (in keeping with the facts) that giving John Drug B would cure him partially and giving him no drug would render him permanently incurable. Jill’s evidence strongly indicates that drug A would cure John completely and that drug C would kill him, but Jill doesn’t realize this because she doesn’t know how to compute the expected value of her options: she doesn’t know Bayes’ Theorem, and she would need it to work out the values.

Intuitively, it seems that Jill oughtn’t take the chance and ought to use drug B. But it also seems that Jill knows that this course of action is not the one the Prospective View or the Objective View advises. As Zimmerman stresses, it is hard to know which option maximizes expected value, and the innumerate among us know that he’s right on this point. Shouldn’t we sometimes play it safe in cases like case 4? I think this is what the conscientious person would do. From an intuitive point of view, case 4 is a lot like case 3. But if intuition suggests that this is what Jill should do and the Prospective View says Jill should give drug A, it seems those who defend the Prospective View are in the same boat as those who defend the Objective View.
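
To see why the Prospective View and the 'play it safe' verdict come apart in case 4 but not in case 3, here is a minimal expected-value sketch. The utilities and probabilities below are invented purely for illustration (Zimmerman assigns no numbers), and nothing in the argument hangs on the particular figures.

```python
# A rough sketch of the expected-value comparison behind Cases 3 and 4.
# Utilities and probabilities are hypothetical, chosen only to illustrate
# how the Prospective View's recommendation flips between the two cases.

# Hypothetical utilities for John's possible outcomes.
UTILITY = {
    "complete cure": 100,
    "partial cure": 40,
    "permanently incurable": 0,
    "death": -100,
}

def expected_value(outcome_probs):
    """Expected value of an option, given a dict of outcome -> probability."""
    return sum(UTILITY[outcome] * p for outcome, p in outcome_probs.items())

# Case 3: the evidence leaves it completely open whether A cures or kills.
case3 = {
    "drug A": {"complete cure": 0.5, "death": 0.5},
    "drug B": {"partial cure": 1.0},
    "no drug": {"permanently incurable": 1.0},
}

# Case 4: the evidence strongly favors A curing (here, 0.9) over A killing (0.1).
case4 = {
    "drug A": {"complete cure": 0.9, "death": 0.1},
    "drug B": {"partial cure": 1.0},
    "no drug": {"permanently incurable": 1.0},
}

for name, case in [("Case 3", case3), ("Case 4", case4)]:
    evs = {option: expected_value(probs) for option, probs in case.items()}
    best = max(evs, key=evs.get)
    print(name, evs, "-> maximizes expected value:", best)

# Case 3: drug A has EV 0, drug B has EV 40, so the Prospective View favors B.
# Case 4: drug A has EV 80, drug B has EV 40, so the Prospective View favors A,
# which is why Jill's playing it safe with B diverges from that view.
```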

Do we have to dumb the Prospective View down? That’s one way to go, but I think that those who defend the Prospective View don’t have to go this route. If the conscientious agent in case 4 is thinking about subsidiary obligations (i.e., what to do if she's not going to do what she ought to do), we can save the Prospective View from cases like case 4, but then it seems the same move should work for case 3. It will take some work to get the details right. If you ought to A but won't and have some subsidiary obligation to do B, that's because B is second best. Instead, maybe the idea is that the obligation the agent has in mind is the best world available that she can figure out a way to realize. Something like that.
