I'm interested in this argument from Zimmerman (similar arguments are found in some recentish papers from Rosen):
1. Suppose it was wrong for Adam to φ but he φ’d anyway. He was ignorant of this fact simply because he did not believe that φ-ing was wrong.
2. You are culpable for ignorant behaviour only if you are culpable for the ignorance in which it is performed.
3. Adam is culpable for ignorant behaviour only if he is culpable for the ignorance in which it was performed.
4. You can be culpable for something only if you were in control of that thing.
5. Thus, Adam is culpable for having φ’d only if he was in control of the ignorance.
6. You are never directly in control of whether you believe or do not believe something. Control over belief is always indirect.
7. If you are culpable for something over which you have merely indirect control, then your culpability for it is itself merely indirect.
8. You are indirectly culpable for something only if that thing was a consequence of something else for which you are directly culpable.
9. Thus, Adam is culpable for having φ’d only if there is something else, ψ, for which he is directly culpable and of which the ignorance in which he φ’d was a consequence.
10. But, whatever ψ-ing was, it cannot itself have been an instance of ignorant behaviour, because then the argument would apply all over again to ψ-ing. ψ-ing must have been an item of behaviour that Adam believed at the time to be wrong.
11. Thus, Adam is culpable for having φ’d only if there was some act or omission, ψ-ing, for which Adam is directly culpable and of which his failure to believe that φ-ing was wrong was a consequence and ψ was such that Adam believed that it was wrong to ψ at the time he ψ’d.
Two points. First, I rarely, if ever, engage in wrongdoing in the belief that what I'm doing is wrong, and I don't think that I'm atypical. If the argument is sound, we are responsible for much less than we ordinarily think. Second, I think anyone who accepts the argument will have a difficult time explaining why we're blameworthy in the remaining cases of clear-eyed akrasia (i.e., cases where we engage in wrongdoing in the belief that what we're doing is wrong).
I think that the most promising line of response is to argue that you can be blamed for your de re moral unresponsiveness (as Arpaly puts it). We can distinguish between two kinds of unresponsiveness. Think of de dicto unresponsiveness as being unresponsive to the thought that something is a moral consideration that counts against some course of action. Think of de re unresponsiveness as being unresponsive to some genuine moral feature that the agent is cognizant of. In cases of de re unresponsiveness, you need to be cognizant of a fact that constitutes a reason not to act, not the fact that the fact is a reason not to act. (For example, Descartes might not have believed that non-human animals were sentient. Many people are aware that many non-human animals are sentient, but they don't think that the fact that these animals suffer constitutes a reason not to perform acts that cause them to suffer.)
It's the quality of the agent's will that determines whether she's blameworthy, and part of what determines the quality of the agent's will is whether the agent is responsive to the values and interests that morality is concerned with. Ignorance of fact might exculpate if, say, you injure another in the belief that you are helping them. But if you are (i) cognizant of a moral reason not to act without being cognizant of the fact that it is such a reason and (ii) you act against it, I don't think you automatically earn an excuse. Your actions show that you are unresponsive to moral reasons, and that's _precisely_ what you should be blamed for when you should be blamed for what you do.
It seems that anyone who endorses Z's argument has to reject this approach to blame. Why? Well, in some cases an agent will be aware that, say, an act of hers harms an animal without being aware that it is wrong to cause such harm; she will thus be ignorant of the fact that her action is wrong. The only way to get such an agent on the hook, according to the argument above, is to show that there's something she is directly responsible for that accounts for her ignorance. But if all it takes to show that an agent is blameworthy is that she is de re unresponsive, we'd reject the crucial assumption that you can be blamed for acting in ignorance only if you're responsible for your ignorance. Instead, you'd be responsible for acting in a way that shows you're unresponsive to, say, the suffering of animals.
Now, focus on the cases of clear-eyed akrasia, the cases where the agent engages in wrongdoing and acts in the belief that her actions are wrong. Would the agent be culpable in _these_ cases? I doubt it. What would render the agent's actions blameworthy? I see three options:
(a) Acting on the belief that one's action is wrong;
(b) Acting against moral reasons;
(c) The combination of (a) and (b).
Inverse akrasia cases rule out (a), so the real work has to be done by the moral reasons. (Think of Huck Finn and Jim.) The argument sketched above rules out (b), so we're down to (c) by elimination. Theoretically, it's hard to see how (c) could work if (a) and (b) are ruled out. We had to reject (b) because we had to reject the idea that you can be blameworthy for acting against reasons you don't believe to be genuine reasons. Why would believing them to be genuine and acting against them make you blameworthy if the belief that you are acting against genuine moral reasons doesn't make you blameworthy on its own? If we had an agent who, say, acted twice on the belief that her actions were wrong, once where she didn't act against a genuine reason and once where she did, (c) implies that she's blameworthy in only one of these cases. It seems that what's doing the work in explaining why she's blameworthy in the one case has to be the fact that she's acted against genuine moral reasons, not just apparent moral reasons, and that seems to assume that something like the quality of will account is correct. But anyone who adopts such an account would reject the argument for (11).