... We ordinarily do not consider what practical reasons we might have for believing. And the explanation for this is similar to the third-person case. Deliberations concerning our practical reasons for belief are ordinarily inefficacious and pointless. Hence, our practice is to ignore them ... Still, there can be reasons for intending to do X that do not even purport to indicate that doing X is worthwhile (just as there can be reasons for believing p that do not even purport to indicate that p is true). Think of cases in which the intention to do X will produce benefits even if you do not do X. [Here, he has in mind Kavka's toxin puzzle.] ... The puzzle, then, like the puzzle for belief, is why we are not inclined to take much notice of these consequences in arguing with others about the rationality of their intentions. Part of the solution is similar to the one for belief. Becoming convinced that one has these kinds of reasons is ordinarily not enough to generate a genuine intention to do X. So, insofar as we are trying to persuade others to have this intention, it will normally be pointless for us to cite such considerations. By contrast, if we convince them that doing X is worthwhile, they normally will acquire the intention (39).
I just have a hard time understanding how there can be a reason, in any sense of 'reason', for someone to believe p merely because that belief would be practically beneficial to have. This isn't a motivating reason. It's not a normative reason. It is, at best, a reason to perform an action that would predictably result in the formation of a belief.
If I'm reading the passage correctly, the suggestion as to why we wouldn't point to the Pascalian benefits of believing God exists in the course of trying to persuade someone to believe that God exists is not that we know that only truth-related considerations are relevant to the question 'Should I believe God exists?' but that we know that pointing to practical considerations will not typically persuade or motivate belief. But that's such a strange thing to say. I thought that part of the puzzle here was to explain the motivational inefficacy of practical considerations. I also thought that we would offer truth-related considerations even if we were convinced that they would be no more causally efficacious than practical considerations. I might have an argument for God's existence or non-existence that I accept but know my audience won't, and if they asked whether they should believe God does/doesn't exist, I'd offer the argument and say that I know it won't move them, but that's their problem. Not only that, but knowledge of what causal connections there are between my words and the beliefs of another will be empirical, built up from past observation. Don't we know a priori that there's no point in offering practical considerations when the question is whether to believe some proposition?
Later in the essay he remarks:
We rarely engage in Pascalian deliberations. We do not weigh the practical costs and benefits of believing as opposed to not believing some propositions. On the other hand, it is anything but rare for us to weigh the costs and benefits of spending additional time and resources investigating a topic. In buying a used car, for example, I will want to investigate whether the car is in good condition, but I need to make a decision about how thoroughly to do so ... The reasonable answer to such questions [about how much effort to invest in an investigation] is a function of how important the issue is to me and how likely additional effort on my part is to improve my epistemic situation. As the stakes of being right about the issue go up and the chances for improving my epistemic situation go up, it becomes increasingly reasonable for me to make the additional effort (43).
I wonder if this is a counterexample. Consider two hypotheses. H1: There doesn't exist a Pascalian God that will punish the non-believer for all eternity. H2: There doesn't exist a unitarian God that will treat all believers and non-believers alike. I think the evidential situation for H1 and H2 is basically the same. I know that the practical costs and benefits of getting these hypotheses wrong vary radically. Yet I'm just as epistemically rational and responsible in accepting H1 as H2. In other words, rationality does not compel me to investigate one of these hypotheses with greater care than the other.