I've been thinking about epistemic consequentialism lately and trying to understand Goldman's view from _Epistemology and Cognition_. The value theory is pretty simple: true beliefs are intrinsically good, false beliefs are intrinsically bad, and the two have the same magnitude of value. One interesting feature of the view is that Goldman seems to have available a rationale for saying that you can justifiably believe p by conforming to J-rules only if doing so leads to a sufficiently high truth ratio, one that doesn't yield a greater number of false beliefs than true ones. If you were to form no beliefs at all, you'd be better off than if you formed more false beliefs than true ones. So, Goldman can explain why we're not justified in following rules that lead to more failures than successes. I haven't seen much discussion of this, but at first it's not obvious what entitles the epistemic consequentialist to say that a sufficiently high truth ratio is required for justification, since the consequentialist thinks you should respond to your situation in such a way that there's no better way of responding available. But there's our explanation: not believing anything is an option, one that is neither good nor bad, and so an option that contains more bad than good comes out second best at best.
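To make the comparison with abstaining concrete, here's a toy sketch of the Goldman-style bookkeeping (Python; the function name is my own, not Goldman's): true beliefs count +1, false beliefs −1, and forming no beliefs scores 0, so any rule set whose truth ratio falls below one half does worse than believing nothing.

```python
def total_epistemic_value(n_beliefs, truth_ratio):
    """Symmetric value theory: +1 per true belief, -1 per false belief."""
    true_beliefs = n_beliefs * truth_ratio
    false_beliefs = n_beliefs * (1 - truth_ratio)
    return true_beliefs - false_beliefs

# Believing nothing at all is worth exactly 0, so:
print(total_epistemic_value(100, 0.40))  # -20.0: worse than abstaining
print(total_epistemic_value(100, 0.51))  # 2.0: better than abstaining
print(total_epistemic_value(100, 0.50))  # 0.0: exactly as good as abstaining
```

The zero-valued abstention option is what does the explanatory work: it's the benchmark that rules with more failures than successes can't beat.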
Okay, but that brings me to the problem. Goldman thinks of J-rules as giving permissions. If you're in a situation where the best you can do if you believe anything at all is get things right 51% of the time and you are permitted to believe nothing, shouldn't you be permitted to follow the rules that get things right 51% of the time? If an option is better than a permissible option, can you really say that that option is ruled out on consequentialist grounds? It seems not. But, then again, if an option is as good as a permissible option, it's hard to see how you could argue that that option isn't permissible on consequentialist grounds. So, why wouldn't rules that get things right precisely half the time be good enough to justify belief?
This matters, I think, because we know that if the evidence for h is just as strong as the evidence for ~h, there's a decisive reason to refrain from believing h and from believing ~h until you get more evidence. A similar point holds for rules that you know lead to true belief just as often as false belief.
It seems the obvious fixes are these. First, require something of believers: require that they believe in ways that promote the good. Here's what I like about this. You can still say (I think) that justification requires getting things right at least 50% of the time. Here's what I don't like about this. I think the notion of positive epistemic duties might make sense given the consequentialist framework, but I don't think there are positive epistemic duties. Second, muck around with the value theory. As SB reminded me in an email, Riggs has suggested that we might be wise to say that false beliefs are not just bad; the disvalue of a false belief is greater in magnitude than the value of a true belief.
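Riggs's modification can be put arithmetically. If a true belief is worth +1 and a false belief costs w (with w > 1), then a rule's expected value per belief at truth ratio r is r − (1 − r)·w, which is positive only when r > w/(1 + w), strictly above one half. A quick sketch (Python; the function name is mine):

```python
def breakeven_truth_ratio(false_weight):
    """Smallest truth ratio at which believing beats abstaining, given
    value +1 per true belief and -false_weight per false belief.
    Solving r - (1 - r) * false_weight = 0 for r gives w / (1 + w)."""
    return false_weight / (1 + false_weight)

print(breakeven_truth_ratio(1))  # 0.5: symmetric values, the original picture
print(breakeven_truth_ratio(2))  # ~0.667: false beliefs twice as bad as true ones are good
```

So weighting falsehood more heavily pushes the justification threshold above 50% without any appeal to positive epistemic duties, which is presumably the attraction of the second fix.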
I think there are still ways of causing trouble for a view modified in this second way; here's one. (I had some back and forth about this on Twitter, but thought I should post a little something here.) Consider the following rule:
(R) If the # of Fs is finite but too large to count, believe that the number is composite.
Following that rule, you'll get things right more often than not, but believing in accordance with it isn't believing with justification. To flesh the intuition out a bit, believing that the number of grains of rice in the kitchens of Austin is composite is believing a known unknown (in Sutton's terminology). You can't justifiably believe known unknowns. So, you can't believe with justification by believing in accordance with (R).
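For what it's worth, the "more often than not" claim is easy to check numerically: by the prime number theorem, primes near n have density roughly 1/ln(n), so among numbers too large to count in practice, almost every candidate is composite. A rough sketch (Python; function names are hypothetical):

```python
def is_prime(n):
    """Trial division; fine for the modest range sampled here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def composite_fraction(lo, hi):
    """Fraction of integers in [lo, hi) that are composite."""
    total = hi - lo
    composites = sum(1 for n in range(lo, hi) if n > 1 and not is_prime(n))
    return composites / total

# Near a million, roughly 93% of integers are composite
# (prime density is about 1/ln(10**6), i.e. about 7%):
print(composite_fraction(10**6, 10**6 + 10**4))
```

So (R)'s truth ratio is well above any plausible break-even point, which is exactly what makes it a clean counterexample: the ratio is high, yet the belief still looks unjustified.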
There are two objections. First, if you believe the number of grains of rice in Austin is composite and it is, you know; and if you know, you justifiably believe. Second, granted, you can't know that this is so, but you can justifiably believe what you know you cannot know.
I'm not moved by the objections, but I'm also not sure what my intuitions are. I tend to think that you can't justifiably believe these things and know you cannot know them. I'm very confident that if you know you're not in a position to know p, you cannot justifiably believe p. (Some discussion here.)