1. From the epistemic point of view, epistemic justification is the sort of thing that is always good to have.
2. From the epistemic point of view, it is better to have justified beliefs than unjustified beliefs.
Suppose you think that whenever you form your beliefs in a responsible fashion, those beliefs have 'epistemic worth', a kind of value that is non-instrumental. It also seems that epistemic responsibility is necessary but not sufficient for justification. On these assumptions we have an explanation of (1) but not of (2): every justified belief is responsibly formed and so has epistemic worth, but since responsibility doesn't suffice for justification, some unjustified beliefs have epistemic worth too, and worth alone won't explain why justified beliefs are better. Worse, we have a problem deriving a theory of the right from a theory of the good, because the good we've identified either doesn't call for promotion or, if it does, doesn't justify belief when there are reasons not to believe.
There is an explanation of why we judge (2) to be true: our judgments about what is 'better' or 'best' often reflect some prior judgment about what's right. (The point is Philippa Foot's.) So we're using rightness to explain (2) rather than goodness to explain rightness. If you want to do better, here's the challenge you face.
If there's a consequentialist explanation of (2), it will identify either some positive value that justified beliefs have and unjustified beliefs lack, one that explains their status, or some negative value that explains why unjustified beliefs are unjustified.

I don't think the first option can work, for this reason. We know that epistemic disvalue attaches to epistemically irresponsible belief, and whatever else this positive value must do, it must do something that epistemic worth doesn't: it must be a positive value that calls for the response of forming a belief. But whatever had that value would provide a reason that gave us a prima facie duty to believe, and if there were such duties, there should be sins of epistemic omission. There are no such things.

If instead there are only negative epistemic values, and justified beliefs are just those that lack them, you get this problem: whenever you form a belief (even if you form it carefully), you run the risk of bringing about this negative value. If there's no overriding reason to run that risk by (potentially) bringing about some positive value, then any belief you form runs a risk there's no reason to run, and it's hard to see what could justify such beliefs. If, however, you identify some value that gives us reason to run the risk, you're back to identifying values that would give us prima facie duties to believe, and there's no such beast.

Face it: there's no prospect of deriving a theory of epistemic justification from some prior theory of epistemic value.