Saturday, April 26, 2008

Unification Accounts

This is very, very sketchy. I'm trying to make sense of an objection Christensen makes against a certain proposal about coarse and fine beliefs in his Putting Logic in its Place.

Suppose you think that talk of both fine and coarse belief picks out some real phenomenon. You might adopt what Christensen calls a 'unification account', an account on which either coarse belief or fine belief is a special case of the other. For example, you might say that the reason talk of coarse belief picks out some real phenomenon is that coarse belief is really just a sufficiently high level of confidence: coarse belief is just a special case of fine belief. Why not run things in the other direction? Why not say that fine belief is really just a special case of coarse belief? Suppose you're moderately confident that it will rain this afternoon: on this view, that fine belief is just the coarse belief that there's a good chance of rain. Likewise, talk of fine belief (e.g., Mike's fine belief about Jocko's cheating on an exam) would really just be talk of coarse belief, such as the coarse belief that there's, say, a good chance that Jocko cheated on the exam.

Such a view has a certain kind of attraction. It's not a version of eliminativism. It vindicates talk of fine belief. It's not open to the objections raised previously about standards of correctness, aims, and self-knowledge. Nevertheless, the view seems to face a difficulty. Christensen writes, "the problem with this proposal stems from the difficulty of finding an appropriate content for the relevant binary (i.e., coarse) belief". We cannot, he says, understand talk of fine belief in terms of belief about subjective probabilities if subjective probabilities are in turn understood in terms of fine belief. Fair enough. Why not something more objective? He writes, "we risk attributing to the agent a belief about matters too far removed from the apparent subject matter of her belief." (2004: 19).

If I understand this objection, the idea is this. We have various theoretical accounts of probability (e.g., probability understood as frequency, propensity, etc.), and the problem is that we misrepresent the subject's state of mind when he or she has a fine belief if we adopt any of these accounts. So, if we adopt a frequency interpretation, we attribute to Mike the belief that, within a certain reference class, cheating took place such and such number of times. And if we adopt the propensity account, we do no better, because there is no current setup that is disposed to a certain degree to lead to a certain outcome.

Here's a worry. It might be a mistake to describe Mike in such a way that his subjective conception of things has to do with the conditions a frequency theorist or propensity theorist uses to flesh out a concept of probability, but I don't see that this shows it would be a mistake to use one of these concepts in ascribing thoughts to Mike. An example should help. My niece asks for a glass of water, and I believe that she wants water. My concept of water is a concept of a chemical kind, and I don't think it would be a mistake to use that concept in ascribing thoughts to her. Such an ascription may not capture her subjective conception of water, but it does tell us what her attitudes and beliefs are answerable to: facts about a certain chemical kind that happens to be H2O. Why can't the unification theorist say something similar about this case? In a sense, there's nothing in Mike's head that is about the conditions a theorist of probability uses in unpacking the notion of probability, but what makes it appropriate to ascribe to Mike thoughts about the notion of probability being characterized is that, in having the thoughts he does, he is answerable to certain considerations that interest the probability theorist.

I worry that this line of objection that Christensen is running rests on a controversial set of assumptions about thought content ascriptions. Maybe there's a different way of understanding the argument. Maybe the idea is that none of the concepts of objective probability are concepts that pick out conditions a normal subject is answerable to in having fine beliefs.

(I don't have a view about the prospects for any sort of unification account.)


Doka dawty wine said...

Hi, Prof. Littlejohn,
I hope it's not annoying when I say I don't know what fine and coarse belief are, and I have trouble finding them on Google. And I hope it's not annoying when I say I don't understand your entry today, but I would like to.

Reading these kinds of things is very hard! From what I can muster to understand, it feels like fine belief is when a person knows X happened and coarse belief is when a person thinks X probably happened.

And I think you are talking about someone who is describing a different way to view these things. And that it is okay to think about thoughts in a certain way (in terms of probability and something else?) because even if the person isn't thinking about his thoughts that way, they can still be described or understood that way. This is why it's called a unification account.

And you are worried that this is based on controversial premises.

I'm not asking any specific questions (yet), I just wonder if I understood it correctly. I'll feel very proud of myself if I did.

Doka Dotty Wine said...

After doing a lot more research on free will, I'm beginning to think that all your entries about "fine and coarse belief" have something to do with the question of Moral Responsibility?

- Udoka

Clayton said...


Sorry, I've been away.

Alright, coarse and fine belief. There's not much on it under that name. There's gobs on it under different descriptions.

Sometimes we speak of belief as we speak of flicking a match. It's an all-or-nothing affair: either you believe or you don't, just as either you flicked the match or you didn't.

Sometimes, however, it seems our talk of belief pertains to something that comes in degrees or levels of confidence that can be measured. (e.g., He believes less firmly than she that Mustard killed Plum with the candlestick.)

Since it seems weird to think there are two sets of mental states kicking around in our heads, there is a natural tendency to think that these are two ways of talking about a single psychological phenomenon. The question is how to develop this idea properly. One view is something like this: we can talk about someone's economic worth, and we can assign numbers to measure it. At a certain level, someone counts as rich.

We can say that we have different degrees of confidence in the truth of a proposition (e.g., that it will rain tomorrow) and once you cross a certain threshold (e.g., 99.9% confident) you count as believing that it will rain.

One problem with this approach is that any value we pick as the threshold seems arbitrary. Another problem (according to some) is that such an account is hard to reconcile with some platitudes about rational belief. Suppose that what it _really_ is to believe p is to be confident to a certain degree that p is true (e.g., at 99.9% confidence you count as believing, but below that you don't). What happens when you are 99.9% confident that p, 99.9% confident that q, and you are asked whether you believe the conjunction p&q? The level of confidence you assign to the conjunction will typically fall below the threshold, so you will believe p, believe q, and fail to believe p&q. But, say some, it is rational to believe the obvious logical consequences of your beliefs. So, problems emerge.
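To make the arithmetic concrete, here's a minimal sketch, assuming (purely for illustration) that p and q are independent, so that confidence in the conjunction is the product of the individual confidences, and using 99.9% as the threshold:

```python
# Threshold view of coarse belief (illustrative): you count as believing
# a proposition once your confidence in it reaches the threshold.
THRESHOLD = 0.999

conf_p = 0.999  # just confident enough to count as believing p
conf_q = 0.999  # just confident enough to count as believing q

# Assuming p and q are independent, confidence in the conjunction
# is the product of the individual confidences.
conf_p_and_q = conf_p * conf_q  # roughly 0.998

print(conf_p >= THRESHOLD)        # believes p: True
print(conf_q >= THRESHOLD)        # believes q: True
print(conf_p_and_q >= THRESHOLD)  # believes p & q: False
```

Each conjunct clears the threshold, but the conjunction falls just below it; longer conjunctions only widen the gap.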

Anyway, I don't know that there's any direct connection between this and the issues connected to moral responsibility.