Thursday, February 25, 2010

Zing!

A response to Shriver's modest proposal:
To the Editor:

Adam Shriver applauds the possibility that we may soon be able to reduce the discomfort of the animals we choose to raise in the horrific warehouses of factory farms through neuroscience. I’d like to propose an alternative: that we consider using neuroscience and genetic engineering to modify humans so that they derive less pleasure from consuming large amounts of animal flesh and more pleasure from consuming things like tofu.

Another option, of course, is that we leave both humans and animals unmodified and instead encourage the humans to use their superior intelligence, freer wills and more developed moral sense to see how deeply repellent it is for humans to continue to devote so much energy to find new ways of exploiting animals so that they can have tasty morsels on their plates.

N. Ann Davis
Claremont, Calif., Feb. 19, 2010

The writer is a professor of human relations and philosophy at Pomona College.

It's an interesting debate, but at this stage it's not very well defined. Shriver's suggestion seems rather plausible, "If we cannot avoid factory farms altogether, the least we can do is eliminate the unpleasantness of pain in the animals that must live and die on them. It would be far better than doing nothing at all." The critics largely seem to be saying that we should avoid factory farms. Not that Shriver disagrees. We have one side arguing for a conditional ought and the other arguing that we ought to make the conditional's antecedent false. To which I say, "I agree!"

This is reminiscent of a previous debate that took place between Kazez and an animal liberationist opposed to meliorative efforts on the grounds that such efforts only serve to make us comfortable with animal exploitation and thus lead to greater long term wrongdoing. Speaking just for myself, I think there's something quite horrible about the thought that we should refrain from minimizing animal suffering when that can be done and we know that it cannot be eradicated. However, there's something to the idea that we oughtn't undermine long term efforts to minimize animal suffering. So, maybe the horrible thought is just one of those horrible things we have to live with. What's interesting here is that it's not entirely clear whether Shriver's suggestions would, if followed, have much causal impact on the continuing practice of factory farming. I simply don't think we'll ever reach a critical mass of people to make these things go away, so I tend to think that we cannot avoid factory farms altogether. If we cannot (notice the 'if'), should we give these animals pain killers if we have them? If we should (and I think we should do what's in our power to minimize their suffering), I guess I don't see any principled objection to Shriver's proposal. I don't see any principled difference between what Shriver is proposing and the more modest proposal that we give the animals pain killers if that would help reduce their suffering. It's hard for me to recover from the letters what the opposition view is.

What else is wrong with the world?

Some people might be really crazy, but thank goodness that some people are not this crazy.

Wednesday, February 24, 2010

Demonology for atheists?

I think there's a good chance that you're missing the point if you're into demons and atheism.

Difficult, difficult, lemon, difficult

Trying to come up with a case to cause trouble for Zimmerman's prospective possibilism. Why? Because I have to, it's my job.

Case 1. (Slightly modified from last time.)

If you go to WR and do everything you should (stay away from drinks), you'll bring about +10. If you go to DG, you'll bring about +10 as well. The problem is that most people like you who go to WR take to drinking. Then, you'll likely bring about something horrible.

My take on it: you have nothing to lose by going to DG and nothing to gain by going to WR instead. You ought to go to DG, not WR. The problem is that the prospective possibilist doesn't put into the calculation the negative values that arise because of a moral failing that you could avoid simply by not failing in that way.

Case 2. (Not sure this case makes sense, but ...)
You could end up in three places and end up there in four different ways. If you end up in A, that's +36. If you end up in B, that's -90. If you end up heading straight to C via the south road, that's +9. If you head there via the north road, that's +8.

Seems easy, right? Head north to A and collect your +36 whatever that is. Here's the complication. You've been assured that once you head north, you'll forget whether it's A that is +36 or B that is +36. You'll forget whether it is B that is -90 or A that is -90. You'll still remember that if you head to C from the north it's +8. It's not known whether you'll forget because of some failing you are responsible for or not. To the extent that this makes sense, you think it's just as likely that you'll forget for reasons you are responsible for as that you'll forget for reasons that you are not responsible for. You know that when you get to the fork up north where you have to choose whether to go for A, B, or head to C you'll choose to go for C because the expected value of going for C is greater than trying your luck and flipping a coin to decide between A and B. So, why not head for C straight away and take +9 rather than +8?

Why can't the prospective possibilist say this? Because the prospective possibilist sees the value of heading north as something like this:

.5(+36) + .5(+8)

That's greater than +9.

[Why does the prospective possibilist think of the prospective value of heading North this way? Because there's a 50% chance that the failure to know at the fork how to get the +36 is your fault and we fail to reflect that if we say what the prospective actualist says, which is that the prospective value of heading north is +8.]
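Just to have the arithmetic on the table, here's a minimal sketch of the two calculations (the payoffs and the 50/50 split are the ones stipulated above; the function names and the toy code are mine, not anyone's official decision theory):

```python
# Payoffs stipulated in Case 2: A = +36, B = -90,
# C via the south road = +9, C via the north road = +8.

def south_road_value():
    # Heading straight to C via the south road is a sure +9.
    return 9

def possibilist_value_of_heading_north(p_forgetting_is_my_fault=0.5):
    # The prospective possibilist counts the best you could do up north (+36)
    # in the half of cases where the forgetting would be your own fault,
    # and the +8 (C via the north road) you'd settle for otherwise.
    return p_forgetting_is_my_fault * 36 + (1 - p_forgetting_is_my_fault) * 8

def actualist_value_of_heading_north():
    # The prospective actualist asks what you'd actually do at the fork.
    # Having forgotten which of A and B is +36, a coin flip has expected
    # value 0.5 * 36 + 0.5 * (-90) = -27, so you'd head to C for +8.
    coin_flip = 0.5 * 36 + 0.5 * (-90)
    return max(coin_flip, 8)

print(possibilist_value_of_heading_north())  # 22.0, which beats the sure +9
print(actualist_value_of_heading_north())    # 8, which loses to the sure +9
print(south_road_value())                    # 9
```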

If it bothers you that it's stipulated as a sure thing that you'll forget what the values are that attach to getting A and B, just make it a case where you have exceptionally strong inductive evidence that you'll not remember, and divide the expert explanations evenly so that half say that the reason that people who take the path north forget is that they suffer some moral failing that causes the loss of information (e.g., they start to eat the poppies) and half say that they forget and the cause isn't due to some moral failing of the agent (e.g., high altitude). I think we can get the numbers to work if there's some non-zero probability that you'll remember that A will net you +36, such that the expected value of heading to C via the south seems to exceed the value of heading north, but only because there's evidence that you'll forget because of a moral failure on your part.

Sunday, February 21, 2010

Prospects for prospective possibilism

It's difficult to come up with simple objections to Zimmerman. He's quite good at covering all the angles. I have to resort to a complicated objection. Here goes.

The prospectivist thinks that you ought to do what is prospectively best. You ought to A iff it is the option that is prospectively best (i.e., maximizes expectable value).

The possibilist thinks that you ought to do what is the best you could do rather than do what will be best. You ought to do A iff it is the best thing that could happen, not the best thing that will happen depending upon what you decide.

What happens when you combine the prospectivist view and the possibilist view? Of course, you get prospective possibilism. That's the obvious part. The tricky part is that I think you get problems. As a prospectivist, you are supposed to cash out 'S at t1 is obligated to A at t2' in terms of the expected value of A-ing at t2 in light of the evidence you have at t1. What happens when you have reason to believe at t1 that at t2 you won't do what you'd have to do to carry out successfully the plan you put into action at t1? That depends upon why you won't do it. If you won't do it because you freely choose not to, that doesn't matter. If you won't do it because you'll freely do something that prevents you from doing it or knowing how to do it, that doesn't matter. If you won't do it because you'll find that you cannot do it but not because of some free action that has this as the predictable consequence, that does matter.

Suppose you have to decide whether to build your mine in Whiskey River or Dry Gulch. You know that in Whiskey River there's whiskey and there's nothing to drink in Dry Gulch. There are snakes in Dry Gulch and you've been assured that as a result, some of the miners will have to suffer from very painful snake bites. Nothing life threatening, mind you, just every so often one will be bitten and need medical attention. There's nothing like that in Whiskey River. Other things equal, I think it's reasonable to prefer to set up your mine in Whiskey River. And, all else is equal. Except this. You know that people who go to Whiskey River drink more than they do in Dry Gulch. You know that there's a very good chance that you'll start to drink in Whiskey River and this will affect your job. You'll start to forget important details about the mining operation that could lead to a loss of life. So, you know there's a very high chance that by setting things up in Whiskey River you'll encounter a three-option mine shaft case where you fail to know where the miners are because you didn't know something you were supposed to. You know you'll encounter similar situations in Dry Gulch. Indeed, you'll face them with the same frequency, but as you won't have taken to drinking more than you ought, you'll always know where the miners are. (Okay, maybe not always but more often than you would in Whiskey River.)

The worry is this. It seems intuitive, to me at least, that knowing what you know now, you know you should set up shop in Dry Gulch. However, we have something like a three-option case. The options:

1. Set up shop in WR and refrain from drinking. No snakes. Workers are as safe as they'd be otherwise in DG.
2. Set up shop in DG. Snakes. Apart from the snakes, workers are as safe as they'd otherwise be if you didn't take to drink and set up shop in WR.
3. Set up shop in WR and take to drinking. No snakes. Workers are put at great risk thanks to your ignorance which was the predictable side-effect of your freely taking to drink.

You could freely bring it about that 1 happens, but you know that it's very unlikely that you will, and so quite likely that if you choose to go to WR, you'll bring it about that 3 happens rather than 1 or 2. To my mind, that's like throwing away information in the 3 option mine shaft case. I take it that a prospective actualist would say that you ought to go for 2 and that seems right to me. I take it that the prospective possibilist (or, one prospective possibilist) would say go for 1. That seems wrong. To me, the intuition here is worth taking seriously and it seems like a bad idea for someone who makes such hay about mine shaft cases in beating up on objectivism about 'ought' to not deal with this sort of intuition as well.
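If it helps, here's a toy version of the contrast with made-up numbers (nothing in the case fixes these figures; I've just picked them so that option 1 is best, option 2 is a bit worse because of the snakes, option 3 is awful, and drinking is very likely once you're in WR):

```python
# Made-up payoffs, purely for illustration:
# option 1: WR without drinking -> +10
# option 2: DG (snake bites)    -> +8
# option 3: WR, taking to drink -> -50
P_DRINK_IF_WR = 0.9  # stipulated only as "very likely" in the case

def actualist_value(site):
    # The prospective actualist weights WR by what you're likely to actually do there.
    if site == "WR":
        return (1 - P_DRINK_IF_WR) * 10 + P_DRINK_IF_WR * (-50)
    return 8  # DG

def possibilist_value(site):
    # A crude prospective possibilist scores WR by the best you could freely
    # bring about there, since the drinking would be your own free failing.
    return 10 if site == "WR" else 8

for label, value in [("actualist", actualist_value), ("possibilist", possibilist_value)]:
    scores = {site: value(site) for site in ("WR", "DG")}
    print(label, scores, "recommends", max(scores, key=scores.get))
# The actualist recommends DG (option 2); the crude possibilist recommends WR (option 1).
```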

Yes, actualism is supposed to be quite bad. I agree. I'm not urging anyone to go actualist. I'm concerned about the combination of prospectivism and possibilism. Independently, they look quite good but in combination, I'm not entirely sure I like what you get.

Saturday, February 20, 2010

Small tent

Young conservative doesn't like gays.

I guess that's a kind of news.

Doing the (thing you justifiably take to be) best you (justifiably think you) can

Wanted to get right the argument I ran in the Q&A against evidentialism.

Suppose you believe you ought to A.

What should you do? That depends. Suppose you ought-epistemically to believe you ought to A. Given your belief and its normative standing, it's tempting to think you ought-practically to intend to A. That assumes:
(1) O(OBOA --> IA)

Because the reasons that bear on action and intention are the same:
(2) O(OIA --> A)

According to the evidentialist, if you have sufficient evidence for p, then if you take any attitude at all concerning p, you ought to believe p. So, here's the case:

(3) You know that the best thing to do is A.
(4) You know that you can A.

Knowing (3) and (4), you know that you ought to A. I think you cannot say this and say that you _cannot_ satisfy whatever evidential standards the evidentialist says must be satisfied for it to be that you ought to believe you ought to A.

(5) OBOA.

If you A, you do what you ought to do.

Now, imagine an epistemic counterpart of yours with the same evidence. Imagine that this subject cannot A.

In the counterpart's world, (5) is false. If (5) is false there, your counterpart cannot have sufficient evidence for the belief in that world. Since the evidence is the same, either you don't really have sufficient evidence in the actual world (in which case (3) or (4) is false), or evidentialism is false. This assumes, of course, that (1) and (2) are true. As they seem true, it seems the evidentialist is in trouble.

One response was to say that I hadn't said what A-ing amounts to. Our obligations, the thought was, are limited to trying to bring about certain ends. Okay, but the worry is that 'A' will have to be an act that you can perform and that every one of your non-factive mental duplicates could perform. I don't think there's such an act. I don't even think trying satisfies _that_. If you cannot justifiably believe you ought to A, I think you cannot justifiably believe both that A-ing is the best option you have and that you can A. That's a pretty skeptical view if you think about it.

Alright, so the other response is to deny (1) or (2) to save evidentialism. These aren't the evidentialist's principles; they're mine. Fine, deny them. Imagine a case that serves as a counterexample: you ought to believe that you ought to A & you ought to refrain from intending to act in accordance with your own normative judgment. Do that, and you're crazy. Do the non-crazy thing and you have an intention you oughtn't have. Think about forming the intention that is in accordance with the epistemically flawless belief. There's _something_ good about it. I'd think that it's the sort of good that attaches to believing in accordance with the evidence. But it's not the value that makes the intention permissible. So, why does the value make the belief permissible? I can't think of an answer. [The same is true for the intention-action link.]

So, I think the value driven argument for evidentialism is just a massive failure. If the value that attaches to fitting belief to evidence explains why it is that beliefs have the deontic properties they do, we get skepticism. If the value that attaches to acting in accordance with the epistemically flawless practical judgment doesn't give a permission to act but does give permission to believe, we'll want to know why that value gives permissions sometimes but not others. That this is so, I think, means that intuitions about value cannot be what explains why evidentialism is true (even if it is true (which it clearly isn't!)).

Thursday, February 18, 2010

Not the outcome I was hoping for

It's been a frustrating day. I submitted a paper I'm pretty fond of (it's my myth of the fjb paper) to a journal back on 12.01.2007. (Look right-------------->) Journal responds on 1.01.09 with R&R. I'm sent three referee reports and they are surprisingly encouraging. Resubmit 7.08.09. Receive the final decision today. It's not good. One of the referees has mixed feelings. (To referee (#4?), if you're reading, your comments were very much appreciated. I think I can deal better with the worry you raise in the comments than I did in the paper. You might be right that your way of stating the worry is better than the way it was stated by the people I addressed in my discussion. That's for another time.) Here's what's frustrating. The news came as follows:
"Two referees were consulted. One returned equivocal advice (report copied below), finding merit in the paper, but not giving it a thoroughgoing endorsement. The other did not write a report for the author, but the advice given was negative and, given the identity of the referee, the Editorial Board took the opinion very seriously. The Board think the main thrust of the paper is promising, but in these circumstances we cannot publish."

This paper has been under review in some form or other for a pretty long time. Can't I just get a sentence or two explaining the verdict? This is like the _Blair Witch Project_. Yes, I know things ended badly, but what the hell happened?

I've emailed the editor to ask if there's something he can share that gives me some sense of what the referee's reason(s) were for rejecting. He's been quite helpful so far, but on the off chance that referee #5 is reading, will you please consider dropping me an email (off the record, obviously, I'm not going to say anything about your identity or your reasons for rejecting the paper) so I know what reason or reasons you had for the rejection? I didn't expect a good outcome, but I was hoping that someone would give some explanation.

Update
I think the latest email basically said that I'll never be given any of the reasons the 5th referee had for rejecting. I don't think it's unreasonable to expect an explanation given the length of time I've had to wait for a verdict.

Tuesday, February 16, 2010

Round 3

We had the Royal Ethics conference this past weekend and I'm getting ready for round 3, the Central APA. I'll be giving a talk on epistemic value and justification, but it's at 9:00 a.m., so I'm forgiving you in advance if you miss it.

Last weekend I gave a not particularly good presentation of my paper. Thought I'd sort of recap and restate. The discussion went sort of badly when I turned to Dancy's view and reactions to it.

On Dancy's view:
Good case:
Mustard is running down the hall. Mustard knows that a murderer is chasing him.
(1) Mustard's reason for running is that the murderer is chasing him.

Bad case:
Mustard is in the same mental states but there's no murderer.
(2) Mustard's reason for running is that the murderer is chasing him.

My worry: (1) and (2) entail that there's a murderer to run from, but the bad case is a case without a murderer.

Their response: no it doesn't. (2) can be true even if there's no murderer.

My worry: But (1) and (2) entail:
(3) Mustard ran down the hall because the murderer was chasing him.

Their response: No, that entailment doesn't hold. (3) is factive, but (1) and (2) aren't.

Me: The entailment does hold! You're just saying it doesn't because you don't want to say that (1) and (2) are factive but you know (3) is. [This part of the discussion isn't getting us anywhere.]

Them: Look, look, look, we don't need to know the facts to know why someone acted. That's what motivating reasons are for.

Me: Hmmmmm. On your view, (3) isn't a consequence of (1) and (2). On your view, there will be a bunch of 'because' statements that are true of the bad case and the good case. Whatever you want to plug in for 'Mustard ran down the hall because X' will be, by your lights, a satisfactory explanation of the agent's action that tells you why the agent ran. These 'because' statements, according to you, do not specify the reasons for which the agent ran. Once you know all the relevant 'because' statements, and we agree that they are true, if you then demand that I supply additional 'reason for which' statements, I don't know what you want from me. You want some information that's not contained in the 'because' statements _and_, whatever it is, it had better not be information that entails that the agent's beliefs are correct. What's that information? I have no idea. I think you're asking the impossible.

The following seems like a plausible view. In the bad case, we can't say what reason the agent had to do what she did, we can't say (truthfully) 'S A'd for the reason that p'. However, we can say this in the good case. All the true 'because' statements we use to explain action in the bad case are true in the good case. However, there are additional true 'because' statements that entail the correctness of the agent's beliefs that are true of the good case.

I can't tell whether the problem with my view is the assumption that Dancy's treatment of error cases is problematic, or the idea that I can't both say that acting for a reason is a matter of successfully responding to the situation and still explain the agent's behavior in the bad case. If it's the first thing, I should note that I'm not the only one who doesn't like to say things like 'He did it for the reason that p, but ~p'. If it's the second thing, I think I have a story to tell about why agents do what they do in the bad case. It's the very same psychological story that my opponents accept.

Tuesday, February 9, 2010

Reasons and entailment

They've posted the program for the 2010 Epistemic Conference. If you want to read some massive abstracts, that's the place to go.

On the drive home last night, I was thinking about reasons for action and belief (after thinking murderous thoughts about that a*&$&$# in the white Hummer weaving through traffic). It seems quite plausible that a reason to believe p counts as a reason to believe q if you know q is a consequence of p. I was trying to think if there were similar principles for action, and it seemed like it was a bit harder to formulate the right principle.

What if we said this?

(1) If R is a reason for S to X and that S X's entails that S Y's, R is a reason for S to Y.

Suppose there's a reason for me to shake your hand. That I shake your hand in the normal way entails that I have not had my arm removed. (If my arm and hand were detached, I shouldn't hold it in my left hand and extend it to you.) I don't think it's obvious that the reason I have to shake your hand is a reason to keep my arm attached to my body. So, I don't know if reasons to A are, invariably, reasons to refrain from performing an action that would prevent one from A-ing. If I didn't have an arm, I wouldn't have a reason to shake your hand at all. Sure, don't chop the arm to avoid shaking a hand, but I don't think that requires (1) for its proper explanation.

Suppose there's reason for me to apologize sincerely for what I did. I cannot do that unless I did the thing I'm to apologize for. I don't think reasons to make amends give me reason to do the thing I'm to apologize for. That involves a backwards-looking set up, but I think we can cause trouble for cases where the relevant acts are all future acts. If there's reason to A and not to A but the reasons to A win out, I'll acquire a reason to explain my actions to those who were ill served by my decision to A rather than do something else, such as B. I don't think that the reasons I'll have to explain myself for A-ing rather than B-ing are reasons to A (they count in favor of B-ing), but I can't offer this sort of explanation to those ill-served by my A-ing unless I do in fact A. So, this is another case where we have reason to do something (A-ing) that necessitates the doing of something else (not B-ing), and where the reasons for one action (explaining myself to those ill-served by my A-ing) are not reasons for the action (A-ing) that I'd have to perform in order to offer that explanation.

Are there better ways of writing out a kind of closure principle for reasons for action? If we switched from things done to things brought about, that would make our principle for reasons for action closer to our principle for reasons for belief, but I think this principle is susceptible to counterexamples as well. A reason to bring about the state of affairs in which I catch a criminal in the act is not a reason to bring about the state of affairs in which there's a criminal to catch, a reason to bring about a state of affairs that would bring about a state of affairs that I should make reparations for is not a reason to bring about _that_ state of affairs. (Or, is that not right?)

Searches

Someone just came here because they ran this search: why do people think obama is a socialist? Fitting that they used AOL.

Another amusing search: how can u tell that a reptilian is in a disguise?

Um, they don't look like a reptile?

The joke about Cindy McCain was a bad idea. Now I get reptilian hits all the time. On a serious note, Cindy isn't a reptilian. No way. Palin is. Go get the Tea Party!!!

Has time run out on Cartesian dualism?

Inspired by a comment and a conversation in a mail room.

(1) Suppose there's an apriori argument for substance dualism (or, more carefully, for immaterialism about minds).
(2) We know apriori that such an argument would establish that minds stand in no spatial relations.
(3) We know apriori (well, from the armchair) that minds stand in temporal relations.
(C) We know apriori (or from the armchair) that things can stand in temporal relations without standing in any spatial relations.

That seems to be the sort of thing we cannot know apriori, and not (just) because it seems false.

Can we turn this into an argument against dualism?
(1) If substance dualism were true, minds would stand in no spatial relations.
(2) If minds stood in no spatial relations, they would stand in no temporal relations.
(3) Minds do, however, stand in temporal relations.
(C) Dualism is false.

Solution. Mental events aren't temporally related to anything, not even other mental events. Problem. I just thought of that response, I thought of the argument much earlier.

Can the dualist challenge (1), or do they just put their faith in the idea that there can be temporal relations, such as simultaneity, between things that are spatial and things that are not, even though we cannot assess temporal relations between spatial things apart from some frame of reference?

Monday, February 8, 2010

Social Evils

Philosophy of religion is on the brain after spending this weekend in San Antonio at Kvanvig's philosophy of religion conference. Met some wonderful people, spent some time with some people I already knew, saw some really fun talks, and didn't get the shiv during my talk. I'm a bit drained at the moment. I'll be using most of my brainpower to get ready for this weekend's conference at UT, but before switching gears I thought I'd note that Ted Poston has an interesting post on social evils over at Prosblogion. It seems to me that tragedy of the commons cases are nasty little cases to deal with when it comes to arguments from evil. There's something really disturbing about the idea that someone would set up a situation knowing that it will be populated with free individuals who, if they pursue their own self-interest within what seem to be the bounds of morality, will engage in collectively self-defeating behaviors. It worries me because there's really nothing to say to any of the individuals to help them improve their lots apart from telling them to jump off of a cliff or violate the rules of morality to save their families. I can see there being some good that comes of putting people in challenging situations where the virtuous person needs to develop their virtues to flourish, but that's not the sort of situation we face in these cases.

Saturday, February 6, 2010

Two trick pony

I'm trying to develop a second trick. Here we go. I'm working on my paper on the ontology of reasons and need to know something about claims of the form 'He A'd for the reason that p' or 'His reason for A'ing was that p'.

Consider:
(1) He voted for Bill for the reason that Charlie is a crook.
(2) He voted for Bill because Charlie is a crook.
(3) Charlie is a crook.

It seems to me that (2) entails (3). One opponent agrees, but thinks that (1) doesn't entail (3) and so denies that (1) entails (2).

Now, you might think that (1) and (2) are, properly understood, elliptical for some longer statement that describes the voter's attitudes concerning Charlie. That's fine. You probably think that because you think that if (3) is false, (1) and (2) would be false. You probably think that the form of the explanation doesn't depend upon whether (3) is true. For the point of this discussion, we're not on different teams.

Evidence of entailment.
(i) It seems that (1) entails (2) just on the face of it. Conjunctions with negated conjuncts will come in a moment, but I take it that someone who denies that (1) entails (2) but thinks that it seems to some that (1) entails (2) will want to offer an account of the appearance of entailment that doesn't require an entailment. Could it be that (1) pragmatically implies that something like (2) is true? I think not. As Stanley notes, citing Saddock and Bengson, you can reinforce pragmatically imparted information, but not entailments.

So, there's nothing wrong with, "I have a cat. Indeed, I have just one cat". There's something wrong with, "I have just one cat. Indeed, I have a cat". Now, it seems strange to say, "He voted for Bill for the reason that Charlie is a crook. Indeed, he voted for Bill because Charlie is a crook." That (to me) looks/sounds/feels a lot like, "I know that it's raining outside. Indeed, it is raining outside."

(ii) It seems contradictory to assert (~2) and assert (1): He didn't vote for Bill because Charlie is a crook, but he voted for Bill for the reason that Charlie is a crook. (If there were merely a pragmatic link from (1) to (2), wouldn't (~2) cancel the pragmatic implicature that (allegedly) explains the appearance of an entailment from (1) to (2)?)

(iii) It seems defective to say: He voted for Bill for the reason that Charlie is a crook, but I don't believe he voted for Bill because Charlie is a crook. It seems defective to say: He voted for Bill for the reason that Charlie is a crook, but I have no good reason to believe he voted for Bill because Charlie is a crook. The obvious explanation is to assimilate these to more familiar kinds of Moorean absurdities such as "He knows that dogs bark but I don't believe it myself". You can only assimilate these to such cases if the speaker's commitment to (1) carries with it a commitment to belief in (3).

(iv) It seems we can rewrite each of (1)-(3) as follows:
(1') He voted for Bill for the reason that it's a fact that Charlie is a crook.
(2') It's a fact that he voted for Bill because it's a fact that Charlie is a crook.
(3') It's a fact that Charlie is a crook.

It seems (1'), (2'), and (3') entail that Charlie is a crook.

Not only that, but I think on everyone's view, (1) entails:
(4) That Charlie is a crook explains why he voted for Bill.

(4) entails:
(4') The fact that Charlie is a crook explains why he voted for Bill.

The schema: 'S's reason for A-ing is that p' entails 'p explains why S A'd', which entails 'The fact that p explains why S A'd', which entails that p is a fact. Which entails p.


Thoughts?

Tuesday, February 2, 2010

You there in the Glenn Beck T-shirt headed off to the Tea Party Patriot rally

Reprinted in full from slacktivist (found thanks to Justin Klocksiem):

"Hey you. You there in the Glenn Beck T-shirt headed off to the Tea Party Patriot rally.

Stop shouting for a moment, please, I want to explain to you why you're so very angry.

You should be angry. You're getting screwed.

I think you know that. But you don't seem to know that it doesn't have to be that way. You can stop it. You can stop it easily because the system that's screwing you over can only keep screwing you over if you keep demanding that it do so.

So stop demanding that. Stop helping the system screw you over.

Look, you can go back to yelling at me in a minute, but just read this first.

1. Get out your pay stub.

Or, if you have direct deposit -- you really should get direct deposit, it saves a lot of time and money (I point this out because, honestly, I'm trying to help you here, even though you don't make that easy Mr. Angry Screamy Guy) -- then take out that little paper receipt they give you when your pay gets directly deposited.

2. Notice that your net pay is lower than your gross pay. This is because some of your wages are withheld every pay period.

3. Notice that only some of this money that was withheld went to pay taxes. (I know, I know -- yeearrrgh! me hates taxes! -- but just try to stick with me for just a second here.)

4. Notice that some of the money that was withheld didn't go to taxes, but to your health insurance company.

5. Now go get a pay stub from last year around this time, from January of 2009.

6. Notice that the amount of your pay withheld for taxes in your current paycheck is less than the amount that was withheld a year ago.

That's because of President Barack Obama's economic stimulus plan, which included more than $200 billion in tax cuts, including the one you're holding right there in your hand, the tax cut that's now staring you in the face. Republicans all voted against that tax cut. And then they told you to get angry about the stimulus plan. They didn't explain, however, why you were supposed to get angry about getting a tax cut. Why would you be? Wouldn't it make more sense to get angry at the people who voted against that Obama tax cut?

But taxes aren't the really important thing here. The really important thing starts with the next point.

7. Notice that the amount of your pay withheld to pay for your health insurance is more than it was last year.

8. Notice that the amount of your pay withheld to pay for your health insurance is a lot more than it was last year.

I won't ask you to dig up old paychecks from 2008 and 2007, but this has been going on for a long time. Every year, the amount of your paycheck withheld to pay for your health insurance goes up. A lot.

9. Notice the one figure there on your two pay stubs that hasn't changed: Your wage. The raise you didn't get this year went to pay for that big increase in the cost of your health insurance.

10. Here's where I need you to start doing a better job of putting two and two together. If you didn't get a raise last year because the cost of your health insurance went up by a lot, and the cost of your health insurance is going to go up by a lot again this year, what do you think that means for any chance you might have of getting a raise this year?

11. Did you figure it out? That's right. The increasing cost of health insurance means you won't get a raise this year. Or next year. Or the year after that. The increasing cost of health insurance means you will never get a raise again.

That's what I meant when I said you really should be angry. That's what I meant when I said you're getting screwed.

OK, we're almost done. Just a few more points, I promise.

12. The only hope you have of ever seeing another pay raise is if Congress passes health care reform. Without health care reform, the increasing cost of your health insurance will swallow this year's raise. And next year's raise. And pretty soon it won't stop with just your raise. Without health care reform, the increasing cost of your health insurance will start making your pay go down.

13. I wish I could tell you that this was just a worst-case scenario, that this was only something that might, maybe happen, but that wouldn't be true. Without health care reform, this is what will happen. We know this because this is what is happening now. It has been happening for the past 10 years. In 2008, employers spent on average 25 percent more per employee than they did in 2001, but wages on average did not increase during those years. The price of milk went up. The price of gas went up. But wages did not. All of the money that would have gone to higher wages went to pay the higher and higher and higher cost of health insurance. And unless Congress passes health care reform, that will not change.

Well, it will change in the sense that it will keep getting worse, but it won't get better. Unless the problem gets fixed, the problem won't be fixed. That's kind of what "problem" and "fixed" mean.

14. Sadly for any chance you have of ever seeing a raise again, it looks like Congress may not pass health care reform. It looks like they won't do that because they're scared of angry voters who are demanding that they oppose health care reform, angry voters who demand that Congress not do anything that would keep the cost of health insurance from going up and up and up. Angry voters like you.

15. Do you see the point here? You are angrily, loudly demanding that Congress make sure that you never, ever get another pay raise as long as you live. Because of you and because of your angry demands, you and your family and your kids are going to have to get by with less this year than last year. And next year you're going to have to get by with even less. And if you keep angrily demanding that no one must ever fix this problem, then you're going to have to figure out how to get by on less and less every year for the rest of your life.

16. So please, for your own sake, for your family's sake and the sake of your children, stop. Stop demanding that problems not get fixed. Stop demanding that you keep getting screwed. Stay angry -- you should be angry -- but start directing that anger toward the system that's screwing you over and taking money out of your pocket. Start directing that anger toward fixing problems instead of toward making sure they never get fixed. Instead of demanding that Congress oppose health care reform so that you never, ever, get another pay raise, start demanding that they pass health care reform, as soon as possible. Because until they do, you're just going to keep on getting screwed.

And it's going to be that much worse knowing that you brought this on yourself -- that you demanded it.

Thanks for your time.

P.S. -- I didn't mention this because I'm trying here to be as patient with you as I can, but you might also want to keep in mind that in addition to screwing over yourself and screwing over your family and screwing over your own children by demanding that Congress oppose health care reform so that you will never, ever see another pay raise, by doing that you're also demanding that I never, ever see another pay raise, which means that you're also screwing over me, and my family, and my children. Not to mention the millions of poor and uninsured and uninsureable people I didn't even mention above because they don't seem to matter at all to you. And for that, let me just say the only appropriate thing that can be said to someone so determined to do direct, tangible harm to the welfare of my family: Fuck you, you fucking moron."