
The Philosopher's Rationalization-O-Meter



Usually when someone disagrees with me about a philosophical issue, I think they're about 20% correct. Once in a while, I think a comment is just straightforwardly wrong. Very rarely, I find myself convinced that the person who disagrees is correct and my original view was mistaken. But for the most part, the pattern is remarkably consistent: The critic has a piece of the truth, but I have more of it.

My inner skeptic finds this to be a highly suspicious state of affairs.

Let me clarify what I mean by "about 20% correct". I mean this: There's some merit in what the disagreeing person says, but on the whole my view is still closer to correct. Maybe there's some nuance that they're noticing, which I elided, but which doesn't undermine the big picture. Or maybe I wasn't careful or clear about some subsidiary point. Or maybe there's a plausible argument on the other side which isn't decisively refutable but which also isn't the best conclusion to draw from the full range of evidence holistically considered. Or maybe they've made a nice counterpoint which I hadn't previously considered but to which I have an excellent rejoinder available.

In contrast, for me to think that someone who disagrees with me is "mostly correct", I would have to be convinced that my initial view was probably mistaken. For example, if I argued that we ought to expect superintelligent AI to be phenomenally conscious, the critic ought to convince me that I was probably mistaken to assert that. Or if I argue that indifference is a type of racism, the critic ought to convince me that it's probably better to restrict the idea of "racism" to more active forms of prejudice.

From an abstract point of view, how often ought I expect to be convinced by those who object to my arguments, if I were admirably open-minded and rational?

For two reasons, the number should be below 50%:

1. For most of the issues I write about, I have given the matter more thought than most (not all!) of those who disagree with me. Mostly I write about issues that I have been considering for a long time or that are closely related to issues I've been considering for a long time.

2. Some (most?) philosophical disputes are such that even ideally good reasoners, fully informed of the relevant evidence, might persistently disagree without thereby being irrational. People might reasonably have different starting points or foundational assumptions that justify persistent disagreement.

Still, even taking 1 and 2 together, it seems that it should not be a rarity for a critic to raise an interesting, novel objection that I hadn't previously considered and which ought to persuade me. This is clear when I consider other philosophers: Often they get objections (sometimes from me) which, in my judgment, nicely illuminate what is incorrect in their views, and which should rationally lead them to change their views -- if only they weren't so defensively set upon rebutting all critiques! I doubt I am a much better philosopher than they are, wise enough to have wholly excellent opinions; so I must sometimes hear criticisms that ought to cause me to relinquish my views.

Let me venture to put some numbers on this.

Let's begin by excluding positions on which I have published at least one full-length paper. For those positions, considerations 1 and 2 plausibly suggest rational steadfastness in the large majority of cases.

A more revealing target is half-baked or three-quarters-baked positions on contentious issues: anything from a position I have expressed verbally, after a bit of thought, in a seminar or informal discussion, up to approximately a blog post, if the issue is fairly new to me.

Suppose that about 20% of the time what I say is off-base in a way that should be discoverable to me if I gave it more thought in a reasonably open-minded, even-handed way. Now if I'm defending that off-base position in dialogue with someone substantially more expert than I, or with a couple of peers, or with a somewhat larger group of people who are less expert than I but still thoughtful and informed, maybe I should expect that about half to three-quarters of the time I'll hear an objection that ought to move me. Multiplying and rounding, let's say that about 1/8 of the time, when I put forward a half- or three-quarters-baked idea to some interlocutors, I ought to hear an objection that makes me think: whoops, I guess I'm probably mistaken!
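
To make the back-of-the-envelope arithmetic explicit, here's a minimal sketch in Python. The probabilities are my rough guesses from above, not measurements:

    # Back-of-the-envelope estimate from the paragraph above.
    # All probabilities are assumptions, not empirical data.
    p_off_base = 0.20          # chance a half-baked position is discoverably off-base
    p_surfaced = (0.50, 0.75)  # chance the audience surfaces a decisive objection

    low, high = (p_off_base * p for p in p_surfaced)
    print(f"Chance I ought to yield: {low:.3f} to {high:.3f}")  # 0.100 to 0.150
    # The midpoint, 0.125, rounds to the 1/8 in the text.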

I hope this isn't too horrible an estimate, at least for a mature philosopher. For someone still maturing as a philosopher, the estimate should presumably be higher -- maybe 1/4. The estimate should similarly be higher if the half- or three-quarters-baked idea is a critique of someone more expert than you, concerning the topic of their philosophical expertise (e.g., pushing back against a Kant expert's interpretation of a passage of Kant that you're interested in).

Here then are two opposed epistemic vices: being too deferential or being too stubborn. The cartoon of excessive deferentiality would be the person who instantly withdraws in the face of criticism, too quickly allowing that they are probably mistaken. Students are sometimes like this, but it's hard for a really deferential person to make it far as a professional philosopher in U.S. academic culture. The cartoon of excessive stubbornness is the person who is always ready to cook up some post-hoc rationalization of whatever half-baked position happens to come out of their mouth, always fighting back, never yielding, never seeing any merit in any criticisms of their views, however wrong their views plainly are. This is perhaps the more common vice in professional philosophy in the U.S., though of course no one is quite as bad as the cartoon.

Here's a third, more subtle epistemic vice: always giving the same amount of deference. Cartoon version: For any criticism you hear, you think there's 20% truth in it (so you're partly deferential) but you never think there's more than 20% truth in it (so you're mostly stubborn). This is what my inner skeptic was worried about at the beginning of this post. I might be too close to this cartoon, always a little deferential but mostly stubborn, without sufficient sensitivity to the quality of the particular criticism being directed at me.

We can now construct a rationalization-o-meter. Stubborn rationalization, in a mature philosopher, is revealed by not thinking your critics are right, and you are wrong, at least 1/8 of the time, when you're putting forward half- to three-quarters-baked ideas. If you stand firm in 15 out of 16 cases, then you're either unusually wise in your half-baked thoughts, or you're at .5 on the rationalization-o-meter (50% of the time that you should yield you offer post-hoc rationalizations instead). If you're still maturing or if you're critiquing an expert on their own turf, the meter should read correspondingly higher, e.g., with a normative target of thinking you were demonstrably off-base 1/4 or even half the time.
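
For concreteness, here's a toy formalization in Python, just as a sketch: read the meter as the shortfall between how often you actually yield and how often you normatively should, expressed as a fraction of the normative rate.

    # Toy formalization of the rationalization-o-meter: the fraction of
    # should-yield cases in which you rationalize instead of yielding.
    def rationalization_meter(observed_yield_rate, normative_yield_rate):
        shortfall = normative_yield_rate - observed_yield_rate
        return max(0.0, shortfall / normative_yield_rate)

    # Mature philosopher yielding in 1 of 16 cases, against a 1/8 norm:
    print(rationalization_meter(1/16, 1/8))  # 0.5
    # Same behavior against the 1/4 norm for a still-maturing philosopher:
    print(rationalization_meter(1/16, 1/4))  # 0.75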

Insensitivity is revealed by having too little variation in how much truth you find in critics' remarks. I'd try to build an insensitivity-o-meter, but I'm sure you all will raise somewhat legitimate but non-decisive concerns against it.

Eric Schwitzgebel

http://schwitzsplinters.blogspot.com/2017/01/the-philosophers-rationalization-o-meter.html