Political opinions are like scientific theories: they are cognitive frameworks through which we seek to make sense of the flood of disparate information. Without them we are left standing amid a roaring storm of raw data, and we realize we must “come in from the cold” of confusion. We need to formulate a strategy to go further. We need to draw a map if we are to get anywhere, especially where we want to go, whether one is pursuing scientific research or deciding how to vote.
As Thomas Kuhn explains in his classic study The Structure of Scientific Revolutions, a workable paradigm is one that will enable predictability. “If we’ve got it right so far, this or that ought to happen next.” “If we are understanding how these factors work together, we ought to get these results; so-and-so should turn up in the next experiment.” If you don’t get the results predicted by your theory, your theory is apparently wrong. Back to the drawing board! No shame in that. You learn from your mistakes. You reduce the possibilities. It’s the process of elimination.
Suppose the experts have long rallied around a consensus paradigm that has worked pretty well, but there remain stubborn anomalies: phenomena that resist incorporation into the paradigm, clashing data that we wouldn’t have expected. What to do? There are two possible courses. One might propose ad hoc hypotheses for this and that bit of troublesome data, contrived though they may sound even to those who propose them. Are you going to give up the theory that deals successfully with 90% of the data because of the square pegs constituting the remaining 10%? I’ll get to the second option presently. But how about a couple of examples to flesh out the abstraction?
I guess the classic instance would be the astronomical paradigm shift Kuhn discusses so well. The trouble centered on the problematic “retrograde motion” of the planets. Classical Ptolemaic astronomy posited geocentrism: all the heavenly bodies revolved around the earth in regular circular orbits. Ptolemaic astronomers knew something was amiss because the planets didn’t quite follow their expected courses. Every once in a while they appeared to double back, bob around, then continue on their way. Aristotle thought the planets were sentient beings and just felt like doing a little Two-Step now and then. Later astronomers took a different approach, positing a super-complex system of wheels within wheels, like gears in a watch, atop which the planets rested, borne along on this elaborate mechanical lattice. And it worked! Reverse-engineered from the observed motions of the planets, the system of “epicycles” did fit the phenomena (and sailors still use sextants to navigate on the basis of the Ptolemaic system). But of course it couldn’t predict any new results.
But Nicolaus Copernicus thought he could do better. Suppose the earth were merely one of the planets revolving around a central sun? In that case, the retrograde motion of the planets was not real motion on the part of the heavenly spheres. Instead it was all the result of shifting perspectives. The orbits were regular, but our platform of observation was moving, too! This allowed a much simplified method of calculation. Eventually Copernican, heliocentric astronomy won the day. Ptolemaic astronomy was consigned to the Museum of Obsolete Theories.
From this case we can derive the major criterion for preferring one paradigm over another: the paradigm that makes the most economical sense of the data, and without having to posit far-fetched ad hoc hypotheses, is the better model. It’s an application of Occam’s Razor: the simpler explanation is the best. Is the truth always simple? Maybe not, but until we find out differently, we have to go by probability. And almost by definition the simpler explanation is more probable. Why bother multiplying redundant explanations when a simpler one cuts to the chase? There’s just no reason to add in needless complexities.
The second option for dealing with anomalous data is to start with it, theorize a new paradigm based on it, then see if the result can be reconciled with the old paradigm. Newton and Einstein found they were able to make new sense of what had remained baffling for Copernican astronomy, but without having to overturn the whole system. Their adjustments didn’t amount to a new set of contrived epicycles posited to rescue the old paradigm. It all proceeded inductively, and the data now made mutual sense in the same framework, all given equal weight.
It may take a long time for a new paradigm to prevail among the experts, because it must prove itself the superior option, and that quite properly requires a lot of detailed scrutiny. It is possible that some experts who have a big investment in the old paradigm (because most of their professional work was based on it) may have selfish reasons for opposing the new model, but usually that will only prolong a needful process, and most experts will welcome the new approach once they see its advantages. After all, they’re in the game because they want to get closer to the truth, not to defend some hobby horse or party line. At least one hopes so.
The same procedure obtains in biblical studies. For instance, scholars had long pondered the relation between the Synoptic gospels: Matthew, Mark, and Luke. They have much in common, almost verbatim. Why? It seems the three texts are interdependent in some way. But which way? Eventually, most scholars came round to accepting the Two Document Hypothesis, also called Markan Priority. In short, the paradigm runs like this: Matthew and Luke each independently copied material from Mark, making various changes in detail. This would explain the material shared by all three gospels. But there is also a large amount of sayings and stories shared by Matthew and Luke but with no parallel in Mark. Where did this stuff come from? There must have been a second prior source that Matthew and Luke used just as they did Mark. We call that one “Q” from the German word for “source,” Quelle. (Still awake?)
Scholars who accepted this way of making sense of the Synoptic material soon spotted anomalous data, sand in the gears of the paradigm: what to make of several places where Matthew and Luke do not quite agree with Mark but do match each other’s wording? Doesn’t that suggest that Matthew was using Luke or vice versa? It might! Starting with these data that did not fit the Two Document Hypothesis, some scholars have proposed various alternative models. Some say Mark combined material from Matthew and Luke, fusing (harmonizing) them as best he could and leaving the rest on the cutting room floor. Others suggest that Matthew used Mark, and then Luke used both Mark and Matthew. Still others think Luke used Mark, and then Matthew used both Mark and Luke. Thus there needn’t have been a Q document. When, e.g.,
Luke sometimes matches Mark but other times matches Matthew, this would be because he sometimes preferred Mark’s original, sometimes Matthew’s revised version.
I am a partisan of the Two Document (i.e., Mark and Q) Hypothesis. The scholars who advocate the alternative solutions I just mentioned obviously feel it necessary to junk the formerly regnant model, just as Copernicus overthrew the Ptolemaic model. I do not. I think the Two Document Hypothesis requires only minor adjustments. It seems to me that the Matthew-Luke agreements against Mark simply imply that Matthew and Luke were using an earlier edition of Mark which contained the reading Matthew and Luke have in common, and they preserved the original Markan wording. But the specifics don’t matter for our purposes. I’m just trying to give a general impression of how the contest between theoretical paradigms works.
But things get more complicated still! There are also so-called “incommensurable paradigms,” which are invulnerable to change. You see, the closer you look, the more it begins to look as if the criteria for the plausibility of a paradigm (and for whether its adjustments look like special-pleading “epicycles” or rather extensions of the paradigm to incorporate hitherto-anomalous data) are functions of the paradigm itself, i.e., contained within the paradigm. For example, what are readers to do with the fact that, though all four gospels show Peter denying Jesus three times, each gospel has Peter talking to different bystanders? Fundamentalists, who take it on faith that there can be no contradictions in the Bible, find it perfectly natural to posit that Peter denied Jesus six or even eight times, in order to fit in all the denials with their varying details. If the explanation comports with the doctrine of biblical inerrancy, it automatically looks good! That’s their criterion for plausibility.
But critical scholars do not hesitate to say that the gospels contradict one another and that the various gospel writers simply changed the details for whatever reason. To them it seems patently ludicrous to suggest that Peter denied Jesus six times. Why? Because these scholars have rejected inerrantism as a workable paradigm. And why? On account of a different, equally important Protestant axiom: scripture must be interpreted according to the “plain sense,” what the words would seem to mean prima facie. Otherwise you can treat the Bible like a ventriloquist dummy, making it mean whatever you want it to. And no one would read the gospel texts as recording six denials unless they were desperate to get out of a tight spot.
There can be no real communication, not even any debate, between these factions. There is no common ground. As Stanley Fish (Is There a Text in this Class?) says, we are dealing here with two insulated “communities of interpreters.” Within each herme(neu)tically sealed community of interpreters there can be much debate and dispute, as when fundamentalists argue over whether the inerrant Bible teaches free will or predestination. Or as when critical scholars discuss what might have motivated the various gospel writers to recast the walk-on roles of the various bystanders in whose ears Peter denied Jesus. But debates between the two communities are useless. They cannot help talking past one another. Everybody ends up where they started.
This is where politics comes in. Having political discussions with your friends (who are not likely to remain your friends for long!), you quickly notice you are getting nowhere, and so are they. Each of you is starting from within a self-contained paradigm from which your opponent’s perspective seems baffling. Recently I was a guest (or was that “sideshow freak”?) on a podcast where I was called to account for my support for Donald Trump. My stunned hosts could not conceive of an atheist skeptic supporting Trump, opposing abortion, doubting Global Warming, etc. Likewise, I could not believe my ears at their arguments for “gun-free zones,” etc. Each side has different criteria for plausibility, deductively derived from their paradigm itself. These criteria will determine how one views data that seems to challenge one’s position. It was clear to me that neither side can make any headway until they dare to question their presuppositions, to ask themselves the question emblazoned on a bumper sticker: “What if you’re wrong?”
Each side of the political divide lives in what Peter Berger and Thomas Luckmann (The Social Construction of Reality) call a “symbolic universe,” an internalized paradigm for construing data. Each side engages in “cognitive world-maintenance.” Each individual is reinforced (like a religious believer) in his convictions by surrounding himself with those who share his viewpoint. For instance, if you watch FOX News, as I do, and you hear people mock “Faux News” it is at once obvious they have never watched it. They are parroting the biases of the ideological “in-crowd” whose mockery is what Berger and Luckmann call a “nihilation strategy” aimed at discouraging one’s fellows from ever taking seriously any arguments from the other side. I am familiar with nihilation tactics from religious apologists who reassure their minions that biblical critics hold “skeptical” views only because they begin by arbitrarily rejecting the supernatural. That is nonsense, but they need to believe it in order to pre-empt any serious consideration of critical views. Political conservatives and liberals explain how the opposing faction suffers from psychological handicaps which incline them to their ill-founded opinions. Have you ever seen a more blatant example of the genetic fallacy?
Paul Watzlawick (How Real Is Real?) discusses the “self-sealing premise,” a belief or opinion or party-line that is invulnerable to disconfirmation by any objection, refutation, or contrary data. There is always a quiver full of explanations, excuses, rebuttals, whether consistent with each other or not. Remember, any argument supporting the presupposed position automatically sounds good, persuasive, and more than plausible to the one who offers it. And the defender is not pretending to believe these rebuttals: he really finds the excuses and alternative readings of the evidence to be convincing no matter how lame they sound to outsiders. (This is the larger reality of which the phenomenon of “confirmation bias” is the iceberg tip.)
This all raises the spectre of falsifiability. Karl Popper pointed out that some assertions are revealed, not as false, but as meaningless when the one making the assertion cannot think of any state of affairs that would falsify his assertion. You cannot really even define the state of affairs that is being asserted if you cannot specify what conditions would be inconsistent with it. My favorite example here is, not surprisingly, a theological one. If a religious believer asserts that God is in loving, providential control of the world, we would sort of expect this “hypothesis”
to have predictive value. Wouldn’t it seem to imply that God would protect us from tragedy and atrocity? But the facts do not seem to bear this out. Does the believer admit he was wrong? Not at all! He retreats to the position that God is in control, but that “he moves in mysterious ways.” But then we have to ask: if God’s being in providential control winds up looking just like God not being in control, what’s the difference? What, if anything, is even being asserted? If nothing counts against the claim, then there really is no claim.
And it is the same with political stances. You advocate policies that will, you claim, create more jobs and revitalize the economy, but there are no discernible results. Rather than going back to the drawing board or admitting that your opponents were right, you either “reinterpret” the statistics or stonewall with excuses, perhaps blaming the disastrous results of your policies on the previous administration’s policies which, you say, screwed up the economy worse than you first thought, so you double down on your failed policies. And when they fail again, you will sink still deeper into denial and excuse-making. The policies themselves have become the most important thing, not the goals to which they were originally dedicated.
Next step? Whatever destructive results the policies bring, even once they become undeniably obvious, will be considered noble simply because they are the results of the policy and the ideology underlying it. This is what happens when gun control advocates insist on reducing gun ownership by non-criminals. You would have assumed that safety and crime-reduction were the desired goals. But if, as in Chicago, New York City, and Brussels, the reverse happens, well, that’s still good. These murders were “collateral damage,” an unfortunate by-product of the inherently noble crusade to eliminate those nasty guns. Of course, criminals won’t cooperate, but let’s get rid of as many of those unholy and unclean guns as we can, and you can take them away from law-abiding citizens. “Oh,” you say, “but once we tighten gun laws, gun-owners ipso facto become criminals!”
Fossil fuels are inherently evil and unclean, so we must try to get rid of them. If this will destroy whole industries and many people’s livelihoods and make energy too expensive for the shivering poor, well, that’s just collateral damage! It’s all like pacifism: self-imposed martyrdom for the sake of ideals derived from abstract systems of political theory.
Theory is paramount in politics. Government ideologues attempt to reshape the world to make it conform to their theory’s picture of the world, as when George W. Bush sought to impose Western-style democracy on alien cultures, or when Obama thinks all he needs to do when Russia invades Ukraine is to pontificate that Russia is “on the wrong side of history.”
Worse yet, their policies assume the world already does correspond to the picture their ideology paints. Muslims can’t be terrorists, so they’re not! If you think otherwise, my friend, you suffer from Islamophobia! Crimes must be equally distributed among all population groups, so to claim one group commits a disproportionate amount of crimes can only be racist slander.
Committed to an ideology, the ideologue is living in an impenetrable bubble. Freud’s characterization of religion fits equally well here: “the projection of a wish-world onto the real world.”
Peter Berger (in his The Heretical Imperative) speaks of “relativizing the relativizers.” Once a sociologist of knowledge (like him) succeeds in showing the largely psycho-social origins of any individual’s beliefs, the ground is cleared. We are left with no escape, no option but to try to bracket what we have been taught to think, what we would like to believe, and to try our best to look at the facts inductively. And to ask ourselves why we are inclined to one or another interpretation of the facts. Look, I know that voting is a forced choice. You’ll never get to the polling place if you think you have to master all the facts on every question. But you owe it to yourself (and everyone else) to take a cold, hard look at the facts, and at yourself as the evaluator of facts. Take your best shot. Take what Don Cupitt calls “the Leap of Reason,” launching yourself out of your confining paradigm like baby Kal-el rocketing out of exploding Krypton!
So says Zarathustra.