The Duhem-Quine Thesis Reconsidered – Part One


A popular criticism of Karl Popper is that his criterion of falsifiability runs aground on the Duhem-Quine thesis. That is, for any putative falsification, it’s always possible to preserve a scientific hypothesis by revising auxiliary hypotheses in its stead. For example, in September 2011, a team of CERN physicists recorded neutrinos traveling 0.002 percent faster than the speed of light. Journalists notwithstanding, this observation wasn’t readily accepted as falsifying the theory of relativity. Scientists, including the CERN team, merely presumed something else was responsible for the anomaly. Subsequent experiments were unable to replicate the same outcome, and the original results were later explained by a loose fiber-optic cable. The observation of neutrinos traveling faster than the speed of light never happened; the apparent falsification was predicated upon all the cables being screwed tight. A scientific hypothesis, then, is never tested in isolation but among a web of auxiliary hypotheses, varying from the mundane to the metaphysical. It’s impossible, says the critic, to actually falsify scientific hypotheses, because, for any putative falsification, it’s logically possible that an auxiliary hypothesis is to blame instead. Therefore, Popper’s criterion of falsifiability fails to solve the problem of demarcation; scientific hypotheses are only falsifiable insofar as we arbitrarily choose to falsify them instead of auxiliary hypotheses.

My goal here is to defend Popper and his criterion of falsifiability from the Duhem-Quine thesis. In part one, I will examine Popper’s own position and reveal that not only was Popper aware of the Duhem-Quine problem before most of his critics, but he also proposed a methodological solution to it. In part two, I will attempt to demonstrate that the Duhem-Quine thesis is either false, insofar as it’s interesting, or trivial, insofar as it’s true. In either case, the Duhem-Quine thesis no longer stands as a refutation of Popper’s criterion of falsifiability.

PART ONE

Popular myth has it that Popper was once a naive falsificationist. He supposedly claimed that a single recalcitrant observation was enough, in practice, to definitively falsify any scientific hypothesis. The story goes that Popper gradually retreated from naive falsificationism in response to criticism, such as the Duhem-Quine thesis. By the end, all that remained of falsificationism was the cautionary tale of a promising but ultimately flawed attempt to solve the problem of demarcation. Contrary to the myth, however, Popper’s views on falsifiability were never naive; and rather than retreating in response to critics, he anticipated many popular objections, including the Duhem-Quine thesis. The nuances of Popper’s views about sensory experience and methodology are largely ignored, and his responses to these objections misunderstood. Even the term ‘falsificationism’ (in contradistinction to ‘verificationism’) does an injustice to the depth and novelty of Popper’s views.

In ‘Falsifiability as a Criterion of Demarcation’ (§6, The Logic of Scientific Discovery), Popper turns to potential objections to his criterion of falsifiability, and he describes the nub of the Duhem-Quine thesis:

It might be said that … it is still impossible … that any theoretical system should ever be conclusively falsified. For it is always possible to find some way of evading falsification, for example, by introducing ad hoc an auxiliary hypothesis, or by changing ad hoc a definition. It is even possible without logical inconsistency to adopt the position of simply refusing to acknowledge any falsifying experience whatsoever. Admittedly, scientists do not usually proceed in this way, but logically such a procedure is possible; and this fact, it might be claimed, makes the logical value of my proposed criterion of demarcation dubious, to say the least.

Popper associates this objection with conventionalism, a type of reactionary anti-realism. The conventionalist doesn’t interpret scientific hypotheses as descriptions of reality, but as mere analytic tools, implicit definitions, or elegant conventions. To the conventionalist, for example, the inverse-square law of gravity cannot be false, because it’s just part of the definition of gravity. Having repudiated realism, the conventionalist regards the question of whether gravity, so defined, physically exists as irrelevant or meaningless. The “reality” described by scientific hypotheses is, in this view, an artificial construct; it has value only insofar as it is convenient and elegant. Typically, then, conventionalism stands by the traditional science of the day, regarding it as an irrefutable system of implicit definitions.

In ‘Some Conventionalist Objections’ (§19), Popper outlines a key argument of conventionalism:

According to this conventionalist point of view, laws of nature are not falsifiable by observation; for they are needed to determine what is an observation and, more especially, what a scientific measurement is. It is these laws, laid down by us, which form the indispensable basis for the regulation of our clocks and the correction of our so-called ‘rigid’ measuring rods. A clock is called ‘accurate’ and a measuring rod ‘rigid’ only if the movements measured with the help of these instruments satisfy the axioms of mechanics we have decided to adopt.

When CERN physicists recorded a neutrino traveling faster than the speed of light, they assumed that some equipment malfunction was responsible. Eventually, the anomalous results were attributed to a loose fiber-optic cable. A conventionalist would applaud. In his view, whether a cable is ‘loose’ or why that even matters is only explicable in light of theories about cables, light, and electronics–they’re what make the difference between function and malfunction. The CERN equipment detected something it wasn’t supposed to: a neutrino traveling faster than the speed of light. Therefore, something was loose, either literally or metaphorically, because neutrinos, by definition, cannot exceed the speed of light. Popper explains (§19):

Whenever the ‘classical’ system of the day is threatened by the results of new experiments which might be interpreted as falsifications according to my point of view, the system will appear unshaken to the conventionalist. He will explain away the inconsistencies which may have arisen; perhaps by blaming our inadequate mastery of the system. Or he will eliminate them by suggesting ad hoc the adoption of certain auxiliary hypotheses, or perhaps of certain corrections to our measuring instruments.

These passages aren’t exhaustive, but they suffice to demonstrate Popper’s familiarity with the issues raised by the Duhem-Quine thesis. Indeed, Popper even applauds conventionalism ‘for the way it has helped to clarify the relations between theory and experiment’ (§19), and, regarding the objection itself, he writes that he ‘must admit the justice of this criticism’ (§6). In other words, Popper concurs with the logical result of the Duhem-Quine thesis and its conventionalist counterpart–scientific hypotheses cannot be tested in isolation; and it’s logically arbitrary when we choose to falsify a scientific hypothesis rather than an auxiliary.

However, for Popper, the scientific method isn’t reducible to logic, but requires extra-logical rules of method. In ‘Why Methodological Decisions are Indispensable’ (§9), Popper declares:

In point of fact, no conclusive disproof of a theory can ever be produced; for it is always possible to say that the experimental results are not reliable, or that the discrepancies which are asserted to exist between the experimental results and the theory are only apparent and that they will disappear with the advance of our understanding … If you insist on strict proof (or strict disproof) in the empirical sciences, you will never benefit from experience, and never learn from it how wrong you are.

Whether a hypothesis is falsifiable, then, turns on our method, aims, and norms of investigation, and not on the formal character of hypotheses and experiments. Popper concedes, for example, that logical analysis alone cannot determine whether a hypothesis is falsifiable or merely a system of implicit definitions. In fact, for Popper, any hypothesis can be rendered unfalsifiable by fiat, if we decide to exempt it from tests or protect it from falsification come what may. In other words, we must decide for ourselves to expose hypotheses to empirical trials, because logic will not force us. We choose to assent to falsifications not because the reports of our senses are incorrigible, or their logical implications unquestionable, but because we take the empirical method seriously. Popper continues:

Such are my reasons for proposing that empirical science should be characterised by its methods: by our manner of dealing with scientific systems: by what we wish to do with them and what we do to them. Thus I shall try to establish the rules, or if you will the norms, by which the scientist is guided when he is engaged in research or in discovery, in the sense here understood.

It’s within this methodological context that Popper argues for his criterion of falsifiability. In ‘The Problem of Demarcation’ (§4), Popper writes that his ‘criterion of demarcation will accordingly have to be regarded as a proposal for an agreement or convention.’

Popper’s problem, then, is to propose a theory of the scientific method which, among other things, addresses the issues raised by the Duhem-Quine thesis. These methodological rules are conventions with a purpose; they are the rules of the game of science. However, unlike the rules of logic, the rules of science don’t invariably arrive at true conclusions. Indeed, in ‘Methodological Rules as Conventions’ (§11), Popper’s first example of a methodological rule reads:

The game of science is, in principle, without end. He who decides one day that scientific statements do not call for any further test, and that they can be regarded as finally verified [or falsified], retires from the game.

In science, every judgement of truth and falsity is provisional, subject to revision or retraction. Scientific knowledge is conjectural–doxa rather than episteme. There’s no demand for an ultimate foundation for scientific hypotheses or observational evidence, because acceptance is always tentative. Should the need arise, any previously closed case may be reopened by the court of science and judged anew.

The Duhem-Quine thesis, however, poses a special challenge. On the one hand, continual evasion of falsification is impermissible; it amounts to refusing to learn from experience. On the other hand, sometimes apparent falsifications really are mistaken. After the prosecution has presented its case to the jury, the defense must be allowed to present its counterargument. So what kind of defense is appropriate? Were the CERN physicists wrong to assume an equipment malfunction? We think not, but why? How do we chart a course between learning from experience and not being fooled by every spurious counterexample? Popper discusses the problem at length and proposes we adopt some methodological rules or conventions:

  • Scientific hypotheses are falsified when we accept existential statements that contradict them. We should, however, demand, when possible, that results are reproducible and inter-subjectively testable. If additional experiments are impracticable, such as in the case of natural experiments, then we should at least want to specify conditions of reproducibility. Stray or fleeting results which fail to satisfy these criteria, while perhaps hinting at a problem or inspiring new research, aren’t to be regarded as falsifications.
  • We should refrain from what Popper calls ‘conventionalist stratagems’: ad hoc maneuvers to evade falsification, such as changing definitions, accusing experimenters of deceit, or dismissing results as observational error or equipment malfunction. Bare appeals to doubt, or to the mere logical possibility of false auxiliary hypotheses, aren’t acceptable objections to an apparent falsification.
  • If we wish to defend scientific hypotheses from a putative falsification, then we must structure our objections as independently testable hypotheses, and any modifications to our hypotheses should increase their degree of falsifiability. In other words, if an objection merely papers over the problem, serving no purpose other than to quarantine or excuse the apparent failure and thereby reduce explanatory content, then it has no scientific merit.

Popper doesn’t pretend that this list is exhaustive, nor does he hold that his proposed rules are beyond criticism or revision. His purpose is to begin the discussion rather than end it. To borrow a metaphor from W.W. Bartley III, one of Popper’s students, Popper’s aim is to create an ‘ecological niche’ for science. His methodological norms can be thought of as establishing conditions for the selective retention of scientific hypotheses–the evolution of scientific knowledge. For Popper, this evolution is gradual progress, albeit bumpy and fallible, toward a true description of the world–the ultimate aim of science.

In light of Popper’s views about the criterion of falsifiability, and the methodological context in which it’s couched, the CERN scientists weren’t defying Popper when they assumed an equipment malfunction. The faster-than-light neutrino was a non-reproducible result, and an independently testable hypothesis was offered, and corroborated, to explain the anomaly–a loose fiber-optic cable. While the decision to falsify the auxiliary hypothesis rather than the theory of relativity was, per the Duhem-Quine thesis, logically arbitrary, it wasn’t methodologically arbitrary. And it is the scientific method, rather than pure logic, to which we turn to satisfy our scientific questions and aims.

So that’s Popper’s solution to the issues raised by the Duhem-Quine thesis and conventionalism. But is it really a solution at all? By some measures, it clearly isn’t. If our aim is to incontrovertibly establish that scientific hypotheses are either true or false (or even probably true or false) according to some incorrigible procedure or method, then Popper’s methodological discussion will be only so much hot air. Popper, however, explicitly repudiates these goals. In ‘The Problem of Demarcation’ (§4), he writes:

Thus anyone who envisages a system of absolutely certain [or probable], irrevocably true statements as the end and purpose of science will certainly reject the proposals I shall make here … The aims of science which I have in mind are different. I do not try to justify them, however, by representing them as true or essential aims of science … There is only one way, as far as I can see, of arguing rationally in support of my proposals. This is to analyse their logical consequences: to point out their fertility–their power to elucidate the problems of the theory of knowledge.

For Popper, scientific hypotheses are innocent until proven guilty. That is, we need not attempt to justify them up front as the products of sense experience, but merely subject them to critical scrutiny and empirical testing. He adopts the same attitude with regard to his methodological proposals–they’re tentative conjectures, offered to clarify the procedures and aims of scientific investigation. The strong fallibilist streak in Popper is often ignored or misunderstood, because it didn’t hold him back from making bold claims. In consequence, he’s accused of failing to solve problems that he explicitly denied trying to solve, and that he sometimes thought were wrongheaded to begin with.

Popper understood that his criterion of falsifiability could not be unambiguously applied outside of his theory of the scientific method–the logic of scientific discovery after which his seminal work is named. The result of the Duhem-Quine thesis, then, was both anticipated and answered. Whether Popper’s answer is satisfactory depends on what we expect an acceptable criterion of demarcation to achieve. At the least, it should be acknowledged that, whether or not the Duhem-Quine thesis stands as a refutation of popular variants of the falsifiability criterion, it doesn’t stand as a refutation of Popper’s.

About Lee Kelly

Amateur philosopher

3 Responses to The Duhem-Quine Thesis Reconsidered – Part One

  1. Rafe says:

    Hello Lee! I have been so pressed with work and household commitments lately that I am behind on checking blogs and making or responding to comments.

    This is probably a suitable opportunity to mention a thesis on the Duhem problem that I wrote, supervised by Alan Chalmers (yes, that Alan Chalmers).

    http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=rafe+champion

    That was a time in the 1990s when I backed off from the “champion of Popper” stance and took on a Masters in History and Philosophy of Science to find out what it was like to approach the field like a novice, with a relatively open mind.

    It was hard to find a thesis topic. I really wanted to work on objective knowledge or metaphysical research programs, but there was nobody who could supervise those topics, so I had to settle for something that a member of staff could handle. Alan was the best of a small bunch, the other two were social constructivists, and he suggested the Duhem problem, which turned out to be a nice pick, although he had to practically push me over the line to get the work done. He was a fine supervisor.

  2. Why is this topic worth more than just a reference to LScD where Popper already answered the issue long before he was accused of having no answer to it?

    The criticism on this front is so unserious I’ve never been very motivated to write a long answer. Am I missing something?

  3. Daniele Ventre says:

    Interesting, but I’m afraid that the Quine-Duhem objection is not precisely coincident with the simple argument concerning the possibility of maneuvers conceived in order to avoid falsification.
