Short rejoinder to standard crits of CR and Popperism


1. The falsifiability criterion is about meaning.

2. “Falsification cannot be decisive”.

3. Failure to draw the distinction between falsifiability (a matter of logic and the form of statements) and falsification (a practical matter).

4. “Scientists don’t practice falsification”.

5. “Falsificationism is refuted by the history of science”.

6. Popper was subjected to effective criticism by Lakatos/Kuhn/Feyerabend.

7. The failure of Popper’s theory of verisimilitude casts doubt on his whole program.

8. “There is no getting away from induction/justificationism”.

9. From Habermas: “Popperism is a form of positivism, it is analytical and provides no dialectic or effective theory of criticism”. In addition, Popper’s dualism of facts and values, is/ought or propositions and proposals, provides no leverage for criticism of the status quo.


Some of the above are really in the category of “schoolboy howlers”, especially the first, and it is disconcerting to find them recycled by scholars of high repute, such as A. C. Grayling, who wrote that statements falling foul of Popper’s demarcation are “vacuous”.

On 2 and 3, Popper pointed out that falsification cannot be decisive (p 49-50 of LSD) so it is hardly a criticism to raise this. Confusion on this point arises from (3), the failure to separate the logic of the situation from the practical problems and procedures of scientists at work. Quine endorsed the logic of the demarcation criterion (1974, x) and acceptance of this would have saved the positivists and logical empiricists from two or three decades of wasted effort on their verification criterion. Back in the real world, Popper made the turn to the critical appraisal of the conventions or “rules of the game” that are required to address the practical problems of testing theories.

4. “Scientists don’t practice falsification”, meaning they don’t want to subject their theories to criticism and tests. Some do and some do not. Those who do are likely to be more effective researchers than those who do not. Some people do not follow the advice of their doctor, their dentist or the instructions that come with their appliances. How smart is that? Scientists who do not criticize their own ideas will most likely find that other people will do so, hence the importance of the social aspect of science that Popper described in Chapter 23 of OSE (1945).

5. The idea that Popper’s ideas are refuted by the history of science is based on the false assumption that Popper thought that a theory should be discarded at the first sign of adverse evidence. Apparent refutations, negative evidence, “disconfirmations” signal that there is a problem. More work is required, maybe for years, decades or even centuries. Popper insisted that a new theory or research program takes time to demonstrate its fertility and a highly successful theory should only be supplanted by a better one.

6. As for the criticism from Lakatos, Kuhn and Feyerabend: Lakatos invented “naïve falsificationism”, which successfully confused the issues. Kuhn at one point suggested that Popper should be criticized as a naïve falsificationist even though he was not one. When Kuhn retreated from his initial (and interesting) position to a more coherent (and less interesting) stance, he conceded that Popper’s approach was correct at times of crisis, meaning a serious conflict between rival theories, which for Popper was practically all the time. Feyerabend abused Popper and his wife, but in terms of substance he merely repeated Popper’s dictum that there is no such thing as “scientific method”.

7. Popper’s attempt to develop a formal measure of verisimilitude (truthlikeness) did not deliver, and he gave it up as soon as David Miller pointed out the error. This was one of Popper’s projects that did not work. Not for nothing was he a fallibilist!

8. There are four, or maybe five or six, kinds of induction, which permit writers like O’Hear to appeal to induction at every stage of the scientific enterprise. Popper’s target was the so-called logic of induction, which is supposed to assign valid, meaningful or helpful numerical probabilities to explanatory general theories, and his arguments on this topic have not been refuted. The last resort of inductivists (apart from the program of Bayesian subjectivism) is usually the claim that we need the “inductive” assumption that there are regularities or laws or propensities and patterns in nature. Popper pointed out (1935, 1959) that this is a metaphysical theory about the world and that using the label “induction” is merely a verbal strategy to defend inductivism.

The claim by Habermas and others that Popper is just a slightly deviant positivist is not sustainable in view of the full extent of Popper’s “deviance” which can be explained by the four “turns” (conjectural, objective, social and metaphysical). As for Popper’s defence of the status quo, the distinction that he drew between factual propositions and social or moral proposals was explicitly designed to give reformers a lever to change the status quo.


7 Responses to Short rejoinder to standard crits of CR and Popperism

  1. Frank Lovell says:

    Another fine piece of work, Rafe, THANK YOU!

  2. Z says:

    The 7th argument is the least documented anywhere. I was really interested in the programme of verisimilitude and I could find a lot about the programme and its failure itself on the internet. The Stanford Encyclopedia of Philosophy has a great article on it, BTW. Unfortunately I couldn’t find any detailed sources on how the failure of its goals connects to Popper’s other theories.
    My questions (in their basic forms) are: what was the original goal of the concept of verisimilitude in Popper’s opinion? What problems did he intend to solve with it? Why exactly is the failure of verisimilitude not a failure for Popper’s other theories?
    Can anyone recommend me some articles that are about these issues here?

  3. Rafe says:

    Thanks Frank!

    Good questions Z!!

    Off the top of my head (in haste): the aim of the V project was to get a measure of what it meant to get “closer to the truth”. This is an important idea, but I think that turned out to be a hopeless way to get at the problem.

    David Miller has an article on this in one of his books, someone else on the list will have it at their fingertips. He thinks it is a serious problem that calls for more work but he does not think that it wrecks Popper’s program.

    In particular it does not damage the correspondence theory of truth – that statements can be appraised in terms of their correspondence with the facts.

    This is my take: theories cannot be directly appraised; they can be tested by checking deductions against the facts, but the theory remains “out of sight”. However, theories can be compared in terms of their explanatory power and ability to survive tests – on those criteria Einstein trumped Newton, hence he got closer to the truth, but not in a way that you can sum up in a number (a measure). And of course in turn Einstein can be trumped, etc.

    I think he was trying to beat the positivists at their own game of putting numbers on theories, but it did not work.

    Some critics make a big deal out of it, but I can’t see how any of his leading ideas are impacted.

    Sometimes I talk about the dangers of adopting the “Terminus Theory” of truth, as though it is a destination and we hope to get closer and closer to it, maybe like Zeno’s arrow. I think that does not work; I prefer to think of an expanding universe of knowledge, so the more we learn the more we find there is to explore on the frontiers. Contrast the Correspondence Theory of truth, which is NOT A DESTINATION, IT IS A REGULATIVE PRINCIPLE THAT WE APPLY TO STATEMENTS.

  4. Lee Kelly says:

    Popper’s criterion of falsifiability is unfalsifiable, meaningless, foiled by the Duhem-Quine problem, and implicitly depends upon induction.

    Mathematics is unfalsifiable but is also indispensable to advanced science.

    Corroboration is induction by another name.

    Popper’s theory of verisimilitude fails. Without a rigorous definition of what it means to get closer to the truth, how does Popper know falsifiability will help get us there? He doesn’t, therefore we cannot rely upon his methods.

    Practicing scientists report doing induction. They often avoid decisive experiments of their hypotheses, contrary to Popper’s recommendations. Scientists often cling to an apparently falsified theory which later turns out to have been correct all along. In other words, the growth of knowledge does not conform to Popper’s process of conjecture and refutation.

    According to Popper, Darwin’s theory of evolution is unfalsifiable. This cannot be. The theory of evolution is an integral part of modern science. Popper’s criterion of falsifiability is falsified!

    Induction doesn’t exist!? You can’t be serious. Whenever we think something that is not immediately deducible from our prior thoughts, we are doing induction – any generalisation, therefore, is induction by definition. QED.

    Justification doesn’t exist!? So Popper was a post-modern irrationalist. Stove was right!

    Kuhn, Feyerabend, and Lakatos. ‘Nuff said.

  5. Bruce Caithness says:

    In a frivolous mood I looked up verisimilitude in the Shorter Oxford Dictionary (1975). Its earliest stated usage was 1603; it is the fact or quality of being verisimilar. Verisimilar means having the appearance of being true or real; probability. Carlyle is quoted, “Are these dramas of his not verisimilar only but true?”.

    My musing then leads me to recall a finance lecture I attended in Sydney in 2008. The speaker was the economist Woody Brock. I felt in listening to his preamble that Karl Popper was being reworked in a fresh and insightful reformulation. Points he made were: you cannot know very much; everyone is wrong; the truth cannot be known with certainty but you can be less wrong than others; don’t get hung up on data for its own sake; find a predictive theory and test it; the best you can do is prove a theory wrong, you can’t prove it to be right; open societies cultivate healthy challenge and growth, thugocracies do the opposite.

    Verisimilitude seems to have potential, for instance now it is universally accepted that the planet is roughly spherical. This is a rather simple universal statement but in the early middle ages it would have been a theory that was counterintuitive and downright provocative. It is obvious that the three dimensional model of the earth has greater verisimilitude than the two dimensional model. The three dimensional model is less wrong than the two dimensional, likewise evolution by natural selection is less wrong than young earth creationism as is obvious to anyone except true believers who are still stuck in some sort of a medieval paradigm.

    Karl Popper was right to reject induction (no induction is needed, only guesswork, said David Miller), and it is also understandable that Popper sought some sort of formalization of verisimilitude. However, the calculus for such a concept is problematic. Herbert Keuth, in The Philosophy of Karl Popper (2005), could find no way to formally rehabilitate Popper’s “simple and intuitively plausible” idea that “a theory is nearer to the truth the more true and the less false (logical) consequences it has”. My musing led me to think that maybe a notion of verisimilitude as less-wrongness might have more possibilities than more-rightness. Is this mere shuffling of the glass beads?

    One can prefer one hypothesis over another, and there can be strong grounds for such a preference e.g. greater survivability under testing, consistency with background knowledge, depth, simplicity, unifying power, more relevance to multiple problem situations. At the end of the day, however, even though verisimilitude may be a more humble aim than the pure search for truth, no quantification of numbers of errors or severity of errors seems to do the job in assessing closeness to the truth. No matter how hard we seek to test our theories they remain conjectural.

    Any criticism or comment on these musings is welcome.

  6. Michael Kennedy says:

    Your checklist of criticisms of Popper and short replies to them is most valuable. I have replies to some of Lee Kelly’s comments.

    1. Falsifiability is a property of empirical statements. It requires that a statement should have implications which can be compared with the facts. The statement ‘All swans are white’ has the implication that no swan can be black, and so is falsifiable. The proposition that empirical statements need to be falsifiable to be amenable to scientific investigation is not an empirical statement.
    Right or wrong, it is a statement about needs, not facts. It does not matter that it is not itself falsifiable through empirical implication.

    2. Popper’s remark that induction does not exist is true because no two instances of an object or conjunction of objects are ever identical. We can merrily claim that induction is the inference:
    ‘All observed X’s are Y; therefore (?) all X’s, observed and
    unobserved, are Y’
    But this assumes the identity of the X’s and Y’s. In the real world there are no identical instances, and the induction about swans is only a quasi-induction about birds we judge to be similar.

    3. Popper’s joke about there being no such thing as scientific method was in the context of his being the Professor of Logic and Scientific Method at London University. There is no general method of arriving at new hypotheses, but there is a widespread and well understood method for appraising them. It is the attempt to falsify them.

    4. Popper’s methodology was mainly prescriptive, and the fact that some scientists don’t test their own theories, or even conceal embarrassing evidence, can hardly be blamed on Popper.

  7. Michael Kennedy says:

    May I come back to this and suggest that probability, when it is not a priori or a Popperian propensity, is a relative frequency. In one thousand throws of a die I got 160 sixes. The relative frequency is 160/1000, and the probability, therefore, is 160/1000. Probability here is an estimate of what would happen in a long run of throws. Equally, if I observe 1000 ravens and I note that 1000 of them are black, the probability of seeing a black raven is 1. Similarly with 1000 swans, all of them white. The probability again is 1. But this result does not entail the truth of the theory that all swans are white. It is a hypothesis. And the observation of just one black swan both refutes the theory and lowers its probability to 1000/1001. So a probability of 0.999 is entirely consistent with a theory being false and falsified.
    Probability can be estimated for theories which specify particular events. But more complex theories like evolution or Einstein’s are not open to frequency estimates, and cannot offer probability values. Nor are they like dice, where the assumption of fairness is enough for anyone to guess that the probability of a six is 1/6, a guess which can be tested by throwing a die a large number of times. The probability of Einstein’s theory of general relativity cannot be calculated by counting sides and sixes. An a priori estimate of its probability, i.e. the probability of it being true, is impossible other than by feelings in the bones or stomach. O.K., I declare that its probability is 0.9. That is a completely arbitrary and subjective estimate. And when the theory passes some new test my subjective probability rises by an amount which, again, is arbitrary. Perhaps it goes up to 0.91, or 0.92, or 0.93. Who knows?
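    The frequency arithmetic in this comment can be sketched in a few lines of Python (a hypothetical illustration, not part of the original discussion): a frequency-based “probability” near 1 sits comfortably alongside a theory that is already refuted by a single counterexample.

    ```python
    # Illustration (hypothetical): probability as relative frequency,
    # and how a high frequency coexists with a falsified universal theory.

    def relative_frequency(successes, trials):
        """Probability as relative frequency: successes / trials."""
        return successes / trials

    # 160 sixes in 1000 throws of a die.
    p_six = relative_frequency(160, 1000)  # 0.16

    # 1000 white swans observed, then one black swan turns up.
    observations = ["white"] * 1000 + ["black"]
    p_white = relative_frequency(observations.count("white"), len(observations))

    # The universal theory "all swans are white" is refuted by one counterexample,
    # even though the relative frequency of white swans is still about 0.999.
    theory_all_swans_white = "black" not in observations

    print(p_six)                   # 0.16
    print(round(p_white, 3))       # 0.999
    print(theory_all_swans_white)  # False
    ```

    The point of the sketch is only that the frequency 1000/1001 and the falsity of the universal statement are computed from the very same observation list.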
