Further Thoughts on Critical Preference

Karl Popper held that the effort of reaching a critical preference for one theory over others is the key to escaping the trap of the logical error of induction. This position was not a late appendage: it was clearly stated for the English-speaking world in “The Logic of Scientific Discovery” (1959), the translation of “Logik der Forschung” (1934). To assist a critical analysis of the evolution of Popper’s thoughts on critical preference I have compiled in the appendix excerpts from a number of his works. The concluding extract, from Paul Arthur Schilpp’s “The Philosophy of Karl Popper” (1974), is possibly as good an explication of this topic as Popper ever gave. In it Popper gives credit to his associate David Miller. On page 52 of his own book “Critical Rationalism: A Restatement and Defence” (1994), Miller states: “There are no such things as good reasons; that is, sufficient or even partly sufficient favourable (or positive) reasons for accepting a hypothesis rather than rejecting it, or for rejecting it rather than accepting it, or for implementing a policy, or for not doing so”. Here it seems that Miller is talking about reasons that are not critical preferences.

The following map of Popper quotes is not meant to replace the reading of the original publications but rather to provide references to guide further exploration of his scattered works. It is no exaggeration to say that each time one opens Popper, Thomas Stearns Eliot’s much-quoted lines from “Little Gidding” (1942), the last of the “Four Quartets”, come to mind:
“We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.”


Any reference to “knowing” is not a reference to anything like justified true belief but rather an attestation that we do not know but rather guess and critically appraise. One should not even glorify this conjectural effort by saying it is to the best of our abilities, for who knows whether our abilities have been best used, or even what such abilities are? The quest for certainty in much 20th- and now 21st-century philosophy is unfortunately too often given an epistemic significance that fills volume after volume but leads to no escape from Hume’s problem of induction or Kant’s problem of demarcation. Relegating Popper to footnotes or to comfortable low rungs in textbook chapter organization does not help in problem solving, and, even worse, the straw-man naive-falsificationist legend of Popper actively hinders it.

The introductory extract from “The Logic of Scientific Discovery” highlights Popper’s understanding of the problem of the empirical basis: the acceptance of evidence (basic statements) is the result of human decisions, of agreements. Science is a human activity, indeed a communal activity. What decides the fate of a theory is not decisions based on aesthetic considerations, such as how simply (Occam’s Razor) the theory is worded, but decisions on which basic statements are to be accepted in attempted rebuttals of theories. The value of simplicity is to improve testability. One must also differentiate between statistical trends, e.g. statistical samples of events, and the so-called probability of hypotheses. Popper rejects the latter.

In the 1963 essay “Models, Instruments and Truth”, included in the anthology “The Myth of the Framework” (1994), Popper is critical of the ridiculous phrase “truth is relative”. This phrase confuses the choice we make among competing theories, relative to their perceived closeness to truth, with TRUTH itself, unsullied by our opinions and efforts. No matter how well we test our theories, they may still not be fair reflections of reality.



APPENDIX: Samples from Popper’s works that address critical preference.

From “The Logic of Scientific Discovery” (1959 orig., ninth impression July 1977, Hutchinson & Co, London)
Page 108 – Section 30, Theory and Experiment

It may now be possible for us to answer the question: How and why do we accept one theory in preference to others?
The preference is certainly not due to anything like an experiential justification of the statements composing the theory; it is not due to a logical reduction of the theory to experience. We choose the theory which best holds its own in competition with other theories; the one which, by natural selection, proves itself the fittest to survive. This will be the one which not only has hitherto stood up to the severest tests, but the one which is also testable in the most rigorous way. A theory is a tool which we test by applying it, and which we judge as to its fitness by the results of its applications.

From a logical point of view, the testing of a theory depends upon basic statements whose acceptance or rejection, in its turn, depends upon our decisions. Thus it is decisions which settle the fate of theories. To this extent my answer to the question, ‘how do we select a theory?’ resembles that given by the conventionalist; and like him I say that this choice is in part determined by considerations of utility. But in spite of this, there is a vast difference between my views and his. For I hold that what characterizes the empirical method is just this: that the convention or decision does not immediately determine our acceptance of universal statements but that, on the contrary, it enters into our acceptance of the singular statements – that is, the basic statements.

For the conventionalist, the acceptance of universal statements is governed by his principle of simplicity: he selects that system which is the simplest. I, by contrast, propose that the first thing to be taken into account should be the severity of tests. (There is a close connection between what I call ‘simplicity’ and the severity of tests; yet my idea of simplicity differs widely from that of the conventionalist; see section 46.) And I hold that what ultimately decides the fate of a theory is the result of a test, i.e. an agreement about basic statements. With the conventionalist I hold that the choice of any particular theory is an act, a practical matter. But for me the choice is decisively influenced by the application of the theory and the acceptance of the basic statements in connection with this application; whereas for the conventionalist, aesthetic motives are decisive.

Thus I differ from the conventionalist in holding that the statements decided by agreement are not universal but singular. And I differ from the positivist in holding that basic statements are not justifiable by our immediate experiences, but are, from the logical point of view, accepted by an act, by a free decision. (From the psychological point of view this may perhaps be a purposeful and well-adapted reaction.)

This important distinction, between a justification and a decision – a decision reached in accordance with a procedure governed by rules – might be clarified, perhaps, with the help of an analogy: the old procedure of trial by jury.
The verdict of the jury (vere dictum = spoken truly), like that of the experimenter, is an answer to a question of fact (quid facti?) which must be put to the jury in the sharpest, the most definite form. But what question is asked, and how it is put, will depend very largely on the legal situation, i.e. on the prevailing system of criminal law (corresponding to a system of theories). By its decision, the jury accepts, by agreement, a statement about a factual occurrence – a basic statement, as it were. The significance of this decision lies in the fact that from it, together with the universal statements of the system (of criminal law) certain consequences can be deduced. In other words, the decision forms the basis for the application of the system; the verdict plays the part of a ‘true statement of fact’. But it is clear that the statement need not be true merely because the jury has accepted it. This fact is acknowledged in the rule allowing a verdict to be quashed or revised.
The verdict is reached in accordance with a procedure which is governed by rules. These rules are based on certain fundamental principles which are chiefly, if not solely, designed to result in the discovery of objective truth. They sometimes leave room not only for subjective convictions but even for subjective bias. Yet even if we disregard these special aspects of the older procedure and imagine a procedure based solely on the aim of promoting the discovery of objective truth, it would still be the case that the verdict of the jury never justifies, or gives grounds for, the truth of what it asserts.
Neither can the subjective convictions of the jurors be held to justify the decision reached; although there is, of course, a close causal connection between them and the decision reached – a connection which might be stated by psychological laws; thus these convictions may be called the ‘motives’ of the decision. The fact that the convictions are not justifications is connected with the fact that different rules may regulate the jury’s procedure (for example, simple or qualified majority). This shows that the relationship between the convictions of the jurors and their verdict may vary greatly. In contrast to the verdict of the jury, the judgment of the judge is ‘reasoned’; it needs, and contains, a justification. The judge tries to justify it by, or deduce it logically from, other statements: the statements of the legal system, combined with the verdict that plays the role of initial conditions. This is why the judgment may be challenged on logical grounds. The jury’s decision, on the other hand, can only be challenged by questioning whether it has been reached in accordance with the accepted rules of procedure; i.e. formally, but not as to its content. (A justification of the content of a decision is significantly called a ‘motivated report’, rather than a ‘logically justified report’.)

The analogy between this procedure and that by which we decide basic statements is clear. It throws light, for example, upon their relativity, and the way in which they depend upon questions raised by the theory. In the case of the trial by jury, it would be clearly impossible to apply the ‘theory’ unless there is first a verdict arrived at by decision; yet the verdict has to be found in a procedure that conforms to, and thus applies, part of the general legal code. The case is analogous to that of basic statements. Their acceptance is part of the application of a theoretical system; and it is only this application which makes any further applications of the theoretical system possible. The empirical basis of objective science has thus nothing ‘absolute’ about it. Science does not rest upon solid bedrock. The bold structure of its theories rises, as it were, above a swamp. It is like a building erected on piles. The piles are driven down from above into the swamp, but not down to any natural or ‘given’ base; and if we stop driving the piles deeper, it is not because we have reached firm ground. We simply stop when we are satisfied that the piles are firm enough to carry the structure, at least for the time being.
Addendum (1972 edition), page 281.

(a) We can never rationally justify a theory, that is to say, our belief in the truth of a theory, or in its being probably true. This negative solution is compatible with the following positive solution, contained in the rule of preferring theories which are better corroborated than others: (b) We can sometimes rationally justify the preference for a theory in the light of its corroboration, that is, of the present state of the critical discussion of the competing theories, which are critically discussed and compared from the point of view of assessing their nearness to the truth (verisimilitude). The current state of this discussion may, in principle, be reported in the form of their degrees of corroboration. The degree of corroboration is not, however, a measure of verisimilitude (such a measure would have to be timeless) but only a report of what we have been able to ascertain up to a certain moment of time, about the comparative claims of the competing theories by judging the available reasons which have been proposed for and against their verisimilitude.
APPENDIX *i, Two Notes on Induction and Demarcation (1959), Page 315

Scientific theories can never be ‘justified’, or verified. But in spite of this, a hypothesis A can under certain circumstances achieve more than a hypothesis B – perhaps because B is contradicted by certain results of observations, and therefore ‘falsified’ by them, whereas A is not falsified; or perhaps because a greater number of predictions can be derived with the help of A than with the help of B. The best we can say of a hypothesis is that up to now it has been able to show its worth, and that it has been more successful than other hypotheses – although, in principle, it can never be justified, verified, or even shown to be probable. This appraisal of the hypothesis relies solely upon deductive consequences (predictions) which may be drawn from the hypothesis. There is no need even to mention induction.

The mistake usually made in this field can be explained historically: science was considered to be a system of knowledge – of knowledge as certain as it could be made. ‘Induction’ was supposed to guarantee the truth of this knowledge. Later it became clear that absolutely certain truth was not attainable. Thus one tried to get in its stead at least some kind of watered-down certainty or truth; that is to say, ‘probability’.

But speaking of ‘probability’ instead of ‘truth’ does not help us to escape either from the infinite regress or from apriorism.
From this point of view, one sees that it is useless and misleading to employ the concept of probability in connection with scientific hypotheses. The concept of probability is used in physics and in the theory of games of chance in a definite way which may be satisfactorily defined with the help of the concept of relative frequency (following von Mises). Reichenbach’s attempts to extend this concept so as to include the so-called ‘inductive probability’ or the ‘probability of hypotheses’ are doomed to failure, in my opinion, although I have no objection whatever against the idea of a ‘truth-frequency’ within a sequence of statements which he tries to invoke.

From “After the Open Society” (2008)

Page 10 – “Optimist, Pessimist and Pragmatist Views of Scientific Knowledge” (1963)
The position between optimism and pessimism which I am trying to establish may be briefly described as follows.
I agree with the pessimists that there is no justification for the claim of any particular theory or assertion to be true. Thus there is no justification of any claim to know, including the claims of scientific knowledge. But this merely means that all knowledge, including scientific knowledge, is hypothetical or conjectural: it is uncertain, fallible. This certainly does not mean that every assertion is as good as any other, competing, assertion. For we can discuss our various competing assertions, our conjectures, critically; and the result of the critical discussion is that we find out why some among the competing conjectures are better than others.
Accordingly, I agree with the optimists that our knowledge can grow, and can progress; for we can sometimes justify the verdict of our critical discussions when it ranks certain conjectures higher than others.
A verdict of this kind always appraises our conjectures or theories from the point of view of their approach to truth: although we cannot justify any claim that a theory is true, we can sometimes give good reasons for asserting that one theory is better than another, or even than all its competitors. In this way our knowledge can grow, and science can progress.

From “The Myth of the Framework” (1994), “Models, Instruments and Truth” (orig. 1963)

As to the rationality of science, this is simply the rationality of critical discussion. Indeed there is nothing, I think, which can better explain the somewhat abstract idea of rationality than the example of a well-conducted critical discussion. And a critical discussion is well-conducted if it is entirely devoted to one aim: to find a flaw in the claim that a certain theory presents a solution to a certain problem. The scientists participating in the critical discussion constantly try to refute the theory, or at least its claim that it can solve its problem.
It is most important to see that a critical discussion always deals with more than one theory at a time. For in trying to assess the merits or demerits even of one theory, it always must try to judge whether the theory in question is an advance: whether it explains things which we have been unable to explain so far – that is to say, with the help of older theories. But of course there is often (in fact, always) more than one new theory competing at a time, in which case the critical discussion tries to assess their comparative merits and demerits. Older theories, however, always play an important part in the critical discussion, especially those theories which form part of the ‘background knowledge’ of the discussion – theories which, for the time being, are not criticized, but are used as the framework within which the discussion takes place. Any single one of these background theories may however be challenged at any time (though not too many at the same time), and thus move into the foreground of the discussion. Though there is always a background, any particular part of the background may at any time lose its background character.
Thus critical discussion is essentially a comparison of the merits and demerits of two or more theories (usually more than two). The merits discussed are, mainly, the explanatory power of the theories (discussed at some length in my Logic of Scientific Discovery) – the way in which they are able to solve our problems of explaining things, the way in which the theories cohere with certain other highly valued theories, their power to shed new light on old problems and to suggest new problems. The chief demerit is inconsistency, including inconsistency with the results of experiments that a competing theory can explain.
It will be seen from this that critical discussion will often be undecided, and that there do not exist very definite criteria for tentative acceptability: that the frontier of science is very fluid.
Thus the result of a scientific discussion is very often inconclusive, not only in the sense that we cannot conclusively verify (or even falsify) any of the theories under discussion – this should by now be obvious – but also in the sense that we cannot say that one of our theories seems to have definite advantages over its competitors. If we are lucky, however, we may sometimes come to the conclusion that one of the theories has greater merits and lesser demerits than the others. (In this case some people say that the theory is ‘accepted’ – of course, only for the time being.)
From this analysis of the process of the critical discussion of theories it should be clear that the discussion never considers the question whether a theory is ‘justified’ in the sense that we are justified in accepting it as true. At best, the critical discussion justifies the claim that the theory in question is the best available, or, in other words, that it comes nearest to the truth.
Thus although we can judge theories only ‘relatively’ in the sense that we compare them with each other (and not with the truth, which we do not know), this does not mean that we are relativists (in the sense of the famous phrase that ‘truth is relative’). On the contrary, in comparing them, we try to find the one which we judge comes nearest to the (unknown) truth. So the idea of truth (of an ‘absolute’ truth) plays a most important part in our discussion. It is our main regulative idea. Though we can never justify the claim to have reached truth, we can often give some very good reasons, or justification, why one theory should be judged to be nearer to it than another.

From “Conjectures and Refutations” (1963)

Page 235 – Chapter 10 Truth, Rationality, and the Growth of Knowledge, XIII

It always remains possible, of course, that we shall make mistakes in our relative appraisal of two theories, and the appraisal will often be a controversial matter. This point can hardly be over-emphasized. Yet it is also important that in principle, and as long as there are no revolutionary changes in our background knowledge, the relative appraisal of our two theories, t1 and t2, will remain stable. More particularly, our preferences need not change, as we have seen, if we eventually refute the better of the two theories. Newton’s dynamics, for example, even though we may regard it as refuted, has of course maintained its superiority over Kepler’s and Galileo’s theories. The reason is its greater content or explanatory power. Newton’s theory continues to explain more facts than did the others; to explain them with greater precision; and to unify the previously unconnected problems of celestial and terrestrial mechanics. The reason for the stability of relative appraisals such as these is quite simple: the logical relation between the theories is of such a character that, first of all, there exist with respect to them those crucial experiments, and these, when carried out, went against Newton’s predecessors. And secondly, it is of such a character that the later refutations of Newton’s theory could not support the older theories: they either did not affect them, or (as with the perihelion motion of Mercury) they could be claimed to refute the predecessors also.

Page 248 – XXII

While the verificationists or inductivists in vain try to show that scientific beliefs can be justified or, at least, established as probable (and so encourage, by their failure, the retreat into irrationalism), we of the other group have found that we do not even want a highly probable theory. Equating rationality with the critical attitude, we look for theories which, however fallible, progress beyond their predecessors; which means that they can be more severely tested, and stand up to some of the new tests. And while the verificationists laboured in vain to discover valid positive arguments in support of their beliefs, we for our part are satisfied that the rationality of a theory lies in the fact that we choose it because it is better than its predecessors; because it can be put to more severe tests; because it may even have passed them, if we are fortunate; and because it may, therefore, approach nearer to the truth.

Page 387 – Addenda Some Technical Notes 1. Empirical Content

Empiricists usually believed that the empirical basis consisted of absolutely ‘given’ perceptions or observations, of ‘data’, and that science could build on these data as if on rock. In opposition, I pointed out that the apparent ‘data’ of experience were always interpretations in the light of theories, and therefore affected by the hypothetical or conjectural character of all theories.
That those experiences which we call ‘perceptions’ are interpretations – interpretations, I suggest, of the total situation in which we find ourselves when ‘perceiving’ – is an insight due to Kant. It has often been formulated, somewhat awkwardly, by saying that perceptions are interpretations of what is given to us by our senses; and from this formulation sprang the belief that there must be present some ultimate ‘data’, some ultimate material which must be uninterpreted (since interpretation must be of something, and since there cannot be an infinite regress). But this argument does not take into account that (as already suggested by Kant) the process of interpretation is at least partly physiological, so that there are never any uninterpreted data experienced by us: the existence of these uninterpreted ‘data’ is therefore a theory, not a fact of experience, and least of all an ultimate, or ‘basic’ fact.

Thus there is no uninterpreted empirical basis; and the test statements which form the empirical basis cannot be statements expressing uninterpreted ‘data’ (since no such data exist) but are, simply, statements which state observable simple facts about our physical environment. They are, of course, facts interpreted in the light of theories; they are soaked in theory, as it were.

As I pointed out in my Logic of Scientific Discovery (end of section 25), the statement ‘Here is a glass of water’, cannot be verified by any observational experience. The reason is that the universal terms which occur in this statement (‘glass’, ‘water’) are dispositional: they denote physical bodies which exhibit a certain law-like behaviour.

From “Objective Knowledge” (1972)

Page 20 – Chapter 1. Conjectural Knowledge, Section 8 Corroboration: The Merits of Improbability

The fundamental difference between my approach and the approach for which I long ago introduced the label ‘inductivist’ is that I lay stress on negative arguments, such as negative instances or counter-examples, refutations, and attempted refutations – in short, criticism – while the inductivist lays stress on ‘positive instances’, from which he draws ‘non-demonstrative inferences’, and which he hopes will guarantee the ‘reliability’ of the conclusions of these inferences. In my view, all that can possibly be ‘positive’ in our scientific knowledge is positive only in so far as certain theories are, at a certain moment of time, preferred to others in the light of our critical discussion which consists of attempted refutations, including empirical tests. Thus even what may be called ‘positive’ is so only with respect to negative methods.

From “Realism and the Aim of Science” (1983)

Page 65 – Chapter 1 Induction, Section 4 A Family of Four Problems III
Accordingly, I should not expect that a more highly corroborated theory will as a rule outlive a less well corroborated theory. The life expectancy of a theory does not, I think, grow with its degree of corroboration, or with its past power to survive tests.
But do I not (I will be asked) expect the sun to rise tomorrow, or do I not base my predictions on the laws of motion? Of course I do, because they are the best laws available, as discussed at length above. Even where I have theoretical doubts, I shall base my actions (if I have to act – that is, to choose) on the choice of the best theory available. Thus I should be prepared to bet on the sun’s rising tomorrow (betting is a practical action), but not on the laws of Newtonian (or Einsteinian) mechanics to survive future criticism, or to survive it longer than, say, the best available theory of synaptic transmission, even though the latter has (or so it seems) a lesser degree of corroboration. As to practical actions (such as betting on predictions made by these theories), I should be ready to base them in both cases on the best theory in its field, provided it has been well tested.
The matter can also be put like this. The question of survival of a theory is a matter pertaining to its historical fate, and thus to the history of science. On the other hand, its use for prediction is a matter connected with its application. These two questions are related, but not intimately. For we often apply theories without any hesitation even if they are dead – that is falsified – as long as they are sufficiently good approximations for the purpose in hand. Thus there is nothing paradoxical in my readiness to bet on applications of a theory combined with a refusal to bet on the survival of the same theory.
My refusal to bet on the survival of a well corroborated theory shows that I do not draw any inductive conclusion from past survival to future survival.
Page 71 – Chapter 1 Induction, Section 4 A Family of Four Problems VI
I have replaced the problem “How do you know? What is the reason or justification, for your assertion?” by the problem: “Why do you prefer this conjecture to competing conjectures? What is the reason for your preference?”
While my answer to the first question is “I do not know”, my answer to the second problem is that, as a rule, our preference for a better corroborated theory can be defended rationally by those arguments which have been used in our critical discussion, including of course our discussion of the results of tests. These are the arguments of which the degree of corroboration is intended to provide a summary report.
In this way the logical problem of induction is solved.

From “Unended Quest” (1974), standalone printing. “Unended Quest” is the autobiography included in the two-volume Schilpp “The Philosophy of Karl Popper”.

Page 103 – Chapter 20, Truth; Probability; Corroboration

I regarded (and I still regard) the degree of corroboration of a theory merely as a critical report on the quality of past performance: it could not be used to predict future performance. (The theory, of course, may help us to predict future events.) Thus it had a time index: one could only speak of the degree of corroboration of a theory at a certain stage of its critical discussion. In some cases it provided a very good guide if one wished to assess the relative merits of two or more competing theories in the light of past discussions. When faced with the need to act, on one theory or another, the rational choice was to act on that theory – if there was one – which so far had stood up to criticism better than its competitors had: there is no better idea of rationality than that of a readiness to accept criticism; that is, criticism which discusses the merits of competing theories from the point of view of the regulative idea of truth.
Accordingly, the degree of corroboration of a theory is a rational guide to practice. Although we cannot justify a theory – that is, justify our belief in its truth – we can sometimes justify our preference for one theory over another; for example if its degree of corroboration is greater.
I have been able to show, very simply, that Einstein’s theory is (at least at the moment of writing) preferable to Newton’s, by showing that its degree of corroboration is greater.
A decisive point about degree of corroboration was that, because it increased with the severity of tests, it could be high only for theories with a high degree of testability or content. But this meant that degree of corroboration was linked to improbability rather than to probability: it was thus impossible to identify it with probability (although it could be defined in terms of probability – as can improbability).
All these problems were opened, or dealt with, in Logik der Forschung; but I felt that there was more to be done about them, and that an axiomatization of the probability calculus was the thing I should do next.

From P. A. Schilpp, “The Philosophy of Karl Popper” (1974)

Pages 1024 ff. – Replies to My Critics, Section 14 The Psychological and Pragmatic Problems of Induction

It is, I think, hardly open to serious doubt that we are fitted with an immensely rich genetic endowment which, among other things, makes us most eager to generalize and to look out for regularities; and also, to apply the method of trial and error. Now I assert that all learning of new things is by the selective elimination of error rather than by instruction. (I do not deny that there exists what Konrad Lorenz calls imprinting; but this is very different from inductive instruction through repetition.) I assert, moreover, that this is an application of what I have called the principle of transference from logic to psychology – the principle that what is true in logic must, by and large, be true in psychology.

My solution of the logical problem of induction was that we may have preferences for certain of the competing conjectures; that is, for those which are highly informative and which so far have stood up to eliminative criticism. These preferred conjectures are the result of selection, of the struggle for survival of the hypotheses under the strain of criticism, which is artificially intensified selection pressure.

The same holds for the psychological problem of induction. Here too we are faced with competing hypotheses, which may perhaps be called beliefs, and some of them are eliminated, while others survive, anyway for the time being. Animals are often eliminated along with their beliefs; or else they survive with them. Men frequently outlive their beliefs; but for as long as the beliefs survive (often a very short time), they form the (momentary or lasting) basis of action.
My central thesis is that this Darwinian procedure of the selection of beliefs and actions can in no sense be described as irrational. In no way does it clash with the rational solution of the logical problem of induction. Rather, it is just the transference of the logical solution to the psychological field. (This does not mean, of course, that we never suffer from what are called “irrational beliefs”.)

Thus with an application of the principle of transference to Hume’s psychological problem Hume’s irrationalist conclusions disappear.

In talking of preference I have so far discussed only the theoretician’s preference – if he has any; and why it will be for the “better”, that is, more testable, theory, and for the better tested one. Of course, the theoretician may not have any preference: he may be discouraged by Hume’s, and my, “sceptical” solution to Hume’s logical problem; he may say that, if he cannot make sure of finding the true theory among the competing theories, he is not interested in any method like the one described – not even if the method makes it reasonably certain that, if a true theory should be among the theories proposed, it will be among the surviving, the preferred, the corroborated ones. Yet a more sanguine or more dedicated or more curious “pure” theoretician may well be encouraged, by our analysis, to propose again and again new competing theories in the hope that one of them may be true – even if we shall never be able to make sure of any one that it is true.

Thus the pure theoretician has more than one way of action open to him; and he will choose a method such as the method of trial and the elimination of error only if his curiosity exceeds his disappointment at the unavoidable uncertainty and incompleteness of all our endeavours.

It is different with him qua man of practical action. For a man of practical action has always to choose between some more or less definite alternatives, since even inaction is a kind of action.

But every action presupposes a set of expectations, that is, of theories about the world. Which theory shall the man of action choose? Is there such a thing as a rational choice?

This leads us to the pragmatic problems of induction, which to start with, we might formulate thus:
(a) Upon which theory should we rely for practical action, from a rational point of view?
(b) Which theory should we prefer for practical action, from a rational point of view?
My answer to (a) is: from a rational point of view, we should not “rely” on any theory, for no theory has been shown to be true, or can be shown to be true (or “reliable”).
My answer to (b) is: we should prefer the best tested theory as a basis for action.

In other words, there is no “absolute reliance”; but since we have to choose, it will be “rational” to choose the best tested theory. This will be “rational” in the most obvious sense of the word known to me: the best tested theory is the one which, in the light of our critical discussion, appears to be the best so far; and I do not know of anything more “rational” than a well-conducted critical discussion.

Since this point appears not to have got home I shall try to restate it here in a slightly new way, suggested to me by David Miller. Let us forget momentarily about what theories we “use” or “choose” or “base our practical actions on”, and consider only the resulting proposal or decision (to do X, not to do X; to do nothing; or so on). Such a proposal can, we hope, be rationally criticized; and if we are rational agents we will want it to survive, if possible, the most testing criticism we can muster. But such criticism will freely make use of the best tested scientific theories in our possession. Consequently any proposal that ignores these theories (where they are relevant, I need hardly add) will collapse under criticism. Should any proposal remain, it will be rational to adopt it.

This seems to me all far from tautological. Indeed, it might well be challenged by challenging the italicized sentence in the last paragraph. Why, it might be asked, does rational criticism make use of the best tested although highly unreliable theories? The answer, however, is exactly the same as before. Deciding to criticize a practical proposal from the standpoint of modern medicine (rather than, say, in phrenological terms) is itself a kind of “practical” decision (anyway it may have practical consequences). Thus the rational decision is always: adopt critical methods that have themselves withstood severe criticism.

There is, of course, an infinite regress here. But it is transparently harmless.

Now I do not particularly want to deny (or, for that matter, assert) that, in choosing the best tested theory as a basis for action, we “rely” on it, in some sense of the word. It may therefore even be described as the most “reliable” theory available, in some sense of this term. Yet this is not to say that it is “reliable”. It is “unreliable” at least in the sense that we shall always do well, even in practical action, to foresee the possibility that something may go wrong with it and with our expectations.

But it is not merely this trivial caution which we must derive from our negative reply to the pragmatic problem (a). Rather, it is of the utmost importance for the understanding of the whole problem, and especially of what I have called the traditional problem, that in spite of the “rationality” of choosing the best tested theory as a basis of action, this choice is not “rational” in the sense that it is based upon good reasons in favour of the expectation that it will in practice be a successful choice: there can be no good reasons in this sense, and this is precisely Hume’s result. On the contrary, even if our physical theories should be true, it is perfectly possible that the world as we know it, with all its pragmatically relevant regularities, may completely disintegrate in the next second. This should be obvious to anybody today; but I said so before Hiroshima: there are infinitely many possible causes of local, partial, or total disaster.

From a pragmatic point of view, however, most of these possibilities are obviously not worth bothering about because we cannot do anything about them: they are beyond the realm of action. (I do not, of course, include atomic war among those disasters which are beyond the realm of human action, although most of us think just in this way since we cannot do more about it than about an act of God.)

All this would hold even if we could be certain that our physical and biological theories were true. But we do not know it. On the contrary, we have very good reason to suspect even the best of them; and this adds, of course, further infinities to the infinite possibilities of catastrophe.

It is this kind of consideration which makes Hume’s and my own negative reply so important. For we can now see very clearly why we must beware lest our theory of knowledge proves too much. More precisely, no theory of knowledge should attempt to explain why we are successful in our attempts to explain things.

Even if we assume that we have been successful – that our physical theories are true – we can learn from our cosmology how infinitely improbable this success is: our theories tell us that the world is almost completely empty, and that empty space is filled with chaotic radiation. And almost all places which are not empty are occupied either by chaotic dust, or by gases, or by very hot stars – all in conditions which seem to make the application of any physical method of acquiring knowledge impossible.

There are many worlds, possible and actual worlds, in which a search for knowledge and for regularities would fail. And even in the world as we actually know it from the sciences, the occurrence of conditions under which life, and a search for knowledge, could arise – and succeed – seems to be almost infinitely improbable. Moreover, it seems that if ever such conditions should appear, they would be bound to disappear again, after a time which, cosmologically speaking, is very short.
It is in this sense that induction is inductively invalid, as I said above, in section 13, subsection I. That is to say, any strong positive reply to Hume’s logical problem (say, the thesis that induction is valid) would be paradoxical. For, on the one hand, if induction is the method of science, then modern cosmology is at least roughly correct (I do not dispute this); and on the other, modern cosmology teaches us that to generalize from observations taken, for the most part, in our incredibly idiosyncratic region of the universe would almost always be quite invalid. Thus if induction is “inductively valid” it will almost always lead to false conclusions; and therefore it is inductively invalid.






Goodman’s grue emeralds and the “new riddle of induction”

The grue emerald problem surely ranks with the Gettier problem as a red herring of the first order, generated by the misguided quest for confirmation, like Hempel’s paradox of the ravens. The raven paradox is that the observation of a non-black non-raven, such as a white shoe or a green lizard, counts as increasing the likelihood that all ravens are black.
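The logical point behind the raven paradox can be sketched in a few lines of Python (the function name is mine, purely for illustration):

```python
def confirms_all_ravens_black(kind: str, colour: str) -> bool:
    """Instance confirmation for the hypothesis 'all ravens are black'."""
    # A black raven is a direct confirming instance.
    direct = kind == "raven" and colour == "black"
    # 'All ravens are black' is logically equivalent to its contrapositive,
    # 'all non-black things are non-ravens', so on an instance-based account
    # of confirmation a non-black non-raven also counts as a confirming instance.
    contrapositive = colour != "black" and kind != "raven"
    return direct or contrapositive

print(confirms_all_ravens_black("raven", "black"))   # a black raven confirms
print(confirms_all_ravens_black("shoe", "white"))    # the paradoxical white shoe
print(confirms_all_ravens_black("raven", "white"))   # a falsifying instance
```

The paradox arises because the two equivalent formulations of the hypothesis share their confirming instances, so the white shoe ends up “supporting” a claim about ravens.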

I raise this because Rosenberg had a couple of pages on this problem immediately before his Popper paragraphs.

Consider the general hypothesis that all emeralds are green (actually I would have thought greenness was part of the definition of an emerald; general hypotheses would be advanced to explain why the optical properties of a certain kind of stone produce a green colour).

Nelson Goodman invented the predicate “grue”, which means “green at a time t before 2100 AD, or blue at a time t after 2100 AD”.

The implication is that after 2100 AD the cloudless sky will be blue (and hence grue), and grue emeralds will be blue as well.

Testing the colour of emeralds, we find that all the green instances support both the theory that emeralds are green and the theory that they are grue (green up to 2100 and blue after that date). Rosenberg wrote:

We could restate the problem as one about falsification too. Since every attempt to falsify “All emeralds are green” has failed, it has also failed to falsify “All emeralds are grue”. Both hypotheses have withstood the same battery of scientific tests.
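Rosenberg’s point that the two hypotheses share the same track record can be illustrated with a small sketch (my own hypothetical encoding of the grue predicate, assuming the 2100 AD cutoff used above):

```python
from datetime import date

CUTOFF = date(2100, 1, 1)  # Goodman's switch-over date as used above

def is_grue(colour: str, observed: date) -> bool:
    # grue: green if observed before the cutoff, blue if observed on or after it
    if observed < CUTOFF:
        return colour == "green"
    return colour == "blue"

# Every emerald observed so far has been green, so the observed instances
# "support" the green hypothesis and the grue hypothesis equally well.
observations = [("green", date(1946, 1, 1)), ("green", date(2024, 6, 1))]
supports_green = all(c == "green" for c, d in observations)
supports_grue = all(is_grue(c, d) for c, d in observations)
```

Before 2100 the two predicates agree on every possible observation, so no test performed before that date can discriminate between them; the hypotheses only diverge for observations we cannot yet make.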

Everyone accepts that this is an absurd outcome but the problem is to explain why. What is wrong with grue? To an outsider it looks like a silly philosophers’ game. Rosenberg wrote:

“For our problem is to show why ‘All emeralds are grue’ is not a well-supported law, even though it has the same number of supporting instances as ‘All emeralds are green’.”

He advises that this remains an unsolved problem in the theory of confirmation. In the years since 1946 various solutions have been offered but none has triumphed.

“But the inquiry has resulted in a far greater understanding of the dimensions of scientific confirmation than the logical positivists or their empiricist predecessors [did he mean successors?] recognized. One thing all philosophers of science agree on is that the new riddle shows how complicated the notion of confirmation turns out to be, even in the simple cases of generalizations about things we can observe” (120)

It is very fortunate that scientists can get along without a working theory of confirmation! Maybe they don’t even need one!!

Well, here is my contribution: “Emeralds are green” is not a law; it is an empirical regularity based on the laws of optics and the crystal structure of emeralds. Similarly, it is not a law that the sun rises in the morning and sets in the evening; that is a regularity that we observe over most of the earth (but not all of it) due to the laws of mechanics and the structure of the solar system.

The laws of scientific interest in relation to the colour of jewels are the laws of optics. Speculation about grueness amounts to a conjecture that the laws of optics (or the structure of emeralds) will change in 2100. A conjecture about something that happens after 2100 cannot be tested before 2100. So if we care about grueness, without having any scientific reason to be interested, we will just have to wait until 2100 and see what happens.

I think Bartley made the point that theories are interesting in relation to the problems that they solve. Speculation about grue emeralds was never related to any problem of scientific interest. So what was the point? What is the point of confirmation? We can live with conjectural knowledge provided it is tested well enough for engineering purposes, and as long as people come up with better theories from time to time we know we are making progress. That looks OK to me.


Alex Rosenberg, Philosophy of Science: a contemporary introduction, 2nd ed. 2005 reprinted in 2010

The first edition was published in 2000 in the Routledge Contemporary Introductions to Philosophy Series. Philosophy of Science: Contemporary Readings (eds Balashov and Rosenberg, 2004) is a companion anthology.

After a chapter on scientific explanation and a chapter on the structure and metaphysics of scientific theories, Popper’s contribution is introduced in a chapter on the epistemology of scientific theorizing under the sub-heading Induction as a pseudo-problem: Popper’s gambit.

Unfortunately, Rosenberg did not explain Popper’s ideas as an attempt to address some major problems in science and in the positivist philosophy of science of the time by reformulating the issue of demarcation, shifting attention from meaning to testability, because (a) Popper thought that the verification principle would never work, and (b) he thought it was more helpful for working scientists to understand the most effective way to use data than to worry about a criterion of meaning. (It seems that Hempel conceded on (a) in the 1950s.)

As to (b), the function of data: Popper argued in favour of deductive testing instead of inductive proof or confirmation because the standard approach was not going to work any better than the verifiability criterion of meaning.

Before making some more comments on the details of Rosenberg’s critique, it may be helpful to check how many of the Popperian themes he identified. He did not engage with the theme of non-justificationism and conjectural knowledge (there is no reference to Bartley, who was helpful on that aspect of Popper’s epistemology and rationality). He cited Objective Knowledge but did not pursue the theme of objective knowledge itself. The error that Popper called “essentialism”, that is, the extended explication of terms, did not arise as an issue in this book. He did not pick up the social turn and Popper’s concern with conventions or “rules of the game”; this aspect of Popperism aroused no comment and there was no citation of Jarvie (2001). There was no reference to Popper’s position on metaphysics and the theory of metaphysical research programs. Popper’s serious work along Darwinian and evolutionary lines was not considered, nor the theory of language which he inherited from Buhler, nor the growth area of evolutionary epistemology (Bartley and Radnitzky, 1987; Hooker and Hahlweg).

It is apparent from Rosenberg’s neglect of the Popperian themes that he was not in a position to present Popper’s contribution in its most robust form, or to explain why scientists in particular have found his work to be interesting and helpful. Popper is depicted as a failed contributor to the Legend, the project of the positivists and the logical empiricists, rather than as the author of an alternative program. Consequently Rosenberg fell into several of the standard errors. It is interesting to note that he listed only three of Popper’s books in the bibliography: The Logic of Scientific Discovery, Conjectures and Refutations and Objective Knowledge.

Rosenberg commenced his critical commentary with reference to Popper’s ideas about deductive testing. He made the strange claim:

“Popper held that as a matter of fact, scientists seek negative evidence against, not positive evidence for, scientific hypotheses.” (121)

However, Popper was well aware of the existence of “confirmation bias”; for example, in the chapter on the sociology of knowledge in The Open Society and its Enemies he wrote:

“Everyone who has an inkling of the history of the natural sciences is aware of the passionate tenacity which characterizes many of its quarrels. No amount of political partiality can influence political theories more strongly than the partiality shown by some natural scientists in favour of their intellectual offspring…”. (OSE chapter 23)

Certainly Popper thought that scientists should seek negative evidence, but in the case of their own ideas there is no guarantee that they will do so; hence the importance of the social nature of science, so that other scientists can compensate for the bias of individuals, provided that they take a critical approach.

“A scientist may offer his theory with the full conviction that it is unassailable. But this will not impress his fellow-scientists and competitors; rather it challenges them; they know that the scientific attitude means criticizing everything, and they are little deterred even by authorities”. (OSE Vol 2, 218)

Popper did not refer to confirmation bias; he referred to the “conventionalist stratagems” which the defenders of Newton’s theory were using to hold Einstein’s new theory at bay.

Popper mounted logical arguments against theories of induction that attempted to find a way to support scientific hypotheses or render them probable. A separate move, not based on logic, was to formulate some proposals for conventions or rules of the game to maximize the critical pressure applied to hypotheses, especially by empirical testing.

Taking up the point that the critical approach is a methodological proposal, Rosenberg wrote that Popper stigmatized Freud and Marx on account of the unscientific nature of their theories (122). Certainly Popper was concerned about the status of Freudian theory, although he did not write much about it. In the case of Marx, far from stigmatizing his work, Popper wrote several hundred pages of analysis to pick out the valuable elements (the rejection of psychologism and the beginning of institutional analysis) from the parts which he considered to be dangerous, such as the elements of essentialism and prophecy.

Rosenberg then turned his attention to the use of Popper’s ideas by economists, although it is very hard to find economists who have a good understanding of Popper’s ideas. The same applies to the contributors to the post-1980 growth industry of the philosophy and methodology of economics. He concluded that

“when it comes to economics, Popper’s claims seem to have been falsified as descriptions and to have been ill-advised as prescriptions. The history of Newtonian mechanics offers the same verdict on Popper’s prescriptions…Popper’s one-size-fits-all recipe, “refute the current theory and conjecture new hypotheses”, does not always provide the right answer” (123)

It is more correct to say “test the current theory, discover new problems and go to work on them with new hypotheses and imaginative criticism (and tests)”.

Popper did not provide a one-size-fits-all recipe; he advocated a critical approach with several forms of criticism – logic, tests, problem-solving capacity, consistency with other theories, consistency with the metaphysical research program – and the kind of criticism that is appropriate depends on the theories under investigation and the aspect of the theory that is under scrutiny at the time.

Strangely, Rosenberg regarded Eddington’s eclipse observations as a confirmation of Einstein’s theory. Of course Popper knew that the result was a triumph for Einstein in comparison with Newtonian mechanics, but that did not mean that Einstein’s theory was proved, confirmed or verified in the final and definitive form demanded by justificationists.

Rosenberg asked:

“What can Popper say about theories that are repeatedly tested, whose predictions are borne out to more and more decimal places, which make novel striking predictions that are in agreement with (we can’t say “confirmed by”) new data?”

He can say that they are very powerful and beautiful theories, quite likely the best that we have at the present time. But that is no guarantee that anomalies will not appear (if they have not done so already), that rivals will not appear in the way that Einstein challenged Newton, and that, not for the first time, a theory considered to be the end of the road will turn out to be another milestone in the progress of science. On a point of detail, what theory at the present time is considered to be confirmed in the way that Rosenberg seems to think Einstein’s theory was confirmed (up to the time of this book, 2010)? What is the current status of Einstein?


Peter Godfrey-Smith, 2003, Theory and Reality: an introduction to the philosophy of science

The author took his first degree in Sydney and is a Distinguished Professor in Philosophy in the Graduate Centre at the City University of New York. His book Darwinian Populations and Natural Selection won the 2010 Lakatos Award. This book is an ambitious introduction to the philosophy of science and a platform for his own “naturalistic” program. It is based on lectures delivered at Stanford University over the last 11 years.

“It also bears the influence of innumerable comments, questions, and papers by students over that time, together with remarks made by colleagues and friends. ” (xi)

He acknowledged a dozen people in the Preface, plus two anonymous referees for the University of Chicago Press, three more people for “detailed and exceptionally helpful comments”, and four fundamental mentors.

It seems that the misreading of Popper that occurs in this book, and errors based on apparent neglect of most of his books, passed without attracting any critical comment through a decade of lectures and the production process.

The bibliography lists  Logik der Forschung, The Logic of Scientific Discovery, Conjectures and Refutations, and Criticism and the Growth of Knowledge (eds Lakatos and Musgrave).

Missing are Objective Knowledge, The Library of Living Philosophers (ed Schilpp) volumes, Unended Quest, The Self and its Brain  and the three volumes of Postscript to the Logic of Scientific Discovery.

The book provides a chronological account of the philosophy of science in the 20th century after a brief survey of the various ways that science can be studied. He drew a distinction between epistemological and metaphysical issues.

“Epistemology is the side of philosophy that is concerned with questions about knowledge, evidence and rationality.  Metaphysics…deals with more general questions about the nature of reality.” (9)

It is noteworthy that the dominant school in the philosophy of science in the 20th century completely banned metaphysics on the grounds that it is meaningless nonsense.

Another division of interests in the last century has been the search for (1) a proper form of the philosophy of science; (2) the scientific way of thinking; (3) a logical theory of science; (4) a methodology (rules and procedures); and, in recent times, (5) a general theory of scientific change. To which one could add (6) the study of the institutions of science.

He devoted two chapters to Logical Positivism and Logical Empiricism with special attention to “The Mother of All Problems” – “a very important and difficult problem, the problem of understanding how observations can confirm a theory.” (39) In his pursuit of confirmation he was not deterred by Popper’s skepticism about the prospects of success, or by the failure of the project to date.

Moving on to his chapter on Popper, he noted that Popper is the only philosopher treated in the book who is regarded as a hero by many scientists, despite criticism from many philosophers over the years. He wrote “I agree with many of these criticisms and don’t see any way for Popper to escape from their force”. (57)

In his biographical introduction to Popper he mentioned his stay in New Zealand but not the two very important books that he wrote while he was there. Godfrey-Smith described falsificationism as the centrepiece of Popper’s scheme along with the demand that scientific theories should take risks and offer the possibility of falsification.

“As I said above, Popper held that Marx’s and Freud’s theories were not scientific in this sense. No matter what happens, Popper thought, a Marxist or Freudian can fit it somehow into his theory.”

On a point of detail, that does not do justice to Popper’s take on Marx. Popper was alarmed by the true believers in Marxism, who aroused his concern about unfalsifiable theories, but he took Marx seriously as a social scientist and devoted hundreds of pages in the second volume of The Open Society and its Enemies to sorting out the valuable parts of Marx from the dangerous parts which resulted in the ruin of Russia.

Another nuance that Godfrey-Smith did not pick up is the distinction between falsifiability and falsification [SE3]. This is apparent from his account of Popperian testing: “If the prediction fails, then we have refuted, or falsified, the theory.” (59) The deductive logic of falsifiability is decisive, given a true observation statement, but we do not have access to certainly true observation statements.

Because G-S thought that Popper was looking for decisive refutations (something that Popper always rejected), he suggested that Popper got into trouble when he addressed the problem of comparative probabilities, where the raw data are scatter relations and not single points. “But in making this move, Popper has badly damaged his original picture of science. This was a picture in which observations, once accepted, have the power to decisively refute theoretical hypotheses [SE 2]. That is a matter of deductive logic, as Popper endlessly stressed. Now Popper is saying that falsification can occur without its being backed up by a deductive logical relation between observation and theory” (67).

What has happened is that the theory has been made problematic, so there may be an anomaly that needs to be taken seriously. That is important to generate new research problems and it does not (and cannot) involve decisive refutation.

Introducing Popper

He described falsificationism as the centrepiece of Popper’s scheme, although Popper himself was not happy with the label “falsificationist” and considered the critical approach to be the central pillar of his work. The six themes provide a more satisfactory account of the interesting and important features of Popper’s work, but placing the focus on falsification is convenient for teachers who want to start with the logical positivists and then treat Popper as an eccentric contributor to the same project (the Legend described by Kitcher). It is convenient but it is also very misleading unless the Popperian “themes” are spelled out to signal the major differences between Popper and the positivists and the logical empiricists. How many of the themes did Godfrey-Smith identify in Popper’s work?

He disagreed with Popper on confirmation but he did not introduce the language of justification and non-justification to explain the roots of Popper’s conjectural theory of knowledge.

He did not engage with the theme of objective knowledge, which was closely linked with Popper’s “biological and evolutionary turn” of the 1960s. [Though the evolutionary approach was clearly present from the beginning, when he wrote “Its aim is not to save the lives of untenable systems but, on the contrary, to select the one which is by comparison the fittest, by exposing them all to the fiercest struggle for survival.” (Popper, 1958, 42).] He compared the two stages of conjecture and refutation with Darwinian variation and selection but did not refer to the four-stage scheme that was a dominant motif in the collection of papers in Objective Knowledge, or to the work on evolutionary epistemology by Popper and others (Bartley and Radnitzky, eds, 1987).

He did not report  Popper’s revival of metaphysics in the form of metaphysical research programs. He gave Lakatos the credit for the idea of research programs. He wrote:

“Lakatos’s main contribution was the idea of a research program… It should be obvious from the previous chapters that this was an idea waiting to be developed.” (102)

Indeed the idea was developed by Popper in the late 1940s and 1950s, and it was written up in the Postscript in the 1950s. The manuscript and galleys were available to Popper’s colleagues, including Lakatos. The signal contribution from Lakatos was to decide, as a matter of fiat, that the metaphysical “hard core” of the program should be protected from criticism, thereby subverting the salient feature of Popper’s critical approach.

Finally he did not engage with Popper’s social/institutional ideas including the “rules of the game” and the way Popper wrote about the social nature of science.

He inverted one of Popper’s points on the importance of the social nature of science. On the function of criticism in the scientific community, Godfrey-Smith wrote “In contrast to Popper, Hull argues that there is no need for individual scientists to take a cautious and sceptical attitude towards their own work: others will do this for them” (165). Where is the contrast, when we find that Popper wrote “A scientist may offer his theory with the full conviction that it is unassailable. But this will not impress his fellow-scientists and competitors; rather it challenges them; they know that the scientific attitude means criticizing everything, and they are little deterred even by authorities”. (OSE Vol 2, 218)?

Moving on to Popper’s skepticism about induction and confirmation:

In the opinion of most philosophers Popper’s attempt to defend this radical claim was not successful, and some of his discussions of this topic are rather misleading to readers. As a result, some of the scientists who regard Popper as a hero do not realize that Popper believed that it is never possible to confirm a theory, not even slightly, no matter how many observations the theory predicts correctly. (59)

The problem for Godfrey-Smith and most philosophers is that the various programs to pin down the confirmation of theories, or to attach numerical probabilities to theories, have yet to deliver, but science proceeds anyway, on broadly Popperian lines. That is apparent from the testimonials of scientific stars like Einstein, Medawar, Monod and Eccles, from the practice of Watson, Crick and Feynman, who were clearly practicing Popperians without citing Popper, and from the effect on normal working scientists like the agronomists who attended Popper’s adult education courses in New Zealand.

Godfrey-Smith finds it odd that an exponent of conjectural knowledge should be in search of true theories, and he uses a Holy Grail analogy to lampoon the idea of conjectural knowledge. He invites us to imagine a community of people in search of the eternally glowing “Holy Grail”. A person in this community may “carry” an unrefuted theory, like a glowing Grail, all his life without knowing if it is the real thing (the real Grail will glow forever).

He compared this with Popper’s conception of the community of scientists who are searching for truth. A theory that we have failed to falsify up till now might, in fact, be true, but even if it is, we will never know this or have reason to increase our confidence that it is true. This is an ingenious exercise in missing the point of Popper’s ideas and also the real-world practice of scientific research.

Godfrey-Smith apparently thinks of truth as a Terminus of inquiry (the eternally glowing Grail). But in the real world all significant theories have both anomalies and rivals, so there is no useful analogy with the Holy Grail.

Scientists are concerned with the growth of knowledge as theories are elaborated or superseded, with the aim of inventing theories which solve deeper problems and stand up to more demanding tests. You could think in terms of ever brighter Grails, perhaps. In this context the correspondence theory of truth functions as a regulative principle, and truth is not regarded as a Terminus at the end of the road. Scientists can function without the need to think that they have found the epistemological equivalent of the Holy Grail, and if they are engaged at the frontier of knowledge they will know quite well that they have not found it.

Objections to Popper on Confirmation

“In the previous section I discussed problems with Popper’s views about falsification. But let us leave those problems aside now, and assume in this section that we can use Popperian falsification as a method for decisively rejecting theories.” (67)

This is a strange assumption to make because Popper explained that we cannot use falsification as a method for decisively rejecting theories, so why assume that we can? The author went on: “If we make this assumption, is Popper’s attempt to describe rational theory choice successful? No, it is not.”

What if we do not make that false assumption and reconsider the question of rational theory choice?

“Here is a simple problem that Popper has a very difficult time with. Suppose we are trying to build a bridge…”. We use a lot of theories, presumably theories that the scientists and engineers regard as well-tested “tried and true” methods. Empiricists say that this is a rational way to go, but why is this so? “Let us focus on Popper, who wants to avoid the need for a theory of confirmation. How does Popper’s philosophy treat the bridge building situation?” (67)

He poses a strange situation where Popper has to choose between a theory that has been tested (and passed) many times and a theory which has just been conjectured and has never been tested. Neither has been falsified. Which to choose? The usual thing would be to pick the well-tested theory. “But what can Popper say about this choice?”

Of course Popper has said that for practical purposes it is rational to use the best tested theory that is available. What is the point that the author is making? He wants to suggest that Popper would have some difficulty in explaining why a well-tested (and unrefuted) theory should be selected ahead of a new theory that has not been tested but has also not been refuted. The answer is that Popper is not betting on the truth of the tried and unrefuted theory; he is betting on the same results, that is to say, on the existence of regularities in the world, and on the stability of a complex structure or instrument that incorporates many theories and technologies that have been tested in practice.

Like the analogy of the Holy Grail, the bridge-building example is not relevant to scientific research because the bridge is an instrument and the theories used in its design could be known to be false, but good enough for the purpose (given the testing and safety factors that are built into bridges and other structures). The analogy is unhelpful because it does not make the distinction between testing theories in the interests of scientific research (the unended quest for truth) and the instrumental task of building structures that are safe and secure for human use. [Cite Gordon on structures and safety factors]

Godfrey-Smith’s limited treatment of Popper’s ideas is underlined in the final section of the book on the key issues for the philosophy of science in the near future. The first of these is the role of frameworks, paradigms and similar constructions. Here some consideration of Popper’s theory of metaphysical research programs is required, but the bibliography does not list the Postscript to The Logic of Scientific Discovery, nor the works of Popperians such as Agassi and Watkins who have contributed to the study of metaphysical ideas in science. The second is the reward system in science, which is a subsection of the social and institutional approach to science that Jarvie identified in Popper’s works, including The Open Society and its Enemies and The Poverty of Historicism, which are not cited in this book.


Wedberg on positivism and logical empiricism

The collection of misreadings of Popper will be supplemented by some examples where Popper is not mentioned at all (where he should be).

The Swede Anders Wedberg wrote three volumes on the history of philosophy, with the third covering the period from Bolzano to Wittgenstein. It was published in 1966 and translated into English in 1984. The book opens with an account of the 150 years up to the mid 20th century to provide the background of ideas for more intensive scrutiny of 20th century developments, focused on Frege, G. E. Moore, Bertrand Russell, Wittgenstein (first phase), Rudolf Carnap and logical empiricism, and the linguistic philosophy of the later Wittgenstein.

Taking up the problem situation after Kant and Hegel, he addressed what he called the empiricist critique of science, which focused on two questions:

(a) What is natural science about, or what should it be construed as talking about? That is close to what Popper called Kant’s problem of demarcation of the field of science.

(b) How, and to what extent, can theories of physics be verified or falsified by experience? That could be regarded as a form of Hume’s problem of induction.

Wedberg noted that the second problem became increasingly pressing at the turn of the century and eventually became a major issue for the logical positivists and logical empiricists. (9) He also suggested that the efforts of the logical empiricists to give more precise and systematic shape to that idea and others closely related to it, using the resources of mathematical logic, “has not led as yet to any even remotely satisfactory result. No real clarity has yet been reached even on the fundamental distinction between what is observable and what is not.” (11)

There is a chapter, “Experience and Language: Rudolf Carnap and Logical Empiricism”.

He listed the figures who gathered around Moritz Schlick: the mathematician Hans Hahn (he should have added Karl Menger, son of Carl the economist), sociologist Otto Neurath, physicist Philipp Frank, historian Victor Kraft, jurist Felix Kaufmann, and many others. “The most systematic philosophical talent of the circle was Rudolf Carnap, who gradually became its indubitable leader.” (198). There was a Berlin group led by Reichenbach and Hempel, the Polish “Lwow-Warsaw group” including Lukasiewicz, Kotarbinski and Tarski, plus Arne Naess in Norway, Jorgen Jorgensen in Denmark, A J Ayer in England, Nagel and Quine in the US. There is no mention of Popper despite his status as the “official opposition” to the Circle and the influence that he exerted on Carnap when the latter decided that testability rather than verification might be used as a criterion of meaning.

From Wittgenstein the Circle members took the idea that metaphysics is nonsense and the verification principle became their great weapon. Their respect for science was extreme, as Wedberg put it:

The word science became a term of approbation surrounded by a nimbus of authority, and also a word that was used in the intellectual game as though it were a well-defined chess-man. Venerable and obscure concepts like ‘the scientific language’, ‘the fundamental scientific theory’, ‘the basic postulates of science’ and so on, are constantly encountered in Carnap. The step from respect for authority to a claim for authority is not long. (200)

After setting the scene, the remainder of the long chapter is devoted to Carnap’s progress through various stages, including his attempt to construct a formalized language of science that might stand in for natural languages to add precision and rigor. Little is said about his theory of inductive confirmation, which might have prompted a reference to the long-running dispute between Carnap and Popper. This chapter is followed by an equally long chapter on formalization, concluding that the use of formalized languages as theoretical models of natural languages has very little application (294). As for the long-running concern with meaning, Wedberg noted a “budding awareness of nuances” around 1937 when Carnap flirted with different notions of cognitive meaningfulness, but “In fact Carnap never abandoned his conviction that one can draw a sharp boundary between the meaningful and the meaningless; as late as 1956 he proposed what he supposed to be such a boundary”. (208)

It is interesting to see how much effort Wedberg devoted to Carnap, compared with no reference at all to Popper. A truly remarkable situation developed where it was apparently regarded as quite normal for philosophers of science to proceed independently of the world of science as it is actually practiced, and apparently to have no interest in the value of their activities for working scientists.

Wedberg noted that logical empiricists rarely studied empirical questions empirically, following what he called the Platonistic view of Wittgenstein that empirical science is the concern of “the sciences”, not of “philosophy”. (202)

At the centre of the logical empiricists’ interest stands the separation of the scientific use of language from other uses, for example the metaphysical use, and the analysis of the languages of science. When reading, for example, Carnap’s discussions of such questions, we often find ourselves in an abstract combinatorial space far above the confused voices of the human crowd. The contact between the abstract arguments and what they are supposed to illuminate is often obscure. (202)

The obscurity of some passages of Carnap is legendary and it is amusing (but sad at the same time) that so much time (and paper) was wasted on a program that never delivered on its initial hopes and expectations. Conceived as an antidote to nonsense, it became a special kind of nonsense of its own.


Jeremy Shearmur on Popper and Hayek

Should be interesting, have not had time to read it yet, just want to share!


Popper’s rules of the game of science

The role of institutions and conventions (rules of the game)

The recognition of the social factor in science is often attributed to Kuhn and the sociologists of knowledge; however, Jarvie in The Republic of Science (2001) identified what he called the “social turn” in Popper’s earliest published work. It can be seen in the passages in Chapter 1 and Chapter 2 of The Logic of Scientific Discovery where Popper explained the function of conventions or rules of the game for scientific practice. Here is the list of rules that he extracted from Popper’s early work, as reported in The Republic of Science, pp 51-63.

First the supreme or meta-rule that governs the other rules.

SR. The other rules of scientific procedure must be designed in such a way that they do not protect any statement in science from falsification (LScD, p. 54).

R1. The game of science is, in principle, without end. He who decides one day that scientific statements do not call for further test, and that they can be regarded as finally verified, retires from the game (LScD, p. 53).

R2. Once a hypothesis has been proposed and tested, and has proved its mettle, it may not be allowed to drop out without ‘good reason’ (LScD, p. 53-54).

R3. We are not to abandon the search for universal laws and for a coherent theoretical system, nor ever give up our attempts to explain causally any kind of event we can describe (LScD, p. 61).

R4. I shall…adopt a rule not to use undefined concepts as if they were implicitly defined (LScD, p. 75).

R5. Only those auxiliary hypotheses are acceptable whose introduction does not diminish the degree of falsifiability or testability of the system in question but, on the contrary, increases it (LScD, p. 83).

R6. We shall forbid surreptitious alterations of usage (LScD, p. 84).

R7. Inter-subjectively testable experiments are either to be accepted, or to be rejected in the light of counter-experiments (LScD, p 84).

R8. The bare appeal to logical derivations to be discovered in future can be disregarded (LScD, p. 84).

R9. After having produced some criticism of a rival theory, we should always make a serious attempt to apply this criticism to our own theory (LScD, p. 85n).

R10. We should not accept stray basic statements – i.e. logically disconnected ones – but…we should accept basic statements in the course of testing theories; or raising searching questions about these theories, to be answered by the acceptance of basic statements (LScD, p. 106).

R11. This makes our methodological rule that those theories should be given preference which can be most severely tested…equivalent to a rule favouring theories with the highest possible empirical content (LScD, p. 121).

R12. I propose that we take the methodological decision never to explain physical effects, i.e. reproducible regularities, as accumulations of accidents (LScD, p. 199).

R13. A rule…which might demand that the agreement between basic statements and the probability estimate should conform to some minimum standard. Thus the rule might draw some arbitrary line and decree that only reasonably representative segments (or reasonably ‘fair samples’) are ‘permitted’, while a-typical or non-representative segments are ‘forbidden’ (LScD, p. 204).

R14. The rule that we should see whether we can simplify or generalize or unify our theories by employing explanatory hypotheses of the type mentioned (that is to say, hypotheses explaining observable effects as summations or integrations of micro events) (LScD, p. 207).

Jarvie noted that these rules are incomplete and that they represent a starting point for an extended process of elaboration, criticism and improvement that has not happened yet. Occasionally they are subjected to criticism, but not in a way that acknowledges “the innovative brilliance of the original idea” (Jarvie, 2001, 63). He suggested that both Popper and his critics suffered from “a myopia about the institutional turn”.

For English speakers, before Logik der Forschung was translated in 1959, the institutional approach could be found (without elaboration) in the chapter on the sociology of knowledge in The Open Society and its Enemies (1945) and in the final sections of The Poverty of Historicism on situational logic and the institutional theory of progress.


Kuhn’s misreading of Popper

Kuhn wrote an essay for the Schilpp volume on Popper which appeared in 1974, and it was published in Criticism and the Growth of Knowledge (eds Lakatos and Musgrave) which appeared earlier (in 1970) due to delays in production of the Schilpp volume. The Lakatos and Musgrave collection was the fourth in a series of books reporting the proceedings of a major conference at Bedford College in London in 1965. Musgrave made some amusing comments on the conference (and related matters); see link at the bottom of the post.

Kuhn’s paper at the conference is the foundation of the book. Feyerabend and Lakatos were supposed to deliver papers in the same session but their papers were not available in time, and so John Watkins wrote a paper at short notice. Popper, Pearce Williams and Stephen Toulmin participated in the discussion and wrote short papers for the published proceedings. Others contributed later – Margaret Masterman in 1966, followed by Lakatos and Feyerabend, and Kuhn’s reply to the commentators. The North Holland Publishing Company published three volumes of papers but was not prepared to wait for the long-delayed fourth volume, and it was published by Cambridge University Press (it became a best seller).

Something that is rarely mentioned is that the papers by Watkins and Toulmin which follow the lead paper by Kuhn comprehensively demolish Kuhn’s supposedly radical and exciting ideas, although Toulmin noted that by that time, almost a decade on from the publication of The Structure of Scientific Revolutions, Kuhn had walked away from most of the ideas that accounted for the excitement generated by the book when it first appeared.


Kuhn advanced four criticisms or complaints about some of Popper’s “locutions”. The use of the term “locutions” is a warning sign that we may be subjected to the kind of verbal sparring that became widespread under the influence of Wittgenstein in his second phase. It is often a signal that the writer is about to engage in earnest and scholarly argumentation (or explication) that elaborately misses the point of the work that is under examination.

The four suspect locutions are treated in the four sections of the paper.

Locution 1. A scientist, whether theorist or experimenter, puts forward statements, or systems of statements, and tests them step by step. In the field of the empirical sciences, more especially, he constructs hypotheses, or systems of theories, and tests them against experience by observation and experiment. From the opening chapter of The Logic of Scientific Discovery.

Kuhn found this odd because, for him, there is one sort of hypothesis that scientists do not repeatedly test: the way that individual research connects to the corpus of accepted knowledge. So “the scientist must premise current theory as the rules of his game” (p 4). It seems that Kuhn simply cannot understand the critical spirit whereby any aspect of the scientific enterprise might in principle be subjected to criticism and revision.

“To turn Sir Karl’s view on its head, it is precisely the abandonment of critical discourse that marks the transition to a science.” (p 6)

The critical attitude does not mean that every scientist should be constantly engaged in critical scrutiny of everything; it just means that they should be prepared to follow problems where they lead, which may be across disciplinary boundaries, into philosophy or even to the first principles of the discipline. Sensitivity to problems is the key.

Locution 2. The essays and lectures of which this book is composed, are variations upon one very simple theme – the thesis that we can learn from our mistakes. From the Preface of Conjectures and Refutations. With emphasis on “we can learn from our mistakes”.

As Watkins pointed out in his reply to Kuhn, addressing his comments to Popperian “locutions” resulted in a good deal of distortion of the meaning of Popper’s original text. This especially applies to Kuhn’s perception of mistakes. Possibly because Kuhn did not take on board non-justificationism and the conjectural theory of knowledge, he regards “mistakes” as Bad Things that reflect poorly on the people who make them. In the words of Watkins, “He seems unable to allow that Popper was using the word ‘mistake’ in a cheerfully guilt-free sense with no suggestion of personal failure, rule-transgression etc.” (p. 26, footnote 3)

For Kuhn, “A mistake is made at a specifiable time and place by a particular individual…The individual can learn from his mistakes only because the group whose practices embody those rules can isolate the individual’s failure in applying them”. (11)

He did not grasp the idea that the mistakes that concerned Popper were in theories (scientific theories that are false, or theories that are unhelpful, like theories of sovereignty in politics) or in proposals and practices (where they are unhelpful or confusing). Individual culpability and weakness have nothing to do with it.

Locution 3. Sir Karl Popper describes as ‘falsification’ or ‘refutation’ what happens when a theory fails in an attempted application, and these are the first of a series of related locutions that again strike me as extremely odd. Both ‘falsification’ and ‘refutation’ are antonyms of proof. They are drawn principally from logic and from formal mathematics…invoking these terms implies the ability to compel assent from any member of the relevant professional community. (Kuhn, 1970, 13).

Kuhn went on to explain that Popper was well aware of the problematic nature of adverse evidence and the capacity of scientists to protect their theory by means of ad hoc hypotheses and the like. He cited Popper’s statement in LScD (p. 50) that no conclusive disproof of a theory can ever be produced.

Having barred conclusive disproof, he has provided no substitute for it, and the relation he does employ remains that of logical falsification. Though he is not a naïve falsificationist, Sir Karl may, I suggest, legitimately be treated as one. (ibid. 14).

Kuhn did not understand that Popperian criticism is not supposed to instantly dispatch defective theories; it is to identify problems that call for more work. It is all about maintaining standards of criticism. That has a “negative” function to eliminate false theories that have been effectively criticised and superseded by better theories. Criticism also has a positive or creative function to open up new problems that are the growing points of science. As noted above regarding the first locution, sensitivity to problems and “the gift of wonder” are the beginning of imaginative thinking.

Locution 4. When he rejects ‘the psychology of knowledge’, Sir Karl’s explicit concern is only to deny the methodological relevance of an individual’s source of inspiration or an individual’s sense of certainty (ibid p 22). Kuhn went on to draw a distinction between individual psychology and common elements induced by nurture and training in the psychological make-up of the licensed membership of a scientific group. “Though [Sir Karl] insists that he is writing about the logic of knowledge, an essential role in his methodology is played by passages which I can only read as attempts to inculcate moral imperatives in the membership of the scientific group…We shall not, I suggest, understand the success of science without understanding the full force of rhetorically induced and professionally shared imperatives like these [referring to a statement by Popper which Kuhn placed in italics: "If we have made this our task (understanding the world with the help of laws and explanatory theories), then there is no more rational procedure than the method...of conjecture and refutation".]…Institutionalised and articulated further, such maxims and values may explain the outcome of choices that could not have been dictated by logic and experiment alone. The fact that passages such as these occupy a prominent place in Sir Karl’s writing is therefore further evidence of the resemblance of our views. That he does not, I think, ever see them for the social-psychological imperatives that they are is further evidence of the gestalt switch that still divides us deeply.” (22)

This shows that Kuhn, like most of us before Jarvie’s book on the republic of science, did not pick up Popper’s “social turn” to pay attention to the rules of the game of science (although that should have been clear enough in 1935 and 1959), and it was even clearer in chapter 23 of The Open Society and the section on the institutional analysis of scientific progress at the end of The Poverty of Historicism.

Popper over-reacted to the call for a psychological or sociological approach with a strong negative response when he could have replied that he got there first, in a more helpful manner, by drawing attention to the social and institutional aspect of science and the importance of the rules, norms and conventions of scientific and scholarly practice. He could have added that his theory of metaphysical research programs was also a better formulation of “paradigm theory” because it called for investigation of assumptions that can cause trouble but are kept out of sight because they are supposed to be either sacred (shared imperatives) or unspeakable nonsense (metaphysics in the eyes of the positivists). In fact the elements of the MRP are essential ingredients of the “social” or “psychological” situation and there are times when they need to be subjected to critical analysis.

Musgrave interview (re Bedford College conference).

A line of thought to be elaborated in another post.

The error that Lakatos and Kuhn shared in their efforts to criticize Popper was the failure of their SITUATIONAL ANALYSIS to take account of the problem situation in the philosophy of science in the 1920s and 1930s, to understand how Popper formulated the problems and how he responded to them. Incidentally, the paradigm of good SA is Stanley Wong’s Chapter 2 (see previous post). Maybe Joe Agassi’s historiography is the model; it came to Wong by way of Larry Boland.


The New Popper Legend, updated

In the Schilpp volume Popper described the “Popper Legend”: that he was a positivist with a different take on the criterion of meaning. In his reply to critics he listed numerous philosophers of the highest international esteem who contributed to the Legend and propagated it. Appendix II of the Popular Popper reading guides is about the mixed reception of Popper’s first book in 1935.

The legend did a great deal of damage to Popper’s standing because only very unusual students would have bothered to read the primary sources closely enough to pick up the mistakes that they were being taught in classes and in their texts. In fact hardly any students would have read the primary source at all, let alone read it with insight, because the English translation only appeared in 1959, it was a very expensive book, and it is not an easy read.

That legend has been overlaid by another one, that Popper was effectively criticized by Kuhn and Lakatos (and some others such as Feyerabend), and Lakatos tried to retrieve whatever could be saved from the wreckage of falsificationism while others just went off in different directions (Critical Realism, Constructivism, Paradigm Theory, the New Pragmatism and so on).

According to the New Popper Legend, he was a transitional figure between the positivists/logical empiricists and the people in the New Directions who took over the running after the 1960s, following the appearance of Kuhn and then Lakatos, Feyerabend and others. The idea that Lakatos was the potential saviour of the Popper program has probably faded away, since Lakatos himself was not around to participate in the academic politics required to keep his ideas afloat.

One of the most interesting New Directions is driven by Philip Kitcher, who has also used the label The Legend to describe the program of the logical empiricists, which he thinks ran out of legs some time ago. That created the need for something better, which he currently sees as a revival of the pragmatism of Peirce and Dewey to renew and revitalize American philosophy.

Lakatos was the major architect of the New Popper Legend and it sometimes seems that his aim was to effect a kind of Hegelian synthesis of Popper and Kuhn. Gillies provided an important record of the way Lakatos developed his critique of Popper.

Lakatos developed his criticisms of Popper and his new non-Popperian account of scientific method mainly in the years 1968 and 1969, and it was during these years that the great quarrel broke out between Lakatos and Popper. It should be added that these years were Popper’s last as Professor at LSE. Popper retired in 1969 at the age of 67.

The main public forum of the quarrel was the Popper seminar, where Lakatos presented his new ideas on some occasions and Popper replied. Lakatos’ style was a harsh attack. Popper too sometimes lost his temper, but, for the most part, his tone was more in sorrow than in anger. Characteristically Popper would claim that Lakatos had failed to understand his (Popper’s) position and had distorted it by selective quotation and failing to mention some passages. I remember Popper once saying that until recently he had thought that Lakatos was one of the people who best understood his (Popper’s) position, and that it was a great disappointment to learn that this was not the case. Of course, Popper also produced answers to some of Lakatos’ objections. For example, I remember Popper saying that according to Lakatos Newton’s theory was not falsified, but that if Mars started moving in a square instead of an ellipse, everyone would take this as having refuted Newton’s theory. As a graduate student sitting at a safe distance below the end of the long table, I thoroughly enjoyed these heated and emotional exchanges, and looked forward to attending when one occurred. In retrospect, however, I think my attitude was a bit frivolous since the quarrel between Lakatos and Popper undoubtedly had a very bad effect on the academic standing of the Popperian approach to philosophy. This would anyway decline sharply from about 1975 on, and, although there were other reasons for this decline, the quarrels within the school certainly accelerated the decline.

The Lakatos Attack

The attack had several levels: the level that should have been most important was the attack on the logic of “falsificationism” which Lakatos claimed was fatally flawed. In addition there was a rhetorical attack on several fronts: the creation of a naïve Popperian falsificationist who never existed; a string of long papers with lengthy accounts of the “history” and evolution of falsificationism through various stages (building the impression that Lakatos was advancing the history with a significant contribution of his own); extensive references to fine points of detail in the history of chemistry and physics (giving the impression that Lakatos was a polymathic and authoritative scholar, hence who are we to challenge him?); and a smokescreen of impressive new terms, above all the “methodology of scientific research programs” (with associated neologisms) to supersede the “static” Popperian approach. That was done without reference to Popper’s theory of metaphysical research programs, which looks for all the world like the original source of the “research program” approach, with the advantage that Popper never gave up the critical approach when he turned to study the history and evolution of research programs.

The “Bedford College” paper

“Falsification and the Methodology of Scientific Research Programs” (1970), published in the fourth volume of the proceedings of the 1965 Bedford College Conference, is a good representative of a string of papers that make much the same points. The starting point is the failure of justificationism in the orthodox schools of epistemology (essentially Rationalism and Empiricism), thus giving rise to the serious problem of explaining what counts as a scientific advance in the absence of the “gold standard” of Justified True Belief.

On page 93 Lakatos referred to the dispute between Popper and Kuhn on the rational reconstruction of historical episodes in science. He wrote that he was about to strengthen Kuhn’s critique of naïve falsificationism and, as a rejoinder, to build a stronger position to rationalise scientific revolutions.

(a) Dogmatic Naïve Falsificationism. First Lakatos criticized DNF. According to the logic of dogmatic falsificationism, science grows by repeated overthrow of theories with the help of hard facts. (97, his italics). Was Popper ever a DNF? Lakatos invented a proto-Popper (Popper subscript zero) who was or might have been a dogmatic falsificationist.

(b) Naïve Methodological Falsificationism. This is an advance on DNF, but still the emphasis is that the practitioner has to specify, in advance, the experimental evidence that means “the theory has to be given up” (p. 112, my italics).

“To sum up: the methodological falsificationist offers an interesting solution to the problem of combining hard-hitting criticism with fallibilism. Not only does he offer a philosophical basis for falsification after fallibilism has pulled the rug from under the feet of the dogmatic falsificationist, but he also widens the range of such criticism very considerably…he saves the attractive code of honour of the dogmatic falsificationist: that scientific honesty consists in specifying, in advance, an experiment such that, if the result contradicts the theory, the theory has to be given up” (112).

He went on to identify some risks associated with this position: one is the role of decisions (which could lead us astray). He subjected this to parody: “One has to appreciate the dare-devil attitude of our methodological falsificationist. He feels himself to be a hero who, faced with two catastrophic alternatives, dared to reflect coolly on their relative merits and choose the lesser evil” (112).

This is a play on the point that Popper made (in relation to the empirical base) that scientists have to make decisions about the number of times they repeat tests and decisions about which of a number of hypotheses they will subject to tests (given finite time and resources). But those decisions do not have to be final: they can resume testing a particular hypothesis and they can widen the scope of hypotheses that they test.

The point that Lakatos wanted to extract from that line of argument was simply that the shifting adherence of scientists to theories cannot be attributed to the overwhelming importance of a particular piece of evidence or a particular critical experiment. He suggested that the history of science forces us to face two alternatives.

One alternative is to abandon efforts to give a rational explanation of the success of science and the other is to come up with a sophisticated version of falsificationism which does not suffer from the deficiencies of the naïve versions that he sketched. “This is Popper’s way and the one I intend to follow” (116).

(c) Sophisticated versus naïve methodological falsificationism. Progressive and degenerating problem shifts.

“Contrary to naïve falsificationism, no experiment, experimental report, observation statement or well-corroborated low-level falsifying hypothesis alone can lead to falsification. There is no falsification before the emergence of a better theory” (119, his italics).

A couple of points need to be made here. First, the idea of falsification seems to be conflated with the (decisive?) rejection of the theory. But Popper insisted (LSD p. 50) that there cannot be decisive falsification (despite the logical decisiveness of potential falsifiability). Consequently it is more helpful to consider theories that have (apparently) failed tests as rendered problematic, not subject to immediate dismissal. Second, one of the rules that Popper suggested for handling problematic theories was along the lines that a hypothesis should not be cast aside unless there is another one available that has survived the tests that caused problems for its rival.

That is why it is not a valid criticism of Popper to point out that it took centuries for the new cosmology of Copernicus and others to replace Ptolemy, because both had to live with anomalies for a long time. Similarly Newtonian mechanics survived as the major research program despite well-known anomalies (what was the alternative, before Einstein?), and Einstein superseded Newton (over a period of many years) despite well-known anomalies and problems with Einstein’s theory, and so on.

Having made some invalid criticisms of Popper’s views on the logic of testing, and ignoring the function of Popper’s proposed “rules of the game”, Lakatos then proceeded to his “extension and improvement” of the program with a demand for novel facts.

“Thus the crucial element in [Lakatosian] falsification is whether the new theory offers any novel, excess information compared with its predecessors and whether some of this excess information is corroborated.” (120).

One of the results of this focus on novelty was a long search for “novel facts” which in retrospect resembles the long “protocol sentences” diversion that occupied the Logical Positivists in Vienna for some years, with no useful outcome.

The final section of the paper elaborated the methodology of scientific research programs. Forty years on, has this been helpful for working scientists? In economics Lakatosian adepts followed Latsis and Blaug, but there is no record of progress that I can see, based on the proceedings of a major conference in the Greek islands devoted to exploring the applications of MSRP in economics (De Marchi and Blaug, 1991, Appraising Economic Theories: Studies in the Methodology of Research Programs).


Good piece on critical thinking, very CR!

A nice piece! The seven habits:

1. Judge judiciously

2. Question the questionable

3. Chase challenges

4. Ascertain alternatives

6. Take various viewpoints

7. Sideline the self

