Bayes' theorem has become a rather mainstream tool and its efficacy seems to be generally accepted; however, its epistemological basis may require careful reinterpretation. This article is concerned with epistemological issues rather than with technical depth in mathematics and probability.

Further questions to consider in this paper are

1. What sorts of problems have Bayesian techniques typically been applied to?

2. How effective have they been in solving these problems?

3. What sorts of problems are not appropriate for Bayesian methodologies?

4. How should critical rationalism be interpreted in the light of Bayesian methodology?

5. How should Bayesian methodology be interpreted in the light of critical rationalism?

The essence of the Bayesian approach is the use of mathematical rules to indicate how one should change one's existing beliefs in the light of new evidence. By way of introduction, to highlight some core conceptual and logical issues, it may be instructive to look at a very simplistic example of the application of Bayesian inference, modified from an article in The Economist (30/09/2000).

Imagine an infant who observes its first sunset; this, we presume, modifies its prior (a priori, but not a priori valid) expectation of, say, a pattern of even light. We should also recognize that infants have complex neurological systems and are obviously not born as blank slates; even their DNA is a collection of expectations. Our model Bayesian infant conjectures that the sun may or may not rise again and assigns equal probabilities to the sun rising or failing to rise. Observations are represented by putting a metaphorical white marble or black marble into a bag. Each time the sun rises, another white marble is put into the bag. Thus, day by day, the probability that a marble plucked randomly from the bag will be white rises, and this is interpreted, through Bayes' theorem, as an increasing degree of belief that the sun will continue to rise, until the probability becomes so great that it is taken as a near certainty.
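The marble model can be made concrete. The following sketch is my own illustration, not from The Economist article: the bag behaves as Laplace's rule of succession, starting with one white and one black marble for the 50/50 prior and adding a white marble for each sunrise.

```python
from fractions import Fraction

def sunrise_belief(sunrises):
    """Probability of drawing a white marble after `sunrises` white
    marbles have been added to a bag that began with one white and
    one black marble (the infant's 50/50 prior). This is Laplace's
    rule of succession: (n + 1) / (n + 2)."""
    return Fraction(sunrises + 1, sunrises + 2)

# Day by day the degree of belief creeps toward, but never reaches, 1.
for days in (0, 1, 10, 100, 1000):
    print(days, float(sunrise_belief(days)))
```

Note that the probability approaches 1 but never reaches it, which is exactly why the "near certainty" at the end of the story remains an interpretation rather than a logical conclusion.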

“Bayesian inference” therefore refers to the use of the prior probability of a hypothesis to determine the likelihood of that hypothesis given some observed evidence. That is, the likelihood that a particular hypothesis is true given some observed evidence (the so-called posterior probability of the hypothesis) comes from a combination of the inherent likelihood (or prior probability) of the hypothesis and the compatibility of the observed evidence with the hypothesis (the likelihood of the evidence, in a technical sense). Bayesian inference is opposed to frequentist inference, which makes use only of the likelihood of the evidence (in the technical sense), discounting the prior probability of the hypothesis.

What has happened in the sample infant calculation? Prior probabilities (guesses) have been evaluated and updated. Is the connection between truth and probable or possible truth any firmer than that between truth and guesses? Is Bayes rule an example of rational decision making or a refined game of chance?

We know that humans and all other multicellular animals and plants, algae, protozoa, bacteria and fungi are perpetual risk takers. Most of the time life forms do not display conscious intent, but their metabolic and, if present, neurological pathways have a priori expectations of the world. When herds of antelope congregate and surge forward on seasonal migrations they are engaging in gambles of uncertain outcome, with potential success for a proportion of them if the historical record is a guide. Much, if not most, of our conscious activity as humans is conjectural: there must be some mechanism for producing guesses, and the guesses must often succeed, otherwise we too would be extinct as a species.

From a critical rationalist perspective the theory that the sun will rise, even if generated by a Bayesian form of inference, is still a conjecture. It is always possible that some cataclysm will cause the sun not to rise. The theory that the sun will rise may be a better theory than the theory that it will not, but has the Bayesian calculation supported this conclusion at a logical level?

An inductive argument would have the form:

The sun has been seen to come up each time I observed it

Therefore the sun always comes up

In this form the statement is but a guess, no better or worse than other types of conjecture. As Mark Notturno says, Popper calls a guess a ‘guess’, but inductivists prefer to call a guess ‘the conclusion of an inductive argument’. Universal propositions do not follow logically from a limited set of existential statements. Induction commits the fallacy of affirming the consequent: the conclusion of an inductive argument may be false even if all its premises are true. Induction is a perception of relations, and at best it represents the guessing and probing of a mind aiming at understanding.

Is there any more support for the inductive supposition if we state it in terms of probabilities?

The sun has been seen to come up each time I observed it

Therefore the sun will probably always come up

Note that samples of past occurrences of the sun rising are observations, not inductions. As David Deutsch pointed out in The Beginning of Infinity, if one were to sample calendars throughout the twentieth century, each of the years would have started with 19, and one would have predicted that following years would also start with 19. How informative is this? What of the 21st century?

To reiterate, when claims are made for amplifying basic statements into universal statements, this is equivalent to making a conjecture or a guess, no different in principle from being inspired by a dream, a song or a serendipitous flash of inspiration. Hume's problem doesn't apply to guesses.

The logical issue is that guesswork and conjecture sometimes resemble something called induction, but there is really no such thing (Popper is not here talking about mathematical “induction”, for which the proof is deductive anyway). You cannot deduce (or induce) from basic observation statements factual information which goes beyond the factual information contained in the basic observation statements themselves. There is, in principle, nothing wrong with guessing, but resistance should be offered against placing a logical scaffold around generalising from individual or sampled observations. Popper would say that all perception is modified anticipation. The observer is not a blank slate.

The observer expects to see something.

Something is not observed or is observed differently.

Therefore the initial expectation has been wrong.

Thus the brain and visual system reformulates a new expectation (or hypothesis).

The whole debate around the word “induction” is often at cross purposes in the literature because we use it in different ways. Did Popper define induction? In “Realism and the Aim of Science” (1983), p. 147, he stated: “By induction I mean an argument which, given some empirical (singular or particular) premises, leads to a universal conclusion, a universal theory, either with logical certainty, or with probability (in the sense that this term is used in the calculus of probability).” This is what he rejected.

I must add that I am not stating that inference from Bayes' theorem is inductive, although it is frequently held to be so in the literature; hence my effort above to clarify some issues around induction. Bayesianism is, according to Gillies (Philosophy of Science in the Twentieth Century, 1993), indeed a theory of justification, not of discovery. Despite the English title of Logik der Forschung (1935) being The Logic of Scientific Discovery (1959), Popper's view is that there is no such thing as a logic of scientific discovery but only a logic of testing. Discovery and justification are separate issues. Bayesians seek to justify scientific generalisations or predictions by showing that, although they are not certain, they can nevertheless be shown to be probable, given the evidence used to support them.

At issue is whether Popper's conjecture and refutation can be accommodated in Bayesian methodology; i.e., can there be a logical basis for the application of Bayes' theorem in eliminating false conjectures? From a critical rationalist perspective, failing to be falsified cannot produce a positive logical reason for accepting a conjecture, although it is valid to compare conjectures on factors other than falsifiability, e.g. depth, comprehensiveness, simplicity, unifying power, consistency with background knowledge, relevance to multiple problem situations, or being part of a rigorous research program, without drifting down the slippery slope of induction.

The standard statement of Bayes' theorem is:

posterior probability of the hypothesis given the evidence = (likelihood of observing the evidence given the hypothesis × prior probability of the hypothesis before observing the evidence) / probability of the evidence

P(H|E) = P(E|H) P(H) / P(E)

Where:

P(H|E) = the probability of a hypothesis, H, given an item of evidence, E (the “posterior probability”)

P(E|H) = the probability of the evidence given the hypothesis

P(H) = the probability of the hypothesis before considering the item of evidence (the “prior probability”)

P(E) = the probability of the evidence arising (without direct reference to the hypothesis)
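As a worked check on the formula, here is a minimal numeric sketch (the numbers are hypothetical, chosen only for illustration), with P(E) expanded by the law of total probability:

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem, P(H|E) = P(E|H) P(H) / P(E), with P(E)
    expanded as P(E|H)P(H) + P(E|not H)P(not H)."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# A hypothesis with prior 0.01; the evidence is expected with
# probability 0.9 if the hypothesis is true and 0.05 if it is not.
print(posterior(0.01, 0.9, 0.05))  # roughly 0.154
```

Notice that even strongly confirming evidence leaves a low-prior hypothesis at a modest posterior, which is the whole work the prior does in the calculation.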

Bayes' theorem in itself is derivable from a simple application of probability theory; it is a non-controversial mathematical theorem. Bayesianism is more controversial: it makes questionable claims about rational belief, evidence and confirmation. Bayesianism, as David Deutsch says, assumes that minds work by assigning probabilities to their ideas and modifying those probabilities in the light of experience as a way of choosing how to act, i.e. values that are rewarded by experience are reinforced and come to dominate behaviour, while those that are punished by experience are extinguished. It may be appropriate to use Bayes' theorem in computer programming, but the epistemological extension reeks of behaviourism and leads the modelling of artificial general intelligence astray.

Before looking closer at the basis of Bayesian inference, a look at how it has been used may give greater context.

Sharon Bertsch McGrayne in The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code explores the theorem and illustrates situations where it has had success. A list of her examples, and others, follows.

Alan Turing and others used a modified Bayes' rule to crack the Enigma code and to detect U-boats during the Second World War. McGrayne states that Bayes' rule was good for hedging bets when there were prior guesses and decisions to be made with a minimum of time or cost.

Bayesian techniques have been used to determine the most probable causes of diseases like lung cancer when prior data is fed in.

They have been used to determine the likelihood of a nuclear accident.

They have been used to settle who wrote The Federalist Papers, a minor puzzle of American history, from vast amounts of written archives.

They have been used to predict the results of elections from polling data. An example of this is the spectacular success of Nate Silver in using Bayesian techniques to predict the results of the November 2012 American presidential election. Silver's approach involved taking public poll data from several sources, weighting it by factors such as recency and sampling quality, making statistical adjustments, mixing in extra data, and then using the result to simulate 100,000 elections in order to estimate each candidate's probability of victory.
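A drastically simplified sketch of the simulation step conveys the idea (my own toy version; Silver's actual model is far richer, with state-level correlations and much else): draw a vote share from an already-aggregated poll distribution many times and count wins.

```python
import random

def win_probability(poll_mean, poll_sd, n_sims=100_000, seed=1):
    """Monte Carlo version of the final step: sample a national vote
    share from the (already weighted and adjusted) poll distribution
    and report the fraction of simulated elections the candidate wins."""
    rng = random.Random(seed)
    wins = sum(rng.gauss(poll_mean, poll_sd) > 0.5 for _ in range(n_sims))
    return wins / n_sims

# Hypothetical inputs: polls average 52% with 2 points of uncertainty.
print(win_probability(0.52, 0.02))  # around 0.84
```

The output is a probability of victory, not a prediction that the candidate will win, which is what made the approach both useful and widely misunderstood.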

They were used to help narrow the search for the H-bomb lost in the ocean off the coast of Spain.

They have been used in wildlife population studies.

They have been used in military tracking, weapons systems, anti-terrorism.

They are used in spam filters, handwriting recognition, and the analysis of neural networks. Bayesian spam filtering is a very powerful technique for dealing with spam: it can tailor itself to the email needs of individual users and gives false-positive rates low enough to be generally acceptable.
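A minimal naive-Bayes filter conveys the idea (a toy sketch with made-up messages, not any production filter): per-word likelihoods under “spam” and “ham” are combined with the prior spam rate via Bayes' theorem, with Laplace smoothing so unseen words do not zero out the product.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam). Returns per-class word counts
    and per-class message totals."""
    counts = {True: Counter(), False: Counter()}
    totals = Counter()
    for text, is_spam in messages:
        counts[is_spam].update(text.lower().split())
        totals[is_spam] += 1
    return counts, totals

def spam_probability(text, counts, totals):
    """Naive Bayes with Laplace smoothing: combine per-word likelihood
    ratios with the prior spam odds to get P(spam | words)."""
    vocab = set(counts[True]) | set(counts[False])
    log_odds = math.log(totals[True] / totals[False])
    for word in text.lower().split():
        p_w_spam = (counts[True][word] + 1) / (sum(counts[True].values()) + len(vocab))
        p_w_ham = (counts[False][word] + 1) / (sum(counts[False].values()) + len(vocab))
        log_odds += math.log(p_w_spam / p_w_ham)
    return 1 / (1 + math.exp(-log_odds))

msgs = [("win money now", True), ("cheap money fast", True),
        ("meeting agenda attached", False), ("lunch tomorrow", False)]
c, t = train(msgs)
print(spam_probability("win money", c, t))         # high
print(spam_probability("meeting tomorrow", c, t))  # low
```

The filter “tailors itself” simply because the counts come from the individual user's own mail; no claim about the truth of any hypothesis is involved, only a score for sorting.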

They have been used in analysing mammogram statistics and in breast cancer prediction.

The effectiveness and economy of such Bayesian techniques have been enthusiastically documented in numerous publications.

Bayesian methods do seem to be used for helping to solve problems that may involve large amounts of data but are perhaps narrowly focused in terms of intended outcomes and prior assumptions, which lends weight to the analogy with “normal” science.

Subjective Bayesian methodology seems to produce ampliative conclusions (posteriors). If these are seen as conjectures with no logical backing (induction), this would imply that the apparent reason Bayesian inference works is not the real reason it works. Bayesian inference in that case is Humean irrationalism: it would rest on optimism rather than logic. Karl Popper, in Objective Knowledge: An Evolutionary Approach (1972), page 141, stated, and I quote at length:

“Nowhere has the subjectivist epistemology a stronger hold than in the field of the calculus of probability. This calculus is a generalization of Boolean algebra (and thus of the logic of propositions). It is still widely interpreted in a subjective sense, as a calculus of ignorance, or of uncertain subjective knowledge; but this amounts to interpreting Boolean algebra, including the calculus of propositions, as a calculus of certain knowledge—of certain knowledge in the subjective sense. This is a consequence which few Bayesians (as the adherents of the subjective interpretation of the probability calculus now call themselves) will cherish.

This subjective interpretation of the probability calculus I have combated for thirty-three years. Fundamentally, it springs from the same epistemic philosophy which attributes to the statement ‘I know that snow is white’ a greater epistemic dignity than to the statement ‘snow is white’.

I do not see any reason why we should not attribute still greater epistemic dignity to the statement ‘In the light of all the evidence available to me I believe that it is rational to believe that snow is white.’ The same could be done, of course, with probability statements.”

The critical rationalist philosopher David Miller made the point that Bayesians are not supposed to be inductivists. A true Bayesian, he continued, would not be interested in whether a theory is supported or not, as that would be inductive. Bayesians do not have opinions or beliefs, only degrees of belief.

Stephen Senn states that to have a prior distribution over the probability of success is to have a prior distribution over the probability of any sequence of successes and failures. One simply notes which sequences to strike out as a result of any experience gained and renormalises the probabilities accordingly. No induction takes place. Instead, probabilities are deduced coherently from the earlier probability statements regarding sequences. He notes that, contrary to what some might suppose, Bruno de Finetti, one of the developers of the subjective interpretation of Bayes' theorem, and Popper do not disagree regarding induction. They both think that induction in the naive Bayesian sense is a fallacy. They disagree regarding the interpretation of probability. Even though the inference is coming from evidence, it is still acceptable to Popper because it is possible that the evidence could have shown the theory to be false. Senn continues: this leaves applied Bayesian analysis as currently practised as one among a number of rough and ready tools that we have for looking at data. We need many such tools because we need mental conflict as much as mental coherence to spur us to creative thinking. When different systems give different answers it is a sign that we need to dig deeper (Senn 2003).
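Senn's description can be illustrated with a toy sketch (a uniform prior over length-3 success/failure sequences, my own choice for simplicity; de Finetti would derive the prior from a distribution over the success rate): updating is just striking out sequences inconsistent with the data and renormalising, which is deduction from the prior, not induction.

```python
from itertools import product
from fractions import Fraction

# A prior over all length-3 success/failure sequences: here, uniform.
prior = {seq: Fraction(1, 8) for seq in product("SF", repeat=3)}

def observe(dist, position, outcome):
    """Strike out every sequence inconsistent with the observation,
    then renormalise what remains."""
    kept = {s: p for s, p in dist.items() if s[position] == outcome}
    total = sum(kept.values())
    return {s: p / total for s, p in kept.items()}

after = observe(prior, 0, "S")  # the first trial was a success
# Probability the next trial succeeds, deduced from what remains:
p_next = sum(p for s, p in after.items() if s[1] == "S")
print(p_next)  # 1/2: under this prior, past success teaches nothing new
```

Everything "learned" was already encoded in the prior over sequences; a different prior (e.g. one favouring runs) would yield a different posterior, and nothing in the updating step justifies either choice.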

Andrew Gelman, in “Philosophy and the Practice of Statistics in the Social Sciences” (2010), states that he fears a philosophy of Bayesian statistics as subjective, inductive inference can encourage complacency about picking or averaging over existing models rather than trying to falsify them and go further. Likelihood and Bayesian inference are powerful, and with great power comes great responsibility: complex models can and should be checked and falsified. He feels that there may be a way to accommodate such a tool within the hypothetico-deductive world of Karl Popper: “The main point we disagree with many Bayesians is that we do not think that Bayesian methods are generally useful for giving a posterior probability that a model is TRUE, or the PROBABILITY for preferring model A over model B, or whatever. Bayesian inference is good for deductive inference within a model, but for evaluating a model, we prefer to compare it to data without requiring that a new model be there to beat it”. He continues: “Yes, we ‘learn’, in a short-term sense, from Bayesian inference – updating the prior to get the posterior – but this is more along the lines of what a Kuhnian might call ‘normal science’. The real learning comes in the model checking stage, when we can reject the model and move forward. The inference is a necessary stage in this process, however, as it creates the strong conclusions that are falsifiable”. I wonder how Bayesian inference would have progressed Ptolemaic astronomy or Newtonian cosmology. Would Bayesianism have produced the insights of Copernicus or Einstein?

Ivor Grattan-Guinness in “Corroborations and Criticisms: Forays with the Philosophy of Karl Popper” (2010) makes the point that in Popper's view science is a risk-taking enterprise, in which theories are formed and tested as severely as possible. Science and technology have a very close relationship, and yet technology requires reliability in the performance of its products. Thus science is risk and technology is safety, a paradox whose resolution requires careful attention to be paid to corroborations. Reliability theory is a wide-ranging subject that takes due note of unreliability, i.e. failures in technology which involve falsifications of theories. Examples are the rapid collapse of the World Trade Center buildings and the sinking of the Titanic.

Grattan-Guinness also uses a novel descriptor, desimplification, more or less as a synonym for Popper's “ad hoc” hypotheses. He sees desimplification as a way of describing aspects of Kuhn's theory of normal science, i.e. coping with small or not-so-small effects, extending the detail and applicability of theories to special cases, and checking on the size and effect of omitted factors. While such normal science is routine to a fault, it can involve the creation of difficult new theories and experimental techniques. I suspect that the application of Bayes' theorem could be construed within the parameters of normal science. When moves to desimplify are patently unsuccessful, grossly contrived, or impossible, a more radical kind of theory is required.

The Bayesian approach, as Gilboa et al. point out, begins with priors and models a limited form of learning, namely Bayesian updating. It does not illuminate the formation of the priors. They argue that rationality requires more than behaviour that is consistent with a Bayesian prior. The first tenet of Bayesianism is that whenever a fact is not known one should have probabilistic beliefs about it. In the light of new information, the Bayesian prior should be updated to a posterior according to Bayes' theorem. When facing a decision problem one should incorporate all the information one has gathered into one's Bayesian beliefs. Bayesian inference has been useful, indeed very useful, within a limited range of expectations, but science is about explanations and problem solving. Its goal is true explanations, even if there is no logical basis for proving that we have achieved the goal of discovering unambiguous truth. Critical rationalism has offered falsifiability as the demarcation between science and pseudo-science. If Bayesian methodologies have value, it is that of providing economical resources in the conjectural process; no technique can provide positive proof. Remember that even Immanuel Kant thought that Newton's views on time, space and causality were incontrovertible: a high probability that Newtonianism was correct had little bearing on its truth. How many puzzles were worked on within that paradigm?

Elliott Sober in his essay “Bayesianism: Its Scope and Limits” (2002) says that Bayesianism cannot be the whole story about scientific inference. Likelihoods don't tell you what to believe, how to act, or which hypotheses are probably true; they merely tell you how to compare the degree to which evidence supports the various hypotheses you wish to consider. Of course, what logical backing such “support” offers is the issue.

Darrell Rowbottom emphasizes that our apparent ability to reach a considered consensus on evaluations of P(e, hb) and P(e, b), as against P(h, b), might nevertheless fail to be of any deep epistemological significance. It is perfectly possible to reject a more verisimilar option in favour of a less verisimilar one in testing. We might move further from the truth rather than closer to it. The weeding out of false theories does not guarantee that we are moving toward true ones.

Karl Popper reminds us in “Truth and Approximation to Truth” (1960):

Twice two equals four: 'tis true,

But too empty, and too trite.

What I look for is a clue

To some matters not so light.

Only if it is an answer to a problem – a difficult, a fertile problem, a problem of some depth – does a truth, or a conjecture about the truth, become relevant to science. Experience is indeed essential to science, but its role is different from that supposed by empiricism. It is not the source from which theories are derived. Its main use is to choose between theories that have already been guessed. That is what “learning from experience” is (Deutsch, The Beginning of Infinity).


The confusion of Bayesian and inductive inference is particularly irksome.

First, Bayesian inference cannot proceed without assigning a prior probability to the hypothesis, but this puts the cart before the horse. Induction is usually purported to be a method of discovery; it is how we arrive at hypotheses in the first place. Bayesian inference, however, begins only after the hypothesis has already been discovered; it provides no inferential rules for deriving hypotheses from the evidence. Instead, the hypothesis, the evidence, and their logical relation are all part of the premises of a Bayesian inference.

This amounts to nothing less than a tacit renunciation of the traditional goals of inductive inference: Bayesian inference has nothing to do with generalising from experience. It begins only after we have already discovered the hypothesis and our experience has already been interpreted into the evidence. In other words, it sidesteps the core problems that inductive inference is supposed to address.

What is more, Bayesian inference is not ampliative. The inductive support for a hypothesis is sometimes identified with the following:

s(h|e) = p(h|e) - p(h)

That is, the difference between the prior and posterior probability of a hypothesis is supposedly the measure of inductive support provided by the evidence. However, this support does not transmit from premises to conclusion in a valid argument. That is, the logical implications of the hypothesis don’t inherit its inductive support.

For example, suppose that ‘Every raven is black’ is inductively supported by the evidence. A logical implication of ‘Every raven is black’ is ‘If I see a raven tomorrow, then it will be black’. The question, then, is whether this implication inherits inductive support from the hypothesis.

It does not. The support provided by the evidence for ‘If I see a raven tomorrow, then it will be black’ may be different from, or even opposite to, the support provided for ‘Every raven is black’. In other words, Bayesian support for a hypothesis does not entail Bayesian support for the predictions of that hypothesis that go beyond the evidence. However, this is exactly the problem that induction is intended to solve; it is supposed to guide future actions, expectations, and predictions. If Bayesian support is merely an artifact of deductive relations between the hypothesis and evidence, then it is entirely question-begging.
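The point can be exhibited with a toy probability space (the numbers below are hypothetical, chosen only to produce the effect): evidence e raises the probability of a hypothesis h = A-and-B while lowering the probability of its logical consequence c = B.

```python
from fractions import Fraction as F

# Atoms are triples (A, B, E) of truth values with stipulated weights.
prob = {
    (1, 1, 1): F(2, 10), (1, 1, 0): F(1, 10),
    (0, 1, 1): F(0, 10), (0, 1, 0): F(3, 10),
    (1, 0, 1): F(3, 20), (0, 0, 1): F(3, 20),
    (1, 0, 0): F(1, 20), (0, 0, 0): F(1, 20),
}
assert sum(prob.values()) == 1

def P(pred):
    return sum(p for w, p in prob.items() if pred(w))

def P_given(pred, cond):
    return P(lambda w: pred(w) and cond(w)) / P(cond)

h = lambda w: w[0] == 1 and w[1] == 1   # h = A and B
c = lambda w: w[1] == 1                 # c = B, a consequence of h
e = lambda w: w[2] == 1                 # the evidence

s_h = P_given(h, e) - P(h)  # support for the hypothesis: +1/10
s_c = P_given(c, e) - P(c)  # support for its consequence: -1/5
print(s_h, s_c)
```

Here s(h|e) is positive while s(c|e) is negative, so the “support” earned by a hypothesis does not flow to the very predictions one would want it to underwrite.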

Thanks Lee, I put in the devil's advocate comment, “Subjective Bayesian methodology seems to produce ampliative conclusions (posteriors)”, but had hit a bit of an impasse. Excellent clarification.

I think this is a very interesting draft. It lists the many supposed applications of Bayesianism, but do we know whether they actually achieved anything?

And is it possible to say what precisely is meant by the probability of a hypothesis, or the probability of its being true? I know what I mean when it comes to the probability of throwing a six. I also know that the probability will not change however many sixes I throw. But when it comes to the probability of the hypothesis ‘All ravens are black’ being true, I have no idea of how to put a figure on it. However, I can see that my subjective estimate of its likelihood will increase if I see a few more black ravens. So is it the case that this kind of probability is ordinal rather than cardinal, and subjective rather than objective? I ask this question from a position of ignorance.

On Lee Kelly's comment, I think the position is that today's logical empiricists are no longer ex ante inductivists who see observation as prior to, and the route to, theories. They have quietly accepted the invalidity of inductive inference and have become ex post pseudo-inductivists who accept that the theory is, or can be, prior to the evidence. Their “induction” is Newspeak for confirmation. They follow modus ponens and are hypothetico-deductive confirmationists! Bayesian methods tell them how well their hypotheses are supported by confirmations, even though a hypothesis may be false, as with the white swans.

My draft essay also has been written from a position of ignorance, an attempt to get my mind around Bayes’ theorem and the problems raised by Bayesianism. There are a number of phrases that I would tighten in a re-write.

If we follow Popper and desert inductive “logic” entirely and accept that knowledge consists largely of unsupported guesses or conjectures then we can still modify the prior knowledge in the light of experience. Here I am indebted to David Miller’s “Out of Error” (2006) for some cues and clues but not the direction of my meanderings.

We may also guess (here the stress is both on subjectivity and the word “guess”) how likely it is that our initial conjecture will survive testing.

We don’t learn from experience but through experience. Decisive and absolute refutation is not necessary, but susceptibility to removal is.

We can learn that our hypotheses are false and any hypothesis that remains stays until it is dislodged.

Can Bayes’ Theorem assist this process without the philosophical commitment to “confirmationism”?

I notice that Lee Kelly rigorously looked at “Inductive and Subjective Interpretations of Probability” in the Critical Rationalism Blog on 6th July 2011, plus follow-up comments. In that article Lee critiqued aspects of the application of Bayes' theorem. It is worth a revisit.

An extract is:

“The inductive view of probabilistic inference rests on the fallacy of decomposition, i.e. assuming that what is true for the whole must be true for its parts. Not only do logical consequences of A which are independent of B not increase in probability, they may actually decrease in probability. This concludes the refutation of inductive probability.

The subjective interpretation of probability might be retained even if probabilistic inference is not inductive. While it may be true that probabilistic inference cannot amplify B, it can still be used to select among alternative propositions. Probabilities can help us keep score and choose preferences without any presumption of induction. More importantly, probabilities can still be interpreted as the subjective degree of confidence that one has (or should have) in a proposition given some other proposition.”

http://www.criticalrationalism.net/2011/07/06/critical-rationalism-vs-inductive-and-subjective-interpretations-of-probability/

I feel rather ambivalent about that post.

I had a follow-up post that I accidentally deleted. I have attempted to revisit the topic several times, but I haven’t been satisfied with anything I’ve written.

If one accepts that induction cannot be logically justified, there can be no inductive justification for the application of Bayes' theorem. It may however be legitimate to interpret the calculations as a tentative indication of the corroboration of beliefs. Remember that Karl Popper attempted to find metrics for corroboration and verisimilitude but basically gave up on them. Here we are talking about beliefs, not events. The sun may rise each day (at least outside the polar zones). Each time the sun rises, my belief that it will rise has not been falsified, but the fact that the sun has risen repeatedly is no justification for stating that the sun always rises, nor that it will probably rise. Karl Popper (1959) did say that the theory of the probability of hypotheses seems to have arisen through a confusion of psychological with logical questions. Popper continues: if one says of a hypothesis (belief) that it is not true but probable, then this statement can under no circumstances be translated into a statement about the probability of events. Rather than this being a refutation of subjective Bayesianism, it may be the clue that allows an interpretation that complements critical rationalism.

Herbert Keuth (2005) comments on de Finetti that, rather than probability in the subjective interpretation being a measure of our ignorance, it measures “the degree of rationality of our beliefs”; as such it might be formulated using decision theory. Most critical rationalists, as Alan Musgrave (Catton and Macdonald 2004) points out, have nothing to say about beliefs or their rationality and focus rather on objective (third world) problems. Musgrave states that we can have a reason for believing (epistemology) without it being a reason for what is believed (metaphysics). Similarly, Deborah Mayo (2010) says we can warrant a rule for accepting (or believing) H without claiming H is proved true or probable. Thus, using Bayes' theorem, we can have a reason for believing a hypothesis without claiming that the hypothesis is true or probable. To know is to reasonably believe that the hypothesis is true without claiming that it is proven, or probably proven, to be true. Popper insists that inductive logic is a myth because we don't need it. We may rely on something without it being reliable.

I thought I had sent this contribution, but as it has not appeared, I will send it again. At the time when Bayes cropped up I was writing a paper on critical rationalism for the Manchester Philosophy Group. I was also reading Agassi's Philosophy from a Sceptical Perspective. Agassi states that the frequency theory is standard. He does not go into Bayes but has plenty to say about the current fashion for degrees of rational belief. I went into Wikipedia to find out more about probability, and these were my conclusions – but I could be wrong.

Probability, when it is not a priori or a Popperian propensity, is a relative frequency. In one thousand throws of a die I got 160 sixes. The relative frequency is 160/1000, and the probability, therefore, is estimated as 160/1000. Probability here is an estimate of what would happen in a long run of throws. Equally, if I observe 1000 ravens and note that all 1000 of them are black, the probability of seeing a black raven is 1. Similarly with 1000 swans, all of them white: the probability again is 1. But this result does not entail the truth of the theory that all swans are white. It is a hypothesis. And the observation of just one black swan both refutes the theory and lowers its probability only to 1000/1001. So a probability of about 0.999 is entirely consistent with a theory being false and falsified.
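The arithmetic in this paragraph can be checked directly (a trivial sketch, just to make the swan numbers explicit):

```python
from fractions import Fraction

def relative_frequency(successes, trials):
    """Frequency estimate of a probability from observed trials."""
    return Fraction(successes, trials)

print(float(relative_frequency(160, 1000)))   # 0.16 for the sixes
print(float(relative_frequency(1000, 1000)))  # 1.0: all swans white so far
# One black swan refutes "all swans are white", yet the observed
# frequency of white swans barely moves:
print(float(relative_frequency(1000, 1001)))  # about 0.999
```

The refuted theory retains a frequency-based “probability” of roughly 0.999, which is the whole point: high probability and falsity coexist comfortably.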

Probability can be estimated for theories which specify particular events. But more complex theories like evolution or Einstein’s are not open to frequency estimates, and cannot offer probability values. Nor are they like dice, where the assumption of fairness is enough for anyone to guess that the probability of a six is 1/6, a guess that can be tested by throwing a die a large number of times. The probability of Einstein’s theory of general relativity cannot be calculated by looking at the number of sides and sixes. An a priori estimate of its probability, i.e. the probability of it being true, is impossible other than by feelings in the bones or stomach. O.K., I declare that its probability is 0.9. That is a completely arbitrary and subjective estimate. And when the theory passes some new test my subjective probability rises by an amount which, again, is arbitrary. Perhaps it goes up to 0.91, or 0.92, or 0.93. Who knows? But however high the probability of the theory, it could still be false.

I cannot find anything to disagree with in your helpful comments, Michael. The challenge of getting a grip on the applicability of Bayes’ theorem has produced an immense literature. I have felt like the boy in Hans Christian Andersen’s tale of the king with no clothes. My initial ignorant reaction was to call out that the king has no clothes; this has since morphed into: I know the king is wearing a number of things, but they are not necessarily what a whole stream of courtiers and analysts are describing, and I am not sure myself.

The challenge in knowledge management is to try to separate the least truthlike knowledge claims from the more truthlike claims that can survive tests, and from those claims that we cannot decide. This is not so easy with respect to Bayes’ theorem. Firestone and McElroy (2003) point out, with respect to epistemology in general, that truthlikeness is similarity to the truth. The smaller the distance between a knowledge claim and the true (perhaps unstated) knowledge claim in its comparison set, the more truthlike it is. When the distance becomes zero the knowledge claim is true, but, as with absolute zero on the Kelvin temperature scale, our measurement of truthlikeness is always subject to error and we can never reach absolute truth. Popper did not promote falsifiability for merely fashionable purposes. Inductive reasoning is logically untenable, so we can eliminate that issue, despite there being reams of examples of Bayesians and logical empiricists leaping out of their baths and yelling “Eureka”, claiming to have demonstrated induction. In that case the king is not only naked but dripping wet as well.

On a more mundane level Chris Wilson (2008) points out that even with the trivial and overworked example of the sun’s rising, “Will the sun come up tomorrow?”, on its own at a logical level, a critical rationalist at least is hard pressed to say yes, even though he believes in his heart it will. His solution is to expand the question a bit. Instead of considering the one possibility – that the sun will come up tomorrow – we should also consider the alternative, that it will not. These are both hypotheses: we can explore and clarify the evidence with respect to each and see which we prefer. This is a key to the lock of the induction problem. We cannot conclude that either hypothesis is definitely correct; that would be logically untenable. The assertion that the sun will not come up has a lot of evidence against it, even though we cannot prove absolutely that it will not come up. We rely on it coming up, however, for practical purposes, like setting the alarm clock to go to work the following day. What sort of a wager is warranted is an issue, as you highlight. The more a theory stands up to tests the more confident we tend to be in holding onto it, but it is not proven true in any absolute sense. If it is demonstrated to be wrong we gain impetus to conjecture new solutions; we tend to assign our beliefs to the least refutable hypotheses.
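One classical, and much-criticised, way of expressing a rising degree of belief after repeated sunrises is Laplace’s rule of succession, which gives (n + 1)/(n + 2) after n observed risings. It is offered here only as an illustrative sketch, not as Wilson’s or Popper’s own method, and the surrounding point stands: no value short of 1 proves that the sun must rise.

```python
def rule_of_succession(n_successes):
    """Laplace's rule of succession: estimated probability that the next
    trial succeeds after n_successes consecutive successes and no failures."""
    return (n_successes + 1) / (n_successes + 2)

# Degree of belief creeps toward 1 but never reaches it.
for n in (1, 10, 10000):
    print(n, round(rule_of_succession(n), 4))  # 1 0.6667, 10 0.9167, 10000 0.9999
```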

It makes me think that falsifiability is not just a property of universal statements but is also often an unspoken assumption or a neglected assumption.

In my partly-baked initial paper, the posting of which was brought forward by Nate Silver’s prediction of the Obama election win, I quoted news sources indicating that he used Bayes’ theorem. This he did, but it does not mean that frequentist inference was excluded when he used it. Using Bayes’ theorem does not necessarily make one a Bayesian in a pure or idealistic sense (depending on whatever couture brand one chooses).

Obviously, Bayes’ theorem is one tool in the armoury for solving a variety of complex and even urgent problems, e.g. finding U-boats in the Second World War.

After a diligent analysis John Pollock (2008), in a paper I found on the web, concludes with respect to epistemological issues: “Bayesian epistemology consists of a theory of subjective probability coupled with Bayesian conditionalization. The latter purports to tell us how subjective probabilities should be updated in the face of new evidence. Because of its simplicity and mathematical elegance, Bayesian epistemology has a seductive appeal to philosophers with a formal bent. It appears to get a great deal out of very little. However, if something seems too good to be true, it usually is. I have argued that there is no way to make sense of subjective probability for real epistemic agents. And when we substitute a more realistic kind of epistemic probability for subjective probability, it becomes clear that Bayesian conditionalization is simplistic. The general lesson to be learned is that epistemic cognition has a more complex logical structure than countenanced by Bayesian epistemology.”

Andrew Gelman and Cosma Shalizi (2010) argue that Bayesian statistics accord with sophisticated forms of hypothetico-deductivism. There is more than one way to dress a king.

References:

Firestone, Joseph M. & McElroy, Mark W., Key Issues in Knowledge Management, Butterworth-Heinemann, 2003

Gelman, Andrew & Shalizi, Cosma Rohilla, “Philosophy and the Practice of Bayesian Statistics in the Social Sciences”, in The Oxford Handbook of Philosophy of Social Sciences, Oxford University Press, 2012

Pollock, John L., “Problems for Bayesian Epistemology”, University of Arizona, 2008

Wilson, Chris, Healing the Unhappy Caveman, Libertas Press, 2008

Carlos Garcia in “Popper’s Theory of Science: An Apologia” (2006) elegantly makes the point that Popper considers the distinction between logical probability and corroboration to be one of the most interesting findings in the philosophy of knowledge, and notes that the logic of probability cannot solve the problem of induction. For Popper, the logical probability of ‘x’ is the probability of ‘x’ relative to some evidence; that is to say, relative to a singular statement or to a finite conjunction of singular statements. Probability gives us information about the chances that an event will occur, but it does not inform us at all about the severity of the tests that a hypothesis has passed (or failed). Corroboration and degree of corroboration are not equivalent to confirmation and degree of confirmation, or probability, as per Carnap’s logical empiricism. The “probability of a hypothesis”, in the sense of the degree of its corroboration, does not satisfy the laws of the calculus of probability. A highly testable hypothesis is logically improbable.

Since, in Popper’s view, science is concerned with intersubjectively testable explanations, a subjective view of probability is problematic. Popper fears that the theory of the probability of hypotheses confuses psychological and logical questions. Is it a probability measure or a plausibility measure? Any universal hypothesis goes beyond the empirical evidence. It can be validly tested by seeking counter-instances, not by collecting supporting examples, as this could go on to infinity.