Machine learning

In September 2007, at the Popper conference in Prague, Donald Gillies presented a paper, “Problem Solving and the Problem of Induction”. This was included in the collection of conference articles, “Rethinking Popper”, edited by Parusnikova and Cohen and published in 2009.

Gillies says Popper’s “induction is a myth” quotation has become incorrect because science itself is changing. Gillies states in the article, with respect to studies on artificial intelligence: “Developments in machine learning since 1996 have only reinforced the claim that inductive rules of inference exist. Hence it can be argued that Popper’s 1963 ‘induction is a myth’ quotation can no longer be regarded as correct. In fact programs such as Quinlan’s ID3 or Muggleton’s GOLEM (and more recently developed machine learning programs) do make inductive inferences based on many observations and have become part of scientific procedure.”
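
Before weighing that claim, it may help to make concrete the kind of inference Gillies has in mind. Below is a minimal Python sketch in the spirit of ID3 (my own illustrative code, not Quinlan’s program): it selects, by information gain, the attribute that best splits a handful of observed cases, thereby generalising from instances to a classification rule.

    import math
    from collections import Counter

    def entropy(labels):
        """Shannon entropy of a list of class labels."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def information_gain(rows, labels, attr):
        """Reduction in entropy from splitting the examples on one attribute."""
        gain = entropy(labels)
        n = len(rows)
        for value in set(row[attr] for row in rows):
            subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
            gain -= (len(subset) / n) * entropy(subset)
        return gain

    # Toy observations: do these patients have the flu?
    rows = [
        {"fever": "yes", "cough": "yes"},
        {"fever": "yes", "cough": "no"},
        {"fever": "no",  "cough": "yes"},
        {"fever": "no",  "cough": "no"},
    ]
    labels = ["flu", "flu", "no-flu", "no-flu"]

    # ID3 greedily picks the attribute with the highest information gain,
    # generalising from four instances to a rule that covers unseen cases.
    print(max(rows[0], key=lambda a: information_gain(rows, labels, a)))  # fever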

It seems to me this is a big claim to make.

A challenge, it seems to me, could come from two questions:

  1. Is the “induction” Gillies refers to the same as what Popper means by induction?
  2. Does this machine learning only appear to be inductive?

Are the conclusions drawn by machine learning wider in scope than their premises, or are they the result of testing and re-testing the programmed assumptions?

Datteri, Hosni and Tamburrini, in “Machine Learning from Examples: A Non-Inductivist Analysis” (2005), rejected the view that mechanical learning systems perform epistemically justified inductive generalization and prediction, and outlined an alternative deductive account. Gillies does not seem to accept their viewpoint.

8 Responses to Machine learning

  1. Lee Kelly says:

    Well, suppose you have a sentence form:

    some x are y

    If induction is merely a programmable rule that accepts such sentences as inputs and then, as outputs, produces sentences of the form,

    all x are y

    Then I presume nobody would disagree that induction is possible; one might then apply some Bayesian updating procedure. This might even roughly correspond to some cognitive process in human brains. But if you think this refutes Popper’s claim that induction is a myth, then you have, in my opinion, grossly misunderstood the traditional problem-situation.
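
    A minimal sketch of such a programmable rule, together with a Beta-Bernoulli updating step (the Python below is purely illustrative, not anyone’s published system):

        def inductive_rule(observations):
            """Accept 'some x are y' instances; emit 'all x are y' absent a counterexample."""
            if observations and all(is_y for _, is_y in observations):
                return "all x are y"
            return "not all x are y"

        def beta_posterior_mean(successes, failures, prior_a=1.0, prior_b=1.0):
            """Bayesian (Beta-Bernoulli) posterior mean for P(an x is y)."""
            return (prior_a + successes) / (prior_a + prior_b + successes + failures)

        obs = [("x1", True), ("x2", True), ("x3", True)]
        print(inductive_rule(obs))                            # 'all x are y'
        print(beta_posterior_mean(successes=3, failures=0))   # 0.8: updated, never certain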

  2. Rafe says:

    Yes, it also depends on which of the various forms of induction the author is talking about – discovering regularities in the world, inventing explanatory theories, “proving” general statements or putting a p value on a theory.

  3. Bruce Caithness says:

    In his article, Donald Gillies says there is a branch of Artificial Intelligence known as “machine learning” whose aim is to generate hypotheses “automatically” from data, in his words, to carry out “machine induction”.

    I do not expect that a computer can be a functional “blank slate”.

    Gillies would surely not argue with Popper’s statement that observational neutrality is impossible. In other words, for a computer to run through the data it must be programmed to mine the data, e.g. pick out all values that equal or exceed such and such and occur in combination with such and such. In this sense the computer can observe, in Popper’s phrase, “what it knows”. If this is what Gillies and others mean by induction, well and good, for Popper didn’t usually like to get into arguments about the meaning of words.
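
    To make that concrete, here is a trivial illustrative filter in Python (the thresholds and names are mine, not Gillies’): everything the program can “observe” is fixed by the programmer before any data arrives.

        # The "observational" criteria are built in before any data is seen:
        THRESHOLD = 100          # the programmer's choice, not derived from the data
        REQUIRED_FLAG = "fever"  # likewise

        def observe(records):
            """Return only the records the program has been told to notice."""
            return [r for r in records
                    if r["value"] >= THRESHOLD and r["flag"] == REQUIRED_FLAG]

        data = [{"value": 120, "flag": "fever"},
                {"value": 80,  "flag": "fever"},
                {"value": 150, "flag": "cough"}]
        print(observe(data))  # only the first record is "seen" at all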

    It is observation – not neutral observation, but observation nevertheless. Observation can be very complex – as we know from the human visual system.

    What can one induce about the future from a repetitive run of data? If the observation of repetitive instances is meant to provide a reason for accepting a hypothesis then Popper cautions that this is not only invalid but a myth. In Popper’s words: “It is induction by repetition (and therefore probabilistic induction) which I combat as the centre of the myth; and in view of the past history of induction from Aristotle and Bacon to Peirce and Carnap, it seems to me appropriate to use the word ‘induction’ as standing, briefly, for ‘induction by repetition’” (page 1034 Schilpp).

    The problem of induction, according to Popper, is “How can induction be justified?” To this problem Popper’s answer is “It cannot” (page 1043 Schilpp).

    I don’t understand how Gillies can propound an inductive principle and get around the trilemma of justification: infinite regress, logical circle and dogmatism. Datteri, Hosni and Tamburrini did not believe that it is necessary to introduce notions of consequence that exceed or otherwise cannot be captured within the framework of deductive reasoning.

  4. Lee Kelly says:

    I’ve recently taken to writing ‘the problem of induction and its corollaries’, because there is no one problem of induction, and it’s closely related to several other problems.

    The traditional problem of induction concerns the justification of a principle of induction. Such a principle was thought necessary to justify a posteriori knowledge of universals, but it was itself a universal that could not be justified a priori. The problem, then, was that induction seemed to depend upon itself for justification, a circular argument.

    This problem is not akin to the problem of justification. A solution would merely involve the demonstration of a principle of induction that doesn’t appeal to the past success of the principle itself.

    However, if knowledge doesn’t require justification, as critical rationalists claim, then neither does induction. Any principle of induction then takes on a conjectural character, in need of neither a priori nor a posteriori justification. The critical rationalist critique of induction, therefore, really has nothing to do with the traditional problem of induction at all.

    In my view, the important problems of induction involve the theory-ladenness of experience and Goodman’s new riddle (or the grue-problem of induction).

    Goodman’s new riddle demonstrates a formal problem. Unless you sufficiently impoverish your language, it’s always possible to induce infinitely many mutually contradictory hypotheses from any finite set of existential statements. Thus, as a rule of inference, induction appears almost entirely incontinent–it allows too many possibilities.
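
    The point can even be made mechanical. This illustrative Python sketch (the cutoff years are arbitrary) generates a family of mutually contradictory grue-style hypotheses, every one of which fits the same finite observations:

        def make_grue(cutoff):
            """Hypothesis: emeralds are green if examined before `cutoff`, else blue."""
            def fits(year, colour):
                return colour == ("green" if year < cutoff else "blue")
            return fits

        observations = [(2000, "green"), (2005, "green"), (2010, "green")]

        # Every cutoff after 2010 yields a hypothesis that fits all the data,
        # yet the hypotheses contradict one another about unexamined emeralds.
        hypotheses = [make_grue(c) for c in range(2011, 2016)]
        print(all(h(y, c) for h in hypotheses for y, c in observations))  # True
        print([h(2013, "green") for h in hypotheses])  # they disagree about 2013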

    The theory-ladenness of experience strikes at the psychologistic character of induction. That is, induction proceeds from sensory experience to universal statements, but surely statements only follow from other statements–sensory experiences cannot serve as premises in an argument. Sensory experiences cannot be written down, for to write them down is to interpret them, but to interpret them is to do so in the light of theory. What, then, is “induced” from sensory experience has everything to do with its initial interpretation and almost nothing to do with any inductive procedure.

    In my view, these two problems are actually two ways of saying roughly the same thing.

  5. Lee Kelly says:

    Oops, that’s meant to be: ‘induction appears almost entirely impotent’, not ‘incontinent’.

  6. Lee Kelly says:

    I don’t know much about AI research, but it seems to me impossible to avoid the problems I describe unless the engineers build their own knowledge into the system. That is, build-in interpretations and expectations that arbitrarily eliminate logically possible “inductions” from consideration (e.g. all emeralds are grue). More specifically, knowledge about how to carve up experience into discrete objects and what counts as interaction between them. Even the input devices, such as cameras and microphones, embody knowledge about the world on which any “induction” would depend.

    Moreover, so far as the discovery of new conjectures goes, induction is, in principle, no more useful than any other invalid inference. We could program an AI to perform several kinds of fallacy, or just randomly spit out slight variations on existing knowledge, and any such conjectures could then be tested against experience. Inevitably, such procedures would sometimes be successful, but it would be strange to insist that each fallacy must, therefore, be some kind of alternative logic.
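
    A toy version of that procedure (purely illustrative Python): blindly generate variations of a conjecture and keep only those that survive deductive testing against the evidence.

        import random

        evidence = [(1, 2), (2, 4), (3, 6), (4, 8)]  # observed (x, y) pairs

        def random_conjecture():
            """Blind variation of a linear hypothesis y = a*x + b; no logic of discovery."""
            return (random.randint(-5, 5), random.randint(-5, 5))

        def survives_testing(conjecture):
            """Deductively test the conjecture against every piece of evidence."""
            a, b = conjecture
            return all(a * x + b == y for x, y in evidence)

        random.seed(0)
        survivors = set()
        for _ in range(10000):
            c = random_conjecture()
            if survives_testing(c):
                survivors.add(c)
        print(survivors)  # {(2, 0)}: found by blind variation plus selection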

  7. Bruce Caithness says:

    Article in The Guardian by David Deutsch “Philosophy will be the key that unlocks artificial intelligence”

    http://www.guardian.co.uk/science/2012/oct/03/philosophy-artificial-intelligence?INTCMP=SRCH

    Quote:

    “The lack of progress in AGI is due to a severe log jam of misconceptions. Without Popperian epistemology, one cannot even begin to guess what detailed functionality must be achieved to make an AGI. And Popperian epistemology is not widely known, let alone understood well enough to be applied. Thinking of an AGI as a machine for translating experiences, rewards and punishments into ideas (or worse, just into behaviours) is like trying to cure infectious diseases by balancing bodily humours: futile because it is rooted in an archaic and wildly mistaken world view.”

  8. Bruce Caithness says:

    The link to David Deutsch’s full article is

    http://www.aeonmagazine.com/being-human/david-deutsch-artificial-intelligence/

    Excerpt:

    “Unfortunately, what we know about epistemology is contained largely in the work of the philosopher Karl Popper and is almost universally underrated and misunderstood (even — or perhaps especially — by philosophers). For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs and that, therefore, an AGI’s thinking must include some process during which it justifies some of its theories as true, or probable, while rejecting others as false or improbable. But an AGI programmer needs to know where the theories come from in the first place. The prevailing misconception is that by assuming that ‘the future will be like the past’, it can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’. But that is impossible. I myself remember, for example, observing on thousands of consecutive occasions that on calendars the first two digits of the year were ‘19’. I never observed a single exception until, one day, they started being ‘20’. Not only was I not surprised, I fully expected that there would be an interval of 17,000 years until the next such ‘19’, a period that neither I nor any other human being had previously experienced even once.

    How could I have ‘extrapolated’ that there would be such a sharp departure from an unbroken pattern of experiences, and that a never-yet-observed process (the 17,000-year interval) would follow? Because it is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation. The future is actually unlike the past in most ways. Of course, given the explanation, those drastic ‘changes’ in the earlier pattern of 19s are straightforwardly understood as being due to an invariant underlying pattern or law. But the explanation always comes first. Without that, any continuation of any sequence constitutes ‘the same thing happening again’ under some explanation.

    So, why is it still conventional wisdom that we get our theories by induction? For some reason, beyond the scope of this article, conventional wisdom adheres to a trope called the ‘problem of induction’, which asks: ‘How and why can induction nevertheless somehow be done, yielding justified true beliefs after all, despite being impossible and invalid respectively?’ Thanks to this trope, every disproof (such as that by Popper and David Miller back in 1988), rather than ending inductivism, simply causes the mainstream to marvel in even greater awe at the depth of the great ‘problem of induction’.

    In regard to how the AGI problem is perceived, this has the catastrophic effect of simultaneously framing it as the ‘problem of induction’, and making that problem look easy, because it casts thinking as a process of predicting that future patterns of sensory experience will be like past ones. That looks like extrapolation — which computers already do all the time (once they are given a theory of what causes the data). But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences. We think about the world: not just the physical world but also worlds of abstractions such as right and wrong, beauty and ugliness, the infinite and the infinitesimal, causation, fiction, fears, and aspirations — and about thinking itself.”
