Continuity, Refinement, and the No-Miracles Argument (with Alex Broadbent)



1. Introduction

The No-Miracles Argument (NMA) for scientific realism, in its modern formulation, is generally traced back to Hilary Putnam's remark that realism “is the only philosophy that doesn't make the success of science a miracle” (Putnam 1975, 73). This slogan is usually interpreted as referring to the empirical success of scientific theories, characterised by their ability to accommodate known observational data, and to predict the outcomes of observations that were not previously known.

Yet this is not how Putnam seems to interpret his own remark. In a fuller presentation of the argument, Putnam does not set it out in terms of truth, but in terms of convergence upon the truth. He points out (drawing on an unpublished essay by Richard Boyd) that, according to a “positivistic philosophy of science”, later theories need not resemble earlier theories at all in their non-empirical parts. Yet scientists often try to preserve concepts from earlier theories when devising new ones. Putnam (op. cit., 20) claims that this “is often the hardest way to get a theory which keeps the old observational predictions, where they were correct, and simultaneously incorporates the new observational data”. He goes on: “this strategy has led to important discoveries (from the discovery of Neptune to the discovery of the positron)” (ibid.). It is not merely empirical success, but the fact that this strategy of refinement results in further empirical success, which would be a miracle according to any philosophy other than realism.

This, at any rate, is the argument we seek to advance in this paper. We do not concern ourselves with whether Putnam intended what we are suggesting. Rather, we want to know whether our interpretation – warranted or not – provides a viable and novel argument for scientific realism. Typically, the fact of theory change is seen as a source of difficulty for realism, because the truth of current theories would seem to imply the falsity of all incompatible ancestors, and our epistemic position does not seem to be so dramatically improved as to warrant the expectation that current theories will fare differently. Much more likely is that they themselves will be superseded. We endorse standard realist strategies for dealing with theory change: being realist only about certain parts of theories, advocating their approximate rather than exact truth, and confining our attention to mature scientific enterprises. Like many others, we accept that theory change motivates qualifications on realism. But unlike many others, we also think it provides additional motivation for the core commitment of realism. We think that the way theories grow out of each other, especially the fact of preservation of elements between successive theories, provides a reason to think that those theories are onto something: that they are approximately true, and are getting closer to the truth.

In Section 2 we distinguish the familiar static version of the NMA – which proposes realism as an explanation of the empirical success of scientific theory – from the dynamic version of the argument, which takes as explanandum the empirical success of a current theory coupled with the historical continuity between successive theories leading up to it. We also confine our argument in standard ways, to the approximate truth of the essential posits of mature scientific theories. There are two ways that continuity might come about. In Section 3 we consider accidental continuity, that is, continuity which comes about when scientists working on prima facie unrelated topics end up with theories that mesh. In Section 4 we consider deliberate continuity, or refinement, which occurs when scientists develop a new theory out of an old one, seeking (as Putnam says) to preserve as much of the old theory as possible. In both cases, we argue that truth is the best explanation of the respective kind of continuity conjoined with empirical success.

The arguments in Sections 3 and 4 show how continuity increases the relative probability that a theory is (approximately and/or partially) true; they do not yet render truth likely. The best explanation may, as it were, not be good enough. In Section 5 we therefore consider the base rate objection due to Colin Howson (2000), and argue that iteratively refining a theory reduces the probability of falsity exponentially. It may be true that we do not start with any reasonable estimates of the relative proportions of true to false theories in the “space” of possible theories. But after enough iterations of the process of refinement, we can nevertheless be confident that the absolute probability of (approximate) truth is high, regardless of where we started. This completes the dynamic no-miracles argument for realism. In Section 6 we consider objections, including Larry Laudan's claim that the history of science exhibits radical discontinuity, among others.

To be clear about the structure of what follows: the aim of the arguments in Sections 3 and 4 is not yet to defeat the base rate objection and thus show that it would be, in absolute terms, remarkably unlikely if a given series of continuous theories were false yet successful. In those sections we argue that (approximate and/or partial) truth is the best explanation for accidental continuity (Section 3) and for deliberate continuity (Section 4) – not yet that the best is good enough. At that stage, we seek only to show how continuity increases the relative probability that a theory is (approximately and/or partially) true: to render the claim that it is utterly false increasingly improbable.

2. Groundwork

The Static No-Miracles Argument (SNMA) claims that the empirical success of current scientific theory would be a miracle if that theory were not at least approximately true. The Dynamic No-Miracles Argument (DNMA) claims that the empirical success of current scientific theory and the fact that this theory preserves many elements of previous successful theories, which in turn preserve many elements of their ancestors, would be a miracle if the core claims of the whole theoretical “family” were not at least approximately true. Both arguments purport to be inferences to the best explanation (IBEs), but in the SNMA, the explanandum is merely empirical success, while in the DNMA, it is the conjunction of empirical success and continuity between successive theories.

The scope of our argument should also be clarified in respect of whether it is to be taken as defending scientific realism tout court, or merely the approximate and/or partial truth of successful theories. To clarify this issue, we must discuss some of the arguments concerning weakened versions of realism, which we group under the term “selective realism”. The motivation for adopting selective realism is the so-called “pessimistic meta-induction” (PMI; see Putnam, 1978, pp. 24-45; Laudan, 1981). The PMI begins from the premise, motivated by several examples, that there has been “radical” discontinuity between successful scientific theories over the history of science. Considering a given empirically successful theory of the past, the radical discontinuity thesis implies it is totally false, as it is incompatible with currently successful theories which are taken as true. The main realist thesis, on the other hand, is that empirically successful theories, including the one in question, are at least approximately true. So, taken together, the premises imply that a given theory is both true and false. Thus the pessimistic premise, if true, threatens a reductio ad absurdum of scientific realism.

In response to this argument, the selective realist claims that realist commitment is only warranted in respect of certain parts of successful scientific theories, namely those that are in some sense “responsible” for the success. She is then able to concede that there have been substantial theoretical changes over the history of science while denying that there has been radical discontinuity. In fact, she claims that there has been continuity in respect of precisely those elements responsible for the success of the earlier theories.

There are several distinct variants of selective realism currently on offer in the literature. Worrall (1989; 2007) endorses “structural realism”, i.e. a realism about the relations between entities that remains neutral about the underlying nature of these entities. Hacking (1983), in contrast, argues that scientific theories often correctly pick out the basic entities underlying observable phenomena, but are frequently mistaken about their properties. Kitcher (1993) argues that we ought to be realists about the “working posits” of theories, by which he means those essentially involved in the “explanatory schemata” which are involved in the actual work of producing empirical predictions and explanations. Psillos (1999) advocates a “divide et impera” strategy, pointing out that even if a theoretical claim is refuted, parts of the theory may be salvaged. And so on.

For the purposes of this essay, it is not necessary to commit to any particular variant of selective realism, although it is important to be aware of the basic idea. Rather, we intend that the notions of “approximate truth” and “continuity” should, for the purposes of the DNMA, be cashed out by whatever variant happens to be favoured in the present context. If we favour structural realism, for instance, the DNMA will go through in respect of a succession of empirically successful theories which share basic structural features. And, when we conclude that a theory is “approximately and/or partially true”, we will mean that the structure of the theory stands approximately in a relation of isomorphism to actual entities in the world and the relations between them. And so on for other forms of selective realism. One might still wish to place some pressure on the term “approximately”, but this is not an issue we intend to resolve here.

Before continuing to the main argument of the paper, it is also worth acknowledging that we are not the first to claim that theoretical continuity is favourable to the scientific realist. Such continuity is to be expected, if realism is correct – and to this extent, continuity might be thought to provide some confirmation of realism. Nevertheless, we have not seen an argument of the kind proposed here, that uses continuity to mount a stronger version of the NMA. To demonstrate the novelty of the current approach, we shall briefly outline some other arguments for realism that broadly emphasise theoretical continuity and highlight differences between them and the current approach.

Levin (1979) and Leplin (1981) argue that we have accumulated empirical observations over the course of scientific history, and present theories are objectively better supported than past theories simply because they account for more of this evidence. As the scientific endeavour continues, it therefore becomes progressively more unlikely that the currently best-supported theory will ever be overturned. A similar argument is made by Roush (2009), who focuses on the improvements in scientific methodology over the course of history. These arguments can be viewed as attempts to upset a major premise of the PMI, that our epistemic position is relevantly similar to that of our scientific predecessors. The obvious rejoinder by the anti-realist to these arguments is that, while our current theories are of course supported by more empirical evidence and better tools of inference than earlier theories were, the same can also be said of the earlier theories that were themselves overturned! So while our epistemic position is dissimilar in some respects to that of our predecessors, it is similar at the more relevant “meta-inductive” level.

A different sort of argument is given by Harker (2008; 2012), who claims that the best justification for realist commitment is a theory’s comparative empirical success over relevant competitors. Specifically, we ought to be realists about those elements of newer theories which are responsible for improved empirical success relative to older theories. The basic problem with this idea is that a theory can be more empirically successful than its competitors without being successful in any absolute sense; certainly without being so successful that it would be a “miracle” if it were not at least approximately and/or partially true. So the mere fact that some new theoretical elements increase empirical success relative to the old theory cannot by itself be of interest to the realist. Of course, it may be the case that these new elements are responsible for empirical success according to some absolute standard, but such a standard would apply equally well to older elements. Thus the suggestion that we compare theories wouldn’t seem to add anything to the basic selective realist idea that realist commitment should attach only to those theoretical elements responsible for empirical success.

The strengths and weaknesses of these arguments can of course be debated at length. The point, for present purposes, is that they differ significantly from the DNMA being presented here. These arguments are not particularly concerned with continuity and, in fact, tacitly concede the anti-realist charge of radical discontinuity. All they attempt to do is show that current theories are, in some sense, improvements over earlier theories and infer from this that realist commitment is warranted in respect of the newer theories. We think this approach is inadequate, both for the particular reasons cited above and, more importantly, because it concedes too much to the anti-realist. If (albeit partial and/or approximate) continuity is indeed a feature of the history of science, we think a much stronger, positive argument is available to the realist. We present this positive argument in the following sections.

3. Accidental continuity

Consider the case where there is significant theoretical continuity between two scientific theories that deal with the same empirical phenomena. We can split the cases along two dimensions. The dimension already mentioned is accidental/deliberate, where “accidental” means that the continuity between theories does not come about via scientists’ conscious or unconscious attempts to reproduce the concepts of earlier theories. The other dimension is approximate truth/total falsity. These two dimensions of distinction give us four kinds of case where continuity arises:

i) Accidental continuity and truth. Two (or more) independently formulated theories are similar because they both “latch onto” the same underlying facts about the world;

ii) Accidental continuity and falsity. Two (or more) independently formulated theories are similar by sheer coincidence;

iii) Deliberate continuity and truth. A new theory is systematically derived from an older one, and both are (partially and/or approximately) true; and

iv) Deliberate continuity and falsity. A new theory is systematically derived from an older one, and neither is even partially and/or approximately true.

These four options exhaust logical space. Notice, however, that case (i) includes instances of “accidental” continuity where there is in fact a very systematic reason for continuity, namely scientists’ ability to latch onto the truth in both cases.

As in the standard NMA, our strategy is one of eliminating as implausible several explanations for the observed properties of scientific theories, leaving only the explanation that the theories in question are true. In this section, we focus on cases where the historical record gives us a strong prior reason to believe that theoretical continuity is accidental. This focus already eliminates (iii) and (iv) as potential explanations of continuity, meaning that our main task is to eliminate explanation (ii). In the following section, we consider cases where scientists’ knowledge of earlier theories more plausibly influences the formulation of the theory in question. As such, our main task will be to argue against (iv) in favour of (iii).

To illustrate the more general strategy in cases of accidental continuity, we have chosen a striking and well-documented example, namely Maxwell’s reformulation of the wave theory of light (the historical details are provided by Mahon, 2003, ch. 7). Starting in 1819, Fresnel defended a wave theory of light and, motivated by the results of polarization experiments, postulated that it consists of a transverse wave propagating in an elastic solid “ether” which pervades space. Around 1862, Maxwell was concerned with a problem that is, on the face of it, entirely different to Fresnel’s, namely that of accounting for electrical and magnetic phenomena. At this time, these were already understood to be related, for example by the phenomenon of electromagnetic induction. Maxwell constructed a mechanical model whereby space is filled with spinning “cells” or “vortices”. In this model, magnetic lines of force are constituted by these cells lining up along their spin axes, and electric currents occur in a conducting material when tiny electric charges are passed from one cell to the next. To account for the electrical properties of insulating materials, Maxwell postulated that the cells are elastic and that an electric field is constituted by the distortion of these cells across a region of space.

This model satisfactorily accounted for the electromagnetic phenomena of immediate interest. But a secondary consequence of this model is that the postulated medium has the right mechanical properties (notably elasticity) to fulfil the role of Fresnel’s ether in sustaining the propagation of transverse waves. And, when Maxwell carried out the relevant calculation with empirically measured values for the electromagnetic and electrostatic units of charge, he predicted the occurrence of waves with very nearly the same velocity as the measured value for light. The electromagnetic theory of light which grew out of this insight rapidly became dominant in optics. And this theory had substantial continuities with Fresnel’s earlier theory, not least the postulate that light is constituted by a transverse wave. Yet it is difficult to argue, in light of the history sketched above, that the electromagnetic theory was selected precisely because of these continuities. The new theory emerged from an investigation of electricity and magnetism, both of which are prima facie quite distinct from optical phenomena.

Given the particular history of this particular case, the explanation that the newer theory was in any sense designed or selected to match the categories of the older theory seems implausible. We are thus left with the potential explanations of the theories in question being (approximately and/or partially) true or being similar as a matter of mere coincidence. As indicated, our strategy is to render the coincidence explanation implausible, leaving only the explanation of truth remaining.

Our principal argument here is conveniently dubbed the plagiarism argument. Consider what we do when we assess whether a student’s written work is plagiarised from some other source. We conclude that there is some causal connection between the two when the same words are used, in the same order, as to conclude otherwise would be to posit that an astonishing coincidence had occurred. Even though there are a finite number of words in a given language, and perhaps a very finite number in the vocabulary of a typical plagiarist, the number of ways to combine these words renders it highly improbable that two texts would combine them in the same way, unless because of some common cause. Scientific theories are human inventions, just as students' essays are. Likewise, it seems implausible that any two human inventions such as scientific theories would be similar in the fashion described if there were no common cause behind both of them. In much the same way as there are an infinite number of sentences that can be constructed by different combinations of the words in the English language, it would seem that there are an infinite number of theories that could be created if the combination of concepts were purely arbitrary. It is thus astonishingly unlikely that any two theories plucked ‘at random’ from the set of possible theories would be similar in any substantive way.
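To convey the scale of improbability involved, the following is a minimal back-of-envelope sketch of the relevant combinatorics (our own illustration, not part of the original argument), under the simplifying assumption that words are drawn independently and uniformly at random from a fixed vocabulary:

```python
# Toy illustration of the plagiarism argument (illustrative assumption:
# each word is chosen independently and uniformly from a fixed vocabulary).

def coincidence_probability(vocabulary_size: int, sentence_length: int) -> float:
    """Probability that two independently composed word sequences of the given
    length match word-for-word under the uniform-and-independent assumption."""
    return (1.0 / vocabulary_size) ** sentence_length

# Even a modest vocabulary and a short sentence make exact coincidence absurd:
print(coincidence_probability(vocabulary_size=2000, sentence_length=10))
# ~9.8e-34, which is why a shared sentence is taken to signal a common cause.
```

The point of the sketch is only that, even under heavy restrictions on the available “vocabulary”, independent exact agreement remains wildly improbable.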

Of course, if the “selection” of theories is not in fact totally independent, the strength of this particular argument is weakened proportionately. The existence of theoretical constraints effectively restricts the ‘random draw’ of the second theory to a subset of the overall set of potentially successful theories. This proviso is relevant because, even if there is no direct causal link between the two theories, it is generally not the case that they are each assembled ‘from nothing’ to account for the empirical phenomena at hand. A given branch of science at a given time will have certain explanatory models or concepts which are taught to all practitioners and which each of them tends to draw on in constructing larger theories. These concepts are thus ‘independently’ incorporated into various theories. One example is the “lock and key” model of molecular interaction in biochemistry. This was originally proposed in the late nineteenth century as an explanation for the specificity of enzymes in targeting particular substrates (Fischer, 1894), but has subsequently been extended to the interaction of hormone with receptor, of antibody with antigen and so on.

With the Fresnel-Maxwell case in particular, it is notable that Maxwell chose to model the electromagnetic medium as an elastic solid, as the theory of such materials was coming to form a key explanatory resource in physics. Maxwell himself (Maxwell, 2003) had presented a paper on elastic solids in 1850, while still a student at Edinburgh University. Moreover, this preoccupation with elastic media was no doubt very importantly motivated by the need to model wave-like phenomena, and so indirectly by Fresnel’s achievements in the theory of light. So perhaps it is not altogether so surprising that Maxwell’s theory was importantly similar to Fresnel’s; it was drawing upon the same basic, and actually relatively limited, set of conceptual tools.

Although the existence of such theoretical constraints does somewhat limit the scope of the plagiarism argument, it still retains much of its force. Even the case of literal plagiarism is restricted: students do not pluck words at random, but write on the same topics as their peers, using the same textbooks, and so forth. Nonetheless, we generally regard even the occurrence of a single shared sentence in two sources as potentially suspicious. Likewise in the scientific case, if there is any freedom at all in the construction of a newer theory, then the plagiarism argument will apply to some extent. Thus the probability that these theories are true is increased by the application of the DNMA to theoretical continuity, relative to a simple application of the SNMA to a given static theory. If it is unlikely that a false theory would be empirically successful, it is astronomically unlikely that two false theories would be empirically successful and display accidental continuity, even where the “pick” of theories is not totally independent.

We have no resources for explaining accidental continuity between false and yet empirically successful theories. We can say only that it is a mere coincidence, that it is unexplained. A better explanation for accidental continuity than none at all is that both theories are (approximately and/or partially) true. But this is not the only better explanation than none at all. The plagiarism argument itself suggests another. It could be that scientists achieve continuity by, in effect, copying each other's work. We have given a case (the Maxwell case) where this is implausible, and here and in other such cases, we maintain that truth is a better explanation than either falsity or deliberate manufacture of continuity. But as Putnam points out, there are many more cases where continuity seems to be achieved through a deliberate process of refining older theories to arrive at new ones. This raises the possibility that most instances of continuity are explained, not by even approximate truth, but as an artefact of scientists' obsession with achieving it.

In fact, we think that deliberate continuity also grounds a type of DNMA. This is the argument to which we turn in the next section.

4. Deliberate continuity (refinement)

It is not in doubt that scientists often attempt to build on the (perceived) successes of earlier theories. We therefore offer a version of the DNMA focused on cases where scientists deliberately attempt to preserve concepts from earlier theories. However, this argument applies more generally to all cases of systematic theoretical continuity, where scientists’ knowledge of earlier theories influences the construction of the newer theory, even where this influence is not consciously intended.

Although there is no sharp distinction between the two in practice, for clarity we will describe two idealised methods that scientists might apply in attempting to refine an older theory in developing a newer one. These can be thought of as comprising the poles of a continuous spectrum. Firstly, a scientist might attempt to develop an existing theory, steadily improving it in response to possible refutations or to provide explanations of phenomena that are not yet sufficiently accounted for. Lakatos’ (1968) understanding of scientific theory change is helpful in characterising this process. He distinguishes the “hard core” of a theory from a “protective belt” of auxiliary posits, and argues that a “progressive research programme” is one in which the protective belt is continually modified in such a way as to produce new empirical predictions without introducing too many ad hoc modifications to avoid refutation. This is a helpful picture, but it will better motivate an argument for truth if Lakatos’ roughly sociological characterisation of the “hard core” is replaced with the selective realist’s “essential posits”, which are identified by their logical relationship to specific empirical successes. Thus a prima facie (i.e. sociologically identified) “core” posit of a theory can be discarded under the process of refinement, provided the “essential” posits are preserved. One example is the eventual abandonment of the luminiferous ether within the Maxwellian theory of light, in favour of a “freestanding” electromagnetic wave.

We might call this “refinement by development”. The opposite pole is what might be called “refinement by constraint”, and takes inspiration from Post’s generalized correspondence principle (Post, 1971). Under this model, the theorist will, as it were, start from scratch in devising principles to explain some new set of phenomena. However, she will then ensure that the theory that arises from these new principles conserves the essential elements of the theory that was previously successful in this broad empirical domain. One good example of this is Einstein’s formulation of special relativity. He initially created this theory by way of quite abstract considerations of how an observer would perceive light under high-velocity conditions (Einstein, 1920). Nevertheless, he held it as a basic condition of adequacy that the theory should recover the Newtonian laws of mechanics under low-velocity conditions.

As emphasised above, most actual scientific cases probably occupy some intermediate position between these extremes. And, in any case, it is unclear that the distinction is very relevant for our purposes. In each case, there is continuity between theories, and this has been achieved deliberately. Therefore, in what follows, we shall use the general term “refinement” to refer to any process of theoretical development that achieves such continuity deliberately. Our task is to see whether an argument can be constructed that supports realism about the class of theories in question.

Our principal consideration in the case of deliberate continuity can be conveniently dubbed the map argument. A map is a case where a human construction paradigmatically purports to represent something in the world. Suppose that a particular map has proved useful in navigating around a specific piece of terrain, but it is a relatively rough map. Imagine that someone now wishes to improve on this map by taking more observations and “filling in” the parts currently lacking in detail. She doesn’t necessarily take the existing map as canon in every respect, but tries to conserve those elements which have proven themselves useful in navigation. Suppose that, at the end of this process, her new map is significantly better as a navigational tool than the old map, while preserving the “essential” elements of the latter. Intuitively, that the older map was able to support such a process of refinement suggests that there is “something right” about it. The old map could conceivably have been substantially false, and been useful for navigation only through a flukish isomorphism between the parts that had in fact been used for navigating and the particular routes that had in fact been navigated. A map of Gaborone might, flukishly, happen to be isomorphic to a map of Paris in some respects, having a couple of streets of the same name intersecting, for example. On a lucky occasion, this fluke might help you to make a correct turn at an intersection in Paris. But if one seeks to improve one's map of Gaborone by exploring Paris, one will very quickly discover that the map is not a representation of Paris at all. If the original map were substantially false, it would seem inevitable that the procedure of progressively adding detail would quickly turn up fundamental contradictions between some essential element and the actually observed terrain.

Applying this idea to scientific theories, we start with the ability of an empirically successful theory to support success-increasing refinements. If the map argument is correct, the fact that the theory can be substantially developed to achieve new empirical successes without necessitating any substantial revision to the existing essential elements is an argument for its (approximate) truth.

Could it not be a lucky coincidence that refinement is a good strategy for improving empirical success? Could it not be that, as it happens, scientists have not yet observed the phenomena that cannot be reconciled with the core theory? This is possible, but it is increasingly unlikely in cases where the core concepts have supported significant refinements over time. Suppose that there are n theories which are empirically adequate in respect of the data supporting our original successful theory. Call these theories A1 – An. Given the leeway allowed by approximate and/or partial truth, more than one of an apparently incompatible set of theories might be considered approximately true. So let us stipulate that A1 – Am are each approximately true, with the remainder being false but successful. In the terminology of the NMA, a lucky coincidence has occurred just in case the theory actually selected is one of the false but successful Am+1 – An. Now suppose that each theory Ai is potentially refinable into a successor theory Ai*.

On the assumption that truth generally leads to empirical success [1], theories whose essential posits are approximately true can always be refined without any substantive challenges to their essential posits. But for a false theory to support such a process requires more luck at each round of refinement. Even assuming the probability of surviving each round is high, say 0.95, the probability of false theoretical principles avoiding being faced with irreconcilable empirical evidence will nevertheless decline rapidly as the process continues. In other words, the process of refinement itself is likely to expose the fact that the match between theory and data was coincidental in the first place. So the very fact that a theory has an empirically successful successor theory increases the probability that it is one of the A1 – Am. Notice, once again, that this argument does not yet fully address the base rate objection – if the probability of a successful theory being true is very low in the first place, even the additional fact that it supports refinement might not be sufficient to render its truth likely. Nevertheless, if we accept the basic premise that the prior probability of obtaining a true theory is sufficient to warrant the NMA on some occasions, then this argument will further diminish the probability of a lucky coincidence.

[1] Challenges to this assumption would be interesting, but we cannot deal with them here. Our focus is on the converse claim, that only truth is empirically successful. It is this claim that anti-realists typically attribute to realists, and attack.
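The arithmetic behind this decline can be made explicit with a minimal sketch (ours, not a model from the paper; the 0.95 figure is the illustrative value used above, and independence across rounds is an idealising assumption):

```python
# Rough numerical sketch of the point above: even if a radically false but
# successful theory has a high chance of surviving any single round of
# refinement, the chance of surviving many rounds decays exponentially.
# (Assumption: survival chances are independent across rounds.)

def survival_probability(per_round: float, rounds: int) -> float:
    """Probability that a false theory survives `rounds` successive refinements."""
    return per_round ** rounds

for n in (1, 10, 25, 50):
    print(n, round(survival_probability(0.95, n), 3))
# 1 0.95, 10 0.599, 25 0.277, 50 0.077 -- the lucky-coincidence explanation
# becomes rapidly less credible as the chain of refinements lengthens.
```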

More radically, one might ask: could it not be that there just are no phenomena that would prompt a fundamental re-evaluation of the theory? Although fundamentally inaccurate, perhaps the core theory can nevertheless be developed to explain all the phenomena that ever will be observed in this broad empirical domain. In answer, let us assume that a given theory may be associated with several (possibly infinitely many) theories that share the same basic theoretical principles and would therefore count as successors to it. Which successor is actually proposed will depend importantly on which empirical phenomena are actually investigated. The question now is whether the process of refinement will, starting from a false but empirically successful theory, generally be able to produce a successor theory that is also successful. In answering this question, it is worth considering what proportion of potential successor theories will be successful. The answer for the case of an (approximately and/or partially) true theory seems clear: if the theory makes (approximately) true claims about some underlying entities and processes, then later theories that also make these claims would surely “inherit” its empirical success [2].

[2] Again, the heritability of empirical success through refinement of approximately true theories is open to dispute; but since this is not the line that anti-realist arguments have taken, we set it aside, somewhat regretfully.

It is far less clear, however, that successors of false but successful theories will be successful. To repeat Putnam’s point, there is no particular link between improved empirical success and the preservation and refinement of theoretical claims in a purely positivistic philosophy. If every claim about unobservables functions as a “shorthand” for claims about possible observations, then there is no particular reason to suppose that a more empirically accurate shorthand will be a descendant of some previous shorthand. In other words, the fact that some false theoretical principles were successful in accounting for one set of phenomena does not imply that the same principles will be successful in dealing with a different set of phenomena. It is possible that, among the small proportion of false theories that are empirically successful, some are also capable of refinement to produce further empirical success. However, only a small proportion of the theories descended from these successful false theories will in their turn be successful: a small proportion of a small proportion.

Thus, for scientists to find empirically successful descendants of theories that happen to be false, their method of refinement must pick out the subset of these descendants that improve upon the empirical success of their ancestors. But this is not a likely occurrence, because the process of refinement is unpredictable. It is very seldom apparent when a novel phenomenon is first investigated what changes to the existing theory might be required to give a satisfactory account of it, or whether such an account is even possible while preserving the theory’s essential elements. Moreover, even if this could be predicted, scientists who are motivated to maintain continuity are seldom so motivated that they are willing to ignore new avenues of inquiry, even if these threaten to provoke major theoretical change. And even if they are so motivated, they would not usually be able to prevent their colleagues from investigating these phenomena.

Taken together, these factors mean that which empirical phenomena end up driving the process of refinement is, to a decent first approximation, a matter of chance. And since only a small proportion of a false theory’s successors are empirically successful, it is likely that scientists will actually be forced into entertaining an unsuccessful one. This, of course, is a strange way of putting it. More intuitively, the claim is that the unpredictability of the refinement process means that it is likely that a substantially false theoretical framework will eventually be confronted with empirical phenomena that cannot be successfully addressed within the framework. It is therefore incorrect to assert that the process of refinement will generally result in an empirically adequate successor to a theory that is itself false but empirically successful.

In the previous section, we argued that accidental continuity is better explained under the supposition that the continuous theories are (approximately and/or partially) true, than under the supposition that they are (radically) false. Another better explanation is that the continuity is not accidental but deliberate. In this section we have argued that deliberate continuity is also harder to explain on the assumption that the continuous theories are false, than on the supposition that they are true. To posit that a theory is empirically successful though radically false is already to posit the occurrence of a lucky coincidence, and to suggest that this happens repeatedly over a course of theoretical refinement is to posit an even luckier coincidence.

5. The DNMA for Realism

We have seen that theory change of a certain “conservative” kind may be a reason for rather than against realism. However, while these arguments render the (approximate and/or partial) truth of successful theories that preserve continuity more probable, they do not render it probable simpliciter. This is because we are still faced with the base rate problem, the fact that we do not know the prior likelihood of formulating a true as opposed to a false theory (Howson, 2000, 52-54). Suppose that there are multiple possible theories that would count as empirically successful by whatever standard of success we choose to adopt. Among them will be the one true theory, or, more optimistically for the realist, the relatively few approximately and/or partially true theories. The chance that the actual theory we accept is one of these (approximately and/or partially) true theories depends on the ratio of these to the successful but false theories – the “base rate”. But we do not know, and have no way of ascertaining, the base rate. In this section we attempt to respond directly to this problem, and thus defend at least some version of scientific realism.

Howson's argument is directed at the SNMA. Applied to the DNMA, it would presumably go like this: that a theory is successful and has important continuities with earlier theories is certainly grounds for regarding it as more likely to be approximately true, since approximately true theories that are similar to previous theories comprise a larger proportion of the class of successful theories than they do of the entire collection of theories. But we are not warranted in inferring that a theory is likely to be approximately true from its empirical success conjoined with the fact of its having been produced by refinement.
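To see the force of the base rate worry in numbers, here is a minimal Bayesian sketch; the figures are our own and purely illustrative, not values defended anywhere in the paper:

```python
# Illustrative Bayesian statement of the base rate worry (numbers are ours,
# purely for illustration): even a highly selective success test cannot make
# truth probable if the prior proportion of true theories is tiny.

def posterior_truth(prior_true: float,
                    p_success_given_true: float,
                    p_success_given_false: float) -> float:
    """P(approximately true | empirically successful) by Bayes' theorem."""
    numerator = prior_true * p_success_given_true
    denominator = numerator + (1 - prior_true) * p_success_given_false
    return numerator / denominator

# Likelihood ratio of 90 (0.9 vs 0.01), but a base rate of 1 in 10,000:
print(round(posterior_truth(1e-4, 0.9, 0.01), 4))
# ~0.0089 -- success alone leaves approximate truth improbable, as Howson urges.
```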

We will frame our response in terms of refinement, although the argument goes equally for the case of both accidental and unintentional but otherwise systematic continuity. The basic point of our defence is drawn from the “washing out of the priors” argument (Earman, 1992, ch. 6), which is a response to a concern about subjective Bayesianism. The concern is that rational agents might set their priors for a given hypothesis to very different levels, and thus come to very different posterior probabilities for the truth of the hypothesis even if they have identical evidence. The washing-out argument relies on the mathematical observation that, provided each of the agents consistently receives the same evidence, the posterior probabilities of different agents will in fact quite rapidly converge on the same value.

In just the same way as the washing out argument, the DNMA relies on a process of repeated iteration. Suppose one starts with a very large pool of entities and selects some proportion of them, discarding the rest. If one iterates a process like this, which reduces the number of entities remaining in some pool by a proportion, then the number remaining dwindles very fast. After just one iteration, we are not justified in saying that the absolute number remaining is small; but after several iterations, we may well be. That is to say, the probability that the theory in question is (approximately and/or partially) true becomes quite high.
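The following sketch (our own; the likelihood values 0.99 and 0.5 are illustrative assumptions, not estimates) shows how iterated refinement can play the role of repeated evidence in the washing-out argument: agents who start from very different priors about approximate truth are driven towards the same high posterior once the chain of successful refinements is long enough.

```python
# Sketch of how iterated refinement "washes out" the unknown prior (likelihoods
# 0.99 and 0.5 are assumed for illustration only): each successful round of
# refinement is treated as evidence that is more probable if the theory is
# approximately true than if it is radically false.

def update(prior: float, rounds: int,
           p_refine_given_true: float = 0.99,
           p_refine_given_false: float = 0.5) -> float:
    """Posterior probability of approximate truth after `rounds` successful
    refinements, applying Bayes' theorem once per round."""
    p = prior
    for _ in range(rounds):
        p = (p * p_refine_given_true) / (
            p * p_refine_given_true + (1 - p) * p_refine_given_false)
    return p

for prior in (0.001, 0.01, 0.1):
    print(prior, [round(update(prior, n), 3) for n in (5, 10, 20)])
# Agents who start from very different priors end up assigning a high
# probability to approximate truth once the chain of refinements is long enough.
```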

Since Howson’s base-rate objection is explicitly framed as a Bayesian concern about correctly estimating the values of priors, we might say that the DNMA is a variant of the washing-out argument applied to the case of scientific theories. Of course, the washing-out argument is controversial even amongst committed Bayesians, but its intuitive force in this case should be apparent. We therefore claim, contra the base rate argument, that we can justifiably assign low probability to the claim that an empirically successful theory is radically false, provided that it has enjoyed improved empirical success over a long chain of refinements.

It remains open to the base rate objector to ask what “long” means, and correspondingly to point out that if the number of empirically successful but false theories is large enough at the beginning of the process, then they may still constitute a large proportion of the successfully refined theories. We cannot deny this. However, we wish to point out that to maintain such a position in the face of progressively longer chains of iterative refinement is, in the end, equivalent to endorsing a rather radical scepticism. It becomes implausible – even incoherent – to maintain that there are so many ways in which we could be wrong despite our empirical success, and that they need to be taken seriously, without also taking seriously the possibility that our daily inductive inferences fail, that we are brains in vats, and so forth.

Anyone who does not subscribe to radical scepticism of this sort must somehow circumscribe a set of possible sources of error which she is prepared to consider, and set aside the inevitably much larger set of sources of error which she is not prepared to consider. The effect of the DNMA is therefore not to show that the base rate is low enough for realism to be plausible tout court; it is to show that the base rate is low enough given non-sceptical starting assumptions. The proponent of the base rate objection can reject these assumptions, but then she retires from this debate about whether a reasonable person with a reasonably standard set of epistemic values ought to or may believe what science appears to say, and enters instead the debate about the significance of philosophical scepticism.

6. Objections

A. The Duhem-Quine Objection

One crucial premise of the DNMA in the refinement case is that, in certain instances, the observed phenomena just cannot be made compatible with a given set of theoretical principles. Applying the Duhem-Quine thesis, the anti-realist might well respond that a sufficiently resourceful scientist will be able to preserve the core principles of a given theory whatever new empirical evidence emerges. Popper (1959/2002, s. 20) responded to this challenge by arguing that scientists adhere to methodological norms that militate against making modifications that decrease the testability (falsifiability) of the overall theoretical system. Similarly, Lakatos contended that scientists tend to reject “degenerating” research programmes, i.e. those in which the activity of preserving the core theoretical principles against refutation has started to take precedence over the generation of new empirical predictions.

In light of the actual history and practice of science, the picture provided by these authors is surely more plausible than that of scientists striving to preserve theories come what may. Indeed the PMI itself turns on this historical claim. Laudan’s major premise in making the PMI is that the scientific community has on numerous occasions been willing to accept theoretical changes that are quite radical, at least on the psychological level. Abandoning absolute space and time in favour of relativity, for instance, represents a significant setting aside of core theoretical principles, even if there are deeper continuities at the “structural” level. And yet scientists were willing to make these changes in the face of suitable empirical evidence. If these claims are historically inaccurate then the DNMA and the PMI suffer together.

B. The Other-Reason Objection

Another possible objection to our strategy in Sections 3 and 4 is to claim that theories may be false and nevertheless successful, not by chance, but for a reason other than the truth of their claims about unobservable entities and processes. This reason may explain not only their empirical success, but their ability to sustain refinements that improve their empirical success. An objection along these lines might take inspiration from the following passage:

“... I claim that the success of current scientific theories is no miracle. It is not even surprising to the scientific (Darwinist) mind. For any scientific theory is born into a life of fierce competition, a jungle red in tooth and claw. Only the successful theories survive—the ones which in fact latched on to actual regularities in nature.” (van Fraassen, 1980, p. 40)

Thus false theories could be systematically empirically successful, because they accurately represent “actual regularities in nature”. Therefore, a false theory that has latched onto a genuine regularity will spawn an entire series of successor theories that describe the same regularity and so are empirically successful. There are no “lucky coincidences”, either in the initial latching on, or in the repeated refinements of the theory.

This proposed explanation, however, only seems plausible because it tacitly helps itself to deep ambiguities in the notions of “empirical success” and “regularity”. As Lipton (2004, pp. 193-195) points out, it seems clear how the selectionist argument could explain why we have theories that successfully describe the empirical phenomena that they were specifically selected to describe. It is an entirely separate, and more difficult, task to explain why some theories are able to successfully predict phenomena that were not used in their construction. One might reply, as van Fraassen does, that these predictions come about because the theories have “latched on to actual regularities in nature”. But Lipton’s point is that a regularity discovered to apply in one class of observable phenomena cannot simply be inductively projected to describe an entirely different class of phenomena.

The parts of a theory that do allow ampliative inference between distinct classes of phenomena tend to describe “deep regularities” in the behaviour of what van Fraassen would classify as unobservable entities and processes. The suggestion that light consists of a transverse wave, originally posited on the basis of experiments with polarisation, for instance, eventually comes to be used in predicting how light will be distorted in strong magnetic fields. So van Fraassen is left with a dilemma – he must either show how regularities in one observational domain can be extrapolated to provide predictions about a different domain without any appeal to unobservable entities; or he should concede the realist claim that scientists latch onto at least some genuine regularities concerning unobservable entities and processes.

C. Radical Discontinuity

The DNMA is an argument for realism from a certain kind of theory change, namely, theory change that involves continuity: the preservation of important elements of one theory in successors. But the PMI is premised on the claim that scientific theory change is radically discontinuous. Does the DNMA thus simply beg the question against the PMI?

The premise of the PMI, we take it, is not that every case of theory change is radical. That would be implausible. Rather, Laudan's strategy in particular seemed to be to highlight cases of radical discontinuity, to show that they occur and are common, but not to show that they are universal. Our premise is that there are many cases of continuity in theory change, and this is compatible with the existence of some – even of many – cases of discontinuity. Indeed, cases of continuity and discontinuity may co-exist in the same broad theoretical framework: the ether hypothesis is discarded from Fresnel’s theory of light, but the posit that light consists of transverse waves is preserved.

Once our premise is granted, the DNMA goes through. It shows that the PMI is incorrect: that one cannot simply extrapolate from past failure. The process of iteratively refining theories to achieve more empirical success cannot be adequately explained other than by approximate truth. The PMI does nothing to undermine this central point.

7. Conclusion

In this paper, we have demonstrated that there are reasons in addition to the standard, static no-miracles argument for accepting the truth of empirically successful theories. These reasons apply in cases where those theories exhibit continuity at the level of the posits “essential” for their empirical success, whether this continuity is achieved by “accident” or because scientists’ knowledge of earlier theories guides their construction of newer theories. In each case, the fact of continuity renders it further unlikely that empirical success has been achieved by mere “lucky coincidence”. In the case of “accidental” continuity, the fact of two theories independently achieving empirical success by using a given set of theoretical principles makes that much greater the scale of coincidence required to avoid concluding that these principles are true. In the case of non-accidental continuity, if the essential posits of a theory are utterly false, then each incremental refinement that the theory supports requires an additional lucky coincidence. Thus, contrary to many anti-realist arguments, the record of theory change over the history of science may well warrant ever-increasing confidence in the truth of at least some theoretical claims.

8. References

1. Boyd, R. (1980). Scientific realism and naturalistic epistemology. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 613–662.

2. Boyd, R. (1990). Realism, approximate truth, and philosophical method. In Scientific theories.

3. Earman, J. (1992). Bayes or Bust? Cambridge, MA: MIT Press.

4. Einstein, A. (1920). Relativity: The Special and General Theory. New York: H. Holt and Company.

5. Fine, A. (1984). The natural ontological attitude. In J. Leplin (Ed.), Scientific Realism (pp. 83–107). Berkeley: University of California Press.

6. Fine, A. (1986). Unnatural Attitudes: Realist and Instrumentalist Attachments to Science. Mind, 95(378), 149–179

7. Fischer, E. (1894). Einfluss der Configuration auf die Wirkung der Enzyme. Berichte der Deutschen chemischen Gesellschaft, 27(3), 2985–93.

8. Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge: Cambridge University Press.

9. Harker, D. (2008). On the Predilections for Predictions. The British Journal for the Philosophy of Science, 59(3), 429–453.

10. Harker, D. (2012). How to Split a Theory: Defending Selective Realism and Convergence without Proximity. The British Journal for the Philosophy of Science, 64(1), 79–106.

11. Howson, C. (2000). Hume’s Problem: Induction and the Justification of Belief. Oxford: Oxford University Press.

12. Kitcher, P. (1993). The Advancement of Science. Oxford: Oxford University Press.

13. Lakatos, I. (1968). Criticism and the methodology of scientific research programmes. Proceedings of the Aristotelian Society, 69, 149–186.

14. Laudan, L. (1981). A Confutation of Convergent Realism. Philosophy of Science, 48(1), 19–49.

15. Leplin, J. (1981). Truth and scientific progress. Studies in History and Philosophy of Science Part A, 12(4), 269–291.

16. Levin, M. (1979). On theory-change and meaning-change. Philosophy of Science, 46(3), 407–424.

17. Lipton, P. (2000). Tracking Track Records. Proceedings of the Aristotelian Society Supplementary Volume, 74, 179–205.

18. Lipton, P. (2004). Inference to the Best Explanation (2nd ed.). London: Routledge.

19. Magnus, P. D., & Callender, C. (2004). Realist Ennui and the Base Rate Fallacy. Philosophy of Science, 71(3), 320–338.

20. Mahon, B. (2003). The Man Who Changed Everything: The Life of James Clerk Maxwell. Chichester, UK: John Wiley & Sons.

21. Maxwell, J. C. (2003). On the equilibrium of elastic solids. In W. D. Niven (Ed.), The scientific papers of James Clerk Maxwell, Volume I (pp. 30–73). Mineola, NY: Dover Publications.

22. Musgrave, A. (1988). The ultimate argument for scientific realism. In R. Nola (Ed.), Relativism and realism in science (pp. 229–252). Dordrecht: Kluwer.

23. Popper, K. (2002). The logic of scientific discovery. London: Routledge Classics.

24. Post, H. (1971). Correspondence, invariance and heuristics: in praise of conservative induction. Studies in History and Philosophy of Science Part A, 2(3), 213–255.

25. Psillos, S. (1999). Scientific Realism: How Science Tracks the Truth. London: Routledge.

26. Putnam, H. (1975). Philosophical Papers, Vol. 1: Mathematics, Matter and Method. Cambridge: Cambridge University Press.

27. Putnam, H. (1978). Meaning and the Moral Sciences. London: Routledge and Kegan Paul.

28. Roush, S. (2009). Optimism about the Pessimistic Induction. In P. D. Magnus & J. Busch (Eds.), New Waves in Philosophy of Science (pp. 29–58). London: Palgrave Macmillan.

29. Stanford, P. K. (2006). Exceeding Our Grasp: Science, History, and the Problem of Unconceived Alternatives. Oxford: Oxford University Press.

30. Van Fraassen, B. (1980). The Scientific Image. Oxford: Oxford University Press.

31. Van Fraassen, B. (1989). Laws and Symmetry. Oxford: Oxford University Press.

32. Worrall, J. (1989). Structural Realism: The Best of Both Worlds? Dialectica, 43, 99–124.

33. Worrall, J. (2007). Miracles and Models: Why reports of the death of Structural Realism may be exaggerated. Royal Institute of Philosophy Supplements, 61, 125–154.