
SHOULD WE CARE ABOUT FINE-TUNING?

Jeffrey Koperski

ABSTRACT: There is an ongoing debate over cosmological fine-tuning between those holding that design is the best explanation and those who favor a multiverse. A small group of critics has recently challenged both sides, charging that their probabilistic intuitions are unfounded. If the critics are correct, then a growing literature in both philosophy and physics lacks a mathematical foundation. In this paper, I show that just such a foundation exists. Cosmologists are now providing the kinds of measure-theoretic arguments needed to make the case for fine-tuning.

1. Introduction
2. Probability and Infinite Sets
   2.1 No Probability Function
   2.2 Arbitrary and Wrong Probability Functions
3. The Measure of the Universe
4. The Coarse-Tuning Objection
5. Arbitrariness and Error
6. Conclusion

1. Introduction

There is no more compelling argument in the ongoing science-religion dialogue than that from cosmological fine-tuning. The physics is hard to ignore. A slight change in one of a dozen or more fundamental physical constants precludes the development of cellular life anywhere.1 The universe, contrary to the so-called Copernican Principle, seems to be structured for creatures like us. There are two common explanations for this surprising state of affairs. First is design: the universe appears to be fine-tuned for life because it is. A theistic God has constructed things in order for the universe to be life-friendly. Second is a multiverse: there are many—perhaps infinitely many—universes/domains2 each with different values for these fundamental constants. Few of these domains will contain stars or even molecules, much less life. We inhabit one of the few universes in which the parameters happened to align in a way that allows organisms like us to exist. Much of the literature on the anthropic coincidences argues for one of these rival explanations over the other.

1 According to Barrow [2002, p. 166], for example, there would be no stars if the fine structure constant fell outside a narrow range around its observed value.

2 Holder [2001a] outlines the family of multiverse proposals. Also see Tegmark [2003].

Both sides agree on one thing: this prima facie fine-tuning cries out for an explanation.

The entire enterprise seems to rest on some rough and ready calculations about how improbable fine-tuning is, given what we know about cosmology. Philosophers Timothy and Lydia McGrew, Neil Manson, and mathematician Eric Vestrup have recently challenged these probabilistic intuitions, arguing that the request for explanation is unsubstantiated. They claim that neither design nor multiverse advocates have made a rigorous case for the low probability of the observable universe. Moreover, they doubt that such a case can be made. If correct, then a large and growing literature in philosophy and physics lacks a foundation. The stakes are particularly high for theism since fine-tuning is the basis for what is perhaps the best contemporary argument for the existence of God.

As we shall see, the objections raised by these “anthropic skeptics” run deep and defy a complete refutation. I will argue, however, that a sufficient reply is available to block their concerns. While both design and multiverse proponents might be wrong, their common demand for an explanation of fine-tuning is justified (at present). To see this, one must distinguish between probabilities, the larger class of distribution functions, and measure. Fine-tuning claims can be—and have been to some extent—precisely formulated in measure-theoretic terms. They rest on a common form of inference whereby “almost all” of the members of a set (except for a subset of measure zero) have a given property. If the skeptics are correct, then many inferences of this sort are fallacious. I will argue to the contrary that the mathematics and its application in dynamics and cosmology show that fine-tuning can be put on a firm foundation.

Let’s first consider the problems in greater detail.

2. Probability and Infinite Sets

There are two overlapping lines of criticism. One is that any talk of the “odds” of this sort of universe existing presupposes a probability distribution across a model-space of universes. As several philosophers and physicists have pointed out, however, there is no such distribution available and, some claim, none is likely to be forthcoming.3 The second problem is that even if one could find a probability distribution, it would be completely arbitrary. An infinite number of others is available, each of which is radically underdetermined by what we know. Hence, the only rational choice is agnosticism toward the “right” distribution in nature.

3 This problem is taken up in detail by McGrew, McGrew, & Vestrup [2001], Manson [2000], and Holder [2001b]. As Holder shows (p.8), the problem has been recognized for some time. Both Paul Davies [1992, pp.204-205] and Dennis Sciama [1993, p.109] mention it in passing.

2.1. No Probability Function. McGrew et al. offer the most persuasive case for problem one. Let A stand for some purported fine-tuned parameter with values ranging across the nonnegative reals. This reflects the fact that most of the physical constants in question have no theoretical bound. We may think of this range as marking off a space of physically possible worlds, i.e., ones that have the same laws of nature as our observable universe, but differ with respect to A. More precisely, let $\mathbb{R}^+$ be a 1-dimensional coordinate space for the possible values of A in the interval $[0, \infty)$. Since nature, as far as we know, has no preference for one value of A rather than another, each interval in $\mathbb{R}^+$ should be assigned equal probability. Fine-tuning advocates4 then argue that the parameter values that permit cellular-based life are restricted to intervals with a tiny measure compared to the full measure of the space. Hence the odds of the universe having a life-friendly value of A are tiny—the proverbial needle in an infinite haystack—and require an explanation.

Very well, except for one thing: haystacks are not infinitely large. Does it make sense, even as a mere thought experiment, to talk about the probability of finding such a needle or of drawing a specific ball from an infinitely large urn? The critics’ reply is a resounding “No!” Probability distributions cannot be defined in these circumstances. A uniform distribution over an infinitely large space is not normalizable, i.e., the probabilities do not sum to 1.

4 This refers to those who hold either a design or multiverse view. Both agree that there is prima facie fine-tuning; they differ with respect to its explanation.

This is more than a bit of mathematical esoterica.

    Probabilities make sense only if the sum of the logically possible disjoint alternatives adds up to one—if there is, to put the point more colloquially, some sense that attaches to the idea that the various possibilities can be put together to make up one hundred percent of the probability space. But if we carve an infinite space up into equal finite-sized regions, we have infinitely many of them; and if we try to assign them each some fixed positive probability, however small, the sum of these is infinite. [McGrew, McGrew, and Vestrup, 2001, p.1030]

So although it might seem that one can make a probabilistic argument analogous to the haystack, such thinking only applies to normalizable distributions. The upshot is that “there is no meaningful way in such a space to represent the claim that one sort of universe is more probable than another” [McGrew et al., 2001, p.1032]. Paul Davies echoed this point at a recent symposium [Davies, 2003]. Unless a principled means can be found for truncating the parameter space(s), there is strictly speaking no sense in which life-friendly universes are improbable. The probabilities are not defined.
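To see the normalization problem in symbols, suppose a uniform density c were spread over the whole parameter space (the constant c here is my own illustration of the critics’ point, not their notation). A one-line check shows that no constant density can integrate to 1:

$$\int_0^\infty c\, dA = \begin{cases} 0 & \text{if } c = 0 \\ \infty & \text{if } c > 0. \end{cases}$$

No choice of c yields a total probability of 1; uniformity and normalizability cannot be had together on $[0, \infty)$.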

2.2 Arbitrary and Wrong Probability Functions. Even if this can be resolved, as I shall argue, a second problem waits in the wings. In fact, there are two closely related objections. The first is arbitrariness [Manson, 2000, p.348]. With so many compatible probability distributions, on what basis should we simply choose one that is uniform across the parameter space? The second is error [Holder, 2001a, p.346]. The laws governing universe-creation, if there are any, are unknown to us. It might be the case that nature does have a preference for some parameter values over others. The cosmological dice may be loaded after all. If so, then even if one had some principled way of narrowing the field of distributions, whichever one is chosen will very likely be wrong. There simply is not enough known about the mechanism at present.

So then, the gauntlet has been cast at the feet of both design and multiverse advocates. The critics charge that both sides are playing fast and loose with their probability claims. Once we examine those claims with mathematical rigor, the “fine-tuning” data no longer seem to cry out for explanation. At best, one should remain agnostic.

Let’s now try to answer the objections starting with some important distinctions.

3. The Measure of the Universe

One easy way to avoid the normalization problem would be to truncate the range of A. A conventional distribution could then be defined over the resulting finite space. Unfortunately, as the critics complain, such attempts seem ad hoc. There is no theoretical basis for saying that the weak nuclear force, for example, cannot exceed some particular strength.5 We must find some way of dealing with an unbounded parameter space.
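The arbitrariness of truncation is easy to exhibit. Suppose, purely for illustration, that the range of A is cut off at some value R and that the life-permitting values lie in an interval of length L. A uniform probability then exists, but it depends entirely on the unprincipled cutoff:

$$\Pr(\text{life-friendly } A) = \frac{L}{R},$$

which can be made as large or as small as one likes by sliding R. The answer tracks the truncation, not the physics.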

One’s introduction to probability theory begins with discrete cases: a six-sided die or a fifty-two-card deck. When the space of possibilities is continuous, probability distributions are required which, as McGrew et al. insist, must be normalized. There are, of course, many distribution functions that have nothing to do with probability. There are textbook examples like the electric charge distributions on an infinitely large dielectric plate. And although there is strictly speaking no probability distribution for an infinite lottery, one could still define a uniform distribution function across an infinite number of ping-pong balls. Probability distributions are just a special case of distribution functions, many of which cannot be normalized.

5 Assuming, that is, one ignores the relation between fundamental forces. It is likely that the four basic forces can only take on a limited range of values relative to one another. Hence, if one value were fixed, the range of the others would likely be limited. I ignore this possibility for now. Robin Collins [2004] has also suggested that some values are not physically possible since they preclude the existence of a nonsingular spacetime. A bare singularity would not count as a physical universe, the argument goes, since there would be no regularities that could constitute laws of nature in that case. Although I will offer no arguments to the contrary, I agree with the McGrews [2004] that this suggestion depends on a dubious, Humean account of the laws of nature.

Thus far we have followed the common practice of ignoring the measure of $\mathbb{R}^+$. The concept of measure is a generalization of length, area, and volume. Given a continuous n-dimensional space of possibilities like $\mathbb{R}^n$, the measure indicates relative “volumes” of different parts of the space. We often use a Lebesgue measure without thinking about it, since the area of a region in $\mathbb{R}^2$ just is its Lebesgue measure; likewise for the length of a segment in $\mathbb{R}^1$. Like distribution functions, there are many choices for the measure of a space. With a measure on a space with state variables x, y and a probability distribution in hand, one can integrate over an area A to find the probability of finding a representative point in A:

$$\Pr(A) = \iint_A \rho(x,y)\, d\mu.$$

Sklar gives a simple illustration of the importance of measure for probabilities [Sklar, 1993, pp.191-2]. Consider the interval [0,1]. Although we naturally take the size of the subinterval [0, 1/2] to be the same as that of [1/2, 1], it need not be the case. One could assign a measure in which the subinterval [0, 9/10] is the same size as [9/10, 1]. Spreading a uniform probability density across spaces with these two possible measures amounts to something quite different.
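A quick numerical sketch (my illustration, not Sklar’s) makes the point concrete. Take Lebesgue measure versus an alternative measure defined by µ([a,b]) = F(b) − F(a) with F(x) = x^k, where k is chosen so that [0, 9/10] and [9/10, 1] come out the same size. The same “uniform” density then assigns very different probabilities to the same subinterval:

```python
import math

def lebesgue(a, b):
    # Standard length measure on [0, 1].
    return b - a

def make_power_measure(k):
    # Measure induced by F(x) = x**k: mu([a, b]) = b**k - a**k.
    return lambda a, b: b**k - a**k

# Choose k so that mu([0, 9/10]) == mu([9/10, 1]), i.e., (0.9)**k = 1/2.
k = math.log(0.5) / math.log(0.9)          # about 6.58
mu = make_power_measure(k)

print(mu(0.0, 0.9), mu(0.9, 1.0))          # both 0.5: the two pieces are "equal"

# A probability "uniform in the measure" gives Pr(A) = mu(A) / mu([0, 1]).
A = (0.0, 0.5)
print(lebesgue(*A) / lebesgue(0.0, 1.0))   # 0.5 under Lebesgue measure
print(mu(*A) / mu(0.0, 1.0))               # roughly 0.01 under the alternative
```

Which assignment counts as “uniform” thus depends on the prior choice of measure, which is exactly Sklar’s point.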

Very well, but when do non-mathematicians need to bother with the details of measure and distribution? For one prominent example, consider how much of the work in the foundations of statistical mechanics has focused on the justification for the probability distribution used in the standard measure. Fortunately, phase spaces in classical mechanics get a natural (Liouville) measure directly from Hamilton’s equations with canonical variables.6 Other state spaces based on position and velocity could be used, but these lack some of the special characteristics of a Hamiltonian phase space, e.g., the incompressibility of areas in the space as an ensemble evolves over time. The standard probability distribution is continuous with this measure. Sets of measure zero are assigned zero probability; sets of full measure have probability 1. The measure and probability distribution are used to calculate the average value of a function of the phase variables, $f(q,p)$, as time goes to infinity. These averages are then associated with thermodynamic properties. If a system is ergodic, then the averages calculated as time goes to infinity will equal the average value over microstates at a given time. For some idealized systems, like hard spheres in a box, one can prove ergodicity, except for a set of measure zero. For that tiny family of anomalous trajectories, the phase and time averages are not equal. But since this is merely a set of measure zero within the whole space, these points are ignored. Doing so is not without controversy. It is nonetheless a common move in dynamics. (See Sklar [1993, pp.182-188] for more.) Say on the other hand that a property with measure zero in a model-space is actually observed in the system that is the subject of the model. That’s a problem! Such properties should rarely be instantiated—if ever. To actually detect one, then, constitutes a mystery. Either the model is wrong or some other explanation is required.

6 The measure is simply the product of the canonical position and momentum. I.e., the measure of a differential volume element in phase space with n generalized coordinates is $d\mu = dq_1 \cdots dq_n\, dp_1 \cdots dp_n$. For more on this, see [Tabor, 1989, pp.54-56].
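The ergodic claim above has a standard measure-theoretic form, which I state here for reference (the notation is mine): for an ergodic system with invariant measure µ on phase space Γ and time evolution $\phi_t$, and for almost all initial states x,

$$\lim_{T \to \infty} \frac{1}{T} \int_0^T f(\phi_t x)\, dt \;=\; \frac{1}{\mu(\Gamma)} \int_\Gamma f\, d\mu.$$

“Almost all” does the crucial work: the equality may fail on the measure-zero set of anomalous trajectories just mentioned.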

This approach is not unique to classical mechanics. There are several examples in the cosmology literature, none of which deals with fine-tuning per se. The interesting feature of each is that they define a measure and distribution function for spaces like $\mathbb{R}^+$ spanning an infinite number of possible universes. These will provide important clues as to how probabilistic inferences can be made in unbounded spaces. Many start, like Gibbons, Hawking, and Stewart [1987], with a state space over FLRW (Friedmann-Lemaître-Robertson-Walker) universes. Each point in their 4-dimensional phase space represents a model universe with unique initial conditions. The evolution is Hamiltonian, with each trajectory representing the evolution of a complete universe. They present three desiderata for the measure [pp.736-7]: (i) it should be positive, (ii) it should not depend on the choice of variables nor any specific set of initial conditions (Cauchy surface), (iii) it should be a “natural” measure in that it captures symmetries in the space of solutions without adding information from beyond the equations. Finding such a measure is nontrivial; it does not simply fall out of the mathematics the way it does in statistical mechanics. Still, it can be done, providing a starting point for finding the prevalence of a given property across the space of all FLRW models.

One such property is inflation, which cosmologists have invoked to solve a variety of puzzles, e.g., the flatness problem. Is inflation something to be expected? Do most big bang models undergo inflation? Finding a measure across the space of Big Bang models was the first step towards a rigorous answer. The next step is a distribution function. As far as we know, inflation is no more likely in one set of FLRW models than another. Gibbons et al. therefore assume a constant probability density across the measure. Using state variables x, y, z (functions of a scalar field, its mass, a scale factor, and time), the probability will then be determined by

$$\Pr(A) = \iiint_A \rho\, dx\, dy\, dz.$$

They were attempting to show that “almost all” of the models within phase space undergo inflation. “Almost all” is a technical term here: “all but a set of measure zero when the measure of the entire space of possibilities is finite, or for all but a set of finite measure when the measure of the entire space of possibilities is not finite” [Earman and Mosterin, 1999, p.31]. The outcome of this investigation determines what needs to be explained:

    Indeed one popular way of explaining cosmological observations is to exhibit a wide class of models in which that sort of observation is “generic.” Conversely, observations which are not generic are felt to require some special explanation, a reason why the required initial conditions were favoured over some other set of initial conditions [Gibbons et al., 1987, p.736].

Assuming that inflation accounts for flatness and that inflation occurs in all FLRW models except for a set of finite measure, then flatness would not require any further explanation. It would, that is, if Gibbons et al. had been able to show that inflation accounted for flatness. Hawking and Page [1988] proved instead that although flatness held in almost all of the models, it was not necessarily due to inflation.

For our purposes, their failure matters little. What all sides agree on is the measure-theoretic reasoning involved. Hawking and Page do not criticize the overall approach: “In this way a uniform probability distribution in the canonical measure would explain the flatness problem of cosmology. . .” [Hawking and Page, 1988, pp.803-4].7 These sorts of “almost all” explanations are common, especially in dynamics. (Sauer, Yorke, and Casdagli’s [1991] embedding theorems are a particularly important example.) One might wonder, however, how sensitive they are to the issues raised by the McGrews and Vestrup. Both cosmology papers mentioned refer to distributions that are not normalizable as “probability measures” and “probability distributions.” Perhaps this is just the sort of mathematical hand waving the critics are complaining about.

Two recent papers by Kirchner and Ellis [2003] and Ellis, Kirchner, and Stoeger [2003] are more careful. Although they use fewer state variables over the same FLRW models, their phase space still has an infinite measure. Unless new physical information provides a theoretical bound to the state variables, they recognize that a normalized probability distribution cannot be used. “Since it seems questionable whether there will ever be additional information about the ensemble of universes available, one has to accept that certain questions will have no well defined probabilities” [Ellis et al., 2003, p.17]. Recognizing the distinction between measure, probability, and non-normalized distribution functions, they offer a partial solution. After defining a measure µ in terms of the cosmological constant and energy density,8 and then a distribution function f over parameter space M, they consider the ratio

$$\frac{\int_A f\, d\mu}{\int_M f\, d\mu} \qquad (1)$$

The numerator integrates over a subset A of M with f in the measure dµ; the denominator integrates over all of M.9 In some instances, both the top and bottom diverge, in which case the ratio is not defined. In those cases where (1) is defined, it is interpreted as the “probability [Pr(A)] to find a [model] universe in a certain parameter-region A . . .” [Ellis et al., 2003, p.18]. If the numerator does not blow up, then $\Pr(A) = 0$. This is how probability reenters the picture for spaces that do not allow for normalized probability distributions. So long as A does not contain any divergent points, then almost all of M belongs to the complement of A. This then is a pattern of inference available for fine-tuning arguments. If life-permitting universes are restricted to a set A (where $\mu(A)$ is finite) of model universes of infinite measure, and all the members of A are integrable, then $\Pr(A) = 0$. Since our actual universe is closely approximated by members of this highly improbable set, a special explanation is needed.

7 Manson takes Earman and Mosterin to be allies in this debate, suggesting that the flaw in Gibbons et al. was “the absence of a specification of the probability distribution over these measures” [Manson, 2000, p.348]. Actually, Gibbons assumed the probability distribution would be proportional to the phase space measure, following the methods used in statistical mechanics. Earman and Mosterin do not fault the approach, but rather cite Hawking and Page [1988] to show that diverging integrals thwart the conclusion. Without this unfortunate technical flaw, their use of distribution and measure would have constituted a solution to the flatness problem [Earman and Mosterin, 1999, p.32].

8 The quantity appearing in the measure is actually a function of the energy density, rather than the energy density itself.

9 Ellis et al. consider other possible distribution functions as well. In the earlier companion piece, they argued that the constant distribution is called for given the lack of additional information about the system [Kirchner and Ellis, 2003, p.1200]. This will be taken up in the next section.
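A numerical sketch (my own illustration, not from Ellis et al.) shows why the ratio in (1) drives any finite life-permitting region to probability zero. With a constant distribution function, approximate both integrals over larger and larger truncations [0, R] of an infinite parameter space; the ratio for a fixed finite region A shrinks toward 0:

```python
# Illustrative only: a constant distribution f over [0, infinity),
# a life-permitting region A = [2.0, 3.0], and truncations [0, R].
def ratio(A, R, f=1.0):
    a_lo, a_hi = A
    numerator = f * max(0.0, min(a_hi, R) - a_lo)   # integral of f over A intersect [0, R]
    denominator = f * R                             # integral of f over [0, R]
    return numerator / denominator

A = (2.0, 3.0)
for R in [10.0, 1e3, 1e6, 1e9]:
    print(R, ratio(A, R))   # 0.1, 0.001, 1e-06, 1e-09 -> tends to 0
```

Note that the same limit, 0, results no matter how wide the finite region A is, which is precisely the feature the coarse-tuning objection of the next section exploits.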

Many orthodox Bayesians will be quick to object that a uniform distribution over this measure produces an “improper prior.” Indeed it does, but that is merely the technical name for the non-normalizability problem in unbounded spaces. Here we enter an intramural debate between Bayesians. Subjective Bayesians always reject improper priors. Like the McGrews and Vestrup, they claim that non-normalizable “probabilities” really aren’t probabilities at all. On the other hand, most objective Bayesians allow improper priors, at least provisionally. It should not be surprising, then, that Ellis, Kirchner, and Stoeger (as well as Evrard and Coles [1995], below) are explicitly working within an objective Bayesian framework.10 They offer a now standard response to the problem. In the words of Bayesian statistician José Bernardo,

    By definition, “non-subjective prior distributions” are not intended to describe personal beliefs, and in most cases, they are not even proper probability distributions in that they often do not integrate to one. Technically, they are only positive functions to be formally used in Bayes theorem to obtain “non-subjective posteriors” which, for a given model, are supposed to describe whatever the data “have to say” about some particular quantity of interest . . . [Irony and Singpurwalla, 1997, p.162]

The distinction between measure, probability, and distribution function comes into play again here. All sides agree that improper priors are not probabilities. The disagreement is whether they can be used to get probabilistic inferences off the ground. I do not intend to weigh in on this debate between Bayesians. If McGrew et al. wish to argue against the use of improper priors, so be it. I only wish to suggest that so long as both sides are considered ongoing research programs among probability theorists, then theoreticians and experimentalists using improper priors ought not to be chased from the field. Nor should this internal debate among Bayesians be turned into a criticism of fine-tuning arguments. At best, one could say that if there is something incoherent about the use of improper priors, then Ellis et al., Evrard and Coles, and anyone else using such devices have a problem. Until that time, it seems perfectly acceptable to work under the banner of those objective Bayesians who find such reasoning cogent.

10 This does not mean that all objective Bayesians would approve. As Lydia McGrew has pointed out to me (private correspondence), the use of improper priors is typically justified by showing that they allow future data to have maximum impact on one’s probabilities. It is not clear, however, what new data might be forthcoming when it comes to cosmological fine-tuning.

4. The Coarse-Tuning Objection

The appeal to ratios such as (1) will not catch the critics by surprise. They have already argued that if the anthropic coincidences are to be defended in this way, it leads to a reductio in the form of “coarse-tuning.” To start, notice that there are only two jointly sufficient conditions for $\Pr(A) = 0$: the measure of M is infinite and the measure of A is finite. This means that any bounded interval for A, no matter how large, will have zero measure in the space of positive real numbers. If the background space is infinite, so-called fine-tuning is virtually guaranteed [Manson, 2000, p.347]. One would get the same measure-theoretic results if the range of each fine-tuned parameter were “within a few billion orders of magnitude” of their actual values [McGrew et al., 2001, p.1032]. Presumably, no one would say that coarse-tuned parameters “cry out for explanation.” After all, the motivation for the original argument is that so many cosmic constants are seemingly balanced on a knife-edge. It would not have been surprising to find the life-friendly values spread over a vast range. The problem is that in the measure-theoretic terms used in the last section, fine-tuning and coarse-tuning are equivalent. If coarse-tuning does not require an explanation, there are no grounds for arguing that fine-tuning does.
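The equivalence is just the arithmetic of ratio (1) again (a worked check, in my notation): for a life-permitting interval of width L within $[0, \infty)$ under a constant distribution,

$$\Pr(A) = \lim_{R \to \infty} \frac{L}{R} = 0 \quad \text{for every finite } L,$$

whether L spans a knife-edge or a few billion orders of magnitude.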

This is a keen and convincing objection, but not a fatal one. Let’s consider two responses. The first is that the concern is merely epistemic, a matter of insight and the obstacles to discovery. The charge is not that there is a specific false premise or flaw in the logic of Kirchner and Ellis’s approach. The objection focuses instead on what one would have noticed or found surprising if the life-friendly intervals had been larger. As far as it goes, we should agree: if our universe exhibited coarse-tuning rather than fine-tuning, no one might have noticed. The observations would not have sparked any research or controversy. This, I suggest, is a limitation on our cognitive abilities, not on the strength of the argument. Those fantastically narrow ranges get one’s attention, no doubt. In any case, the fact remains that a life-friendly, coarsely-tuned universe is part of a set with measure zero. If the background space has infinite measure, then broad, finite intervals for A do in fact lead to the same result: $\Pr(A) = 0$. Fine-tuning advocates ought to bite the bullet, stand by the mathematics, and say that a coarse-tuned universe would require an explanation. They should also agree that most people would fail to notice this fact. Few have the requisite background in measure theory to frame the issue properly. No matter. Only a small percentage of people in the world can appreciate the difficulty of Goldbach’s Conjecture, but this is a matter of education and intellect, not the force of Goldbach’s challenge. The same would be true if the universe were coarsely tuned for life. Fine or coarse pertains to how obvious the anomaly is, not whether it exists.

The second response has to do with the scope of the objection. The intended targets are fine-tuning advocates like John Barrow, Frank Tipler, John Leslie, and Robin Collins, who use the anthropic parameters to argue for the existence of a designer or a multiverse. There are, however, a number of unintended targets equally impugned by the logic of the coarse-tuning objection. As we will see, it calls into question an array of mathematical physics. Consider the set of periodic points in a chaotic model. Dissipative chaotic systems contain special sets in their phase spaces known as “strange attractors.” If the state point for a system falls within the attractor’s basin, it will exhibit unpredictable, aperiodic behavior. Each of the trajectories caught by the attractor is dynamically unstable except for a set of measure zero. Within that special set are an infinite number of periodic trajectories. If the initial conditions were on one of these trajectories (and the system is not perturbed), it will exhibit a regular, periodic evolution. Of course, one would be very surprised to see a known chaotic system behaving periodically since “almost all” of the trajectories on the attractor are unstable. It would require fine-tuning in the extreme for a chaotic system to behave periodically, but it is physically possible.
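A toy computation makes the point about periodic points vivid. The logistic map at full nonlinearity is not a dissipative strange attractor, but it exhibits the same feature: its periodic points form a dense set of measure zero, and they are unstable. The sketch below (my illustration) starts one orbit exactly on the fixed point x = 3/4 and another a hair away:

```python
def logistic(x):
    # Chaotic logistic map at r = 4; its periodic points are a
    # dense but measure-zero subset of [0, 1].
    return 4.0 * x * (1.0 - x)

def orbit(x0, n=50):
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1]))
    return xs

exact = orbit(0.75)            # 0.75 is a fixed point: f(0.75) = 0.75 exactly
nearby = orbit(0.75 + 1e-12)   # a measure-zero miss: the orbit soon wanders

print(exact[-5:])    # stays pinned at 0.75
print(nearby[-5:])   # visibly aperiodic after roughly 40 iterations
```

Observing the pinned behavior from a randomly chosen initial condition would be astonishing, which is exactly the “measure-zero property actually observed” situation described above.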

Armed with the logic of coarse-tuning, one could argue that this is a bogus inference. I claimed that experimentalists would be surprised to see a chaotic system spontaneously change into a regular, periodic evolution. Such an event would require a special explanation (e.g., someone had taken the system out of the chaotic regime altogether by changing a parameter so that only periodic behavior is possible). The need for an explanation arises, again, because the periodic points on the attractor have measure zero in the phase space. Let’s now consider how one might use coarse-tuning to debunk this expectation. The skeptic might ask, “Would you still be surprised if there were ten times as many periodic trajectories available on the attractor?” The answer is yes, since the measure of those trajectories is also zero. “What if there were 1,000 times as many?” The results are the same. “In fact,” the skeptic could point out, “your mathematical results commit you to the same conclusion even if there were several orders of magnitude as many. Any finite multiple would do. But can you really mean to say that no matter how many periodic trajectories there are, we should still be surprised to observe periodic behavior?” Put that way, it sounds unreasonable to continue to claim that a special explanation is needed. Intuitively, if the number of periodic points grows large enough (the analogue of coarse-tuning), then periodic behavior ought not be surprising. The reductio is nearly complete. “By your own lights, each case is equally surprising. If you now agree that a sufficient number of periodic points removes the need for explanation, then you are committed to the same conclusion from the beginning.” In other words, there must be something wrong with the initial argument if the actual number of points cannot change the conclusion. The request for explanation of the periodic behavior should be abandoned.

Eyebrows should now be raised. Is there something amiss in the foundations of nonlinear dynamics? No, but the example does point to where the real culprit lies: intuitions about infinity. Like Hilbert’s Hotel or the Tristram Shandy paradox, coarse-tuning is another odd consequence of dealing with infinities. While everyone agrees that paradoxes involving countable sets are puzzling, there are differences over what one should conclude. Some, like William Lane Craig, think that Hilbert’s Hotel is proof that actual infinites do not exist. Their view is that $\infty$, $\aleph_0$, and the other transfinite numbers are merely mathematical devices, oddities from set theory. Others believe that while such paradoxes challenge our usual thinking, they do not reveal any hidden inconsistencies and are perfectly coherent. If there were such a thing as a hotel with countably many rooms, it would operate in just the manner described in Hilbert’s parable. (For example, even though the hotel has no vacancies, a large number of new guests can be accommodated by having everyone move up a fixed number of rooms. In fact, any countable number of guests could just as easily be fit into this “full” hotel.) Rather than showing the impossibility of actual infinities, the paradoxes help us explore their counterintuitive nature. This, I believe, is the proper response to the coarse-tuning objection. Although one’s first reaction is to see it as a counterexample, it is not. It is rather a correct, albeit surprising, consequence of measure theory. There is nothing wrong with well-defined concepts such as “almost all” and measure zero, other than the unseemly results they sometimes generate. Coarse-tuning is another in that same mold.

This concludes the answer to the first major objection from Section 2. Probabilistic inferences can be drawn from spaces that do not allow normalized probability distributions. Measure theory provides the resources both to frame the arguments and to defend them from proposed counterexamples such as coarse-tuning.11 The next major objection is a matter of justification. On what grounds does one prefer a given measure and distribution function to another?

5. Arbitrariness and Error

Let us now examine a seemingly innocuous choice made in the cosmology papers mentioned in Section 3. Although not always clearly stated, each takes the distribution function to be constant in the chosen measure. This is supposed to reflect a lack of knowledge, making each region in the phase space just as likely to be instantiated as any other. The constant distribution/probability function is taken to be an application of Laplace’s “Principle of Indifference.”12 When every state is equally likely so far as we know, the probabilities ought not to be weighted toward some states without new information. The problem is that there are other choices for both the measure and distribution function, many of which can be justified by the Principle of Indifference. We are therefore facing the notorious notion of “uniform probability” illustrated in Bertrand’s paradoxes. Sklar gives a simple example by considering the a priori probability for the amount of water in a bottle [1993, p.119]. Assume that the bottle’s interior surface area varies in a nonlinear way with the volume. In that case, it is guaranteed that a uniform probability distribution across the volume will be different from a uniform distribution across the surface area. Which should the rational person choose? There is no answer; the notion of uniform probability for water in such a bottle is not well posed. Likewise, given that idealized FLRW models lack canonical variables and a natural measure, there seem to be too many options on which to base a nonarbitrary decision.

11 I do not wish to imply that the critics are somehow unfamiliar with measure theory. The McGrews and Vestrup discuss the well-known Lebesgue measure in their piece [2001, p.1029].

12 As noted by Manson [2000, pp.346-348], McGrew et al. [2001, pp.1029-1032], and Gibbons et al. [1987, p.736].

To answer the charge, let us consider the choice of measure in a bit more detail. Both Kirchner-Ellis and Evrard-Coles take the same approach. Bertrand’s paradoxes show that different sets of state variables on a space can lead to conflicting probability assignments. To avoid this problem, they require a measure that is invariant under particular differentiable transformations, including coordinate and scale transformations. That means that the measure is the same for a change of variables $x \to x + c$ and $x \to cx$, where c is a real-valued constant (restricted to $\mathbb{R}^+$ in the latter case). The demand for invariance is a common means for paring down the mathematically possible options.

One way to achieve this is by Jaynes’s rule of maximum entropy. To find an invariant measure, Jaynes’s principle prescribes that one maximize the information entropy S:

$$S = -\int p(x) \ln\!\left(\frac{p(x)}{\mu(x)}\right) dx \qquad (2)$$

S, a quantity borrowed from information theory, is a measure of uncertainty. If there is no additional information about the value of x other than its range, information entropy is maximized when the probability distribution is equal to the measure, $p(x) = \mu(x)$. The only question is what measure will maximize S. For their state space of FLRW models, Ellis et al. [2003, p.17] derive

(3)

where $\Lambda$ is the cosmological constant and $\epsilon$ is a function of the energy density. The choice is not arbitrary. In some cases, Jaynes’s principle not only yields an invariant measure, but one that is uniquely so. Kirchner-Ellis and Evrard-Coles both consider this to be the only mathematically consistent approach to the choice of measure and distribution function without additional information on the possible values of parameters. If instead one were to consider a 1-dimensional state space for a parameter such as the gravitational constant $G \in \mathbb{R}^+$, maximizing S entails

$$d\mu = \frac{dG}{G} \qquad (4)$$

Such a measure could be used in the kinds of one-parameter fine-tuning arguments frequently seen in the literature.
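It is worth checking that a measure of the form in (4) really has the required invariance (a one-step verification, in my notation): under a rescaling $G \to G' = cG$ with $c > 0$,

$$\frac{dG'}{G'} = \frac{c\, dG}{cG} = \frac{dG}{G},$$

so the measure assigns the same size to a region of parameter values no matter what unit or scale is chosen for G. A flat measure dG, by contrast, fails this test, since $dG' = c\, dG$.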

Unfortunately, Jaynes’s principle is not a complete solution. It has a host of critics in its own right, apart from any application in cosmology. One well-known problem is that a change of parameters will lead to a different measure in some cases. No method can produce a unique, universally invariant measure. There are also rival principles to Jaynes’s, each of which has a claim to invariance (see Bernardo and Smith, 1994, pp.357-367). Thus the charge of arbitrariness arises again, now with respect to the invariance rule chosen rather than the measure itself. Ellis et al. address none of this. The closest they come to defending Jaynes’s principle is to say that it is the only known, consistent method for constructing a measure with sufficient invariance to be trusted [2003, p.16]. An interesting contingent of scientists in other fields agrees.13

Regardless of the verdict on maximum entropy, it does show that fine-tuning advocates need not merely rely on intuition, which as Manson rightly says is “no substitute for a well-grounded theory of range and probability distribution” [2000, p.348]. Jaynes’s principle provides such a theory, though it is unlikely to win over the critics. One can always object to which invariance rule or parameterization is chosen, but none of the cosmologists mentioned make the choice arbitrarily.

13 See [Buck and Macaulay, 1991], especially chapters 1 and 6.

It may be that Manson is asking for more than a defensible method. He may instead be pressing a point made elsewhere by Holder [check papers] and Ellis [2003, p.20]. In order to derive an accurate distribution function, one must have a physical theory of the production of the universe. That is an elusive goal. Given the state of flux in contemporary cosmology, whatever distribution function is used will almost certainly be wrong. Perhaps some future GUT will show that an open universe is not physically possible. In that case, a better choice of distribution function might be something like

(5)

There are countless other possibilities. If the universe-creating mechanism weights some set of parameters more than others, like a loaded die, then a uniform distribution fails to reflect this weighting. The distribution function should capture whatever preference nature has for certain models.

This is a worrisome problem, but not one that we must deal with at present. When Holder and Ellis-Kirchner-Stoeger argue that the distribution function might be wrong, they are targeting the multiverse hypothesis, not fine-tuning in general. If there is an ensemble of universes that explains away the anthropic coincidences, then there is presumably a universe-producing mechanism responsible. Such a mechanism should be describable by natural laws, either stochastic or deterministic. There would therefore be a right measure and distribution function describing its operation. The worry that a uniform distribution function will likely be wrong, then, needs to be understood in this context.

The question addressed in this paper is different. Given our current state of knowledge, is there something to be explained? A fine-tuning advocate using the approach in Section 3 would not claim that a uniform distribution is true, i.e., that it accurately reflects the propensities of some mechanism. The claim is rather that it is the appropriate distribution given what we know. By analogy, consider a loaded die that looks perfectly normal. Before any data are taken, it is rational to assign a uniform probability across each face. This distribution is wrong in the sense that it does not reflect the asymmetric nature of the die, which a sufficient number of trials should reveal. The possibility of error does not change the fact that a uniform probability is the proper a priori choice. Likewise, at this stage cosmologists are looking for a rational, (objective) Bayesian starting place. Jaynes’s principle provides one, capturing the intuition that nature has no preference for certain parameter values. Science might one day show that nature does. Only then will the question of the true distribution have to be addressed.
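The die analogy can be made quantitative with a small simulation (my illustration, not from the cosmology papers): start from the rational uniform prior over two hypotheses, fair versus loaded, and let Bayes’s theorem do the rest as rolls accumulate:

```python
import random

random.seed(0)

# Two hypotheses about a six-sided die: fair, or loaded toward 6.
fair   = [1/6] * 6
loaded = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]

# The die is secretly loaded, but before any data the 50/50 prior
# over the two hypotheses is still the rational starting point.
prior_fair, prior_loaded = 0.5, 0.5

for _ in range(100):
    roll = random.choices(range(6), weights=loaded)[0]  # nature uses the loaded die
    # Bayes's theorem: posterior proportional to likelihood times prior.
    post_fair = fair[roll] * prior_fair
    post_loaded = loaded[roll] * prior_loaded
    total = post_fair + post_loaded
    prior_fair, prior_loaded = post_fair / total, post_loaded / total

print(round(prior_loaded, 4))  # close to 1: the trials reveal the loading
```

The initial uniform assignment was “wrong” about the die’s propensities, yet it was still the proper a priori choice; the data, not the prior, carry the correction.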

6. Conclusion

There are several other tactics one might use in response to the anthropic skeptics. Many will ignore their objections, believing that the intuitiveness of the data outstrips any clever mathematical arguments one might invent to the contrary. Some look for ways to truncate spaces like $\mathbb{R}^+$, thereby circumventing the normalization problem. Others appeal to a kind of Bayesian extrapolation from unproblematic spaces to infinite ones. To some degree, they each try to sidestep the critics’ arguments. In this paper, I have tried to take their concerns head-on. The McGrews and Vestrup demand that probability distributions be normalized. I agree, but then show that probabilities can enter the picture at a later stage. Formula (1) can be interpreted as a probability even though neither part of the ratio contains a probability distribution. Holder and Manson demand that the choice of measure and/or distribution be a matter of principle rather than intuition. I agree, but then show how maximum entropy has been used to address these very concerns.

Nothing here qualifies as a refutation of the anthropic skeptics. There is plenty to be unhappy with if one is so disposed. The only issue addressed is whether there is something in the cosmological data to be explained. I believe the answer is yes; the observations support fine-tuning if they are understood in measure-theoretic terms. If this is correct, then the next question would be which of the main rival explanations, design or multiverse, is the better one. I leave that question for others.

ACKNOWLEDGMENTS

Thanks to Neil Manson, Tim McGrew, Lydia McGrew, Del Ratzsch, and Brian Pitts for their comments on an earlier draft. Thanks also to Peter van Inwagen, Alvin Plantinga, and the Department of Philosophy at the University of Notre Dame for hosting “The Mathematics and Philosophy of Cosmic Fine-tuning: A Workshop” in April 2003. This paper grew out of discussions at that workshop. A version was presented at the Eastern Division meeting of the Society of Christian Philosophers, Asbury College, December 4, 2003.

Department of Philosophy
Saginaw Valley State University
7400 Bay Road
University Center, MI 48710
[email protected]

REFERENCES

Barrow, John D. [2002] The Constants of Nature. New York: Pantheon Books.

Bernardo, Jose M. and Smith, Adrian F.M. [1994] Bayesian Theory. New York: John Wiley & Sons.

Buck, Brian and Macaulay, Vincent A., eds. [1991] Maximum Entropy in Action: A Collection of Expository Essays. Oxford: Clarendon Press.

Collins, C.B. and Hawking, S.W. [1973] “Why is the Universe Isotropic?” Astrophysical Journal 180: 317-334.

Collins, Robin [2004] “Fine-Tuning Arguments and the Problem of the Comparison Range,” unpublished paper, http://home.messiah.edu/~rcollins/ft.htm.

Davies, Paul [1992] The Mind of God. New York: Simon & Schuster.

_____. [2003] “Multiverse or Design? Reflections on a ‘Third Way’,” paper delivered at the Stanford University workshop “One Universe or Many?,” March 28-29, 2003. http://aca.mq.edu.au/PaulDavies/Multiverse_StanfordUniv_March2003.pdf

Earman, John and Mosterin, Jesus [1999] “A Critical Look at Inflationary Cosmology,” Philosophy of Science 66, 1: 1-49.

Ellis, G.F.R., Kirchner, U., and Stoeger, W.R. [2003] “Defining Multiverses,” preprint, arXiv:astro-ph/0305292.

Evrard, Guillaume and Coles, Peter [1995] “Getting the Measure of the Flatness Problem,” Classical and Quantum Gravity 12: L93-L97.

Gibbons, G.W., Hawking, S.W., and Stewart, J.M. [1987] “A Natural Measure on the Set of All Universes,” Nuclear Physics B 281: 736-751.

Hawking, S.W. and Page, Don N. [1988] “How Probable is Inflation?” Nuclear Physics B 298: 789-809.

Holder, Rodney D. [2001a] “The Realization of Infinitely Many Universes in Cosmology,” Religious Studies 37: 343-350.

_____. [2001b] “Fine-Tuning, Many Universes, and Design,” Science and Christian Belief 13, 1: 5-24.

Irony, Telba Z. and Singpurwalla, Nozer D. [1997] “Noninformative Priors Do Not Exist,” Journal of Statistical Planning and Inference 65: 159-189.

Kirchner, U. and Ellis, G.F.R. [2003] “A Probability Measure for FLRW Models,” Classical and Quantum Gravity 20: 1199-1213.

Manson, Neil A. [2000] “There is No Adequate Definition of ‘Fine-tuned’ for Life,” Inquiry 43: 341-352.

McGrew, Timothy, McGrew, Lydia, and Vestrup, Eric [2001] “Probabilities and the Fine-Tuning Argument: A Sceptical View,” Mind 110: 1027-1037.

McGrew, Timothy and McGrew, Lydia [2004] “Response to Pruss and Collins,” unpublished paper.

Sauer, Tim, Yorke, James A., and Casdagli, Martin [1991] “Embedology,” The Journal of Statistical Physics 65: 579-616.

Sciama, Dennis [1993] “The Anthropic Principle and the Non-uniqueness of the Universe,” in F. Berola and U. Curi (eds.), The Anthropic Principle: Proceedings of the Second Venice Conference on Cosmology and Philosophy. Cambridge: Cambridge University Press, pp. 107-109.

Sklar, Lawrence [1993] Physics and Chance. Cambridge: Cambridge University Press.

Tabor, Michael [1989] Chaos and Integrability in Nonlinear Dynamics. New York: John Wiley & Sons.

Tegmark, Max [2003] “Parallel Universes,” Scientific American 288, 5: 40-51.