
Appendix A

A Discussion of Fractal Image Compression 1

Yuval Fisher2

Caveat Emptor. Anon

Recently, fractal image compression - a scheme using fractal transforms to encode general images - has received considerable attention. This interest has been aroused chiefly by Michael Barnsley, who claims to have commercialized such a scheme. In spite of the popularity of the notion, scientific publications on the topic have been sparse; most articles have not contained any description of results or algorithms. Even Barnsley's book, which discusses the theme of fractal image compression at length, was spartan when it came to the specifics of image compression.

The first published scheme was the doctoral dissertation of A. Jacquin, a student of Barnsley's who had previously published related papers with Barnsley without revealing their core algorithms. Other work was conducted by the author in collaboration with R. D. Boss and E. W. Jacobs3 and also in collaboration with Ben Bielefeld.4 In this appendix we discuss several schemes based on the aforementioned work by which general images can be encoded as fractal transforms.

1This work was partially supported by ONR contract N00014-91-C-0177. Other support was provided by the San Diego Supercomputing Center and the Institute for Non-Linear Science at the University of California, San Diego.

2 San Diego Supercomputing Facility, University of California, San Diego, La Jolla, CA 92093.

3 Of the Naval Ocean Systems Center, San Diego.

4 Of the State University of New York, Stony Brook.


Figure A.1 : A portion of Lenna's hat decoded at 4 times its encoding size (left), and the original image enlarged to 4 times the size (right), showing pixelization.

The image compression scheme can be said to be fractal in several senses. First, an image is stored as a collection of transforms that are very similar to the MRCM metaphor. This has several implications. For example, just as the Barnsley fern is a set which has detail at every scale, so does the decoded image have detail created at every scale. Also, if one scales the transformations in the Barnsley fern IFS (say by multiplying everything by 2), the resulting attractor will be scaled (also by a factor of 2). In the same way, the decoded image has no natural size; it can be decoded at any size. The extra detail needed for decoding at larger sizes is generated automatically by the encoding transforms. One may wonder (but hopefully not for long) if this detail is 'real'; that is, if we decode an image of a person at larger and larger sizes, will we eventually see skin cells or perhaps atoms? The answer is, of course, no. The detail is not at all related to the actual detail present when the image was digitized; it is just the product of the encoding transforms, which only encode the large scale features well. However, in some cases the detail is realistic at low magnifications, and this can be a useful feature of the method. For example, figure A.1 shows a detail from a fractal encoding of Lenna along with a magnification of the original. The whole original image can be seen in figure A.4 (left); this is the now famous image of Lenna which is commonly used in the image compression literature. The magnification of the original shows pixelization: the dots that make up the image are clearly discernible, because it is magnified by a factor of 4. The decoded image does not show pixelization, since detail is created at all scales.

Why is it "Fractal" Image Compression?

Why is it Fractal Image "Compression"?


Figure A.2 : Grey scale version of the Sierpinski gasket.

An image is stored on a computer as a collection of values which indicate a grey level or color at each point (or pixel) of the picture. It is typical to use 8 bits per pixel for grey-scale images, giving 2^8 = 256 different possible levels of grey at each pixel. This yields a gradation of greys that is sufficient to make monochrome images stored this way look good. However, the image's pixel density must also be sufficiently high so that the individual pixels are not apparent. Thus, even small images require a large number of pixels, and so they have a high memory requirement. However, the human eye is not sensitive to certain types of information loss, and so it is generally possible to store an approximation of an image as a collection of transforms using considerably less information than is required to store the original image.

For example, the grey-scale version of the Sierpinski gasket in figure A.2 can be generated from only 132 bits of information using the same decoding algorithm that generated the other encoded images in this section. Because this image is self-similar, it can be stored very compactly as a collection of transformations. This is the spirit of the idea behind the fractal image compression scheme presented in the next sections.

Standard image compression methods can be evaluated using their compression ratio: the ratio of the memory required to store an image as a collection of pixels to the memory required to store a representation of the image in compressed form. The compression ratio for the fractal scheme is hard to measure, since the image can be decoded at any scale. If we decode the grey-scale Sierpinski gasket at, say, two times its size, then we could claim 4 times the compression ratio, since 4 times as many pixels would be required to store the decompressed image. For example, the decoded


Figure A.3 : Graph generated from the Lenna image.

image in figure A.1 is a portion of a 5.7:1 compression of the whole Lenna image. It is decoded at 4 times its original size, so the full decoded image contains 16 times as many pixels and hence its compression ratio is 91.2:1. This may seem like cheating, but since the 4-times-larger image has detail at every scale, it really isn't.
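The scaling arithmetic is easy to check in a couple of lines. A minimal sketch (the function name is ours, purely illustrative):

```python
def scaled_ratio(base_ratio: float, scale: float) -> float:
    """Compression ratio when a fixed-size fractal encoding is decoded
    at `scale` times the original width and height: the decoded image
    has scale**2 as many pixels, but the encoding size is unchanged."""
    return base_ratio * scale ** 2

# The 5.7:1 Lenna encoding decoded at 4 times its size:
print(scaled_ratio(5.7, 4))  # 91.2
```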

A.1 Self-Similarity in Images

The images we will encode are different from the images discussed in other parts of the book. Before, when we referred to an image, we meant a set that could be drawn in black and white on the plane, with black representing the points of the set. In this appendix, an image refers to something that looks like a black-and-white photograph.

In order to discuss the compression of images, we need a mathematical model of an image. Figure A.3 shows the graph of a special function z = f(x, y). This graph is generated by using the image of Lenna (see figure A.4) and plotting the grey level of the pixel at position (x, y) as a height, with white being high and black being low. This is our model for an image, except that while the graph in figure A.3 is generated by connecting the heights on a 64 × 64 grid, we generalize this and assume that every position (x, y) can have an independent height. That is, our model of an image has infinite resolution.

Images as Graphs of Functions


Thus, when we wish to refer to an image, we refer to the function f(x, y) which gives the grey level at each point (x, y). When we are dealing with an image of finite resolution, such as the images that are digitized and stored on computers, we must either average f(x, y) over the pixels of the image or insist that f(x, y) has a constant value over each pixel.

Normalizing Graphs of Images

For simplicity, we assume we are dealing with square images of size 1. We require (x, y) ∈ I² = {(u, v) | 0 ≤ u, v ≤ 1}, and f(x, y) ∈ I = [0, 1]. Since we will want to use the contraction mapping principle, we will want to work in a complete metric space of images, and so we also will require that f is measurable. This is a technicality, and not a serious one, since the measurable functions include the piecewise continuous functions, and one could argue that any natural image corresponds to such a function.

A Metric on Images

Natural Images are not Exactly Self-Similar

We also want to be able to measure differences between images, and so we introduce a metric on the space of images. There are many metrics to choose from, but the simplest to use is the sup metric

    δ(f, g) = sup_{(x,y) ∈ I²} |f(x, y) − g(x, y)| .

This metric finds the position (x, y) where two images f and g differ the most and sets this value as the distance between f and g.
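For digitized images sampled on a common grid, the sup becomes a maximum over pixels. A small sketch (the rms metric used later in this appendix is included for comparison; both assume NumPy arrays with values in [0, 1]):

```python
import numpy as np

def sup_metric(f: np.ndarray, g: np.ndarray) -> float:
    """delta(f, g) = sup |f(x, y) - g(x, y)| over the image grid."""
    return float(np.max(np.abs(f - g)))

def rms_metric(f: np.ndarray, g: np.ndarray) -> float:
    """Root mean square difference, the metric used during encoding."""
    return float(np.sqrt(np.mean((f - g) ** 2)))

f = np.zeros((4, 4))
g = np.zeros((4, 4))
g[1, 2] = 0.25                 # the images differ at a single pixel
print(sup_metric(f, g))        # 0.25
print(rms_metric(f, g))        # 0.0625
```

Note how the single-pixel difference dominates the sup metric but is averaged away by the rms metric.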

There are other possible choices for image models and other possible metrics to use. In fact, just as before, the choice of metric determines whether the transformations we use are contractive or not. These details are important, but are beyond the scope of this appendix.

A typical image of a face, for example figure A.4 (left), does not contain the type of self-similarity that can be found in the Sierpinski gasket. The image does not appear to contain affine transformations of itself. But, in fact, this image does contain a different sort of self-similarity. Figure A.4 (right) shows sample regions of Lenna which are similar at different scales: a portion of her shoulder overlaps a region that is almost identical, and a portion of the reflection of the hat in the mirror is similar (after transformation) to a part of her hat. The distinction from the kind of self-similarity we saw with ferns and gaskets is that rather than having the image be formed of copies of its whole self (under appropriate affine transformation), here the image will be formed of copies of (properly transformed) parts of itself. These parts are not identical copies of themselves under affine transformation, and so we must allow some error in our representation of an image as a set of transformations. This means that the image we encode as a set of transformations will not be an identical copy of the original image but rather an approximation of it.

Finally, in what kind of images can we expect to find this type of local self-similarity? Experimental results suggest that most images that one


Figure A.4 : The original 256 x 256 pixel Lenna image (left) and some of its self-similar portions (right).

would expect to 'see' can be compressed by taking advantage of this type of self-similarity; for example, images of trees, faces, houses, mountains, clouds, etc. However, the existence of this local self-similarity and the ability of an algorithm to detect it are distinct issues, and it is the latter which concerns us here.

A.2 A Special MRCM

In this section we describe an extension of the multiple reduction copying machine metaphor that can be used to encode and decode grey-scale images. As before, the machine has several dials, or variable components:

Dial 1: number of lens systems,
Dial 2: setting of reduction factor for each lens system individually,
Dial 3: configuration of lens systems for the assembly of copies.

These dials are a part of the MRCM definition from chapter 5; we add to them the following two capabilities:

Dial 4: A contrast and brightness adjustment for each lens,
Dial 5: A mask which selects, for each lens, a part of the original to be copied.

These extra features are sufficient to allow the encoding of grey-scale images. The last dial is the important new feature. It partitions an image into pieces which are each transformed separately. For this reason, we call this MRCM a partitioned multiple reduction copying machine (PMRCM). By partitioning the image into pieces, we allow the encoding of many shapes that are difficult to encode with an MRCM, or IFS.

Partitioned MRCMs

Let us review what happens when we put an original image on the copy surface of the machine. Each lens selects a portion of the original, which we denote by D_i, and copies that part (with a brightness and contrast transformation) to a part of the produced copy which is denoted R_i. We call the D_i domains and the R_i ranges. We denote this transformation by w_i. The partitioning is implicit in the notation, so that we can use almost the same notation as before. Given an image f, one copying step in a machine with N lenses can be written as W(f) = w_1(f) ∪ w_2(f) ∪ ··· ∪ w_N(f). As before, the machine runs in a feedback loop; its own output is fed back as its new input again and again.

Consider the 8 lens PMRCM indicated in figure A.5. The figure shows two regions, one marked D_1 = D_2 = D_3 = D_4 and the other marked D_5 = D_6 = D_7 = D_8. These are the partitioned pieces of the original which will be copied by the 8 lenses. The lenses map each domain D_i to a corresponding range R_i, with a reduction factor of 1/2. For simplicity, we assume that the contrast and brightness are not altered in this example. Figure A.6 shows three iterations of the PMRCM with three different initial images. The attractor for this system is the bow-tie figure shown in (c).

This example demonstrates the utility of a PMRCM. By partitioning the original to be copied, it is very easy to encode the bow-tie image (though the astute reader will notice that this image can also be encoded using an IFS).

Figure A.5 : An 8 lens PMRCM encoding a bow-tie.

Figure A.6 : Three iterations of a PMRCM with three different initial images.


We call the mathematical analogue of a PMRCM a partitioned iterated function system (PIFS). A PIFS has some features in common with the networked MRCM and Barnsley's recurrent iterated function systems, but they are not at all identical.

We haven't specified what kind of transformations we are allowing, and in fact one could build a PMRCM or PIFS with any transformation one wants. But in order to simplify the situation, and also in order to allow a compact specification of the final PIFS (in order to yield high compression), we restrict ourselves to transformations w_i of the form

        [ x ]   [ a_i  b_i   0  ] [ x ]   [ e_i ]
    w_i [ y ] = [ c_i  d_i   0  ] [ y ] + [ f_i ]        (A.1)
        [ z ]   [  0    0   s_i ] [ z ]   [ o_i ]

It is convenient to write

    v_i(x, y) = [ a_i  b_i ] [ x ] + [ e_i ]
                [ c_i  d_i ] [ y ]   [ f_i ] .

Since an image is modeled as a function f(x, y), we can apply w_i to an image f by w_i(f) = w_i(x, y, f(x, y)). Then v_i determines how the partitioned domains of an original are mapped to the copy, while s_i and o_i determine the contrast and brightness of the transformation. It is always implicit, and important to remember, that each w_i is restricted to D_i × I. That is, w_i applies only to the part of the image that is above the domain D_i. This means that v_i(D_i) = R_i.

Since we want W(f) to be an image, we must insist that ∪ R_i = I² and that R_i ∩ R_j = ∅ when i ≠ j. That is, when we apply W to an image, we get some single valued function above each point of the square I². Running the copying machine in a loop means iterating the Hutchinson operator W. We begin with an initial image f_0 and then iterate f_1 = W(f_0), f_2 = W(f_1) = W(W(f_0)), and so on. We denote the n-th iterate by f_n = W^n(f_0).
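One copying step W(f) can be sketched directly on a pixel grid. Everything below is a toy illustration, not the encoding used in this appendix: each map is a made-up tuple (domain corner, range corner, range size, s_i, o_i), domains are twice the range size and are shrunk by pixel averaging, and spatial rotations and flips are omitted.

```python
import numpy as np

def apply_W(img, maps):
    """One application of the PIFS operator W.  Each map w_i is
    (dx, dy, rx, ry, size, s, o): the (2*size x 2*size) domain block
    at (dx, dy) is shrunk by averaging 2x2 pixels, multiplied by the
    contrast s, offset by the brightness o, and written to the
    (size x size) range block at (rx, ry).  The ranges tile the image,
    so every output pixel is defined exactly once."""
    out = np.empty_like(img)
    for dx, dy, rx, ry, size, s, o in maps:
        domain = img[dx:dx + 2 * size, dy:dy + 2 * size]
        shrunk = domain.reshape(size, 2, size, 2).mean(axis=(1, 3))
        out[rx:rx + size, ry:ry + size] = s * shrunk + o
    return out

# Four maps, each taking the whole 4x4 image as its domain and one
# 2x2 quadrant as its range; every s = 0.5 < 1, so W is contractive.
maps = [(0, 0, 0, 0, 2, 0.5, 0.1), (0, 0, 0, 2, 2, 0.5, 0.3),
        (0, 0, 2, 0, 2, 0.5, 0.5), (0, 0, 2, 2, 2, 0.5, 0.7)]
f = np.random.rand(4, 4)
for _ in range(40):
    f = apply_W(f, maps)   # f_n converges to the attractor f_infinity
```

After enough iterations f is (numerically) a fixed point: applying W again leaves it unchanged, regardless of the random starting image.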

When will W have an attractive fixed point? By the contraction mapping principle, it is sufficient to have W be contractive. Since we have chosen a metric that is only sensitive to what happens in the z direction, it is not necessary to impose contractivity conditions in the x or y directions. The transformation W will be contractive when each s_i < 1. In fact, the contraction mapping principle can be applied to W^m (for some m), so it is sufficient for W^m to be contractive. This leads to the somewhat surprising result that there is no specific condition on the s_i either. In practice, it is safest to take s_i < 1 to ensure contractivity. But we know from experiments that taking s_i < 1.2 is safe, and that this results in slightly better encodings.

When W is not contractive and W^m is contractive, we call W eventually contractive. A brief explanation of how a transformation W can be eventually contractive but not contractive is in order. The map W is composed of a union of maps w_i operating on disjoint parts of an image. The iterated transform W^m is composed of a union of compositions of the form

    w_{i_1} ∘ w_{i_2} ∘ ··· ∘ w_{i_m} .

PMRCM = PIFS

Fixed Points for PIFS

Eventually Contractive Maps


Since the product of the contractivities bounds the contractivity of the compositions, the compositions may be contractive if each contains sufficiently contractive w_{i_j}. Thus W will be eventually contractive (in the sup metric) if it contains sufficient 'mixing' so that the contractive w_i eventually dominate the expansive ones. In practice, given a PIFS this condition is simple to check.

Suppose that we take all the s_i < 1. This means that when the PMRCM is run, the contrast is always reduced. This seems to suggest that when the machine is run in a feedback loop, the resulting attractor will be an insipid, contrast-less grey. But this is wrong, since contrast is created between ranges which have different brightness levels o_i. So is the only contrast in the attractor between the R_i? No; if we take the v_i to be contractive, then the places where there is contrast between the R_i in the image will propagate to smaller and smaller scales, and this is how detail is created in the attractor. This is one reason to require that the v_i be contractive.

We now know how to decode an image that is encoded as a PIFS or as a PMRCM. Start with any initial image and repeatedly run the copy machine, or repeatedly apply W, until we get to the fixed point f_∞. We will use Hutchinson's notation and denote this fixed point by f_∞ = |W|. The decoding is easy, but it is the encoding which is interesting. To encode an image we need to figure out R_i, D_i and w_i, as well as N, the number of maps w_i we wish to use.

Decoding by Matrix Inversion

When we decode by iterating, we take an initial f_0 and compute f_n = W(f_{n−1}). This can also be written as

    f_n(x, y) = s_i f_{n−1}(v_i^{−1}(x, y)) + o_i ,

where i is determined by the condition (x, y) ∈ R_i. Suppose we are dealing with an image of resolution M × M. We can write the image as a column vector, and then this equation can be written as

    f_n = S f_{n−1} + O ,

where S is an M² × M² matrix with entries s_i that encode the v_i and O is a column vector containing the brightness values o_i. Then

    f_n = S^n f_0 + Σ_{j=1}^{n} S^{j−1} O ,

and if each s_i < c < 1 then the first term is 0 in the limit. (The condition s_i < c < 1 can be relaxed when W is eventually contractive.) When I − S is invertible,

    f_∞ = Σ_{j=0}^{∞} S^j O = (I − S)^{−1} O ,

where I is the identity matrix. Bielefeld pointed out that when each pixel value f_n(x, y) depends on only one (or a few) other pixel values f_{n−1}(v_i^{−1}(x, y)), this matrix is very sparse and can be readily inverted.
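For a tiny example, the fixed point can be computed directly by solving the linear system (I − S) f = O rather than iterating. The dependency pattern below is made up purely for illustration:

```python
import numpy as np

# A 2x2 "image" flattened to M^2 = 4 pixels.  S[p, q] = s_i means
# output pixel p takes its value from input pixel q under some map
# w_i; here each pixel depends on exactly one other pixel, so S is
# very sparse (one entry per row).
M2 = 4
S = np.zeros((M2, M2))
for p in range(M2):
    S[p, (p + 1) % M2] = 0.5
O = np.array([0.1, 0.3, 0.5, 0.7])

# f_infinity = (I - S)^(-1) O
f_inf = np.linalg.solve(np.eye(M2) - S, O)
print(np.allclose(S @ f_inf + O, f_inf))  # True: a fixed point of W
```

For a real M × M image one would use a sparse solver rather than a dense M² × M² matrix.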


A.3 Encoding Images

Suppose we are given an image f that we wish to encode. This means we want to find a collection of maps w_1, w_2, ..., w_N with W = ∪_{i=1}^{N} w_i and f = |W|. That is, we want f to be the fixed point of the Hutchinson operator W. As in the IFS case, the fixed point equation

    f = W(f) = w_1(f) ∪ w_2(f) ∪ ··· ∪ w_N(f)

suggests how this may be achieved. We seek a partition of f into pieces to which we apply the transforms w_i and get back f. This is too much to hope for in general, since images are not composed of pieces that can be transformed non-trivially to fit exactly somewhere else in the image. What we can hope to find is another image f′ = |W| with δ(f′, f) small. That is, we seek a transformation W whose fixed point f′ = |W| is close to, or looks like, f. In that case,

    f ≈ f′ = W(f′) ≈ W(f) = w_1(f) ∪ w_2(f) ∪ ··· ∪ w_N(f).

Thus it is sufficient to approximate the parts of the image with transformed pieces. We do this by minimizing the following quantities

    δ(f ∩ (R_i × I), w_i(f)) ,    i = 1, ..., N.    (A.2)

Finding the pieces R_i (and corresponding D_i) is the heart of the problem. The following example suggests how this can be done. Suppose we are dealing with a 256 × 256 pixel image at 8 bits per pixel. Let R_1, R_2, ..., R_1024 be the 8 × 8 non-overlapping sub-squares of [0, 255] × [0, 255], and let D be the collection of all 16 × 16 sub-squares. The collection D contains 241 · 241 = 58,081 squares. For each R_i search through all of D to find a D_i ∈ D which minimizes equation (A.2). This domain is said to cover the range. There are 8 ways to map one square onto another, so that this means comparing 8 · 58,081 = 464,648 squares. Also, a square in D has 4 times as many pixels as an R_i, so we must either subsample (choose 1 from each 2 × 2 sub-square of D_i) or average the 2 × 2 sub-squares corresponding to each pixel of R_i when we minimize equation (A.2).
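The counts in this example, and the 2 × 2 averaging that brings a 16 × 16 domain down to the 8 × 8 size of a range, can be checked in a few lines:

```python
import numpy as np

# Sanity checks on the counts above.
ranges = (256 // 8) ** 2                 # non-overlapping 8x8 ranges
domains = (256 - 16 + 1) ** 2            # all 16x16 sub-squares
print(ranges, domains, 8 * domains)      # 1024 58081 464648

def shrink(domain: np.ndarray) -> np.ndarray:
    """Average each 2x2 sub-square of a 16x16 block -> 8x8 block."""
    return domain.reshape(8, 2, 8, 2).mean(axis=(1, 3))

print(shrink(np.ones((16, 16))).shape)   # (8, 8)
```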

Minimizing equation (A.2) means two things. First, it means finding a good choice for D_i (that is, the part of the image that most looks like the image above R_i). Second, it means finding a good contrast and brightness setting s_i and o_i for w_i. For each D ∈ D we can compute s_i and o_i using least squares regression, which also gives a resulting root mean square (rms) difference. We then pick as D_i the D ∈ D which has the least rms difference.

A Simple Illustrative Example

A Point about Metrics

Two men flying in a balloon are sent off track by a strong gust of wind. Not knowing where they are, they approach a hill on which a solitary figure is perched. They lower the balloon and shout to the man on the hill, "Where are we?". The man pauses for a long time and shouts back, just as


the balloon is leaving earshot, "You are in a balloon." So one of the men in the balloon turns to the other and says, "That man was a mathematician." Completely amazed, the second man asks, "How can you tell that?". Replies the first man, "We asked him a question, he thought about it for a long time, his answer was correct, and it was totally useless." This is what we have done with the metrics. When it came to a simple theoretical motivation, we used the sup metric, which is very convenient for this. But in practice, we are happier using the rms metric, which allows us to make least squares computations.

Given two squares containing n pixel intensities, a_1, ..., a_n and b_1, ..., b_n, we can seek s and o to minimize the quantity

Least Squares

    R = Σ_{i=1}^{n} (s · a_i + o − b_i)² .

This will give us a contrast and brightness setting that makes the affinely transformed a_i values have the least squared distance from the b_i values. The minimum of R occurs when the partial derivatives with respect to s and o are zero, which occurs when

    s = [ n Σ a_i b_i − (Σ a_i)(Σ b_i) ] / [ n Σ a_i² − (Σ a_i)² ]

and

    o = [ Σ b_i − s Σ a_i ] / n .

In that case,

    R = Σ b_i² + s (s Σ a_i² − 2 Σ a_i b_i + 2 o Σ a_i) + o (n o − 2 Σ b_i) .    (A.3)
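This least squares computation is a one-liner to implement. A sketch (NumPy, on flattened pixel blocks; the function name is ours):

```python
import numpy as np

def fit_s_o(a: np.ndarray, b: np.ndarray):
    """Least-squares contrast s and brightness o mapping domain
    pixels a onto range pixels b, plus the resulting squared error R."""
    n = a.size
    denom = n * np.sum(a * a) - np.sum(a) ** 2
    s = (n * np.sum(a * b) - np.sum(a) * np.sum(b)) / denom if denom else 0.0
    o = (np.sum(b) - s * np.sum(a)) / n
    R = np.sum((s * a + o - b) ** 2)
    return s, o, R

a = np.array([0.0, 1.0, 2.0, 3.0])
b = 0.5 * a + 0.25        # an exact affine match: s = 0.5, o = 0.25
s, o, R = fit_s_o(a, b)
print(s, o, R)            # 0.5 0.25 0.0
```

The `denom == 0` case (a constant domain block) falls back to s = 0, leaving o to match the mean of the range.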

A choice of D_i, along with a corresponding s_i and o_i, determines a map w_i of the form of eqn. (A.1). Once we have the collection w_1, ..., w_1024 we can decode the image by estimating |W|. Figure A.7 shows four images: an arbitrary initial image f_0 chosen to show texture, the first iteration W(f_0), which shows some of the texture from f_0, W²(f_0), and W¹⁰(f_0).

The result is surprisingly good, given the naive nature of the encoding algorithm. The original image required 65,536 bytes of storage, whereas the transformations required only 3968 bytes,5 giving a compression ratio of 16.5:1. With this encoding the rms error is 10.4; each pixel is on average only 6.2 grey levels away from the correct value. These images show how detail is added at each iteration. The first iteration contains detail at size 8 × 8, the next at size 4 × 4, and so on.

A.4 Ways to Partition Images

The example of the last section is naive and simple, but it contains most of the ideas of a fractal image encoding scheme. First partition the image by some collection of ranges R_i. Then for each R_i seek from some collection of image pieces a D_i which has a low rms error. The sets R_i and D_i determine s_i and o_i as well as a_i, b_i, c_i, d_i, e_i and f_i in eqn. (A.1). We then get a transformation W = ∪ w_i which encodes an approximation of the original image.

Quadtree Partitioning

A weakness of the example is the use of fixed size R_i, since there are regions of the image that are difficult to cover well this way (for example, Lenna's eyes). Similarly, there are regions that could be covered well with larger R_i, thus reducing the total number of w_i maps needed (and increasing the compression of the image). A generalization of the fixed size R_i is the use of a quadtree partition of the image. In a quadtree partition, a square image is broken up into 4 equally sized sub-squares. Depending on some algorithmic criterion, each of these is again recursively sub-divided.

An algorithm for encoding 256 × 256 pixel images based on this idea can proceed as follows. Choose for the collection D of permissible domains all the sub-squares in the image of size 8, 12, 16, 24, 32, 48 and 64. Partition the image recursively by a quadtree method until the squares are of size 32. For each square in the quadtree partition, attempt to cover it by a domain that is larger. If a predetermined rms tolerance is met, then call the square R_i and the covering domain D_i. If not, then subdivide the square and repeat. This algorithm works well. It works even better if diagonally oriented squares are also used in the domain pool D. Figure A.8 shows an image of a collie compressed using this scheme. In section A.5 we discuss some of the details of this scheme as well as of the other two schemes discussed below.
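The recursive subdivision can be sketched independently of the domain search. Here `covers_well` is a stand-in predicate for "some domain meets the rms tolerance", and the sizes are illustrative:

```python
def quadtree_ranges(x, y, size, covers_well, min_size=4):
    """Recursive quadtree partition: keep a square as a range R_i if
    some domain covers it within tolerance (covers_well returns True)
    or it has reached the minimum size; otherwise split it into four.
    covers_well(x, y, size) stands in for the domain search, which is
    the expensive part of a real encoder."""
    if size <= min_size or covers_well(x, y, size):
        return [(x, y, size)]
    h = size // 2
    return (quadtree_ranges(x, y, h, covers_well, min_size) +
            quadtree_ranges(x + h, y, h, covers_well, min_size) +
            quadtree_ranges(x, y + h, h, covers_well, min_size) +
            quadtree_ranges(x + h, y + h, h, covers_well, min_size))

# Pretend only squares of size <= 8 can be covered within tolerance:
parts = quadtree_ranges(0, 0, 32, lambda x, y, s: s <= 8)
print(len(parts))  # 16 squares of size 8
```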

HV-Partitioning

A weakness of the quadtree based partitioning is that it makes no attempt to select the domain pool D in a content dependent way. The collection must be chosen to be very large so that a good fit to a given range can be found. A way to remedy this, while increasing the flexibility of the range partition, is to use an HV-partition. In an HV-partition, a rectangular image is recursively partitioned either horizontally or vertically to form two new rectangles. The partitioning repeats recursively until some criterion is met, as before. This scheme is more flexible, since the position of the partition is variable. We

5 Each transformation required 8 bits in the x and y direction to determine the position of D_i, 7 bits for o_i, 5 bits for s_i, and 3 bits to determine a rotation and flip operation for mapping D_i to R_i.
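The footnote's bit budget is easy to verify, and it reproduces both the 3968 bytes and the 16.5:1 ratio quoted in section A.3:

```python
# Bits per map w_i in the quadtree-free example of section A.3:
# domain position (8 + 8), o_i (7), s_i (5), rotation/flip index (3).
bits_per_map = 8 + 8 + 7 + 5 + 3
total_bytes = 1024 * bits_per_map // 8         # 1024 maps
original_bytes = 256 * 256                     # 8 bits per pixel
print(bits_per_map, total_bytes)               # 31 3968
print(round(original_bytes / total_bytes, 1))  # 16.5
```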


Figure A.7 : An original image, the first, second, and tenth iterates of the encoding transformations.

can then try to make the partitions in such a way that they share some self-similar structure. For example, we can try to arrange the partitions so that edges in the image will tend to run diagonally through them. Then, it is possible to use the larger partitions to cover the smaller partitions with a reasonable expectation of a good cover. Figure A.10 demonstrates this idea. The figure shows a part of an image (a); in (b) the first partition generates two rectangles, R_1 with the edge running diagonally through it,


Figure A.8 : A collie (256 × 256) compressed with the quadtree scheme at 28.95:1 with an rms error of 8.5.

Figure A.9 : San Francisco (256 × 256) compressed with the HV scheme at 7.6:1 with an rms error of 7.1.

and R_2 with no edge; and in (c) the next three partitions of R_1 partition it into 4 rectangles, two rectangles which can be well covered by R_1 (since they have an edge running diagonally) and two which can be covered by R_2 (since they contain no edge). Figure A.9 shows an image of San Francisco encoded using this scheme.

Yet another way to partition an image is based on triangles. In the triangular partitioning scheme, a rectangular image is divided diagonally

Triangular Partitioning


Figure A.10 : The HV scheme attempts to create self-similar rectangles at different scales: (a) a part of an image, (b) the first partition, (c) further partitions.

Figure A.11 : A quadtree partition (5008 squares), an HV partition (2910 rectangles), and a triangular partition (2954 triangles).

into two triangles. Each of these is recursively subdivided into 4 triangles by segmenting the triangle along lines that join three partitioning points along the three sides of the triangle. This scheme has several potential advantages over the HV-partitioning scheme. It is flexible, so that triangles in the scheme can be chosen to share self-similar properties, as before. Moreover, the artifacts arising from imperfect covering do not run horizontally and vertically, which is less distracting. Also, the triangles can have any orientation, so we break away from the rigid 90 degree rotations of the quadtree and HV partitioning schemes. This scheme, however, remains to be fully developed and explored.

Figure A.11 shows sample partitions arising from the three partitioning schemes applied to the Lenna image.

A.5 Implementation Notes

Storing the Encoding Compactly

To store the encoding compactly, we do not store all the coefficients in eqn. (A.1). The contrast and brightness settings are stored using a fixed number of bits. One could compute the optimal s_i and o_i and then discretize them for storage. However, a significant improvement in fidelity can be obtained if only discretized s_i and o_i values are used when computing the error during encoding (and eqn. (A.3) facilitates this). Using 5 bits to store s_i and 7 bits to store o_i has been found empirically optimal in general. The distribution of s_i and o_i shows some structure, so further compression can be attained by using entropy encoding.

The remaining coefficients are computed when the image is decoded. In their place we store R_i and D_i. In the case of a quadtree partition, R_i can be encoded by the storage order of the transformations if we know the size of R_i. The domains D_i must be stored as a position and size (and orientation if diagonal domains are used). This is not sufficient, though, since there are 8 ways to map the four corners of D_i to the corners of R_i. So we also must use 3 bits to determine this rotation and flip information.

In the case of the HV-partitioning and triangular partitioning, the partition is stored as a collection of offset values. As the rectangles (or triangles) become smaller in the partition, fewer bits are required to store the offset value. The partition can be completely reconstructed by the decoding routine. One bit must be used to determine if a partition is further subdivided or will be used as an R_i, and a variable number of bits must be used to specify the index of each D_i in a list of all the partitions. For all three methods, and without too much effort, it is possible to achieve a compression of roughly 31 bits per w_i on average.

In the example of section A.3, the number of transformations is fixed. In contrast, the partitioning algorithms described are adaptive in the sense that they utilize a range size which varies depending on the local image complexity. For a fixed image, more transformations lead to better fidelity but worse compression. This trade-off between compression and fidelity leads to two different approaches to encoding an image f: one targeting fidelity and one targeting compression. These approaches are outlined in the pseudo-code below. In the code, size(Rᵢ) refers to the size of the range Rᵢ; in the case of rectangles, size(Rᵢ) is the length of the longest side.

Optimizing Encoding Time

Another concern is encoding time, which can be significantly reduced by employing a classification scheme on the ranges and domains. Both ranges and domains are classified using some criteria such as their edge-like nature, or the orientation of bright spots, etc. Considerable time savings result from only using domains in the same class as a given range when seeking a cover, the rationale being that domains in the same class as a range should cover it best.

Pseudo-Code

a. Pseudo-code targeting a fidelity e_c.
• Choose a tolerance level e_c.
• Set R₁ = I² and mark it uncovered.
• While there are uncovered ranges Rᵢ do {
    • Out of the possible domains D, find the domain Dᵢ and the corresponding wᵢ which best covers Rᵢ (i.e., which minimizes expression (A.2)).
    • If δ(f ∩ (Rᵢ × I), wᵢ(f)) < e_c or size(Rᵢ) ≤ r_min then
        • Mark Rᵢ as covered, and write out the transformation wᵢ;
    • else
        • Partition Rᵢ into smaller ranges which are marked as uncovered, and remove Rᵢ from the list of uncovered ranges.
}

b. Pseudo-code targeting a compression having N transformations.
• Choose a target number of ranges N_r.
• Set a list to contain R₁ = I², and mark it as uncovered.
• While there are uncovered ranges in the list do {
    • For each uncovered range in the list, find and store the domain Dᵢ ∈ D and map wᵢ which covers it best, and mark the range as covered.
    • Out of the list of ranges, find the range Rⱼ with size(Rⱼ) > r_min which has the largest δ(f ∩ (Rⱼ × I), wⱼ(f)) (i.e., which is covered worst).
    • If the number of ranges in the list is less than N_r then {
        • Partition Rⱼ into smaller ranges which are added to the list and marked as uncovered.
        • Remove Rⱼ, wⱼ and Dⱼ from the list.
    }
}
• Write out all the wᵢ in the list.
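As an illustration, the sketch below is a toy one-dimensional analogue of pseudo-code (a). It is a sketch under simplifying assumptions, not the implementation discussed in the text: ranges are intervals that split in half rather than quadtree squares, domains are intervals of twice the range length, and the map wᵢ is a least-squares affine fit s·x + o of the decimated domain onto the range.

```python
# Toy 1-D analogue of pseudo-code (a), targeting a fidelity e_c.
# Illustrative sketch only; all names and parameter values are ours.

def best_cover(signal, r0, rlen):
    """Find the domain offset and (s, o) minimizing the rms covering error."""
    rng = signal[r0:r0 + rlen]
    best = None
    for d0 in range(0, len(signal) - 2 * rlen + 1, rlen):
        # Decimate the domain by averaging pairs so it matches the range size.
        dom = [(signal[d0 + 2 * i] + signal[d0 + 2 * i + 1]) / 2
               for i in range(rlen)]
        # Least-squares fit rng ~ s*dom + o.
        n = rlen
        sx, sy = sum(dom), sum(rng)
        sxx = sum(x * x for x in dom)
        sxy = sum(x * y for x, y in zip(dom, rng))
        den = n * sxx - sx * sx
        s = (n * sxy - sx * sy) / den if abs(den) > 1e-12 else 0.0
        o = (sy - s * sx) / n
        err = (sum((s * x + o - y) ** 2
                   for x, y in zip(dom, rng)) / n) ** 0.5
        if best is None or err < best[0]:
            best = (err, d0, s, o)
    return best

def encode(signal, e_c=0.05, r_min=2):
    """Split uncovered ranges until each is covered well or minimal in size."""
    transforms, uncovered = [], [(0, len(signal))]
    while uncovered:
        r0, rlen = uncovered.pop()
        cover = best_cover(signal, r0, rlen) if 2 * rlen <= len(signal) else None
        if cover is not None and (cover[0] < e_c or rlen <= r_min):
            transforms.append((r0, rlen) + cover[1:])   # (r0, rlen, d0, s, o)
        else:
            half = rlen // 2
            uncovered += [(r0, half), (r0 + half, rlen - half)]
    return transforms

signal = [0.1 * (i % 8) for i in range(32)]   # a small test signal
code = encode(signal)
```

Decoding would iterate the stored maps from an arbitrary starting signal, as in the fixed-point argument of the text; the adaptive splitting here is the 1-D counterpart of the quadtree recursion.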


Appendix B

Multifractal Measures

Carl J. G. Evertsz¹ and Benoit B. Mandelbrot²

Before we generalize [fractal sets to measures], it may be recalled that, among our uses of fractal sets [to describe nature], several involve an approximation. While discussing clustered errors, we repressed our conviction that, between the errors, the underlying noise weakens, but does not stop. While discussing the distribution of stars, we repressed our knowledge of the existence of interstellar matter, which is also likely to have a very irregular distribution. While discussing turbulence, we approximated it as having [nonfractal] laminar inserts. In addition, no new concept would have been needed to deal with the distribution of minerals. Between the regions where the abundance of a metal like copper justifies commercial mining, the density of this metal is low, even very low, but one does not expect any region of the world to be totally without copper. All these voids [within fractal sets] must now be filled - without, it is hoped, inordinately modifying the mental pictures we have achieved thus far. This Chapter will outline a way of reaching this goal, by assuming that various parts of the whole share the same nature.

Benoit B. Mandelbrot3

¹Center for Complex Systems and Visualization, University of Bremen, Postfach 330 440, D-2800 Bremen 33, Germany. ²Mathematics Department, Yale University, Box 2155 Yale Station, New Haven, CT 06520, USA and Physics Department, IBM T. J. Watson Research Center, Yorktown Heights, NY 10598, USA. ³Introduction to Chapter IX of Les objets fractals: forme, hasard et dimension, 1975. A related text appears in B. B. Mandelbrot, The Fractal Geometry of Nature, 1982.


B.1 Introduction

The bulk of this book is devoted to fractal sets. A set's visual expression is a region drawn in black ink against white paper (or in white chalk against a blackboard). A set's defining relation is an indicator function I(P), which can only take two values: I(P) = 1, or I(P) = "true", if the point P belongs to the set S; and I(P) = 0, or I(P) = "false", if P does not belong to S.

However, as stated in the 1975 quote which opens this appendix, most facts about nature cannot be expressed in terms of the contrast between "black and white", "true and false", or "1 and 0". Therefore, those aspects cannot be illustrated by sets; they demand more general mathematical objects that succeed in embodying the idea of "shades of grey". Those more general objects are called measures.

It is most fortunate that the idea of self-similarity is readily extended from sets to measures. The goal of this appendix is to sketch the theory of self-similar measures, which are usually called multifractals. We shall include various heuristic arguments that are often used in this context, and then (section 4) describe the proper probabilistic background behind the concept of multifractal.⁴ Against this background, the nature of the usual heuristic steps becomes clear, their limitations and proneness to error become obvious, and the unavoidable generalizations demanded by both logic and the data become easy. However, these generalizations are beyond the scope of this appendix.

B.1.1 Simple Examples of Multifractals

Consider a geographical map of a continent or island. An example of a measure μ on such a map is "the quantity of ground water". To each subset S of the map, the measure attributes a quantity μ(S), which is the amount of ground water below S, down to some prescribed level. Now divide the map into two equally-sized pieces S₁ and S₂. It will not come as a surprise if their respective ground water contents μ(S₁) and μ(S₂) are unequal. If S₁ is subdivided further into two equally sized pieces S₁₁ and S₁₂, their ground water contents would again differ. This subdivision could be carried through to the size of pores in rocks, where some pores are found filled with water and others empty. This is a familiar story: some countries have more ground water than others → parts of a country contain more ground water than others → you may drill a well and find flowing water, while your neighbor finds none → and so on. Many other quantities exhibit the

⁴The probabilistic approach to multifractals was first described in two papers by B. B. Mandelbrot: Mandelbrot, B. B., Intermittent turbulence in self-similar cascades: divergence of high moments and dimension of the carrier, J. Fluid Mech. 62 (1974) 331; and Mandelbrot, B. B., Multiplications aléatoires itérées et distributions invariantes par moyenne pondérée aléatoire, I & II, Comptes Rendus (Paris) 278A (1974) 289-292 & 355-358. These papers, together with related ones, will soon be reissued in Mandelbrot, B. B., Selecta Volume N, Multifractals & 1/f Noise: 1963-76, Springer, New York.


same behavior; that is, the quantity

μ = the amount of ground water below S

is an example of a measure which is irregular at all scales. When the irregularity is the same at all scales, or at least statistically the same, one says that the measure is self-similar, or that it is a multifractal. A Sierpinski gasket is a self-similar set, in the sense that each piece (however small) is identical to the whole after some rescaling and translation; something similar holds for multifractal measures.

Examples of multifractal measures have already entered in previous chapters of this book. One is the chaos game that corresponds to an iterated function system or IFS that is run with unequal probabilities p₁ = 0.5, p₂ = 0.3 and p₃ = 0.2 for the various reducing similarities (section 6.3), and the other is the Pascal triangle mentioned in chapter 8 and further discussed in section B.2.2. Let us take a closer look at the Sierpinski IFS.

We saw that playing the chaos game long enough produces a Sierpinski gasket with fractal dimension D = log 3/log 2. We also found that the subtriangles making up the Sierpinski gasket were visited with different probabilities, which are summarized in figures 6.22 and 6.23 for the first two levels. On very rough examination of the first figure, the subset with address 3 seems to include a smooth distribution of hitting probabilities. But closer examination reveals an irregular distribution among its subsets with addresses 31, 32 and 33. The same holds for the parts 1 and 2, and for closer examination of a part such as 32. We also see that if part 3 = {31, 32, 33} is blown up by a factor 2, while its component probabilities are multiplied by a factor 5, we achieve not only a geometric fit, but also a fit of the probabilities to those at level 1 in figure 6.22. Again, the same holds for parts 1 and 2, with factors 2 and 3⅓, respectively. Hence, up to a numerical factor, the distribution of the hitting probabilities is the same in each of the subsets 1, 2 and 3. If such an exact invariance holds for all scales, the overall distribution is said to be linearly self-similar. For the random IFS this invariance is exactly the one under the Markov operator defined on page 331 and discussed later on. The multifractal measure produced by the random IFS is simply the probability of hitting a subset of the triangle. The distribution of these probabilities is shown in figure B.1. The Sierpinski IFS can be interpreted as a caricature model for the above ground water example, by taking the total quantity of ground water as unity, and identifying the quantity of ground water with the hitting probability.
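The hitting statistics described above can be checked with a short sketch of the chaos game of section 6.3 (this is our illustration, not code from the book; the vertex placement is the standard one). After applying the map toward vertex i, the game point lies in the first-level subtriangle i, so the hit frequencies should approach the pᵢ:

```python
# Sketch of the chaos game with unequal probabilities p1 = 0.5,
# p2 = 0.3, p3 = 0.2.  The hit frequency of each first-level
# subtriangle approaches the corresponding p_i.
import random

random.seed(1)
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
probs = [0.5, 0.3, 0.2]

x, y = 0.25, 0.25
hits = [0, 0, 0]
for n in range(200_000):
    i = random.choices([0, 1, 2], weights=probs)[0]
    vx, vy = vertices[i]
    x, y = (x + vx) / 2, (y + vy) / 2   # similarity with ratio 1/2
    if n > 100:                          # skip the initial transient
        hits[i] += 1                     # point now lies in subtriangle i

total = sum(hits)
freqs = [h / total for h in hits]        # approaches [0.5, 0.3, 0.2]
```

Counting hits at the second level in the same way would reproduce the products pᵢpⱼ of figure 6.23, the level-2 masses of the multifractal hitting measure.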

Another way to look at the Sierpinski IFS measure is suggested by the figures 6.22 and 6.23 and equation (6.3). We note that, while each triangle is further fragmented into 3 subtriangles (as in figure 2.16), the measure μ (or hitting probability) is also fragmented by factors p₁ = 0.5, p₂ = 0.3 and p₃ = 0.2. Denote the measure of the set with address 3 by μ₃. Its 3 subparts with addresses 31, 32 and 33 carry the measures μ₃₁ = p₁μ₃, μ₃₂ = p₂μ₃ and μ₃₃ = p₃μ₃. A single process fragments a set into smaller and smaller components according to a fixed rule, and at the same time fragments


Figure B.1: Trinomial Measure. The self-similar density of hitting probabilities of the IFS on stage 8 of the Sierpinski gasket. This is a 3-dimensional rendering of figure 6.21. The height of the function above each sub-triangle is proportional to the number of hits in the limit of infinitely many game points. In order to draw this illustration we did not play the game infinitely long, but instead used the fact that this distribution is the 8th stage of a trinomial multiplicative cascade with m₀ = p₁, m₁ = p₂ and m₂ = p₃, where the pᵢ are the probabilities for the different contractions in the random IFS discussed in section 6.3, i.e., p₁ = 0.5, p₂ = 0.3 and p₃ = 0.2.

the measure of the components by another rule. Such a process is called a multiplicative process or cascade. Multiplicative processes are a very important paradigm in the theory of multifractals and play a central role in this appendix. In the language of multiplicative processes, the fragmentation ratios pᵢ are usually called multipliers and are denoted by m with various indices. In the Sierpinski IFS, the fragmentation of the set yields a fractal. This feature is a complication, and is not essential. To avoid it, most of this appendix uses multiplicative rules that operate over an ordinary Euclidean set, usually the unit interval, but are such that the measure is fragmented non-trivially.

B.1.2 Characterization of Multifractals

Let us step back, and apply the idea of box-counting dimension to the set S supporting a measure (in the IFS example this was the Sierpinski gasket). One covers S with a collection of boxes of size ε. One evaluates the number N(ε) of boxes needed to cover the object, and one finds the dimension D through the scaling relation N(ε) ~ ε^(−D). However, simply counting the boxes is like counting coins without caring about the denomination. When the set supporting the measure is Euclidean, as it will be in this appendix, the value of its fractal dimension only confirms that there is nothing fractal about this support. Thus, D is not sufficient to give a quantitative description of the self-similar measure supported by this set. Instead, the


measure contained in each box must somehow be given a weight. A priori, the obvious weight would be the average density of probability in each box. In a Euclidean space of dimension E (or, more generally, in a space of embedding dimension E), the density is simply defined as μ(S)/ε^E. When this density varies slowly, it can be mapped in the form of a relief, or in the form of lines of constant height. As ε → 0, one expects this relief to tend to a limit. Furthermore, in order to characterize the irregularity of the spatial distribution of a measure, the first step - but not the last! - is to draw the familiar frequency distribution of its density. If the measure is random, one draws either the frequency distribution of the density in a sample, or its probability distribution.

However, in the case of self-similar measures, this familiar process loses all meaning - simply because, as we shall see, the density itself loses all meaning. Instead, the loose notion that ordinarily leads to a density becomes embodied in a very different and more complicated quantity,

α = log μ(box) / log ε,   called the coarse Hölder exponent.

This is the logarithm of the measure of the box divided by the logarithm of the size of the box. For a large class of self-similar measures, it turns out that α is restricted to an interval [α_min, α_max], where 0 < α_min < α_max < ∞. But the study of some of the most interesting multifractal phenomena (such as turbulence or aggregation of particles into clusters) often demands α_min = 0 and/or α_max = ∞.

Once α has been defined, the first step - but not the last! - is just as above: to draw the frequency distribution of α, as follows. For each value α, one evaluates the number N_ε(α) of boxes of size ε having a coarse Hölder exponent equal to α. Now suppose that a box of side ε has been selected at random among boxes whose total number is proportional to ε^(−E). The probability of hitting the value α of the coarse Hölder exponent is p_ε(α) = N_ε(α)/ε^(−E). Again, the first impulse would be to draw the distribution of this probability, but this would not be useful. In the case of interest to us, this distribution no longer tends to a limit as ε → 0, hence an intrinsic characteristic is to be found elsewhere. The considerations to be explored in detail in this appendix will show that it is necessary, instead, to take weighted logarithms and consider either of the functions

f_ε(α) = −log N_ε(α) / log ε, (B.1)

or

C_ε(α) = −log p_ε(α) / log ε. (B.2)

As ε → 0, both f_ε(α) and C_ε(α) tend to well-defined limits f(α) and C(α). The function C(α) is more widely applicable, but the function f(α) is more widely known. When f(α) exists one has

C(α) = f(α) − E. (B.3)


The definition of f(α) means that, for each α, the number of boxes increases for decreasing ε as N_ε(α) ~ ε^(−f(α)). The exponent f(α) is a continuous function of α. In the simplest cases, the graph of f(α), often called the f(α) curve, is shaped like the mathematical symbol ∩, usually leaning to one side. The values of f(α) could be interpreted loosely as a fractal dimension of the subsets of boxes of size ε having coarse Hölder exponent α in the limit ε → 0. As ε → 0, there is an increasing multitude, increasing to infinity, of subsets, each characterized by its own α, and a fractal dimension f(α). This is one of several reasons for the term multifractal [191].

B.1.3 Summary

This appendix restricts itself to the simplest multiplicatively generated multifractal measures. For the more delicate examples of multifractal measures, it refers the reader to the literature. We have already seen a close connection between the IFS and multiplicative processes. Section B.2.2 will digress to discuss the multiplicative process that lurks behind the Pascal triangle in figures 8.14 and 8.32. There is evidence that multiplicative processes can account for many multifractal measures such as those related to the electrostatic charge distribution (the harmonic measure) on fractal boundaries [175], wavefunctions [289] and random resistor networks [147]. But this does not mean that every self-similar measure is multiplicatively generated. For example, many models for the multifractality of the dissipation field in turbulence are based on multiplicative processes, but a physical counterpart remains elusive.

The very simplest multiplicatively generated self-similar measure is the binomial measure. For it, f(α) will be evaluated in three ways: the histogram method [64, 65], the method of moments [191, 205] and large deviation theory [64, 65]. The method of moments⁵ is easy to use mechanically, and therefore has been applied very widely. In our first three sections, the only prerequisite is elementary analysis. Section 4 goes further, and casts self-similar measures in their proper probabilistic setting. It shows that multiplicatively generated multifractal measures are intimately related to a standard topic in probability theory, namely the behavior of sums of random variables. The method of moments and the histogram method are consequences of the theory of large deviations in such sums. While it is more technical, section 4 requires no prior knowledge of sums of random variables or of large deviation theory. The conceptual superiority of the probabilistic approach has immediate practical consequences: it is more general. It explains the nature of various mechanical manipulations, and provides tools to handle and understand self-similar measures to which the

⁵The earliest reference and the reference that is best known are, respectively: Frisch, U. and Parisi, G., Fully developed turbulence and intermittency, in Turbulence and Predictability of Geophysical Flows and Climate Dynamics, edited by Ghil, M., Benzi, R. and Parisi, G., North-Holland, New York, p. 84 (1985); and Halsey, T. C., Jensen, M. H., Kadanoff, L. P., Procaccia, I. and Shraiman, B. I., Fractal measures and their singularities: The characterization of strange sets, Phys. Rev. A 33, 1141 (1986).


method of moments fails to apply. Hence, section 4 of this appendix is an introduction to more advanced literature.

The reader may want to consult other books, reviews or articles about multifractals: for example references [263, 32, 90, 2, 64, 65].

B.2 The Binomial and Multinomial Measures

The introduction discussed the central role multiplicative processes play in the theory of multifractal measures. This section concerns the very simplest multiplicative process, one that generates the binomial measure. This measure appears naturally in the Pascal triangle of figures 8.14 and 8.32, and, with a little modification - the replacement of the binomial by the trinomial - it links to the IFS discussed in the introduction.

B.2.1 A Measure-Generating Multiplicative Cascade

Exact (or linear) self-similarity of measures is best illustrated with the binomial measure (also called the Bernoulli or Besicovitch measure) [64]. In the spirit of the construction of the exactly self-similar Sierpinski gasket through a geometric cascade, this measure μ is recursively generated with the multiplicative cascade that is schematically depicted in figure B.2. This cascade starts (k = 0) with a uniformly distributed unit of mass on the unit interval I = I₀ = [0, 1]. The next stage (k = 1) fragments this mass by distributing a fraction m₀ uniformly on the left half I₀.₀ = [0, ½] of the unit interval, and the remaining fraction m₁ = 1 − m₀ uniformly on the right half I₀.₁ = [½, 1]. At this stage, the left half carries the measure μ(I₀.₀) = m₀ and the right half carries the measure μ(I₀.₁) = m₁. In this process, because μ(I₀) = μ(I₀.₀) + μ(I₀.₁) = m₀ + m₁ = 1, the original measure of the unit interval is conserved; the μ's appear like probabilities, and one says that μ is a probability measure.

At the next stage (k = 2) of the cascade, the subintervals I₀.₀ and I₀.₁ receive the same treatment as the original unit interval. That is, I₀.₀ is split into the intervals I₀.₀₀ = [0, ¼] and I₀.₀₁ = [¼, ½] of size 2^(−2), and the mass is further fragmented. Similarly, I₀.₁ is split into a left half I₀.₁₀ and a right half I₀.₁₁. Writing μ(I₀.₀₀) = μ₀.₀₀, and similarly for the other intervals, this second stage of the cascade yields

μ₀.₀₀ = m₀m₀,  μ₀.₀₁ = m₀m₁,  μ₀.₁₀ = m₁m₀,  μ₀.₁₁ = m₁m₁.

The condition m₀ + m₁ = 1 continues to ensure that the original unit of mass is conserved.

At the kth stage of the cascade, the mass is fragmented over the dyadic intervals (i2^(−k), (i + 1)2^(−k)], where i = 0, …, 2ᵏ − 1. Recall that a point x ∈ [0, 1] is said to have the binary expansion 0.β₁β₂β₃… when x = β₁2^(−1) + β₂2^(−2) + β₃2^(−3) + … with βᵢ ∈ {0, 1}. For dyadic points, like x = ½, this expansion is ambiguous, and may end with either an infinity of



Figure B.2: The multiplicative cascade generating the binomial measure. At each stage the mass of each of the previous dyadic intervals is redistributed as follows: a fraction m₀ goes to the left half and m₁ = 1 − m₀ to the right half. Here we took m₀ = ⅓ and m₁ = ⅔. The density of the measure is shown for the first 8 stages. The scales on the coordinate axes have been kept the same throughout the figure. The actual measure of a dyadic interval is the integral of this density. For example, the measures of the 4 intervals of size ¼ at stage 2 are m₀m₀, m₀m₁, m₁m₀ and m₁m₁.


zeroes or an infinity of ones; in the present application, one must choose the former expansion. An arbitrary dyadic interval Jₖ = I₀.β₁β₂…βₖ of size 2^(−k) consists of all points x ∈ [0, 1] whose binary expansion starts with 0.β₁β₂…βₖ. To give an example, β₁ = 0 if our interval, Jₖ, is in the left half I₀ of the unit interval, and β₁ = 1 if Jₖ lies in the right half I₁. Similarly, β₂ = 0 if Jₖ lies in the left half of I_β₁, etc. Clearly, the measure of the dyadic interval I₀.β₁β₂…βₖ equals

μ₀.β₁β₂β₃…βₖ = ∏ᵢ₌₁ᵏ m_βᵢ = m₀^n₀ m₁^n₁, (B.4)

where n₀ is the number of digits 0 in the address 0.β₁β₂β₃…βₖ of the left end of the interval, and n₁ = k − n₀ the number of digits 1. Since the binomial measure of each dyadic interval of size 2^(−k) is the result of a multiplication of k multipliers m_β, it is called a measure generated by a multiplicative process.

The binomial multifractal measure is the measure μ_B which attributes masses according to equation (B.4) to the dyadic subintervals of the unit interval. (Note for the mathematically minded: the measures μ₀.β₁β₂β₃…βₖ of all dyadic subintervals of the unit interval can be extended to a Borel field of subsets of [0, 1].) The multiplicative cascade is a mechanism for producing this measure. In the case m₀ = m₁ = ½ this measure reduces to the uniform (Lebesgue) measure.
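The cascade just described is easy to simulate. The following minimal sketch (our illustration, not code from the text) generates the stage-k masses, checks mass conservation, and verifies equation (B.4) against the cascade; the multipliers m₀ = ⅓, m₁ = ⅔ are those of figure B.2.

```python
# Minimal sketch of the multiplicative cascade generating the binomial
# measure, with a check of equation (B.4): the measure of a dyadic
# interval is m0**n0 * m1**n1, where n0 and n1 count the 0s and 1s in
# its address 0.b1b2...bk.
m0, m1 = 1 / 3, 2 / 3   # the multipliers used in figure B.2

def cascade(k):
    """Return the masses of the 2**k dyadic intervals at stage k."""
    masses = [1.0]
    for _ in range(k):
        # Each interval splits; its mass is fragmented by m0 and m1.
        masses = [m * f for m in masses for f in (m0, m1)]
    return masses

def measure_from_address(bits):
    """Equation (B.4): mu = m0**n0 * m1**n1 for address 0.b1b2...bk."""
    n0 = bits.count(0)
    return m0 ** n0 * m1 ** (len(bits) - n0)

stage2 = cascade(2)   # [m0*m0, m0*m1, m1*m0, m1*m1], as in figure B.2
assert abs(sum(stage2) - 1.0) < 1e-12               # mass is conserved
assert abs(stage2[1] - measure_from_address([0, 1])) < 1e-12
```

Interval number i at stage k corresponds to the address given by the k binary digits of i, so the cascade list ordering and equation (B.4) agree entry by entry.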

B.2.2 The Pascal Triangle and the Binomial Measure

This section is a digression that can be skipped with no loss of continuity.

When viewed in a mirror, the distribution in figure 8.32 closely resembles the density of the binomial measure, as shown in figure B.2. This is not a coincidence. Remember that the height of the r-th column in figure 8.32 equals the number h(r) of black squares in the r-th row of the Pascal triangle (mod 2) in figure 8.14. Turning the page 90° counterclockwise (so that the columns point upwards) and looking only at the black geometry of the triangle, one sees that the total number of black squares in the rows [4, 7] is twice that in the rows [0, 3], i.e., Σᵣ₌₄⁷ h(r) = 2 Σᵣ₌₀³ h(r). This factor of 2 turns out to be ubiquitous in this application. Every time one considers a block of rows [0, 2ᵏ − 1], its right half [2^(k−1), 2ᵏ − 1] contains twice as many black squares as its left half [0, 2^(k−1) − 1]. If this left half, in turn, splits into two halves of size 2^(k−2), one again finds the ratio 1 : 2 between the left and right, etc. To conclude: up to an overall factor, the numbers of black triangles in the columns of figure 8.14 have the same structure as those of the mirrored binomial multifractal in the k = 4 stage of the multiplicative cascade shown in figure B.2. In general, one finds that 3^(−k) h(r) is the binomial measure of the dyadic interval [r2^(−k), (r + 1)2^(−k)] for r = 0, …, 2ᵏ − 1. This highly visual analogy establishes a relationship between the distribution h(r) of the Pascal triangle and the binomial measure μ with multipliers m₀ = ⅓ and m₁ = ⅔.

The formal connection between the triangle and the binomial multiplicative process follows from an earlier result (section 8.5). Equation (8.18) states that for r = β₀ + β₁2 + β₂2² + … + βₖ2ᵏ with βᵢ ∈ {0, 1} (that is, if β₀β₁…βₖ is the binary expansion of r), one has

h(r) = 2^n₁,

where n₀ is the number of 0's in the expansion of r and n₁ is the number of 1's. Comparing this with equation (B.4), one finds h(r) = 3ᵏ μ₀.β₁β₂…βₖ, where μ is the binomial measure with m₀ = ⅓ and m₁ = ⅔. Note that we replaced the digit "β₀" by "0." to emphasize that 0.β₁β₂…βₖ stands for the dyadic subinterval of the unit interval as discussed in the previous section.
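The relation 3^(−k) h(r) = μ([r2^(−k), (r + 1)2^(−k)]) can be verified directly. The sketch below (our construction, not the book's code) counts the odd entries in each row of the Pascal triangle, which are exactly its black squares mod 2, and compares the count with equation (B.4) for m₀ = ⅓, m₁ = ⅔:

```python
# Check of the connection described above: h(r), the number of odd
# entries in row r of Pascal's triangle, equals 3**k times the binomial
# measure (m0 = 1/3, m1 = 2/3) of the r-th dyadic interval at stage k.
from math import comb

def h(r):
    """Number of black squares (odd entries) in row r of Pascal's triangle."""
    return sum(1 for j in range(r + 1) if comb(r, j) % 2 == 1)

k = 4
m0, m1 = 1 / 3, 2 / 3
for r in range(2 ** k):
    bits = [(r >> (k - 1 - i)) & 1 for i in range(k)]   # binary digits of r
    n0 = bits.count(0)
    mu = m0 ** n0 * m1 ** (k - n0)                       # equation (B.4)
    assert h(r) == round(3 ** k * mu)
```

For instance, row 3 (entries 1, 3, 3, 1) is entirely odd, and indeed 3⁴ · (⅓)²(⅔)² = 4.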

It is important to mention in passing that both the binomial measure and the hitting probability of the IFS on the Sierpinski gasket are invariant measures of the Markov operator M(ν) discussed on page 331.

The binomial measure μ_B with multipliers m₀ and m₁ is the invariant measure of the Markov operator M(ν) = m₀ ν∘w₁⁻¹ + m₁ ν∘w₂⁻¹, with

w₁(x) = ½x
w₂(x) = ½x + ½.

That is, M(μ_B) = μ_B. The trinomial measure in figure B.1, associated with the random Sierpinski IFS in section 6.3, is the invariant measure of M(ν) = 0.5 ν∘w₁⁻¹ + 0.3 ν∘w₂⁻¹ + 0.2 ν∘w₃⁻¹, with

w₁(x, y) = (½x, ½y)
w₂(x, y) = (½x + ½, ½y)
w₃(x, y) = (½x + ¼, ½y + ¼√3).
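The invariance M(μ_B) = μ_B can be checked numerically on dyadic intervals. In the sketch below (an illustration, not code from the text), the stage-(k−1) masses are pulled back through w₁ and w₂ with weights m₀, m₁ and compared with the stage-k masses; a dyadic interval in the left half of [0, 1] is reached only through w₁, one in the right half only through w₂.

```python
# Numerical sketch of the invariance M(mu_B) = mu_B for the Markov
# operator with maps w1(x) = x/2, w2(x) = x/2 + 1/2 and weights m0, m1.
m0, m1 = 1 / 3, 2 / 3   # the multipliers used in figure B.2

def binomial_measure(k):
    """Masses of the 2**k dyadic intervals at stage k (equation (B.4))."""
    masses = [1.0]
    for _ in range(k):
        masses = [m * f for m in masses for f in (m0, m1)]
    return masses

k = 6
prev, cur = binomial_measure(k - 1), binomial_measure(k)
half = 1 << (k - 1)
for i in range(1 << k):
    # Interval i lies in the left half iff its leading address digit is 0;
    # only the corresponding w_j**-1 pulls it back into [0, 1].
    if i < half:
        pulled_back = m0 * prev[i]            # w1 branch
    else:
        pulled_back = m1 * prev[i - half]     # w2 branch
    assert abs(pulled_back - cur[i]) < 1e-12  # M(mu_B) agrees with mu_B
```

The same bookkeeping in two dimensions, with three maps and weights 0.5, 0.3, 0.2, would check the invariance of the trinomial measure of figure B.1.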

B.2.3 Self-Similarity and Singularities

The introduction briefly discussed the notion of self-similarity in the case of an IFS. We now discuss this notion in the case of the binomial measure. The measure of the arbitrarily selected interval I₀.β₁β₂…βₖ is μ₀.β₁β₂β₃…βₖ times smaller than that of the original unit interval, which had mass 1. Apart from this overall difference, the mass in both these intervals is fragmented in exactly the same way. That is, consider the mass distribution on the


interval I_β₁β₂β₃…βₖ of the stage k + k′ of the multiplicative cascade; then by spatially rescaling this subinterval by a factor 2ᵏ, and renormalizing its mass by a factor (μ_β₁β₂β₃…βₖ)⁻¹, one recovers the mass distribution on the whole interval at the stage k′ of the cascade. It is in this sense that the mass distribution (or the measure) is said to be self-similar. We now show that such self-similar measures are very singular, and discuss in more detail the notions of Hölder exponents α mentioned in the introduction.

For the uniform measure (the Lebesgue measure), the density is 1 everywhere in [0, 1]; i.e., the Lebesgue measure of an interval of size ε is ε. For the binomial measures, the situation is altogether different. Near x = 0, equation (B.4) shows that μ([0, 2^(−k)]) = m₀ᵏ = (2^(−k))^v₀ with v₀ = −log₂ m₀. That is, the measure in the neighborhood of 0 scales as

μ([0, ε]) ~ ε^α, with α = v₀.

The density μ/ε scales like ε^(α−1), and if α ≠ 1 the limit of the density as ε → 0 is degenerate, equal to either 0 or ∞.

When the measure in the ε-neighborhood of a point scales as a power law in the limit ε → 0, the exponent α of this law is called the local Hölder exponent. (Alternative terms, such as singularity index or singularity strength [205], are also encountered in the physics literature.) That is, given a point x in the support of the measure, the local Hölder exponent is defined as

α(x) = lim_{ε→0} log μ(B_x(ε)) / log ε, (B.5)

where B_x(ε) is a ball of size ε around x. When the limit fails to exist, we shall say that the local Hölder exponent is undefined. (The mathematician's more elaborate local definition replaces lim by a limsup. This replacement broadens the cases where the Hölder exponent is defined locally; but this detail cannot be discussed here.)

In most practical applications, the limit ε → 0, which enters in the definition of the local Hölder exponent, cannot be taken. One must, instead, work with the concept of coarse (or coarse-grained) Hölder exponent. This is a number attributed to each finite interval. For any box B(ε) of size ε, the coarse-grained Hölder exponent is defined as

α = log μ(B(ε)) / log ε. (B.6)

Thus, the concept of Hölder exponent has a local and a coarse version. Both enter into the theory of multifractals, but the coarse Hölder exponent plays an especially central role. As mentioned in the introduction, α serves to label the boxes covering the set supporting a measure, thereby allowing a separate counting for each value of α. For dyadic intervals Jₖ of size 2^(−k),


equations (B.4) and (B.6) yield

α(0.β₁β₂…βₖ) = log μ₀.β₁β₂…βₖ / (−k log 2) = log(m₀^n₀ m₁^(k−n₀)) / (−k log 2) = (n₀/k) v₀ + ((k − n₀)/k) v₁, (B.7)

where

v₁ = −log₂ m₁ and v₀ = −log₂ m₀. (B.8)

Let φ₀ = φ₀(Jₖ) be the fraction n₀/k of 0's in the address 0.β₁β₂…βₖ of interval Jₖ. Equation (B.7) becomes

α = φ₀ v₀ + (1 − φ₀) v₁ = φ₀ α_min + (1 − φ₀) α_max. (B.9)

The notation α_min and α_max is explained shortly. It follows that the coarse Hölder exponent α of an interval Jₖ only depends on the fraction φ₀ of digits 0 in its address 0.β₁β₂…βₖ. Note that we use the same symbol α for both the local Hölder exponent (equation (B.5)) and the coarse Hölder exponent (equation (B.6)). When present, the argument of the former is a point on the support of the measure, and that of the latter the address of a dyadic interval. In both cases φ₀ in equation (B.9) is the fraction of zeroes in their binary address.

Without loss of generality, we can take m₁ ≤ m₀, so that v₀ ≤ v₁ and equation (B.7) yields v₀ ≤ α ≤ v₁. (The same restriction also applies to the local Hölder exponent.) The extreme values of α are usually denoted by α_min and α_max, i.e.,

α_min = v₀ and α_max = v₁.

This restriction on the values of the coarse Hölder exponent is independent of the size 2^(−k) of the dyadic intervals; hence, it is independent of the scale on which the fractal measure is probed. This makes α an ideal index with which to mark the boxes of any size covering the set supporting the measure. (Equation (B.22) will provide an alternative way of rewriting equation (B.7), emphasizing the role of the coarse Hölder exponent in transforming a multiplicative process into an additive process.)

Keep the fraction φ₀ = n₀/k of 0's in equation (B.9) constant, and let k → ∞, thereby squeezing Jₖ to a point. By permuting the order of appearance of the digits 0 and 1, increasingly many points x ∈ [0, 1] are found with fixed Hölder exponent α(x) = φ₀α_min + (1 − φ₀)α_max. In the case of the binomial, the extreme values α_min = v₀ and α_max = v₁ continue to be attained (respectively) in the left- and the right-most dyadic subinterval of the unit interval. But this is a peculiarity of the binomial measure; in general the minimal and maximal coarse and local Hölder exponents can lie anywhere in the support of the measure.
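Equations (B.6) and (B.9) are easy to verify numerically. The sketch below (ours, not the book's) computes the coarse Hölder exponent of every dyadic interval at stage k and checks that it equals φ₀v₀ + (1 − φ₀)v₁ and stays inside [α_min, α_max]:

```python
# Coarse Hoelder exponents (equation (B.6)) of all dyadic intervals at
# stage k of the binomial measure, checked against equation (B.9).
from math import log2

m0, m1 = 2 / 3, 1 / 3          # m1 <= m0, so v0 <= v1 as in the text
v0, v1 = -log2(m0), -log2(m1)  # alpha_min and alpha_max
k = 8

for i in range(2 ** k):
    bits = [(i >> (k - 1 - j)) & 1 for j in range(k)]   # interval address
    n0 = bits.count(0)
    mu = m0 ** n0 * m1 ** (k - n0)     # equation (B.4)
    alpha = log2(mu) / (-k)            # equation (B.6) with eps = 2**-k
    phi0 = n0 / k
    # Equation (B.9): alpha depends only on the fraction phi0 of zeros.
    assert abs(alpha - (phi0 * v0 + (1 - phi0) * v1)) < 1e-9
    assert v0 - 1e-9 <= alpha <= v1 + 1e-9
```

As the text notes, the bounds v₀ ≤ α ≤ v₁ do not depend on k, so the same check passes at every stage.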

B.2 The Binomial and Multinomial Measures 933

Singular Distributions. A measure $\mu$ on [0, 1] has a density $\rho(x)$ at a point $x$ if

$$\lim_{\varepsilon\to 0} \frac{\mu(B_x(\varepsilon))}{\varepsilon} = \rho(x)$$

exists and satisfies $0 \le \rho(x) < \infty$. If $\alpha(x)$ is the local Hölder exponent at $x \in [0,1]$, then $\rho(x) \sim \lim_{\varepsilon\to 0} \varepsilon^{\alpha(x)-1}$. At points $x$ where $\alpha(x) \ne 1$ the density of the binomial measure is singular. Section B.4.2 will show that, with probability 1, the local Hölder exponent of a randomly picked point in the support [0, 1] of the binomial measure is

$$\bar\alpha = (v_0 + v_1)/2 = -\tfrac{1}{2}\log_2(m_0 m_1).$$

A measure for which $\bar\alpha = 1$ occurs when, and only when, $m_0 = m_1 = \tfrac12$, in which case the binomial measure reduces to the uniform (Lebesgue) measure. In the interesting cases, $m_0 \ne \tfrac12$, and the density is almost everywhere either 0 or $\infty$; hence the measure is called singular. For reasons which will become apparent later on, $\bar\alpha$ is usually denoted by $\alpha_0$ or $\alpha(0)$.

When the local density is singular, it remains possible to define an $\varepsilon$-coarse-grained density, by covering the set supporting the measure with boxes $B(\varepsilon)$ of size $\varepsilon$ and attributing the coarse density $\mu(B)/\varepsilon^E$ to each box ($E$ is the dimension of the box). Figure B.2 shows an example of a sequence of coarse-grained densities for the binomial measure.

B.2.4 The f(α) Curve of the Binomial Measure

The f(α) curve describes the distribution of the coarse-grained Hölder exponents. To introduce this function, we first compute the number $N_k(\alpha)$ of intervals $J_k$ of size $2^{-k}$ with coarse Hölder exponent $\alpha$. Equation (B.9) shows that the value of $\alpha$ is determined by the frequency $\varphi_0$ of zeros in the address of the interval and, conversely, that to each $\alpha$ corresponds a unique $\varphi_0(\alpha)$. Thus, the number of intervals with coarse Hölder exponent $\alpha$ is given by the number of ways one can distribute $n_0 = \varphi_0(\alpha)k$ zeros among $k$ positions. This is the binomial coefficient

$$N_k(\alpha) = \binom{k}{n_0} = \frac{k!}{n_0!\,(k-n_0)!}.$$

This fact explains why the term binomial is applied to this distribution and measure. To simplify, write $z$ instead of $\varphi_0$, and apply Stirling's approximation $k! \approx \sqrt{2\pi k}\,(k/e)^k$. It yields

$$\binom{k}{zk} = \frac{k!}{(zk)!\,(k-zk)!} \approx \frac{\sqrt{2\pi k}\;k^k}{\sqrt{2\pi zk}\,(zk)^{zk}\;\sqrt{2\pi(k-zk)}\,(k-zk)^{k-zk}}.$$

Many terms cancel out, leaving

$$\binom{k}{zk} \approx \frac{2^{k\,g(z)}}{\sqrt{2\pi k\,z(1-z)}}$$
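As a sanity check, the cancellation above can be verified numerically. The following sketch (variable names and test values are ours, not the text's) compares the exact binomial coefficient with the approximation $2^{kg(z)}/\sqrt{2\pi k z(1-z)}$:

```python
import math

def stirling_binom(k, z):
    # Approximate C(k, z*k) by 2^(k*g(z)) / sqrt(2*pi*k*z*(1-z)),
    # where g(z) = -log2(z^z * (1-z)^(1-z)) as in the text
    g = -(z * math.log2(z) + (1 - z) * math.log2(1 - z))
    return 2.0 ** (k * g) / math.sqrt(2 * math.pi * k * z * (1 - z))

k, z = 200, 0.3                       # z*k must be an integer
exact = math.comb(k, int(z * k))
print(stirling_binom(k, z) / exact)   # close to 1 for large k
```

The relative error shrinks like $1/k$, as expected of Stirling's formula.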

934 B Multifractal Measures

Figure B.3: The f(α) curve for the binomial measure (plotted for specific fragmentation ratios $m_0$ and $m_1$). At each level of coarse graining of the measure, one can distinguish intervals with different coarse Hölder exponents α. As the coarse-graining box sizes become smaller, the number $N_\varepsilon(\alpha)$ of boxes with a given α increases as $N_\varepsilon(\alpha) \sim \varepsilon^{-f(\alpha)}$.

with $g(z) = -\log_2\!\bigl(z^z (1-z)^{1-z}\bigr)$. Using equation (B.9) to eliminate $\varphi_0 = z$ in favor of the variable $\alpha$, we find

$$N_k(\alpha) \sim \left(2^{-k}\right)^{-f(\alpha)} \qquad (B.10)$$

with

$$f(\alpha) = -\frac{\alpha_{\max}-\alpha}{\alpha_{\max}-\alpha_{\min}}\,\log_2\!\left(\frac{\alpha_{\max}-\alpha}{\alpha_{\max}-\alpha_{\min}}\right) - \frac{\alpha-\alpha_{\min}}{\alpha_{\max}-\alpha_{\min}}\,\log_2\!\left(\frac{\alpha-\alpha_{\min}}{\alpha_{\max}-\alpha_{\min}}\right).$$

The graph of f(α) is shown in figure B.3. Expanding f(α) around $\alpha = \alpha_0 = (\alpha_{\min}+\alpha_{\max})/2$ using the approximation $\ln(1+x) \approx x - x^2/2$ yields, for $|\alpha - \alpha_0| \ll 1$,

$$f(\alpha) \approx 1 - \frac{2}{\ln 2}\left(\frac{\alpha-\alpha_0}{\alpha_{\max}-\alpha_{\min}}\right)^{2}. \qquad (B.11)$$

The f(α) curve has the following noticeable properties:

(a) It is defined for $0 < \alpha_{\min} < \alpha < \alpha_{\max} < \infty$, and $f(\alpha) \ge 0$.
(b) The maximum of f(α) is attained at a single value of α, called $\alpha_0$.
(c) The curve is symmetric about this maximum.
(d) The local behavior of f(α) near the maximum is quadratic.
(e) The curve lies below the bisector defined by $f(\alpha) = \alpha$, with contact at $\alpha = \alpha_1$.
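For concreteness, these properties can be checked numerically. The sketch below evaluates f(α) for the binomial measure; the fragmentation ratios $m_0 = 1/4$, $m_1 = 3/4$ are our own illustrative choice, not values taken from the text:

```python
import math

def f_binomial(alpha, a_min, a_max):
    # f(alpha) of the binomial measure, alpha strictly between a_min and a_max
    z = (a_max - alpha) / (a_max - a_min)   # this is phi_0
    return -z * math.log2(z) - (1 - z) * math.log2(1 - z)

m0, m1 = 0.25, 0.75                         # assumed illustrative ratios
a_min = -math.log2(max(m0, m1))             # exponent of the larger multiplier
a_max = -math.log2(min(m0, m1))
a0 = (a_min + a_max) / 2

print(f_binomial(a0, a_min, a_max))         # maximum value, close to 1
# symmetry about a0:
print(f_binomial(a0 - 0.2, a_min, a_max) - f_binomial(a0 + 0.2, a_min, a_max))
```

The maximum equals 1, the dimension of the support [0, 1], and the curve is symmetric about $\alpha_0$, in line with properties (b)-(d).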

One may wonder whether these features are typical for self-similar mea­sures. The answer is no. Upcoming sections touch upon these properties.



For finite k, the above equations only hold when $k\varphi_0$ is an integer $h$ with $0 \le h \le k$. Let $\varphi_0' = (h-1)/k$ and $\varphi_0'' = (h+1)/k$. For fixed k no value of α is attained strictly between $\alpha(\varphi_0'')$ and $\alpha(\varphi_0')$. From equation (B.9) it follows that $\alpha(\varphi_0') - \alpha(\varphi_0'') = 2(v_1 - v_0)/k$, independent of h. Equation (B.10) should therefore be read as saying that the number $N_k(\alpha)\,\Delta_k$ of intervals $J_k$ with a coarse Hölder exponent between α and $\alpha + \Delta_k$ scales like $(2^{-k})^{-f(\alpha)}$. Asymptotically, $N_k(\alpha)\,d\alpha$ is the number of intervals with a coarse Hölder exponent between α and $\alpha + d\alpha$.

The role of f(α) as a scaling exponent (equation (B.10)) suggests that f(α) is a kind of box-counting dimension. (The fact that there is a multitude of different values of α, each with a different f(α), is one reason for the term multifractal.) However, a closer look shows that f(α) is not really a box-counting dimension. The box-counting dimension refers to coverings of one fixed set by boxes of increasingly small size, nested within each other. This is not the case for the boxes with a given coarse Hölder exponent: indeed, a box of size $2^{-(k-1)}$ with coarse Hölder exponent α contains sub-boxes of size $2^{-k}$ with different values of α.

Hausdorff Dimension Versus Box-Counting Dimension. Another way of visualizing multifractality is to consider all the points x in the support of the measure for which the local Hölder exponent is α (equation (B.5)). For each value of α this defines a set $A_\alpha \subset [0,1]$. It is not difficult to show that the box-counting dimension of each of the sets $A_\alpha$ of the binomial measure is 1 (see the proof at the end of this paragraph). Hence, instead of the box-counting dimension, we need the concept of Hausdorff dimension. It is beyond the scope of this appendix to go beyond simply stating that for a class of multifractal measures (including the above binomial measure) it has been rigorously shown [268] that the Hausdorff dimension of $A_\alpha$ is f(α) (reference [12] includes a proof in the case of the binomial measure).

Proof that the box-counting dimension of $A_\alpha$ is 1. It suffices to show that each dyadic subinterval $J_k$ of the unit interval contains points with any local Hölder exponent between $\alpha_{\min}$ and $\alpha_{\max}$. Take any value $\alpha' \in [\alpha_{\min}, \alpha_{\max}]$. Let the binary address of $J_k$ be $0.\beta_1\beta_2\ldots\beta_k$. Remember that every $x \in [0,1]$ whose binary expansion starts with the digits $0.\beta_1\beta_2\ldots\beta_k$ lies in $J_k$. So any choice of an infinite sequence of digits $\beta_{k+1}\beta_{k+2}\ldots$ whose fraction $\varphi_0$ of 0's satisfies equation (B.9) for $\alpha = \alpha'$ yields a point $x \in J_k$ with binary expansion $0.\beta_1\beta_2\ldots\beta_k\beta_{k+1}\beta_{k+2}\ldots$ having $\alpha(x) = \alpha'$.
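The construction in the proof is easy to mimic numerically. The sketch below (with assumed illustrative ratios $m_0 = 1/4$, $m_1 = 3/4$) builds an address whose fraction of zeros is $\varphi_0 = 1/3$ and checks that the coarse exponent of equation (B.7) hits the prescribed target:

```python
import math

m0, m1 = 0.25, 0.75                      # assumed fragmentation ratios
v0, v1 = -math.log2(m0), -math.log2(m1)

def coarse_exponent(bits):
    # Coarse Hoelder exponent of the interval with address 0.b1...bk:
    # alpha = (n0*v0 + n1*v1) / k, as in equation (B.7)
    k, n0 = len(bits), bits.count(0)
    return (n0 * v0 + (k - n0) * v1) / k

target = (v0 + 2 * v1) / 3               # alpha for phi_0 = 1/3
bits = [0, 1, 1] * 2000                  # periodic address with phi_0 = 1/3
print(abs(coarse_exponent(bits) - target))   # 0 up to rounding
```

Permuting the digits of `bits` leaves the exponent unchanged, which is exactly how the text produces many points with the same Hölder exponent.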

B.2.5 Multinomial Measures and the Legendre Transforms

In a multinomial cascade [64, 65, 239], the base b satisfies $b > 2$ rather than $b = 2$. Each stage of the construction redistributes mass or measure over b equally sized intervals, following the fragmentation ratios $m_0, m_1, \ldots, m_{b-1}$ with $\sum_{i=0}^{b-1} m_i = 1$. We already encountered a trinomial multifractal (b = 3) in the Introduction, namely, an IFS-generated trinomial measure on the Sierpinski gasket, with multipliers $m_0 = p_1$, $m_1 = p_2$ and $m_2 = p_3$. Here we restrict ourselves to multinomial measures supported by [0, 1].

Figure B.4: Rough idea of the domain of $(\alpha, \delta)$ for a multinomial multifractal with b = 4. The domain's upper boundary defines the function f(α). Here, all the $m_i$ are different, $\alpha_{\min} = \min(v_0, \ldots, v_{b-1}) > 0$ and $\alpha_{\max} = \max(v_0, \ldots, v_{b-1}) < \infty$.

To show how the notion of f(α) generalizes to these measures, let $I_k \subset [0,1]$ be an arbitrary b-adic interval of size $b^{-k}$, and let its base-b address be $0.\beta_1\beta_2\ldots\beta_k$ with $\beta_i \in \{0, 1, \ldots, b-1\}$. Denote by $\varphi$ the point in $E = b$ dimensional Euclidean space whose coordinates are the frequencies $\varphi_i$ of the digits i in this address. The combinatorics of the binomial measure and the use of the Stirling approximation generalize to every interval $I_k$ characterized by $\varphi$. For the measure of such an $I_k$ and its coarse Hölder exponent one obtains

$$\mu(I_k) = \prod_{i=0}^{b-1} m_i^{\varphi_i k}, \qquad \alpha = -\sum_{i=0}^{b-1} \varphi_i \log_b m_i.$$

For the number of such intervals one finds $N_k(\varphi) \sim (b^{-k})^{-\delta}$, with

$$\delta = -\sum_{i=0}^{b-1} \varphi_i \log_b \varphi_i.$$

In the binomial case, a single $\delta = f(\alpha)$ could be deduced from the value of α, but this possibility is not available here. A given α indeed allows a host of possible sets of values of the $\varphi_i$, constrained by $\sum \varphi_i = 1$ and $\alpha = -\sum \varphi_i \log_b m_i$. After the points $(\alpha, \delta)$ corresponding to all the values of α have been combined, the result is a domain of the plane [239], as shown in black in figure B.4.

There is a powerful heuristic way of replacing this domain by its upper boundary. This heuristic also provides the shortest path towards the Legendre transforms, which are essential in the study of multifractals. (A full mathematical justification will be given in section B.4.4.) The key step is to argue that, for a given value of α, the δ's are dominated by the largest among them, which we denote by f(α). This requires solving for the point $\varphi$ that maximizes $-\sum \varphi_i \log_b \varphi_i$, given $-\sum \varphi_i \log_b m_i = \alpha$ and $\sum \varphi_i = 1$. To solve this problem, the classical method consists of using Lagrange multipliers [52]. It introduces a multiplier q, with $-\infty < q < \infty$, and yields

$$\varphi_i = \frac{b^{\,q\log_b m_i}}{\sum_j b^{\,q\log_b m_j}} = \frac{m_i^q}{\sum_j m_j^q},$$

and thus that

$$\alpha(q) = -\sum_i \left(\frac{m_i^q}{\sum_j m_j^q}\right)\log_b m_i$$

and

$$f(\alpha(q)) = -\sum_i \left(\frac{m_i^q}{\sum_j m_j^q}\right)\log_b\!\left(\frac{m_i^q}{\sum_j m_j^q}\right).$$

Here, the quantities $\sum_j m_j^q$ and $\tau(q) = -\log_b \sum_j m_j^q$ play the roles the "partition function" and the "free energy" are known to play in thermodynamics.

In terms of τ(q), the Lagrange multipliers yield

$$\alpha = \frac{\partial \tau(q)}{\partial q} \qquad\text{and}\qquad f(\alpha) = q\,\frac{\partial \tau}{\partial q} - \tau = q\alpha - \tau.$$

As announced, these steps replace the black domain of figure B.4 [239] by its upper boundary, which is the graph of a function f(α) that has all the properties found in the binomial case, except for symmetry.

Knowing τ(q) for all values of q, one can trace all the straight lines of equation $\delta_q(\alpha) = q\alpha - \tau(q)$. These straight lines define f(α) as their envelope, namely

$$f(\alpha) = \min_q\,\bigl(q\alpha - \tau(q)\bigr).$$

This transformation is called a Legendre transform; we shall encounter it repeatedly in various contexts.
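These formulas are straightforward to evaluate. The following sketch (the ratios and base b = 4 are our own illustrative choices) computes τ(q), α(q) and the Legendre transform f(α(q)), and checks two facts the appendix states later: f(α(0)) equals the dimension 1 of the support, and the curve touches the bisector at q = 1:

```python
import math

m = [0.1, 0.2, 0.3, 0.4]          # assumed ratios, summing to 1; base b = 4
b = len(m)

def tau(q):
    # "free energy": tau(q) = -log_b sum_j m_j^q
    return -math.log(sum(mi ** q for mi in m), b)

def alpha(q):
    # alpha(q) = -sum_i phi_i log_b m_i with phi_i = m_i^q / sum_j m_j^q
    w = [mi ** q for mi in m]
    s = sum(w)
    return -sum(wi * math.log(mi, b) for wi, mi in zip(w, m)) / s

def f_of_q(q):
    # Legendre transform: f(alpha(q)) = q alpha(q) - tau(q)
    return q * alpha(q) - tau(q)

print(f_of_q(0.0))                     # 1: the dimension of the support [0, 1]
print(abs(f_of_q(1.0) - alpha(1.0)))   # 0: f touches the bisector at q = 1
```

Since α(q) is decreasing in q, sweeping q from large negative to large positive values traces the whole upper boundary of the black domain.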

In many cases, a graphical approach is illuminating. If the lines represented by $\delta_q(\alpha)$ are traced in green, they merge into a green domain in the $(\alpha, \delta)$ plane which "surrounds" the black domain considered previously.

Beyond the Multinomial Measures. One way to expand the notion of multiplicative process is to allow $b = \infty$. Examples can be found in references [242, 243, 245]; one example is shown in figure B.5. The interesting thing about these measures is that although they are exactly self-similar (a piece, if expanded, looks exactly the same as the whole), the f(α) curve is very different from the ∩ shape encountered for the binomial. One can construct examples for which a) $\alpha_{\min} = 0$ and $\alpha_{\max} = \infty$, b) the maximum is not quadratic, and c) the maximum is not attained at one value of α but over a half-line of α values. We will briefly comment on this in section B.5.


Figure B.5: An example of an exactly self-similar, multiplicatively generated measure for which $\alpha(0) = \infty$ and hence, a fortiori, $\alpha_{\max} = \infty$. It follows that f(α) is defined for all $\alpha > \alpha_{\min}$. Such measures are called left-sided. The corresponding function τ(q) is not defined for $q < 0$, and the method of moments does not adequately describe the whole measure.

B.2.6 Random Multiplicative Cascades


A second way to expand the notion of multiplicative process is to use random multipliers. When b = 2, such a process proceeds in the same way as the binomial measure, with the crucial difference that each multiplier is the outcome of some probabilistic process, such as throwing dice. Just as most fractal sets in nature are random fractal sets, random multiplicative processes are very useful for modelling real multifractal measures like those in turbulence [233, 142, 263, 254] or diffusion-limited aggregation [316, 246, 174]. For such measures, all properties of f(α) mentioned in the last section may be violated, except that its graph always lies under the bisector $f(\alpha) = \alpha$. Such measures are described in great detail in references [64, 65]. We will postpone a brief discussion of them to section B.5.

B.3 Methods for Estimating the Function f(α) from Data

It is nice to say that a measure is multifractal if it is self-similar, in the sense described for the binomial measure. In our study of the binomial, the cascade was given, and the quantity $N_k(\alpha)\,d\alpha$ could be evaluated for all k and interpreted as the number of intervals of size $2^{-k}$ in the kth stage of the cascade having a coarse Hölder exponent between α and $\alpha + d\alpha$. But it is important to keep in mind that behind most measures there is no obvious multiplicative cascade! Where one may exist, as perhaps in turbulence or in the distribution of mass in the universe, the cascade is history; the only thing present is the measure it has created. How does one find out whether a given measure is multifractal?

The key is that, when only one stage k of a measure is given, one can reconstruct any previous stage $h < k$ by coarse-graining with intervals of size $2^{-h}$. We shall examine two methods for obtaining an empirical estimate of f(α) for an arbitrary measure.

B.3.1 The Histogram Method

Given a measure μ, the histogram method involves the following steps:

(a) Coarse-grain the measure with boxes of size ε. This yields a collection of boxes $\{B_i(\varepsilon)\}_{i=1}^{N(\varepsilon)}$, where $N(\varepsilon)$ is the total number of boxes needed to cover the set supporting the measure.

(b) With $\mu_i = \mu(B_i)$ the measure of box i, compute the coarse Hölder exponents $\alpha_i = \log \mu_i / \log \varepsilon$.

(c) Make a histogram. That is, subdivide the variable α into bins of suitably small size Δα and estimate the number density $N_\varepsilon(\alpha)$ by recording the number of times $N_\varepsilon(\alpha)\,\Delta\alpha$ that a value of the Hölder exponent falls between α and α + Δα.

(d) Repeat step (c) for different values of the coarse-graining size ε.

(e) Since we expect $N_\varepsilon(\alpha) \sim \varepsilon^{-f(\alpha)}$, plot $-\log N_\varepsilon(\alpha)/\log\varepsilon$ versus α for the different values of ε.

This method suggests that a measure be called multifractal when the resulting plots collapse onto a single curve f(α) for ε small enough.
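The steps above can be sketched compactly for a test case. The following code (our own construction; the ratio $m_0 = 1/4$ and the bin count are assumptions) carries out steps (a)-(c) and (e) for the binomial measure at one level k:

```python
import math
from collections import Counter

def binomial_measure(m0, k):
    # box measures of the binomial measure at coarse-graining eps = 2^-k
    mu = [1.0]
    for _ in range(k):
        mu = [x * f for x in mu for f in (m0, 1.0 - m0)]
    return mu

def f_histogram(m0, k, nbins=30):
    # bin the coarse exponents alpha_i = log mu_i / log eps, then return
    # the estimate -log N(alpha) / log eps for each occupied bin
    eps = 2.0 ** -k
    alphas = [math.log(x) / math.log(eps) for x in binomial_measure(m0, k)]
    lo, hi = min(alphas), max(alphas)
    width = (hi - lo) / nbins
    counts = Counter(min(int((a - lo) / width), nbins - 1) for a in alphas)
    return {lo + (i + 0.5) * width: -math.log(n) / math.log(eps)
            for i, n in counts.items()}

curve = f_histogram(0.25, k=14)
print(max(curve.values()))   # approaches 1, the dimension of [0, 1], as k grows
```

Step (d), repeating for several ε, amounts to calling `f_histogram` with different k and overlaying the resulting curves; their collapse is the multifractality test of step (e).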

We must note that self-similar measures exist [242, 243, 245] for which the collapse to a function f(α) is extraordinarily slow, and largely irrelevant for any physically meaningful ε.

A test of these steps in the case of the binomial measure, and methods to accelerate the convergence are discussed in reference [253].

B.3.2 The Method of Moments

The method of moments [205] is based on a quantity called the partition function (because of analogies with the partition function in the theory of equilibrium thermodynamics). It is defined as

$$\chi_q(\varepsilon) = \sum_{i=1}^{N(\varepsilon)} \mu_i^q, \qquad q \in \mathbb{R}. \qquad (B.12)$$


For example, take the binomial measure and denote by $\chi_q(\varepsilon_k)$ the partition function at coarse-graining box size $\varepsilon_k = 2^{-k}$. An inspection of figure B.2 immediately yields $\chi_q(\varepsilon_0) = 1^q = 1$, $\chi_q(\varepsilon_1) = m_0^q + m_1^q$ and $\chi_q(\varepsilon_2) = (m_0^q + m_1^q)^2$. More generally,

$$\chi_q(\varepsilon_k) = \left(m_0^q + m_1^q\right)^k.$$
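This closed form is easy to confirm by brute force; in the sketch below the ratios and parameters are again our own illustrative choices:

```python
m0, m1, k, q = 0.25, 0.75, 10, 3.0       # assumed ratios and parameters

mu = [1.0]                               # run the cascade down to level k
for _ in range(k):
    mu = [x * f for x in mu for f in (m0, m1)]

chi_direct = sum(x ** q for x in mu)     # equation (B.12) over all 2^k boxes
chi_closed = (m0 ** q + m1 ** q) ** k    # the closed form above
print(abs(chi_direct - chi_closed))      # 0 up to rounding
```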

Returning to equation (B.12) in the general case, let us rewrite the measures $\mu_i$ of the boxes as $\mu_i = \varepsilon^{\alpha_i}$, yielding $\chi_q(\varepsilon) = \sum_{i=1}^{N(\varepsilon)} (\varepsilon^{\alpha_i})^q$. Motivated by the results for the binomial measure, denote by $N_\varepsilon(\alpha)\,d\alpha$ the number of boxes, out of the total $N(\varepsilon)$, for which the coarse Hölder exponent satisfies $\alpha < \alpha_i < \alpha + d\alpha$. Assume, in addition, that there exist constants $\alpha_{\min}$ and $\alpha_{\max}$ such that $0 < \alpha_{\min} < \alpha < \alpha_{\max} < \infty$, and that $N_\varepsilon(\alpha)$ is continuous. Then the contribution to $\chi_q(\varepsilon)$ of the subset of boxes with $\alpha_i$ between α and $\alpha + d\alpha$ is $N_\varepsilon(\alpha)\,(\varepsilon^\alpha)^q\,d\alpha$. Instead of adding the contribution of each box i separately, integrate over $d\alpha$ to add the contributions of subsets whose coarse Hölder exponent is between α and $\alpha + d\alpha$. Thus,

$$\chi_q(\varepsilon) = \int N_\varepsilon(\alpha)\,(\varepsilon^{\alpha})^q\, d\alpha.$$

If $N_\varepsilon(\alpha) \sim \varepsilon^{-f(\alpha)}$, it follows that

$$\chi_q(\varepsilon) = \int \varepsilon^{\,q\alpha - f(\alpha)}\, d\alpha. \qquad (B.13)$$

In the limit $\varepsilon \to 0$, the dominant contribution to the integral comes from α's close to the value that minimizes the exponent $q\alpha - f(\alpha)$. If f(α) is differentiable, the necessary condition for the existence of an extremum is

$$\frac{\partial}{\partial\alpha}\,\{q\alpha - f(\alpha)\} = 0.$$

For a given value of q, the extremum occurs at the value $\alpha = \alpha(q)$ that satisfies

$$\left.\frac{\partial}{\partial\alpha} f(\alpha)\right|_{\alpha=\alpha(q)} = q, \qquad (B.14)$$

and this extremum is a minimum as long as

$$\left.\frac{\partial^2}{\partial\alpha^2} f(\alpha)\right|_{\alpha=\alpha(q)} < 0.$$

Thus, the function f(α) should be cap convex as in figure B.3, and at the value $\alpha = \alpha(q)$ where the minimum is attained, the slope of f(α) is q. Note that by now we have encountered three different arguments for α: i) $x \in [0,1]$, ii) the dyadic address $0.\beta_1\beta_2\ldots\beta_k$ of an interval $J_k$, and iii) the variable q. Only the latter two, which are easily distinguished, will be used in what follows.

B.3 Methods for Estimating the Function f(a) from Data 941

Keeping only the dominant contribution in equation (B.13), and introducing

$$\tau(q) = q\,\alpha(q) - f(\alpha(q)),$$

we find

$$\chi_q(\varepsilon) \sim \varepsilon^{\tau(q)}. \qquad (B.15)$$

For the binomial measures,

$$\tau(q) = \lim_{k\to\infty} \frac{\log\left(m_0^q + m_1^q\right)^k}{\log 2^{-k}} = -\log_2\left(m_0^q + m_1^q\right), \qquad (B.16)$$

and for the multinomial measures

$$\tau(q) = -\log_b \sum_{i=0}^{b-1} m_i^q. \qquad (B.17)$$

Returning to the general case, it is not difficult to show that

$$\frac{\partial}{\partial q}\,\tau(q) = \alpha(q). \qquad (B.18)$$

This shows that f(α) can be computed from τ(q), and vice versa, through the identity

$$f(\alpha(q)) = q\,\alpha(q) - \tau(q). \qquad (B.19)$$

This relation between τ(q) and f(α) has already been encountered in section B.2.5; it is called a Legendre transform. As an example, the Legendre transform of equation (B.16) yields the f(α) of the binomial measure in equation (B.10).

Note that equation (B.14) and the strict cap convexity of f(α) imply that α(q) is a decreasing function of q, so $\alpha_{\min} = \alpha(\infty)$ and $\alpha_{\max} = \alpha(-\infty)$; thus τ(q) should be strictly cap convex. The function τ(q) is sometimes written as $\tau(q) = (q-1)D_q$, where the exponents $D_q$ [209] are called generalized dimensions. (See also the discussion in section 12.6.)

In practice, computing f(α) through the partition function requires the following steps:

(a) Coarse-grain the measure with a covering $\{B_i(\varepsilon)\}_{i=1}^{N(\varepsilon)}$ of boxes of size ε and determine the corresponding box measures $\mu_i = \mu(B_i(\varepsilon))$.

(b) Compute the partition function in equation (B.12) for various values of ε.

(c) Check whether the plots of $\log \chi_q(\varepsilon)$ versus $\log\varepsilon$ are straight lines. If they are straight, τ(q) is the slope of the line corresponding to the exponent q (see equation (B.15)).

(d) Form f(α) by Legendre transforming τ(q) (equation (B.19)).
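Steps (b) and (c) can be sketched as follows for the binomial test measure (ratios assumed by us). Because the log-log plot is exactly straight for this measure, the least-squares slope reproduces τ(q) of equation (B.16):

```python
import math

m0, m1 = 0.25, 0.75                       # assumed ratios of the test measure

def partition(k, q):
    # chi_q at eps = 2^-k, computed from the box measures (step (b))
    mu = [1.0]
    for _ in range(k):
        mu = [x * f for x in mu for f in (m0, m1)]
    return sum(x ** q for x in mu)

def tau_estimate(q, levels=(6, 8, 10, 12)):
    # slope of log chi_q(eps) versus log eps, by least squares (step (c))
    xs = [k * math.log(0.5) for k in levels]        # log eps = -k log 2
    ys = [math.log(partition(k, q)) for k in levels]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

q = 2.0
tau_exact = -math.log2(m0 ** q + m1 ** q)           # equation (B.16)
print(abs(tau_estimate(q) - tau_exact))             # 0 up to rounding
```

For real data the linearity check of step (c) must be taken seriously, as the warning below explains; a fitted slope is meaningless when the points do not fall on a line.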

In real applications, the above steps must be carried out numerically. The usual reason is that there is no way to obtain analytic expressions, for lack of theoretical knowledge about the phenomena. But even if an analytic expression for the partition function is available, it may be too difficult to find an analytic expression for τ(q) or f(α).

When the check described under (c) gives straight lines, the method of moments is justified, and it yields the same f(α) as the histogram method. Moreover, given that moments tend to smooth the data, while the histogram method handles raw data, the method of moments converges much faster.

At this point, we must voice a serious warning. The fact that the work must proceed numerically creates a strong temptation to proceed blindly, by resorting to ready-made computer programs. Unfortunately, some programs fail to include a demanding test of the linearity postulated under (c). Instead, they go ahead and fit a straight line "mechanically," using a criterion such as least squares. While statistically objective in all cases, such fitted τ(q) have no physical meaning unless the points plotted under (c) actually fall on straight lines. As a matter of fact, the literature is cluttered with thoroughly bizarre f(α) curves that were obtained objectively, but mean nothing.

Equation (B.15) shows that, under the assumptions mentioned under equation (B.12), the partition function scales as a function of the box size ε for all $q \in \mathbb{R}$. One then says that "the qth moment" of the measure exists for all q. For example, all moments exist for the binomial and multinomial multifractals. The method of moments suggests saying that a measure is multifractal if, and only if, the function τ(q) exists for all $q \in \mathbb{R}$. But this definition is not satisfactory. Self-similar measures exist [242, 243] for which τ(q) fails to be defined for, say, $q < 0$. Self-similar measures for which all moments exist should therefore be called restricted multifractals, to indicate the existence of a broader class.

B.3.3 Properties of f(α)

Let us briefly review some of the characteristics of f(α) for restricted multifractal measures, i.e., measures for which the method of moments is applicable and the histogram method converges reasonably fast.

Let $A_\alpha(\varepsilon)$ be the subset of the boxes covering the support of the measure having a coarse Hölder exponent between α and $\alpha + d\alpha$. The total measure carried by such a subset is $\mu(A_\alpha(\varepsilon)) = N_\varepsilon(\alpha)\,\varepsilon^\alpha\,d\alpha \sim \varepsilon^{\,\alpha - f(\alpha)}$. Now, the total measure in all boxes is 1. This implies that $f(\alpha) \le \alpha$; the reason is that the contrary would be absurd: if there existed a value of α such that $f(\alpha) > \alpha$, it would follow that $\mu(A_\alpha(\varepsilon)) \to \infty$ for $\varepsilon \to 0$. So an f(α) curve always lies under the bisector. Second, consider the value of α that maximizes $\mu(A_\alpha(\varepsilon))$. If it were true for all α that $f(\alpha) < \alpha$, it would follow for all α that $\mu(A_\alpha(\varepsilon)) \to 0$ as $\varepsilon \to 0$. That would contradict the fact that the total measure is 1. So the f(α) curve should have at least one point in common with the bisector; because f(α) is concave, this point of contact is unique, and occurs where $f'(\alpha) = 1$. This corresponds to q = 1 in the method of moments (see equation (B.14)), and this value of α is denoted by α(1) or $\alpha_1$. The subset of boxes $A_{\alpha(1)}(\varepsilon)$ carries all the measure in the limit $\varepsilon \to 0$; i.e., $\mu(A_{\alpha(1)}(\varepsilon)) \to 1$ for $\varepsilon \to 0$.

Restating equation (B.15) as $\tau(q) = \lim_{\varepsilon\to 0}\bigl(\log \sum_{i=1}^{N(\varepsilon)} \mu_i^q\bigr)/\log\varepsilon$ and using equation (B.18), one finds that

$$\alpha(q) = \lim_{\varepsilon\to 0} \frac{\sum_{i=1}^{N(\varepsilon)} \mu_i^q \log\mu_i}{\bigl(\sum_{i=1}^{N(\varepsilon)} \mu_i^q\bigr)\log\varepsilon}. \qquad (B.20)$$

An equation similar to equation (B.20) can be derived for $f(\alpha(q))$ and can be used to find f(α) directly [157]. In particular,

$$\alpha(1) = f(\alpha(1)) = \lim_{\varepsilon\to 0} \frac{\sum_{i=1}^{N(\varepsilon)} \mu_i \log\mu_i}{\log\varepsilon}.$$

The quantity $-\sum_{i=1}^{N(\varepsilon)} \mu_i \log\mu_i$ is related to entropy and information. Correspondingly, the quantity $f(\alpha(1)) = \alpha(1)$ is called the information dimension of the measure [32, 197, 209, 85].
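For the binomial measure this limit is already exact at every finite level, which makes a compact numerical check possible (ratios $m_0 = 1/4$, $m_1 = 3/4$ and the level k are again our own assumptions):

```python
import math

m0, m1, k = 0.25, 0.75, 12        # assumed ratios and cascade level

mu = [1.0]
for _ in range(k):
    mu = [x * f for x in mu for f in (m0, m1)]

eps = 2.0 ** -k
alpha1 = sum(x * math.log(x) for x in mu) / math.log(eps)
entropy_bits = -(m0 * math.log2(m0) + m1 * math.log2(m1))
print(abs(alpha1 - entropy_bits))   # 0 up to rounding
```

The information dimension α(1) of the binomial measure thus coincides with the entropy, in bits, of the two-valued multiplier distribution.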

f(α(1)) is also the dimension of the set carrying "all" the measure. (This set is called the measure-theoretic support of the measure.) To illustrate this, let us consider the IFS discussed in the Introduction. We know that the visiting process is probabilistically ruled by a trinomial multifractal measure. Let n be the number of times we play the IFS. From the above discussion we conclude that it is possible to find a subset of $(3^k)^{f(\alpha(1))}$ boxes of size $3^{-k}$ whose total number of visits $n_1(n)$ satisfies $n_1(n)/n \to 1$ for $n \to \infty$ and $k \to \infty$. The set $A_{\alpha(1)}$ carries all the measure.

The number of elements in the sets $A_\alpha(\varepsilon)$ is $N_\varepsilon(\alpha) \sim \varepsilon^{-f(\alpha)}$. The set with the maximum number of elements corresponds to the value of α at which f(α) attains its maximum. From equation (B.14) it follows that this maximum should occur for $\alpha = \alpha(0)$. Equation (B.19) yields $f(\alpha(0)) = -\tau(0)$, and equation (B.12) gives $\chi_0(\varepsilon) = N(\varepsilon) \sim \varepsilon^{-D}$. Combining this with equation (B.15) yields $f(\alpha(0)) = -\tau(0) = D$. So the maximum value of the f(α) curve is the box-counting dimension of the geometric support of the measure. Since the number of boxes with coarse Hölder exponent α(0) is, in the limit $\varepsilon \to 0$, infinitely larger than the number with any other value of the coarse Hölder exponent, one expects a randomly picked box of size ε to be of type α(0) (see also section B.4.2). For the binomial measure, equation (B.16) gives $D = -\tau(0) = 1$, which is the dimension of the set [0, 1] supporting the measure. From the symmetry of its f(α) curve (equation (B.10)) it immediately follows that $\alpha(0) = (\alpha_{\min} + \alpha_{\max})/2$.


B.4 Probabilistic Roots of Multifractals. Role of f(α) in Large Deviation Theory

At this point, many questions remain to be raised and answered. Why should an f(α) curve exist for a self-similar measure? Its existence for the binomial measure does not guarantee its existence in general. Indeed, self-similar measures exist for which f(α) exists but is attained very slowly and is not of direct interest. If a useful f(α) does not always exist for self-similar measures, then what about alternative quantitative descriptions? To a large extent, these questions can be answered when the self-similar measures are generated by, or can be mapped onto, multiplicative cascades. The understanding of such measures is largely based on the following basic fact about fractals. Even when a fractal is nonrandom, like the Cantor set, it becomes a random set if its origin is chosen randomly. Similarly, examine the binomial measure in a randomly chosen interval. This measure is a random variable! Therefore, as we shall see, the Hölder exponent can be expressed as a sum of random variables [232, 64, 65]. This fact, again, is true both of random and of nonrandom multifractals. It throws light upon the probabilistic roots of the notion of f(α), and in addition (this is perhaps even more important) it allows one to capture the limitations of f(α), while providing new tools to handle more complicated self-similar measures.

The properties of sums of random variables are a central topic in probability theory. The next section discusses the relevance to multifractals of three theorems dealing with such sums: i) the law of large numbers, ii) the Gaussian central limit theorem, and iii) the large deviations theorem. No familiarity with these subjects is assumed, and the section should serve as an introduction to literature applying more advanced results from probability theory to self-similar measures.

B.4.1 Transformation of a Multiplicative Cascade into an Additive Cascade

Suppose that the dyadic interval $J_k = I_{0.\beta_1\beta_2\ldots\beta_k}$ has been picked randomly. This amounts to picking a random sequence of digits $\beta_1\beta_2\ldots\beta_k$ where each $\beta_i$ is either 0 or 1 with probability $\tfrac12$. Indeed, suppose that $J_k$ has equal probability to lie in the left or right half of the unit interval. For the first digit, this means that $\Pr\{\beta_1 = 0\} = \tfrac12$, and similarly $\Pr\{\beta_1 = 1\} = \tfrac12$. Furthermore, irrespective of whether the chosen dyadic interval $J_k$ turns out to lie in $I_{0.0}$ or in $I_{0.1}$, it will again have equal probability to lie in the left or right half subinterval; that is, also $\Pr\{\beta_2 = 0\} = \Pr\{\beta_2 = 1\} = \tfrac12$.

Equation (B.4) has shown that the measure of a randomly picked dyadic interval $J_k$ is $\mu_{0.\beta_1\beta_2\ldots\beta_k} = \prod_{i=1}^{k} m_{\beta_i}$. Because the $\beta_i$ are random, either 0 or 1, the factors $m_{\beta_i}$ are also random, either $m_0$ or $m_1$. This means that this measure μ is the product of k statistically independent values of a random variable $\mathbf{M}$, which can be either $m_0$ or $m_1$, each with probability $\tfrac12$.

Random Variables. A random variable can be thought of as a "mathematical coin or die". It has prescribed probabilities for yielding certain results when "thrown". Our convention denotes random variables by boldface capital letters. The sample value, i.e., the outcome of a throw, is denoted by the corresponding lowercase letter. For example, the random variable $\mathbf{D}$ meant to represent a real die with six faces would have the probability distribution $\Pr\{\mathbf{D} = d\} = 1/6$ for $d = 1, \ldots, 6$.

So the measure μ of a randomly picked $J_k$ is a sample value of the random variable $\prod_{i=1}^{k}\mathbf{M}_i$, where each random multiplier $\mathbf{M}_i$ has the distribution

$$\Pr\{\mathbf{M} = m_0\} = \Pr\{\mathbf{M} = m_1\} = \tfrac12. \qquad (B.21)$$

Equation (B.6) yields the coarse Hölder exponent $\alpha_k$ of such an interval $J_k$ in the following form, which restates equation (B.7):

$$\alpha_k = \frac{1}{k}\sum_{i=1}^{k} v_{\beta_i}. \qquad (B.22)$$

Here we use the variables $v_\beta$ introduced in section B.2.3 ($v_i = -\log_2 m_i$, $i = 0, 1$). Thus, the coarse Hölder exponent of a random $J_k$ is the random variable

$$\mathbf{H}_k = \frac{1}{k}\sum_{h=1}^{k} \mathbf{V}_h. \qquad (B.23)$$

This is the average of k independently chosen sample values v of a random variable $\mathbf{V}$ with distribution

$$\Pr\{\mathbf{V} = v_0\} = \Pr\{\mathbf{V} = v_1\} = \tfrac12. \qquad (B.24)$$

Let us rephrase the above in terms of a well-balanced coin, that is, one with equal probabilities for heads and tails. A collection of k identical coins $(\mathbf{V}_1, \mathbf{V}_2, \ldots, \mathbf{V}_k)$, each with one face marked $v_0$ and the other $v_1$, is tossed and the sample average $\frac{1}{k}\sum_{h=1}^{k} \mathbf{V}_h$ is computed. These averages have the same distribution as the values of the coarse Hölder exponent of randomly picked dyadic intervals of size $2^{-k}$ in the support [0, 1] of the binomial measure.
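This coin-tossing picture is easy to simulate. The sketch below draws many sample averages $\mathbf{H}_k$ and compares their mean with $E\mathbf{V} = (v_0+v_1)/2$; the ratios $m_0 = 1/4$, $m_1 = 3/4$ and the sample sizes are our own choices:

```python
import math
import random

random.seed(0)
m0, m1 = 0.25, 0.75                  # assumed fragmentation ratios
v0, v1 = -math.log2(m0), -math.log2(m1)
k, n = 50, 20000

# each sample: the average of k fair tosses of a coin marked v0 / v1
samples = [sum(random.choice((v0, v1)) for _ in range(k)) / k
           for _ in range(n)]
mean = sum(samples) / n
print(mean, (v0 + v1) / 2)           # the sample mean is close to EV
```

Each sample is, by the equivalence just described, the coarse Hölder exponent of one randomly picked dyadic interval of size $2^{-k}$.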

The distribution $\Pr\{\mathbf{H}_k \ge \alpha\}$ is closely linked to the distribution $P_\varepsilon(\alpha)\,d\alpha$ discussed in the introduction (equation (B.2)), and will eventually link us back to the method of moments and the histogram method. Two paths are open:

1) One can count the number of coarse Hölder exponents at the kth stage of the multiplicative cascade which are larger than α, then divide this number by the total number $2^k$ of boxes needed to cover the unit interval, or

2) One can make n series of k coin tosses, compute the average for each series, count the number of times this average is larger than α, divide this number by n, and consider the result in the limit $n \to \infty$.

The first is essentially the histogram method. We now follow the second path, using the fact that the distribution of the coarse Hölder exponent at the kth level of the multiplicative cascade is the same as the distribution of the random variable $\mathbf{H}_k$ defined in equation (B.23). This reformulation in terms of sums of independent, identically distributed random variables (equation (B.23)) allows the use of many techniques from probability theory.

B.4.2 Law of Large Numbers and the Role of α₀ as the Most Probable Hölder Exponent

Tossing the coin marked with $v_0$ and $v_1$ k times yields, say, $n_0$ times the value $v_0$ and $n_1 = k - n_0$ times the value $v_1$. Since the probabilities for heads and tails are equal, one expects for $k \to \infty$ that $n_0/k \to \tfrac12$, and similarly for $n_1$. This would mean that the sample average $\frac{1}{k}(n_0 v_0 + n_1 v_1)$ converges as $k \to \infty$ to the expectation

$$E\mathbf{V} = \tfrac12 v_0 + \tfrac12 v_1.$$

Different forms of convergence yield different forms of the law of large numbers. The weak law of large numbers guarantees such convergence when the expectation $E\mathbf{V}$ of $\mathbf{V}$ exists. The strong law of large numbers is more interesting for the theory of multifractals. It states that, almost surely (that is, with probability 1), the sample average converges for $k \to \infty$ to the expectation:

$$\Pr\left\{\lim_{k\to\infty} \frac{1}{k}\sum_{h=1}^{k} \mathbf{V}_h = E\mathbf{V}\right\} = 1.$$

Using the equivalence established in the previous section between the binomial measure and coin tossing, this equation shows that, with probability 1, the local Hölder exponent at a randomly picked point in the support of the binomial measure equals $E\mathbf{V}$; that is,

$$\Pr\left\{\lim_{k\to\infty} \mathbf{H}_k = E\mathbf{V}\right\} = 1. \qquad (B.25)$$

Another way of obtaining this result is to use the fact that the strong law of large numbers implies that the binary expansion of a randomly picked point $0.\beta_1\beta_2\beta_3\ldots$ almost surely has the same frequencies of 0's and 1's, that is, $\varphi_0 = 1/2$ almost surely. Inserting this into equation (B.9) yields $\alpha = E\mathbf{V}$, and equation (B.25) is recovered.


If the law of large numbers had meant that a random choice of x should yield one particular value of α with probability 1, then it should yield the value that occurs most often. Going back to the f(α) curve, this particular value of α is the one that maximizes f(α). In section B.3.3, this special value of the coarse Hölder exponent was denoted by α(0) or $\alpha_0$, because it is the value of α that corresponds to q = 0 in the method of moments. Indeed, for the binomial measure we found $\alpha(0) = (v_0 + v_1)/2$, in agreement with equation (B.25). This establishes a first link between f(α) and the theory of sums of random variables.

The results related to the laws of large numbers only hold pointwise, that is, roughly speaking, in the limit of infinitesimal box sizes ($k \to \infty$). In most physical systems, such limits cannot be attained; therefore their properties are not especially interesting. It is clear that random selection of a large number of boxes of finite size $2^{-k}$ will not always yield coarse Hölder exponents equal to the expected value $E\mathbf{V} = \tfrac12(\alpha_{\min} + \alpha_{\max})$, but all values of the Hölder exponent between $\alpha_{\min}$ and $\alpha_{\max}$. In other words, the deviations from the expected value become important for finite k, and their probability of occurrence must be known. The relevant information is yielded by central limit theorems and, far more importantly, by large deviation theory.

B.4.3 Gaussian Central Limit Theorem and the Shape of f(α) near α₀

The Gaussian central limit theorem goes a bit beyond the law of large numbers. But it too is disappointing, its main role being that of explaining why the maximum of f(α) is often quadratic.

Actually, the term Gaussian central limit theorem covers a variety of distinct results that all conclude that deviations from the expected value have a Gaussian distribution. The specific form we need concerns sums of independent and identically distributed random variables, such as the sum Σ_{h=1}^k V_h that enters in equation (B.23). The basic assumption is that the random addend V is a random variable with a finite expectation EV and a finite second moment EV². (The binomial measure is the standard example, since both EV = (1/2)(v₀ + v₁) and EV² = (1/2)(v₀² + v₁²) are finite.) The theorem states that, in the limit k → ∞, the distribution of the rescaled random variable Y_k = (Σ_{h=1}^k V_h − kEV)/√k converges to the Gaussian distribution with zero mean and variance σ² = EV² − (EV)². That is,

lim_{k→∞} Pr{ (Σ_{h=1}^k V_h − kEV) / (σ√k) ≤ y } = ∫_{−∞}^y G(x) dx,   (B.26)

where the integrand

G(x) = (1/√(2π)) exp{ −(1/2) x² }

948 B Multifractal Measures

is the reduced Gaussian density. (A graph of this density is found on the German 10-Deutschmark bill, together with a portrait of Carl Friedrich Gauss.)

In the coin-tossing experiment that yields either v₀ or v₁, the law of large numbers makes us "ideally" expect an equal number of v₀'s and v₁'s from k tosses. That is to say, when adding k sample values of V, one expects to get the value (k/2)v₀ + (k/2)v₁ = kEV, so that Σ_{h=1}^k V_h − kEV = 0. However, equation (B.26) shows that Σ_{h=1}^k V_h − kEV deviates from the "ideal" value 0 by an amount that scales like √k.
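The √k scaling can likewise be illustrated by simulation. In this sketch (again with the illustrative multipliers m₀ = 0.6, m₁ = 0.4, which are not taken from the text) the rescaled deviation Y_k = (Σ V_h − kEV)/√k shows a spread near σ that does not change as k grows, as equation (B.26) asserts:

```python
import math
import random

# Sketch (illustrative multipliers): the deviation sum(V_h) - k*EV grows
# like sqrt(k), so Y_k = (sum(V_h) - k*EV)/sqrt(k) keeps a k-independent
# spread sigma = sqrt(EV2 - (EV)^2).
m0, m1 = 0.6, 0.4
v0, v1 = -math.log2(m0), -math.log2(m1)
ev = (v0 + v1) / 2                  # EV = alpha_0
sigma = abs(v1 - v0) / 2            # for the fair coin, sigma^2 = EV2 - (EV)^2

rng = random.Random(7)

def y_k(k):
    """One sample of the rescaled deviation Y_k."""
    s = sum(v1 if rng.getrandbits(1) else v0 for _ in range(k))
    return (s - k * ev) / math.sqrt(k)

spread = {}
for k in (100, 400, 1600):
    ys = [y_k(k) for _ in range(2000)]
    spread[k] = math.sqrt(sum(y * y for y in ys) / len(ys))
    print(k, round(spread[k], 3))
```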

Let us now return to the probability density p_k(α) of the coarse Hölder exponent H_k = (1/k) Σ_{h=1}^k V_h. Writing H_k = Y_k/√k + EV = Y_k/√k + α₀, we see that H_k has the variance σ²/k, so that its fluctuations about α₀ scale like 1/√k. Keeping to a finite k, the limit in equation (B.26) yields the approximation

p_k^G(α) dα = (√k / (σ√(2π))) exp{ −(1/2) ((α − α₀)/(σ/√k))² } dα.

Here, the superscript G is meant to remind us that p_k^G(α) is not the actual probability density p_k(α) of the coarse Hölder exponent of the binomial measure, but a Gaussian approximation that applies only for α near α₀.

Very near to the most probable value α₀, equations (B.2) and (B.3) yield the approximation

f(α) ≈ f^G(α) = 1 + (1/k) log₂ p_k^G(α) = 1 − (2/ln 2) ((α − α₀)/(α_max − α_min))².

This approximation agrees with the expansion, equation (B.11), around the maximum of the exact result. But as α moves away from α₀, f^G(α) becomes increasingly larger than the exact f(α). Outside [α_min, α_max], the approximate f^G(α) is grossly inadequate: the exact f(α) is not defined there, but f^G(α) is. Further away from α₀, f^G(α) < 0, which is meaningless for the binomial measure.
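The gap between the Gaussian approximation and the exact spectrum can be made concrete. The sketch below (illustrative multipliers, not from the text) computes the exact f(α) of a binomial measure by the Legendre transform f = qα(q) − τ(q) with τ(q) = −log₂(m₀^q + m₁^q), and compares it with the quadratic approximation f^G; away from q = 0 the quadratic lies above the exact curve:

```python
import math

# Sketch with illustrative multipliers m0, m1 (m0 + m1 = 1).
m0, m1 = 0.6, 0.4
v0, v1 = -math.log2(m0), -math.log2(m1)
alpha0 = (v0 + v1) / 2
sigma2 = ((v1 - v0) / 2) ** 2        # variance of V for the fair coin

def tau(q):
    return -math.log2(m0 ** q + m1 ** q)

def alpha(q):
    """alpha(q) = tau'(q), computed analytically."""
    z = m0 ** q + m1 ** q
    return -(m0 ** q * math.log(m0) + m1 ** q * math.log(m1)) / (z * math.log(2))

def f_exact(q):
    """Exact spectrum via the Legendre transform: f = q*alpha(q) - tau(q)."""
    return q * alpha(q) - tau(q)

def f_gauss(a):
    """Quadratic (Gaussian) approximation around alpha0."""
    return 1.0 - (a - alpha0) ** 2 / (2 * sigma2 * math.log(2))

for q in (0.0, 0.5, 1.5):
    a = alpha(q)
    print(round(a, 3), round(f_exact(q), 4), round(f_gauss(a), 4))
```

At q = 0 both expressions give f(α₀) = 1; for q ≠ 0 the printed f^G value exceeds the exact one, as stated above.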

In summary, the Gaussian central limit theorem shows that the appearance of a quadratic maximum in the f(α) of the binomial measure is not a coincidence. In general, it shows that a quadratic maximum contains no information other than the finiteness of the first and second moments of the logarithm of the multiplier.

B.4.4 Cramér's Large Deviations Theory and f(α)

Take a random variable X with EX < ∞ satisfying Pr{X > EX} > 0. Large deviation theory is concerned with very large fluctuations around the expected value, namely the behavior of

Pr{ (1/k) Σ_{h=1}^k X_h − EX ≥ δ }

as a function of δ and k. The law of large numbers tells us that in the limit k → ∞, Pr{ (1/k) Σ_{h=1}^k X_h − EX = 0 } = 1. So for δ = 0, one expects the above quantity to vanish with speed 0. For all other δ, one expects Pr{ (1/k) Σ_{h=1}^k X_h − EX ≥ δ } to vanish for k → ∞. The question is, "How fast?"

The answer was provided by Harald Cramér in 1938 under special conditions that were gradually weakened by many authors. A survey (with history) is found in the entry on large deviations in reference [55]. Cramér made rigorous use of saddle point approximations that are expressed in heuristic form in the widely used justifications of the method of moments.

We shall proceed in two steps: a detailed study of discrete and finite addends, made possible by a theorem in Chernoff 1948, then a quick sketch of the general case.

Chernoff's Theorem on Large Deviations

Chernoff's theorem applies (among other cases) when the random variable X is discrete and finite, meaning that it can only take a finite number B of values x₁, x₂, …, x_B. Thus, its distribution can be written as Pr{X = x_i} = p_i, i = 1, …, B, with Σ_{i=1}^B p_i = 1. The simplest example of a discrete and finite random variable is when B = b and p_i = 1/b for all i. This case corresponds to the multinomial measures discussed in Section B.2.5.

The random variable X being discrete and finite with EX < 0, one has [13]

lim_{k→∞} (1/k) log Pr{ Σ_{h=1}^k X_h ≥ 0 } = log { inf_q Φ(q) },

where Φ(q) is the moment generating function defined (for our needs) by

Φ(q) = E(e^{−qX ln b}).

The factor ln b has been inserted for later convenience, when b will be the base of an arbitrary multinomial measure.

The Tail Distributions of the Coarse Hölder Exponent

Chernoff's theorem can be used to compute the probability

Pr{ H_k ≥ α } = Pr{ (1/k) Σ_{h=1}^k V_h ≥ α }

that the coarse Hölder exponent of a randomly picked interval of the binomial measure is larger than or equal to α, for α > α₀. Note that this is the probability of finding a deviation larger than or equal to α − α₀ to the right of the most probable value α₀.

Rewrite Pr{ H_k ≥ α } = Pr{ Σ_{h=1}^k (V_h − α) ≥ 0 } and introduce the shifted random variable X = V − α, so that Pr{ H_k ≥ α } = Pr{ Σ_{h=1}^k X_h ≥ 0 }. This X satisfies EX = EV − α = α₀ − α < 0. Since for the binomial measure V is a discrete random variable with B = b = 2, the same is true of X, with Pr{X = v_i − α} = 1/2. Chernoff's theorem applies, and yields

lim_{k→∞} (1/k) log_b Pr{ H_k ≥ α } = log_b { inf_q Φ(q) } = ρ^R(α).   (B.27)


The superscript R refers to deviations to the right of the most probable value α₀, i.e., to α₀ < α. Note that the above also holds for multinomial measures, and that equation (B.27) only makes sense for α ≤ v_max = max{v₀, v₁, …, v_{b−1}}. For later convenience, we divided both sides of the main equation in Chernoff's theorem by log b.

The generating function becomes

Φ(q) = (1/b) Σ_{i=1}^b e^{−q x_i ln b} = (1/b) Σ_{i=1}^b e^{−q(v_i − α) ln b} = e^{qα ln b} (1/b) Σ_{i=1}^b e^{−q v_i ln b}.

From v_i = −log_b m_i, it follows that e^{−q v_i ln b} = m_i^q. Hence,

Φ(q) = e^{qα ln b} (1/b) Σ_{i=1}^b m_i^q = e^{qα ln b} E(M^q),

where M is the random multiplier of equation (B.21). By simple algebra,

ρ^R(α) = inf_q { log_b Φ(q) } = inf_q { αq + log_b E(M^q) }.

Let us step aside to the multinomial case discussed in Section B.2.5 and in equation (B.17). In that case, p_i = 1/b for all i's, hence

τ(q) = −log_b Σ_{i=0}^{b−1} m_i^q = −log_b { b E(M^q) } = −log_b E(M^q) − 1.

Therefore, it is legitimate to generalize the definition of τ(q) to read

τ(q) = −log_b E(M^q) − 1.

We see that ρ^R(α) is the Legendre transform of τ(q) + 1, and

ρ^R(α) = inf_q { qα − τ(q) } − 1.   (B.28)

Chernoff's large deviation theorem then reads

lim_{k→∞} (1/k) log_b Pr{ H_k ≥ α } = ρ^R(α) = inf_q { qα − τ(q) } − 1,   α > α₀.   (B.29)

For α < α₀, all the previous steps can be redone using the shifted variable X = α − V, and one finds

lim_{k→∞} (1/k) log_b Pr{ H_k ≤ α } = ρ^L(α) = inf_q { qα − τ(q) } − 1,   α < α₀.   (B.30)


Identity between ρ(α) and C(α)

The superscripts L and R in equations (B.29) and (B.30) cease to serve a purpose and will henceforth be dropped.

The existence of the infimum of { qα − τ(q) } is guaranteed [13], and one can also show that ρ(α) is smooth and concave. This infimum occurs for the q such that τ′(q) = α = α(q), so that ρ(α(q)) + 1 = qα(q) − τ(q) = qτ′(q) − τ(q). It follows that (d/dα)ρ(α) = q, meaning that the slope of ρ(α) equals q.

The quantity E(M^q) is the q-th moment of the random variable M. One can easily show that these moments exist for all q ∈ R when M is a discrete finite random variable. In particular, the multinomial measure of base b yields E(M^q) = (1/b) Σ_{i=0}^{b−1} m_i^q < ∞ for all q ∈ R. However, when M is not discrete and finite, the moments may diverge; i.e., E(M^q) = ∞ for certain values of q. This happens, for example, in the b = ∞ measure shown in Figure B.8.

The above properties of ρ(α) are very similar to those Section B.3.2 has described for f(α), hence C(α). The only difference is that f(α) and C(α) were defined in equations (B.1) and (B.2) through a number density N_k and a probability density p_k, respectively, while ρ(α) is defined through a theorem concerning tail probabilities. However, in the case of our large deviation probabilities, the densities and tail probabilities happen to have exactly the same behavior, except for corrections that vanish for large k if one takes a logarithm and divides by k. Indeed, observe that, using equation (B.29), the existence of ρ(α) means that

Pr{ H_k ≥ α } ∼ (b⁻ᵏ)^{−ρ(α)}.

The coarse Hölder exponent falls between the values α and α + dα with the probabilities

p_k(α) dα = Pr{ H_k ≥ α } − Pr{ H_k ≥ α + dα }
         ∼ (b⁻ᵏ)^{−ρ(α)} − (b⁻ᵏ)^{−ρ(α+dα)}
         ∼ (b⁻ᵏ)^{−ρ(α)} [ 1 − (b⁻ᵏ)^{ρ(α)−ρ(α+dα)} ].

Since ρ(α) is concave (∩) and α > α(0) puts us on the right side of the maximum, we have ρ(α) > ρ(α + dα), and the second term in the square bracket vanishes for large k. Thus,

p_k(α) dα ∼ (b⁻ᵏ)^{−ρ(α)}.   (B.31)

This same result is found for α < α₀ if one starts with Pr{ H_k ≤ α } and equation (B.30). Comparing equation (B.31) with equation (B.2) yields C(α) = ρ(α).

For a measure supported by a set of box-counting dimension D, the number N_k(α) of boxes with coarse Hölder exponent between α and α + dα is the fraction p_k(α) dα of the total number (b⁻ᵏ)⁻ᴰ of boxes, i.e., N_k(α) dα = (b⁻ᵏ)⁻ᴰ p_k(α) dα. Using equations (B.1) and (B.2) yields f(α) = C(α) + D. For a measure supported by a Euclidean set of dimension E, one finds the result in equation (B.3).

To summarize, we have shown that

ρ(α) = C(α) and f(α) = C(α) + D.

This generalizes in a rigorous fashion the results that Section B.2.5 obtained for the multinomial measures by using Lagrange multipliers, and provides the probabilistic roots of the notions discussed in the introduction in equations (B.1) and (B.2).
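This summary can be checked by brute force for the binomial measure: the number of depth-k intervals with a given coarse Hölder exponent is a binomial coefficient, and (1/k) log₂ of it should approach the Legendre-transform f(α). A sketch, not from the text (illustrative multipliers; `lgamma` supplies the log-factorials):

```python
import math

# Sketch (illustrative multipliers): direct box counting.  The number of
# depth-k dyadic intervals whose coarse Hoelder exponent equals
# a = (n*v1 + (k-n)*v0)/k is binom(k, n); (1/k) log2 of that count should
# approach the Legendre-transform spectrum f(alpha) = inf_q {q*alpha - tau(q)}.
m0, m1 = 0.6, 0.4
v0, v1 = -math.log2(m0), -math.log2(m1)

def f_legendre(a):
    tau = lambda q: -math.log2(m0 ** q + m1 ** q)
    return min((i / 100) * a - tau(i / 100) for i in range(-2000, 2001))

k = 4000
results = []
for n in (1000, 2000, 3000):
    a = (n * v1 + (k - n) * v0) / k
    f_count = (math.lgamma(k + 1) - math.lgamma(n + 1)
               - math.lgamma(k - n + 1)) / (k * math.log(2))
    results.append((a, f_count, f_legendre(a)))
    print(round(a, 3), round(f_count, 4), round(f_legendre(a), 4))
```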

Cramér's large deviation theory in the continuous and/or unbounded cases is very important and would justify a special section of its own. But we can only state that more general Cramér-type theorems exist, and that they provide a full justification of the so-called thermodynamic formalism of multifractals based on Legendre transforms.

B.5 Some Applications, and Advanced Multifractals

Self-similar measures appear in a variety of natural phenomena. In fully developed turbulence, there is strong evidence that the rate of dissipation of kinetic energy is multifractal [233, 142, 263, 254, 192]. Multifractal measures also play an important role in the formation of fractal patterns such as lightning, aggregates, snowflakes (dendritic solidification) and fractal viscous fingering [32, 17, 92]. Related to these latter examples is the self-similarity of the electrostatic charge on fractal boundaries such as the Koch tree [175] or Julia sets [73, 74, 271]. The self-similarity of these measures is due to the interaction of the Laplace equation with a fractal boundary [175, 241]. Other occurrences of multifractals, not all of them self-similar, concern the eigenfunctions of the Schrödinger equation in disordered systems [289], the current distributions in random resistor networks [147, 138] and their hydrodynamic analogues, the distribution of mass in the universe [158], the invariant measures on strange attractors [197, 205, 263, 90, 184] and the distribution of states in the evolution pattern of a class of cellular automata. An example of the latter is the Pascal triangle discussed in Section B.2.2.

In some cases, the function τ(q) is defined for all q's and one has f(α) > 0 for all α. In these cases, the method of moments is sufficient. But, as time goes on, there is increasing evidence that many applications demand more advanced multifractals. Let us mention some of the most frequently encountered cases.

When μ is a nonrandom measure handled by probabilistic methods, one always finds that C(α) > −E, hence f(α) = C(α) + E > 0. But when μ is properly random, there are values of α for which f(α) < 0. This very important possibility has recently become the subject of a significant literature [252, 270, 240, 244, 64, 65].



Left-Sided Multifractal Measures: When the f(α) and the Cramér Plot are Insufficient

Depending on the distribution of the multiplier random variable M, it can easily happen that some of the moments E(M^q) are infinite. A typical case is that the moments do not exist for q < 0. This appears to happen (among others) in the harmonic measure on diffusion limited aggregates [316, 148, 246, 176]. It is also the case for the exactly self-similar multifractal measure shown in Figure B.8. In cases when EV < ∞ but E(V²) = ∞, the Gaussian central limit theorem does not apply, and the left side (α < α(0)) of the maximum of f(α) is not quadratic. The right-hand side (α > α(0)) is a horizontal line with f(α) = D, i.e., the maximum is attained at infinitely many values of α. In cases when E(V) = ∞, the law of large numbers also fails to apply, and since α(0) = E(V) = ∞ the whole left side of the f(α) is infinitely stretched. The Cramér large deviation theorems then only work for deviations to the left of α(0). Alternative central limit theorems apply to these cases. The limits they involve are not Gaussian but Lévy stable [40]. This issue is also important and is discussed in references [246, 64, 65].

Bibliography

1. Books

[1] Abraham, R. H., Shaw, C. D., Dynamics, The Geometry of Behavior, Part One: Periodic Behavior (1982), Part Two: Chaotic Behavior (1983), Part Three: Global Behavior (1984), Aerial Press, Santa Cruz. Second edition Addison-Wesley, 1992.

[2] Aharony, A. and Feder, J. (eds.), Fractals in Physics, Physica D 38 (1989); also published by North Holland (1989).

[3] Allgower, E., Georg, K., Numerical Continuation Methods - An Introduction, Springer-Verlag, New York, 1990.

[4] Arnold, V. I., Ordinary Differential Equations, MIT Press, Cambridge, 1973.

[5] Avnir, D. (ed.), The Fractal Approach to Heterogeneous Chemistry: Surfaces, Colloids, Polymers, Wiley, Chichester, 1989.

[6] Banchoff, T. F., Beyond the Third Dimension, Scientific American Library, 1990.

[7] Barnsley, M., Fractals Everywhere, Academic Press, San Diego, 1988.

[8] Beardon, A. F., Iteration of Rational Functions, Springer-Verlag, New York, 1991.

[9] Becker, K.-H., Dörfler, M., Computergraphische Experimente mit Pascal, Vieweg, Braunschweig, 1986.

[10] Beckmann, P., A History of Pi, Second Edition, The Golem Press, Boulder, 1971.

[11] Bélair, J., Dubuc, S. (eds.), Fractal Geometry and Analysis, Kluwer Academic Publishers, Dordrecht, Holland, 1991.

[12] Billingsley, P., Ergodic Theory and Information, J. Wiley, New York (1967). Reprinted by Robert E. Krieger Publ. Comp., Huntington, New York (1978).

[13] Billingsley, P., Probability and Measure, John Wiley & Sons, New York, Chichester (1979).

[14] Bondarenko, B., Generalized Pascal Triangles and Pyramids, Their Fractals, Graphs and Applications, Tashkent, Fan, 1990, in Russian.

[15] Borwein, J. M., Borwein, P. B., Pi and the AGM - A Study in Analytic Number Theory, Wiley, New York, 1987.

[16] Briggs, J., Peat, F. D., Turbulent Mirror, Harper & Row, New York, 1989.

[17] Bunde, A., Havlin, S. (eds.), Fractals and Disordered Systems, Springer-Verlag, Heidelberg, 1991.

[18] Campbell, D., Rose, H. (eds.), Order in Chaos, North-Holland, Amsterdam, 1983.

[19] Chaitin, G. J., Algorithmic Information Theory, Cambridge University Press, 1987.


[20] Cherbit, G. (ed.), Fractals, Non-integral Dimensions and Applications, John Wiley & Sons, Chichester, 1991.

[21] Collet, P., Eckmann, J.-P., Iterated Maps on the Interval as Dynamical Systems, Birkhäuser, Boston, 1980.

[22] Crilly, A. J., Earnshaw, R. A., Jones, H. (eds.), Fractals and Chaos, Springer-Verlag, New York, 1991.

[23] Cvitanovic, P. (ed.), Universality in Chaos, Second Edition, Adam Hilger, New York, 1989.

[24] Devaney, R. L., An Introduction to Chaotic Dynamical Systems, Second Edition, Addison-Wesley, Redwood City, 1989.

[25] Devaney, R. L., Chaos, Fractals, and Dynamics, Addison-Wesley, Menlo Park, 1990.

[26] Durham, T., Computing Horizons, Addison-Wesley, Wokingham, 1988.

[27] Dynkin, E. B., Uspenski, W., Mathematische Unterhaltungen II, VEB Deutscher Verlag der Wissenschaften, Berlin, 1968.

[28] Edgar, G., Measures, Topology and Fractal Geometry, Springer-Verlag, New York, 1990.

[29] Engelking, R., Dimension Theory, North Holland, 1978.

[30] Escher, M. C., The World of M. C. Escher, H. N. Abrams, New York, 1971.

[31] Falconer, K., The Geometry of Fractal Sets, Cambridge University Press, Cambridge, 1985.

[32] Falconer, K., Fractal Geometry, Mathematical Foundations and Applications, Wiley, New York, 1990.

[33] Family, F., Landau, D. P. (eds.), Aggregation and Gelation, North-Holland, Amsterdam, 1984.

[34] Family, F., Vicsek, T. (eds.), Dynamics of Fractal Surfaces, World Scientific, Singapore, 1991.

[35] Feder, J., Fractals, Plenum Press, New York, 1988.

[36] Fleischmann, M., Tildesley, D. J., Ball, R. C., Fractals in the Natural Sciences, Princeton University Press, Princeton, 1989.

[37] Garfunkel, S., (Project Director), Steen, L. A. (Coordinating Editor) For All Practical Purposes, Second Edition, W. H. Freeman and Co., New York, 1988.

[38] GEO Wissen - Chaos und Kreativität, Gruner + Jahr, Hamburg, 1990.

[39] Gleick, J., Chaos, Making a New Science, Viking, New York, 1987.

[40] Gnedenko, B. V., Kolmogorov, A. N., Limit distributions for sums of independent random variables, Addison-Wesley, Reading (Mass.) - London (1968).

[41] Golub, G. H., Loan, C. F. van, Matrix Computations, Second Edition, Johns Hopkins, Baltimore, 1989.

[42] Guckenheimer, J., Holmes, P., Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer-Verlag, New York, 1983.

[43] Guyon, E., Stanley, H. E. (eds.), Fractal Forms, Elsevier/North-Holland and Palais de la Découverte, 1991.

[44] Haken, H., Advanced Synergetics, Springer-Verlag, Heidelberg, 1983.

[45] Haldane, J. B. S., On Being the Right Size, 1928.

[46] Hall, R., Illumination and Color in Computer Generated Imagery, Springer-Verlag, New York, 1988.

[47] Hao, B. L., Chaos II, World Scientific, Singapore, 1990.

[48] Hausdorff, F., Grundzüge der Mengenlehre, Verlag von Veit & Comp., 1914.

[49] Hausdorff, F., Dimension und äußeres Maß, Math. Ann. 79 (1918) 157-179.

[50] Hirsch, M. W., Smale, S., Differential Equations, Dynamical Systems, and Linear Algebra, Academic Press, New York, 1974.

[51] Hommes, C. H., Chaotic Dynamics in Economic Models, Wolters-Noordhoff, Groningen, 1991.

[52] Huang, K., Statistical Mechanics, J. Wiley, New York (1966), Chapter 8.

[53] Jackson, E. A., Perspectives of Nonlinear Dynamics, Volume 1 and 2, Cambridge University Press, Cambridge, 1991.

[54] Knuth, D. E., The Art of Computer Programming, Volume 2, Seminumerical Algorithms, Addison-Wesley, Reading, Massachusetts.

[55] Kotz, S., Johnson, N. L., Encyclopedia of Statistical Sciences, J. Wiley, New York, 1982

[56] Kuratowski, C., Topologie II, PWN, Warsaw, 1961.

[57] Lauwerier, H., Fractals, Aramith Uitgevers, Amsterdam, 1987.

[58] Lehmer, D. H., Proc. 2nd Symposium on Large Scale Digital Calculating Machinery, Harvard University Press, Cambridge, 1951.

[59] Leven, R. W., Koch, B.-P., Pompe, B., Chaos in Dissipativen Systemen, Vieweg, Braunschweig, 1989.

[60] Lindenmayer, A., Rozenberg, G., (eds.), Automata, Languages, Development, North-Holland, Amsterdam, 1975.

[61] Mandelbrot, B. B., Fractals: Form, Chance, and Dimension, W. H. Freeman and Co., San Francisco, 1977.

[62] Mandelbrot, B. B., The Fractal Geometry of Nature, W. H. Freeman and Co., New York, 1982.

[63] Mandelbrot, B. B., The Fractal Geometry of Nature, W. H. Freeman and Co., New York, 1982.

[64] Mandelbrot, B.B., Selecta Volume N: Multifractals & 1/f Noise: 1963-76. Springer, New York, to appear

[65] Mandelbrot, B.B., Selecta Volume N: Turbulence. Springer, New York, to appear

[66] Mañé, R., Ergodic Theory and Differentiable Dynamics, Springer-Verlag, Heidelberg, 1987.

[67] McGuire, M., An Eye for Fractals, Addison-Wesley, Redwood City, 1991.

[68] Menger, K., Dimensionstheorie, Leipzig, 1928.

[69] Mey, J. de, Bomen van Pythagoras, Aramith Uitgevers, Amsterdam, 1985.

[70] Moon, F. C., Chaotic Vibrations, John Wiley & Sons, New York, 1987.

[71] Parchomenko, A. S., Was ist eine Kurve, VEB Verlag, 1957.

[72] Parker, T. S., Chua, L. 0., Practical Numerical Algorithms for Chaotic Systems, Springer-Verlag, New York, 1989.

[73] Peitgen, H.-O., Richter, P. H., The Beauty of Fractals, Springer-Verlag, Heidelberg, 1986.

[74] Peitgen, H.-O., Saupe, D. (eds.), The Science of Fractal Images, Springer-Verlag, 1988.

[75] Peitgen, H.-O. (ed.), Newton's Method and Dynamical Systems, Kluwer Academic Publishers, Dordrecht, 1989.

[76] Peitgen, H.-O., Jürgens, H., Fraktale: Gezähmtes Chaos, Carl Friedrich von Siemens Stiftung, München, 1990.


[77] Peitgen, H.-O., Jürgens, H., Saupe, D., Fractals for the Classroom, Part One, Springer-Verlag, New York, 1991.

[78] Peitgen, H.-O., Jürgens, H., Saupe, D., Maletsky, E., Perciante, T., Yunker, L., Fractals for the Classroom, Strategic Activities, Volume One and Volume Two, Springer-Verlag, New York, 1991 and 1992.

[79] Peters, E., Chaos and Order in the Capital Market, John Wiley & Sons, New York, 1991.

[80] Press, W. H., Flannery, B. P., Teukolsky, S. A., Vetterling, W. T., Numerical Recipes, Cambridge University Press, Cambridge, 1986.

[81] Preston, K. Jr., Duff, M. J. B., Modern Cellular Automata, Plenum Press, New York, 1984.

[82] Prigogine, I., Stenger, I., Order out of Chaos, Bantam Books, New York, 1984.

[83] Prusinkiewicz, P., Lindenmayer, A., The Algorithmic Beauty of Plants, Springer-Verlag, New York, 1990.

[84] Rasband, S. N., Chaotic Dynamics of Nonlinear Systems, John Wiley & Sons, New York, 1990.

[85] Rényi, A., Probability Theory, North-Holland, Amsterdam (1970).

[86] Richardson, L. F., Weather Prediction by Numerical Process, Dover, New York, 1965.

[87] Ruelle, D., Chaotic Evolution and Strange Attractors, Cambridge University Press, Cambridge, 1989.

[88] Sagan, C., Contact, Pocket Books, Simon & Schuster, New York, 1985.

[89] Schroeder, M., Fractals, Chaos, Power Laws, W. H. Freeman and Co., New York, 1991.

[90] Schuster, H. G., Deterministic Chaos, VCH Publishers, Weinheim, New York, 1988.

[91] Sparrow, C., The Lorenz Equations: Bifurcations, Chaos, and Strange Attractors, Springer-Verlag, New York, 1982.

[92] Stanley, H. E., Ostrowsky, N. (eds.), Fluctuations and Pattern Formation (Cargèse, 1988), Dordrecht-Boston: Kluwer (1988).

[93] Stauffer, D., Introduction to Percolation Theory, Taylor & Francis, London, 1985.

[94] Stauffer, D., Stanley, H. E., From Newton to Mandelbrot, Springer-Verlag, New York, 1989.

[95] Stewart, I., Does God Play Dice, Penguin Books, 1989.

[96] Stewart, I., Game, Set, and Math, Basil Blackwell, Oxford, 1989.

[97] Thompson, D'Arcy, On Growth and Form, New Edition, Cambridge University Press, 1942.

[98] Toffoli, T., Margolus, N., Cellular Automata Machines, A New Environment For Modelling, MIT Press, Cambridge, Mass., 1987.

[99] Vicsek, T., Fractal Growth Phenomena, World Scientific, London, 1989.

[100] Wade, N., The Art and Science of Visual Illusions, Routledge & Kegan Paul, London, 1982.

[101] Wall, C. R., Selected Topics in Elementary Number Theory, University of South Carolina Press, Columbia, 1974.

[102] Wegner, T., Peterson, M., Fractal Creations, Waite Group Press, Mill Valley, 1991.

[103] Weizenbaum, J., Computer Power and Human Reason, Penguin, 1984.

[104] West, B., Fractal Physiology and Chaos in Medicine, World Scientific, Singapore, 1990.

[105] Wolfram, S., Farmer, J. D., Toffoli, T. (eds.), Cellular Automata: Proceedings of an Interdisciplinary Workshop, in: Physica 10D, 1 and 2 (1984).


[106] Wolfram, S. (ed.), Theory and Application of Cellular Automata, World Scientific, Singapore, 1986.

[107] Zhang Shu-yu, Bibliography on Chaos, World Scientific, Singapore, 1991.

2. General Articles

[108] Barnsley, M. F., Fractal Modelling of Real World Images, in: The Science of Fractal Images, H.-O. Peitgen, D. Saupe (eds.), Springer-Verlag, New York, 1988.

[109] Cipra, B. A., Computer-drawn pictures stalk the wild trajectory, Science 241 (1988) 1162-1163.

[110] Davis, C., Knuth, D. E., Number Representations and Dragon Curves, Journal of Recreational Mathematics 3 (1970) 66-81 and 133-149.

[111] Dewdney, A. K., Computer Recreations: A computer microscope zooms in for a look at the most complex object in mathematics, Scientific American (August 1985) 16-25.

[112] Dewdney, A. K., Computer Recreations: Beauty and profundity: the Mandelbrot set and a flock of its cousins called Julia sets, Scientific American (November 1987) 140-144.

[113] Douady, A., Julia sets and the Mandelbrot set, in: The Beauty of Fractals, H.-O. Peitgen, P. H. Richter, Springer-Verlag, 1986.

[114] Dyson, F., Characterizing Irregularity, Science 200 (1978) 677-678.

[115] Gilbert, W. J., Fractal geometry derived from complex bases, Math. Intelligencer 4 (1982) 78-86.

[116] Hofstadter, D. R., Strange attractors : Mathematical patterns delicately poised between order and chaos, Scientific American 245 (May 1982) 16-29.

[117] Mandelbrot, B. B., How long is the coast of Britain? Statistical self-similarity and fractional dimension, Science 155 (1967) 636-638.

[118] Peitgen, H.-O., Richter, P. H., Die unendliche Reise, Geo 6 (Juni 1984) 100-124.

[119] Peitgen, H.-O., Haeseler, F. v., Saupe, D., Cayley's problem and Julia sets, Mathematical Intelligencer 6.2 (1984) 11-20.

[120] Peitgen, H.-O., Jürgens, H., Saupe, D., The language of fractals, Scientific American (August 1990) 40-47.

[121] Peitgen, H.-O., Jürgens, H., Fraktale: Computerexperimente (ent)zaubern komplexe Strukturen, in: Ordnung und Chaos in der unbelebten und belebten Natur, Verhandlungen der Gesellschaft Deutscher Naturforscher und Ärzte, 115. Versammlung, Wissenschaftliche Verlagsgesellschaft, Stuttgart, 1989.

[122] Peitgen, H.-O., Jürgens, H., Saupe, D., Zahlten, C., Fractals - An Animated Discussion, video film, W. H. Freeman and Co., 1990. Also appeared in German as Fraktale in Filmen und Gesprächen, Spektrum Videothek, Heidelberg, 1990. Also appeared in Italian as I Frattali, Spektrum Videothek edizione italiana, 1991.

[123] Ruelle, D., Strange Attractors, Math. Intelligencer 2 (1980) 126-137.

[124] Ruelle, D., Chaotic Evolution and Strange Attractors, Cambridge University Press, Cambridge, 1989.

[125] Stewart, I., Order within the chaos game? Dynamics Newsletter 3, no. 2, 3, May 1989, 4-9.

[126] Sved, M., Divisibility - With Visibility, Mathematical Intelligencer 10, 2 (1988) 56-64.


[127] Voss, R., Fractals in Nature, in: The Science of Fractal Images, H.-O. Peitgen, D. Saupe (eds.), Springer-Verlag, New York, 1988.

[128] Wolfram, S., Geometry of binomial coefficients, Amer. Math. Month. 91 (1984) 566-571.

3. Research Articles

[129] Abraham, R., Simulation of cascades by video feedback, in: Structural Stability, the Theory of Catastrophes, and Applications in the Sciences, P. Hilton (ed.), Lecture Notes in Mathematics vol. 525, Springer-Verlag, Berlin, 1976, 10-14.

[130] Aharony, A., Fractal growth, in: Fractals and Disordered Systems, A. Bunde, S. Havlin (eds.), Springer-Verlag, Heidelberg, 1991.

[131] Bak, P., The devil's staircase, Phys. Today 39 (1986) 38-45.

[132] Bandt, C., Self-similar sets I. Topological Markov chains and mixed self-similar sets, Math. Nachr. 142 (1989) 107-123.

[133] Bandt, C., Self-similar sets III. Construction with sofic systems, Monatsh. Math. 108 (1989) 89-102.

[134] Banks, J., Brooks, J., Cairns, G., Davis, G., Stacey, P., On Devaney's definition of chaos. American Math. Monthly 99.4 (1992) 332-334.

[135] Barnsley, M. F., Demko, S., Iterated function systems and the global construction of fractals, The Proceedings of the Royal Society of London A399 (1985) 243-275

[136] Barnsley, M. F., Ervin, V., Hardin, D., Lancaster, J., Solution of an inverse problem for fractals and other sets, Proceedings of the National Academy of Sciences 83 (1986) 1975-1977.

[137] Barnsley, M. F., Elton, J. H., Hardin, D. P., Recurrent iterated function systems, Constructive Approximation 5 (1989) 3-31.

[138] Batrouni, G. G., Hansen, A., Roux, S., Negative moments of the current spectrum in the random-resistor network, Phys. Rev. A 38 (1988) 3820.

[139] Bedford, T., Dynamics and dimension for fractal recurrent sets, J. London Math. Soc. 33 (1986) 89-100.

[140] Benedicks, M., Carleson, L., The dynamics of the Hénon map, Annals of Mathematics 133, 1 (1991) 73-169.

[141] Benettin, G. L., Galgani, L., Giorgilli, A., Strelcyn, J.-M., Lyapunov characteristic exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Part 1: Theory, Part 2: Numerical application, Meccanica 15, 9 (1980) 21.

[142] Benzi, R., Paladin, G., Parisi, G., Vulpiani, A., On the multifractal nature of fully developed turbulence and chaotic systems, J. Phys. A 17 (1984) 3521.

[143] Berger, M., Encoding images through transition probabilities, Math. Comp. Modelling 11 (1988) 575-577.

[144] Berger, M., Images generated by orbits of 2D-Markov chains, Chance 2 (1989) 18-28.

[145] Berry, M. V., Regular and irregular motion, in: Jorna, S. (ed.), Topics in Nonlinear Dynamics, Amer. Inst. of Phys. Conf. Proceed. 46 (1978) 16-120.

[146] Blanchard, P., Complex analytic dynamics on the Riemann sphere, Bull. Amer. Math. Soc. 11 (1984) 85-141.


[147] Blumenfeld, R., Meir, Y., Aharony, A., Harris, A. B., Resistance fluctuations in random diluted networks, Phys. Rev. B 35 (1987) 3524-3535.

[148] Blumenfeld, R., Aharony, A., Breakdown of multifractal behavior in diffusion limited aggregates, Phys. Rev. Lett. 62 (1989) 2977.

[149] Borwein, J. M., Borwein, P. B., Bailey, D. H., Ramanujan, modular equations, and approximations to π, or how to compute one billion digits of π, American Mathematical Monthly 96 (1989) 201-219.

[150] Brent, R. P., Fast multiple-precision evaluation of elementary functions, Journal Assoc. Comput. Mach. 23 (1976) 242-251.

[151] Brolin, H., Invariant sets under iteration of rational functions, Arkiv f. Mat. 6 (1965) 103-144.

[152] Cantor, G., Über unendliche, lineare Punktmannigfaltigkeiten V, Mathematische Annalen 21 (1883) 545-591.

[153] Carpenter, L., Computer rendering of fractal curves and surfaces, Computer Graphics (1980) 109ff.

[154] Caswell, W. E., Yorke, J. A., Invisible errors in dimension calculations: geometric and systematic effects, in: Dimensions and Entropies in Chaotic Systems, G. Mayer-Kress (ed.), Springer-Verlag, Berlin, 1986 and 1989, p. 123-136.

[155] Cayley, A., The Newton-Fourier Imaginary Problem, American Journal of Mathematics 2 (1879) p. 97.

[156] Charkovsky, A. N., Coexistence of cycles of continuous maps on the line, Ukr. Mat. J. 16 (1964) 61-71 (in Russian).

[157] Chhabra, A., Jensen, R.V., Direct determination of the f(a) singularity spectrum, Phys. Rev. Lett. 62 (1989) 1327

[158] Coleman, P. H., Pietronero, L., The fractal structure of the universe, Phys. Rept. 213,6 (1992) 311-389.

[159] Corless, R. M., Continued fractions and chaos, The American Math. Monthly 99, 3 (1992) 203-215.

[160] Corless, R. M., Frank, G. W., Monroe, J. G., Chaos and continued fractions, Physica D46 (1990) 241-253.

[161] Cremer, H., Über die Iteration rationaler Funktionen, Jahresberichte der Deutschen Mathematiker-Vereinigung 33 (1925) 185-210.

[162] Crutchfield, J., Space-time dynamics in video feedback, Physica 10D (1984) 229-245.

[163] Dekking, F. M., Recurrent Sets, Advances in Mathematics 44, 1 (1982) 78-104.

[164] Derrida, B., Gervais, A., Pomeau, Y., Universal metric properties of bifurcations of endomorphisms, J. Phys. A: Math. Gen. 12, 3 (1979) 269-296.

[165] Devaney, R., Nitecki, Z., Shift Automorphism in the Hénon Mapping, Comm. Math. Phys. 67 (1979) 137-146.

[166] Douady, A., Hubbard, J. H., Itération des polynômes quadratiques complexes, CRAS Paris 294 (1982) 123-126.

[167] Douady, A., Hubbard, J. H., Étude dynamique des polynômes complexes, Publications Mathématiques d'Orsay 84-02, Université de Paris-Sud, 1984.

[168] Douady, A., Hubbard, J. H., On the dynamics of polynomial-like mappings, Ann. Sci. Ecole Norm. Sup. 18 (1985) 287-344.


[169] Dress, A. W. M., Gerhardt, M., Jaeger, N. I., Plath, P. J., Schuster, H., Some proposals concerning the mathematical modelling of oscillating heterogeneous catalytic reactions on metal surfaces, in: L. Rensing, N. I. Jaeger (eds.), Temporal Order, Springer-Verlag, Berlin, 1984.

[170] Dubuc, S., Elqortobi, A., Approximations of fractal sets, Journal of Computational and Applied Mathematics 29 (1990) 79-89.

[171] Eckmann, J.-P., Ruelle, D., Ergodic theory of chaos and strange attractors, Reviews of Modern Physics 57, 3 (1985) 617-656.

[172] Eckmann, J.-P., Kamphorst, S. O., Ruelle, D., Ciliberto, S., Liapunov exponents from time series, Phys. Rev. 34A (1986) 4971-4979.

[173] Elton, J., An ergodic theorem for iterated maps, Journal of Ergodic Theory and Dynamical Systems 7 (1987) 481-488.

[174] Evertsz, C. J. G., Mandelbrot, B. B., Harmonic measure around a linearly self-similar tree, J. Phys. A 25 (1992) 1781-1797.

[175] Evertsz, C. J. G., Mandelbrot, B. B., Woog, L., Variability of the form and of the harmonic measure for small off-off-lattice diffusion-limited aggregates, Phys. Rev. A 45 (1992) 5798.

[176] Faraday, M., On a peculiar class of acoustical figures, and on certain forms assumed by groups of particles upon vibrating elastic surfaces, Phil. Trans. Roy. Soc. London 121 (1831) 299-340.

[177] Farmer, D., Chaotic attractors of an infinite-dimensional system, Physica 4D (1982) 366-393.

[178] Farmer, J. D., Ott, E., Yorke, J. A., The dimension of chaotic attractors, Physica 7D (1983) 153-180.

[179] Fatou, P., Sur les équations fonctionnelles, Bull. Soc. Math. Fr. 47 (1919) 161-271, 48 (1920) 33-94, 208-314.

[180] Feigenbaum, M. J., Universality in complex discrete dynamical systems, in: Los Alamos Theoretical Division Annual Report (1977) 98-102.

[181] Feigenbaum, M. J., Quantitative universality for a class of nonlinear transformations, J. Stat. Phys. 19 (1978) 25-52.

[182] Feigenbaum, M. J., Universal behavior in nonlinear systems, Physica 7D (1983) 16-39. Also in: Campbell, D., Rose, H. (eds.), Order in Chaos, North-Holland, Amsterdam, 1983.

[183] Feigenbaum, M. J., Some characterizations of strange sets, J. Stat. Phys. 46 (1987) 919-924.

[184] Feit, S. D., Characteristic exponents and strange attractors, Comm. Math. Phys. 61 (1978) 249-260.

[185] Fine, N. J., Binomial coefficients modulo a prime number, Amer. Math. Monthly 54 (1947) 589.

[186] Fisher, Y., Boss, R. D., Jacobs, E. W., Fractal Image Compression, to appear in: Data Com­pression, J. Storer (ed.), Kluwer Academic Publishers, Norwell, MA.

[187] Fournier, A., Fussell, D., Carpenter, L., Computer rendering of stochastic models, Comm. of the ACM 25 (1982) 371-384.

[188] Franceschini, V., A Feigenbaum sequence of bifurcations in the Lorenz model, Jour. Stat. Phys. 22 (1980) 397-406.

[189] Fraser, A. M., Swinney, H. L., Independent coordinates for strange attractors from mutual information, Phys. Rev. A 33 (1986) 1034-1040.

[190] Frederickson, P., Kaplan, J. L., Yorke, S. D., Yorke, J. A., The Liapunov dimension of strange attractors, Journal of Differential Equations 49 (1983) 185-207.


[191] Frisch, U., Parisi, G., Fully developed turbulence and intermittency, in: Turbulence and Predictability of Geophysical Flows and Climate Dynamics, Proc. of the International School of Physics "Enrico Fermi", Course LXXXVIII, Varenna 1983, edited by Ghil, M., Benzi, R., Parisi, G., North-Holland, New York (1985) 84.

[192] Frisch, U., Vergassola, M., A prediction of the multifractal model: the intermediate dissipation range, Europhys. Lett. 14 (1991) 439.

[193] Geist, K., Parlitz, U., Lauterborn, W., Comparison of Different Methods for Computing Lyapunov Exponents, Progress of Theoretical Physics 83, 5 (1990) 875-893.

[194] Goodman, G. S., A probabilist looks at the chaos game, in: Fractals in the Fundamental and Applied Sciences, H.-O. Peitgen, J. M. Henriques, L. F. Peneda (eds.), North-Holland, Amsterdam, 1991.

[195] Grassberger, P., On the fractal dimension of the Hénon attractor, Physics Letters 97A (1983) 224-226.

[196] Grassberger, P., Procaccia, I., Measuring the strangeness of strange attractors, Physica 9D (1983) 189-208.

[197] Grassberger, P., Procaccia, I., Characterization of Strange Attractors, Phys. Rev. Lett. 50 (1983) 346.

[198] Grebogi, C., Ott, E., Yorke, J. A., Crises, sudden changes in chaotic attractors, and transient chaos, Physica 7D (1983) 181-200.

[199] Grebogi, C., Ott, E., Yorke, J. A., Attractors of an N-torus: quasiperiodicity versus chaos, Physica 15D (1985) 354.

[200] Grebogi, C., Ott, E., Yorke, J. A., Critical exponents of chaotic transients in nonlinear dynamical systems, Physical Review Letters 57, 11 (1986) 1284-1287.

[201] Grebogi, C., Ott, E., Yorke, J. A., Chaos, strange attractors, and fractal basin boundaries in nonlinear dynamics, Science 238 (1987) 632-638.

[202] Großmann, S., Thomae, S., Invariant distributions and stationary correlation functions of one-dimensional discrete processes, Z. Naturforsch. 32 (1977) 1353-1363.

[203] Haeseler, F. v., Peitgen, H.-O., Skordev, G., Pascal's triangle, dynamical systems and attractors, Ergod. Th. & Dynam. Sys. 12 (1992) 479-486.

[204] Haeseler, F. v., Peitgen, H.-O., Skordev, G., On the fractal structure of limit sets of cellular automata and attractors of dynamical systems, to appear.

[205] Halsey, T. C., Jensen, M. H., Kadanoff, L. P., Procaccia, I., Shraiman, B. I., Fractal measures and their singularities: The characterization of strange sets, Phys. Rev. A 33 (1986) 1141.

[206] Hart, J. C., DeFanti, T., Efficient anti-aliased rendering of 3D-linear fractals, Computer Graphics 25, 4 (1991) 289-296.

[207] Hart, J. C., Sandin, D. J., Kauffman, L. H., Ray tracing deterministic 3-D fractals, Computer Graphics 23, 3 (1989) 91-100.

[208] Hénon, M., A two-dimensional mapping with a strange attractor, Comm. Math. Phys. 50 (1976) 69-77.

[209] Hentschel, H. G. E., Procaccia, I., The infinite number of generalized dimensions of fractals and strange attractors, Physica 8D (1983) 435-444.


[210] Hepting, D., Prusinkiewicz, P., Saupe, D., Rendering methods for iterated function systems, in: Fractals in the Fundamental and Applied Sciences, H.-O. Peitgen, J. M. Henriques, L. F. Peneda (eds.), North-Holland, Amsterdam, 1991.

[211] Hilbert, D., Über die stetige Abbildung einer Linie auf ein Flächenstück, Mathematische Annalen 38 (1891) 459-460.

[212] Holte, J., A recurrence relation approach to fractal dimension in Pascal's triangle, International Congress of Mathematics, 1990.

[213] Hutchinson, J., Fractals and self-similarity, Indiana University Journal of Mathematics 30 (1981) 713-747.

[214] Jacquin, A. E., Image coding based on a fractal theory of iterated contractive image transformations, to appear in: IEEE Transactions on Signal Processing, 1992.

[215] Judd, K., Mees, A. I., Estimating dimensions with confidence, International Journal of Bifurcation and Chaos 1, 2 (1991) 467-470.

[216] Julia, G., Mémoire sur l'itération des fonctions rationnelles, Journal de Math. Pure et Appl. 8 (1918) 47-245.

[217] Jürgens, H., 3D-rendering of fractal landscapes, in: Fractal Geometry and Computer Graphics, J. L. Encarnação, H.-O. Peitgen, G. Sakas, G. Englert (eds.), Springer-Verlag, Heidelberg, 1992.

[218] Kaplan, J. L., Yorke, J. A., Chaotic behavior of multidimensional difference equations, in: Functional Differential Equations and Approximation of Fixed Points, H.-O. Peitgen, H.-O. Walther (eds.), Springer-Verlag, Heidelberg, 1979.

[219] Kawaguchi, Y., A morphological study of the form of nature, Computer Graphics 16,3 (1982).

[220] Koch, H. von, Sur une courbe continue sans tangente, obtenue par une construction géométrique élémentaire, Arkiv för Matematik 1 (1904) 681-704.

[221] Koch, H. von, Une méthode géométrique élémentaire pour l'étude de certaines questions de la théorie des courbes planes, Acta Mathematica 30 (1906) 145-174.

[222] Kummer, E. E., Über Ergänzungssätze zu den allgemeinen Reziprozitätsgesetzen, Journal für die reine und angewandte Mathematik 44 (1852) 93-146.

[223] Lauterborn, W., Acoustic turbulence, in: Frontiers in Physical Acoustics, D. Sette (ed.), North-Holland, Amsterdam, 1986, pp. 123-144.

[224] Lauterborn, W., Holzfuss, J., Acoustic chaos, International Journal of Bifurcation and Chaos 1, 1 (1991) 13-26.

[225] Li, T.-Y., Yorke, J. A., Period three implies chaos, American Mathematical Monthly 82 (1975) 985-992.

[226] Lindenmayer, A., Mathematical models for cellular interaction in development, Parts I and II, Journal of Theoretical Biology 18 (1968) 280-315.

[227] Lorenz, E. N., Deterministic non-periodic flow, J. Atmos. Sci. 20 (1963) 130-141.

[228] Lorenz, E. N., The local structure of a chaotic attract or in four dimensions, Physica 13D ( 1984) 90-104.

[229] Lovejoy, S., Mandelbrot, B. B., Fractal properties of rain, and a fractal model, Tellus 37A (1985) 209-232.

[230] Lozi, R., Un attracteur étrange (?) du type attracteur de Hénon, J. Phys. (Paris) 39 (Coll. C5) (1978) 9-10.


[231] Mandelbrot, B. B., Ness, J. W. van, Fractional Brownian motion, fractional noises and applications, SIAM Review 10, 4 (1968) 422-437.

[232] Mandelbrot, B. B., Multiplications aléatoires itérées et distributions invariantes par moyenne pondérée aléatoire, I & II, Comptes Rendus (Paris): 278A (1974) 289-292 & 355-358.

[233] Mandelbrot, B. B., Intermittent turbulence in self-similar cascades: divergence of high moments and dimension of the carrier, J. Fluid Mech. 62 (1974) 331.

[234] Mandelbrot, B. B., Fractal aspects of the iteration of z ↦ λz(1 − z) for complex λ and z, Annals NY Acad. Sciences 357 (1980) 249-259.

[235] Mandelbrot, B. B., Comment on computer rendering of fractal stochastic models, Comm. of the ACM 25, 8 (1982) 581-583.

[236] Mandelbrot, B. B., Self-affine fractals and fractal dimension, Physica Scripta 32 (1985) 257-260.

[237] Mandelbrot, B. B., On the dynamics of iterated maps V: conjecture that the boundary of the M-set has fractal dimension equal to 2, in: Chaos, Fractals and Dynamics, Fischer and Smith (eds.), Marcel Dekker, 1985.

[238] Mandelbrot, B. B., An introduction to multifractal distribution functions, in: Fluctuations and Pattern Formation, H. E. Stanley and N. Ostrowsky (eds.), Kluwer Academic, Dordrecht, 1988.

[239] Mandelbrot, B. B., Multifractal measures, especially for the Geophysicist, Pure and Applied Geophysics 131 (1989) 5-42, also in Fluctuations and Pattern Formation, (Cargese, 1988). H. E. Stanley and N. Ostrowsky, Eds., Dordrecht-Boston: Kluwer (1988) 345-360.

[240] Mandelbrot, B. B., Negative fractal dimensions and multifractals, Physica A 163 (1990) 306-315.

[241] Mandelbrot, B. B., Evertsz, C. J. G., The potential distribution around growing fractal clusters, Nature 348 (1990) 143-145.

[242] Mandelbrot, B. B., New "anomalous" multiplicative multifractals: left-sided f(a) and the modeling of DLA, Physica A168 (1990) 95-111.

[243] Mandelbrot, B. B., Evertsz, C. J. G., Hayakawa, Y., Exactly self-similar left-sided multifractal measures, Phys. Rev. A 42 (1990) 4528-4536.

[244] Mandelbrot, B. B., Random multifractals: negative dimensions and the resulting limitations of the thermodynamic formalism, Proc. R. Soc. Lond. A 434 (1991) 79-88.

[245] Mandelbrot, B. B., Evertsz, C. J. G., Left-sided multifractal measures, in Fractals and Disordered Systems, A. Bunde, S. Havlin (eds.) (1991) 322-344.

[246] Mandelbrot, B. B., Evertsz, C. J. G., Multifractality of the harmonic measure on fractal aggre­gates, and extended self-similarity, Physica A177 (1991) 386-393.

[247] Mañé, R., On the dimension of the compact invariant set of certain nonlinear maps, in: Dynamical Systems and Turbulence, Warwick 1980, Lecture Notes in Mathematics 898, Springer-Verlag (1981) 230-242.

[248] Marotto, F. R., Chaotic behavior in the Hénon mapping, Comm. Math. Phys. 68 (1979) 187-194.

[249] Matsushita, M., Experimental Observation of Aggregations, in: The Fractal Approach to Heterogeneous Chemistry: Surfaces, Colloids, Polymers, D. Avnir (ed.), Wiley, Chichester 1989.

[250] Mauldin, R. D., Williams, S. C., Hausdorff dimension in graph directed constructions, Trans. Amer. Math. Soc. 309 (1988) 811-829.

[251] May, R. M., Simple mathematical models with very complicated dynamics, Nature 261 (1976) 459-467.


[252] Meneveau, C., Sreenivasan, K. R., Simple multifractal cascade model for fully developed turbulence, Phys. Rev. Lett. 59 (1987) 1424.

[253] Meneveau, C., Sreenivasan, K. R., A method for the direct measurement of f(a) of multifractals, and its applications to dynamical systems and fully developed turbulence, Phys. Lett. A 137 (1989) 103.

[254] Meneveau, C., Sreenivasan, K.R., Multifractal nature of turbulent energy dissipation, J. Fluid Mech. 224 (1991) 429.

[255] Menger, K., Allgemeine Räume und charakteristische Räume, Zweite Mitteilung: Über umfassendste n-dimensionale Mengen, Proc. Acad. Amsterdam 29 (1926) 1125-1128.

[256] Misiurewicz, M., Strange Attractors for the Lozi Mappings, in Nonlinear Dynamics, R. H. G. Helleman (ed.), Annals of the New York Academy of Sciences 357 (1980) 348-358.

[257] Mitchison, G. J., Wilcox, M., Rule governing cell division in Anabaena, Nature 239 (1972) 110-111.

[258] Mullin, T., Chaos in physical systems, in: Fractals and Chaos, Crilly, A. J., Earnshaw, R. A., Jones, H. (eds.), Springer-Verlag, New York, 1991.

[259] Musgrave, K., Kolb, C., Mace, R., The synthesis and the rendering of eroded fractal terrain, Computer Graphics 24 (1988).

[260] Norton, V. A., Generation and display of geometric fractals in 3-D, Computer Graphics 16, 3 (1982) 61-67.

[261] Norton, V. A., Julia sets in the quaternions, Computers and Graphics 13, 2 (1989) 267-278.

[262] Olsen, L. F., Degn, H., Chaos in biological systems, Quarterly Review of Biophysics 18 (1985) 165-225.

[263] Paladin, G., Vulpiani, A., Anomalous scaling laws in multifractal objects, Physics Reports 156 (1987) 145.

[264] Packard, N. H., Crutchfield, J. P., Farmer, J. D., Shaw, R. S., Geometry from a time series, Phys. Rev. Lett. 45 (1980) 712-716.

[265] Peano, G., Sur une courbe, qui remplit toute une aire plane, Mathematische Annalen 36 (1890) 157-160.

[266] Peitgen, H.-O., Prüfer, M., The Leray-Schauder continuation method is a constructive element in the numerical study of nonlinear eigenvalue and bifurcation problems, in: Functional Differential Equations and Approximation of Fixed Points, H.-O. Peitgen, H.-O. Walther (eds.), Springer Lecture Notes, Berlin, 1979.

[267] Pietronero, L., Evertsz, C., Siebesma, A. P., Fractal and multifractal structures in kinetic critical phenomena, in: Stochastic Processes in Physics and Engineering, S. Albeverio, P. Blanchard, M. Hazewinkel, L. Streit (eds.), D. Reidel Publishing Company (1988) 253-278.

[268] Peyrière, J., Multifractal measures, Proceedings of the NATO ASI "Probabilistic Stochastic Methods in Analysis, with Applications", Il Ciocco, July 14-27 (1991).

[269] Pomeau, Y., Manneville, P., Intermittent transition to turbulence in dissipative dynamical sys­tems, Commun. Math. Phys. 74 (1980) 189-197.

[270] Prasad, R. R., Meneveau, C., Sreenivasan, K. R., Multifractal nature of the dissipation field of passive scalars in full turbulent flows, Phys. Rev. Lett. 61 (1988) 74-77.

[271] Procaccia, I., Zeitak, R., Shape of fractal growth patterns: Exactly solvable models and stability considerations, Phys. Rev. Lett. 60 (1988) 2511.


[272] Prusinkiewicz, P., Graphical applications of L-systems, Proc. of Graphics Interface 1986 - Vision Interface (1986) 247-253.

[273] Prusinkiewicz, P., Hanan, J., Applications of L-systems to computer imagery, in: Graph Grammars and their Application to Computer Science; Third International Workshop, H. Ehrig, M. Nagl, A. Rosenfeld and G. Rozenberg (eds.), Springer-Verlag, New York, 1988.

[274] Prusinkiewicz, P., Lindenmayer, A., Hanan, J., Developmental models of herbaceous plants for computer imagery purposes, Computer Graphics 22, 4 (1988) 141-150.

[275] Prusinkiewicz, P., Hammel, M., Automata, languages, and iterated function systems, in: Fractal Modeling in 3-D Computer Graphics and Imaging, ACM SIGGRAPH '91 Course Notes C14 (J. C. Hart, K. Musgrave, eds.), 1991.

[276] Rayleigh, Lord, On convective currents in a horizontal layer of fluid when the higher temperature is on the under side, Phil. Mag. 32 (1916) 529-546.

[277] Reuter, L. Hodges, Rendering and magnification of fractals using iterated function systems, Ph.D. thesis, School of Mathematics, Georgia Institute of Technology (1987).

[278] Richardson, L. F., The problem of contiguity: an appendix to Statistics of Deadly Quarrels, General Systems Yearbook 6 (1961) 139-187.

[279] Rössler, O. E., An equation for continuous chaos, Phys. Lett. 57A (1976) 397-398.

[280] Ruelle, D., Takens, F., On the nature of turbulence, Comm. Math. Phys. 20 (1971) 167-192, 23 (1971) 343-344.

[281] Russell, D. A., Hanson, J. D., Ott, E., Dimension of strange attractors, Phys. Rev. Lett. 45 (1980) 1175-1178.

[282] Salamin, E., Computation of π Using Arithmetic-Geometric Mean, Mathematics of Computation 30, 135 (1976) 565-570.

[283] Saltzman, B., Finite amplitude free convection as an initial value problem - I, J. Atmos. Sci. 19 (1962) 329-341.

[284] Sano, M., Sawada, Y., Measurement of the Lyapunov spectrum from a chaotic time series, Phys. Rev. Lett. 55 (1985) 1082.

[285] Saupe, D., Efficient computation of Julia sets and their fractal dimension, Physica D28 (1987) 358-370.

[286] Saupe, D., Discrete versus continuous Newton's method: A case study, Acta Appl. Math. 13 (1988) 59-80.

[287] Saupe, D., Point evaluations of multi-variable random fractals, in: Visualisierung in Mathematik und Naturwissenschaften - Bremer Computergraphiktage 1988, H. Jürgens, D. Saupe (eds.), Springer-Verlag, Heidelberg, 1989.

[288] Sernetz, M., Gelleri, B., Hofman, F., The Organism as a Bioreactor, Interpretation of the Reduction Law of Metabolism in terms of Heterogeneous Catalysis and Fractal Structure, Journal of Theoretical Biology 117 (1985) 209-230.

[289] Siebesma, A. P., Pietronero, L., Multifractal properties of wave functions for one-dimensional systems with an incommensurate potential, Europhys. Lett. 4 (1987) 597-602.

[290] Siegel, C. L., Iteration of analytic functions, Ann. of Math. 43 (1942) 607-616.

[291] Sierpinski, W., Sur une courbe cantorienne dont tout point est un point de ramification, C. R. Acad. Paris 160 (1915) 302.


[292] Sierpinski, W., Sur une courbe cantorienne qui contient une image biunivoque et continue de toute courbe donnée, C. R. Acad. Paris 162 (1916) 629-632.

[293] Simó, C., On the Hénon-Pomeau attractor, Journal of Statistical Physics 21, 4 (1979) 465-494.

[294] Shanks, D., Wrench, J. W. Jr., Calculation of π to 100,000 Decimals, Mathematics of Computation 16, 77 (1962) 76-99.

[295] Shaw, R., Strange attractors, chaotic behavior, and information flow, Z. Naturforsch. 36a (1981) 80-112.

[296] Shishikura, M., The Hausdorff dimension of the boundary of the Mandelbrot set and Julia sets, SUNY Stony Brook, Institute for Mathematical Sciences, Preprint #1991/7.

[297] Shonkwiler, R., An image algorithm for computing the Hausdorff distance efficiently in linear time, Info. Proc. Lett. 30 (1989) 87-89.

[298] Smith, A. R., Plants, fractals, and formal languages, Computer Graphics 18, 3 (1984) 1-10.

[299] Stanley, H. E., Meakin, P., Multifractal phenomena in physics and chemistry, Nature 335 (1988) 405-409.

[300] Stefan, P., A theorem of Sarkovski on the existence of periodic orbits of continuous endomorphisms of the real line, Comm. Math. Phys. 54 (1977) 237-248.

[301] Stevens, R. J., Lebar, A. F., Preston, F. H., Manipulation and presentation of multidimensional image data using the Peano scan, IEEE Transactions on Pattern Analysis and Machine Intelligence 5 (1983) 520-526.

[302] Sullivan, D., Quasiconformal homeomorphisms and dynamics I, Ann. Math. 122 (1985) 401-418.

[303] Sved, M., Pitman, J., Divisibility of binomial coefficients by prime powers, a geometrical approach, Ars Combinatoria 26A (1988) 197-222.

[304] Takens, F., Detecting strange attractors in turbulence, in: Dynamical Systems and Turbulence, Warwick 1980, Lecture Notes in Mathematics 898, Springer-Verlag (1981) 366-381.

[305] Tan Lei, Similarity between the Mandelbrot set and Julia sets, Report Nr 211, Institut für Dynamische Systeme, Universität Bremen, June 1989, and, Commun. Math. Phys. 134 (1990) 587-617.

[306] Tel, T., Transient chaos, to be published in: Directions in Chaos III, Hao B.-L. (ed.), World Scientific Publishing Company, Singapore.

[307] Thompson, J. M. T., Stewart, H. B., Nonlinear Dynamics and Chaos, Wiley, Chichester, 1986.

[308] Velho, L., de Miranda Gomes, J., Digital halftoning with space-filling curves, Computer Graphics 25, 4 (1991) 81-90.

[309] Voss, R. F., Random fractal forgeries, in: Fundamental Algorithms for Computer Graphics, R. A. Earnshaw (ed.), (Springer-Verlag, Berlin, 1985) 805-835.

[310] Voss, R. F., Tomkiewicz, M., Computer Simulation of Dendritic Electrodeposition, Journal Electrochemical Society 132, 2 (1985) 371-375.

[311] Vrscay, E. R., Iterated function systems: Theory, applications and the inverse problem, in: Proceedings of the NATO Advanced Study Institute on Fractal Geometry, July 1989. Kluwer Academic Publishers, 1991.

[312] Wall, C. R., Terminating decimals in the Cantor ternary set, Fibonacci Quart. 28, 2 (1990) 98-101.


[313] Williams, R. F., Compositions of contractions, Bol. Soc. Brasil. Mat. 2 (1971) 55-59.

[314] Willson, S., Cellular automata can generate fractals, Discrete Appl. Math. 8 (1984) 91-99.

[315] Witten, I. H., Neal, M., Using Peano curves for bilevel display of continuous tone images, IEEE Computer Graphics and Applications, May 1982, 47-52.

[316] Witten, T. A., Sander, L. M., Diffusion limited aggregation: A kinetic critical phenomenon, Phys. Rev. Lett. 47 (1981) 1400-1403 and Phys. Rev. B27 (1983) 5686-5697.

[317] Wolf, A., Swift, J. B., Swinney, H. L., Vastano, J. A., Determining Lyapunov exponents from a time series, Physica 16D (1985) 285-317.

[318] Yorke, J. A., Yorke, E. D., Metastable chaos: the transition to sustained chaotic behavior in the Lorenz model, J. Stat. Phys. 21 (1979) 263-277.

[319] Young, L.-S., Dimension, entropy, and Lyapunov exponents, Ergod. Th. & Dynam. Sys. 2 (1982) 109.

[320] Zahlten, C., Piecewise linear approximation of isovalued surfaces, in: Advances in Scientific Visualization, Eurographics Seminar Series, F. H. Post, A. J. S. Hin (eds.), Springer-Verlag, Berlin, 1992.

Index

Abraham, Ralph, 19
absolute value, 777
adaptive cut algorithm, 343
addresses, 309

addressing scheme, 309
    dynamics, 620
    for IFS attractors, 313
    for Sierpinski gasket, 80, 309, 371
    for the Cantor set, 73, 313
    language of, 312
    of period-doubling branches, 620
    space of, 312
    three-digit, 309

aggregation, 475
    of a zinc ion, 477

Alexandroff, Pawel Sergejewitsch, 107
alga, 357
algorithm, 33, 52

    automatic, 278
allometric growth, 143
ammonite, 142
anabaena catenula, 357, 358
angle

    binary expansion, 816
    periodic, 805
    pre-periodic, 805

approximations, 150
    finite stage, 150
    quality of the, 278

Archimedes, 153, 185
arctangent series, 160
area reduction, 666
Aristotle, 126
arithmetic mean, 735
arithmetic precision, 581
arithmetic triangle, 82
Astronomica Nova, 40
attractive, 593, 822
attractor, 232, 341, 790, 820

    coexisting, 757

    covering of the, 341
    derivative criterion, 822
    for the dynamical system, 259
    problem of approximating, 341
    totally disconnected, 314

Attractor (BASIC program), 767
attractorlets, 328, 344
Avnir, D., 475
axiom, 360, 361, 387

Banach, Stefan, 233, 263
band merging, 628
band splitting, 628, 636
Barnsley's fern, 255, 256

    transformations, 256
Barnsley, Michael F., 35, 229, 278, 280, 297, 328
BASIC, 60, 179

    DIM statement, 296
    LINE, 61
    PSET, 61
    SCREEN, 62

BASIC program, 133
basin boundary, 758
basin of attraction, 665, 757, 759, 775, 790, 857
basin of infinity, 794
Beckmann, Petr, 159
Bénard cells, 698
Benedicks, Michael, 676
Berger, Marc A., 229
Bernoulli, Daniel, 262
Bernoulli, Jacob, 186
bifurcation, 586, 603, 617, 641

    calculation, 604
    period-doubling, 610

bifurcation diagram, 605
binary decomposition, 812, 851
binary representations, 176
binomial coefficients, 409, 420

    divisibility properties, 420, 423, 433, 437
binomial measure, 331


Birkhoff, George David, 678
bisection, 892
blinkers, 414
blueprint, 238
body, 142

    height, 142
    mass, 210

Bohr, Niels, 1, 9
Boll, Dave, 859
Bondarenko, Boris A., 407
Borel measure, 330
Borwein, Jonathan M., 157, 161
Borwein, Peter B., 157, 161
boundary crisis, 649
Bouyer, Martine, 161
box-count power law, 724
box-counting dimension, 202, 212, 218, 721, 735, 740, 757
    limitations, 722, 726

Brahe, Tycho, 39
brain function anomalies, 53
branch point, 878
branching, 397
branching order, 116
Brent, R. P., 161
broccoli, 137
Brouwer, Luitzen Egbertus Jan, 107, 108
Brown, Robert, 297
Brownian motion, 297, 476, 481

    fractional, 493, 494
    one-dimensional, 491

Brownian Skyline (BASIC program), 504
Buffon, L. Comte de, 323
bush, 398
butterfly effect, 42

calculator, 49, 576
camera, 20
cancellation of significant digits, 786
Cantor brush, 117
Cantor maze, 242
Cantor set, 63, 67, 172, 219, 252, 342, 364, 381, 574, 623, 668, 705, 741, 828, 944
    addresses for the, 73
    construction, 68
    dynamics on, 620
    in complex plane, 833
    program, 226

Cantor Set and Devil's Staircase (BASIC program), 227

Cantor set of sheets, 694
Cantor, Georg, 63, 67, 107, 173, 870
capacity dimension, 202
Carathéodory, Constantin, 871, 873
Carleson, Lennart, 676
carry, 427, 434, 435, 443, 451
Cartesian coordinates, 777
Casio fx-7000G, 48
catalytic oxidation, 452
Cauchy sequence, 265
cauliflower, 64, 105, 137, 144, 229
Cayley, Sir Arthur, 773, 774
Čech, Eduard, 107
cell division, 357
cellular automata, 411, 412, 454, 952

    linear, 422
Cellular Automata (BASIC program), 455
central limit theorem, 484
Ceulen, Ludolph van, 158
chain rule, 889
Chaitin, Gregory J., 428


chaos, 6, 46, 52, 55, 59, 76, 536, 567-569, 585, 624, 636

    acoustic, 754
    icons of, 659
    inheritance of, 574
    routes to, 585, 640

chaos game, 35, 36, 298, 300, 306, 308, 328, 341, 820, 822, 923

    analysis of the, 307
    density of points, 324
    game point, 298
    statistics of the, 329
    with equal probabilities, 315

Chaos Game (BASIC program), 351
chaotic transients, 647
characteristic equation, 177
Charkovsky sequence, 638
Charkovsky, Alexander N., 638
chemical reaction, 656
circle, encoding of, 243
classical fractals, 123
climate irregularities, 53
cloud, 212, 501
cluster, 461

    correlation length, 468
    dendritic, 479
    incipient percolation, 467
    maximal size, 466
    of galaxies, 458


    percolation, 467
coarse Hölder exponent, 931
coarse-grained Hölder, 931
coarse-graining, 939
coast of Britain, 199

    box-counting dimension, 215
    complexity of the, 199
    length of the, 199

code, 52
collage, 278

    design of the, 281
    fern, 278
    leaf, 279
    mapping, 239
    optimization, 282
    quality of the, 281

collage theorem, 280
Collatz, Lothar, 33
color image, 330
comb, 872
compass dimension, 202, 208, 210
compass settings, 192, 200
complete metric space, 265, 268
complex argument, 778
complex conjugate, 781
complex division, 781
complex number, 776
complex plane, 123
complex square root, 784, 820, 839
complexity, 16, 38

    degree of, 202
complexity of nature, 135
composition, 608
computer arithmetic, 533
computer hardware, 162
computer languages, 60
concept of limits, 136
connected, 251, 803
continued fraction expansions, 163, 568
continuity, 565
continuous, 569
contraction, 234

    factor, 266
    mapping principle, 263, 266, 288
    ratio, 318

contraction factor, 715
control parameter, 59
control unit, 17, 19
convection, 698, 699, 702
convergence, 265

    test, 101
Conway, John Horton, 413
correlation, 496
correlation dimension, 738
correlation length, 467, 468
Coulomb's law, 801
Courant, Richard, 683
cover

    open, 217
    order of, 110

Cramer, H., 952
Cremer, Hubert, 123
crisis, 646, 708
critical exponent, 648
critical line, 634, 635
critical orbit, 830, 886, 889
critical point, 829, 833, 886
critical value, 633, 829, 847
critical value lines, 633
Crutchfield, James P., 19
curdling, 224
curves, 113, 208

    nonplanar, 113
    parametrized, 370
    planar, 113
    self-similar, 208
    space-filling, 372

Cusanus, Nicolaus, 155
cycle, 34, 35, 58, 552

    periodic, 533

Dase, Johann Martin Zacharias, 159
decay rate, 529
decimal, 307

    numbers, 65
    system, 307

decimal MRCM, 307
decoding, 260, 303

    images, 303
    method, 260

dendrite, 873, 891
dendritic structures, 475
dense, 552
density of measure, 933
derivative, 679, 717
derivative criterion, 822, 858, 864, 867, 889
deterministic, 34, 46, 51

    feedback process, 46
    fractals, 299
    iterated function system, 302


    rendering of the attractor, 341
    shape, 299
    strictly, 301

Devaney, Robert L., 536, 569
devil's staircase, 220, 226

    area of the, 221
    boundary curve, 225
    program, 226

dialects of fractal geometry, 230
diameter, 217
die, 35, 298

    biased, 322, 326
    ordinary, 298
    perfect, 315

differential equation, 678, 679, 695, 699, 759, 763
    numerical methods, 683
    system, 685

diffusion limited aggregation, 477
    mathematical model, 479

digits, 28
Dimension, 7
dimension, 106-108, 202

    box-counting, 218
    correlation, 738
    covering, 109
    fractal, 195, 441
    Hausdorff, 109, 216, 218
    information, 735
    Ljapunov, 739
    mass, 736
    pointwise, 736
    precision, 743
    problem of, 742
    Rényi, 736
    self-similarity, 441

disconnected, 251
displacement, 481

    mean square, 481
    proportional, 482

distance, 263, 267, 274
    between two images, 267
    between two sets, 267
    Euclidean, 216
    Hausdorff, 263, 268
    in the plane, 274
    of points, 274

distribution, 67
    bell-shaped, 482
    Gaussian, 482
    invariant, 527, 529, 568

distribution function, 2
divisibility properties, 410, 421, 423, 454
divisibility set, 424
DLA, 478, 952
DNA, 353
Douady, Adrien, 769, 800, 817, 853, 871
dough, 551
dragon, 240, 374, 384
dust, 72
dyadic interval, 927
dynamic law, 17
dynamic process, 265
dynamical system, 233, 658

    attractor for the, 259
    conservative, 655
    dissipative, 655
    linear, 594

dynamical systems theory, 233, 265
dynamics of an iterator, 62
Dynkin, Evgeni B., 407

Eadem Mutata Resurgo, 186
Eckmann, Jean-Pierre, 751
Edgar, Gerald, 216, 862
eigenfunctions, 952
eigenvalue, 626, 675
Einstein, Albert, 4
electrochemical deposition, 475

    mathematical modeling, 476
electrostatic field, 800
encirclement, 795, 808, 826

    algorithm, 798
energy, 655
ENIAC, 48, 333
ε-collar, 267
equipotential, 801, 809, 896
ergodic, 554, 726
ergodicity, 525, 728
erosion model, 355
error amplification, 709, 715, 751, 785
error development, 581
error propagation, 41, 55, 512, 515
escape set, 77, 125, 789, 791
escape time, 648
escaping sequence, 77
Escher, Maurits C., 79
Ettingshausen, Andreas von, 409
Euclidean dimension, 202
Euler step, 683, 716
Euler, Leonhard, 85, 157, 168, 262

Index


expansion, 70, 163 binary, 70, 101, 175, 549, 555, 557, 576 binary coded decimal, 576 continued fraction, 164 decimal, 70, 425 p-adic, 426 triadic, 70, 72

factorial, 409 factorization, 422, 425, 427 Falconer, Kenneth, 216 Faraday, Michael, 753 Fatou, Pierre, 773 feedback, 17, 57

clock, 19 cycle, 20 experiment, 19 loop, 231, 266 machine, 17, 31

feedback system, 29, 34, 35, 76 geometric, 187

feedback systems class of, 266 quadratic, 123 sub-class of, 266

Feigenbaum (BASIC program), 652 Feigenbaum constant, 590, 612, 618, 675, 707 Feigenbaum diagram, 587, 651, 693 Feigenbaum point, 582, 588, 610, 619, 622, 629 Feigenbaum scenario, 675, 754 Feigenbaum, Mitchell, 53, 587, 590, 612 Fermat, Pierre de, 82 fern, 229, 285

non-self-similar, 288 Fibonacci, 29

-Association, 30 -Quarterly, 30 generator, 339 generator formula, 340 numbers, 30 sequence, 29, 153

Fibonacci, Leonardo, 65 fibrillation of the heart, 53 field line, 800, 802, 813, 816, 852, 853, 871, 872

angle, 804, 815, 854 dynamics, 816

figure-eight, 833, 834 final curve, 95 final state, 510, 586, 759 final state sensitivity, 757

fixed point, 593, 599, 608, 641, 771, 862 (un)stable, 593 attractive, 593, 855, 864 indifferent, 857, 867 of the IFS, 259 parabolic, 868 repelling, 821, 839, 864 stability, 594 super attractive, 596, 855 unstable, 603

fixed point equation, 167 floating point arithmetic, 535 flow, 717, 740 folded band, 688 forest fires, 464

simulation, 465


Fortune Wheel Reduction Copy Machine, 301, 321 Fourier, 162

analysis, 261 series, 262 transformation techniques, 162

Fourier, Jean Baptiste Joseph de, 262 fractal branching structures, 457 fractal dimension, 195, 202, 422

prescribed, 458 universal, 469

fractal geometry, 23, 59 fractal surface construction, 497 fractal surfaces, 211 fractal, random and nonrandom, 944 fractals, 35, 76

classical, 63 construction of basic, 149 gallery of historical, 131

FRCM, 301 free energy, 937 friction, 655 Friedrichs, Kurt Otto, 683 Frobenius-Perron equation, 528 Frobenius-Perron operator, 528, 529

Galilei, Galileo, 139 Galle, Johann G., 38 Game of Life, 413

majority rule, 415 one-out-of-eight rule, 414 parity rule, 419

game point, 35, 298 Gauss map, 568 Gauss, Carl Friedrich, 4, 38, 85, 157, 420, 568, 948


Gaussian central limit theorem, 947 Gaussian distribution, 482, 947 Gaussian random numbers, 484 generalized dimensions, 941 generator, 90, 208 generic parabola, 527, 544, 631 geometric feedback system, 187 geometric mean, 735 geometric series, 147, 221, 612

construction process, 149 Giant sequoias, 140 Gleick, James, 41 glider, 414, 415 golden mean, 30, 153, 165, 869, 870, 882

continued fraction expansion, 165 Golub, Jerry, 508 graphical iteration, 337, 472, 510, 583, 593

backward, 826 Graphical Iteration (BASIC program), 61 grass, 399 Grassberger, Peter, 724 Great Britain, 192 Gregory series, 158 Gregory, James, 156, 158 Großmann, Siegfried, 53, 629 group, 244 growth, 195

allometric, 197 cubic, 198 proportional, 143

growth law, 142, 195 growth rate, 43 Guckenheimer, John, 658 Guilloud, Jean, 160 guns, 414

Hadamard, Jacques Salomon, 122 half-tone image, 329 Hamilton, William R., 837 Hanan, James, 363 Hao, Bai-Lin, 658 harmonic measure, 953 Hausdorff dimension, 202, 216, 218 Hausdorff distance, 150, 263, 268 Hausdorff measure, s-dimensional, 217 Hausdorff, Felix, 63, 107, 108, 202, 203, 216, 233, 263 head size, 142 Hénon, Michel, 659 Hénon attractor, 659, 663, 667

box-counting dimension, 721 dimension, 670, 726 information, 733 information dimension, 735 invariance, 662, 663 Ljapunov exponent, 712 natural measure, 733 unstable direction, 712

Hénon transformation, 660, 710 decomposition, 661 derivative, 713 Feigenbaum diagram, 675 fixed points, 674 inverse, 668

Herman, M. R., 869 Herschel, Friedrich W., 38 Heun's method, 766 hexagonal web, 88 Hilbert curve, 63, 388, 392 Hilbert, David, 63, 94, 107, 373, 387 Hints for PC Users, 62, 134 Hirsch, Morris W., 507 histogram, 526, 630

spike, 606, 633 histogram method, 939 Hölder condition, 217 Holmes, Philip, 658 Holzfuss, Joachim, 754 homeomorphism, 106, 570 homoclinic point, 644 HP 28S, 49


Hubbard, John H., 769, 770, 800, 817, 853, 872 human brain, 258

encoding schemes, 259 Hurewicz, Witold, 107 Hurst exponent, 493 Hurst, H. E., 493 Hutchinson distance, 331 Hutchinson equation, 436 Hutchinson operator, 34, 171, 238, 269, 302, 438

contractivity, 270 Hutchinson, J., 171, 229, 233, 263 hydrodynamics, 756

ice crystals, 243 IFS, 230, 328, 435, 437, 923, 930

fractal dimension for the attractors, 271 hierarchical, 284, 290, 337, 392, 442, 444

image, 231 attractor image, 305


code for the, 36, 329 color, 330 compression, 258 encoding, 261 example of a coding, 258 final, 232 half-tone, 329 initial, 238 leaf, 278 perception, 258 target, 278

imitations of coastlines, 499 incommensurability, 126, 163 indifferent, 822 infimum, 216 information, 729

definition, 729 power law, 734

information dimension, 202, 735 relation to box-counting dimension, 735

information theory, 730 initial value, 681 initial value problem, 681 initiator, 90 injective, 751 input, 17, 27, 31, 34 input unit, 17, 19 interest rate, 44 intermittency, 640, 644, 650, 708

scaling law, 646 intersection, 106 invariance, 76 invariance property, 173 invariant measure, 329, 727 invariant set, 823 inverse problem, 261, 278 irreducible, 422 isometric growth, 143 iterated function system, 230 iteration, 18, 42, 49, 55, 57, 544, 547, 551

graphical, 58 iterator, 17, 37, 55

Julia set, 123, 771, 775, 790, 791, 820, 823, 839, 878

(dis)connected, 833 by chaos game, 820 invariance, 822 iterated function system, 823 pinching model, 817

quaternion, 837 self-similarity, 822, 890 structural dichotomy, 833, 843

Julia sets, 952 Julia, Gaston, 63, 122, 772, 774 JuliaSets (BASIC program), 840

Kadanoff, Leo P., 472 Kaplan, James L., 738 Kaplan-Yorke conjecture, 738, 739, 741 Kepler's model of the solar system, 38 Kepler, Johannes, 38, 39 kidney, 94, 211 kneading, 536, 551

cut-and-paste, 543 stretch-and-fold, 543, 544, 660, 688 substitution property, 542

Koch curve, 63, 89, 112, 200, 365, 380, 405 construction, 91 Koch's Original Construction, 89 length of, 92 random, 400, 459 self-similarity dimension, 205

Koch Curve (BASIC program), 180 Koch island, 89, 149, 200, 386

area of the, 149 random, 400

Koch, Helge von, 63, 89, 145, 152 Kolmogorow, Andrej N., 507 Kramp, Christian, 85 Kummer criterion, 427, 429, 434, 435 Kummer, Ernst Eduard, 133, 254, 424, 425

L-system, 354, 361, 376, 380, 402 extensions, 401 parametric, 401 stochastic, 399

L-Systems (BASIC program), 403 labeling, 73 Lagrange transform, 937 Lagrange, Joseph Louis, 262 Lange, Ehler, 333 language, 150, 230 Laplace equation, 480, 952 Laplace, Pierre Simon, 1, 323 Laplacian fractals, 480 laser instabilities, 53 Lauterborn, Werner, 754 law of gravity, 40 law of large numbers, 946


leaf, 128, 278 spiraling, 128

least-squares method, 743 Lebesgue, Henri L., 107, 109 Legendre transform, 937, 941, 952 Legendre's identity, 429 Legendre, Adrien Marie, 425, 429 Leibniz, Gottfried Wilhelm, 5, 17, 91, 156 lens system, 23, 26 level set, 812, 851

cell, 812, 816, 832, 851 level sets, 811 Lewy, Hans, 683 Li, Tien-Yien, 657 Libchaber, Albert, 508 Liber Abaci, 29 Lichtenberger, Ralph, 838 lightning, 952 limit, 135 limit objects, 147 limit structure, 151

boundary of the, 151 Lindemann, F., 160 Lindenmayer, Aristid, 128, 353, 355, 363, 401 linear congruential method, 339 linear mapping, 235 Liouville monster, 871 Liouville number, 870 Liouville, Joseph, 870 Ljapunov dimension, 739, 741

acoustic chaos, 756 Ljapunov exponent, 516, 518, 523, 568, 709, 712, 714, 738, 739 acoustic chaos, 756 algorithm, 710, 712 algorithm for time series, 751 continuous system, 715 from time series, 751 invariance, 719 zero, 719

Ljapunov, Alexander Michailowitsch, 516 local Hölder exponent, 931 locally connected, 853, 871, 873 log/log diagram, 193

for the coast of Britain, 194 of the Koch curve, 201

log/log diagram, 195 logistic equation, 42, 45, 48, 333, 678 look-up table, 412, 420, 454 Lorenz attractor, 658, 697, 698, 702, 766

dimension, 705 model, 702 reconstruction, 750

Lorenz experiment, 48 Lorenz map, 691, 693, 703 Lorenz system, 697, 699, 717, 768

crisis, 707 intermittency, 706 Ljapunov exponent, 716 Lorenz map, 702, 703 periodic solution, 705 physical model, 698 streamlines, 701 temperature profile, 701


Lorenz, Edward N., 42, 46, 53, 512, 581, 678, 697, 700

1963 paper, 657 Lozi attractor, 672 Lozi, Rene, 671 Lucas' criterion, 428 Lucas, Edouard, 425, 428, 433

Machin, John, 158, 160 Mandelbrojt, Szolem, 122 Mandelbrot (BASIC program), 898 Mandelbrot set, 841, 843, 896

algorithm, 848, 896 atom, 866 binary decomposition, 851 buds, 852, 855, 866 central piece, 857 dimension, 851 encirclement, 845, 848, 896 equipotentials, 851 field lines, 851 level set, 851 pinching model, 852 potential function, 851 secondary, 873, 892 self-similarity, 874, 890

Mandelbrot Set Pixel Game (BASIC program), 899 Mandelbrot Test (BASIC program), 899 Mandelbrot, Benoit B., 63, 89, 122, 183, 493, 841, 843 Mañé, Ricardo, 748 Manhattan Project, 48 mapping ratio, 20 mappings, 235

affine linear, 236 linear, 235


Margolus, Norman, 416 Markov operator, 330 Mars, 39 mass, 224, 727 mass dimension, 736 mathematical category of constructions, 256 mathematics, 135

in school, 293 new objectivity, 135

Matsushita, Mitsugu, 475 Maxwell, James Clerk, 2 May, Robert M., 42, 53 mean square displacement, 481 measure

binomial, 331 multifractal, 331

memory, 31, 34 memory effects, 22 Menger sponge, 109

construction of, 109 Menger, Karl, 107, 108, 115 metabolic rate, 210 meter, 65 meter stick, 308 method of least squares, 194 method of moments, 939 metric, 274

choice of the, 274 Euclidean, 264 Manhattan, 264 maximum, 264 suitable, 276 topology, 263

metric space, 110, 263, 264 compact, 110 complete, 265

middle-square generator, 339 Misiurewicz point, 886, 888 Misiurewicz, Michal, 671, 888 Mitchison, G. J., 358 mixing, 58, 520, 554, 558 mod-p condition, 429, 433, 440 mod-p criterion, 451 modulo, 420, 422, 423 modulus, 777 moment, 951 moments of measure, 942 Monadology, 5 monitor, 20 monitor-inside-a-monitor, 20

monster, 100 fractal, 229 of mathematics, 63

monster spider, 117 Monte Carlo methods, 323 moon, 40,78 mountains, 15


MRCM, 23, 34, 36, 230, 233, 354, 364, 387, 820 adaptive iteration, 342 blueprint of, 258, 278 decimal, 307 lens systems, 288 limit image, 288 mathematical description, 288 networked, 284, 337, 392

MRCM Iteration (BASIC program), 294 Mullin, Tom, 756 multi-fractal, 737 multifractal, 331, 736, 922, 935 multifractals, 216 Multiple Reduction Copy Machine, see MRCM multiplicative cascade, 927 multiplier, 885, 889

natural flakes, 90 natural measure, 727 NCTM, 25 Neumann, John von, 333, 339, 412 Newton's method, 28, 167, 773 Newton, Sir Isaac, 17, 40, 91 noise, 551 nonlinear effects, 26 nonlinear physics, 2 nonlinearity, 3 Norton, V. Alan, 838

object, 139 one-dimensional, 112 scaled-up, 139

one-step machines, 27 open, 569 open set, 565 orbit, 509

backward, 669, 826 ergodic, 525 periodic, 509, 533, 535, 604

organs, 94, 210 oscillator, 695 output, 17, 27, 31, 34 output unit, 17, 19


overlap, 275 overlapping attractors, 346

parabola, 57, 608, 691, 831 parameter, 18, 27 parametrization, 858 partition function, 937 Pascal triangle, 82, 407, 419, 447

a color coding, 132, 408, 433 coordinate systems, 424

Pascal, Blaise, 82, 88, 254 pattern formation, 422, 952 Peano curve, 63, 94, 220, 372, 383, 394

construction, 95 S-shaped, 391 space-filling, 98

Peano, Giuseppe, 63, 94, 107, 225, 372 pendulum over magnets, 758, 761, 774 percolation, 458, 459

cluster, 470 models, 458 threshold, 464, 470

period, 588 period-doubling, 587, 592, 610, 611, 618, 619, 636, 674, 675, 705, 754, 875 periodic orbit, 604, 628, 864, 889

derivative criterion, 864 periodic point, 523, 552, 555, 557, 564, 576, 639 periodic trajectory, 705 periodic window, 635, 636, 708, 875 Perrin, Jean, 482 perturbation method, 3 phase transition, 467 phyllotaxis, 283, 285 pi, 153, 160, 323

approximations of, 158, 161 Cusanus' method, 155 Ludolph's number, 158 Machin's formula, 160 Rutherford's calculation, 159

pinching, 852 Pisa, 29 Pisano, Leonardo, 29 pixel, 329 pixel game, 771, 775 planets, 38, 40 plant, 15 Plath, Peter, 477 Platonic solids, 38 Poincaré, Henri, 53, 107, 507, 585, 644, 694

Poincaré map, 694 point at infinity, 781, 790 point charge, 801 point of no return, 793 point set topology, 150 point sets, 150 pointwise dimension, 736 polar coordinate, 778, 801 poly-line, 593 polynomial, 420, 421

equivalent, 570 Pontrjagin, Lew Semjenowitsch, 107 population dynamics, 29, 42, 53 Portugal, 183, 199 potential energy, 801 potential function, 801, 811 power law, 195

behavior, 200 pre-periodic point, 886 preimage, 593, 601, 606, 727, 795, 815, 820 Principia Mathematica, 135


prisoner set, 125, 789, 792, 795, 800, 802, 826, 834, 844

(dis)connected, 843 connected, 832

probabilities, 304 badly defined, 334 choice of, 304 for the chaos game, 349 heuristic methods for choosing, 327

probability theory, 82 problem, 281

optimization, 281 traveling salesman, 282

processing unit, 19, 31, 34, 42 production rules, 360 program, 60, 132, 179, 226, 293, 350, 503

chaos game for the fern, 350 graphical iteration, 60 iterating the MRCM, 293 random midpoint displacement, 503

proportio divina, 30 Prusinkiewicz, Przemyslaw, 356, 363, 401 pseudo-random, 333 Pythagoras of Samos, 126 Pythagorean tree, 126

quadratic, 42, 52, 59 dynamic law, 52

quadratic equation, 601, 787


quadratic iterator, 37, 520, 536, 581, 585, 651, 672, 690, 710, 826

equivalence to tent transformation, 562 generalization to two dimensions, 660 in low precision, 534

quadratic law, 52 quaternions, 837

rabbit, 823, 838 rabbit problem, 32 rabbits, 29 Ramanujan, Srinivasa, 157 random, 459

fractals, 459 midpoint displacement, 487 number generator, 48, 322 process, 299 successive additions, 498

random function, 492 rescaled, 492

random resistor networks, 952 random variable, 944 randomness, 34, 297, 459 Rayleigh, Lord, 754 reaction rate, 452 real number, 776 reconstruction, 658, 745, 748

acoustic chaos, 755 reduction, 236 reduction factor, 23, 26, 203, 236 reflection, 236, 244 renormalization, 470, 710

technique, 470 renormalization group, 3 Renyi dimension, 736 repeller, 593, 603, 820

derivative criterion, 822 repelling, 822 rescaling, 626 rest point, 593 return map, 689 Riemann, Bernhard, 4 Riemannian sphere, 781 Rössler, Otto E., 686 Rössler attractor, 687, 688, 766

paper model, 690 reconstruction, 748, 749

Rössler system, 686, 695 Feigenbaum diagram, 693

romanesco, 137, 144

rose garden, 368 rotation, 236, 244, 882 Rozenberg, Grzegorz, 353 Ruelle, David, 507, 656, 751 Runge-Kutta method, 717 Rutherford, William, 159

saddle point, 642, 644 saddle-node bifurcation, 642 Sagan, Carl, 162 Salamin, Eugene, 161 Saltzman, B., 701 sample value, 945 saw-tooth transformation, 541, 555, 571 scaling, 931 scaling factor, 138, 203, 309 Schwarzian derivative, 616 scientific method, 5 self-affine, 145, 146, 223, 283 self-intersection, 94, 214 self-similar, 76, 283

perfect, 76 statistically, 494 strictly, 146, 283


self-similarity, 95, 137, 202, 619, 636, 822, 874 asymptotic, 885, 890, 894 at a point, 882 at Feigenbaum point, 623 of probability distribution, 325 of the Feigenbaum diagram, 588 statistical, 145

self-similarity dimension, 202, 205 of the Koch curve, 205

sensitive dependence on initial conditions, 6, 48, 511, 538, 551, 557, 561, 666, 703, 804

sensitivity, 511, 524, 533, 582 sensitivity constant, 551 series, 147

geometric, 147 Sernetz, Manfred, 210 set, 69

countable, 69 set theory, 67 shadowing lemma, 576 Shanks, Daniel, 160 Shannon, C., 730 Shaw, Robert, 521 shift, 75

binary, 101 shift on two symbols, 551


shift operator, 549 shift transformation, 554, 568, 575, 804 Shishikura, M., 851 Schrödinger equation, 952 Siegel disk, 868, 872 Siegel, Carl Ludwig, 868 Sierpinski arrowhead, 368, 371, 382 Sierpinski carpet, 81, 119, 219, 254 Sierpinski fern, 291 Sierpinski gasket, 24, 25, 36, 63, 174, 219, 244, 252, 366, 369, 370, 407, 434, 930 binary characterization, 175, 434 perfect, 24 program, 132 relatives, 244 variation, 239

Sierpinski Gasket by Binary Addresses (BASIC program), 134

Sierpinski, Waclaw, 63, 78, 174 similar, 23 similarity, 138 similarity transformation, 23, 138, 202 similitude, 23, 26 simply connected, 251 singularity, 466 singularity strength, 931 Skewed Sierpinski Gasket (BASIC program), 134 Smale, Stephen, 507, 644, 658 Smith, Alvy Ray, 363 snowflake curve, 89 snowflakes, 952 software, 40 space-filling, 94, 373 Spain, 183, 199 Sparrow, Colin, 705, 708 spectral characterization, 498 spiders, 112

monster, 117 order of, 115

Spira Mirabilis, 186 spirals, 185, 891, 894

Archimedean, 185 golden, 190, 882 length of, 185, 189 logarithmic, 142, 185 polygonal, 188 smooth, 189 square root, 126

square, 244 square root, 26, 28, 166

approximation of, 166 complex, 839 of two, 166

square, encoding of, 243 stability, 25, 507, 511, 610, 644 stability condition, 683 stable, 56, 57, 59 staircase, 221

boundary of the, 222 star ships, 414 statistical mechanics, 2 statistical tests, 339 statistics of the chaos game, 329 Steen, Lynn Arthur, 407 stereographic projection, 782 Stewart, H. B., 333, 407, 695 Stifel, Michael, 85 Stirling, James, 410 strange attractor, 255, 656, 693

characterization, 670 coexistence, 676 dimension, 721 reconstruction, 745

strange attractors, 952 Strassnitzky, L. K. Schulz von, 159 stream function, 700 Stroemgren, Elis, 40 structures, 203

basic, 285 branching, 457 complexity of, 16 dendritic, 475 in space, 212 in the plane, 212 mass of the, 224 natural, 65 random fractal dendritic, 458 self-similar, 203 space-filling, 94 tree-like, 458

Sucker, Britta, 333 Sullivan, Dennis, 869 Sumerian, 27


super attractive, 596-598, 609, 855, 863, 864 super object, 112, 113 super-cluster, 470 super-site, 470 supremum, 216 survivors, 521 Swinney, Harry, 508


symmetry transformation, 244

Takens, Floris, 507, 657, 748 Tan Lei, 879, 889 tangent bifurcation, 640, 642, 674 target set, 809, 846 temperature, 700 tent transformation, 32, 556, 571, 704

binary representation, 556 equivalence to quadratic iterator, 562

thermodynamic formalism, 952 thermodynamics, 937, 939 Thomae, Stefan, 53, 629 Thompson, J. M. T., 355, 695 three-body problem, 40 threshold radius, 794 time delay, 748 time profile, 598, 602, 607 time series, 509, 581, 586 Time Series (BASIC program), 582 Toffoli, Tommaso, 416 Tombaugh, Clyde W., 38 topological conjugacy, 569 topological dimension, 202 topological invariance, 108 topological semi-conjugacy, 569, 571 topology, 106 totally disconnected, 804 touching point, 118, 866 touching points, 312 trajectory

periodic, 692 transcendental, 870 transformation, 26, 54, 107, 138, 540

affine, 26, 223, 300 affine linear, 234 Cantor's, 107 for the Barnsley fern, 256 invariance, 168 linear, 26 nonlinear, 125, 820 renormalization, 471 shift, 549, 555 similarity, 138, 168, 202, 234, 300 symmetry, 244

transition to chaos, 6 transitivity, 536 transversal, 897 trapezoidal method, 684 trapping region, 664

tree, 66, 242, 402 decimal number, 66 Pythagorean, 126

triadic numbers, 69, 75 triangle, encoding of, 243 triangular construction, 497 triangular lattice, 463 triangulation, 896 truncation, 533 turbulence, 1, 53, 952

acoustic, 753 turtle graphics, 376, 402

step length, 381 turtle state, 377

stacking of, 397 twig, 242, 396 twin Christmas tree, 240 two-step method, 28, 31

Ulam, Stanislaw Marcin, 48, 333, 412 uncertainty exponent, 757 uniform distribution, 322, 484 unit disk, 802 universal object, 115 universality, 4, 112, 586, 590, 616, 618, 625

of the Menger sponge, 115 of the Sierpinski carpet, 112

Universe, 952 unstable, 57, 59, 593 Urysohn, Pawel Samuilowitsch, 107 Uspenski, Wladimir A., 407 Utah, 199

variance, 947 variational equation, 717 vascular branching, 211 vectors, 31 Verhulst, Pierre F., 42, 44, 45 vessel systems, 94 vibrating plate, 753 video feedback, 19

setup, 19 Vieta's law, 865 Vieta, François, 156 viscous fingering, 480 visualization, 362, 376 volume, 740 Voyager II, 53

Wallis, John, 156


weather model, 41 weather prediction, 46, 59 Weierstrass, Karl, 90 wheel of fortune, 34 Wilcox, Michael, 358 Wilson, Ken G., 3, 474 wire, 501 Wolf, A., 751 Wolfram, Stephen, 407, 412 worst case scenario, 259 Wrench, John W., Jr., 160

Yorke, James A., 521, 657, 738

Zu Chong-Zhi, 154 Zuse, Konrad, 412
