Instabilities and Information in Biological Organization



Scanned from Self-Organizing Systems: The Emergence of Order, F. E. Yates, Ed. New York: Plenum Press, 1987, pp. 325-338.

Instabilities and Information in Biological Self-organization

Howard H. Pattee

ABSTRACT

Two classes of self-organizing systems have received much attention: the statistically unstable systems that spontaneously generate new dynamical modes, and the information-dependent systems in which nonstatistical constraints harness the dynamics. Theories of statistically unstable systems are described in the language of physics and physical chemistry, and they depend strongly on the fundamental laws of nature and only weakly on the initial conditions. By contrast, the information-dependent systems are described largely by special initial conditions and constraints, and they depend only weakly, if at all, on the fundamental laws. This results in the statistically unstable systems being described by rate-dependent equations, while the information-dependent systems are described by rate-independent (nonintegrable) constraints. It is argued that an adequate theory of biological self-organization requires that these two complementary modes of description be functionally related, since the key process in morphogenesis is the harnessing of cellular dynamics by the informational constraints of the gene. This could arise if the triggering role of fluctuations could be displaced by informational constraints in the control of the dynamical behavior. However, the spontaneous replacement of chance fluctuations by deterministic informational codes is itself a serious problem of self-organization. At present the only approach requires complementary modes of description for the molecular informational constraints and for the macroscopic dynamical behavior that they harness.

―The Editor

What do we expect to learn from a theory of biological self-organization? What types of observables or events do we begin with, and what regularities or laws would we accept as explanations of the self-organizing consequences of these events? Is a theory of biological self-organization fundamentally different from a theory of evolution, a theory of development, or a theory of thermodynamics; or is self-organization only the result of special types of behavior, such as the singularities found in dynamical systems, or the genetic shuffling found in evolutionary systems? If we survey the many approaches to the problem taken by the other authors of this volume, we might conclude that everyone sees self-organizing behavior in terms of his or her own discipline's theoretical framework. This is to be expected. We all use what we know to better comprehend what we do not know, as Newton used his theory of gravitation to better comprehend God. Nevertheless, even if all disciplines have something to contribute to our comprehension of self-organizing behavior, we still must be able to distinguish this "special type" of behavior from the "normal" events associated with each discipline, if self-organization is to have a distinct meaning.

This chapter focuses on two classes of self-organizing system that have received the most attention: the statistically unstable systems and the information-dependent systems. The question to be addressed is: What do these two types of systems have to do with each other? Reading the chapters in this book, we find that instability and information are nearly disjoint subjects that are not even discussed in the same language. The chapters on instabilities are written in the language of physics or physical chemistry and are particularly sensitive to the basic laws of nature, whereas the chapters focusing on informational concepts use biological language and are not concerned with physical laws at all. Of course this might simply reflect the differences of the two disciplines of physics and biology. It is also true that authors of papers on instabilities tend to use relatively simple nonliving chemical systems as examples, whereas those writing papers involving information discuss organs as complex as the human brain. Again, the differences in approach might simply reflect this enormous difference in the levels of organization chosen for study.

These reasons for different approaches are understandable, but ignoring either dynamical instability or symbolic information evades a fundamental aspect of biological self-organization. Although it is true that dynamical theory and symbolic information are not associated in our normal way of thinking, they are epistemologically complementary concepts that are nevertheless both essential for a general theory of biological self-organization. Moreover, instabilities are the most favorable condition of a dynamical physical system for the origin of nondynamical informational constraints, and the evolution of self-organizing strategies at all levels of biology requires the complementary interplay of dynamical (rate-dependent) regimes with instabilities and nondynamical (rate-independent, nonintegrable) informational constraints (Pattee, 1971). Finally, after discussing these points, I shall comment on the limitations of self-organization theories in terms of epistemological and methodological reductionism (see Ayala, Chapter 16, this volume).

The Necessity of Stability

One epistemological requirement of a scientific explanation of events is some form of homomorphism between the behavior of the theory, as a model or simulation, and the behavior of corresponding events or measurements. Or, as Hertz (1899) expressed it, we have an explanation ". . . when the necessary consequents of the images in thought are always the images of the necessary consequents in the nature of things pictured." This classical epistemological requirement for a correspondence between object and image implies the basic concept of stability: that all sufficiently small changes in the images or descriptions of events represent corresponding small changes in the events themselves. Again, it is hard to improve on the classical formulations. Poincare (1952) called a system stable when "small causes produce small effects." Although Newton had no formal methods for testing the stability of his laws of motion, he clearly understood the profound importance of stability for an explanation, since he argued that one of God's primary functions was to maintain the stability of the planetary orbits against the perturbations of other planets.

Deterministic laws of motion alone do not ensure stability. When Laplace proved that Newton's equations were inherently stable, the universe became a great deterministic machine that required no divine intervention once it was created. The only remaining problem, as Laplace (1951) saw it, was the practical impossibility of the human mind acquiring a complete and precise set of initial conditions for all the bodies in the universe. The best he could do was to develop the theory of probability so that we could deal rationally with our remaining ignorance of initial conditions. From this classical, objective epistemology of Newton and Laplace, the idea of self-organization made little sense. Either God intervened locally to keep motions stable, or any unpredictable behavior could be attributed only to our ignorance of details. In either case, there was no apparent need for additional theories of self-organization or emergent behavior.

The Necessity of Instability

Logically, any theory of self-organization must accept the precondition of a disorganized subsystem, or a partially disorganized system, since an inherently totally organized system leaves no room for more organization without either redundancy or contradiction. Thus, in the ideal Laplacean universe where every microscopic initial condition is precisely given, we have a totally organized system with an inexorably determined past and future. The well-known escape from this complete determinism, which Laplace recognized, is the condition of human ignorance of initial conditions. This condition is the basis of statistical mechanics; but its most general consequence was only an increasing disorganization, or entropy, in the course of time. The main contribution of Prigogine's school has been to find a new description of how instabilities in these statistical systems may result in entirely new structures from chaotic initial conditions. However, both the ontological and the epistemological status of instability remain a fundamental and still controversial problem. One may briefly, and somewhat naively, describe the problem as follows:

Our knowledge of the world depends upon our ordering of experience by images, models, descriptions, theories, and so on, which we try to make as clear and unambiguous as possible. To test our theories we predict the consequences from observed initial conditions using the rules of the theory and then look for the corresponding consequences in our later observations. If this correspondence holds over a wide enough range of initial conditions, and if the theory has other coherent logical and esthetic properties, which we find difficult to define, then we may feel that we understand, or have described, some laws of nature, or some laws of knowledge, depending on our metaphysics. Now, although we may observe unstable behavior, either directly in events or in our mathematical models of events, we still have a problem in establishing their correspondence by the same criteria we use to establish the correspondence between predictions and measurements of stable behavior. This results from the fact that we recognize events as unstable only because they appear to have no observable deterministic cause, and we recognize descriptions of events as unstable only because the descriptions fail to completely define those events. Poincare (1952) defined instabilities in the same way that he defined chance events, i.e., as observable events that have no observable cause. In other words, Poincare's definition of an unstable event implies that it can be described only by a probabilistic model that is logically incompatible with a deterministic model of the same event. The question, then, of whether instabilities are fundamentally the result of chance or determinism in an objective sense, or whether they result from the failure of our descriptions of events, is unresolved and perhaps unresolvable in any scientific or empirical sense. In any case we can see that self-organizing behavior must involve something more than the stable, deterministic trajectories of classical theories, or the assumption of ignorance of initial conditions, which leads to the stable, deterministic distributions of statistical mechanics. Even though these laws of physics are a foundation for all organization, including self-organization, we recognize in the self-organizing behavior of both nonliving and living systems many entirely new forms and patterns that are not simply the perturbations of stable systems or the probabilistic behavior of unstable systems.

The novelty and persistence of emergent forms characteristic of living systems do not fit our definition of either stable or unstable behavior. Therefore, we may expect theories of self-organization also to require complementary subjective or functional modes of description. Thom's (1975) catastrophe theory of self-organization may at first appear to have little relation to these concepts, since his starting point is pure mathematics. However, Thom's concept of form is also expressed in terms of stability; whatever remains stable under a small perturbation has the same form. Change of form, or morphogenesis, consequently involves instability. The mathematical concept of "structural stability," as it is called, is technically complex, but it can naturally be associated with two complementary modes of description―a control-space (subjective) description and a corresponding state-space (objective) description. The essence of structural stability is that a gradual change in the control-space description induces a corresponding gradual change in the state-space description ("small causes produce small effects"). In the other case, where a gradual change in the control-space description induces a sudden, discontinuous change in the state-space description (bifurcation), there is a change of form, or what Thom calls a catastrophe and we call a form of self-organizing behavior. Of course, many other epistemological interpretations of the formalism are possible, since the model, or "method," as Thom calls it, is essentially mathematical. However, it is fundamental to the method that two complementary structures are necessary for developing the concept of catastrophe.
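The fold, the simplest of Thom's catastrophes, can be made concrete in a few lines. The sketch below is a numerical illustration, not Thom's formalism: a gradient system x' = -dV/dx with potential V(x) = x^4/4 - x^2/2 - cx, whose single control parameter c plays the role of the control-space description and whose tracked equilibrium plays the role of the state-space description.

    # Minimal sketch (illustrative, not from the chapter) of structural
    # stability and its breakdown.  For most values of the control
    # parameter c, a small change in c produces a small change in the
    # equilibrium ("small causes produce small effects"); near the fold
    # point c ~ +0.385 the occupied equilibrium vanishes and the state
    # jumps discontinuously: Thom's catastrophe.

    def dVdx(x, c):
        return x**3 - x - c              # V(x) = x**4/4 - x**2/2 - c*x

    def relax(x, c, dt=0.01, steps=2000):
        """Let the fast state dynamics settle at a fixed control value c."""
        for _ in range(steps):
            x -= dt * dVdx(x, c)
        return x

    x = -1.0                             # start on the lower equilibrium branch
    prev = x
    for i in range(-60, 61):             # sweep the control parameter gradually
        c = i / 100.0
        x = relax(x, c)
        if abs(x - prev) > 0.5:          # a gradual cause, a discontinuous effect
            print(f"catastrophe near c = {c:+.2f}: x jumps {prev:+.3f} -> {x:+.3f}")
        prev = x

Run as written, the sweep prints a single jump from the lower branch to the upper one; sweeping c back down would produce the return jump near c = -0.385, the hysteresis characteristic of the cusp geometry.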

The Necessity of Complementarity

The modern concept of complementarity is associated with Bohr and the development of quantum theory (Bohr, 1928, 1963; Jammer, 1974; d'Espagnat, 1976), but Bohr believed that the concept was of much more general epistemological significance. Although it was the theory of the electron that forced recognition of complementary modes of description, Bohr felt that complementarity "bears a deep-going analogy to the general difficulty in the formation of human ideas, inherent in the distinction between subject and object."

complementarity "bears a deep-going analogy to the general difficulty in the formation of human ideas, inherent in the distinction between subject and object." Bohr's definitions of the complementarity principle were not formal or precise, and they have generated much controversy. This is to be expected of any epistemological principle that claims both universal applicability as well as some empirical necessity. It is generally accepted, however, that the principle includes at leasttwo components: first, that to account for or to explain an observed event, two distinct modes of description or representations are necessary, and second, that these two modes of description are incompatible, both in the logical sense that acontradiction would arise if the two descriptions were combined into one formal structure and in the conceptual sense that tryingto combine the meanings of both descriptions into one image leadsto confusion. Although my concept of complementarity was greatly influenced by reading Bohr and his interpreters, I do not wish todefend or attack his epistemology. I simply find no alternative but to accept multiple, formally incompatible descriptions as a satisfactory explanation of many types of biological events (Pattee, 1979). Perhaps the most fundamental epistemological complementarityarises in our perception of events as either deterministic or chance. The Laplacean ideal of determinism is certainly more thana rational hypothesis about point masses and universal gravitation. One of the most easily observed beliefs of a 5-or 6-year-old child is the child's assumption that every event has a cause or that events could not be capricious (Piaget, (927). Of course, the "causes" that children see are usually animistic or moralistic, but in any case, the idea of determinism is very primitive and does not easily die out in the course of intellectual development. For example, a mature Wigner (1964) characterizes his acceptance of explanation as the feeling that "events could not be otherwise," and most of us are at least emotionally sympathetic with Einstein's belief that "God does notplay dice." The concept of chance comes developmentally with experience,but is never assimilated into our thought with the clarity of theconcept of determinism. In fact, gamblers and physicists alike

The two types of classical theory illustrate this contrast: the so-called microscopic, deterministic descriptions based on the Laplacean ideal of total knowledge, and the complementary macroscopic, statistical descriptions based on some predefined ignorance. The former theories are usually pictured as laws of nature that are inexorable in every detail, whereas the latter theories assume some alternative behaviors that are ascribed to chance. The deterministic laws of nature are reversible (time-symmetric) and give predictions that are crucially dependent on knowledge of initial conditions, whereas statistical laws are irreversible and give predictions that are more or less independent of knowledge of initial conditions. Much of what we call nonliving organization is explained by one or the other of these types of description. Nevertheless, attempts to use both types of theory to explain organization have presented conceptual and formal difficulties since the time of Boltzmann. Prigogine and his collaborators have discovered from their formal attempts to define nonequilibrium entropy that a complementarity principle appears inescapable. The description of deterministic, reversible trajectories is incompatible with a simultaneous description of entropy (Prigogine, 1978; Misra, 1978). Even without the mathematical formalism, most of us will agree conceptually with Planck (1960) that "it is clear to everybody that there must be an unfathomable gulf between a probability, however small, and an absolute impossibility," i.e., determinism. Therefore, whether one looks at the principle of complementarity as an evasion or as a solution of the problem of determinism and chance, we have very little choice at present but to use complementary models for explaining self-organization in biological systems, in which chance and determinism play such interdependent roles. However, in my biologically oriented epistemology, I am going to suggest that chance is displaced in some optimal sense by informational constraints that efficiently control the higher levels of dynamical behavior.

The Nature of Dissipative Structures

We have seen that deterministic descriptions are not adequate for explaining self-organizing behavior even at the prebiological levels.

The instabilities in deterministic dynamics may be regarded as escape hatches, which in effect leave the behavior of the system undefined in some regions and, hence, subject to unknown or chance events. But chance serves only as an escape from classical determinism; it is not a theory of self-organization. The basic contribution of Prigogine was to find an alternative description of the system's behavior that exhibits a new structure, one that is stable with respect to the chance events of the previous level. In this way, the instability can serve as a source of chaos in the deterministic, microscopic description and also as a source of new order in the statistical, macroscopic description. However, even though these new modes of behavior effectively introduce a history into physical description, the selection of alternative modes is left to chance. The physics of this situation is described in terms of far-from-equilibrium thermodynamics, in which instability produces amplification of the fluctuation, or chance behavior, at the microscopic level, which is then stabilized in the form of new organizations or dissipative structures at the macroscopic level. Although the formalism describing this behavior becomes very complex, it is the epistemological basis of the approach that has the most significance for our theories of self-organization.
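The bare bones of that picture can be caricatured in a few lines. The sketch below is an illustration, not Prigogine's formalism: a symmetric state is rendered unstable, a microscopic fluctuation is amplified into one of two macroscopic branches, and the occupied branch is then stable against the same level of fluctuation that selected it.

    import random

    def settle(mu=1.0, noise=0.01, dt=0.01, steps=5000):
        """Start at the symmetric state x = 0, which is unstable for mu > 0.
        A microscopic fluctuation is amplified until the system settles on
        one of the two macroscopic branches x = +sqrt(mu) or x = -sqrt(mu);
        which branch is 'left to chance'.  The occupied branch then resists
        the very noise that triggered it."""
        x = 0.0
        for _ in range(steps):
            x += dt * (mu * x - x**3) + noise * random.gauss(0.0, dt**0.5)
        return x

    # Repeated runs split between the two branches at random.
    print(sorted(round(settle(), 2) for _ in range(10)))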

Classical epistemology, as we said, assigns determinism and objectivity the primary role in theory, with chance and the subjective observer accepted only as unavoidable perturbations in God's universal mechanism. Modern physics, especially quantum theory, forced us to assign a more fundamental role to probability, as well as to the role of the observer and the process of measurement. Explanation came to mean not simply the reduction of statistical observations to deterministic microscopic events but an effective procedure for correlating our descriptions of observations with our descriptions of laws. Prigogine's epistemological approach to dissipative structures requires this same complementarity between laws and observers. He associates irreversibility, or a direction of time, with the necessary epistemological conditions for observation, thereby achieving a consistent definition of nonequilibrium entropy and of the instabilities that allow dissipative structures (Prigogine, 1979).

However, this association of instability with the conditions for observation produces an apparent paradox, which is fundamental for theories of self-organization. Recall that the classical concept of instability associates it with chance events―Poincare's "observable events that have no observable cause." This means that lack of information is associated with chance and instability. Thus, flipping a coin is a chance event only if we do not measure the initial conditions accurately enough. In other words, instability is interpreted classically as a lack of knowledge of the system. But the modern view requires instability as a condition for measurement. In this sense, observation and knowledge seem to require a system that is complex enough to display instability in some sense. What is the difference, then, between the instabilities that we associate with loss of knowledge and the instabilities that we associate with the acquisition of knowledge? Or, in terms of theories of self-organization, what is the difference between the organizations that develop from loss of information and the organizations that develop from acquiring new information? One answer is that a loss of information occurs when systems acquire alternative behaviors (bifurcations), while a gain of information occurs when alternatives are reduced (selection).
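Shannon's measure gives this answer a simple arithmetic form; the toy computation below is a gloss, not the chapter's formalism. A bifurcation that opens two equally likely modes costs one bit of information about the system's state; a selection that eliminates the alternatives earns it back.

    from math import log2

    def entropy(ps):
        """Shannon entropy (bits) of a distribution over alternative modes."""
        return -sum(p * log2(p) for p in ps if p > 0)

    print(entropy([1.0]))         # 0.0 bits: a single deterministic mode
    print(entropy([0.5, 0.5]))    # 1.0 bit lost at a symmetric bifurcation
    print(entropy([0.99, 0.01]))  # ~0.08 bits: selection has nearly fixed one mode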

The Nature of Symbolic Information

These questions bring us directly back to the fundamental difference between the physicist's approach and the biologist's approach to a theory of self-organization, for despite the new epistemology of physics, which gives more weight to probabilistic descriptions and the requirements of observation, there is still an enormous gap between the types of self-organization found in the dissipative structures of macroscopic chemical systems and those found in even the simplest living cells. There is also a serious discrepancy in the idea that any hierarchy of levels of statistical, dissipative structures could ever, by itself, lead to biological self-organization, since the latter is clearly instructed and controlled by individual molecules of nucleic acids and individual enzymes. Statistical mechanics can play no more role in describing these individual molecules than in describing a computer program.

The basic epistemological distinction I wish to make is between the organizations that are constrained by symbolic information and those that develop through chance. Physical theory can be expected to describe self-organizing behavior only insofar as the laws of nature and of statistics are responsible for the behavior; but when the self-organizing behavior is under the constraints of a symbolic information system, then the history of the system dominates our description. The concept of physical law has meaning only for universal and inexorable regularities, i.e., for systems in which our information about the system can change only with respect to our knowledge of initial conditions. In other words, the form of the laws of nature, even the statistical laws, must be expressed as invariant relative to the information the observer may have about the state of the system. Physical laws do not change in time and have no symbolic memory of past events. On the contrary, observers and symbol systems are characterized by their response to selected past events, which are recorded as memory. A basic discrepancy, then, between the physicist's and the biologist's approach to self-organization is that the physicist's theory recognizes no symbolic restrictions and no historical regularities, whereas the biologist's theory assumes genetic symbol systems with more than 3 billion years of selected historical structures. The two approaches to self-organizing theories, the instability theories and the information-dependent theories, reflect these two complementary views toward symbols and matter: the instability theories emphasize fluctuations and ignore symbolic constraints, while the information-dependent theories ignore physical laws and emphasize genetic instructions. One can find this complementarity sharply distinguished in our two modes of describing computers. On the one hand, the basic electronic gates and memory devices that form the hardware require a description in the language of solid-state physics with no reference to syntax or symbols, whereas the software description is entirely symbolic, using programming languages that make no reference to physics. Of course, this same complementarity in our descriptions is required for even the simplest symbolic behavior.

The "hardware" description of a pencil has nothing to do with the function of a written message, and the chemical description of a gene has nothing to do with the function of the enzyme whose synthesis it instructs. We can therefore recognize that the physicist's instability-based concepts and the biologist's information-based concepts of self-organization are also closely related to the two sides of the structure-function complementarity. The concepts of function, symbol, and memory are not a part of physical theory itself. It is significant, however, that when the physicist tries to extend his descriptions to the process of measurement, he cannot avoid the concepts of symbol and function, since measurement is defined as a functional activity that produces symbolic output. For this reason the majority of physical scientists simply ignore the problem of measurement or place it in the biological world, often at the level of the conscious observer. Attempts at unified physical theories of measurement, especially in quantum theory, have not been satisfactory and remain controversial (e.g., see Jammer, 1974; d'Espagnat, 1976). On the other hand, most biological scientists tacitly assume a classical reductionism that they expect will ultimately explain biological activity in terms of physical theory. When it comes to theories of the brain, cognitive activity, and consciousness, there is not only controversy over what constitutes an explanation, but also basic disagreement on what it is that we are trying to explain. I therefore find it useful to pay more attention to primitive symbol systems, where we may expect the matter-symbol relationship to be less intricate and where the fundamental complementarity of physical instabilities and symbolic information can be more easily explored. My use of the concept of information is strictly limited to semantic information, i.e., to information that is characterized by its meaning, value, or function. Of course, semantic information is not as well defined as the structural information of communication theory or complexity theory; but the problem of value and function is obviously the central issue in biological organization.

My concept of a symbol system is essentially that of a language-like set of rules (e.g., codes, lexical constraints, and grammars) that are necessary conditions for executing or interpreting symbolic information, instructions, or programs (cf. Stent, Chapter 18, this volume).

Instabilities and Information

Since the concept of symbolic information is not as well defined as the concept of instability, we need to consider the primitive conditions for symbolic behavior in more detail. The simple concept of a symbol is that it is something that stands for something else by reason of a relation, but it is implicit in this concept that the relationship of symbol to referent is somewhat exceptional. In other words, it is not a physical law. We do not call the electron a symbol for a proton because there is a relationship of attraction between them, nor is the symbol's relationship to its referent the result of statistical laws. We do not call the temperature of a gas a symbol for the velocity distribution of its molecules. In natural languages we say that the symbol-referent relation is a convention. But what does a convention correspond to at the most primitive levels? At the level of the genetic code there is no evidence of a physical or chemical basis for the particular relations between codons and their amino acids. Crick first called this type of relation a frozen accident, and Monod has generalized this biological arbitrariness as a principle of gratuity. To achieve such an arbitrary or conventional aspect of the symbol-referent relation, the physical system in which the symbol vehicles are to exist must exhibit some form of instability. We could also say that a totally deterministic, stable description of the world, in which small changes inexorably produce corresponding small effects, does not allow the arbitrariness necessary to generate new symbol-referent relations. Moreover, once created, a symbol system must persist under the same instabilities through which it came to exist, since these instabilities are an inherent property of the underlying physical system. These sound very much like the conditions for Prigogine's dissipative structures, which are created by fluctuations in an unstable thermodynamic regime but which are subsequently stabilized against the same level of fluctuations by the macroscopic coherence of energy flow through the system.

However, there is a basic discrepancy between the characteristics of dissipative structures and the characteristics of symbols or informational structures. Dissipative structures are dynamic, i.e., they depend crucially on the rates of matter and energy flow in the system (e.g., reaction and diffusion rates). By contrast, symbol systems exist as rate-independent (nonintegrable) constraints. More precisely, the symbol-referent relationship does not depend, within wide limits, on the rate of reading or writing or on the rate of energy or matter flow in the symbol-manipulating hardware. On the other hand, the effect or meaning of symbols functioning as instructions is exerted through the selective control of rates. For example, the rate of reading or translating a gene does not affect the determination of which protein is produced. However, the synthesis of the protein as instructed by the gene is accomplished through the selective control, by enzymes, of the rates of individual reactions. At the other extreme of symbol system evolution, we have the example of computers, where the rate of reading a program or the rate of computation does not affect the result or what is being computed. On the other hand, the program, as instructions, is actually selectively controlling the rate at which electrons flow in the machine.
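The distinction can be made concrete with a toy translator; the code below is an illustration (using four real codon assignments), not a model from the chapter. The symbolic result is invariant over the reading rate, while the rate itself belongs entirely to the dynamics of the reader.

    import time

    # A fragment of the standard genetic code (real assignments).
    GENETIC_CODE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

    def translate(mrna, delay):
        """Read codons at an arbitrary rate; 'delay' stands for the
        rate-dependent dynamics of the reading hardware.  The product
        depends only on the symbol sequence, not on the rate."""
        protein = []
        for i in range(0, len(mrna), 3):
            residue = GENETIC_CODE[mrna[i:i + 3]]
            if residue == "STOP":
                break
            protein.append(residue)
            time.sleep(delay)            # slow or fast, the result is the same
        return protein

    mrna = "AUGUUUGGCUAA"
    assert translate(mrna, 0.0) == translate(mrna, 0.01)   # rate-independence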

A second difference between dissipative structures and symbols is in their size. Dissipative structures occur only when the size of the system exceeds some critical value that is significantly larger than the fluctuating elements that trigger the instability that creates them. This restriction may also be considered a statistical requirement for a large number of elements to allow stabilization of the new structure. Symbol vehicles, whether bases in nucleic acids, synaptic transmitters, or gate voltages in a computer, are generally not large relative to the size of the organizations they control, nor are symbols essentially statistical in their structure or behavior. A symbol is a localized, discrete structure that triggers an action, usually involving a more complex system than the symbol itself. Also, symbolic inputs are generally amplified in some sense. In other words, symbols act as relatively simple, individual, nondynamical (nonintegrable) constraints on a larger dynamical system. This suggests that with respect to their relative size, discreteness, nondynamical behavior, and triggering action on larger dynamical systems, symbols act at the same level as the fluctuations that generate dissipative structures. But clearly symbols are completely unlike fluctuations in other respects. Symbol systems are themselves exceptionally stable, and if we are to understand the origin of symbolic behavior at the fluctuation level or molecular level of organization, then we must explain how symbol systems could stabilize themselves without depending on the statistics or averages of macroscopic organization. How can we expect high reliability in molecular information structures that are embedded in a noisy thermal environment? There are several possible answers to this question, but quantitative results based on theory are still very difficult to produce. The problem was first discussed by Schrödinger (1944), who recognized that the quantum dynamical stationary state in a covalently bonded macromolecule ("aperiodic crystal") effectively isolates its primary structure from thermal fluctuations. Schrödinger also suggested the analogy of a true "clockworks" at the molecular level, based on quantum dynamical order―and not the statistical order of classical clocks. London (1961) also speculated that individual enzyme molecules may function in some form of quantum superfluid state that allows the advantage of stationary states along with the possibility of internal motions isolated from thermal fluctuations. In a discussion of the physical basis of coding and reliability in biological macromolecules, I proposed that the correlation of specificity and catalytic rate control in the enzyme requires a quantum mechanical nonintegrable (rate-independent) constraint formally analogous to a measurement process (Pattee, 1968). However, no quantum mechanical models of a quantitative relationship between specificity and catalytic power exist at this time. We should also mention von Neumann's (1966) ideas on the logical requirements for reliable self-replication. Although he gave no proof, or even a precise statement of the problem, von Neumann conjectured that in order to sustain heritable mutations without losing the general self-replicative property, there must be a separate coded description of the "self" that is being replicated, along with a general translation and synthesis mechanism that reads and executes the description (the "universal constructor").
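Von Neumann's architecture can be caricatured in a few lines; the toy below is one rendering of the logic, not his formalism, and all names in it are illustrative. The essential point is that the description is copied as an uninterpreted string, never re-derived by inspecting the constructed offspring, which is why a mutation in the description is heritable.

    # Toy sketch (illustrative names) of von Neumann's conjecture: a quoted
    # self-description, a constructor that interprets it, and a copier that
    # duplicates it blindly.

    def construct(description):
        """Interpret the description: build the machine it encodes."""
        return {"parts": description.split(","), "description": None}

    def copy(description):
        """Duplicate the description without interpreting it."""
        return str(description)

    def reproduce(description):
        offspring = construct(description)            # translation and synthesis
        offspring["description"] = copy(description)  # heredity: blind copying
        return offspring

    parent = "constructor,copier,membrane"
    child = reproduce(parent)
    mutant = reproduce(parent.replace("membrane", "wall"))  # heritable mutation
    print(child["parts"], mutant["parts"])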

The most fully developed study of the reliability requirements for hereditary propagation was begun by Eigen and developed into an extensive theory of self-organization at the chemical kinetic level (Eigen and Schuster, 1977). Given that some error in information processing, including the template replication of polymer sequences, is physically inevitable, Eigen and Schuster show how cooperation between replicating sequences, each with an error-restricted information capacity, can lead to hierarchical systems of greater and greater capacity (see Schuster and Sigmund, Chapter 5, this volume).
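The quantitative core of that error restriction can be stated in one line. The estimate below is the standard error-threshold formula from quasispecies theory, applied to round numbers chosen for illustration: with per-symbol copying fidelity q and a selective superiority sigma of the master sequence, information survives only in sequences shorter than about ln(sigma)/(1 - q) symbols.

    from math import log

    def max_length(q, sigma):
        """Eigen's error-threshold estimate of the longest sequence whose
        information can be maintained by selection, given per-symbol
        copying fidelity q and master-sequence superiority sigma."""
        return log(sigma) / (1.0 - q)

    # Uncatalyzed template copying (roughly 1% error per symbol) caps
    # heritable sequences at a few hundred symbols; enzymatic proofreading
    # raises the ceiling by orders of magnitude.
    print(round(max_length(q=0.99, sigma=10.0)))       # ~230 symbols
    print(round(max_length(q=0.999999, sigma=10.0)))   # ~2.3 million symbols

Cooperation among sequences, the hypercycle, is Eigen and Schuster's route past this ceiling without first demanding higher copying fidelity.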

Their dynamical treatment of this problem requires a special assumption, or special boundary condition, that I would associate with arbitrary measurement constraints rather than with laws of nature. This in no way weakens their mathematical arguments; however, it bears directly on the question of epistemological reductionism. The special assumption is the existence of specific catalysts that are coordinated to form a primitive code. I want to make it clear that there is nothing wrong with this assumption―indeed, this may actually be the way life began. All I claim is that a code is not reducible to physical laws in any explanatory sense, i.e., without basically revising the concept of explanation. The practical question of origins is simply whether any such functional code had a reasonable probability of occurring by chance under primitive earth conditions. The epistemological question is whether the logical concept of codes in general is derivable from physical laws alone or whether the concept of code also requires a complementary functional mode of description. In my earlier discussions of the relationship of specific catalysts to the problem of measurement, I concluded that symbolic information can only originate from processes epistemologically equivalent to measurements (Pattee, 1968, 1979). Measurement may occur over an enormous range of levels of biological organization, from the specific catalysis of enzymes to the natural selection processes that are the ultimate origin of symbolic information. The artificial measurements of physicists fall somewhere in this hierarchy. Syntactical constraints or codes are analogous to the constraints embodied in measuring devices. Without such coherent constraints, neither informational nor measurement processes would have any function or meaning; yet these constraints are arbitrary in the sense that many physically distinguishable syntactical constraints (i.e., different languages and measuring devices) may produce indistinguishable meanings or results. This redundancy or degeneracy is found at all levels of symbol systems, from the genetic code to human languages, and contributes to the stability and reliability of symbolic information. One is tempted to contrast Poincare's concept of instability, i.e., distinguishable events for which we find no distinguishable antecedents, with this degeneracy characteristic of symbol vehicles, i.e., distinguishable antecedents for which we may find no distinguishable consequence. However, this is too simple a comparison, for even in the most reliable symbol systems some unstable ambiguities in function must exist with respect to perturbations in symbol-vehicle structure. I shall return to these epistemological issues in the last section, but my main purpose here is to suggest how dynamics, instabilities, dissipative structures, and symbolic information are related in biological self-organization. I begin with the hypothesis that the elemental basis for the symbol-referent relationship is the individual specific catalytic polymer, in which the folded shape of the molecule is arbitrarily or "gratuitously" coupled to control a specific dynamical rate. This is the only general type of coupling that provides the necessary conditions for a symbol-referent relationship, although it is by no means sufficient. The recognition site or substrate binding site is not related to the catalytic site simply by dynamical or statistical laws. Rather, it requires the particular rate-independent constraints of the folded polymer to determine what molecules are recognized and what bonds are catalyzed. Given this type of specific catalyst, a true code relationship between structure and dynamics is logically possible, along with self-replication and Darwinian natural selection. The question is, can we predict some general organizational consequences of this complementary view of dynamics and information? We expect all stable dynamical behavior to be largely autonomous; that is, since stable dynamics generate no alternatives, there is no need for informational control except in constraining internal boundary conditions.

Of course, in a stable dynamical regime there still are fluctuations (variations) in these boundary conditions, e.g., modifications of protein sequences with or without selection. However, if the limits of dynamical stability are exceeded through excessive competition or new interactions, the informational constraints become dominant in choosing a new stable dynamical structure. This should result in a sudden, large phenotypic change incommensurate with any structural information measure that could be observed, and it should produce two distinguishable patterns of genotypic and phenotypic change. Under stable phenotypic dynamics there may be observable structural changes in the gene with little corresponding phenotypic change. This would superficially appear as selective neutrality. However, under unstable phenotypic dynamics, a major evolutionary change, like speciation, could result from only minor genetic change. This situation would of course appear to be even more complex, because dynamical instabilities will evolve at all levels of the organizational hierarchy (see Gould, Chapter 6, this volume). A second consequence of this complementary view of self-organization may appear in our approach to the evolution of symbol systems themselves. Since symbolic information at all levels, from nucleic acids to natural languages, is so obviously effective for the instruction and control of dynamical systems, we usually jump to the conclusion that this capability is an intrinsic and autonomous property of the symbol system alone. From our view of the symbol-matter relationship, however, there is virtually no meaning to symbols outside the context of the complex dynamical organization around which the symbolic constraints have evolved. It is useless to search for the meaning in symbol strings without the complementary knowledge of the dynamic context, especially since the symbolic constraints are most significant near dynamical instabilities. From this point of view the relationship of the present design of computers and their programming languages represents a bizarre extreme. The hardware is designed to have no stable dynamics at all. The gates and memories are a dense maze of instabilities that require explicit, detailed programs of informational constraints for every state transition.

Programming languages therefore have no natural grammars. Machine language is conceptually vacuous, whereas high-level languages that try to mimic natural languages are generally compiled or translated to machine code only at the cost of speed and efficiency. The benefit of this design, of course, is universality of computation. The central epistemological problem of computer simulations of systems as complex as the brain is distinguishing the part of the computer that simulates the brain's dynamics from the part that simulates the brain's information, since in the universal computer all dynamics are simulated by information. This raises the deeper question of whether or not we can determine if the brain itself uses a dynamical mode for its representations or if all knowledge is restricted to informational constraints.

Epistemological Limitations

The conclusion that it is useless to search for meaning in symbols without complementary knowledge of the dynamics being constrained by the symbols raises the classical issue of what it would mean to "know" the dynamics. We can mention only briefly here two major schools of thought: that of the information processors, who believe that to know the dynamics means representing the dynamics with yet another symbol system (e.g., Newell and Simon, 1972), and that of the subjectivists, who believe that to know the dynamics is tacit or ineffable, i.e., that it cannot be represented by any symbol system (e.g., Polanyi, 1958). In light of these opposing concepts of knowing, we may reconsider the question of whether symbols can be epistemologically reduced to dynamics, i.e., whether codes can be derived from physical laws. To the information processors the reduction of symbols to dynamics is not logically possible, since to them, knowing means reducing dynamics to symbols. To a subjectivist the reduction of symbols to dynamics would place knowing entirely in the realm of the ineffable, which, although acceptable epistemologically, is methodologically impotent.

I do not believe that any of these arguments, including my own, are likely to be convincing at the level of the nervous system, because of the complexity of the hierarchical organization of both its dynamical and its symbolic modes. There is no question that information processing at many coded symbolic levels goes on in the brain. It is also obvious that much of what goes on in the brain is unconscious and inaccessible to objective analysis. However, it is largely because of these difficulties that studying the relationship of symbol systems to dynamics at the molecular level may uncover useful concepts and organizational principles. At least at this level we can say that the symbolic instructions of the gene go only as far as the primary sequence of the proteins. From there, thus constrained, the dynamical laws take over. To the gene, these dynamics are ineffable.

References

Bohr, N. (1928) The Como lecture. Reprinted in Nature 121:580.
Bohr, N. (1963) On Atomic Physics and Human Knowledge. Wiley-Interscience, New York.
d'Espagnat, B. (1976) Conceptual Foundations of Quantum Mechanics. Benjamin, New York.
Eigen, M., and P. Schuster (1977) The hypercycle: A principle of natural self-organization. Naturwissenschaften 64:541-565; 65:7-41, 341-369.
Hertz, H. (1894) Die Principien der Mechanik in neuem Zusammenhange dargestellt. Leipzig. English translation: The Principles of Mechanics, 1899; reprinted by Dover, New York, 1956.
Jammer, M. (1974) The Philosophy of Quantum Mechanics. Wiley, New York.
Laplace, P. S. (1951) A Philosophical Essay on Probabilities. Dover, New York.
London, F. (1961) Superfluids, 2nd ed., Vol. I. Dover, New York, p. 8.
Misra, B. (1978) Nonequilibrium entropy, Lyapounov variables, and ergodic properties of classical systems. Proc. Natl. Acad. Sci. USA 75:1627-1631.
Newell, A., and H. A. Simon (1972) Human Problem Solving. Prentice-Hall, Englewood Cliffs, N.J.
Pattee, H. H. (1968) The physical basis of coding and reliability in biological evolution. In: Towards a Theoretical Biology, Vol. 1, C. H. Waddington (ed.). Univ. of Edinburgh Press, Edinburgh, pp. 69-93.
Pattee, H. H. (1971) Physical theories of biological coordination. Q. Rev. Biophys. 4:255-276.
Pattee, H. H. (1979) The complementarity principle and the origin of macromolecular information. BioSystems 11:217-226.
Piaget, J. (1927) The Child's Conception of Physical Causality. Routledge & Kegan Paul, London.
Planck, M. (1960) A Survey of Physical Theory. Dover, New York, p. 64.
Poincare, H. (1952) Science and Method. F. Maitland (trans.). Dover, New York.
Polanyi, M. (1958) Personal Knowledge. Routledge & Kegan Paul, London.
Prigogine, I. (1978) Time, structure and fluctuations. Science 201:777-785.
Prigogine, I. (1979) Discussion in: A Question of Physics, P. Buckley and D. Peat (eds.). University of Toronto Press, Toronto, p. 74.
Schrödinger, E. (1944) What Is Life? Cambridge University Press, London.
Thom, R. (1975) Structural Stability and Morphogenesis. Benjamin, New York.
von Neumann, J. (1966) Theory of Self-Reproducing Automata. Edited and completed by A. W. Burks. University of Illinois Press, Urbana.
Wigner, E. P. (1964) Events, laws and invariance principles. Science 145:995-999.