@device[postscript]
@style[fontfamily = timesroman]
@style[size 12]
@Comment{@style[indent 5 characters]}
@Style[spread 1]
@style[spacing 1.3]
@majorheading[Statement of Objectives]
@pageheading[left = "Statement of Objectives", right = "Page @value(page)"]
@pagefooting[center = "Leonard N. Foner, 187 Arsenal St, Watertown, MA 02172, 617/923-8113", Immediate]

Many systems display behavior which is much more complicated than the individual behaviors of their parts would indicate. Such @i[emergent behavior] is seen everywhere in living systems, as when the simple rules followed by individual birds in a flock lead to apparently complex behavior of the flock as a whole. Emergent behaviors are also common in nonliving systems, as demonstrated by systems as simple as Conway's Game of Life and other cellular automata, in which trivial rules lead to behaviors as complex as gliders and self-reproducing patterns, or by the spectacular visual patterns of fractals generated from simple recursion relations. My work to date has often been, indirectly, the study of such emergent behaviors in computational systems, and I would like to study such behaviors directly, to advance the state of understanding of complex systems. Much of current engineering involves accidentally stumbling upon emergent properties of large systems and then attempting to eradicate them; I believe that many large systems of the future will instead depend critically on emergent properties for their behavior. This therefore seems a subject whose better understanding may help advance many disparate fields. To be more specific, I'll list some examples of what I mean, how this relates to my prior work, and what I want to do next.

Take computer networking, in the sense of protocol design for local- and wide-area loosely coupled computing.
I've studied network protocols for many years, usually from the perspective of implementing or designing protocols that remain stable even in the face of emergent behaviors unknown in advance. Such networks display emergent properties at every level of study. At the lowest levels of packet routing, apparently simple algorithms lead to gross nonlinearities, producing such famous misbehaviors as the "oscillating ARPAnet" and the "IMP virus" ARPAnet crash of the early 1980s. At the very highest level, that of users participating in the joint "computation" of the network, we see emergent properties that are more properly sociology than computer science, such as the peculiarly different etiquette employed in electronic mail (indeed, entire books have been written about the social impact---read emergent properties---of a similar invention, the telephone). Nontraditional computers of any sort often display these sorts of emergent behaviors as well; we have merely gotten used to those displayed by "conventional" computers. For example, as an undergraduate I did research for Professor Knight on the interaction of small objects and capability architectures, based on work by Gehringer, culminating in a 50-page technical note. It described a system able both to represent the numerous small objects typically required by object-oriented programming, without the enormous efficiency costs usually associated with the tagged structure of objects in a capability machine, and to cope with the other emergent property (page thrashing) usually associated with garbage collecting a large virtual address space (based on work in a similar environment, the Lisp Machine).
Massively parallel computers, such as Arvind's Dataflow architecture, Dally's Jellybean architecture, and the Connection Machine, display similar nonobvious properties arising from the interactions of the simple rules of their parts, with such problems typically manifesting themselves in poor performance due to unexpected congestion in their internal routing networks. Finally, many traditional rule-based expert systems display emergent properties, as rules interact in unanticipated ways and lead to incorrect conclusions. My thesis work on network diagnosis with Professor Davis, using a model-based expert system instead, was in part an effort to describe a system using a more robust representation that did not suffer from such emergent properties, because each hierarchical level of the system was small enough that the possible interactions among its parts were controllable. (In essence, the model-based approach uses standard hierarchical decomposition of the knowledge base to avoid the otherwise-unpredictable emergent behavior of a flock of coequal rules.) Each of the examples above illustrates complexity leading to incorrect behavior. Much of current software engineering focuses on such emergent behavior (the traditional rule of thumb that increasing the scale of anything by an order of magnitude buys you novel problems certainly applies here), though it focuses on such behavior merely to eliminate it, by specifying methodologies (such as layers of abstraction and information hiding) that seek to keep the system behavior "linear" and predictable, effectively by allowing superposition to be used in analyzing its behavior. Such techniques are limited in the complexity of system behavior they can adequately describe---they essentially require global descriptions of global state. Systems in which local descriptions give rise to the global state often permit far more parsimonious descriptions, as in the generation of "intelligent" or "self-reproducing" behavior.
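The Game of Life mentioned earlier makes this parsimony concrete: a purely local rule, applied uniformly, yields a coherent global object (the glider) that propagates across the grid. A minimal sketch of my own, not drawn from any of the work cited here:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The entire local rule: a cell lives next generation if it has
    # exactly 3 live neighbors, or 2 if it is already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After four generations the glider has reproduced itself,
# shifted one cell diagonally.
```

The two-line rule says nothing about motion or shape, yet a moving, self-reproducing structure emerges; describing the glider "globally" would take far more machinery than the rule that produces it.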
Systems in which the interaction of very simple components is deliberately nonlinear, and which hence do not obey the superposition principle, are much harder to analyze using traditional formal methods, making their behavior less predictable. On the other hand, they can generate much more complex behaviors with far less description than traditional techniques. Some examples will make this clear. Brooks' subsumption architecture for generating robotic behavior, and Minsky's Society of Mind for generating human behavior, both attempt to use simple, essentially trivial local behaviors to generate complex global behaviors. The richness of the resulting behavior, and the success of, for example, the subsumption architecture in creating robotic behavior that is both more robust and more "intelligent"-appearing than traditional control produces, argue that the emergent properties of many simple pieces interacting in deliberately nonlinear ways have great potential. Indeed, the sort of robotics I had been exposed to as an undergraduate, seemingly all inverse kinematics and how to guide an end effector through a tight maze, was unexciting to me and completely unappealing, whereas the subsumption architecture has revived my interest in robots and other "creatures of external reality." Since the exact emergent behavior of many nonlinear components is presumed to be a priori unpredictable, generating a system with a particular desired global behavior involves working backwards somehow from the intended final behavior to the local rules, essentially a search problem. The recently formed field of Artificial Life often approaches the problem by appealing to the same robust search techniques used by biological life: genetic algorithms.
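That working-backwards search can be sketched in a few lines. The following is a deliberately tiny genetic algorithm of my own devising; the "one-max" objective (maximize the number of 1 bits) stands in for a real desired global behavior, and every detail here is a toy illustration rather than any existing system:

```python
import random

def onemax(genome):
    """Toy stand-in for a desired global behavior: count of 1 bits."""
    return sum(genome)

def evolve(n_bits=16, pop_size=20, generations=40, seed=0):
    """Search backwards from the objective to good genomes by
    selection, crossover, and mutation.  Returns the best genome
    found and the best fitness seen at each generation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=onemax, reverse=True)
        history.append(onemax(pop[0]))
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)    # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_bits)] ^= 1   # flip one bit (mutation)
            children.append(child)
        pop = parents + children              # parents survive (elitism)
    pop.sort(key=onemax, reverse=True)
    return pop[0], history
```

Nothing in the algorithm encodes how to reach the objective; because survivors are kept each generation, the best fitness never decreases, and good local structure accumulates until the global target is approached.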
Such evolutionary techniques have surprisingly broad applicability; for example, in my spare time, I've been implementing an intelligent Internet "clipping service" that uses genetic algorithms, evolution, and other Artificial Life techniques to quickly evolve descriptions of those topics and messages I might like to read, discarding the rest without ever showing them to me. This is quite a different sort of application than either the subsumption architecture or Society of Mind, but it is amenable to the same sort of attack. This approach to building complex systems is not well explored analytically, partly because such systems are synthesized rather than structured to be analyzed in the first place. Consequently, there are many unanswered questions in the field: When can we reasonably expect to see emergent behaviors from the interaction of a system's parts? How sensitive is the generation of emergent behaviors to increasing the scale of a system? Can we sensibly bound or envelop the possible types of emergent behaviors even before they emerge, and hence be prepared for a range of behaviors even when we know we are unlikely to be able to analyze the resulting emergent behaviors a priori? What sorts of systems are most amenable to solution by deliberate generation of a priori unpredictable emergent behaviors? These questions, and a host of similar ones, have not been well studied, yet they could have a dramatic effect on issues in computer science as diverse as risk and safety analysis of software control systems and the generation of intelligent behavior. Such questions are unlikely to be answerable in a vacuum. In particular, emergent behavior is an inherently "bottom up" phenomenon, making it questionable whether a "top down," general analytical approach to these questions will yield useful results.
Thus, I aim to study such behaviors in the context of real systems that do useful work, for example, network protocols or the subsumption architecture. Such "useful systems" are invariably scaled up beyond their original design goals, or have more and more relatively simple pieces added to them, predictably introducing unexpected emergent behaviors that are typically met with engineering fixes to suppress the undesired additional behavior. From my point of view, this unfortunate trend is in fact a feature, not a bug: I intend to use applications such as these, deployed in the real world, as a free testbed from which to do research. Rather than striving to create a system in which all emergent behaviors have been eliminated (something of a fruitless endeavor in many systems, since it is not yet known how to predict when new emergent behaviors may occur or what forms they may take), I intend to study the creation of such emergent behaviors and how they can be turned to advantage. If possible, I would like to answer many of the questions I posed above (and many others which space does not permit), to yield a more coherent and inclusive picture of how complex systems are formed, in what circumstances deliberately nonlinear behaviors are useful despite their inherent unpredictability, and how to apply this knowledge either in building deliberately emergent systems or, the more traditional case, deliberately "non-emergent" ones. This approach will presumably be a rather interdisciplinary one, possibly taking advantage of work by Minsky and Brooks in their emergent-behavior paradigms, the work done by various members of the Media Lab in Artificial Life, and work in LCS by Clark and others in network protocol design. The unique mix of research at MIT concerned with emergent behavior makes it the perfect environment for this sort of investigation.