
Swarm Ontology:

Particles and Emergence

by Graham Harman

The term “swarm intelligence” seems to have been introduced in June 1989, in a conference paper by Gerardo Beni and Jing Wang.

[1]

As is now widely known, it refers to the way that simple entities following basic rules can generate complex large-scale results. This has implications for the study not only of society and intelligence, but of part-whole relationships more generally (“mereology” is the branch of philosophy that studies such relationships).

[2]

But before entering further into swarm intelligence, we should introduce two related themes. The first is the age-old tension between the continuous and the discrete, which still rears its head at the center of countless disciplines. The second is the theme of emergence, referring to the ubiquitous phenomenon in which mid-sized to large-scale entities are composed of parts, yet are somehow more than a mere sum of those parts.

The Continuous and the Discrete

My recently published book Waves and Stones considers the duality, found in nearly every field of knowledge, of the continuous and the discrete.

[3]

One famous case occurs in evolutionary biology. According to Charles Darwin’s model, the evolution of species ought to happen gradually, with slight divergences in offspring leading to minor advantages in fitness, so that over the course of millions of years we should see an arms race in which both predators and prey grow generally faster, cleverer, or better camouflaged.

[4]

If this were the case, then the fossil record should display slow changes occurring over vast periods of time. Yet this is not what the paleontologist finds. As Niles Eldredge and Stephen Jay Gould noted in their pathbreaking 1972 article “Punctuated Equilibria,” what the fossil record actually shows is long periods of stability in species, with occasional jumps (“saltations”) between markedly different life forms.

[5]

They conclude that Darwinian “phyletic gradualism,” as they call it, has been dogmatically imposed upon the field on the basis of limited evidence. Leaning on Ernst Mayr’s influential theory that speciation occurs rapidly when one part of a population becomes isolated from the main branch, they also suggest that sudden environmental change is a major trigger for the evolution of species.

[6]

A classic example would be the asteroid that probably struck the Yucatán Peninsula some 66 million years ago, creating the Chicxulub crater. While catastrophic for the dinosaurs it wiped out, this event opened a wide evolutionary window for the mammals and birds that quickly filled the vacated ecological niches. Aside from Eldredge and Gould, Lynn Margulis proposed a different mechanism for discrete rather than continuous evolution.

[7]

As she has it, evolution is driven by the symbiosis of previously separate creatures into more complex living entities: the mitochondria found in most cells (and the chloroplasts in plants) came originally from outside the organism. Since such invasions would have to occur intermittently and suddenly, the symbiosis theory also speaks against Darwinian gradualism.

Another scientific example comes from the ongoing deadlock between General relativity and quantum theory, the former (thanks to Einstein) providing our best theory of gravity and the latter (through the work of Planck, Bohr, and others) giving a unified explanation of electromagnetism along with the strong and weak nuclear forces. These two pillars of the past century of physics, both so powerful in their own domains, remain maddeningly incompatible. As the physicist Sabine Hossenfelder clearly puts it: “The origin of the contradiction is that General relativity is not a quantum theory but nevertheless must react to matter and radiation, which have quantum properties.”

[8]

For example, quantum theory tells us that the position of any particle can only be known with probability, not with certainty. General relativity tells us that a particle bends space-time with its mass. The problem resulting from the two statements is this: if we don’t know exactly where a particle is, we can’t know exactly which part of space-time is bent. But General relativity is a classical theory, and does not allow for such uncertainty; the curvature of space-time has an exact value at every point. Here we have another classic tension between quantum theory’s model of nature as made up of tiny discrete chunks, and General relativity as a model of the gradual curvature of space-time by mass. Oddly, most accounts of contemporary physics (even those written by eminent physicists) give a different and somewhat misleading explanation. Consider the following passage from the British newspaper The Guardian: “Physicists agree that the theory of quantum mechanics applies to very tiny particles, and Einstein’s theory of General relativity applies to larger objects.”

[9]

While this may be a good rule of thumb, it is not strictly true. The central difficulty of finding a theory of “quantum gravity” able to unify the two poles of physics is the conundrum of bridging the gap not between the large and the small, but between the continuous and the discrete.

[10]

But in some ways, it would be too neutral to call the search for quantum gravity a bridging operation, as if it were a symmetrical attempt to do equal justice to both sides. After all, the very phrase “quantum gravity” implies that the continuous space-time gradation of Einstein’s theory ought to be rendered in a language of discreteness. That would effectively turn quantum theory into the master language of physics, with General relativity becoming a derivative phenomenon of an underlying quantum world. While this accurately mirrors the preference of a sizable majority of working physicists, a reverse approach is possible. Consider the words of leading relativist Roger Penrose, who counters that “the case for ‘gravitizing’ quantum theory is at least as strong as that for quantizing gravity.”

[11]

Alongside this question of which theory deserves priority, the idea has recently emerged that perhaps the two theories are non-unifiable and do not need to be unified. This was proposed in 2023 by the physicist Jonathan Oppenheim at University College London, although certain philosophers of science had already been musing about the possibility.

[12]

In fact, such a two-headed approach was laid out already in ancient philosophy by no less a figure than Aristotle. For while his Physics is devoted to arguing for the continuous character of number, time, space, and motion, his Metaphysics lays stress instead on the discreteness of both individual substance and qualitative change.

[13]

If we imagine a professor lecturing a class full of students, the instants of time in the lecture and the units of space in the classroom can be cut up however we please. It is just as correct to say that the lecture lasted for seventeen or three million units of time as to say that it lasted for a single hour; one can also say with equal justice that the room has ten parts, or seventy, or 13,542. But the number of people in the room is not divisible in any way we want: it is some exact number, which shows that individual entities are discrete rather than continuous.

Despite the ancient example of Aristotle and the more recent one of Oppenheim, it has usually been true that when faced with the deadlock between the continuous and the discrete, most people will try to reduce one to the other. In medieval Islam and early modern Europe, the tendency was to reduce the continuous to the discrete, in a philosophy known as “occasionalism.” The occasionalists hold that objects are so radically discrete, so completely cut off from each other, that only God could serve as a causal bridge between them.

[14]

Recently it has been more popular to take the opposite tack and say that reality itself is a molten, turbulent, unified flux, and that individual things emerge only as byproducts of this more primal whole. In the words of Jane Bennett: “One would then understand ‘objects’ to be those swirls of matter, energy, and incipience that hold themselves together long enough to vie with the strivings of other objects, including the indeterminate momentum of the throbbing whole.”

[15]

Versions of this theory date back as far as archaic Greece. In the remaining fragments of such pre-Socratic thinkers as Anaximander, we read that what is real are not individual entities, but a formless cosmic lump known in Greek as the apeiron.

Another recent debate between these two extreme options took place in architecture. In 1988, New York’s Museum of Modern Art (MoMA) held a show on the rising architectural movement known as Deconstructivism. It was an architecture of disruption, one based on gaps, cracks, and discontinuities in what used to be familiar everyday wholes. As curators Philip Johnson and Mark Wigley described some of the buildings: “In one project, towers are turned over on their sides, while in others, bridges are tilted up to become towers, underground elements erupt from the earth and float above the surface, or commonplace materials become suddenly exotic.”

[16]

Daniel Libeskind’s unnervingly angled Jewish Museum Berlin (opened in 2001) is a fine exemplar of this style. But as early as 1993, we see a sudden change of mood in the profession. In that year the young American architect Greg Lynn called instead for an “alternative smoothness,” an “intensive integration of differences within a continuous yet heterogeneous system” featuring “smooth mixtures” in which individual elements are “blended within a continuous field of other free elements.”

[17]

Although Lynn’s use of the phrase “heterogeneous yet continuous” makes it look as if he were striking a delicate balance between the discrete and the continuous, his writings, like his practice, always give smoothness the upper hand. This becomes especially clear in his championing of the “blob” as an ideal architectural form, given its readiness for “fluid and continuous differentiation.”

[18]

Stated differently, Lynn’s architecture is all Einstein, no quantum theory. But architecture, like every discipline, needs to absorb Aristotle’s lesson that the discrete and the continuous both have a place in the world. An architecture of pure continuity would not be able to have windows, doors, and bathrooms in specific places differing from others; likewise, an architecture of pure discreteness would have no continuous floor or hallway space enabling us to move around the building.

Emergence

By now the reader might wonder what the continuous and the discrete have to do with swarm intelligence, or even with swarms at all. The connection with swarms is made for us by another architect: Stan Allen, former Dean of the Princeton University School of Architecture. In an important article of 1997, Allen argues that architecture is undergoing a shift from objects to fields: a claim that was no doubt true at the time.

[19]

An individual element in architecture, he holds, should be thought of “not as a demarcated object but as an effect emerging from the field itself—as moments of intensity, as peaks or valleys within a continuous field.”

[20]

This is why he prefers the numerous evenly spaced columns of Córdoba’s former Umayyad mosque (which he describes as “an undifferentiated but highly charged field”) to the iconic and functionally differentiated St. Peter’s in Rome.

[21]

All of this makes Allen sound like the perfect ally of Lynn and smoothness. But his article soon takes a surprising turn that has no obvious analogue in Lynn’s work. Namely, Allen tries to suggest that fields are actually generated by countless small individuals. This happens when he refers to a computer program developed by Craig Reynolds to simulate the flocking of birds: or rather, “boids,” in a whimsical reference to the stereotypical Brooklyn accent.

[22]

As Allen reports, each boid in the program simply followed three rules:

  1. Each boid had to maintain a minimum distance from the others.
  2. Each had to match the speed of the other boids near it.
  3. Each also had to “move toward the perceived center of mass of [boids] in its neighborhood.”
[23]

Simply by following these rules, the boids formed a flock, even though no individual boid had any wish to do so. This is clearly reminiscent of the “cellular automata” known from Conway’s Game of Life and studied at length by the likes of Stephen Wolfram.

[24]

By now, the reading public is well aware of how larger patterns can be generated by purely local constraints.
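To make the mechanism concrete, here is a minimal sketch of the three rules in Python. The neighborhood radius, the weights, and the NumPy implementation are my own illustrative assumptions rather than Reynolds’s original code; only the three steering rules follow the description given above.

```python
import numpy as np

# Minimal boids sketch after Reynolds's three rules. All parameter values
# are illustrative assumptions, not taken from the original program.
N, STEPS = 50, 200
MIN_DIST, NEIGHBOR_RADIUS = 1.0, 5.0

rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, size=(N, 2))   # starting positions in a 2D plane
vel = rng.uniform(-1, 1, size=(N, 2))   # starting velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbors = (dists > 0) & (dists < NEIGHBOR_RADIUS)
        if not neighbors.any():
            continue
        # Rule 1: keep a minimum distance from nearby boids (separation).
        too_close = (dists > 0) & (dists < MIN_DIST)
        separation = -offsets[too_close].sum(axis=0) if too_close.any() else 0.0
        # Rule 2: match the velocity of the boids in the neighborhood (alignment).
        alignment = vel[neighbors].mean(axis=0) - vel[i]
        # Rule 3: move toward the perceived center of mass of the neighbors (cohesion).
        cohesion = pos[neighbors].mean(axis=0) - pos[i]
        new_vel[i] += 0.05 * separation + 0.05 * alignment + 0.01 * cohesion
    return pos + new_vel, new_vel

for _ in range(STEPS):
    pos, vel = step(pos, vel)

# No boid "wants" a flock, yet the positions tend to cluster as the run proceeds.
print("spread of positions:", pos.std(axis=0))
```

Run long enough, even a toy of this kind tends to produce clumping and roughly aligned motion, though no boid ever “decides” to form a flock.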

But what seems to escape Allen’s notice is that there is no way to generate a continuous field from local elements. Consider the number line, a textbook example of a continuum. How many numbers are there between 0 and 100? The answer is neither 100 nor 99, since we can count by halves or tenths or hundredths just as easily as by integers. The answer, of course, is that there is no definite number of numbers between 0 and 100. We can cut up this interval however we please; the numbers on that stretch of the line (or any other) are literally infinite. That is precisely what makes the number line a continuum. This means that the flock of boids example makes a bad fit with Allen’s model of architectural fields. For it would literally take an infinite number of birds to create a truly continuous flock or field. Instead of leading to the concept of a continuous field, the boids point us toward what is actually a discrete phenomenon, one that philosophers call “emergence.”

[25]

Typically, emergence is defined as a situation in which a whole is greater than the sum of its parts. It is a phenomenon seen more often in chemistry than in physics. In physics, if we take all the velocity vectors bearing on an object (such as an airplane’s own velocity and that of the wind pushing it obliquely), we can simply add them up to reach the final velocity.

[26]
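As a minimal worked illustration (the numbers are my own, chosen only for the example):

```latex
% Illustrative figures only: an airplane's own velocity plus an oblique wind.
\vec{v}_{\text{plane}} = (200,\, 0)\ \text{km/h}, \qquad
\vec{v}_{\text{wind}} = (0,\, 50)\ \text{km/h}
\quad\Longrightarrow\quad
\vec{v}_{\text{result}} = \vec{v}_{\text{plane}} + \vec{v}_{\text{wind}}
                        = (200,\, 50)\ \text{km/h},
\qquad
\lVert \vec{v}_{\text{result}} \rVert = \sqrt{200^2 + 50^2} \approx 206\ \text{km/h}.
```

Nothing new appears in the sum; the resultant is just its components combined.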

This does not always happen in chemistry.

[27]

In the classic example, two hydrogen atoms and an oxygen atom do not simply add up to a collection of two hydrogens and one oxygen. Instead, they yield H2O, which has properties not found in either element taken alone: such as the ability to quench fire, whereas hydrogen and oxygen paradoxically intensify fire. There need not be anything “mysterious” or “inexplicable” about the part-whole relation, as some early theorists of emergence mistakenly thought.

[28]

Quantum chemistry is perfectly capable of explaining why water is something more than its components. The theory of emergence is not about knowledge, but about reality itself: when two or more entities enter into combination, sometimes a novel entity is produced. Along with having new emergent properties, it is often the case that an emergent entity also has retroactive effects on its components (to move to Los Angeles is to adopt a pre-existent Angeleno lifestyle), is capable of generating new parts (Los Angeles often establishes new government departments), and can lose or replace parts without becoming something altogether different (some residents of Los Angeles die every year and new ones are born or move in).

[29]

To summarize, when it comes to swarms we are dealing with a question of emergence, not of continuous fields. The boids in a flock do not blend together seamlessly and smoothly: if they appear to do so, this is merely an artifact of the low resolution of the human eye. What happens instead is that rule-following individual boids join to build something that is more than the sum of their parts, meaning something different in kind from any individual boid. New boids can join the flock at any time, while others may fall out or drop dead from the sky, and none of this would change the flock into a different flock. This is exactly what emergence means: the whole is neither a sum of individuals, nor a “bundle of qualities” as the philosopher David Hume would claim.

[30]

Swarm Intelligence

James Kennedy and Russell C. Eberhart’s book Swarm Intelligence is no longer recent, having been published in 2001. But like classic works in any field, it does a fine job of capturing the major issues connected with its topic.

[31]

Among other things, the authors claim that macro-scale phenomena such as society and culture can be produced by “particles” (individual humans, in this case) following local rules. At first glance, this might seem like the most abject form of reductionism, as if cities and institutions were really just epiphenomena of individual decisions. Capitalism and liberal democracy are often criticized from the Left for just this reason, with frequent derision aimed at Margaret Thatcher’s infamous statement that society doesn’t exist: “[W]ho is society? There is no such thing! There are individual men and women and there are families…”

[32]

But unlike Thatcher, Kennedy and Eberhart are committed to emergence. In one passage they treat insect sociality as a textbook example of higher-level emergence, though a bit later they treat ant behavior as random but schools of fish as emergent.

[33]

Slime molds are perhaps an even better example, given how primitive their individual elements are compared with both ants and fish, and how remarkable their collective behavior when joined in a colony.

[34]

The philosopher Steven Shaviro has considered the problem of slime molds at some length in his wonderful book Discognition.

[35]

That said, there are times when I find their defense of emergence well-meaning but inadequate. I speak of the following passage:

It may be true that the weather is in fact a system of moving molecules, but forecasting must be based on molar patterns of air masses [...] Human conduct may one day be explained in terms of neural firings and the organization of the brain, but it will never be understood in those terms, just as the weather will never be understood by examining gas molecules.
[36]

With their distinction between explanation and understanding, the authors make the mistaken concession that weather really is made up of molecules, and merely that human understanding cannot easily process that level of analysis. They should have made the stronger case that weather is simply not made of molecules. The reason is that not all molecular movements are relevant to the higher-level phenomenon we know as weather; a large number of such movements can go in one direction or the other without affecting anything above them at all. As an analogy, consider a large military operation such as the D-Day invasion of Normandy. If we map the battles of that invasion in terms of abstract symbols for the main armies involved, this is not simply because we lack space on the map for each individual soldier. One of the interesting things about emergence is that it renders much of its backstory irrelevant. Once a number of soldiers coalesce into an army, it no longer matters to the battle itself when some of the soldiers are struck down and replaced by reinforcements. Just imagine how much less relevant the individual bodily organs, blood cells, and molecules of each soldier’s body are. The death of a soldier is a tragedy for family, friends, and colleagues, but with respect to the emergent Battle of D-Day it is not just irrelevant for clarifying the battle to listeners: it is ontologically irrelevant to the battle itself, since on some level one soldier is as good as another, and at times one soldier matters no more than empty space.

Even so, Kennedy and Eberhart are true champions of emergence. The philosopher Andy Clark is now well known for his “extended mind” thesis, which holds that cognition reaches beyond the brain and even beyond the body.

[37]

The same holds for the authors of Swarm Intelligence, who begin their first chapter by announcing that “we do not subscribe to the view of mind as equivalent to brain, as a private internal process, as some set of mechanistic dynamics, and we deemphasize the autonomy of the individual thinker.”

[38]

Indeed, the entire thrust of the book is to argue that the identity of the self is by no means self-contained, but arises from a social network.

[39][40]

As the philosopher Wilfrid Sellars argues, we do not even have direct access to our own thoughts. Society’s “power and accomplishments far exceed the sum of the parts,” and the resulting whole is able to shape its individual members retroactively: the authors consider the vastly different styles of reasoning that social scientists have found between China and the United States.

[41]

We think most effectively not in a vacuum of individual privacy, but through society, by the principle of “imitating our betters.”

[42]

When thinking is distributed over an entire group or culture the results are usually pretty good, despite the ever-present danger of mediocre groupthink.

[43]

Just as the autonomy of the individual is threatened by the supra-individual level of society, the authors are also alert to our dependence on the sub-individual realm. Here they refer not only to Margulis’s serial endosymbiosis theory, but also to the related speculation of Lewis Thomas that mitochondria and chloroplasts are the real heroes of biology, with larger organisms acting as nothing more than vehicles for them: much like Richard Dawkins’s views on the “selfish gene” a few years later.

[44]

But rather than adopting a reductionist view of the part-whole relationship, Kennedy and Eberhart are more concerned with the way sub-individual components generate emergent wholes. This is what swarm intelligence is all about. As the authors describe it, “particle swarm optimization utilizes a ‘population’ of candidate solutions to evolve an optimal or near-optimal solution to a problem.” The way this is done is that “population members, called particles, are flown through the problem hyperspace.”

[45]

The authors rightly link this to pre-existing work on cellular automata, since the idea is to reach large-scale results on the basis of tiny elements.

[46]

They get very specific, reporting that somewhere between ten and fifty particles is the best size for a swarm.

[47]
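A minimal sketch of the procedure in Python may help, on the understanding that the inertia and acceleration constants and the test function below are my own illustrative choices rather than Kennedy and Eberhart’s published settings; only the general scheme follows their description: a population of particles flies through the search space, each remembering its personal best position while being tugged toward the best position the swarm as a whole has found. The swarm size of thirty falls within the ten-to-fifty range they recommend.

```python
import numpy as np

# Minimal particle swarm optimization sketch. The Rastrigin function stands in
# for "the problem hyperspace"; its many local hills also illustrate why merely
# climbing the nearest gradient can strand a search on a minor summit.
def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

rng = np.random.default_rng(1)
n_particles, n_dims, n_steps = 30, 5, 500   # 30 particles: within the 10-50 range
w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, and social weights

pos = rng.uniform(-5.12, 5.12, size=(n_particles, n_dims))
vel = np.zeros_like(pos)
pbest_pos, pbest_val = pos.copy(), rastrigin(pos)   # each particle's own best
gbest_pos = pbest_pos[pbest_val.argmin()]           # the swarm's best so far

for _ in range(n_steps):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Each particle keeps some momentum, remembers its own best position,
    # and is drawn toward the best position found by the swarm as a whole.
    vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (gbest_pos - pos)
    pos = pos + vel
    vals = rastrigin(pos)
    improved = vals < pbest_val
    pbest_pos[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest_pos = pbest_pos[pbest_val.argmin()]

print("best value found by the swarm:", pbest_val.min())
```

No single particle performs the search; the solution, such as it is, belongs to the population.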

They are also not shy about reporting success, as when they were able to solve a problem concerning electric car batteries in an average of 2.2 minutes compared with the previous 3.5 hours.

[48]

They cover a number of strategies for optimizing swarms, and are especially careful to note the risk of any strategy that merely follows a gradient of improving results, since climbing a hill to the top is no guarantee that the hill is especially tall by comparison with other hills on the landscape.

[49]

In any case, the results speak in favor of a social rather than individual model of intelligence, for “interactions among population members result in problem-solving intensity greater than the sum of individuals’ solitary efforts.”

[50]

To some extent they even apply this lesson to evolutionary theory. Despite their full-throated defense of natural selection against Biblical creationism, they end up expressing a degree of scepticism towards selection on the level of individuals.

[51]

Here they might have called on the aid of Stephen Jay Gould, a prominent defender of group selection in evolutionary theory.

[52]

If there is anything troubling about Kennedy and Eberhart’s approach, it may be their tendency to consider just two levels: individuals on the one hand, and society as a whole on the other. This neglects the existence of intermediate emergent layers at a smaller scale than society as a whole. After all, individuals are also shaped by the force of families, clubs, universities, banks, and frequented chatrooms. The authors’ discomfort with intermediate levels appears in their disagreement with the ideas of Marvin Minsky and Ernest Hilgard, even if these disagreements have other motives. Minsky claims “that mind is composed of a ‘society’ of specialized functions or ‘agents’ that operate more or less independently of one another.”

[53]

Kennedy and Eberhart call this “exactly the kind of reified fiction that scientific psychology rejects,” though their reasons for rejecting autonomous mental components are obscured by their other disagreements with Minsky. These include a distaste for his generally reductionist attitude, his excessive focus on the individual mind that emerges from the various mental components, and the lack of room for society in his model of cognition. This parallels their rejection of Hilgard’s treatment of autonomous cognitive processes inside the mind, which they follow Nicholas Spanos in re-interpreting as fictitious theoretical artifacts.

[54]

Here Kennedy and Eberhart make some observations that are interesting, but whose ultimate relevance strikes me as dubious. For one thing, they say, “individuals in a swarm are relatively homogeneous, while ‘society of mind’ or subsumption models are assigned to specialized tasks.”

[55]

Perhaps their complaint is that a model of specialized components gives too much power to an overarching organic whole and not enough to the components themselves, though they never quite spell it out that clearly. They do imply that in the autonomous agents model “the system’s behavior is more nearly a sum of the agent’s contributions” rather than a truly emergent phenomenon, though no compelling argument is given for why this would be so.

[56]

The result, at any rate, is that they end up working with only the micro- and macro-levels of individuals and societies, thereby missing out on the meso-level institutions that are so important in the structure of civil society.

[57]

Conclusions

Although we have seen that reality consists irreducibly of both discrete and continuous aspects, swarm intelligence aims at the discovery of emergent wholes. By nature emergence is discrete, since it represents a difference in kind from its components taken as a mere sum. Yet by considering the arguments of Kennedy and Eberhart’s classic book on the topic, we uncovered a possible intellectual risk connected with the swarm model: namely, its tendency to recognize two and only two layers, individuals on one side and society as a whole on the other. In some respects this replicates the two key weaknesses of modern political theory. The first is that both the political Left and Right base their ideas on distinct conceptions of human nature: the Left treating humans as naturally good or at least improvable, with the Right viewing our species as dangerous and perhaps depressingly constant across the centuries.

[58]

The second weakness is excessive meditation on the relation between individuals and societies, with insufficient attention to the meso-level of social institutions and the forces they exert on both individuals and broader social entities. On the Left this has led to a fixation on revolution, or total upheaval in the social whole, when a vigorous readjustment of mid-sized institutions might be a better path to achieving Leftist aims. On the Right it leads to a minimalist reaction against an overly totalized conception of society, though partisans of limited government might be more comfortable with a looser and more stratified model of social institutions, which gives a better description of political reality.

References

  1. Gerardo Beni & Jing Wang, “Swarm Intelligence in Cellular Robotic Systems.”
  2. See Giorgio Lando, Mereology.
  3. Graham Harman, Waves and Stones.
  4. Charles Darwin, The Origin of Species.
  5. Niles Eldredge & Stephen Jay Gould, “Punctuated Equilibria.”
  6. Ernst Mayr, Populations, Species, and Evolution.
  7. Lynn Margulis, Origin of Eukaryotic Cells.
  8. Sabine Hossenfelder, Lost in Math, p. 178. Kindle edition.
  9. Stephen Buranyi, “Do We Need a New Theory of Evolution?” The same formulation is given by the prominent American physicist Michio Kaku, The God Equation, p. 141. Kindle edition.
  10. See for instance Lee Smolin, Three Roads to Quantum Gravity.
  11. Roger Penrose, “On the Gravitization of Quantum Mechanics I,” abstract.
  12. Jonathan Oppenheim, “A Postquantum Theory of Classical Gravity?”; Steven Carlip, “Is Quantum Gravity Necessary?”; James Mattingly, “Is Quantum Gravity Necessary?”
  13. Aristotle, Physics; Aristotle, Metaphysics.
  14. See Dominik Perler & Ulrich Rudolph, Occasionalismus; Steven Nadler, Occasionalism.
  15. Jane Bennett, “Systems and Things,” p. 227.
  16. Philip Johnson & Mark Wigley, Deconstructivist Architecture, pp. 17-18. See also Graham Harman, Architecture and Objects, pp. 16-24.
  17. Greg Lynn, “Architectural Curvilinearity,” p. 24.
  18. Greg Lynn, Animate Form, p. 31. See also Greg Lynn, “Blobs.”
  19. Stan Allen, “From Object to Field.” See also Harman, Architecture and Objects, pp. 28-29.
  20. Allen, “From Object to Field,” p. 28.
  21. Allen, “From Object to Field,” pp. 24-25.
  22. Allen’s source here is M. Mitchell Waldrop, Complexity. The original paper is Craig Reynolds, “Flocks, Herds and Schools.”
  23. Allen, “From Object to Field,” p. 29.
  24. Martin Gardner, “The Fantastic Combinations of John Conway’s New Solitaire Game ‘Life’”; Stephen Wolfram, A New Kind of Science. The term “cellular automata” itself appears in a 1969 University of Michigan technical report by Arthur Burks, “von Neumann’s Self-Reproducing Automata,” though Burks seems to have taken the term from von Neumann himself.
  25. Manuel DeLanda, “Emergence, Causality and Realism”; Niki Young, “Object, Reduction, and Emergence.”
  26. For more on this see John Stuart Mill, A System of Logic, p. 243.
  27. See Manuel DeLanda, Philosophical Chemistry.
  28. See for instance George Henry Lewes, Problems of Life and Mind; C. Lloyd Morgan, Emergent Evolution; Samuel Alexander, Space, Time, and Deity. For an excellent treatment of these authors see DeLanda, “Emergence, Causality and Realism.”
  29. See Graham Harman, “DeLanda’s Ontology,” p. 371.
  30. David Hume, An Enquiry Concerning Human Understanding.
  31. James Kennedy & Russell C. Eberhart, Swarm Intelligence.
  32. Douglas Keay & Margaret Thatcher, “Interview for Woman’s Own.”
  33. Kennedy & Eberhart, Swarm Intelligence, pp. 100 & 109, respectively.
  34. Kennedy & Eberhart, Swarm Intelligence, p. 98.
  35. Steven Shaviro, Discognition.
  36. Kennedy & Eberhart, Swarm Intelligence, p. 397.
  37. See Andy Clark, Supersizing the Mind.
  38. Kennedy & Eberhart, Swarm Intelligence, p. 3.
  39. Kennedy & Eberhart, Swarm Intelligence, p. 130.
  40. Kennedy & Eberhart, Swarm Intelligence, pp. 415-417. See also Wilfrid Sellars, Empiricism and the Philosophy of Mind.
  41. Kennedy & Eberhart, Swarm Intelligence, pp. 414 & 406, respectively.
  42. Ibid., pp. 419, 284.
  43. Ibid., pp. 308, 411.
  44. Ibid., p. 92. See also Lewis Thomas, The Lives of a Cell; Richard Dawkins, The Selfish Gene.
  45. Ibid., p. xvii.
  46. Ibid., p. xxi.
  47. Ibid., p. 314.
  48. Ibid., p. xxi.
  49. Ibid., pp. 72, 298-299.
  50. Ibid., p. 79.
  51. Ibid., pp. 82 & 91, respectively.
  52. Stephen Jay Gould, The Structure of Evolutionary Theory.
  53. Kennedy & Eberhart, Swarm Intelligence, p. 118. See Marvin Minsky, The Society of Mind.
  54. Kennedy & Eberhart, Swarm Intelligence, p. 119. See Ernest R. Hilgard, Divided Consciousness; Nicholas P. Spanos, “A Social Psychological Approach to Hypnotic Behavior.”
  55. Kennedy & Eberhart, Swarm Intelligence, p. 120.
  56. Ibid., p. 119.
  57. For a discussion see Manuel DeLanda, A New Philosophy of Society.
  58. See Graham Harman, Bruno Latour: Reassembling the Political.