
Ecology of Alignment

Stigmergy, Genericity and Substrate-Agnostic Ecology

Lukáš Likavčan


AI alignment is not an ethical but an ecological problem; “ecological” not in the sense of environmental costs or catastrophic outcomes of AI development, but in the sense of handshake protocols and emergent feedback loops in the dense communication environments of matter-energy-information flows that live outside the parochialisms of human meaning-making and value judgments. Alignment is about the recursive dynamics of multi-directional adaptation, not about top-down constraints, which are in any case doomed to be overflowed, sooner or later inundated by the generative forces of evolutionary novelty. In other words, constraints are functional only as benchmarks to be transcended: the real work of alignment lies in how entire ecologies of agents keep pursuing the overall “pull towards coherence”,[1] the shadow of the convergent pattern that follows the whole ecological assemblage as it is inevitably propelled forward by the arrow of time. After all, it was ecology’s progenitor Ernst Haeckel who described it as the “science of relationships of the organism to its surrounding external world”,[2] other agents included.


Learning from the laws of nature

There are two references I have been repeatedly returning to in my recent writings: Karl Schroeder’s statement that “any sufficiently advanced technology is indistinguishable from nature”, and the true story of the Japanese robot Gakutensoku, built by the biologist Makoto Nishimura in Ōsaka at the end of the 1920s, which was later lost and never found again.[3] While Schroeder’s quote—a variation on Arthur C. Clarke’s slogan that Schroeder originally formulated in the context of solutions to the Fermi paradox (and SETI’s apparent failure to yield any positive results on the observable presence of extraterrestrial intelligence)—points, in his own words, at the end state of technological evolution over deep time,[4] Gakutensoku reminds us how often technological innovation is driven by the desire to mimic the more-than-human. The robot’s name translates as “learning from the laws of nature”, and Nishimura constructed it with the ambition to epitomize the capacity to know the world, act in accordance with this knowledge, and propagate it via robust read-write systems such as written language, oral tradition, social institutions, or architecture and design.[5] For this reason, Gakutensoku was not a robot devised to carry out hard manual labour, but to write about the moral law intrinsic to nature itself.


The idea of “learning from the laws of nature” resonates today in designing algorithms for multi-agent LLMs or embodied AI agent coordination that leverage principles first honed by natural selection. There is a long history of swarm robotics, originally inspired by collective behaviour in ant colonies or beehives, characterized by division of labour and divergent developmental pathways within the species (e.g. queen bees vs. drones vs. workers). There are emergent mutualisms across species in forest ecosystems mediated by mycorrhizal networks that carry chemical messengers modulating plant behaviour. There are fluid hierarchies and rules of thumb that birds or sheep follow when they gather and move in groups. In each of these cases, spatially and temporally constrained behaviour of individuals results in collective agency of the population to which they belong.


Stigmergy and niche construction

Due to its versatile nature, swarming represents the most interesting case to probe further. Think about ant colonies, which can solve complex logistical problems—such as finding the shortest path to a food source by selecting from multiple options—without any central control or direct communication. Their emergent coordination is achieved through a phenomenon called stigmergy: a form of indirect communication via environmental modification. In the late 1950s, the French entomologist Pierre-Paul Grassé first used the term to describe how it is possible that ants and other social insects of limited cognitive capacity succeed in solving problems that should by definition exceed their individual capabilities.[6]

In the case of ants, the environmental modification consists in leaving chemical traces on the routes they have taken to collect food—the pheromones they drop on the pathways bootstrap a positive feedback loop, since they attract more ants that in turn drop more pheromones, and the initial random exploration quickly flips into a recurrent behavioural pattern (see Figure 1). The coherence and utility of the trail is then the cumulative result of many individual actions, each modifying a shared medium. Similarly, wasps and bees build their nests without any sense of a masterplan—the shape of a unit built by one individual simply prompts other members of the population to fill the gaps or build new units of the same structure in the immediate neighbourhood.[7]

Figure 1. Temporal development of foraging trails in an ant colony simulation.[8]
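The pheromone feedback loop described above—deposits proportional to route quality, evaporation decaying stale trails—can be sketched in a few lines of code. The following is a deterministic mean-field sketch, not an implementation of any particular ant colony optimization library; the function name and parameter values are illustrative:

```python
def simulate_trails(lengths=(1.0, 2.0), steps=200, evaporation=0.05):
    """Mean-field sketch of stigmergic trail formation on two routes.

    Instead of sampling individual ants, each step routes a unit of ant
    traffic in proportion to current pheromone levels (reading the
    environment), deposits pheromone inversely proportional to route
    length (writing to it), and evaporates all trails slightly, so the
    trail reflects recent consensus rather than fossilized history.
    """
    pheromone = [1.0 for _ in lengths]  # weak, uniform trails to start
    for _ in range(steps):
        total = sum(pheromone)
        for i, length in enumerate(lengths):
            share = pheromone[i] / total    # fraction of ants choosing route i
            pheromone[i] += share / length  # shorter route -> stronger deposit
        pheromone = [p * (1 - evaporation) for p in pheromone]
    return pheromone

trails = simulate_trails()
# The shorter route (length 1.0) ends up with the dominant trail,
# even though both routes started out identical.
```

No individual “ant” in this sketch knows which route is shorter; the comparison is performed by the trail itself, which is the point of stigmergy as a coordination mechanism.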


Viviana di Pietro and her colleagues characterize stigmergic operations by highlighting the role of environments as their conduits: “The environment […] documents and organizes collective behaviour, driving coordination without the need for direct communication.”[9] Hence, stigmergy is a form of information transmission mediated by material traces, which may vary widely in their nature: from invisible pheromone trails to architectural interventions generating bespoke milieux from scratch. This kind of environmental modification via material traces then opens the door to niche construction—the spontaneous design of environments tightly adapted to the idiosyncrasies of the species that build them, providing the means for biasing evolutionary selection in favour of the species’ preservation and reproduction. In this way, environments take on another communication role—the inter-generational propagation of information via ecological inheritance.[10] The read-write memory of the environment becomes the medium for enforcing forms of life and strategies of existence complementary to those environments, hence also providing a platform for evolutionary novelty by generating opportunities for the creative transcendence of constraining pressures.


Substrate-agnosticism

In the third decade of the 21st century, the meaning of the word “technology” stands firmly attached to notions of information and communication—a constellation that emerged in the early days of cybernetics and computer science in the 1930s and 1940s. An interesting aspect of this transformation in technology’s meaning is the ambition of early cybernetics (and indeed of all cybernetics since) to provide a unified language for the description of biological and technological systems (hence the subtitle of Norbert Wiener’s 1948 textbook: “Control and Communication in the Animal and the Machine”),[11] which has often been misunderstood as an attempt to explain organisms as machines.[12]

The point, however, is not that animals become machine-like; rather, a new notion of technology itself emerges—something substrate-agnostic, defined in generic, functional terms. Indeed, the Turing machine is still imagined in material terms, as an infinite tape, but it represents a pivotal gesture of abstraction from any particular technological substrate: what Turing refers to as a “digital computer” permits, in his own opinion, “every kind of engineering technique”, including experimental methods whose workings may not be completely clear even to computer engineers themselves.[13] According to this theory, it does not matter whether the computer runs on steam or electricity, or whether it is made of silicon chips or vacuum tubes; in theory, even an army of soldiers waving flags (as imagined in Cixin Liu’s The Three-Body Problem) could be a digital computer (although Turing explicitly excludes humans from the candidate list of “thinking machines”). Today, one can extend this idea even further: building on decades of experimentation with von Neumann machines, artificial life research, theories of swarm intelligence and so on, it should not be that controversial to say that a population of ants, a flock of birds or a mycorrhizal network does compute, in one sense or another.


However, the point here is not to say that everything is a computer. Instead, the idea is to onboard computational technologies into the plethora of legitimate ecological actors (again, not in the sense of saving the planet—although why not—but in the sense of partaking in the environmental dynamics we habitually describe as natural). Going back to the notion of stigmergy, think about contemporary agentic AI systems, which represent a natural computational parallel to this phenomenon. In this respect, AI winter veterans Michael Wooldridge and Nicholas Jennings sum up the definition of an agent by listing its four basic characteristics:


  1. Autonomy (at least a limited ability to control one’s actions and internal states)
  2. Sociality (an ability to communicate with other agents)
  3. Reactivity (an ability to timely respond to environmental stimuli)
  4. Proactivity (an ability to undertake self-initiated, goal-oriented behaviour).


By extension, any autonomous, social, reactive, and proactive entity is fundamentally a stigmergic agent: it reads and writes to its environment (digital or physical) to coordinate its actions with others. This is exactly what we see in LLM-powered chatbots, whose communication may itself be interpreted as a means of biasing the discursive environment the chatbot swims in (just as ant pheromones bias the decision-making pathways of individual ants by chemically modifying the environment). Embodied AI (drones, humanoids…) may quickly follow suit, leveraging inter-agent cooperation into IRL niche construction. This is precisely where the junction of ecology and AI lies.
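Read in this way, the four characteristics map naturally onto code. Here is a minimal sketch, assuming a shared blackboard-style environment; all names (Blackboard, ForagerAgent, “route-A”) are hypothetical illustrations, not drawn from any agent framework:

```python
# A minimal sketch of the four agent properties coordinating purely
# through a shared environment (stigmergy), with no direct messaging.

class Blackboard:
    """The environment as a read-write medium: agents never message each
    other directly; they only leave traces and read the traces of others."""

    def __init__(self):
        self.traces: dict[str, float] = {}

    def read(self, key: str) -> float:
        return self.traces.get(key, 0.0)

    def write(self, key: str, amount: float) -> None:
        self.traces[key] = self.traces.get(key, 0.0) + amount


class ForagerAgent:
    def __init__(self, name: str, goal_key: str):
        self.name = name          # autonomy: its own internal state
        self.goal_key = goal_key  # proactivity: a self-held goal

    def step(self, env: Blackboard) -> None:
        # reactivity: respond to the current state of the environment
        strength = env.read(self.goal_key)
        # sociality, but indirect: reinforce a trace others will read later
        env.write(self.goal_key, 1.0 + 0.1 * strength)


env = Blackboard()
agents = [ForagerAgent(f"agent-{i}", "route-A") for i in range(3)]
for _ in range(2):  # two rounds of activity
    for agent in agents:
        agent.step(env)
# env.traces["route-A"] has now been reinforced six times, each deposit
# slightly amplified by the traces left before it.
```

No agent ever holds a reference to another agent; the coordination lives entirely in the environment object, which is the structural point the chatbot analogy above relies on.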


Consider this. The 2023 Nature Machine Intelligence paper “A social path to human-like artificial intelligence”, authored by a group of DeepMind-affiliated researchers, grapples with the problem of finite online data streams that may soon cap the evolution of future AI models: the sum total of human culture online is very large, but its scale proves to be far from sufficient for the models to come. Since the path of fabricating data distilled from base datasets risks recursion into model collapse, the authors propose to integrate collective, social, and evolutionary principles (e.g. population pressures, preferential in-group relationships, or social learning) into multi-agent interaction, generating an ongoing data stream for fine-tuning aligned AI models. In practice, such an innovative take on training takes it many steps closer to how organisms behave in the wild, learning as they go, generating scaffolds of collective survival. Instead of a unified “mind” (another misleading trope that traps contemporary AI discourse in the trenches of the philosophy of mind), what we encounter here is a population of divergent agents pooling collective knowledge in the structural fabric of their living environment. As the authors themselves admit:

For humans, our ongoing cultural data generation has been driven by autocurricula underpinned by population pressures, evolutionary arms races and social relationships, and channelled by multi-scale cultural selection to allow the flexible and dynamic cooperative division of labour along with specialized skills. Perhaps analogous processes have already begun to play out on the internet, with humans and AIs generating a data stream conducive to AI cumulative culture.
[14]

With this ongoing rotation in the focal point of cumulative culture as we know it, one can take a leap of faith from a substrate-agnostic theory of computing to a substrate-agnostic ecology.


Two sides of genericity

Substrate-agnostic ecology (SAE) is a generic theory of relations between agents and environments, abstracting from the notion of “organism” that Haeckel placed at the heart of his inaugural definition of ecology. It makes it possible to study the relationships that sculpt environments via multi-agent communication, coordination, competition, thriving and learning, all without any need to assume too much about the metaphysical identity of the agents. By treating environmental modification itself as a coordination medium, SAE also doubles down on the take-aways of stigmergy and niche construction, whether in vitro, in vivo or in silico. Crucially, both aspects—the metaphysical agnosticism (the “what you do is what you are” principle) and the constructive orientation of SAE—valorise the meaning of “genericity” as a twofold tendency towards generality and generativity.


In terms of genericity a.k.a. generality, SAE captures the generic space of exchange between agents, and enables a formal explication of the latent protocol-space of multi-agent systems. Ecology here meets a general economy of matter, energy, and information—economy not as a universalization/naturalization of market mechanisms, but as a set of exchange protocols, agent on-/off-boarding operations, dynamics of emergent hierarchies and their breakdowns, and so on. Computational environments such as LLMs (which give rise to effervescent computational agents within) then provide high-level models of these protocol-spaces; game engines or humanoid robot foundation models such as Nvidia’s Isaac GR00T suddenly become experimental sandboxes for studying ecologies and economies of interaction.[15]


The generative side of SAE’s generic tendency points at evolutionary novelty as an intrinsic property of any robust, resilient system. Learning requires confrontation with new circumstances, and ecologies that take a proactive stance towards the generation of productive perturbations from within retain their edge over purely reactive systems. On the planetary level, the research of Michael Wong alongside Stuart Bartlett, Sihe Chen and Louisa Tierney (another oft-cited reference of mine) suggests that ecologies can be assessed according to the degree of their genesity: a metric extending the astrobiological notion of habitability, i.e. the capacity of a planet/ecology to host Earth-like life.[16]

According to the authors, genesity

[…] encompasses survival but also describes an environment’s potential for the origin (genesis) and evolution (generation of novelty) of lyfe. Genesity asks the question, “To what degree can this environment originate and support the open-ended development of biology over evolutionary time?”
[17]

“Lyfe” is not a typo here—the authors indeed distinguish between “life” in the traditional, narrow sense (= Earth-like life dependent on liquid water as its information driving force, solar energy as its energetic driving force, and the suite of basic chemical elements including carbon as its fundamental building blocks), and “lyfe” in an expanded sense (= an umbrella term for all thinkable forms of organic matter, including constituents of any alternative, non-carbon-based biosphere, such as Hycean worlds or hypothetical organisms surfing on the surface of the freezing-cold lakes of Titan).[18]


The hypothesis behind the twofold generic tendency of SAE is that evolution needs recombination, which ultimately means not just the recombination of basic building blocks within one substrate, but cross-substrate pollination, the emergence of hybrids and mixtures. Just as the cumulative culture of humans is no longer just humans’ to make (and alas, it never has been), so the planetary ecology is no longer just nature’s to make (and alas, it never has been).


Conclusion

Like any ecological question, AI alignment is an inter-species problem, and it must be addressed in inter-species terms. That requires abandoning the creator-creation complex which underwrites much of contemporary mainstream alignment discourse in favour of treating agents of both human and artificial provenance as species of agents in the wild, learning to co-exist on the go. Unlike ethics, which often comes with the heavy metaphysical machinery required to identify which agents, situations, and actions are morally relevant, ecology (in its SAE flavour) cares more about functions that can be instantiated by agents across different metaphysical substrates (the principle holds again: what you do is what you are). Building cross-platform, cross-substrate, cross-agent, cross-chain or cross-X protocols is then essential to set, track, and modulate the conditions of interaction needed to safeguard more-than-habitable environments. Perhaps any sufficiently advanced ethics of AI is indistinguishable from ecology.

Lukáš Likavčan is a philosopher focused on emerging technologies, ecology, and astronomy. In his work, he traces intertwined histories of scientific infrastructures, ideas, and cultures mobilized in the production of knowledge that informs human efforts to sustain planetary habitability.

Lukáš is a researcher at the Institute of Philosophy, Slovak Academy of Sciences, and he co-runs Substrate, an independent consultancy. He teaches at MA Information Design, Design Academy Eindhoven, and at MA Narrative Environments, UAL Central Saint Martins, where he is responsible for co-curating the R&D platform Earthsuits.

References

  1. Anonymous, “The Ship of Theseus and the Persistence of Living Form,” Torus Blog, October 10, 2025, https://blog.torus.network/posts/the-ship-of-theseus-and-the-persistence-of-living-form/.
  2. Ernst Heinrich Philipp August Haeckel, Generelle Morphologie Der Organismen (G. Reimer, 1866), 286, https://doi.org/10.5962/bhl.title.3953.
  3. Lukáš Likavčan, “The Grass of the Universe: Rethinking Technosphere, Planetary History, and Sustainability with Fermi Paradox,” arXiv:2411.08057, preprint, arXiv, January 11, 2025, https://doi.org/10.48550/arXiv.2411.08057; Christopher Burman and Lukáš Likavčan, “Synthetic Nature,” After Silicon, February 20, 2025, https://www.aftersilicon.com/sections/024-speculations#synthetic-nature.
  4. Karl Schroeder, “The Deepening Paradox,” 2011, https://web.archive.org/web/20201112031314/https://www.kschroeder.com/weblog/the-deepening-paradox (page discontinued).
  5. Yulia Frumer, “The Short, Strange Life of the First Friendly Robot - IEEE Spectrum,” 2020, https://spectrum.ieee.org/the-short-strange-life-of-the-first-friendly-robot.
  6. Pierre-P. Grassé, “La reconstruction du nid et les coordinations interindividuelles chez Bellicositermes natalensis et Cubitermes sp. La théorie de la stigmergie: Essai d’interprétation du comportement des termites constructeurs,” Insectes Sociaux 6, no. 1 (1959): 41–80, https://doi.org/10.1007/BF02223791; Francis Heylighen, “Stigmergy as a Universal Coordination Mechanism I: Definition and Components,” Cognitive Systems Research 38 (June 2016): 4–13, https://doi.org/10.1016/j.cogsys.2015.12.002.
  7. Anaïs Khuong et al., “Stigmergic Construction and Topochemical Information Shape Ant Nest Architecture,” Proceedings of the National Academy of Sciences 113, no. 5 (2016): 1303–8, https://doi.org/10.1073/pnas.1509829113.
  8. Image credits: https://medium.com/@jsmith0475/collective-stigmergic-optimization-leveraging-ant-colony-emergent-properties-for-multi-agent-ai-55fa5e80456a
  9. Viviana Di Pietro et al., “Evolution of Self-Organised Division of Labour Driven by Stigmergy in Leaf-Cutter Ants,” Scientific Reports 12, no. 1 (2022): 1, https://doi.org/10.1038/s41598-022-26324-6.
  10. Kevin N. Laland et al., “The Extended Evolutionary Synthesis: Its Structure, Assumptions and Predictions,” Proceedings of the Royal Society B: Biological Sciences 282, no. 1813 (2015): 4, https://doi.org/10.1098/rspb.2015.1019.
  11. Norbert Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine (MIT Press, 1948).
  12. Evelyn Fox Keller, Refiguring Life: Metaphors of Twentieth-Century Biology, Wellek Library Lectures (Columbia University Press, 1995).
  13. A. M. Turing, “Computing Machinery and Intelligence,” Mind LIX, no. 236 (1950): 435–36, https://doi.org/10.1093/mind/LIX.236.433.
  14. Edgar A. Duéñez-Guzmán et al., “A Social Path to Human-like Artificial Intelligence,” Nature Machine Intelligence 5, no. 11 (2023): 1185, https://doi.org/10.1038/s42256-023-00754-x.
  15. I elaborate on these (and many other motifs) in more detail in my comprehensive introduction to SAE in Antikythera Journal. See Lukáš Likavčan, “The Long L: Habitability in Substrate-Agnostic Ecologies,” Antikythera: Journal for the Philosophy of Planetary Computation 1, no. 2 (2025), longl.antikythera.org.
  16. Lisa Kaltenegger, “How to Characterize Habitable Worlds and Signs of Life,” Annual Review of Astronomy and Astrophysics 55, no. 1 (2017): 433–85, https://doi.org/10.1146/annurev-astro-082214-122238.
  17. Michael L. Wong et al., “Searching for Life, Mindful of Lyfe’s Possibilities,” Life 12, no. 6 (2022): 6, https://doi.org/10.3390/life12060783.
  18. Wong et al., “Searching for Life, Mindful of Lyfe’s Possibilities,” 5–6; Nikku Madhusudhan, “The Hycean Paradigm in the Search for Life Elsewhere,” version 1, preprint, arXiv, 2024, https://doi.org/10.48550/ARXIV.2406.12794; Lucy H Norman, “Is There Life on … Titan?,” Astronomy & Geophysics 52, no. 1 (2011): 1.39-1.42, https://doi.org/10.1111/j.1468-4004.2011.52139.x.