Tuesday, December 02, 2014

What is Life?

I was trolling the Google+ discussion forum related to the Developmental AI MOOC, which my son Zar recently completed, and noticed a discussion thread on the old question of “What is Life?” — prompted by the question “Are these simple developmental-AI agents we’ve built in this course actually ‘alive’ or not?”

I couldn’t help adding my own two cents to the forum discussion; this blog post basically summarizes what I said there.

First of all, I’m not terribly sure "alive or not" is the right question to be asking about digital life-forms.  Of course it's an interesting question from our point of view as evolved, biological organisms.  But -- I mean, isn't it a bit like asking if an artificial liver is really a liver or not?   What are the essential characteristics of being a liver?  Is it enough to carry out liver-like functions for keeping an organism's body going?  Or do the internal dynamics have to be liver-like?  And at what level of detail?

Having said that, though, I think one can make some mildly interesting headway on the “what is life?” question by starting with the concept of agency and proceeding from there...

I think Stan Franklin was onto something with his definition of "autonomous agent" in his classic “Agent or Program?” paper.  He was writing more from an AI perspective than an ALife one, but the ideas seem very much applicable here.  The core definition of the paper is:

An autonomous agent is a system situated within and part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to affect what it senses in the future.


The paper then goes on to characterize additional properties possessed by various types of agents.  For instance, according to Franklin's approach,

  • every agent satisfies the properties: reactive, autonomous, goal-oriented and temporally continuous.
  • some agents have other interesting properties like: learning/adaptive, mobile, flexible, having-individual-character, etc.
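
To make this concrete, here is a minimal Python sketch of the kind of thing Franklin's definition describes.  It's my own illustration rather than anything from the paper: a toy environment, and an agent that senses it, acts on it over time in pursuit of its own agenda, and thereby affects what it senses later.

```python
import random

class Environment:
    """A trivially simple world: a single scalar 'resource' level."""
    def __init__(self):
        self.resource = 10.0

    def sense(self):
        return self.resource

    def apply(self, action):
        # The agent's action feeds back into the world, changing
        # what the agent will sense on later steps.
        if action == "gather":
            self.resource -= 1.0
        self.resource += random.uniform(0.0, 0.5)  # ambient regrowth

class AutonomousAgent:
    """Franklin-style agent: situated in an environment, sensing it
    and acting on it over time in pursuit of its own agenda."""
    def __init__(self, env):
        self.env = env
        self.energy = 5.0  # its 'agenda' is to keep this up

    def step(self):
        percept = self.env.sense()                      # senses the environment
        action = "gather" if percept > 1.0 else "wait"  # reactive, goal-oriented
        if action == "gather":
            self.energy += 1.0
        self.env.apply(action)                          # acts on the environment

env = Environment()
agent = AutonomousAgent(env)
for _ in range(20):  # temporally continuous: an ongoing sense-act loop
    agent.step()
print(agent.energy, env.resource)
```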

Given this approach, one could characterize “life” by saying something like

A life-form is an autonomous agent that is adaptive and possesses metabolism and self-reproduction.


This seems fairly reasonable, but of course it raises the question of how to define metabolism and self-reproduction.  If one defines them too narrowly, based on biological life, one will basically just be defining "traditional biological life."  If one defines them too broadly, they'll have no meaning.
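
Just to illustrate one possible middle ground, here is a sketch of how "metabolism" and "self-reproduction" might be cashed out digitally: an energy budget that drains and must be replenished, plus copying-with-mutation.  The specific mechanics are my own assumptions; whether such mechanics are too broad or too narrow is exactly the definitional question at issue.

```python
import copy
import random

class DigitalLifeForm:
    """An autonomous agent with a crude 'metabolism' (an energy budget
    that drains and must be replenished) and crude 'self-reproduction'
    (copying itself, with mutation of a heritable parameter)."""
    def __init__(self, gather_threshold=1.0):
        self.energy = 5.0
        self.gather_threshold = gather_threshold  # heritable, mutable trait

    def step(self, food_available):
        self.energy -= 0.5                         # metabolic cost of existing
        if food_available > self.gather_threshold:
            self.energy += 1.0                     # 'eating'

    def maybe_reproduce(self):
        if self.energy > 10.0:                     # reproduce only when well-fed
            self.energy -= 5.0
            child = copy.deepcopy(self)
            child.energy = 5.0
            child.gather_threshold += random.gauss(0.0, 0.1)  # mutation
            return child
        return None

    @property
    def alive(self):
        return self.energy > 0.0

org = DigitalLifeForm()
offspring = []
for _ in range(30):
    org.step(food_available=random.uniform(0.0, 2.0))
    child = org.maybe_reproduce()
    if child is not None:
        offspring.append(child)
print(org.alive, len(offspring))
```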

A related approach that seems sensible to me is to define a kind of abstract "survival urge."  For instance, we could say that


An agent possesses survival-urge if its interactions with the environment, during the period of its existence, have a reasonably strong influence on whether it continues to exist or not ... and if its continued existence is one of its goals.


and


An agent with individual character possesses individual-character survival-urge if its interactions with the environment, during the period of its existence, have a reasonably strong influence on whether other agents with individual-character similar to it exist in future ... and if both its continued existence and the existence of other agents with similar individual-character to it, are among its goals.


Then we could abstractly conceive life as

A life-form is an adaptive autonomous agent with survival-urge.

or

An individuated life-form is an adaptive autonomous agent with survival-urge, individual character, and individual-character survival-urge.

These additions bring us closer to the biological definition of life.
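
One could even try to operationalize survival-urge roughly as follows.  This is my own sketch, not a standard test: it compares how long an agent persists when its actions take effect normally versus when they're scrambled into random noise (measuring the "reasonably strong influence" clause), and separately checks that continued existence is among the agent's goals.  The hooks agent_factory, run and goals_of are hypothetical placeholders for whatever simulation machinery one has.

```python
def survival_influence(agent_factory, run, trials=100):
    """Estimate how strongly an agent's own interactions influence whether
    it continues to exist, by comparing its average lifespan when it acts
    normally vs. when its actions are replaced with random noise.
    `run(agent, scramble_actions)` is assumed to return a lifespan."""
    normal = sum(run(agent_factory(), scramble_actions=False)
                 for _ in range(trials)) / trials
    scrambled = sum(run(agent_factory(), scramble_actions=True)
                    for _ in range(trials)) / trials
    return normal - scrambled   # positive: its own actions help it persist

def has_survival_urge(agent_factory, run, goals_of, threshold=1.0):
    """Survival-urge = the agent's interactions strongly influence its
    continued existence, AND continued existence is among its goals
    (implicit or explicit), per the definition above."""
    influential = survival_influence(agent_factory, run) > threshold
    wants_to_live = "continued_existence" in goals_of(agent_factory())
    return influential and wants_to_live
```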

(I note that Franklin, in his paper, doesn't define what a "goal" is.  But in the discussion in the paper, it's clear that he conceived it as what I've called an implicit goal rather than an explicit goal.   That is, he considers that a thermostat has a goal; but obviously, the thermostat does not contain its goal as deliberative, explicitly-represented cognitive content.  He seems to consider a system's goal as, roughly, "the function that a reasonable observer would consider the system as trying to optimize."   I think this is one sensible conception of what a goal is.)
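
The thermostat case is easy to spell out in code.  In the minimal sketch below (my own, not Franklin's), nothing represents "keep the temperature near the setpoint" as explicit cognitive content; the goal exists only implicitly, as the function a reasonable observer would describe the system as trying to optimize.

```python
class Thermostat:
    """Has an implicit goal (keep temperature near the setpoint) that is
    nowhere represented as explicit, deliberative content: it is just a
    pair of comparisons a reasonable observer reads as goal-pursuit."""
    def __init__(self, setpoint=20.0, band=0.5):
        self.setpoint = setpoint
        self.band = band
        self.heating = False

    def step(self, temperature):
        if temperature < self.setpoint - self.band:
            self.heating = True    # too cold: turn heat on
        elif temperature > self.setpoint + self.band:
            self.heating = False   # too warm: turn heat off
        return self.heating
```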

Unfortunately I haven't studied the agents created in the Developmental AI MOOC well enough to have a definitive opinion on whether they are "alive" according to the definitions I've posited in this post.  My suspicion, though, based on a casual look, is that they are autonomous agents without much of a survival-urge.  But I'd guess a survival-urge, and even an individuated one, could be achieved via small modifications to the approach taken in the course exercises.

My overall conclusion, then, is that some fairly simple digital life-forms should logically be said to satisfy the criteria of “life”, if these criteria are defined in a sensible way that isn’t closely tied to the specifics of the biological substrate.

Now, some may find this unsatisfying, in that digital organisms like the ones involved in the Developmental AI MOOC are palpably much simpler than known biological life-forms like amoebas, paramecia and so forth.  But my reaction to that would be that complexity is best considered a separate issue from “alive-ness.”  The complexity of an agent’s interactions, perceptions and goal-oriented behaviors can be assessed, as can the complexity of its behaviors specifically directed toward survival or individual-character survival.  According to these criteria, existing digital life-forms are definitely simpler than amoebas or paramecia, let alone humans.  But I don’t think this makes it sensible to classify them as “non-alive.”  It’s just that the modern digital environment happens to allow simpler life-forms than Earthly chemistry gave rise to via evolution.