Thursday, June 30, 2011

Bacterial Wisdom As Template for Artificial Free Will

If any genuine "free will" exists, it is at the level of the "I-ness" of a system, the decision-making routine, that it comes into play. Before we dive into the technicalities of this issue, let's first try to brainstorm on what can be understood by "free will". Although intuitively we "know" what "free will" is, just as we know what consciousness is, it is extremely hard to define it in words. Let's try to build an ontology of "free will" by reciting its features and by drawing the borders of this concept from the notions of what it is not.

I followed a very interesting discussion on the issue of free will and whether it is needed in AI, which I will neither repeat nor summarise here, but a number of whose striking concepts I will use in this essay. I do not claim to have come up with those concepts myself, nor do I claim to be an expert on the issue, but I believe that I can add some interesting concepts to the discussion deriving from Ben Jacob's "Bacterial Wisdom", "Global Brains" and "Societies-of-Minds". I will also propose incorporating an artificial functional mimic of "Free Will" in a Webmind such as the AWWWARENet (Artificial World Wide Web Awareness Resource Engine Net).

A number of concepts stood out above the noise of the aforementioned discussion, which I'll mention here as features (and non-features) of the "free will ontology":

"Choice, override, randomness, unpredictability, (non)determination, chaotic, (non)causality and evolution".

Indeed, for a "Will" or decision-taking routine to be "free", it must be able to override those possible decisions which are "causality-determined". In Goertzel's Webmind the discriminating faculty is the AttentionBroker routine. In the AWWWARENet, the AttentionBroker presents its conclusions (which course of action is to be taken as the most rational, i.e. as having the highest probability of success) to the I.I.I. (Identity, Initiative and Illusion generating routine). Insofar as the system has an "override" function, it appears to an outside observer to be endowed with a faculty of "choice".

The need for a random-picking faculty arises when the AttentionBroker presents the I.I.I.-routine with more than one equally likely option, i.e. options with identical priorities.
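As a minimal sketch of how such a broker-plus-override arrangement with random tie-breaking could be wired together (the class and method names below are my own inventions for illustration, not the actual Webmind or AWWWARENet code):

```python
import random

class AttentionBroker:
    """Ranks candidate actions by an estimated probability of success."""
    def rank(self, options):
        # Highest priority first; priorities are assumed to be precomputed utilities.
        return sorted(options, key=lambda o: o["priority"], reverse=True)

class III:
    """Sketch of the Identity, Initiative and Illusion generating routine."""
    def __init__(self, override_rule=None):
        self.override_rule = override_rule  # optional veto hook: the "override" faculty

    def decide(self, ranked):
        top = ranked[0]["priority"]
        ties = [o for o in ranked if o["priority"] == top]
        choice = random.choice(ties)        # random picking among identical priorities
        if self.override_rule:
            alternative = self.override_rule(ranked, choice)
            if alternative is not None:
                return alternative          # overriding the causally determined pick
        return choice

broker, iii = AttentionBroker(), III()
options = [{"name": "explore", "priority": 0.7},
           {"name": "exploit", "priority": 0.7},
           {"name": "wait",    "priority": 0.2}]
print(iii.decide(broker.rank(options))["name"])  # "explore" or "exploit", at random
```

To an outside observer only the veto hook and the tie-break are visible as "choice"; internally both are just more algorithm.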

The issue becomes more poignant when, due to a scarcity of resources or time-imposed resource constraints, not all options can be carried out simultaneously, or worse, when they are mutually exclusive, i.e. some must be sacrificed at the expense of others.

Which one should be chosen if they all have equally preferable numerical outcomes in the resultant vector of pros and cons, and the only differences are to be found on a qualitative level?

It goes without saying that the advantage-disadvantage summation includes attributing preferential weighting to long-term advantages over short-term disadvantages.

A rational/causal decision for the system will try to optimise the system's chances of survival in the long term; short-term, repairable damage can then be tolerated as a temporary sacrifice.
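Purely for illustration, such an advantage-disadvantage summation with a heavier weight on long-term items could be as simple as the following (the weighting factor and the data format are arbitrary assumptions of mine):

```python
def option_score(pros, cons, long_term_weight=2.0):
    """Sum pros and cons, weighting long-term items more heavily.

    Each item is a (magnitude, horizon) pair, horizon being "short" or "long";
    the whole scheme is illustrative, not a prescribed formula.
    """
    def weigh(items, sign):
        return sum(sign * (long_term_weight if horizon == "long" else 1.0) * magnitude
                   for magnitude, horizon in items)
    return weigh(pros, +1) + weigh(cons, -1)

# A long-term advantage outweighs a repairable short-term disadvantage.
print(option_score(pros=[(1.0, "long")], cons=[(0.8, "short")]))  # 1.2
```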

When we look at the only observable example we have of "free will", i.e. ourselves (at least we believe we are endowed with such a faculty, and we need an example of free will if we ever want to try to simulate or mimic it in an artificial environment), we indeed sometimes override rational reflections which warrant a safe outcome and take prima facie irrational, intuitive decisions based on a "gut feeling". Often our animal instincts and/or emotions are capable of overriding a potential well-reflected decision based on a summation of the pros and cons. Goertzel sees these as natural impediments to human superintelligence in his book "The Hidden Pattern".

Are such override decisions examples of "genuine free will", or are they merely the result of a summation on a meta-level, e.g. where an outcome of the "Emotome" is weighed against an outcome of the "Cognotome"? If the latter is the case, these decisions certainly do not qualify as "free will" but are the result of yet another algorithm. Nevertheless, even a superintelligent AI system which is aware of the routines of the "Emotome" and "Cognotome" and is programmed with sufficient control over the "advice" deriving from the "Emotome" will still face situations where it has to choose between equally good (or bad) strategies.

In such cases the system could be programmed to pick one at random. But such a random-picking routine cannot really be equated with "genuine free will".

When we say that we intuitively choose the solution which "feels best", perhaps we are subconsciously performing a search through a space of known similar solutions and picking the one whose situational parameters show the highest degree of similarity, or the one with the shortest route to a successful outcome. We might be devising a heuristic. An AI system could be programmed in such a manner, but again such an algorithm does not qualify as genuine free will.
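A crude sketch of such a similarity search, assuming (my assumption, not a claim about the brain) that situations are reduced to small parameter vectors and that memory is a flat list of past cases:

```python
import math

def most_similar_solution(situation, memory):
    """Nearest-neighbour retrieval: pick the remembered case whose
    situational parameters lie closest to the current situation."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(memory, key=lambda case: distance(situation, case["params"]))

memory = [
    {"params": (0.9, 0.1), "action": "negotiate"},
    {"params": (0.2, 0.8), "action": "flee"},
]
print(most_similar_solution((0.8, 0.2), memory)["action"])  # negotiate
```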

In reality our presumed "free will" is much more limited than we might a priori believe. Tricks played by so-called "mentalists" have shown that subconsciously registered clues from the most recent peripheral perceptions steer us toward decisions which we believe to be genuine free-will-based decisions.

By eliminating all descriptions which are not the product of genuine free will, we may come to a description of free will. Let's continue the brainstorming exercise in order to ground a pattern of free will from a number of examples.

Let's start with an extreme example of "choice", which should not be influenced by "peripheral perceptions". In the film "Sophie's Choice" there is a scene where Sophie (played by Meryl Streep) is forced to choose one of her children; the other will be killed. Not choosing will result in both children being killed. A parent who loves his children equally and refrains from favouritism might have the following thoughts:

1. It is better if one of my children survives than none.
2. As these sadistic monsters kill people anyway, there is no good reason to give in to this non-choice; they will very probably kill both children in the end anyway.
3. If I do choose one of them, I may buy some time for one of them, creating a chance for escape and survival.
4. If I do choose one of them, I commit a sin: it is immoral to take a decision forced upon me by blackmail; one should never give in to that. I'd rather save my ass in the afterworld.
5. If I do not choose one of them, I commit a sin: it is immoral to condemn both to death.
6. I should choose the most helpless one, or the one with the best survival chances.

Thoughts 1, 3 and 6 belong to the realm of Necessity (N) and Energy (E) and aim for the "least damage" result. Thoughts 2, 4 and 5 belong to the realm of Morality (M). Is the choice being made again the result of a summation vector of N, E and M? Is one's choice faculty predestined by the idiosyncratic resultant N, E, M vector?
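If one wanted to test the summation hypothesis, the bookkeeping itself would be trivial; the sketch below merely sums invented per-thought contributions along the N, E and M axes and says nothing about where such numbers would come from:

```python
# Each thought nudges a candidate option along the Necessity (N), Energy (E)
# and Morality (M) axes; the resultant is a plain component-wise sum.
# All numbers are invented purely for illustration.
def resultant(contributions):
    return tuple(sum(axis) for axis in zip(*contributions))

option_a = [(0.6, 0.2, -0.4), (0.3, 0.1, 0.0)]   # thoughts pulling mainly on N and E
option_b = [(-0.2, 0.0, 0.5), (0.1, 0.0, 0.3)]   # thoughts pulling mainly on M

for name, thoughts in (("option_a", option_a), ("option_b", option_b)):
    n, e, m = resultant(thoughts)
    print(name, "N =", round(n, 2), "E =", round(e, 2), "M =", round(m, 2))
```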

Is "gut feeling" and "feeling like it" a form of aligning your decision as much as possible to your N,E,M vector or is there a way to escape from algorithmic pattern based calculation considerations?

Don't we sometimes make choices which are non-rational or even counter-intuitive, the motive being recalcitrance? Is a "what the heck, I'll just pick one" not the carrying out of a pure random-picking algorithm?

Scientists, artists, musicians and other creative persons sometimes have breakthrough insights, moments of pure bliss, where they simply "see" the solution to a complex problem; where a sudden "inspiration" overrides the paradigmatic pathways and fixed action patterns of the basal ganglia.

Such utterly original ideas coming from moments of bliss, especially when coupled to a choice, may come closest to what we intuitively assume to be "free will".

Another example of apparently free-will-based choices is when we deliberately and consciously do the opposite: willingly go against our morality by indulging in this or that bad habit, even if our Emotome and Cognotome tell us differently. The often-heard expression is then "the flesh is weak". When this relates to, for example, possibilities of extramarital sexual intercourse, the overriding force of our animal instincts should not be underestimated for many people. The animal part of the brain then imposes a kind of artificial Necessity on the decision-taking routine once the mating signal has been given by an attractive candidate of the opposite sex.

Only a combination of Energy (E) and Morality (M) (e.g. I don't want to hurt my present partner and my children, and/or my religion considers this a sin, etc.) may then override such instincts. Again, a summation of both the instinctive tendencies and the outcome of the Emotome/Cognotome will then determine the action to be taken. Not so much free will after all?

A third example of apparent free will and choice with an unpredictable outcome can be found in the realm of "Global Brains" such as bacterial colonies, beehives and anthills. A priori, as long as resources are sufficient, the system thrives by maintaining conservative habits, i.e. by maintaining the paradigm.

Those individuals in the Global Brain who have the role of Ben Jacob's "conformity enforcers" and "inner judges" will ensure that the system can thrive as long as the status quo parameters apply. However, once resources become scarce, a need will arise to probe different strategies so as to ensure the survival chances of the species. Those individuals in the colony having the role of "diversity generators" are indispensable for probing alternative strategies. These diversity generators must be able to boldly go where no one has gone before; they must be daring and blithely dive into the abyss of the unknown.

It is of the utmost importance that these individuals are endowed with a great deal of free will, because they MUST take decisions which go against common sense. They must expose themselves to great dangers and run a huge risk of compromising their own survival as a sacrifice for the greater good. The diversity generators must almost have a borderline personality: they must take absurd, random, intuitive or counter-intuitive decisions. Whereas the vast majority of the diversity generators (i.e. the mutants in an evolutionary system) are not successful, a few of them are, and they bring the species a new chance for survival: a new way to exploit resources, or new resources altogether. The selection of the most promising strategy follows the evangelical adage "To he who hath, it shall be given; from he who hath not, it shall be taken away". The outcome of sending out a multiplicity of diversity-generating sentinels and pioneers in parallel is unpredictable.

The Global Brain system as a whole, if it is successful in the end, then appears to have chosen and invented a solution which, to an observer from the outside, appears to derive from a blissful insight: a truly intelligent, intuitive, utterly original free-will decision.

What the outside observer does not know from this prima facie observation is that the Global Brain has massively probed a multitude of solutions, the vast majority of which have failed. The outcome appears to be free will, but it is the result of a competition, a screening struggle for the most promising strategy. Perhaps our brains function in a similar way, such that when we seek a solution to a problem, we subconsciously launch a multitude of strategies in parallel. These strategies compete, and only the most promising strategy is promoted to the level of consciousness by exceeding a certain threshold after having been voted upwards in a Reddit-like system.

Perhaps this is the best way to mimic free will in an AI system: to allow multiple different strategies to evolve in parallel in a simulation and/or "real" environment and have the "intergroup tournament" establish which strategy is the most successful one. The up- or down-voting during the intergroup tournament screening is then carried out by online individuals and/or aLife agents, which can be considered Ben Jacob's "resource shifters".
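A toy rendering of such an intergroup tournament, in which invented "judge" functions stand in for the online individuals and aLife agents doing the up- and down-voting (all names, numbers and strategies here are assumptions for illustration):

```python
import random

def intergroup_tournament(strategies, judges, rounds=100, threshold=20):
    """Run the strategies in parallel trials and let the judges (the
    "resource shifters") vote +1/-1; the first strategy whose score
    exceeds the threshold is "promoted to consciousness"."""
    scores = {s.__name__: 0 for s in strategies}
    for _ in range(rounds):
        for strategy in strategies:
            outcome = strategy()                                 # simulated trial run
            for judge in judges:
                scores[strategy.__name__] += judge(outcome)      # up- or down-vote
            if scores[strategy.__name__] >= threshold:
                return strategy.__name__, scores
    return max(scores, key=scores.get), scores                   # fallback: best so far

# Toy strategies and a single judge that rewards higher simulated outcomes.
def conservative(): return random.uniform(0.4, 0.6)
def adventurous():  return random.uniform(0.0, 1.0)
judge = lambda outcome: 1 if outcome > 0.5 else -1

print(intergroup_tournament([conservative, adventurous], [judge]))
```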

So "apparent free will" may emerge from making a vast amount of wrong, unsuccessful decisions/strategies and keeping the few promising successful ones.

The "making of mistakes" is both inherent and indispensable to this system as it relies on massive parallel probing: The system will learn the most from its mistakes and prune away non-promising strategies. It will not venture in those directions again. Thus by means of this massively parallel probing in simulation environments, the Global Brain builds its own heuristics.

Analogously, in our lives there is nothing wrong with making mistakes as long as we learn from them. It is my experience that making mistakes is more instructive and has a longer-lasting impact than courses of action which I happened to perform correctly without knowing why. Thus this world, where we can make mistakes (a religious person would use the term "sin"), is in fact the best of all possible worlds in the terms of Voltaire's "Candide", as it permits us to evolve consciously.

So for me the answer to the question "What is free will and is it needed for AI?" is the following (and I do not claim to have come up with this definition all by myself; I combined some concepts from the aforementioned discussion and added the element of Ben Jacob's thesis):

Free will can be characterised as a decision-making process which overrides rational and/or emotional/instinctive heuristics and which establishes a new heuristic on the basis of the seven-step algorithm of intelligence, whenever the system is under resource restrictions and has to deal with a choice having less than certain knowledge at its disposal. That algorithm involves the elements of Ben Jacob's "Bacterial Wisdom" in the following manner:

Probing a diversity-generating antithesis as a result of a stimulus from the inadequacy of the status quo thesis (e.g. a lack of resources), pattern abstraction, emergence of multiple alternative strategies, intergroup tournaments and distinction probing, resulting in either niching or, preferably, symbiosis.

The most promising strategies ideally result in symbiosis, a unification of features toward which the system will strive. It will try to resonate with its new environment and thereby adapt to it.
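Strung together very schematically, and with every hook injected from outside because the essay does not prescribe any concrete implementation, one pass of this cycle might look like this:

```python
def diversity_cycle(thesis, resources_sufficient, generate_antitheses,
                    abstract_pattern, tournament, probe_distinction):
    """One pass through the steps above; the decomposition into these
    six hooks is my own reading, not an established algorithm."""
    if resources_sufficient(thesis):
        return thesis                                          # keep the status quo
    antitheses = generate_antitheses(thesis)                   # diversity generation
    strategies = [abstract_pattern(a) for a in antitheses]     # pattern abstraction / emergence
    winner = tournament(strategies)                            # intergroup tournament
    return probe_distinction(thesis, winner)                   # niching or (preferably) symbiosis

# Stub usage: the old strategy merges with the winning new one ("symbiosis").
result = diversity_cycle(
    thesis="graze the current food patch",
    resources_sufficient=lambda t: False,
    generate_antitheses=lambda t: ["sporulate", "swarm to a new patch"],
    abstract_pattern=lambda a: a.upper(),
    tournament=lambda s: s[-1],
    probe_distinction=lambda old, new: f"{old} + {new}",
)
print(result)
```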

As to the necessity for AI, casu quo a webmind: it can be said that if the system is put under pressure by a scarcity of resources, it is indispensable that it has a way to venture into the unknown to discover new resources.

Yet the system as a whole cannot venture into the unknown by making one big leap; that is simply too risky. A Webmind apparently disposing of free will is therefore ideally a Society-of-Minds, wherein the different individuals have been attributed the roles of conformity enforcers, inner judges, resource shifters and diversity generators, so that the system as a whole can safely sacrifice diversity generators on a massive scale in order to find new promising strategies, heuristics and/or resources, without compromising the integrity of the whole. Among the diversity generators it can be envisaged that there are different groups or ensembles, each having a different degree of freedom to explore: there can be a gradual increase from rather conservative combinations of existing strategies that a diversity generator can propose up to absurd, wild combinations of unrelated strategies. Conservative diversity generators will still look for certain degrees of resemblance between existing strategies and combine parts of these linearly; when more freedom is allowed, non-linear combinations can be used, and the most free systems can have access to random combinations on the verge of the absurd. The diversity generators themselves are still algorithm-bound, but a successful one will be seen by the outside world as having had a great deal of free will.
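The graded ensembles just described could be caricatured as three generator functions over strategy parameter vectors, from a conservative linear blend to near-random proposals (representing a strategy as a plain vector is my assumption):

```python
import random

def conservative_generator(strategies):
    """Linear blend of two resembling existing strategies."""
    a, b = random.sample(strategies, 2)
    w = random.uniform(0.3, 0.7)
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def nonlinear_generator(strategies):
    """Non-linear combination: element-wise product plus noise."""
    a, b = random.sample(strategies, 2)
    return [x * y + random.gauss(0, 0.1) for x, y in zip(a, b)]

def wild_generator(strategies):
    """Near-random proposals 'on the verge of the absurd'."""
    dim = len(strategies[0])
    return [random.uniform(-1, 1) for _ in range(dim)]

# Existing strategies as parameter vectors (purely illustrative values).
existing = [[0.2, 0.8, 0.5], [0.3, 0.7, 0.6]]
for gen in (conservative_generator, nonlinear_generator, wild_generator):
    print(gen.__name__, [round(v, 2) for v in gen(existing)])
```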

Evolution of colony-based organisms and of cell aggregates within an organism works in a similar way: think of the hypermutation process of the immune system.

Similarly, we as human beings may fulfil the roles of the different types of individuals in a Society-of-Minds. The universe is probing for new solutions in order to propagate its seven-step intelligence algorithm, and it also uses us to achieve that goal. To conclude from there that we live in a simulation is then almost mere semantics.

The seven-step algorithm of intelligence is a twofold dialectic process: thesis (1), antithesis (2) and pattern abstraction (3) leading to emergence (4). From opposition (the antithesis as regards a thesis) comes creativity (pattern abstraction), resulting in redefinition (emergence).

Free will, or at least an apparent form thereof, is indispensable in this system to create the diversity generators. Whereas the conformity enforcers and inner judges who maintain the status quo are endowed with fairly little or almost no free will (and thereby maintain a form of inertia in the system), the resource shifters and, even more so, the diversity generators are endowed with a great deal of free will, so as to ensure leaps into the unknown. Absurd and unpredictable mutations, carried out on a massive scale, result in intelligent decisions by pruning away the mistakes via a survival-of-the-fittest protocol.

The free will of the most extreme diversity generators is then in fact a form of counter-intuitive absurdity: a borderline leaping into the abyss of the unknown just for the kick of it. The diversity generators must be endowed with a certain amount of "mental insanity" so as to ensure the sanity of the system as a whole, of which they form part.

So it can be concluded that the free will of the orchestrating quasi-conscious faculty in such a webmind is limited to the generation of submodules, e.g. in the form of smaller-sized copies of itself endowed with fewer resources, which perform the ungrateful task of probing the unknown, whereas another, greater part of the system is controlled and maintained by the conformity enforcers and inner judges in the form of aLife agents. Note that the faculty to generate apparent free will for its submodules does not necessarily entail genuine free will at the higher meta-levels as well: those are still governed by weighing and summation algorithms that choose the best option, if needed using random picking when results are identical. Due to the selection of the best solution from the submodules, the system as a whole displays "apparent free will", but the webmind has no such true faculty.

Antonin Tuynman was born on 22-02-1971 in Amsterdam. He studied Chemistry at the University in Amsterdam (MSc 1995, PhD 1999). Presently he works as a patent examiner at the European Patent Office in the field of clinical diagnostics. He has also passed the papers of the European Qualifying Examination for patent attorneys. Antonin has developed a strong interest in futurism and the Singularity theory of Kurzweil. In his blog Awwwareness, http://tuynmix.blogspot.com/, Antonin proposes Artificial Intelligence concepts which may lead to the emergence of the internet as a conscious entity.


