This is a followup to my previous post on the same topic: http://rogeralsing.com/2010/07/29/evolutionary-algorithms-problems-with-directed-evolution/
I started thinking about possible solutions after I published my last post, and I think I might have something that could work.
In order to harness the full power of evolution, we need to be able to simulate undirected evolution.
Undirected evolution requires more than one environment, so that organisms can evolve and niche into specific environments instead of just competing for the same environment.
In our case, the “environment” is the “problem” that we want to solve.
The more fit an organism is in our environment, the better it is at solving our given problem.
So far, I think everyone has been applying evolutionary/genetic algorithms to individual problems, evolving algorithms/solutions for a single purpose, and thus experiencing the problems of irreducible complexity.
But what if we were to introduce an entire “world” of problems?
If we have a shared “world” where we can introduce our new problems, each problem would be like an island in this world, and each island would be a new environment that existing organisms can niche into.
This way, we could see organisms re-use solutions from other problems, and with crossover we could see combinations of solutions from multiple problems.
The solutions would of course have to be generic enough to handle pretty much any kind of algorithm, so I guess the DNA of the organisms needs to be very close to a real general-purpose programming language.
Possibly something like serialized Lisp/Clojure, running in a sandboxed environment…
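To make the idea concrete, here is a minimal sketch (in Python, purely for illustration — the post only suggests a Lisp-like representation) of what a “serialized Lisp” genome could look like: nested lists as expression trees, evaluated against a whitelist of primitives rather than raw `eval`, which is one simple way to get the sandboxing mentioned above. All names and operators here are invented for the example.

```python
import random

# Hypothetical genome: a nested-list "serialized Lisp" expression tree,
# interpreted against a whitelist of safe primitives (a crude sandbox).
PRIMITIVES = {
    "+": lambda a, b: a + b,
    "*": lambda a, b: a * b,
    "neg": lambda a: -a,
}

def evaluate(expr, x):
    """Interpret an expression tree; 'x' is the problem input."""
    if expr == "x":
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    return PRIMITIVES[op](*(evaluate(a, x) for a in args))

def mutate(expr):
    """Replace a random leaf with a new constant or the input variable."""
    if not isinstance(expr, list):
        return random.choice(["x", random.randint(-5, 5)])
    i = random.randrange(1, len(expr))
    clone = list(expr)
    clone[i] = mutate(clone[i])
    return clone

genome = ["+", ["*", "x", "x"], 1]   # represents x*x + 1
print(evaluate(genome, 3))           # 10
```

Crossover falls out of the same representation: swap random subtrees between two genomes and both children remain valid expressions.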
The more problems we add to this “world”, the better it would become at solving harder problems, since it can reuse existing solutions.
The structure of it all would be something like:
The “World” is the container for “Problems”.
“Problems” contain input/output sampling and “populations of organisms”; thus, each problem has its own ecosystem.
“Organisms” evolve by mutation and genetic crossover, and they can also migrate to other “problems” from time to time.
This way, an organism from the “SSH encryption island” might migrate over to the “bank authentication login code generator island” and be used as a module in one of the branches of one of the organisms there, thus removing “irreducible complexity” from the equation.
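The world/problem/migration structure above could be sketched like this — a toy sketch only, with invented class and island names; fitness evaluation and breeding are stubbed out, and only the migration mechanism is shown:

```python
import random

# Hypothetical sketch: a World holds Problems ("islands"), each with its
# own population; migrate() occasionally moves organisms between islands.
class Problem:
    def __init__(self, name, fitness_fn):
        self.name = name
        self.fitness_fn = fitness_fn   # input/output sampling would live here
        self.population = []

class World:
    def __init__(self, problems):
        self.problems = problems

    def migrate(self, rate=0.01):
        """With some probability, move each organism to another island."""
        for src in self.problems:
            for org in list(src.population):
                if random.random() < rate:
                    dst = random.choice([p for p in self.problems if p is not src])
                    src.population.remove(org)
                    dst.population.append(org)

ssh = Problem("SSH encryption island", fitness_fn=lambda org: 0.0)
bank = Problem("Bank authentication island", fitness_fn=lambda org: 0.0)
world = World([ssh, bank])
ssh.population.extend(range(100))
world.migrate(rate=0.5)
print(len(ssh.population) + len(bank.population))   # migration conserves the total: 100
```

The key design point is that selection pressure stays local to each island, while migration is the only global mechanism — which is exactly the "locally directed, globally undirected" split.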
Evolution would be locally directed and globally undirected…
I think this could work, at least to some extent… no?
Exactly… and apparently this is what they are doing… I’m looking for the article, but they now have it solve multiple problems using some common framework to describe the datasets and problems… My thought is also this: OK, we have a brain, but it only has very rudimentary stuff in it. It takes children time to learn skills and how the world works and is manipulated… What if, when our AIs are finally born, they are like kids… how will we train them? Will every AI be paired with a child so they learn together? The TED.com talks seem to show a lot of info and ways to think about the issue… and what the feeling “interesting” is, and how we make our programs have those types of emotions. One guy says we are just trying to compress data into its smallest form, so beauty is how close a face is to the archetype in memory, because it takes less data to save the differences from a more “perfect” face.
Most things we solve for are looking for the solution to a single problem, as you said, not how the system should learn… There should be an extra self-referencing function that says: if the current model is erroring too often, kick into a statistics-collecting phase on the inputs, try to find patterns that are statistically significant for guessing the answer, and write a new fitness equation to optimize for — as well as checking whether it has seen this same problem before and already knows the solution.
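That self-monitoring loop could be sketched roughly like this — an invented toy, where "statistics collecting" is reduced to tracking answer frequencies and falling back to the majority answer when the rolling error rate crosses a threshold:

```python
from collections import Counter, deque

# Hypothetical sketch: wrap a model, track a rolling error rate, and when
# it crosses a threshold, fall back to statistics gathered from feedback
# (here simply: guess the most frequent correct answer seen so far).
class SelfMonitoringModel:
    def __init__(self, model_fn, threshold=0.3, window=50):
        self.model_fn = model_fn
        self.threshold = threshold
        self.errors = deque(maxlen=window)   # rolling window of recent errors
        self.answer_counts = Counter()       # crude "input statistics"

    def error_rate(self):
        return sum(self.errors) / len(self.errors) if self.errors else 0.0

    def predict(self, x):
        if self.error_rate() > self.threshold and self.answer_counts:
            # statistics-collecting fallback: most frequent known answer
            return self.answer_counts.most_common(1)[0][0]
        return self.model_fn(x)

    def feedback(self, x, true_y):
        self.errors.append(self.model_fn(x) != true_y)
        self.answer_counts[true_y] += 1

# A deliberately bad model that always answers 0:
m = SelfMonitoringModel(model_fn=lambda x: 0)
for x in range(20):
    m.feedback(x, true_y=1)   # the true answer is always 1
print(m.predict(5))           # error rate is high, so it falls back: prints 1
```

Rewriting its own fitness equation would slot in where the fallback is: instead of a majority guess, the collected statistics would seed a new optimization target.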
The problem is getting the algorithm to work with hierarchical data structures… so that it can create statistics and generalities for each level of objects… The hard thing is saying what goes into what bucket, and how the web of related things connects together… We don’t tell our kids how to classify something; we just keep describing it in different ways until they get it, so how do you tell a computer those complex relationships? All of this is much easier to say than do, but if we can codify a set of steps that works for most problems, then it may not be so bad and could do well solving some more general low-complexity problems.
This guy’s talking about plasticity of the brain… I’m interested in using evolutionary algorithms and neural nets to try and find the most efficient learning machine… Duplicators may have been the first life, and finding one solution to a problem is fine, but we want a general worker that can specialize as time goes on.
There are several works on combining GAs and ANNs. You can check the work done by Floreano & co. for pointers. An interesting paper is about choosing the best learning algorithm (using a GA) for a neural network. I’ve seen applications in Hebbian learning, for instance.
The idea is thrilling: find not the best solution to a problem, but the best “meta” solution that can be instantiated via learning to solve a particular task. Do you know anything about it?
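The "meta" idea can be illustrated with a toy sketch — not Floreano's actual method; the task, the rule parameterization, and all numbers are invented. The GA does not evolve weights, but the parameters (A, B, C, D, eta) of a generalized Hebbian rule, and fitness measures how well a network trained with that rule ends up solving the task:

```python
import random

# Hypothetical sketch: evolve the parameters of a generalized Hebb rule
#   dw = eta * (A*pre*post + B*pre + C*post + D)
# Fitness: a single-weight network trained with the rule should end up
# computing post ≈ pre, i.e. the weight should converge to 1.
def train_with_rule(params, steps=50):
    A, B, C, D, eta = params
    w = 0.0
    for _ in range(steps):
        pre = random.choice([-1.0, 1.0])
        post = w * pre
        w += eta * (A * pre * post + B * pre + C * post + D)
        w = max(-2.0, min(2.0, w))   # keep the weight bounded
    return -abs(w - 1.0)             # 0.0 is perfect

def evolve(pop_size=30, gens=40):
    pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=train_with_rule, reverse=True)   # noisy but workable
        survivors = pop[: pop_size // 2]
        children = [[g + random.gauss(0, 0.1) for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=train_with_rule)

random.seed(1)
best = evolve()
print(train_with_rule(best))   # values near 0.0 mean the evolved rule learns
```

The evolved genome is a learning rule, not a solution — it is then "instantiated via learning" on the task, which is the meta-solution distinction made above.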
I believe the problem is we wish to simulate something far beyond our current computational power. Every cell is a Turing-complete processor. Every combination of proteins is a state in the processor, and the concentration is the value of that state. State transitions are not 0 or 1 but are thermodynamic probabilities. State transitions occur in parallel on femtosecond time scales. We are nowhere close to being able to understand the complexity of the protein structures of a bacterium. So we try to abstract that away and evolve cell populations, or tissue populations (e.g. neural tissues), or abstract even higher and simulate populations of competing individuals (e.g. foxes and rabbits in the Lotka–Volterra equations) or even entire competing populations (e.g. economic simulations). Every level is useful in discovering some insight into ‘how things work’. But I think we are far, far away from understanding how to take a solution from one simulation and use it in another. The scaling just won’t allow it, I fear.