Evolutionary Algorithms – Problems with Directed Evolution

Creationists often use “irreducible complexity” as an argument against evolution.
That is, you need all the parts in place and functioning before the “whole” can do its work, and thus reproduce and spread its features.

The bacterial flagellum is one such feature that has been attributed with “irreducible complexity”.
You need the “tail” and some “engine” parts in place before it can be used as a propeller and drive the bacterium forwards.

Evolutionary biologists have, however, shown this to be wrong: each of these parts had other purposes before they were re-used and combined for propulsion, so each part was already present.

The key here is that evolution in reality is not directed towards a “final goal”; it simply makes organisms adapt to their current environment.
For example, an organism might evolve a single pair of legs, and later generations might gain more of those legs if that is beneficial.
The front pair of legs might even later evolve into a pair of arms that allow the organism to grab food while it eats, and so on.

In short, existing features can be reused and refined in order to reach a higher fitness level in the current environment.

As far as I know, we still have not managed to accomplish this sort of “undirected evolution” in computer programs in the same sense.
If we make a program that is supposed to come up with a solution to a given problem, I would use “directed evolution” and try to breed solutions that are better and better at solving that problem.
So if our program was supposed to come up with a propulsion system for a body, it would fail at evolving the bacterial flagellum, since we would run straight into the effects of irreducible complexity: our program is unable to evolve all the parts for reasons other than moving the body forwards.
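
To make this concrete, here is a minimal sketch of what I mean by “directed evolution” (a toy problem; the goal, genome encoding, and mutation rate are made-up placeholders):

```python
import random

TARGET = [1] * 20  # the single fixed goal the whole run is directed towards

def fitness(genome):
    # Directed: the only thing that counts is distance to the one goal.
    return sum(1 for gene, want in zip(genome, TARGET) if gene == want)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - gene if random.random() < rate else gene for gene in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # keep only the fittest
    population = [mutate(random.choice(parents)) for _ in range(50)]
```

Any intermediate feature that does not score against TARGET is simply bred out, which is exactly the irreducible-complexity trap described above.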

In order to harness the full power of evolution in computer programs, we need to be able to simulate “undirected evolution” so that we can evolve all these parts that later can be re-used for other purposes.

Is there any research going on in this area at all?

I know that the old “Tierra” simulation was sort of undirected; the only goal was to consume as much CPU time as possible, but it could certainly use undirected evolution to get to that goal.

But other than that, anything?

16 Comments

  1. Carter Cole says:

    I have had your feed for a while and have contemplated dropping it, until I see posts like this… I think you hit the nail on the head… we need to get better at creating machines or algorithms that together perform as a collective unit. We aren’t smart enough yet to teach them how to grow a solution to ANY problem; we can use the “irreducible complexity” of our human minds to direct that learning to find the best answer, based on the fitness we give it to decide if it lives or dies… we are playing god with little things, and as in John Conway’s Game of Life, living things can emerge from a system of a few rules… look at a fractal… a simple equation that is just as complex however much closer or farther you look… the thing is being able to sift through the trash that’s created and find the right solution to the right problem.

  2. Thomas Coats says:

    The problem is that with an algorithm we aren’t building a machine – an algorithm is not a blueprint. If we move a wall or make something bigger in a blueprint, things still work in a similar manner. If we change a register or start counting from a different index, we radically change our algorithm.

    The problem with algorithms is that we don’t have an organic way to represent their function. They don’t exist in a physical environment where they can find their way by bumping into things; they exist in a fragile computer environment where slight changes in function cause large differences in behaviour.

    If we could represent algorithms in an organic and flexible manner we could start taking steps to evolving them. We need to find a DNA style representation for algorithms.

  3. Carter Cole says:

    The algorithm follows the rules of the system it was created for, and in the same way a fractal has infinite complexity, we can keep going deeper and deeper and higher and higher and optimize for efficiency at each of those levels… I’m really starting to believe that we can fake it… only create the section of the fractal, the solution we need to do the job, and not worry about the details for now (zooming into the fractal)… it will be many algorithms together, each self-referencing and regulating, much the way Google ranks the web (it’s over 200 factors, they all play a part, and the thing as a whole is a bitch to replicate). With enough feedback loops we can create an algorithm that will modify itself as it becomes able to comprehend larger datasets and broader topics. Now with “irreducible complexity” we can cheat evolution… if you tried to come up with intelligence by modeling the real world it would take forever for life to emerge again; it just needs too many generations to brute-force the solution… (how long would it take to randomly get the giant duplicator in John Conway’s Game of Life? Forever! But we as humans were able to see how the rules of little gliders worked and make a bigger machine that copies itself.) So if we tell it how to grow with what we’ve already reverse-engineered about ourselves, we can coax it into finding the solutions we are looking for… but if you try to make it learn too fast, or have too dirty a signal-to-noise ratio, then it doesn’t find the pattern, can’t learn the right thing and won’t converge.

    I’ve only been smart enough to use neural nets for CAPTCHA breaking, but there are so many other solutions we could be looking for…

    And here’s a last thought before I go… if the machine we create can beat every Turing test, is that good enough?

    I’d love to work with others on other applications of this stuff, so please get in touch with me…
    I’m @cartercole, follow me :)

  4. Christopher Weeks says:

    As far as I know, we still have not managed to accomplish this sort of “undirected evolution” in computer programs in the same sense

    I certainly agree that there is a vast arena of yet-to-learn goodies in the field of evolutionary computing, but how is the organism’s fitness in the real world any more “undirected” than it is in my image-growing application? Nature defines fitness as the ability of a genome to replicate. I define fitness in my software the same way. The mechanisms are all different, but they seem similarly directed.

    The thing is, the bacteria evolved some molecular pumps because transfusing atoms through the outer membrane (or something…I don’t recall the details and getting it right is kind of beside the point) increased their fitness. They went on to attach those pumps to a hydrodynamic tail because that also increased fitness. The same inexorable test of fitness is driving those things. In my image-grower, if I’m judging fitness similarly, but giving the engine the option to mutate in a variety of ways, isn’t the same thing happening? Maybe a yellow blob over here increases fitness. Maybe increasing the transparency of the end-range of the square-painting series increases fitness. And then maybe attaching those two bits of genetic code (in either order) increases fitness more.

    So, either it seems like we’re already doing that and it’s not too remarkable or I’m missing something substantial about the absence that you’re noticing.

  5. Roger Alsing says:

    is the organism’s fitness in the real world any more “undirected” than it is in my image-growing application?

    I don’t know what your application does, but every single implementation of EA/GA I’ve seen has had “a single goal”, that is, the evolution is directed towards that goal.

    In nature, there is no final solution that everything strives for; humans are not more “right” than a worm, we are both adapted to different environments and niched at different things.

    The thing is, the bacteria evolved some molecular pumps because transfusing atoms through the outer membrane (or something…I don’t recall the details and getting it right is kind of beside the point) increased their fitness

    I think you are missing the point: in the real world, yes, that would increase the bacterium’s fitness.
    In a GA simulation, evolving something that doesn’t make the organism better at solving the specific problem the simulation strives for will not yield any better fitness, and will thus most likely be discarded.

  6. Carter Cole says:

    That’s why I was saying you have to ask it the right problem to solve, or you will just keep getting single solutions… first we had replicators that could just make exact copies… but they were vulnerable, so we added diversity… but what are humans but giant walking robots made to carry around our genes and be smart enough to talk slick with a female and procreate. The fitness equation for organisms in the wild is how well they are able to procreate (thus more of their traits are likely to live on and exist in later generations).

    If it’s a bigger penis that increases fertility, then it will optimize for that, just because on average the bigger penis is able to father more children; but humans went in a new direction… they got smart, and with intelligence they were much more efficient at hunting and surviving… in our simulations an iteration is a few microseconds, but for a human generation to go by is an entire lifetime of learning. They say women are getting more attractive because on average a more attractive female procreates more than a less attractive one. The smarter ones that were able to discern an ambush or a drought were smart enough to move on and reproduce…

    If we want a learning robot, maybe we generate all the children and then let them all try to optimize themselves for some number of problems over time… then breed the best of those that learned the fastest, after they have been able to survive and adapt as new problems and larger concepts are given… I think a big part is that the way data is stored in the mind plays a big part of it… we need classifiers to be able to lump objects into general categories so we can find patterns and make assumptions about the object at each level of the hierarchy.

    We say “box” and people all think of different things, but we can all say it probably has at least 4 sides… now whether it’s an open or closed figure, or a packing box, or a perfect cube, has to do with the archetype of the box in our mind, and only by specifying more detail (or showing a picture) are we actually talking about the same box and not boxes in general.

    Here’s the process I’ve come up with so far… identify the problem to optimize for… if the correct-answer rate isn’t statistically significant, then kick into learning mode… my idea is that we create classifiers that group every object we talk about… if two inputs seem to produce different results but we can’t find the rule, then there must be some other metric that separates them, and we need to try to find differences in the sets they belong to… when we identify a difference, we test the new metric to see if we can come up with a solution now. We want to lower the entropy of the data as well; the less predictive a metric is, the less useful it is, and the algorithms need to understand that and know how to create filters to predict bad data and filter it out… if an input isn’t predictive of the outcome, try to find out why, and whether there’s some new fitness function that could do the job better, rather than just using some static approximation of the term.

    It’s very high level, but here’s the idea…
    1) do you know how to solve this problem? (classifier)
    2) if you don’t, look for ways the inputs and outputs are different and try to find the rule
    3) if you can’t find a rule, increase the dataset with a new metric to see if it can help predict the outcome

    Humans build their own datasets, so when we think of a new way to look at a problem, we can go and get all that new data and build up our learning data… I think this is a big part of the system, and we need to hook the algorithms into APIs on the internet… they would grow more slowly because they reference outside data, but if they are attempting to converge while they are able to manipulate the dataset and test their predictions, then they can grow smarter over a longer period of time.

    What we want is a function that can look at each variable of a fitness function and understand what it’s looking for and why it’s there… if it’s getting trash data, it needs to understand that it may have a signal-to-noise problem, or it may need another metric to test on, or it needs to fix the broken input…

    The scientific method gives us some structure for how to test predictions; we just need to teach that structure to the computer… I think where we have a disconnect is that we can’t go backwards and think of what the minimum is, so we ask it to find patterns in systems that are high-level ideas, but it doesn’t get all the inputs (because we don’t give it the whole problem), so it can’t find the pattern and learn how to predict it…

    If we really did understand how every node in the stock market would behave (or could predict it based on all the metrics about their money, previous choices and where they are in their lives at that point), I think it would be trivial to make very accurate predictions, but we have no way to get that dataset or to express it to computers, so it can’t solve that problem yet.

  7. Christopher Weeks says:

    OK, so I don’t have a particularly comprehensive or scholarly apprehension of GC/GA. I’ve read some stuff (not in an academic environment) and I do actually have an image-growing application that I tinker with from time to time. I used the examples of the genetic code to create a yellow blob and the genetic code to create more transparent rectangles in part of the genome because those are both totally valid in my particular app and also seemed to mirror evolution of the molecular pump and the tail into a flagellum.

    The molecular pump increased the bacterium’s fitness over its peers (as the universe measures fitness). The yellow blob (hypothetically) increases the fitness of an image over its peers as well (as I have my app measuring fitness). And the tail and the transparency are other gene-snippets. Those bits, when combined in certain ways, produce flagella and transparent yellow boxes…or whatever.

    So, maybe we mean something different by “single goal” when talking about fitness functions? You’re saying that in the evolution of organisms, there is not a single goal, right? That improved survival * fecundity is a complex goal. Do you merely mean multi-variate? And is it just that fitness is understood to mean the product of survival and fecundity; are those the two variables that make it not a single goal? Or is it something more essential than that?

    In my image-grower, for example, the fitness function is a composite test of the generated image against each of the images in a directory. It runs several tests over each to determine a weighted fitness score. I can craft new functions to measure new forms of similarity (a pixel-by-pixel match, or palette-weighting tests, or matching image-subsets) and just drop them in place, and then tweak the weights of each test against one another. I was under the impression that that kind of multi-variate analysis of fitness was normal, but it’s possible that my small sample of reading has skewed my understanding. Also, there might be something that I’m still missing that makes my test “a single goal.” In your understanding, is it? And if it isn’t, is it because I’m testing the grown images against more than one “ideal” image, or is it because I’m using several tests of fitness and mixing them?
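
    For concreteness, here is a minimal sketch of the kind of weighted, multi-test fitness I mean (the two test functions are hypothetical stand-ins, not my real code):

    ```python
    # Each test compares a generated image to one target image, returning 0..1.
    def pixel_match(candidate, target):
        # Stand-in: fraction of pixel values that match exactly.
        return sum(c == t for c, t in zip(candidate, target)) / len(target)

    def palette_overlap(candidate, target):
        # Stand-in: how much of the target's palette the candidate uses.
        return len(set(candidate) & set(target)) / max(len(set(target)), 1)

    TESTS = [(pixel_match, 0.7), (palette_overlap, 0.3)]  # (test, weight)

    def fitness(candidate, targets):
        # Average the weighted test scores over every image in the directory.
        return sum(weight * test(candidate, target)
                   for target in targets
                   for test, weight in TESTS) / len(targets)
    ```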

    Thanks for your time, I hope I’m not being too dense.

  8. Christopher Weeks says:

    Oh sweet! I just realized you’re the Mona Lisa guy. That series of your posts is what got me started on my own explorations though I’m doing different stuff and not as disciplined as you. :)

  9. Carter Cole says:

    I think I’m trying to say we need a fitness function for fitness functions… it can look at itself, decide if it already knows an answer or needs to find one, and then modify itself accordingly… the fitness function is what determines the bounds within which the organism will grow… the more terms, the more advanced it is. By creating self-referencing fitness functions with feedback systems, it allows the program to grow the scope of problems it can solve. It’s recursion on a fractal of problems and solutions… we know everything from when we were a child, and it gives us a baseline for how the world will behave and what we can expect. You can’t just throw a function out there without the organism understanding the terms… then it can only grow within the limits of those terms until you allow it to add and remove values… making it better at the job (learning), and the better it learns, the more intelligence it has.

    If the organism chooses to add or remove a term from its fitness function, it has to be for a reason… we might think of these as beauty… or accomplishment… if removing or adding the term helps it converge and solve, then it may be something good to add; and on the opposite side, if removing the term allows it to see patterns more clearly with less noise, then that’s also a correct solution… now it sees the beauty in patterns…

    We need to think about what the complex feelings we have are, and what they actually say about what the human is doing or trying to do.

    If it’s crying, then it’s frustrated… it can’t solve the issue, and it knows from the past that when it can’t do something itself, crying will lead to getting the desired outcome… if you keep telling them to figure it out for themselves, they will eventually stop trying to cry for the answer and look for the out-of-the-box thinking that will produce the desired result, because the solution isn’t obvious in the data they are currently manipulating and they have learned that crying is no longer an option for finding a solution.

  10. Carter Cole says:

    I’ve been watching these, and that’s where a bunch of my recent ideas have come together: http://www.ted.com/talks/dan_dennett_s_response_to_rick_warren.html

  11. Carter Cole says:

    Really wish I could edit comments, but I guess I’ll keep talking as I’m still finding new info to contribute…

    This robot can evolve motion on its own, without a model of itself, because it’s an active feedback system and understands one big concept… physics and locomotion… it tries to describe itself, moves a random motor, tests its theory, and then designs locomotion from that, until it has a weird although clever form of locomotion… I wonder, if you told it locomotion isn’t just movement but also the ability to step over obstacles, whether it would automatically evolve the spider-like walking they hoped to see in the first place.

    http://www.ted.com/talks/hod_lipson_builds_self_aware_robots.html

  12. @thomascoats Evolving algorithms is not a problem; check out GP (genetic programming).

  13. If you have a final goal (the problem to be solved), then you have to provide some kind of guidance so the individuals in the pool are moving towards that goal. Usually, the fitness function assigns a score to each individual based on how well it solves the problem at hand. The fitness function is analogous to the environment and is most often static. To achieve a more dynamic environment you can let the problem co-evolve, or set up a competitive arena where the individuals compete. The competition can be between peers of the same species (a single population) or adversaries of a different species (two populations).
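
    As a rough sketch of the competitive-arena idea (the contest itself is a made-up placeholder; a real problem would substitute its own game):

    ```python
    import random

    def beats(a, b):
        # Placeholder contest between two genomes; 1 if a wins, else 0.
        return 1 if sum(a) > sum(b) else 0

    def relative_fitness(individual, population, rounds=10):
        # No static score: the individual is ranked against randomly drawn
        # peers, so the "environment" shifts as the population itself evolves.
        rivals = random.sample(population, min(rounds, len(population)))
        return sum(beats(individual, rival) for rival in rivals)
    ```

    With two populations, each individual's rivals would be drawn from the other species instead of from its own.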

  14. Carter Cole says:

    This was quite interesting; they say they evolved memory in their duplicator populations.

  15. Bill Tozier says:

    OK, Roger, so I’m of two minds here, and one is unfortunate: on one hand I’m tempted to fall into a supercilious old-scientist mode that a lot of people find off-putting.

    I’m not gonna do that. This is the helpful, encouraging mode instead:

    “That old Tierra simulation” was undirected, but also implicitly multiobjective, and in several published experiments from the 90s it was used in attempts to “get them to do something useful” in addition to just surviving and propagating. You may think of it as “old” because you don’t know about the Avida project, now mainly housed at MSU’s DevoLab (http://devolab.msu.edu/), a broad-ranging research program that grew out of old Tierra and has been doing some amazing work recently.

    That ‘every single implementation of EA/GA I’ve seen has had “a single goal”’ is something I guess I can help with, too. Multiobjective search has been part of EA since the late 1980s, and it’s probably more the fault of engineering instructors who are too dumb to teach people that every engineering problem is multiobjective than of EA researchers and practitioners that you haven’t heard about it. There are a number of useful and interesting conferences on multiobjective metaheuristics, and you can probably get access to some or all of them via an ACM library lookup or Google Scholar.
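
    If it helps to see the core mechanic: most multiobjective methods rank candidates by Pareto dominance rather than by a single weighted score. A minimal sketch (the objective tuples are made up for illustration):

    ```python
    def dominates(a, b):
        # a and b are tuples of objective scores; higher is better on every axis.
        # a dominates b if it is at least as good everywhere and strictly
        # better somewhere.
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_front(scored):
        # Keep the candidates that nothing else in the pool dominates.
        return [a for a in scored
                if not any(dominates(b, a) for b in scored if b != a)]
    ```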

    You can also find out more about multiobjective metaheuristics (especially genetic programming) in the introductory (and free to download) Field Guide to Genetic Programming, or in Sean Luke’s Essentials of Metaheuristics (which features an image lifted from your Mona Lisa project, Roger).

    For people who are just getting interested in multiobjective searches, just be aware that there are plenty of useful, well-written things (some from more than 30 years ago) that explore all the stuff you’re considering about “fitness functions for fitness functions”, weighting schemes, notions of optimality and tradeoff, stochastic methods.

    But in the end I’d suggest one more thing to everybody whose comments I’ve read above. Maybe a useful thing to think about in considering evolutionary algorithms and engineering projects and natural open-ended evolution is this: When is it done? A lot of engineers and scientists (many of whom are my colleagues, and EA theorists and practitioners you’re reading about now) seem to imagine that a “goal”, embodied in a fitness function, is a fixed point. That if you change what you want in the middle of a “search”, you’re making things complicated and bad and intractable. That once you’ve made some progress in writing—or evolving—an algorithm, and gotten a little feedback from the process about how well it will probably turn out, you’re bound by some kind of Science Police to keep going until you get it optimal.

    That’s crap. Worse, it’s stupid academic philosophically misleading crap. If you want to make things, stop thinking of design (as in this context: choosing representations, fitness functions, algorithms, anything) as something separate from adapting your own goals.
