This is a follow-up to: http://rogeralsing.com/2008/12/07/genetic-programming-evolution-of-mona-lisa/
Over the last few days we have made quite a few changes to the EvoLisa app.
Dan Byström provided an optimization that resulted in a 25 times performance improvement for the fitness function, completely crazy stuff :-)
Mats Helander and I have been discussing ways to parallelize EvoLisa so it can run on a big computation cluster, and Mats came up with a brilliant idea on how to partition it.
(More info on that later)
We have also been playing quite a bit with the rendering, added spline rendering and changed how the polygons mutate.
Just check this out:
How cool is that? ;-)
So, I’m curious what kind of performance gain could be achieved by evaluating only those pixels affected by the mutation, or in a simpler form, maybe evaluating the pixels in the smallest rectangle that can contain all the affected pixels.
Just fantastic. Especially if we could have this new version ;)
Just brilliant :)
Awesome
I’m also curious, but about the effective compression ratio of this compared to the initial bitmap.
I also wonder if fading edges rather than sharp ones (maybe even a simple blur) might improve the matching, as a lot of the edges in the target are slow fades rather than hard transitions :-)
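For instance, something as crude as a 3×3 box blur applied to the candidate rendering before comparison; this is illustrative only, and GetPixel/SetPixel is far too slow for real use, as Dan’s optimization shows:

using System.Drawing;

// Box-blur a rendering before comparing it to the target, so soft edges
// are not penalized as hard ones. Image borders are skipped for brevity.
static Bitmap BoxBlur3x3(Bitmap src)
{
    Bitmap dst = new Bitmap(src.Width, src.Height);
    for (int y = 1; y < src.Height - 1; y++)
    {
        for (int x = 1; x < src.Width - 1; x++)
        {
            int r = 0, g = 0, b = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                {
                    Color c = src.GetPixel(x + dx, y + dy);
                    r += c.R; g += c.G; b += c.B;
                }
            dst.SetPixel(x, y, Color.FromArgb(r / 9, g / 9, b / 9));
        }
    }
    return dst;
}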
@chewie007
That sounds reasonable. One thing to consider is that you’d have to calculate the previous fitness for that region, subtract it from the overall fitness, and then calculate and add the new fitness. You also have to calculate the bounding rectangle itself.
Doesn’t sound terribly expensive. Unfortunately I don’t own the software necessary to compile the project, so I can’t test it out. :-(
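Something like this sketch is what I have in mind, assuming the fitness is an additive per-pixel error, so a rectangle’s contribution can be subtracted and re-added on its own. The names (RegionFitness, UpdateFitness) are made up, and GetPixel is only used for clarity; a real version would lock bits the way Dan’s optimized version does.

using System.Drawing;

static class IncrementalFitness
{
    // Sum of squared channel differences inside one rectangle.
    static double RegionFitness(Bitmap candidate, Bitmap target, Rectangle r)
    {
        double sum = 0;
        for (int y = r.Top; y < r.Bottom; y++)
        {
            for (int x = r.Left; x < r.Right; x++)
            {
                Color c1 = candidate.GetPixel(x, y);
                Color c2 = target.GetPixel(x, y);
                int dr = c1.R - c2.R, dg = c1.G - c2.G, db = c1.B - c2.B;
                sum += dr * dr + dg * dg + db * db;
            }
        }
        return sum;
    }

    // Update the total fitness after a mutation that only touched 'dirty'.
    static double UpdateFitness(double total, Bitmap before, Bitmap after,
                                Bitmap target, Rectangle dirty)
    {
        total -= RegionFitness(before, target, dirty); // remove the old contribution
        total += RegionFitness(after, target, dirty);  // add the new one
        return total;
    }
}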
@kavi
You can compile it in Visual Studio 2008 Express Edition (C#)… and that is free as in beer.
You’d have to have a Microsoft OS for that, though (maybe that is what you meant?).
Could someone explain what I need to do to compile the modified version of EvoLisa? (This version: http://danbystrom.se/2008/12/14/improving-performance/)
I have Visual Studio 2008 Express Edition (C#).
This just gets more and more interesting… :)
Interested in an ASM port? ;)
I’ve upgraded the C++ version of Roger’s algorithm. It now traces the polygons with an optimized alpha-blending function, and the program can serialize its progress to a file.
http://www.ggsoft.org/archives/genetic.zip
I’m very interested in multithreading support. I thought about N parallel “evolution processes”: every X seconds the one with the lowest error is chosen and distributed to the other processes.
But I’m sure something better can be done!
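In C# terms, the sort of thing I mean would look roughly like this. It’s only a sketch; DnaDrawing, CreateRandom, Clone, Mutate and Fitness are stand-ins for whatever the real implementation exposes.

using System;
using System.Threading;

class IslandRunner
{
    readonly DnaDrawing[] islands;
    readonly object sync = new object();

    public IslandRunner(int islandCount)
    {
        islands = new DnaDrawing[islandCount];
        for (int i = 0; i < islandCount; i++)
            islands[i] = DnaDrawing.CreateRandom(); // hypothetical factory
    }

    public void Run(int broadcastSeconds)
    {
        for (int i = 0; i < islands.Length; i++)
        {
            int idx = i; // capture a copy for the closure
            new Thread(() => Evolve(idx)) { IsBackground = true }.Start();
        }
        while (true) // broadcast loop
        {
            Thread.Sleep(broadcastSeconds * 1000);
            lock (sync)
            {
                DnaDrawing best = islands[0];
                for (int i = 1; i < islands.Length; i++)
                    if (islands[i].Fitness < best.Fitness) // lower error = better
                        best = islands[i];
                for (int i = 0; i < islands.Length; i++)
                    islands[i] = best.Clone(); // distribute the winner
            }
        }
    }

    void Evolve(int idx)
    {
        DnaDrawing current;
        lock (sync) current = islands[idx].Clone();
        while (true)
        {
            DnaDrawing mutated = current.Clone();
            mutated.Mutate(); // assume Fitness is re-evaluated on mutation
            if (mutated.Fitness <= current.Fitness)
                current = mutated;
            lock (sync)
            {
                if (islands[idx].Fitness < current.Fitness)
                    current = islands[idx].Clone(); // a broadcast beat us; adopt it
                else
                    islands[idx] = current.Clone(); // publish our progress
            }
        }
    }
}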
The best way to approach parallelism, I think, is to extend this into a true implementation of a genetic evolutionary algorithm rather than a stochastic hill-climbing approach. Start with an initial population of random drawings that can be compared to the target in parallel; that comparison is the fitness measure. The next generation is then selected from that population in proportion to fitness. From that pool, random pairs can be selected to exchange portions of their polygon vectors, and random mutations can be introduced with a low probability. Then the whole population is compared against the target again. Lather, rinse, repeat.
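Roughly, in code, with all names as placeholders rather than EvoLisa’s actual types:

using System;
using System.Collections.Generic;

class GenerationalGa
{
    static readonly Random Rnd = new Random();

    // Build the next generation from the current pool (fitness = error, lower is better).
    public static List<DnaDrawing> NextGeneration(List<DnaDrawing> pop, double mutationRate)
    {
        List<DnaDrawing> next = new List<DnaDrawing>(pop.Count);
        while (next.Count < pop.Count)
        {
            DnaDrawing a = SelectProportional(pop);
            DnaDrawing b = SelectProportional(pop);
            DnaDrawing child = Crossover(a, b);
            if (Rnd.NextDouble() < mutationRate)
                child.Mutate(); // low-probability random mutation
            next.Add(child);
        }
        return next; // the caller then evaluates the whole pool against the target, in parallel
    }

    // Roulette-wheel selection: lower error means a larger slice of the wheel.
    static DnaDrawing SelectProportional(List<DnaDrawing> pop)
    {
        double total = 0;
        foreach (DnaDrawing d in pop) total += 1.0 / (1.0 + d.Fitness);
        double pick = Rnd.NextDouble() * total;
        foreach (DnaDrawing d in pop)
        {
            pick -= 1.0 / (1.0 + d.Fitness);
            if (pick <= 0) return d;
        }
        return pop[pop.Count - 1];
    }

    // Single-point crossover over the two parents' polygon vectors.
    static DnaDrawing Crossover(DnaDrawing a, DnaDrawing b)
    {
        int cut = Rnd.Next(Math.Min(a.Polygons.Count, b.Polygons.Count));
        DnaDrawing child = new DnaDrawing();
        for (int i = 0; i < cut; i++) child.Polygons.Add(a.Polygons[i].Clone());
        for (int i = cut; i < b.Polygons.Count; i++) child.Polygons.Add(b.Polygons[i].Clone());
        return child;
    }
}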
Applying genetic recombination to a set of data that is not a tree is useless.
There is no such thing as a face-rendering branch/gene in this case, so there is no chance for the recombination to e.g. combine a good background with a good face, simply because there is no such structure here.
All the polygons in a picture are dependent on each other due to transparency, so taking some from another picture will never be better than simply inserting a new polygon at random.
Also, the fitness function in this case is very heavy, thus it will be a major performance hit to evaluate a whole pool of individuals and just kill off the weakest ones.
And don’t be fooled by a high generation count.
If you have 100 generations and 10 individuals in each generation, then there will be 1000 evaluations.
If you have 500 generations and 2 individuals, there will also be 1000 evaluations.
What takes time here is the evaluation, and you have the same number of evals in both cases.
Yes, Roger Alsing’s post seems spot on. I’ve tried using a population of 1000 drawings and randomly picking three. The two better drawings were allowed to “mate” by mixing their polygons (“genes”), and the child would replace the remaining drawing, so the population count remained constant.
This algorithm did not converge well – Mona Lisa never became recognizable.
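The scheme was essentially this sketch (Fitness is the error measure, and Crossover mixes two polygon lists like the hypothetical one in the GA sketch a few comments up):

// Pick three drawings at random, let the two fitter ones "mate",
// and have the child replace the worst, keeping the population constant.
static void TournamentStep(DnaDrawing[] pop, Random rnd)
{
    int i = rnd.Next(pop.Length), j = rnd.Next(pop.Length), k = rnd.Next(pop.Length);
    // Sort the three indices so pop[k] ends up as the worst (highest error).
    if (pop[i].Fitness > pop[j].Fitness) { int t = i; i = j; j = t; }
    if (pop[j].Fitness > pop[k].Fitness) { int t = j; j = k; k = t; }
    if (pop[i].Fitness > pop[j].Fitness) { int t = i; i = j; j = t; }
    pop[k] = Crossover(pop[i], pop[j]); // child replaces the loser
}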
@ Last 3 Posts
I have a generational approach which does converge (the picture becomes recognizable) at a reasonable rate; however, as Roger said, a lower generation count does not mean fewer evaluations.
If you compared the number of generations needed for two similar results in my implementation and Roger’s, mine would be thousands lower… but it isn’t a valid comparison, because in Roger’s the number of candidates considered generally equals the number of generations, whereas in mine there can be thousands of candidates per generation.
If you compare the number of candidates considered in mine versus Roger’s, Roger’s will be tens of thousands lower than mine.
Doing this using a genetic evolutionary approach is possible, but it is NOT more efficient. It is not even close to as efficient.
People assume that because you breed two healthy parents the outcome will be something even healthier, or at least AS healthy… this is not the case in a problem like this, because you are talking about overlapping polygons of varying transparency and color. Regardless of whether the genes are polygons or points on a polygon, by doing genetic crossover between two healthy parents you are more likely to come up with something very unhealthy than something healthy.
Once again, it is possible, but from my practical experiments I can say with some certainty that crossover between healthy parents will not make Roger’s algorithm faster, and in fact, it will likely make it slower.
Would love to be proved wrong though.
Hi Roger,
I’ve found another way to help speed up the generation loop. I’ve implemented Dan’s changes to the pixel fitness calculator and extended that principle.
Very simple really: in DnaDrawing.cs, change the foreach loops to for loops and wrap them in an unchecked block.
For example, DnaDrawing’s Clone method:
foreach (DnaPolygon polygon in Polygons)
    drawing.Polygons.Add(polygon.Clone());
Becomes:
unchecked
{
    for (int i = 0; i < Polygons.Count; i++)
    {
        drawing.Polygons.Add(Polygons[i].Clone());
    }
}
I’ve done this for all the foreach loops in the code, like those in DnaPolygon and DnaPoint, and it makes a noticeable difference. I added a quick timing loop for 1000 generations: before Dan’s code I got 10.01 seconds; with Dan’s change, 2 seconds; and with these further changes, it runs in around 1.8 seconds for 1000 generations.
Thought you might be interested.
Any chance we can get a copy of the spline version? It would be fun to play with.
This is awesome. I really wish I could see the code, and better yet, that you happened to release this as OSS. It would be amazing to see what some people could do with it, and I think experiments like this could help me learn and understand C# and other languages better.
Hello Roger, inspired by your EvoLisa I wrote a very similar C# application that uses only triangles and enables recombination.
http://hourlyapps.blogspot.com/2009/07/adding-recombination-to-image.html
I’ve got a few ideas (love the program, btw):
1. Sexual reproduction rather than parent-child selection. This would require more RAM, but if you had separate lines running on different processors, the ones with lower polygon counts would evolve faster, as there would be less processing per generation.
2. Having the source image initially rendered extremely pixellated. I’ve noticed low-resolution images render much, much faster. The source image could be rendered slightly less pixellated every time the fitness hits a certain threshold. This would drastically speed up early evolution. The bit depth could possibly start low as well. (See the sketch after this list.)
3. An option to have the mutation size reduce as fitness increases. A large mutation early on would produce a larger average fitness increase than a small one, whereas later on it would have a very high probability of reducing fitness.
4. An option to export as a vector image.
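A rough sketch of idea 2, with made-up names and thresholds (ProgressiveTarget is not part of EvoLisa):

using System.Drawing;
using System.Drawing.Drawing2D;

class ProgressiveTarget
{
    readonly Bitmap source;
    int scale = 16; // start at 1/16th resolution

    public ProgressiveTarget(Bitmap source) { this.source = source; }

    // The downscaled target the fitness function should currently compare against.
    public Bitmap Current
    {
        get
        {
            Bitmap small = new Bitmap(source.Width / scale, source.Height / scale);
            using (Graphics g = Graphics.FromImage(small))
            {
                g.InterpolationMode = InterpolationMode.NearestNeighbor;
                g.DrawImage(source, 0, 0, small.Width, small.Height);
            }
            return small;
        }
    }

    // Call this whenever the error drops below a threshold; each call
    // doubles the working resolution until full size is reached.
    public void IncreaseResolution()
    {
        if (scale > 1) scale /= 2;
    }
}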
Where can we find the binaries of the new version with these improvements?