After some horrible failures trying to cluster EvoLisa, I finally succeeded!
(And some credit goes out to Fredrik Hendeberg for this solution too)
All of my other attempts failed because of the overhead in the communication.
I was trying to synchronize things too often, simply because I didn't see how it would be possible to solve the original Mona Lisa problem without doing so.
The key to successful clustering is to communicate as little as possible and run jobs in isolation as much as possible.
And this is what I’ve done now.
Each node gets “50 / NodeCount” polygons.
If I have 2 nodes, they get 25 each.
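As a quick sketch, the split might look like this (a hypothetical helper: the post only states “50 / NodeCount”, so letting the first nodes absorb any remainder is my assumption):

```python
def polygons_per_node(total_polygons, node_count):
    """Split the polygon budget evenly across nodes.

    Remainder handling is an assumption; the post only gives
    the even "50 / NodeCount" case.
    """
    base, rem = divmod(total_polygons, node_count)
    return [base + (1 if i < rem else 0) for i in range(node_count)]

print(polygons_per_node(50, 2))  # -> [25, 25]
print(polygons_per_node(50, 3))  # -> [17, 17, 16]
```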
So how do I get those two nodes to paint one and the same image with their 25 polygons each?
The solution is quite simple:
Each node runs in isolation for, say, 1000 generations.
Then they pass their own DNA over to all the other nodes.
The receiving nodes then construct a “foreground image” and a “background image” based on the DNA from the sending nodes, depending on their ordinal/rank/index in the cluster.
Once that process is complete, each node goes back to running another 1000 generations, but now using the static foreground and background while rendering.
So each node is sort of rendering the entire image, but where all the polygons except for its own are static for X generations.
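A minimal sketch of that layering, assuming polygons are drawn in cluster-rank order (all names here are hypothetical stand-ins, not the actual EvoLisa code):

```python
def build_static_layers(all_dna, rank):
    """all_dna[i] is node i's polygon list; lower ranks draw first.

    The "background" is every polygon from lower-ranked nodes, the
    "foreground" every polygon from higher-ranked nodes. Both stay
    frozen until the next DNA exchange.
    """
    background = [p for i in range(rank) for p in all_dna[i]]
    foreground = [p for i in range(rank + 1, len(all_dna)) for p in all_dna[i]]
    return background, foreground

def composite(background, own_polygons, foreground):
    """Draw order for a node's render: background, own polygons, foreground."""
    return background + own_polygons + foreground

# Two nodes with a couple of labeled polygons each:
all_dna = [["n0_p0", "n0_p1"], ["n1_p0", "n1_p1"]]
bg, fg = build_static_layers(all_dna, rank=0)
print(composite(bg, all_dna[0], fg))  # node 0's view of the full image
```

Only a node's own polygon list ever mutates; the two static layers are just pre-rendered snapshots of everyone else's DNA.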
This works because the longer the simulation runs, the fewer changes there are to all the polygons.
So each node can fine-tune its own polygons together with the static foreground and background, since we know that the foreground and background will not change that much until the next sync.
There will be a bit of a ping-pong effect in the beginning before all of the polygons on all the nodes start to make sense together.
But that doesn’t matter, because it only lasts for a short period before everything starts to stabilize.
So we are actually co-evolving one organism per node, where all of the organisms need to harmonize with each other in order to reach a good fitness level.
There is a sort of symbiosis going on now.
As a result of this, the new algorithm is no longer “greedy” and can in some cases end up with a worse fitness level than the previous X generations had, but the positive effect by far outweighs this.
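The co-evolution dynamic, including the early ping-pong and the eventual stabilization, can be illustrated with a toy model using numbers instead of polygons. Everything here is an illustrative analogy of my own, not EvoLisa code: each "node" owns one value, the shared "image" is their sum, and fitness is the distance to a target. Each round, every node hill-climbs its own value against a frozen snapshot of the others (the stand-in for the static foreground/background), then the snapshots are re-synchronized; mutation size shrinks with the error, mirroring the observation that changes get smaller as the run progresses:

```python
import random

random.seed(42)
TARGET = 100.0           # the "image" the summed layers should match
values = [0.0, 0.0]      # one owned value per node
GENERATIONS_PER_SYNC = 50
ROUNDS = 20

def error(total):
    """Fitness stand-in: distance of the combined image from the target."""
    return abs(TARGET - total)

for _ in range(ROUNDS):
    snapshot = list(values)                   # the frozen "static layers"
    for rank in range(len(values)):
        others = sum(snapshot) - snapshot[rank]
        # Smaller mutations as fitness improves, so the ping-pong damps out:
        step = 0.03 * error(others + values[rank])
        for _ in range(GENERATIONS_PER_SYNC):
            candidate = values[rank] + random.uniform(-step, step)
            if error(others + candidate) < error(others + values[rank]):
                values[rank] = candidate      # greedy within a round

print(round(sum(values), 3))  # ends up close to TARGET
```

Each node over- or under-shoots against the stale snapshot at first, but because the mutations shrink as the combined fitness improves, the organisms settle into a sum that harmonizes with the target.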
The effect was quite dramatic on my 2 core laptop.
With a single core, it takes about 12-20 minutes to reach the fitness level that I use for my tests (the same fitness level as the last image in the original Mona Lisa evolution series).
With two cores, I reach the same fitness in 3-5 minutes.
That’s a pretty nice speedup.
The performance gain is actually more than 100% (12-20 minutes down to 3-5 is roughly a 4x speedup on two cores), and my best guess is that the OS is eating some of the power of the first core, the one that is used when running in single-core mode.
Now I just need to find a multi core computer to see how it works on more than two cores.
Well done Roger!
Cool, so crossover works well! Should be exploitable on a single core as well, then. :-)
Well, this is not really crossover of the DNA, because the organisms never merge DNA, ever.
I simply create a foreground and background image that is static for X generations, based on the DNA from the other nodes in the cluster.
So it’s more like a bunch of organisms that together create an image.
Will you be publishing the code & binaries for this too?
@Ben : I think Roger is a perfectionist, so we should wait a little longer for a binary version :p
It should be grandioooosa ! ^^
Happy new year
Have you considered doing this in Erlang? If you did, we could have a LOT of fun with this.
One option: write an Erlang Mona Lisa server regulator; people log in with their Erlang hosts, and each person who ‘logs in’ gets to run a part of the simulation. (You could of course do this in Java and we could simply head to your website, but I think my idea is better, seeing as Erlang has a lot better SMP support recently and it’s always been good at concurrency =-P hehe)
This would be a nice addition to the system. Your computer does the hosting, the log-in and log-out checks, the merging, and the crossover communication work (or we could P2P it, whatever), and we do the brunt of the processing. I think we could get a VERY good version of the Mona Lisa with a high-resolution image instead of the smaller version we have now.
Feel free to email me about this; I can pass you some tips on how to set up that type of internet-distributed Beowulf clustering. You have the basic cluster concept down, and that’s the hard part.
This is crying out to be run on Amazon Web Services EC2 servers: simply fire up a bunch of servers at 10c per server per hour and you could generate final images in a few seconds. Only makes sense as a service for lots of submitted pictures, I suppose…