Genetic Programming: Evolution of Mona Lisa

Added FAQ here:
Added Gallery here:

This weekend I decided to play around a bit with genetic programming and put evolution to the test, the test of fine art :-)

I created a small program that keeps a string of DNA for polygon rendering.
The procedure of the program is quite simple:

  0. Set up a random DNA string (application start)
  1. Copy the current DNA sequence and mutate it slightly
  2. Use the new DNA to render polygons onto a canvas
  3. Compare the canvas to the source image
  4. If the new painting looks more like the source image than the previous
    painting did, then overwrite the current DNA with the new DNA
  5. Repeat from 1
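
The steps above can be sketched in a few lines. The actual app is C#/.NET, so treat this Python version as a minimal illustration only: a flat list of numbers stands in for the rendered canvas, and `mutate` is a stand-in for the real polygon mutator.

```python
import random

def fitness(candidate, target):
    # Step 3: compare the "canvas" to the source image
    # (here just the sum of squared differences over a flat value list)
    return sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(dna, rate=0.1, amount=0.3):
    # Step 1: copy the current DNA sequence and mutate it slightly
    child = list(dna)
    for i in range(len(child)):
        if random.random() < rate:
            child[i] += random.uniform(-amount, amount)
    return child

def evolve(target, generations=5000, seed=1):
    random.seed(seed)
    dna = [random.random() for _ in target]   # step 0: random DNA string
    best = fitness(dna, target)
    for _ in range(generations):
        child = mutate(dna)                   # steps 1-2
        score = fitness(child, target)        # step 3
        if score < best:                      # step 4: keep only improvements
            dna, best = child, score
    return dna, best                          # step 5 is the loop itself
```

Swap in a real image comparison for `fitness` and a polygon mutator for `mutate`, and this loop is essentially the whole program.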

Now to the interesting part:

Could you paint a replica of the Mona Lisa using only 50 semi-transparent polygons?

That is the challenge I decided to put to my application.

The image below is the result of that test:
The number below each image is the number of generations it took to reach that specific painting.


So what do you think?



  1. Very cool, I have been thinking about doing the same thing actually (though using a sort of line creature rather than polygons).

    What algorithm / library did you use for image comparison?

    I have been pretty much obsessed with evolution stuff this past 1-2 years, read all Dawkins’s books. I can really recommend The Selfish Gene, The Blind Watchmaker, and The Ancestor’s Tale, plus Genome by Matt Ridley was great; really interesting how DNA really can be viewed as a hard drive littered with the remains of neutralized viruses and other self-serving copying processes.

    If I ever did PhD research it would be in evolution simulation :)

  2. …interesting. If the representation of the polygons was small enough, you’ve probably got a new compression algorithm.

  3. Very nice! That’s a vectorizer and image compressor in one, with a scalable compression level too: just add another 20 polygons.

    I guess you have to trade time, though :-)

    Same request: source would be nice – a toy to play with on Christmas :-)

  4. Wow, your phenotype–genotype mapping seems particularly effective for this sort of thing!

    Dare I ask, what is it?

  5. I like this! Any chance of seeing the evolution as a video?

    One thing that puzzles me, why aren’t there 50 polygons in the initial images?

  6. Pretty impressive. However, you said the DNA sequence was randomly created… why is the first screen black, instead of showing randomly placed triangles?

  7. It’s not really a genetic algorithm that produced the picture, but a stochastic hill-climber. No recombination is done and the population size is one, so the genotype-phenotype mapping is not really important. Also, it seems step 0 was not executed, as others have already pointed out.

    And finally, looking at the number of steps it did take to produce that drawing, the convergence speed is not very impressive.

  8. How long does it take to generate the final result?

    Does the number of points in the polygons change over time? If it does, how do you take that into account in your DNA sequence?

  9. That’s very cool. How did you do the selection? Was it automated somehow, or did you manually compare each image to decide whether to keep/discard it?

  10. I am curious about your fitness function. How, for example, did you decide that 466 looks more like the Mona Lisa than 372?

  11. I’ve implemented genetic algorithms before but only for optimization. Neat application of it!
    Unfortunately, it seems like it takes an exponential number of iterations to get finer and finer details.

  12. Seems to me the real magic is in two words: “mutate slightly.” That is the part of the code I would like to see :)
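
For what it’s worth, a hypothetical “mutate slightly” could look like the sketch below (Python, not the post’s actual C# code; the DNA layout here is an assumption): pick one polygon at random and nudge either a single vertex or a single colour channel by a small, clamped amount.

```python
import random

# Hypothetical DNA layout: a list of polygons, each a dict with
# 'points' [(x, y), ...] and 'rgba' (r, g, b, a) in 0..255.
def mutate_slightly(dna, canvas=(200, 200), drift=10):
    child = [dict(p, points=list(p['points'])) for p in dna]  # copy; parent untouched
    poly = random.choice(child)
    if random.random() < 0.5:
        # nudge one vertex by a few pixels, clamped to the canvas
        i = random.randrange(len(poly['points']))
        x, y = poly['points'][i]
        poly['points'][i] = (min(max(x + random.randint(-drift, drift), 0), canvas[0]),
                             min(max(y + random.randint(-drift, drift), 0), canvas[1]))
    else:
        # nudge one colour channel, clamped to 0..255
        c = list(poly['rgba'])
        i = random.randrange(4)
        c[i] = min(max(c[i] + random.randint(-15, 15), 0), 255)
        poly['rgba'] = tuple(c)
    return child
```

The clamping matters: without it, vertices and colours drift out of range and the mutations stop being “slight”.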

  13. Brilliant idea!

    As per the people asking about the algorithm, I’m curious about it too. I imagine the genes have 50 phenotypes, each one being 3 points and 4 colour components (RGBA). To show the ‘construction’ of the image, you just draw all of the phenotypes in (a random) order?

    The fitness function sounds like it’d be fun. Objective image comparison is hard to create and might fall apart at higher image sizes. :) Are you using line tracing or something like that?

  14. Some more information would be cool. How long did it run? How did you compare the picture to the actual Mona Lisa?
    You had a population of one; I feel that that might be best in this case, but I do not see why exactly.

  15. Wow! I am impressed. Of course you left out a number of important details in your description. For starters, you state that in step 3 you “Compare the canvas to the source image”. Does that mean a simple pixel-by-pixel comparison, luminance averaging within a fixed or titrating grid, facial recognition, or what? As always, the fitness function(s) are at the heart of a GA/GP, but you give no clue.

    Also, you took 900K generations to get there, but you never mentioned your population size, rate of crossover, mutation, etc. Basically, I’d love to know as much about the algorithms you used as you’d be willing to share. A look at the code would be best, although I agree with Andy Dwelly that if the storage size was significantly less than MPEG-4, etc. (and the calculation effort was small enough!) you might have an interesting and marketable compression algorithm. In that case I’d work hard to get it patented and to market ASAP.

    Again, great work and thanks for sharing….

  16. One explanation could be, that one of the mutation possibilities is adding a new polygon to the generating DNA string.

  17. @Luis: probably because it starts off with DNA being initialized to a blank slate (even though it’s not stated in the steps) and then generates the random polygons. So in this case, the black screen was a closer match than random polygons.

  18. This is seriously impressive. How long did it take to reach that final image? What algorithm did you use to compare the images?

    Is it really Genetic Programming though? From your description it sounds more like some other kind of evolutionary algorithm. It would be interesting to run it again with a larger (>1) population and cross-over. I’d imagine it would converge much quicker.

  19. I would say it is using Genetic Algorithm and if the algorithm is correct it sounds like single individual elitist GA. The easiest fitness function would be to do a per-pixel RGB comparison and optimising for lowest possible fitness value.

    It might fall somewhat under the GP category if the number of points in the polygons is variable. The easiest approach would probably be to use triangles with each triangle represented as 3 positions and RGBA-components.

    Going to try this out using Covariance Matrix Adaptation Evolution Strategies once I have more free time. This might actually result in a fast enough optimisation to make this a viable method for vectorising and compressing images with high complexity.
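
The per-pixel RGB comparison suggested in the comment above is easy to sketch (a hypothetical Python illustration; the post itself never specifies its fitness function):

```python
def pixel_fitness(canvas, source):
    # Per-pixel RGB comparison: lower is better, 0 is a perfect match.
    # Both images are equal-length lists of (r, g, b) tuples.
    total = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(canvas, source):
        total += (r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2
    return total
```

The hill climber then simply keeps whichever DNA yields the lower total.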

  20. @Dan: The algorithm as described sounds like a simple form of simulated annealing, but without any probability of accepting worse matches and with no cooling schedule. Another algorithm that could work is MCMC (Markov chain Monte Carlo), which is usually used with Bayesian statistics and probabilities, but can easily be applied as a generic optimization algorithm.
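
The simulated-annealing acceptance rule the comment above alludes to (the Metropolis criterion) can be sketched like this in Python; the blog’s keep-only-improvements rule is the zero-temperature special case:

```python
import math, random

def accept(old_cost, new_cost, temperature):
    # Metropolis rule: always keep an improvement; keep a worse candidate
    # with probability exp(-(new - old) / T). The blog's loop is the T -> 0
    # special case, where only improvements ever survive.
    if new_cost <= old_cost:
        return True
    if temperature <= 0:
        return False
    return random.random() < math.exp(-(new_cost - old_cost) / temperature)
```

Lowering `temperature` over time (“cooling”) is what lets annealing escape local optima early on, yet settle greedily later.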

  21. Compression through GA search is not a new idea. I have tried something similar to this myself a while ago.
    This implementation works better because of the alpha-blended polygons though.

  22. The comparison function could be as simple as the quadratic error, sum(square(pxy – qxy)); they used it in fractal image compression. (Note the possible applications of this to fractal image compression? ;))

    It’s interesting how one can get close to the target in the presence of a fitness function. There’s, however, a drawback to the algorithm that’s not immediately apparent. The approach is entirely greedy and with this limited population of (two) images at each generation you won’t be able to take the image through depression points. You don’t notice this, because your tools (polygons) are coarse grained and hence the goal is not a very fine picture to begin with.

    In any case, impressive.

  23. Dude, any chance of posting this up on youtube as a proper video/animation?

    Not sure about how effective this is as a genetic algorithm, but you could have a real gem of a video effect here!

  24. Nice! I would be interested in more detail:

    What limit did you put on the maximum degree for each poly? Did you consider limiting the total number of vertices rather than the number of polygons? How does varying the poly limit affect the final image? (ie what is the limit of convergence for different poly counts?)

    What did your mutation function look like – shift-vertex, add-vertex, tweak-color, tweak-alpha, what else? What relative-occurrence-probabilities did you use?

    Do you use edge-crossing-detection to ‘untwist’ the poly, or just ignore cross-overs? Do you limit new-vertex locations, ie midpoint-offset, or just use random-in-canvas? Did you consider a vertex-pruning mutator? Have you got any statistics on result-acceptance rates for different mutators? How do these rates change at different stages of convergence?

    What do you do when you hit the max number of polys – drop the least-fit, or just stop creating new ones? If drop, how did you determine least-fit?

    What color space did you use, RGB/LSV/? What measure did you use for image-comparison (sum of absolute pixel-differences?)? Have you considered using variable-resolution comparison to speed things up (ie start with a lo-res image and increase resolution as fitness increases)?

    Is it possible to calculate per-poly best-fit alphas as part of the image comparison? If so, can we do the same for each color channel?

    It sounds like you used population-1, ie random-walk search. Any idea what a valid crossover operator would look like – something like polygon-swapping with alpha adjustment?

    Have you tried using smart/directed mutators – ie, pick one poly to mutate, cache the image-and-difference for everything-but, then try a set of mutations on the poly and use the one which increases the fitness the most?

    Have you investigated how changing some of the parameters affects convergence speed in terms of either generations or overall computation time?

    For fun – what about using this alg to morph from one image to another?

  25. The representation of triangles needs 3 vertices, each having X and Y
    coordinates, plus the color (R,G,B), so 9 numbers per triangle, times 50,
    i.e. 450 numbers. This could be 1800 bytes if they all were single-precision
    floats, or we could do integer arithmetic with, say, 16-bit X/Y coordinates
    and 24-bit color, i.e. 15 bytes per triangle, or 750 bytes total.
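
That packing is easy to check with a sketch using Python’s `struct` module (the layout here, six 16-bit coordinates plus three colour bytes, is just the commenter’s proposed encoding, not the app’s actual DNA format):

```python
import struct

# One triangle: six 16-bit unsigned coordinates plus three colour bytes,
# i.e. 6*2 + 3 = 15 bytes; 50 triangles come to 750 bytes.
def pack_triangle(p1, p2, p3, rgb):
    return struct.pack('<6H3B', *p1, *p2, *p3, *rgb)

blob = b''.join(pack_triangle((0, 0), (100, 0), (50, 80), (200, 120, 90))
                for _ in range(50))
```

Adding an alpha byte per triangle (the app’s polygons are semi-transparent) would bring it to 16 bytes each, 800 bytes total.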

  26. I would be curious to see this where every corner of a triangle has a different color and interpolate the polygon colors to create a gradient. You might be able to get a more faithful representation that way and possibly be able to reduce the amount of polygons.
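
The per-vertex gradient idea above amounts to barycentric interpolation inside each triangle; a Python sketch (not part of the original app):

```python
# Per-vertex colours blended across a triangle via barycentric weights.
# p, a, b, c are (x, y) points; colours are (r, g, b) tuples.
def barycentric_color(p, a, b, c, col_a, col_b, col_c):
    (x, y), (xa, ya), (xb, yb), (xc, yc) = p, a, b, c
    det = (yb - yc) * (xa - xc) + (xc - xb) * (ya - yc)
    wa = ((yb - yc) * (x - xc) + (xc - xb) * (y - yc)) / det
    wb = ((yc - ya) * (x - xc) + (xa - xc) * (y - yc)) / det
    wc = 1.0 - wa - wb
    return tuple(wa * u + wb * v + wc * w
                 for u, v, w in zip(col_a, col_b, col_c))
```

The trade-off: each triangle now carries three colours instead of one, so the DNA gets larger even if fewer polygons are needed.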

  27. To silveira: I do not have the code right here but I can sketch the approach: the gene size was variable, encoding a variable number of rectangles with a colour. The approach described in this page has a fixed number of arbitrary polygons, and the colour combination function is some type of blending. I think the variable number of polygons is a better feature, but this experiment shows that having general polygons instead of rectangles and especially blending colours (which makes the fitness function change more smoothly) is a good improvement. I wasn’t trying to solve exactly the same problem though.

    You can do a google search on ‘genetic algorithms image compression’ and you’ll see a few hits on the subject.

  28. Having now made a movie out of it myself, it is a lot easier to see what is going on.

    I think it would be interesting to place some limitations on the evolution:

    1) each polygon is limited to up to n-sides, or
    2) the total number of vertices is limited to some number.

    Or both.

    Also, I too would like to know more about your fitness algorithm. I would’ve thought that some regions would’ve gained fidelity (detail) more quickly than others, but it seems — from the samples you’ve provided — that detail sort of emerges together.

  29. Hmmmm…. I don’t think this is a true test of Evolution, though… The algorithm always had access to the original “Design” of Mona Lisa to compare itself with. The process and theory of evolution states that there is no Designer and that things happened randomly (uncoordinated). So, for me, evolution requires MORE faith than believing in God or an intelligent Designer.

  30. OK OK, I will try to make a movie out of this.

    I’ll even make a new run and output SVG or some other vector format out of the images :-)

    Also, the polygons can have from 3 up to x points, so they are not only triangles but can be big complex polygons.
    (Saw a few comments about triangles in here)

    And I’ll blog about the various aspects on this topic as soon as the kids go to bed.

  31. Could you post the code? It would be interesting to see it. Especially the code used to compare to the original and estimate the match.

    My understanding is that that is where the magic really happens, and it is the most difficult thing to design.

  32. About that first black image: I would guess it’s the first generation, and selecting a strand with all initial values at zero is just as good as a random starting strand.

    Basically you need something to compare the next generation with and a black screen works? =)

  33. This is thoroughly awesome. It really is.
    Just curious – how long did the computation take ?

  34. I wouldn’t call this a genetic algorithm because you are not keeping multiple candidates at each iteration nor breeding them (that is, swapping DNA from multiple candidates). This is simply a straightforward hill-climbing algorithm. Not that the choice of method detracts from the results.

    If it took about 1M iterations to get to your result with hill climbing, I would expect you could do much much better with a genetic algorithm provided you were smart about how DNA got swapped. If it was swapped spatially, somehow, then you could get a result where a candidate which was good at the face bred with a candidate which was good at the background and you got a candidate which was good at both.

    Which isn’t to say that such an approach might not take more computation over all, keeping around and breeding multiple candidates takes time.
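
The spatial DNA-swapping idea above could be sketched like this (entirely hypothetical; the post’s algorithm has no crossover at all). Each polygon goes to the child from whichever parent “owns” its side of a vertical split line:

```python
# Hypothetical spatial crossover: each polygon is (points, rgba); the child
# takes polygons whose centroid lies left of split_x from one parent and the
# rest from the other, so a good face half can meet a good background half.
def centroid_x(polygon):
    points, _rgba = polygon
    return sum(x for x, _y in points) / len(points)

def spatial_crossover(parent_a, parent_b, split_x):
    child = [p for p in parent_a if centroid_x(p) < split_x]
    child += [p for p in parent_b if centroid_x(p) >= split_x]
    return child
```

A real implementation would also vary the split line (or use regions) per mating, otherwise the two halves never mix internally.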

  35. Hey, instead of so narrowly directing the evolution, you could do something more like real life, by coupling a facial recognition program to the fitness testing. See what kinds of ugly mugshots you get coming out!

  36. I will release the source tomorrow, I just have to clean it up so I don’t have to feel too ashamed of the actual code quality.

    (Threading it so that it won’t lock up your computer, etc.)

    The app is written in C# 3 using the .NET Framework 3.5,
    so those are the requirements for using/compiling it.

    Just so you know :-)

    I’m off to bed.

  37. Source or more information, please? (In particular, I’m interested in how you compared particular states with the actual Mona Lisa.)

    And, like others said, this isn’t really genetic programming (as there is only a population of 1), is it? More like a genetic algorithm?

  38. It is really impressive, but I would also like to know more details about the algorithm. I think it would also be interesting to let several runs of the same test compete, to find a better result with the same number of polygons (or a similar one with fewer).
    I also vote for a canvas version here ;)
    And maybe we can all try to optimise the algorithm.

  39. Interesting, but it isn’t evolution. You just showed intelligent design. You had an end result that you compared your program to. So you had an ultimate goal in mind. You didn’t really do it “randomly” though. So thanks for showing designs have designers. :)

  40. Incredible! As a programmer of genetic algorithms myself I must tip my hat to your ingenuity. For even I, with no 3D programming experience, could write something to combine 50 semi-transparent polygons; however, the ingenuity of thought, the stroke of brilliance to apply such a seemingly impossible task to such simple constraints: that is true genius.

    *tips hat*

  41. Just to echo a lot of other people, I’d really love to see the source. I’ve been fascinated by genetic algorithms ever since I first heard of them eight years ago, and I’ve long wanted to experiment with them myself, but the initial hurdle of coming up with a GA framework on my own has proved to be too much. If you could provide the source for this project (say, under the GNU GPL), that’d be a great starting point for me and many others. I’d start by evolving a different image, and then from there, who knows what I’d do?

    Thanks a lot.

  42. While I find this very interesting (and, frankly, cool), I hardly think this could be used for a new image compression algorithm — at least not in any practical manner.

    First, it’s likely to be hugely costly in terms of memory and computation in order to do so (almost 1M generations, not to mention an unknown population size). It’s probably also quite likely that the polygons evolved in the process are non-trivial. What you’re really looking at is the building of an SVG. Therefore, it’s possible that this could be used either by itself or perhaps in tandem with an existing algorithm to convert raster images to vector representations.

    I’m skeptical as to whether there’s any commercial viability to such code. If I’m proven wrong, then kudos! But not every project is meant to be commercialized.

    Good work!

  43. I’m wondering if the 50 polygons are particularly suited to rendering a face. Might 50 polygons have a harder time rendering “The Scream”? Would 50 polygons be really good at rendering a Picasso?

  44. For all those people who want the code, why don’t you just follow the procedure that’s explained at the top of the page?

    If you can’t do that by yourself, then what the heck are you going to do with code except get confused?

  45. Sorry, but I think you are not using a genetic algorithm at all, like Johannes and Johan mentioned. Please change the title and description to reflect this, since you are misleading other people.

  46. For MrE

    Why don’t you write the code based on the procedure and then explain it to us such that we won’t get confused with your code

  47. Are you sure this is GP or GA? Unless some of your polygons are being drawn off the canvas, I’m not quite following. Since the image is slowly converging by *adding* polygons and not mutating their positions, it looks like hill climbing to me.

  48. Damn. I did this same thing a year and half ago. Didn’t think there would be a Wired article about it.

  49. Very cool. I agree with previous posters though: this is a hill climber, not an EA.
    I’m guessing a hill climber is likely to work well in the given fitness landscape. It depends on the fitness function of course, but the landscape would likely have no local maxima, making a hill climber more effective.

    Still: Seeing Mona Lisa slowly appearing like that is really cool.

  50. It is interesting to note that 10^6 generations is equivalent to, for example, 57×10^3 years of evolution of Drosophila. So: how complex a structure may be evolved in such a short time…

  51. Kakaz: Because each iteration happens immediately after the other; no need to wait for the old generation to “reproduce”.

  52. Every mention of evolution shouldn’t turn into a god vs. science debate, but I’m going to fuel the fire anyway. To those saying that this is no proof of evolution: true, the algorithm did have access to the ‘finished product’. However, the final form provided to the algorithm (AFAIK) is a raster image, not the polygons from which it is formed by the algorithm. In the same way, the final form dictated by evolution is survival, but the method to obtain that isn’t completely specified.

  53. Pingback: Evolution Proof
  54. Yes I agree it’s not a genetic algorithm, but I think the convergence rate is fine. Why is that an issue? He’s come up with a good, 50-polygon approximation with a “population size” of 1 and less than a million iterations. This is a reasonable amount of work compared to the true GAs I’ve seen with populations of thousands and needing thousands of iterations. But, more to the point, why optimize a one-shot calculation? That would be premature …

    To the ID straw-clutchers, this is not evidence that mankind evolved through ID. Evolution is not purely random; it occurs within an epigenetic landscape which includes (at least) the fixed laws of physics and chemistry. This calculation uses an epigenetic landscape of the Mona Lisa.

    A good example to demonstrate true evolution to the non-believers would be to have a million people rate the output of the algorithm and evolve it using their opinions. Then the output would still be “designed”, but by a million people rather than one all-powerful “designer”.

    Now create a statistical approximation of the ratings you got, and use that instead of people to rate the pictures. I bet you’d get a picture out which was aesthetically pleasing but had no designer. That’d be a nail in the coffin for those who believe ID because they don’t understand evolution :)

  55. How about creating three sets of polygons, each set for one channel in the YCbCr color space? They could still be alpha-blended at this stage.
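
For reference, the standard RGB-to-YCbCr conversion such a per-channel scheme could evolve against (ITU-R BT.601, full range); a Python sketch, not anything from the post:

```python
def rgb_to_ycbcr(r, g, b):
    # ITU-R BT.601 full-range conversion; a separate polygon set would then
    # be evolved against each of the three resulting channels.
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr
```

An appealing side effect: the chroma channels (Cb, Cr) could get by with fewer polygons than luma, much as JPEG subsamples chroma.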

  56. Pingback: appler's me2DAY
  57. Wow, this is certainly impressive. Admittedly, I personally won’t ever need something like that, but then again, screw practical use! It is just cool ;).
    I too can’t wait to have a look at the source (even though I’ll have a hard time running it w/o Windows), but I doubt it will take long before this algorithm is ported to a few other languages :)

    Well done!

  58. It doesn’t really have much to do with nature’s evolution when you set up a specific goal, and discard everything that doesn’t take you one step closer to it. This is a misconception that many people on both sides seem to have.

  59. Very impressive. Do you use 50 polygons at the start? It seems as though the algorithm adds them progressively.

  60. An etch-a-sketch picture is ONE polygon with a huge number of sides. I’ve seen TV interviews with artists who can produce superb quality pictures just using this child’s toy, so it’s quite surprising that this code needs 50x as many polygons for the Mona Lisa.

    As previous comments have said, you should try limiting the number of sides or vertices to make this impressive.

  61. “I have been pretty much obsessed with evolution stuff this past 1-2 years, read all Dawkins’s books. I can really recommend The Selfish Gene, The Blind Watchmaker, and The Ancestor’s Tale, plus Genome by Matt Ridley was great; really interesting how DNA really can be viewed as a hard drive littered with the remains of neutralized viruses and other self-serving copying processes.”

    Problem with these two guys is that they write nicely, but have very little actual knowledge of what they’re talking about. Neither has any background in genetics – let alone modern genetics (genomics/postgenomics) – that goes beyond undergrad level. Ridley is a passable zoologist, while Dawkins just writes nicely, but is not even a biologist at all and has fscked up the meaning of “gene” thoroughly and beyond redemption, thank you very much NOT: when Dawkins says “gene”, it may be anything between a single allele and a multigene complex like the bacterial flagellum and its chemical “rotary motor”.

    If they were correct, how come life ever progressed beyond protists? The price you pay for multicellularism is death, and that’s where it is hard to argue for “selfishness”. Sexual reproduction, as far as can be told, evolved not coincident with terminal mortality.

    The present example brings the point across rather nicely: no single “DNA sequence” benefits from being too “selfish”. They have to carefully tread the middle ground between resource-hogging but replicating abundantly, and evolving but dying out for lack of replication.

    Try Nature Reviews Genetics – there, you’ll find the real hardcore stuff. Dawkins and Ridley are, from a philosophical perspective, rather annoying logical positivists, with all weaknesses for their theories this implies: they only tell you what fits. That there are counterexamples galore, they’ll never tell you.

  62. Hm. I like that.


    Anthropomorphized “goal”: reproduction and survival (micro and macro). Fitness algorithm: reproduction and survival. Generation production: reproducing and surviving, or the opposite.

    How “much” does it have to do with “nature’s evolution” before it has *something* to do with it? And what’s your point, anyway? It’s still G(evolutionary)P, which is all that’s claimed.

  63. “It doesn’t really have much to do with nature’s evolution when you set up a specific goal, and discard everything that doesn’t take you one step closer to it. This is a misconception that many people on both sides seem to have.”

    Indeed. You can only assume temporary local optima; the “global optimum” is a misconception that smacks of ID, the pareidolia of the intelligent observer.

    “Un dimanche après-midi à l’Île de la Grande Jatte” might be a better work to start with. But notwithstanding, you’d need an optimization against locally differing *variables*.

    You might *not* need randomness except in the “mutation” and as a starting condition however. The oldest biogenic stromatolites are more than half as old as this planet itself, so the first *nonrandom structures* created *by organisms* must be older.

    (Dawkins’s one *really* great idea was the extended phenotype. For anything else, refer to current peer-reviewed journals.)

  64. For those interested in genetic algorithms, I have a program (in C++) which encodes words into a Boggle grid. The source is freely available to view and play with. It is also written using templates so you can modify it easily to work on other genetic optimization problems.

    Look for the “Inverse Boggle Encoder” at

  65. Marvelous!

    Are the polygons always convex? Looking at the samples I don’t see any that aren’t.

  66. Evolutionist: The goal-and-discard is simply a substitute for the “fittest” criterion of Nature’s “survival of the fittest”. However, the earlier criticism is correct to some degree – this is a very limited version of “genetic programming”.

  67. I agree with Evolutionist – this is goal-directed evolution (more like – egads – intelligent design). A truly evolutionary algorithm would compare each picture and the LAST picture against a shifting set of environmental conditions. The order emerges from that process over time.

    However, as a demonstration that a randomised process can produce a highly non-random outcome, it’s certainly nice. As others have identified, the exact criteria for accepting or rejecting each picture as closer than the last are critical. As ecological modelers say: the devil’s in the assumptions.

  68. *sigh* As someone with a degree in AI (and having written GAs that did vaguely the reverse: sought a set of transparent rectangles to best classify images), I’ve got to apologize for all the misguided, and even downright stupid, comments that have been posted.

    Your description was perfectly adequate to code an equivalent. I’m not at all surprised by the result, though it is a wonderful demonstration.

    It’s a shame people keep trying to read complexity into the algorithm, rather than appreciating its sheer simplicity. I hope this becomes a textbook example.

    Finally, all GAs are “hill climbers” (variants of simulated annealing), just with different randomizing functions and population sets. Just because you only evolved one code doesn’t make it any less a GA. There might have been efficiencies in using a thousand parallel codes and cross-breeding, but it’s mathematically the same thing.

    Kudos to you.

  69. I would love to see the source code and make it truly genetic. What we have here is a hill-climbing solution with 1,000,000 individuals. What would be the result of 1000 generations of 1000 individuals with a true genetic algorithm? What would it be with 10 generations of 100,000 individuals each? Sounds fun to try.

  70. Pingback:
  71. Pingback: Alt Gr
  72. Pingback: Alt Gr
  73. Pingback: Whole
  74. Pingback: Interesting Reading… – The Blogs at HowStuffWorks
  75. Pingback: Tim
  76. Pingback: Quora

Comments are closed.