evolving logical operations for function approximation

http://vimeo.com/32381425

source attached.

In this simulation I evolve a population of abstract computer programs, each a series of binary logic gates that operate on its own memory.

I convert each x-coordinate in [0, 255] into binary and load it into the memory of my abstract machine (machine.h). I then run the encoded instructions for that x-coordinate, read the resulting value from the machine's memory in binary, and convert it into a decimal float.
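The actual machine.h encoding isn't shown in the post, so here is only a minimal sketch of that kind of bit-level machine: the instruction format, gate set, and memory layout below are all assumptions, not the author's design.

```python
from typing import List, Tuple

# Hypothetical gate set; the real machine.h may differ.
GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

MEM_BITS = 16  # assumed layout: 8 input bits, then 8 working/output bits

def run(program: List[Tuple[str, int, int, int]], x: int) -> int:
    """Load x in [0, 255] as binary, execute the gate sequence, read result."""
    mem = [0] * MEM_BITS
    for i in range(8):                   # load the x-coordinate as bits
        mem[i] = (x >> i) & 1
    for gate, a, b, dest in program:     # each instruction: (gate, src, src, dest)
        mem[dest] = GATES[gate](mem[a], mem[b])
    out = 0
    for i in range(8):                   # read the output bits back as an integer
        out |= mem[8 + i] << i
    return out                           # the post then scales this to a float

# A tiny two-instruction program: copy x's low bit, and its complement.
prog = [("OR", 0, 0, 8), ("NAND", 0, 0, 9)]
```

With `prog` above, bit 8 of memory mirrors bit 0 of the input and bit 9 holds its negation, which is enough to see the load/execute/read cycle end to end.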

I then compare that value to the function I am trying to approximate and assign a fitness based on the inaccuracy.
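The post doesn't give the exact error measure, so this sketch assumes a sum of squared errors against one period of a sine wave, with the machine's 8-bit output mapped into [0, 1]; `run(program, x)` stands for evaluating an evolved program on input x.

```python
import math

def target(x: int) -> float:
    # One period of a sine wave over x in [0, 255], scaled into [0, 1].
    return (math.sin(2 * math.pi * x / 255.0) + 1.0) / 2.0

def fitness(program, run) -> float:
    # run(program, x) evaluates the evolved program; its 8-bit output is
    # mapped into [0, 1] here (an assumed decoding).
    err = 0.0
    for x in range(256):
        err += (run(program, x) / 255.0 - target(x)) ** 2
    return -err  # higher fitness = lower inaccuracy
```

A program that reproduces the target exactly scores 0; everything else scores negative, so selection can simply maximize fitness.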

I evolve a population of these programs until it converges on a solution.
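As a rough illustration of that loop, here is a toy stand-in: the genome is just a bit string and fitness counts matches against a fixed pattern rather than gate programs scored by inaccuracy, but the select/mutate/repeat structure is the same. Population size, survivor count, and mutation rate are all arbitrary choices, not the post's settings.

```python
import random

random.seed(1)  # deterministic demo

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # toy goal: evolve this exact bit string
POP_SIZE, SURVIVORS, GENERATIONS = 30, 10, 200

def fitness(genes):
    # Count positions that match the target; 8 means a perfect genome.
    return sum(g == t for g, t in zip(genes, TARGET))

def mutate(genes, rate=0.1):
    # Flip each bit independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genes]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break  # converged on an exact match
    # Truncation selection: breed the next generation from the top slice.
    pop = [mutate(random.choice(pop[:SURVIVORS])) for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
```

Swapping in the gate-program genome and the inaccuracy-based fitness turns this skeleton into the setup the post describes.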

actuallyUsingLogic.tgz

One use for this technology: evolving abstract shaders.

The source code for this will come soon.

very nice. the shaders are more interesting, in my opinion. evolving sine waves is a good proof of concept, but there’s no reason for an evolutionary algorithm if the solution is known. the only reason you use an EA is when you have requirements for the solution, but don’t know how to satisfy them.

also consider ‘electric sheep’, which has been evolving parameters for abstract visuals for… over a decade! :slight_smile: http://electricsheep.org/ http://draves.org/meg/

electricSheep++!

The sine wave approximation is just taking my car out for a drag race to prove the thing runs when I floor it.

The solution is really not even what I’m after with EAs, it’s the diverse selection of behavior that arises from mutation. I’m far more interested in the 100 versions that are just like it, but slightly different.

I’m also interested in writing one computer program that evolves a possible 10^59 programs.