ofxNeuralNetwork

ofxNeuralNetwork.zip
Demo-images

I wrote a backpropagation-based neural network in Java a few years ago, and I just ported it to C++. The biggest difference between this and something like FANN, apart from scope, is that it’s meant to be used online (i.e., continually alternating between training and running, rather than training first and then running).
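
In practice that just means calling run() and learn() back to back, e.g. once per frame. A minimal sketch of the pattern, using the interface shown in the demos below:

// the online pattern: every frame, run the network, use the result, then train on it
vector<float> input(2), observed(3);
// ... fill input with this frame's data ...
vector<float>* output = nn->run(input);  // running: get the current prediction
// ... use *output, then fill observed with what actually happened ...
delete output;                           // the ofxNeuralNetwork demo below deletes run()'s result, so assuming the same here
nn->learn(input, observed);              // training: immediately fold in the new example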

NeuralNetworkDemo
Demonstrates how to interface with the class directly. First it sets up a network with 2 input nodes, 3 intermediate nodes, and 3 output nodes.

  
  
vector<int> dimensions(3);
dimensions[0] = 2; // input layer
dimensions[1] = 3; // intermediate layer
dimensions[2] = 3; // output layer
nn = new NeuralNetwork(dimensions);
  

Then it runs the network at each point on the screen:

  
  
vector<float> input(2);
input[0] = map(x, 0, width, -1, +1);  // map the screen coordinates
input[1] = map(y, 0, height, -1, +1); // into the network's -1..+1 range
vector<float>* output = nn->run(input);
  

(Notice that this neural network operates on the -1 to +1 range – just because I understand the math for this style better than the 0 to 1 style.) The output is then mapped to colors and drawn to the screen. Random weights create random gradients.
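
Putting the loop and the color mapping together (a hypothetical version; the demo's exact drawing code may differ):

for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        vector<float> input(2);
        input[0] = map(x, 0, width, -1, +1);
        input[1] = map(y, 0, height, -1, +1);
        vector<float>* output = nn->run(input);
        // remap the three -1..+1 outputs onto an RGB color and draw the pixel
        ofSetColor(map((*output)[0], -1, +1, 0, 255),
                   map((*output)[1], -1, +1, 0, 255),
                   map((*output)[2], -1, +1, 0, 255));
        ofRect(x, y, 1, 1);
        delete output; // assuming the caller owns the result, as in the ofxNeuralNetwork demo
    }
}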

Learning is not demonstrated, but works like this:

  
  
vector<float> input, expected;  
...  
nn->learn(input, expected);  
  

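For instance, you could teach the 2-in/3-out network above a simple gradient by pairing each screen position with the output you want there (a hypothetical example, not part of the demo):

// hypothetical training pair for the 2-input / 3-output demo network:
// at screen position (x, y), ask the first output to follow x, the second
// to follow y, and the third to stay neutral
vector<float> input(2), expected(3);
input[0] = map(x, 0, width, -1, +1);
input[1] = map(y, 0, height, -1, +1);
expected[0] = input[0];
expected[1] = input[1];
expected[2] = 0;
nn->learn(input, expected);
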
ofxNeuralNetworkDemo
Demonstrates how to interface with the NeuralNetwork indirectly using ofxNeuralNetwork, which uses the VectorMath class as an interface. Also, notice that Connection has two static variables you can set:

  
  
Connection::learningRate = 0.001; // step size used when the weights are updated
Connection::maxInitWeight = .1;   // bound on the magnitude of the random initial weights
  

The network is set up to take two ofxPoint2fs as input, and output one ofxPoint2f. The two ofxPoint2fs have different boundaries (one is a position, the other is a velocity), and the ofxNeuralNetwork will take care of that mapping for you.

  
  
vector<ofxPoint2f> minInput, maxInput, minOutput, maxOutput;
vector<int> topology;

int maxVelocity = 128;

// input bounds: the first point is a position (screen-sized),
// the second is a velocity (within +/- maxVelocity)
minInput.push_back(ofxPoint2f(0, 0));
minInput.push_back(ofxPoint2f(-maxVelocity, -maxVelocity));
maxInput.push_back(ofxPoint2f(width, height));
maxInput.push_back(ofxPoint2f(+maxVelocity, +maxVelocity));

// output bounds: a single point with velocity-sized limits
minOutput.push_back(ofxPoint2f(-maxVelocity, -maxVelocity));
maxOutput.push_back(ofxPoint2f(+maxVelocity, +maxVelocity));

// 6 intermediate nodes
topology.push_back(6);

// how finely the inputs are discriminated (see below)
resolution = 8;

nn = new ofxNeuralNetwork<ofxPoint2f>(minInput, maxInput, topology, minOutput, maxOutput, resolution);
  

The size of the input and output is implicitly given by the number of boundary pairs. The intermediate nodes are given by “topology”. “resolution” determines how much discrimination the inputs have: the more resolution you have, the more intermediate nodes you’ll need, and the more resolution or intermediate nodes you have, the slower the network will run. Currently, ofxNeuralNetwork is limited to using the same point type (e.g., all ofxPoint2f) for every input and output.
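
Since the input size comes from the number of boundary pairs, adding a third input point, for example, only takes another pair of bounds (a hypothetical extension, not part of the demo):

// hypothetical third input point, bounded like the position above
minInput.push_back(ofxPoint2f(0, 0));
maxInput.push_back(ofxPoint2f(width, height));
// the input size is now 3 and the output size is still 1; nothing else changes
nn = new ofxNeuralNetwork<ofxPoint2f>(minInput, maxInput, topology, minOutput, maxOutput, resolution);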

Learning is accomplished like this:

  
  
vector<ofxPoint2f> input, expected;  
...  
nn->learn(input, expected);  
  

And ofxNeuralNetwork will save everything you’ve taught it so you can relearn quickly. For example, relearning a specific instance:

  
  
nn->relearn((int) ofRandom(0, nn->memorySize()));  
  

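Or, to replay everything it has memorized, loop over the whole memory:

// relearn every stored input/expected pair once
for (int i = 0; i < nn->memorySize(); i++) {
    nn->relearn(i);
}
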
To run the ofxNeuralNetwork, you can say:

  
  
vector<ofxPoint2f> input;  
...  
vector<ofxPoint2f>* output = nn->run(input); // run() allocates the output vector...
...
delete output; // ...so the caller is responsible for deleting it
  

This specific demo uses this functionality to learn to scribble like you. I have another app where I’m using these scribbles to make some Yasunao Tone-style noise. If you’d like to play with it, the controls are: space to create a new neural network, enter to clear the screen, ‘i’ to toggle illustration mode, ‘f’ to toggle the field lines, and ‘s’ to save a screenshot.

ofxProcessing
This uses one other addon, ofxProcessing, which is just some of the functionality of Processing I find myself wanting in oF. Right now it’s got: a bunch of print() and println()s, save() and saveFrame(), radians(), degrees() and rotate(), map(), sq(), constrain(), dist(), and mag(). There are some Processing-like things I was considering adding here (e.g.: saveUniqueFrame(), which would use ofxDirList to saveFrame() without writing over old files), but I think it’d be best to keep it to functions that exactly mirror Processing. That way, if other people want to implement functions that aren’t already there, there’s no ambiguity about whether they should be included or not.
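
As a rough sketch of how these read in practice (using the Processing-style names from this release, which were later renamed with the of-prefix as noted below, and assuming the obvious overloads):

// distance from the mouse to the center of the screen
float d = dist(mouseX, mouseY, width / 2, height / 2);
// clamp it, then remap it to a brightness, Processing-style
float brightness = map(constrain(d, 0, 200), 0, 200, 255, 0);
println(brightness);      // print() / println() mirror Processing's console output
float b2 = sq(brightness); // sq(x) is just x * x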


So, obviously: you’ll need to copy these addons into your /addons folder and add the following to “addons.h”:

  
  
#ifdef OF_ADDON_USING_OFXNEURALNETWORK  
	#include "ofxNeuralNetwork.h"  
#endif  
  
#ifdef OF_ADDON_USING_OFXPROCESSING  
	#include "ofxProcessing.h"  
#endif  
  

Sweet add…I was wondering when someone was going to do something like this. I’m pretty excited to give the NN a try. Thanks!

Sounds very cool! But I get “file not found” on the link…

nice!

I’m going to try it. Just one thing: ofxProcessing seems really similar to Todd’s ofxTodd. Take a look; there were some proposals to include it in the next oF version, so perhaps you can merge them.

My mistake – bad URL – fixed.

I didn’t want to infringe on Todd’s space by adding functions to his addon, and thought it would be better to make an addon with a clear boundary (i.e.: Processing functions) rather than “random utils”. If they were merged, Todd is probably right about the of-prefix convention, as it’s a bad idea to have a global function named “map” when there’s stl floating around.

[quote]Todd is probably right about the of-prefix convention, as it’s a bad idea to have a global function named “map” when there’s stl floating around.[/quote]

yes, I had to rename yours in order to compile :slight_smile:

Also there are two bugs:

in NeuralNetwork.h and Neuron.h, the private size() methods don’t return a value.

[quote author=“arturo”]
in NeuralNetwork.h and Neuron.h, the private size() methods don’t return a value.[/quote]

Very strange. You mean Layer.h instead of Neuron.h? I just fixed it. Apparently mingw just interprets the last statement of a function as the return value? Let me know if you find anything else. If not, I’ll update the code I posted.
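
For anyone following along, the bug was just a non-void function with no return statement, something like this (paraphrasing; the member name is illustrative, not the actual source):

// before: only a warning on mingw, but falling off the end of a
// non-void function leaves the return value formally undefined
int size() {
    layers.size();
}

// after: the fix
int size() {
    return layers.size();
}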

Can you update the code? :slight_smile: I’m getting the same errors with VS2008 & WinXP.

thanks.

Yes, Layer.h, sorry.

I don’t think mingw does that automatically; it’s just that it’s not an error but a warning, so it will compile, but that function will always return 0.

I just uploaded some new source. ofxProcessing now uses the standard of-prefix naming convention, and the other source has been updated to reflect that.

Also, apparently mingw does automatically do what I thought it did – my original suspicion was just that if size() didn’t work, nothing would work. I tested it out manually, and indeed it was returning 3 for the size of the neural network. Regardless, I fixed it, since relying on that behavior is not normal at all.

ungaro – let me know if it’s working now :slight_smile:

r2, updated for 006:

  • “ofaddons” references removed
  • some code formerly in ofxProcessing has been moved to the core, so I removed it from ofxProcessing

Thanks to lahiru for letting me know about the errors.

This is a very helpful addon; I’m just having some difficulty bringing it over to Xcode. I’ll give it a shot later. What I would hope for, in terms of helping this addon spread throughout the forum, is clearer documentation and testApps that are more oF-orthodox, or at least more standard in appearance. A single .h is a bit frightening for newcomers. It’s not how much code there is, it’s whether people can understand it and learn from it that matters. Thank you for understanding.

Hey OwlHuntr, thanks for the feedback.

I try to write code that is self-documenting, in that it has clear distinctions between what you do and don’t need to worry about. I tend to write single .h files when I can, because I find it speeds up development. If it were separated into a .cpp and .h file, the public part of the .h would look like this:

  
  
ofxNeuralNetwork<T>(vector<T>& minInput, vector<T>& maxInput, vector<int>& topology, vector<T>& minOutput, vector<T>& maxOutput, int resolution);  
vector<T>* run(vector<T> &input); // run an input and return an output  
void learn(vector<T> &input, vector<T> &expected); // learn an input-output pair  
int memorySize(); // how many unique pairs have been learned?  
void relearn(int i); // relearn pair i  
  

If there’s something specific I could clarify, let me know. Otherwise, I hope the demo explains anything you’re curious about :slight_smile:

The download link is dead; is there a copy of this library anywhere?

@ghz_tomash found this with a Google search: https://github.com/lian/ofx-dev/tree/master/apps/dev/neuralNetworkDemo/src/ofxNeuralNetwork but no idea if it’s the same…