the idea is to make easy-to-use machine learning routines for OF to support tasks in gesture/pattern recognition, data viz, and the like. i'll hopefully be making some examples over the next little while, but for anyone interested in seeing the somewhat immature version right now, please check it out.
which OS are you using? and which example are you trying to compile? make sure you don't pull dlib's actual source into your project; just make sure the libs folder is in your header search path.
you should not need to include dlib/queue.h. just pull ofxLearn.h and ofxLearn.cpp into your source, include ofxLearn.h, and add the path to dlib (addons/ofxLearn/libs/) to your header search path (or user header search path in xcode). let me know if that works for you.
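for example, in xcode that's the "User Header Search Paths" build setting. on linux with the makefile-based projects, a line like the following in your project's config.make should do it (just a sketch; the exact variable name depends on your OF version, e.g. PROJECT_CFLAGS in newer makefiles or USER_CFLAGS in older ones):

# config.make sits in the project folder; the path is relative to the project
PROJECT_CFLAGS = -I../../../addons/ofxLearn/libs/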
In Code::Blocks the project opens a file with #error "Don't put the dlib folder in your include path" and an explanation.
In testApp.h there is #include "ofxLearn.h", and the Learn.cpp and Learn.h files are in the src folder.
But I don't know how to add the dlib path to the header search path; I'm on Linux.
Sorry to reply so late.
But can anyone tell me where I have to put this path: "../../../addons/ofxLearn/libs/" for dlib?
genekogan spoke about the header search path, but I work on Linux and I don't know what that is there.
I'm trying to start using dlib, and your openFrameworks addon is a very good help.
Unfortunately I have some problems, because I would like to include other things from the library, and when I start including
image_io.h, etc., I receive a lot of error messages…
I tried to use the dlib library by itself, and I managed to compile the matrix examples on Mac OS X with Xcode. When I tried the image examples, I started receiving a lot of link errors…
I tried to include source.cpp in my project, as explained on the site, but without any good results…
Can you help me? When are you thinking of expanding your addon with the other dlib capabilities?
Can I help you with that? I need some guidance.
Thanks in advance…
best regards,
@genekogan thank you for the addon!
I started to look into the clustering example; the result is not what I imagined, and I am curious to know whether this is normal or whether I am missing something.
I am trying to build clusters from the bright zones in the image on the left side, and I have a VBO mesh on the right to display the colored clusters.
This is the code I am using:
int numclusters = 6;
int increment = 4;
int index = 0;

// sample every `increment` pixels and keep only the bright ones
for (int x = 0; x < myNoise.getWidth(); x += increment) {
    for (int y = 0; y < myNoise.getHeight(); y += increment) {
        ofColor col = myNoise.getColorAtPixel(x, y);
        if (col.r > 100) {
            // normalized (x, y) position is the training feature for this point
            vector<double> xyz;
            xyz.push_back(ofMap(x, 0, myNoise.getWidth(), 0, 1.0));
            xyz.push_back(ofMap(y, 0, myNoise.getHeight(), 0, 1.0));
            xyz.push_back(0);
            clusterer.addTrainingInstance(xyz);

            mesh.addVertex(ofVec3f(x, y, 0));
            mesh.addIndex(index);
            mesh.addColor(ofColor(255, 255, 255));
            index++;
        }
    }
}

clusterer.setNumClusters(numclusters);
clusterer.train();
clusters = clusterer.getClusters();

// one random color per cluster
for (int i = 0; i < numclusters; i++) {
    colors.push_back(ofColor(ofRandom(255), ofRandom(255), ofRandom(255)));
}

// recolor each vertex by its cluster assignment
for (int i = 0; i < mesh.getNumVertices(); i++) {
    mesh.setColor(i, colors[clusters[i]]);
}
it seems to be doing what i would expect. what is the objective of your clustering? since the data you are using seems to be just the position of the point, it makes sense that it’s clustered according to location. are you expecting another layout?
k-means is trying to minimize the amount of internal variance / interpoint distance within the clusters. if you want to separate out the bands you have, you may want to encode each point as a radius from a center point instead of x, y; then you may get something closer to that.
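for example, something roughly like this (just a sketch, assuming a center point in the middle of the image and the same clusterer/mesh/increment objects from your code; only the feature you feed to addTrainingInstance changes, the train/getClusters/recoloring part stays the same):

// hypothetical variation: cluster on distance from the image center
// instead of (x, y) position, so concentric bands tend to group together
ofVec2f center(myNoise.getWidth() / 2.0, myNoise.getHeight() / 2.0);

for (int x = 0; x < myNoise.getWidth(); x += increment) {
    for (int y = 0; y < myNoise.getHeight(); y += increment) {
        ofColor col = myNoise.getColorAtPixel(x, y);
        if (col.r > 100) {
            float radius = ofVec2f(x, y).distance(center);

            // single normalized feature: radius from the center
            vector<double> feature;
            feature.push_back(ofMap(radius, 0, center.length(), 0, 1.0));
            clusterer.addTrainingInstance(feature);

            mesh.addVertex(ofVec3f(x, y, 0));
            mesh.addColor(ofColor(255));
        }
    }
}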