Weird broken stuff with ofCam, ofEasyCam, worldToScreen and displaying text

I want to draw 2D text in screen space, positioned relative to a point in 3D space. I’m having a terrible time doing this with openFrameworks. Maybe I’m missing something?

This works fine:

ofCamera cam;
float Z;

void ofApp::setup(){
    Z = 0;
}

void ofApp::update(){
}

void ofApp::draw(){
    Z += 0.1;
    cam.setPosition(0, 0, Z);

    cam.begin();
    ofSpherePrimitive sphere;
    sphere.drawWireframe();
    for(int i = 0; i < sphere.getMesh().getVertices().size(); i++){
        ofVec3f loc = sphere.getMesh().getVertex(i);
        ofDrawBitmapString(ofToString(sphere.getMesh().getIndex(i)), loc.x, loc.y, loc.z);
    }
    cam.end();
}

However, if I change the for loop to this, it’s totally ruined:

for(int i = 0; i < sphere.getMesh().getVertices().size(); i++){
    ofVec3f loc = cam.worldToScreen(sphere.getMesh().getVertex(i));
    ofDrawBitmapString(ofToString(sphere.getMesh().getIndex(i)), loc.x, loc.y);
}

Why wouldn’t this work?

I’m actually trying to build a much more complex application that uses ofEasyCam and ofTrueTypeFont, but when I tried using ofEasyCam with the pointPickerExample it seemed like ofEasyCam and worldToScreen don’t work properly together.

Do I need to force the camera to apply some transformation matrix before I ask for worldToScreen()?

My intuition is that cam.worldToScreen() shouldn’t be called inside cam.begin()/cam.end(), since that basically applies the transformation twice: worldToScreen() transforms and projects the point to a screen position, and drawing that inside the camera then transforms the “2D” point again as if it were a 3D world point. (I say 2D because there is still a z value in the result, but it’s usually very small after worldToScreen().)
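Roughly speaking, that is all worldToScreen() amounts to: apply the camera’s combined model-view-projection matrix and map the normalized result into the viewport. Here is a hand-rolled sketch of that idea (not necessarily the library’s exact implementation; it assumes ofCamera::getModelViewProjectionMatrix() and ofGetCurrentViewport()):

// rough sketch of what worldToScreen() does conceptually
ofVec3f myWorldToScreen(const ofCamera & cam, const ofVec3f & world){
    ofRectangle viewport = ofGetCurrentViewport();
    // model-view-projection takes the point to normalized device coords (-1..1)
    ofVec3f ndc = world * cam.getModelViewProjectionMatrix(viewport);
    // map normalized coords into pixel coords inside the viewport (y is flipped)
    ofVec3f screen;
    screen.x = (ndc.x + 1.0f) / 2.0f * viewport.width  + viewport.x;
    screen.y = (1.0f - ndc.y) / 2.0f * viewport.height + viewport.y;
    screen.z = ndc.z;  // small leftover depth value
    return screen;
}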

The trick is to call worldToScreen() outside of cam.begin()/cam.end(). This seems to work for me:

ofSpherePrimitive sphere(200, 10);

ofSetColor(255);
cam.begin();
sphere.drawWireframe();    // 3D drawing happens inside begin()/end()
cam.end();

ofSetColor(255, 0, 0);
// after cam.end() the default 2D screen coordinates are active,
// so the pixel positions from worldToScreen() land where expected
for(int i = 0; i < sphere.getMesh().getVertices().size(); i += 10){
    ofVec3f loc = cam.worldToScreen(sphere.getMesh().getVertex(i));
    ofDrawBitmapString(ofToString(sphere.getMesh().getIndex(i)), loc.x, loc.y);
}
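For what it’s worth, the same pattern works with ofEasyCam and ofTrueTypeFont, which you mentioned above: draw the 3D scene between begin()/end(), then draw the labels afterwards at the projected positions. A quick sketch, assuming easyCam, font and sphere are members of ofApp and the font was loaded in setup() (the font file name is just a placeholder):

// in ofApp.h (sketch):
// ofEasyCam easyCam;
// ofTrueTypeFont font;        // e.g. font.load("verdana.ttf", 12); in setup()
// ofSpherePrimitive sphere;   // e.g. sphere.set(200, 10); in setup()

void ofApp::draw(){
    easyCam.begin();
    sphere.drawWireframe();    // 3D drawing only, inside begin()/end()
    easyCam.end();

    // labels in screen space, after the camera transform has been popped
    for(int i = 0; i < sphere.getMesh().getVertices().size(); i += 10){
        ofVec3f loc = easyCam.worldToScreen(sphere.getMesh().getVertex(i));
        font.drawString(ofToString(sphere.getMesh().getIndex(i)), loc.x, loc.y);
    }
}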

Hmm, that’s definitely an unexpected “gotcha” that should be captured in the documentation for the worldToScreen / screenToWorld functions.

My intuition was to start the camera at the beginning of my draw loop, then turn on all the lights and materials, and start drawing everything I needed to draw.

Pull request here:
https://github.com/openframeworks/ofSite/pull/526

Thank you!

I don’t think it’s that they don’t work between cam.begin() and cam.end(); it’s that by drawing the results of worldToScreen() inside cam.begin() you are essentially applying the transformation twice:

worldToScreen() takes a 3D point and gives back a 2D (ish) point, i.e. it applies the camera transform to the point and projects it to the screen
cam.begin() then transforms that 2D point again when you draw it

So things are getting transformed twice. It’s maybe conceptually easier to think of worldToScreen() as a replacement for cam.begin(), since it applies the camera transformation itself.
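To make the double transform concrete, here’s the same point drawn both ways (just a sketch; cam is an ofCamera member as in the snippets above):

void ofApp::draw(){
    ofVec3f worldPoint(0, 0, -100);

    // inside begin()/end(), the pixel coordinates returned by worldToScreen()
    // are treated as world coordinates and run through the camera again
    cam.begin();
    ofVec3f twice = cam.worldToScreen(worldPoint);   // already in pixels
    ofDrawBitmapString("transformed twice", twice.x, twice.y);
    cam.end();

    // outside begin()/end(), the default 2D view is active,
    // so the pixel coordinates land exactly where you expect
    ofVec3f once = cam.worldToScreen(worldPoint);
    ofDrawBitmapString("transformed once", once.x, once.y);
}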