mac built-in camera issues

Hey guys,

Sorry to spam the list with my weird questions; here goes another one. Basically I have two nearly identical applications using the same OF libraries, and I am developing them on my computer with a DV camera. I exported release versions of both to test them on a MacBook Pro and an iMac. Funnily enough, one of the applications works seamlessly with the built-in camera, whereas the other one crashes while trying to grab it and doesn’t run at all.

Both applications run perfectly on my machine with the DV camera.

I have triple-checked the code, ofVideoGrabber, and all the usual things that might cause trouble. Nada, I couldn’t find anything. I am sitting here wondering what my next debugging step should be to make this second application work with the Mac cameras.

I would appreciate any direction,

never mind, it’s working now. I still have no clue what the problem was.

weirdly enough, it’s not working again…

this is weird. what should I do?

It’s difficult to know without seeing any code or knowing what your apps are supposed to do, but the built-in MacBook Pro camera has always worked fine for me, so the problem shouldn’t be there.

What’s different, code-wise, between your applications?
Perhaps you are hardcoding the camera resolution into the app that crashes?
Are you running both applications at the same time?

Anyway, if you post the code that is causing trouble, it’ll be easier to help.


Hey Jesus,

Basically I am trying to find the contours of my grayDiffSmall feed and then interpolate those locations to the whole stage. I wondered if it had something to do with the population of the particleSystem, but I don’t think so, since the particles are created no matter what.

Let me do more tests.

void testApp::update(){
	for(int i = 0; i < NPARTICLESYSTEMS; i++)
		timer -= 1.0;

	bool bNewFrame = false;
	bNewFrame = vidGrabber.isFrameNew();

	colorImg.setFromPixels(vidGrabber.getPixels(), 320, 240);
	colorImg.mirror(false, true);
	unsigned char * pixels = colorImg.getPixels();
	grayImage = colorImg;
	grayDiff = grayImage;
	singleChannelMotionDetection.update(&grayImage);
	grayDiffSmall.blur(3); // really blur the image a lot!
	VF.setFromPixels(grayDiffSmall.getPixels(), bForceInward, 0.04f);
	//VF.addOutwardCircle((float)x, (float)y, 300, 1.5f);
	scaleX = (float)ofGetWidth()  / grayDiffSmall.width;
	scaleY = (float)ofGetHeight() / grayDiffSmall.height;
	contourFinder.findContours(grayDiffSmall, 0, (60*40)/2, 10, false);
	cout << "str is: " << contourFinder.blobs[0].nPts << endl;
	//printf(contourFinder.nBlobs);

	for (int i = 0; i < camWidth; i++){
		for (int j = 0; j < camHeight; j++){
			colorPixels[(j*camWidth+i)*3 + 0] = 255; // pixels[(j*camWidth+i)*3 + 0];	// r
			colorPixels[(j*camWidth+i)*3 + 1] = (((pixels[(j*camWidth+i)*3 + 0] + pixels[(j*camWidth+i)*3 + 1] + pixels[(j*camWidth+i)*3 + 2])/3)*165/255) + 90; // g
			colorPixels[(j*camWidth+i)*3 + 2] = (pixels[(j*camWidth+i)*3 + 0] + pixels[(j*camWidth+i)*3 + 1] + pixels[(j*camWidth+i)*3 + 2])/3; // b
			//printf(pixels[(j*camWidth+i)*3 + 0]);
			//cout << pixels[(j*camWidth+i)*3 + 0] << "\t" << endl;
		}
	}

	videoTexture.loadData(colorPixels, camWidth, camHeight, GL_RGB);

	for(int j = 0; j < contourFinder.blobs[0].nPts; j = j + 100){
		ofxVec2f ps_loc(contourFinder.blobs[0].pts[j].x*scaleX, contourFinder.blobs[0].pts[j].y*scaleY);
		//ofxVec2f ps_loc(100, 500);

Okay, spotted my stupidity!! That last loop should be like below:

	for(int k = 0; k < contourFinder.nBlobs; k = k + 1){
		//cout << "blobs: " << contourFinder.blobs[k].nPts << "\t" << endl;
		for(int j = 0; j < contourFinder.blobs[k].nPts; j = j + 200){
			ofxVec2f ps_loc(contourFinder.blobs[k].pts[j].x*scaleX, contourFinder.blobs[k].pts[j].y*scaleY);
			//ofxVec2f ps_loc(100, 500);