Suggestions please: converting Processing to openFrameworks/Xcode, and improvements

I am working on this project for school. I built it in Processing and it is pretty much working and doing what I want, but I want to learn openFrameworks, so I am trying to convert the project to C++/Xcode. This is pretty much my first project, so I'm sure there are plenty of things I could have written differently that might work better (please feel free to point them out!).

I am in a painting class, and this program will use a camera to capture my classmates' motion and color and project it onto a blank canvas, so they will create their own painting with their movements (and create my painting assignment for me, haha). I plan to have paintbrushes spray-painted in different colors available so they can draw in the color they want. The image gradually resets itself to white when there is no motion.

I'm looking for help recreating this in openFrameworks/Xcode, and any suggestions at all for improving the project are very much appreciated! I am also wondering how I could add sound. I want there to be sound only when there is motion, I think. Below is the Processing code, followed by what I have so far for openFrameworks.
Thanks very very much.

P.S. I should add that I really have no idea what I am doing, so if you are able to respond, please assume that I know nothing. Thanks.
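One way I was imagining the sound part, just the motion-to-volume math in plain C++ with no audio library yet (the function names and the threshold numbers here are made up and would need tuning):

```cpp
#include <cassert>
#include <cstdlib>

// Sum the per-pixel absolute difference between two RGB frames.
long motionAmount(const unsigned char* cur, const unsigned char* prev, int totalPixels) {
    long sum = 0;
    for (int i = 0; i < totalPixels; i++) {
        sum += std::abs(cur[i] - prev[i]);
    }
    return sum;
}

// Map the motion sum to a 0..1 volume: silent below a threshold,
// louder with more motion, clamped at 1.
float motionToVolume(long motion, long threshold, long maxMotion) {
    if (motion < threshold) return 0.0f;  // no motion -> no sound
    float v = float(motion - threshold) / float(maxMotion - threshold);
    return v > 1.0f ? 1.0f : v;
}
```

In openFrameworks I could then feed the result to something like `ofSoundPlayer::setVolume()` every frame, but I'm not sure that's the right approach.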


OpenCV opencv;
PImage trailsImg;
//float threshold = 80f;

void setup() {

  size( 800, 600 );

  // open video stream
  opencv = new OpenCV( this );
  opencv.capture( 800, 600 );
  trailsImg = new PImage( 800, 600 );
}

void draw() {
  opencv.read();    // grab frame from camera
  opencv.absDiff(); // difference between the current frame and the remembered frame

  PImage camImage = opencv.image();
  opencv.blur( OpenCV.BLUR, 3 );

  // add the new motion on top of the trails image (SCREEN only ever lightens)
  trailsImg.blend( opencv.image(), 0, 0, 800, 600, 0, 0, 800, 600, SCREEN );

  image( trailsImg, 0, 0 ); // display the result

  // blur and darken the trails image a little so old motion fades out
  opencv.copy( trailsImg );
  opencv.blur( OpenCV.BLUR, 4 );
  opencv.contrast( 0 );
  opencv.brightness( -2 );
  trailsImg = opencv.image();

  opencv.remember(); // store the current frame in memory for the next absDiff
}

void keyPressed() {
}


This is what I have so far in openFrameworks (testApp.cpp, then testApp.h). I can draw the motion to the screen, but I don't know how to add the trails and the fade.

#include "testApp.h"

void testApp::setup(){
	// minimal setup filled in so the file compiles
	videoIn.initGrabber( GRABBED_VID_WIDTH, GRABBED_VID_HEIGHT );
	totalPixels = GRABBED_VID_WIDTH * GRABBED_VID_HEIGHT * 3;
	text.allocate( GRABBED_VID_WIDTH, GRABBED_VID_HEIGHT, GL_RGB );
}

void testApp::update(){

	videoIn.grabFrame();

	if (videoIn.isFrameNew()){
		unsigned char * tempPixels = videoIn.getPixels();
		for (int i = 0; i < totalPixels; i += 3){
			unsigned char r = abs(tempPixels[i]     - dataPixels[i]);
			unsigned char g = abs(tempPixels[i + 1] - dataPixels[i + 1]);
			unsigned char b = abs(tempPixels[i + 2] - dataPixels[i + 2]);

			int diff = r + g + b;
			if (diff > 30) {
				// movement: keep the camera color that moved
				drawingPixels[i]     = tempPixels[i];
				drawingPixels[i + 1] = tempPixels[i + 1];
				drawingPixels[i + 2] = tempPixels[i + 2];
			} else {
				// no movement: paint this pixel white
				drawingPixels[i]     = 255;
				drawingPixels[i + 1] = 255;
				drawingPixels[i + 2] = 255;
			}
		}
		// remember the frame ONCE, after the loop; running the memcpy inside
		// the loop overwrites dataPixels before the later pixels are compared,
		// which is likely the "flash" you were seeing
		memcpy(dataPixels, tempPixels, totalPixels);
	}
}

void testApp::draw(){
	// minimal draw filled in: upload the result to a texture and show it
	text.loadData( drawingPixels, GRABBED_VID_WIDTH, GRABBED_VID_HEIGHT, GL_RGB );
	text.draw( 0, 0 );
}


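If it helps to sanity-check the logic in update(), the per-pixel differencing can be run on its own, outside openFrameworks, on a tiny hand-made "frame" (diffToDrawing is just a made-up name for the loop body above, extracted into a function):

```cpp
#include <cassert>
#include <cstdlib>

// Same idea as in testApp::update(): compare the current frame to the
// remembered frame, keep the camera color where there is motion, and
// paint white everywhere else.
void diffToDrawing(const unsigned char* cur, const unsigned char* prev,
                   unsigned char* out, int totalPixels, int threshold) {
    for (int i = 0; i < totalPixels; i += 3) {
        int diff = std::abs(cur[i]     - prev[i])
                 + std::abs(cur[i + 1] - prev[i + 1])
                 + std::abs(cur[i + 2] - prev[i + 2]);
        if (diff > threshold) {
            // movement: keep the color that moved
            out[i]     = cur[i];
            out[i + 1] = cur[i + 1];
            out[i + 2] = cur[i + 2];
        } else {
            // no movement: white
            out[i] = out[i + 1] = out[i + 2] = 255;
        }
    }
}
```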
#ifndef _TEST_APP
#define _TEST_APP

#include "ofMain.h"
#include "ofAddons.h"

#define GRABBED_VID_WIDTH  1000
#define GRABBED_VID_HEIGHT 750   // was never defined; 750 assumes a 4:3 camera, adjust to your grabber

class testApp : public ofBaseApp{

	public:
		void setup();
		void update();
		void draw();

		unsigned char drawingPixels[GRABBED_VID_WIDTH * GRABBED_VID_HEIGHT * 3];
		unsigned char dataPixels[GRABBED_VID_WIDTH * GRABBED_VID_HEIGHT * 3];
		// not used yet; trailPixels would need to be an array like the two
		// above, and contourFinder was mistakenly written as a #define --
		// as a member it would be:
		// unsigned char trailPixels[GRABBED_VID_WIDTH * GRABBED_VID_HEIGHT * 3];
		// ofxCvContourFinder contourFinder;
		ofVideoGrabber videoIn;
		ofTexture text;

		int totalPixels;
};

#endif



Take a look at the ofSetBackgroundAuto function: …-groundAuto

Calling ofSetBackgroundAuto(false) tells OF not to clear the screen between frames, so the previous frame stays around and you can build up trails and fades the same way you would in Processing.
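For the fade itself, you can also do it per pixel on your drawing buffer: nudge every channel a little toward white each frame, so anything that stops moving washes out to a blank canvas. A minimal sketch of that one step in plain C++ (fadeToWhite is a made-up helper name, and the step size is a guess to tune):

```cpp
// Fade every channel toward white a little each frame; after enough
// frames with no motion the image resets itself to white, like the
// Processing version does.
void fadeToWhite(unsigned char* pixels, int totalPixels, int step) {
    for (int i = 0; i < totalPixels; i++) {
        int v = pixels[i] + step;
        pixels[i] = (v > 255) ? 255 : (unsigned char)v;
    }
}
```

You would call something like this on drawingPixels (or a separate trails buffer) every update(), and only copy fresh camera colors in where there is motion; a pixel that stops moving then fades out over roughly 255/step frames.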