Drawing only detected faces to buffer


#1

hi all! thanks in advance for stopping by and looking at my topic.

i’m working on a project that involves detecting faces from a webcam and essentially removing the rest of the video where no face is detected (by making those pixels black, transparent, etc). the only relevant pixels are those that make up the detected faces.

right now i’m doing this with ofxFaceTracker2 by finding faces and their regions of interest before getting the polylines that represent each face outline. where i’m having trouble is understanding how to extract each face and draw only the detected faces to the buffer.

right now, i go through each pixel, check if it is outside the face polylines, and if it is, set it to a single color, while leaving the pixels inside the face polylines untouched. i load those pixels into an ofTexture which is what i end up using in draw(). this kind of works, except it’s slow, and when there’s more than one face, neither face gets drawn and i only get the background color.

i have a few vectors that i prepare in setup:

for (int i = 0; i < MAX_FACES; i++)
{
    faces.push_back(ofPolyline());
    faceBounds.push_back(ofRectangle());
}

before finding faces in update:

void ofApp::update(){
    ....
    cam.update(); 

    if(cam.isFrameNew())
    {
        faceTracker.update(cam);

        if (faceTracker.size() > 0)
        {
            // guard against having more detected instances than MAX_FACES slots
            for (std::size_t i = 0; i < faceTracker.getInstances().size() && i < faces.size(); i++)
            {
                ofxFaceTracker2Instance instance = faceTracker.getInstances()[i];
                faces[i] = instance.getLandmarks().getImageFeature(ofxFaceTracker2Landmarks::FACE_OUTLINE);
                faceBounds[i] = instance.getBoundingBox();
            }

            isolateFace();
        }
    }
    ....
}

and then using isolateFace():

void ofApp::isolateFace(){    
    ofPixels & pixels = cam.getPixels();
        
    std::cout << faceTracker.getInstances().size() << std::endl;
        
    for (std::size_t x = 0; x < pixels.getWidth(); x++)
    {
        for (std::size_t y = 0; y < pixels.getHeight(); y++)
        {
            glm::vec3 point = glm::vec3(x, y, 0);
                
            for (std::size_t i = 0; i < faceTracker.getInstances().size(); i++)
            {
                if (!faces[i].inside(point.x, point.y))
                {
                    pixels.setColor(x, y, ofColor(255));
                } 
                else
                {

                }
            }
        }
    }
        
    facePixels = pixels;
    faceTexture.loadData(facePixels);
}

and then in draw:

void ofApp::draw(){
    ...
    faceTexture.draw(0, 0);
    ...
}

i haven’t really needed to work with pixels like this before, so i’m not sure how to evaluate or compare this process. i guess i’m just wondering: is this way effective for what i’d like to do? are there other ways to go about it that might make more sense, be more efficient, etc.? also, are there names and terms for these processes? i had a hard time figuring out what to google, so some follow-up reading/videos on computer graphics would be helpful. and ultimately, what’s wrong with how i’ve written this such that it only works for one face at a time, and not when multiple faces are present?

thanks! that’s all – am very appreciative of everyone’s time + energy.


#2

i think only one face shows up at a time because in

            for (std::size_t i = 0; i < faceTracker.getInstances().size(); i++)
            {
                if (!faces[i].inside(point.x, point.y))
                {
                    pixels.setColor(x, y, ofColor(255));
                }
                else
                {

                }
            }

it first checks for points outside of faces[0] and sets all of those to white (including the pixels inside faces[1]), and then increments to faces[1] and sets all the pixels outside of that to white (including the pixels inside faces[0]). so with two or more faces, every face region gets overwritten by some iteration of the loop, and only the background color survives. will keep thinking about ways to move forward.
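maybe one way forward: flip the test, so a pixel is only blanked when it is outside every face. here’s a minimal standalone sketch of that logic (Box and keepPixel are made-up stand-ins, just for this illustration; in the real app ofPolyline::inside() would do the inside test):

```cpp
#include <cassert>
#include <vector>

// stand-in for a face region; ofPolyline::inside() plays this role
// in the real app (hypothetical helper, for illustration only)
struct Box {
    int x0, y0, x1, y1;
    bool inside(int x, int y) const {
        return x >= x0 && x < x1 && y >= y0 && y < y1;
    }
};

// a pixel should be kept (left untouched) if it falls inside ANY face;
// it should only be blanked when it is outside ALL of them
bool keepPixel(const std::vector<Box>& faces, int x, int y) {
    for (const Box& f : faces) {
        if (f.inside(x, y)) return true; // inside at least one face
    }
    return false; // outside every face: safe to blank
}
```

in isolateFace() that would mean computing the inside-any-face result per pixel first, and only calling pixels.setColor(x, y, ofColor(255)) when it is false.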


#3

Hi,

Agreed with @t.espinoza.

What you are trying to achieve is called a “mask”.

Yes, you can use the GPU to draw the mask.
For example you can create an ofFbo, draw the faces polylines into it, and use it as a mask with ofTexture::setAlphaMask(ofTexture &mask).

Example:

// ofApp.h
vector < ofPolyline > faces;
ofPath path;
ofVideoGrabber cam;
ofFbo mask;
// ofApp.cpp
void ofApp::setup()
{
	cam.setup( 640, 480 );

	// Init the fbo at the same size
	mask.allocate( cam.getWidth(), cam.getHeight() );
}

void ofApp::update()
{
	cam.update();

	// Here the polyline is updated with faces detection.
	// Your code
	
	// It is not possible to draw a filled polyline. That's why we need an ofPath
	path.clear();
	for( ofPolyline & face : faces )
	{
		path.newSubPath();
		path.moveTo( face[ 0 ] );
		for( size_t i = 1; i < face.size(); i++ ) path.lineTo( face[ i ] );
	}
}

void ofApp::draw()
{
	// Draw the mask
	mask.begin();
	ofClear( 0.f, 0.f, 0.f, 0.f ); // clear it
	path.draw();                   // draw the polylines
	mask.end();

	// Draw the mask, just to check if it is all right
	mask.draw( 0.f, 0.f );

	// Draw the masked image
	cam.getTexture().setAlphaMask( mask.getTexture() );
	cam.draw( mask.getWidth(), 0.f );
}

Working example:
src.zip (1.6 KB)

About drawing filled polylines see fill polyline

PS: By the way, do you know that

for (int i = 0; i < MAX_FACES; i++)
{
    faces.push_back(ofPolyline());
    faceBounds.push_back(ofRectangle());
}

can be reduced to

faces.resize( MAX_FACES );
faceBounds.resize( MAX_FACES );

?
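(For what it's worth, with a plain std::vector the two forms produce the same result, since resize() value-initializes the new elements. A quick sketch, using int as a stand-in for ofPolyline:)

```cpp
#include <vector>

// build the vector two ways; both end up identical
// (int stands in for ofPolyline here, purely for illustration)
std::vector<int> viaPushBack(int n) {
    std::vector<int> v;
    for (int i = 0; i < n; i++) v.push_back(int()); // like faces.push_back(ofPolyline())
    return v;
}

std::vector<int> viaResize(int n) {
    std::vector<int> v;
    v.resize(n); // value-initializes n default elements in one call
    return v;
}
```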


#4

@lilive this was great! thanks for clarifying re: masking, resize(). did not know about either. will work on implementing what you showed here and will update with results.


#5

some strange overlapping-related behaviors to work through:

but overall working:


#6

If you use the ofPath and setAlphaMask method, I think you can try

path.setPolyWindingMode( OF_POLY_WINDING_NONZERO );

With the default odd winding rule, regions where two face outlines overlap can end up treated as holes and left unfilled, so the nonzero rule may fix the overlapping artifacts.