Tensorflow.js and Emscripten

Here is some general information: TensorFlow.js | Machine learning for JavaScript developers

I tried that and it works quite well…

This is needed for loading TensorFlow.js and the MobileNet model (I placed it at the end of template.html):

    <!-- Load TensorFlow.js. This is required to use MobileNet. -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js"> </script>
    <!-- Load the MobileNet model. -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet@1.0.0"> </script>

This is needed for image classification of any texture:

	// C++ side: draw the texture into an FBO, read it back into pixels
	// and save it to the Emscripten virtual file system.
	fbo.allocate(texture.getWidth(), texture.getHeight(), GL_RGBA);
	fbo.begin();
	texture.draw(0, 0);
	fbo.end();
	fbo.readToPixels(pixels);
	ofSaveImage(pixels, "screenshot.jpg");

	// JavaScript side: read the saved file from the virtual file system
	// and feed it to MobileNet as an Image.
	var content = FS.readFile("/data/screenshot.jpg");
	var blob = new Blob([content], {type: "image/jpeg"});
	var img = new Image();
	var url = URL.createObjectURL(blob);
	img.src = url;
	// Load the model.
	mobilenet.load().then(model => {
		// Classify the image.
		model.classify(img).then(predictions => {
			console.log('Predictions: ', predictions);
		});
	});

It is basically this example: tfjs-models/mobilenet at master · tensorflow/tfjs-models · GitHub

This is an example result:

    0: {className: 'tabby, tabby cat', probability: 0.24129588901996613}
    1: {className: 'tiger cat', probability: 0.2404310554265976}
    2: {className: 'lynx, catamount', probability: 0.23845034837722778}
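A list like the one above can be printed with a small helper that formats what `model.classify()` resolves with (`formatPredictions` is my own name, not part of the tfjs-models API):

```javascript
// Turn MobileNet prediction objects into readable one-line strings.
function formatPredictions(predictions) {
	return predictions.map(function (p, i) {
		// probability is a float in [0, 1]; show three decimals as above
		return i + ': ' + p.className + ' (' + p.probability.toFixed(3) + ')';
	});
}
```

Usage inside the classify callback: `console.log(formatPredictions(predictions).join('\n'));`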

This could be integrated further; it was just a test.

One question: Is there a way to load:

    <!-- Load TensorFlow.js. This is required to use MobileNet. -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js"> </script>
    <!-- Load the MobileNet model. -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet@1.0.0"> </script>

in ofSetup instead of template.html (I tried, but without success)?
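One approach that might work is injecting the script tags at runtime (e.g. via EM_ASM from ofApp::setup) and waiting for them to load before touching tfjs. This is only a sketch: `loadScript` is my own helper name, not an OF or Emscripten API, and the optional `doc` parameter exists only so the helper can be tested outside a browser:

```javascript
// Hypothetical helper: append a <script> tag at runtime and resolve the
// promise once the script has loaded. Everything tfjs-related has to
// wait until both scripts are in, since they load asynchronously.
function loadScript(url, doc) {
	doc = doc || document;
	return new Promise(function (resolve, reject) {
		var script = doc.createElement('script');
		script.src = url;
		script.onload = function () { resolve(url); };
		script.onerror = function () { reject(new Error('failed to load ' + url)); };
		doc.head.appendChild(script);
	});
}

// Possible usage, e.g. called through EM_ASM from ofApp::setup:
// loadScript('https://cdn.jsdelivr.net/npm/@tensorflow/tfjs/dist/tf.min.js')
//	.then(function () { return loadScript('https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet@1.0.0'); })
//	.then(function () { return mobilenet.load(); })
//	.then(function (model) { /* model is ready here */ });
```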

Here is an object recognition example: https://object.handmadeproductions.de/
I had to edit library_html5video.js to access the video directly; I am sure there is a better way:

		if (video.pixelFormat == "RGBA") {
			model.detect(video).then(predictions => {
				console.log('number of detections: ', predictions.length);
				console.log('Predictions: ', predictions);
				context.font = '20px Arial';
				for (let i = 0; i < predictions.length; i++) {
					context.lineWidth = 1;
					context.strokeStyle = 'green';
					context.fillStyle = 'green';
					// draw the bounding box and its label
					context.strokeRect(predictions[i].bbox[0], predictions[i].bbox[1],
						predictions[i].bbox[2], predictions[i].bbox[3]);
					context.fillText(
						predictions[i].score.toFixed(3) + ' ' + predictions[i].class,
						predictions[i].bbox[0],
						predictions[i].bbox[1] > 10 ? predictions[i].bbox[1] - 5 : 10);
				}
				// upload the annotated canvas back into the OF texture
				var imageData = context.getImageData(0, 0, video.width, video.height);
				GLctx.bindTexture(GLctx.TEXTURE_2D, GL.textures[video.textureId]);
				GLctx.texImage2D(GLctx.TEXTURE_2D, 0, GLctx.RGBA, GLctx.RGBA, GLctx.UNSIGNED_BYTE, imageData);
				GLctx.bindTexture(GLctx.TEXTURE_2D, null);
			});
		}
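The label placement in that loop (text sits just above the box, but is clamped so it is not clipped at the top of the canvas) can be isolated into a small pure helper (`labelPosition` is my own name, not part of the example):

```javascript
// Place the label 5px above the bounding box, but never above y = 10,
// so the 20px text stays inside the canvas. bbox is [x, y, w, h].
function labelPosition(bbox) {
	return {
		x: bbox[0],
		y: bbox[1] > 10 ? bbox[1] - 5 : 10
	};
}
```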

Edit: While it is quite fast on desktop, it is slow on mobile devices (30 vs. 3 fps). Maybe it is because of the use of pixels (I need to try grabber.setUsePixels(false))…

I tested a little more. My RTX 3090 actually needs about 30% of its capacity for 3D acceleration at 30 fps; maybe that's the reason why my mobile phone only reaches about 3 fps…

Here is another one (face-landmarks): https://landmarks.handmadeproductions.de/
ofEmscriptenExamples/videoGrabberLandmarksExample at main · Jonathhhan/ofEmscriptenExamples · GitHub


These are really fun! Both the object recognition and the landmarks run at 30 fps on an M1 Air (7-core GPU), at maybe 65-85% of GPU capacity at that rate. Nice!

@TimChi thanks.
Here is pose detection: https://pose.handmadeproductions.de/
ofEmscriptenExamples/videoGrabberPoseExample at main · Jonathhhan/ofEmscriptenExamples · GitHub

I guess I am doing something wrong with this example (body segmentation), because it sometimes crashes and is not fast:

Here is the original example: tfjs-models/body-segmentation at master · tensorflow/tfjs-models · GitHub

Edit: I changed the model to BodyPix; now it seems to run well…

And my attempts are all very hacky. It would be nice to put the TensorFlow.js stuff into a kind of ofxEmscripten addon, so that it's at least possible to use it without editing the OF source code…

All four together:

Here is another one: https://colorization.handmadeproductions.de/
Basically a port of: GitHub - santhtadi/Colorize-Images-Pix2Pix-in-TensorflowJS
ofEmscriptenExamples/videoGrabberColorizationExample at main · Jonathhhan/ofEmscriptenExamples · GitHub

Can anyone confirm that all of the examples run well on desktop but not on mobile? I always get only around 2-3 fps with my (mid-range Android) phone…
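For comparing numbers like these across devices, a tiny fps counter is handy (my own sketch, not from the examples): call `tick()` once per rendered frame, and `fps()` returns the frame count over roughly the last second.

```javascript
// Minimal fps counter: tick(nowMs) once per frame (e.g. with
// performance.now() in the requestAnimationFrame loop), fps() reads it.
function makeFpsCounter() {
	var times = [];
	return {
		tick: function (nowMs) {
			times.push(nowMs);
			// drop timestamps older than one second
			while (times.length && times[0] <= nowMs - 1000) times.shift();
		},
		fps: function () { return times.length; }
	};
}
```

Usage in the render loop: `counter.tick(performance.now()); console.log(counter.fps());`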

Here is a collection of different models: GitHub - w-okada/image-analyze-workers: The zoo of image processing webworkers for javascript or typescript.

It runs great on macOS Ventura 13.3.1.
On my iPhone 12 mini, in both Safari and DuckDuckGo, it renders one frame of video and then the video does not update; the fps is between 8 and 20 depending on the example.

@NickHardeman thanks.

I wonder if it is possible to port Stable Diffusion to TensorFlow.js…
Here is an attempt (not mine): edhyah/stable-diffusion-tensorflow-js at main
In theory it could work, but I really have no idea…

Another one: https://hands.handmadeproductions.de/
ofEmscriptenExamples/library_html5video.js at main · Jonathhhan/ofEmscriptenExamples · GitHub
