How is everyone doing person tracking/masking using only RGB cameras?

I've been seeing a lot of videos (mostly on Instagram) with person tracking and/or masking using what appears to be only RGB cameras. How is this being done? A lot of them also seem to be AR, so is it an AR thing, or is there something in OpenCV (or some other addon) that I can't seem to figure out…? (I also took a look at ARKit, and it isn't obvious to me how it could be done there either.)

(not looking for complete solutions but trying to understand how it’s being done…)

Edit: I know and have done the regular background-subtraction tricks, but all of that requires a steady camera. A lot of the newer things I'm seeing involve moving cameras and/or rapidly changing environments, which is what has been making me wonder about this for a few weeks now.

ARKit 3 on iOS does person segmentation via machine learning (requires an A12 chip and iOS 13).
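Roughly, you opt in via frame semantics on the tracking configuration — a minimal Swift sketch, assuming iOS 13+ on an A12-class device (variable names here are just illustrative):

```swift
import ARKit

// Minimal sketch, assuming iOS 13+ and an A12-class device; names are illustrative.
let session = ARSession()
let configuration = ARWorldTrackingConfiguration()

// Person segmentation is opt-in via frame semantics.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}

session.run(configuration)
// Each ARFrame then carries frame.segmentationBuffer (the person mask)
// and frame.estimatedDepthData (depth for the segmented people).
```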


Thanks @jvcleave, I do seem to have the necessary hardware, and I took a look at ofxARKit but don't see an obvious way to use person segmentation through it. I'm guessing I'd have to use Objective-C / native iOS code to work with it, something I have no experience with. Time to learn new things, I suppose :slight_smile:

Any pointers on where I should be looking? I have done iOS apps before using OF but never had to go beyond the ofApp.* files or use Obj-C. I guess the examples are all there, but I don't know what order to pursue them in. Also, am I right in saying Obj-C, or is it all Swift now? Should I look into how iOS apps are built natively, outside of the OF paradigm?

Hey @ayruos!
Have you tried this addon? I understand it's what @zach uses for his AR stuff; if not, he might give you a better answer :slight_smile:

I'd rather point you towards how the OF and Obj-C implementation is made (look at the ofxiOS src files). You can mix C++ and Obj-C code, so if you find an example written in Obj-C you can port it into OF using a wrapper class.

As I was telling you back in Denver, I am now a resident at Runway, which has ML segmentation stuff, although it runs on a remote server, so it won't give you full-framerate real-time performance. There is an addon for OF but it is a bit outdated; I am updating it and should have it ready tomorrow or the day after. This could be a good option and much easier than having to code Obj-C. I'll let you know once I've published the updated addon.

cheers

All of the ARKit 3 APIs are accessible via Objective-C, but the example code from Apple is written in Swift, so if you are trying to use it in OF you will have to translate and bridge.

This is the Apple example
https://developer.apple.com/documentation/arkit/effecting_people_occlusion_in_custom_renderers
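The part you'd have to bridge for the mask itself is the per-frame session delegate callback — a rough Swift sketch (the class name here is made up, not taken from the Apple sample):

```swift
import ARKit

// Rough sketch of the delegate side you would wrap; the class name is made up,
// not taken from the Apple sample.
class SegmentationReceiver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Person mask for this frame (nil until segmentation results are available).
        guard let mask = frame.segmentationBuffer else { return }
        // Depth estimate for the segmented people, useful for occlusion.
        let depth = frame.estimatedDepthData
        // Hand these CVPixelBuffers over to the C++/OF side, e.g. by copying
        // them into pixels/textures owned by your wrapper object.
        _ = (mask, depth)
    }
}
```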

Personally, unless I am porting an existing OF app, I typically write iOS stuff natively.

Hey @roymacdonald and @jvcleave, thanks for the pointers. Yes, I know about ofxARKit and have taken a brief look at it. Drawing in AR space with the anchor management system seems nice and easy to get into, but I've yet to work with the camera input for CV stuff. FWIW, it seems like ARKit 3 is still not wrapped in it, so I'll have to look into Obj-C at some point.

Looking forward to your Runway implementation, Roy! I gave RunwayML a go last year when it was still in beta and haven't looked into it since, but it showed a lot of promise then. I'm sure it's in a much better state right now!

I see. Well, if you are working with Apple's stuff it is a good idea to learn Obj-C. It might be a bit weird at the beginning, but once you grasp its syntax the logic of it is very similar to C++. There are plenty of tutorials online, so I'd recommend picking one, as otherwise it can be a bit cryptic.

Regarding Runway, I have updated the addon: https://github.com/runwayml/ofxRunway
I am still working on adding more examples, but you can certainly start from there.