The SDK is not available for OSX, so you need to use libfreenect to get the frames. This is pretty cool but misses a few things. Most notably for me, there is no coordinate space transformation from colour to depth, only from depth to colour. That means if you know where a pixel is in the colour frame, there is no way to work out where it is in the depth frame. You can go the other way round though, and you can work with a rectified colour image, one that is transformed into the same size and aspect ratio as the depth frame (512×424 I think).
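For reference, here is a minimal sketch of how that rectified colour frame can be produced with libfreenect2's Registration class (this assumes the Kinect v2 branch of the library, libfreenect2, and is loosely adapted from its Protonect example rather than from any particular OF addon):

```cpp
#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>
#include <libfreenect2/registration.h>

int main()
{
  libfreenect2::Freenect2 freenect2;
  if (freenect2.enumerateDevices() == 0) return -1;

  libfreenect2::Freenect2Device *dev = freenect2.openDefaultDevice();

  // Listen for colour + depth so both frames arrive together
  libfreenect2::SyncMultiFrameListener listener(
      libfreenect2::Frame::Color | libfreenect2::Frame::Depth);
  dev->setColorFrameListener(&listener);
  dev->setIrAndDepthFrameListener(&listener);
  dev->start();

  // Registration maps depth -> colour; "registered" is the colour image
  // rectified into the 512x424 depth frame
  libfreenect2::Registration registration(dev->getIrCameraParams(),
                                          dev->getColorCameraParams());
  libfreenect2::Frame undistorted(512, 424, 4), registered(512, 424, 4);

  libfreenect2::FrameMap frames;
  listener.waitForNewFrame(frames);
  libfreenect2::Frame *rgb   = frames[libfreenect2::Frame::Color];
  libfreenect2::Frame *depth = frames[libfreenect2::Frame::Depth];

  registration.apply(rgb, depth, &undistorted, &registered);
  // ... use registered->data (512x424 colour) alongside the depth frame ...

  listener.release(frames);
  dev->stop();
  dev->close();
  return 0;
}
```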
Libfreenect also uses OpenCL or CUDA to unpack the depth frames, whereas the SDK is CPU based. This is actually great, and with libfreenect you can specify which OpenCL device to use, so you get your CPU back.
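Picking the unpacking pipeline (and the OpenCL device it runs on) looks roughly like this in libfreenect2; treat it as a sketch, since OpenCLPacketPipeline / CudaPacketPipeline are only compiled in if the library was built with that support:

```cpp
#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/packet_pipeline.h>

int main()
{
  libfreenect2::Freenect2 freenect2;

  // Unpack depth on the GPU via OpenCL instead of the CPU.
  // deviceId = -1 lets libfreenect2 pick a default OpenCL device;
  // pass an explicit index to target a specific GPU.
  libfreenect2::PacketPipeline *pipeline =
      new libfreenect2::OpenCLPacketPipeline(-1);
  // Alternatives:
  //   new libfreenect2::CudaPacketPipeline();
  //   new libfreenect2::CpuPacketPipeline();

  // The device takes ownership of the pipeline, so don't delete it yourself.
  libfreenect2::Freenect2Device *dev = freenect2.openDefaultDevice(pipeline);
  if (!dev) return -1;

  // ... set listeners, start(), grab frames as usual ...
  dev->close();
  return 0;
}
```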
I have an addon for the kinectV2 here; there are many variations out there that may be better than mine.
Also there is no native skeleton tracking, but you can use it with OpenNI (there are some addons that work with the latest OF for this; here is one: https://github.com/pierrextardif/ofxNI2). These lack some features like hand states, and of course the voice recognition from the SDK is not available.
One serious limitation of the actual Windows SDK, however, is that it is limited to one camera per computer. libfreenect and OpenNI don't have this restriction, which is great (both also work on Linux, OSX and Windows). The quality of OpenNI's skeleton tracking is not as good as the SDK's, though.
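As a rough illustration of the multi-camera point, libfreenect2 lets you enumerate devices and open each one by its serial number, so several Kinect v2s on one machine (USB bandwidth permitting) look something like this:

```cpp
#include <libfreenect2/libfreenect2.hpp>
#include <string>
#include <vector>

int main()
{
  libfreenect2::Freenect2 freenect2;
  std::vector<libfreenect2::Freenect2Device*> devices;

  // Open every Kinect v2 that is plugged in, one device per serial number.
  // (The Windows SDK only exposes a single sensor per machine.)
  int count = freenect2.enumerateDevices();
  for (int i = 0; i < count; ++i)
  {
    std::string serial = freenect2.getDeviceSerialNumber(i);
    libfreenect2::Freenect2Device *dev = freenect2.openDevice(serial);
    if (dev) devices.push_back(dev);
  }

  // ... attach frame listeners, start(), grab frames, stop() each device ...

  for (auto *dev : devices) dev->close();
  return 0;
}
```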
As per the caveats noted above, it does work fine with both the Kinect 1414 and the Kinect 2.
I did a walkthrough tutorial on YouTube building a simple 3D photobooth on OSX with a Kinect 1414.