macOS color correction tips (on export)

I’ve been having some issues matching colors between exports from OF and what I see on screen, and I just wanted to mention a few things I learned. Essentially, I was finding that when I took screenshots or screen recordings the colors seemed really vibrant, but when I used ofImage::save() the resulting pixels seemed slightly duller (and I often found I had to do some color correction to match what I saw on screen). After some experimenting, I discovered the issue is basically a missing ColorSync profile on images saved from OF. You can extract the “Color LCD” ICC profile from a screenshot and apply it to the resulting export from OF.

When you apply the “Color LCD” ICC profile to the exported images, the colors match exactly what you see in OF. In addition, I discovered a command that does a similar color profile thing with ffmpeg – which I find useful since I often export image sequences from OF to be combined into video – it means the resulting video will have a color profile and look exactly like the screen recording.
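The original post doesn’t reproduce the exact command, but tagging color metadata while encoding an image sequence with ffmpeg typically looks something like this (a sketch – filenames, framerate, and codec settings are placeholders, not the author’s exact invocation):

```shell
# Encode a PNG sequence into a movie and tag it with sRGB-style color
# metadata (BT.709 primaries/matrix, sRGB transfer), so players know
# how to display it:
ffmpeg -framerate 30 -i frames/frame_%04d.png \
  -c:v libx264 -pix_fmt yuv420p \
  -color_primaries bt709 -color_trc iec61966-2-1 -colorspace bt709 \
  out.mp4
```

Note that `-color_primaries`, `-color_trc`, and `-colorspace` only write metadata tags – they describe how the pixels should be interpreted, and don’t convert the pixel data itself.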

Anyway, I just wanted to mention these two color-specific things, which have more to do with “profiles” (how images should be displayed) than with the actual color data…


Great, thanks @zach, this is super useful.
Have you used “sips”, Apple’s built-in command-line application? I’ve used it a lot for web image optimization because of its color profile capabilities, and I think you can use it to embed color profiles too.
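For reference, embedding a profile with sips looks roughly like this (a sketch – sips is macOS-only, and the profile path is an example; display profiles usually live somewhere under /Library/ColorSync/Profiles/ or ~/Library/ColorSync/Profiles/, with names that vary by machine):

```shell
# Embed a ColorSync profile into an exported PNG without touching the
# pixel data, writing the tagged copy to a new file:
sips --embedProfile "/Library/ColorSync/Profiles/Displays/Color LCD.icc" \
  export.png --out export_tagged.png
```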

Great tips, and I love the hack of grabbing ICC profiles from screengrabs!

As a color geek, when I started with OF I wondered whether OF internally did all of its color math in linear (say, GL_RGB16F), and what kind of transformations it applied to present on an SDR screen in 8 bits (say, GL_RGBA). I haven’t looked at the source code, but I now have a feeling that unless you use a dedicated FBO allocated with a 16-bit format, OF will use 8-bit GL_RGBA by default (happy to be proved wrong!).

Anyway, if one could draw in 16 bits using a linear transfer function (1+1=2) and known primaries and white point, one could also potentially do the conversion to 8-bit sRGB primaries and gamma 2.2 (for display) manually, using a bit of tonemapping math. Then you could save the image in that raw “linear” format to keep as much information as possible for archival (say, EXR 16-bit half float), and apply that same custom display math to turn it into an 8-bit RGBA PNG for viewing/sharing/etc., matching exactly what you saw on screen (provided you can tag the PNG with that specific color space information).

If I can share a few resources that I’ve discovered in my VFX career: GitHub - bbc/qtff-parameter-editor: QuickTime file parameter editor for modifying transfer function, colour primary and matrix characteristics is a nice OSS set of tools developed by the BBC that was recommended to me by a color scientist. It can be used to update the color tags of a ProRes file via its three CLIs (movdump, rdd36dump and rdd36mod).

Meanwhile, Tagging Media with Video Color Information | Apple Developer Documentation offers a useful overview of color tagging for media files (although it is specific to Apple/Swift).

Finally, whenever I have conceptual doubts, I always find myself going back to Chapter 1: Color Management - Chris Brejon and
Chapter 1.5: Academy Color Encoding System (ACES) - Chris Brejon (as a color geek who considers color perception a medium in itself, I find them incredibly fascinating and practical at the same time!).

Happy color hacking!

PS: one final edit to this post to share Exploring Color Theory with Blender, a super cool article exploring color visualizations with Blender!
