How to calibrate real moving lights to each other in 3d space? Homography?


#1

I have multiple pan-tilt spotlights and want them all to point at the same real-world location. A location that keeps on changing.

I assume there must be some sort of 3D world calibration method, similar to how two camera spaces can be calibrated via a homography. But when I calibrated cameras in the past it was always just to a plane, like the floor.

But having the beams of multiple lights hit the same spot in a 360˚ spherical world must surely be a different beast.

I have a 3D simulation working where all the simulated light fixtures follow the same 3D point. But real-world lights are never set up that perfectly and might not have the exact relation to each other and the space.

Any advice would be appreciated. Thanks.


#2

You can try to use estimateAffine3D from OpenCV (there is an abstraction in ofxCv as well). It really depends on how you are using your lights and how you are set up, but in general it can give you a matrix to move between one 3D coordinate system and another. You would go to known points (where it lines up with your simulation), then manually move the moving head to the real-world points and see what coordinates you actually have to send to get to that point. Collect a bunch of these pairs (points in the simulation and points in the real world) and you get a matrix out of the function. With moving lights I am not sure how it would work; I would guess you need to use nodes as the base for the lights, and then using node.lookAt() you can get the transformed look-at point and calculate the rotations needed for your moving head.
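
A minimal sketch of the point-pair idea, using raw OpenCV calls rather than the ofxCv wrapper (the point values here are placeholders):

```cpp
// Sketch: estimate the transform between the simulation's coordinate system
// and the measured real-world coordinate system from corresponding points.
#include <opencv2/calib3d.hpp>
#include <vector>

int main() {
    // points in the simulation's coordinate system
    std::vector<cv::Point3f> simPoints = {
        {0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 1}
    };
    // the same physical locations as actually measured in the real world
    std::vector<cv::Point3f> realPoints = {
        {0.10f, 0.00f, 0.05f}, {1.10f, 0.02f, 0.04f},
        {0.12f, 1.01f, 0.06f}, {0.09f, 0.01f, 1.02f}, {1.08f, 1.03f, 1.05f}
    };

    cv::Mat affine;   // 3x4 matrix mapping sim coordinates -> real coordinates
    cv::Mat inliers;  // which point pairs agreed with the estimate
    cv::estimateAffine3D(simPoints, realPoints, affine, inliers);

    // apply it to any simulated look-at point before driving the light
    cv::Mat p = (cv::Mat_<double>(4, 1) << 0.5, 0.5, 0.5, 1.0);
    cv::Mat realTarget = affine * p; // 3x1 point in real-world coordinates
    return 0;
}
```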


#3

thanks for the advice.
I will check it.
my case has the lights in the centre of the room pointing outward and rotating like a lighthouse.
will have to see if estimateAffine3D can help here, since each light has a 360˚ “view”


#4

In this case I don’t think it will; in reality it may be easier to have a manual offset for each light’s position. If you have an imaginary moving head that sits at 0,0,0 and use a 4x4 matrix to rotate it, then give each light its own matrix and use the ofMatrix4x4 setTranslation method to enter an offset for each of these matrices (via a GUI). Multiply the two matrices together and you should be able to get the actual rotation needed for each moving head from the result. This would not be an automated calibration, but it should work.
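
A rough sketch of that per-light offset idea, assuming the legacy ofMatrix4x4 / ofVec3f API; the offset and target values are made up and would normally come from GUI sliders:

```cpp
// Sketch: one imaginary moving head at the origin plus a manual, per-light
// offset matrix. The combined matrix gives the light's effective position,
// from which the pan/tilt needed to hit a shared target can be derived.
#include "ofMain.h"

ofVec2f panTiltTowards(const ofVec3f& fixturePos, const ofVec3f& target) {
    ofVec3f dir = (target - fixturePos).getNormalized();
    float pan  = ofRadToDeg(atan2(dir.x, dir.z));          // rotation around the up axis
    float tilt = ofRadToDeg(asin(ofClamp(dir.y, -1, 1)));  // elevation above horizontal
    return ofVec2f(pan, tilt);
}

void example() {
    // the imaginary head at 0,0,0, rotated by a global 4x4 matrix
    ofMatrix4x4 headTransform;
    headTransform.makeRotationMatrix(30, ofVec3f(0, 1, 0));

    // per-light offset, tweaked by hand (via a GUI) until the beams line up
    ofMatrix4x4 lightOffset;
    lightOffset.setTranslation(0.4, 2.8, -0.1);   // hypothetical values

    // multiply the two matrices to get the transform for this light
    ofMatrix4x4 combined = lightOffset * headTransform;
    ofVec3f lightPos = combined.getTranslation();

    // pan/tilt this particular light needs to hit a shared target
    ofVec2f panTilt = panTiltTowards(lightPos, ofVec3f(1.0, 1.5, 3.0));
    ofLogNotice() << "pan " << panTilt.x << " tilt " << panTilt.y;
}
```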


#5

@fresla thanks for the advice.

I might manually move all lights to a few known points and hopefully get one transformation matrix from those collected points; one matrix that can always be applied to move from point A to point B.
will see how that works.

another thought was whether I could use the HTC Vive tracker beacon: place it on each light and collect points, which would give me the position and orientation of each light.
I guess this still means we assume each light’s motion range is a perfect sphere.


#6

Hey!
I think you can do some sort of maths. I don’t have it fully clear yet how this should work, but the idea would be:

  • have the lights placed however you need to in real world.
  • make them all point at a certain point, then adjust the pan and tilt of each one (with a GUI) until it hits the target (you probably want to leave one unchanged so it becomes your reference).
  • Once you have them all pointing at the same spot, record all the adjustments you had to make.
  • Then do the same but for a different spot.

I don’t know how many times you would have to repeat this process, but you should get to a point where you can feed all of these “real world” pans and tilts, along with the ones the computer originally calculated, into a formula and get back the position and orientation of each light in the computer’s coordinate system.
Does it make any sense?
I need to do this for a project that involves pan and tilt mirrors; I have sketched a few things but nothing is working yet. The idea is the same.
please let me know how you finally solve it.

On the other hand, using optical tracking is also a good idea. You might not even need a tracker beacon. Just an overhead camera, maybe even a Kinect, and put some colour markers on each moving head. Then use OpenCV to find the markers in the RGB image and use the Kinect to find the real-world position of each marker (there is a function in ofxKinect, whose name I don’t remember right now, that gives you this).
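
Something like this minimal sketch, assuming ofxCv plus ofxKinect with depth registration enabled; the HSV threshold values are placeholders for whatever marker colour you use:

```cpp
// Sketch: find a coloured marker in the Kinect RGB image and look up its
// real-world 3D position. Assumes kinect.setRegistration(true) was called
// before kinect.open() so RGB and depth pixels line up.
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxCv.h"

ofxKinect kinect; // initialised elsewhere (setup/open)

ofVec3f findMarkerWorldPos() {
    cv::Mat rgb = ofxCv::toCv(kinect.getPixels());
    cv::Mat hsv, mask;
    cv::cvtColor(rgb, hsv, cv::COLOR_RGB2HSV);
    // placeholder range for a green-ish marker
    cv::inRange(hsv, cv::Scalar(35, 80, 80), cv::Scalar(85, 255, 255), mask);

    cv::Moments m = cv::moments(mask, true);
    if (m.m00 < 1) return ofVec3f(0);            // marker not found
    int cx = int(m.m10 / m.m00);                 // marker centroid in pixels
    int cy = int(m.m01 / m.m00);

    // ofxKinect maps a depth pixel to a real-world coordinate in millimetres
    return kinect.getWorldCoordinateAt(cx, cy);
}
```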


#7

@roymacdonald
yes this makes sense.
I already started writing those real-world adjustments into a lookup table.
Plan B is to just read the lookup-table values based on the simulated pan/tilt values.

but ideally (you are right) I should enter all the collected values and figure out what the transformation matrix is. But if the fixture does not move evenly between each DMX step, then just one global transformation matrix per light will not work.
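
For reference, a rough sketch of such a per-light lookup table with bilinear interpolation between measured grid points (the grid resolution and the way corrections are stored are assumptions):

```cpp
// Sketch: per-light table mapping simulated pan/tilt to the measured
// "real world" pan/tilt, interpolating between measured grid points so the
// correction does not jump between DMX steps.
#include "ofMain.h"
#include <algorithm>
#include <vector>

struct PanTiltLUT {
    int panSteps  = 37;  // e.g. one measurement every 10 deg of pan
    int tiltSteps = 19;  // e.g. one measurement every 10 deg of tilt
    std::vector<ofVec2f> corrected; // measured pan/tilt, panSteps * tiltSteps entries

    ofVec2f lookup(float simPan, float simTilt) const {
        // map simulated angles to fractional grid coordinates
        float px = ofMap(simPan,  0, 360, 0, panSteps  - 1, true);
        float ty = ofMap(simTilt, 0, 180, 0, tiltSteps - 1, true);
        int x0 = int(px), y0 = int(ty);
        int x1 = std::min(x0 + 1, panSteps  - 1);
        int y1 = std::min(y0 + 1, tiltSteps - 1);
        float fx = px - x0, fy = ty - y0;

        auto at = [&](int x, int y) { return corrected[y * panSteps + x]; };
        // bilinear blend of the four surrounding measured corrections
        ofVec2f a = at(x0, y0).getInterpolated(at(x1, y0), fx);
        ofVec2f b = at(x0, y1).getInterpolated(at(x1, y1), fx);
        return a.getInterpolated(b, fy);
    }
};
```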


#8

Uh, that’s a good point about the linearity of the moving heads.
Going from computer simulation to real world is always so tricky.
please let me know if you find a solution.
Cheers.


#9

just thinking out loud and collecting some thoughts.

I could outfit every light with a 9 DOF sensor like this one:
https://www.pjrc.com/store/prop_shield.html

Then collect their orientation data, the actual DMX values and my app’s lookAt() data.

This could then be stored in a 3D lookup table using Spherical Fibonacci Mapping: http://donw.io/post/sphere-indexing/
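
A simplified sketch of that indexing idea; it uses a brute-force nearest-neighbour search over Fibonacci-distributed directions rather than the constant-time inverse mapping described in the article:

```cpp
// Sketch: store measurements indexed by direction on a Fibonacci sphere.
// findNearest() is brute force here; the linked article shows how to compute
// the index directly from a direction.
#include "ofMain.h"
#include <vector>

struct SphereLUT {
    int n = 1024;                  // number of directions on the sphere
    std::vector<ofVec3f> dirs;     // Fibonacci-distributed unit vectors
    std::vector<ofVec2f> panTilt;  // measured pan/tilt stored per direction

    void setup() {
        dirs.resize(n);
        panTilt.resize(n);
        float golden = PI * (3.0f - sqrtf(5.0f)); // golden angle in radians
        for (int i = 0; i < n; i++) {
            float y = 1.0f - 2.0f * (i + 0.5f) / n;   // from +1 down to -1
            float r = sqrtf(1.0f - y * y);
            float phi = golden * i;
            dirs[i] = ofVec3f(cosf(phi) * r, y, sinf(phi) * r);
        }
    }

    int findNearest(const ofVec3f& lookDir) const {
        int best = 0;
        float bestDot = -2;
        for (int i = 0; i < n; i++) {
            float d = dirs[i].dot(lookDir); // larger dot product = closer direction
            if (d > bestDot) { bestDot = d; best = i; }
        }
        return best;
    }
};
```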


#10

Hi. Well, if you can afford those sensors and have enough time to implement it, I would go for that. It would be the most reliable and fastest: anything changes (whatever might happen) and you get an immediate update of the orientation. The other remaining problem is position. The CV-based option sounds good. Another way would be to have sensors with radio or something that would allow them to triangulate and find their relative positions. It could be feasible, but it could also be overkill. It depends on what you are aiming for, conceptually speaking.

Please let me know if you make any progress.
I have not been able to make any on my side; I have not had time for it.

All the best!


#11

I tried it, and it seems to report back a proper 3D orientation.
But I am worried that once I use it with bigger light fixtures, which have bigger motors and bigger power ballasts, the resulting magnetic field could mess up the magnetometer.

I also glued a LIDAR sensor to one light and it builds a nice 3D point cloud of the space. If I do this with all lights it would allow me to align their positions to each other better and learn the shape of the room. Maybe even build a 3D lookup table of which pan/tilt angle encounters a wall at what distance.
But I think it would not help to align the light fixtures’ orientations; knowing that a certain pan/tilt of light 1 = a certain pan/tilt of light 2 = a specific point in the room.


#12

I equipped 3 light fixtures with the range finders and collected distance measurements after setting each light to the same pan/tilt angle.
I did this for multiple pan rotations with about 50 different tilt angles.

I placed the resulting point clouds into MeshLab and tried to use its alignment tool to get the roto-translation matrix between each of the lights. Knowing this should allow me to find out how the light fixtures are placed in relation to each other.

but seeing how noisy the clouds are makes me think the matrices will not achieve a perfect result, which means that when pointing the lights at the same virtual look-at point they would not align in real space.

will try it and see.

but since the final result requires all lights to overlap perfectly, I am now thinking it might be best to use computer vision to make that happen. Maybe I should use a wide-angle lens (> 180˚), try to locate and track each bright light spot and record its angles.
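
For instance, a minimal OpenCV sketch for finding the brightest spot in a camera frame (the threshold is a placeholder, and with a fisheye lens the pixel position would still have to be mapped through the lens model to get an angle):

```cpp
// Sketch: locate the brightest spot (a beam hitting a surface) in a frame.
#include "ofMain.h"
#include "ofxCv.h"

ofVideoGrabber grabber; // initialised elsewhere

ofVec2f findBrightestSpot() {
    cv::Mat frame = ofxCv::toCv(grabber.getPixels());
    cv::Mat gray, blurred;
    cv::cvtColor(frame, gray, cv::COLOR_RGB2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(11, 11), 0); // ignore single hot pixels

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(blurred, &minVal, &maxVal, &minLoc, &maxLoc);

    if (maxVal < 200) return ofVec2f(-1, -1);  // nothing bright enough found
    return ofVec2f(maxLoc.x, maxLoc.y);        // pixel position of the beam spot
}
```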


#13

Hey!

My solution is available here:

You need to hit a number of targets with your moving head (generally 4) and then it’ll calibrate.

The calibration is done at:

It uses a numerical model with 7 DOF for the moving head (which works well in my experience with Clay Paky Sharpie lights). Tested with around 24 Sharpies and a Vicon system for tracking.

You’ll need something that gives you the 3D positions to use as reference in space. You could use a pair of cameras if your space is large, or something like an HTC Vive if it’s smaller.

(btw if you want to calibrate things, you can always give me a call x)


#14

Thanks for your input. It’s very much appreciated.

Usually lights like this are used in stage shows where they have to focus on the stage floor, which is one plane, one 2D surface. I am guessing that pointing the lights at 4 spots means the calibration will only be accurate on this plane?
If I want the lights to also be able to point at all the other walls, I assume I would need to create one calibration per wall. But would it then not work in a non-rectangular room or one with curved surfaces?

About the 3D position you mention: did you use the Vicon system to get the 3D position of each light, or also its look-at orientation while the light fixtures were moving?
My tests show that both position and mounting orientation (if the lights are not 100% parallel to the floor) are needed to predict a light’s motion path. So I need the transformation matrix that describes the relationship of all lights to each other.

How would the 2 camera system work?

Thanks again for your advice.


#15

Heya

This calibration with 4 points then works for all points in 3D space

The Vicon was used to track a target. The ‘target’ is an object in the 3D space with a known position (it could simply be measured with a tape measure). You manually aim the light to hit the target, and the system records the 3D position of the target and the corresponding pan/tilt values.
You do that 4 times (more doesn’t help that much, but it’s best to have your calibration points at the edges of your volume of use, and there’s no harm in adding more).
At least one of your points needs to be outside of a plane (e.g. 3 on a plane and one off it is enough).

The calibration system discovers:

  • translation/rotation of the fixture
  • axis offsets
    This is enough to get low reprojection errors with a Clay Paky Sharpie

The two cameras could be used to track the location of your target (a rough sketch follows after the list):

  • stereo calibrate
  • find an object in the cameras (e.g. centroid, or markers with corners)
  • triangulate the 3D position of the object
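
A rough OpenCV sketch of that triangulation step, assuming the stereo calibration has already produced the two 3x4 projection matrices and the target’s centroid has been found in each image:

```cpp
// Sketch: triangulate the 3D position of a target seen by two calibrated
// cameras. P1 and P2 are the 3x4 projection matrices from stereo calibration.
#include <opencv2/calib3d.hpp>
#include <vector>

cv::Point3f triangulateTarget(const cv::Mat& P1, const cv::Mat& P2,
                              cv::Point2f pxCam1, cv::Point2f pxCam2) {
    std::vector<cv::Point2f> pts1 = { pxCam1 };
    std::vector<cv::Point2f> pts2 = { pxCam2 };

    cv::Mat homogeneous; // 4x1 homogeneous coordinates of the single point
    cv::triangulatePoints(P1, P2, pts1, pts2, homogeneous);
    homogeneous.convertTo(homogeneous, CV_32F);

    float w = homogeneous.at<float>(3, 0);
    return cv::Point3f(homogeneous.at<float>(0, 0) / w,
                       homogeneous.at<float>(1, 0) / w,
                       homogeneous.at<float>(2, 0) / w);
}
```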

#16

And the 3D calibration then works for all points in the 3D space?
In my case the lights sit in the middle of the room and need to look all around their 360˚ field of view.

Since it’s only a small number of points, it might be easier to measure the 3D target positions by hand with a laser range finder. But is this 3D target position in relation to each light or in relation to the room?
i.e. take light A as zero and measure how the target point is offset from it? And do that for each light?
Since the calibration finds out the “translation/rotation of the fixture”, I would not need to supply the light fixture positions? But I do need to supply the 3D target position info.
I guess I’m just trying to work out from where to measure and which elements.

I see you are actively working on the add-on right now; at the ZKM :slight_smile: very nice.
Good timing.


#17

Any Euclidean coordinate frame is fine (generally it’s best to use just one coordinate system for the whole room).

Your lights can be placed anywhere in the space
There is also a separate (rather basic) optimisation of the many-to-one problem in the MovingHead class (for each target in 3D space, there are many possible sets of pan/tilt values which will hit that position).

Mostly answering here from my phone so apologies for the brevity
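
To illustrate the many-to-one point above, a minimal sketch using a plain spherical model in the fixture’s own frame (ignoring the axis offsets the real calibration accounts for): every look direction can be reached by two pan/tilt combinations.

```cpp
// Sketch: the same look direction can be reached by two pan/tilt pairs.
#include <cmath>
#include <cstdio>

struct PanTilt { float pan, tilt; };

const float RAD2DEG = 57.29578f;

// (x, y, z) is a unit look direction in the fixture's frame, y pointing up
void panTiltSolutions(float x, float y, float z, PanTilt& a, PanTilt& b) {
    float pan  = std::atan2(x, z) * RAD2DEG; // rotation around the up axis
    float tilt = std::asin(y)     * RAD2DEG; // elevation above horizontal

    a = { pan, tilt };                       // "front" solution
    b = { pan + 180.0f, 180.0f - tilt };     // flipped pan, mirrored tilt
}

int main() {
    PanTilt a, b;
    panTiltSolutions(0.0f, 0.5f, 0.866f, a, b);
    std::printf("solution A: pan %.1f tilt %.1f\n", a.pan, a.tilt);
    std::printf("solution B: pan %.1f tilt %.1f\n", b.pan, b.tilt);
    return 0;
}
```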