Coordinates of an object on a passive image

I have a few questions regarding a project I’m working on. I need some terms so that I can look them up via Google.

I have a map (a picture) of a city printed on a wooden table. When I move an object around directly on top of it, I want to know the object’s coordinates on the map, and also its angle on the map.

The map is not zoomable or anything; it is printed on the wood as a fixed image. The object has a screen, and I want to display a certain image on this screen depending on where it is on the map. So if I move it across the map, it shows a different image depending on the location. The angle matters because of the design: the cursor is on a transparent piece, so the sensor can’t be in that place. So I need the coordinates of the sensor plus the angle, so it can recalculate the cursor’s position. It doesn’t really need to be precise; it shows a different image about every 3 cm or so. I’d rather have all the technology in the object than in the table itself (if that is possible). Can you help me out with that? Thanks for your time!
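For what it’s worth, the recalculation you describe — getting the cursor’s position from the sensor’s position plus the angle — is just a 2D rotation of the fixed sensor-to-cursor offset. A minimal sketch in Python (the function and parameter names are my own, not from any library):

```python
import math

def cursor_position(sensor_x, sensor_y, angle_deg, offset_x, offset_y):
    """Return the cursor's map coordinates from the sensor's reading.

    (sensor_x, sensor_y): sensor position on the map, in cm.
    angle_deg:            object's rotation on the map, in degrees.
    (offset_x, offset_y): where the cursor sits relative to the sensor
                          when the object is at 0 degrees, in cm.
    """
    a = math.radians(angle_deg)
    # Rotate the fixed offset by the object's angle, then add it to
    # the sensor's position.
    x = sensor_x + offset_x * math.cos(a) - offset_y * math.sin(a)
    y = sensor_y + offset_x * math.sin(a) + offset_y * math.cos(a)
    return x, y
```

For example, if the cursor is 5 cm “ahead” of the sensor and the object sits at (10, 10) rotated 90°, `cursor_position(10, 10, 90, 5, 0)` gives (10, 15).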

I thought a sort of scanner would be the best solution: something that recognizes the part of the map it is over and translates this into coordinates and an angle. Does somebody know what this technology is called (and maybe a few links to guide me in the right direction)? Or does somebody else have a better idea of how this can be realised?

Thanks in advance!

I’m having a bit of trouble following this, but I think you mean:

You have a Thing X which sits on top of a map, and it should be tracked relative to the map, position- as well as orientation-wise? When you say “The cursor is on a transparent piece, so the sensor can’t be in that place.”, does this mean that Thing X is transparent? That seems to go against “I’d rather have all the technology in the object than in the table itself” — meaning you want all the technology in Thing X, right? Something that’s transparent and can determine its own position is pretty tricky. I might not be understanding this right though, so definitely clarify a little bit if you can.

If I’m understanding right, you also have a screen in this Thing X? If that’s the case, then you can easily track the object with an overhead camera and corner detection. Determining the rotation about the vertical axis (as seen from above) is easy; finding the tilt about the other two axes is much trickier and would require you to put an Arduino with a gyroscope and accelerometer, or an iPhone, inside Thing X.

Hope that helps a little.

Maybe I should have included the image in the first post, sorry for the vague description. Here is a concept render:

There is a photo frame on the table, and the table has the map of a city printed on it. I want the photo to change depending on where people move the (digital) photo frame on the map. I heard that camera tracking may be the easiest solution, but I want to look at all the alternatives before making a decision. I’d also rather not have something installed above the table; I hope it can be integrated into the table/photo frame alone.

If the frame has a camera pointing backwards and down at the table, and it doesn’t need to update too quickly, then it could simply look down at the table (provided the light in the room is sufficient) and compare points in what it sees to points in the map. This is called feature detection and matching; some algorithms that would be helpful are SIFT and maybe ORB (I don’t recall which are rotation invariant). This requires a lot of compute power though, so an iPad would do it at maybe 10 fps — though if you don’t need it to be quick, any tablet should do. I’m sure there are other, lower-power ways to do this, but that’s probably the one that requires the least custom hardware.

It does not have to be instant; a small delay is OK. It is important that it isn’t too slow, though, otherwise it would feel sluggish. Thank you, you really helped me out with those terms (SIFT and ORB). Now I have some terms I can search for.

Do you have any specific ideas for how this could be done with a low-power solution?

Would it be possible to create a grid of RFID tags and put an RFID scanner in the frame? Would there be a problem with interference? How close can two RFID tags be to each other? I want the chips to be about 3 to 5 cm apart — is that possible?

It depends on the reader and what’s around the tag, but yes, that’s possible. I’ve used these, and depending on how you configure the antenna you can get a read range from 1 cm to 10 cm. If you want to limit the read range, get one without an antenna so you can bend it and get the right length for your table.
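If you do go the RFID route, the software side is tiny: each tag’s UID just becomes a key into a lookup table of grid cells. A minimal sketch (the UIDs and the 3 cm pitch are made-up examples — in practice you’d scan each tag once while laying out the table and record where it sits):

```python
# Map each tag's unique ID to its (column, row) cell in the grid.
TAG_GRID = {
    "04:A1:5B": (0, 0),
    "04:A1:5C": (1, 0),
    "04:A1:5D": (0, 1),
}

CELL_SIZE_CM = 3.0  # spacing between neighbouring tags on the table

def tag_to_map_position(uid):
    """Return the (x_cm, y_cm) map position for a scanned tag UID,
    or None for an unknown tag."""
    cell = TAG_GRID.get(uid)
    if cell is None:
        return None
    col, row = cell
    return col * CELL_SIZE_CM, row * CELL_SIZE_CM
```

One caveat: a plain tag grid only gives you position. For the angle you’d still need something else in the frame — a compass/IMU, or two readers a known distance apart.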