Inserting virtual objects in known maps

Hi everyone!

I have a map which is known to me, and I have placed objects in this map at various coordinates. I wish to see the map and the objects through an HMD (head-mounted display). I am planning to calibrate the camera by asking the user to stand at a particular position, look at a known object (say a lamp), and then calculate the distance accordingly.

Using a series of math (a lot of math) I can work out how to move, but my question is: how do I place a virtual object in a scene? Is there any resource available on the internet for such a task?

Please do reply…

Cheers!

hi Rahul,

it sounds like you’re talking about an Augmented Reality style system, is that correct?

the usual thing to do in this case is to draw each frame from your video camera as a background image, then translate the camera in your OpenGL scene to the position where you have calculated the person is standing (using gluLookAt or similar) then draw your virtual objects as usual.
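something like this is what i mean - a minimal sketch only, assuming fixed-function OpenGL/GLUT; getCameraFrame(), drawVirtualObjects() and the eye/target variables are placeholders for whatever video-capture and tracking code you already have:

```cpp
// Minimal sketch (fixed-function OpenGL / GLUT assumed).
// getCameraFrame() and drawVirtualObjects() are placeholders for your own
// video-capture and model-drawing code; eye/target come from your own
// estimate of where the user is standing and looking.
#include <GL/glut.h>

const unsigned char* getCameraFrame(int* w, int* h); // placeholder
void drawVirtualObjects();                           // placeholder
float eyeX, eyeY, eyeZ = 2.0f;                       // estimated head position
float targetX, targetY, targetZ;                     // estimated gaze target

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // 1. draw the current video frame as a full-screen background
    int w = 0, h = 0;
    const unsigned char* frame = getCameraFrame(&w, &h);
    glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glRasterPos2f(-1.0f, -1.0f);                     // bottom-left of viewport
    glPixelZoom(glutGet(GLUT_WINDOW_WIDTH)  / (float)w,
                glutGet(GLUT_WINDOW_HEIGHT) / (float)h);
    glDrawPixels(w, h, GL_RGB, GL_UNSIGNED_BYTE, frame); // may need a vertical flip
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION); glPopMatrix();
    glMatrixMode(GL_MODELVIEW);  glPopMatrix();

    // 2. move the GL camera to where you calculated the person is standing
    glLoadIdentity();
    gluLookAt(eyeX, eyeY, eyeZ,
              targetX, targetY, targetZ,
              0.0, 1.0, 0.0);                        // up vector

    // 3. draw the virtual objects at their known map coordinates, as usual
    drawVirtualObjects();

    glutSwapBuffers();
}
```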

this link might help (or it might not!):
http://www.opengl.org/resources/faq/tec-…-iewing.htm

i hope that points you in the right direction…

Dear Damian,

Thank you so very much for your reply; I really appreciate it.
I would like to ask you a question. Say I have a known map (as in, I know that object A will be at one coordinate, object B at another, etc.). I have an HMD and want to work out how to position the virtual objects so that even if an object goes out of my field of view, it stays in the correct place.

I am working on a project in augmented reality and basically want to have two maps: one is the positioned map (a 2D map of all the objects) and the other is the first-person view. I thought I might need to use a mapping algorithm (like SLAM) to put virtual objects into the scene, but I think SLAM is used to build a map first and then place the unknown objects.

I know that OpenGL would be used to place the objects, but I just wanted to confirm with you whether I was thinking in the right direction. Thanks a lot for providing the relevant link as well.

Regards
Rahul

best way would be to incorporate a GPS unit or at least a compass in your system; this way you’ll get precise information about your user’s movements and you’ll be able to rotate/translate your OpenGL matrix accordingly.

if you want to use only the camera input it’s going to be tough: you’ll have to use computer vision tricks (optical flow, parallel tracking, etc…) on the background in order to guess how your user is moving.
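for instance, a rough sketch of the optical-flow idea using OpenCV (my choice of library, not something you have to use): track corners between consecutive frames and average the flow vectors to get a crude estimate of image motion.

```cpp
// Rough sketch of the optical-flow idea, using OpenCV: track corners between
// consecutive grey-level frames and average the flow vectors to get a crude
// image-motion estimate.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Point2f averageFlow(const cv::Mat& prevGray, const cv::Mat& currGray)
{
    std::vector<cv::Point2f> prevPts, currPts;
    cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 10); // corners to track
    cv::Point2f mean(0.f, 0.f);
    if (prevPts.empty())
        return mean;

    std::vector<unsigned char> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err);

    int n = 0;
    for (size_t i = 0; i < prevPts.size(); ++i)
        if (status[i]) { mean += currPts[i] - prevPts[i]; ++n; }
    return n > 0 ? cv::Point2f(mean.x / n, mean.y / n) : mean;
}
```

note this only gives you image-space motion; turning that into real camera movement is where the hard computer-vision work comes in.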

Dear Theo,

Thanks a lot for the reply

Yeah, I had thought of GPS as well, but the accuracy of GPS is at best around 10 m, so unfortunately it doesn’t seem like a good option to me :frowning:

The thing is, I already have the map initialized. To be honest, I have investigated optical flow and tracking methods, but unfortunately I wasn’t able to find a technique suitable for my purpose: I want to use the map which I have created and somehow make camera estimates based on the real world.

I mean, I am sure there must be some way to map the real-world coordinate system to the modelled map, but unfortunately I don’t know exactly how.

I was thinking that I would ask the user to come to a particular position in the real world (say a red box) and then tell him to look at an object in front of him. I would know the actual position of the object (say 1 m away) and would know the coordinates of the lamp in my map. My question is: how can I use this sort of calibration to map between real-world and map coordinates?
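To make it concrete, here is roughly what I have in mind (a rough 2D sketch of my own, assuming the map and the floor share the same scale; all the names are made up, and I may well be wrong):

```cpp
// My rough idea, in 2D. Standing on the red box gives one known point
// correspondence, and looking at the lamp gives a heading; together they pin
// down a rotation + translation from map coordinates to world coordinates.
#include <cmath>

struct Vec2 { double x, y; };

struct MapToWorld {
    double cosA, sinA;                     // rotation between map and world axes
    Vec2   t;                              // translation
    Vec2 apply(Vec2 m) const {
        return { cosA * m.x - sinA * m.y + t.x,
                 sinA * m.x + cosA * m.y + t.y };
    }
};

// boxMap/lampMap: positions on my blueprint; boxWorld/lampWorld: the same two
// places measured in the real world (e.g. with a tape measure).
MapToWorld calibrate(Vec2 boxMap, Vec2 lampMap, Vec2 boxWorld, Vec2 lampWorld)
{
    double mapAngle   = std::atan2(lampMap.y   - boxMap.y,   lampMap.x   - boxMap.x);
    double worldAngle = std::atan2(lampWorld.y - boxWorld.y, lampWorld.x - boxWorld.x);
    double a = worldAngle - mapAngle;      // how much the map is rotated w.r.t. the world

    MapToWorld T;
    T.cosA = std::cos(a);
    T.sinA = std::sin(a);
    // pick t so the red box lands exactly on its measured world position
    T.t.x = boxWorld.x - (T.cosA * boxMap.x - T.sinA * boxMap.y);
    T.t.y = boxWorld.y - (T.sinA * boxMap.x + T.cosA * boxMap.y);
    return T;
}
```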

I would really appreciate it if someone could help me, point out my mistakes, or point me in a direction suitable for my application.

Thanks in advance!

Cheers

so i guess you want to do markerless tracking. this is still work-in-progress in the computer science research community. it’s very difficult, in other words.

the touch-point for this kind of tracking is this source here: http://www.robots.ox.ac.uk/~gk/PTAM/, some discussion about it here: http://forum.openframeworks.cc/t/3d-from-single-camera–/1139/0

they have downloadable source but you need to know a bit about how to compile stuff at the terminal level to get it working nicely.

the biggest drawback of PTAM is that it doesn’t support loading a pre-built map, or saving the map between sessions (which is what you’re talking about).

Dear Damien,

Thank you very much for taking the time to advise me. I have gone over PTAM and SLAM methods, but they focus more on building a map of an unknown place, right? I have used the PTAM source code; it took some time, but it was definitely worth it. The thing is that I already have the map: the real area, say containing 3 lamps at distances of 1.4 m, 2.8 m, 3.2 m, etc., is all available to me as a blueprint. I am thinking of using an algorithm like SURF or SIFT to recognize the predefined objects, but how to use the map to tell the user how far he is from a place, plus inserting virtual objects and drawing them, etc., are the issues which are eluding me.

I really hope that I am making sense, and thanks once again for replying.

Cheers!

well, when PTAM builds its map it does 2 things:

1- it figures out which points in the world are salient points from the point of view of the tracking algorithm (not the same as from the point of view of a human - this is an important point!); and

2- it stores a pixel patch with each keypoint so that it can recognise keypoints individually without needing the world-position context clue.

if your map is built by a human, it’s possible that it will not be useful for a tracking algorithm because your eyes run a different matching algorithm :wink: also, for tracking to be as good as PTAM you really need pixel patches to properly identify the keypoints when the tracking gets lost.

it’s good to hear that you already have PTAM/SLAM methods running. i think your best bet would be to try and manually overlay your existing map over a PTAM-style machine-generated map, merge the two, and then save it out in a format that can be reloaded - so you have your map and the machine-usable map together.

i’ve been thinking a lot about this stuff in the context of this project http://theartvertiser.com that i’m a part of… so i hope it all makes sense!

cheers
d

Dear Damien ,

Thanks for the quick reply. I saw your project web page; it seems pretty cool! All the best for that!

Well, you pretty much summarized my worries with what you wrote:

i think your best bet would be to try and manually overlay your existing map over a PTAM-style machine-generated map, merge the two, and then save it out in a format that can be reloaded - so you have your map and the machine-usable map together.

The question that is bugging me is: how do I do that? I mean, I tried PTAM and it took me like 3-4 days to actually set it up and make the thing work!

And the next thing is, I am scared to change anything in the PTAM code. It has a lot of headers and a lot of source files, and to be honest I am pretty confused by it.

I saw the code of OpenSURF, which is of course an image descriptor algorithm; I am currently trying to tweak some of its parameters so that I can identify objects more easily.

I just don’t know exactly how to place the objects in correspondence with the predefined map once SURF identifies an object.

What sort of calibration can we do with the camera, and how many parameters should we supply for pose estimation? Any idea at all?
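To make the question concrete, this is the kind of thing I imagine pose estimation looking like (a sketch using OpenCV’s solvePnP, which is just my guess at a suitable tool; I have not tried it, so please correct me if I am on the wrong track):

```cpp
// Sketch of pose estimation from known correspondences, using OpenCV's
// solvePnP. Inputs: 3D coordinates of a recognised object's feature points
// taken from my blueprint, and the matching 2D pixel positions found by SURF
// in the current camera image.
#include <opencv2/opencv.hpp>
#include <vector>

bool estimateCameraPose(const std::vector<cv::Point3f>& mapPoints,   // from the blueprint
                        const std::vector<cv::Point2f>& imagePoints, // from SURF matches
                        const cv::Mat& cameraMatrix,                 // 3x3 intrinsics
                        const cv::Mat& distCoeffs,                   // lens distortion
                        cv::Mat& rvec, cv::Mat& tvec)                // camera pose out
{
    if (mapPoints.size() < 4 || mapPoints.size() != imagePoints.size())
        return false;                     // solvePnP needs at least 4 correspondences
    return cv::solvePnP(mapPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
}
```

I assume the camera matrix and distortion coefficients would come from a one-off checkerboard calibration (e.g. cv::calibrateCamera), but I am not sure whether that is the right set of parameters.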

Anyway, thanks for your quick reply once again.

Cheers!

hey,

well, inserting the map interactively into the PTAM world should be fairly straightforward. once you have a nice PTAM tracking running, load up the map you’re using – or even just load the object you want to place in the world directly, bypassing your map – and rotate/scale/move it so that it seems to be in the correct place at the correct orientation. you’ll need to make some kind of interface for this - keyboard is probably easiest.
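something like this is what i mean by a keyboard interface - a sketch only, the globals and key bindings are made up; hook onKey() into whatever input layer you already have and call drawModelInMap() from the same place PTAM draws its own overlay:

```cpp
// sketch only: nudge the model's pose from the keyboard, then draw it with
// that pose.
#include <GL/gl.h>

void drawMyModel();                       // your own model-drawing routine

float modelX = 0, modelY = 0, modelZ = 0, modelYaw = 0, modelScale = 1;

void onKey(char key)
{
    const float step = 0.05f;
    switch (key) {
        case 'a': modelX -= step;      break;
        case 'd': modelX += step;      break;
        case 'w': modelZ -= step;      break;
        case 's': modelZ += step;      break;
        case 'q': modelY += step;      break;
        case 'e': modelY -= step;      break;
        case 'r': modelYaw += 5.0f;    break;
        case '+': modelScale *= 1.05f; break;
        case '-': modelScale /= 1.05f; break;
    }
}

void drawModelInMap()
{
    glPushMatrix();
    glTranslatef(modelX, modelY, modelZ);
    glRotatef(modelYaw, 0.0f, 1.0f, 0.0f);   // rotate about the vertical axis
    glScalef(modelScale, modelScale, modelScale);
    drawMyModel();
    glPopMatrix();
}
```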

then you need to save out the position and orientation of your object, along with enough data to reconstruct the same PTAM map for the next time you load it.
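the object part of that is simple enough - e.g. dumping the numbers to a plain text file (a sketch only; the PTAM map itself is the hard part, as below):

```cpp
// sketch: save/load just the object's pose as plain text. reconstructing the
// PTAM map itself is the separate, harder problem.
#include <fstream>

bool savePose(const char* path, float x, float y, float z, float yaw, float scale)
{
    std::ofstream out(path);
    if (!out) return false;
    out << x << ' ' << y << ' ' << z << ' ' << yaw << ' ' << scale << '\n';
    return true;
}

bool loadPose(const char* path, float& x, float& y, float& z, float& yaw, float& scale)
{
    std::ifstream in(path);
    if (!(in >> x >> y >> z >> yaw >> scale)) return false;
    return true;
}
```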

the hard part in this will be saving out the PTAM map and then being able to reload it. you’ll have to figure out what PTAM needs to save to be able to reconstruct itself later, figure out how to save it, and figure out how to load it again.

the easy part should be placing your object - just replace the existing code to draw the four eyes in the PTAM world with code to draw your model, but rather than drawing them on the preconfigured ground plane, add some code that allows the model to be moved around the map.

i hope that’s enough – i’m not sure i can help you any more than that, sorry!

d

Sounds like PTAMM (as opposed to PTAM) has the serialisation you are talking about already built in:
http://www.robots.ox.ac.uk/~bob/software/index.html

Hey Grimus,

Yeah! You’re right, that does look really promising! Have you used PTAMM before?
If you have, then can you tell me: is there some restriction on the number of features that can be tracked? Basically I have a huge map to be tracked, which I am concerned about.

But yeah, I have seen Georg Klein’s video of the museum objects, so I think this could be interesting.

Unfortunately, this runs only on Linux; I guess I will have to brush up my Unix skills!

Any tips for getting the software up and running? Is it easy to compile, or should it not be much of a problem?

Thanks once again for the suggestion!

Cheers!
Rahul

Dear All,

Thank you so much for all the help that you have given so far.

I found this site on Google, which I think is probably the only site that provides some sort of guide for PTAMM:

http://www.minus-reality.com/?p=1

I followed all the instructions and tried to install PTAMM, but unfortunately I am encountering the following errors (I am a total newbie in Linux):

VideoSource.h:15:23: error: cvd/image.h: No such file or directory
VideoSource.h:16:22: error: cvd/byte.h: No such file or directory
VideoSource.h:17:21: error: cvd/rgb.h: No such file or directory
In file included from System.h:14,
from main.cc:6:
GLWindow2.h:11:26: error: cvd/glwindow.h: No such file or directory
In file included from System.h:13,
from main.cc:6:
VideoSource.h:27: error: ‘CVD’ has not been declared
VideoSource.h:27: error: expected ‘,’ or ‘…’ before ‘<’ token
VideoSource.h:28: error: ‘CVD’ has not been declared
VideoSource.h:28: error: ISO C++ forbids declaration of ‘ImageRef’ with no type
VideoSource.h:28: error: expected ‘;’ before ‘Size’
VideoSource.h:32: error: ‘CVD’ has not been declared
VideoSource.h:32: error: ISO C++ forbids declaration of ‘ImageRef’ with no type
VideoSource.h:32: error: expected ‘;’ before ‘mirSize’
In file included from System.h:14,
from main.cc:6:
GLWindow2.h:20: error: ‘CVD’ has not been declared
GLWindow2.h:20: error: expected ‘{’ before ‘GLWindow’
GLWindow2.h:20: error: invalid type in declaration before ‘,’ token
GLWindow2.h:20: error: expected unqualified-id before ‘public’
In file included from main.cc:6:
System.h:52: error: field ‘mGLWindow’ has incomplete type
System.h:53: error: ‘CVD’ has not been declared
System.h:53: error: ISO C++ forbids declaration of ‘Image’ with no type
System.h:53: error: expected ‘;’ before ‘<’ token
System.h:54: error: ‘CVD’ has not been declared
System.h:54: error: ISO C++ forbids declaration of ‘Image’ with no type
System.h:54: error: expected ‘;’ before ‘<’ token
System.h:67: error: ‘GVars3’ has not been declared
System.h:67: error: ISO C++ forbids declaration of ‘gvar3’ with no type
System.h:67: error: expected ‘;’ before ‘<’ token
System.h:68: error: ‘GVars3’ has not been declared
System.h:68: error: ISO C++ forbids declaration of ‘gvar3’ with no type
System.h:68: error: expected ‘;’ before ‘<’ token
System.h:71: error: ‘GVars3’ has not been declared
System.h:71: error: ISO C++ forbids declaration of ‘gvar3’ with no type
System.h:71: error: expected ‘;’ before ‘<’ token
System.h:72: error: ‘GVars3’ has not been declared
System.h:72: error: ISO C++ forbids declaration of ‘gvar3’ with no type
System.h:72: error: expected ‘;’ before ‘<’ token
main.cc:10: error: ‘GVars3’ is not a namespace-name
main.cc:10: error: expected namespace-name before ‘;’ token
main.cc: In function ‘int main()’:
main.cc:21: error: ‘GUI’ was not declared in this scope
main.cc:31: error: expected type-specifier before ‘CVD’
main.cc:31: error: expected ‘)’ before ‘::’ token
main.cc:31: error: expected ‘{’ before ‘::’ token
main.cc:31: error: ‘::Exceptions’ has not been declared
main.cc:31: error: expected ‘;’ before ‘e’
make: *** [main.o] Error 1

I have installed everything (I think) as per the instructions, but am failing to get past this step. Can anyone please help me get past these errors? Any help in this regard will be highly appreciated.

Regards
Rahul