OpenFrameworks + robots!

Yesterday, at TED2014, Marco Tempest went on stage with the robotics project we’ve been working on over the last few months; here’s a preview from TED’s blog:

I wrote the robot’s driving and training software using OF and, as soon as I have a little free time, I plan to share some documentation about the process, because it involved a few really interesting topics: creating a timeline environment that controls both digital and real stuff (I made a couple of customizations to ofxTimeline that I want to clean up and share) and, of course, interacting live with ROS.

For now, let me share a behind-the-scenes video, showing a robot with stage panic :smile:


Hi, this looks very interesting!
What do you mean by driving and training software? Is it the graphical interface used to learn the movements seen in ?
And digital and real stuff?
I am looking forward to seeing your documentation.
Good work!!!

Hi and thanks :smile:

That video shows the very first steps in the development of our human/robot interface: now we have a single environment that controls the robot during the shows and lets Marco train it and prepare choreographies.
We can create new poses and animations using both the “0G” mode mentioned in the video (which basically means posing the robot by hand, like you would do with a big puppet) and/or using an “offline” 3D simulation. In a typical performance the robot can execute different series of movements and interact with Marco in different ways; in order to achieve this, we created a scene manager that can handle both “static” timeline-based events and live inputs from the robot sensors (using ROS).

Marco did a great job documenting the whole development: we hope to share something cool soon :smile:


Hi naus3a,
I really want to know how you set up the project to work with ROS, and which IDE you used.

The OF part was developed for both Linux and Mac, so I used Code::Blocks and Xcode.

The ROS nodes were in Python, so we talked to them using websockets via ROSBridge.
We also experimented with writing C++ nodes and embedding them in the OF app: it worked and seemed promising, but we did not have enough time to do a complete port, so the show used websockets.

Thanks for the information. We are looking at it also. Have you written a rosbridge client/library in C++? Might you share that code? It would be great to develop a complete one, on par with the roslibjs implementation. There is a recent effort for Processing here. It seems incomplete though. Perhaps a combined effort is needed.


The original idea was writing a rosbridge client addon from the ground up, but we quickly realised that ofxLibWebsockets could already handle all the websockets communication on its own, so we simply created a class that could read and write the needed JSON messages and used ofxLibWebsockets to send/receive them.

I’ll check with Marco if he’s ready to share the ROSBridge part: I think there will be no problem, but there were other people involved and I want to make sure everyone is happy with it :smile:

Thank you for the background information. Shall look forward to more…

@naus3a I realize this thread is a bit old, but do you think you can share how you connected OF with ROS? It seems like ROSBridge is an important part.


Out of the box, a Baxter robot comes with its own ROS API that lets you control actuators, read sensors, etc. On top of that, during the first phase of the project, our robotics engineer (Robert Nunez from Media Lab) used the Python API to write a set of extra routines to control the custom hardware and sketch out the behaviours we would need for the show.

The OF software was designed to be both an authoring environment and a real-time controller, and ROSBridge was simply the quickest way to communicate between ROS and OF. I wrote a class that enforced the ROSBridge protocol, wrapped our ROS topics and handled the websockets communication. At that point I could set up a system of callbacks that let me control the robot in real time: for example, we could send a message posing the robot in a particular way (much as you would pose a normal 3D model) and receive a “done” message when the operation was completed; or we could get notified when one of the arms was pushed and use that information to stop the current movement, activate a “pose learning” mode and make EDI look at the person holding his arm, smiling at them.

As an extra bonus, since the communication is network based, in theory EDI can be remotely controlled from an external computer running the OF app.
At some point we considered getting rid of ROSBridge and porting all the ROS part into a C++ library, but it would have taken a significant amount of time, so we decided to invest the time we had in developing new features and polishing the software.
