The 5th International Conference on Cognitive Systems was held in Vienna, Austria, on February 23 – 24, 2012.
The conference aimed to present the state of the art in cognitive systems and robotics. It showcased European research efforts in this field and provided an opportunity for open discussion.
Part of the lively interaction was a talk by Heico Sandee about the RoboEarth project. His talk covered how robots can exchange knowledge through RoboEarth and how to determine whether that knowledge is useful for a specific robot.
To watch the talk in full (~18 min.), please follow the link below.
Heico Sandee – RoboEarth: Connecting robots world-wide
Update (Sep 11, 2012):
Finally, we compiled a video of the demonstrator we created during the workshop, including additional explanations of what goes on behind the robots’ visible actions:
The third internal RoboEarth workshop took place at the Technical University of Munich from February 8th to 12th, 2012, and was directly followed by RoboEarth’s second Annual Review meeting on February 13th, 2012.
The RoboEarth demonstrator developed during the week-long workshop showed how two robots with different hardware and in different locations could use RoboEarth to share knowledge.
First, a PR2 robot in downtown Munich was ordered to serve a drink to a patient, who was resting in a bed in a mock-up hospital room. As a related semantic task description was available in the RoboEarth database, the PR2 could download this information and infer whether its capabilities met the task’s requirements and what other knowledge it was missing to execute the task, e.g. object detection models and environment maps. It successfully checked the availability of the missing components on RoboEarth, downloaded them, and could then start executing the task. As the drink was stored in a cabinet behind a closed door, the PR2 had to learn the articulation model for that door. After completing the learning process, the PR2 annotated the cabinet’s object model with the learned articulation model for the door and updated it in the RoboEarth database.
Then an Amigo robot in a similar (but not identical) hospital room environment in Garching, near Munich, was given the same command to serve a drink. Like the PR2, it could download the needed knowledge from RoboEarth. This time the articulation model was included, so Amigo did not have to learn it on its own during task execution and was able to grasp the door handle and open it right away.
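The knowledge-reuse loop of the demonstration can be sketched in a few lines. This is a minimal, self-contained Python sketch, not the actual RoboEarth interface: the `KnowledgeBase` class, its methods, and all keys and model names are hypothetical stand-ins chosen to illustrate the control flow (fetch the task recipe, check capabilities, download missing components, share newly learned models).

```python
# Hypothetical in-memory stand-in for the RoboEarth database; the real
# system exposes networked services, but the control flow is the same.
class KnowledgeBase:
    def __init__(self):
        self.store = {}

    def query(self, key):
        return self.store.get(key)

    def upload(self, key, value):
        self.store[key] = value


def serve_drink(robot_capabilities, kb):
    """Fetch the task recipe and gather everything needed to execute it."""
    recipe = kb.query("task:serve_drink")
    # Check that the robot's capabilities satisfy the task's requirements.
    missing_caps = set(recipe["requires"]) - set(robot_capabilities)
    if missing_caps:
        return "cannot execute: missing capabilities %s" % sorted(missing_caps)
    # Download any knowledge components (models, maps) not yet on board.
    downloaded = {}
    for component in recipe["components"]:
        model = kb.query(component)
        if model is None:
            return "cannot execute: %s not on RoboEarth" % component
        downloaded[component] = model
    return downloaded


# Example: a task recipe and its components shared on the knowledge base.
kb = KnowledgeBase()
kb.upload("task:serve_drink", {
    "requires": ["navigation", "grasping"],
    "components": ["model:drink_bottle", "map:hospital_room"],
})
kb.upload("model:drink_bottle", {"type": "point_cloud_model"})
kb.upload("map:hospital_room", {"type": "occupancy_grid"})

knowledge = serve_drink(["navigation", "grasping"], kb)
# After learning the door's articulation model, the first robot shares it,
# so the next robot (like Amigo in the demo) can skip the learning step:
kb.upload("model:cabinet_door_articulation", {"joint": "rotational"})
```

The key point the demo makes is captured in the last line: knowledge learned by one robot becomes available to every other robot that later queries the same database.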
This demonstration showed what a shared knowledge base like RoboEarth, including its reasoning services, can add to the development of robots: robots were able to navigate, recognize objects, and perform complex manipulation tasks without being explicitly pre-programmed for these tasks beforehand.
To achieve this goal, all of the involved PhD students and several professors gathered in Munich to work on tomorrow’s cloud robotics solutions. The week was characterized by a large amount of work and a limited amount of sleep – and a joint evening at a Bavarian restaurant.
Update (Jan 05, 2012):
More than 100 people joined the introduction to RoboEarth and the interactive workshops. They created and detected their first 3D object models using the RoboEarth platform. We want to thank everyone who helped organize the successful event, as well as all participants who showed their interest.
RoboEarth will present itself as part of the European Robotics Week from November 28th to December 4th, 2011. As part of this, the RoboEarth team will host a live webcast on Friday, December 2nd, 2011, starting at 15:00 (CET).
Dr. Oliver Zweigle is going to present a brief introduction to the concepts of RoboEarth. Subsequently, an interactive workshop will be held. The aim of the workshop is to let anyone interested try out the RoboEarth software, build 3D object models themselves, and use them to detect the described objects.
The workshop’s prerequisites and details on how to register can be found at http://www.roboearth.org/webcast. Registration will be open until November 20th, 2011. The webcast itself will also be made available through this website.
Members of the RoboEarth team contributed seven papers to the IROS’11 conference, which took place in San Francisco (USA) from September 25th to 30th. In addition, RoboEarth supported a workshop on Knowledge Representation for Autonomous Robots.
During the workshop, Jos Elfring gave an introduction to RoboEarth’s approach to world modelling. It uses a multiple hypothesis filter (MHF) to keep track of objects over time and introduces techniques to improve the probabilistic models by taking prior knowledge about objects into account, e.g. object dynamics, expected locations, relations between object classes, and detector characteristics. For more details on this topic, take a look at the corresponding paper, Knowledge-Driven World Modeling.
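The core idea of combining detector output with prior knowledge can be illustrated with a deliberately simplified Python sketch. This is not the MHF from the paper: it shows only one ingredient, namely how a prior over expected object locations reweights raw detector confidences, in the spirit of p(object | detection) ∝ p(detection | object) · p(object). The classes, locations, and numbers are invented for illustration.

```python
# Simplified illustration (not the RoboEarth implementation) of how prior
# knowledge reweights object detections. Each detection's posterior weight
# combines the detector's confidence with a prior over where that object
# class is expected to appear.

def reweight_detections(detections, location_prior):
    """detections: list of (object_class, location, detector_confidence).
    location_prior: dict mapping (object_class, location) -> prior prob.
    Returns the detections sorted by normalized posterior weight, best first."""
    weighted = []
    for obj_class, location, confidence in detections:
        # Unseen (class, location) pairs get a small default prior.
        prior = location_prior.get((obj_class, location), 0.01)
        weighted.append((obj_class, location, confidence * prior))
    # Normalize so the weights form a distribution over the hypotheses.
    total = sum(w for _, _, w in weighted) or 1.0
    weighted = [(c, l, w / total) for c, l, w in weighted]
    return sorted(weighted, key=lambda t: t[2], reverse=True)
```

With such a prior, a high-confidence but implausible detection (a mug in the bathroom) can be outranked by a lower-confidence detection in a location where the object class is actually expected (a mug in the kitchen).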
Other papers presented during the regular paper sessions were:
- Autonomous Semantic Mapping for Robots Performing Everyday Manipulation Tasks in Kitchen Environments. Nico Blodow, Lucian Cosmin Goron, Zoltan-Csaba Marton, Dejan Pangercic, Thomas Ruehr, Moritz Tenorth, and Michael Beetz
- Parameterizing Actions to Have the Appropriate Effects. Lorenz Mösenlechner and Michael Beetz
- Logic Programming with Simulation-Based Temporal Projection for Everyday Robot Object Manipulation. Lars Kunze, Mihai Emanuel Dolha and Michael Beetz
- Towards Semantic SLAM using a Monocular Camera. Javier Civera, Dorian Gálvez-López, Luis Riazuelo, J.D. Tardós and J.M.M. Montiel
- Dense Multi-Planar Scene Estimation from a Sparse Set of Images. Alberto Argiles, Javier Civera and Luis Montesano
- Real-Time Loop Detection with Bags of Binary Words. Dorian Gálvez-López and Juan D. Tardós
We are happy to announce RoboEarth’s first open source software release. This release allows you to create 3D object models and upload them to RoboEarth. It also allows you to download any model stored in RoboEarth and detect the described object using a Kinect or a webcam.
If you are familiar with ROS, creating and using object models is easy. As shown in the video tutorial above, the process uses three main packages:
- RoboEarth’s re_object_recorder package allows you to create your own 3D object model using Microsoft’s Kinect sensor. By recording and merging point clouds gathered from different angles around the object, a detailed model is created, which may be shared with the world by uploading it to RoboEarth.
- RoboEarth’s re_kinect_object_detector package allows you to detect models you download from RoboEarth using a Kinect.
- Alternatively, you may also use RoboEarth’s re_vision package to detect objects using a common RGB camera.
A complete overview of the process can be found at http://www.ros.org/wiki/roboearth.
RoboEarth aims to create an object database including semantic descriptors. Semantic descriptors allow robots not only to detect objects, but to reason about them. For example, if a robot is asked to serve a drink, semantic object descriptors allow the robot to determine whether all required objects are available, whether an object model is missing, and whether a missing model is available via RoboEarth. You can help us with that process by supplying meaningful names and descriptions for the objects you create.
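The kind of reasoning semantic descriptors enable can be sketched with a toy class hierarchy. This Python sketch is purely illustrative: the is-a relations and names below are made up and do not reflect RoboEarth’s actual ontology. It shows how a task asking for an abstract class ("drink_container") can be satisfied by a concrete model ("bottle") that the hierarchy marks as a kind of it.

```python
# Toy is-a hierarchy; real semantic descriptors use a proper ontology,
# but the subsumption check works the same way. All names are invented.
IS_A = {
    "bottle": "drink_container",
    "cup": "drink_container",
    "drink_container": "container",
}

def is_subclass(cls, ancestor):
    """Walk the is-a chain from cls upward, looking for ancestor."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = IS_A.get(cls)
    return False

def missing_models(required_classes, available_models):
    """Return the required classes for which no available model qualifies."""
    missing = []
    for req in required_classes:
        if not any(is_subclass(model, req) for model in available_models):
            missing.append(req)
    return missing
```

A robot with only a "bottle" model can thus answer a request for any "drink_container", and whatever ends up in `missing_models` is exactly what it would next try to fetch from RoboEarth.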
We are looking forward to your feedback in the comments below or at info at roboearth.org.