At this year’s PICNIC Festival in Amsterdam, RoboEarth held a joint workshop on Robots and the Internet of Things (IoT) with Council, a think tank that is part of the European Commission’s High Level Expert Group on the Internet of Things (EG IoT).
The event drew unexpectedly high attendance: a packed room of interested, engaged participants, and lively discussion and debate. The main topics centered on how robots could enrich our lives through the Internet, and on the challenges both communities face in realizing a vision in which the Internet gets hands through robots, and robots greatly benefit from the Internet.
As chairman Rob van Kranenburg put it in his introduction: “Rather than programming robots to handle every potential situation, the Internet of Things could create an environment in which the objects themselves inform robots of their purpose and usage. Tomorrow’s smart objects can provide sensing, robots can act, processing can be on the robot or in the Cloud. To accomplish this, the fields of robotics and IoT need to define common standards for knowledge storage, representation and communication.”
The topics of debate had clear connection points, and pointed to potential future research questions for RoboEarth, including:
- Tomorrow’s smart objects can provide sensing, robots can act, processing can be on the robot or in the Cloud (e.g., using RoboEarth’s Cloud Engine)
- Rather than programming robots to handle every potential situation, the Internet of Things could create an environment in which the objects themselves inform robots of their purpose and usage.
- Both the IoT and RoboEarth encode knowledge. The fields of robotics and IoT need to define common standards for knowledge storage and representation.
- The IoT, robots, and humans need to communicate. The fields of robotics and IoT need to define interfaces and common standards for communication.
For more information, have a look at the article “Enlisting Robots – Once robots are integrated into the Internet of Things, they can perform tasks automatically,” published in the RFID Journal.
Update (Feb 27, 2013):
Even more information can be found in the article “The Internet of Things: Robots, RFID & Co-operation,” published in the December 2012 issue of Elektor.
This second meeting focused on gathering feedback on the project’s direction at its halfway point. Thanks to all participants, and in particular to the members of the committee for their participation and valuable feedback.
We are pleased to announce that the conference paper “The RoboEarth language: Representing and Exchanging Knowledge about Actions, Objects, and Environments” (Moritz Tenorth, Alexander Perzylo, Reinhard Lafrenz and Michael Beetz) has won the Best Cognitive Robotics Paper Award at ICRA 2012.
The paper covers the design of the semantic RoboEarth language and how it is used to describe and reason about tasks, objects and environments in a way that allows knowledge to be shared between different robots. Task descriptions include information about the required physical attributes and software components, which are matched against the capabilities described in a robot’s semantic self-model. This makes it possible to infer whether a robot is capable of performing a certain task and, if not, whether it could be enabled to do so by downloading additional information from RoboEarth.
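The matching idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual RoboEarth language (which is semantic/OWL-based); all class names, capability strings and component keys below are invented for the example. The key distinction it models is that missing *physical* capabilities make a task infeasible, whereas missing *software* components can potentially be downloaded from RoboEarth.

```python
# Hypothetical sketch of RoboEarth-style capability matching. A task's
# description lists required capabilities and software components; a
# robot's self-model lists what it actually has.
from dataclasses import dataclass


@dataclass
class TaskDescription:
    name: str
    required_capabilities: set   # physical, e.g. {"arm", "gripper"}
    required_components: set     # downloadable, e.g. {"object_model:drink"}


@dataclass
class RobotSelfModel:
    capabilities: set
    components: set


def match(task: TaskDescription, robot: RobotSelfModel):
    """Return (feasible, missing_components).

    The task is infeasible if a physical capability is missing;
    missing software components can potentially be fetched instead.
    """
    missing_caps = task.required_capabilities - robot.capabilities
    missing_comps = task.required_components - robot.components
    return (not missing_caps, missing_comps)


pr2 = RobotSelfModel(
    capabilities={"base", "arm", "gripper", "camera"},
    components={"map:hospital_room"},
)
serve_drink = TaskDescription(
    name="serve_drink",
    required_capabilities={"base", "arm", "gripper", "camera"},
    required_components={"map:hospital_room", "object_model:drink",
                         "object_model:cabinet"},
)

feasible, to_download = match(serve_drink, pr2)
print(feasible)              # the robot has every physical capability
print(sorted(to_download))   # components still to fetch from the database
```

In this toy run the PR2 is physically capable, and the matcher reports the two object models it would still need to download before executing the task.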
The 5th International Conference on Cognitive Systems was held in Vienna, Austria, on February 23 – 24, 2012.
The conference aimed to present the state of the art in cognitive systems and robotics. It showcased European research efforts in this field and provided an opportunity for open discussion.
Part of the lively interaction was a talk by Heico Sandee about the RoboEarth project. The talk covers how robots can exchange knowledge through RoboEarth and how to determine whether a piece of knowledge might be useful for a specific robot.
To watch the full talk (~18 min.), please follow the link below.
Heico Sandee – RoboEarth: Connecting robots world-wide
Update (Sep 11, 2012):
Finally, we have compiled a video of the demonstrator created during the workshop, including additional explanations of what goes on behind the robots’ visible actions:
The third internal RoboEarth workshop took place at the Technical University of Munich from February 8th to 12th, 2012, and was directly followed by RoboEarth’s second Annual Review meeting on February 13th, 2012.
The RoboEarth demonstrator developed during the week-long workshop showed how two robots with different hardware and in different locations could use RoboEarth to share knowledge.
First, a PR2 robot in downtown Munich was ordered to serve a drink to a patient resting in a bed in a mock-up hospital room. As a matching semantic task description was available in the RoboEarth database, the PR2 could download it and infer whether its capabilities met the task’s requirements and what other knowledge it was missing to execute the task, such as object detection models and environment maps. It successfully checked the availability of the missing components on RoboEarth, downloaded them and, as a result, could start executing the task. As the drink was stored inside a cabinet behind a closed door, the PR2 had to learn the articulation model for that door. After completing the learning process, the PR2 annotated the cabinet’s object model with the learned articulation model for the door and updated it in the RoboEarth database.
Then an Amigo robot in a similar (but not identical) hospital room environment in Garching, near Munich, was given the same command to serve a drink. Like the PR2, it downloaded the knowledge it needed from RoboEarth. This time the articulation model was included, so Amigo did not have to learn it during task execution: it was able to grasp the door handle and open the door right away.
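The learn-once-reuse-everywhere cycle from the demonstrator can be sketched as follows. This is a deliberately simplified illustration: a plain dictionary stands in for the RoboEarth database, and the key scheme and the `learn_articulation_model` stub are invented for the example, not part of the real system.

```python
# Minimal sketch of the knowledge-sharing cycle: the first robot learns a
# model and uploads it; the second robot finds it and reuses it directly.
roboearth_db = {}  # stand-in for the RoboEarth database: key -> artifact


def learn_articulation_model(object_id):
    # Stand-in for the real learning procedure (e.g. observing the door
    # while opening it); returns a made-up model description.
    return {"object": object_id, "joint": "revolute", "axis": "left_hinge"}


def get_or_learn(object_id):
    """Reuse a stored articulation model if available, else learn and share."""
    key = "articulation:" + object_id
    model = roboearth_db.get(key)
    if model is None:
        model = learn_articulation_model(object_id)  # the PR2's situation
        roboearth_db[key] = model                    # annotate and upload
    return model


# PR2 in Munich: no model in the database yet, so it learns and uploads.
pr2_model = get_or_learn("cabinet_door")
# Amigo in Garching: finds the uploaded model and skips the learning step.
amigo_model = get_or_learn("cabinet_door")
print(amigo_model is pr2_model)  # same shared artifact, no re-learning
```

The point of the sketch is the asymmetry between the two calls: only the first one pays the cost of learning, which is exactly what the PR2/Amigo demonstration showed at system scale.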
This demonstration showed what a shared knowledge base like RoboEarth, together with its reasoning services, can add to the development of robots: the robots were able to navigate, recognize objects and perform complex manipulation tasks without being explicitly pre-programmed for them beforehand.
To achieve this goal, all of the involved PhD students and several professors gathered in Munich to work on tomorrow’s cloud robotics solutions. The week was characterized by a large amount of work and a limited amount of sleep – and a joint evening at a Bavarian restaurant.