Google Summer of Code 2017
The software libraries that originate from our laboratory and are now used and supported by a larger user community are: the KnowRob system for robot knowledge processing, the CRAM framework for plan-based robot control, openEASE for collecting and analyzing experiment data, and RoboSherlock for cognitive perception. In our group, we have a strong focus on open-source software and on the active maintenance and integration of projects. The systems we develop are available under the BSD, Apache 2.0, and in part (L)GPL licenses.
For the proposed topics in the context of our work please refer to the section further below.
For a PDF version of this year's ideas page, and a brief introduction of our research group, please see this document.
When contacting us, please make sure you read the description of the topic you are interested in carefully. Only contact the person responsible for the topic(s) you are interested in, and please only ask specific, topic-relevant questions; otherwise your emails will not be answered, due to the limited resources we have for processing the vast amount of GSoC inquiries. For more general questions, please use Gitter.
KnowRob -- Robot Knowledge Processing
KnowRob is a knowledge processing system that combines knowledge representation and reasoning methods with techniques for acquiring knowledge from different sources and for grounding the knowledge in a physical system. It provides robots with knowledge to be used in their tasks, for example action descriptions, object models, environment maps, and models of the robot's hardware and capabilities. The knowledge base is complemented with reasoning methods and techniques for grounding abstract, high-level information about actions and objects in the perceived sensor data.
KnowRob has become the main knowledge base in the ROS ecosystem and is actively used in academic and industrial research labs around the world. Several European research projects use the system for a wide range of applications, from understanding instructions from the Web (RoboHow), describing multi-robot search-and-rescue tasks (SHERPA), and assisting elderly people in their homes (SRS), to industrial assembly tasks (SMErobotics).
KnowRob is an open-source project hosted at GitHub that also provides extensive documentation on its website – from getting-started guides to tutorials for advanced topics in robot knowledge representation.
CRAM -- Robot Plans
CRAM is a high-level system for designing and performing abstract robot plans to define intelligent robot behavior. It consists of a library of generic, robot platform independent plans, elaborate reasoning mechanisms for detecting and repairing plan failures, as well as interface modules for executing these plans on real robot hardware. It supplies robots with concurrent, reactive task execution capabilities and makes use of knowledge processing backends, such as KnowRob, for information retrieval.
CRAM builds on top of the ROS ecosystem and is actively developed as an open-source project on GitHub. It is the basis for high-level robot control in many labs around the world, especially in several European research projects covering applications such as geometrically abstract object manipulation (RoboHow), multi-robot task coordination and execution (SHERPA), experience-based retrieval of task parametrizations (RoboEarth), and safe human-robot interaction (SAPHARI). Further information, as well as documentation and application use cases, can be found on the CRAM website.
openEASE -- Experiment Knowledge Database
OpenEASE is a generic knowledge database for collecting and analysing experiment data. Its foundation is the KnowRob knowledge processing system and ROS, enhanced by reasoning mechanisms and a web interface developed for inspecting comprehensive experiment logs. These logs can be recorded, for example, from complex CRAM plan executions, virtual-reality experiments, or human tracking systems. OpenEASE offers interfaces both for human researchers who want to visually inspect what happened during a robot experiment, and for robots that want to reason about previous task executions in order to improve their behavior.
The OpenEASE web interface as well as further information and publication material can be accessed through its publicly available website. It is meant to make complex experiment data available to research fields adjacent to robotics, and to foster an intuition about robot experience data.
RoboSherlock -- Framework for Cognitive Perception
RoboSherlock is a common framework for cognitive perception, based on the principle of unstructured information management (UIM). UIM has proven to be a powerful paradigm for scaling intelligent information and question-answering systems towards real-world complexity (e.g. IBM's Watson system). Complexity in UIM is handled by identifying (or hypothesizing) pieces of structured information in unstructured documents, by applying ensembles of experts to annotate these pieces of information, and by testing and integrating the isolated annotations into a comprehensive interpretation of the document.
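The ensemble-of-experts idea can be sketched in a few lines of C++. This is a minimal, hypothetical illustration using only the standard library; the names `Annotation`, `Document`, and `Annotator` are ours and are not the RoboSherlock or UIMA API, where annotators operate on a shared Common Analysis Structure instead:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// A piece of structured information hypothesized in unstructured data.
struct Annotation {
    std::string type;   // e.g. "cluster", "color", "shape"
    std::string value;  // the expert's verdict
};

// The "document" -- in robot perception, one camera frame / point cloud.
struct Document {
    std::string raw;                     // stands in for raw sensor data
    std::vector<Annotation> annotations; // accumulated expert opinions
};

// An expert that annotates the document.
struct Annotator {
    virtual ~Annotator() = default;
    virtual void process(Document& doc) = 0;
};

// Hypothesis generation: propose candidate object regions in the raw data.
struct ClusterAnnotator : Annotator {
    void process(Document& doc) override {
        doc.annotations.push_back({"cluster", "region#1"});
    }
};

// A second expert annotating the hypotheses (here: a color verdict).
struct ColorAnnotator : Annotator {
    void process(Document& doc) override {
        doc.annotations.push_back({"color", "red"});
    }
};

// Run the ensemble of experts in sequence over one document.
Document runPipeline() {
    Document doc{"raw sensor frame", {}};
    std::vector<std::unique_ptr<Annotator>> pipeline;
    pipeline.push_back(std::make_unique<ClusterAnnotator>());
    pipeline.push_back(std::make_unique<ColorAnnotator>());
    for (auto& a : pipeline) a->process(doc);
    return doc;
}
```

The key design point is that each expert is isolated behind the same interface, so new perception routines can be added to the ensemble without touching the others.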
RoboSherlock builds on top of the ROS ecosystem, is able to wrap almost any existing perception algorithm or framework, and allows their results to be combined easily and coherently. The framework is closely integrated with two of the most popular libraries used in robotic perception, namely OpenCV and PCL. More details about RoboSherlock can be found on the project webpage.
Proposed Topics
In the following, we list our proposals for the Google Summer of Code topics that contribute to the aforementioned open-source projects.
Topic 1: Multi-modal Cluttered Scene Analysis in Knowledge Intensive Scenarios
Main Objective: In this topic we will develop algorithms that enable robots in a human environment to recognize objects in difficult and challenging scenarios. To achieve this, the participant will develop annotators for RoboSherlock that are particularly aimed at object-hypothesis generation and merging. Generating a hypothesis essentially means finding regions/clusters in the raw data that form a single object or object part. In particular, this entails the development of segmentation algorithms for visually challenging scenes or object properties, such as transparent objects or cluttered, occluded scenes. The addressed scenarios include stacked, occluded objects placed on shelves, and objects in drawers, refrigerators, dishwashers, cupboards, etc. In typical scenarios, these confined spaces also bear an underlying structure, which will be exploited and used as background knowledge to aid perception (e.g. stacked plates would show up as parallel lines using an edge detector). Specifically, we would start from (but not necessarily limit ourselves to) the implementation of two state-of-the-art algorithms described in recent papers:
[1] Aleksandrs Ecins, Cornelia Fermüller and Yiannis Aloimonos: Cluttered Scene Segmentation Using the Symmetry Constraint. IEEE International Conference on Robotics and Automation (ICRA), 2016.
[2] Andreas Richtsfeld, Thomas Mörwald, Johann Prankl, Michael Zillich and Markus Vincze: Segmentation of Unknown Objects in Indoor Environments. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012.
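To give a feel for the kind of structural background knowledge mentioned above (stacked plates appearing as parallel lines), here is a minimal, hypothetical C++ sketch that takes edge segments, as a line detector would produce, and counts near-parallel pairs as a crude regularity cue. The `Segment` type and `parallelPairs` function are illustrative only; a real annotator would work on OpenCV/PCL data structures:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A 2-D edge segment, e.g. output of a Hough or LSD line detector.
struct Segment { double x1, y1, x2, y2; };

// Orientation of a segment, normalized to [0, pi).
double orientation(const Segment& s) {
    double a = std::atan2(s.y2 - s.y1, s.x2 - s.x1);
    if (a < 0) a += M_PI;
    return a;
}

// Count pairs of segments that are parallel within `tol` radians --
// a crude cue for stacked, regular structures (e.g. plates on a shelf).
int parallelPairs(const std::vector<Segment>& segs, double tol) {
    int count = 0;
    for (std::size_t i = 0; i < segs.size(); ++i)
        for (std::size_t j = i + 1; j < segs.size(); ++j) {
            double d = std::fabs(orientation(segs[i]) - orientation(segs[j]));
            d = std::min(d, M_PI - d);  // orientation wraps around at pi
            if (d < tol) ++count;
        }
    return count;
}
```

For three roughly horizontal shelf edges plus one vertical edge, the three horizontal ones form three parallel pairs, which could then be fed to a hypothesis-generation annotator as evidence for a stacked structure.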
Task Difficulty: The task is considered to be challenging, as it is still a hot research topic where general solutions do not exist.
Requirements: Good programming skills in C++ and basic knowledge of CMake and ROS. Experience with PCL and OpenCV is preferred.
Expected Results: Currently, the RoboSherlock framework lacks good perception algorithms that can generate object hypotheses in challenging scenarios (clutter and/or occlusion). The expected results are several software components, based on recent advances in cluttered scene analysis, that are able to successfully recognize objects in the scenarios mentioned in the objectives, or in a subset of them.
Contact: Ferenc Bálint-Benczédi
Topic 2: Realistic Grasping using Unreal Engine
Main Objective: The objective of the project is to implement various human-like grasping approaches in a game developed using Unreal Engine.
The game consists of a household environment where a user has to execute various given tasks, such as cooking a dish, setting the table, cleaning the dishes, etc. The interaction is done using various sensors that map the user's hands onto the virtual hands in the game.
To make manipulating objects easier, the user should be able to switch at runtime between the types of grasp (pinch, power grasp, precision grip, etc.) he/she would like to use.
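Such runtime grasp switching could be modeled as a small controller mapping the selected grasp type to target finger closures. The following is a standard-library-only sketch under our own assumptions; `GraspController`, the finger names, and the closure values are hypothetical, and in an actual Unreal Engine implementation the targets would drive skeletal-control nodes or an animation blueprint of the hand rig:

```cpp
#include <cassert>
#include <map>
#include <string>

// Grasp types the user can cycle through at runtime.
enum class GraspType { Pinch, PowerGrasp, PrecisionGrip };

// Target closure per finger in [0 = open, 1 = fully closed].
using FingerTargets = std::map<std::string, double>;

class GraspController {
    GraspType current_ = GraspType::PowerGrasp;
public:
    // Called when the user presses the grasp-switch button.
    void setGrasp(GraspType g) { current_ = g; }
    GraspType grasp() const { return current_; }

    // Per-grasp finger configuration (illustrative values only).
    FingerTargets targets() const {
        switch (current_) {
            case GraspType::Pinch:
                return {{"thumb", 0.8}, {"index", 0.8}, {"middle", 0.1},
                        {"ring", 0.1},  {"pinky", 0.1}};
            case GraspType::PrecisionGrip:
                return {{"thumb", 0.6}, {"index", 0.6}, {"middle", 0.6},
                        {"ring", 0.2},  {"pinky", 0.2}};
            case GraspType::PowerGrasp:
            default:
                return {{"thumb", 0.9}, {"index", 0.9}, {"middle", 0.9},
                        {"ring", 0.9},  {"pinky", 0.9}};
        }
    }
};
```

Keeping the grasp-to-pose mapping in one place like this makes it straightforward to add new grasp types without changing the input handling or the hand animation code.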
Task Difficulty: The task is considered easy, as it requires less algorithmic knowledge and more programming skill.
Requirements: Good programming skills in C++. Good knowledge of the Unreal Engine API. Experience with skeletal control / animations / 3D models in Unreal Engine.
Expected Results: We expect to enhance our currently developed robot-learning game with realistic, human-like grasping capabilities. These would allow users to interact more realistically with the given virtual environment. The ability to manipulate objects of various shapes and sizes will increase the repertoire of tasks executed in the game, and being able to switch between specific grasps will allow us to learn grasping models specific to each manipulated object.
Contact: Andrei Haidu
Topic 3: ROS with PR2 integration in Unreal Engine
Main Objective: The objective of the project is to integrate the PR2 robot in Unreal Engine with ROS support. Since ROS does not support Windows, this will be implemented in the Linux distribution of the engine.
Task Difficulty: The task is of medium difficulty, as it requires programming skills in several frameworks (ROS, Linux, Unreal Engine).
Requirements: Good programming skills in C++. Good knowledge of the Unreal Engine API, ROS and Linux. Some experience in robotics.
Expected Results: We expect to be able to load URDF models of various robots (e.g. the PR2) and to control them through ROS in the game engine, in a similar fashion to a robotics simulator.
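At its core, controlling a URDF-described robot inside a game engine means mapping named joint commands, as carried by ROS joint messages, onto the engine's skeletal joints while respecting the limits declared in the URDF. The sketch below is a hypothetical, standard-library-only illustration of that mapping; `RobotModel`, `Joint`, and the joint name are our own names, not ROS or Unreal Engine API:

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <string>

// A URDF-style revolute joint with position limits (radians),
// as parsed from the <limit lower="..." upper="..."> tag.
struct Joint {
    double lower = 0.0, upper = 0.0;
    double position = 0.0;  // current angle, applied to the engine skeleton
};

// Maps ROS-style joint names to the robot's joints and applies commands.
class RobotModel {
    std::map<std::string, Joint> joints_;
public:
    void addJoint(const std::string& name, double lower, double upper) {
        joints_[name] = Joint{lower, upper};
    }
    // Apply one position command, clamped to the URDF limits
    // (this is what a subscriber callback for joint commands would do).
    void command(const std::string& name, double pos) {
        auto it = joints_.find(name);
        if (it == joints_.end()) return;  // unknown joint: ignore
        Joint& j = it->second;
        j.position = std::max(j.lower, std::min(j.upper, pos));
    }
    double position(const std::string& name) const {
        return joints_.at(name).position;
    }
};
```

In the actual project, the joint table would be filled by a URDF parser and the resulting positions forwarded to the robot's skeletal mesh each tick.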
Contact: Andrei Haidu
Topic 4: Plan Library for Autonomous Robots performing Chemical Experiments
Main Objective: The objective of this topic is to develop, in the Gazebo simulator, a set of plan-based control programs that equip an autonomous mobile robot to perform a set of typical manipulations within a chemistry laboratory. The plan-based control programs resulting from the project will be tested on the real PR2 robot at the Institute for Artificial Intelligence of the University of Bremen, Germany.
The successful candidate will use the domain-specific language of the CRAM toolbox to write plan-based control programs that enable the PR2 robot to perform manipulations such as: simple grasping of different containers, screwing and unscrewing the cap of a test tube, pouring a substance from one container into another, operating a centrifuge, etc.
In the first phase of the project, the successful candidate will become familiar with the domain-specific language of the CRAM toolbox and the parameters of the plan-based control programs. This phase will culminate with the student having coded a simple, complete, and fully runnable plan-based control program.
In the second phase of the project, we will decide together with the successful candidate on the set of manipulations to be implemented in order to enable the robot to perform a simple and complete chemical experiment.
In the last phase of the project, the plan-based control programs developed in the second phase will be put together and the complete chemical experiment will be tested and fixed until it runs successfully.
The set of plan-based control programs resulting from the project will form the execution basis of future experiments at IAI aimed at figuring out how an autonomous robot can reproduce a chemical experiment represented with semantic-web tools.
Requirements: The ideal candidate is comfortable programming in Lisp and familiar with ROS concepts. Familiarity with the Gazebo simulator and the CRAM toolbox is a big plus.
Expected Results: We expect to successfully code a library of plan-based control programs that enable an autonomous robot to manipulate typical chemical laboratory equipment and perform a small class of chemical experiments in the Gazebo simulator.
Contact: Gheorghe Lisca
Prof. Dr. hc. Michael Beetz PhD
Head of Institute
Contact via
Andrea Cowley
assistant to Prof. Beetz
ai-office@cs.uni-bremen.de