Information and troubleshooting for the RoboHow demo
Important: Before starting anything, make sure that the Kinect on the robot is set to high resolution mode and that the robot is localized. High resolution is important for getting correct poses for objects that are detected based on RGB images. Filtering the point clouds/depth image so that data only comes from the regions of interest (the two counter tops and the table) requires the robot to be localized. Slight errors in localization are OK, but larger ones might cause false detections.
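The exact commands depend on the setup; as a rough sketch, assuming the head Kinect runs under an openni-style driver (the node name /kinect_head/driver below is only an assumption) and AMCL is used for localization, the two preconditions can be checked with:

rosrun dynamic_reconfigure dynparam get /kinect_head/driver   # look for the image_mode/resolution setting (node name is an assumption)
rostopic echo -n 1 /amcl_pose                                  # the robot should report a sensible pose estimate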
Running the perception pipeline for the demo
To start the demo pipeline, run:
rosrun iai_rs_cpp rs_runAE demo
Note: on the demo PC this can also be done by running rs_run <AE name>, where <AE name> is the name of the analysis engine.
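For example, since the analysis engine for the demo is called demo, the pipeline can be started on the demo PC with:

rs_run demo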
Optional: you can run the realtime URDF filter, which filters the robot out of the 3D data:
roslaunch realtime_urdf_filter realtime_urdf_filter.launch
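To quickly check that the filter came up, you can list the filter-related topics; the exact topic names depend on the launch file, so the grep pattern below is only a guess:

rostopic list | grep -i filter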
Note 1: if using the URDF filter, the config file path in CollectionReader2.xml needs to be changed. To do this, open CollectionReader2.xml, located in the {PROJ_HOME}/descriptors/annotators/ folder, and look for the lines:
<nameValuePair>
  <name>camera_config_file</name>
  <value>
    <!-- <string>config/config_Kinect_robot_urdf_filter.ini</string> -->
    <string>config/config_Kinect_robot.ini</string>
  </value>
</nameValuePair>
Uncomment the line containing the urdf filter config and comment out the line after it.
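After the change, the nameValuePair should look like this (only the comments are swapped):

<nameValuePair>
  <name>camera_config_file</name>
  <value>
    <string>config/config_Kinect_robot_urdf_filter.ini</string>
    <!-- <string>config/config_Kinect_robot.ini</string> -->
  </value>
</nameValuePair>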
Note 2: Robot self-filtering does not work on the demo PC yet.
Using MLNs
To use the results from the MLN-based inference, run:
roslaunch mln_query mln_query.launch
The MLN atoms generator and the inferencer are in the pipeline by default, but the demo will run even if mln_query is not running.
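If the demo should run entirely without MLN inference, the two MLN nodes can simply be commented out in demo.xml (see the pipeline definition below):

<!-- <node>MLNAtomsGenerator</node> -->
<!-- <node>MLNInferencer</node> -->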
Pipeline definition
The pipeline for the demo is defined in an analysis engine called demo.xml. It can be found in ${PROJ_DIR}/descriptors/analysis_engines. The important part is:
<flowConstraints>
  <fixedFlow>
    <node>CollectionReader2</node>
    <node>URDFRegionFilter</node>
    <node>NormalEstimator</node>
    <node>PlaneAnnotator</node>
    <node>PointCloudClusterExtractor</node>
    <node>SpatulaSegmentation</node>
    <node>Cluster3DGeometryAnnotator</node>
    <node>PrimitiveShapeAnnotator</node>
    <!-- <node>ClusterTracker</node> -->
    <!-- <node>ClusterGogglesAnnotator</node> -->
    <node>SacModelAnnotator</node>
    <node>LinemodAnnotator</node>
    <node>ClusterColorHistogramCalculator</node>
    <node>MLNAtomsGenerator</node>
    <node>MLNInferencer</node>
    <node>PancakeAnnotator</node>
    <node>DisplayAnnotator</node>
    <node>ResultAdvertiserAnnotator</node>
  </fixedFlow>
</flowConstraints>
To turn modules on or off, simply (un)comment them; no recompilation is needed. All of the modules in the pipeline have a configuration file of their own, located in {PROJ_SOURCE}/descriptors/annotators. Parameters of the algorithms (e.g. minimum cluster size, minimum plane size) can be changed there.
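As an illustration, such a parameter is set via a nameValuePair in the same format as the camera_config_file entry above; the parameter name and value below are only hypothetical and should be checked against the actual descriptor:

<nameValuePair>
  <name>minClusterSize</name> <!-- hypothetical parameter name, for illustration only -->
  <value>
    <integer>500</integer>
  </value>
</nameValuePair>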