Author: stachnis

2019-05: Code Available: ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras by Emanuele Palazzolo

ReFusion – 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals

ReFusion on github

Mapping and localization are essential capabilities of robotic systems. Although the majority of mapping systems focus on static environments, deployment in real-world situations requires them to handle dynamic objects. In this paper, we propose an approach for an RGB-D sensor that is able to consistently map scenes containing multiple dynamic elements. For localization and mapping, we employ efficient direct tracking on the truncated signed distance function (TSDF) and leverage color information encoded in the TSDF to estimate the pose of the sensor. The TSDF is efficiently represented using voxel hashing, with most computations parallelized on a GPU. For detecting dynamics, we exploit the residuals obtained after an initial registration, together with the explicit modeling of free space in the model. We evaluate our approach on existing datasets and provide a new dataset showing highly dynamic scenes. These experiments show that our approach often surpasses other state-of-the-art dense SLAM methods. We make our dataset available, including ground truth for both the trajectory of the RGB-D sensor, obtained with a motion capture system, and the model of the static environment, acquired with a high-precision terrestrial laser scanner.
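
The core idea of detecting dynamics from registration residuals can be illustrated with a minimal sketch. This is not the ReFusion implementation, which computes residuals from GPU-based direct tracking on the TSDF; the sketch below simply compares an observed depth image against a depth image rendered from the current model after an initial alignment, and all names and the threshold value are assumptions made for illustration.

# Minimal, illustrative sketch of residual-based dynamics detection.
# NOT the ReFusion code: it only shows the general principle of flagging
# pixels whose residual after an initial registration exceeds a threshold.
import numpy as np

def dynamic_pixel_mask(observed_depth, rendered_depth, threshold=0.05):
    """Flag pixels whose depth residual exceeds `threshold` (in meters)."""
    valid = (observed_depth > 0) & (rendered_depth > 0)
    residual = np.abs(observed_depth - rendered_depth)
    mask = np.zeros_like(observed_depth, dtype=bool)
    mask[valid] = residual[valid] > threshold
    return mask  # True where the measurement likely belongs to a dynamic object

Pixels flagged in this way would then be excluded from map integration and from subsequent pose refinement.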

If you use our implementation in your academic work, please cite the corresponding paper: E. Palazzolo, J. Behley, P. Lottes, P. Giguère, C. Stachniss. ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals, Submitted to IROS, 2019 (arxiv paper).

This code is related to the following publications:
E. Palazzolo, J. Behley, P. Lottes, P. Giguère, C. Stachniss. ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals, Submitted to IROS, 2019 (arxiv paper).

2019-03-11: Cyrill Stachniss receives AMiner TOP 10 Most Influential Scholar Award (2007-2017)

Cyrill Stachniss received the 2018 AMiner TOP 10 Most Influential Scholar Award (2007-2017) in the area of robotics. The AMiner Most Influential Scholar Annual List names the world’s top-cited research scholars in the fields of AI and robotics. The list is conferred in recognition of outstanding technical achievements with lasting contribution and impact on the research community. In 2018, the winners were among the most-cited scholars whose papers were published in the top venues of their respective subject fields between 2007 and 2017. Recipients are determined automatically by a computer algorithm deployed in the AMiner system that tracks and ranks scholars based on citation counts collected from top-venue publications. Specifically, the list for the field of robotics answers the question: who were the most-cited scholars at the ICRA and IROS conferences, identified as the top venues of this field, between 2007 and 2017?

2018-12: Code Available: Surfel-based Mapping using 3D Laser Range Data by Jens Behley

SuMa – Surfel-based Mapping using 3D Laser Range Data

SuMa on github

SuMa performs mapping of 3D laser range data from a rotating laser range scanner, e.g., the Velodyne HDL-64E. For representing the map, we use surfels, which enable fast rendering of the map for point-to-plane ICP and loop closure detection. If you use our implementation in your academic work, please cite the corresponding paper: J. Behley, C. Stachniss. Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments, Proc. of Robotics: Science and Systems (RSS), 2018 (pdf).
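
As context for the point-to-plane ICP mentioned above, here is a minimal sketch of the point-to-plane error term. It is not taken from the SuMa code base, which renders surfel maps on the GPU; the function name and the dense NumPy data layout are assumptions made purely for illustration.

# Minimal sketch of the point-to-plane ICP error used when aligning a new
# scan against a rendered map view. Not the SuMa implementation.
import numpy as np

def point_to_plane_error(R, t, src_points, tgt_points, tgt_normals):
    """Sum of squared point-to-plane residuals for correspondences
    src_points[i] <-> tgt_points[i] with surface normal tgt_normals[i]."""
    transformed = src_points @ R.T + t              # apply current pose estimate
    residuals = np.sum((transformed - tgt_points) * tgt_normals, axis=1)
    return float(np.sum(residuals ** 2))

In an ICP loop, this error is linearized around the current pose and minimized, e.g., with Gauss-Newton, to obtain the incremental motion between the scan and the map.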

This code is related to the following publications:
J. Behley, C. Stachniss. Efficient Surfel-Based SLAM using 3D Laser Range Data in Urban Environments, Proc. of Robotics: Science and Systems (RSS), 2018 (pdf).

2018-11: Igor Bogoslavskyi defended his PhD Thesis

Igor Bogoslavskyi successfully defended his PhD thesis entitled “Robot mapping and navigation in real-world environments” at the Photogrammetry & Robotics Lab of the University of Bonn.

Download the PhD thesis

Robots can perform various tasks, such as mapping hazardous sites, taking part in search-and-rescue scenarios, or delivering goods and people. Robots operating in the real world face many challenges on the way to completing their mission. Essential capabilities required for the operation of such robots are mapping, localization, and navigation. Solving all of these tasks robustly presents a substantial difficulty, as these components are usually interconnected: a robot that starts without any knowledge about the environment must simultaneously build a map, localize itself in it, analyze the surroundings, and plan a path to efficiently explore an unknown environment. In addition to the interconnections between these tasks, they highly depend on the sensors used by the robot and on the type of environment in which the robot operates. For example, an RGB camera can be used in an outdoor scene for computing visual odometry or for detecting dynamic objects, but it becomes less useful in an environment that does not have enough light for cameras to operate. The software that controls the behavior of the robot must seamlessly process all the data coming from different sensors. This often leads to systems that are tailored to a particular robot and a particular set of sensors. In this thesis, we challenge this concept by developing and implementing methods for a typical robot navigation pipeline that can work with different types of sensors seamlessly, both in indoor and outdoor environments. With the emergence of new range-sensing RGBD and LiDAR sensors, there is an opportunity to build a single system that can operate robustly in indoor and outdoor environments equally well, thus extending the application areas of mobile robots.
The techniques presented in this thesis aim to be used with both RGBD and LiDAR sensors without adaptations for individual sensor models by using a range image representation, and aim to provide methods for navigation and scene interpretation in both static and dynamic environments. For a static world, we present a number of approaches that address the core components of a typical robot navigation pipeline. At the core of building a consistent map of the environment using a mobile robot lies point cloud matching. To this end, we present a method for photometric point cloud matching that treats RGBD and LiDAR sensors in a uniform fashion and is able to accurately register point clouds at the frame rate of the sensor. This method serves as a building block for the further mapping pipeline. In addition to the matching algorithm, we present a method for traversability analysis of the currently observed terrain in order to guide an autonomous robot to the safe parts of the surrounding environment. A source of danger when navigating difficult-to-access sites is the fact that the robot may fail in building a correct map of the environment. This dramatically impacts the ability of an autonomous robot to navigate towards its goal in a robust way; it is therefore important for the robot to be able to detect these situations and to home to its starting position without relying on any kind of map. To address this challenge, we present a method for analyzing the quality of the map that the robot has built to date, and for safely returning the robot to the starting point in case the map is found to be in an inconsistent state.
The scenes in dynamic environments are vastly different from those experienced in static ones. In a dynamic setting, objects can be moving, which makes static traversability estimates insufficient. With the approaches developed in this thesis, we aim at identifying distinct objects and tracking them to aid navigation and scene understanding. We target these challenges by providing a method for clustering a scene taken with a LiDAR scanner and a measure that can be used to determine whether two clustered objects are similar, which can aid tracking performance.
All methods presented in this thesis are capable of supporting real-time robot operation, rely on RGBD or LiDAR sensors, and have been tested on real robots in real-world environments and on real-world datasets. All approaches have been published in peer-reviewed conference papers and journal articles. In addition, most of the presented contributions have been released publicly as open source software.
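
The range image representation mentioned in the abstract, which allows RGBD and LiDAR data to be treated uniformly, can be sketched with a simple spherical projection. This is an illustration only, not code from the thesis; the sensor parameters below (vertical field of view, image resolution) are assumptions.

# Illustrative sketch of projecting a 3D point cloud into a range image,
# the representation used to treat RGBD and LiDAR data uniformly.
# Not code from the thesis; sensor parameters are assumptions.
import numpy as np

def to_range_image(points, height=64, width=900,
                   fov_up_deg=2.0, fov_down_deg=-24.8):
    """Spherical projection of an (N, 3) point cloud into an HxW range image."""
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                        # azimuth angle
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))  # elevation angle

    u = np.clip(((yaw + np.pi) / (2.0 * np.pi) * width).astype(np.int32), 0, width - 1)
    v = np.clip(((fov_up - pitch) / fov * height).astype(np.int32), 0, height - 1)

    image = np.zeros((height, width), dtype=np.float32)
    image[v, u] = r                                               # one range value per pixel
    return image

Once the cloud is in this image form, neighborhood queries reduce to pixel lookups, which is what makes the clustering and matching methods in the thesis fast enough for real-time operation.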

2018-10: New Dataset Release: Agricultural Sugar Beet Datasets with Annotations

We released 12,340 labeled images containing pixel-wise annotations of sugar beets and weeds. The labels belong to our previously published IJRR dataset paper, “Agricultural robot dataset for plant classification, localization, and mapping on sugar beet fields.” On average, we recorded data three times per week over six weeks within the season, which captures the period of interest for weed control, starting at the emergence of the plants. The robot carried a 4-channel multi-spectral camera. A hedged example of inspecting these annotations follows the links below.

Link to the dataset
www.ipb.uni-bonn.de/data/sugarbeets2016/

Link to the new labels:
www.ipb.uni-bonn.de/datasets_IJRR2017/annotations/cropweed/
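
As a minimal sketch, pixel-wise annotations like these are typically consumed by loading the mask image and computing per-class pixel statistics. The file name, mask format, and meaning of individual label values below are assumptions; the actual encoding is documented on the dataset pages linked above.

# Hedged sketch of inspecting a pixel-wise annotation mask.
# File names, mask format, and label semantics are assumptions.
import numpy as np
from PIL import Image

def label_statistics(mask_path):
    """Return the fraction of pixels per label value in an annotation mask."""
    mask = np.array(Image.open(mask_path))
    if mask.ndim == 3:                      # color-coded mask: collapse RGB to a single id
        mask = (mask[..., 0].astype(np.uint32) * 65536
                + mask[..., 1].astype(np.uint32) * 256
                + mask[..., 2].astype(np.uint32))
    values, counts = np.unique(mask, return_counts=True)
    return {int(v): c / mask.size for v, c in zip(values, counts)}

# Example with a hypothetical file name:
# print(label_statistics("annotations/cropweed/mask_000001.png"))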

2018-09-27: Cluster of Excellence PhenoRob got Accepted

Our Cluster of Excellence proposal “PhenoRob – Robotics and Phenotyping for Sustainable Crop Production” has been accepted today.

One of the greatest challenges for humanity is to produce sufficient food, feed, fiber, and fuel for an ever-growing world population while simultaneously reducing the environmental footprint of agricultural production. Arable land is limited, and the input of agro-chemicals needs to be reduced to curb environmental pollution and halt the decline in biodiversity. Climate change poses additional constraints on crop farming. Achieving sustainable crop production with limited resources is, thus, a task of immense proportions.

Our main hypothesis is that a major shift toward sustainable crop production can be achieved via two approaches: (1) multi-scale monitoring of plants and their environment using autonomous robots with automated and individualized intervention, combined with big data analytics and machine learning to improve our understanding of the relation between input and output parameters of crop production, and (2) assessing, modeling, and optimizing the implications of the developed technical innovations in a systemic manner.

To realize our vision, we will take a technology-driven approach to address the challenging scientific objectives. We foresee novel ways of growing crops and managing fields, and aim at reducing the environmental footprint of crop production, maintaining the quality of soil and arable land, and analyzing the best routes to improve the adoption of technology.

The novel approach of PhenoRob is characterized by the integration of robotics, digitalization, and machine learning on the one hand, and modern phenotyping, modeling, and crop production on the other. First, we will systematically monitor all essential aspects of crop production using sensor networks as well as ground and aerial robots. This is expected to provide detailed, spatially and temporally aligned information at the level of individual plants, on nutrient and disease status, soil properties, and ecosystem parameters such as vegetation diversity. This will enable a more targeted management of inputs (genetic resources, crop protection, fertilization) for optimizing outputs (yield, growth, environmental impact). Second, we will develop novel technologies to enable real-time control of weeds and selective spraying and fertilization of individual plants in field stands. This will help reduce the environmental footprint by reducing chemical input. Third, machine learning applied to crop data will improve our understanding and modeling of plant growth and resource efficiencies and will further assist in the identification of correlations. Furthermore, we will develop integrated multi-scale models for the soil-crop-atmosphere system. These technologies and the gained knowledge will change crop production on all levels. Fourth, in addition to the impact on management decisions at the farm level, we will investigate the requirements for technology adoption as well as the socioeconomic and environmental impact of the innovations resulting from upscaling.