Institute of Fundamental Technological Research
Polish Academy of Sciences

Partners

Michał Pelka

Institute of Mathematical Machines (PL)

Conference papers
1.  Będkowski J., Majek K., Pelka M., Semi autonomous mobile robot for inspection missions, IS, IEEE 12th International Conference on Intelligent Systems, 2024-08-29/08-31, Varna (BG), pp.1-4, 2024

Abstract:
This paper presents the results of a semi-autonomous mobile robot tested in inspection missions during ENRICH 2023 and ELROB 2024. After a successful ENRICH 2023 we decided to improve our system by adding LiDAR (Light Detection and Ranging) motion compensation with an IMU (Inertial Measurement Unit). Our goal is to provide an affordable robotic solution as an open-source project, available at https://github.com/JanuszBedkowski/msas_enrich_2023. Everything worked well until the realistic test during ELROB 2024: our system performs well in 2D scenarios but is not robust against large slopes. In this paper we give a system overview and elaborate on its limitations. We demonstrate the use of our open-source project https://github.com/MapsHD/HDMapping for 3D map building with the mobile mapping system attached to the robot.
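
The LiDAR motion compensation mentioned above can be illustrated with a minimal de-skewing sketch: each point in a sweep is rotated back into the scan-start frame using the angular rate reported by the IMU. This is a deliberate simplification (constant yaw rate, rotation only), not the implementation from the paper; `deskew_scan` and its parameters are hypothetical names.

```python
import numpy as np

def deskew_scan(points, timestamps, gyro_rate, t0):
    """Undo the rotation accumulated during a LiDAR sweep.

    Assumes a constant yaw rate `gyro_rate` (rad/s, from the IMU) and
    rotates each point back into the sensor frame at sweep start `t0`.
    """
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        ang = -gyro_rate * (t - t0)  # rotation to undo for this point
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
        out[i] = R @ p
    return out
```

A real system would interpolate the full IMU orientation history rather than assume one constant rate, but the per-point back-rotation is the same idea.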

Keywords:
mobile robot, semi-autonomous, real-world task, LiDAR

Author affiliations:
Będkowski J. - IPPT PAN
Majek K. - Institute of Mathematical Machines (PL)
Pelka M. - Institute of Mathematical Machines (PL)
20p.
2.  Musialik P., Majek K., Majek P., Pelka M., Będkowski J., Masłowski A., Typiak A., Accurate 3D mapping and immersive visualization for Search and Rescue, RoMoCo 2015, 10th International Workshop on Robot Motion and Control, 2015-07-06/07-08, Poznań (PL), DOI: 10.1109/RoMoCo.2015.7219728, pp.153-158, 2015

Abstract:
This paper concentrates on gathering, processing and presenting 3D data for use in Search and Rescue operations. The data are gathered by unmanned ground platforms in the form of 3D point clouds. The clouds are matched and transformed into a consistent, highly accurate 3D model. The paper describes a pipeline for such matching based on the Iterative Closest Point algorithm, supported by loop closing done with the LUM method. The pipeline was implemented for parallel computation with NVIDIA CUDA, which leads to higher matching accuracy and lower computation time; an analysis of performance for multiple GPUs is presented. The second problem discussed is immersive visualization of 3D data for search and rescue personnel. Five strategies are discussed: plain 3D point cloud, hypsometry, normal vectors, space descriptors, and an approach based on light simulation using the NVIDIA OptiX Ray Tracing Engine. The results of each strategy were shown to end users for validation, and the paper discusses the feedback given. The results of the research are used in the development of a support module for the ICARUS project.
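
The core of the matching pipeline, point-to-point ICP, can be sketched in a few lines: alternate between nearest-neighbour correspondence search and a closed-form rigid alignment (the Kabsch/SVD solution). This is a plain CPU sketch for clarity; the paper's CUDA implementation and the LUM loop closing are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def best_fit_transform(A, B):
    """Kabsch: rigid (R, t) minimizing ||R a_i + t - b_i|| over paired rows."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Align src onto dst by iterating NN matching + Kabsch alignment."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (a k-d tree would be used in practice)
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d.argmin(1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

The GPU version in the paper parallelizes exactly the two expensive steps here: the correspondence search and the per-pair reductions feeding the SVD.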

Keywords:
Three-dimensional displays, Data visualization, Graphics processing units, Image color analysis, Computational modeling, Solid modeling, Pipelines

Author affiliations:
Musialik P. - Institute of Mathematical Machines (PL)
Majek K. - Institute of Mathematical Machines (PL)
Majek P. - Institute of Mathematical Machines (PL)
Pelka M. - Institute of Mathematical Machines (PL)
Będkowski J. - other affiliation
Masłowski A. - Politechnika Warszawska (PL)
Typiak A. - other affiliation
15p.
3.  Będkowski J., Pelka M., Majek K., Fitri T., Naruniec J., Open source robotic 3D mapping framework with ROS - Robot Operating System, PCL - Point Cloud Library and Cloud Compare, 5TH INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING AND INFORMATICS, 2015-08-10/08-11, Legian-Bali (ID), DOI: 10.1109/ICEEI.2015.7352578, pp.644-649, 2015

Abstract:
We propose an open-source robotic 3D mapping framework based on the Robot Operating System, the Point Cloud Library and CloudCompare, extended with functionality for importing and exporting datasets. The added value is an integrated solution for robotic 3D mapping and new publicly available datasets (accurate 3D maps with geodetic precision) for evaluation purposes. The datasets were gathered by a mobile robot in a stop-scan fashion. The presented results comprise a variety of tools for working with such datasets, covering tasks such as preprocessing (filtering, downsampling), data registration (ICP, NDT), graph optimization (ELCH, LUM), validation (comparison of 3D maps and trajectories), and performance evaluation (plots of various algorithm outputs). The tools form a complete pipeline for 3D data processing. We use this framework as a reference methodology in recent work on SLAM algorithms.
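
One of the preprocessing steps listed above, voxel-grid downsampling, replaces all points falling in the same voxel with their centroid; PCL exposes this as the `VoxelGrid` filter. The sketch below is a NumPy stand-in to show the idea, not the framework's code.

```python
import numpy as np

def voxel_downsample(points, voxel=0.1):
    """Collapse a point cloud to one centroid per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)   # integer voxel index per point
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()                                  # normalize shape across NumPy versions
    n = inv.max() + 1
    sums = np.zeros((n, points.shape[1]))
    counts = np.zeros(n)
    np.add.at(sums, inv, points)                       # accumulate per-voxel coordinate sums
    np.add.at(counts, inv, 1.0)                        # and per-voxel point counts
    return sums / counts[:, None]
```

Downsampling before registration is what keeps per-iteration ICP/NDT cost bounded regardless of raw scan density.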

Keywords:
Three-dimensional displays, Robot kinematics, Cameras, Mobile communication, Robot sensing systems, XML

Author affiliations:
Będkowski J. - other affiliation
Pelka M. - Institute of Mathematical Machines (PL)
Majek K. - Institute of Mathematical Machines (PL)
Fitri T. - Institute of Mathematical Machines (PL)
Naruniec J. - Politechnika Warszawska (PL)
15p.

Conference abstracts
1.  Pelka M., Majek K., Będkowski J., Testing the affordable system for digitizing USAR scenes, SSRR 2019, IEEE INTERNATIONAL SYMPOSIUM ON SAFETY,SECURITY AND RESCUE ROBOTICS, 2019-09-02/09-04, Würzburg (DE), DOI: 10.1109/SSRR.2019.8848929, pp.104-105, 2019

Abstract:
Affordable technological solutions are always welcome, so we decided to test a backpack-based 3D mapping system for digitizing USAR scenes. The system is composed of an Intel RealSense Tracking Camera T265, three Velodyne VLP-16 LiDARs, custom electronics for multi-lidar synchronization, and a VR Zotac GO backpack computer equipped with a GeForce GTX 1070. This configuration allows the operator to collect and process 3D point clouds to obtain a consistent 3D map. To reach satisfactory accuracy we use the RealSense visual odometry (VO) as an initial guess of the trajectory. Lidar odometry corrects the trajectory and reduces the scale error from VO. The academic 6DSLAM is used for loop closure, and finally the classical ICP algorithm refines the final 3D point cloud. All steps can be done in the field in reasonable time. The VR backpack can afterwards be used for virtual travel over the digital content. Additionally, a deep neural network is used to perform online object detection on the RealSense camera input.
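
The step where lidar odometry "reduces the scale error from VO" can be illustrated by the simplest possible correction: rescale the VO trajectory so its path length matches the metrically correct lidar estimate. This is a deliberately naive sketch (a single global scale factor, hypothetical function name), not the system's actual method.

```python
import numpy as np

def correct_vo_scale(vo_traj, lidar_traj):
    """Rescale a visual-odometry trajectory (N x 3 positions) so its total
    path length matches the metrically correct lidar-odometry estimate.
    Scaling is about the origin, so both trajectories are assumed to
    start at (0, 0, 0)."""
    def path_len(traj):
        return np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()
    s = path_len(lidar_traj) / path_len(vo_traj)
    return vo_traj * s, s
```

In practice the scale drifts over time, so a real pipeline estimates it per segment or folds it into the odometry optimization, but the principle is the same: lidar range measurements anchor the metric scale that monocular-style VO cannot observe.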

Author affiliations:
Pelka M. - Institute of Mathematical Machines (PL)
Majek K. - Institute of Mathematical Machines (PL)
Będkowski J. - IPPT PAN
