Prototyping of an indoor mobile mapping system

The paper deals with prototyping a kinematic measurement system for digitizing indoor environments of buildings. The scanning system consists of several sensors for localization (an inertial measurement unit, rotary encoders and a stereo camera), while its main part is a Velodyne lidar. Data processing is performed by an algorithm for simultaneous localization and mapping (SLAM). The paper describes the hardware of the measurement system. Part of the contribution is a proposal of a series of tests evaluating the quality of the prototype of the mobile mapping system. The tests verify the accuracy of length measurement, the accuracy of the determination of the inclination of the measuring system, and the accuracy of the lidar position determination.


Introduction
In today's era of digitization of processes and services, new methods and techniques must also be implemented in the construction industry. A recent trend in the construction industry is the use of Building Information Modeling (BIM) [1]. A BIM model collects the available information about a building, which can be graphic, non-graphic (parameters) or in the form of documentation. The use of BIM also makes processes such as creating as-built documentation and verifying the geometry of executed structural parts of a building more efficient. For this purpose, modern methods of spatial data collection are needed, as detailed and accurate data must be collected in a relatively short time. These requirements are fulfilled by terrestrial laser scanning and photogrammetric methods, often in combination with conventional surveying methods (e.g., using a total station). To make data collection more efficient, mobile mapping systems are often deployed. Such a system consists of several sensors for determining the spatial position and orientation of the measuring system itself, a light detection and ranging sensor (lidar), cameras, etc. In effect, they create a kinematic laser scanning system that significantly contributes to the automation of the as-built documentation of buildings.
The paper deals with the proposal and prototyping of a mobile mapping system for digitizing the indoor environment of buildings. It consists of 5 subsystems, described in the following sections, and the entire mapping system is controlled using the Robot Operating System (ROS). For data processing, simultaneous localization and mapping (SLAM) is used, within which the current position of the measuring system is calculated and a model of the surrounding environment is simultaneously created. Based on the sensors used, SLAM can be divided into SLAM using lidar [2], [3], [4], SLAM using cameras [2], [3], [5], [6], [7] and SLAM using a combination of several sensors [3]. There are many measuring systems on the market, but their price reaches several tens of thousands of euros. Our goal is to design a prototype of a low-cost mapping system for the indoor environments of buildings. The paper mainly focuses on the components of the prototype of the mapping system. Part of the paper is devoted to the operating system ROS. A proposal for future testing of the entire system is also described.

Prototype of the mobile mapping system
The designed prototype of the mobile mapping system (Fig. 1) consists of several sensors, where the individual sensors compensate for the shortcomings of the others. The mapping system consists of lidar sensors (RPLIDAR, Velodyne), a stereo camera, an inertial measurement unit (IMU), rotary encoders, a carrier and other electrical parts such as power supplies. The mobile mapping system is composed of individual subsystems that are placed on a moving "Ackermann type" carrier. They are as follows:
• robot control and odometry subsystem,
• stereo camera subsystem,
• Velodyne subsystem,
• RPLIDAR subsystem,
• IMU subsystem.

Robot control and odometry subsystem
The main part of the subsystem is a pair of rotary encoders, which are located on the rear axle of the moving platform (Fig. 1(1)) [8]. The entire moving platform is controlled by an STM32F103RC controller, which is connected to a Raspberry Pi 4 8 GB computer (Fig. 1(2)). Measured data from the pair of rotary encoders are forwarded via the STM32F103RC controller to the Raspberry Pi 4 and from there to the NVIDIA Jetson Nano 4 GB master computer (Fig. 1(5)), where they are stored. The entire subsystem is powered by a 22.2 V battery (Fig. 1(3)), with the Raspberry Pi 4 powered via the STM32F103RC controller.
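The encoder measurements described above can be turned into planar odometry by simple dead reckoning. The following minimal sketch illustrates the principle; the platform parameters (encoder resolution, wheel radius, axle track) are illustrative assumptions, not the prototype's actual values:

```python
import math

# Hypothetical platform parameters (not the prototype's actual values).
TICKS_PER_REV = 1024      # encoder resolution [ticks per wheel revolution]
WHEEL_RADIUS = 0.05       # wheel radius [m]
AXLE_TRACK = 0.20         # distance between the rear wheels [m]

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Dead-reckoning pose update from rear-axle encoder increments."""
    per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = d_ticks_left * per_tick
    d_right = d_ticks_right * per_tick
    d_center = (d_left + d_right) / 2          # distance travelled by the axle centre
    d_theta = (d_right - d_left) / AXLE_TRACK  # heading change
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta

# Straight-line motion: both wheels advance by one full revolution, ten times.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = update_pose(*pose, 1024, 1024)
print(pose)  # ten wheel revolutions straight ahead along the x axis
```

In the real system this computation runs on the controller/computer side and the resulting odometry is fused with the IMU and lidar data by the SLAM algorithm.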

Stereo camera subsystem
This subsystem is built around a ZED 2 stereo camera (Fig. 1(4)) [9], which uses two high-resolution sensors to capture stereo images. Depth maps are subsequently generated from the images. The camera is also equipped with an IMU, a barometer, a magnetometer and two temperature sensors. The stereo camera's maximum field of view is 110° (H) x 70° (V) x 120° (D), with a depth range from 0.2 m to 20.0 m. The stereo camera is connected via a USB type A cable to the NVIDIA Jetson Nano 4 GB computer (Fig. 1(5)), and the entire subsystem is powered by a 5 V battery (Fig. 1(6)). All data from the subsystem are stored on the memory card of the NVIDIA Jetson Nano 4 GB computer, from which they are downloaded after the measurement is finished and post-processed. The results of the measurement from the stereo camera are recorded in one file in *.svo format.
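The depth maps mentioned above follow the standard stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two sensors and d the disparity of a pixel pair. A minimal sketch, with assumed calibration values rather than the ZED 2's actual parameters:

```python
# Depth from stereo disparity: Z = f * B / d.
# The values below are illustrative assumptions, not ZED 2 calibration data.
FOCAL_PX = 700.0      # assumed focal length [px]
BASELINE_M = 0.12     # assumed stereo baseline [m]

def depth_from_disparity(disparity_px):
    """Depth of a pixel from its stereo disparity (pinhole model)."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity corresponds to a point at infinity
    return FOCAL_PX * BASELINE_M / disparity_px

print(depth_from_disparity(42.0))  # ~2.0 m
```

Larger disparities therefore correspond to closer points, which is why the usable depth range is bounded on both ends (0.2 m to 20.0 m for this camera).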

Velodyne subsystem
The Velodyne Puck lidar (Fig. 1(7)) [10] is a 3D lidar that scans the surrounding environment in 16 channels and can measure up to 300 000 points per second. Its maximum range is 100 m, with a 360° horizontal field of view, a 30° vertical field of view (±15° up and down) and a rotation rate of 5 Hz to 20 Hz. The Velodyne Puck is powered by a 12 V battery (Fig. 1(8)) via an interface box and is connected to the Raspberry Pi 4 8 GB computer (Fig. 1(2)) using an ethernet cable. The Raspberry Pi 4 8 GB computer is in turn powered by a 5 V battery (Fig. 1(6)).
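Each lidar return of such a multichannel sensor is a triple (range, azimuth, channel), which is converted to Cartesian coordinates in the sensor frame using the fixed elevation angle of the channel. A sketch of this conversion, assuming evenly spaced channel elevations across the ±15° vertical field of view:

```python
import math

# Assumed channel elevation angles of a 16-channel lidar with a ±15°
# vertical field of view (even 2° spacing): -15°, -13°, ..., +15°.
ELEVATIONS_DEG = [-15 + 2 * ch for ch in range(16)]

def polar_to_xyz(r, azimuth_deg, channel):
    """Convert one lidar return to sensor-frame Cartesian coordinates."""
    omega = math.radians(ELEVATIONS_DEG[channel])  # channel elevation
    alpha = math.radians(azimuth_deg)              # horizontal rotation angle
    x = r * math.cos(omega) * math.cos(alpha)
    y = r * math.cos(omega) * math.sin(alpha)
    z = r * math.sin(omega)
    return x, y, z

# A 10 m return straight ahead from channel 7 (elevation -1° here):
print(polar_to_xyz(10.0, 0.0, 7))
```

The SLAM algorithm then assembles these sensor-frame points into a common map frame using the trajectory of the platform.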

RPLIDAR subsystem
The subsystem consists of three RPLIDAR A1 lidars (Fig. 1(9)) [11] from Shanghai Slamtec Co., Ltd., which are located in three mutually perpendicular directions on a common platform. The RPLIDAR A1 is a 2D lidar (measuring a single profile) with a maximum range of 12 m that scans a 360° horizontal field of view. The scan rate is configurable between 2 Hz and 10 Hz, with a recommended scan rate of 5.5 Hz. The outputs from the RPLIDAR A1 are scans in the form of a point cloud created from sets of 2D (X, Y) coordinates of the measured points. The trio of RPLIDAR A1 lidars is connected to a Raspberry Pi 4 8 GB computer (Fig. 1(2)), which controls the initialization of the system and starts the measurement. The measured points (measurement records) are stored on the NVIDIA Jetson Nano 4 GB master computer (Fig. 1(5)). The entire subsystem is powered by a 5 V battery (Fig. 1(10)).
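Because each RPLIDAR only delivers a 2D profile in its own scan plane, the three mutually perpendicular units can jointly cover 3D structure once each profile is mapped into a common platform frame. The sketch below assumes an idealized mounting (scan planes aligned with the coordinate planes, no lever arms); the real prototype would use calibrated mounting parameters:

```python
import math

def scan_point(r, angle_deg):
    """One RPLIDAR measurement (range, angle) as 2D in-plane coordinates."""
    a = math.radians(angle_deg)
    return r * math.cos(a), r * math.sin(a)

def to_common_frame(plane, r, angle_deg):
    """Map a 2D profile point into a common 3D platform frame.

    The plane labels and alignment are illustrative assumptions.
    """
    u, v = scan_point(r, angle_deg)
    if plane == "horizontal":   # scan plane = XY plane
        return (u, v, 0.0)
    if plane == "vertical_x":   # scan plane = XZ plane
        return (u, 0.0, v)
    if plane == "vertical_y":   # scan plane = YZ plane
        return (0.0, u, v)
    raise ValueError(plane)

# A 5 m return at 90° from the XZ-plane scanner: ~ (0, 0, 5), i.e. overhead.
print(to_common_frame("vertical_x", 5.0, 90.0))
```

This is why the perpendicular arrangement matters: together the three profiles constrain the platform's position and orientation in all three coordinate directions.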

IMU subsystem
The main part of the subsystem is the STIM 300 inertial measurement unit (Fig. 1(11)) [12], which contains 3 accurate MEMS gyroscopes, 3 stable accelerometers and 3 stable inclinometers. The STIM 300 is connected to a Raspberry Pi 4 8 GB computer (Fig. 1(2)) using an RS422 to USB type A cable. The IMU is placed on a common platform with the RPLIDAR A1 (Fig. 1(9)) and Velodyne Puck (Fig. 1(7)) lidars to ensure the measurement of the movement of the platform carrying the lidar sensors. The entire subsystem is powered by a 5 V battery (Fig. 1(10)).

System launch and control
The entire system is controlled by the operating system ROS [13], which is an open-source software development kit for robotics applications. ROS provides the tools and libraries needed for the development of a robotic system. ROS does not replace, but instead works alongside, a traditional operating system [14]. Each computer of the mapping system runs Ubuntu 20.04 and uses ROS1 Noetic Ninjemys, the latest version of ROS1. Before launching the ROS system, all computers must be connected to the same Wi-Fi network. When starting the system, ROS must first be launched on the NVIDIA Jetson Nano 4 GB master computer, after which the ZED 2 stereo camera can also be launched. The Raspberry Pi 4 8 GB computers are connected to the NVIDIA master computer based on their IP addresses, and then all the sensors of the given computers are launched (using ROS). After connecting all the computers to ROS, which form its nodes, the measurement recording to a *.bag file is started on the master computer. The recording from the stereo camera is saved in a file of type *.svo. Subsequently, the measurement of the geometry of the indoor environment of the building is performed. After the measurement, the data recording is finished and all the subsystems are turned off.

Proposal of testing
A set of tests needs to be performed to verify the accuracy of the lidar sensors and their characteristics, as well as the accuracy of the measured data. The verification of the data processing algorithm is also significantly important. The lidar sensors (3 x RPLIDAR A1 and 1 x Velodyne Puck) will be tested in order to verify the accuracy of the measured lengths, the determination of the inclination and the position of the lidars within a reference model. The reference model is a model of the indoor environment created, for example, by terrestrial laser scanning. The tests will be based on a comparison of the reference values from the reference model obtained by a terrestrial laser scanner with the values measured by the proposed mobile mapping system.
When testing the accuracy of the lengths measured by the lidars, the lidars will be placed in front of a wall at evenly spaced stations. The position of the lidars will be measured using a total station by measuring their centres (centres of rotation). The wall's relative position will be determined by measuring at least 3 points on its surface. For the RPLIDAR A1 lidars, the distance measurement will be tested up to 15 m, because the producer defines the distance measurement range as 0.15 m to 12.00 m. For the Velodyne Puck lidar, it will be up to 110 m, while its maximum distance measurement range is 100 m. The lengths measured by the lidars will be compared with the results from the total station. The differences between the lengths will determine the accuracy of the lidars' length measurement. Based on the results, the range interval from which the scanned points can be used for data processing will be determined.
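The evaluation of this length test reduces to comparing each lidar length with its total-station reference and summarizing the differences. A minimal sketch with hypothetical values (the pairs below are invented for illustration, not measured results):

```python
import math

# Hypothetical test records: (reference length from the total station,
# length measured by the lidar), both in metres. Invented values.
pairs = [
    (2.000, 2.012),
    (5.000, 4.991),
    (8.000, 8.017),
    (11.000, 10.978),
]

diffs = [measured - reference for reference, measured in pairs]
mean_diff = sum(diffs) / len(diffs)                       # systematic offset
rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # overall accuracy

print(f"mean difference: {mean_diff * 1000:+.2f} mm, RMSE: {rmse * 1000:.1f} mm")
```

The mean difference reveals a systematic scale or zero error, while the RMSE characterizes the overall length measurement accuracy; both can be evaluated per range interval to decide which scanned points are usable for processing.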
The next series of tests is aimed at quantifying the accuracy of the determination of the measurement system's inclination (tilt). This kind of test is necessary because, during measurement with the mapping system, vibrations may be caused by the acceleration of the carrier, which will cause inclination of the whole system. The inclination has to be removed from the measured data (its influence on the resulting point clouds) before further processing. The lidars of the corresponding subsystem will be placed on a positionable device. Due to the light weight of the lidars, the base of a historical tachymeter instrument (e.g. Carl Zeiss Jena BRT 006) can also be used for this purpose. The positionable device will be used to set the reference values of inclination. The procedure can be divided into several steps: setting the reference inclination, measuring by the subsystem for a few seconds, setting a new value of inclination, and repeating the whole procedure at regular intervals in the range 0° to 90°.
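The evaluation of such a sweep is a direct comparison of the set reference inclinations with the inclinations reported by the subsystem. A sketch with invented values (a 15° step is assumed here purely for illustration):

```python
# Hypothetical sweep over the 0° to 90° range in 15° steps:
# reference inclination set on the positioning device vs. the mean
# inclination reported by the subsystem. Invented values.
reference = [0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0]
measured = [0.02, 15.05, 29.96, 45.08, 59.95, 75.03, 89.97]

deviations = [m - r for r, m in zip(reference, measured)]
max_abs = max(abs(d) for d in deviations)   # worst-case inclination error
print(f"max |deviation|: {max_abs:.2f} deg")
```

Averaging the subsystem output over the few seconds of measurement at each step suppresses noise, so the deviations characterize the systematic inclination error to be removed from the mapping data.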
The accuracy of the lidar positioning will be tested in order to verify the quality of the resulting point clouds, while the position of the scans within a reference model will be determined. During the testing, the lidars will be placed on reference points (e.g. observation pillar heads), on which a short measurement (approx. 60 s) will be made. The scans will be transformed into the reference model (obtained by terrestrial laser scanning), thus obtaining the measured position of the pillar head. Deviations will be calculated as the differences between the measured and known coordinates of the reference points.
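The deviation at each reference point is simply the coordinate difference between the transformed scan position and the known coordinates, usually reported per axis and as a 3D distance. A sketch with invented coordinates:

```python
import math

# Known reference coordinates of a pillar head vs. the position obtained by
# transforming a short scan into the reference model. Invented values [m].
known = (102.350, 48.120, 1.452)
measured = (102.358, 48.111, 1.460)

dx, dy, dz = (m - k for m, k in zip(measured, known))
deviation_3d = math.sqrt(dx * dx + dy * dy + dz * dz)
print(f"dX={dx*1000:+.0f} mm, dY={dy*1000:+.0f} mm, dZ={dz*1000:+.0f} mm, "
      f"3D={deviation_3d*1000:.1f} mm")
```

Repeating this over several pillar heads yields per-axis statistics, which can distinguish a systematic transformation error from random scan noise.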
In addition to the above-mentioned analytical tests (testing the individual components and subsystems separately), we also suggest performing a global test, the results of which qualify the point clouds resulting from the mobile mapping. The SLAM algorithm used will be tested in order to verify the quality of the measured scans, while the position of the scans within the reference model will be determined. The results will be evaluated as differences between the reference point cloud and the point cloud acquired by the proposed mobile mapping system.
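A common way to express such cloud-to-cloud differences is the distance from each mapped point to its nearest neighbour in the reference cloud. A brute-force sketch on tiny invented clouds (a real evaluation of millions of points would use a spatial index such as a k-d tree):

```python
import math

# Invented miniature clouds: reference (terrestrial laser scanning) and the
# cloud produced by the mobile mapping system, as (x, y, z) tuples [m].
reference_cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
mapped_cloud = [(0.01, 0.0, 0.0), (1.0, 0.02, 0.0), (0.0, 0.97, 0.0)]

def nn_distance(p, cloud):
    """Distance from point p to its nearest neighbour in cloud (brute force)."""
    return min(math.dist(p, q) for q in cloud)

distances = [nn_distance(p, reference_cloud) for p in mapped_cloud]
mean_c2c = sum(distances) / len(distances)
print(f"mean cloud-to-cloud distance: {mean_c2c * 1000:.1f} mm")
```

The distribution of these distances (mean, RMSE, maximum) then serves as a global quality measure of the SLAM-produced point cloud against the reference model.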

Conclusion
The paper is devoted to the development of a prototype of a low-cost mobile mapping system. The mapping system represents a kinematic laser scanning system for digitizing the indoor environments of buildings. It consists of a trio of 2D lidars, a multichannel 3D lidar, an IMU, a stereo camera and rotary encoders, all mounted on an "Ackermann type" carrier. The sensors used form subsystems: a robot control and odometry subsystem, an RPLIDAR subsystem, a Velodyne lidar subsystem, an IMU subsystem, and a stereo camera subsystem. All the subsystems are connected and controlled by the operating system ROS.
A proposal of a series of tests to be performed in the future is also described. Future work will focus on testing the suitability of all components of the mapping system. In order to find out whether all the components are necessary, test measurements will be performed. The results will also be evaluated in a global test of the mapping system, the aim of which is a comparison of the scanning results with a point cloud acquired by a terrestrial laser scanner, which will be considered the reference model of the scanned environment. In addition, an improved SLAM algorithm with an enhanced scan matching method [15] will be developed.
"This publication was created with the support of the Scientific Grant Agency of the Ministry of Education, science, research and sport of the Slovak Republic and the Slovak Academy of Sciences for the project VEGA-1/0272/22"