Autonomous UAV-based 3d-reconstruction of structures for aerial physical interaction

Research output: Contribution to conference › Paper › Academic › peer-review

Abstract

In most of the scenarios addressed in the literature, at least rough prior knowledge of the structure to be investigated is available. This prior knowledge usually comes in the form of 2D/3D drawings, CAD models, GIS data, or point clouds acquired by laser scanning or photogrammetry. Rough knowledge of the structure makes it possible to pre-segment the 3D environment and to design a flight path before the drone inspection starts (Mansouri et al., 2018). Even when the drone can re-plan its path during flight based on the details observed on the structure, the algorithms still rely on some expectation of what to find and approximately where. Herein, we introduce a framework that plans the flight path on the fly, without any prior information about the environment or the structure to be investigated. A stereo camera mounted on the drone allows us to build a rough representation of the 3D environment in real time, which we use to identify the object of interest for which a dense and accurate 3D model must eventually be produced.

We have conducted our research and experiments to answer the following main question: in order to generate a complete 3D visual model of an unknown object, is it possible to determine a suitable flight direction on the fly, based on the current completeness of the model?

To generate the rough real-time 3D environment model, we use the robust ORB-SLAM2 approach (Mur-Artal and Tardós, 2017), which extracts visual features (ORBs) and estimates their 3D positions from stereo camera observations. ORB-SLAM2 produces a point cloud in which each point corresponds to an ORB feature placed at its correct position within the 3D environment. The initial object segmentation is performed by examining the normal vectors of the 3D point cloud and computing the difference of normals (DoN) (Ioannou et al., 2012) as a cue for separating 3D objects. This 3D segmentation of the ORB-SLAM2 point cloud gives us the approximate 3D shape of the object of interest. A denser representation of the object is then generated with photogrammetry while the drone determines the best positions from which to observe the object in order to complete the dense 3D point cloud.
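The paper defers implementation details to its full version; purely as an illustration, the following is a minimal sketch of DoN-based segmentation built on Open3D and NumPy. The radii, DoN threshold, and clustering parameters are assumed placeholders, not values from the paper.

```python
import numpy as np
import open3d as o3d

def segment_by_don(points, r_small=0.1, r_large=0.4, don_thresh=0.25):
    """Difference-of-normals segmentation sketch (after Ioannou et al., 2012).

    Normals are estimated at a small and a large support radius; their
    scaled difference is large where small-scale structure (e.g. the
    object of interest) deviates from the smooth large-scale surface.
    points: (N, 3) array. All radii/thresholds are assumed placeholders.
    """
    points = np.asarray(points, dtype=np.float64)

    def normals_at(radius):
        pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamRadius(radius))
        # Orient normals consistently so their difference is meaningful
        pcd.orient_normals_consistent_tangent_plane(30)
        return np.asarray(pcd.normals)

    don = 0.5 * (normals_at(r_small) - normals_at(r_large))  # DoN operator
    mask = np.linalg.norm(don, axis=1) > don_thresh

    # Euclidean clustering of the high-DoN points into candidate objects
    candidates = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(points[mask]))
    labels = np.array(candidates.cluster_dbscan(eps=0.3, min_points=10))
    return points[mask], labels  # label -1 marks noise points
```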
The best positions for the drone are determined by computing a 3D entropy map from the point cloud of the structure. The lowest-entropy positions are assumed to be the locations the drone should observe from a certain distance and from slightly different angles in order to add more points to the point cloud. The flight re-planning and point cloud densification steps are repeated until the entropy at the lowest-entropy points reaches a satisfactory value (a pre-determined threshold). In this way, we ensure that the 3D model contains no holes or locally sparse regions.
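The abstract does not specify the entropy formulation. As one plausible, hedged reading, the sketch below computes each occupied voxel's Shannon entropy over a sub-cell occupancy histogram, so sparsely or unevenly covered voxels score low and become the next observation targets; the voxel size and bin count are hypothetical.

```python
import numpy as np

def entropy_map(points, voxel=0.25, bins=4):
    """Per-voxel Shannon entropy of the local point distribution (assumed proxy).

    For each occupied voxel, histogram its points over bins**3 sub-cells
    and compute H = -sum(p * log2(p)). Low entropy flags under-observed
    regions that the planner should revisit.
    """
    points = np.asarray(points, dtype=np.float64)
    keys = np.floor(points / voxel).astype(int)
    result = {}
    for key in set(map(tuple, keys)):
        pts = points[(keys == key).all(axis=1)]
        local = pts / voxel - key                     # sub-voxel coords in [0, 1)
        hist, _ = np.histogramdd(local, bins=bins, range=[(0.0, 1.0)] * 3)
        p = hist[hist > 0] / len(pts)
        result[key] = float(-(p * np.log2(p)).sum())
    return result  # voxel index -> entropy; re-plan towards the minima

# Hypothetical usage of the stopping criterion from the abstract:
# while min(entropy_map(cloud).values()) < THRESHOLD: fly, observe, update cloud
```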
The workflow of our novel, fully automated, active drone path planning algorithm is shown in Figure 1. The mathematical implementation details of each algorithm step will be given in the full version of this manuscript.
We have implemented our algorithms within the ROS framework; image acquisition is done by simulating the drone-mounted stereo camera in the Gazebo environment. Our experimental results indicate that the framework has high potential for fully automated 3D reconstruction and inspection in environments without prior information. The reconstructed 3D visual model will allow the drone to perform interaction tasks with complex-shaped bodies with greater autonomy, without prior knowledge of their geometry.
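The abstract does not describe the ROS interfaces. Assuming standard stereo image topics (all names hypothetical), a minimal rospy node tying the simulated Gazebo camera stream to waypoint publication might look like this sketch:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped

class InspectionPlanner(object):
    """Sketch: consume simulated stereo frames, publish the next waypoint."""

    def __init__(self):
        # Topic names are assumptions, not the paper's actual interfaces
        rospy.Subscriber('/stereo/left/image_raw', Image, self.on_frame, 'left')
        rospy.Subscriber('/stereo/right/image_raw', Image, self.on_frame, 'right')
        self.waypoints = rospy.Publisher('/planner/next_waypoint',
                                         PoseStamped, queue_size=1)

    def on_frame(self, msg, side):
        # In the full pipeline: feed the stereo pair to ORB-SLAM2, update the
        # point cloud, recompute the entropy map, and publish a new waypoint.
        pass

if __name__ == '__main__':
    rospy.init_node('inspection_planner')
    InspectionPlanner()
    rospy.spin()
```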
Original language: English
Publication status: Published - 2019
Event: International Conference on Unmanned Aerial Vehicles in Geomatics 2019 - Enschede, Netherlands
Duration: 10 Jun 2019 - 14 Jun 2019

Conference

Conference: International Conference on Unmanned Aerial Vehicles in Geomatics 2019
Abbreviated title: UAV-g 2019
Country: Netherlands
City: Enschede
Period: 10/06/19 - 14/06/19

Cite this

Sirmaçek, B., Rashad, R., & Radl, P. (2019). Autonomous UAV-based 3d-reconstruction of structures for aerial physical interaction. Paper presented at International Conference on Unmanned Aerial Vehicles in Geomatics 2019, Enschede, Netherlands.