Infrastructure Engineering - Research Publications

Search Results

  • Item
    Synthetic-real image domain adaptation for indoor camera pose regression using a 3D model
    Acharya, D ; Tatli, CJ ; Khoshelham, K (ELSEVIER, 2023-08)
  • Item
    Single-image localisation using 3D models: Combining hierarchical edge maps and semantic segmentation for domain adaptation
    Acharya, D ; Tennakoon, R ; Muthu, S ; Khoshelham, K ; Hoseinnezhad, R ; Bab-Hadiashar, A (ELSEVIER, 2022-04)
  • Item
    Results of the ISPRS benchmark on indoor modelling
    Khoshelham, K ; Tran, H ; Acharya, D ; Vilariño, LD ; Kang, Z ; Dalyot, S (Elsevier BV, 2021-12-01)
  • Item
    The ISPRS Benchmark on Indoor Modelling: Preliminary Results
    Khoshelham, K ; Tran, H ; Acharya, D ; Díaz Vilariño, L ; Kang, Z ; Dalyot, S (Copernicus GmbH, 2020-08-06)
    Automated 3D reconstruction of indoor environments from point clouds has been a topic of intensive research in recent years. Different methods developed for the generation of 3D indoor models have achieved promising results on different case studies. However, a comprehensive evaluation and comparison of the performance of these methods has not been available. This paper presents the preliminary results of the ISPRS benchmark on indoor modelling, an initiative of Working Group IV/5 to benchmark the performance of indoor modelling methods using a public dataset and a comprehensive evaluation framework. The performance of the different methods is compared through geometric quality evaluation of the reconstructed models in terms of completeness, correctness, and accuracy of wall elements. The results show that the reconstruction methods generally achieve high completeness but lower correctness for the reconstructed models, while accuracies range from 0.5 cm to 6.7 cm.
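As a rough illustration of the completeness/correctness evaluation described in the abstract, the sketch below scores reconstructed wall elements against a reference model. The centroid-distance matching criterion and the 0.1 m threshold are placeholder assumptions for illustration only, not the benchmark's actual evaluation framework.

```python
def evaluate(reconstructed, reference, threshold=0.1):
    """reconstructed / reference: lists of wall centroids as (x, y, z) tuples."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    # A reference wall is "found" if some reconstructed wall lies within the
    # matching threshold (completeness); a reconstructed wall is "correct"
    # if it lies within the threshold of some reference wall (correctness).
    matched_ref = sum(1 for r in reference
                      if any(dist(r, c) <= threshold for c in reconstructed))
    matched_rec = sum(1 for c in reconstructed
                      if any(dist(c, r) <= threshold for r in reference))
    completeness = matched_ref / len(reference) if reference else 0.0
    correctness = matched_rec / len(reconstructed) if reconstructed else 0.0
    return completeness, correctness
```

With this formulation, a method can reach high completeness while scoring lower correctness simply by over-reconstructing, which matches the pattern the benchmark reports.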
  • Item
    Indoor LiDAR relocalization based on deep learning using a 3D model
    Zhao, H ; Acharya, D ; Tomko, M ; Khoshelham, K (Copernicus GmbH, 2020-08-06)
    Indoor localization, navigation and mapping systems rely heavily on accurate initial sensor pose information. Most existing indoor mapping and navigation systems cannot initialize the sensor pose automatically and consequently cannot relocalize or recover from a pose estimation failure. For most indoor environments, a map or a 3D model is often available and can provide useful information for relocalization. This paper presents a novel relocalization method for LiDAR sensors in indoor environments, which estimates the initial LiDAR pose using a CNN pose regression network trained on a 3D model. A set of synthetic LiDAR frames with known poses is generated from the 3D model and converted to one-channel range images, which are used to train the CNN pose regression network from scratch to predict the sensor location and orientation. The trained network is then used to estimate the pose of the LiDAR from real range images captured in the indoor environment. The results show that the network can learn from synthetic LiDAR data and estimate the pose of real LiDAR data with an accuracy of 1.9 m and 8.7 degrees.
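The reported accuracy of 1.9 m and 8.7 degrees implies a per-frame pose error metric. The sketch below shows one common formulation, Euclidean position error plus quaternion angular error; the (w, x, y, z) quaternion convention and the metric itself are assumptions, not necessarily the paper's exact evaluation.

```python
import math

def pose_error(pred_t, pred_q, true_t, true_q):
    """Return (position error in metres, orientation error in degrees).

    A pose is a translation (x, y, z) plus a unit quaternion (w, x, y, z).
    """
    pos_err = math.dist(pred_t, true_t)

    def normalise(q):
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)

    # |dot| folds the q / -q ambiguity; the clamp guards against rounding.
    d = abs(sum(a * b for a, b in zip(normalise(pred_q), normalise(true_q))))
    ang_err = math.degrees(2 * math.acos(min(1.0, d)))
    return pos_err, ang_err
```

Reporting the two components separately, as the abstract does, avoids choosing an arbitrary weighting between metres and degrees.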
  • Item
    A Recurrent Deep Network for Estimating the Pose of Real Indoor Images from Synthetic Image Sequences
    Acharya, D ; Singha Roy, S ; Khoshelham, K ; Winter, S (MDPI, 2020-10)
    Recently, deep convolutional neural networks (CNNs) have become popular for indoor visual localisation, where the network learns to regress the camera pose directly from images. However, these approaches require a 3D image-based reconstruction of the indoor space beforehand to determine camera poses, which is a challenge for large indoor spaces. Synthetic images derived from 3D indoor models have been used to eliminate the requirement of 3D reconstruction, but a limitation of that approach is low accuracy, since the pose of each image frame is estimated independently. In this article, a visual localisation approach is proposed that exploits the spatio-temporal information in synthetic image sequences to improve localisation accuracy. A deep Bayesian recurrent CNN is fine-tuned on synthetic image sequences obtained from a building information model (BIM) to regress the pose of real image sequences. The experiments indicate that the proposed approach estimates a smoother trajectory with smaller inter-frame error than existing methods. The achievable accuracy is 1.6 m, an improvement of approximately thirty per cent over existing approaches. A Keras implementation is available in our GitHub repository.
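As a toy analogy for why sequence information yields a smoother trajectory, the sketch below applies a moving average to per-frame position estimates and measures inter-frame jitter. This is only a motivating illustration, far simpler than the paper's Bayesian recurrent CNN, which conditions each estimate on its neighbours through learned recurrence.

```python
import math

def smooth_trajectory(positions, window=3):
    """Moving average over per-frame (x, y, z) position estimates."""
    half = window // 2
    smoothed = []
    for i in range(len(positions)):
        # Average each frame with its neighbours, clipped at the ends.
        lo, hi = max(0, i - half), min(len(positions), i + half + 1)
        n = hi - lo
        smoothed.append(tuple(sum(p[k] for p in positions[lo:hi]) / n
                              for k in range(3)))
    return smoothed

def inter_frame_error(positions):
    """Mean distance between consecutive frames (trajectory jitter)."""
    steps = list(zip(positions, positions[1:]))
    return sum(math.dist(a, b) for a, b in steps) / len(steps)
```

On a zigzagging per-frame trajectory, the smoothed estimate has a markedly lower inter-frame error, mirroring the smoother trajectories the article reports for its sequence-based approach.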