Infrastructure Engineering - Research Publications

Search Results

Now showing 1 - 10 of 27
  • Item
    Synthetic-real image domain adaptation for indoor camera pose regression using a 3D model
    Acharya, D ; Tatli, CJ ; Khoshelham, K (ELSEVIER, 2023-08)
  • Item
    Predicting the ripening time of 'Hass' and 'Shepard' avocado fruit by hyperspectral imaging
    Han, Y ; Bai, SH ; Trueman, SJ ; Khoshelham, K ; Kamper, W (SPRINGER, 2023-10)
    Abstract. Predicting the ripening time of avocado fruit accurately could improve fruit storage and decrease food waste. No reasonable method exists for predicting the postharvest ripening time of avocado fruit during transport, storage or retail display. Here, hyperspectral imaging ranging from 388 to 1005 nm with 462 bands was applied to 316 ‘Hass’ and 160 ‘Shepard’ mature, unripe avocado fruit to predict how many days it took for individual fruit to become ripe. Three models were developed using partial least squares regression (PLSR), deep convolutional neural network (DCNN) regression and DCNN classification. Our PLSR models provided coefficients of determination (R2) of 0.76 and 0.50 and root mean squared errors (RMSE) of 1.20 and 1.13 days for ‘Hass’ and ‘Shepard’ fruit, respectively. The DCNN-based regression models produced similar results with R2 of 0.77 and 0.59, and RMSEs of 1.43 and 0.94 days for ‘Hass’ and ‘Shepard’ fruit, respectively. The prediction accuracies and RMSEs from DCNN classification models, respectively, were 67.28% and 1.52 days for ‘Hass’ and 64.06% and 1.03 days for ‘Shepard’. Our study demonstrates that the spectral reflectance of the skin of mature, unripe ‘Hass’ and ‘Shepard’ fruit provides adequate information to predict ripening time and, thus, has the potential to improve postharvest processing and reduce postharvest losses of avocado fruit.
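    As an illustration of the kind of model described above, the sketch below fits a partial least squares regression from skin reflectance spectra to days-until-ripe using scikit-learn. The file names, array shapes and component count are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch, assuming reflectance spectra (462 bands) and observed
# days-to-ripe are available as NumPy arrays; not the authors' code.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

spectra = np.load("hass_reflectance.npy")        # hypothetical file, shape (n_fruit, 462)
days_to_ripe = np.load("hass_days_to_ripe.npy")  # hypothetical file, shape (n_fruit,)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, days_to_ripe, test_size=0.3, random_state=0
)

# The component count is an arbitrary choice here; in practice it would be
# tuned by cross-validation.
pls = PLSRegression(n_components=10)
pls.fit(X_train, y_train)

y_pred = pls.predict(X_test).ravel()
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
print(f"R2 = {r2_score(y_test, y_pred):.2f}, RMSE = {rmse:.2f} days")
```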
  • Item
    Editorial of theme issue 3D Modeling of Indoor Environments
    Kang, Z ; Wang, C ; Khoshelham, K ; Lehtola, V (ELSEVIER, 2023-01)
  • Item
    Pedestrian Origin-Destination Estimation Based on Multi-Camera Person Re-Identification
    Li, Y ; Sarvi, M ; Khoshelham, K ; Zhang, Y ; Jiang, Y (MDPI, 2022-10)
    Pedestrian origin-destination (O-D) estimates, which record traffic flows between origins and destinations, are essential for the management of pedestrian facilities, including pedestrian flow simulation in the planning phase and crowd control in the operation phase. However, current O-D data collection techniques, such as surveys, mobile sensing using GPS, Wi-Fi and Bluetooth, and smart card data, are either time-consuming and costly or cannot provide complete O-D information for pedestrian facilities without entrances and exits, or for pedestrian flow inside the facilities. Given the full coverage of CCTV cameras and the huge potential of image processing techniques, we address the challenges of pedestrian O-D estimation and propose an image-based O-D estimation framework. By identifying the same person in disjoint camera views, the O-D trajectory of each identity can be accurately generated. State-of-the-art deep neural networks (DNNs) for person re-identification (re-ID) at different congestion levels were then compared and improved. Finally, an O-D matrix was generated from the trajectories and the resident time was calculated, which provides recommendations for pedestrian facility improvement. The factors that affect the accuracy of the framework are discussed in this paper, which we believe could provide new insights and stimulate further research into the application of the Internet of cameras to intelligent transport infrastructure management.
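    The sketch below illustrates one way the final step could look: deriving an O-D matrix and resident times from re-identified trajectories. The zone names, timestamps and data layout are hypothetical; this is not the authors' implementation.

```python
# Minimal sketch, assuming each re-identified pedestrian has an ordered list of
# (zone, timestamp) observations, where "zone" is the area covered by a camera.
from collections import defaultdict

def od_matrix_from_trajectories(trajectories):
    """trajectories: dict mapping person_id -> list of (zone, timestamp)."""
    od_counts = defaultdict(int)
    resident_times = []
    for track in trajectories.values():
        track = sorted(track, key=lambda obs: obs[1])  # order observations by time
        origin, t_in = track[0]
        destination, t_out = track[-1]
        od_counts[(origin, destination)] += 1
        resident_times.append(t_out - t_in)            # time spent in the facility
    return dict(od_counts), resident_times

# Hypothetical usage with three re-identified pedestrians:
tracks = {
    1: [("gate_A", 0.0), ("hall", 40.0), ("gate_C", 95.0)],
    2: [("gate_B", 10.0), ("gate_C", 70.0)],
    3: [("gate_A", 5.0), ("gate_B", 60.0)],
}
counts, times = od_matrix_from_trajectories(tracks)
print(counts)                   # e.g. {('gate_A', 'gate_C'): 1, ...}
print(sum(times) / len(times))  # mean resident time in seconds
```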
  • Item
    VIRTUAL ELEMENT RETRIEVAL IN MIXED REALITY
    Radanovic, M ; Khoshelham, K ; Fraser, C ; Zlatanova, S ; Sithole, G ; Barton, J (COPERNICUS GESELLSCHAFT MBH, 2022)
    Abstract. The application of mixed reality (MR) visualisation in construction engineering requires accurate placement and retrieval of virtual models within the real world, which depends on the localisation accuracy. However, it is hard to understand what this means in practice from localisation accuracy alone. For example, when we superimpose a Building Information Model (BIM) over the real building, it is unclear how well a BIM element fits the real one and how small a BIM element we are able to retrieve. In this paper, we evaluate virtual element retrieval by designing an experiment in which we attempt to retrieve a set of cubes of different sizes placed in both the real and the virtual world. Furthermore, motivated by the observation that existing camera localisation methods for indoor MR are almost exclusively image-based, we use a localisation approach based solely on 3D-3D model registration. The approach is based on the automated registration of a low-density mesh model of the surroundings, created by the MR device, to the existing point cloud of the indoor environment. We develop a prototype and perform experiments on real-world data, which show high localisation accuracy, with average translation and rotation errors of 1.4 cm and 0.24°, respectively. Finally, we show that the success rate of virtual element retrieval is closely related to the localisation accuracy.
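    As a rough illustration of the general idea of registering a device mesh to an existing point cloud (not the authors' prototype), the sketch below runs point-to-plane ICP in Open3D and reports the magnitude of the estimated alignment correction. The file names, sampling density and 5 cm correspondence threshold are assumptions.

```python
# Minimal sketch, assuming a coarse mesh from an MR device and a reference
# indoor point cloud are available as files; not the authors' prototype.
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("mr_device_mesh.ply")      # hypothetical input
target = o3d.io.read_point_cloud("indoor_point_cloud.ply")  # hypothetical input

# Sample the low-density mesh into a point cloud so both inputs are comparable.
source = mesh.sample_points_uniformly(number_of_points=50_000)
source.estimate_normals()
target.estimate_normals()

result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.05,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane(),
)

# Magnitude of the estimated correction that aligns the device mesh to the cloud.
T = result.transformation
translation_m = np.linalg.norm(T[:3, 3])
rotation_deg = np.degrees(np.arccos(np.clip((np.trace(T[:3, :3]) - 1) / 2, -1.0, 1.0)))
print(f"fitness={result.fitness:.3f}, translation={translation_m:.3f} m, rotation={rotation_deg:.2f} deg")
```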
  • Item
    3D MAPPING OF INDOOR AND OUTDOOR ENVIRONMENTS USING APPLE SMART DEVICES
    Diaz-Vilarino, L ; Tran, H ; Frias, E ; Balado, J ; Khoshelham, K ; Zlatanova, S ; Sithole, G ; Barton, J (COPERNICUS GESELLSCHAFT MBH, 2022)
    Abstract. The recent integration of LiDAR into smartphones opens up a whole new world of possibilities for 3D indoor/outdoor mapping. Although these new systems offer an unprecedented opportunity for the democratization of scanning technology, their data quality is lower than that of data captured with high-end LiDAR sensors. This paper discusses the capability of recent Apple smart devices for applications related to 3D mapping of indoor and outdoor environments. Indoor scenes are evaluated from a reconstruction perspective, and three geometric aspects (local precision, global correctness, and surface coverage) are considered using data captured in two adjacent rooms. Outdoor environments are analysed from a mobility point of view, and elements defining the physical accessibility of building entrances are considered for evaluation.
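    The sketch below shows a simplified way to compare a smartphone-captured point cloud against a reference scan using cloud-to-cloud distances in Open3D. The file names, the 2 cm threshold and the simplified correctness/coverage proxies are assumptions, not the paper's exact metric definitions.

```python
# Minimal sketch, assuming a device capture and a reference scan are already
# co-registered in the same coordinate frame; not the authors' evaluation code.
import numpy as np
import open3d as o3d

device_cloud = o3d.io.read_point_cloud("iphone_room_scan.ply")  # hypothetical input
reference = o3d.io.read_point_cloud("reference_room_scan.ply")  # hypothetical input

threshold = 0.02  # metres (assumed tolerance)

# Device-to-reference distances: a simple proxy for global correctness.
d_dev = np.asarray(device_cloud.compute_point_cloud_distance(reference))
rmse = np.sqrt(np.mean(d_dev ** 2))

# Reference-to-device distances: share of the reference surface that the device
# actually captured within the threshold (a simple proxy for surface coverage).
d_ref = np.asarray(reference.compute_point_cloud_distance(device_cloud))
coverage = np.mean(d_ref < threshold)

print(f"RMSE to reference: {rmse * 100:.1f} cm, "
      f"coverage within {threshold * 100:.0f} cm: {coverage:.1%}")
```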
  • Item
    Real-time monitoring of construction sites: Sensors, methods, and applications
    Rao, AS ; Radanovic, M ; Liu, Y ; Hu, S ; Fang, Y ; Khoshelham, K ; Palaniswami, M ; Tuan, N (ELSEVIER, 2022-04)
    The construction industry is one of the world's largest industries, with an annual budget of $10 trillion globally. Despite its size, the efficiency and growth in labour productivity of the construction industry have been relatively low compared to other sectors, such as manufacturing and agriculture. To this end, many studies have recognised the role of automation in improving the efficiency and safety of construction projects. In particular, automated monitoring of construction sites is a significant research challenge. This paper provides a comprehensive review of recent research on the real-time monitoring of construction projects. The review focuses on sensor technologies and methodologies for real-time mapping, scene understanding, positioning, and tracking of construction activities in indoor and outdoor environments. The review also covers various case studies of applying these technologies and methodologies to real-time hazard identification, monitoring of workers’ behaviour and health, and monitoring of static and dynamic construction environments.
  • Item
    A review of augmented reality visualization methods for subsurface utilities
    Muthalif, MZA ; Shojaei, D ; Khoshelham, K (ELSEVIER SCI LTD, 2022-01)
    Subsurface utilities are important assets that need to be considered during any construction activity. Positioning and visualizing subsurface utilities before construction work starts has significant benefits for the effective management of construction projects. Augmented Reality (AR) is a promising technology for the visualization of subsurface utilities. The aim of this paper is to provide a comprehensive review of the state of the art in AR visualization of subsurface utilities, including existing AR visualization methods, a categorization of these methods and their drawbacks, and a discussion of the challenges, research gaps and potential solutions. The paper begins with an introduction to current practice in locating subsurface utilities and an overview of different reality technologies, including AR. We propose a taxonomy of AR visualization methods comprising X-Ray view, transparent view, shadow view, topo view, image rendering and cross-section view. We compare existing methods in terms of quality of depth perception, occlusion of the real world, complexity of visualization and parallax effect, followed by a discussion of their drawbacks. Poor depth perception, the parallax effect caused by user movement, poor positional accuracy in Global Navigation Satellite System (GNSS)-deprived or indoor areas, and the unavailability of accurate location information for generating virtual models are identified as the main challenges and topics of future research in effective AR visualization of subsurface utilities.
  • Item
    Corrigendum to "A review of augmented reality visualization methods for subsurface utilities (vol 51, 101498, 2022)"
    Muthalif, MZA ; Shojaei, D ; Khoshelham, K (Elsevier, 2022-01-01)
    The authors regret that our article, published in a previous issue of the journal, contains incorrect citations in a few places.
    The correct reference should be
    140. Hansen, L.H.; Fleck, P.; Stranner, M.; Schmalstieg, D.; Arth, C. Augmented Reality for Subsurface Utility Engineering, Revisited. IEEE Transactions on Visualization and Computer Graphics 2021, 27, 4119–4128.
    instead of
    140. Piroozfar, P.; Judd, A.; Boseley, S.; Essa, A.; Farr, E.R. Augmented reality for urban utility infrastructure: a UK perspective. In Collaboration and Integration in Construction, Engineering, Management and Technology; Springer: 2021; pp. 535–541.
    Additionally, Figure 20 should be referenced as
    37. Eren, M.T.; Balcisoy, S. Evaluation of X-ray visualization techniques for vertical depth judgments in underground exploration. The Visual Computer 2017, 34, 405–416, https://doi.org/10.1007/s00371-016-1346-5.
    instead of
    62. Doolani, S.; Wessels, C.; Kanal, V.; Sevastopoulos, C.; Jaiswal, A.; Nambiappan, H.; Makedon, F. A Review of Extended Reality (XR) Technologies for Manufacturing Training. Technologies 2020, 8, https://doi.org/10.3390/technologies8040077.
    And Figure 21 should be referenced as
    144. Baek, J.-M.; Hong, I.-S. The Design of an Automatically Generated System for Cross Sections of Underground Utilities using Augmented Reality. International Journal of Smart Home 2013, 7, 255–264, https://doi.org/10.14257/ijsh.2013.7.6.25.
    instead of
    64. Chuah, S.H.-W. Why and who will adopt extended reality technology? Literature review, synthesis, and future research agenda. 2018.
    The authors would like to apologise for any inconvenience caused.
  • Item
    Single-image localisation using 3D models: Combining hierarchical edge maps and semantic segmentation for domain adaptation
    Acharya, D ; Tennakoon, R ; Muthu, S ; Khoshelham, K ; Hoseinnezhad, R ; Bab-Hadiashar, A (ELSEVIER, 2022-04)