Electrical and Electronic Engineering - Research Publications

Search Results

Now showing 1 - 10 of 12
  • Item
    An efficient deep neural model for detecting crowd anomalies in videos
    Yang, M ; Tian, S ; Rao, AS ; Rajasegarar, S ; Palaniswami, M ; Zhou, Z (Springer, 2023-06-01)
    Identifying unusual crowd events is highly challenging, laborious, and prone to errors in video surveillance applications. We propose a novel end-to-end deep learning architecture, DeepSDAE, to address these challenges; it combines a Stacked Denoising Auto-Encoder (SDAE), VGG16, and a plane-based one-class Support Vector Machine (PSVM) to detect anomalies such as stationary people in an active scene or loitering in a crowded scene. The DeepSDAE framework is a hybrid deep learning architecture consisting of a four-layered SDAE and an enhanced convolutional neural network (CNN) model. The framework employs Reinforcement Learning to optimise the learning parameters for detecting crowd anomalies: the problem is modelled as a Markov Decision Process (MDP) and solved with Deep Q-learning to find the optimal Q values. We also present a late-fusion procedure that combines the individual decisions from the intermediate and final layers of the SDAE and VGG16 networks to detect different anomalies. Experiments on four real-world datasets show the superior frame-level and pixel-level anomaly detection performance of the proposed framework.
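    Below is a minimal PyTorch sketch of the denoising auto-encoder building block described in this abstract. The layer sizes, noise level, and training step are illustrative assumptions, not the DeepSDAE configuration from the paper.

    ```python
    # Minimal denoising auto-encoder sketch (PyTorch); dimensions and noise level are
    # illustrative only, not the DeepSDAE settings reported in the paper.
    import torch
    import torch.nn as nn

    class DenoisingAE(nn.Module):
        def __init__(self, in_dim=1024, hidden_dim=256, noise_std=0.1):
            super().__init__()
            self.noise_std = noise_std
            self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.Sigmoid())

        def forward(self, x):
            # Corrupt the input with Gaussian noise, then reconstruct the clean input.
            noisy = x + self.noise_std * torch.randn_like(x)
            code = self.encoder(noisy)
            return self.decoder(code), code

    model = DenoisingAE()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.MSELoss()

    x = torch.rand(32, 1024)            # a batch of flattened video patches (dummy data)
    optimiser.zero_grad()
    recon, _ = model(x)
    loss = criterion(recon, x)          # reconstruction target is the *clean* input
    loss.backward()
    optimiser.step()
    ```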
  • Item
    Network Resource Allocation for Industry 4.0 with Delay and Safety Constraints
    Sardar, AA ; Rao, AS ; Alpcan, T ; Das, G ; Palaniswami, M (Institute of Electrical and Electronics Engineers, 2023)
    In this paper, we model a futuristic factory floor equipped with Automated Guided Vehicles (AGVs), cameras, and a Virtual Reality (VR) surveillance system, all connected to a 5G network for communication. Motion planning for the AGVs and the VR application is offloaded to an edge server for computational flexibility and reduced hardware on the factory floor. Decisions on the edge server are made in a controlled manner using the video feed provided by the cameras. Our objectives are to ensure factory-floor safety and to provide a smooth VR experience in the surveillance room. Proper and timely allocation of network resources is of utmost importance to maintain the end-to-end delay necessary to achieve these objectives. We provide a statistical analysis to estimate the bandwidth a factory requires to satisfy the delay requirements 99.999 percent of the time. We then formulate a nonconvex integer nonlinear problem that minimizes safety and delay violations. To solve it, we propose a real-time network resource allocation algorithm whose time complexity is linear in the number of components connected to the wireless network. Our algorithm significantly outperforms existing solvers (a genetic algorithm and a surrogate optimizer) and meets the objectives using less bandwidth than existing methods.
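    As a rough illustration of the kind of statistical provisioning mentioned above, the sketch below estimates the bandwidth needed to meet a per-frame delay budget 99.999 percent of the time. The log-normal traffic model and the 5 ms budget are assumptions for the example only, not the paper's analysis.

    ```python
    # Illustrative sizing of link bandwidth so a per-frame transmission-delay budget is
    # met with five-nines probability; traffic model and numbers are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    frame_bits = rng.lognormal(mean=13.0, sigma=0.5, size=1_000_000)  # synthetic camera frame sizes
    delay_budget_s = 0.005                                            # 5 ms budget (assumed)

    # Bandwidth each frame would need to finish within the budget, then the 99.999th percentile.
    required_bps = frame_bits / delay_budget_s
    bandwidth_bps = np.quantile(required_bps, 0.99999)
    print(f"Provision about {bandwidth_bps / 1e6:.1f} Mb/s to satisfy the budget 99.999% of the time")
    ```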
  • Item
    Vision transformer-based autonomous crack detection on asphalt and concrete surfaces
    Shamsabadi, EA ; Xu, C ; Rao, AS ; Nguyen, T ; Ngo, T ; Dias-da-Costa, D (ELSEVIER, 2022-08)
    Previous research has shown the high accuracy of convolutional neural networks (CNNs) in asphalt and concrete crack detection under controlled conditions. Yet, human-like generalisation remains a significant challenge for industrial applications, where conditions vary widely. Given the intrinsic biases of CNNs, this paper proposes a vision transformer (ViT)-based framework for crack detection on asphalt and concrete surfaces. With transfer learning and a differentiable intersection over union (IoU) loss function, the encoder-decoder network equipped with ViT achieves enhanced real-world crack segmentation performance. Compared to the CNN-based models (DeepLabv3+ and U-Net), TransUNet with a CNN-ViT backbone achieved up to ~61% and ~3.8% better mean IoU on the original images of the respective datasets with very small and multi-scale crack semantics. Moreover, ViT helped the encoder-decoder network maintain robust performance against various noisy signals under which the mean Dice score attained by the CNN-based models dropped significantly (<10%).
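    One common way to realise a differentiable IoU objective for binary crack masks is the soft IoU loss sketched below; this is a plausible reading of the loss referred to above, not necessarily the authors' exact formulation.

    ```python
    # Soft (differentiable) IoU loss for binary segmentation, written in PyTorch.
    import torch

    def soft_iou_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        """logits, target: (N, 1, H, W); target is a {0,1} crack mask."""
        probs = torch.sigmoid(logits)
        intersection = (probs * target).sum(dim=(1, 2, 3))
        union = (probs + target - probs * target).sum(dim=(1, 2, 3))
        return (1.0 - (intersection + eps) / (union + eps)).mean()

    # Example with random logits and a sparse dummy mask.
    logits = torch.randn(2, 1, 64, 64, requires_grad=True)
    mask = (torch.rand(2, 1, 64, 64) > 0.9).float()
    loss = soft_iou_loss(logits, mask)
    loss.backward()
    print(float(loss))
    ```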
  • Item
    Real-time monitoring of construction sites: Sensors, methods, and applications
    Rao, AS ; Radanovic, M ; Liu, Y ; Hu, S ; Fang, Y ; Khoshelham, K ; Palaniswami, M ; Tuan, N (ELSEVIER, 2022-04)
    The construction industry is one of the world's largest industries, with an annual budget of $10 trillion globally. Despite its size, efficiency and growth in labour productivity in the construction industry have been relatively low compared to other sectors, such as manufacturing and agriculture. To this end, many studies have recognised the role of automation in improving the efficiency and safety of construction projects. In particular, automated monitoring of construction sites is a significant research challenge. This paper provides a comprehensive review of recent research on the real-time monitoring of construction projects. The review focuses on sensor technologies and methodologies for real-time mapping, scene understanding, positioning, and tracking of construction activities in indoor and outdoor environments. It also covers case studies in which these technologies and methodologies are applied to real-time hazard identification, monitoring of workers' behaviour and health, and monitoring of static and dynamic construction environments.
  • Item
    Achieving AI-Enabled Robust End-to-End Quality of Experience Over Backhaul Radio Access Networks
    Roy, D ; Rao, AS ; Alpcan, T ; Das, G ; Palaniswami, M (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2022-09)
    Emerging applications such as Augmented Reality, the Internet of Vehicles, and Remote Surgery require computing and networking functions working in harmony. The end-to-end (E2E) quality of experience (QoE) for these applications depends on the synchronous allocation of networking and computing resources. However, the relationship between the resources and the E2E QoE outcomes is typically stochastic and non-linear. To make efficient resource allocation decisions, it is essential to model these relationships. This article presents a novel machine-learning-based approach to learn these relationships and concurrently orchestrate both resources. The learned models also support allocation decisions that are robust to stochastic variations and reduce the robust optimization to a conventional constrained optimization. When resources are insufficient to accommodate all application requirements, our framework supports executing some of the applications with minimal degradation (graceful degradation) of E2E QoE. We also show how the learning and optimization methods can be implemented in a distributed fashion using Software-Defined Network (SDN) and Kubernetes technologies. Our results show that deep learning based modelling predicts E2E QoE with approximately 99.8% accuracy, and our robust joint-optimization technique allocates resources more efficiently than existing differentiated services alternatives.
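    The toy sketch below illustrates the core idea in a simplified setting: learn the non-linear, stochastic mapping from resources to E2E QoE from data, then choose the cheapest allocation whose predicted QoE meets a target. The synthetic QoE function, cost weights, and scikit-learn regressor are assumptions for illustration, not the paper's models.

    ```python
    # Toy sketch: learn (bandwidth, CPU) -> QoE from noisy samples, then pick the
    # cheapest allocation predicted to reach a QoE target. All numbers are assumed.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform([1, 1], [100, 16], size=(5000, 2))             # (bandwidth Mb/s, CPU cores)
    qoe = 5 / (1 + np.exp(-(0.05 * X[:, 0] + 0.4 * X[:, 1] - 6)))  # hidden "true" QoE curve
    qoe += rng.normal(0, 0.1, size=qoe.shape)                      # stochastic variation

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, qoe)

    # Grid search for the cheapest (bandwidth, CPU) pair predicted to reach QoE >= 4.0.
    bw, cpu = np.meshgrid(np.linspace(1, 100, 100), np.arange(1, 17))
    grid = np.column_stack([bw.ravel(), cpu.ravel()])
    pred = model.predict(grid)
    cost = 0.2 * grid[:, 0] + 1.0 * grid[:, 1]                     # assumed resource cost
    feasible = pred >= 4.0
    if feasible.any():
        best = grid[feasible][np.argmin(cost[feasible])]
        print(f"Cheapest feasible allocation: {best[0]:.1f} Mb/s, {best[1]:.0f} cores")
    else:
        print("No allocation in the grid reaches the QoE target")
    ```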
  • Item
    Achieving QoS for bursty uRLLC applications over passive optical networks
    Roy, D ; Rao, AS ; Alpcan, T ; Das, G ; Palaniswami, M (Optica Publishing Group, 2022-05)
    Emerging real-time applications, such as those classified under ultra-reliable low-latency communications (uRLLC), generate bursty traffic and have strict quality of service (QoS) requirements. The passive optical network (PON) is a popular access network technology that is envisioned to handle such applications at the access segment of the network. However, the existing standards cannot handle strict QoS constraints for such applications. The available solutions rely on instantaneous heuristic decisions and maintain QoS constraints (mostly bandwidth) only in an average sense, while existing proposals for generic networks with optimal strategies are computationally complex and therefore unsuitable for uRLLC applications. This paper presents a novel, computationally efficient, far-sighted bandwidth allocation policy for facilitating bursty uRLLC traffic in a PON framework while satisfying strict QoS (age of information/delay and bandwidth) requirements. To this end, we first design a delay-tracking mechanism, which allows us to model the resource allocation problem from a control-theoretic viewpoint as a model predictive control (MPC) problem. MPC helps in making far-sighted resource allocation decisions and captures the time-varying dynamics of the network. We provide computationally efficient polynomial-time solutions and show their implementation in the PON framework. Compared to existing approaches, MPC reduces delay violations by 15% and 45% at loads of 0.8 and 0.9, respectively, for applications with delay constraints of 1 ms and 4 ms. Our approach is also robust to varying traffic arrivals.
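    A minimal sketch of the MPC viewpoint is shown below using cvxpy: backlog (a stand-in for delay/age of information) evolves over a short horizon, and grants are planned against predicted bursty arrivals, with only the first grant applied before re-planning. The horizon, capacity, and arrival figures are assumptions; the paper's formulation covers multiple ONUs and explicit QoS constraints.

    ```python
    # Minimal model-predictive-control sketch of far-sighted grant sizing for one queue.
    import cvxpy as cp
    import numpy as np

    H = 8                                   # prediction horizon (cycles), assumed
    capacity = 10.0                         # grant budget per cycle (Mb), assumed
    arrivals = np.array([6, 9, 2, 8, 1, 7, 3, 5], dtype=float)  # predicted bursty arrivals
    q0 = 4.0                                # current backlog

    grants = cp.Variable(H, nonneg=True)
    queue = cp.Variable(H + 1, nonneg=True)

    constraints = [queue[0] == q0, grants <= capacity]
    for t in range(H):
        constraints.append(queue[t + 1] >= queue[t] + arrivals[t] - grants[t])

    # Penalise backlog (a proxy for delay/age of information) plus a small grant cost.
    objective = cp.Minimize(cp.sum(queue[1:]) + 0.01 * cp.sum(grants))
    cp.Problem(objective, constraints).solve()

    print("Grant schedule:", np.round(grants.value, 2))   # apply the first grant, then re-plan
    ```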
  • Item
    The Role of Visual Assessment of Clusters for Big Data Analysis: From Real-World Internet of Things
    Palaniswami, M ; Rao, AS ; Kumar, D ; Rathore, P ; Rajasegarar, S (Institute of Electrical and Electronics Engineers (IEEE), 2020-10)
    The Internet of Things (IoT) is playing a vital role in shaping today's technological world, including our daily lives. By 2025, the number of connected IoT devices is estimated to surpass a whopping 75 billion. It is a challenging task to discover, integrate, and interpret the big data produced by such ubiquitously available, heterogeneous resources and devices. Cluster analysis of IoT-generated big data is essential for the meaningful interpretation of such complex data. However, we often have very limited knowledge of the number of clusters actually present in the given data. The problem of determining whether clusters are present, even before applying clustering algorithms, is termed the assessment of clustering tendency. In this article, we present a set of useful visual assessment of cluster tendency (VAT) tools and techniques developed with major contributions from James C. Bezdek. The article further highlights how these techniques are advancing the IoT through large-scale IoT implementations.
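    For readers unfamiliar with VAT, the sketch below applies the classic Prim-like VAT reordering to a pairwise dissimilarity matrix and displays it; dark diagonal blocks in the reordered image suggest the number of clusters. The synthetic data and plotting choices are illustrative, not taken from the article.

    ```python
    # Compact sketch of the classic VAT reordering of a dissimilarity matrix.
    import numpy as np
    from scipy.spatial.distance import cdist
    import matplotlib.pyplot as plt

    def vat_order(D: np.ndarray) -> np.ndarray:
        """Return the VAT (Prim-like) reordering of indices for dissimilarity matrix D."""
        n = D.shape[0]
        i = np.unravel_index(np.argmax(D), D.shape)[0]   # start at an endpoint of the largest distance
        order, remaining = [i], set(range(n)) - {i}
        while remaining:
            rem = np.array(sorted(remaining))
            sub = D[np.ix_(order, rem)]
            j = rem[np.argmin(sub.min(axis=0))]          # closest outside point to the ordered set
            order.append(j)
            remaining.remove(j)
        return np.array(order)

    # Three synthetic Gaussian clusters.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [3, 3], [6, 0])])
    D = cdist(X, X)
    P = vat_order(D)
    plt.imshow(D[np.ix_(P, P)], cmap="gray")
    plt.title("VAT-reordered dissimilarity matrix")
    plt.show()
    ```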
  • Item
    Automated Scoring of Hemiparesis in Acute Stroke From Measures of Upper Limb Co-Ordination Using Wearable Accelerometry.
    Datta, S ; Karmakar, CK ; Rao, AS ; Yan, B ; Palaniswami, M (Institute of Electrical and Electronics Engineers, 2020-04)
    Stroke survivors usually experience paralysis of one half of the body (hemiparesis), with the upper limbs severely affected. Continuous monitoring of hemiparesis progression in the hours after a stroke relies on manual observation of upper limb movements by medical experts in the hospital; it is therefore resource- and time-intensive, in addition to being prone to human error and inter-rater variability. Wearable devices have become significant tools for the automated, continuous monitoring of neurological disorders such as stroke. In this paper, we use accelerometer signals acquired from wrist-worn devices to analyze upper limb movements and identify hemiparesis in acute stroke patients while they perform a set of proposed spontaneous and instructed movements. We propose novel measures of time-domain (and frequency-domain) coherence between accelerometer data from the two arms at different lags (and frequency bands). These measures correlate well with the clinical gold standard for measuring hemiparetic severity in stroke, the National Institutes of Health Stroke Scale (NIHSS). The study, undertaken on 32 acute stroke patients with varying levels of hemiparesis and 15 healthy controls, validates the use of short (under 10 minutes) accelerometry recordings to identify hemiparesis through hierarchical discriminant analysis with leave-one-subject-out cross-validation. The results indicate that the proposed approach can distinguish between controls and patients with moderate or severe hemiparesis with an average accuracy of 91%.
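    Below is an illustrative computation of two inter-arm symmetry measures of the kind described above: lagged cross-correlation in the time domain and magnitude-squared coherence per frequency band, on synthetic wrist-acceleration magnitudes. The sampling rate, signals, and band limits are assumptions, not the paper's exact measures.

    ```python
    # Illustrative left-vs-right wrist symmetry measures on synthetic acceleration magnitudes.
    import numpy as np
    from scipy.signal import coherence

    fs = 50.0                                            # sampling rate in Hz (assumed)
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(0)
    left = np.sin(2 * np.pi * 1.0 * t) + 0.2 * rng.standard_normal(t.size)
    right = 0.4 * np.sin(2 * np.pi * 1.0 * t + 0.5) + 0.2 * rng.standard_normal(t.size)  # weaker arm

    def lagged_corr(a, b, max_lag):
        """Pearson correlation between the two signals at integer sample lags."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return {k: np.corrcoef(a[max(0, k):len(a) + min(0, k)],
                               b[max(0, -k):len(b) + min(0, -k)])[0, 1]
                for k in range(-max_lag, max_lag + 1)}

    corrs = lagged_corr(left, right, max_lag=int(fs))     # +/- 1 s of lags
    f, Cxy = coherence(left, right, fs=fs, nperseg=256)   # magnitude-squared coherence per band
    print("Peak lagged correlation:", max(corrs.values()))
    print("Mean coherence in 0.5-3 Hz band:", Cxy[(f >= 0.5) & (f <= 3)].mean())
    ```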
  • Item
    Vision-based automated crack detection using convolutional neural networks for condition assessment of infrastructure
    Rao, AS ; Tuan, N ; Palaniswami, M ; Tuan, N (SAGE PUBLICATIONS LTD, 2020-11-01)
    With the growing amount of aging infrastructure across the world, there is high demand for more effective inspection methods to assess its condition. Routine assessment of structural conditions is a necessity to ensure the safety and operation of critical infrastructure. However, the current practice for detecting structural damage, such as cracks, depends on human visual observation, which raises efficiency, cost, and safety concerns. In this article, we present an automated detection method, based on convolutional neural network models and a non-overlapping window-based approach, to detect crack/non-crack conditions of concrete structures from images. To this end, we construct a data set of crack/non-crack concrete structures comprising 32,704 training patches, 2074 validation patches, and 6032 test patches. We evaluate the performance of our approach using 15 state-of-the-art convolutional neural network models in terms of the number of parameters required to train the models, the area under the curve, and the inference time. Our approach provides over 95% accuracy and over 87% precision in detecting cracks for most of the convolutional neural network models, and it outperforms existing models in the literature in terms of accuracy and inference time. The best performance in terms of area under the curve was achieved by the Visual Geometry Group-16 (VGG-16) model (area under the curve = 0.9805), and the best inference time was provided by AlexNet (0.32 s per 256 × 256 × 3 image). Our evaluation shows that deeper convolutional neural network models have higher detection accuracies; however, they also require more parameters and have longer inference times. We believe that this study can act as a benchmark for real-time, automated crack detection for the condition assessment of infrastructure.
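    A minimal sketch of the non-overlapping window idea follows: tile an image into 256 × 256 patches and classify each patch as crack or non-crack with a CNN. The untrained torchvision AlexNet below is only a stand-in for the trained models benchmarked in the article.

    ```python
    # Tile an image into non-overlapping 256x256 patches and classify each with a CNN.
    import torch
    import torchvision

    def tile_image(img: torch.Tensor, patch: int = 256):
        """img: (3, H, W) -> (N, 3, patch, patch) non-overlapping patches (edges cropped)."""
        c, h, w = img.shape
        img = img[:, : (h // patch) * patch, : (w // patch) * patch]
        patches = img.unfold(1, patch, patch).unfold(2, patch, patch)
        return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, patch, patch)

    model = torchvision.models.alexnet(weights=None, num_classes=2).eval()  # crack vs non-crack
    image = torch.rand(3, 1024, 768)                 # a dummy structure photograph
    patches = tile_image(image)
    with torch.no_grad():
        crack_prob = torch.softmax(model(patches), dim=1)[:, 1]
    print("Patches flagged as crack:", int((crack_prob > 0.5).sum()), "of", len(patches))
    ```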
  • Item
    Missing Data Imputation with Bayesian Maximum Entropy for Internet of Things Applications
    Gonzalez-Vidal, A ; Rathore, P ; Rao, AS ; Mendoza-Bernal, J ; Palaniswami, M ; Skarmeta-Gomez, AF (Institute of Electrical and Electronics Engineers (IEEE), 2021-11-01)
    The Internet of Things (IoT) enables the seamless integration of sensors, actuators, and communication devices for real-time applications. IoT systems require good-quality sensor data in order to make real-time decisions, but values are often missing from the collected sensor data owing to faulty sensors, data loss during communication, interference, and measurement errors. Missing sensor measurements adversely affect the quality of data and, consequently, the performance and outcomes of IoT systems. Considering the spatiotemporal nature of IoT data and the uncertainty of the data collected by sensors, we propose a new framework that imputes missing values using Bayesian Maximum Entropy (BME) as a convenient means of estimating missing data in IoT applications. Our proposed framework incorporates BME to impute missing values in diverse IoT scenarios by combining low- and high-precision sensors. The approach can incorporate the measurement errors of low-precision sensors as interval quantities alongside the high-precision sensor measurements, making it highly suitable for real-time IoT systems. Our framework is robust to variations in data, requires little execution time, and needs only a single input parameter, thus outperforming existing IoT data imputation methods. The experimental results obtained for three IoT datasets demonstrate the superiority of the BME framework in terms of accuracy, running time, and robustness. The framework can additionally be extended to distributed IoT nodes for the online imputation of missing values.
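    The sketch below is a heavily simplified, Gaussian approximation of the idea: a missing reading is estimated from nearby sensors, with high-precision readings treated as hard data and low-precision readings as soft interval data (interval midpoint plus a uniform-interval variance term). The covariance model, coordinates, and numbers are assumptions and do not reproduce the full BME estimator used in the paper.

    ```python
    # Simplified kriging-style stand-in for BME: hard data plus soft interval data.
    import numpy as np

    def cov(h, sill=1.0, length=50.0):
        return sill * np.exp(-h / length)          # exponential spatial covariance (assumed)

    # Sensor coordinates (metres), readings (interval midpoints for soft sensors),
    # and interval half-widths (0 => hard, high-precision datum).
    coords = np.array([[0, 0], [30, 10], [60, 40], [20, 50]], dtype=float)
    values = np.array([21.0, 22.5, 20.0, 23.0])          # e.g. temperature readings
    half_width = np.array([0.0, 0.0, 1.5, 1.5])          # low-precision sensors report intervals
    target = np.array([25.0, 25.0])                      # location with the missing value

    mean = values.mean()
    d_obs = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
    K = cov(d_obs) + np.diag((2 * half_width) ** 2 / 12) # add uniform-interval variance for soft data
    k0 = cov(np.linalg.norm(coords - target, axis=1))

    weights = np.linalg.solve(K, k0)
    estimate = mean + weights @ (values - mean)
    print(f"Imputed value at {target}: {estimate:.2f}")
    ```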