Electrical and Electronic Engineering - Theses
A probabilistic approach for Wi-Fi based indoor localization
The Global Navigation Satellite System (GNSS) has been widely used to provide location information in outdoor environments, but it fails to provide reliable positioning indoors. Wi-Fi-based localization systems have attracted considerable attention because of the extensive deployment of Wireless Local Area Network (WLAN) infrastructure and the ubiquity of Wi-Fi-enabled mobile devices, offering a potentially low-cost way to track a mobile user in an indoor environment. Mainstream Wi-Fi-fingerprint-based systems deployed in practical large-scale wireless environments still face critical challenges: intensive survey cost and large variations in a dynamic environment. Crowdsourcing, by its nature, uses heterogeneous devices to survey the site. While this reduces the time needed during the surveying phase, the variation in sensor performance must be taken into account: it leads to diverse received signal strength (RSS) values and varying sensitivities to different access points (APs). In a complex and noisy indoor environment, for example a university building, a large number of APs can be sensed during both the survey and positioning phases, leading to a high-dimensional classification problem. In addition, because of multipath (fading-channel) variation, signals from APs may not be sensed in every scan, resulting in a missing-data problem. This PhD dissertation aims to mitigate these challenges and develop a practical room-level localization system at low deployment cost in a public wireless environment, focusing on system architecture and methods. First, room-level localization is defined in terms of cell-based localization. By segmenting the floor plan into cells, training data collection is carried out by fusing RSS measurements taken within each cell by all contributing devices.
A multivariate linear regression model is applied to calibrate the RSS measurements collected from the different devices involved in the crowdsourced training phase. The conventional way of dealing with missing data is to substitute a low RSS value, which distorts the RSS distribution and causes biased estimation. The Expectation-Maximization (EM) imputation method is used instead to estimate missing RSS values in the incomplete RSS measurements. Different features of the RSS spatial correlation are studied, for both fixed single-location and across-cell measurements, and it is demonstrated that the RSS independence assumption is not valid in this context. We then build a high-dimensional probabilistic fingerprint for each cell, based on a multivariate Gaussian mixture model (MVGMM), to account for the spatial correlation of the signal strengths from multiple APs. The benefit of using the information provided by invisible APs to differentiate between cells has been investigated, by incorporating a geometric distribution that gives the probability of existence of an AP that has not been seen in training. Finally, we design two frameworks, based on a hidden Markov model (HMM) and a route grammar, for mobile user tracking. The proposed system achieves reliable and accurate localization: field tests show a reliable 97% room-level localization accuracy for multiple mobile users in a real university campus Wi-Fi network. In addition, it is demonstrated that an existing radio map can be adapted to localize a device new to the environment, with an average matching accuracy of 94% in a multiple-surveyor, multiple-client system where the client devices have not participated in the training phase.
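As an illustration of the EM imputation step described above, the sketch below fills missing RSS entries with their conditional means under a multivariate Gaussian fitted by a small EM loop. It is a minimal sketch on synthetic, hypothetical RSS values, not the dissertation's implementation.

```python
import numpy as np

def em_impute(X, n_iter=25):
    """Fill NaN entries of X under a multivariate Gaussian model:
    E-step imputes each row's missing block with its conditional mean
    given the observed entries; M-step refits the mean and covariance."""
    X = np.array(X, dtype=float)
    miss = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[miss] = np.take(col_mean, np.where(miss)[1])   # mean-fill start
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        for i in range(X.shape[0]):
            m = miss[i]
            if m.any():
                o = ~m
                w = cov[np.ix_(m, o)] @ np.linalg.inv(cov[np.ix_(o, o)])
                X[i, m] = mu[m] + w @ (X[i, o] - mu[o])
    return X

# Synthetic correlated "RSS" readings (dBm) from three APs, with the
# third AP unseen in about 30% of the scans.
rng = np.random.default_rng(0)
A = np.array([[3.0, 0, 0], [2.0, 2, 0], [2.0, 1, 2]])
true = rng.standard_normal((200, 3)) @ A.T + np.array([-50.0, -60.0, -70.0])
obs = true.copy()
hidden = rng.random(200) < 0.3
obs[hidden, 2] = np.nan
filled = em_impute(obs)
err_em = np.abs(filled[hidden, 2] - true[hidden, 2]).mean()
err_meanfill = np.abs(np.nanmean(obs[:, 2]) - true[hidden, 2]).mean()
```

Because the AP readings are correlated, conditional-mean imputation recovers the hidden values with smaller error than a constant fill, which is the distortion argument made above.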
Output constrained extremum seeking: theory and application to UAV communication chains
Typically, a mobile ad-hoc network (MANET) refers to a network that does not rely on pre-existing infrastructure, such as wired routers, to provide communication support. Ideally, a MANET is self-configuring, and nodes in the network can be dynamically added, removed, and relocated as necessary. The goal of this thesis is to develop a distributed controller to restore short-term communication service in a disaster-stricken area by deploying a team of UAV-mounted communication relays. The deployed relays, acting as mobile routers, provide communication service for people in the disaster-stricken area. To serve more people, the deployed MANET should concentrate in highly populated regions; in other words, we treat the sparsely populated region as a constrained area that the MANET should not enter. Environmental conditions, such as humidity and obstacles in the signal path, affect quantities such as the path-loss coefficient and the signal decay rate when modelling the signal distribution of a relay node. Without an accurate signal distribution model, deploying the MANET to fixed locations using a signal-model-based approach can easily render the result suboptimal. We therefore propose a novel extremum seeking control scheme, a model-free online optimisation strategy, to optimise the MANET communication quality while remaining subject to the area constraint. Under reasonable assumptions and parameter tuning, the derived controller is shown to provide semi-global practical asymptotic stability guarantees for a class of multi-input multi-output dynamic plants. The developed method extends the known class of algorithms by explicitly incorporating constraints to meet the requirements of the UAV-based system described above. Numerical simulations of signal chaining using a MANET with an area constraint validate the proposed strategy.
Handover Analysis and Coverage Modelling in Ultra-Dense Heterogeneous Networks
Ultra-Dense Networks (UDNs) are one of the most important trends towards next-generation cellular systems. It is expected that small cell densification will offload traffic from traditional macro base stations, and thus significantly boost network capacity. Despite their promising capacity gains, UDNs can lead to frequent handovers (HOs), which in turn cause significant network overhead and a decline in user experience. With the aim of modelling handovers in the context of ultra-dense heterogeneous networks, we first propose a low-complexity analytical framework for multi-target small cell handovers. Our proposed HO framework accurately models important context-aware parameters: user velocity, small cell density, and the effects of received-power filtering and HO failure. To avoid load imbalance, we derive a simple HO threshold condition that leverages multiple cell load conditions while also guaranteeing the expected throughput of small cell users. Furthermore, we propose a novel approach to model the coverage regions of overlapping small cells. Based on this model, we derive the cumulative distribution function of the sojourn time in small cells using the boundary length and chord length distributions of small cell coverage regions. Our model is comprehensive enough to capture both inter-tier and intra-tier HOs in small cell networks. The derived analytical results provide guidance for optimizing handover parameters based on user velocity and small cell density to reduce network overhead and improve user experience. Finally, a downlink coverage analysis of an unmanned aerial vehicle (UAV) assisted network with clustered UEs is presented. In this model, Nakagami fading is used to capture line-of-sight channels for air-to-ground communication. Simulations show that line-of-sight channels can be well approximated at minimal computational cost.
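A quick way to sanity-check handover counts in geometric models of this kind is Monte Carlo simulation against a closed-form result. The hypothetical sketch below counts how many circular small-cell coverage regions (centres forming a Poisson point process) a straight user trajectory enters, which matches the "stadium" (Minkowski-sum) area formula λ(2Rd + πR²); it is an illustration, not the thesis's framework.

```python
import numpy as np

def mean_cells_crossed(lam, R, d, trials=2000, seed=1):
    """Average number of discs of radius R (centres a Poisson point
    process of intensity lam) intersected by a segment of length d.
    A disc is crossed iff its centre lies within R of the segment."""
    rng = np.random.default_rng(seed)
    W = d + 6 * R                                  # window with margin
    p0 = np.array([3 * R, W / 2])
    v = np.array([d, 0.0])
    total = 0
    for _ in range(trials):
        pts = rng.random((rng.poisson(lam * W * W), 2)) * W
        t = np.clip(((pts - p0) @ v) / (v @ v), 0.0, 1.0)
        dist = np.linalg.norm(pts - (p0 + t[:, None] * v), axis=1)
        total += int((dist < R).sum())
    return total / trials

lam, R, d = 0.05, 1.0, 10.0                       # illustrative values
est = mean_cells_crossed(lam, R, d)
exact = lam * (2 * R * d + np.pi * R ** 2)        # stadium area formula
```

With trajectory length d proportional to velocity, the same formula exposes the linear dependence of HO count on user velocity and cell density discussed above.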
Resource optimization for future wireless communications and energy harvesting systems with coordinated transmission
Dense-cell deployment with coordinated multi-point transmission has been widely investigated as a way to minimize inter-cell interference. Depending on the knowledge of channel state information and on whether joint coding and signal processing are performed at the cooperative transmitters, coordinated transmission can be divided into coherent and non-coherent transmission. In the first half of the thesis, we study optimal power allocation for capacity maximization with coherent and non-coherent transmission, in which K coordinated transmitters coherently/non-coherently allocate power across N subchannels under joint total and individual power constraints. This allows the system to limit overall energy consumption for cost and/or environmental reasons, while also preventing individual transmitters from overdriving their high-powered amplifiers. For coherent coordinated transmission, we derive a new optimal co-phasing power allocation which shows that the optimal power allocation must follow a particular proportional rule. This result highlights that the optimal power allocation for transmitters with individual power constraints differs from waterfilling, as more power is not necessarily allocated to the subchannels with better channel conditions. In the non-coherent case, we show that the optimal power allocation has an interesting sparsity feature: among the N subchannels, at most K-1 can be allocated power for joint transmission by multiple transmitters, and the rest must each be served by a single transmitter. As wireless devices (e.g., Internet of Things devices and wireless sensors) become more pervasive, there is ever-increasing interest in powering electronic devices wirelessly. In order to avoid high radiation intensity and expand coverage, distributed but coordinated wireless power transfer (WPT) using energy beamforming is considered a promising technology for addressing the energy scarcity problem.
In the second half of the thesis, we study an optimal distributed energy beamforming strategy for total harvested power maximization, where K coordinated energy transmitters (CETs) coherently transmit energy over N subchannels. Under joint total and individual antenna power constraints, we derive the optimal power allocation rule, which reveals that all K CETs participate in energy beamforming, with T < K CETs transmitting at their maximum individual powers due to the total power constraint. Moreover, the optimal WPT strategy selects no more than T+1 subchannels for power allocation, regardless of the channel conditions. Finally, we analyse a distributed multi-antenna WPT system, where each CET k is equipped with M antennas and has a transmit power constraint Pk. We show that the optimal power allocation has properties similar to those of coherent wireless information transmission; however, the optimal WPT strategy selects no more than K subchannels for power allocation, regardless of the channel conditions or the number of antennas at each CET.
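For contrast with the proportional and sparse allocations derived above, the classic single-constraint waterfilling baseline can be sketched in a few lines; the channel gains here are arbitrary illustrative numbers.

```python
import numpy as np

def waterfill(gains, P_total):
    """Waterfilling over N subchannels under one total power constraint:
    p_n = max(0, mu - 1/g_n), with the water level mu found by bisection
    so that the powers sum to P_total."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, P_total + (1.0 / g).max()      # bracket for the water level
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / g).sum() > P_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, mu - 1.0 / g)

g = np.array([2.0, 1.0, 0.5, 0.1])               # illustrative subchannel gains
p = waterfill(g, 2.0)
rate = np.log2(1 + g * p).sum()
rate_uniform = np.log2(1 + g * 0.5).sum()
```

Waterfilling pours more power into the better subchannels; the thesis's point is precisely that adding per-transmitter constraints breaks this monotone structure.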
Non-convex Optimisation in Modern Power Systems and Advanced Array Antennas
This research work focuses on non-convex optimisation problems that arise in two important areas of current interest. The first problem is in the field of power systems and smart grids. There is a growing need for automated procedures to improve the accuracy of distribution system models with minimal physical inspection. This is particularly important for secondary (low-voltage) networks, where a large share of the new controllable devices, such as electric vehicles and solar systems with smart inverters, are located. Recent progress in advanced metering infrastructure has enabled observability of the low-voltage grid, which was previously unknown, not modelled at all, or modelled with a low level of accuracy. We focus on the identification of power line parameters in a low-voltage distribution grid. First, we build a computationally efficient sequential model that is suitable for radial power networks. Second, we propose an estimation method for identifying the impedances and voltage phases at each node in the network, assuming only time synchronism of measurements. The problem we pose is non-convex; nevertheless, it can be solved optimally, quickly and, potentially, in a distributed fashion. We prove that our algorithms find better solutions than the current state-of-the-art approach. The second problem concerns the emerging area of multi-frequency phased antenna array design. There is widespread interest in developing array antenna systems capable of supporting multiple simultaneous, independently steerable beams operating at different frequencies. This reduces the cost, weight and size of the system and improves its overall efficiency. In this work we propose an optimisation algorithm for an array system where each element can only support one frequency (or two, if power amplifier linearity allows) at any time instant, and where several independently steerable beams at different frequencies are simultaneously required. The problem we consider is NP-hard due to its combinatorial nature.
The proposed approach is sub-optimal but computationally fast, and simulations show it reaches a near-global optimum. In addition, we develop uncertainty principles for antenna design that are of theoretical importance and provide guidance for the choice of array parameters. Various simulations demonstrate the superiority of the developed algorithms over conventional approaches in terms of accuracy, usage scenarios and computational complexity.
Impact of Rooftop Solar on Distribution Network Planning
Electricity networks have been undergoing significant transformation recently, especially in terms of embedded generation. Much attention has focused on the demand fluctuations that solar and wind farms connected to high voltage (HV) grids introduce into energy markets, but the low voltage (LV) distribution grid may prove the most challenging for network owners and market operators. This is because rooftop solar, whether installed in commercial or residential areas, is leading to high demand fluctuations within the last mile. Customer-installed solar is also causing voltages to rise, yet it is the Distribution Network Operator (DNO) on which the responsibility for voltage regulation falls. It is hence increasingly important for the DNO to have full visibility of LV feeder voltages at all times, to accurately analyse proposed connections, and to meet regulators' and government expectations of enabling solar penetration. Voltage monitoring and regulating infrastructure at the LV level, though, is expensive to implement and hence scarce, given the huge scale involved. Utilities therefore employ empirical or statistical techniques to calculate voltage drop and voltage rise. Conservative allowances for demand diversity and unbalance can lead to erroneous results and can form the basis of considerable utility capital expenditure programs; utility expenditure in turn usually leads to an increase in customer bills over time. A small number of utilities in the world have access to voltage data from smart metering infrastructure, such as in Victoria, Australia, but ownership of the data is becoming an open question. Data availability also presents a different problem, as these meters generate an extraordinary amount of near real-time data, which utilities are failing to fully embrace. They see smart-technology-driven initiatives as a form of disruption and are slow or unwilling to adapt to the changing nature of the grid.
This dissertation details the use of data analytics for forecasting future voltages on the network. Standard machine learning techniques are used to fit a non-linear regression model whose trained parameters reflect the operational status of the feeder, capturing load diversity and unbalance as well as generator diversity and unbalance. The trained model consequently predicts voltages on the feeder with additional connections accurately. A load-flow simulation of a real-world network is carried out, with training and testing performed on data from different halves of the year. Predicted voltages are compared to simulation results, confirming high accuracy even though consumption and solar irradiation patterns differ across the seasons in the test data. Hence, by leveraging interval metering data, it is shown how standard machine learning methods can be used to develop forecasting capabilities. The methodology developed in this thesis can be used as a planning tool to quickly and accurately evaluate the future rate of recurrence of voltage violations and to predict the voltage headroom available on the LV feeder. This is a significant outcome, as the predictability of LV feeder voltages is a concern for utilities, consumers and regulating bodies alike. The presented method will enable more loads and PV systems on the network without the need for new assets, such as distribution transformers or LV feeders, that may otherwise be left underutilised. It will also help resolve certain quality-of-supply issues, such as voltage drop complaints, and help better prioritise and technically analyse constrained areas of the network. It is clear that high-quality, high-volume data analysis will play a key role in meeting the needs of the electricity industry.
This thesis serves as an interface between network planning engineers and data scientists who will solve the emerging energy constraints, play a part in minimising customer energy prices and assist in the transition to decentralised clean energy sources.
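The regression idea can be sketched on synthetic data: fit feeder voltage against aggregate load and PV output by least squares, train on one half of the year and test on the other. The coefficients and noise level below are invented for illustration only; the thesis's model uses richer diversity and unbalance features.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
load = rng.uniform(1.0, 5.0, n)          # aggregate feeder load (kW)
pv = rng.uniform(0.0, 3.0, n)            # aggregate rooftop PV output (kW)
# Hypothetical ground truth: load pulls voltage down, PV pushes it up.
volt = 230.0 - 0.8 * load + 0.5 * pv + rng.normal(0.0, 0.2, n)

def fit(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

X = np.column_stack([load, pv])
train, test = slice(0, n // 2), slice(n // 2, n)   # two "halves of the year"
coef = fit(X[train], volt[train])
pred = np.column_stack([np.ones(n - n // 2), X[test]]) @ coef
rmse = np.sqrt(np.mean((pred - volt[test]) ** 2))
rmse_naive = np.sqrt(np.mean((volt[train].mean() - volt[test]) ** 2))
```

Held-out RMSE stays near the noise floor, while predicting the training mean does markedly worse, which is the kind of out-of-season validation described above.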
Photoplethysmogram Derived Cardio-Respiratory Biomarkers for Sleep Monitoring
Continuous monitoring of cardio-respiratory biomarkers provides critical information regarding patient health status in the clinical setting. Additionally, cardio-respiratory biomarkers play an essential role in the monitoring of sleep quality and sleep-related disorders. Although clinicians routinely monitor heart rate (HR), respiratory rate (RR) is often not recorded because of the cumbersome equipment required to measure it. One possible solution is a wearable device capable of monitoring RR continuously and non-invasively. We have implemented algorithms capable of extracting HR and RR from short-length photoplethysmogram (PPG) signals, and these cardio-respiratory biomarkers are applied to sleep quality monitoring. The work presented in this thesis consists of three major parts. In the first part, a model based on spectral estimation and median filtering is proposed to estimate HR from short-length PPG signals. For PPG signals corrupted by intense motion, due to excessive physical exercise or daily living conditions, a novel algorithm based on a recursive Wiener filter with history tracking is proposed to reduce motion artifacts and estimate HR from a very short signal (8 seconds). In the second part, we propose an automatic threshold selection technique for a multi-scale principal component analysis based model for PPG-derived RR estimation. Like the conventional methods, it works well for long PPG records, but its performance decays for short ones. To estimate RR reliably from short-length PPG, we propose a novel method that combines ensemble empirical mode decomposition (EEMD) and principal component analysis. We have also investigated other noise-assisted variants of EEMD to improve performance, and present a summary of EEMD-variant-based PPG decomposition for RR estimation.
In the final part of the thesis, we propose a PPG-based automated approach for sleep monitoring. We investigate statistical and surrogate cardio-respiratory biomarkers to classify sleep stages using supervised machine learning techniques. The performance metrics show that PPG could be a promising candidate for wearable sleep monitoring. Since PPG is well suited to long-term continuous monitoring, it will reduce patient discomfort, which is the main limitation of conventional polysomnography-based sleep monitoring.
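The spectral HR estimation in the first part can be illustrated with a toy version: locate the dominant frequency of a short, clean PPG segment within the plausible cardiac band. The synthetic waveform below is hypothetical; real PPG would need the motion-artifact handling described above.

```python
import numpy as np

def estimate_hr(ppg, fs, band=(0.7, 3.0)):
    """Dominant-peak spectral HR estimate (bpm) for a short PPG segment:
    window, zero-pad for a finer frequency grid, and pick the largest
    FFT peak inside the cardiac band (0.7-3 Hz, i.e. 42-180 bpm)."""
    x = np.asarray(ppg, dtype=float) - np.mean(ppg)
    n = 8 * len(x)                       # zero-padding refines the grid
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)), n))
    f = np.fft.rfftfreq(n, 1.0 / fs)
    sel = (f >= band[0]) & (f <= band[1])
    return 60.0 * f[sel][np.argmax(spec[sel])]

# 8-second synthetic PPG at 72 bpm (1.2 Hz) with a small second harmonic.
fs = 50.0
t = np.arange(0, 8.0, 1.0 / fs)
rng = np.random.default_rng(4)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t) \
      + 0.05 * rng.standard_normal(t.size)
hr = estimate_hr(ppg, fs)
```

Restricting the search to the cardiac band keeps the respiratory component and the harmonic from being mistaken for the pulse rate.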
Low-Latency Communication over Heterogeneous Fiber-Wireless Networks for Human-to-Machine Applications
With the advent of the Tactile Internet, next-generation communication networks will see the emergence of a variety of low-latency human-to-machine (H2M) applications. In these applications, human beings will be able to remotely control and manipulate machines/robots in real time, and concurrently experience haptic feedback such as tactile and kinetic sensations. Such applications demand stringent millisecond-scale latency in their transmission for effective H2M interaction, and current communication networks need to reduce their latency to support them. Motivated by the necessity of addressing this latency challenge, this thesis discusses crucial building blocks for realising the Tactile Internet and supporting H2M applications, and proposes novel solutions to improve their latency performance. In particular, we emphasise the importance of wireless body area networks (WBANs) and heterogeneous optical and wireless access networks for H2M applications. In these networks, the medium access control (MAC) layer impacts latency significantly, as it determines how bandwidth resources are utilised. We therefore focus on MAC layer designs in the above-mentioned networks, comprehensively investigate existing MAC layer solutions, and propose novel solutions for low-latency H2M applications. WBANs, comprising sensors and actuators on or around the human body, are essential for personal-area H2M application delivery. For WBANs, we review the evolution and standardisation of WBAN systems. Among existing standards, we pay special attention to the ETSI smart body area network (SmartBAN), which has been proposed to achieve low system complexity and power consumption. Existing studies on SmartBANs mainly focus on energy performance, given the limited energy capacity of miniaturised sensors, and on uplink transmission for monitoring and reporting-based applications.
In this thesis, we address two critical aspects: (a) the assessment of SmartBAN performance in terms of both energy and latency, via analytical models and simulations; and (b) the investigation of SmartBAN MAC channel access mechanism designs for both uplink and downlink transmission. Our studies yield low-latency, high-energy-efficiency MAC frameworks for SmartBANs in support of low-latency applications. For realising remote H2M communications, such as those between SmartBANs and distant clinicians in telemedicine, a heterogeneous optical and wireless access network is considered a promising underlying architecture. The converged application delivery can benefit from the high capacity and reliability of optical fiber communication and the mobility and wide coverage of wireless networks. In this thesis, we consider the integration of passive optical networks (PONs) and wireless local area networks (WLANs), since PONs are recognised as the most efficient technology for wired access and WLANs are cost-effective and flexible to deploy compared with mobile networks. To reduce end-to-end latency over heterogeneous PON and WLAN networks, we present a detailed analysis of MAC layer bandwidth allocation solutions in WLANs and in PONs. In particular, we explore the benefit of using machine learning (ML) to analyse and improve existing bandwidth allocation solutions. In our study, a deep neural network (DNN) is trained, via supervised learning, to characterise the dependency of network latency on multiple bandwidth allocation decision parameters and network features in PONs and WLANs. With this dependency learnt, the optimal bandwidth decisions that reduce end-to-end latency are derived using the trained DNN. State-of-the-art research on H2M communications reports a bursty traffic profile.
When such traffic aggregates upstream into the integrated optical network units and wireless access points (ONU-APs), adaptive bandwidth allocation to ONU-APs based on their bandwidth demand is critical for reducing latency. To this end, we propose a machine-learning-based predictive dynamic bandwidth allocation (DBA) scheme, termed MLP-DBA, to address the bandwidth contention among ONU-APs and the latency bottleneck caused by bursty arrivals. In MLP-DBA, an artificial neural network (ANN) at the central office (CO) predicts the H2M packet bursts arriving at each ONU-AP, enabling each ONU-AP's bandwidth demand to be estimated. As such, arrivals at ONU-APs can be allocated bandwidth for transmission by the CO without having to wait extra transmission cycles. MLP-DBA then makes adaptive bandwidth allocation decisions by classifying each ONU-AP according to its estimated bandwidth, thereby reducing latency and packet drop compared with existing schemes. Since the development of the Tactile Internet and H2M applications is in its infancy, current understanding of H2M traffic characteristics is still limited. In this thesis, we develop experimental H2M applications to study these traffic characteristics, and design innovative bandwidth allocation schemes for H2M applications based on them. We present H2M applications developed in a haptic teleoperation system and analyse the human control and haptic feedback traffic traces in these applications. To find suitable models that characterise H2M arrivals, we analyse the statistical distributions of packet inter-arrival times. We also analyse the time-domain correlations of control and feedback packets and report a high cross-correlation between control and feedback traffic. In this thesis, this characteristic is defined as the traffic causality of H2M applications.
Based on this finding, we propose an artificial-intelligence-facilitated interactive bandwidth allocation (AIBA) scheme to support low-latency H2M applications over access networks. In the AIBA scheme, the CO estimates and pre-allocates bandwidth for the subsequent haptic feedback when forwarding the human control packet, thereby expediting feedback delivery. Moreover, since future access networks will need to support both H2M and conventional content-centric applications, we discuss priority differentiation between H2M and content traffic. The capability of existing schemes and the proposed AIBA scheme to reduce and constrain latency for H2M applications is comprehensively evaluated and compared. Overall, the technical contributions presented in this thesis provide novel MAC layer solutions for key enabling networks, including WBANs and heterogeneous PON and WLAN networks, supporting low-latency H2M applications. We extend the discussion to how ML techniques can be exploited to facilitate intelligent bandwidth allocation in access networks. Experimental studies of H2M traffic, along with insights into human control and haptic feedback traffic in H2M applications, are reported. Future directions extending this thesis are also discussed.
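As a simplified stand-in for demand-aware grant sizing in such DBA schemes, the sketch below computes max-min fair grants for one polling cycle from (predicted) per-ONU-AP demands. The figures are arbitrary, and the actual MLP-DBA/AIBA logic involves prediction and classification steps not shown here.

```python
def fair_grants(demand, capacity):
    """Max-min fair grant sizing for one cycle: lightly loaded ONU-APs
    receive their full demand, and the leftover capacity is shared
    equally among the heavily loaded ones."""
    grant = [0.0] * len(demand)
    active = set(range(len(demand)))
    remaining = float(capacity)
    while active and remaining > 1e-12:
        share = remaining / len(active)
        sated = [i for i in active if demand[i] - grant[i] <= share]
        if sated:                       # grant satisfied ONU-APs in full
            for i in sated:
                remaining -= demand[i] - grant[i]
                grant[i] = demand[i]
                active.discard(i)
        else:                           # split what is left equally
            for i in active:
                grant[i] += share
            remaining = 0.0
    return grant

g = fair_grants([1.0, 2.0, 10.0], capacity=6.0)   # -> [1.0, 2.0, 3.0]
```

Replacing the reported demands with ANN-predicted burst sizes is what turns a reactive grant rule like this into a predictive one, the step that saves the extra waiting cycles discussed above.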
Signal Processing For Pile-up Mitigation in X-Ray and Gamma Ray Spectroscopy
The phenomenon of pulse pile-up presents a significant challenge to the upper limits of performance of X-ray and γ-ray spectroscopic systems. In spite of advances in pile-up mitigation algorithms over the years, it remains a significant performance bottleneck. The advent of low-cost computational power and advanced mathematical techniques has opened new possibilities for addressing this problem. In this thesis we review the numerous approaches that have been proposed in the literature for addressing the pile-up problem, and provide an overview of the physics of X-ray and γ-ray detectors and various simulation model approximations. We investigate a quantitative measure commonly chosen to evaluate the pile-up performance of algorithms, and describe several common scenarios of interest to spectroscopic applications in which this measure is largely insensitive to pile-up. We propose several alternative measures and compare their sensitivity in discriminating between different pile-up correction algorithms, demonstrating that the proposed measures are superior at discerning the efficacy of such algorithms. We then investigate a common paradigm upon which many pile-up correction strategies are based: the detection and characterization of individual pulses. By considering the asymptotic performance of an ideal Neyman-Pearson detector, we prove that although approaches based on this paradigm may reduce pile-up, they will never be capable of fully resolving it; as the count rate increases, all such algorithms do no better than an arbitrary decision. Finally we propose a non-parametric estimator that avoids this underlying problem. Modifications are made to adapt a decompounding estimator to the assumptions of spectroscopy; finite-length data sets are incorporated without restrictive assumptions about the maximum order of pile-up; and a near-optimal kernel bandwidth selection algorithm is proposed.
We demonstrate the superior performance of the estimator and bandwidth selection mechanism compared to asymptotic and fixed bandwidth results.
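The severity of pile-up at a given count rate can be illustrated with a short Monte Carlo check against the Poisson result: with arrival rate λ and pulse resolving time τ, the fraction of events whose successor lands within τ is 1 − exp(−λτ). The rate and resolving time below are merely plausible values, not figures from the thesis.

```python
import numpy as np

def pileup_fraction(rate, tau, n=200_000, seed=2):
    """Fraction of events followed by another arrival within tau
    (and hence piled up), for Poisson arrivals of the given rate."""
    gaps = np.random.default_rng(seed).exponential(1.0 / rate, n)
    return float((gaps < tau).mean())

rate, tau = 1e5, 2e-6                   # 100 kcps, 2 microsecond resolving time
frac = pileup_fraction(rate, tau)
analytic = 1.0 - np.exp(-rate * tau)
```

Because this fraction grows towards 1 with λτ, any method that must first isolate individual pulses loses its footing at high count rates, which is the asymptotic argument made above.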
Finite-time algorithms and performance bounds for real-time Internet of Things
Rapid developments in technology have enabled the large-scale deployment of interconnected sensors and actuators, captured under the umbrella term Internet of Things (IoT). Real-time IoT applications in smart grids, smart traffic control, etc., are made possible by the real-time processing of high-volume data, generated by dedicated and multi-purpose sensors and exchanged over heterogeneous wired or wireless communication networks. Our work focuses explicitly on smart intersection management applications and develops and analyses algorithms that cater to their stringent latency, mobility and geo-distribution requirements. Considering the communication delays, distributed IoT implementations like these prefer fog/hybrid architecture-based data processing to the conventional centralised, cloud-based approach. Further, for distributed real-time IoT algorithms, finite-time performance matters more than the asymptotic results in the literature. Thus, precise estimates have to be obtained of the delay needed for an optimisation algorithm to compute a solution within the desired proximity of the optimal one. Such trade-offs are inevitable in the design of real-time algorithms over an IoT network. This thesis develops distributed optimisation algorithms that make explicit delay-accuracy trade-offs possible, and studies the effects of channel impairments and communication network structure on them. We introduce a finite-time distributed optimisation algorithm and derive universal performance bounds for an asymptotic algorithm solving a quadratic Network Utility Maximisation (NUM) problem using quantised inter-agent communication. The finite-time algorithm is then used to solve a Model Predictive Control (MPC) problem and applied to a smart traffic intersection management scenario.
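A minimal instance of the quadratic NUM problem referenced above can be solved by dual decomposition, where each agent reacts to a shared "price" and the price follows the constraint violation. The utilities and step size below are illustrative, and no quantisation or finite-time bound is modelled.

```python
import numpy as np

def num_dual(a, C, alpha=0.05, steps=500):
    """Maximise sum_i (a_i x_i - x_i^2 / 2) s.t. sum_i x_i <= C, x_i >= 0.
    Each agent i best-responds to the price lam with x_i = max(0, a_i - lam);
    the price is updated by projected subgradient ascent on the dual."""
    a = np.asarray(a, dtype=float)
    lam = 0.0
    for _ in range(steps):
        x = np.maximum(0.0, a - lam)                 # agents' local updates
        lam = max(0.0, lam + alpha * (x.sum() - C))  # price update
    return np.maximum(0.0, a - lam), lam

x, lam = num_dual([4.0, 3.0, 2.0], C=3.0)   # optimum: x = [2, 1, 0], lam = 2
```

Counting how many price updates are needed to reach a given distance from the optimum is exactly the finite-time (delay versus accuracy) question the thesis formalises.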
Optimization and deep learning techniques for next-generation wireless communication networks
Due to the explosive growth of consumer electronic devices, such as smartphones, tablets, and the Internet of Things, global mobile data traffic is estimated to increase sevenfold by 2022 in the fifth generation (5G) of wireless communication networks. At the same time, the mobile network connection speed is envisioned to increase more than threefold by 2022. Many technologies have been proposed to fulfill such unprecedented user demands. On the macroscopic level, the network architecture largely determines the performance of the network; novel architectures, such as heterogeneous networks (HetNets) and centralized radio access networks (C-RANs), have been proposed to accommodate massive numbers of wireless devices. On the microscopic level, the control of all devices is vital for day-to-day network operation, and intelligent control agents are required to operate networks without human intervention. In this thesis, we start on the architecture side of the network and investigate a mixed-integer non-linear programming (MINLP) problem for a joint backhaul-access HetNet using a classical optimization approach. We then move to the operational side of networks and focus on spectrum sharing in cognitive radio networks and topology control in wireless sensor networks. For these problems, we employ deep learning approaches that learn from collected data and adapt to the changing radio environment without a priori knowledge of the network. We show the applicability and superiority of our deep-learning-based algorithms compared with classical analytic approaches. More importantly, we show the novel applicability of deep learning to solving MINLP problems, which are commonly encountered in wireless communication networks.
An Input-Output Framework for Stability and Performance Analysis of Networked Systems
A framework is developed for analyzing networked feedback interconnections of dynamical systems. The framework accommodates linear time-invariant open-loop dynamics and a range of digital network characteristics, including uncertain time-varying inter-sample intervals, communication delays, and quantization. The main results relate to integral-quadratic-constraint based descriptions of the uncertainty and non-linearities, and corresponding L2-gain performance certificates. These numerically tractable certificates are parametrized to enable exploration of achievable performance for given network characteristics, and of networking requirements for achieving performance specifications.
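For a SISO LTI special case, the kind of L2-gain certificate discussed here can be checked numerically by sweeping the frequency response. The sketch below estimates the H-infinity norm of two toy transfer functions; the grid density is an assumption, and none of the IQC machinery is involved.

```python
import numpy as np

def l2_gain(num, den, wmax=1e3, npts=20000):
    """L2 gain (H-infinity norm) of a stable SISO system num(s)/den(s),
    estimated as the peak of |G(jw)| over a log-spaced frequency grid
    (plus w = 0 for the DC gain)."""
    w = np.concatenate([[0.0], np.logspace(-3, np.log10(wmax), npts)])
    G = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)
    return np.abs(G).max()

gain_lowpass = l2_gain([1.0], [1.0, 1.0])        # 1/(s+1): gain 1, at DC
gain_resonant = l2_gain([1.0], [1.0, 0.2, 1.0])  # peak 1/(2 z sqrt(1-z^2)), z = 0.1
```

For the networked interconnections treated in this work, such a frequency sweep is replaced by the numerically tractable IQC-based certificates, which additionally absorb the sampling, delay and quantization uncertainty.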