Electrical and Electronic Engineering - Theses

Search Results

Now showing 1 - 10 of 22
  • Item
    Sensor processing for localization with applications to safety
    Ul Haq, Ehsan ( 2017)
    Heavy industries such as construction, mining and transport typically have dangerous work environments, where injuries and fatalities remain common despite extensive rules and regulations. Such mishaps are largely due to human negligence and improper monitoring of the workplace. Injuries are also more likely when people and machines operate together. To ensure safety, a framework is needed that can track moving objects around a user with centimeter accuracy. The sensor should be small enough to be easily incorporated into workers' safety equipment, and robust to the random movements of the user and of objects in the surrounding area. This thesis addresses the issues in developing a framework for a low-cost smart helmet for workers in dangerous work environments. The techniques developed for safety helmets are also directly applicable to the light-weight navigation systems needed for tiny drones. At its core, we have developed a framework and algorithms using simple, inexpensive continuous wave (CW) Doppler radars to obtain the precise location of static and dynamic obstacles around a user. CW Doppler radars only provide relative radial velocity, so the first issue is to determine the conditions under which the position of a target is observable. We have also designed, compared and analyzed different nonlinear trackers to determine which works better under which scenarios. We explore how instantaneous frequency measurements can be obtained from the rate of phase change in the returned waves of CW radars. To this end, we performed various simulations with models of different orders, and the results showed that we can successfully localize walls with sub-centimeter accuracy. Moreover, we show that random human head movements and walking do not significantly degrade estimation accuracy and can be handled by adding noise to the system model.
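    The abstract above rests on the basic CW Doppler relation between measured frequency shift and relative radial velocity. A minimal sketch of that measurement model follows; the 24 GHz carrier is an assumed example frequency, not taken from the thesis.

```python
# CW Doppler measurement model: a CW radar observes only the Doppler
# shift f_d, which maps to relative radial velocity via f_d = 2*v_r/lambda.

C = 3.0e8  # speed of light, m/s

def radial_velocity(doppler_hz: float, carrier_hz: float) -> float:
    """Relative radial velocity implied by a measured Doppler shift."""
    wavelength = C / carrier_hz
    return doppler_hz * wavelength / 2.0

# A 160 Hz shift on a 24 GHz carrier (wavelength 12.5 mm) corresponds
# to a target closing at 1 m/s.
v = radial_velocity(160.0, 24.0e9)
```

    Because only this radial component is observable, position estimates must be built up over time by a tracker, which is exactly the observability question the thesis studies.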
  • Item
    Voltage stability issues in power grids: analysis and solutions
    Jalali, Ahvand ( 2017)
    Voltage Stability (VS) is gaining increasing significance in today's power systems, which are undergoing sizeable growth in power consumption and higher integration of renewables. Economic and environmental barriers impede new investment in network infrastructure to keep up with load growth and renewables' intermittency. As a result, many power systems around the world are being operated close to their VS limits. This has made voltage instability an ever-present operational problem for many power systems, and reveals the need for smarter and more efficient approaches to analyse and ensure VS. The significance of VS has been well demonstrated by many real-life incidents of power system instability associated with VS. From an analytical perspective, with the increasing variability of today's power systems and higher levels of intermittent renewables integrated into the grid, more frequent evaluation of a power system's VS condition is imperative. Hence, more efficient VS evaluation tools, in terms of speed, accuracy, and automated applicability, are needed. Also, from a practical point of view, the prohibitive cost of upgrading power system infrastructure necessitates smarter, more efficient alternative approaches to ensure the VS of power systems. This includes operating the existing power system components through intelligent, active network management (ANM) schemes. Continuation power flow (CPF) is the conventional and most widely used approach to steady-state VS analysis. The CPF algorithm and all its improved versions, however, suffer from high complexity and relatively long execution times. Considering the need for more frequent VS analysis in today's renewable-rich power systems, this thesis proposes a more efficient approach to plotting the P-V curves and identifying the VS limits, i.e. the saddle-node bifurcation (SNB) and limit-induced bifurcation (LIB) points, of power systems.
    The method is based on the standard Newton-Raphson power flow (NR-PF) algorithm and thus avoids the complexities of existing CPF methods. It offers greatly reduced execution time, high accuracy, automated applicability, and ease of implementation and comprehension. Several novel, simple techniques are used in the proposed approach to identify both SNB and LIB points. The method is tested on several power systems, including a large-scale one, and its performance is compared with established CPF methods. Modal Analysis (MA) is another commonly used approach for identifying the weak areas of a power system from a VS viewpoint. This thesis proposes two improved MA methods applicable to radial distribution systems. The proposed MA methods, unlike the original MA, do not ignore active power variation and allow taking into account any combination of active and reactive power variations. As a result, the proposed methods improve on the accuracy of the original MA in identifying the best buses for active or reactive compensation, with the aim of improving the distribution system's voltage stability margin (VSM). Meanwhile, ongoing technological advances in energy storage systems (ESSs) have made the grid integration of these devices technically and economically more viable. Accordingly, this thesis carries out optimal placement and operation of ESSs in power systems with embedded wind farms, from a VSM-improvement viewpoint. The probabilistic nature of wind is taken into account through the probability density function (PDF) of the wind farm's output power. A combination of MA and CPF is used to identify the best placement of ESSs in the network. A new method of power sharing between the ESSs, based on their effect on the system's VSM, is also proposed. The required power injection of the ESSs, at an optimal power factor (PF), to ensure a pre-specified minimum required VSM is also calculated at all load-wind levels.
    Furthermore, in this thesis, the problem of ESS placement is formulated as a probabilistic optimization framework, through which optimal placement, sizing, and operation of ESS devices in wind-embedded distribution systems are carried out. The main objective of the allocation problem is to minimize the required power and energy ratings of the ESSs to be installed, such that a desired level of VSM is always ensured. The reactive power loss and the reactive power import from the upstream network are also minimised through a multi-objective optimization framework. Wind uncertainty is accounted for through optimally generated wind power scenarios and a risk-based stochastic optimization approach. In addition, ANM tools, such as the tap position of on-load tap changers (OLTCs), modelled using a new method, and the reactive power capabilities of both ESS devices and wind farms, are used as additional means to reduce the required ESS size. Finally, dynamic simulation is carried out to demonstrate the effectiveness of ESS devices in dynamically improving the VS of power systems. The effects of induction motor (IM) loads, fixed-speed induction generator (FSIG)-based wind turbines (WTs), and the over-excitation limiter (OEL) of synchronous generators (SGs) on the power system's short-term voltage stability (ST-VS) are evaluated. Then, the use of ESSs to provide dynamic voltage support (DVS) to the power system during and after large disturbances, as a countermeasure against short-term voltage instability, is investigated. To this end, systematic control of the ESS, to inject any desired active and reactive powers into the system, is carried out. The effects of implementing fault ride-through (FRT) and time-overload (TOL) capabilities of the ESS, as well as the ESS's PF, on ST-VS are also analysed.
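    The P-V curve and saddle-node bifurcation point central to this abstract can be illustrated on the textbook two-bus case: a source of voltage E behind a line reactance X feeding a unity-power-factor load P. This closed-form sketch is only an illustration of the SNB concept, not the thesis's NR-PF based method; E and X are assumed per-unit example values.

```python
import math

def pv_curve(P, E=1.0, X=0.5):
    """Load-bus voltage solutions (V_high, V_low) of a lossless two-bus
    system at unity-PF loading P, or None past the nose (SNB) point."""
    disc = E**4 / 4.0 - (P * X)**2
    if disc < 0:
        return None                       # no power-flow solution: beyond SNB
    root = math.sqrt(disc)
    return (math.sqrt(E**2 / 2.0 + root),  # stable upper branch
            math.sqrt(E**2 / 2.0 - root))  # unstable lower branch

# The nose (SNB) loading for this system is P_max = E^2 / (2X).
P_max = 1.0**2 / (2 * 0.5)
```

    Tracing `pv_curve` over increasing P reproduces the familiar nose curve: the two branches meet at `P_max`, beyond which no solution exists, which is exactly where CPF-style methods must stop.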
  • Item
    Energy and carbon footprint of ubiquitous broadband
    Suessspeck, Sascha ( 2017)
    This thesis concerns ubiquitous broadband in Australia. We use a comparative-static computable general equilibrium model to analyse the economic effects, and to derive the environmental effects, of the National Broadband Network (NBN) in the short term and long term. While investment increases significantly due to NBN deployment in the short term, overall economic activity increases only marginally. We find that national greenhouse gas (GHG) emissions are effectively unchanged by the construction of the NBN. We run long-run model simulations to analyse the impact of new services and new ways of working that are enabled by the NBN. The simulation results depend on our estimates of the incremental impact of the NBN on service delivery. For this purpose, we map the coverage of broadband in Australian regions using an open-source geographical information system (GIS). We then define two sets of service requirements and determine service availability across regions with and without the NBN. The results show that the NBN produces substantial benefit when services require higher bandwidths than today's offerings to the majority of end users. In this scenario, the economic effects of productivity improvements facilitated by electronic commerce, telework or telehealth practice made widely available through the NBN will be sufficient to achieve a net improvement in the Australian economy over and above the economic cost of deploying the NBN itself. If, on the other hand, the NBN has a significant effect only on the availability of entertainment services, then the net effect will not be sufficient to outweigh the cost of deployment. We find that national GHG emissions increase with service availability and are higher with the NBN. We construct an NBN power consumption model to estimate the purchased electricity and GHG emissions of the NBN network in the long term, after NBN deployment. We find that the NBN network increases energy demand and GHG emissions marginally.
    The main contributions of this thesis relate to the model simulations. Detailed analysis of the economic and environmental effects of the NBN on the Australian economy provides policymakers and researchers with new insights based on a state-of-the-art methodology. Beyond the regional scope of this thesis, the results provide fresh evidence of the rebound effect and the GHG emissions abatement potential of ubiquitous technologies such as broadband. While this thesis points to the possible trade-offs faced by various individuals or groups when evaluating economic policy, an efficient way to achieve a more sustainable outcome is to address externalities related to GHG emissions directly, by implementing appropriate environmental policies.
  • Item
    Medical image processing with application to psoriasis
    George, Yasmeen ( 2017)
    Psoriasis is a chronic, long-lasting auto-immune skin condition with no clear cause or cure. Psoriasis affects people of all ages, and in all countries. According to the International Federation of Psoriasis Associations (IFPA), 125 million people worldwide have psoriasis. The severity of psoriasis is determined by clinical assessment of the affected areas and how much the condition affects a person's quality of life. The most common form is plaque psoriasis (at least 80% of cases), which appears as red patches covered with a silvery white build-up of dead skin cells. The current practice for assessing the severity of psoriasis is the "Psoriasis Area Severity Index" (PASI), which is the most widely accepted severity index. PASI has four parameters: percentage of body surface area covered, erythema, plaque thickness, and scaliness. Each measure is scored for four body regions: head, trunk, upper limbs, and lower limbs. Although PASI scores guide dermatologists in prescribing treatment, significant inter- and intra-observer variability in PASI scores exists. This variability, along with the subjectivity and time required to determine the final score manually, makes the current practice inefficient and unattractive for use in daily clinics. Therefore, developing a computer-aided diagnosis system for psoriasis severity assessment is highly beneficial and long overdue. Although research in the area of medical image analysis has advanced rapidly during the last decade, notable advances in psoriasis image analysis and PASI scoring have been limited and have only recently started to attract attention. In this thesis, we present the framework of a computer-aided system for PASI scoring using 2D digital skin images, by exploring advanced image processing and machine learning techniques.
    On the one hand, this will greatly help improve access to early diagnosis and appropriate treatment for psoriasis, by providing consistent, precise and reliable severity scoring and reducing the inter- and intra-observer variations in clinical practice. On the other hand, this can improve the quality of life for psoriasis patients. The framework consists of (i) a novel preprocessing algorithm for removing skin hair and side clinical markers in 2D psoriasis skin images, (ii) a psoriasis skin segmentation method, (iii) a fully automated nipple detection approach for psoriasis images, (iv) a semi-supervised approach for erythema severity scoring, (v) a robust, reliable and fully automated superpixel-based method for psoriasis lesion segmentation, and (vi) a new automated scale scoring method using a bag-of-visual-words model with different colour and texture descriptors.
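    To make the PASI parameters and region weights mentioned above concrete, here is the standard clinical PASI formula. This sketch only illustrates what the automated system must ultimately score; it is not part of the thesis's image-based pipeline.

```python
# Standard PASI: per region, sum the three severity scores (erythema,
# thickness, scaliness; each 0-4), multiply by the 0-6 area score, and
# weight by the region's share of body surface. Maximum total is 72.

REGION_WEIGHTS = {"head": 0.1, "upper_limbs": 0.2, "trunk": 0.3, "lower_limbs": 0.4}

def area_score(pct: float) -> int:
    """Map the percentage of a region affected to the 0-6 PASI area score."""
    if pct == 0:
        return 0
    for score, upper in ((1, 10), (2, 30), (3, 50), (4, 70), (5, 90)):
        if pct < upper:
            return score
    return 6

def pasi(regions: dict) -> float:
    """regions: {region: (erythema, thickness, scaliness, pct_area)}."""
    return sum(REGION_WEIGHTS[r] * (e + t + s) * area_score(pct)
               for r, (e, t, s, pct) in regions.items())

# Worst case: all severities 4 and 90-100% coverage in every region.
worst = {r: (4, 4, 4, 100.0) for r in REGION_WEIGHTS}
```

    The inter-observer variability the abstract describes arises in estimating the severity and area inputs; the arithmetic itself is fixed.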
  • Item
    Electron transport in nanoscale electronics
    Jiang, Liming ( 2017)
    The current booming development of information and communication technologies would not exist without advances in integrated circuits. The computational capability of integrated circuits has increased tremendously since they were first invented, and this improvement is due to the miniaturization of electronic devices, which allows many more transistors to be packed into an individual chip and enables much lower power and faster operation. A typical commercialised processor today integrates billions of transistors into a single chip and provides enormous computational capability. However, this decades-long trend towards system and device miniaturization will not persist as conventional electronic devices reach the nanometre scale and fundamental limits start to emerge. One major problem that prevents further miniaturization of conventional electronics, heat dissipation, is difficult to overcome due to the limitations of the material itself. Thus, new materials and device concepts are needed to mitigate these limitations and advance the field of electronics. This thesis presents a theoretical approach to investigating the electronic properties of various novel materials for nanoscale electronics applications. Novel materials, such as two-dimensional materials and functional molecules, hold the potential to mitigate the current constraints and enable novel nanoscale electronics applications. This thesis is dedicated to the electronic-property modelling of materials and nanoscale devices using state-of-the-art computational approaches, including electronic structure simulation with the semiempirical tight-binding (TB) approach and ab initio density functional theory (DFT), and electron transport simulation with the non-equilibrium Green’s function (NEGF) method.
    The novel material stanene has been predicted to be a large-gap topological insulator and is a prospective candidate for nanoscale electronics. Previous studies mainly applied DFT-based methods, which can be very computationally expensive. In this thesis, a novel TB model with much-reduced complexity is developed for monolayer stanene. The derived model has been validated against ab initio approaches, showing close agreement in the low-energy region. Based on the model, analytical solutions at the high-symmetry points have been derived, and energy parameters for the tight-binding method and k∙p perturbation theory have been numerically fitted. The outcome of this study can be applied to high-efficiency modelling of nanoscale stanene-based devices. Electron-spin-based devices have enormous potential for producing lower-power, faster electronic circuits and are under intensive research. Maintaining spin coherence is critical to realising electron-spin-based logic devices. In this thesis, electron spin-dependent transport is investigated, and a device that realises high spin-filtering efficiency is proposed by creating a break-junction on a zigzag graphene nanoribbon (ZGNR). This study demonstrates a device concept with simple geometry yet promising spin-filtering performance that can provide easy integration between spin injection and spin transport. This thesis also investigates the potential of using the biomolecule DNA for nanoscale electronics applications. Its robustness and capacity to store a significant amount of information make DNA a promising candidate for next-generation storage media. However, many problems, including the proneness of molecular storage to synthesis errors and the complexity of data readout, make it difficult to apply in practice. In this thesis, a feasibility study is conducted on using DNA 5-methylcytosine to store information.
This study demonstrates a molecular device concept which can be beneficial in the design of future molecule based memory or storage devices.
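    The tight-binding approach the abstract extends to monolayer stanene can be illustrated on the simplest possible case: a 1D chain of identical atoms with on-site energy eps0 and nearest-neighbour hopping t, whose dispersion is E(k) = eps0 - 2 t cos(k a). This is a toy sketch of the TB method, not the stanene model itself; eps0, t and a are arbitrary example parameters.

```python
import numpy as np

def chain_dispersion(k, eps0=0.0, t=1.0, a=1.0):
    """Nearest-neighbour tight-binding band of a 1D atomic chain."""
    return eps0 - 2.0 * t * np.cos(k * a)

k = np.linspace(-np.pi, np.pi, 201)  # sample the first Brillouin zone
E = chain_dispersion(k)
bandwidth = E.max() - E.min()        # equals 4t for this model
```

    Realistic 2D models like the stanene TB model in the thesis follow the same recipe: write the hopping Hamiltonian in k-space and diagonalise it at each k, which is far cheaper than a DFT calculation.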
  • Item
    An investigation of spatial receptive fields of complex cells in the primary visual cortex
    Almasi, Ali ( 2017)
    One of the main concerns of visual neuroscience is to understand how information is processed by the neural circuits in the visual system. Since the historic experiments of Hubel and Wiesel, many more aspects of visual information processing in the brain have been discovered using experimental approaches. However, many of the computations underlying such processing remain unclear or even unknown. In the retina and the lateral geniculate nucleus, the basic computations have been identified by measuring the responses of neurons to simple visual stimuli such as gratings and oriented bars. However, in higher areas of the visual pathway, e.g. the cortical visual areas, many neurons (including complex cells) cannot be characterised entirely based on their responses to simple stimuli. The complex cells in the visual cortex do not exhibit linear receptive field properties. Hence, the failure of linear receptive field models to describe the behaviour of such neurons has led neuroscientists to seek more plausible quantitative models. Efficient coding is a computational hypothesis about sensory systems. Recently developed models based on the efficient coding hypothesis have been able to capture certain properties of complex cells in the primary visual cortex. The Independent feature Subspace Analysis (ISA) model and the covariance model are two examples of these models. The ISA model employs the notion of the energy model in describing the responses of complex cells, whereas the covariance model is based on a recent hypothesis that complex cells tend to encode the second-order statistical dependencies of the visual input. In this thesis, the parametric technique of the generalised quadratic model (GQM), in conjunction with white Gaussian noise stimulation, is used to identify the spatial receptive fields of complex cells in cat primary visual cortex.
    The validity of the identified receptive field filters is verified by measuring their performance in predicting the responses to test stimuli, using correlation coefficients. The findings suggest that a majority of the complex cells in cat primary visual cortex are best described using a linear filter together with one or more quadratic receptive field filters; these are classified as mixed complex cells. We observed that some complex cells exhibit linear as well as quadratic dependencies on an identified filter of their receptive fields. This often introduces a significant shift in the feature-contrast responses of these cells, which results in violations of the polarity-invariance property of complex cells. Lastly, a quantitative comparison is performed between experiment and theory, using statistical analysis of the population of cell receptive fields identified by experiment and those predicted by the efficient coding models. For this, motivated by the experimental findings for complex cells, a modification of the ISA model that incorporates a linear term is introduced. The simulated model receptive fields of the modified ISA and the covariance model are then compared to the experimental data. While the modified ISA and covariance models are comparable in predicting the characteristics of complex cell receptive fields in the primary visual cortex, the latter proves more capable of explaining the observed intra-receptive-field inhomogeneity of complex cells, including differences in orientation preference and spatial frequency ratio between the receptive field filters of the same cell. However, the major discrepancies between theory and experiment lie in the orientation bandwidth and spatial frequency bandwidth of the receptive field filters, where the population of predicted model receptive field filters demonstrates much narrower bandwidths.
    These findings suggest that the experimentally identified receptive field filters are sub-optimal in terms of the efficiency of the code.
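    The "linear filter plus one or more quadratic filters" structure described above can be sketched as a generalised quadratic model (GQM) response: a linear filter w and quadratic filters (columns of U) drive a rectifying output nonlinearity. The filter values below are random placeholders standing in for identified receptive fields, and the rectifier is one common choice of output nonlinearity, not necessarily the one fitted in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16                               # stimulus dimensionality (pixels)
w = rng.normal(size=dim)               # linear receptive-field filter
U = rng.normal(size=(dim, 2))          # two quadratic receptive-field filters

def gqm_response(x, b=-1.0):
    """GQM drive: offset + linear term + sum of squared quadratic-filter
    outputs, passed through a rectifying output nonlinearity."""
    drive = b + w @ x + np.sum((U.T @ x) ** 2)
    return max(drive, 0.0)

x = rng.normal(size=dim)               # one white-Gaussian-noise stimulus frame
rate = gqm_response(x)
```

    The quadratic terms give the polarity-invariant, energy-model-like behaviour of a classical complex cell; a non-zero linear term breaks that invariance, which is exactly the "mixed complex cell" signature the thesis reports.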
  • Item
    Colour-based computer image processing approach to melanoma diagnosis
    Sabbaghi Mahmouei, Sahar ( 2017)
    Melanoma is one of the most prevalent skin cancers in the world. The incidence and mortality rates of melanoma in the Australian population have been increasing sharply over the last decades. For instance, it is estimated that two in three Australians develop some form of skin cancer before they reach the age of 70. Most melanomas can be cured if diagnosed and treated in the early stages. Over the past decades, advances in dermoscopy technology have made it an effective technique for the early diagnosis of malignant melanoma. Dermoscopy allows clinicians to visualise different colours and examine microstructures in the skin that are not visible to the naked eye. This clear view of the skin reduces screening errors and significantly improves the diagnostic accuracy of pigmented skin lesions. However, it has been demonstrated that the performance and accuracy of manual melanoma diagnosis using dermoscopic images depend on the quality of the image and the clinical experience of the dermatologist. Several medical diagnosis methods have been developed to help dermatologists interpret the structures revealed through dermoscopy, such as pattern analysis, the ABCD rule, the 7-point checklist, the Menzies method, the CASH algorithm, the Chaos and Clues algorithm and the BLINCK algorithm. However, the diagnostic criteria used in assessing the potential of melanoma may easily be overlooked in early melanomas, or a lesion may be misinterpreted as a benign mole, mainly owing to the subjectivity of clinical interpretation. Also, human judgement is often hardly reproducible. Therefore, clinical diagnosis remains challenging, especially with equivocal pigmented lesions, and the accuracy of melanoma diagnosis by expert dermatologists remains at 75–84%. Only biopsy or excision of a pigmented skin lesion can provide a definitive diagnosis. However, a biopsy carries a risk of metastasis, in addition to being invasive and an unpleasant experience for the patient.
    Therefore, to minimise diagnostic errors and provide a reliable, independent second opinion to dermatologists, the development of computerised image analysis techniques is of paramount importance. In the last decade, several computer-aided diagnosis (CAD) systems have been proposed to tackle this problem. However, the diversity of remaining problems leaves considerable room for further contributions. Moreover, it is widely acknowledged that much higher accuracy is required for a computer-based system to be considered reliable and trustworthy enough by clinicians, and therefore to be adopted routinely in their diagnostic process. With the aim of improving some existing approaches and developing new techniques to facilitate accurate, fast and more reliable computer-based diagnosis of melanoma, this thesis describes novel image processing approaches for computer-aided detection of a selected subset of the medical criteria that play an important role in the diagnosis of melanoma. This ensures that the features used by the system have a medical meaning, making it possible for the dermatologist to understand and validate the automated diagnosis. One of the contributions of this thesis is a fast and accurate colour detection method. It is observed that colours may vary slightly in dermoscopy images because of different levels of contrast. This may lead to difficulty in the perception of colours by dermatologists, resulting in subjectivity of clinical diagnosis. A computer-assisted system for quantitative colour identification is therefore highly desirable for dermatologists. However, these colour variations within the lesion make colour detection a challenging process. To tackle this challenge, a comprehensive colour detection procedure is developed in this thesis. It incorporates a colour enhancement step to overcome the problem of poor contrast.
    Since colours perceived by the human observer are produced by a mixture of pixel values, we build a summarised representation of colours by subdividing the colour space into colour clusters, each comprising a set of RGB values, using QuadTree clustering. The proposed method employs a colour palette to mimic human interpretation of lesion colours in determining the type and number of colours in melanocytic lesion images. In addition, a set of parameters, such as colour features, texture features, and locational features, is extracted to describe numerically the colour properties of each segmented block throughout the lesion. Furthermore, when comparing the colour distributions of malignant melanomas and benign melanocytic lesions, a significant difference in the number of colours between the two populations is detected. The proposed method also shows that the type of colour can greatly affect the diagnostic outcome. The effectiveness of the proposed colour detection system is evaluated by comparing the obtained results with those of expert dermatologists. The highest correlation coefficients for detecting the type of colour are observed for red and blue–grey, which, for the image set used in this thesis, are the most important colours for diagnostic purposes. The overall performance of the proposed system is evaluated using machine learning techniques, and the best classification results, an AUC of 0.93, are achieved using a kernel SVM classifier. Another contribution of this thesis is to provide meaningful visualisation of streaks, and to extract features that determine the relative importance of streaks in classifying a skin lesion into the two classes of benign and malignant. To find streaks, a trainable B-COSFIRE filter is applied to dermoscopy images to detect a prototype pattern of interest, such as the bar-shaped structure of a streak.
    Its application consists of convolution with Difference-of-Gaussians (DoG) filters, blurring the filter responses, shifting the blurred responses, and computing a point-wise weighted geometric mean (GM). To account for the varying thickness and structure of streaks, a bank of B-COSFIRE filters is applied to the image at different orientations. Then, to identify valid streaks among the candidate streak lines, clinical criteria, such as the number of streaks in the image and their orientation pattern, are evaluated, and falsely detected lines are removed. The result is displayed as line segments indicating the pixels that belong to streaks. A set of features derived from the streaks (such as geometric, colour and texture features) is then fed to three different classifiers for classifying images. We achieved an accuracy of 93.3% in classifying 807 dermoscopy images into benign and malignant. Furthermore, a novel, comprehensive and highly effective application of deep learning (stacked sparse auto-encoders) is examined in this thesis for the classification of skin lesions. The model learns a hierarchical high-level feature representation of the skin image in an unsupervised manner. The stacked sparse auto-encoder discovers latent information features in the input images (pixel intensities). These high-level features are subsequently fed into a classifier for classifying dermoscopy images. In addition, we propose a new deep neural network architecture based on the bag-of-features (BoF) model, which learns a high-level image representation and maps images into BoF space. We have shown that using BoF as the input to the auto-encoder can improve the performance of the neural network in comparison with raw input images. The proposed method is evaluated on a test set of 244 skin images, and the results show that the deep BoF model achieves higher classification scores (SE = 95.4% and SP = 94.9%) compared to raw input images.
Our contributions will improve automated diagnosis of melanoma using dermoscopy images.
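    The Difference-of-Gaussians (DoG) stage of the B-COSFIRE streak detector described above can be sketched in a few lines: subtracting a wide Gaussian blur from a narrow one yields a band-pass response that lights up on thin, bar-like structures. This numpy-only sketch on a synthetic image illustrates only the DoG stage, with assumed example sigmas, not the full trainable B-COSFIRE filter.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalised 1D Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def dog_filter(image, sigma_narrow=1.0, sigma_wide=2.0):
    """Separable DoG: blur with each sigma along both axes, then subtract."""
    def blur(img, sigma):
        k = gaussian_kernel(sigma)
        img = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
        return np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return blur(image, sigma_narrow) - blur(image, sigma_wide)

# A one-pixel-wide vertical "streak" produces a strong positive DoG
# response along its centre line and near-zero response far from it.
img = np.zeros((32, 32))
img[:, 16] = 1.0
response = dog_filter(img)
```

    The full B-COSFIRE detector then blurs, shifts, and combines such DoG responses with a weighted geometric mean, and a bank of these filters at several orientations covers streaks of any direction.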
  • Item
    Automatic analysis of 4D laryngeal CT scans to assist diagnosing of voice disorders
    Hewavitharanage, Sajini Ruwanthika Gintota ( 2017)
    The vocal folds are two smooth bands of muscle located in the larynx, just above the trachea. Humans produce voice by vibrating the vocal folds using air coming from the lungs. The abduction and adduction of the vocal folds are controlled by the muscles connected to the thyroid, cricoid and arytenoid cartilages. When the vocal muscles are misused or excessively used, they can be strained or damaged, and voice disorders may occur. Furthermore, the vocal folds can be damaged, and the connecting cartilages and muscles affected, by other illnesses such as Parkinson's disease (PD), multiple sclerosis (MS), myasthenia gravis (MG), strokes or tumours. PD is a neuro-degenerative disease which currently has neither a cure nor any pathological test for its detection. The disease progresses very slowly over the years, and symptoms appear when approximately 70% of the affected neurons have ceased to function. The usual symptoms are tremors and stiffness in the body muscles, which result in difficulty moving most body parts, externally as well as internally. Consequently, the vocal folds and laryngeal muscles are affected, and PD patients suffer from vocal impairments. Furthermore, previous studies carried out using laryngoendoscopy, laryngostroboscopy and laryngeal electromyography of PD patients found that those patients have abnormal phase closure and abnormal laryngeal muscle activity. Moreover, in 2014, a study of a group of early PD patients demonstrated increased glottis area and reduced inter-arytenoid distance in the subjects. Therefore, laryngeal measurements could be used as a biomarker for the early detection of PD. However, segmenting the vocal folds region from volumetric laryngeal computed tomography (CT) images is a tedious task when done manually. Manual segmentation schemes require a great deal of expert knowledge and time, and often yield results that are neither objective nor reproducible.
    In this project, we aim to develop a novel automated algorithm to segment the vocal folds region and measure the laryngeal parameters. This thesis consists of two major parts. First, it proposes a fully automated method for segmenting the rima glottidis from 3D laryngeal CT scans and generating the time series of rima glottidis areas, which in future can be used to develop an automated diagnosis tool for voice disorders. Gray-level difference features are learnt through a support vector machine classifier, and several post-processing algorithms are introduced to refine the final segmentation result. Second, it proposes a fully automated method to estimate the vocal plane position in a 3D laryngeal CT volume using computer vision algorithms and techniques. The vocal plane position is identified using anatomical markers such as the thyroid cartilage and vertebral bones, and these markers are segmented using gray-scale and edge-based features. The experiments are conducted using a private data set from the Movement Disorder Clinic at Monash Medical Centre. The detailed implementations of the two methods, including feature extraction, kernel selection, post-processing and validation, are explained in this thesis.
  • Item
    Thumbnail Image
    Design challenges of smart meter-based applications
    Amarasekara, Athauda Arachchige Bhagya ( 2017)
    The smart grid is an interconnected electricity network. It integrates the electricity grid with powerful control and communications networks that can dynamically respond to customer demands and energy supply scenarios with increased reliability. One of the key components of the smart grid is the smart meter, which is the main sensor in the electricity distribution grid. To date, the introduction of the smart meter has transformed the manual electricity billing system into an automated meter reading system. In the future, the capabilities of smart meters will not be limited to meter readings but are expected to facilitate outage detection and demand side management, allowing the grid to respond dynamically to both customer demands and energy market pricing signals. However, smart meter-based applications face many implementation challenges, including provisioning adequate resources for smart metering traffic to guarantee the required quality of service (QoS) level, maintaining the scalability of applications that require complex computations, ensuring the security of smart metering data, and providing a platform to identify the effect of communications networks on smart meter applications. This thesis investigates approaches to overcome the challenges in implementing smart meter-based applications to achieve a reliable and cost-efficient electricity network. In particular, it examines efficient solutions to three key challenges: mechanisms to guarantee QoS levels when smart meters use public communications networks to transport their data, approaches to guarantee the scalable deployment of complex smart meter-based applications, and a platform to efficiently simulate smart grid networks along with their control and communication operations in order to assess smart grid applications.
For the smart meter communications network, the public telecommunications network is considered a cost-effective solution as it does not involve any separate installation or maintenance costs. However, when network resources are shared between public traffic and smart metering traffic, the required QoS levels of essential broadband services as well as those of smart meters should be satisfied. To this end, this thesis explores resource allocation mechanisms in both the core network and access networks to provide adequate services for all users in the shared network. In particular, it proposes approaches to classify and schedule traffic in the core network, in addition to scheduling algorithms for the long-term evolution (LTE) wireless access network when it shares its resources with smart meter traffic. Our simulation results indicate that the proposed scheduling mechanisms can significantly improve the QoS performance of public traffic and of smart grid traffic related to automatic meter reading and outage detection applications. Another key challenge faced by smart meter applications is to provide scalable deployment for smart grid applications such as demand side management (DSM). Though it is important to integrate a large number of energy customers into DSM to achieve the desired cost-effective supply-demand balance, limited computational resources such as memory hinder this integration. Therefore, this thesis explores efficient ways to accommodate a large number of customers in DSM by using aggregators that consolidate the underlying customers' energy, power and cost requirements. We also present simplified methods to distribute the aggregated optimal decisions to the end customers and demonstrate the applicability of the proposed method by using it in a large electricity network. The results reveal that the proposed aggregated method provides better scalability and achieves a higher satisfaction level among customers.
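The aggregation idea can be sketched in miniature: an aggregator consolidates individual demand profiles into one, lets the DSM optimisation act on that single profile, and then distributes the aggregate decision back to customers proportionally. The demand figures and the load-flattening "optimal" schedule below are invented for illustration and are not the thesis's actual optimisation.

```python
import numpy as np

# Hypothetical demand profiles (kW) for three customers over four slots.
demands = np.array([
    [2.0, 4.0, 6.0, 2.0],
    [1.0, 3.0, 5.0, 1.0],
    [3.0, 5.0, 7.0, 3.0],
])

# Aggregator view: one consolidated profile instead of N individual ones,
# so the DSM optimisation scales with the number of aggregators.
aggregate = demands.sum(axis=0)                    # [6, 12, 18, 6]

# Toy "optimal" aggregate schedule: flatten the load to its mean.
optimal_aggregate = np.full_like(aggregate, aggregate.mean())

# Distribute the aggregate decision back to customers in proportion
# to their original share of each slot's demand.
shares = demands / aggregate
customer_schedules = shares * optimal_aggregate
```

Whatever the actual DSM objective, the point is that the optimiser only ever sees `aggregate`, while the proportional split recovers per-customer schedules that sum back to the aggregate decision.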
Moreover, as the smart grid is an interconnected network comprising both electricity and communications networks, smart grid applications can be affected by imperfect communications networks. Therefore, these applications should be evaluated for their robustness to communications errors, and their design should be improved considering those effects. Hence, in this thesis, we present the design of a co-simulation platform capable of simulating smart grid applications with both electricity and communications networks. The feasibility of the proposed platform is analysed by using it to assess the real-time pricing (RTP) application, one of the important DSM applications. Furthermore, using this simulation platform, we explore ways of utilising features of public LTE communications networks for different RTP designs. Overall, the studies reported in this thesis provide insight into deployment strategies that can be used to realise scalable smart meter-based applications in a cost-effective manner with guaranteed QoS and user satisfaction.
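To give a flavour of why imperfect communications matter for RTP, here is a deliberately minimal, hypothetical example: a price broadcast arrives one time slot late, so customers respond to a stale price and the intended demand reduction lags the actual price peak. Neither the prices nor the demand rule come from the thesis; a co-simulation platform is what quantifies such effects on a realistic network.

```python
# Hypothetical RTP signal (cents/kWh) over five time slots.
prices = [10, 10, 30, 30, 10]

# Ideal response: shed load (3 kW instead of 5 kW) whenever price >= 20.
ideal_demand = [5.0 if p < 20 else 3.0 for p in prices]

# With a one-slot communications delay, customers act on the previous
# slot's price, so the response lags the price peak by one slot:
# load is still high when the peak starts and is shed after it ends.
received = [prices[0]] + prices[:-1]
delayed_demand = [5.0 if p < 20 else 3.0 for p in received]
```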
  • Item
    Thumbnail Image
    Adaptive control of voltage sourced inverters in microgrid
    Wu, Zhiding ( 2017)
    The microgrid is a new scheme for the future distribution power grid: a small-scale network integrating distributed generation (DG). Generally, the interface between a DG unit and the utility grid is a voltage sourced inverter (VSI) and a power filter, which aim to (i) regulate the electricity injection in grid-connected mode, and the voltage and frequency in islanded mode, and (ii) eliminate the high-frequency harmonics caused by the pulse width modulation (PWM) controlled VSI. The coefficients of the VSI controller and the parameters of the power filter should be designed properly to achieve system stability and to prevent harmonic injection and excessive power loss. In the equivalent structure of a VSI-based DG unit, the grid impedance is another component, lying between the filter and the ideal grid, which aggregates the impedance of the whole network. In practice, however, it is usually uncertain or unknown, which strongly affects the controller design and the output performance of the VSI in both grid-connected and islanded modes. Imprudent selection of filter parameters may cause poor filtering outcomes and significant power losses. The rationale for optimal filter design is that conventional design methods only provide a range of values for each parameter and are therefore unable to guarantee performance. This thesis proposes an optimal design method for a power filter with passive damping to address these problems. The novelty of the proposed method stems from using a multi-objective optimisation approach to find optimal filter parameter values with a genetic algorithm. The objective of the optimisation is to attain high harmonic attenuation and a small switching-frequency ripple while achieving low power consumption.
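A toy version of such a genetic-algorithm search over filter parameters might look like the following. The two-term cost (a ripple-attenuation proxy weighted against an inductor-loss proxy), the parameter bounds and the GA settings are all hypothetical surrogates, not the thesis's actual LCL design problem or objectives.

```python
import random
random.seed(0)

W_S = 2 * 3.14159 * 10e3   # switching angular frequency (hypothetical)

def cost(lf, cf):
    """Toy weighted-sum objective: ripple proxy vs. loss proxy."""
    ripple = 1.0 / (lf * cf * W_S ** 2)   # smaller L*C -> worse attenuation
    loss = 50.0 * lf                      # larger inductor -> more loss
    return 0.7 * ripple + 0.3 * loss

BOUNDS = {"lf": (1e-4, 5e-3), "cf": (1e-6, 5e-5)}  # H, F

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def crossover(a, b):
    alpha = random.random()               # blend two parent designs
    return (alpha * a[0] + (1 - alpha) * b[0],
            alpha * a[1] + (1 - alpha) * b[1])

def mutate(ind):
    lf, cf = ind
    lf = clamp(lf * random.uniform(0.8, 1.2), *BOUNDS["lf"])
    cf = clamp(cf * random.uniform(0.8, 1.2), *BOUNDS["cf"])
    return (lf, cf)

# Random initial population of (Lf, Cf) candidates.
pop = [(random.uniform(*BOUNDS["lf"]), random.uniform(*BOUNDS["cf"]))
       for _ in range(20)]
best0 = min(pop, key=lambda i: cost(*i))

for _ in range(40):                       # generations
    pop.sort(key=lambda i: cost(*i))
    elite = pop[:5]                       # elitism keeps the best designs
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(15)]

best = min(pop, key=lambda i: cost(*i))
```

A real multi-objective formulation would keep a Pareto front rather than collapsing the objectives into one weighted sum, but the selection/crossover/mutation loop has the same shape.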
The proposed method is verified through simulation studies carried out on a three-phase grid-connected VSI-based DG system, using the parameter values obtained from the proposed design method. The simulation results demonstrate that the new method can achieve a higher level of ripple reduction, greater harmonic attenuation and higher system efficiency than existing design methods. Controller design for grid-connected and islanded VSIs is based on knowledge of the equivalent grid impedance (or network impedance). Grid impedance is determined by experience or calculated from the technical manuals of the overhead lines or underground cables in the system. However, the actual line impedance may vary with temperature, humidity and ageing. Any inconsistency between the grid impedance assumed in the control loop and its real value will lead to poor output performance and even instability of the VSI. Using the grid topology and multiple measurement scenarios of bus voltage and power, a network impedance estimation (NIE) method is proposed to calculate every line impedance of the grid based on reverse power flow. The Newton-Raphson iterative method is employed to solve the NIE problem, and the corresponding Jacobian matrix is formulated; the impedance of every line in the network can then be obtained iteratively. The NIE method is verified on three benchmark systems. Estimation results show that the proposed method achieves high accuracy and fast convergence. The NIE process can operate online to provide the estimated network impedance continuously. Based on knowledge of the network topology, the equivalent grid impedance of every DG unit can subsequently be computed. When the LCL filter is properly designed by the proposed optimisation method, an accurate model of the VSI can be obtained. For grid-connected mode, an adaptive state feedback controller integrated with the NIE process is presented.
The state feedback gain is designed using the impedance values estimated by NIE. In islanded mode, the output feedback controller with droop control is preferred owing to its simplicity and good power sharing. However, sharing performance is very sensitive to the impedance value. Hence, an adaptive controller with virtual impedance is proposed based on the NIE method to achieve the desired performance, so that the output impedance of every VSI can be adequately designed. Grid-connected and islanded simulations are carried out to verify the proposed control method in a 14-bus benchmark microgrid. The results demonstrate the effectiveness and superiority of the NIE-based adaptive methods over the conventional method in both grid-connected and islanded operations. Moreover, they show that the circulating current among multiple islanded DG units is eliminated and the stability of the system frequency is preserved during impedance variation.
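In the same spirit as the NIE method, the sketch below uses Newton-Raphson with a finite-difference Jacobian to recover a single line's R and X from sending-end voltage magnitudes measured under two load scenarios. The two-bus model, the measurement values and the numerical Jacobian are illustrative simplifications of the thesis's full multi-line, reverse-power-flow formulation with an analytic Jacobian.

```python
import numpy as np

V_R = 1.0                      # receiving-end voltage (pu, reference)
Z_TRUE = 0.05 + 0.10j          # "unknown" line impedance to recover

# Two load scenarios (P, Q at the receiving end, in pu) and the
# sending-end voltage magnitude each would produce -- hypothetical data.
scenarios = [(0.8, 0.3), (0.4, 0.6)]
vs_meas = [abs(V_R + Z_TRUE * (p - 1j * q) / V_R) for p, q in scenarios]

def residuals(x):
    """Mismatch between modelled and measured |V_s| for each scenario."""
    z = x[0] + 1j * x[1]
    return np.array([abs(V_R + z * (p - 1j * q) / V_R) - vm
                     for (p, q), vm in zip(scenarios, vs_meas)])

# Newton-Raphson iteration with a finite-difference Jacobian.
x = np.array([0.01, 0.01])     # initial guess for (R, X)
h = 1e-7
for _ in range(20):
    f = residuals(x)
    if np.max(np.abs(f)) < 1e-10:
        break
    J = np.empty((2, 2))
    for j in range(2):
        dx = np.zeros(2)
        dx[j] = h
        J[:, j] = (residuals(x + dx) - f) / h
    x = x - np.linalg.solve(J, f)   # Newton step

R_est, X_est = x               # should converge to (0.05, 0.10)
```

The thesis's NIE solves the analogous problem jointly for every line in the network, which is why the grid topology and multiple bus-level measurement scenarios are needed to make the system solvable.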