Electrical and Electronic Engineering - Theses

Permanent URI for this collection

Search Results

Now showing 1 - 10 of 19
  • Item
    Sensor processing for localization with applications to safety
    Ul Haq, Ehsan ( 2017)
    Heavy industries such as construction, mining and transport typically have dangerous work environments, where injuries and fatalities remain common despite extensive rules and regulations. Such mishaps are largely due to human negligence and improper monitoring of the workplace. Injuries are also more likely when people and machinery operate together. To ensure safety, a framework is needed that is capable of tracking moving objects around a user with centimeter accuracy. The sensor should be small enough to be easily incorporated into workers' safety equipment, and robust against the random movements of the user and of objects in the surrounding area. This thesis addresses the issues in developing a framework for a low-cost smart helmet for workers in dangerous work environments. The techniques developed for safety helmets are also directly applicable to the lightweight navigation systems needed for tiny drones. At its core, we have developed a framework and algorithms using simple and cheap continuous wave (CW) Doppler radars to obtain the precise location of static and dynamic obstacles around a user. CW Doppler radars only provide relative radial velocity, so the first issue is to determine the conditions under which the position of a target is observable. We have also designed, compared and analyzed different nonlinear trackers to determine which works best in particular scenarios. We explore how instantaneous frequency measurements can be obtained from the rate of phase change in the returned waves of CW radars. To this end, we performed various simulations with models of different orders, and the results showed that we can successfully localize walls with sub-centimeter accuracy. Moreover, we show that random human head movements and walking do not pose much threat to estimation accuracy and can be handled through added noise in the system model.
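The CW Doppler relation this framework builds on can be sketched in a few lines: a continuous-wave radar measures only the Doppler shift f_d, which maps to a target's radial velocity via f_d = 2 v_r f_c / c. The 24 GHz carrier frequency below is an assumed value for illustration, not a parameter from the thesis.

```python
# Minimal sketch of the CW Doppler relation underlying the localization
# framework: the measured Doppler shift f_d maps to radial velocity via
# f_d = 2 * v_r * f_c / c.  The carrier frequency is a hypothetical value.

C = 3.0e8           # speed of light (m/s)
F_CARRIER = 24.0e9  # radar carrier frequency (Hz), assumed for illustration

def radial_velocity(doppler_shift_hz: float) -> float:
    """Radial velocity (m/s) of a target from the measured Doppler shift."""
    return doppler_shift_hz * C / (2.0 * F_CARRIER)

def doppler_shift(radial_velocity_ms: float) -> float:
    """Inverse mapping: Doppler shift (Hz) produced by a radial velocity."""
    return 2.0 * radial_velocity_ms * F_CARRIER / C

if __name__ == "__main__":
    # A pedestrian closing at 1.5 m/s produces 2 * 1.5 * 24e9 / 3e8 = 240 Hz.
    print(doppler_shift(1.5))       # 240.0
    print(radial_velocity(240.0))   # 1.5
```

Because only this scalar radial velocity is observed, recovering a 2D or 3D position requires fusing measurements over time and motion, which is exactly the observability question the thesis addresses.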
  • Item
    Voltage stability issues in power grids: analysis and solutions
    Jalali, Ahvand ( 2017)
    Voltage Stability (VS) is gaining increasing significance in today's power systems, which are undergoing sizeable growth in power consumption and higher integration of renewables. Economic and environmental barriers impede new investment in network infrastructure to keep up with load growth and renewables' intermittency. As a result, many power systems around the world are being operated close to their VS limits. This has made voltage instability an ever-present operational problem for many power systems, and reveals the need for smarter and more efficient approaches to analyse and ensure VS. The significance of VS has been well demonstrated by numerous real-life incidents of power system instability associated with VS. From an analytical perspective, with the increasing variability of today's power systems and higher levels of intermittent renewables integrated into the grid, more frequent evaluation of a power system's VS condition is imperative. Hence, more efficient VS evaluation tools, in terms of speed, accuracy, and automated applicability, are needed. Also, from a practical point of view, the prohibitive cost of upgrading power system infrastructure necessitates smarter, more efficient alternative approaches to ensuring the VS of power systems. This includes operating the existing power system components through intelligent, active network management (ANM) schemes. Continuation power flow (CPF) is the conventional and most widely used approach to steady-state VS analysis. The CPF algorithm and all its improved versions, however, suffer from high complexity and relatively long execution times. Considering the need for more frequent VS analysis in today's renewable-rich power systems, this thesis proposes a more efficient approach to plotting the P-V curves and identifying the VS limits, i.e. the saddle-node bifurcation (SNB) and limit-induced bifurcation (LIB) points, of power systems.
The method is based on the standard Newton-Raphson power flow (NR-PF) algorithm and thus avoids the complexities of the existing CPF methods. It offers much-reduced execution times, high accuracy, automated applicability, and ease of implementation and comprehension. Several novel, simple techniques are used in the proposed approach to identify both SNB and LIB points. The method is tested on several power systems, including a large-scale one, and its performance is compared with established CPF methods. Modal Analysis (MA) is another commonly used approach for identifying the weak areas of a power system from a VS viewpoint. This thesis proposes two improved MA methods applicable to radial distribution systems. The proposed MA methods, unlike the original MA, do not ignore active power variation and allow any combination of active and reactive power variations to be taken into account. As a result, the proposed methods improve on the accuracy of the original MA in identifying the best buses for active or reactive compensation, with the aim of improving the distribution system's voltage stability margin (VSM). Meanwhile, ongoing technological advances in energy storage systems (ESSs) have made the grid integration of these devices technically and economically more viable. Accordingly, this thesis carries out optimal placement and operation of ESSs in power systems with possible embedded wind farms, from a VSM-improvement viewpoint. The probabilistic nature of wind is taken into account through the probability density function (PDF) of the wind farm's output power. A combination of MA and CPF is used to identify the best placement of ESSs in the network. A new method of power sharing between the ESSs, based on their effect on the system's VSM, is also proposed. The required power injection of the ESSs, at an optimal power factor (PF), to ensure a pre-specified minimum required VSM is also calculated at all load-wind levels.
Furthermore, this thesis formulates the problem of ESS placement as a probabilistic optimization framework, through which optimal placement, sizing, and operation of ESS devices in wind-embedded distribution systems are carried out. The main objective of the allocation problem is to minimize the required power and energy ratings of the ESSs to be installed, such that a desired level of VSM is always ensured. The reactive power loss and the reactive power import from the upstream network are also minimised through a multi-objective optimization framework. Wind uncertainty is accounted for through optimally generated wind power scenarios and a risk-based stochastic optimization approach. In addition, ANM tools, such as the tap positions of on-load tap changers (OLTCs), modelled using a new method, and the reactive power capabilities of both ESS devices and wind farms, are used as additional means to reduce the required ESS size. Finally, dynamic simulation is carried out to demonstrate the effectiveness of ESS devices in dynamically improving the VS of power systems. The effects of induction motor (IM) loads, fixed speed induction generator (FSIG)-based wind turbines (WTs), and the over-excitation limiters (OELs) of synchronous generators (SGs) on the power system's short-term voltage stability (ST-VS) are evaluated. Then, the use of ESSs to provide dynamic voltage support (DVS) to the power system during and after large disturbances, as a countermeasure against short-term voltage instability, is investigated. To do so, systematic control of the ESS, to inject any desired active and reactive powers into the system, is carried out. The effects on ST-VS of implementing the fault ride through (FRT) and time-overload (TOL) capabilities of the ESS, as well as the ESS's PF, are also analysed.
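The P-V curve and SNB (nose) point at the centre of this work can be illustrated with a textbook two-bus example; this is not the thesis's NR-PF-based method, only the standard picture it accelerates. For a slack bus E feeding a unity-power-factor load through a lossless line of reactance X, eliminating the angle gives P(V) = (V/X)·sqrt(E² − V²), with the nose at V = E/√2 and P_max = E²/(2X).

```python
import math

# Illustrative two-bus P-V curve (a textbook case, not the thesis's method):
# slack bus E feeds a unity-power-factor load through a lossless line of
# reactance X.  The saddle-node bifurcation (nose point) of the resulting
# curve sits at V = E/sqrt(2), P_max = E^2 / (2X).  Values are hypothetical.

E = 1.0   # slack-bus voltage (p.u.)
X = 0.1   # line reactance (p.u.), assumed for illustration

def load_power(v: float) -> float:
    """Deliverable active power (p.u.) at receiving-end voltage v."""
    return (v / X) * math.sqrt(max(E * E - v * v, 0.0))

def nose_point(n: int = 100_000):
    """Scan the P-V curve and return (V, P) at maximum loadability."""
    best_v, best_p = 0.0, 0.0
    for i in range(1, n):
        v = E * i / n
        p = load_power(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

if __name__ == "__main__":
    v_star, p_max = nose_point()
    print(v_star, p_max)   # ~0.7071, ~5.0  (= E^2 / (2X))
```

Below V = E/√2 the lower branch of the curve is unstable; tracing this curve efficiently for realistic networks is what CPF, and the NR-PF-based alternative proposed here, are for.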
  • Item
    Energy and carbon footprint of ubiquitous broadband
    Suessspeck, Sascha ( 2017)
    This thesis concerns ubiquitous broadband in Australia. We use a comparative-static computable general equilibrium model to analyse the economic effects, and to derive the environmental effects, of the National Broadband Network (NBN) in the short term and long term. While investment increases significantly due to NBN deployment in the short term, overall economic activity increases only marginally. We find that national greenhouse gas (GHG) emissions are effectively unchanged by the construction of the NBN. We run long-run model simulations to analyse the impact of new services and new ways of working that are enabled by the NBN. The simulation results depend on our estimates of the incremental impact of the NBN on service delivery. For this purpose, we map the coverage of broadband in Australian regions using an open-source geographical information system (GIS). We then define two sets of service requirements and determine service availability across regions with and without the NBN. The results show that the NBN produces substantial benefit when services require higher bandwidths than today's offerings to the majority of end users. In this scenario, the economic effects of productivity improvements facilitated by electronic commerce, telework or telehealth practice made widely available through the NBN will be sufficient to achieve a net improvement to the Australian economy over and above the economic cost of deploying the NBN itself. If, on the other hand, the NBN has a significant effect only on the availability of entertainment services, then the net effect will not be sufficient to outweigh the cost of deployment. We find that national GHG emissions increase with service availability and are higher with the NBN. We construct an NBN power consumption model to estimate the purchased electricity and GHG emissions of the NBN network in the long term, after NBN deployment. We find that the NBN network increases energy demand and GHG emissions marginally.
The main contributions of this thesis relate to the model simulations. Detailed analysis of the economic and environmental effects of the NBN on the Australian economy provides policymakers and researchers with new insights based on a state-of-the-art methodology. Beyond the regional scope of this thesis, the results provide fresh evidence of the rebound effect and the GHG emissions abatement potential of ubiquitous technologies such as broadband. While this thesis points to the possible trade-offs faced by various individuals or groups when evaluating economic policy, an efficient way to achieve a more sustainable outcome is to address externalities related to GHG emissions directly, by implementing appropriate environmental policies.
  • Item
    Medical image processing with application to psoriasis
    George, Yasmeen ( 2017)
    Psoriasis is a chronic, auto-immune, long-lasting skin condition with no clear cause or cure. Psoriasis affects people of all ages, and in all countries. According to the International Federation of Psoriasis Associations (IFPA), 125 million people worldwide have psoriasis. The severity of psoriasis is determined by clinical assessment of the affected areas and of how much the condition affects a person's quality of life. The most common form is plaque psoriasis (at least 80% of cases), which appears as red patches covered with a silvery white build-up of dead skin cells. The current practice for assessing the severity of psoriasis is the "Psoriasis Area Severity Index" (PASI), the most widely accepted severity index. PASI has four parameters: percentage of body surface area covered, erythema, plaque thickness, and scaliness. Each measure is scored for four body regions: head, trunk, upper limbs, and lower limbs. Although PASI scores guide dermatologists in prescribing treatment, significant inter- and intra-observer variability in PASI scores exists. This variability, along with the subjectivity and the time required to determine the final score manually, makes the current practice inefficient and unattractive for use in daily clinics. Therefore, developing a computer-aided diagnosis system for psoriasis severity assessment is highly beneficial and long overdue. Although research in medical image analysis has advanced rapidly during the last decade, notable advances in psoriasis image analysis and PASI scoring have been limited and have only recently started to attract attention. In this thesis, we present the framework of a computer-aided system for PASI scoring using 2D digital skin images, by exploring advanced image processing and machine learning techniques.
On one hand, this will greatly help improve access to early diagnosis and appropriate treatment for psoriasis, by providing consistent, precise and reliable severity scoring and by reducing the inter- and intra-observer variations in clinical practice. On the other hand, this can improve the quality of life of psoriasis patients. The framework consists of (i) a novel preprocessing algorithm for removing skin hair and side clinical markers in 2D psoriasis skin images, (ii) a psoriasis skin segmentation method, (iii) a fully automated nipple detection approach for psoriasis images, (iv) a semi-supervised approach for erythema severity scoring, (v) a robust, reliable and fully automated superpixel-based method for psoriasis lesion segmentation, and (vi) a new automated scale scoring method using a bag-of-visual-words model with different colour and texture descriptors.
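The PASI formula the framework automates is simple to state: each body region contributes weight × (erythema + thickness + scaliness) × area score, with the three severity parameters scored 0-4, the area score 0-6, and fixed region weights (head 0.1, upper limbs 0.2, trunk 0.3, lower limbs 0.4). This is the standard clinical definition, sketched here independently of the thesis's implementation.

```python
# The standard PASI formula that the proposed framework automates: each body
# region contributes weight * (erythema + thickness + scaliness) * area_score,
# with severities scored 0-4, the area score 0-6, and fixed region weights.

REGION_WEIGHTS = {"head": 0.1, "upper_limbs": 0.2, "trunk": 0.3, "lower_limbs": 0.4}

def pasi(scores: dict) -> float:
    """scores maps region -> (erythema, thickness, scaliness, area_score)."""
    total = 0.0
    for region, weight in REGION_WEIGHTS.items():
        erythema, thickness, scaliness, area = scores[region]
        for s in (erythema, thickness, scaliness):
            assert 0 <= s <= 4, "severity parameters are scored 0-4"
        assert 0 <= area <= 6, "area score is 0-6"
        total += weight * (erythema + thickness + scaliness) * area
    return total

if __name__ == "__main__":
    worst = {r: (4, 4, 4, 6) for r in REGION_WEIGHTS}
    print(pasi(worst))   # 72.0, the maximum possible PASI
```

Components (i)-(vi) above each estimate one of these inputs automatically (lesion area from segmentation, erythema and scale scores from classifiers), so the final PASI reduces to this weighted sum.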
  • Item
    Electron transport in nanoscale electronics
    Jiang, Liming ( 2017)
    The current booming development of information and communication technologies would not exist without the advances in integrated circuits. The computational capability of integrated circuits has increased tremendously since their invention, and this improvement is due to the miniaturisation of electronic devices, which allows far more transistors to be packed into an individual chip and enables lower-power, faster operation. A typical commercialised processor today integrates billions of transistors into a single chip and provides enormous computational capability. However, this trend towards system and device miniaturisation, which has lasted for decades, will not persist as conventional electronic devices reach the nanometre scale and fundamental limits start to emerge. One major problem that prevents further miniaturisation of conventional electronics, heat dissipation, is difficult to overcome due to the limitations of the materials themselves. Thus, new materials and device concepts are needed to mitigate these limitations and advance the field of electronics. This thesis presents a theoretical approach to investigating the electronic properties of various novel materials for nanoscale electronics applications. Novel materials, such as two-dimensional materials and functional molecules, hold the potential for mitigating the current constraints and enabling novel nanoscale electronics applications. This thesis is dedicated to modelling the electronic properties of materials and nanoscale devices using state-of-the-art computational approaches, including electronic structure simulation using the semiempirical tight-binding (TB) approach and ab initio density functional theory (DFT), and electron transport simulation using the non-equilibrium Green's function (NEGF) method.
The novel material stanene has been predicted to be a large-gap topological insulator and is a prospective candidate for nanoscale electronics. Previous studies mainly applied DFT-based methods, which can be very computationally expensive. In this thesis, a novel TB model with much-reduced complexity is developed for monolayer stanene. The derived model has been validated against ab initio approaches, showing close agreement in the low-energy region. Based on the model, analytical solutions at the high-symmetry points have been derived, and energy parameters for the tight-binding method and k∙p perturbation theory have been numerically fitted. The outcome of this study can be applied to high-efficiency modelling of nanoscale stanene-based devices. Electron-spin-based devices have enormous potential for producing lower-power, faster electronic circuits and are under intensive research. Maintaining spin coherence is critical in realising electron-spin-based logic devices. In this thesis, electron spin-dependent transport is investigated, and a device that realises high spin filtering efficiency is proposed by creating a break-junction on a zigzag graphene nanoribbon (ZGNR). This study demonstrates a device concept with simple geometry yet promising spin filtering performance that allows easy integration of spin injection and spin transport. This thesis also investigates the potential of using the biomolecule DNA for nanoscale electronics applications. Its robustness and its capacity to store large amounts of information make DNA a promising candidate for next-generation storage media. However, many problems, including the susceptibility of molecular storage to synthesis errors and the complexity of data readout, make it difficult to apply in practice. In this thesis, a feasibility study is conducted to investigate using DNA 5-methylcytosine to store information.
This study demonstrates a molecular device concept that can inform the design of future molecule-based memory or storage devices.
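The tight-binding idea used throughout the thesis can be illustrated with the simplest possible case; this is a generic 1D nearest-neighbour chain, not the stanene model developed here. With on-site energy ε and hopping integral t, the dispersion is E(k) = ε − 2t·cos(ka), giving a band of width 4t.

```python
import math

# Minimal illustration of the tight-binding approach (a generic 1D chain,
# not the stanene TB model developed in the thesis): with on-site energy
# eps and nearest-neighbour hopping t, the band is E(k) = eps - 2 t cos(k a).

EPS = 0.0   # on-site energy (eV), illustrative
T = 1.0     # hopping integral (eV), illustrative
A = 1.0     # lattice constant

def dispersion(k: float) -> float:
    """Band energy at wavevector k for the 1D nearest-neighbour chain."""
    return EPS - 2.0 * T * math.cos(k * A)

if __name__ == "__main__":
    ks = [math.pi * i / 100 / A for i in range(101)]   # 0 .. pi/a
    energies = [dispersion(k) for k in ks]
    # The band runs from eps - 2t at the zone centre to eps + 2t at the
    # zone edge, i.e. the bandwidth is 4t.
    print(min(energies), max(energies))   # -2.0 2.0
```

Real 2D materials such as stanene need a multi-orbital Hamiltonian diagonalised over the Brillouin zone, but the fitting workflow is the same: choose hoppings so that this dispersion reproduces the ab initio bands in the low-energy region.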
  • Item
    An investigation of spatial receptive fields of complex cells in the primary visual cortex
    Almasi, Ali ( 2017)
    One of the main concerns of visual neuroscience is to understand how information is processed by the neural circuits of the visual system. Since the historic experiments of Hubel and Wiesel, many more aspects of visual information processing in the brain have been discovered using experimental approaches. However, many of the computations underlying such processing remain unclear or even unknown. In the retina and the lateral geniculate nucleus, the basic computations have been identified by measuring the responses of neurons to simple visual stimuli such as gratings and oriented bars. However, in higher areas of the visual pathway, e.g. the cortical visual areas, many neurons (including complex cells) cannot be characterised entirely by their responses to simple stimuli. The complex cells in the visual cortex do not exhibit linear receptive field properties. The failure of linear receptive field models to describe the behaviour of such neurons has therefore led neuroscientists to seek more plausible quantitative models. Efficient coding is a computational hypothesis about sensory systems. Recently developed models based on the efficient coding hypothesis have been able to capture certain properties of complex cells in the primary visual cortex. The Independent feature Subspace Analysis (ISA) model and the covariance model are two examples of such models. The ISA model employs the notion of the energy model in describing the responses of complex cells, whereas the covariance model is based on a recent hypothesis that complex cells tend to encode the second-order statistical dependencies of the visual input. In this thesis, the parametric technique of the generalised quadratic model (GQM), in conjunction with white Gaussian noise stimulation, is used to identify the spatial receptive fields of complex cells in cat primary visual cortex.
The validity of the identified receptive field filters is verified by measuring their performance in predicting the responses to test stimuli using correlation coefficients. The findings suggest that a majority of the complex cells in cat primary visual cortex are best described using one linear and one or more quadratic receptive field filters; we classify these as mixed complex cells. We observed that some complex cells exhibit linear as well as quadratic dependencies on an identified filter of their receptive fields. This often introduces a significant shift in the feature-contrast responses of these cells, which results in violations of the polarity invariance property of complex cells. Lastly, a quantitative comparison is performed between experiment and theory using statistical analysis of the population of the cells' receptive fields identified by experiment and those predicted by the efficient coding models. For this, motivated by the experimental findings for complex cells, a modification of the ISA model that incorporates a linear term is introduced. The simulated model receptive fields of the modified ISA and the covariance model are then used to draw comparisons with the experimental data. While the modified ISA and covariance models are comparable in predicting the characteristics of complex cell receptive fields in the primary visual cortex, the latter proves more capable of explaining the observed intra-receptive-field inhomogeneity of complex cells, including differences in orientation preference and in spatial frequency between the receptive field filters of the same cell. However, the major discrepancies between theory and experiment lie in the orientation bandwidth and spatial frequency bandwidth of the receptive field filters, where the population of predicted model receptive field filters demonstrates much narrower bandwidths.
These findings thereby suggest the sub-optimality of the experimental receptive field filters in terms of the efficiency of the code.
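The energy model and the polarity-invariance property discussed above can be sketched concretely: a classical complex cell sums the squared outputs of a quadrature pair of filters, so its response is unchanged when the stimulus polarity flips, while adding a linear term (as the mixed complex cells identified here require) breaks that invariance. The 1D filters and stimulus below are purely illustrative, not fitted GQM filters.

```python
import math

# Sketch of the complex-cell energy model: the response is the sum of squared
# outputs of a quadrature filter pair, hence invariant to stimulus polarity
# (r(x) == r(-x)).  Adding a linear filter term, as required for the mixed
# complex cells described above, breaks that invariance.  Filters and the
# stimulus are illustrative 1D profiles, not fitted receptive fields.

N = 32
cos_filter = [math.cos(2 * math.pi * i / N) for i in range(N)]
sin_filter = [math.sin(2 * math.pi * i / N) for i in range(N)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def energy_response(stimulus):
    """Classical complex cell: quadratic (energy) terms only."""
    return dot(cos_filter, stimulus) ** 2 + dot(sin_filter, stimulus) ** 2

def mixed_response(stimulus):
    """Mixed complex cell: energy terms plus a linear contribution."""
    return energy_response(stimulus) + dot(cos_filter, stimulus)

if __name__ == "__main__":
    stim = [math.cos(2 * math.pi * i / N + 0.3) for i in range(N)]
    neg = [-s for s in stim]
    print(energy_response(stim) == energy_response(neg))  # True: polarity invariant
    print(mixed_response(stim) == mixed_response(neg))    # False: invariance broken
```

The GQM used in the thesis generalises exactly this form: a linear filter plus a sum of signed quadratic filters, passed through an output nonlinearity.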
  • Item
    Colour-based computer image processing approach to melanoma diagnosis
    Sabbaghi Mahmouei, Sahar ( 2017)
    Melanoma is one of the most prevalent skin cancers in the world. The incidence and mortality rates of melanoma in the Australian population have been rising sharply over the last decades. For instance, it is estimated that two in three Australians develop some form of skin cancer before they reach the age of 70. Most melanomas can be cured if diagnosed and treated in the early stages. Over the past decades, advances in dermoscopy technology have made it an effective technique for the early diagnosis of malignant melanoma. Dermoscopy allows clinicians to visualise different colours and examine microstructures in the skin that are not visible to the naked eye. This clear view of the skin reduces screening errors and significantly improves the diagnostic accuracy of pigmented skin lesions. However, it has been demonstrated that the performance and accuracy of manual melanoma diagnosis using dermoscopic images depend on the quality of the image and the clinical experience of the dermatologist. Several medical diagnosis methods have been developed to help dermatologists interpret the structures revealed through dermoscopy, such as pattern analysis, the ABCD rule, the 7-point checklist, the Menzies method, the CASH algorithm, the Chaos and Clues algorithm and the BLINCK algorithm. However, the diagnosis criteria used in assessing the potential of melanoma may be easily overlooked in early melanomas, or misinterpreted as a benign mole, mainly owing to the subjectivity of clinical interpretation. Also, human judgement is often hardly reproducible. Therefore, clinical diagnosis is still challenging, especially with equivocal pigmented lesions, and the accuracy of melanoma diagnosis by expert dermatologists remains at 75–84%. Only biopsy or excision of a pigmented skin lesion can provide a definitive diagnosis. However, a biopsy carries a risk of metastasis, in addition to being invasive and an unpleasant experience for the patient.
Therefore, to minimise diagnostic errors and provide a reliable, independent second opinion to dermatologists, the development of computerised image analysis techniques is of paramount importance. In the last decade, several computer-aided diagnosis (CAD) systems have been proposed to tackle this problem. However, the diversity of the remaining problems means that further contributions are valuable. Moreover, it is widely acknowledged that much higher accuracy is required for a computer-based system to be considered reliable and trustworthy enough by clinicians, and therefore to be adopted routinely in their diagnostic process. With the aim of improving some of the existing approaches and developing new techniques to facilitate accurate, fast and more reliable computer-based diagnosis of melanoma, this thesis describes novel image processing approaches for computer-aided detection of a selected subset of medical criteria that play an important role in the diagnosis of melanoma. This ensures that the features used by the system have a medical meaning, making it possible for the dermatologist to understand and validate the automated diagnosis. One of the contributions of this thesis is a fast and accurate colour detection method. Colours may vary slightly in dermoscopy images because of different levels of contrast. This may lead to difficulty in the perception of colours by dermatologists, resulting in subjectivity of clinical diagnosis. A computer-assisted system for quantitative colour identification is therefore highly desirable for dermatologists. However, these colour variations within the lesion make colour detection a challenging process. To tackle this challenge, a comprehensive colour detection procedure is developed in this thesis. It incorporates a colour enhancement step to overcome the problem of poor contrast.
Since colours perceived by a human observer are produced by a mixture of pixel values, we compute a summarised representation of colours by subdividing the colour space, using QuadTree clustering, into colour clusters, each comprising a set of RGB values. The proposed method employs a colour palette to mimic human interpretation of lesion colours in determining the type and the number of colours in melanocytic lesion images. In addition, a set of parameters such as colour features, texture features, and locational features is extracted to numerically describe the colour properties of each segmented block throughout the lesion. Furthermore, when comparing the colour distribution in malignant melanomas (MMs) and benign melanocytic lesions (BMs), a significant difference in the number of colours between the two populations is detected. The proposed method also showed that the type of colour can greatly affect the diagnosis outcome. The effectiveness of the proposed colour detection system is evaluated by comparing the obtained results with assessments by expert dermatologists. The highest correlation coefficients for detecting the type of colour are observed for red and blue–grey, which, for the image set used in this thesis, signifies the most important colours for diagnosis purposes. The overall performance of the proposed system is evaluated using machine learning techniques, and the best classification results, an AUC of 0.93, are achieved using a kernel SVM classifier. Another contribution of this thesis is to provide meaningful visualisation of streaks, and to extract features that determine the relative importance of streaks in classifying a skin lesion into the two classes of benign and malignant. To find streaks, a trainable B-COSFIRE filter is applied to dermoscopy images to detect a prototype pattern of interest (bar-shaped structures) such as a streak.
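The colour-summarisation idea can be sketched in miniature: subdivide RGB space into coarse clusters and count how many clusters the lesion's pixels occupy. A uniform 4×4×4 binning stands in here for the QuadTree clustering used in the thesis, and the pixel values are hypothetical.

```python
# Illustrative sketch of the colour-summarisation step described above:
# RGB space is subdivided into coarse clusters (a uniform 4x4x4 binning
# stands in for the thesis's QuadTree clustering), and the number of
# distinct occupied clusters approximates the number of clinically
# relevant colours in the lesion.  Pixel values below are hypothetical.

def colour_bin(r: int, g: int, b: int, splits: int = 4):
    """Map an RGB pixel to its cluster index after uniform subdivision."""
    size = 256 // splits
    return (r // size, g // size, b // size)

def count_colours(pixels, splits: int = 4) -> int:
    """Number of distinct colour clusters occupied by the given pixels."""
    return len({colour_bin(r, g, b, splits) for r, g, b in pixels})

if __name__ == "__main__":
    lesion = [(200, 30, 40), (210, 25, 35),    # reds fall in one cluster
              (90, 90, 110), (100, 95, 120),   # blue-greys in another
              (30, 20, 25)]                    # dark brown/black in a third
    print(count_colours(lesion))   # 3
```

A QuadTree refines this by splitting only the occupied regions of colour space recursively, so cluster granularity adapts to the image instead of being fixed in advance.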
Its application consists of convolving the image with Difference-of-Gaussians (DoG) filters, blurring the responses, shifting the blurred responses, and estimating a point-wise weighted geometric mean (GM). To account for the varying thickness and structure of streaks, a bank of B-COSFIRE filters is applied to the image at different orientations and rotations. Then, to identify valid streaks among the candidate streak lines, clinical criteria such as the number of streaks in the image and the orientation pattern are evaluated, and falsely detected lines are removed. The result is a set of line segments indicating the pixels that belong to streaks. A set of features derived from the streaks (such as geometric, colour and texture features) is then fed to three different classifiers for classifying images. We achieved an accuracy of 93.3% in classifying 807 dermoscopy images as benign or malignant. Furthermore, a novel, comprehensive and highly effective application of deep learning (stacked sparse auto-encoders) is examined in this thesis for skin lesion classification. The model learns a hierarchical high-level feature representation of the skin image in an unsupervised manner. The stacked sparse auto-encoder discovers latent features in the input images (pixel intensities). These high-level features are subsequently fed into a classifier for classifying dermoscopy images. In addition, we propose a new deep neural network architecture based on the bag-of-features (BoF) model, which learns a high-level image representation and maps images into BoF space. We have shown that using BoF as the input to the auto-encoder can improve the performance of the neural network in comparison with raw input images. The proposed method is evaluated on a test set of 244 skin images, and the results show that the deep BoF model achieves higher classification scores (SE = 95.4% and SP = 94.9%) compared with raw input images.
Our contributions will improve automated diagnosis of melanoma using dermoscopy images.
  • Item
    Design challenges of smart meter-based applications
    Amarasekara, Athauda Arachchige Bhagya ( 2017)
    The smart grid is an interconnected electricity network. It integrates the electricity grid with powerful control and communications networks that can dynamically respond to customer demands and energy supply scenarios with increased reliability. One of the key components of the smart grid is the smart meter, the main sensor in the electricity distribution grid. To date, the introduction of the smart meter has transformed the manual electricity billing system into an automated meter-reading system. In the future, the capabilities of smart meters will not be limited to meter readings but are expected to facilitate outage detection and demand-side management, allowing the grid to respond dynamically to both customer demands and energy market pricing signals. However, these smart meter-based applications face many implementation challenges, including provisioning adequate resources for smart metering traffic to guarantee the required quality of service (QoS) level, maintaining the scalability of applications that require complex computations, ensuring the security of smart metering data, and providing a platform to identify the effect of communications networks on smart meter applications. This thesis investigates approaches to overcoming these challenges in order to achieve a reliable and cost-efficient electricity network. In particular, this thesis examines efficient solutions to three key challenges: mechanisms to guarantee QoS levels when smart meters use public communications networks to transport their data, approaches to guarantee the scalable deployment of complex smart meter-based applications, and a platform to efficiently simulate smart grid networks along with their control and communication operations in order to assess smart grid applications.
For the smart meter communications network, the public telecommunications network is considered a cost-effective solution, as it does not involve any separate installation or maintenance costs. However, when network resources are shared between public traffic and smart metering traffic, the required QoS levels of essential broadband services, along with those of the smart meters, must be satisfied. To this end, this thesis explores resource allocation mechanisms in both the core and access networks for providing adequate service to all users of the shared network. In particular, this thesis proposes approaches to classify and schedule traffic in the core network, in addition to scheduling algorithms for a long-term evolution (LTE) wireless access network that shares its resources with smart meter traffic. Our simulation results indicate that the proposed scheduling mechanisms can significantly improve the QoS performance of public traffic and of smart grid traffic related to automatic meter reading and outage detection applications. Another key challenge faced by smart meter applications is the scalable deployment of smart grid applications such as demand-side management (DSM). Though it is important to integrate a large number of energy customers into DSM to achieve the desired cost-effective supply-demand balance, limited computational resources such as memory hinder this integration. Therefore, this thesis explores efficient ways to accommodate a large number of customers in DSM by using aggregators that consolidate the underlying customers' energy, power, and cost requirements. We also present simplified methods to distribute the aggregated optimal decisions to the end customers and demonstrate the applicability of the proposed method by using it in a large electricity network. The results reveal that the proposed aggregated method provides better scalability and achieves a higher satisfaction level among customers.
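The aggregation idea can be sketched as follows; this is only the consolidate-then-redistribute pattern, not the thesis's optimisation model, and all demand figures are hypothetical. An aggregator sums its customers' per-slot flexible energy requests into one profile, a single aggregate schedule is decided at that level, and the decision is distributed back to customers pro rata by their original requests.

```python
# Illustrative sketch of the DSM aggregation pattern described above (not the
# thesis's optimisation model): per-customer flexible energy requests are
# consolidated into one aggregate profile per time slot, a single schedule is
# decided at the aggregate level, and that decision is distributed back to
# customers in proportion to their requests.  Demand figures (kWh per slot)
# are hypothetical.

def aggregate(requests):
    """Sum the customers' per-slot energy requests into one profile."""
    return [sum(slot) for slot in zip(*requests)]

def distribute(schedule, requests):
    """Split an aggregate schedule back to customers, pro rata by request."""
    totals = aggregate(requests)
    return [
        [s * r / t if t else 0.0 for s, r, t in zip(schedule, customer, totals)]
        for customer in requests
    ]

if __name__ == "__main__":
    requests = [[2.0, 1.0], [2.0, 3.0]]   # two customers, two time slots
    profile = aggregate(requests)         # [4.0, 4.0]
    schedule = [3.0, 5.0]                 # aggregate decision after load shifting
    shares = distribute(schedule, requests)
    print(profile)   # [4.0, 4.0]
    print(shares)    # [[1.5, 1.25], [1.5, 3.75]]
```

The scalability gain is that the central optimisation sees one profile per aggregator rather than one decision variable per customer, so memory grows with the number of aggregators, not the number of meters.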
Moreover, as the smart grid is an interconnected network comprising both electricity and communications networks, smart grid applications can be affected by imperfect communications networks. Therefore, these applications should be evaluated for their robustness to communications errors, and their design should be improved with those effects in mind. Hence, in this thesis, we present the design of a co-simulation platform capable of simulating smart grid applications together with both the electricity and communications networks. The feasibility of the proposed platform is analysed by using it to assess the real-time pricing (RTP) application, one of the key DSM applications. Furthermore, using this simulation platform, we explore ways of utilising features of public LTE communications networks for different RTP designs. Overall, the studies reported in this thesis provide insight into deployment strategies that can be used to realise scalable smart-meter-based applications in a cost-effective manner with guaranteed QoS and user satisfaction.
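The effect of an imperfect communications network on RTP can be sketched with a toy loop. This is illustrative only (the thesis platform couples full electricity- and communications-network simulators); the linear demand model, loss probability, and all parameters are hypothetical: a utility broadcasts a price each slot, the link sometimes drops the message, and the customer keeps responding to the last price it actually received.

```python
import random

# Toy RTP co-simulation sketch (hypothetical model, not the thesis platform).
# With loss_prob = 0 every price arrives; with loss_prob > 0 the customer
# sometimes acts on a stale price, distorting the intended demand response.

def demand(price, base=10.0, elasticity=5.0):
    """Simple linear price response: higher price, lower consumption (kW)."""
    return max(0.0, base - elasticity * price)

def cosimulate(prices, loss_prob, seed=0):
    rng = random.Random(seed)
    last_seen = prices[0]          # assume the initial price is known
    loads = []
    for p in prices:
        if rng.random() >= loss_prob:   # price message delivered this slot
            last_seen = p
        loads.append(demand(last_seen))
    return loads

prices = [0.2, 0.8, 0.4, 1.0]
ideal = cosimulate(prices, loss_prob=0.0)   # perfect network
lossy = cosimulate(prices, loss_prob=0.5)   # unreliable network
```

Comparing `ideal` with `lossy` load profiles is the kind of robustness question such a co-simulation platform is built to answer at realistic scale.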
  • Item
    Thumbnail Image
    Pedestrian head detection in deep motion networks
    Hsu, Fu-Chun ( 2017)
    Monitoring large crowds using video cameras is critical and challenging. In large public spaces where people gather, such as train stations and stadiums, and at public events such as protest speeches, security is of paramount importance. In the pipeline of automated analysis systems that detect unusual behavior, pedestrian detection plays an essential role: it serves as the first module in all surveillance applications, including human recognition, people tracking and trajectory analysis, and crowd analysis. Current practice is to monitor surveillance videos manually, which is a tedious task. Automated detection of pedestrians from surveillance videos, however, faces challenges such as occlusion, low resolution, poor image quality, and cluttered backgrounds. To overcome these challenges, a head detection framework based on motion information in videos is proposed. The proposed framework consists of (1) novel shallow motion features that learn head information in low-quality videos, (2) extensions of these features that learn deep motion convolutional representations to further address low-resolution pedestrians and cluttered backgrounds, and (3) a highly computationally efficient detection framework based on a novel deep network architecture. In experiments conducted on the difficult PETS2009 dataset, the proposed framework achieved excellent head detection results, especially in an end-to-end Fully Convolutional Network pipeline.
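To make "motion information" concrete, here is the simplest possible motion cue, temporal frame differencing, as a dependency-free sketch. The thesis's shallow and deep motion features are far more elaborate learned representations; this only shows how raw motion evidence is extracted from consecutive frames before any learning happens.

```python
# Illustrative motion cue only (not the thesis's learned motion features):
# binary frame differencing between two grayscale frames.

def frame_difference(prev, curr, threshold=20):
    """Return a binary motion mask: 1 where pixel intensity changed enough."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

# Two tiny 3x3 grayscale "frames": one pixel brightens sharply between frames.
frame_t0 = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
frame_t1 = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
mask = frame_difference(frame_t0, frame_t1)   # motion only at the center pixel
```

A cue like this is robust to static clutter (unchanging background cancels out), which is one reason motion-based features help with the cluttered, low-quality footage described above.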
  • Item
    Thumbnail Image
    Enhancing thermoelectric performance of graphene through nanostructuring
    Hossain, Md Sharafat ( 2017)
    The field of thermoelectrics has the potential to solve many problems prevalent in the electronics industry. However, due to low efficiency, high material cost, and toxicity, the field has yet to meet expectations. On another front, graphene, a two-dimensional allotrope of carbon, has attracted a great deal of attention due to its unique electronic, thermal, and mechanical properties. In this work, we explore the suitability of planar materials like graphene for thermoelectric applications and propose techniques with the potential to address the issues holding thermoelectrics back from large-scale application. Due to its high thermal conductivity and lack of an electrical band gap, graphene has not been thoroughly investigated for thermoelectric applications. In this work, the inherent properties of graphene are modified through nanostructuring to make it suitable for thermoelectric applications. First, the graphene sheet is nanostructured into graphene nanoribbons (GNRs), with the focus on the electronic properties that affect thermoelectric performance. Graphene nanoribbons with different widths and array combinations are analyzed. To explain the experimental results, a model that considers the effects of scattering mechanisms and random charge-carrier fluctuations is proposed. Based on this model, a route to further enhance the thermoelectric properties of graphene is presented. In the next part of this thesis, further nanostructuring of graphene nanoribbons is investigated. Simulations are carried out for GNRs with pores. Pores impede phonon transmission while electron transmission continues to take place at the edges. There have been several reports on utilizing GNRs with pores for thermoelectric applications, but the design methodology of such structures has not been thoroughly investigated. In this work, the effect of pore dimensions on thermoelectric parameters is studied through quantum mechanical simulations.
The results reveal a surprising relation between pore width and thermoelectric parameters, which is then explained using physical insights. Similarly, another approach to nanostructuring GNRs, through the introduction of break junctions, is investigated. The motivation behind this work is to exploit the tunneling mechanism observed in GNRs with break junctions in order to achieve delta-like transmission spectra. Moreover, the nanoscale break impedes phonon transmission. As a result, the overall thermoelectric performance of the device is enhanced significantly. Finally, a novel approach to biosensing based on measuring the Seebeck coefficient of graphene is proposed and validated using quantum mechanical simulations. This proof-of-concept study indicates the wide range of applications enabled by exploiting the thermoelectric properties of planar devices. Overall, the techniques and insights presented throughout the thesis are based on graphene but can be applied to, and investigated in, other two-dimensional materials as well.
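For context, thermoelectric performance in this line of work is conventionally summarized by the dimensionless figure of merit (a standard definition, not stated explicitly in the abstract):

$$ ZT = \frac{S^2 \sigma T}{\kappa}, \qquad \kappa = \kappa_e + \kappa_{ph}, $$

where $S$ is the Seebeck coefficient, $\sigma$ the electrical conductivity, $T$ the absolute temperature, and $\kappa$ the total thermal conductivity (electronic plus phonon contributions). It makes the logic of the nanostructuring strategies above explicit: pores and break junctions suppress the phonon term $\kappa_{ph}$ while the power factor $S^2\sigma$, carried by edge-state electron transport, is preserved, so $ZT$ rises.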