Electrical and Electronic Engineering - Theses


Now showing 1 - 10 of 13
  • Item
    Performance evaluation of inter-cell interference mitigation techniques for OFDMA cellular networks
    Wu, Weiwei ( 2010)
    For emerging cellular wireless systems, the mitigation of inter-cell interference is key to achieving high capacity and a good user experience. This thesis is devoted to the performance analysis of interference mitigation techniques for the downlink of an orthogonal frequency division multiple access (OFDMA) network, with a focus on the Long Term Evolution (LTE) standard. We investigate two types of coordination techniques for interference mitigation, namely reuse partitioning and resource prioritization in the frequency domain. First, we assume best-effort elastic traffic for broadband data networks and introduce a new metric, called the flow capacity, to indicate the maximum traffic intensity that can be supported by a base station sector while satisfying a minimum level of provided service. We develop a queueing-theoretic methodology to analyse the flow capacity for standard reuse and reuse partitioning schemes with different scheduling algorithms. Using this analysis framework, we show how an improved cell-edge throughput can translate into an improvement in the flow capacity. We develop model variants for infinite (Poisson arrivals) and finite user populations; the infinite-population model is more tractable and yields simple, insightful expressions for the flow capacity, while the finite-population model has greater practical relevance. Furthermore, we develop a methodology to account for the effect of interference from neighbouring base stations with an arbitrary level of loading. Next, we propose possible distributed realizations of interference coordination schemes in a reuse-1 environment, based on setting allocation priorities in the frequency domain. The proposed schemes are more suited to narrow-band services and can be implemented in a fractional loading scenario.
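The flow-capacity idea for the infinite-population (Poisson arrivals) case can be illustrated with a minimal processor-sharing sketch: in an M/G/1-PS model of a single sector with capacity C and load ρ, the mean per-flow throughput is C(1 − ρ), so the largest arrival rate that still meets a minimum-throughput target follows in closed form. This is a hedged illustration of the concept only, not the thesis's actual model; the function name and the single-sector, interference-free setting are assumptions.

```python
def flow_capacity(C, mean_flow_size, theta_min):
    """Largest Poisson flow arrival rate (flows/s) an M/G/1-PS sector
    of capacity C (bit/s) can carry while the mean per-flow throughput
    C * (1 - rho) stays at or above theta_min (bit/s)."""
    if theta_min >= C:
        return 0.0  # target exceeds capacity: no traffic can be admitted
    # rho = lam * mean_flow_size / C must satisfy rho <= 1 - theta_min / C
    return (C - theta_min) / mean_flow_size
```

For a 10 Mb/s sector, 1 Mb mean flow size and a 2 Mb/s per-flow target, the sketch admits up to 8 flows per second.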
  • Item
    Optimal power allocation in interference-limited communication networks
    BADRUDDIN, NASREEN ( 2010)
    Communication networks such as wireless networks and Digital Subscriber Line (DSL) systems are plagued by the effects of interference, which degrades the signal to interference and noise ratio (SINR) at the receiver. Increasing the transmit power of one link may boost the SINR at its intended receiver at the expense of causing more interference to other links in the network. Power control is therefore a balancing act: maximising individual link rates without degrading the performance of other links. In this thesis, we tackle the problem of finding the optimal power scheme which maximises the overall network sum-rate for different models of the interference network. The sum-rate function is well known to be non-concave in general, so convex optimisation techniques may not be applicable to finding the optimal power solution. We present solutions to the power optimisation problem for various models of the interference network by treating interference as worst-case Gaussian noise. A recurring result across the networks investigated is the optimality of binary power control, where each transmitter is simply switched either fully on or off. We also derive a sufficient condition under which binary power control is optimal for sum-rate maximisation in a network treating interference as noise. Apart from the characterisation of the optimal power solution for these interference networks, our contribution lies in the various techniques used to arrive at the solutions. These include a method of grouping and piecewise comparison of power vectors, the use of majorisation and Schur-concavity/convexity, and dynamic programming. Our results offer potential insights into solving other, more complex network models.
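The binary power control result can be made concrete with a brute-force sketch: enumerate all on/off power vectors, score each by the Shannon sum-rate with interference treated as Gaussian noise, and keep the best. The gain matrix, noise level and function name below are illustrative assumptions, not the thesis's models.

```python
import itertools
import math

def best_binary_powers(G, p_max, noise):
    """Exhaustive search over on/off power vectors maximising the
    sum-rate sum_i log2(1 + SINR_i), treating interference as noise.
    G[i][j] is the channel gain from transmitter j to receiver i."""
    n = len(G)
    best_rate, best_p = float('-inf'), None
    for p in itertools.product([0.0, p_max], repeat=n):
        rate = 0.0
        for i in range(n):
            interference = sum(G[i][j] * p[j] for j in range(n) if j != i)
            sinr = G[i][i] * p[i] / (noise + interference)
            rate += math.log2(1.0 + sinr)
        if rate > best_rate:
            best_rate, best_p = rate, p
    return best_rate, best_p
```

In a symmetric two-link network with strong cross-gain (e.g. 0.9), the search confirms the abstract's theme: switching one link off beats running both at full power.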
  • Item
    Hidden Markov models with multiple observation processes
    Zhao, James Yuanjie ( 2010)
    We consider a hidden Markov model with multiple observation processes, one of which is chosen at each point in time by a policy (a deterministic function of the information state), and attempt to determine which policy minimises the limiting expected entropy of the information state. Focusing on a special case, we prove analytically that the information state always converges in distribution, and derive a formula for the limiting entropy which can be used for calculations with high precision. Using this formula, we find computationally that the optimal policy is always a threshold policy, allowing it to be found easily. We also find that the greedy policy is almost optimal.
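The information state here is the Bayesian belief over hidden states, updated at each step by the observation the chosen process produces; its entropy is the quantity the policy tries to drive down. A minimal sketch of one belief update and its entropy, with a generic two-state chain as an assumed example (not the thesis's special case):

```python
import math

def update_belief(belief, A, B, obs):
    """One Bayes-filter step for an HMM: predict with transition
    matrix A[s][t], then correct with likelihoods B[t][obs]."""
    n = len(belief)
    pred = [sum(belief[s] * A[s][t] for s in range(n)) for t in range(n)]
    post = [pred[t] * B[t][obs] for t in range(n)]
    z = sum(post)  # normalising constant
    return [x / z for x in post]

def entropy(p):
    """Shannon entropy (bits) of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)
```

Starting from a uniform belief on a sticky two-state chain, a single informative observation pulls the belief to (0.8, 0.2), dropping its entropy below one bit.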
  • Item
    High-speed coherent optical orthogonal frequency-division multiplexing design and implementation
    YANG, QI ( 2010)
    We have witnessed a dramatic increase of interest in orthogonal frequency-division multiplexing (OFDM) from the optical communication community in recent years. The number of publications on optical OFDM has grown dramatically since it was proposed as an attractive modulation format for long-haul transmission, in either coherent detection or direct detection. Over the last few years, net transmission data rates have grown by a factor of 10 per year at the experimental level. This progress may eventually lead to commercial transmission products based on optical OFDM, with the potential benefits of high spectral efficiency and flexible network design. As IP traffic continues to grow at a rapid pace, 100 Gb/s Ethernet is being considered as the new-generation transport standard for IP networks. As the data rate approaches 100 Gb/s and beyond, the electrical bandwidth required for coherent optical OFDM (CO-OFDM) would be at least 15 GHz, which may not be cost-effective to implement even with the best commercial digital-to-analog converters (DACs) and analog-to-digital converters (ADCs) in silicon integrated circuits (ICs). To overcome this electrical bandwidth bottleneck, we propose and demonstrate the concept of orthogonal-band-multiplexed OFDM (OBM-OFDM), which divides the entire OFDM spectrum into multiple orthogonal bands. Due to the inter-band orthogonality, the multiple OFDM bands can be multiplexed and de-multiplexed with zero or small guard bands and without inter-band interference. With this scheme, transmission of a 107 Gb/s CO-OFDM signal over 1000 km (10×100 km) of standard single-mode fiber (SSMF) has been realized using only erbium-doped fiber amplifiers (EDFAs) and without any optical dispersion compensation. A large number of optical OFDM studies based on offline processing with high-speed sampling scopes have been reported, demonstrating many advantages of optical OFDM systems, including aggregate data rates above 100 Gb/s and transmission distances of thousands of kilometers. However, many lack discussion of the potential implementation difficulties. The special requirements of optical communication systems, such as data rates several orders of magnitude higher than their wireless counterparts, call for careful study of feasible real-time implementation. We demonstrate a field-programmable gate array (FPGA) based real-time CO-OFDM receiver at a sampling speed of 2.5 GS/s, and show its performance in receiving a subband of a 53.3 Gb/s multi-band signal. Additionally, by taking advantage of the multi-band structure of the OFDM signal, we successfully characterize a 53.3 Gb/s CO-OFDM signal in real time by measuring one of its subbands (3.55 Gb/s) at a time. The transmission bandwidth of ever-advancing optical transport is one of the important cost drivers. To save transmission bandwidth, using advanced coding to improve system performance without bandwidth extension is a promising technique. We show two coding approaches for CO-OFDM: trellis-coded modulation (TCM) and low-density parity-check (LDPC) codes. Both schemes are demonstrated using CO-OFDM with higher-order modulation formats for long-haul transmission. The superior system performance of these two schemes shows that combining advanced coding with high-level modulation may be a promising technique to support high-spectral-efficiency, high-performance CO-OFDM transmission.
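The subcarrier orthogonality that OFDM (and, across bands, OBM-OFDM) relies on can be illustrated with a minimal DFT-based modulator/demodulator: data symbols placed on orthogonal subcarriers survive the round trip through the time domain without mutual interference. The naive DFT, the 8-subcarrier size and the QPSK symbols below are illustrative choices, not the thesis's system parameters.

```python
import cmath

def idft(X):
    """Inverse DFT: map subcarrier symbols to a time-domain OFDM symbol."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def dft(x):
    """Forward DFT: recover the subcarrier symbols at the receiver."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# QPSK symbols on 8 orthogonal subcarriers, modulated and demodulated
symbols = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]
recovered = dft(idft(symbols))
```

Each recovered symbol matches its input to numerical precision, which is exactly the orthogonality property that lets OBM-OFDM pack bands with zero or small guard intervals.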
  • Item
    Design and analysis of multi-gigahertz track and hold amplifiers
    LIANG, HAILANG ( 2010)
    Ultra-high-speed, moderate-resolution data acquisition systems such as high-end test and measurement equipment, radio, and radar require ultra-high-speed analog-to-digital converters (ADCs). The track and hold amplifier (THA) is a crucial front-end building block in an ADC, since it has a great impact on the ADC's dynamic performance. At ultra-high sampling rates the achievable resolution of a THA decreases as the input frequency increases. In particular, as reported in the literature, the resolution falls well below 8 bits when the sampling frequency reaches 15 GSample/s (GS/s), which cannot meet our design goal of an 8-bit, 15 GS/s THA. Many techniques can be used to increase the resolution; two effective approaches are to reduce noise and distortion in the THA. In order to suppress noise and distortion, we first investigate the noise and distortion of an ultra-high-speed THA operating in the saturation region. Simplified transistor models, justified and valid for CMOS and SiGe HBT devices, are derived for the noise and distortion analyses respectively. Theoretical noise analyses for both the track mode and the hold mode of the THA show that, at ultra-high frequencies, the noise contribution from parasitic capacitors is significant. Simulation results locate the major noise contributor and indicate that the dominant noise in the THA is kT/C thermal noise. Volterra series analysis is applied to analyse the weakly nonlinear behavior of the THA, and shows strong agreement with SpectreRF simulation. Several techniques are then applied to decrease the effects of noise and distortion in the THA: open-loop linearization to compensate for nonlinear distortion, a replica-switch technique to reduce hold-mode feedthrough, and a parasitic-capacitance compensation technique to extend the input bandwidth. As a result, the linearity of the proposed THA is significantly improved, and a moderate-resolution (8-bit), ultra-high-sampling-rate (15 GS/s) THA with a switched source follower (SSF) configuration is designed as the front-end of ultra-high-speed analog-to-digital converters.
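The dominant kT/C noise mentioned above has a simple closed form: the RMS voltage sampled onto a hold capacitor C at temperature T is √(kT/C). A quick sketch (capacitor value and temperature are illustrative, not the thesis's design values):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def ktc_noise_vrms(C, T=300.0):
    """RMS voltage of the kT/C sampling noise left on a hold
    capacitor C (farads) at temperature T (kelvin)."""
    return math.sqrt(K_B * T / C)
```

For a 1 pF hold capacitor at 300 K this gives roughly 64 µV RMS, which shows why the hold capacitor cannot simply be shrunk to chase bandwidth without sacrificing resolution.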
  • Item
    Flow control and performance optimization for multi-service networks
    JIN, JIONG ( 2010)
    As networks grow and evolve, various new applications and services emerge. Taking the Internet as an example, real-time applications (e.g., VoIP and online video clips) have become increasingly popular alongside traditional data transmission services. In such multi-service networks, flow control is a key design issue for ensuring the performance of heterogeneous applications as well as a fair allocation of network resources without congestion. Specifically, from the flow control perspective, the applications in communication networks can be broadly categorized as either elastic or inelastic traffic based on their Quality of Service (QoS) requirements. This in turn implies that future communication networks will have to support a multitude of applications or services with different QoS characteristics, for both elastic and inelastic traffic. The majority of existing flow control strategies are designed mainly for elastic traffic and are thus far from sufficient for multi-service networks, primarily because the QoS utility function of inelastic traffic does not satisfy the strict concavity condition. These strategies can also yield an unfair bandwidth allocation even for elastic traffic. To address these limitations, this thesis is concerned with flow control and performance optimization for multi-service networks. Since heterogeneous applications are involved, it is no longer desirable to allocate bandwidth simply according to the traditional fairness criteria expressed in terms of bandwidth. Instead, networks are expected to guarantee the performance of different applications. The utility function is hence assumed only to be strictly increasing, which covers both elastic and inelastic traffic, and need not be strictly concave as required by existing flow control approaches. In this thesis, we develop a utility fair flow control framework that allocates bandwidth such that the associated utilities achieve certain fairness criteria; that is, fairness is considered in terms of utility rather than bandwidth. Indeed, the utility-based fairness criteria generalize and strengthen the bandwidth-based ones. The framework is first considered in a general wired network setting, like the Internet, with both single-path and multi-path routing scenarios. It involves an efficient and fair flow control scheme, consisting of a source algorithm and a congestion feedback mechanism, to achieve utility proportional fairness and/or utility max-min fairness and provide QoS guarantees. In addition, a sliding-mode-control-based algorithm is devised to obtain utility max-min fairness with low overhead and rapid convergence. Furthermore, the theory of utility fair flow control is adapted from wired networks to wireless networks, wireless sensor networks in particular. As the capacity region of wireless networks is usually unknown and complex, and depends critically on the underlying MAC and physical layers, this adaptation is not a direct application. We tackle the difficulties in both a layered and a cross-layered manner. Through the layered approach, we formulate the flow control and resource allocation problem in heterogeneous sensor networks by properly characterizing channel capacity and energy consumption, and then derive the corresponding algorithms and evaluate their performance. Through the cross-layered approach, we not only present an elegant queue-backpressure-based algorithm that jointly optimizes transport-layer flow control and MAC-layer scheduling, but also design a first-ever flexible and practical transmission protocol that can efficiently handle elastic and inelastic traffic in wireless sensor networks.
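Utility max-min fairness on a single bottleneck link can be sketched numerically: find the common utility level u at which the sources' combined bandwidth demand U_i^{-1}(u) exactly fills the link, then give each source its inverse-utility share. This is a hedged illustration under strong assumptions (one link, utilities normalised to [0, 1], inverse utilities supplied directly as callables), not the thesis's distributed source/feedback algorithm.

```python
def utility_maxmin(inverse_utilities, capacity, lo=0.0, hi=1.0, iters=60):
    """Bisect on the common utility level u in [lo, hi] so that the
    total bandwidth demand sum_i U_i^{-1}(u) matches the link capacity.
    Each inverse utility must be increasing in u."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        demand = sum(inv(mid) for inv in inverse_utilities)
        if demand > capacity:
            hi = mid  # over-subscribed: lower the common utility
        else:
            lo = mid  # capacity left: raise the common utility
    u = (lo + hi) / 2.0
    return u, [inv(u) for inv in inverse_utilities]
```

With two sources whose inverse utilities are 10u and 30u on a capacity-20 link, both reach utility 0.5 but receive unequal bandwidths (5 and 15): fairness in utility, not in bandwidth.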
  • Item
    A general distributed source coding framework via block codes and their complements
    CAO, XIAOMIN ( 2010)
    Distributed Source Coding (DSC) is a practical coding technique for the compression of correlated information sources, suitable for sensor networks and video compression. Even though several DSC schemes have been proposed in the literature, the connections between them are not clear, nor is it clear how to extend these schemes to more than two correlated sources or how to achieve flexible compression rates per source. The aim of this thesis is to reveal the connections between DSC schemes by developing a general DSC framework for any number of sources which enables optimal compression, flexible code rates per source and a simplified coding structure. Before presenting our DSC scheme, we discuss linear block codes as subspaces and present the concept of a “complementary code”. This concept is a key tool throughout the thesis. Our major contributions to DSC framework design are twofold. Firstly, we construct a generalized DSC scheme with conceptual simplicity via linear block codes and their complements, in which existing DSC schemes from the literature appear as special cases. Not only does this framework allow for flexible non-asymmetric rates, but it can also be applied to any number of correlated information sources and achieves the optimal overall compression obtained by asymmetric-rate DSC. In addition, our framework is flexible in the choice of linear block codes: a practical DSC system can be built from any linear code and any corresponding complementary code, with flexible code rates. Secondly, by utilizing the properties of complementarity, we simplify the DSC decoding algorithm into a combination of channel encoding, syndrome computation and syndrome-to-error-pattern decoding functions of linear codes and their complementary codes. Then, we express the framework in polynomial form, since cyclic codes have simple hardware circuits for channel encoding and syndrome computation. By applying both cyclic codes and their cyclic complementary codes, our framework achieves hardware computational simplicity. The efficient coding algorithm is illustrated with a BCH code as well as a Reed-Solomon code. Finally, we investigate DSC systems which attempt to resolve the practical difficulties of compression-rate initialization and modification. We take simultaneous error detection in the joint decoder as a rate-adaptation trigger and build rate-adaptive DSC encoding schemes based on cyclic codes and Complex Rotary codes, which exhibit computational efficiency. Part of the research in this thesis has been published: X. Cao and M. Kuijper, “A Distributed Source Coding framework for multiple sources”, International Symposium on Information Theory and Its Applications (ISITA 2008), Dec. 2008; X. Cao and M. Kuijper, “Distributed Source Coding with cyclic codes and their duals”, International Conference on Information, Communications and Signal Processing, Macau, Dec. 2009.
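The syndrome-based decoding pipeline described above can be sketched in its classic asymmetric form with a (7,4) Hamming code: the encoder sends only the syndrome of its block x, and a decoder holding correlated side information y (differing from x in at most one bit) recovers x by mapping the syndrome difference to an error pattern. This is a textbook single-error illustration of the syndrome-to-error-pattern idea, not the thesis's general complementary-code framework.

```python
H = [  # parity-check matrix of the (7,4) Hamming code; column i is the
    [1, 0, 1, 0, 1, 0, 1],  # binary representation of i + 1
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(v):
    """Syndrome H*v over GF(2) of a length-7 bit vector."""
    return tuple(sum(h * b for h, b in zip(row, v)) % 2 for row in H)

def dsc_decode(y, s):
    """Recover x from side information y and the transmitted syndrome s,
    assuming x and y differ in at most one bit."""
    diff = tuple(a ^ b for a, b in zip(syndrome(y), s))
    if diff == (0, 0, 0):
        return list(y)  # no disagreement: y already equals x
    # the syndrome of a single-bit error equals the matching column of H
    pos = next(i for i in range(7) if tuple(H[r][i] for r in range(3)) == diff)
    x = list(y)
    x[pos] ^= 1
    return x
```

The encoder thus compresses 7 bits to a 3-bit syndrome, which is the rate saving that DSC extracts from the source correlation.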
  • Item
    A paradigm for intelligent sustainable design in architecture via open systems evolution
    GU, YAN ( 2010)
    The prospect of an environmental crisis, e.g. global warming and climate change, has called into question the viability of the modern development pattern, especially in industrialised societies. This pattern is characterised by the excessive exploitation of energy and resources without concern for the negative impact upon the natural ecosystem. One of the primary challenges of sustainable development is the dilemma between long-term economic development and environmental damage, a dilemma with temporal, social, economic, environmental, cultural and technological dimensions. Influenced by this industrial pattern, the modern design of buildings and cities has contributed to environmental degradation. To explore an alternative paradigm for sustainable design, a model of open systems evolution is investigated as a manifesto of the post-modern world-view. Based on the scientific foundations of the Second Law of Thermodynamics and complex systems science, this new paradigm states that the creativity of the universe appears as the emergence of order, or organised complexity, via the mechanisms of open systems evolution. An open system spontaneously adapts to its host environment via gradients and uses inputs of available energy and resources for its evolution towards order, further supported by an internal structure through self-organisation, to minimise negative impact upon the host environment and to optimise compatibility with it, as measured by entropy. The paradigm of open systems evolution implies a design framework for the sustainable co-existence of man and nature, which directly impacts institutional education and design practice for the built environment. Technically, it suggests an intelligent model of sustainable design in which order is interpreted as the sustainable symbiosis of nature, buildings and cities. This can be realised macroscopically by minimising the negative impacts of buildings and cities upon the natural ecosystem while meeting the environmental demands of end-users, and microscopically by establishing open thermodynamic relationships with nature, optimising environmental performance via the mechanisms of open systems evolution and consequently generating an optimal balance of energy and resource usage. Theoretically, this proposition of sustainable design is a contextual design strategy for the ecological sustainability of buildings and cities, adapting to the natural ecosystem and ensuring positive environmental impact.
  • Item
    Measurement in 802.11 wireless networks and its applications
    Ahmad Yar Khan, Malik ( 2010)
    Ease of deployment, wireless connectivity and ubiquitous mobile on-the-go computing have made IEEE 802.11 the most widely deployed Wireless Local Area Network (WLAN) standard in the world. The wireless channel is fundamentally different from its wired counterpart and exhibits characteristics which are difficult to model. We therefore resort to measurement-based characterization of wireless networks, and to this end developed a wireless network testbed using off-the-shelf wireless cards. Available bandwidth measurement is particularly challenging in the wireless environment because of adaptive data rates, a time-varying channel, and CSMA/CA-based contention instead of simple FIFO queueing. We present and experimentally evaluate a novel available bandwidth estimation scheme, ‘SPEEDO’, for 802.11 networks in infrastructure mode, based on passive monitoring, without any need for access point cooperation or protocol modifications. The ability to accurately classify observed packet errors according to their root cause, physical-layer noise or MAC-layer contention, opens up many opportunities for performance improvement at both the MAC and IP layers. We investigate three approaches to isolating physical errors from contention errors, based on channel utilisation, error correlation, and fragmentation-based frame reservation. We implemented these approaches on our testbed and show that the fragmentation technique outperforms the others. We show that current rate adaptation algorithms in IEEE 802.11 suffer in congested scenarios because of their inability to separate physical errors from contention. We introduce and compare two variants of a single core idea enabling the isolation and accurate measurement of the physical packet error rate, based on exploiting existing features of the MAC standard in a novel way: one based on the RTS/CTS mechanism, and the other on packet fragmentation. Using experimental results from a wireless testbed, we show that these mechanisms can be used to improve the performance of two existing algorithms, SampleRate and AMRR, both for individual stations and for the system as a whole. Finally, we present ‘SmartRate’, a highly adaptive, throughput-, congestion- and environment-aware rate adaptation algorithm. It is designed to avoid the weaknesses inherent in current rate adaptation algorithms; it is efficient and robust, and readily adapts to the situation (stationary or mobile). It is shown to outperform both SampleRate and AMRR in single-user and multi-user (congested) scenarios. The theme of this dissertation can be summarized as the quest to understand, characterize and improve the behavior of wireless links using live measurements, for the benefit of network users. The work contributes to the field of wireless networks through: the development of an available bandwidth estimation technique; the extraction of wireless link properties such as the physical PER; the improvement of existing rate adaptation algorithms by incorporating the physical PER; and the development of a more robust and dynamic rate adaptation algorithm.
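The RTS/CTS variant of the error-isolation idea rests on a simple observation: once a CTS has been received, the medium is reserved, so a subsequent data-frame loss cannot be a collision and can be attributed to the physical layer. A minimal accounting sketch over hypothetical per-transmission records (the field names and record shape are assumptions for illustration, not the thesis's measurement format):

```python
def physical_per(events):
    """Estimate the physical packet error rate from transmissions that
    followed a successful RTS/CTS exchange: with the channel reserved,
    a missing ACK is attributed to physical-layer loss, not contention."""
    protected = [e for e in events if e["cts_received"]]
    if not protected:
        return 0.0  # no protected transmissions observed yet
    losses = sum(1 for e in protected if not e["ack_received"])
    return losses / len(protected)
```

Transmissions without a CTS are deliberately excluded, since their losses may be collisions; this separation is what lets a rate adaptation algorithm react to channel quality rather than to congestion.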