Electrical and Electronic Engineering - Theses

Search Results

Now showing 1 - 10 of 16
  • Item
    Transthoracic resistance during cardiac defibrillation
    Tulloh, Andrew McCall (1983)
    Before entering into a description of the scope of this project, it is worth mentioning that human transthoracic resistance, not impedance, is the subject of this thesis. The relationship between voltage and current in the human body is known to vary with both current and time. Complex impedance is an abstraction, and is not useful for describing such a non-linear, time-varying system. The concept is best suited to linear, time-invariant systems or, at worst, non-linear time-invariant systems because of the system model which is implied with complex impedance. Certainly, it may be reasonable to represent the electrical transfer function of the thorax as a combination of non-linear phase lead, phase lag and transconductance parameters, but such a model loses significance when the only waveform available for testing is a very slightly underdamped sinusoid (the usual DC defibrillating waveform). Of course, measurements can be taken at low current levels, below a few tens of milliamps, with different source waveforms and with no adverse effects on the subject; however, this does not necessarily give information about what happens at the higher current levels. Voltage and current are the only two "real" parameters available for measurement. The transfer function between these two variables for the human thorax can be completely described for a particular current (or voltage) waveform if they are both measured at each point in time for the duration of the waveform. The term "resistance" will be used hereafter to refer to the instantaneous ratio of this voltage and current. Closed-chest defibrillation of the heart is carried out frequently in hospitals as treatment for cardiac arrest, ventricular fibrillation and other cardiac arrhythmias. The success rate is not 100%, and since the turn of the century much work has been done to isolate and quantify the variables affecting the outcome of such resuscitation attempts. 
Most workers now agree that the peak current density in the myocardium during defibrillation is one of the most important factors determining success. It is clear that this will be critically affected by the electrical resistance of the thorax, where the defibrillating electrodes are applied. (From Ch. 1)
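The instantaneous-ratio definition of resistance used in this abstract can be made concrete with a short numerical sketch. The function name and sample values below are illustrative inventions, not data from the thesis:

```python
# Instantaneous resistance r(t) = v(t) / i(t), evaluated sample by sample
# over the duration of the defibrillating waveform, per the definition above.

def instantaneous_resistance(voltage, current, eps=1e-9):
    """Return r[k] = v[k] / i[k] for each sample pair, skipping samples
    where the current is too close to zero for the ratio to be meaningful."""
    return [v / i for v, i in zip(voltage, current) if abs(i) > eps]

# Made-up samples from a decaying defibrillation pulse (volts, amps):
v = [2000.0, 1500.0, 900.0, 400.0]
i = [40.0, 31.0, 19.5, 9.0]
r = instantaneous_resistance(v, i)
# r varies over the pulse, reflecting the non-linear, time-varying thorax
```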
  • Item
    An adaptive system for patient-controlled analgesia vol.1
    Rudolph, Heiko E. R. (1995-11)
    Patient-Controlled Analgesia (PCA) has become accepted as an important means of self-regulated relief from post-surgical pain. In commonly used PCA systems, patients use a hand-held push-button to indicate the presence of pain and initiate a predetermined bolus of drug infusion. A disadvantage of this system is that no means is provided to accommodate variations in the intensity of pain or the sensitivity of the patient to the analgesic in use, apart from the frequency of button pushing. A fixed-rate background infusion is usually an option. A new adaptive PCA system is proposed to provide improved PCA through the use of a variable background infusion, the provision for an extended high range of analgesic dosages and a novel handset which allows patients to rate their pain. The total system is under the control of an expert algorithm and is proposed to overcome some of the shortcomings of current systems. (For complete abstract open document)
  • Item
    Managed DC power reticulation systems
    Morton, Anthony Bruce (1999-11)
    Electric power engineering, as it applies to low-voltage power reticulation in buildings and industrial sites, is ripe for a ‘paradigm shift’ to bring it properly into the Electronic Age. The conventional alternating-current approach, now over a hundred years old, is increasingly unsatisfactory from the point of view of plant and appliance requirements. Alternative approaches can deliver substantial cost savings, higher efficiencies, power quality improvements, and greater safety. Power reticulation systems in the future can be expected to differ from present systems in two key respects. The first is a greatly increased role for direct current; the second is the augmentation of the power system with a wide range of ‘management’ technologies. Combining these two trends, which can already be observed today, leads to consideration of ‘managed DC’ power reticulation systems, operating from AC bulk supply mains via AC-DC converters.
  • Item
    Optical fibre-loop buffers
    Dickson, Adam Matthew (1996)
    This thesis contains a detailed investigation of fibre-loop buffers. Fibre-loop buffers will be required in all-optical packet-switching networks, which may be the basis of future telecommunications networks. The term "all-optical" or "photonic" refers to the fact that this buffer stores data in an optical form. At no stage is data converted from an optical into an electrical form or vice versa. The buffer investigated in this thesis achieves all-optical data storage using an optical fibre delay-line (hence “fibre-loop”). Data entering the buffer passes into the input of this delay-line. Some time later (typically less than a microsecond) the data appears at the output of the delay-line. The data can be passed back into the input of this delay-line and the process repeated, thereby extending the storage time. Optical power loss is inevitable as stored data makes repeated round-trips through the fibre delay-line. A semiconductor optical amplifier (SOA) is inserted in series with this delay-line to provide all-optical amplification and compensate for this power loss. Unfortunately, a SOA also generates amplified spontaneous emission (ASE) noise, which successively adds to and degrades stored data as it makes multiple passes through the SOA. The presence of ASE therefore limits the maximum number of round-trips (recirculations) for which data can be stored in the memory loop. This thesis contains new experimental data showing the accumulation of ASE. A characteristic of the fibre-loop buffer is that small changes in the gain of the SOA can have a cumulative effect on the power level of stored data after a few recirculations. Such large changes of power level must be avoided if the fibre-loop buffer is to be a reliable storage system with predictable characteristics. One cause of such gain changes is gain compression, which causes the gain of a SOA to decrease at high input power levels. 
Previous researchers have utilised the negative-feedback effect caused by gain compression to stabilise the power level of stored data in a fibre-loop buffer, in what is a partial answer to the above requirement. Near-travelling-wave SOAs also possess Fabry-Pérot ripple in their gain spectra, which is caused by residual end-facet reflections. This ripple is shown in this thesis to affect the power levels of stored data as well as making the performance (i.e. the maximum storage time) of a fibre-loop buffer dependent on the operating wavelength. It is shown in this thesis that the thermal characteristics of the SOA active region also influence the power levels of stored data. Both the gain and the Fabry-Pérot ripple characteristics depend on the temperature of the active region. This latter quantity is in turn dynamically coupled to the bias current level of the SOA (with microsecond to millisecond time constants). It is shown in this thesis that since the bias current level is likely to vary in a complicated manner with time in a practical application, the cumulative effect of the SOA thermal characteristics on the power level of stored data can be large and unpredictable unless corrective measures are taken. The phenomena described above are complicated by the carrier recombination dynamics in the SOA, which affect the degree of gain compression and also the Fabry-Pérot ripple characteristics on sub-nanosecond time scales. It is shown in this thesis that the dynamic behaviour of gain compression significantly distorts high bit-rate data as well as affecting the bit-error-rate (BER) at subsequent detection. It is shown by experiment in this thesis that all of the phenomena described above affect fibre-loop buffer performance to a significant degree. These experiments have been performed using a prototype fibre-loop buffer constructed by the author. 
Optical component characteristics (the SOA, as well as other components) which significantly affect the operation of the prototype fibre-loop buffer are discussed in detail. This thesis also contains a time-domain model of the prototype fibre-loop buffer which incorporates all of the above phenomena. This model successfully (and quantitatively) accounts for all of the observed behaviour of the prototype buffer. The time-domain model, having been proven, is then used to predict the performance of the prototype fibre-loop buffer under realistic operating conditions at data-rates up to 40 Gbit/s. This model is also used to predict the performance of an improved fibre-loop buffer design using a strained-quantum-well SOA. It is also shown using the time-domain model that the use of gain compression to stabilise power levels requires a fibre-loop buffer to operate in such a way that it provides sub-optimal storage times. An active feedback mechanism is shown to be a better way of guaranteeing repeatable operation of a fibre-loop buffer. This feedback mechanism monitors the power level of stored data and adjusts the gain of the memory loop by changing the bias current level of the SOA.
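The round-trip power balance and ASE accumulation described in this abstract can be illustrated with a toy steady-state recursion. This is a simple sketch with invented parameter values, not the time-domain model developed in the thesis:

```python
# Toy recirculating-loop power model: each round trip applies the loop loss
# and the SOA gain to the stored signal, while the SOA adds a fixed amount
# of ASE noise power per pass. All numbers are illustrative.

def recirculate(p_signal, n_trips, gain=1.25, loss=0.8, p_ase=1e-6):
    """Return (signal power, accumulated ASE power) after n_trips passes."""
    p_noise = 0.0
    for _ in range(n_trips):
        p_signal *= gain * loss                  # net round-trip gain (here 1.0)
        p_noise = p_noise * gain * loss + p_ase  # ASE amplified and topped up
    return p_signal, p_noise

sig, noise = recirculate(1e-3, 100)   # 1 mW stored for 100 recirculations
snr = sig / noise
# with net gain exactly unity the signal power is preserved, but ASE grows
# linearly with the number of recirculations, so the SNR falls as 1/n
```

The model makes visible why small gain changes are cumulative: any net round-trip gain different from unity compounds exponentially with the number of recirculations.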
  • Item
    The bioengineering development of a multi-channel, implantable hearing prosthesis for the profoundly deaf
    Forster, Ian Cameron (1978)
    This thesis describes the bioengineering development of a sensory prosthesis system specifically dedicated to the multi-channel electrical stimulation of the terminations of the auditory nerve in the cochleae of profoundly deaf persons. By simulating the gross pattern of electrical activity that would exist in the auditory nerve of a person with normal hearing, it is hoped that the sensation of hearing may be restored, and in particular, speech comprehension and communication re-established. In order that the complications associated with a percutaneous connection to the intra-cochlear array may be avoided, a system has been devised which comprises an implantable stimulator controlled transcutaneously by an external transmitter. The implementation of this concept has established an interface between the electrode array and an external speech processor which is sufficiently transparent with respect to stimulus parameter control, to permit the investigation of coding schemes based on current auditory neurophysiological studies. In particular, control over both the amplitude and relative time of stimulation (or phase), as well as frequency of stimulation, of up to fifteen independent channels is possible. The phase control facility is considered a unique feature of the system for coding temporal data. Finally, the transcutaneous link has been realised using a novel, dual radio frequency coupling system to provide efficient transfer of both power and data to the implanted device. The realisation of this link has been based upon the results of basic studies concerning the properties of coupling networks, together with the development of a high efficiency switching mode power converter to effect transcutaneous power transfer.
  • Item
    Interleaving techniques for high speed data transmission
    Hui, Wing Hong (1993)
    Interleaving is a technique used to convert a transmission channel with memory into one that is memoryless. The performance of Forward Error Correction (FEC) systems operating in the presence of burst errors is improved by passing the coded signal through an interleaving process. Commercial FEC sub-systems such as Viterbi and Reed-Solomon decoders are now commonplace; however, interleavers, while indispensable, are still quite rare. This dissertation provides a comprehensive review of the two main interleaver types: block and convolutional interleavers. Following this review, the optimum convolutional interleaver is chosen for further analysis. To gain some "real-time" experience and to investigate the commercial potential of a convolutional interleaver, a variable-rate interleaver has been successfully implemented on a TMS320C51 Digital Signal Processor (DSP). Many factors were considered in this implementation: throughput, synchronisation, interleaving depth and full-duplex interleaving and de-interleaving. To test the implementation, the proposed convolutional interleaver was finally interfaced to a commercial 1024 QAM 2 Mbit/s modem. The investigation of the implementation of interleavers with DSP indicates that there is a need for more compact and flexible interleaver structures which can be readily integrated (in VLSI or DSP). The final part of the dissertation focuses on cascaded and adaptive interleavers. Cascaded interleavers allow more sophisticated interleavers to be constructed from simple interleaving blocks. Adaptive interleavers provide the ability to adjust the interleaving depth (and thus the burst error protection) dynamically. A comprehensive computer simulation was developed and used for these investigations. The previously mentioned DSP-based interleaver was also interfaced to the host personal computer (PC). This system facilitates rapid simulation results, with the interleaving part of the simulation being run in real-time. 
In summary, this thesis provides new designs and associated implementation results for various interleaving systems, including high speed, single chip, variable rate, byte-oriented convolutional interleavers. Based on a novel dynamic interleaver concept, a new adaptive interleaving system is proposed, and this is supported with successful simulation results for advanced high speed data transmission systems.
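The convolutional interleaver discussed in this abstract can be sketched as a bank of delay lines serviced cyclically. This is a minimal Ramsey/Forney-style illustration with assumed parameters, not the TMS320C51 implementation:

```python
from collections import deque

# Minimal convolutional interleaver sketch: branch i carries a delay line of
# i*M symbols; the de-interleaver uses the complementary delays (B-1-i)*M,
# so every symbol experiences the same end-to-end delay. Burst errors on the
# channel are thereby spread across many codewords.

class ConvInterleaver:
    def __init__(self, branches, unit_delay, deinterleave=False):
        self.lines = [
            deque([0] * (((branches - 1 - i) if deinterleave else i) * unit_delay))
            for i in range(branches)
        ]
        self.branch = 0

    def push(self, symbol):
        """Feed one symbol in, advance to the next branch, return one symbol out."""
        line = self.lines[self.branch]
        line.append(symbol)
        out = line.popleft()
        self.branch = (self.branch + 1) % len(self.lines)
        return out

B, M = 3, 1                      # branch count and unit delay (illustrative)
tx = ConvInterleaver(B, M)
rx = ConvInterleaver(B, M, deinterleave=True)
data = list(range(20))
received = [rx.push(tx.push(s)) for s in data]
delay = B * (B - 1) * M          # end-to-end latency in symbols
# received[delay:] reproduces the start of the original sequence in order
```

Compared with a block interleaver of similar burst protection, this structure needs roughly half the memory and latency, which is one reason convolutional interleavers suit high-speed links.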
  • Item
    Performance analysis of Hidden Markov Model based tracking algorithms
    Arulampalam, Moses Sanjeev (1997)
    This thesis investigates the performance of Hidden Markov Model (HMM) based tracking algorithms. The algorithms considered have applications in frequency line tracking and target position tracking. The performance of these algorithms is investigated by a combination of theoretical and simulation-based approaches. The theoretical approach focuses on deriving upper bounds on probabilities of error paths in the output of the tracker. Upper bounds on specific error paths, conditioned on typical true paths, are derived for a HMM-based frequency line tracker that uses continuous-valued observation vectors. These bounds are derived by enumerating possible estimated state sequences, and using necessary conditions on the Viterbi scores of these sequences. The derived upper bounds are found to compare well with simulation results. Next, upper bounds on average error event probabilities (averaged over all possible true paths) are derived for the same HMM-based frequency tracker. Here, 'error event' refers to a brief divergence of the estimated track from the true path. Numerical computation of the derived upper bounds is shown to compare well with simulation results. Using these bounds, a theorem is established which states that optimum tracking, corresponding to minimum error probability, is achieved when model transition probabilities are matched to the 'true' transition probabilities of the underlying signal. Other interesting features of this algorithm are analysed, including robustness of the algorithm to variations in model transition probabilities, and characterisation of the benefits of using HMM-based tracking as opposed to a simple approach based on isolated Maximum Likelihood estimators. The theoretical analysis is extended to two other HMM-based frequency line trackers that use discrete-valued observation vectors. 
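The core idea of a HMM-based frequency line tracker — a hidden frequency bin, near-diagonal transition probabilities, and Viterbi decoding of the best track — can be sketched as follows. All parameters and data here are invented for illustration; this is not one of the trackers analysed in the thesis:

```python
import math

# Toy HMM frequency line tracker: the hidden state is a frequency bin,
# transitions favour staying in (or moving to an adjacent) bin, and each
# observation frame assigns a higher log-likelihood to the bin holding the
# tone. Viterbi decoding recovers the maximum-score track.

def viterbi_track(obs, n_states, p_stay=0.8):
    p_move = (1 - p_stay) / 2                # split between the two neighbours

    def log_trans(i, j):
        if i == j:
            return math.log(p_stay)
        if abs(i - j) == 1:
            return math.log(p_move)
        return float("-inf")                 # only adjacent-bin moves allowed

    score = [math.log(1 / n_states) + obs[0][s] for s in range(n_states)]
    back = []
    for frame in obs[1:]:
        prev, score, back_row = score, [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda i: prev[i] + log_trans(i, s))
            back_row.append(best)
            score.append(prev[best] + log_trans(best, s) + frame[s])
        back.append(back_row)
    path = [max(range(n_states), key=lambda s: score[s])]
    for row in reversed(back):               # follow back-pointers to the start
        path.append(row[path[-1]])
    return path[::-1]

# Synthetic log-likelihoods: a bonus of 3.0 in the bin holding the tone.
true_track = [0, 0, 1, 1, 2, 2, 2, 1]
obs = [[3.0 if s == t else 0.0 for s in range(4)] for t in true_track]
est = viterbi_track(obs, 4)                  # recovers the true track
```

The mismatch question studied in the thesis corresponds to varying `p_stay` away from the true transition statistics and observing how the decoded track degrades.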
A comparative study of the three HMM-based frequency line trackers is carried out to arrive at conditions for the superiority of one algorithm over another. The simulation-based approach to analysing performance consists of a combination of Monte-Carlo (MC) and Importance Sampling (IS) simulations. MC simulations are carried out at moderate SNR, where the computation time required for estimating performance measures is feasible. At high SNR, the error probabilities are small and the required computation time becomes infeasible. To overcome this, importance sampling schemes are designed which reduce the computation time by orders of magnitude. Importance sampling is a modified Monte-Carlo method which is useful in the simulation of rare probabilities. The basic principle is to use a different simulation density to increase the relative frequency of "important" events and then weight the observed data in order to obtain an unbiased estimate of the parameter of interest. In this thesis, a systematic procedure based on minimizing an upper bound to the IS estimator variance is used in the simulation density design. High efficiency gains, of the order of 10^13, are demonstrated with the proposed scheme.
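The importance-sampling principle described above — sample from a shifted density, then reweight by the likelihood ratio — can be illustrated on a textbook rare-event problem. The example and its parameters are inventions for illustration, unrelated to the thesis's tracker simulations:

```python
import math
import random

# Importance-sampling estimate of the rare probability P(Z > 4) for a
# standard normal Z. The simulation density is a normal shifted to the
# threshold, so "important" samples occur often; each hit is weighted by
# the likelihood ratio phi(x) / phi(x - 4) to keep the estimate unbiased.

def is_estimate(threshold=4.0, n=50_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)              # sample from N(threshold, 1)
        if x > threshold:
            # ratio of the two unit-variance normal densities at x
            total += math.exp(-x * x / 2 + (x - threshold) ** 2 / 2)
    return total / n

true_p = 0.5 * math.erfc(4.0 / math.sqrt(2.0))     # exact tail, ~3.17e-5
est = is_estimate()
# est lands close to true_p, whereas a plain Monte-Carlo run of the same
# size would typically see only one or two threshold crossings
```

Shifting the mean to the threshold is a common variance-reduction choice for Gaussian tails; the thesis's systematic design instead minimises an upper bound on the estimator variance.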
  • Item
    The design of an interface between a hardware ATM cell-stream splitter and the system bus of an experimental B-ISDN terminal
    Liew, Selbyn (1992)
    As worldwide standards on global networking and B-ISDN (Broadband Integrated Services Digital Network) are developed and gain in detail and clarity, the stage is set for an advance towards standardised high-speed global networking. With the enormous transmission bandwidth available through the use of optical communications technology, it is equally important that new applications are developed to make this worthwhile and to meet today's communication needs, which call for increases in both volume and sophistication. In response to the imminent deployment of B-ISDN, an experimental B-ISDN terminal is being developed in this department with a view to eventually providing an integration of video, voice and data communication services. This has been made possible with the CCITT adoption of Asynchronous Transfer Mode (ATM) as the basis for B-ISDN, which provides the flexibility to accommodate a variety of services with varying bit-rates. The work treated in this thesis is a continuation of this B-ISDN terminal project and is specifically targeted at completing the design of a hardware ATM cell-stream splitter, which sorts ATM cells so that different cell types may be directed appropriately to where they are to be processed. The work here aims at completing the interface of the hardware splitter to the system bus of the computer workstation being used as the experimental terminal. A fair amount of attention has also been given to the use of the XC4000 Field Programmable Gate Array (FPGA) technology provided by Xilinx Inc. The use of FPGAs has been a chief feature of the work on the hardware splitter.
  • Item
    Linearization of analogue optical transmitter by feedforward compensation
    Kwan, Anthony Chiu-Chi (1993)
    In recent years, analogue optical systems have received a lot of attention. Although conventional optical systems mostly employ a digital format because of the low power-budget requirement and good immunity to noise and distortion, analogue systems prove to have significant advantages over digital systems in applications such as video distribution and satellite communication systems. However, analogue transmission systems require transmitters of low noise and low distortion. A number of linearization schemes have been proposed to reduce the distortion introduced by the analogue transmitter. One of the most widely used linearization schemes for analogue transmitters is feedforward compensation. Previous research has shown that feedforward compensation can reduce laser intensity noise as well as distortion. The work described in this thesis is to investigate design optimization of feedforward compensation and to develop a prototype of this optimized system. Experimental testing shows that the feedforward prototype is capable of reducing distortion products by 15 dB over 2.7 GHz. This work also involves modelling the feedforward system to investigate the factors limiting the operating bandwidth of the system. The model developed is a useful design tool for feedforward systems because the distortion reduction performance of the feedforward system can be predicted by characterising the individual components that make up the linearization scheme. Theoretical analysis of an alternative implementation of the feedforward system is also performed. Unlike conventional feedforward systems, this new implementation uses only one laser source. Although the distortion reduction performance is degraded, the advantage is that problems associated with the use of two optical sources of different wavelengths are eliminated.
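The principle of feedforward compensation — isolate the transmitter's distortion in a second path and subtract it at the output — can be sketched with a memoryless toy model. The distortion coefficient and the gain-mismatch figure are illustrative assumptions, not measurements from the prototype:

```python
# Toy feedforward linearization: a transmitter with third-order distortion
# y = x + A*x^3 is corrected by comparing y against the (ideally delayed)
# input to extract the error, then subtracting g times that error at the
# output combiner. With perfect matching (g = 1) the distortion cancels
# exactly; a gain mismatch leaves a residual of (1 - g) * A * x^3.

A = 0.05                                  # distortion coefficient (invented)

def transmitter(x):
    return x + A * x ** 3                 # memoryless nonlinearity

def feedforward(x, g=1.0):
    y_main = transmitter(x)
    error = y_main - x                    # distortion isolated in the error path
    return y_main - g * error             # error re-injected at the combiner

x = 2.0
residual = feedforward(x, g=0.82) - x     # imperfect match leaves 18% of A*x^3
# an amplitude match error of ~18% limits cancellation to about
# 20*log10(1/(1 - g)) ≈ 15 dB, which is why gain/phase matching across the
# band bounds the achievable distortion reduction
```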
  • Item
    Comprehensive circulatory management of seriously ill patients with a closed-loop system
    Mason, David Glenn (1989)
    The objective of this project was to develop a closed-loop computer system to assist intensive care staff in managing multiple drug and fluid infusions for the treatment of seriously ill patients with circulatory failure. The system was to be designed specifically for clinical application, forming an integral part of an intensive care system. The treatment of circulatory failure can present a complex management problem with the need for multiple, concurrently administered drug and fluid infusions. Hence a major part of this work involved the design of the control algorithm which determines the infusion to be adjusted, the size of the adjustment and its timing. The design of the algorithm is complicated by the range of circulatory responses to an adjustment of a drug infusion. In addition, there is currently no reliable means of continuously monitoring important haemodynamic variables such as cardiac output and pulmonary artery wedge pressure. It was therefore deemed appropriate to prepare a multivariable control strategy based on the control actions of clinical staff, using expert system techniques. This required interaction with skilled clinical staff to identify general rules which they use when managing drug and fluid infusions for patients in shock. The practical implementation of this computer system required specific attention to patient safety and the user interface. Therefore the architecture of the developed system contains a safety net to ensure a safe recovery path for all possible alarm conditions. All displayed information, including alarm and advisory messages, is presented in a clear and concise format. The trials of this computer system were conducted in stages to validate the system's knowledge base and to promote the progressive evolution of the system architecture. At first the computer system operated as a multivariable monitoring station. The knowledge base was embedded in the system architecture. 
Advisory modes of operation were implemented to assist in the validation of the knowledge base. The final and desired mode of operation automated the adjustment of multiple concurrently administered drug and fluid infusions. Progress through these development phases was facilitated by the analysis of data recorded by the system. Initial clinical trials have demonstrated the clinical utility of this system. In addition, the system reliably produced stable closed-loop control under difficult management conditions. Therefore the contribution of this study is a closed-loop computer system which satisfies the initial objective. Ample scope exists for the continued development of this system. Insights gained through its continued use will contribute to its refinement. New intensive care equipment can be incorporated into the system. This will see the system develop in sophistication and versatility.