Electrical and Electronic Engineering - Research Publications

Search Results

Now showing 1 - 10 of 24
  • Item
    Energy Efficient Time Synchronization in WSN for Critical Infrastructure Monitoring
    Rao, AS ; Gubbi, J ; Tuan, N ; Nguyen, J ; Palaniswami, M ; Wyld, DC ; Wozniak, M ; Chaki, N ; Meghanathan, N ; Nagamalai, D (SPRINGER-VERLAG BERLIN, 2011-01-01)
    Wireless Sensor Network (WSN) based Structural Health Monitoring (SHM) is becoming popular for analyzing the life of critical infrastructure, such as bridges, on a continuous basis. Most of these applications require data aggregation at a high sampling rate, and accurate time synchronization on the order of 0.6–9 μs every few minutes is necessary for data collection and analysis. A two-stage energy-efficient time synchronization scheme is proposed in this paper. First, the network is divided into clusters and a head node is elected using a Low-Energy Adaptive Clustering Hierarchy based algorithm. Then, multiple packets of different lengths are used to estimate the delay between the elected head and the rest of the network hierarchically at different levels. The scheme limits the worst-case synchronization error to that of three hops. Compared with earlier energy-efficient time synchronization schemes, the proposed approach increases the lifetime of the network.
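The delay-estimation step above can be pictured with a simple linear model, delay = fixed delay + packet length / bitrate, fitted from probe packets of different lengths. This is only a sketch of the general idea; the function name, the least-squares fit, and the numbers below are assumptions, not the authors' estimator.

```python
import numpy as np

def estimate_link_delay(lengths_bits, delays_s):
    """Fit delay = fixed_delay + length * per_bit_time by least squares and
    return (fixed_delay_s, effective_bitrate_bps). Illustrative only."""
    A = np.column_stack([np.ones(len(lengths_bits)), lengths_bits])
    coef, *_ = np.linalg.lstsq(A, np.asarray(delays_s, dtype=float), rcond=None)
    fixed_delay, per_bit_time = coef
    return fixed_delay, 1.0 / per_bit_time

# Example with two hypothetical probe packets of different lengths.
print(estimate_link_delay([256, 1024], [0.0012, 0.0030]))
```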
  • Item
    A robust algorithm for foreground extraction in crowded scenes
    Rao, AS ; Gubbi, J ; Marusic, S ; Palaniswami, M (IEEE, 2012-12-01)
    The widespread availability of surveillance cameras and digital technology has improved video-based security measures in public places. Surveillance systems assist officials in both civil and military applications, helping to identify unlawful activities through uninterrupted transmission of surveillance video. At the same time, this adds to the already heavy workload of security officers. If the surveillance system were intelligent and efficient enough to identify events of interest and alert the officers, it would alleviate the burden of continuous monitoring. Existing surveillance systems struggle to identify objects that are dissimilar in shape, size, and color, especially human beings (nonrigid motion). Their limitations include global illumination changes, frequent shadows, insufficient lighting, the distinct characteristics of slow- and fast-moving objects, the unforeseen appearance and behavior of objects, and constraints on system memory. In this paper, we present a filtering technique for extracting foreground information that uses the RGB components and chrominance channels to neutralize the effects of nonuniform illumination, remove shadows, and detect both slow-moving and distant objects.
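A minimal sketch of the shadow-suppression idea, assuming a YCbCr-style luma/chrominance split and a static background frame: pixels whose chrominance departs from the background are treated as foreground, while luma-only changes are discounted as shadows or illumination changes. The thresholds and decision rule are illustrative, not the authors' filter.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image with values in [0, 1] to Y, Cb, Cr channels."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def foreground_mask(frame, background, chroma_thresh=0.04, luma_thresh=0.25):
    """Mark pixels as foreground when their chrominance differs from the
    background; a moderate luma-only change (typical of a cast shadow or a
    global illumination shift) is suppressed."""
    f, b = rgb_to_ycbcr(frame), rgb_to_ycbcr(background)
    chroma_diff = np.linalg.norm(f[..., 1:] - b[..., 1:], axis=-1)
    luma_diff = np.abs(f[..., 0] - b[..., 0])
    return (chroma_diff > chroma_thresh) | (luma_diff > luma_thresh)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bg = rng.uniform(size=(4, 4, 3))
    fr = bg.copy()
    fr[:2, :2] = rng.uniform(size=(2, 2, 3))   # synthetic "object" in one corner
    print(foreground_mask(fr, bg).astype(int))
```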
  • Item
    Jeeva: Enterprise Grid Enabled Web Portal for Protein Secondary Structure Prediction
    Jin, C ; Gubbi, J ; Buyya, R ; Palaniswami, M ; Thulasiram, R (IEEE, 2008)
    This paper presents a Grid portal for protein secondary structure prediction developed using services of Aneka, a .NET-based enterprise Grid technology. The portal is used by research scientists to discover new prediction structures in parallel. An SVM (Support Vector Machine)-based prediction algorithm is used with 64 sample protein sequences as a case study to demonstrate the potential of enterprise Grids.
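Aneka's .NET services are not reproduced here; as a rough illustration of the farming-out pattern the portal relies on, the Python sketch below distributes independent per-sequence predictions across a pool of worker processes. The stub prediction function is a placeholder, not the portal's SVM.

```python
from concurrent.futures import ProcessPoolExecutor

def predict_secondary_structure(sequence: str) -> str:
    """Placeholder for a per-sequence SVM prediction; returns all-coil states."""
    return "C" * len(sequence)

def predict_batch(sequences, workers=4):
    """Dispatch independent sequences to worker processes, analogous in spirit
    to the portal farming prediction jobs out to enterprise grid nodes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(predict_secondary_structure, sequences))

if __name__ == "__main__":
    print(predict_batch(["MKTAYIAKQR", "GSSGSSG"]))
```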
  • Item
    Real value solvent accessibility prediction using adaptive support vector regression
    Gubbi, J ; Shilton, A ; Palaniswami, M ; Parker, M (IEEE, 2007)
  • Item
    Protein topology classification using two-stage support vector machines.
    Gubbi, J ; Shilton, A ; Parker, M ; Palaniswami, M (Universal Academy Press, 2006)
    The determination of the first 3-D model of a protein from its sequence alone is a non-trivial problem. The first 3-D model is the key to the molecular replacement method of solving the phase problem in x-ray crystallography. If the sequence identity is more than 30%, homology modelling can be used to determine the correct topology (as defined by CATH) or fold (as defined by SCOP). If the sequence identity is less than 25%, however, the task is very challenging. In this paper we address the topology classification of proteins with sequence identity of less than 25%. The inputs to the system are the amino acid sequence, the predicted secondary structure, and the predicted real-value relative solvent accessibility. A two-stage support vector machine (SVM) approach is proposed, classifying the sequences into three structural classes (alpha, beta, alpha+beta) in the first stage and into 39 topologies in the second stage. The method is evaluated on a newly curated dataset from CATH with maximum pairwise sequence identity of less than 25%. Overall accuracies of 87.44% and 83.15% are reported for class and topology prediction, respectively. In the class prediction stage, a sensitivity of 0.77 and a specificity of 0.91 are obtained. Data files, the SVM implementation (SVMHEAVY), and result files can be downloaded from http://www.ee.unimelb.edu.au/ISSNIP/downloads/.
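A minimal sketch of a two-stage classification scheme of this kind, assuming scikit-learn's SVC and precomputed feature vectors; the kernels, parameters, and class handling are placeholders rather than the paper's SVMHEAVY implementation.

```python
import numpy as np
from sklearn.svm import SVC

class TwoStageClassifier:
    """Stage 1 predicts the structural class (alpha, beta, alpha+beta);
    stage 2 applies a per-class SVM to predict the topology. Assumes each
    structural class contains at least two topologies in the training data."""

    def fit(self, X, y_class, y_topology):
        X, y_class, y_topology = map(np.asarray, (X, y_class, y_topology))
        self.stage1 = SVC(kernel="rbf").fit(X, y_class)
        self.stage2 = {}
        for c in np.unique(y_class):
            idx = y_class == c
            # Each per-class model sees only sequences of that structural class.
            self.stage2[c] = SVC(kernel="rbf").fit(X[idx], y_topology[idx])
        return self

    def predict(self, X):
        X = np.asarray(X)
        classes = self.stage1.predict(X)
        return np.array([self.stage2[c].predict(x[None, :])[0]
                         for c, x in zip(classes, X)])
```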
  • Item
    Stability Analysis of the Decomposition Method for solving Support Vector Machines
    Lai, D ; Shilton, A ; Mani, N ; Palaniswami, M (2005)
    In situations where processing memory is limited, the Support Vector Machine quadratic program can be decomposed into smaller sub-problems and solved sequentially. The convergence of this method has previously been proven through the use of a counting method. In this initial investigation, we approach the convergence analysis by treating the decomposed sub-problems as subsystems of a general system. The gradients of the sub-problems and the inequality constraints are explicitly modelled as system variables. The changes in these variables during optimization form a dynamic system modelled by vector differential equations. We show that the change in the objective function can be written as the energy of the system. This makes it a natural Lyapunov function with an asymptotically stable point at the origin. The asymptotic stability of the whole system then follows under certain assumptions.
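A schematic of the Lyapunov argument described above, in assumed notation (a minimization form of the SVM dual, with f the objective, \alpha(t) the iterate trajectory, and \alpha^* the optimum); the paper's precise system model and assumptions are not reproduced.

```latex
\[
  V(t) \;=\; f\bigl(\alpha(t)\bigr) - f(\alpha^*) \;\ge\; 0,
  \qquad
  \dot V(t) \;=\; \nabla f\bigl(\alpha(t)\bigr)^{\top}\dot\alpha(t) \;\le\; 0 .
\]
% The objective decrease along the optimization trajectory plays the role of
% the energy in the system, so V is a natural Lyapunov candidate and V = 0
% (the origin) is asymptotically stable under the paper's assumptions.
```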
  • Item
    Disulphide Bridge Prediction using Fuzzy Support Vector Machines
    Jayavardhana, RGL ; Shilton, A ; Parker, M ; Palaniswami, M (2005)
    One of the major contributors to the native form of a protein is cysteines forming covalent bonds in the oxidized state. Predicting such bridges from the sequence is a very challenging task, given that the number of possible bridges rises exponentially as the number of cysteines increases. We propose a novel technique for disulphide bridge prediction based on Fuzzy Support Vector Machines, which we call DIzzy. In our investigation, we consider disulphide bond connectivity between two cysteines both with and without a priori knowledge of their bonding state. We use a new encoding scheme based on physico-chemical properties and statistical features, such as the probability of occurrence of each amino acid in different secondary structure states, along with PSI-BLAST profiles. The performance is compared with standard support vector machines. We evaluate our method and compare it with an existing method using the SPX dataset.
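One way to picture the fuzzy-SVM idea is to scale each training example's slack penalty by a membership value, so less reliable cysteine-pair examples pull less on the decision boundary. The sketch below does this with scikit-learn's per-sample weights on synthetic data; the features, memberships, and parameters are placeholders, not DIzzy's encoding.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))           # stand-in for encoded cysteine-pair features
y = rng.integers(0, 2, size=200)         # 1 = bonded pair, 0 = non-bonded pair
membership = rng.uniform(0.2, 1.0, 200)  # fuzzy membership per training example

clf = SVC(kernel="rbf", C=10.0)
clf.fit(X, y, sample_weight=membership)  # per-sample weights emulate fuzzy memberships
print("training accuracy:", clf.score(X, y))
```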
  • Item
    Distributed data fusion using support vector machines
    Challa, S ; Palaniswami, M ; Shilton, A (2002)
    The basic quantity to be estimated in the Bayesian approach to data fusion is the conditional probability density function (CPDF). In recent times, computationally efficient particle filtering approaches have gained importance in estimating these CPDFs. In this approach, i.i.d. samples are used to represent the conditional probability densities. However, their application in data fusion is severely limited because the information is stored in the form of a large set of samples. In practical data fusion systems with limited communication bandwidth, broadcasting this probabilistic information, available as a set of samples, to the fusion center is impractical. Support vector machines, through statistical learning theory, provide a way of compressing information by generating optimal kernel-based representations. In this paper we use SVMs to compress the probabilistic information available in the form of i.i.d. samples and apply them to solve the Bayesian data fusion problem. We demonstrate this technique on a multi-sensor tracking example.
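A minimal sketch of the compression idea, assuming a one-dimensional state and an SVR fitted to the empirical CDF of the particle set; only the support vectors (with their weights) would then need to be communicated to the fusion center. The formulation and parameters are illustrative, not the paper's kernel representation.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
samples = np.sort(rng.normal(loc=2.0, scale=0.5, size=2000))  # i.i.d. posterior samples
ecdf = np.arange(1, samples.size + 1) / samples.size          # empirical CDF values

svr = SVR(kernel="rbf", C=10.0, epsilon=0.01)  # epsilon trades accuracy for compression
svr.fit(samples.reshape(-1, 1), ecdf)

compressed = svr.support_vectors_              # compact representation of the CPDF
print(f"{samples.size} samples -> {compressed.shape[0]} support vectors")
```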
  • Item
    Machine learning using support vector machines
    Palaniswami, M ; Shilton, A ; Ralph, D ; Owen, BD (2000)
    Machine learning captures the imagination of many scientific minds due to its potential to solve complex and difficult real-world problems. This paper gives methods of constructing machine learning tools using Support Vector Machines (SVMs). We first give a simple example to illustrate the basic concept and then demonstrate further with a practical problem. The practical problem is concerned with electronic monitoring of fishways for automatic counting of different fish species for the purpose of environmental management in Australian rivers. The results illustrate the power of the SVM approaches on the sample problem and their computational attractiveness for practical implementations.
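A toy two-class example in the same spirit as the paper's introductory illustration, using scikit-learn on synthetic data (the fishway imagery and features are not reproduced here).

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Generate a small synthetic two-class problem and train an RBF-kernel SVM.
X, y = datasets.make_classification(n_samples=300, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```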
  • Item
    A convergence rate estimate for the SVM decomposition method
    Lai, D ; Shilton, A ; Palaniswami, M (2005)
    The training of Support Vector Machines using the decomposition method has one drawback: the selection of working sets such that convergence is as fast as possible. Lin has shown that the rate is linear in the worst case, under the assumption that all bounded Support Vectors have been determined. That analysis was based on the change in the objective function under an SVMlight working-set selection rule. However, the rate estimate given is independent of time and hence gives little indication of how the linear convergence speed varies during the iteration. In this initial analysis, we treat the convergence from a gradient contraction perspective. We propose a necessary and sufficient condition which, when satisfied, gives strict linear convergence of the algorithm. The condition can also be interpreted as a basic requirement on a sequence of working sets in order to achieve such a convergence rate. Based on this condition, a time-dependent rate estimate is then derived. This estimate is shown to monotonically approach unity from below.
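In assumed notation, the strict linear convergence discussed above can be written as follows, with f the dual objective, \alpha^k the k-th decomposition iterate, and \alpha^* the optimum; the paper's time-dependent estimate of the rate \rho_k is then shown to increase monotonically towards 1 from below.

```latex
\[
  f(\alpha^{k+1}) - f(\alpha^*) \;\le\; \rho_k \,\bigl( f(\alpha^{k}) - f(\alpha^*) \bigr),
  \qquad 0 < \rho_k < 1 .
\]
```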