Computing and Information Systems - Theses


Search Results

Now showing 1 - 4 of 4
  • Item
    Cluster validation and discovery of multiple clusterings
    Lei, Yang (2016)
    Cluster analysis is an important unsupervised learning process in data analysis. It aims to group data objects into clusters so that objects in the same group are more similar to each other and objects in different groups are more dissimilar. There are many open challenges in this area; in this thesis, we focus on two: discovery of multiple clusterings and cluster validation.

    Many clustering methods focus on discovering a single ‘best’ solution from the data. However, data can be multi-faceted in nature. Particularly when datasets are large and complex, several useful clusterings may exist in the data. In addition, users may be seeking different perspectives on the same dataset, requiring multiple clustering solutions. Multiple clustering analysis, which has attracted considerable attention in recent years, aims to discover multiple reasonable and distinctive clustering solutions from the data. Many methods have been proposed on this topic, and one popular technique is meta-clustering. Meta-clustering explores multiple reasonable and distinctive clusterings by analyzing a large set of base clusterings. However, poor-quality and redundant base clusterings may exist, which hampers the generation of high-quality and diverse clustering views. In addition, the generated clustering views may not all be relevant, and checking all the returned solutions is time-consuming for users. To tackle these problems, we propose a filtering method and a ranking method to achieve higher quality and more distinctive clustering solutions.

    Cluster validation refers to the procedure of evaluating the quality of clusterings, which is critical for clustering applications. Cluster validity indices (CVIs) are often used to quantify the quality of clusterings. They can generally be classified into two categories, external measures and internal measures, distinguished by whether or not external information is used during the validation procedure. In this thesis, we focus on external cluster validity indices and on two open challenges: (a) CVIs for fuzzy clusterings and (b) bias issues for CVIs. External CVIs are often used to quantify the quality of a clustering by comparing it against the ground truth. Most external CVIs are designed for crisp clusterings (each data object belongs to exactly one cluster); how to evaluate the quality of soft clusterings (a data object can belong to more than one cluster) is a challenging problem. One common way to achieve this is by hardening a soft clustering to a crisp clustering and then evaluating it using a crisp CVI. However, hardening may cause information loss. To address this problem, we generalize a class of popular information-theoretic crisp external CVIs to directly evaluate the quality of soft clusterings, without the need for a hardening step. There is also an implicit assumption when using external CVIs to evaluate the quality of a clustering: that they work correctly. If this assumption does not hold, misleading results might occur. Thus, identifying and understanding the bias behaviors of external CVIs is crucial. Along these lines, we identify novel bias behaviors of external CVIs and analyze the types of bias both theoretically and empirically.
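    As a point of reference for the hardening baseline mentioned in this abstract, below is a minimal Python sketch of evaluating a soft clustering by first hardening it and then scoring it with a crisp information-theoretic CVI (normalized mutual information). The membership matrix and labels are made-up illustration data, and the use of scikit-learn's NMI is an assumption for illustration; the thesis's generalized soft CVIs are not reproduced here.

        # Hardening baseline: assign each object to its highest-membership cluster,
        # then score against the ground truth with a crisp CVI (NMI).
        import numpy as np
        from sklearn.metrics import normalized_mutual_info_score

        # Soft clustering: rows are data objects, columns are cluster memberships.
        soft = np.array([[0.9, 0.1],
                         [0.6, 0.4],
                         [0.2, 0.8],
                         [0.5, 0.5]])   # ambiguous object: hardening discards this nuance

        ground_truth = [0, 0, 1, 1]

        hardened = soft.argmax(axis=1)   # the hardening step (loses membership information)
        print(normalized_mutual_info_score(ground_truth, hardened))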
  • Item
    Design and adjustment of dependency measures
    Romano, Simone (2015)
    Dependency measures are fundamental for a number of important applications in data mining and machine learning. They are used ubiquitously: for feature selection, for clustering comparison and validation, as splitting criteria in random forests, and to infer biological networks, to name a few. More generally, there are three important applications of dependency measures: detection, quantification, and ranking of dependencies. Dependency measures are estimated on finite data sets, which makes these tasks challenging. This thesis proposes a series of contributions to improve performance on each of these three goals.

    When differentiating between strong and weak relationships using information-theoretic measures, the variance plays an important role: the higher the variance, the lower the chance of correctly ranking the relationships. In this thesis, we discuss the design of a dependency measure based on the normalized mutual information whose estimation uses many random discretization grids. This approach allows us to reduce its estimation variance. We show that a small estimation variance for the grid estimator of mutual information is beneficial for achieving higher power, both when the task is detecting dependencies between variables and when ranking different noisy dependencies.

    Dependency measure estimates can be high by chance when the sample size is small, e.g. because of missing values, or when the dependency is estimated between categorical variables with many categories. These biases cause problems when the dependency must have an interpretable quantification and when ranking dependencies for feature selection. In this thesis, we formalize a framework to adjust dependency measures in order to correct for these biases. We apply our adjustments to existing dependency measures between variables and show how to achieve better interpretability in quantification. For example, when a dependency measure is used to quantify the amount of noise on functional dependencies between variables, we experimentally demonstrate that adjusted measures have a more interpretable range of variation. Moreover, we demonstrate that our approach is also effective for ranking attributes during the splitting procedure in random forests, where a dependency measure between categorical variables is employed.

    Finally, we apply our framework of adjustments to dependency measures between clusterings. In this scenario, we are able to compute our adjustments analytically. We propose a number of adjusted clustering comparison measures which reduce to well-known adjusted measures as special cases. This allows us to propose guidelines for the best applications of our measures as well as for existing ones for which guidelines are missing in the literature, e.g. for the Adjusted Rand Index (ARI).
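    The general adjustment-for-chance idea this abstract refers to can be sketched with a Monte Carlo permutation baseline: subtract the measure's expected value under random relabelings and rescale by an upper bound, so that chance agreement scores near zero. The choice of mutual_info_score as the base measure, the crude upper bound, and the simulated data below are illustrative assumptions; the thesis derives its adjustments analytically rather than by permutation.

        import numpy as np
        from sklearn.metrics import mutual_info_score

        def adjusted_for_chance(x, y, measure=mutual_info_score, n_perm=200, seed=0):
            # (measure - expected value under permutation) / (upper bound - expected value)
            rng = np.random.default_rng(seed)
            m = measure(x, y)
            expected = np.mean([measure(x, rng.permutation(y)) for _ in range(n_perm)])
            max_m = measure(x, x)   # crude upper bound, used only for illustration
            return (m - expected) / (max_m - expected)

        # Two categorical variables with many categories and no real dependency:
        # the raw measure is inflated by chance, the adjusted value stays near zero.
        rng = np.random.default_rng(1)
        x = rng.integers(0, 20, size=100)
        y = rng.integers(0, 20, size=100)
        print(mutual_info_score(x, y), adjusted_for_chance(x, y))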
  • Item
    Recommendation systems for travel destination and departure time
    Xue, Yuan (2015)
    People travel on a daily basis to various local destinations such as the office, home, restaurants, appointment venues, and sightseeing spots. A positive and efficient daily-travel experience is important to most people. Motivated by this observation, my research strives to provide daily-travel recommendations by solving two optimisation problems: driving destination prediction and departure time recommendation for appointments.

    Our “SubSyn” destination prediction algorithm predicts, in real time, potential destinations for drivers on the road. Its applications include recommending sightseeing places, pushing targeted advertisements, and providing early warnings of road congestion. It employs a Bayesian inference framework and a second-order Markov model to compute a list of high-probability destinations. The key contributions include real-time processing and the ability to predict destinations with a very limited amount of training data. We also look into the problem of privacy protection against such prediction.

    The “iTIME” departure time recommendation system is a smart calendar that reminds users to depart in time to arrive at appointment venues punctually. It also suggests the best transport mode based on users’ travel history and preferences. Currently, it is very inefficient for people to manually and repeatedly check departure times and compare all transport modes using, for instance, Google Maps. The functionalities of iTIME were realised by machine learning algorithms that learn users’ habits, analyse the importance of appointments and the optimal mode of transport, and estimate the start location and travel time. Our field study showed that up to 40% of time can be saved by using iTIME. The system can also be extended easily to provide additional functionalities such as detection of clashing appointments and appointment scheduling, both taking into account the predicted start location and travel time of future appointments.

    Both systems can be categorised as recommender systems (or recommendation systems) that provide insightful suggestions to improve daily-travel experience and efficiency.
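    To make the Bayesian, second-order-Markov idea behind destination prediction concrete, here is a toy Python sketch: each candidate destination is scored by a prior plus transition likelihoods along the observed partial route, and the top-scoring destinations are returned. The class name, the grid-cell trip encoding, and the add-one smoothing are illustrative assumptions, not the SubSyn algorithm itself.

        import math
        from collections import Counter, defaultdict

        class ToyDestinationPredictor:
            def __init__(self):
                self.dest_counts = Counter()        # destination prior counts
                self.trans = defaultdict(Counter)   # (prev, cur, dest) -> next-cell counts

            def fit(self, trips):
                # trips: sequences of grid-cell ids; the last cell is the destination
                for trip in trips:
                    dest = trip[-1]
                    self.dest_counts[dest] += 1
                    for prev, cur, nxt in zip(trip, trip[1:], trip[2:]):
                        self.trans[(prev, cur, dest)][nxt] += 1

            def predict(self, partial, top_k=3):
                # partial: the grid cells observed so far on the current trip
                total = sum(self.dest_counts.values())
                scores = {}
                for dest, cnt in self.dest_counts.items():
                    log_p = math.log(cnt / total)   # log prior P(dest)
                    for prev, cur, nxt in zip(partial, partial[1:], partial[2:]):
                        c = self.trans[(prev, cur, dest)]
                        # crude add-one smoothing of the transition likelihood
                        log_p += math.log((c[nxt] + 1) / (sum(c.values()) + 2))
                    scores[dest] = log_p
                return sorted(scores, key=scores.get, reverse=True)[:top_k]

        predictor = ToyDestinationPredictor()
        predictor.fit([["a", "b", "c", "work"], ["a", "b", "d", "gym"], ["a", "b", "c", "work"]])
        print(predictor.predict(["a", "b", "c"]))   # ranks "work" above "gym"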
  • Item
    Scalable approaches for analysing high density single nucleotide polymorphism array data
    Wong, Gerard Kum Peng (2012)
    Prior to making inferences from the raw data produced by high density single nucleotide polymorphism (SNP) microarrays, several challenges need to be addressed. First, it is important to limit the impact of noise on microarray measurements while maintaining data integrity; an unexplored aspect of noise is the extent of probeset sequence identity in SNP microarrays. Second, microarray-based datasets often have at least two orders of magnitude more probesets than the number of samples they describe, which poses a challenge for traditional statistical tests used in this context. Third, the number of features in each dataset is large even when sample sizes are small, so computationally efficient approaches are required to analyse these datasets. Finally, as the resolution of SNP arrays improves, there is a need to exploit that improvement to identify finer-scaled mutations in the human genome.

    Most existing approaches deal with these challenges at an individual-sample level and do not look for consensus change across the population to identify sites of DNA mutation. Other approaches artificially smooth or segment the data to obtain uniform segments of copy number change, losing possible fine-scaled copy number changes in the process. Others rely on computationally expensive methods that do not scale well with array resolution and sample size.

    Our first contribution is a comprehensive survey of the sequence identity of all probesets for all variants of the Affymetrix Genome-Wide Human SNP array. This survey assesses the target uniqueness of every probeset and provides a basis for deriving a set of gold-standard labels of copy number change between genders. The derived gold-standard labels serve as a benchmark for assessing the performance of algorithms that detect recurrent copy number change; this benchmark is used in the evaluation of our second and third contributions.

    Our second contribution is a statistical approach called Detecting Recurrent Copy Number Changes Using Rank Order Statistics (DRECS), designed to identify regions of consensus copy number change across multiple samples in SNP array datasets. Through the use of rank-based statistics, DRECS draws on the statistical power of multiple samples to identify fine-scaled copy number changes, down to the width of a single probe, in a computationally efficient way. Our third contribution, the Sum of Ranks Exact Test (SoRET), is a non-parametric extension of DRECS. SoRET addresses SNP datasets with small sample sizes and makes no assumptions about the distribution from which the data was sampled. Its performance in terms of Type I and Type II errors is superior to that of competing parametric and non-parametric statistical tests at small sample sizes.

    Our fourth contribution is a feature set reduction approach called FSR. FSR enables existing filter-based feature selection approaches to handle high-dimensional microarray-type datasets by pruning irrelevant and redundant features. A novel scoring measure is developed to assess the strength of each feature in terms of sample class discrimination, and FSR uses measures of entropy to efficiently gauge the contribution of higher-order feature patterns while avoiding a combinatorial explosion in assessing the utility of features. In our tested datasets, classifiers trained on features selected from FSR-reduced feature sets showed notably better predictive accuracy than classifiers trained on features selected from complete feature sets.
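    The rank-based consensus idea this abstract attributes to DRECS can be illustrated with a short Python sketch: rank probes within each sample, sum the ranks across samples, and flag probes whose rank sums are extreme. The function names, quantile cut-offs, and simulated data are illustrative assumptions, not the thesis's exact statistics or significance procedure.

        import numpy as np
        from scipy.stats import rankdata

        def consensus_rank_sums(log_ratios):
            # log_ratios: (n_samples, n_probes) array of per-probe intensity log-ratios
            ranks = np.vstack([rankdata(sample) for sample in log_ratios])
            return ranks.sum(axis=0)        # very large/small sums suggest consensus change

        def flag_probes(rank_sums, lower_q=0.01, upper_q=0.99):
            # flag probes whose rank sums fall in the extreme tails
            lo, hi = np.quantile(rank_sums, [lower_q, upper_q])
            return (rank_sums <= lo) | (rank_sums >= hi)

        rng = np.random.default_rng(0)
        data = rng.normal(size=(10, 1000))   # 10 samples, 1000 probes, no real signal
        data[:, 400:405] += 2.0              # inject a consensus gain at probes 400-404
        flags = flag_probes(consensus_rank_sums(data))
        print(np.flatnonzero(flags)[:10])    # the injected probes should appear among the flags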