Computing and Information Systems - Theses

Search Results

Now showing 1 - 10 of 13
  • Item
    Machine learning for feedback in massive open online courses
    HE, JIZHENG ( 2016)
    Massive Open Online Courses (MOOCs) have received widespread attention for their potential to scale higher education, with multiple platforms such as Coursera, edX and Udacity recently appearing. Online courses from elite universities around the world are offered for free, so that anyone with internet access can learn anywhere. Enormous enrolments and diversity of students have been widely observed in MOOCs. Despite their popularity, MOOCs are prevented from reaching their full potential by a number of issues. One of the major problems is the notoriously low completion rate. A number of studies have focused on identifying the factors leading to this problem. One of the factors is the lack of interactivity and support. There is broad agreement in the literature that interaction and communication play an important role in improving student learning. It has been indicated that interaction in MOOCs helps students ease their feelings of isolation and frustration, develop their own knowledge, and improve their learning experience. A natural way of improving interactivity is providing feedback to students on their progress and problems. MOOCs give rise to vast amounts of student engagement data, bringing opportunities to gain insights into student learning and provide feedback. This thesis focuses on applying and designing new machine learning algorithms to assist instructors in providing student feedback. In particular, we investigate three main themes: i) identifying students at risk of not completing courses, as a step towards timely intervention; ii) exploring the suitability of using automatically discovered forum topics as instruments for modelling students' ability; iii) similarity search in heterogeneous information networks. The first theme can help instructors design interventions for at-risk students to improve retention. The second theme is inspired by recent research on measurement of student learning in education research communities. Educators have explored the suitability of using latent, complex patterns of engagement, instead of traditional visible assessment tools (e.g. quizzes and assignments), to measure a hypothesised distinctive and complex skill that promotes learning in MOOCs. This process is often human-intensive and time-consuming. Inspired by this research, together with the importance of MOOC discussion forums for understanding student learning and providing feedback, we investigate whether students' participation across forum discussion topics can indicate their academic ability. The third theme is a generic study of utilising the rich semantic information in heterogeneous information networks to help find similar objects. MOOCs contain diverse and complex student engagement data, which is a typical example of a heterogeneous information network, and so could benefit from this study. We make the following contributions towards solving these problems. Firstly, we propose transfer learning algorithms based on regularised logistic regression, to identify, on a weekly basis, students who are at risk of not completing courses. The predicted probabilities, being well calibrated and smoothed, can be used not only to identify at-risk students but also to inform subsequent interventions. We envision an intervention that presents the probability of success/failure to borderline students, with the hypothesis that they can be motivated by being classified as "nearly there". 
Secondly, we combine topic models with measurement models to discover topics from students' online forum postings. The topics are constrained to fit measurement models, providing statistical evidence that they can serve as instruments for measuring student ability. In particular, we focus on two measurement models, the Guttman scale and the Rasch model. To the best of our knowledge, this is the first study to explore the suitability of using topics discovered from MOOC forum content as instruments for measuring student ability by combining topic models with psychometric measurement models in this way. Furthermore, these scaled topics imply a range of difficulty levels, which can be useful for monitoring the health of a course, refining curricula and student assessment, and providing personalised feedback based on student ability levels and topic difficulty levels. Thirdly, we extend an existing meta path-based similarity measure by incorporating transitive similarity and temporal dynamics in heterogeneous information networks, and evaluate it on the DBLP bibliographic network. The proposed similarity measure could be applied in MOOC settings to find similar students or threads, or to recommend threads in MOOC forums, by modelling student interactions in forums as a heterogeneous information network.
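    A minimal sketch of the first contribution's core idea follows, assuming hypothetical engagement features and data: an L2-regularised logistic regression is trained on a previous course offering and produces completion probabilities for the current students. The thesis's transfer learning and probability calibration/smoothing machinery is not reproduced here.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical weekly engagement features per student:
        # [lectures watched, quiz attempts, forum posts, active days]
        X_prev = np.array([[10, 3, 5, 6], [2, 0, 0, 1], [7, 2, 1, 4], [1, 1, 0, 2]])
        y_prev = np.array([1, 0, 1, 0])   # 1 = completed the course, 0 = did not

        # L2-regularised logistic regression trained on an earlier offering.
        clf = LogisticRegression(penalty="l2", C=1.0).fit(X_prev, y_prev)

        # Completion probabilities for this week's students; low values flag
        # at-risk students as candidates for a timely intervention.
        X_now = np.array([[3, 1, 0, 2], [9, 3, 4, 5]])
        print(clf.predict_proba(X_now)[:, 1])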
  • Item
    Cluster validation and discovery of multiple clusterings
    Lei, Yang ( 2016)
    Cluster analysis is an important unsupervised learning process in data analysis. It aims to group data objects into clusters, so that objects in the same group are more similar to each other and objects in different groups are more dissimilar. There are many open challenges in this area. In this thesis, we focus on two: discovery of multiple clusterings and cluster validation. Many clustering methods focus on discovering one single ‘best’ solution from the data. However, data can be multi-faceted in nature. Particularly when datasets are large and complex, there may be several useful clusterings in the data. In addition, users may be seeking different perspectives on the same dataset, requiring multiple clustering solutions. Multiple clustering analysis has attracted considerable attention in recent years and aims to discover multiple reasonable and distinctive clustering solutions from the data. Many methods have been proposed on this topic, and one popular technique is meta-clustering. Meta-clustering explores multiple reasonable and distinctive clusterings by analyzing a large set of base clusterings. However, there may exist poor-quality and redundant base clusterings, which affect the generation of high-quality and diverse clustering views. In addition, the generated clustering views may not all be relevant, and checking all the returned solutions is time- and energy-consuming for users. To tackle these problems, we propose a filtering method and a ranking method to achieve higher-quality and more distinctive clustering solutions. Cluster validation refers to the procedure of evaluating the quality of clusterings, which is critical for clustering applications. Cluster validity indices (CVIs) are often used to quantify the quality of clusterings. They can be generally classified into two categories, external measures and internal measures, which are distinguished by whether or not external information is used during the validation procedure. In this thesis, we focus on external cluster validity indices. There are many open challenges in this area; we focus on two of them: (a) CVIs for fuzzy clusterings and (b) bias issues for CVIs. External CVIs are often used to quantify the quality of a clustering by comparing it against the ground truth. Most external CVIs are designed for crisp clusterings (where each data object belongs to exactly one cluster). How to evaluate the quality of soft clusterings (where a data object can belong to more than one cluster) is a challenging problem. One common way to achieve this is by hardening a soft clustering to a crisp clustering and then evaluating it with a crisp CVI. However, hardening may cause information loss. To address this problem, we generalize a class of popular information-theoretic crisp external CVIs to directly evaluate the quality of soft clusterings, without the need for a hardening step. There is an implicit assumption when using external CVIs to evaluate the quality of a clustering, namely that they work correctly. If this assumption does not hold, misleading results might occur. Thus, identifying and understanding the bias behaviors of external CVIs is crucial. Along these lines, we identify novel bias behaviors of external CVIs and analyze the types of bias both theoretically and empirically.
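    To picture evaluating a soft clustering without a hardening step, the sketch below computes a normalised mutual information directly from membership matrices via a "soft" contingency table. This is one plausible construction under that idea, assumed for illustration, and not necessarily the generalisation developed in the thesis.

        import numpy as np

        def soft_nmi(U, V, eps=1e-12):
            """Illustrative NMI between two soft clusterings.

            U: (n, k1) membership matrix with rows summing to 1; V: (n, k2) likewise.
            Uses the soft contingency table P = U^T V / n; for crisp one-hot
            memberships this reduces to the usual contingency-table NMI.
            """
            n = U.shape[0]
            P = U.T @ V / n                      # joint distribution over cluster pairs
            pu, pv = P.sum(axis=1), P.sum(axis=0)
            mi = np.sum(P * np.log((P + eps) / (np.outer(pu, pv) + eps)))
            hu = -np.sum(pu * np.log(pu + eps))
            hv = -np.sum(pv * np.log(pv + eps))
            return mi / max(np.sqrt(hu * hv), eps)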
  • Item
    Volatility homogenisation and machine learning for time series forecasting
    Kowalewski, Adam Waldemar ( 2016)
    Volatility homogenisation is a technique for observing a process at regular points in space rather than in time: in other words, we are only interested in the moments when the process moves by a certain quantum. The intuition and empirical evidence behind this are that smaller movements are mostly noise and can be ignored, while larger movements carry the information from the underlying process. In this vein, we have derived theoretical results showing that volatility homogenisation can be used to estimate the drift and volatility of theoretical processes, and we verify these results by simulation. This demonstrates the ability of a “homogenised” process to retain salient information about the underlying process. Volatility homogenisation is then coupled, as a preprocessing step, with various machine learning techniques, which yields greater forecasting accuracy than when the machine learning techniques are used without volatility homogenisation preprocessing. In addition, we develop volatility homogenisation kernels for kernel-based machine learning techniques such as support vector machines, relevance vector machines and Gaussian processes. The volatility homogenisation kernel causes a kernel-based machine learning technique to utilise volatility homogenisation internally and, with it, to obtain better forecasts of the direction of a financial time series. In order to create and use the volatility homogenisation kernel, we have developed a solution to the problem of a kernel taking inputs of differing dimension while still maintaining, for a given set of parameters, a convex optimisation problem for techniques such as support vector machines. Furthermore, we have demonstrated the efficacy of volatility homogenisation as a way of investing successfully using a Kelly criterion strategy. The strategy makes use of the information inherent in a support vector machine model with a volatility homogenisation kernel to calculate the necessary parameters for the Kelly betting strategy. We also develop strategies which select additional features for the support vector machine through a nearest neighbour strategy using various measures of association. Overall, volatility homogenisation is a robust strategy for the decomposition of a process which allows various machine learning techniques to discern the main driving process inherent in a financial time series, leading to better forecasts and investment strategies.
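    The homogenisation step itself can be sketched in a few lines, under the assumption that it amounts to retaining a point only when the series has moved by at least a chosen quantum since the last retained point; the threshold rule and parameter below are illustrative, and the estimators and kernels built on top of it in the thesis are not shown.

        import numpy as np

        def homogenise(prices, delta):
            """Keep only the indices where the series has moved by at least
            `delta` since the last retained point (volatility homogenisation sketch)."""
            kept = [0]
            last = prices[0]
            for i in range(1, len(prices)):
                if abs(prices[i] - last) >= delta:
                    kept.append(i)
                    last = prices[i]
            return np.asarray(kept)

        prices = np.array([100.0, 100.2, 99.9, 101.1, 101.0, 102.4, 101.3])
        print(homogenise(prices, delta=1.0))   # indices of the homogenised series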
  • Item
    Design and adjustment of dependency measures
    Romano, Simone ( 2015)
    Dependency measures are fundamental for a number of important applications in data mining and machine learning. They are used ubiquitously: for feature selection, for clustering comparison and validation, as splitting criteria in random forests, and to infer biological networks, to list a few. More generally, there are three important applications of dependency measures: detection, quantification, and ranking of dependencies. Dependency measures are estimated on finite data sets, and because of this the tasks above become challenging. This thesis proposes a series of contributions to improve performance on each of these three goals. When differentiating between strong and weak relationships using information-theoretic measures, the variance plays an important role: the higher the variance, the lower the chance of correctly ranking the relationships. In this thesis, we discuss the design of a dependency measure based on the normalized mutual information whose estimation relies on many random discretization grids. This approach allows us to reduce the estimation variance. We show that a small estimation variance for the grid estimator of mutual information is beneficial for achieving higher power when the task is detecting dependencies between variables and when ranking different noisy dependencies. Dependency measure estimates can be high by chance when the sample size is small, e.g. because of missing values, or when the dependency is estimated between categorical variables with many categories. These biases cause problems when the dependency must have an interpretable quantification and when ranking dependencies for feature selection. In this thesis, we formalize a framework for adjusting dependency measures in order to correct for these biases. We apply our adjustments to existing dependency measures between variables and show how to achieve better interpretability in quantification. For example, when a dependency measure is used to quantify the amount of noise on functional dependencies between variables, we experimentally demonstrate that adjusted measures have a more interpretable range of variation. Moreover, we demonstrate that our approach is also effective for ranking attributes during the splitting procedure in random forests, where a dependency measure between categorical variables is employed. Finally, we apply our framework of adjustments to dependency measures between clusterings. In this scenario, we are able to compute our adjustments analytically. We propose a number of adjusted clustering comparison measures which reduce to well-known adjusted measures as special cases. This allows us to propose guidelines for the best applications of our measures, as well as for existing ones for which guidelines are missing in the literature, e.g. the Adjusted Rand Index (ARI).
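    The adjustment framework can be pictured as (measure − expected measure under a null) / (maximum − expected measure). The sketch below estimates the null expectation with random permutations and uses mutual information between categorical variables; the Monte Carlo null and the upper bound chosen here are illustrative assumptions, whereas the thesis derives adjustments analytically for several measures.

        import numpy as np
        from sklearn.metrics import mutual_info_score

        def adjusted_for_chance(x, y, measure=mutual_info_score, n_perm=200, seed=0):
            """Adjust a dependency measure for chance:
            (measure - E0[measure]) / (max - E0[measure]), with E0 estimated
            under a permutation null and max taken as measure(x, x), which
            upper-bounds mutual information."""
            rng = np.random.default_rng(seed)
            m = measure(x, y)
            null = np.mean([measure(x, rng.permutation(y)) for _ in range(n_perm)])
            upper = measure(x, x)
            return (m - null) / max(upper - null, 1e-12)

        x = [0, 0, 1, 1, 2, 2, 0, 1]
        y = [0, 0, 1, 1, 2, 2, 1, 0]
        print(adjusted_for_chance(x, y))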
  • Item
    Similarity analysis with advanced relationships on big data
    Huang, Jin ( 2015)
    Similarity analytic techniques such as distance-based joins and regularized learning models are critical tools employed in numerous data mining and machine learning tasks. We focus on two typical such techniques in the context of large-scale data and distributed clusters. Advanced distance metrics such as the Earth Mover's Distance (EMD) are usually employed to capture the similarity between data dimensions. The high computational cost of the EMD calls for a distributed solution, yet it is difficult to achieve a balanced workload given the skewed distribution of the EMDs. We propose efficient bounding techniques and effective workload scheduling strategies on the Hadoop platform to design a scalable solution, named HEADS-Join. We investigate both range joins and top-k joins, and explore different computation paradigms including MapReduce, BSP, and Spark. We conduct comprehensive experiments and confirm that the proposed techniques achieve an order of magnitude speedup over the state-of-the-art MapReduce join algorithms. The hypergraph model has been shown to be highly effective in a wide range of applications where high-order relationships are of interest. When processing a large-scale hypergraph, the straightforward approach is to convert it to a graph and reuse distributed graph frameworks. However, such an approach significantly increases the problem size, incurs excessive replicas due to partitioning, and makes it extremely difficult to achieve a balanced workload. We propose a novel scalable framework, named HyperX, which operates directly on a distributed hypergraph representation and minimizes the number of replicas while still maintaining a good workload balance among the distributed machines. We also closely investigate the optimization problem of partitioning a hypergraph for distributed computation. With extensive experiments, we confirm that HyperX achieves an order of magnitude improvement over the graph-conversion approach in terms of execution time, network communication, and memory consumption.
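    A single-machine filter-and-refine sketch of an EMD range self-join is given below: a cheap centroid-difference bound, which lower-bounds the EMD for one-dimensional histograms, prunes pairs before the exact distance is computed. The bound and the serial double loop are simplifying assumptions; HEADS-Join's bounding techniques and its MapReduce/BSP/Spark scheduling are not reproduced.

        import numpy as np
        from scipy.stats import wasserstein_distance

        def emd_range_join(hists, threshold):
            """hists: (n, m) normalised histograms over bins 0..m-1.
            Returns pairs (i, j, emd) with emd <= threshold, pruning with the
            centroid lower bound |mean_i - mean_j| <= EMD(i, j)."""
            bins = np.arange(hists.shape[1])
            means = hists @ bins                       # centroid of each histogram
            results = []
            for i in range(len(hists)):
                for j in range(i + 1, len(hists)):
                    if abs(means[i] - means[j]) > threshold:
                        continue                       # pruned without computing EMD
                    d = wasserstein_distance(bins, bins, hists[i], hists[j])
                    if d <= threshold:
                        results.append((i, j, d))
            return results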
  • Item
    Recommendation systems for travel destination and departure time
    Xue, Yuan ( 2015)
    People travel on a daily basis to various local destinations such as the office, home, restaurants, appointment venues, and sightseeing spots. A positive and efficient daily-travel experience is important to most people. With this observation, my research strives to provide daily-travel-related recommendations by solving two optimisation problems: driving-destination prediction and departure-time recommendation for appointments. Our “SubSyn” destination prediction algorithm predicts potential destinations in real time for drivers on the road. Its applications include recommending sightseeing places, pushing targeted advertisements, and providing early warnings of road congestion. It employs the Bayesian inference framework and a second-order Markov model to compute a list of high-probability destinations. The key contributions include real-time processing and the ability to predict destinations with a very limited amount of training data. We also look into the problem of privacy protection against such prediction. The “iTIME” departure-time recommendation system is a smart calendar that reminds users to depart in order to arrive at appointment venues on time. It also suggests the best transport mode based on users’ travel history and preferences. Currently, it is very inefficient for people to manually and repeatedly check the departure time and compare all transport modes using, for instance, Google Maps. The functionalities of iTIME are realised by machine learning algorithms that learn users’ habits, analyse the importance of appointments and the optimal mode of transport, and estimate the start location and travel time. Our field study showed that we can save up to 40% of time by using iTIME. The system can also be extended easily to provide additional functionalities such as detection of clashing appointments and appointment scheduling, both taking into account the predicted start location and travel time of future appointments. Both systems can be categorised as recommender systems (or recommendation systems) that provide insightful suggestions in order to improve daily-travel experience and efficiency.
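    A much-simplified sketch of destination prediction from a partial trip is shown below: historical trips over grid cells are summarised by counting, for each pair of consecutive cells, the destinations of trips passing through that pair, and a new partial trip is matched on its last two cells. The class and counting scheme are illustrative assumptions; SubSyn itself performs Bayesian inference over a second-order Markov model of the road network.

        from collections import defaultdict, Counter

        class DestinationPredictor:
            """Toy destination predictor keyed on the last two grid cells of a trip."""
            def __init__(self):
                self.counts = defaultdict(Counter)   # (prev, curr) -> destination counts

            def train(self, trips):
                for trip in trips:                   # trip: list of grid-cell ids
                    dest = trip[-1]
                    for prev, curr in zip(trip, trip[1:]):
                        self.counts[(prev, curr)][dest] += 1

            def predict(self, partial_trip, k=3):
                state = tuple(partial_trip[-2:])
                dest_counts = self.counts.get(state, Counter())
                total = sum(dest_counts.values()) or 1
                return [(d, c / total) for d, c in dest_counts.most_common(k)]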
  • Item
    Generalized language identification
    LUI, MARCO ( 2014)
    Language identification is the task of determining the natural language that a document, or part thereof, is written in. The central theme of this thesis is generalized language identification: eliminating the assumptions that limit the applicability of language identification techniques to specific settings that may not be representative of real-world use cases. Research to date has treated language identification as a supervised machine learning problem, and in this thesis I argue that such a characterization is inadequate. I show how standard document representations do not take into account the variation in a language between different sources of text, and develop a representation that is robust to such variation. I also develop a method that allows for language identification in multilingual documents, i.e. documents that contain text in more than one language. Finally, I investigate the robustness of existing off-the-shelf language identification methods on a novel and challenging domain.
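    For context, the conventional supervised setting that the thesis starts from can be sketched as a character n-gram classifier, as below with a multinomial naive Bayes model over character 1-4-grams; the tiny training set is a placeholder, and the cross-domain feature selection and multilingual-document methods developed in the thesis are not shown.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Placeholder training documents; a robust identifier would be trained on
        # text from many different sources per language.
        docs = ["the cat sat on the mat", "der Hund schläft im Garten",
                "le chat dort sur le tapis", "el gato duerme en la alfombra"]
        labels = ["en", "de", "fr", "es"]

        model = make_pipeline(
            CountVectorizer(analyzer="char", ngram_range=(1, 4)),  # character n-grams
            MultinomialNB())
        model.fit(docs, labels)
        print(model.predict(["the dog sleeps in the garden"]))     # most likely ['en']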
  • Item
    Computational substructure querying and topology prediction of the beta-sheet
    Ho, Hui Kian ( 2014)
    Studying the three-dimensional structure of proteins is essential to understanding their function and, ultimately, the dysfunction that causes disease. The limitations of experimental protein structure determination present a need for computational approaches to protein structure prediction and analysis. The beta-sheet is a commonly occurring protein substructure that is important to many biological processes and is often implicated in neurological disorders. Targeted experimental studies of beta-sheets are especially difficult due to their general insolubility in isolation. This thesis presents a series of contributions to the computational analysis and prediction of beta-sheet structure, which are useful for knowledge discovery and for directing more detailed experimental work. Approaches for predicting the simplest type of beta-sheet, the beta-hairpin, are first described. Improvements over existing methods are obtained by using the most important beta-hairpin features identified through systematic feature selection. An examination of the most important features provides a physicochemical basis for their usefulness in beta-hairpin prediction. New methods for the more general problem of beta-sheet topology prediction are then described. Unlike recent methods, ours are independent of multiple sequence alignments (MSAs) and therefore do not rely on the coverage of reference sequence databases or on sequence homology. Our evaluations showed that our methods do not exhibit the same reductions in performance as a state-of-the-art method for sequences with low-quality MSAs. A new method for the indexing and querying of beta-sheet substructures, called BetaSearch, is described next. BetaSearch exploits the inherent planar constraints of beta-sheet structure to achieve significant speedups over existing graph-indexing and conventional 3D structure search methods. Case studies are presented that demonstrate the potential of this method for the discovery of biologically interesting beta-sheet substructures. Finally, a purpose-built open-source toolkit for generating 2D protein maps is described, which is useful for the coarse-grained analysis and visualisation of 3D protein structures. It can also be used in existing knowledge discovery pipelines for automated structural analysis and prediction tasks, as a standalone application, or imported into existing experimental applications.
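    The systematic feature selection used for beta-hairpin prediction can be illustrated, loosely, with cross-validated recursive feature elimination over a classifier, as sketched below. The random feature matrix is a placeholder and the choice of estimator is an assumption; the thesis's actual features are sequence-derived and physicochemical, and its models differ.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import RFECV

        # Placeholder data: each row is a candidate hairpin described by 40 features;
        # y indicates whether it actually forms a beta-hairpin.
        rng = np.random.default_rng(0)
        X = rng.random((200, 40))
        y = rng.integers(0, 2, size=200)

        selector = RFECV(RandomForestClassifier(n_estimators=50, random_state=0), cv=5)
        selector.fit(X, y)
        print("features kept:", int(selector.support_.sum()))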
  • Item
    Statistical modeling of multiword expressions
    Su, Kim Nam ( 2008)
    In natural languages, words can occur in single units called simplex words or in a group of simplex words that function as a single unit, called multiword expressions (MWEs). Although MWEs are similar to simplex words in their syntax and semantics, they pose their own sets of challenges (Sag et al. 2002). MWEs are arguably one of the biggest roadblocks in computational linguistics due to the bewildering range of syntactic, semantic, pragmatic and statistical idiomaticity they are associated with, and their high productivity. In addition, the large numbers in which they occur demand specialized handling. Moreover, dealing with MWEs has a broad range of applications, from syntactic disambiguation to semantic analysis in natural language processing (NLP) (Wacholder and Song 2003; Piao et al. 2003; Baldwin et al. 2004; Venkatapathy and Joshi 2006). Our goals in this research are: to use computational techniques to shed light on the underlying linguistic processes giving rise to MWEs across constructions and languages; to generalize existing techniques by abstracting away from individual MWE types; and finally to exemplify the utility of MWE interpretation within general NLP tasks. In this thesis, we target English MWEs due to resource availability. In particular, we focus on noun compounds (NCs) and verb-particle constructions (VPCs) due to their high productivity and frequency. Challenges in processing noun compounds are: (1) interpreting the semantic relation (SR) that represents the underlying connection between the head noun and modifier(s); (2) resolving syntactic ambiguity in NCs comprising three or more terms; and (3) analyzing the impact of word sense on noun compound interpretation. Our basic approach to interpreting NCs relies on the semantic similarity of the NC components using firstly a nearest-neighbor method (Chapter 5), then verb semantics based on the observation that it is often an underlying verb that relates the nouns in NCs (Chapter 6), and finally semantic variation within NC sense collocations, in combination with bootstrapping (Chapter 7). Challenges in dealing with verb-particle constructions are: (1) identifying VPCs in raw text data (Chapter 8); and (2) modeling the semantic compositionality of VPCs (Chapter 5). We place particular focus on identifying VPCs in context, and measuring the compositionality of unseen VPCs in order to predict their meaning. Our primary approach to the identification task is to adapt localized context information derived from linguistic features of VPCs to distinguish between VPCs and simple verb-PP combinations. To measure the compositionality of VPCs, we use semantic similarity among VPCs by testing the semantic contribution of each component. Finally, we conclude the thesis with a chapter-by-chapter summary and outline of the findings of our work, suggestions of potential NLP applications, and a presentation of further research directions (Chapter 9).
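    One way to picture measuring the compositionality of a verb-particle construction is to compare its distributional vector with a combination of its components' vectors, as in the sketch below. The vectors, the additive composition, and the weight `alpha` are illustrative assumptions rather than the models used in the thesis.

        import numpy as np

        def compositionality(vpc_vec, verb_vec, particle_vec, alpha=0.7):
            """Cosine similarity between a VPC's distributional vector and a weighted
            sum of its components' vectors; higher scores suggest a more compositional
            expression (e.g. 'carry up' versus the idiomatic 'give up')."""
            composed = alpha * verb_vec + (1 - alpha) * particle_vec
            cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
            return cos(vpc_vec, composed)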
  • Item
    Structured classification for multilingual natural language processing
    Blunsom, Philip ( 2007-06)
    This thesis investigates the application of structured sequence classification models to multilingual natural language processing (NLP). Many tasks tackled by NLP can be framed as classification, where we seek to assign a label to a particular piece of text, be it a word, sentence or document. Yet often the labels we would like to assign exhibit complex internal structure, such as labelling a sentence with its parse tree, and there may be an exponential number of them to choose from. Structured classification seeks to exploit the structure of the labels in order to allow both generalisation across labels which differ by only a small amount, and tractable search over all possible labels. In this thesis we focus on the application of conditional random field (CRF) models (Lafferty et al., 2001). These models assign an undirected graphical structure to the labels of the classification task and leverage dynamic programming algorithms to efficiently identify the optimal label for a given input. We develop a range of models for two multilingual NLP applications: word alignment for statistical machine translation (SMT), and multilingual supertagging for highly lexicalised grammars.
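    The dynamic-programming decoding that such models rely on can be sketched as a small Viterbi routine over per-position emission scores and label-transition scores, as below. The score matrices are placeholders; a real CRF would derive them from weighted feature functions learned as in Lafferty et al. (2001).

        import numpy as np

        def viterbi(emissions, transitions):
            """emissions: (T, K) score of each of K labels at each of T positions;
            transitions: (K, K) score of moving from label i to label j.
            Returns the highest-scoring label sequence."""
            T, K = emissions.shape
            score = np.zeros((T, K))
            back = np.zeros((T, K), dtype=int)
            score[0] = emissions[0]
            for t in range(1, T):
                cand = score[t - 1][:, None] + transitions + emissions[t][None, :]
                back[t] = cand.argmax(axis=0)
                score[t] = cand.max(axis=0)
            path = [int(score[-1].argmax())]
            for t in range(T - 1, 0, -1):
                path.append(int(back[t][path[-1]]))
            return path[::-1]

        emissions = np.log([[0.7, 0.3], [0.4, 0.6], [0.1, 0.9]])
        transitions = np.log([[0.8, 0.2], [0.3, 0.7]])
        print(viterbi(emissions, transitions))   # -> [1, 1, 1]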