Computing and Information Systems - Theses

Search Results

Now showing 1 - 10 of 10
  • Item
    On the effectiveness of removing location information from trajectory data for preserving location privacy
    Hossain, Amina ( 2017)
    The ubiquity of GPS-enabled smartphones with Internet connectivity has resulted in the widespread development of location-based services (LBSs). People use these services to obtain useful advice for their daily activities. For example, a user can open a navigation app to find a route that results in the shortest driving time from the current location to a destination. Nevertheless, people have to reveal location information to the LBS providers to leverage such services. Location information is sensitive since it can reveal an individual's habits. LBS providers are aware of this and take measures to protect user privacy. One simple and well-established approach is to remove GPS data from user data, working under the assumption that this will lead to a high degree of privacy. In this thesis, we challenge the notion that removing location information while retaining other features leads to a high degree of location privacy. We find that it is possible to reconstruct the original routes by analyzing just the turn instructions provided to a user by a navigation service. We evaluate our approach using both synthetic and real road network data and demonstrate the effectiveness of this new attack in a range of realistic scenarios.
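    The core of the attack, matching a sequence of turn instructions against a road network, can be pictured with a small sketch. The toy graph, coordinates and turn-angle threshold below are hypothetical illustrations, not the data or the exact method used in the thesis.

      # Hypothetical sketch: enumerate road-network paths consistent with a
      # sequence of turn instructions ("left", "right", "straight").
      import math

      # Toy road graph: node -> (x, y) coordinates, plus directed edges.
      coords = {"A": (0, 0), "B": (0, 1), "C": (1, 1), "D": (0, 2), "E": (-1, 1)}
      edges = {"A": ["B"], "B": ["C", "D", "E"], "C": [], "D": [], "E": []}

      def bearing(u, v):
          (x1, y1), (x2, y2) = coords[u], coords[v]
          return math.degrees(math.atan2(x2 - x1, y2 - y1))  # 0 deg = north

      def turn(prev_bearing, next_bearing):
          delta = (next_bearing - prev_bearing + 180) % 360 - 180
          if delta > 30:
              return "right"
          if delta < -30:
              return "left"
          return "straight"

      def matching_paths(start, first_hop, instructions):
          """DFS for paths whose successive turns reproduce the instructions."""
          results = []
          def dfs(path, remaining):
              if not remaining:
                  results.append(path)
                  return
              u, v = path[-2], path[-1]
              for w in edges[v]:
                  if turn(bearing(u, v), bearing(v, w)) == remaining[0]:
                      dfs(path + [w], remaining[1:])
          dfs([start, first_hop], instructions)
          return results

      print(matching_paths("A", "B", ["left"]))   # [['A', 'B', 'E']]
      print(matching_paths("A", "B", ["right"]))  # [['A', 'B', 'C']]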
  • Item
    Supervised algorithms for complex relation extraction
    Khirbat, Gitansh ( 2017)
    Binary relation extraction is an essential component of information extraction systems, wherein the aim is to extract meaningful relations that might exist between a pair of entities within a sentence. Binary relation extraction systems have witnessed significant improvement over the past three decades, ranging from rule-based systems to statistical natural language techniques including supervised, semi-supervised and unsupervised machine learning approaches. Modern question answering and summarization systems have motivated the need for extracting complex relations, in which the number of related entities is more than two. Complex relation extraction (CRE) systems are highly domain specific and often rely on traditional binary relation extraction techniques employed in a pipeline fashion, and are thus susceptible to processing-induced error propagation. In this thesis, we investigate and develop approaches to extract complex relations directly from natural language text. In particular, we deviate from the traditional disintegration of complex relations into constituent binary relations and propose using the shortest dependency parse spanning the n related entities as an alternative that facilitates direct CRE. We investigate this approach through a comprehensive study of supervised learning algorithms, with a special focus on training support vector machines, convolutional neural networks and deep learning ensemble algorithms. Research in the domain of CRE is stymied by a paucity of annotated data. To facilitate future exploration, we create two new datasets to evaluate our proposed CRE approaches on a pilot biographical fact extraction task. An evaluation of results on new and standard datasets concludes that using the shortest-path dependency parse in a supervised setting enables direct CRE with improved accuracy, beating current state-of-the-art CRE systems. We further show the application of CRE to achieve state-of-the-art performance for directly extracting events, without the need to disintegrate them into event trigger and event argument extraction processes.
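    The central idea, keeping the minimal part of a dependency parse that connects the n related entities instead of decomposing the relation into binary pairs, can be illustrated with a small sketch. The hand-written parse, the entity choices and the use of networkx are assumptions for illustration only, not the thesis's pipeline.

      # Hypothetical sketch: the subgraph of a dependency parse spanning n
      # related entities, built from pairwise shortest paths between them.
      import networkx as nx

      # Hand-written (token, head) pairs for:
      # "Turing studied mathematics at King's College in Cambridge."
      heads = {
          "Turing": "studied", "mathematics": "studied", "at": "studied",
          "College": "at", "King's": "College", "in": "College", "Cambridge": "in",
      }

      g = nx.Graph()
      for tok, head in heads.items():
          g.add_edge(tok, head)

      entities = ["Turing", "College", "Cambridge"]  # n = 3 related entities
      span = set()
      for i in range(len(entities)):
          for j in range(i + 1, len(entities)):
              span.update(nx.shortest_path(g, entities[i], entities[j]))

      print(sorted(span))
      # Tokens on the spanning structure: ['Cambridge', 'College', 'Turing',
      # 'at', 'in', 'studied'] - candidate input features for a CRE classifier.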
  • Item
    Efficient orthogonal parametrisation of recurrent neural networks using Householder reflections
    Mhammedi, Zakaria ( 2017)
    In machine learning, Recurrent Neural Networks (RNNs) have been used successfully in many applications. They are particularly well suited to problems involving time-series. This is because RNNs process input sequences one element at a time, and thus the chronological order of these elements is taken into account. In many practical prediction tasks involving time-series, long-term time dependencies in the data are crucial. These are the features that RNNs need to learn. However, learning these long-term dependencies in sequences using RNNs is still a major challenge. This is mainly due to the exploding and vanishing gradient problems, which are more prominent the longer the input sequences are. Recent methods have been suggested to solve this problem by constraining the transition matrix to be unitary during training, which ensures that its norm is exactly equal to one. This sets an upper bound on the norm of the back-propagated gradients, preventing them from growing exponentially. However, existing methods either have limited expressiveness or scale poorly with the size of the network when compared with the simple RNN case, especially when using stochastic gradient descent. Our contributions are as follows. We first show that constraining the transition matrix to be unitary is a special case of an orthogonal constraint; therefore, it may not be necessary to work with complex-valued matrices. We then present a new parametrisation of the transition matrix which allows efficient training of an RNN while ensuring that the matrix is always orthogonal. Furthermore, a good trade-off between speed and expressiveness can be chosen by selecting the right number of reflection vectors, the vectors used in the new parametrisation. In particular, when the number of reflection vectors is equal to the size of the hidden layer, the transition matrix is allowed to span the full set of orthogonal matrices. Using our approach, one stochastic gradient step can, in the worst case, be performed in time complexity $\mathcal{O}(T n^2)$, where $T$ and $n$ are the length of the input sequence and the size of the hidden layer respectively. This time complexity is the same as that of the simple RNN. Finally, we test our new parametrisation on problems with long-term dependencies. Our results suggest that the orthogonal constraint on the transition matrix offers benefits similar to those of the unitary constraint.
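    As a concrete illustration of the parametrisation, the sketch below composes k Householder reflections into an orthogonal matrix and applies them to a vector without ever forming the matrix; the dimensions, random reflection vectors and the orthogonality check are illustrative assumptions, not the thesis's training code.

      # Sketch: an orthogonal transition matrix parametrised by k Householder
      # reflections (illustrative, assuming reflection vectors u_1..u_k).
      import numpy as np

      rng = np.random.default_rng(0)
      n, k = 6, 4                      # hidden size, number of reflection vectors
      U = rng.standard_normal((k, n))  # the k reflection vectors

      def apply_reflections(h, U):
          """Apply H_k ... H_1 to a vector h in O(k n) time, without forming W."""
          for u in U:
              h = h - 2.0 * u * (u @ h) / (u @ u)
          return h

      # The implied transition matrix is orthogonal by construction.
      W = np.column_stack([apply_reflections(e, U) for e in np.eye(n)])
      print(np.allclose(W.T @ W, np.eye(n)))  # True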
  • Item
    Low-cost leaving home activity recognition using mobile sensing
    Li, Han ( 2017)
    Leaving home activity recognition (LHAR) is essential in context-aware applications. For example, on a rainy day, a smartphone can remind a user to bring an umbrella when the leaving home activity is recognized. However, research in this field is substantially lacking. Most existing studies require extra hardware, such as sensors installed at home, to help recognize such activities, which limits their applicability. With the ubiquity of mobile sensing techniques, it becomes feasible for a smartphone to sense the ambient environment and a user's current context. In this thesis, we develop a low-cost system that uses mobile sensing for timely recognition of leaving home activities. To the best of our knowledge, we are the first to recognize leaving home activities using only sensors on smartphones. Overall, our system can recognize leaving home activities within 20 seconds after the home door is closed, with a precision of 93.1% and a recall of 100%. Recognizing leaving home activities while leveraging only sensors on smartphones is challenging in two aspects: 1) the diversity of home environments results in inconsistent features, which significantly affects recognition performance; and 2) mobile sensing is restricted by the limited resources on smartphones, e.g., power and computation capability. To overcome these limitations, we first investigate the sensors available on commodity smartphones and find that features extracted from WiFi, barometer, cell tower, magnetic field sensor and accelerometer readings are relatively robust for recognizing leaving home activities. Second, due to the variety of residential environments, we propose a sensor selection algorithm that adaptively selects the most suitable sensors for each home to personalize the training. Both classification performance and sensing cost are taken into consideration, providing the user with an option to trade power consumption for recognition accuracy, and vice versa. Inspired by the observation that a leaving home activity usually involves walking, we activate the power-hungry sensors to start recognition only when a walking event is detected. In this way, we reduce the sensing cost by 76.65%.
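    One way to picture the accuracy-versus-power trade-off behind such a sensor selection step is a simple greedy score, as in the sketch below; the per-sensor gains, costs and the weighting parameter are made-up numbers, not the algorithm or measurements from the thesis.

      # Hypothetical sketch of cost-aware sensor selection: keep sensors whose
      # estimated accuracy gain justifies their power cost.
      accuracy_gain = {"wifi": 0.30, "barometer": 0.20, "cell": 0.15,
                       "magnetic": 0.10, "accelerometer": 0.05}   # made-up numbers
      power_cost = {"wifi": 0.50, "barometer": 0.05, "cell": 0.10,
                    "magnetic": 0.05, "accelerometer": 0.15}      # made-up numbers

      def select_sensors(lam):
          """Larger lam favours battery life over recognition accuracy."""
          scored = {s: accuracy_gain[s] - lam * power_cost[s] for s in accuracy_gain}
          return sorted((s for s, v in scored.items() if v > 0),
                        key=lambda s: scored[s], reverse=True)

      print(select_sensors(lam=0.1))  # accuracy-oriented: most or all sensors kept
      print(select_sensors(lam=1.0))  # power-oriented: cheap, informative sensors only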
  • Item
    Linking news and tweets
    Lin, Xiaojie ( 2016)
    In recent years, the rise of social media such as Twitter has been changing the way people acquire information. Meanwhile, traditional information sources such as news articles remain irreplaceable. This has led to a new branch of study on understanding the relationship between news articles and social media posts and on fusing information from these heterogeneous sources. In this study, we focus on techniques for linking news and relevant tweets. Specifically, given a set of news articles and a set of tweets, we aim to find a subset of relevant tweets for each article in the news set. We propose a supervised linking algorithm that employs multiple features such as time, named entities and event phrases, and which outperforms all the baseline methods in our experiments. Since humans can accurately determine whether a tweet is relevant to a news article, we study two types of crowdsourcing algorithms, namely validation algorithms and amendment algorithms. Validation algorithms put a news-tweet pair in the result set only if the pair is validated by the crowd in some way. We propose three validation algorithms, namely Highest, EP and EP+, all of which effectively collect relevant tweets while maintaining high precision in our experiments. Amendment algorithms first use a machine-based approach to generate an initial set of results, and then improve the result set by asking the crowd informative questions. The amendment algorithms Half, HHalf and LHalf also demonstrate their effectiveness in improving the quality of the result sets in our experiments. We extend our study to social network subgraph matching algorithms, which can be used to i) improve the performance of our linking algorithms, and ii) analyze and extract information from the news-tweet network or other social network graphs. We propose an efficient GPU algorithm, which significantly outperforms the state-of-the-art CPU algorithm in terms of running time in our experiments when the workload is heavy.
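    The kinds of features the linking algorithm relies on, publication-time proximity and named-entity overlap between an article and a tweet, can be sketched as below; the scoring function, weights and example data are illustrative stand-ins for the trained model described in the thesis.

      # Illustrative sketch of news-tweet linking features: time proximity and
      # named-entity overlap, with a simple score in place of a trained model.
      from datetime import datetime

      def jaccard(a, b):
          a, b = set(a), set(b)
          return len(a & b) / len(a | b) if a | b else 0.0

      def link_score(article, tweet, half_life_hours=24.0):
          gap_h = abs((article["time"] - tweet["time"]).total_seconds()) / 3600.0
          time_score = 0.5 ** (gap_h / half_life_hours)   # decays with the time gap
          entity_score = jaccard(article["entities"], tweet["entities"])
          return 0.5 * time_score + 0.5 * entity_score    # made-up weighting

      article = {"time": datetime(2016, 3, 1, 9, 0),
                 "entities": {"NASA", "Mars", "Curiosity"}}
      tweet = {"time": datetime(2016, 3, 1, 15, 30),
               "entities": {"Curiosity", "Mars"}}

      print(round(link_score(article, tweet), 3))  # higher = more likely relevant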
  • Item
    A bit-vector solver based on word-level propagation
    Wang, Wenxi ( 2016)
    Reasoning with bit-vectors arises in a variety of applications in verification and cryptography. Michel and Van Hentenryck have proposed an interesting approach to bit-vector constraint propagation at the word level. Most of the operations are propagated in constant time, assuming the bit-vector fits in a machine word. In contrast, bit-vector SMT solvers usually solve bit-vector problems by bit-blasting, that is, mapping the operations to conjunctive normal form clauses and using SAT technology to solve them. This also means generating intermediate variables, which can be an advantage, as these can be searched on and learnt about. Since each approach has its own advantages, it is important to see whether they can be combined by using a word-level propagation approach with learning. In this dissertation, we describe an approach to bit-vector solving that uses word-level propagation with learning. We provide alternative word-level propagators to those of Michel and Van Hentenryck, and we evaluate the variants of the approach empirically. We also experiment with different approaches to learning and back-jumping in the solver. Based on the insights gained, we propose a portfolio solver using machine learning which can enhance state-of-the-art solvers. We show that, with careful engineering, a word-level propagation based approach can compete with (or complement) bit-blasting.
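    To give a flavour of word-level propagation, the sketch below propagates a single constraint, z = x & y, over domains represented as pairs of bit masks (known ones, possible ones); the rules are written in the general style of word-level propagators and are not the specific propagators developed in the dissertation.

      # Sketch of word-level propagation for the constraint z = x & y.
      # Each domain is (lo, hi): lo = bits known to be 1, hi = bits that may be 1.
      MASK = (1 << 8) - 1  # 8-bit vectors for the example

      def prop_and(x, y, z):
          (lox, hix), (loy, hiy), (loz, hiz) = x, y, z
          loz |= lox & loy                 # both known 1    -> z known 1
          hiz &= hix & hiy                 # either may be 0 -> z may be 0
          lox |= loz                       # z known 1 -> x known 1
          loy |= loz
          hix &= ~(loy & ~hiz) & MASK      # z known 0, y known 1 -> x known 0
          hiy &= ~(lox & ~hiz) & MASK
          for lo, hi in ((lox, hix), (loy, hiy), (loz, hiz)):
              if lo & ~hi:                 # a bit both known 1 and known 0
                  raise ValueError("inconsistent domains")
          return (lox, hix), (loy, hiy), (loz, hiz)

      # x unknown, y = 0b00001111, z = 0b00000001: propagation fixes low bits of x.
      x, y, z = prop_and((0, MASK), (0b00001111, 0b00001111), (0b1, 0b1))
      print([(bin(lo), bin(hi)) for lo, hi in (x, y, z)])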
  • Item
    A suite of greedy methods for set cover computation
    Lim, Ching Lih ( 2015)
    The Set Cover problem is simple to describe: given a collection of sets, each containing a subset of a universe of elements, the challenge is to find a smallest (by cardinality) sub-collection such that the union of the sets in the sub-collection equals the universe. Since this problem is NP-hard, there is no known efficient exact algorithm for non-trivial problem instances. An approximation algorithm is thus employed to construct a nearly optimal solution within reasonable resource bounds. One such algorithm is the greedy approach, which repeatedly adds the set with the greatest number of uncovered elements to the solution until every element in the universe is covered. In this thesis we make several contributions to Set Cover and the greedy approach. First, the approach can be implemented with a priority queue that keeps track of set sizes, so that a set with the most uncovered elements can always be found quickly. However, this implementation is costly in one respect: after a set is added, some or all sets in the queue must be updated to their correct sizes. A variant of the approach, Lazy-Greedy, is therefore proposed to evaluate sets lazily, reducing that cost. We show that, while the worst-case time complexity is worse, on typical input instances the revised implementation is faster than the standard eager greedy implementation, while retaining the same approximation ratio as the greedy method. Second, to process a large instance on secondary storage efficiently, Lazy-Greedy is re-engineered to break an input into a logarithmic number of buckets of related set sizes and to process each bucket lazily. It has two innovations: automated management of a large number of buckets, and a fast search for a particular bucket. Although the algorithm handles a number of large instances efficiently, it runs more slowly than one benchmark disk-friendly algorithm that has a weaker approximation ratio. Finally, we propose two input-processing algorithms for the same pre-processing task, exploiting the fact that some sets in a typical problem instance contain hapax legomena (singleton elements). These sets must appear in every valid solution, including the optimal solution, because the hapax legomena would not be covered otherwise. A partial solution consisting of such sets can be pre-built, with the remaining sets and elements passed to a Set Cover algorithm to complete the solution. The in-memory algorithm, which counts element occurrences, produces complete solutions faster than the direct approach of solving without the pre-processing. The disk-oriented algorithm, which transforms an instance into a list of set-element items, runs more slowly than a semi-external covering algorithm due to the relatively expensive cost of sorting the list.
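    The lazy evaluation idea, refreshing a set's size only when it reaches the top of the priority queue, can be sketched in a few lines; this is a generic illustration of lazy greedy set cover, not the thesis's Lazy-Greedy implementation or its disk-based variants.

      # Minimal sketch of the lazy-greedy idea: keep stale set sizes in a
      # max-heap and only re-count a set when it reaches the top of the heap.
      import heapq

      def lazy_greedy_cover(universe, sets):
          uncovered = set(universe)
          # Heap of (-stale_gain, set_id); gains only ever shrink, so a stale
          # entry over-estimates and it is safe to re-check lazily.
          heap = [(-len(s), i) for i, s in enumerate(sets)]
          heapq.heapify(heap)
          solution = []
          while uncovered and heap:
              neg_gain, i = heapq.heappop(heap)
              gain = len(sets[i] & uncovered)          # refresh the stale count
              if heap and gain < -heap[0][0]:
                  heapq.heappush(heap, (-gain, i))     # still stale: push back
                  continue
              if gain == 0:
                  break
              solution.append(i)
              uncovered -= sets[i]
          return solution

      sets = [{1, 2, 3, 4}, {3, 4, 5, 6}, {5, 6}, {7}]
      print(lazy_greedy_cover(range(1, 8), sets))      # [0, 1, 3]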
  • Item
    Reliable power transmission networks
    Gumucio-Escobar, Rodrigo ( 2015)
    The thesis presents a study of models and algorithms for the expansion of power system networks. The study indicates that the traditional DC model is not appropriate for network expansion, while AC-based heuristics may be far from optimal. The study also shows that a network expansion algorithm based on the LPAC model provides high-quality primal solutions (or configurations with minor violations) on standard benchmarks in reasonable time.
  • Item
    Location proof architectures
    Seneviratne, Janaka ( 2014)
    Upcoming location-based services such as pay-as-you-drive insurance mandate verified locations. To enable such services, Location Proof Architectures (LPAs) have been proposed in the literature to verify or prove a user's location. Specifically, an LPA allows a user (or a device on behalf of its user) to obtain a proof of its presence at a location from a trusted third party. In addition to guarding against cheating users who may claim false locations, another major concern in an LPA is to preserve user location privacy. To achieve this, a user's identity and location data should be maintained separately, in tandem with additional measures that avoid leaking sensitive identity and location data. We identify two types of location proof architectures: 1) sporadic location proofs for specific user locations, and 2) continuous location proofs for user routes. In this thesis, we present two sporadic LPAs. First, we propose an LPA in which a user cannot falsely claim a location; this LPA also preserves user privacy by verifying a user's identity and location independently. Second, we propose an LPA that uses pseudonyms. We present a group pseudonym registration system for the LPA that requires no trusted third party, and show that our approach can achieve a guaranteed degree of privacy in the LPA. This thesis also introduces a framework for continuous LPAs. In a continuous LPA, a verifier receives a sequence of location samples along a user route and assigns a degree of confidence to each possible user route. Specifically, we present a stochastic model which associates a degree of confidence with a user route based on the distribution pattern of the location samples.
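    The basic object an LPA issues can be pictured as a proof binding a pseudonym, a location and a time under the issuer's key, as in the toy sketch below; the HMAC-based token, key handling and field names are illustrative assumptions, not the protocols proposed in the thesis, which additionally separate identity from location and support group pseudonyms.

      # Toy sketch of a location proof token: an issuer binds (pseudonym,
      # location, time) under its secret key; a verifier later checks the tag.
      import hmac, hashlib, json

      ISSUER_KEY = b"issuer-secret-key"   # placeholder key for the sketch

      def issue_proof(pseudonym, lat, lon, timestamp):
          claim = {"who": pseudonym, "lat": lat, "lon": lon, "t": timestamp}
          msg = json.dumps(claim, sort_keys=True).encode()
          tag = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
          return claim, tag

      def verify_proof(claim, tag):
          msg = json.dumps(claim, sort_keys=True).encode()
          expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
          return hmac.compare_digest(expected, tag)

      claim, tag = issue_proof("pseudonym-42", -37.7964, 144.9612, 1400000000)
      print(verify_proof(claim, tag))                    # True
      claim["lat"] = -33.8688                            # tampered location
      print(verify_proof(claim, tag))                    # False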
  • Item
    User-centric cellular trajectory inference with partial knowledge
    Perera, Batugahage Kushani Anuradha ( 2014)
    The uncertainty associated with cellular network trajectories is a major problem for their use in location-based applications such as person tracking. Inferring the real trajectory of a person from highly imprecise cellular trajectory data is a challenging task. GPS trajectories are subject to less uncertainty than cellular network trajectories and are preferred for many location-based applications. However, GPS-based location acquisition has limited applicability in certain contexts due to high power consumption and poor coverage. Cellular network based location acquisition is therefore a good alternative to GPS in such scenarios. Consequently, a cellular trajectory inference method that can handle the uncertainty of cellular trajectories is promising to investigate, so that cellular trajectories can be utilised in location-based applications. In this thesis, our main focus is on user-centric trajectory inference approaches, where the trajectory inference is performed by mobile phone users rather than the mobile network operator. Many existing cellular trajectory inference methods use knowledge about the cellular network, such as the spatial distribution of neighbouring cell towers and signal strength information. However, such full knowledge about the cellular network is confidential to the mobile network operator, and mobile phone users are not guaranteed access to it. These techniques are therefore not applicable to user-centric cellular trajectory inference with partial knowledge about the cellular network, which makes user-centric approaches even more challenging. We propose a cellular trajectory inference method that utilises only a user's connected cell tower location sequence and the corresponding timing information, as this is the only type of knowledge guaranteed to be available to a mobile phone user. We suggest using the user's speed information as background knowledge, as it is easily accessible to the user compared to knowledge about the cellular network. Furthermore, we suggest exploiting the preciseness of the time dimension of cellular trajectories to obtain precise handover times. These precise handover times can be used together with speed information to accurately compute the distance a user has travelled within a cell. We propose a method to infer the straight-line segments of a trajectory using this distance information. The inferred straight-line trajectory segments are later used to estimate other segments of the trajectory. We show theoretically and experimentally that our proposed method achieves higher accuracy than existing cellular trajectory inference methods for cases where the user's trajectory tends to be a combination of straight lines. The intuition behind straight-line inference is that people follow the shortest path to reach a destination, avoiding unnecessary turns, and therefore often prefer a straight-line path. Additional advantages of our proposed inference method include the ability to run locally on mobile devices and to perform trajectory inference within an unfamiliar environment, since no historical trajectory information or pre-training is required.
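    The key quantity the method builds on, the distance travelled while connected to each cell, follows directly from precise handover times and a speed estimate, as in the sketch below; the handover sequence and the speed value are made-up numbers used only to illustrate the computation.

      # Illustrative sketch: distance travelled inside each cell, computed from
      # precise handover times and the user's speed (hypothetical numbers).
      handovers = [          # (time in seconds, cell the phone connects to)
          (0, "cell_A"), (120, "cell_B"), (300, "cell_C"), (420, "cell_D"),
      ]
      speed_mps = 12.0       # assumed travel speed, e.g. from other sensors

      segments = []
      for (t0, cell), (t1, _next_cell) in zip(handovers, handovers[1:]):
          segments.append((cell, speed_mps * (t1 - t0)))

      for cell, metres in segments:
          print(f"{cell}: {metres:.0f} m while connected")
      # Under straight-line travel, these per-cell distances, together with the
      # cell tower sequence, constrain which straight segments the user followed.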