Computing and Information Systems - Theses

Search Results

Now showing 1 - 10 of 18
  • Item
    Modelling human behaviour with BDI agents
    Norling, Emma Jane ( 2009)
    Although the BDI framework was not designed for human modelling applications, it has been used with considerable success in this area. The work presented here examines some of these applications to identify the strengths and weaknesses of the use of BDI-based frameworks for this purpose, and demonstrates how these weaknesses can be addressed while preserving the strengths. The key strength that is identified is the framework's folk-psychological roots, which facilitate the knowledge acquisition and representation process when building models. Unsurprisingly, because the framework was not designed for this purpose, several shortcomings are also identified. These fall into three different classes. Firstly, although the folk-psychological roots mean that the framework captures a human-like reasoning process, it does so at a very abstract level. There are many generic aspects of human behaviour - things that are common to all people across all tasks - which are not captured in the framework. If a modeller wishes to take these things into account in a model, they must explicitly encode them, replicating this effort for every model. To reduce modellers' workload and increase consistency, it is desirable to incorporate such features into the framework. Secondly, although the folk-psychological roots facilitate knowledge acquisition, there is no standardised approach to this process, and without experience it can be very difficult to gather the appropriate knowledge from the subjects to design and build models. And finally, these models must interface with external environments in which they 'exist.' There are often mismatches at the data representation level which hinder this process. This work makes contributions to dealing with each of these problems, drawing largely on the folk-psychological roots that underpin the framework. The major contribution is to present a systematic approach to extending the BDI framework to incorporate further generic aspects of human behaviour and to demonstrate this approach with two different extensions. A further contribution is to present a knowledge acquisition methodology which gives modellers a structured approach to this process. The problems at the agent-environment interface are not straightforward to solve, because sometimes the problem lies in the way that the environment exchanges data with the model. Rather than offering a definitive solution to this problem, the contribution provided here is to highlight the different types of mismatches that may occur, so that modellers may recognise them early and adapt their approach to accommodate them.
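    To make the reasoning cycle concrete, a minimal BDI-style agent loop might look like the sketch below (hypothetical class and plan names, illustrative only; this is not Norling's framework or any particular BDI platform): beliefs are revised from percepts, plans whose context conditions hold are adopted as intentions, and intentions are executed step by step.

      # Minimal illustrative BDI loop (hypothetical, not the thesis's framework).
      from dataclasses import dataclass, field
      from typing import Callable, Dict, List

      @dataclass
      class Plan:
          context: Callable[[dict], bool]      # applicability test over beliefs
          body: List[Callable[[dict], None]]   # ordered plan steps

      @dataclass
      class BDIAgent:
          plan_library: Dict[str, Plan]
          beliefs: dict = field(default_factory=dict)
          intentions: List[Plan] = field(default_factory=list)

          def perceive(self, percepts: dict):
              self.beliefs.update(percepts)    # belief revision from new percepts

          def deliberate(self, goals: List[str]):
              for goal in goals:               # adopt applicable plans as intentions
                  plan = self.plan_library.get(goal)
                  if plan and plan.context(self.beliefs):
                      self.intentions.append(plan)

          def act(self):
              while self.intentions:           # execute committed intentions in order
                  for step in self.intentions.pop(0).body:
                      step(self.beliefs)

      library = {"greet": Plan(lambda b: True, [lambda b: print("hello,", b.get("name"))])}
      agent = BDIAgent(plan_library=library)
      agent.perceive({"name": "world"})
      agent.deliberate(["greet"])
      agent.act()                              # prints: hello, world

    As the abstract notes, generic aspects of human behaviour would otherwise have to be hand-coded into every such loop, which is the duplication the thesis's framework extensions aim to remove.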
  • Item
    A framework for valuing the quality of customer information
    Hill, Gregory ( 2009)
    This thesis addresses a widespread, significant and persistent problem in Information Systems practice: under-investment in the quality of customer information. Many organisations require clear financial models in order to undertake investments in their information systems and related processes. However, there are no widely accepted approaches to rigorously articulating the costs and benefits of potential quality improvements to customer information. This can result in poor quality customer information which impacts on wider organisational goals. To address this problem, I develop and evaluate a framework for producing financial models of the costs and benefits of customer information quality interventions. These models can be used to select and prioritise from multiple candidate interventions across various customer processes and information resources, and to build a business case for the organisation to make the investment. The research process involved: - the adoption of Design Science as a suitable research approach, underpinned by a Critical Realist philosophy; - a review of scholarly research in the Information Systems sub-discipline of Information Quality, focusing on measurement and valuation, along with topics from relevant reference disciplines in economics and applied mathematics; - a series of semi-structured context interviews with practitioners (including analysts, managers and executives) in a number of industries, examining specifically information quality measurement, valuation and investment; - a conceptual study using the knowledge from the reference disciplines to design a framework incorporating models, measures and methods to address these practitioner requirements; - a simulation study to evaluate and refine the framework by applying synthetic information quality deficiencies to real-world customer data sets and decision processes in a controlled fashion; and - an evaluation of the framework against a number of published criteria recommended by scholars, establishing that the framework is a purposeful, innovative and generic solution to the problem at hand.
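    As a rough illustration of the kind of financial model the framework is intended to produce (a sketch under assumed figures and field names, not the thesis's actual method), a candidate intervention can be ranked by the expected saving from reduced error rates net of its cost:

      # Illustrative cost-benefit ranking of information quality interventions.
      # All figures, names and the scoring rule are hypothetical, not from the thesis.
      def net_benefit(records, error_rate_before, error_rate_after,
                      cost_per_error, intervention_cost):
          """Expected saving from fewer defective customer records, net of cost."""
          errors_avoided = records * (error_rate_before - error_rate_after)
          return errors_avoided * cost_per_error - intervention_cost

      candidates = {
          "address cleansing": net_benefit(500_000, 0.08, 0.02, 1.50, 30_000),
          "duplicate matching": net_benefit(500_000, 0.05, 0.03, 2.00, 25_000),
      }
      # Prioritise interventions by modelled net benefit, largest first.
      for name, value in sorted(candidates.items(), key=lambda kv: -kv[1]):
          print(f"{name}: ${value:,.0f}")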
  • Item
    Capturing the semantics of change: operation augmented ontologies
    Newell, Gavan John ( 2009)
    As information systems become more complex, it is infeasible for a non-expert to understand how the information system has evolved. Accurate models of these systems and the changes occurring to them are required for interpreters to understand, reason over, and learn from the evolution of these systems. Ontologies purport to model the semantics of the domain encapsulated in the system. Existing approaches to using ontologies do not capture the rationale for change but instead focus on the direct differences between one version of a model and the subsequent version. Some changes to ontologies are caused by a larger context or goal that is temporally separated from each specific change to the ontology. Current approaches to supporting change in ontologies are insufficient for reasoning over changes and allow changes that lead to inconsistent ontologies. In this thesis we examine the existing approaches and their limitations and present a four-level classification system for models representing change. We address the shortcomings in current techniques by introducing a new approach, augmenting ontologies with operations for capturing and representing change. In this approach changes are represented as a series of connected, related and non-sequential smaller changes. The new approach improves on existing approaches by capturing root causes of change, by representing causal relationships between changes, linking temporally disconnected changes to a root cause, and by preventing inconsistencies in the evolution of the ontology. The new approach also explicitly links changes in an ontology to the motivating real-world changes. We present an abstract machine that defines the execution of operations on ontologies. A case study is then used to explain the new approach and to demonstrate how it improves on existing ways of supporting change in ontologies. The new approach is an important step towards providing ontologies with the capacity to go beyond representing an aspect of a domain to include ways in which that representation can change.
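    The idea of recording change as causally linked operations, rather than as version-to-version diffs, can be sketched as follows (hypothetical structures; the thesis defines an abstract machine over ontologies rather than this code):

      # Illustrative operation log for an evolving ontology (hypothetical design).
      from dataclasses import dataclass, field
      from typing import Optional

      @dataclass
      class Operation:
          kind: str                            # e.g. "add_class", "remove_class"
          target: str                          # ontology element affected
          cause: Optional["Operation"] = None  # causal link back to a root-cause operation

      @dataclass
      class Ontology:
          classes: set = field(default_factory=set)
          log: list = field(default_factory=list)

          def apply(self, op: Operation):
              # reject operations that would leave the ontology inconsistent
              if op.kind == "add_class" and op.target in self.classes:
                  raise ValueError("inconsistent change: class already exists")
              if op.kind == "add_class":
                  self.classes.add(op.target)
              elif op.kind == "remove_class":
                  self.classes.discard(op.target)
              self.log.append(op)              # the operation log, not a diff, records the change

      root = Operation("add_class", "Customer")
      onto = Ontology()
      onto.apply(root)
      onto.apply(Operation("add_class", "PremiumCustomer", cause=root))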
  • Item
    Proactive traffic control strategies for sensor-enabled cars
    Wang, Ziyuan ( 2009)
    Traffic congestion and accidents are major concerns in today’s transportation systems. This thesis investigates how to improve traffic throughput by reducing or eliminating bottlenecks on highways, in particular for merging situations such as intersections where a ramp leads onto the highway. In our work, cars are equipped with sensors that can measure distance to neighboring cars, and communicate their velocity and acceleration readings to one another. Sensor-enabled cars can locally exchange sensed information about the traffic and adapt their behavior much earlier than regular cars. We propose proactive algorithms for merging different streams of sensor-enabled cars into a single stream. A proactive merging algorithm decouples the decision point from the actual merging point. Sensor-enabled cars allow us to decide where and when a car merges before it arrives at the actual merging point. This leads to a significant improvement in traffic flow as velocities can be adjusted appropriately. We compare proactive merging algorithms against the conventional priority-based merging algorithm in a controlled simulation environment. Experimental results show that proactive merging algorithms outperform the priority-based merging algorithm in terms of flow and delay. More importantly, imprecise information (errors in sensor measurements) is a major challenge for merging algorithms, because inaccuracies can potentially lead to unsafe merging behaviors. In this thesis, we investigate how the accuracy of sensors impacts merging algorithms, and design robust merging algorithms that tolerate sensor errors. Experimental results show that one of our proposed merging algorithms, which is based on the theory of time geography, is able to guarantee safe merging while tolerating two to four times more imprecise positioning information, and can double the road capacity and increase the traffic flow by 25%.
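    The core of a proactive merge, deciding the merge order before cars reach the ramp, can be conveyed with a simplified sketch (illustrative only; it assumes constant speeds and omits the time-geography safety analysis and sensor-error handling the thesis develops):

      # Illustrative proactive merge ordering: cars broadcast position and speed
      # ahead of the merge point and are interleaved by predicted arrival time,
      # so each car can adjust its velocity before reaching the merging point.
      from dataclasses import dataclass

      @dataclass
      class Car:
          car_id: str
          distance_to_merge: float   # metres to the merging point
          velocity: float            # m/s, assumed constant for this sketch

      def predicted_arrival(car: Car) -> float:
          return car.distance_to_merge / max(car.velocity, 0.1)

      def proactive_merge_order(highway_cars, ramp_cars):
          # The interleaving is fixed at the decision point, well before the
          # physical merge point, which is what "proactive" refers to here.
          return sorted(highway_cars + ramp_cars, key=predicted_arrival)

      order = proactive_merge_order(
          [Car("h1", 120, 28.0), Car("h2", 200, 30.0)],
          [Car("r1", 100, 20.0)],
      )
      print([c.car_id for c in order])   # ['h1', 'r1', 'h2']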
  • Item
    Mining surprising patterns
    KUO, YEN-TING ( 2009)
    From the perspective of an end-user, patterns derived during the data mining process are not always interesting. The mining of unexpected patterns is a computational technique introduced in earlier work to address this problem. However, such unexpected patterns are not necessarily surprising to the user. In this thesis, we show that the quality of a user's knowledge, encoded in computational form, is key to bridging the gap between unexpected and surprising patterns. The thesis presents an approach that reduces this gap by exploiting a synergy between existing techniques utilised in data mining. Key to the new approach is (1) the employment of a domain ontology to guide the mining of association rules, (2) an encoding of users' knowledge using a Bayesian network representation, and (3) a probabilistic model to generate explanations for unexpected rules. The methods are tested on real-world data in two domains with users who are domain experts. In the medical domain, a dataset of chronic kidney disease patients is mined with a nephrologist; in the educational domain, a dataset of a decimal comparison test of children is mined with two education researchers. Surprising patterns have been successfully discovered. Further gaps, identified during the investigation, are captured in the discussion of the case studies. Overall, the surprisingness problem needs to be tackled from the aspects of knowledge representation, knowledge acquisition, interpretation assistance, and prevention of meaningless rules. A lack of sufficient information about rules is found to be a major cause of meaningless rules, an un-informativeness problem that arises outside the scope of rule ranking. We conclude that the surprisingness problem should be further researched beyond the scope of this thesis.
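    One way to read the gap between unexpected and surprising patterns is to compare a mined rule's observed confidence with the probability the user's encoded beliefs assign to it; the toy sketch below illustrates that comparison (hypothetical rules and numbers, and a plain absolute difference standing in for the thesis's Bayesian-network-based model):

      # Toy surprise score: how far a mined rule's confidence departs from the
      # probability implied by the user's (here, hard-coded) belief model.
      def surprise(rule_confidence: float, believed_probability: float) -> float:
          return abs(rule_confidence - believed_probability)

      rules = [
          # (rule, observed confidence, belief-implied probability) - invented values
          ("low_gfr -> anaemia", 0.81, 0.75),
          ("young_age -> decimal_misconception", 0.64, 0.20),
      ]
      for name, conf, belief in sorted(rules, key=lambda r: -surprise(r[1], r[2])):
          print(f"{name}: surprise = {surprise(conf, belief):.2f}")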
  • Item
    Utility-oriented internetworking of content delivery networks
    Pathan, Al-Mukaddim Khan ( 2009)
    Today’s Internet content providers primarily use Content Delivery Networks (CDNs) to deliver content to end-users with the aim of enhancing their Web access experience. Yet the prevalent commercial CDNs, operating in isolation, often face resource over-provisioning, degraded performance, and Service Level Agreement (SLA) violations, thus incurring high operational costs and limiting the scope and scale of their services. To move beyond these shortcomings, this thesis sets out to establish the basis for developing advanced and efficient content delivery solutions that are scalable, high performance, and cost-effective. It introduces techniques to enable coordination and cooperation between multiple content delivery services, which is termed “CDN peering”. In this context, this thesis addresses five key issues: when to peer (triggering circumstances), how to peer (interaction strategies), whom to peer with (resource discovery), how to manage and enforce operational policies (request-redirection and load sharing), and how to demonstrate peering applicability (measurement study and proof-of-concept implementation). Thesis Contributions: To support the thesis that the resource over-provisioning and degraded performance problems of existing CDNs can be overcome, thus improving the Web access experience of Internet end-users, we have: - identified the key research challenges and core technical issues for CDN peering, along with a systematic understanding of the CDN space by covering relevant applications, features and implementation techniques, captured in a comprehensive taxonomy of CDNs; - developed a novel architectural framework, which provides the basis for CDN peering, formed by a set of autonomous CDNs that cooperate through an interconnection mechanism, providing the infrastructure and facilities to virtualize the service of multiple providers; - devised Quality-of-Service (QoS)-oriented analytical performance models to demonstrate the effects of CDN peering and predict end-user perceived performance, thus helping a CDN provider to make concrete QoS performance guarantees; - developed enabling techniques, i.e. resource discovery, server selection, and request-redirection algorithms, for CDN peering to achieve service responsiveness. These techniques are exercised to alleviate imbalanced load conditions, while minimizing redirection cost; - introduced a utility model for CDN peering to measure its content-serving ability by capturing the traffic activities in the system, and evaluated it through extensive discrete-event simulation analysis. The findings of this study provide incentive for the exploitation of critical parameters for a better CDN peering system design; and - demonstrated a proof-of-concept implementation of the utility model and an empirical measurement study on MetaCDN, which is a global overlay for Cloud-based content delivery. It is augmented with a utility-based redirection scheme to improve the traffic activities in the world-wide distributed network of MetaCDN.
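    The flavour of the utility model, measuring a provider's content-serving ability from observed traffic activity, can be conveyed with a small sketch (an assumed definition for illustration, not the thesis's exact model):

      # Illustrative content-serving utility for a CDN peer: the fraction of requests
      # it serves itself versus those it redirects to peers. Assumed definition only.
      def serving_utility(served_locally: int, redirected_to_peers: int) -> float:
          total = served_locally + redirected_to_peers
          return served_locally / total if total else 0.0

      # A peer that redirects heavily under load has low utility and is a natural
      # candidate for load sharing through peering arrangements.
      print(serving_utility(served_locally=8_200, redirected_to_peers=1_800))  # 0.82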
  • Item
    ARTS: Agent-Oriented Robust Transactional System
    WANG, MINGZHONG ( 2009)
    Internet computing enables the construction of large-scale and complex applications by aggregating and sharing computational, data and other resources across institutional boundaries. Through its intrinsic properties of autonomy and reactivity, the agent model can address the ever-increasing challenges of scalability and complexity driven by the prevalence of Internet computing, supporting the flexible management of application execution in distributed, open, and dynamic environments. However, the non-deterministic behaviour of autonomous agents leads to a lack of control, which complicates exception management and thus threatens the robustness and reliability of the system, because improperly handled exceptions may cause unexpected system failures and crashes. In this dissertation, we investigate and develop mechanisms to integrate intrinsic support for concurrency control, exception handling, recoverability, and robustness into multi-agent systems. The research covers agent specification, planning and scheduling, execution, and overall coordination, in order to reduce the impact of environmental uncertainty. Simulation results confirm that our model can improve the robustness and performance of the system, while relieving developers from dealing with the low-level complexity of exception handling. A survey, along with a taxonomy, of existing proposals and approaches for building robust multi-agent systems is provided first, and the merits and limitations of each category are highlighted. Next, we introduce the ARTS (Agent-Oriented Robust Transactional System) platform, which allows agent developers to compose recursively-defined, atomically-handled tasks to specify scoped and hierarchically-organized exception-handling plans for a given goal. ARTS then supports automatic selection, execution, and monitoring of appropriate plans in a systematic way, for both normal and recovery executions. Moreover, we propose multiple-step backtracking, which extends the existing step-by-step plan reversal, to serve as the default exception handling and recovery mechanism in ARTS. This mechanism utilizes previous planning results in determining the response to a failure, and allows a substitute path to start prior to, or in parallel with, the compensation process, thus allowing an agent to achieve its goals more directly and efficiently. ARTS helps developers to focus on high-level business logic and frees them from considering the low-level complexity of exception management. One of the reasons for the occurrence of exceptions in a multi-agent system is that agents are unable to adhere to their commitments. We propose two scheduling algorithms for minimising such exceptions when commitments are unreliable. The first scheduling algorithm is trust-based scheduling, which incorporates the concept of trust, that is, the probability that an agent will comply with its commitments, along with the constraints of system budget and deadline, to improve the predictability and stability of the schedule. Trust-based scheduling supports the runtime adaptation and evolution of the schedule by interleaving the processes of evaluation, scheduling, execution, and monitoring in the life cycle of a plan. The second scheduling algorithm is commitment-based scheduling, which focuses on the interaction and coordination protocol among agents, and augments agents with the ability to reason about and manipulate their commitments.
Commitment-based scheduling supports the refactoring and parallel execution of commitments to maximize the system's overall robustness and performance. While the first scheduling algorithm needs to be performed by a central coordinator, the second algorithm is designed to be distributed and embedded into the individual agent. Finally, we discuss the integration of our approaches into Internet-based applications, to build flexible but robust systems. Specifically, we discuss the designs of an adaptive business process management system and of robust scientific workflow scheduling.
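    A minimal sketch of the trust-based selection step might look like the following (illustrative only; hypothetical bid data and scoring, whereas ARTS combines trust with budget and deadline constraints and with runtime monitoring of the schedule):

      # Illustrative trust-based assignment: choose, for a task, the bidding agent
      # most likely to honour its commitment among those within budget.
      def pick_agent(bids, budget):
          # bids: list of (agent_id, trust in [0, 1], cost); keep only affordable bids
          affordable = [b for b in bids if b[2] <= budget]
          if not affordable:
              return None
          # favour the agent most likely to comply with its commitment
          return max(affordable, key=lambda b: b[1])

      bids = [("a1", 0.95, 120.0), ("a2", 0.70, 80.0), ("a3", 0.99, 200.0)]
      print(pick_agent(bids, budget=150.0))   # ('a1', 0.95, 120.0)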
  • Item
    A dynamic approximate representation scheme for streaming time series
    Zhou, Pu ( 2009)
    The huge volume of time series data generated in many applications poses new challenges in the techniques of data storage, transmission, and computation. Furthermore, when the time series are in the form of streaming data, new problems emerge and new techniques are required because of the streaming characteristics, e.g. high volume, high speed and continuous flow. Approximate representation is one of the most efficient and effective solutions to address the large-volume-high-speed problem. In this thesis, we propose a dynamic representation scheme for streaming time series. Existing methods use a unitary function form for the entire approximation task. In contrast, our method adopts a set of candidate functions such as linear, polynomial (degree ≥ 2), and exponential functions. We provide a novel segmenting strategy to generate subsequences and dynamically choose candidate functions to approximate the subsequences. Since we are dealing with streaming time series, the segmenting points and the corresponding approximate functions are incrementally produced. For a certain function form, we use a buffer window to find the locally farthest possible segmenting point under a user-specified error tolerance threshold. To achieve this goal, we define a feasible space for the coefficients of the function and show that we can indirectly find the locally best segmenting point by calculation in the coefficient space. Given the error tolerance threshold, the candidate function that represents the most information per parameter is chosen as the approximating function. Therefore, our representation scheme is more flexible and compact. We provide two dynamic algorithms, PLQS and PLQES, which involve two and three candidate functions, respectively. We also present the general strategy of function selection when more candidate functions are considered. In the experimental evaluation, we examine the effectiveness of our algorithms with synthetic and real time series data sets. We compare our method with the piecewise linear approximation method, and the experimental results demonstrate the clear superiority of our dynamic approach under the same error tolerance threshold.
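    The segmenting idea, extend the current segment while a candidate function can still fit the buffered points within the error tolerance and otherwise cut and start a new segment, can be illustrated with a simplified sketch (piecewise-constant for brevity, standing in for the linear, polynomial and exponential candidates and the feasible coefficient space used in the thesis):

      # Simplified online segmentation of a stream under an error tolerance.
      # Each segment is approximated by its running mean; a point that would push
      # the maximum deviation past the tolerance closes the segment and opens a new one.
      def segment_stream(stream, tolerance):
          segments, buffer = [], []
          for x in stream:
              buffer.append(x)
              mean = sum(buffer) / len(buffer)
              if max(abs(v - mean) for v in buffer) > tolerance:
                  buffer.pop()                     # the last point broke the tolerance
                  segments.append((len(buffer), sum(buffer) / len(buffer)))
                  buffer = [x]                     # start a new segment at that point
          if buffer:
              segments.append((len(buffer), sum(buffer) / len(buffer)))
          return segments                          # (length, approximating value) pairs

      print(segment_stream([1.0, 1.1, 0.9, 5.0, 5.2, 5.1, 2.0], tolerance=0.5))
      # three segments: values near 1.0, near 5.1, and the trailing point 2.0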
  • Item
    The adoption of advanced mobile commerce services by individuals: investigating the impact of the interaction between the consumer and the mobile service provider
    AlHinai, Yousuf Salim (The University of Melbourne, 2009)
    This research investigates the impact of the interaction between the consumer and the mobile service provider on the adoption of advanced mobile commerce services by existing consumers of mobile technology. Two factors characterise this interaction: 1) Perceived Relationship Quality (PRQ), which is the consumer’s evaluation of the quality of his/her relationship with the mobile service provider, and 2) Perceived Value of the Adoption Incentive (PVI), which is the consumer’s evaluation of the value of incentives that are offered by the service provider to entice him/her to adopt the mobile service. The influence of these factors on consumer attitudes and intentions towards adopting mobile commerce services is studied and compared with three other well-known adoption factors: perceived usefulness, ease of use and the subjective norm. This study was undertaken in three parts. Firstly, a conceptual study was conducted to investigate and analyse the existing literature on consumer adoption of mobile commerce services. This phase started with a general review of the existing studies using a novel model: the Entities-Interactions Framework (EIF). The EIF explains adoption behaviour in terms of interactions between the consumer and the other entities, including the mobile service, the service provider and the social system. This framework was used to analyse the extent to which important adoption factors have been covered by past research and therefore to identify the research questions. The conceptual study resulted in the development of a research model and relevant hypotheses. Secondly, a large-scale questionnaire survey was conducted to test the research model and the proposed hypotheses. This part of the research helped give a broad picture of the influence of consumer-service provider factors on consumer adoption of mobile commerce services. Thirdly, face-to-face interviews with mobile phone users were conducted in order to validate the survey results and provide an understanding of the mechanisms that control the impact of the investigated factors. The research found that PRQ and PVI have an important influence on the attitude and intention of existing mobile phone users towards accepting and using advanced mobile commerce services. Furthermore, the research found that these newly introduced factors are more influential on consumer adoption perceptions than other well-established factors. The study enriches our understanding of technology adoption by individuals because it explains why an existing user of a technology, such as mobile technology, will or will not adopt advanced versions of that technology. The findings affirm that in the context of communication technologies, which are interactive by nature, understanding the interaction between consumers and service providers is key to understanding the progressive adoption by consumers of advanced forms of these technologies. The thesis provides practitioners (particularly mobile service providers) with a better understanding of the impact and implications of their interaction with consumers on consumers’ acceptance and use of mobile services. The study emphasises the importance of incorporating this understanding throughout the mobile service provision process, starting from the conceptualisation of the service to the actual provision of the service to the market.
    The study also offers a novel way of viewing each mobile service offer as a consequence of the previous offer and a precursor of the next, in order to enhance consumer adoption of mobile services in both the short and the long run.
  • Item
    A study of recursive partitioning technique for indexing spatial objects and trajectories
    Antoine, Elizabeth Arockiarani ( 2009)
    The requirement to store and manipulate data that represents the location and extent of objects, like roads, cities, rivers, etc. (spatial data), led to the evolution of spatial database systems. The domains that led to an increased interest in spatial database systems are earth science, robotics, resource management, urban planning, autonomous navigation and geographic information systems (GIS). To handle spatial data efficiently, spatial database systems require indexing mechanisms that can retrieve spatial objects based on their locations using direct look-ups as opposed to sequential search. Indexing structures designed for relational database systems cannot be used for objects with non-zero size. The fact that there is no total ordering of objects in space makes the conventional indexes, such as the B+ tree, incapable of handling spatial data. Extensive work has been done on spatial indexing, and indexing methods are categorized in terms of their efficiency for particular types of spatial objects or queries. Queries in spatial database systems are classified as single-scan and multi-scan queries. Spatial join is the most important multi-scan query in a spatial database system, and the execution time of such queries is superlinear in the number of objects. Among the indexing structures available for spatial join queries, Filter trees perform better than their counterparts, such as Hilbert R-trees. The Filter tree join algorithm outperforms the R-tree join algorithm by reading each block of entities at most once. Filter trees combine recursive partitioning, size separation and space-filling curves to achieve this efficiency. However, for data sets of low join selectivity, the number of blocks processed for Filter trees is excessive compared to the number of blocks that have intersecting entities. The goal of this work is to provide a method for accelerating spatial join operations by using Spatial Join Bitmap (SJB) indices. The file organization is based on the concepts introduced in Filter trees. The SJB indices keep track of blocks that have intersecting entities and make the algorithm process only those blocks. We provide algorithms for generating SJB indices dynamically and for maintaining SJB indices when the data sets are updated. Although maintaining SJB indices for updates increases the cost in terms of response time, the cost saving in terms of the join operation is substantial, and this makes the overall behaviour of the spatial system very efficient. We have performed an extensive study using both real and synthetic data sets of various data distributions. The results show that the use of SJB indices produces a substantial speed-up, ranging from 25% to 150%, when compared to Filter trees. This method is highly beneficial in a real-world scenario, as the number of times the data set is updated is fairly low when compared to the number of times join processing is done on the data sets. The spatial indexing structures can be extended to handle data of higher dimensions, including time. Geometries such as points, lines, areas or volumes whose positions change over time represent moving objects. The need for storing and processing moving object data arises in a wide range of applications, including digital battlefields (battlefield simulators), air-traffic control, and mobile communication systems.
    The successive locations of the object are gathered as the object moves around the space, and the time-ordered locations are interpolated to obtain the movement of the object: this is called the trajectory of the object. R-tree variations, such as three-dimensional R-trees, TB-trees, FNR trees, STR trees, MON trees and SETI trees, are found to be effective for storing and manipulating past locations of moving objects. The SETI tree is a combination of the R-tree in the time dimension and the partition-based technique in the space dimension, and outperforms the other R-tree indexing structures in handling coordinate-based queries. However, SETI increases the computational time when handling trajectory queries that retrieve the whole or part of the trajectories. We propose a methodology for using the recursive partitioning technique for indexing trajectories, called the Recursively Partitioned Trajectory Index (RPTI). RPTI uses a two-level indexing structure that is similar to that of SETI and maintains separate indices for the space and time dimensions. However, the splitting of trajectory segments, which increases the computational time in SETI, does not arise in RPTI. We present the algorithms for constructing the RPTI and the algorithms for updates, which include insertion and deletion. We have conducted an experimental study of this method and have demonstrated that RPTI is better than SETI in handling trajectory queries and is competitive with SETI in handling coordinate-based queries. Contrary to the SETI structure, RPTI recursively partitions the space and avoids the splitting of line segments, making it efficient for query processing. Deletion is often ignored when proposing a trajectory index, on the assumption that deleting the trajectory of a moving object is meaningless once the transmitted positions are recorded. However, deletions are necessary when the trajectory of a moving object is no longer useful. We have also shown that deletion of a trajectory can be efficiently done using the RPTI structure. The structure of RPTI can easily be implemented using any of the existing spatial indexing structures. The only design parameters required are the standard disk page size and the maximum level of recursive partitioning. However, in SETI, the number of spatial partitions, which is a crucial parameter in any spatial partitioning strategy, is highly dependent on the distribution of the data sets.
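    The block-pruning effect of the SJB indices described above can be illustrated with a small sketch (hypothetical in-memory structures; the thesis builds on the Filter tree file organization rather than Python dictionaries): only block pairs recorded as containing intersecting entities are fetched and joined.

      # Illustrative block-level pruning for a spatial join, in the spirit of SJB indices.
      # Blocks are keyed by id; sjb_pairs records which (block_a, block_b) pairs hold
      # intersecting entities, so the join reads only those blocks.
      def intersects(r1, r2):
          # axis-aligned rectangles as (xmin, ymin, xmax, ymax)
          return not (r1[2] < r2[0] or r2[2] < r1[0] or r1[3] < r2[1] or r2[3] < r1[1])

      def spatial_join(blocks_a, blocks_b, sjb_pairs):
          results = []
          for a_id, b_id in sjb_pairs:             # all other block pairs are skipped
              for ra in blocks_a[a_id]:
                  for rb in blocks_b[b_id]:
                      if intersects(ra, rb):
                          results.append((ra, rb))
          return results

      blocks_a = {0: [(0, 0, 2, 2)], 1: [(10, 10, 12, 12)]}
      blocks_b = {0: [(1, 1, 3, 3)], 1: [(20, 20, 22, 22)]}
      print(spatial_join(blocks_a, blocks_b, sjb_pairs={(0, 0)}))
      # [((0, 0, 2, 2), (1, 1, 3, 3))]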