Computing and Information Systems - Theses
Item: Safety critical multi-agent systems. Liu, Ching Louis (2018). Artificial intelligence (AI) has come of age, especially with the rapid growth in the variety and quantity of data, together with automation in industrial and private applications, such as 3D printing and automated manufacturing, all of which require computing for control and interaction with humans at some point. Computer-controlled machines that operate in industrial or domestic settings must also operate in ways that ensure the safety of any humans in the vicinity or who interact with the machines. Further, if we are to realise the dream of autonomous agents interacting freely with humans, then the issue of safety becomes even more critical. In this regard, the aim of this thesis is to propose methods that complement existing Agent-Oriented Software Engineering methods and provide a means of safety engineering at the agent design stage. Our proposed methods can also build accident knowledge in such a way that a safety analysis of one type of multi-agent system can be transferred to another, provided that the second multi-agent system shares enough similarities with the first. At present it is difficult to apply agent-oriented methods in situations where safety is critical because of the lack of agent-specific safety analysis methods. Traditional safety engineering methods are not tailored to the analysis of the full capability of agents, and although a number of attempts have been made to automate traditional safety engineering methods, a gap between the dynamic behaviour of multi-agent systems and safety analysis remains. Further, there is no single accepted definition of a multi-agent system, but there is a list of concepts common to most definitions and widely accepted across different methodologies: the concept of an open system, dynamic behaviour and adaptation, to name a few. Current safety analysis methods do not handle these concepts with much success.
Of the existing methods, which we review in chapter 3, all require domain knowledge as well as expertise in the application of the methods themselves, and all are limited by the size of the component that can be analysed. This thesis contributes to safety analysis in agent-oriented software engineering by providing safety analysis methods that generate tangible safety goals based on previous accident data and system behaviour. Another contribution of this thesis is that our method enables agents to dynamically calculate accident likelihood and then, through a specific systems-level ontology, to translate the safety analysis from one multi-agent system to another with similar agent characteristics. An example of where this latter case can be applied is in providing estimations for the design of a new multi-agent system that does not yet have any accident data. We first look at ways of modelling system behaviour and, importantly, the interactions between different agents. Then, we present a way to convert the interaction model into a Bayesian network that combines data from multiple previous accidents, and a method for identifying which system component to change to improve the safety of the overall multi-agent system. When we apply this method to real-life situations, we find that the current limitation is the lack of data at the right level of detail. However, by exploring the interactions in the system and the relationships between agents, we can overcome the limitations in the data to some extent. Our approach can be used to estimate the accident rate by combining accident data from different existing physical systems, providing a quick estimate and design feedback to the multi-agent system designer. This thesis will advance the application of multi-agent systems by improving their safety aspects.
Moreover, the ability provided by our Bayesian networks to dynamically calculate the likelihood of accidents provides agents with the means to improve safety as they encounter new incidents. Our method of translating the analysis from one type of multi-agent system to another on the basis of ontology provides an interesting approach for sharing accident knowledge between related systems when they are in the field.
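The core move described above, turning interaction and accident data into a Bayesian network that yields an accident likelihood, can be illustrated with a deliberately tiny model. Everything below (the two-node structure, the probabilities, the pooled counts) is an invented example for illustration, not the network or data used in the thesis:

```python
# Minimal sketch (not the thesis's actual model): a two-node Bayesian
# network fault -> accident, with the conditional probability of an
# accident given a fault estimated by pooling hypothetical accident
# counts from two existing systems.

def accident_probability(p_fault, p_acc_given_fault, p_acc_given_ok):
    """Marginalise over the fault node:
    P(accident) = P(acc | fault) P(fault) + P(acc | no fault) P(no fault)."""
    return p_acc_given_fault * p_fault + p_acc_given_ok * (1.0 - p_fault)

# Pool (made-up) data from systems A and B to parameterise a new,
# data-poor design: 50 observed faults, 10 of which led to accidents.
faults = 40 + 10                    # system A + system B
accidents_given_fault = 8 + 2
p_acc_given_fault = accidents_given_fault / faults   # 0.2

p = accident_probability(p_fault=0.05,
                         p_acc_given_fault=p_acc_given_fault,
                         p_acc_given_ok=0.001)
print(round(p, 5))  # 0.01095
```

Updating `p_fault` or the pooled counts as new incidents are observed is the sense in which such a network lets an agent recalculate accident likelihood dynamically.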
Item: A lock-free environment for computer music: concurrent components for computer supported cooperative work. Shelton, Robert James (2011). More than fifty years of computer music research has established a wide range of techniques and tools. Implementing all of these within a single application is impractical, yet each task might require a different selection to be applied. Therefore, the sharing of structured information between distinct applications becomes a key challenge for computer music as a field. While existing protocols provide a great deal of support for communication between synthesis engines, minimal facilities exist for communicating user-interface constructs between distributed computer music applications. As a result, it remains difficult to apply techniques for computer supported cooperative work in a format that can be reused by a wide range of applications, which limits the opportunities to develop more general collaborative composition and performance environments. In a distributed environment where user-interface objects may be replicated across multiple remote locations, concurrency control is required to maintain the integrity of each object. Previous methods for managing concurrent objects have typically introduced strong guarantees regarding a range of behaviours; however, these guarantees come at the cost of more complicated object frameworks that have failed to gain widespread community acceptance. Instead, a lightweight component model providing only the essential facilities is seen as a viable alternative. This dissertation investigates the use of lock-free shared-memory techniques for distributed computer music composition and performance environments. A method is presented by which the underlying user-interface structures may be separated from their manipulation, establishing an environment in which concurrent applications can provide multiple real-time views of shared components.
Results include a formal correctness model for lock-free algorithms, a portable framework for concurrent components and a practical demonstration using a collection of components to implement a distributed text buffer.
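The lock-free style of update such work builds on can be sketched as a retry loop over immutable snapshots. The `Ref` class, its simulated compare-and-swap, and the text-buffer example below are illustrative inventions, not the dissertation's framework; real lock-free components rely on hardware atomic instructions, which plain Python does not expose:

```python
# Illustrative sketch of the lock-free retry pattern: read a snapshot,
# derive a new immutable version, and publish it with compare-and-swap
# (CAS), retrying if another writer got there first.

class Ref:
    """A mutable cell with a CAS stand-in. NOT actually atomic in
    Python; a real implementation uses hardware CAS instructions."""
    def __init__(self, value):
        self.value = value

    def compare_and_swap(self, expected, new):
        if self.value is expected:   # still the snapshot we read?
            self.value = new
            return True
        return False

def lockfree_append(ref, item):
    """Publish a new snapshot of the buffer; retry on contention."""
    while True:
        snapshot = ref.value            # read current state
        updated = snapshot + (item,)    # derive a new version, no mutation
        if ref.compare_and_swap(snapshot, updated):
            return updated              # CAS succeeded: update is visible

buf = Ref(())                           # shared text buffer as a tuple
lockfree_append(buf, "hello")
lockfree_append(buf, "world")
print(buf.value)                        # ('hello', 'world')
```

Because readers only ever see a complete snapshot, concurrent views stay consistent without any thread ever blocking on a lock, which is the property that matters for real-time music interfaces.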
Item: Exploring social aspects of requirements engineering: an ethnographic study of Thai systems analysts. Thanasankit, Theerasak (1999). Requirements engineering has been considered an important phase of information systems development. There is much evidence showing how a lack of understanding of users' requirements has led to information systems failure and rejection by clients. Requirements engineering emerged from software engineering, focusing on eliciting requirements and finalising requirements specifications for systems analysts to design systems. Requirements engineering research has concentrated on the technical area. This study focuses instead on the social dimensions of requirements engineering, which have been poorly understood due to a lack of research in this area. The social dimensions of requirements engineering are broad and cover many areas of social activity. This study focuses on the influence of culture and values on requirements engineering processes and on the tools/techniques employed by systems analysts for requirements engineering. The data were collected from intensive interviews with eight Thai systems analysts. These interviews were transcribed and analysed using the accepted practices of hermeneutics. Culture is learned by members of a society: they learn how to behave towards their parents, relatives, peers, and superiors throughout their development at home, at school, and in the workplace. Thai culture is high in power distance, group focus, and emotion and relationship focus, and is characterised by a dislike of uncertain situations. These unique characteristics of Thai culture influence requirements engineering processes and the use of tools/techniques for requirements engineering. Three important issues emerged from the study: a continual evolving of requirements, long decision-making processes, and misconceptions about requirements and the problem domain.
These three issues are shown to be influenced by the process of requirements engineering as practised by the participant systems analysts. Thai culture and values construct the learning process in Thai society and form the emotional and relationship structures in Thailand. These two unique issues are shown to influence the use of tools/techniques for requirements engineering by the participant systems analysts. This study shows that local culture and values influence requirements engineering processes. Therefore, systems analysts need to take social factors into consideration in selecting and adapting existing requirements engineering processes to suit their clients' culture, values, and work practices. This study's findings are crucial for multinational information systems consulting organisations operating in Thailand to gain a better understanding of Thai culture and its impact on the use of requirements engineering methodologies. The study also assists consulting organisations to better manage requirements engineering processes and to understand the implicit factors that create problems during requirements engineering and throughout the information systems development process.
Item: Toward semantic interoperability for software systems. Lister, Kendall (2008). “In an ill-structured domain you cannot, by definition, have a pre-compiled schema in your mind for every circumstance and context you may find ... you must be able to flexibly select and arrange knowledge sources to most efficaciously pursue the needs of a given situation.” In order to interact and collaborate effectively, agents, whether human or software, must be able to communicate through common understandings and compatible conceptualisations. Ontological differences, arising either from pre-existing assumptions or as side-effects of the process of specification, are a fundamental obstacle that must be overcome before communication can occur. Similarly, the integration of information from heterogeneous sources is an unsolved problem. Efforts have been made to assist integration, through both methods and mechanisms, but automated integration remains an unachieved goal. Communication and information integration are problems of meaning and interaction, or semantic interoperability. This thesis contributes to the study of semantic interoperability by identifying, developing and evaluating three approaches to the integration of information. These approaches have in common that they are lightweight in nature, pragmatic in philosophy and general in application. The first work presented is an effort to integrate a massive, formal ontology and knowledge-base with semi-structured, informal, heterogeneous information sources via a heuristic-driven, adaptable information agent. The goal of the work was to demonstrate a process by which task-specific knowledge can be identified and incorporated into the massive knowledge-base in such a way that it can be generally re-used.
The practical outcome of this effort was a framework that illustrates a feasible approach to providing the massive knowledge-base with an ontologically-sound mechanism for automatically generating task-specific information agents to dynamically retrieve information from semi-structured information sources without requiring machine-readable meta-data. The second work presented revives a previously published and neglected algorithm for inferring semantic correspondences between fields of tables from heterogeneous information sources. An adapted form of the algorithm is presented and evaluated, first on relatively simple and consistent data collected from web services in order to verify the original results, and then on poorly-structured and messy data collected from web sites in order to explore the limits of the algorithm. The results are presented via standard measures and are accompanied by detailed discussions of the nature of the data encountered and an analysis of the strengths and weaknesses of the algorithm and the ways in which it complements other approaches that have been proposed. Acknowledging the cost and difficulty of integrating semantically incompatible software systems and information sources, the third work presented is a proposal and a working prototype for a web site that facilitates resolving semantic incompatibilities between software systems prior to deployment, based on the commonly-accepted software engineering principle that the cost of correcting faults increases exponentially as projects progress from phase to phase, with post-deployment corrections being significantly more costly than those performed earlier in a project’s life. The barriers to collaboration in software development are identified and steps taken to overcome them.
The system presented draws on the recent successes of social and collaborative on-line projects such as SourceForge, Del.icio.us, digg and Wikipedia, and on a variety of techniques for ontology reconciliation, to provide an environment in which data definitions can be shared, browsed and compared, with recommendations automatically presented to encourage developers to adopt data definitions compatible with previously developed systems. In addition to the experimental works presented, this thesis contributes reflections on the origins of semantic incompatibility, with a particular focus on interaction between software systems, and between software systems and their users, as well as detailed analysis of the existing body of research into methods and techniques for overcoming these problems.
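As a rough illustration of the kind of inference the second work's algorithm performs (matching fields between tables from heterogeneous sources by examining their contents), here is a minimal sketch that scores candidate field pairs by Jaccard overlap of their observed values. The scoring function and the toy tables are assumptions for illustration, not the revived algorithm itself:

```python
# Hedged sketch: infer field correspondences between two tables by
# greedily pairing each field of table A with the field of table B
# whose observed values overlap most (Jaccard similarity).

def jaccard(a, b):
    """Set overlap: |A ∩ B| / |A ∪ B|, 0.0 if both are empty."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_fields(table_a, table_b):
    """Tables are dicts mapping field name -> list of values.
    Returns a field-name mapping from table_a to table_b."""
    matches = {}
    for fa, va in table_a.items():
        best = max(table_b, key=lambda fb: jaccard(va, table_b[fb]))
        matches[fa] = best
    return matches

# Two hypothetical sources describing similar records under
# different field names.
source_a = {"city": ["Melbourne", "Sydney"], "code": ["VIC", "NSW"]}
source_b = {"state": ["VIC", "NSW", "QLD"], "town": ["Melbourne", "Perth"]}
print(match_fields(source_a, source_b))  # {'city': 'town', 'code': 'state'}
```

Value overlap works on the clean, consistent data in this toy example; on the messy web-site data the thesis describes, such instance-based signals degrade, which is why the abstract stresses exploring the limits of the algorithm rather than presenting it as a complete solution.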