Computing and Information Systems - Research Publications

Search Results

Now showing 1 - 9 of 9
  • Item
    Grid security
    Sinnott, Richard O. (CRC Press, 2009)
    Security is essential for inter-organizational collaborative e-Research. Without robust, reliable, easy-to-understand and easy-to-manage e-Research security models and their implementations, many communities and wider industry will simply not engage. To support inter-organizational, inter-disciplinary research, it is essential that e-Research security infrastructures support several key (defining) characteristics …
  • Item
    Specifying ODP computational objects in Z
    Sinnott, Richard ; Turner, Kenneth J. (Springer, 1996)
    The computational viewpoint contained within the Reference Model of Open Distributed Processing (RM-ODP) shows how collections of objects can be configured within a distributed system to enable interworking. It prescribes certain capabilities that such objects are expected to possess and structuring rules that apply to how these objects can be configured with one another. This paper highlights how the specification language Z can be used to formalise these capabilities and the associated structuring rules, thereby enabling specifications of ODP systems from the computational viewpoint to be achieved.
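    As a rough illustration of the kind of formalisation the abstract describes, the sketch below shows how an object's offered operations and an invocation constraint might be captured as Z schemas. It is not taken from the paper; it assumes the zed-csp LaTeX package is available, and the names ComputationalObject, Invoke, OPNAME and STATE are hypothetical.

      \documentclass{article}
      \usepackage{zed-csp}   % common Z typesetting package (assumed available)
      \begin{document}

      % Given sets: operation names and abstract object states (illustrative only).
      \begin{zed}
        [OPNAME, STATE]
      \end{zed}

      % A computational object offers a non-empty set of operations and has a state.
      \begin{schema}{ComputationalObject}
        ops : \power OPNAME \\
        state : STATE
      \where
        ops \neq \emptyset
      \end{schema}

      % A structuring rule: an invocation is only permitted for an operation
      % the object actually offers.
      \begin{schema}{Invoke}
        \Delta ComputationalObject \\
        op? : OPNAME
      \where
        op? \in ops
      \end{schema}

      \end{document}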
  • Item
    E-infrastructures fostering multi-centre collaborative research into the intensive care management of patients with brain injury
    Sinnott, Richard O. ; Piper, Ian (Information Science Reference (an imprint of IGI Global), 2009)
    Clinical research is becoming ever more collaborative, with multi-centre trials now a common practice. With this in mind, never has it been more important to have secure access to data and, in so doing, to tackle the challenges of inter-organisational data access and usage. This is especially the case for research conducted within the brain injury domain, due to the complicated multi-trauma nature of the disease with its associated complex collation of time-series data of varying resolution and quality. It is now widely accepted that advances in treatment within this group of patients will only be delivered if the technical infrastructures underpinning the collection and validation of multi-centre research data for clinical trials are improved. In recognition of this need, IT-based multi-centre e-Infrastructures such as the Brain Monitoring with Information Technology group (BrainIT - www.brainit.org) and the Cooperative Study on Brain Injury Depolarisations (COSBID - www.cosbid.de) have been formed. A serious impediment to the effective implementation of these networks is access to the know-how and experience needed to install, deploy and manage security-oriented middleware systems that provide secure access to distributed hospital-based datasets, and especially the linkage of these data sets across sites. The recently funded EU Framework VII ICT project Advanced Arterial Hypotension Adverse Event prediction through a Novel Bayesian Neural Network (AVERT-IT) is focused upon tackling these challenges. This chapter describes the problems inherent to data collection within the brain injury medical domain, the current IT-based solutions designed to address these problems and how they perform in practice. The authors outline how they have collaborated towards developing Grid solutions to address the major technical issues. Towards this end, they describe a prototype solution which ultimately formed the basis for the AVERT-IT project. They describe the design of the underlying Grid infrastructure for AVERT-IT and how it will be used to produce novel approaches to data collection, data validation and clinical trial design.
  • Item
    The pros and cons of using SDL for creation of distributed services
    Olsen, Anders ; Demany, Didier ; Cardoso, Elsa ; Lodge, Fiona ; Kolberg, Mario ; Bjorkander, Morgan ; Sinnott, Richard (Springer, 1999)
    In a competitive market for the creation of complex distributed services, time to market, development cost, maintenance and flexibility are key issues. Optimizing the development process is very much a matter of optimizing the technologies used during service creation. This paper reports on the experience gained in the Service Creation projects SCREEN and TOSCA on use of the language SDL for efficient service creation.
  • Item
    Finite state machine based SDL
    Sinnott, Richard O. ; Hogrefe, Dieter (Cambridge University Press, 2001)
    SDL is a language for specifying and describing systems. The basic idea of SDL is to describe systems in the form of asynchronously communicating processes represented as extended finite state machines. For this reason, SDL is particularly suited to modelling and developing parallel, e.g. distributed, communicating systems.
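    As a rough illustration of the extended-finite-state-machine idea described above (control states, local data and an asynchronous input queue), the following Python sketch models one such process. It is not SDL syntax or any SDL tool's API; the states, signals and variable names are hypothetical.

      # Minimal sketch of an extended finite state machine with an
      # asynchronous input queue, in the spirit of an SDL process.
      from collections import deque

      class Process:
          def __init__(self):
              self.state = "idle"        # control state of the FSM
              self.count = 0             # local variable (the "extended" part)
              self.queue = deque()       # asynchronously delivered signals

          def send(self, signal):
              """Another process delivers a signal asynchronously."""
              self.queue.append(signal)

          def step(self):
              """Consume one queued signal and fire the matching transition."""
              if not self.queue:
                  return
              signal = self.queue.popleft()
              if self.state == "idle" and signal == "connect":
                  self.state = "connected"
              elif self.state == "connected" and signal == "data":
                  self.count += 1        # update local data on a transition
              elif self.state == "connected" and signal == "disconnect":
                  self.state = "idle"

      p = Process()
      for s in ("connect", "data", "data", "disconnect"):
          p.send(s)
      while p.queue:
          p.step()
      print(p.state, p.count)            # -> idle 2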
  • Item
    Integrated trouble management to support service quality assurance in a multi-provider context
    Dragan, Dan ; Gringel, Thomas ; Hall, Jane ; Sinnott, Richard ; Tschichholz, Michael ; Vortisch, Wilhelm (Springer, 2000)
    Liberalisation of telecommunications encourages competition between the various actors in the Open Service Market (OSM). In this highly competitive context, Connectivity Service Providers (CSPs) and Value Added Service Providers (VASPs) are investigating opportunities to provide differentiated, service-quality-related Service Level Agreements (SLAs) to their customers. The services provided will span several administrative domains, which makes their management complex. The key considerations for end users when choosing a particular service are the guarantee of support provided when using the service and the desire to interact with as few actors as possible. On the other hand, key issues for network operators and service providers are the cost-effective maintenance of equipment and services. The aim of this paper is to present a novel architecture that provides the necessary infrastructure, models and mechanisms to help VASPs and CSPs rapidly introduce customer care services for user quality assurance in a multi-domain environment. The architecture aims at integrating TINA, TMF and TMN concepts as well as established legacy in-house customer care and help desk systems. This work is being undertaken within the Assurance part of the CEC ACTS project FlowThru.
  • Item
    Supporting service quality assurance via trouble management
    Sinnott, Richard ; Gringel, Tom ; Tschichholz, Michael ; Vortisch, Wilhelm (Kluwer Academic, 2000)
    The open service market encourages competition between service providers. To attract and keep customers, service providers require, amongst other things, better tools and techniques to increase their competitiveness. In this paper we address one area for tool support: namely, tools for the support of service quality assurance, i.e. so that checks can be made to ensure that services (and the networks they operate over) fulfil the expectations of the customers who have subscribed to them. To demonstrate this, we show how trouble management techniques can be applied to develop generic and reusable components. The test bed for this work is based on a TINA platform, Y.TSP, that has been extended with a trouble management component. We show how this trouble management component can be used to support service quality assurance via two application case studies.
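    To illustrate the kind of quality-assurance check that trouble management enables, the sketch below links a trouble ticket to an agreed resolution time. This is a hypothetical Python sketch, not the paper's Y.TSP/TINA trouble management component; the class, field and function names are invented.

      # Illustrative trouble-ticket lifecycle and a simple SLA check.
      from dataclasses import dataclass, field
      from datetime import datetime, timedelta
      from typing import Optional

      @dataclass
      class TroubleTicket:
          service_id: str
          description: str
          opened: datetime = field(default_factory=datetime.utcnow)
          closed: Optional[datetime] = None   # set when the trouble is resolved

          def close(self):
              self.closed = datetime.utcnow()

          def resolution_time(self) -> Optional[timedelta]:
              return (self.closed - self.opened) if self.closed else None

      def meets_sla(ticket: TroubleTicket, max_resolution: timedelta) -> bool:
          """Quality-assurance check: was the trouble resolved within the
          time promised to the subscriber?"""
          rt = ticket.resolution_time()
          return rt is not None and rt <= max_resolution

      ticket = TroubleTicket("video-conf-42", "poor audio quality")
      ticket.close()
      print(meets_sla(ticket, timedelta(hours=4)))   # True: resolved in time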
  • Item
    Experiences of applying advanced grid authorisation infrastructures
    Sinnott, R. O. ; Stell, A. J. ; Chadwick, D. W. ; Otenko, O. (Springer, 2005)
    The widespread acceptance and uptake of Grid technology can only be achieved if it can be ensured that the security mechanisms needed to support Grid-based collaborations are at least as strong as local security mechanisms. The predominant way in which security is currently addressed in the Grid community is through Public Key Infrastructures (PKI) to support authentication. Whilst PKIs address user identity issues, authentication does not provide fine-grained control over what users are allowed to do on remote resources (authorisation). The Grid community have put forward numerous software proposals for authorisation infrastructures such as AKENTI [1], CAS [2], CARDEA [3], GSI [4], PERMIS [5,6,7] and VOMS [8,9]. It is clear that for the foreseeable future a collection of solutions will be the norm. To address this, the Global Grid Forum (GGF) have proposed a generic SAML-based authorisation API which in principle should allow fine-grained control over authorised access to any Grid service. Experiences in applying and stress-testing this API from a variety of different application domains are essential to give insight into the practical aspects of large-scale usage of authorisation infrastructures. This paper presents experiences from the DTI-funded BRIDGES project [10] and the JISC-funded DyVOSE project [11] in using this API with Globus version 3.3 [12] and the PERMIS authorisation infrastructure.
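    To illustrate the authorisation call-out pattern described above (a service delegating the question "may this user perform this action on this resource?" to a policy decision point), here is a minimal Python sketch. It is not the GGF SAML authorisation API, Globus or PERMIS; the classes, the policy format and all names are hypothetical.

      # Toy policy decision point for fine-grained authorisation decisions.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class AuthzRequest:
          subject: str      # authenticated identity, e.g. an X.509 DN
          resource: str     # target Grid service
          action: str       # operation being invoked

      class PolicyDecisionPoint:
          """Holds (subject, resource, action) permit rules and answers queries."""
          def __init__(self, rules):
              self.rules = set(rules)

          def decide(self, req: AuthzRequest) -> str:
              key = (req.subject, req.resource, req.action)
              return "Permit" if key in self.rules else "Deny"

      pdp = PolicyDecisionPoint({
          ("CN=Alice,O=ExampleVO", "BlastService", "invoke"),
      })

      req = AuthzRequest("CN=Alice,O=ExampleVO", "BlastService", "invoke")
      print(pdp.decide(req))   # -> Permit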
  • Item
    Development of a grid infrastructure for functional genomics
    Sinnott, R. ; Bayer, M. ; Houghton, D. ; Berry, D. ; Ferrier, M. (Springer, 2005)
    The BRIDGES project is incrementally developing and exploring database integration over six geographically distributed research sites within the framework of a Wellcome Trust biomedical research project (the Cardiovascular Functional Genomics project), to provide a sophisticated infrastructure for bioinformaticians. Grid technologies are being used to facilitate this integration. Key issues to be investigated in BRIDGES are data integration and data federation, security, user friendliness, access to large-scale computational facilities, and the incorporation of existing bioinformatics software solutions, both for visualisation and for analysis of genomic data sets. This paper outlines the initial experiences in applying Grid technologies and outlines the ongoing designs put forward to address these issues.