Computing and Information Systems - Theses

Search Results

Now showing 1 - 10 of 70
  • Item
    Protecting organizational knowledge: a strategic perspective framework
    DEDECHE, AHMED ( 2014)
    Organizational knowledge is considered a valuable resource that provides competitive advantage. Extensive research has been done on strategies to encourage knowledge creation and sharing. However, limited research has been done on strategies for protecting this valuable resource from the risk of leakage. This research aims to help bridge this gap through two contributions: a model that describes knowledge leakage, and a framework of strategies for protecting competitive organizational knowledge. The research is grounded in two bodies of literature, knowledge management and information security, and aims to identify security strategies in the literature and adapt them to address knowledge protection needs.
  • Item
    Towards achieving participation equilibrium in information and communication technology for development (ICT4D) projects: a human action perspective
    Maail, Arthur Glenn ( 2014)
    Realising the benefits offered by information and communication technologies (ICTs), governments and donor agencies spend millions of dollars annually to establish ICT projects that aim to improve the livelihoods of rural and disadvantaged communities around the world. Such projects are known as Information and Communication Technology for Development (ICT4D) projects. However, many of these projects fail to achieve their objectives, which may deter governments and donor agencies from supporting such important development initiatives in the future. Researchers and practitioners have argued for the importance of participation by the target users (i.e. the community) in the development of ICT4D projects as a way to improve the success of these initiatives. Nonetheless, the success of ICT4D projects does not always correlate with a higher degree of user participation. Instead, this thesis points to the importance of achieving user participation equilibrium, and of understanding the two competing groups of influences that affect the balance of that equilibrium. These are, on one side, the conditional factors, which determine the actual degree of user participation, and, on the other side, the approach adopted in project development, which implies the desired degree of participation. Prior studies have shown that when user participation equilibrium is achieved, that is, when the desired degree matches the actual degree of user participation, user participation has a positive correlation with project success. This thesis proposes a new framework that categorises the conditional factors affecting user participation in ICT4D projects based on the typology of human action offered by Habermas' theory of communicative action (TCA). The framework is then validated using two research methods: a multiple case study involving nine "telecentre" organisations, and an asynchronous online focus group (AOFG) with fourteen ICT4D practitioners. By incorporating a typology of human action, this research extends existing knowledge on managing user participation in ICT4D projects. First, the thesis shows the applicability of a human action perspective to understanding the conditional factors affecting user participation in the development of ICT4D projects. Second, it highlights the role of intermediated interaction with technology in achieving participation equilibrium. Third, it shows that managing social interactions between users and other project stakeholders is key to achieving participation equilibrium. Finally, the proposed framework lays out the common properties of the sets of conditional factors for each approach used in the development of ICT4D projects. The understanding offered by the framework helps practitioners focus on a specific set of conditional factors and devise appropriate strategies for managing user participation in a given project based on the approach employed. The framework can also be used as a guide to the most appropriate development approach for an ICT4D project given specific resources and conditions.
  • Item
    Generalized language identification
    LUI, MARCO ( 2014)
    Language identification is the task of determining the natural language that a document, or part thereof, is written in. The central theme of this thesis is generalized language identification: eliminating the assumptions that limit the applicability of language identification techniques to specific settings, settings that may not be representative of real-world use cases for automatic language identification. Research to date has treated language identification as a supervised machine learning problem; in this thesis I argue that such a characterization is inadequate, showing how standard document representations do not account for the variation in a language between different sources of text, and developing a representation that is robust to such variation. I also develop a method for language identification in multilingual documents, i.e. documents that contain text in more than one language. Finally, I investigate the robustness of existing off-the-shelf language identification methods on a novel and challenging domain.
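    As an illustration of the standard supervised formulation that the thesis argues is insufficient on its own, the sketch below trains a character n-gram naive Bayes classifier. The toy training texts, labels and n-gram range are invented for this example; the thesis's cross-domain feature selection is not reproduced here.

```python
# A minimal sketch of supervised language identification with character
# n-gram counts and multinomial naive Bayes. Training data is a toy
# placeholder, not a real language-ID corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "this is a sentence in english",
    "the quick brown fox jumps over the lazy dog",
    "dies ist ein satz auf deutsch",
    "der schnelle braune fuchs springt",
    "ceci est une phrase en francais",
    "le renard brun rapide saute",
]
train_langs = ["en", "en", "de", "de", "fr", "fr"]

# Character 1-3-grams approximate the byte n-gram features common in
# language identification; a robust system would select features that are
# stable across text sources (the thesis's central concern).
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3)),
    MultinomialNB(),
)
model.fit(train_texts, train_langs)

print(model.predict(["une phrase courte", "ein kurzer satz"]))  # expect ['fr', 'de']
```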
  • Item
    Breast cancer detection and diagnosis in dynamic contrast-enhanced magnetic resonance imaging
    LIANG, XI ( 2013)
    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast is a medical imaging tool used to detect and diagnose breast disease. A DCE-MR image is a series of three-dimensional (3D) breast MRI scans acquired before and after the injection of paramagnetic contrast agents, forming a 4D image (3D spatial + time). DCE-MRI allows analysis of the variation in magnetic resonance (MR) signal intensity over time, before and after the injection of contrast agents. The interpretation of 4D DCE-MRI images can be time consuming due to the amount of information involved, and motion artifacts between the image scans further complicate diagnosis. A DCE-MR image contains a large amount of data and is challenging to interpret even for an experienced radiologist. A computer-aided diagnosis (CAD) system is therefore desirable to assist the diagnosis of abnormal findings in DCE-MR images. We propose a fully automated CAD system comprising five novel components: a new image registration method to recover motion between MR image acquisitions, a novel lesion detection method to identify all suspicious regions, a new lesion segmentation method to draw lesion contours, a novel lesion feature characterization method, and a classification step that labels the automatically detected lesions using the proposed features. The following lists the challenges found in most CAD systems and the contributions of our CAD system for breast DCE-MRI.
    1. Image registration. One challenge in the interpretation of DCE-MRI is motion artifacts, which make the pattern of tissue enhancement unreliable. Image registration is used to recover rigid and non-rigid motion between the 3D image sequences in a 4D breast DCE-MRI. Most existing B-spline based registration methods require lesion segmentation in breast DCE-MRI to preserve lesion volume before performing the registration. We propose an automatic method for generating regularization coefficients in B-spline based registration of breast DCE-MRI, in which tumor regions are transformed in a rigid fashion. Our method does not perform lesion segmentation, but computes a map that reflects tissue rigidity. In the evaluation of the proposed coefficients, registration using our coefficients for the rigidity terms is compared against manually assigned coefficients for the rigidity and smoothness terms, on 30 synthetic and 40 clinical pairs of pre- and post-contrast MRI scans. The results show that tumor volumes are well preserved using a rigidity term (2.25% ± 4.48% volume change) compared to a smoothness term (22.47% ± 20.1%). In our dataset, the volume preservation achieved with our automatically generated coefficients is comparable to that of manually assigned rigidity coefficients (2.29% ± 13.25%), with no significant difference in volume changes (p > 0.05).
    2. Lesion detection. After motion has been corrected by our registration method, we locate regions of interest (ROIs) using our lesion detection method. The aim is to highlight suspicious ROIs, reducing both the ROI search time and the possibility of radiologists overlooking small regions. A low signal-to-noise ratio is a general challenge in lesion detection in MRI. In addition, the value range of a feature of normal tissue in one patient can overlap with that of malignant tissue in another patient, e.g. for tissue intensity values and enhancement. Most existing lesion detection methods suffer a high false positive rate due to blood vessels or motion artifacts. In our method, we locate suspicious lesions by applying thresholds to essential features, which are normalized to reduce variation between patients. We then exclude blood vessels and motion artifacts from the initial results by applying filters that differentiate them from other tissues. In an evaluation on 21 patients with 50 lesions, all lesions were successfully detected, with 5.04 false positive regions per breast.
    3. Lesion segmentation. One of the main challenges for existing lesion segmentation methods in breast DCE-MRI is that they require the ROI enclosing a lesion to be small in order to segment the lesion successfully. We propose a lesion segmentation method based on naive Bayes and Markov random fields. Our method also requires a user-supplied ROI, but is not sensitive to its size. The ROI selected in a DCE-MR image is modeled as a connected graph with local Markov properties, in which each voxel of the image is a node. Three edge potentials are proposed to encourage smoothness of the segmented regions. In a validation on 72 lesions, our method outperforms a baseline fuzzy c-means method and another closely related breast MRI lesion segmentation method, showing higher overlap with the ground truth.
    4. Feature analysis and lesion classification. The challenge of feature analysis in breast DCE-MRI is that different types of lesions can share similar features. In our study, we extract various morphological, textural and kinetic features of the lesions and apply three classifiers to label them. In the morphological feature analysis, we propose minimum volume enclosing ellipsoid (MVEE) based features that measure the similarity between a lesion and its MVEE. In statistical testing on 72 lesions, the MVEE-based features are significant in differentiating malignant from benign lesions.
    5. CAD applications. The proposed CAD system is versatile. We show two scenarios in which a radiologist can use the system. In the first, a user selects a rectangular region of interest (ROI) as input, and the CAD system automatically localizes the lesion in the ROI and classifies it as benign or malignant. In the second, the CAD system acts as a "second reader" that fully automatically identifies all malignant regions. At the time of writing, this is the first automated CAD system capable of carrying out all these processes without human interaction. In this thesis, we evaluated the proposed image registration, lesion detection, lesion segmentation, feature extraction and lesion classification on a relatively small database, which makes conclusions about generalizability difficult. In future work, the system requires clinical testing on a large dataset in order to advance this breast MRI CAD towards reducing image interpretation time, eliminating unnecessary biopsies and improving radiologists' cancer identification sensitivity.
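    The segmentation component (point 3) can be illustrated with a toy 2D version of naive-Bayes-plus-MRF labelling optimised by iterated conditional modes (ICM). The Gaussian class models, Potts smoothness weight and synthetic image below are assumptions for illustration only; the thesis operates on 3D DCE-MRI voxels with its own learned edge potentials.

```python
# Toy 2D sketch: per-pixel Gaussian class likelihoods combined with a Potts
# smoothness prior, optimised by ICM. Not the thesis's 3D method.
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.1, (32, 32))                 # background intensities
img[10:20, 10:20] = rng.normal(0.8, 0.1, (10, 10))   # bright "lesion" patch

means = np.array([0.2, 0.8])   # class intensity models: background, lesion
stds = np.array([0.1, 0.1])
beta = 1.5                     # Potts smoothness weight (illustrative)

def data_energy(x, k):
    # Negative log-likelihood of intensity x under Gaussian class k.
    return 0.5 * ((x - means[k]) / stds[k]) ** 2

labels = (img > 0.5).astype(int)   # crude initial labelling
H, W = img.shape
for _ in range(5):                 # ICM sweeps
    for i in range(H):
        for j in range(W):
            best_k, best_e = 0, np.inf
            for k in (0, 1):
                e = data_energy(img[i, j], k)
                # Potts prior: penalise disagreement with 4-neighbours.
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        e += beta * (labels[ni, nj] != k)
                if e < best_e:
                    best_k, best_e = k, e
            labels[i, j] = best_k

print("segmented lesion pixels:", int(labels.sum()))  # roughly the 10x10 block
```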
  • Item
    Audience experience in domestic videogaming
    DOWNS, JOHN ( 2014)
    Videogames are frequently played socially, but not all participants actively play. Audience members observe gameplay, often participating in and experiencing the game indirectly. While the existence of non-playing audience members has previously been acknowledged, there have been few attempts to understand what activities audience members engage in while watching videogames, or how their experience is affected by different aspects of the game and the social situation. This thesis presents the first substantial body of empirical work on audience behaviour and experience in social videogaming sessions. Existing work was reviewed in a number of areas of literature, including the sociality of gameplay, the increasing role of physicality and physical actions in gameplay, and the role of audiences in HCI. Three studies were then conducted based on the research question: how do the sociality and physicality of videogaming sessions influence audience experience? An initial exploratory observational study (N = 6 families) examined the types of activities that audiences engage in while watching highly physical videogames in their homes. This study indicated that audience members can adopt a variety of ephemeral roles that provide them with opportunities to interact with one another, the players, and the game technology. Additionally, participants reported that the physicality of the gameplay heavily influenced their experience. The second study, a naturalistic experiment (N = 134), used a mixed-model analysis of the factors of game physicality and turn anticipation. It found that anticipation of a turn affects the experience of both audience members and players, and that highly physical games result in more positive audience experiences, although the relationship between physicality and experience is not straightforward. A third study, also an experiment (N = 24), examined the influence of game physicality and visual attention on audience experience in a mediated setting, and a cross-study comparison identified a strong interplay between social context and the experience of physicality. Overall, this thesis contributes an understanding of how sociality, physicality, and the interplay between the two can influence audience behaviour and experience. These findings can inform the design of novel games and interactive experiences that incorporate physicality, turn anticipation, and opportunities for different types of participation in order to influence and enhance audience experience.
  • Item
    Real-time feedback for surgical simulation using data mining
    Zhou, Yun ( 2014)
    Surgical trainees devote years to mastering the skills required to perform surgery safely. Traditionally, they refine their psychomotor skills by practising on plastic bones or cadavers under the supervision of expert surgeons, who guide them through surgical procedures while providing feedback on the quality of their technique. However, this approach has limitations, including a shortage of cadaver bones, limited availability of expert supervision, and the subjective nature of surgical skill assessment. To address these limitations, new techniques such as 3D illusion, haptic feedback and augmented reality have significantly improved the realism of surgical simulators. Such simulators have the potential to provide a cost-effective platform that allows trainees to practise many surgical cases of varying difficulty, with the flexibility of repeated practice at their own convenience. However, most simulators lack automated performance assessment and feedback, which limits their applicability as self-guided training systems. In this thesis, we aim to deliver automated performance assessment and feedback in a virtual simulation environment. Automated performance assessment provides information on the quality of the surgical result, a critical component of a self-guided training platform. A large number of recent studies have focused on scoring the outcome of surgical tasks. However, such scores are typically based on the result of a surgical task (such as the shape of a surgical end-product) and ignore the rich information provided by real-time performance attributes, such as motion records. Furthermore, since this assessment is delivered at the end of each task, it offers no opportunity to identify and address mistakes as they occur. We propose an event-based framework that provides online assessment at different temporal granularities. Our evaluations show that the proposed framework provides accurate performance assessment using both motion records and end-product information. Although automated performance assessment provides an expertise score that summarises surgical performance, a single score has limited utility for improving surgical technique, which is equally important. Trainees need constructive, human-understandable feedback to refine their psychomotor skills. To this end, we propose a random forest based approach to generate meaningful automated real-time performance feedback, and our evaluation demonstrates that it can significantly improve surgical technique. However, this random forest based method assumes that all drilling movements made by experts are of "expert quality" and that all operations made by trainees are suboptimal. This hampers the model training process and leads to lower accuracy. We therefore propose a pattern-based approach that captures the differences in technique between experts and trainees and delivers real-time feedback to improve performance, while avoiding the assumption that the quality of drill strokes is "polarised" by expertise. Our evaluation results show that the proposed approach identifies the stage of the surgical procedure correctly and provides constructive feedback that assists surgical trainees in improving their technique. A further challenge for automated performance assessment is that existing evaluation models are hard to extend to new specimens: classical machine learning approaches require a new set of human expert examples collected from each new specimen. To eliminate this need, we propose a transfer learning framework that adapts a classifier built on a single specimen to multiple specimens. Once a classifier is trained, we translate new specimens' features into the original feature space, which allows us to carry out performance evaluation on different specimens using the same classifier. In summary, the major contribution of this thesis is the development of a self-guided training platform that delivers automated assessment and feedback using data mining techniques.
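    A minimal sketch of the random-forest feedback step follows, assuming invented per-stroke features (drill force, stroke speed, stroke duration) and synthetic expert/trainee data. It labels each stroke and converts the class probability into a prompt; note that it deliberately retains the expert-vs-trainee labelling assumption that the thesis's pattern-based approach removes.

```python
# Hedged sketch: classify per-stroke motion features as expert-like or not
# and turn the class probability into a simple real-time prompt. Feature
# names, distributions and thresholds are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Synthetic strokes: [drill force, stroke speed, stroke duration]
expert = rng.normal([0.4, 1.0, 0.3], 0.05, (200, 3))
trainee = rng.normal([0.7, 0.6, 0.6], 0.15, (200, 3))
X = np.vstack([expert, trainee])
y = np.array([1] * 200 + [0] * 200)    # 1 = assumed expert-quality stroke

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def feedback(stroke):
    # Probability that the stroke looks expert-like drives the prompt.
    p_expert = clf.predict_proba([stroke])[0, 1]
    if p_expert < 0.5:
        return f"adjust technique (expert-likeness {p_expert:.2f})"
    return f"good stroke (expert-likeness {p_expert:.2f})"

print(feedback([0.75, 0.55, 0.7]))     # likely flagged as suboptimal
```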
  • Item
    Early detection system for Distributed Denial of Service attacks
    VELAUTHAPILLAI, THANESWARAN ( 2014)
    In today's environment, the online services of any organisation can become a target of Distributed Denial of Service (DDoS) attacks, and these attacks continue to become more complex and sophisticated. Large-scale attacks against enterprises and governments around the world have revealed the inability of existing solutions to address DDoS attacks effectively. The main tasks of a DDoS defence system are to detect attacks accurately at an early stage and to respond proactively to stop the oncoming attack. Detection is most reliable when undertaken close to the victim, because attack traffic concentrates there; however, early attack detection, i.e. detecting that an attack is underway before the attack traffic reaches the victim, must clearly be undertaken closer to the attack sources. Once an attack has been detected, protection can be achieved with the help of backbone routers, e.g. by blocking traffic from identified attacking sources. However, detecting an attack closer to the sources, in other words far from the victim, is less reliable because attack traffic is sparser at these points. A naive solution is to measure traffic throughout the network and report all measurements to a centralized system, which raises an attack alert if the total detected traffic to a given victim exceeds some threshold. This traditional client-server model represents a central point of failure and a performance bottleneck. We therefore propose a distributed, cooperative defence system, which we call Gossip Detector, that provides early attack detection in the intermediate network between the attack sources and the victim. Gossip Detector uses a gossip-based information exchange protocol to share network traffic information in a cooperative overlay network, with regard to a given victim that the overlay is set up to defend. The defence system is distributed and deployed at intermediate network routers. Through the exchange of traffic measurements, each node in the cooperative system can reach a decision as to whether an attack is underway, and thereby the entire system can reach a consensus; a response can then be instigated to reduce the impact of the attack on the victim. Detecting an attack early is challenging because network delays become a dominant factor. The research approach adopted in this dissertation includes mathematical analysis of the proposed defence system and evaluation in a simulated network under different flooding-based attacks using the ns-2 simulator. The simulation results show that Gossip Detector can detect attacks within 0.5 seconds, with a probability of attack detection as high as 0.99 and a probability of false alarm below 0.01, on a topology with an average router delay of 12 ms. This compares favourably against other widely known methods, including change-point detection, TTL analysis and wavelet analysis. In both analytical and simulation-based results, we demonstrate the effectiveness of the defence system in terms of early attack detection, and we show the trade-offs with consumed bandwidth.
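    The gossip-based aggregation idea can be sketched with a push-sum simulation, in which every overlay router converges to an estimate of the network-wide traffic rate toward the victim and compares it to a threshold. The topology size, traffic rates and threshold below are illustrative assumptions and omit the packet-level dynamics and consensus details analysed in the dissertation.

```python
# Push-sum gossip sketch: nodes repeatedly halve and forward (value, weight)
# pairs; value/weight at every node converges to the average of the initial
# local rates, so each router can estimate the global total without a
# central collector. All numbers are illustrative.
import random

random.seed(0)
N = 20
# Baseline traffic toward the victim, plus extra flood traffic at 8 routers.
rate = [random.uniform(50, 150) for _ in range(N)]
for i in random.sample(range(N), 8):
    rate[i] += 500.0
THRESHOLD = 5000.0          # total pkts/s above which an attack is declared

value, weight = rate[:], [1.0] * N
for _ in range(40):         # gossip rounds
    out = [(v / 2.0, w / 2.0) for v, w in zip(value, weight)]
    value = [v / 2.0 for v in value]
    weight = [w / 2.0 for w in weight]
    for dv, dw in out:
        j = random.randrange(N)        # send half the mass to a random peer
        value[j] += dv
        weight[j] += dw

est_total = value[0] / weight[0] * N   # node 0's estimate of the global rate
print(f"node 0 estimate: {est_total:.0f} pkts/s ->",
      "ATTACK" if est_total > THRESHOLD else "normal")
```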
  • Item
    A model for digital forensic readiness in organisations
    ELYAS, MOHAMED ( 2014)
    Organisations are increasingly reliant upon information systems for almost every facet of their operations. As a result, there are legal, contractual, regulatory, security and operational reasons why this reliance often translates into a need to conduct digital forensic investigations. However, conducting digital forensic investigations and collecting digital evidence is a specialised and challenging task, exacerbated by the increased complexity of corporate environments, the diversity of computing platforms, and the large-scale digitisation of businesses. There is agreement in both the professional and academic literature that in order for organisations to meet this challenge, they must develop 'digital forensic readiness': the proactive capability to collect, analyse and preserve digital information. Unfortunately, although digital forensic readiness is becoming a legal and regulatory requirement in many jurisdictions, studies show that most organisations have not developed a significant capability in this domain. A key issue facing organisations intending to develop a forensic readiness capability is the lack of comprehensive and coherent guidance, in both the academic and professional literature, on how forensic readiness can be achieved. A review of the literature conducted as part of this study found that the academic and professional discourse on forensic readiness is fragmented and dispersed: it does not build cumulatively on prior knowledge and is not informed by empirical evidence. Further, there is a lack of maturity in the discourse, rooted in the reliance on informal definitions of key terms and concepts. For example, there is little discussion and understanding of the key organisational factors that contribute to forensic readiness, the relationships between these factors, and their precise definitions. Importantly, there is no collective agreement on the primary motivating factors for organisations to become forensically ready. This research project therefore addresses the following research questions.
    Research Question 1: What objectives can organisations achieve by being forensically ready?
    Research Question 2: How can forensic readiness be achieved by organisations?
    The second question in turn suggests two sub-questions.
    Sub-Question 2.1: What factors contribute to making an organisation forensically ready?
    Sub-Question 2.2: How do these factors interact to achieve forensic readiness in organisations?
    A systematic review approach and coding techniques were used to synthesise key elements of the vast and largely fragmented body of knowledge on forensic readiness into a more holistic and coherent understanding. This led to the development of a comprehensive model that explains how forensic readiness can be achieved and what organisations can achieve by being forensically ready. The proposed model has been extensively validated through multiple focus groups and a multi-round Delphi survey involving experienced computer forensic experts from twenty countries and diverse computer forensic backgrounds. The study found four primary objectives for developing a forensic readiness capability: 1) to manage digital evidence; 2) to conduct internal digital forensic investigations; 3) to comply with regulations; and 4) to achieve other non-forensic objectives (e.g. improved security management). The study also identified the factors that contribute to forensic readiness. These are: 1) a strategy that draws the map for a forensically ready system; 2) human expertise to perform forensic tasks; 3) awareness of forensics among organisational staff; 4) software and hardware to manage digital evidence; 5) a system architecture that is tailored for forensics; 6) policies and procedures that outline forensic best practice; and 7) training to educate staff on their forensic responsibilities. Further, the study found three additional organisational factors external to the forensic program: 1) adequate support from senior management; 2) an organisational culture that is supportive of forensics; and 3) good governance. This study makes significant theoretical contributions by introducing a more comprehensive model of forensic readiness that: 1) provides formal definitions of key concepts in forensic readiness; 2) describes the key factors that contribute to forensic readiness; 3) describes the relationships and interactions between the factors; 4) defines a set of dimensions and properties by which forensic readiness is characterised; and 5) describes the key objectives organisations can achieve by being forensically ready. The study also makes significant contributions to practice. A key attribute of the digital forensic readiness model is its depth (in terms of the various dimensions and properties of each factor), which enables its use as an instrument to assess and guide organisational forensic readiness. Furthermore, this research increases the marketability of forensic readiness by introducing a well-defined list of objectives organisations can achieve by developing a forensic capability.
  • Item
    Location proof architectures
    SENEVIRATNE, JANAKA ( 2014)
    Upcoming location-based services, such as pay-as-you-drive insurance, mandate verified locations. To enable such services, Location Proof Architectures (LPAs) have been proposed in the literature to verify or prove a user's location. Specifically, an LPA allows a user (or a device on behalf of its user) to obtain a proof of presence at a location from a trusted third party. In addition to guarding against cheating users who may claim false locations, another major concern in an LPA is preserving user location privacy. To achieve this, a user's identity and location data should be maintained separately, in tandem with additional measures that avoid leaking sensitive identity and location data. We identify two types of location proof architectures: 1) sporadic location proofs for specific user locations; and 2) continuous location proofs for user routes. In this thesis, we present two sporadic LPAs. First, we propose an LPA in which a user cannot falsely claim a location, and which preserves user privacy by verifying the user's identity and location independently. Second, we propose an LPA that uses pseudonyms; we present a trusted-third-party-free group pseudonym registration system for the LPA and show that our approach can achieve a guaranteed degree of privacy. This thesis also introduces a framework for continuous LPAs, in which a verifier receives a sequence of location samples along a user route and assigns a degree of confidence to each possible user route. Specifically, we describe a stochastic model that associates a degree of confidence with a user route based on the distribution pattern of location samples.
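    A minimal sketch of a sporadic location proof follows, assuming a single trusted witness and an HMAC tag standing in for a real signature scheme; a pseudonym in the claim keeps the real identity out of the proof. Key distribution, pseudonym registration and the privacy guarantees analysed in the thesis are simplified away.

```python
# Toy sporadic location proof: a trusted witness authenticates a
# (pseudonym, location, time) claim so a verifier can later check presence
# without learning the user's real identity. HMAC with a shared key is a
# stand-in for a proper signature scheme; coordinates are arbitrary.
import hashlib
import hmac
import json
import time

WITNESS_KEY = b"witness-secret"   # shared by witness and verifier (toy setup)

def issue_proof(pseudonym, lat, lon):
    claim = {"pseudonym": pseudonym, "lat": lat, "lon": lon,
             "ts": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(WITNESS_KEY, payload, hashlib.sha256).hexdigest()
    return claim, tag

def verify_proof(claim, tag):
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(WITNESS_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

claim, tag = issue_proof("pseud-42", -37.7983, 144.9610)
print(verify_proof(claim, tag))   # True: the pseudonym was at the location
```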
  • Item
    Service value in business-to-business cloud computing
    PADILLA, ROLAND ( 2014)
    This thesis is concerned with determining and measuring the components of service value in the business-to-business cloud computing context. Although service value measurement and its perception have been identified as key issues for researchers and practitioners, theoretical and empirical studies have faced great challenges in measuring perceptions of service value across numerous business contexts. The thesis first determines the components of service value and then measures the service value perceptions of users in a business-to-business cloud computing context. In this thesis, I:
    • undertook qualitative in-depth interviews (N=21) of managers responsible for deciding on the adoption and maintenance of cloud computing services. Two key findings are that the four components of an established business-to-consumer service value model are appropriate in a business-to-business cloud computing context, and that an additional component, which we call cloud service governance, applies and does not fit the existing four components;
    • conducted a survey (N=328) of cloud computing practitioners to demonstrate that the findings from the qualitative in-depth interviews generalise across a number of industry sectors and geographical locations;
    • assessed the measurement models, comprising both reflective and formative constructs, and the structural model using partial least squares structural equation modeling, and provided evidence for specifying Service Value as a formative second-order hierarchical latent variable using a sequential latent variable score method;
    • demonstrated that Service Equity is not a statistically significant component of service value in the first-order model, that Service Quality is consistently significant in both the first-order model and the second-order formative model, and that the additional construct, Cloud Service Governance, is significant; and
    • for the first time, fully tested a reliable service value instrument for use by customers of cloud computing, aiming to help cloud service providers enhance customer satisfaction and increase repurchase intentions.
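    The sequential latent variable score method can be roughed out as the two-stage computation below, on simulated survey data. The indicator blocks, the "other_component" construct name, the outcome and the regression weights are placeholders; real PLS-SEM estimation (iterative indicator weighting, bootstrapped significance tests) is not reproduced.

```python
# Rough two-stage illustration: compute first-order construct scores from
# their indicators, then reuse those scores as formative indicators of a
# second-order Service Value construct. Simulated data; not PLS-SEM proper.
import numpy as np

rng = np.random.default_rng(2)
n = 328                                  # matches the thesis survey size
constructs = ["service_quality", "service_equity",
              "cloud_service_governance", "other_component"]
blocks = {c: rng.normal(size=(n, 3)) for c in constructs}   # 3 indicators each

def lv_score(X):
    # Stage 1: a crude first-order score (standardised block mean); real
    # PLS-SEM derives the indicator weights iteratively.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    return Z.mean(axis=1)

stage1 = np.column_stack([lv_score(X) for X in blocks.values()])

# Stage 2: first-order scores act as formative indicators; weights come from
# regressing an outcome (e.g. repurchase intention) on the scores.
outcome = stage1 @ np.array([0.5, 0.05, 0.4, 0.3]) + rng.normal(0, 0.5, n)
weights, *_ = np.linalg.lstsq(stage1, outcome, rcond=None)
service_value = stage1 @ weights         # second-order latent variable score
print({c: round(float(w), 2) for c, w in zip(constructs, weights)})
```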