Computing and Information Systems - Theses

Now showing 1 - 10 of 124
  • Item
    Breast cancer detection and diagnosis in dynamic contrast-enhanced magnetic resonance imaging
    Liang, Xi (2013)
    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast is a medical imaging tool used to detect and diagnose breast disease. A DCE-MR image is a series of three-dimensional (3D) breast MRI scans acquired before and after the injection of paramagnetic contrast agents, forming a 4D image (3D spatial + time). DCE-MRI allows analysis of the variation in magnetic resonance (MR) signal intensity over time, before and after the injection of contrast agents. The interpretation of 4D DCE-MRI images can be time consuming due to the amount of information involved, and motion artifacts between the image scans further complicate the diagnosis. A DCE-MR image contains a large amount of data and is challenging to interpret even for an experienced radiologist. A computer-aided diagnosis (CAD) system is therefore desirable to assist the diagnosis of abnormal findings in DCE-MR images. We propose a fully automated CAD system comprising five novel components: a new image registration method to recover motion between MR image acquisitions, a novel lesion detection method to identify all suspicious regions, a new lesion segmentation method to draw lesion contours, a novel lesion feature characterization method, and a classifier that labels the automatically detected lesions using our proposed features. The following lists the challenges found in most CAD systems and the corresponding contributions of our CAD system for breast DCE-MRI.
    1. Image registration. One challenge in the interpretation of DCE-MRI is motion artifacts, which make the pattern of tissue enhancement unreliable. Image registration is used to recover rigid and non-rigid motion between the 3D image sequences in a 4D breast DCE-MRI. Most existing B-spline based registration methods require lesion segmentation in breast DCE-MRI to preserve the lesion volume before performing the registration. We propose an automatic method for generating regularization coefficients in B-spline based registration of breast DCE-MRI, under which tumor regions are transformed in a rigid fashion. Our method performs no lesion segmentation but instead computes a map reflecting tissue rigidity. In the evaluation, registration using our automatically generated rigidity coefficients is compared against manually assigned coefficients for the rigidity and smoothness terms, on 30 synthetic and 40 clinical pairs of pre- and post-contrast MRI scans. The results show that tumor volumes are well preserved by a rigidity term (2.25% ± 4.48% volume change) compared to a smoothness term (22.47% ± 20.1%). On our dataset, the volume preservation achieved with the automatically generated coefficients is comparable to that of manually assigned rigidity coefficients (2.29% ± 13.25%), with no significant difference in volume changes (p > 0.05).
    2. Lesion detection. After motion has been corrected by our registration method, we locate regions of interest (ROIs) using our lesion detection method. The aim is to highlight suspicious ROIs, reducing both the time radiologists spend searching for ROIs and the possibility of small regions being overlooked. A low signal-to-noise ratio is a general challenge in lesion detection in MRI. In addition, the value range of a feature of normal tissue in one patient can overlap with that of malignant tissue in another patient (e.g., tissue intensity values and enhancement). Most existing lesion detection methods suffer from high false positive rates caused by blood vessels or motion artifacts. In our method, we locate suspicious lesions by applying thresholds to essential features, which are normalized to reduce variation between patients. We then exclude blood vessels and motion artifacts from the initial results by applying filters that differentiate them from other tissues. In an evaluation of the system on 21 patients with 50 lesions, all lesions were successfully detected, with 5.04 false positive regions per breast. A minimal sketch of this detection idea appears after this abstract.
    3. Lesion segmentation. One of the main challenges for existing lesion segmentation methods in breast DCE-MRI is that they require the ROI enclosing a lesion to be small in order to segment the lesion successfully. We propose a lesion segmentation method based on naive Bayes and Markov random fields. Our method also requires a user-selected ROI, but it is not sensitive to the size of the ROI. In our method, the ROI selected in a DCE-MR image is modeled as a connected graph with local Markov properties, in which each voxel of the image is regarded as a node. Three edge potentials of the graph are proposed to encourage smoothness of the segmented regions. In a validation on 72 lesions, our method performs better than a baseline fuzzy c-means method and another closely related breast MRI lesion segmentation method, showing higher overlap with the ground truth.
    4. Feature analysis and lesion classification. The challenge of feature analysis in breast DCE-MRI is that different types of lesions can share similar features. In our study, we extract various morphological, textural and kinetic features of the lesions and apply three classifiers to label them. In the morphological feature analysis, we propose minimum volume enclosing ellipsoid (MVEE) based features that measure the similarity between a lesion and its MVEE (see the second sketch following this abstract). In statistical testing on 72 lesions, the MVEE-based features are significant in differentiating malignant from benign lesions.
    5. CAD applications. The proposed CAD system is versatile. We show two scenarios in which a radiologist makes use of the system. In the first scenario, a user selects a rectangular region of interest (ROI) as input, and the CAD system automatically localizes the lesion in the ROI and classifies it as benign or malignant. In the second scenario, the CAD system acts as a “second reader” that fully automatically identifies all malignant regions. At the time of writing, this is the first automated CAD system capable of carrying out all of these processes without any human interaction.
    In this thesis, we evaluated the proposed image registration, lesion detection, lesion segmentation, feature extraction and lesion classification on a relatively small database, which makes conclusions about generalizability difficult. In future work, the system requires clinical testing on a large dataset in order to advance this breast MRI CAD system so that it reduces image interpretation time, eliminates unnecessary biopsies and improves radiologists' sensitivity in identifying cancer.
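    A minimal sketch of the detection idea in contribution 2, assuming the 4D data has been reduced to one pre- and one post-contrast volume; the threshold, the normalization and the vessel-filtering step are illustrative placeholders, not the parameters used in the thesis.

    ```python
    import numpy as np

    def detect_suspicious_regions(pre, post, enhancement_thresh=0.6):
        """Flag voxels whose normalized contrast enhancement exceeds a threshold.

        pre, post: 3D numpy arrays holding pre- and post-contrast MR volumes.
        Returns a boolean mask of candidate lesion voxels.
        """
        # relative signal increase after contrast injection
        enhancement = (post - pre) / (pre + 1e-6)
        # min-max normalize per volume to reduce inter-patient variation
        norm = (enhancement - enhancement.min()) / (np.ptp(enhancement) + 1e-6)
        mask = norm > enhancement_thresh
        # a real system would now filter out blood vessels and motion
        # artifacts (e.g., by shape: vessels are thin, elongated structures)
        return mask
    ```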
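    For the MVEE-based morphological features in contribution 4, one plausible construction (the thesis's exact feature definitions may differ) is to compute the MVEE with Khachiyan's iterative algorithm and compare the lesion's volume with the ellipsoid's:

    ```python
    import numpy as np

    def mvee(points, tol=1e-3):
        """Minimum volume enclosing ellipsoid (x-c)'A(x-c) <= 1 of an
        N x d point set, via Khachiyan's iterative algorithm."""
        N, d = points.shape
        Q = np.vstack([points.T, np.ones(N)])      # lifted (d+1) x N points
        u = np.full(N, 1.0 / N)                    # weights on the points
        err = tol + 1.0
        while err > tol:
            X = Q @ np.diag(u) @ Q.T
            M = np.einsum('ij,ji->i', Q.T, np.linalg.solve(X, Q))
            j = int(np.argmax(M))                  # most "outlying" point
            step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
            new_u = (1.0 - step) * u
            new_u[j] += step
            err = float(np.linalg.norm(new_u - u))
            u = new_u
        c = points.T @ u                           # ellipsoid centre
        A = np.linalg.inv(points.T @ np.diag(u) @ points - np.outer(c, c)) / d
        return A, c

    def mvee_volume_ratio(lesion_voxels):
        """Shape feature for 3D lesions: lesion volume over MVEE volume
        (values near 1.0 indicate an ellipsoid-like, smooth lesion)."""
        A, _ = mvee(lesion_voxels.astype(float))
        ellipsoid_vol = (4.0 / 3.0) * np.pi / np.sqrt(np.linalg.det(A))
        return len(lesion_voxels) / ellipsoid_vol  # unit voxel volume assumed
    ```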
  • Item
    Digital forensics: increasing the evidential weight of system activity logs
    Ahmad, Atif (2007)
    The application of investigative techniques within digital environments has led to the emergence of a new field of specialization that may be termed ‘digital forensics’. Perhaps the primary challenge concerning digital forensic investigations is how to preserve evidence of system activity, given the volatility of digital environments and the delay between the time of the incident and the start of the forensic investigation. This thesis hypothesizes that system activity logs present in modern operating systems may be used for digital forensic evidence collection. This is particularly relevant in modern organizations, where there is growing recognition that forensic readiness may have considerable benefits in case of future litigation. This thesis investigates the weighting of evidence produced by system activity logs present in modern operating systems. The term ‘evidential weight’ is used loosely as a measure of the suitability of system activity logs for digital forensic investigations. The investigation is approached from an analytical perspective. The first contribution of this thesis is to determine the evidence collection capability of system activity logs through a simple model of the logging mechanism. The second contribution is the development of evidential weighting criteria that can be applied to system activity logs. A unique and critical role for system activity logs is also identified: establishing the reliability of other kinds of computer-derived evidence from hard disk media. The primary contribution of this thesis is the identification of a comprehensive range of forensic weighting issues arising from the use of log evidence that concern investigators and legal authorities. This contribution is made in a comprehensive analytical discussion utilizing both the logging model and the evidential weighting criteria. The practical usefulness of the resulting evidential weighting framework is demonstrated by rigorous and systematic application to a real-world logging system.
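    The thesis's logging model is not reproduced here, but the flavor of reasoning about evidential weight can be illustrated with a hypothetical log-record structure whose fields map onto weighting concerns such as timestamp accuracy and protection against post-hoc tampering (all field names are my own, not the thesis's):

    ```python
    import hashlib
    from dataclasses import dataclass

    @dataclass
    class LogRecord:
        """Hypothetical system activity log record, annotated with attributes
        that bear on evidential weight (illustrative only)."""
        timestamp: float   # weight depends on clock synchronization (e.g., NTP)
        source: str        # the subsystem that generated the event
        event: str         # the recorded system activity
        prev_digest: str   # hash chaining makes later tampering detectable

        def digest(self) -> str:
            payload = f"{self.prev_digest}|{self.timestamp}|{self.source}|{self.event}"
            return hashlib.sha256(payload.encode()).hexdigest()
    ```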
  • Item
    Strategic information security policy quality assessment: a multiple constituency perspective
    Maynard, Sean (2010)
    An integral part of any information security management program is the information security policy. The purpose of an information security policy is to define the means by which organisations protect the confidentiality, integrity and availability of information and its supporting infrastructure from a range of security threats. The tenet of this thesis is that the quality of information security policy is inadequately addressed by organisations. Further, although information security policies may undergo multiple revisions as part of a process development lifecycle and, as a result, may generally improve in quality, a more explicit, systematic and comprehensive process of quality improvement is required. A key assertion of this research is that a comprehensive assessment of information security policy requires the involvement of the multiple stakeholders in organisations that derive benefit from the directives of the information security policy. Therefore, this dissertation used a multiple-constituency approach to investigate how security policy quality can be addressed in organisations, given the existence of multiple stakeholders. The formal research question under investigation was: How can multiple constituency quality assessment be used to improve strategic information security policy? The primary contribution of this thesis to the Information Systems field of knowledge is the development of a model: the Strategic Information Security Policy Quality Model. This model comprises three components: a comprehensive model of quality components, a model of stakeholder involvement and a model for security policy development. The strategic information security policy quality model gives organisations a holistic perspective from which to manage the security policy quality assessment process. This research makes six main contributions:
    • It demonstrates that a multiple constituency approach is effective for information security policy assessment.
    • It develops a set of quality components for information security policy quality assessment.
    • It identifies that the efficiency of the security policy quality assessment process is critical for organisations.
    • It formalises security policy quality assessment within policy development.
    • It develops a strategic information security policy quality model.
    • It identifies improvements that can be made to the security policy development lifecycle.
    The outcomes of this research indicate that the security policy lifecycle can be improved by: enabling the identification of when different stakeholders should be involved; identifying the quality components that each of the different stakeholders should assess as part of the quality assessment; and showing organisations which quality components to include or to ignore based on their individual circumstances. This leads to a higher quality information security policy, and should have a positive impact on an organisation’s information security.
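    As a purely illustrative sketch of the assessment-planning idea (the stakeholder roles and quality components below are hypothetical placeholders, not the model's actual categories), stakeholder involvement might be represented as a mapping from roles to the quality components each is best placed to assess:

    ```python
    # Hypothetical roles and quality components, for illustration only.
    STAKEHOLDER_COMPONENTS = {
        "senior_management": ["strategic_alignment", "completeness"],
        "security_team": ["technical_accuracy", "enforceability"],
        "end_users": ["readability", "relevance_to_daily_work"],
        "auditors": ["regulatory_compliance", "internal_consistency"],
    }

    def assessment_plan(available_stakeholders):
        """Which quality components each available stakeholder should assess."""
        return {s: STAKEHOLDER_COMPONENTS[s]
                for s in available_stakeholders if s in STAKEHOLDER_COMPONENTS}

    print(assessment_plan(["security_team", "end_users"]))
    ```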
  • Item
    The effect of Transactive Memory Systems on performance in virtual teams
    Mohamed Ariff, Mohamed (2013)
    Although virtual teams are increasingly common in organizations, research on the formation of Transactive Memory Systems (TMS) in virtual teams and its effect on team performance is relatively rare. Previous studies have reported that TMS quality influences team performance in face-to-face teams. However, the effect of TMS quality on the performance of virtual teams has not been adequately researched in past studies. This study extends past research and proposes a model in which task interdependence and TMS quality jointly influence the performance of virtual teams. Based on the conceptual model of Brandon and Hollingshead, this study hypothesized: (1) the effect of the quality of the TMS formation process on TMS quality; (2) the effect of TMS quality on virtual teams' performance; and (3) the moderating effect of task interdependence on the relationship between TMS quality and virtual teams' performance. This study was undertaken in three phases. Firstly, a conceptual phase was conducted to investigate and analyse the existing literature on virtual teams, virtual teams' key characteristics, virtual teams' performance and TMS. The conceptual phase resulted in the development of a research model and relevant hypotheses. Secondly, in the exploratory phase, four separate questionnaire surveys were conducted. The exploratory phase helped develop and test all of the instruments used in the study, producing a reliable and valid set of instruments for the final, confirmatory phase. In the confirmatory phase, an online survey was conducted to test the research model and the proposed hypotheses. This phase provided a broader understanding of TMS formation in virtual teams and of the joint effect of task interdependence and TMS quality on virtual teams' performance. The results of this study indicated that: (1) the quality of the TMS utilization process has a positive effect on virtual teams' performance; (2) TMS quality has a positive effect on virtual teams' performance; (3) task interdependence has a significant negative moderating effect on the relationship between TMS quality and virtual teams' performance; and (4) TMS quality partially mediates the effect of task interdependence on virtual teams' performance. However, the results failed to support two hypothesized relationships: the effects of (1) the quality of the TMS construction process and (2) the quality of the TMS evaluation process on TMS quality. This study is the first to investigate TMS quality in a field study of a virtual team environment, as previous studies on TMS have focused on experimental virtual teams. The main contribution of this study is a theoretical model that explains the effect of TMS quality on virtual teams' performance. This study also contributes to theory by extending Brandon and Hollingshead's model of the TMS formation process. The study entailed several methodological improvements over previous studies, including: (1) new instrument items to measure the quality of the TMS formation process construct; (2) a new two-dimensional TMS quality construct employing the 'who knows what' and 'who does what' dimensions respectively; and (3) a content adequacy assessment using the Q-sort technique, which helped demonstrate the validity and reliability of the instrument items prior to actual data collection.
    This study provides organizations with a better comprehension of the TMS formation process that affects virtual teams' performance. It also offers organizations an explanation of how task interdependence and TMS quality jointly affect virtual teams' performance.
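    Finding (3) is a moderation effect, which is conventionally tested with an interaction term in a regression model. A minimal sketch on simulated stand-in data (the thesis used survey data from real virtual teams; variable names here are mine):

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "tms_quality": rng.normal(size=n),
        "task_interdependence": rng.normal(size=n),
    })
    # build in a negative interaction, mirroring the direction of finding (3)
    df["performance"] = (0.5 * df["tms_quality"]
                         - 0.2 * df["tms_quality"] * df["task_interdependence"]
                         + rng.normal(scale=0.5, size=n))

    # 'a * b' expands to both main effects plus the a:b interaction
    # (the moderation term whose sign and significance are of interest)
    model = smf.ols("performance ~ tms_quality * task_interdependence", data=df).fit()
    print(model.params)
    ```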
  • Item
    Understanding the business benefits of ERP system use
    Staehr, Lorraine Jean (2006)
    ERP systems are large, complex, integrated software packages used for business transaction processing by thousands of major organizations worldwide. Yet outcomes from ERP system implementation and use can be very different, and current understanding of how and why such variation exists is limited. Since most studies of ERP systems to date have focused on ERP implementation, this research focused on the post-implementation period. The aim was to better understand the 'what', 'how' and 'why' of achieving business benefits from ERP systems during ERP use. Achieving business benefits from ERP systems was considered as a process of organizational change occurring over time within various societal and organizational contexts. A retrospective, interpretive case study approach was used to study this process. The post-implementation periods of four Australian manufacturing organizations that had implemented ERP systems were studied. This study makes three important contributions to the information systems research literature. First, a new framework was developed to explain 'how' and 'why' business benefits were achieved from ERP systems. This explanatory framework is theoretically based and firmly grounded in the empirical data. Three types of themes, along with the interrelationships between them, were identified as influencing the business benefits achieved from ERP systems. The first group of themes, the process themes, are 'Education, training and support', 'Technochange management' and 'People resources'. The second group of themes, the outcome themes, are 'Efficient and effective use of the ERP system', 'Business process improvement' and 'New projects to leverage off the ERP system'. The third group of themes, the contextual themes, are the 'External context', the 'Internal context' and the 'ERP planning and implementation phases'. This new framework makes a significant contribution to understanding how and why some organizations achieve more business benefits from ERP systems than others. Second, the case studies provide a rich description of four manufacturing organizations that have implemented and used ERP systems. Examining the 'what' of business benefits from ERP systems in these four manufacturing organizations resulted in a confirmed, amended and improved version of the Shang and Seddon (2000) ERP business benefits framework. This replication and extension of previous research is the third contribution of this study. The results of this research are of interest not only to information systems researchers, but also to information systems practitioners and senior management in organizations that either plan to implement, or have already implemented, ERP systems. Overall this research provides an improved understanding of business benefits from ERP systems and a sound foundation for future studies of ERP system use.
  • Item
    Hierarchical clustering and summarization of network traffic data
    Mahmood, Abdun Naser (2008)
    An important task in managing IP networks is understanding the different types of traffic that are utilizing a network, based on a given trace of the packets or flows in the network. One of the key challenges in this task is the volume and complexity of the data that is available in traffic traces. What is needed by network managers in this context is a concise report of the significant traffic patterns that are present in the network. In this thesis, we address the problem of how to generate a succinct traffic report that contains a set of aggregated traffic flows, such that each aggregate flow corresponds to a significant traffic pattern in the network. We view the problem of generating a report of the significant traffic patterns in a network as a form of clustering problem. In particular, some distance-based hierarchical clustering techniques have advantages in terms of scalability when analyzing the types of large traffic traces that arise in this context. However, there are several important problems that need to be addressed before we can effectively use these types of clustering techniques on network traffic traces. The first research problem we address is how to handle non-numeric attributes that appear in network traffic data, such as attributes with a categorical or hierarchical structure. We have proposed a hierarchical similarity measure that is suitable for comparing hierarchical attributes in network traffic data. We have then developed a one-pass, hierarchical clustering scheme that can exploit the structure of hierarchical attributes in combination with categorical and numerical attributes. We demonstrate that our clustering scheme achieves significant improvements in both accuracy and execution time on a standard benchmark dataset, compared to an existing approach based on frequent itemset clustering. The second research problem we address is how to improve the scalability of our hierarchical clustering scheme when computing resources are limited. We propose an adaptive, two-stage sampling technique, which controls the rate at which records from frequently seen patterns are received by our clustering scheme. This enables more computational resources to be allocated to clustering new or unusual traffic patterns. We demonstrate that our two-stage sampling technique can identify less frequent traffic patterns with greater accuracy than when traditional systematic sampling is used. The third research problem we address is how to generate a concise yet accurate summary report from the results of our hierarchical clustering. We present two approaches to summarization, based on the size and the homogeneity of the clusters in the hierarchical cluster tree. We demonstrate that these approaches to summarization can substantially reduce the final report size with little impact on the accuracy of the report.
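    The thesis's similarity measure is not reproduced here, but the idea of a hierarchy-aware distance can be sketched for IPv4 addresses (longest common prefix) combined with categorical and numeric flow attributes; the equal attribute weighting below is an arbitrary placeholder:

    ```python
    import ipaddress

    def ip_similarity(a: str, b: str) -> float:
        """Similarity in [0, 1] from the longest common prefix of two IPv4 addresses."""
        diff = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
        common_prefix_bits = 32 - diff.bit_length()   # high-order bits that agree
        return common_prefix_bits / 32.0

    def flow_distance(f1: dict, f2: dict) -> float:
        """Toy combined distance over hierarchical, categorical and numeric attributes."""
        d_ip = 1.0 - ip_similarity(f1["src_ip"], f2["src_ip"])
        d_proto = 0.0 if f1["proto"] == f2["proto"] else 1.0
        d_bytes = abs(f1["bytes"] - f2["bytes"]) / max(f1["bytes"], f2["bytes"], 1)
        return (d_ip + d_proto + d_bytes) / 3.0       # unweighted average, for illustration

    print(flow_distance({"src_ip": "10.0.1.5", "proto": "tcp", "bytes": 1200},
                        {"src_ip": "10.0.1.9", "proto": "tcp", "bytes": 800}))
    ```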
  • Item
    Statistical modeling of multiword expressions
    Su, Kim Nam (2008)
    In natural languages, words can occur in single units called simplex words or in a group of simplex words that function as a single unit, called multiword expressions (MWEs). Although MWEs are similar to simplex words in their syntax and semantics, they pose their own sets of challenges (Sag et al. 2002). MWEs are arguably one of the biggest roadblocks in computational linguistics due to the bewildering range of syntactic, semantic, pragmatic and statistical idiomaticity they are associated with, and their high productivity. In addition, the large numbers in which they occur demand specialized handling. Moreover, dealing with MWEs has a broad range of applications, from syntactic disambiguation to semantic analysis in natural language processing (NLP) (Wacholder and Song 2003; Piao et al. 2003; Baldwin et al. 2004; Venkatapathy and Joshi 2006). Our goals in this research are: to use computational techniques to shed light on the underlying linguistic processes giving rise to MWEs across constructions and languages; to generalize existing techniques by abstracting away from individual MWE types; and finally to exemplify the utility of MWE interpretation within general NLP tasks. In this thesis, we target English MWEs due to resource availability. In particular, we focus on noun compounds (NCs) and verb-particle constructions (VPCs) due to their high productivity and frequency. Challenges in processing noun compounds are: (1) interpreting the semantic relation (SR) that represents the underlying connection between the head noun and modifier(s); (2) resolving syntactic ambiguity in NCs comprising three or more terms; and (3) analyzing the impact of word sense on noun compound interpretation. Our basic approach to interpreting NCs relies on the semantic similarity of the NC components using firstly a nearest-neighbor method (Chapter 5), then verb semantics based on the observation that it is often an underlying verb that relates the nouns in NCs (Chapter 6), and finally semantic variation within NC sense collocations, in combination with bootstrapping (Chapter 7). Challenges in dealing with verb-particle constructions are: (1) identifying VPCs in raw text data (Chapter 8); and (2) modeling the semantic compositionality of VPCs (Chapter 5). We place particular focus on identifying VPCs in context, and measuring the compositionality of unseen VPCs in order to predict their meaning. Our primary approach to the identification task is to adapt localized context information derived from linguistic features of VPCs to distinguish between VPCs and simple verb-PP combinations. To measure the compositionality of VPCs, we use semantic similarity among VPCs by testing the semantic contribution of each component. Finally, we conclude the thesis with a chapter-by-chapter summary and outline of the findings of our work, suggestions of potential NLP applications, and a presentation of further research directions (Chapter 9).
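    One common way to operationalize the VPC compositionality measurement described above is to compare a VPC's distributional vector with a composition of its parts' vectors. The mixing weight and the vectors below are placeholders (any distributional model could supply them), so this is only a sketch of the general technique, not the thesis's exact measure:

    ```python
    import numpy as np

    def cosine(u: np.ndarray, v: np.ndarray) -> float:
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def vpc_compositionality(vpc_vec, verb_vec, particle_vec, alpha=0.7):
        """Score in [-1, 1]: how close the VPC's corpus usage is to a blend of
        its parts. High values suggest compositional use ('carry up'), low
        values idiomatic use ('give up'). alpha weights the verb's contribution
        and is a free parameter here.
        """
        composed = alpha * verb_vec + (1.0 - alpha) * particle_vec
        return cosine(vpc_vec, composed)

    # toy 4-dimensional vectors, standing in for corpus-derived ones
    rng = np.random.default_rng(1)
    v, p = rng.normal(size=4), rng.normal(size=4)
    print(vpc_compositionality(0.7 * v + 0.3 * p, v, p))  # ~1.0: fully compositional
    ```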
  • Item
    Interest-based negotiation in multi-agent systems
    Rahwan, Iyad (2004)
    Software systems involving autonomous interacting software entities (or agents) present new challenges in computer science and software engineering. A particularly challenging problem is the engineering of various forms of interaction among agents. Interaction may be aimed at enabling agents to coordinate their activities, cooperate to reach common objectives, or exchange resources to better achieve their individual objectives. This thesis is concerned with negotiation: a process through which multiple self-interested agents can reach agreement over the exchange of scarce resources. In particular, I focus on settings where agents have limited or uncertain information, precluding them from making optimal individual decisions. I demonstrate that this form of bounded rationality may lead agents to sub-optimal negotiation agreements. I argue that rational dialogue based on the exchange of arguments can enable agents to overcome this problem. Since agents make decisions based on particular underlying reasons, namely their interests, beliefs and planning knowledge, rational dialogue over these reasons can enable agents to refine their individual decisions and consequently reach better agreements. I refer to this form of interaction as “interest-based negotiation.” (For complete abstract open document)
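    A toy sketch in the spirit of interest-based negotiation (illustrative only, not the formal framework developed in the thesis): a rejection carries the buyer's underlying interest, letting the seller prune offers that could never satisfy it rather than conceding blindly.

    ```python
    def buyer_evaluate(offer, interest):
        """Accept if the offer serves the buyer's need within budget; otherwise
        reject and reveal the underlying need (the exchanged 'reason')."""
        if offer["satisfies"] == interest["need"] and offer["price"] <= interest["budget"]:
            return {"accept": True}
        return {"accept": False, "reason": interest["need"]}

    def seller_negotiate(catalogue, interest):
        pool = sorted(catalogue, key=lambda o: -o["price"])  # seller prefers high price
        while pool:
            offer = pool.pop(0)
            reply = buyer_evaluate(offer, interest)
            if reply["accept"]:
                return offer
            # the revealed interest prunes offers that cannot satisfy the buyer
            pool = [o for o in pool if o["satisfies"] == reply["reason"]]
        return None

    catalogue = [
        {"item": "truck", "satisfies": "haulage", "price": 45000},
        {"item": "sedan", "satisfies": "commute", "price": 30000},
        {"item": "bicycle", "satisfies": "commute", "price": 800},
    ]
    print(seller_negotiate(catalogue, {"need": "commute", "budget": 1000}))
    ```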
  • Item
    The effects of decision aid structural restrictiveness on decision-making outcomes
    Seow, Poh-Sun (2008)
    This study examines the effects of structural restrictiveness embedded within a decision aid on users’ decision-making outcomes. Structural restrictiveness is determined by the rules embedded within computerized decision aids that restrict how users interact with the decision aid. For example, a structurally-restrictive decision aid might force users to consider information and answer specific questions in a prescribed sequence. In contrast, a less structurally-restrictive decision aid would be designed so that users are free to consider information in whatever sequence they desire. The more structurally-restrictive design imposes more limits on users’ decision-making process because they are forced to adapt their decision-making process to match the decision aid. However, it is unclear whether restricting how users interact with decision aids affects their decision-making outcomes. The results indicate that the more structurally-restrictive decision aid did not help participants identify more prompted items than the less structurally-restrictive decision aid did. However, it increased decision-making bias in the recall of non-prompted items. The results contribute to the decision aid literature by highlighting the cost of increasing the degree of structural restrictiveness embedded within decision aids.
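    The two designs can be caricatured in code (the prompts and flow are hypothetical, not the study's experimental materials): the restrictive aid fixes the question order, while the flexible aid lets the user choose it.

    ```python
    PROMPTS = ["prompt A", "prompt B", "prompt C"]  # hypothetical prompted items

    def restrictive_aid(answer):
        """Forces users through every prompt in one prescribed sequence."""
        return {p: answer(p) for p in PROMPTS}

    def flexible_aid(answer, choose_next):
        """Lets users decide which prompt to address next."""
        responses, remaining = {}, list(PROMPTS)
        while remaining:
            p = choose_next(remaining)
            remaining.remove(p)
            responses[p] = answer(p)
        return responses

    # example: a user who answers trivially and works backwards
    print(flexible_aid(lambda p: "noted", lambda rem: rem[-1]))
    ```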
  • Item
    Agent-based 3D visual tracking
    Cheng, Tak Keung (2000-07)
    We describe our overall approach to building robot vision systems and a conceptual system architecture consisting of a network of agents that run in parallel and cooperate to achieve the system’s goals. We present the current state of the 3D Feature-Based Tracker, a robot vision system for tracking and segmenting the 3D motion of objects using image input from a calibrated stereo pair of video cameras. The system runs in a multi-level cycle of prediction and verification or correction. The currently modelled 3D positions and velocities of the feature points are extrapolated a short time into the future to yield predictions of 3D position. These 3D predictions are projected into the two stereo views and used to guide a fast, highly focused visual search for the feature points. The image positions at which the features are re-acquired are back-projected into 3D space in order to update the 3D positions and velocities. At a higher level, features are dynamically grouped into clusters with common 3D motion. Predictions from the cluster level can be fed back to the lower level to correct errors in the point-wise tracking.
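    The prediction-projection step of the cycle can be sketched as follows, assuming a constant-velocity motion model and known 3 x 4 stereo projection matrices (the function and variable names are mine, not the system's):

    ```python
    import numpy as np

    def predict_and_project(X, V, dt, P_left, P_right):
        """Extrapolate tracked 3D feature points and project the predictions
        into both views of a calibrated stereo pair.

        X, V: N x 3 arrays of 3D positions and velocities.
        P_left, P_right: 3 x 4 camera projection matrices.
        Returns the predicted 3D points and their 2D pixel predictions per
        view, which bound the search windows for re-acquiring each feature.
        """
        X_pred = X + V * dt                                  # constant-velocity model
        Xh = np.hstack([X_pred, np.ones((X_pred.shape[0], 1))])  # homogeneous coords

        def project(P):
            uvw = (P @ Xh.T).T                               # homogeneous image coords
            return uvw[:, :2] / uvw[:, 2:3]                  # divide out depth

        return X_pred, project(P_left), project(P_right)
    ```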