Computing and Information Systems - Theses

Search Results

Now showing 1 - 10 of 172
  • Item
    Practical declarative debugging of Mercury programs
    MacLarty, Ian Douglas. (University of Melbourne, 2006)
  • Item
    A multistage computer model of picture scanning, image understanding, and environment analysis, guided by research into human and primate visual systems
    Rogers, T. J. (University of Melbourne, Faculty of Engineering, 1983)
    This paper describes the design and some testing of a computational model of picture scanning and image understanding (TRIPS), which outputs a description of the scene in a subset of English. This model can be extended to control the analysis of a three-dimensional environment and changes of the viewing system's position within that environment. The model design is guided by a summary of neurophysiological, psychological, and psychophysical observations and theories concerning visual perception in humans and other primates, with an emphasis on eye movements. These results indicate that lower-level visual information is processed in parallel in a spatial representation, while higher-level processing is mostly sequential, using a symbolic, post-iconic representation. The emphasis in this paper is on simulating the cognitive aspects of eye movement control and the higher-level post-iconic representation of images. The design incorporates several subsystems. The highest-level control module is described in detail, since computer models of eye movement which use cognitively guided saccade selection are not common. For other modules, the interfaces with the whole system and the internal computations required are outlined, as existing image processing techniques can be applied to perform these computations. Control is based on a production system, which uses a "hypothesising" system - a simplified probabilistic associative production system - to determine which production to apply. A framework for an image analysis language (TRIAL), based on "THINGS" and "RELATIONS", is presented, with algorithms described in detail for the matching procedure and the transformations of size, orientation, position, and so on. TRIAL expressions in the productions are used to generate "cognitive expectations" concerning future eye movements and their effects, which can influence the control of the system. Models of low-level feature extraction with parallel processing of iconic representations have been common in the computer vision literature, as are techniques for image manipulation and syntactic and statistical analysis. Parallel and serial systems have also been extensively investigated. This model proposes an integration of these approaches, using each technique in the domain to which it is suited. The model proposed for the inferotemporal cortex could also be suitable as a model of the posterior parietal cortex. A restricted version of the picture scanning model (TRIPS) has been implemented, which demonstrates the consistency of the model and also exhibits some behavioural characteristics qualitatively similar to primate visual systems. The TRIAL language is shown to be a useful representation for the analysis and description of scenes. Keywords: simulation, eye movements, computer vision systems, inferotemporal, parietal, image representation, TRIPS, TRIAL.
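    As a loose illustration of how a "hypothesising" controller might select among productions probabilistically, here is a minimal Python sketch; the rule names, weights and actions are invented for illustration and are not taken from the thesis.

      import random

      # Hypothetical sketch: a simplified probabilistic associative production
      # system in the spirit of the "hypothesising" controller described above.
      # Rule names, weights and actions are invented for illustration.
      productions = {
          "saccade_to_salient_edge": {"weight": 0.5, "action": "move gaze to the strongest edge"},
          "verify_expected_object":  {"weight": 0.3, "action": "test a cognitive expectation at fixation"},
          "describe_current_region": {"weight": 0.2, "action": "emit an English fragment for the region"},
      }

      def hypothesise(rules):
          """Pick the next production at random, biased by associative weights."""
          names = list(rules)
          weights = [rules[n]["weight"] for n in names]
          return random.choices(names, weights=weights, k=1)[0]

      for _ in range(3):
          chosen = hypothesise(productions)
          print(chosen, "->", productions[chosen]["action"])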
  • Item
    Breast cancer detection and diagnosis in dynamic contrast-enhanced magnetic resonance imaging
    LIANG, XI ( 2013)
    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast is a medical imaging tool used to detect and diagnose breast disease. A DCE-MR image is a series of three-dimensional (3D) breast MRI scans, acquired before and after the injection of paramagnetic contrast agents to form a 4D image (3D spatial + time). DCE-MRI allows the analysis of the intensity variation of magnetic resonance (MR) signals over time, before and after the injection of contrast agents. The interpretation of 4D DCE-MRI images can be time-consuming due to the amount of information involved, and motion artifacts between the image scans further complicate the diagnosis. A DCE-MR image includes a large amount of data and is challenging to interpret even for an experienced radiologist. Therefore, a computer-aided diagnosis (CAD) system is desirable for assisting the diagnosis of abnormal findings in the DCE-MR image. We propose a fully automated CAD system comprising five novel components: a new image registration method to recover motion between MR image acquisitions, a novel lesion detection method to identify all suspicious regions, a new lesion segmentation method to draw lesion contours, and a novel lesion feature characterization method; we then classify the automatically detected lesions using our proposed features. The following lists the challenges found in most CAD systems and the contributions of our CAD system for breast DCE-MRI.
    1. Image registration. One challenge in the interpretation of DCE-MRI is motion artifacts, which make the pattern of tissue enhancement unreliable. Image registration is used to recover rigid and nonrigid motion between the 3D image sequences in a 4D breast DCE-MRI. Most existing B-spline based registration methods require lesion segmentation in breast DCE-MRI to preserve the lesion volume before performing the registration. We propose an automatic method for generating regularization coefficients in B-spline based registration of breast DCE-MRI, under which tumor regions are transformed in a rigid fashion. Our method does not perform lesion segmentation but computes a map reflecting tissue rigidity. In the evaluation of the proposed coefficients, registration using our coefficients for the rigidity terms is compared against manually assigned coefficients of the rigidity and smoothness terms. The evaluation is performed on 30 synthetic and 40 clinical pairs of pre- and post-contrast MRI scans. The results show that tumor volumes are well preserved by using a rigidity term (2.25% ± 4.48% volume change) compared to a smoothness term (22.47% ± 20.1%). In our dataset, the volume preservation achieved with the automatically generated coefficients is comparable to that of manually assigned rigidity coefficients (2.29% ± 13.25%), with no significant difference in volume changes (p > 0.05).
    2. Lesion detection. After motion has been corrected by our registration method, we locate regions of interest (ROIs) using our lesion detection method. The aim is to highlight suspicious ROIs to reduce the ROI searching time and the possibility of radiologists overlooking small regions. A low signal-to-noise ratio is a general challenge in lesion detection in MRI. In addition, the value range of a feature of normal tissue in one patient can overlap with that of malignant tissue in another patient, e.g. tissue intensity values and enhancement. Most existing lesion detection methods suffer from a high false positive rate due to blood vessels or motion artifacts. In our method, we locate suspicious lesions by applying a threshold on essential features. The features are normalized to reduce the variation between patients. We then exclude blood vessels and motion artifacts from the initial results by applying filters that differentiate them from other tissues. In an evaluation of the system on 21 patients with 50 lesions, all lesions were successfully detected, with 5.04 false positive regions per breast.
    3. Lesion segmentation. One of the main challenges for existing lesion segmentation methods in breast DCE-MRI is that they require the ROI enclosing a lesion to be small in order to segment the lesion successfully. We propose a lesion segmentation method based on naive Bayes and Markov random fields. Our method also requires an ROI supplied by a user, but it is not sensitive to the size of the ROI. In our method, the ROI selected in a DCE-MR image is modeled as a connected graph with local Markov properties, where each voxel of the image is regarded as a node. Three edge potentials of the graph are proposed to encourage smoothness of the segmented regions. In a validation on 72 lesions, our method performs better than a baseline fuzzy c-means method and another closely related method for segmenting lesions in breast MRI, showing higher overlap with the ground truth.
    4. Feature analysis and lesion classification. The challenge of feature analysis in breast DCE-MRI is that different types of lesions can share similar features. In our study, we extract various morphological, textural and kinetic features of the lesions and apply three classifiers to label them. In the morphological feature analysis, we propose minimum volume enclosing ellipsoid (MVEE) based features to measure the similarity between a lesion and its MVEE. In statistical testing on 72 lesions, the MVEE-based features are significant in differentiating malignant from benign lesions.
    5. CAD applications. The proposed CAD system is versatile. We show two scenarios in which a radiologist makes use of the system. In the first scenario, a user selects a rectangular region of interest (ROI) as input and the CAD system automatically localizes and classifies the lesion in the ROI as benign or malignant. In the second scenario, the CAD system acts as a “second reader” which fully automatically identifies all malignant regions. At the time of writing, this is the first automated CAD system capable of carrying out all these processes without any human interaction.
    In this thesis, we evaluated the proposed image registration, lesion detection, lesion segmentation, feature extraction and lesion classification using a relatively small database, which makes conclusions on generalizability difficult. In future work, the system requires clinical testing on a large dataset in order to advance this breast MRI CAD system, reduce image interpretation time, eliminate unnecessary biopsies and improve cancer identification sensitivity for radiologists.
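    As a loose illustration of the detection step described above (normalising a feature to reduce inter-patient variation, then thresholding), here is a minimal Python sketch; the array contents, the feature and the threshold value are invented stand-ins, not the thesis implementation.

      import numpy as np

      # Illustrative sketch only (not the thesis implementation): z-score
      # normalise an enhancement feature per patient to reduce inter-patient
      # variation, then threshold it to flag candidate lesion voxels.
      def detect_candidates(enhancement, threshold=2.0):
          """Flag voxels whose normalised enhancement exceeds a threshold."""
          z = (enhancement - enhancement.mean()) / (enhancement.std() + 1e-8)
          return z > threshold

      volume = np.random.rand(32, 32, 16)   # stand-in post-contrast feature volume
      mask = detect_candidates(volume)
      print(mask.sum(), "candidate voxels flagged")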
  • Item
    Digital forensics: increasing the evidential weight of system activity logs
    AHMAD, ATIF ( 2007)
    The application of investigative techniques within digital environments has led to the emergence of a new field of specialization that may be termed ‘digital forensics’. Perhaps the primary challenge concerning digital forensic investigations is how to preserve evidence of system activity, given the volatility of digital environments and the delay between the time of the incident and the start of the forensic investigation. This thesis hypothesizes that system activity logs present in modern operating systems may be used for digital forensic evidence collection. This is particularly true in modern organizations, where there is growing recognition that forensic readiness may have considerable benefits in case of future litigation. This thesis investigates the weighting of evidence produced by system activity logs present in modern operating systems. The term ‘evidential weight’ is used loosely as a measure of the suitability of system activity logs to digital forensic investigations. This investigation is approached from an analytical perspective. The first contribution of this thesis is to determine the evidence collection capability of system activity logs by means of a simple model of the logging mechanism. The second contribution is the development of evidential weighting criteria that can be applied to system activity logs. A unique and critical role for system activity logs, by which they establish the reliability of other kinds of computer-derived evidence from hard disk media, is also identified. The primary contribution of this thesis is the identification of a comprehensive range of forensic weighting issues arising from the use of log evidence that concern investigators and legal authorities. This contribution is made in a comprehensive analytical discussion utilizing both the logging model and the evidential weighting criteria. The practical usefulness of the resulting evidential weighting framework is demonstrated by rigorous and systematic application to a real-world logging system.
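    As a purely illustrative Python sketch of how weighting criteria might be scored against a log source: the criteria names and scores below are invented stand-ins, not the framework developed in the thesis.

      # Purely illustrative sketch: scoring a log source against evidential
      # weighting criteria. The criteria names and scores are invented
      # stand-ins, not the framework developed in this thesis.
      CRITERIA = ["integrity_protection", "time_synchronisation",
                  "completeness_of_records", "documented_procedure"]

      def evidential_weight(assessment):
          """Average the analyst's 0-1 scores across all weighting criteria."""
          return sum(assessment.get(c, 0.0) for c in CRITERIA) / len(CRITERIA)

      syslog_assessment = {"integrity_protection": 0.4,
                           "time_synchronisation": 0.9,
                           "completeness_of_records": 0.7,
                           "documented_procedure": 0.5}
      print(f"evidential weight: {evidential_weight(syslog_assessment):.2f}")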
  • Item
    Strategic information security policy quality assessment: a multiple constituency perspective
    MAYNARD, SEAN ( 2010)
    An integral part of any information security management program is the information security policy. The purpose of an information security policy is to define the means by which organisations protect the confidentiality, integrity and availability of information and its supporting infrastructure from a range of security threats. The tenet of this thesis is that the quality of information security policy is inadequately addressed by organisations. Further, although information security policies may undergo multiple revisions as part of a process development lifecycle and, as a result, may generally improve in quality, a more explicit, systematic and comprehensive process of quality improvement is required. A key assertion of this research is that a comprehensive assessment of information security policy requires the involvement of the multiple stakeholders in organisations that derive benefit from the directives of the information security policy. Therefore, this dissertation used a multiple-constituency approach to investigate how security policy quality can be addressed in organisations, given the existence of multiple stakeholders. The formal research question under investigation was: How can multiple constituency quality assessment be used to improve strategic information security policy? The primary contribution of this thesis to the Information Systems field of knowledge is the development of a model: the Strategic Information Security Policy Quality Model. This model comprises three components: a comprehensive model of quality components, a model of stakeholder involvement and a model for security policy development. The strategic information security policy quality model gives a holistic perspective to organisations to enable management of the security policy quality assessment process. This research makes six main contributions, as stated below:
    • demonstrating that a multiple constituency approach is effective for information security policy assessment;
    • developing a set of quality components for information security policy quality assessment;
    • identifying that efficiency of the security policy quality assessment process is critical for organisations;
    • formalising security policy quality assessment within policy development;
    • developing a strategic information security policy quality model; and
    • identifying improvements that can be made to the security policy development lifecycle.
    The outcomes of this research contend that the security policy lifecycle can be improved by: enabling the identification of when different stakeholders should be involved; identifying those quality components that each of the different stakeholders should assess as part of the quality assessment; and showing organisations which quality components to include or to ignore based on their individual circumstances. This leads to a higher quality information security policy, and should impact positively on an organisation’s information security.
  • Item
    Seamless proximity sensing
    Ahmed, Bilal ( 2013)
    Smartphones are uniquely positioned to offer a new breed of location- and proximity-aware applications that can harness the benefits provided by positioning technologies such as GPS, and advancements in radio communication technologies such as Near Field Communication (NFC) and Bluetooth Low Energy (BLE). The popularity of location-aware applications that make use of technologies such as GPS, Wi-Fi and 3G has further strained the already frail battery life that current-generation smartphones exhibit. This research project aims to perform a comparative assessment of NFC, BLE and Classic Bluetooth (BT) for the purpose of establishing proximity awareness in mobile devices. We demonstrate techniques, in the context of a mobile application, to provide seamless proximity awareness using the three technologies, with a focus on accuracy and operational range. We present the results of our research and experimentation for the purpose of creating a baseline for proximity estimation using the three technologies. We further investigate the viability of using BT as the underlying wireless technology for peer-to-peer networking on mobile devices and demonstrate techniques that can be applied programmatically for automatic detection of nearby mobile devices.
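    For context, proximity from BT/BLE signal strength is commonly estimated with a log-distance path-loss model. The minimal Python sketch below uses assumed calibration constants and is not the thesis's implementation.

      # Minimal sketch of RSSI-based proximity estimation with the standard
      # log-distance path-loss model, commonly used with BT and BLE. The
      # calibration constants are assumptions, not values from the thesis.
      def estimate_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
          """Estimate distance in metres from a received signal strength reading.

          tx_power_dbm: expected RSSI at 1 m from the transmitter (calibrated).
          path_loss_exponent: ~2.0 in free space, larger indoors.
          """
          return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

      for rssi in (-55, -65, -75):
          print(f"RSSI {rssi} dBm -> ~{estimate_distance(rssi):.1f} m")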
  • Item
    The effect of Transactive Memory Systems on performance in virtual teams
    MOHAMED ARIFF, MOHAMED ( 2013)
    Although virtual teams are increasingly common in organizations, research on the formation of Transactive Memory Systems (TMS) in virtual teams and its effect on team performance is relatively rare. Previous studies have reported that TMS quality influences team performance in face-to-face teams. However, the effect of TMS quality on the performance of virtual teams has not been adequately researched in past studies. Specifically, this study extends past research and proposes a model in which task interdependence and TMS quality jointly influence the performance of virtual teams. Based on the conceptual model of Brandon and Hollingshead, this study hypothesized the effects of: (1) the quality of the TMS formation process on TMS quality; (2) TMS quality on virtual teams' performance; and (3) task interdependence on the relationship between TMS quality and virtual teams' performance. This study was undertaken in three phases. Firstly, a conceptual phase was conducted to investigate and analyse the existing literature on virtual teams, virtual teams' key characteristics, virtual teams' performance and TMS. The conceptual phase resulted in the development of a research model and relevant hypotheses. Secondly, in the exploratory phase, four separate questionnaire surveys were conducted. The exploratory phase helped develop and test all of the instruments that were to be used in the study, and produced a reliable and valid set of instruments for the final, confirmatory phase of this study. In the confirmatory phase, an online survey was conducted to test the research model and the proposed hypotheses. This phase provided a broader understanding of TMS formation in virtual teams and of the joint effect of task interdependence and TMS quality on virtual teams' performance. The results of this study indicated that: (1) the quality of the TMS utilization process has a positive effect on virtual teams' performance; (2) TMS quality has a positive effect on virtual teams' performance; (3) task interdependence has a significant negative effect on the relationship between TMS quality and virtual teams' performance; and (4) TMS quality partially mediates the effect of task interdependence on virtual teams' performance. However, the results failed to support two hypothesized effects on TMS quality: those of (1) the quality of the TMS construction process and (2) the quality of the TMS evaluation process. This study is the first to investigate TMS quality in a field study of a virtual team environment, as previous studies on TMS have focused on experimental virtual teams. The main contribution of this study is a theoretical model that explains the effect of TMS quality on virtual teams' performance. This study also contributes to theory by extending Brandon and Hollingshead's model of the TMS formation process. This study entailed several methodological improvements over previous studies, including: (1) new instrument items to measure the quality of the TMS formation process construct; (2) a new two-dimensional TMS quality construct which employed the 'who knows what' and 'who does what' dimensions respectively; and (3) performing content adequacy assessment using the Q-sort technique, which helped to demonstrate the validity and reliability of the instrument items prior to actual data collection.
    This study provides organizations with a better comprehension of the TMS formation process that affects virtual teams' performance. It also provides organizations with an explanation of how task interdependence affects TMS quality and, in turn, the performance of virtual teams.
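    The moderation finding (3) above is the kind of effect typically tested with an interaction term in a regression. Below is a minimal, hypothetical Python sketch using simulated stand-in data (not the study's survey responses); variable names are assumptions.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Illustrative sketch of the moderation test behind finding (3), using
      # simulated stand-in data rather than the study's survey responses.
      rng = np.random.default_rng(0)
      n = 200
      tms = rng.normal(size=n)        # TMS quality
      interdep = rng.normal(size=n)   # task interdependence
      perf = 0.5 * tms - 0.2 * tms * interdep + rng.normal(size=n)

      df = pd.DataFrame({"performance": perf, "tms_quality": tms,
                         "task_interdependence": interdep})
      # The interaction term carries the moderation effect; a significant
      # negative coefficient would mirror the reported finding.
      model = smf.ols("performance ~ tms_quality * task_interdependence", df).fit()
      print(model.summary().tables[1])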
  • Item
    Towards interpreting informal place descriptions
    Tytyk, Igor (The University of Melbourne, 2012)
    Informal place descriptions are human-generated descriptions of locations, expressed by means of natural language in an arbitrary fashion. The aim we pursued in this thesis is finding methods for better automatic interpretation of situated informal place descriptions. This work presents a framework within which we attempt to automatically classify informal place descriptions for the accuracy of the location information they contain. Having an available corpus of informal place descriptions, we identified placenames contained therein and manually annotated them for properties such as geospatial granularity and identifiability. First, we make use of the annotations and a machine learning method to conduct the classification task, and report accuracy scores reaching 84%. Next, we classify the descriptions again, but instead of using the manual annotations we identify the properties of placenames automatically.
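    As a toy illustration of classifying descriptions from annotated placename properties, here is a minimal scikit-learn sketch in Python; the feature encoding and values are invented, not the thesis's corpus or feature set.

      from sklearn.linear_model import LogisticRegression

      # Toy sketch of the classification task: predict whether a description
      # carries accurate location information from annotated placename
      # properties. The feature encoding and values are invented examples.
      X = [
          [3, 1],   # fine-grained, identifiable placename
          [1, 0],   # coarse-grained, ambiguous placename
          [2, 1],
          [1, 1],
          [3, 0],
          [1, 0],
      ]
      y = [1, 0, 1, 0, 1, 0]   # 1 = location information judged accurate

      clf = LogisticRegression().fit(X, y)
      print(clf.predict([[2, 0]]))   # classify an unseen description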
  • Item
    Extracting characteristics of human-produced video descriptions
    Korvas, Matěj ( 2012)
    This thesis contributes to the SMILE project, which aims at video understanding. We focus on the final stage of the project, where information extracted from a video should be transformed into a natural language description. Working with a corpus of human-made video descriptions, we examine it to find patterns in the descriptions. We develop a machine-learning procedure for finding statistical dependencies between linguistic features of the descriptions. Evaluating its results when run on a small sample of data, we conclude that it can be successfully extended to larger datasets. The method is generally applicable for finding dependencies in data, and extends association rule mining methods with the option to specify distributions of features. We show future directions which, if followed, will lead to extracting a specification of common sentence patterns of video descriptions. This would allow for generating naturally sounding descriptions from the video understanding software.
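    As a loose illustration of scoring pairwise statistical dependence between binary linguistic features (a simpler stand-in for the thesis's procedure), here is a minimal Python sketch using pointwise mutual information; the feature names and rows are invented.

      from itertools import combinations
      import math

      # Illustrative stand-in for the thesis procedure: score pairwise
      # dependence between binary linguistic features of video descriptions
      # with pointwise mutual information. Features and rows are invented.
      rows = [
          {"has_agent": 1, "passive_voice": 0, "mentions_motion": 1},
          {"has_agent": 1, "passive_voice": 0, "mentions_motion": 1},
          {"has_agent": 0, "passive_voice": 1, "mentions_motion": 0},
          {"has_agent": 1, "passive_voice": 0, "mentions_motion": 0},
      ]

      def pmi(a, b):
          """Pointwise mutual information of features a and b co-occurring."""
          n = len(rows)
          pa = sum(r[a] for r in rows) / n
          pb = sum(r[b] for r in rows) / n
          pab = sum(r[a] and r[b] for r in rows) / n
          return math.log2(pab / (pa * pb)) if pab else float("-inf")

      for a, b in combinations(rows[0], 2):
          print(f"PMI({a}, {b}) = {pmi(a, b):.2f}")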
  • Item
    Understanding the business benefits of ERP system use
    Staehr, Lorraine Jean ( 2006)
    ERP systems are large, complex, integrated software packages used for business transaction processing by thousands of major organizations worldwide. Yet outcomes from ERP system implementation and use can be very different, and current understanding of how and why such variation exists is limited. Since most studies of ERP systems to date have focused on ERP implementation, this research focused on the post-implementation period. The aim was to better understand the 'what', 'how' and 'why' of achieving business benefits from ERP systems during ERP use. Achieving business benefits from ERP systems was considered as a process of organizational change occurring over time within various societal and organizational contexts. A retrospective, interpretive case study approach was used to study this process. The post-implementation periods of four Australian manufacturing organizations that had implemented ERP systems were studied. This study makes three important contributions to the information systems research literature. First, a new framework was developed to explain 'how' and 'why' business benefits were achieved from ERP systems. This explanatory framework is theoretically based and firmly grounded in the empirical data. Three types of themes, along with the interrelationships between them, were identified as influencing the business benefits achieved from ERP systems. The first group, the process themes, are 'Education, training and support', 'Technochange management' and 'People resources'. The second group, the outcome themes, are 'Efficient and effective use of the ERP system', 'Business process improvement' and 'New projects to leverage off the ERP system'. The third group, the contextual themes, are the 'External context', the 'Internal context' and the 'ERP planning and implementation phases'. This new framework makes a significant contribution to understanding how and why some organizations achieve more business benefits from ERP systems than others. Second, the case studies provide a rich description of four manufacturing organizations that have implemented and used ERP systems. Third, examining the 'what' of business benefits from ERP systems in these four organizations resulted in a confirmed, amended and improved version of the Shang and Seddon (2000) ERP business benefits framework; this replication and extension of previous research is the third contribution of this study. The results of this research are of interest not only to information systems researchers, but also to information systems practitioners and senior management in organizations that either plan to implement, or have already implemented, ERP systems. Overall, this research provides an improved understanding of business benefits from ERP systems and a sound foundation for future studies of ERP system use.