Computing and Information Systems - Theses

Search Results

Now showing 1 - 10 of 80
  • Item
    Breast cancer detection and diagnosis in dynamic contrast-enhanced magnetic resonance imaging
    LIANG, XI ( 2013)
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast is a medical imaging tool used to detect and diagnose breast disease. A DCE-MR image is a series of three-dimensional (3D) breast MRI scans acquired before and after the injection of paramagnetic contrast agents, forming a 4D image (3D spatial + time). DCE-MRI allows analysis of the variation in magnetic resonance (MR) signal intensity over time, before and after the injection of contrast agents. The interpretation of 4D DCE-MRI images can be time consuming due to the amount of information involved, and motion artifacts between the image scans further complicate the diagnosis. A DCE-MR image contains a large amount of data and is challenging to interpret even for an experienced radiologist. Therefore, a computer-aided diagnosis (CAD) system is desirable to assist the diagnosis of abnormal findings in the DCE-MR image. We propose a fully automated CAD system comprising five novel components: a new image registration method to recover motion between MR image acquisitions, a lesion detection method to identify all suspicious regions, a lesion segmentation method to delineate lesion contours, a lesion feature characterization method, and a classifier that labels the automatically detected lesions using the proposed features. The following lists the challenges faced by most CAD systems and the contributions of our CAD system for breast DCE-MRI.
1. Image registration. One challenge in the interpretation of DCE-MRI is motion artifacts, which make the pattern of tissue enhancement unreliable. Image registration is used to recover rigid and nonrigid motion between the 3D image sequences in a 4D breast DCE-MRI. Most existing B-spline based registration methods require lesion segmentation in breast DCE-MRI to preserve the lesion volume before performing the registration. We propose an automatic method for generating regularization coefficients in B-spline based registration of breast DCE-MRI, in which tumor regions are transformed rigidly. Our method does not perform lesion segmentation but computes a map that reflects tissue rigidity. In the evaluation of the proposed coefficients, registration using our coefficients for the rigidity term is compared against manually assigned coefficients for the rigidity and smoothness terms on 30 synthetic and 40 clinical pairs of pre- and post-contrast MRI scans. The results show that tumor volumes are well preserved using a rigidity term (2.25% ± 4.48% volume change) compared to a smoothness term (22.47% ± 20.1%). In our dataset, the volume preservation achieved with our automatically generated coefficients is comparable to that with manually assigned rigidity coefficients (2.29% ± 13.25%), with no significant difference in volume changes (p > 0.05).
2. Lesion detection. After motion has been corrected by our registration method, we locate regions of interest (ROIs) using our lesion detection method. The aim is to highlight suspicious ROIs, reducing the ROI search time and the possibility of radiologists overlooking small regions. A low signal-to-noise ratio is a general challenge in lesion detection in MRI. In addition, the value range of a feature of normal tissue in one patient can overlap with that of malignant tissue in another patient, e.g. tissue intensity values and enhancement. Most existing lesion detection methods suffer from high false positive rates due to blood vessels or motion artifacts. In our method, we locate suspicious lesions by applying thresholds to essential features. The features are normalized to reduce variation between patients. We then exclude blood vessels and motion artifacts from the initial results by applying filters that differentiate them from other tissues. In an evaluation of the system on 21 patients with 50 lesions, all lesions were successfully detected, with 5.04 false positive regions per breast.
3. Lesion segmentation. One of the main challenges of existing lesion segmentation methods in breast DCE-MRI is that they require the ROI enclosing a lesion to be small in order to segment the lesion successfully. We propose a lesion segmentation method based on naive Bayes and Markov random fields. Our method also requires a user-supplied ROI, but it is not sensitive to the size of the ROI. In our method, the selected ROI in a DCE-MR image is modeled as a connected graph with local Markov properties, in which each voxel is a node. Three edge potentials are proposed to encourage smoothness of the segmented regions. In a validation on 72 lesions, our method achieves higher overlap with the ground truth than a baseline fuzzy c-means method and another closely related method for segmenting lesions in breast MRI.
4. Feature analysis and lesion classification. The challenge of feature analysis in breast DCE-MRI is that different types of lesions can share similar features. In our study, we extract various morphological, textural and kinetic features of the lesions and apply three classifiers to label them. In the morphological feature analysis, we propose minimum volume enclosing ellipsoid (MVEE) based features to measure the similarity between a lesion and its MVEE. In statistical testing on 72 lesions, the MVEE-based features are significant in differentiating malignant from benign lesions.
5. CAD applications. The proposed CAD system is versatile. We show two scenarios in which a radiologist makes use of the system. In the first scenario, a user selects a rectangular region of interest (ROI) as input and the CAD system automatically localizes the lesion in the ROI and classifies it as benign or malignant. In the second scenario, the CAD system acts as a “second reader” that fully automatically identifies all malignant regions. At the time of writing, this is the first automated CAD system capable of carrying out all these processes without any human interaction.
In this thesis, we evaluated the proposed image registration, lesion detection, lesion segmentation, feature extraction and lesion classification methods on a relatively small database, which makes conclusions about generalizability difficult. In future work, the system requires clinical testing on a large dataset in order to advance this breast MRI CAD system towards reducing image interpretation time, eliminating unnecessary biopsies and improving radiologists' sensitivity in cancer identification.
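A minimal sketch of the kind of enhancement-based thresholding that underlies the lesion detection step described above (function names, the normalization and the threshold value are illustrative assumptions, not the thesis's actual implementation):

```python
# Hypothetical sketch: compute per-voxel relative enhancement between pre- and
# post-contrast volumes, normalize it to reduce inter-patient variation, and
# keep voxels above a threshold as detection candidates.
import numpy as np

def relative_enhancement(pre: np.ndarray, post: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Per-voxel relative enhancement between a pre- and a post-contrast 3D volume."""
    return (post - pre) / (pre + eps)

def detect_candidate_voxels(pre: np.ndarray, post: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """Return a boolean mask of voxels whose normalized enhancement exceeds the threshold."""
    enh = relative_enhancement(pre, post)
    enh = (enh - enh.mean()) / (enh.std() + 1e-6)  # crude volume-wide normalization
    return enh > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pre = rng.uniform(100, 200, size=(32, 32, 16))        # toy pre-contrast volume
    post = pre * rng.uniform(1.0, 2.5, size=pre.shape)    # toy post-contrast volume
    print("candidate voxels:", int(detect_candidate_voxels(pre, post).sum()))
```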
  • Item
    Strategic information security policy quality assessment: a multiple constituency perspective
    MAYNARD, SEAN ( 2010)
An integral part of any information security management program is the information security policy. The purpose of an information security policy is to define the means by which organisations protect the confidentiality, integrity and availability of information and its supporting infrastructure from a range of security threats. The tenet of this thesis is that the quality of information security policy is inadequately addressed by organisations. Further, although information security policies may undergo multiple revisions as part of a development lifecycle and, as a result, may generally improve in quality, a more explicit, systematic and comprehensive process of quality improvement is required. A key assertion of this research is that a comprehensive assessment of information security policy requires the involvement of the multiple stakeholders in organisations that derive benefit from the directives of the information security policy. Therefore, this dissertation used a multiple-constituency approach to investigate how security policy quality can be addressed in organisations, given the existence of multiple stakeholders. The formal research question under investigation was: How can multiple constituency quality assessment be used to improve strategic information security policy? The primary contribution of this thesis to the Information Systems field of knowledge is the development of a model: the Strategic Information Security Policy Quality Model. This model comprises three components: a comprehensive model of quality components, a model of stakeholder involvement and a model for security policy development. The strategic information security policy quality model gives organisations a holistic perspective from which to manage the security policy quality assessment process. This research makes six main contributions:
• This research has demonstrated that a multiple constituency approach is effective for information security policy assessment
• This research has developed a set of quality components for information security policy quality assessment
• This research has identified that efficiency of the security policy quality assessment process is critical for organisations
• This research has formalised security policy quality assessment within policy development
• This research has developed a strategic information security policy quality model
• This research has identified improvements that can be made to the security policy development lifecycle
The outcomes of this research contend that the security policy lifecycle can be improved by: enabling the identification of when different stakeholders should be involved, identifying the quality components that each stakeholder should assess as part of the quality assessment, and showing organisations which quality components to include or to ignore based on their individual circumstances. This leads to a higher quality information security policy, and should impact positively on an organisation’s information security.
  • Item
    Seamless proximity sensing
    Ahmed, Bilal ( 2013)
Smartphones are uniquely positioned to offer a new breed of location and proximity aware applications that can harness the benefits of positioning technologies such as GPS, and of advances in radio communication technologies such as Near Field Communication (NFC) and Bluetooth Low Energy (BLE). The popularity of location aware applications that make use of technologies such as GPS, Wi-Fi and 3G has further strained the already frail battery life of current generation smartphones. This research project aims to perform a comparative assessment of NFC, BLE and Classic Bluetooth (BT) for the purpose of establishing proximity awareness on mobile devices. We demonstrate techniques, in the context of a mobile application, for providing seamless proximity awareness using the three technologies, with a focus on accuracy and operational range. We present the results of our research and experimentation for the purpose of creating a baseline for proximity estimation using the three technologies. We further investigate the viability of using BT as the underlying wireless technology for peer-to-peer networking on mobile devices and demonstrate techniques that can be applied programmatically for automatic detection of nearby mobile devices.
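One common way to turn BLE or Classic BT signal strength into a proximity estimate is a log-distance path-loss model. The sketch below illustrates that general idea only; the calibration constants, thresholds and function names are assumptions for illustration, not the calibration or technique used in this thesis.

```python
# Estimate distance from RSSI with a log-distance path-loss model and map it
# to a coarse proximity zone. All constants are illustrative.

def estimate_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, path_loss_exponent: float = 2.0) -> float:
    """Distance in metres; tx_power_dbm is the expected RSSI at 1 m,
    path_loss_exponent models the environment (~2 in free space, higher indoors)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

def proximity_zone(rssi_dbm: float) -> str:
    """Map an RSSI reading to a coarse proximity zone (thresholds are illustrative)."""
    d = estimate_distance(rssi_dbm)
    if d < 0.5:
        return "immediate"
    if d < 4.0:
        return "near"
    return "far"

if __name__ == "__main__":
    for rssi in (-50, -65, -80):
        print(rssi, "dBm ->", round(estimate_distance(rssi), 2), "m,", proximity_zone(rssi))
```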
  • Item
    The effect of Transactive Memory Systems on performance in virtual teams
    MOHAMED ARIFF, MOHAMED ( 2013)
Although virtual teams are increasingly common in organizations, research on the formation of Transactive Memory Systems (TMS) in virtual teams and its effect on team performance is relatively rare. Previous studies have reported that TMS quality influences team performance in face-to-face teams; however, the effect of TMS quality on the performance of virtual teams has not been adequately researched. This study extends past research and proposes a model in which task interdependence and TMS quality jointly influence the performance of virtual teams. Based on the conceptual model of Brandon and Hollingshead, this study hypothesized: (1) the effect of the quality of the TMS formation process on TMS quality; (2) the effect of TMS quality on virtual teams' performance; and (3) the moderating effect of task interdependence on the relationship between TMS quality and virtual teams' performance. This study was undertaken in three phases. First, a conceptual phase was conducted to investigate and analyse the existing literature on virtual teams, their key characteristics, their performance and TMS; this phase resulted in the development of a research model and relevant hypotheses. Second, in the exploratory phase, four separate questionnaire surveys were conducted to develop and test all of the instruments used in the study, producing a reliable and valid set of instruments for the final, confirmatory phase. In the confirmatory phase, an online survey was conducted to test the research model and the proposed hypotheses. This phase provided a broader understanding of TMS formation in virtual teams and of the joint effect of task interdependence and TMS quality on virtual teams' performance. The results of this study indicated that: (1) the quality of the TMS utilization process has a positive effect on virtual teams' performance; (2) TMS quality has a positive effect on virtual teams' performance; (3) task interdependence has a significant negative effect on the relationship between TMS quality and virtual teams' performance; and (4) TMS quality partially mediates the effect of task interdependence on virtual teams' performance. However, the results failed to support two hypothesized relationships: the effects of (1) the quality of the TMS construction process and (2) the quality of the TMS evaluation process on TMS quality. This study is the first to investigate TMS quality in a field study of a virtual team environment, as previous studies on TMS have focused on experimental virtual teams. The main contribution of this study is a theoretical model that explains the effect of TMS quality on virtual teams' performance. This study also contributes to theory by extending Brandon and Hollingshead's model of the TMS formation process. The study entailed several methodological improvements over previous studies, including: (1) new instrument items to measure the quality of the TMS formation process construct; (2) a new two-dimensional TMS quality construct employing the 'who knows what' and 'who does what' dimensions; and (3) a content adequacy assessment using the Q-sort technique, which helped demonstrate the validity and reliability of the instrument items prior to actual data collection.
This study provides organizations with a better comprehension of the TMS formation process and how it affects virtual teams' performance. It also provides organizations with an explanation of how task interdependence affects TMS quality, which in turn results in better performance of virtual teams.
  • Item
    Towards interpreting informal place descriptions
    Tytyk, Igor (The University of Melbourne, 2012)
Informal place descriptions are human-generated descriptions of locations, expressed by means of natural language in an arbitrary fashion. The aim we pursued in this thesis is finding methods for better automatic interpretation of situated informal place descriptions. This work presents a framework within which we attempt to automatically classify informal place descriptions for the accuracy of the location information they contain. Having an available corpus of informal place descriptions, we identified the placenames contained therein and manually annotated them for properties such as geospatial granularity and identifiability. First, we make use of the annotations and a machine learning method to conduct the classification task, and report accuracy scores reaching 84%. Next, we classify the descriptions again, but instead of using the manual annotations we identify the properties of placenames automatically.
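As an illustration of the kind of supervised classification described above, a minimal sketch is shown below; the features, labels and model choice are placeholders for the general approach, not the thesis's actual setup.

```python
# Hypothetical sketch: classify place descriptions by the quality of the location
# information they convey, using simple bag-of-words features and toy labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

descriptions = [
    "I'm outside the State Library on Swanston Street",
    "somewhere in the CBD, not sure exactly where",
    "at the corner of Flinders and Elizabeth",
    "near a park, maybe north of the river",
]
labels = ["accurate", "vague", "accurate", "vague"]  # toy annotations

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(descriptions, labels)

print(model.predict(["on the steps of Flinders Street Station"]))
```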
  • Item
    Extracting characteristics of human-produced video descriptions
    Korvas, Matěj ( 2012)
This thesis contributes to the SMILE project, which aims at video understanding. We focus on the final stage of the project, where information extracted from a video should be transformed into a natural language description. Working with a corpus of human-made video descriptions, we examine it to find patterns in the descriptions. We develop a machine-learning procedure for finding statistical dependencies between linguistic features of the descriptions. Evaluating its results when run on a small sample of data, we conclude that it can be successfully extended to larger datasets. The method is generally applicable for finding dependencies in data, and extends association rule mining methods with the option to specify distributions over features. We show future directions which, if followed, will lead to extracting a specification of common sentence patterns of video descriptions. This would allow naturally sounding descriptions to be generated by the video understanding software.
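A toy illustration of mining co-occurrence dependencies between linguistic features, in the spirit of association rule mining described above (the feature names, thresholds and data are invented for the example and are not the thesis's procedure):

```python
# Count how often pairs of linguistic features co-occur across descriptions and
# report pairs whose co-occurrence is clearly above chance (support and lift).
from itertools import combinations
from collections import Counter

descriptions_features = [
    {"subject:person", "verb:motion", "tense:present"},
    {"subject:person", "verb:motion", "object:vehicle"},
    {"subject:animal", "verb:state", "tense:present"},
    {"subject:person", "verb:motion", "tense:present"},
]

n = len(descriptions_features)
single = Counter(f for feats in descriptions_features for f in feats)
pairs = Counter(p for feats in descriptions_features for p in combinations(sorted(feats), 2))

for (a, b), count in pairs.items():
    support = count / n                                   # P(a and b)
    lift = support / ((single[a] / n) * (single[b] / n))  # > 1 means co-occur above chance
    if support >= 0.5 and lift > 1.0:
        print(f"{a} & {b}: support={support:.2f}, lift={lift:.2f}")
```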
  • Item
    Rapid de novo methods for genome analysis
    HALL, ROSS STEPHEN ( 2013)
Next generation sequencing methodologies have resulted in an exponential increase in the amount of genomic sequence data available to researchers. Valuable tools in the initial analysis of such data for novel features are de novo techniques - methods which employ a minimum of comparative sequence information from known genomes. In this thesis I describe two heuristic algorithms for the rapid de novo analysis of genomic sequence data. The first algorithm employs multiple Fast Fourier Transforms, mapped to two-dimensional space. The resulting bitmap clearly illustrates periodic features of a genome, including coding density. The compact representation allows megabase scales of genomic data to be rendered in a single bitmap. The second algorithm, RTASSS (RNA Template Assisted Secondary Structure Search), predicts potential members of RNA gene families that are related by similar secondary structure but not necessarily by conserved sequence. RTASSS can find candidate structures similar to a given template structure without the use of sequence homology. Both algorithms have linear complexity.
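For intuition on the FFT-based approach, protein-coding regions of DNA typically show a period-3 signal that a Fourier transform can pick out. The sketch below shows only that general idea; it is not the thesis's algorithm, which additionally maps the spectra into a two-dimensional bitmap.

```python
# Compute the Fourier power of binary indicator sequences for each nucleotide and
# inspect the bin corresponding to period ~3, a classic signature of coding DNA.
import numpy as np

def period3_power(seq: str) -> float:
    n = len(seq)
    total = 0.0
    for base in "ACGT":
        indicator = np.array([1.0 if c == base else 0.0 for c in seq])
        spectrum = np.abs(np.fft.fft(indicator)) ** 2
        total += spectrum[n // 3]  # frequency bin for period ~3
    return total / n

if __name__ == "__main__":
    coding_like = "ATGGCCGATGCAGATGCTGAAGCTGAGGCAGCC" * 9  # repetitive, codon-like toy sequence
    random_like = "".join(np.random.default_rng(1).choice(list("ACGT"), size=len(coding_like)))
    print("coding-like:", round(period3_power(coding_like), 2))
    print("random-like:", round(period3_power(random_like), 2))
```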
  • Item
    An energy and spectrum efficient distributed scheduling scheme for Wireless Mesh Networks
    Vijayalayan, Kanthaiah Sivapragasam ( 2013)
The success of Wireless Mesh Network (WMN) applications depends on the energy efficiency, spectrum reuse, scalability, and robustness of scheduling schemes. However, to the best of our knowledge, the available schedulers fail to address these requirements simultaneously. This thesis proposes an autonomous, scalable, and deployable scheduler for WMNs with energy efficient transceiver activation and efficient spectrum reuse. Our goals are: (i) to conserve energy for longer sustainability, (ii) to effectively reuse the radio spectrum for higher throughput, lower delay, lower packet loss, and fairness, and (iii) to ensure that the proposed solution serves common WMN applications. Our research identified three major approaches to scheduling and eight key attributes, and detailed the evolution of wireless standards for distributed schedulers. Among the solutions, pseudo random access (PRA) is expected to combine the strengths of randomness, for scalability and robustness, with determinism, for energy efficiency and spectrum reuse. However, literature on the IEEE 802.16 election based transmission timing (EBTT) scheme - the only known standardized PRA solution - is limited in scope. We use a combination of simulations, modelling, and analysis in our research. Since existing simulators did not support our ambitious range of investigations, we developed our own simulator, which we called the Election Based Pseudo Random Access (EBPRA) simulator. Moreover, we introduced two types of synthetic mesh networks as a way to decompose the complexities of WMN topologies and systematically study their effects. A benchmarking study of EBTT against a centralised cyclic access (CCA) scheme revealed less than 50% spectrum reuse, a low fairness measure of 75%, and, more significantly, energy wastage of up to 90% in reception along with collisions in transmission under EBTT. Hence we propose an enhanced pseudo random access (EPRA) scheme to mitigate these issues. EPRA does not introduce additional overheads and can be deployed on IEEE 802.16 nodes with minor firmware modifications. Simulations of EPRA show significant improvements in energy efficiency: collisions are eliminated and reception is near 100% efficient. Moreover, the spectrum reuse and fairness measures also improve. These results validate the findings of the analytical models that we derived. Finally, we propose two alternative solutions to handle user data packets: an EPRA based single scheduler (EPRA-SS) and an EPRA based dual scheduler (EPRA-DS). Since satisfying the requirements of voice services implies that the requirements of data services are also met, we concentrated our investigation on voice. Through extensive simulations and multidimensional data analysis, we identified the ranges of network densities, traffic intensities, and buffer allocations that satisfy per hop delay and packet drop conditions. We thus demonstrated for the first time that near 100% energy efficiency should be possible with a distributed scheduler when our EPRA scheme is used. In addition, we have shown improvements in spectrum reuse for better throughput, shorter delays, and better fairness. Finally, EPRA based schemes have been demonstrated to be effective schedulers for user data traffic over WMN deployment scenarios, fulfilling our research objectives.
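To convey the flavour of election-based pseudo random access, the sketch below shows the core trick: every node that contends for a slot computes the same deterministic pseudo-random ranking from (node ID, slot number), so the winner can be decided locally and consistently without extra messages. This is a simplified illustration only; it omits holdoff times, eligibility windows and the other details of the IEEE 802.16 mesh election and of the EBTT/EPRA schemes studied in the thesis.

```python
# Simplified, hypothetical election-based slot assignment for a distributed scheduler.
import hashlib

def election_score(node_id: int, slot: int) -> int:
    """Deterministic pseudo-random score that any node can recompute for any contender."""
    digest = hashlib.sha256(f"{node_id}:{slot}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def slot_winner(contenders: list[int], slot: int) -> int:
    """Every contender runs this locally and reaches the same answer."""
    return max(contenders, key=lambda node_id: election_score(node_id, slot))

if __name__ == "__main__":
    neighbourhood = [3, 7, 12, 19]  # node IDs contending within a two-hop neighbourhood
    for slot in range(5):
        print(f"slot {slot}: node {slot_winner(neighbourhood, slot)} transmits")
```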
  • Item
    Mitigating the risk of organisational information leakage through online social networking
    Abdul Molok, Nurul Nuha ( 2013)
The inadvertent leakage of sensitive organisational information through the proliferation of online social networking (OSN) is a significant challenge in a networked society. Although considerable research has studied information leakage, the advent of OSN amongst employees presents new fundamental problems to organisations. As employees bring their own mobile devices to the workplace, allowing them to engage in OSN activities anytime and anywhere, reported cases of leakage of organisational information through OSN are on the rise. Despite its opportunities, OSN tends to blur the boundaries between employees’ professional and personal use of social media, presenting challenges for organisations in protecting the confidentiality of their valuable information. The thesis investigates two phenomena. First, it explores the disclosure of sensitive organisational information by employees through the use of social media. Second, it looks into the organisational security strategies employed to mitigate the associated security risks. During the first multiple-case study, employees across four organisations were interviewed to understand their OSN behaviour and the types of work-related information they disclosed online. In the second multiple-case study, the researcher went back to the same organisations and interviewed security managers to understand the potential security impacts of employees’ OSN behaviour and the various security strategies implemented in the organisations. The findings emerging from these interpretive multiple-case studies, based on rich insights from both employees and security managers, led to the development of a maturity framework. This framework can assist organisations to assess, develop or improve their security strategies to mitigate social media related risks. The framework was evaluated through focus groups with experts in security and social media management. The research, which consists of two sets of multiple case studies and focus groups, has resulted in three main contributions:
1. Understanding of contextual influences on the disclosure of sensitive organisational information, from multiple perspectives
2. Identification of the influence of managerial attitudes on the deployment of a particular information security strategy, especially in relation to social media use amongst employees
3. Development and evaluation of a Maturity Framework for Mitigating Leakage of Organisational Information through OSN
As suggested by the literature, security behaviour can be either intentional or unintentional in nature. However, this research found that information leakage through employees’ OSN was more unintended than intended, indicating that, in general, employees did not mean to cause security problems for their organisations. The research also provided evidence that information leakage through OSN was due to influences that can be categorised into personal, organisational and technological factors. Interestingly, employees and security managers had different understandings of why information leakage through OSN happens. Employees demonstrated that leakage was inadvertent, while security managers did not appreciate that employees had no intention of causing security problems. These findings suggest that information leakage via OSN can be effectively mitigated by organisations, depending on the way management perceives how employees’ OSN behaviour could jeopardise the confidentiality of information.
In accordance with the security literature, this research found different kinds of security strategies that organisations employ to mitigate the security issues posed by OSN. Interestingly, this research also found that, across the organisations, these security strategies varied in their levels of sophistication, revealing certain managerial attitudes which influenced the organisational capability to manage the risk of leakage via employees’ OSN. Since a higher level of strategy sophistication results in more risk-averse employee OSN behaviour, this research identified relationships between employee OSN behaviour, OSN security strategies and managerial attitudes. For example, the organisation that received little management support for security initiatives tended to have poorly developed controls, which resulted in a low level of employee awareness of risky OSN behaviour. Finally, this research culminated in the development of a Maturity Framework for Mitigating Leakage of Organisational Information through OSN, which was evaluated by security experts through focus groups. This framework can be used by organisations to assess how well their current information security measures can be expected to protect them from this insider threat. It can also provide recommendations for organisations to improve their current OSN security strategies.
  • Item
    Towards realtime multiset correlation in large scale geosimulation
    QI, JIANZHONG ( 2013)
Geosimulation is a branch of study that emphasizes the spatial structures and behaviors of objects in computer simulation. Its applications include urban computing, geographic information systems (GIS), and geographic theory validation, where real world experiments are infeasible due to the spatio-temporal scales involved. Geosimulation provides a unique perspective on urban dynamics by modeling the interaction of individual objects such as people, businesses, and public facilities, at time scales approaching "realtime". As the scale of geosimulation grows, the cost of correlating the sets of objects for interaction simulation becomes significant, and this calls for efficient multiset correlation algorithms. We study three key techniques for efficient multiset correlation: space-constraining, time-constraining, and dimensionality reduction. The space-constraining technique constrains multiset correlation based on spatial proximity. The intuition is that usually only objects that are close to each other can interact with each other and need to be considered in correlation. As a typical study we investigate the min-dist location selection and facility replacement queries, which correlate three sets of points representing the clients, the existing facilities, and the potential locations, respectively. The min-dist location selection query finds a location, among the set of potential locations, at which to establish a new facility so that the average distance between the clients and their respective nearest facilities is minimized. The min-dist facility replacement query has the same optimization goal, but finds a potential location at which to establish a new facility that replaces an existing one. To constrain the query processing costs, we only compute the impact of choosing a potential location on its nearby clients, since those are the only clients whose respective nearest facilities might change because of the chosen potential location. The time-constraining technique constrains multiset correlation based on time relevance. The intuition is that a correlation relationship usually stays valid for a short period of time, during which we do not need to recompute the correlation. As a typical study we investigate the continuous intersection join query, which reports the intersecting objects from two sets of moving objects with non-zero extents at every timestamp. To constrain the query processing costs, the key idea is to compute the intersection not only for the current timestamp but also for the near future, according to the current object velocities, and to update the intersection only when the object velocities are updated. We design a cost model to help determine up to which timestamp in the near future we compute the intersection, so as to achieve the best balance between the cost of a single intersection computation and the total number of recomputations. The dimensionality reduction technique reduces the cost of multiset correlation by reducing data dimensionality. As a typical study we investigate mapping based dimensionality reduction for similarity searches on time series data, which correlate the time series based on similarity. We treat every time series as a point in a high dimensional space and map it to a low dimensional space, using its distances to a small number of reference data points in the original high dimensional space as the coordinates.
We then index the mapped time series in the low dimensional space, which allows efficient processing of similarity searches. We conduct extensive experiments on our proposed techniques. The results confirm the superiority of our techniques over the baseline approaches.
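The mapping step described above resembles a pivot-based (reference-point) embedding. A minimal sketch of that general idea follows; the reference selection, distance measure and fixed-size shortlist are illustrative assumptions rather than the thesis's exact indexing method.

```python
# Map each time series to a low-dimensional point whose coordinates are its
# distances to a few reference series, then use the mapped coordinates to
# shortlist candidates for a similarity search. By the triangle inequality,
# |d(q, r) - d(x, r)| <= d(q, x), so the mapped coordinates give a lower bound
# on the true distance; a fixed-size shortlist (as here) is approximate, while
# threshold-based pruning on the bound would be exact.
import numpy as np

def embed(series: np.ndarray, references: np.ndarray) -> np.ndarray:
    """series: (n, length), references: (k, length) -> (n, k) mapped points."""
    return np.linalg.norm(series[:, None, :] - references[None, :, :], axis=2)

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 128))                      # toy time series
references = data[rng.choice(len(data), 4, replace=False)]
mapped = embed(data, references)

query = rng.normal(size=(1, 128))
query_mapped = embed(query, references)[0]

lower_bounds = np.max(np.abs(mapped - query_mapped), axis=1)  # lower bound on true distance
candidates = np.argsort(lower_bounds)[:20]                    # shortlist by lower bound

true_dists = np.linalg.norm(data[candidates] - query, axis=1)  # verify with true distances
print("best candidate:", candidates[np.argmin(true_dists)])
```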