Computing and Information Systems - Theses

Permanent URI for this collection

Search Results

Now showing 1 - 10 of 24
  • Item
    Breast cancer detection and diagnosis in dynamic contrast-enhanced magnetic resonance imaging
    LIANG, XI ( 2013)
    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast is a medical imaging tool used to detect and diagnose breast disease. A DCE-MR image is a series of three-dimensional (3D) breast MRI scans acquired before and after the injection of paramagnetic contrast agents, forming a 4D image (3D spatial + time). DCE-MRI allows the analysis of how the intensity of magnetic resonance (MR) signals varies over time after the injection of contrast agents. The interpretation of 4D DCE-MRI images can be time consuming due to the amount of information involved, and motion artifacts between the image scans further complicate the diagnosis. A DCE-MR image includes a large amount of data and is challenging to interpret even for an experienced radiologist. Therefore, a computer-aided diagnosis (CAD) system is desirable to assist the diagnosis of abnormal findings in the DCE-MR image. We propose a fully automated CAD system comprising five novel components: a new image registration method to recover motion between MR image acquisitions, a novel lesion detection method to identify all suspicious regions, a new lesion segmentation method to draw lesion contours, a novel lesion feature characterization method, and a classification step that labels the automatically detected lesions using the proposed features. The following lists the challenges found in most CAD systems and the corresponding contributions of our CAD system for breast DCE-MRI.
    1. Image registration. One challenge in the interpretation of DCE-MRI is motion artifacts, which make the pattern of tissue enhancement unreliable. Image registration is used to recover rigid and nonrigid motion between the 3D image sequences in a 4D breast DCE-MRI. Most existing b-spline based registration methods require lesion segmentation in breast DCE-MRI to preserve the lesion volume before performing the registration. We propose an automatic regularization coefficient generation method for b-spline based registration of breast DCE-MRI, in which tumor regions are transformed in a rigid fashion. Our method does not perform lesion segmentation but computes a map that reflects tissue rigidity. In the evaluation of the proposed coefficients, registration using our automatically generated rigidity coefficients is compared against manually assigned coefficients for the rigidity and smoothness terms. The evaluation is performed on 30 synthetic and 40 clinical pairs of pre- and post-contrast MRI scans. The results show that tumor volumes are well preserved by using a rigidity term (2.25% ± 4.48% volume change) compared to a smoothness term (22.47% ± 20.1%). In our dataset, the volume preservation achieved with our automatically generated coefficients is comparable to that of manually assigned rigidity coefficients (2.29% ± 13.25%), with no significant difference in volume changes (p > 0.05).
    2. Lesion detection. After the motion has been corrected by our registration method, we locate regions of interest (ROIs) using our lesion detection method. The aim is to highlight suspicious ROIs to reduce the ROI search time and the possibility of radiologists overlooking small regions. A low signal-to-noise ratio is a general challenge in lesion detection in MRI. In addition, the value range of a feature of normal tissue in one patient can overlap with that of malignant tissue in another patient, e.g. tissue intensity values and enhancement. Most existing lesion detection methods suffer from a high false positive rate due to blood vessels or motion artifacts. In our method, we locate suspicious lesions by applying a threshold on essential features. The features are normalized to reduce the variation between patients. We then exclude blood vessels and motion artifacts from the initial results by applying filters that differentiate them from other tissues. In an evaluation of the system on 21 patients with 50 lesions, all lesions were successfully detected, with 5.04 false positive regions per breast.
    3. Lesion segmentation. One of the main challenges of existing lesion segmentation methods in breast DCE-MRI is that they require the ROI that encloses a lesion to be small in order to successfully segment the lesion. We propose a lesion segmentation method based on naive Bayes and Markov random fields. Our method also requires an ROI selected by a user, but it is not sensitive to the size of the ROI. In our method, the ROI selected in a DCE-MR image is modeled as a connected graph with local Markov properties, where each voxel of the image is regarded as a node. Three edge potentials of the graph are proposed to encourage the smoothness of the segmented regions. In a validation on 72 lesions, our method performs better than a baseline fuzzy c-means method and another closely related method for segmenting lesions in breast MRI, showing a higher overlap with the ground truth.
    4. Feature analysis and lesion classification. The challenge of feature analysis in breast DCE-MRI is that different types of lesions can share similar features. In our study, we extract various morphological, textural and kinetic features of the lesions and apply three classifiers to label them. In the morphological feature analysis, we propose minimum volume enclosing ellipsoid (MVEE) based features to measure the similarity between a lesion and its MVEE. In statistical testing on 72 lesions, the MVEE-based features are significant in differentiating malignant from benign lesions.
    5. CAD applications. The proposed CAD system is versatile. We show two scenarios in which a radiologist makes use of the system. In the first scenario, a user selects a rectangular region of interest (ROI) as input and the CAD system automatically localizes the lesion in the ROI and classifies it as benign or malignant. In the second scenario, the CAD system acts as a “second reader” which fully automatically identifies all malignant regions. At the time of writing, this is the first automated CAD system capable of carrying out all these processes without any human interaction.
    In this thesis, we evaluated the proposed image registration, lesion detection, lesion segmentation, feature extraction and lesion classification on a relatively small database, which makes conclusions on generalizability difficult. In future work, the system requires clinical testing on a large dataset in order to advance this breast MRI CAD towards reducing image interpretation time, eliminating unnecessary biopsies and improving the cancer identification sensitivity of radiologists.
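    To make the lesion detection idea above concrete, here is a minimal, hypothetical sketch of thresholding a normalized enhancement feature in a 4D DCE-MRI array; the specific feature, normalisation and threshold are illustrative assumptions, not the features proposed in the thesis.

    # Illustrative sketch only: thresholding a normalized enhancement map to flag
    # candidate lesion voxels in a 4D DCE-MRI array. The feature and threshold are
    # assumptions for illustration, not the features proposed in the thesis.
    import numpy as np

    def candidate_lesion_mask(dce, pre_idx=0, post_idx=1, threshold=0.8, eps=1e-6):
        """dce: 4D array (time, z, y, x). Returns a boolean mask of suspicious voxels."""
        pre = dce[pre_idx].astype(float)
        post = dce[post_idx].astype(float)
        rel_enh = (post - pre) / (pre + eps)           # relative enhancement per voxel
        # normalise to [0, 1] to reduce variation between patients
        lo, hi = np.percentile(rel_enh, [1, 99])
        norm = np.clip((rel_enh - lo) / (hi - lo + eps), 0.0, 1.0)
        return norm > threshold                         # candidate regions of interest

    # usage: mask = candidate_lesion_mask(np.load("dce_series.npy"))  # hypothetical file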
  • Item
    Seamless proximity sensing
    Ahmed, Bilal ( 2013)
    Smartphones are uniquely positioned to offer a new breed of location and proximity aware applications that can harness the benefits provided by positioning technologies such as GPS, and advancements in radio communication technologies such as Near Field Communication (NFC) and Bluetooth Low Energy (BLE). The popularity of location aware applications that make use of technologies such as GPS, Wi-Fi and 3G has further strained the already frail battery life of current-generation smartphones. This research project performs a comparative assessment of NFC, BLE and Classic Bluetooth (BT) for the purpose of establishing proximity awareness on mobile devices. We demonstrate techniques, in the context of a mobile application, for providing seamless proximity awareness using the three technologies, with a focus on accuracy and operational range. We present the results of our research and experimentation, which establish a baseline for proximity estimation using the three technologies. We further investigate the viability of using BT as the underlying wireless technology for peer-to-peer networking on mobile devices and demonstrate techniques that can be applied programmatically for automatic detection of nearby mobile devices.
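    As background to the proximity estimation discussed above, here is a minimal sketch of the widely used log-distance path-loss model for converting a BLE/Bluetooth RSSI reading into a rough distance and proximity zone; the calibration constants (tx_power_dbm, the exponent n, and the zone cut-offs) are illustrative assumptions, not values from this thesis.

    # Minimal sketch: RSSI-based proximity estimation via the log-distance
    # path-loss model. Calibration values below are assumed, not measured.
    def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, n=2.0):
        """Rough distance (metres) from a BLE/Bluetooth RSSI reading."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

    def proximity_zone(rssi_dbm):
        d = estimate_distance_m(rssi_dbm)
        if d < 0.5:
            return "immediate"
        if d < 4.0:
            return "near"
        return "far"

    # e.g. proximity_zone(-70) -> "near" under the assumed calibration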
  • Item
    The effect of Transactive Memory Systems on performance in virtual teams
    MOHAMED ARIFF, MOHAMED ( 2013)
    Although virtual teams are increasingly common in organizations, research on the formation of Transactive Memory Systems (TMS) in virtual teams and its effect on team performance is relatively rare. Previous studies have reported that TMS quality influences team performance in face-to-face teams. However, the effect of TMS quality on the performance of virtual teams has not been adequately researched. This study extends past research and proposes a model in which task interdependence and TMS quality jointly influence the performance of virtual teams. Based on the conceptual model of Brandon and Hollingshead, this study hypothesized the effects of: (1) the quality of the TMS formation process on TMS quality; (2) TMS quality on virtual teams' performance; and (3) task interdependence on the relationship between TMS quality and virtual teams' performance. The study was undertaken in three phases. Firstly, a conceptual phase was conducted to investigate and analyse the existing literature on virtual teams, their key characteristics, their performance and TMS. The conceptual phase resulted in the development of a research model and relevant hypotheses. Secondly, in the exploratory phase, four separate questionnaire surveys were conducted. The exploratory phase helped develop and test all of the instruments used in the study, producing a reliable and valid set of instruments for the final phase, the confirmatory phase. In the confirmatory phase, an online survey was conducted to test the research model and the proposed hypotheses. This phase provided a broader understanding of TMS formation in virtual teams and of the joint effect of task interdependence and TMS quality on virtual teams' performance. The results of this study indicated that: (1) the quality of the TMS utilization process has a positive effect on virtual teams' performance; (2) TMS quality has a positive effect on virtual teams' performance; (3) task interdependence has a significant negative effect on the relationship between TMS quality and virtual teams' performance; and (4) TMS quality partially mediates the effect of task interdependence on virtual teams' performance. However, the results failed to support two hypothesized relationships: the effects of (1) the quality of the TMS construction process and (2) the quality of the TMS evaluation process on TMS quality. This is the first study to investigate TMS quality in a field study of a virtual team environment, as previous studies on TMS have focused on experimental virtual teams. The main contribution of this study is a theoretical model that explains the effect of TMS quality on virtual teams' performance. This study also contributes to theory by extending Brandon and Hollingshead's model of the TMS formation process. The study entailed several methodological improvements over previous studies, including: (1) new instrument items to measure the quality of the TMS formation process construct; (2) a new two-dimensional TMS quality construct which employs the 'who knows what' and 'who does what' dimensions respectively; and (3) a content adequacy assessment using the Q-sort technique, which helped to demonstrate the validity and reliability of the instrument items prior to actual data collection.
    This study provides organizations with a better comprehension of the TMS formation process that affects virtual teams' performance. It also explains how task interdependence affects TMS quality, which in turn results in better performance of virtual teams.
  • Item
    Rapid de novo methods for genome analysis
    HALL, ROSS STEPHEN ( 2013)
    Next generation sequencing methodologies have resulted in an exponential increase in the amount of genomic sequence data available to researchers. Valuable tools in the initial analysis of such data for novel features are de novo techniques - methods that employ a minimum of comparative sequence information from known genomes. In this thesis I describe two heuristic algorithms for the rapid de novo analysis of genomic sequence data. The first algorithm employs a multiple Fast Fourier Transform, mapped to two-dimensional space. The resulting bitmap clearly illustrates periodic features of a genome, including coding density. The compact representation allows megabase scales of genomic data to be rendered in a single bitmap. The second algorithm, RTASSS (RNA Template Assisted Secondary Structure Search), predicts potential members of RNA gene families that are related by similar secondary structure, but not necessarily conserved sequence. RTASSS can find candidate structures similar to a given template structure without the use of sequence homology. Both algorithms have linear complexity.
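    As an illustration of the kind of periodic signal such a spectral approach can expose, here is a minimal sketch that computes the period-3 Fourier component of a DNA sequence in fixed windows, a standard indicator of coding density; the binary-indicator encoding and window size are assumptions for illustration, not necessarily the thesis's exact mapping.

    # Sketch: period-3 spectral signal of a DNA sequence, a common proxy for
    # coding density. Encoding and window size are illustrative assumptions.
    import numpy as np

    def period3_signal(seq, window=351):
        seq = seq.upper()
        out = []
        for start in range(0, len(seq) - window + 1, window):
            win = seq[start:start + window]
            power = 0.0
            for base in "ACGT":
                x = np.array([1.0 if c == base else 0.0 for c in win])
                spectrum = np.abs(np.fft.fft(x)) ** 2
                power += spectrum[window // 3]      # Fourier component at period 3
            out.append(power)
        return np.array(out)                        # high values suggest coding regions

    # e.g. period3_signal("ATG" * 200) yields a strong period-3 component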
  • Item
    An energy and spectrum efficient distributed scheduling scheme for Wireless Mesh Networks
    Vijayalayan, Kanthaiah Sivapragasam ( 2013)
    The success of Wireless Mesh Network (WMN) applications depends on the energy efficiency, spectrum reuse, scalability, and robustness of scheduling schemes. However, to the best of our knowledge, the available schedulers fail to address these requirements simultaneously. This thesis proposes an autonomous, scalable, and deployable scheduler for WMNs with energy-efficient transceiver activation and efficient spectrum reuse. Our goals are: (i) to conserve energy for longer sustainability, (ii) to effectively reuse the radio spectrum for higher throughput, lower delay, lower packet loss, and fairness, and (iii) to ensure that the proposed solution serves common WMN applications. Our research identified three major approaches to scheduling and eight key attributes, and detailed the evolution of wireless standards for distributed schedulers. Among the solutions, pseudo random access (PRA) is expected to combine the strengths of randomness, for scalability and robustness, and determinism, for energy efficiency and spectrum reuse. However, literature on the IEEE 802.16 election based transmission timing (EBTT) scheme - the only known standardized PRA solution - is limited in scope. We use a combination of simulations, modelling, and analysis in our research. Since existing simulators did not support our ambitious range of investigations, we developed our own simulator, which we called the Election Based Pseudo Random Access (EBPRA) simulator. Moreover, we introduced two types of synthetic mesh networks as a way to decompose the complexities of WMN topologies and systematically study their effects. A benchmarking study of the EBTT against a centralised cyclic access (CCA) scheme revealed less than 50% spectrum reuse, a fairness measure as low as 75%, and, more significantly, energy wastage of up to 90% in reception, together with collisions in transmission, in the EBTT. Hence we propose an enhanced pseudo random access (EPRA) scheme to mitigate these issues. The EPRA does not introduce additional overheads and can be deployed on IEEE 802.16 nodes with minor firmware modifications. Simulations of the EPRA show significant improvements in energy efficiency: collisions are eliminated and reception is near 100% efficient. Moreover, the spectrum reuse and fairness measures also improved. These results validate the findings of the analytical models we derived. Finally, we propose two alternative solutions to handle user data packets: the EPRA based single scheduler (EPRA-SS) and the EPRA based dual scheduler (EPRA-DS). Since satisfying the requirements of voice services means the requirements for data services are also met, we concentrated our investigation on voice. Through extensive simulations and multidimensional data analysis, we identified the supported ranges of network densities, traffic intensities, and buffer allocations that satisfy per-hop delay and packet drop conditions. Hence, we demonstrated for the first time that near-100% energy efficiency should be possible with a distributed scheduler when our EPRA scheme is used. In addition, we have also shown improvements in spectrum reuse for better throughput, shorter delays, and better fairness. Finally, EPRA based schemes have been demonstrated as effective schedulers for user data traffic over WMN deployment scenarios, fulfilling our research objectives.
  • Item
    Mitigating the risk of organisational information leakage through online social networking
    Abdul Molok, Nurul Nuha ( 2013)
    The inadvertent leakage of sensitive organisational information through the proliferation of online social networking (OSN) is a significant challenge in a networked society. Although considerable research has studied information leakage, the advent of OSN amongst employees presents fundamental new problems for organisations. As employees bring their own mobile devices to the workplace, allowing them to engage in OSN activities at any time and anywhere, reported cases of leakage of organisational information through OSN are on the rise. Despite its opportunities, OSN tends to blur the boundaries between employees’ professional and personal use of social media, presenting challenges for organisations in protecting the confidentiality of their valuable information. The thesis investigates two phenomena. First, it explores the disclosure of sensitive organisational information by employees through the use of social media. Second, it looks into the organisational security strategies employed to mitigate the associated security risks. In the first multiple-case study, employees across four organisations were interviewed to understand their OSN behaviour and the types of work-related information they disclosed online. In the second multiple-case study, the researcher went back to the same organisations and interviewed security managers to understand the potential security impacts of employees’ OSN behaviour, and the various security strategies implemented in the organisations. The findings emerging from these interpretive multiple-case studies, based on rich insights from both employees and security managers, led to the development of a maturity framework. This framework can assist organisations to assess, develop or improve their security strategies to mitigate social media related risks. The framework was evaluated through focus groups with experts in security and social media management. The research, which consists of two sets of multiple case studies and focus groups, has resulted in three main contributions:
    1. Understanding of contextual influences on the disclosure of sensitive organisational information, from multiple perspectives
    2. Identification of the influence of managerial attitudes on the deployment of a particular information security strategy, especially in relation to social media use amongst employees
    3. Development and evaluation of a Maturity Framework for Mitigating Leakage of Organisational Information through OSN
    As suggested by the literature, security behaviour can be either intentional or unintentional in nature. However, this research found that information leakage through employees’ OSN was more unintended than intended, indicating that, generally, employees did not mean to cause security problems for their organisations. The research also provided evidence that information leakage through OSN was due to influences that could be categorised into personal, organisational and technological factors. Interestingly, employees and security managers had different understandings of why information leakage through OSN happens. Employees demonstrated that leakage was inadvertent, while security managers did not appreciate that employees had no intention of causing security problems. These findings suggested that information leakage via OSN could be effectively mitigated by organisations, depending on the way management perceived how employees’ OSN behaviour could jeopardise the confidentiality of information.
    Consistent with the security literature, this research found different kinds of security strategies that organisations employed to mitigate the security issues posed by OSN. Interestingly, this research also found that, across the organisations, these security strategies varied in their levels of sophistication, revealing certain managerial attitudes which influenced the organisational capability to manage the risk of leakage via employees’ OSN. Since a higher level of strategy sophistication results in more risk-averse employee OSN behaviour, this research identified relationships between employee OSN behaviour, OSN security strategies and managerial attitudes. For example, the organisation that received little management support for security initiatives tended to have poorly developed controls, which resulted in a low level of employee awareness of risky OSN behaviour. Finally, this research culminated in the development of a Maturity Framework for Mitigating Leakage of Organisational Information through OSN, which was evaluated by security experts through focus groups. This framework can be used by organisations to assess how well their current information security measures can be expected to protect them from this insider threat. It also provides recommendations for organisations to improve their current OSN security strategies.
  • Item
    Towards realtime multiset correlation in large scale geosimulation
    QI, JIANZHONG ( 2013)
    Geosimulation is a branch of study that emphasizes the spatial structures and behaviors of objects in computer simulation. Its applications include urban computing, geographic information systems (GIS), and geographic theory validation, where real-world experiments are infeasible due to the spatio-temporal scales involved. Geosimulation provides a unique perspective on urban dynamics by modeling the interaction of individual objects such as people, businesses, and public facilities, at time scales approaching "realtime". As the scale of geosimulation grows, the cost of correlating the sets of objects for interaction simulation becomes significant, and this calls for efficient multiset correlation algorithms. We study three key techniques for efficient multiset correlation: space-constraining, time-constraining, and dimensionality reduction.
    The space-constraining technique constrains multiset correlation based on spatial proximity. The intuition is that usually only objects that are close to each other can interact with each other and need to be considered in correlation. As a typical study we investigate the min-dist location selection and facility replacement queries, which correlate three sets of points representing the clients, existing facilities, and potential locations, respectively. The min-dist location selection query finds a location from the set of potential locations at which to establish a new facility, so that the average distance between the clients and their respective nearest facilities is minimized. The min-dist facility replacement query has the same optimization goal, but finds a potential location at which to establish a new facility that replaces an existing one. To constrain the query processing costs, we only compute the impact of choosing a potential location on its nearby clients, since those clients are the only ones whose respective nearest facilities might change because of the chosen potential location.
    The time-constraining technique constrains multiset correlation based on time relevance. The intuition is that a correlation relationship usually stays valid for a short period of time, during which we do not need to recompute the correlation. As a typical study we investigate the continuous intersection join query, which reports the intersecting objects from two sets of moving objects with non-zero extents at every timestamp. To constrain the query processing costs, the key idea is to compute the intersection not only for the current timestamp but also for the near future according to the current object velocities, and only update the intersection when the object velocities are updated. We design a cost model to help determine up to which timestamp in the near future we compute the intersection, so as to achieve the best balance between the cost of a single intersection computation and the total number of recomputations.
    The dimensionality reduction technique reduces the cost of multiset correlation by reducing data dimensionality. As a typical study we investigate mapping-based dimensionality reduction for similarity searches on time series data, which correlate the time series based on similarity. We treat every time series as a point in a high dimensional space and map it to a low dimensional space, using its distances to a small number of reference data points in the original high dimensional space as the coordinates. We then index the mapped time series in the low dimensional space, which allows efficient processing of similarity searches. We conduct extensive experiments on the proposed techniques. The results confirm the superiority of our techniques over the baseline approaches.
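    A minimal sketch of the reference-point mapping idea described above follows: each time series becomes a low-dimensional point whose coordinates are its distances to a few reference series, and a cheap search in the mapped space filters candidates before exact verification. The random reference selection and the filter-then-verify usage are simplifying assumptions for illustration, not the thesis's actual selection and indexing methods.

    # Sketch: map time series to a low-dimensional space using distances to a
    # handful of reference series, then filter candidates cheaply in that space.
    import numpy as np

    def map_to_reference_space(series, n_refs=8, seed=0):
        """series: (n, length) array of time series. Returns an (n, n_refs) array."""
        rng = np.random.default_rng(seed)
        ref_idx = rng.choice(len(series), size=n_refs, replace=False)
        refs = series[ref_idx]
        # Euclidean distance from every series to every reference series
        return np.linalg.norm(series[:, None, :] - refs[None, :, :], axis=2)

    def candidate_neighbours(mapped, query_vec, k=10):
        """Cheap filtering step in the mapped space before exact verification."""
        d = np.linalg.norm(mapped - query_vec, axis=1)
        return np.argsort(d)[:k]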
  • Item
    Digital content and its discontents: interpretive flexibility during the implementation and use of enterprise content management systems
    WIDJAJA, IVO ( 2013)
    The proliferation of digitisation and digital content from the late 1990s and throughout the 2000s has brought in a class of information technologies called Enterprise Content Management Systems (ECMS). Many large organisations have turned to ECMS to structure their content management processes and assets through standardisation and formalisation. However, this intention has been found hard to accomplish in practice. Many ECMS implementations, particularly those in multi-layered organisations, face difficulties in fitting packaged software into various forms of practice. This situation often leads to unintended new practices and unexpected outcomes. This thesis seeks to understand how organisations use ECMS to manage digital content produced by themselves and others. It examines the issues and consequences of using these technologies for undertaking such organisational practice. It asks how ECMS technologies manifest themselves, through interpretation, customisation and use, in diverse organisational settings. The broad question this thesis attempts to answer is: "How might we make better sense of the complex interaction between large enterprise IT systems and organisational practices that occurs during the design, implementation, and use of ECMS technologies?" For such an enquiry, a rich processual view of socio-technical transformation is crucial to describe the complexity and diversity of relations between ECMS and organisations in their natural context. As such, I draw from Science and Technology Studies to describe phenomena typically observed with the Information Systems lens. In particular, I build on the notion of interpretive flexibility outlined in the Social Construction of Technology (SCOT) genre (Pinch & Bijker, 1984) and develop its application in the Information Systems (IS) area (Orlikowski, 1992; Doherty, Coombs, & Loan-Clarke, 2006). The central research question is addressed through three case studies of ECMS projects: a Learning Management System project in an Australian university; a Web Content Management System project in the same Australian university; and a Record Management System project in a public organisation in Australia. These cases bring different social arrangements and work settings, as well as various flavours of ECMS. Going beyond many existing studies that look exclusively at a particular level of the organisation, this thesis examines the interplay between IT artefacts and various social constituents across multiple levels, identified as the individual, groups, and the organisation. The thesis confirms the heterogeneity of socio-technological outcomes in the introduction of ECMS despite its unifying rationalist aspirations. To better explain this phenomenon, the thesis unpacks the notion of interpretive flexibility (Pinch & Bijker, 1984; Bijker, Hughes, & Pinch, 1987) into three components: perceptive flexibility, operative flexibility, and constructive flexibility. This elaboration allows a richer description and understanding of different modes of engagement between the social and the technical as various constituents within the organisation make sense of, use, and modify IT artefacts. Further, while the original SCOT version of interpretive flexibility provided a view of technology moving through flux to reach a stable closure, the kind of interpretive flexibility elaborated in this thesis suggests sustained fluidity in the ongoing practices around IT artefacts.
    This fluidity and heterogeneity of outcomes were observed within the practices of various social constituents across multiple levels. Individuals, groups, and the whole organisation could exercise their own interpretive flexibility within the constraints of existing practice. It was observed that a particular level could play a dominant role in deciding how the technology was eventually used and in shaping practice around the ECMS. The thesis contributes to a better understanding of ECMS phenomena in a multilevel context and to a better description of various instances of socio-technical interaction during the re-design of ECMS technologies and the rearrangement of content management practice. By giving prominence to the technological form and attributes of ECMS, this thesis also attempts to provide a more generous space for the role of IT artefacts in IS research (Orlikowski & Iacono, 2001). This thesis has implications for understanding the implementation of ECMS in particular, and the appropriation of enterprise-level IT systems within organisations in general.
  • Item
    Extending reactivity: a Kanban-based supply chain control system
    ZHANG, JAMES ( 2013)
    The Kanban system is a manufacturing control system used to regulate the material and information flow within a company and between a company and its suppliers. Kanban is a decentralised system which leads to “just-in-time” or “lean” production. This thesis investigates the feasibility of, and methods for, extending the traditional Kanban system so that it can operate in adverse environments where changes are rapid and large in amplitude. The traditional Kanban system is widely used in production control and is a reactive system. It is highly robust in a favourable environment, but its performance suffers when there are environmental uncertainties, such as large fluctuations in demand. The alternative production control approach is to use systems such as MRP or MRP II, which include a central planner and rely on demand forecasting to be effective. In this research we use an agent-oriented approach in designing an Extended Kanban System (EKS). The EKS treats an individual Kanban as an independent decision maker to afford flexibility and adaptability. The key advantages of the EKS proposed here are its excellent usability, simplicity and robustness. The main challenge of the EKS approach is to account for longer-term (temporal) changes in the environment, such as demand trends, and the global (spatial) aspects of coordinating the local responses of individual reactive Kanban agents. We use simulation and laboratory based experiments to validate the effectiveness of the EKS. Both the simulation results and the laboratory experiments demonstrate the superiority of the extended system in comparison with the traditional system and other variants of Kanban systems proposed in the literature. This work provides a cost effective way of extending a commonly used industrial information system. The simplicity of implementation, combined with the better performance of the EKS, makes the system appealing to a large audience of practitioners who want to adopt the Kanban system in an environment otherwise not suitable for lean production, without embracing the complexity of MRP-like information systems. Finally, the research enriches our understanding of the relationships among models, rationality and reactiveness. There is a wide range of agents that have implicit, primitive and non-symbolic models. Early work on reactive agents tended to ignore this possibility of models in its objection to the centrality of symbolic models. The approach to rationality used in this thesis consciously considers the active role of the environment in system rationality and provides a more complete and practical framework for the design and analysis of rational systems in production control.
  • Item
    Automatic identification of locative expressions from informal text
    Liu, Fei ( 2013)
    Informal place descriptions that are rich in locative expressions can be found in various contexts. The ability to extract locative expressions from such informal place descriptions is at the centre of improving the quality of services, such as interpreting geographical queries and emergency calls. While much attention has been focused on the identification of formal place references (e.g., Rathmines Road) from natu- ral language, people tend to make heavy use of informal place references (e.g., my bedroom). This research addresses the problem by developing a model that is able to automatically identify locative expressions from informal text. Moreover, we study and discover insights of what aspects are helpful in the identification task. Utilising an existing manually annotated corpus, we re-annotate locative expressions and use them as the gold standard. Having the gold standard ready, we take a machine learning approach to the identification task with well-reasoned features based on observation and intuition. Further, we study the impacts of various feature setups on the performance of the model and provide analyses of experiment results. With the best performing feature setup, the model is able to achieve significant increase in performance over the baseline systems.