Computing and Information Systems - Research Publications
The effect of enterprise architecture deployment practices on organizational benefits: A dynamic capability perspective
(MDPI AG, 2020-11-01)
In recent years, the literature has emphasized theory building in the context of Enterprise Architecture (EA) research. Specifically, scholars tend to focus on EA-based capabilities that organize and deploy organization-specific resources to align strategic objectives with the technology’s particular use. Despite the growth in EA studies, substantial gaps remain in the literature. The most substantial gaps are that the conceptualization of EA-based capabilities still lacks a firm base in theory and that there is limited empirical evidence on how EA-based capabilities drive business transformation and deliver benefits to the firm. Therefore, this study focuses on EA-based capabilities, using the dynamic capabilities view as a theoretical foundation, and develops and tests a new research model that explains how dynamic enterprise architecture capabilities lead to organizational benefits. The research model’s hypotheses are tested using a dataset of responses from 299 CIOs, IT managers, and lead architects. Based on this study’s outcomes, we contend that dynamic enterprise architecture capabilities positively enhance firms’ process innovation and business–IT alignment. These mediating forces are both positively associated with organizational benefits. The firms’ EA resources, and specifically EA deployment practices, are essential in cultivating dynamic enterprise architecture capabilities. This study advances our understanding of how to efficaciously delineate dynamic enterprise architecture capabilities in delivering benefits to the organization.
Gain-Loss Framing: Comparing the Push Notification Message to Increase Purchase Intention in e-Marketplace Mobile Application
(Institute of Electrical and Electronics Engineers (IEEE), 2020-10-16)
Cart abandonment occurs when a customer begins an online transaction in an e-marketplace app but abandons it before completing payment. It has become an issue for e-marketplace sellers and developers because it can reduce sales and delay income. One way to reduce cart-abandonment rates is to send push notifications that remind and persuade customers to complete their transaction. A stimulus-organism-response (SOR) framework is used as the conceptual foundation: the push notification is the stimulus, the organism perceives the value of the product, and purchase intention is the response. The push notification is designed using message framing, which can be gain-framed or loss-framed. The personalization effect is also analyzed, because it has been shown to persuade users effectively in marketing and product promotion. This study also compares two product types: utilitarian and hedonic. The study was conducted as a 2 (gain-framed vs. loss-framed) × 2 (hedonic vs. utilitarian) × 2 (personalized vs. general) between-subjects experiment. Hypotheses were tested by comparing perceived value across groups using analysis of variance and a mediation analysis on 600 responses. The results show that, for a general message, gain-framed and loss-framed content do not differ significantly in the perceived value of the product, regardless of product type. A personalized message with gain-framed content has a stronger effect than loss-framed content on the perceived value of a utilitarian product, but the improvement is not significant for a hedonic product. Perceived value of the product also has a significant effect on purchase intention and partially mediates the effect of message framing on purchase intention. Thus, it is better to design the push notification as a personalized message with gain-framed content in order to reduce cart-abandonment rates.
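As a rough illustration of the 2 × 2 × 2 between-subjects design described above, the sketch below (with invented toy responses, not the study's data) computes mean perceived value per experimental cell and a gain-minus-loss framing contrast for one slice of the design:

```python
from statistics import mean

# Toy responses: perceived product value (1-7 scale) per experimental cell.
# Factor levels mirror the study's design; the numbers are invented.
responses = {
    ("gain", "utilitarian", "personalized"): [6, 6, 5, 7],
    ("loss", "utilitarian", "personalized"): [4, 5, 4, 5],
    ("gain", "utilitarian", "general"): [5, 4, 5, 5],
    ("loss", "utilitarian", "general"): [5, 5, 4, 5],
    ("gain", "hedonic", "personalized"): [5, 6, 5, 5],
    ("loss", "hedonic", "personalized"): [5, 5, 6, 5],
    ("gain", "hedonic", "general"): [4, 5, 5, 4],
    ("loss", "hedonic", "general"): [5, 4, 5, 4],
}

def cell_means(data):
    """Mean perceived value for every framing x product x message cell."""
    return {cell: mean(values) for cell, values in data.items()}

def framing_effect(data, product_type, message_type):
    """Gain-minus-loss difference in mean perceived value for one slice."""
    gain = mean(data[("gain", product_type, message_type)])
    loss = mean(data[("loss", product_type, message_type)])
    return gain - loss

# In the toy data, personalized utilitarian messages show a gain-framing
# advantage, echoing the pattern the paper reports.
print(framing_effect(responses, "utilitarian", "personalized"))  # 1.5
```

A full analysis would run an ANOVA and a mediation model on top of these cell means; the sketch only shows how the factorial cells are organized.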
Prediction of rifampicin resistance beyond the RRDR using structure-based machine learning approaches.
(Nature Publishing Group, 2020-10-22)
Rifampicin resistance is a major therapeutic challenge, particularly in tuberculosis, leprosy, P. aeruginosa and S. aureus infections, where it develops via missense mutations in the rpoB gene. Previously, we highlighted that these mutations reduce protein affinities within the RNA polymerase complex, subsequently reducing nucleic acid affinity. Here, we have used these insights to develop a computational rifampicin resistance predictor capable of identifying resistant mutations even outside the well-defined rifampicin resistance determining region (RRDR), using clinical M. tuberculosis sequencing information. Our tool correctly identified up to 90.9% of M. tuberculosis rpoB variants, with a sensitivity of 92.2%, specificity of 83.6% and MCC of 0.69, outperforming the current gold-standard GeneXpert-MTB/RIF. We show our model can be translated to other clinically relevant organisms, M. leprae, P. aeruginosa and S. aureus, despite weak sequence identity. Our method was implemented as an interactive tool, SUSPECT-RIF (StrUctural Susceptibility PrEdiCTion for RIFampicin), freely available at https://biosig.unimelb.edu.au/suspect_rif/.
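The reported figures are standard binary confusion-matrix metrics; a minimal sketch of how they are computed (the counts below are invented for illustration, not the study's data):

```python
import math

def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and Matthews correlation coefficient (MCC)
    from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, mcc

# Hypothetical counts for illustration only.
sens, spec, mcc = confusion_metrics(tp=83, fp=9, tn=46, fn=7)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} MCC={mcc:.2f}")
```

MCC is a natural summary metric here because it stays informative when resistant and susceptible classes are imbalanced, unlike raw accuracy.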
Mapping the Patient's Journey in Healthcare through Process Mining
Nowadays, assessing and improving customer experience has become a priority and a key differentiator for businesses and organizations worldwide. A customer journey (CJ) is a strategic tool, a map of the steps customers follow when engaging with a company or organization to obtain a product or service. There is a growing need to obtain knowledge about customers' perceptions and feelings as they interact with participants, touchpoints, and channels through the different stages of the customer life cycle. This study aims to describe the application of process mining techniques in healthcare as a tool to assess customer journeys. The appropriateness of the approach is illustrated through a case study of a key healthcare process. The results depict how a healthcare process can be mapped through the CJ components, and how its analysis can serve to understand and improve the patient's experience.
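A core primitive behind process-mining-based journey maps is the directly-follows graph: counting how often one activity (touchpoint) immediately follows another across patient traces. A minimal sketch with an invented event log:

```python
from collections import Counter

# Toy event log: one ordered list of activities per patient case (invented).
event_log = [
    ["registration", "triage", "consultation", "discharge"],
    ["registration", "triage", "lab test", "consultation", "discharge"],
    ["registration", "consultation", "discharge"],
]

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b."""
    pairs = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return pairs

dfg = directly_follows(event_log)
print(dfg[("registration", "triage")])  # 2 of the 3 toy cases start this way
```

Discovery algorithms then turn these counts into a process model; dedicated libraries automate this, but the counting step is the essence of mapping a journey from event data.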
Classification performance of administrative coding data for detection of invasive fungal infection in paediatric cancer patients
(PUBLIC LIBRARY SCIENCE, 2020-09-09)
BACKGROUND: Invasive fungal infection (IFI) detection requires application of complex case definitions by trained staff. Administrative coding data (ICD-10-AM) may provide a simplified method for IFI surveillance, but the accuracy of case ascertainment in children with cancer is unknown. OBJECTIVE: To determine the classification performance of ICD-10-AM codes for detecting IFI using a gold-standard dataset (r-TERIFIC) of confirmed IFIs in paediatric cancer patients at a quaternary referral centre (Royal Children's Hospital) in Victoria, Australia from 1st April 2004 to 31st December 2013. METHODS: ICD-10-AM codes denoting IFI in paediatric patients (<18 years) with haematologic or solid tumour malignancies were extracted from the Victorian Admitted Episodes Dataset and linked to the r-TERIFIC dataset. Sensitivity, positive predictive value (PPV) and F1 scores of the ICD-10-AM codes were calculated. RESULTS: Of 1,671 evaluable patients, 113 (6.76%) had confirmed IFI diagnoses according to gold-standard criteria, while 114 (6.82%) cases were identified using the codes. Of the clinical IFI cases, 68 received at least one ICD-10-AM code for IFI, corresponding to an overall sensitivity, PPV and F1 score of approximately 60% each. Sensitivity was highest for proven IFI (77% [95% CI: 58-90]; F1 = 47%) and invasive candidiasis (83% [95% CI: 61-95]; F1 = 76%) and lowest for other/unspecified IFI (20% [95% CI: 5.05-72%]; F1 = 5.00%). The most frequent misclassification was coding of invasive aspergillosis as invasive candidiasis. CONCLUSION: ICD-10-AM codes demonstrate moderate sensitivity and PPV for detecting IFI in children with cancer. However, specific subsets of proven IFI and invasive candidiasis (codes B37.x) are more accurately coded.
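The overall figures follow directly from the counts reported above (113 gold-standard cases, 114 coded cases, 68 overlapping), since F1 is the harmonic mean of sensitivity and PPV. A short sketch of the arithmetic:

```python
def sensitivity_ppv_f1(true_positives, gold_cases, coded_cases):
    """Sensitivity, PPV and F1 from case counts; F1 is the harmonic mean
    of sensitivity and PPV, which here simplifies to 2*TP/(gold + coded)."""
    sens = true_positives / gold_cases
    ppv = true_positives / coded_cases
    f1 = 2 * sens * ppv / (sens + ppv)
    return sens, ppv, f1

# Counts as reported in the abstract.
sens, ppv, f1 = sensitivity_ppv_f1(68, 113, 114)
print(f"sensitivity={sens:.0%} PPV={ppv:.0%} F1={f1:.0%}")  # all round to 60%
```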
Assessment of Smoke Contamination in Grapevine Berries and Taint in Wines Due to Bushfires Using a Low-Cost E-Nose and an Artificial Intelligence Approach
Bushfires are increasing in number and intensity due to climate change. A newly developed low-cost electronic nose (e-nose) was tested on wines made from grapevines exposed to smoke in field trials. E-nose readings were obtained from wines from five experimental treatments: (i) low-density smoke exposure (LS), (ii) high-density smoke exposure (HS), (iii) high-density smoke exposure with in-canopy misting (HSM), and two controls: (iv) control (C; no smoke treatment) and (v) control with in-canopy misting (CM; no smoke treatment). These e-nose readings were used as inputs for machine learning algorithms. A classification model with seven neurons and the five treatments as targets classified 300 samples with 97% accuracy (Model 1). Models 2 to 4 used 10 neurons, with 20 glycoconjugates and 10 volatile phenols as targets, measured in berries one hour after smoke exposure (Model 2; R = 0.98; R2 = 0.95; b = 0.97), in berries at harvest (Model 3; R = 0.99; R2 = 0.97; b = 0.96), and in wines (Model 4; R = 0.99; R2 = 0.98; b = 0.98). Model 5 was based on the intensity of 12 wine descriptors determined via a consumer sensory test (R = 0.98; R2 = 0.96; b = 0.97). These models could be used by winemakers to assess near real-time smoke contamination levels and to implement amelioration strategies to minimize smoke taint in wines following bushfires.
Artificial intelligence for clinical decision support in neurology.
(Oxford University Press (OUP), 2020)
Artificial intelligence is one of the most exciting methodological shifts of our era. It holds the potential to transform healthcare as we know it into a system where humans and machines work together to provide better treatment for our patients. It is now clear that cutting-edge artificial intelligence models in conjunction with high-quality clinical data will lead to improved prognostic and diagnostic models in neurological disease, facilitating expert-level clinical decision tools across healthcare settings. Despite the clinical promise of artificial intelligence, machine- and deep-learning algorithms are not a one-size-fits-all solution for all types of clinical data and questions. In this article, we provide an overview of the core concepts of artificial intelligence, particularly contemporary deep-learning methods, to give clinicians and neuroscience researchers an appreciation of how artificial intelligence can be harnessed to support clinical decisions. We clarify and emphasize the data quality and the human expertise needed to build robust clinical artificial intelligence models in neurology. As artificial intelligence is a rapidly evolving field, we take the opportunity to reiterate important ethical principles to guide the field of medicine as it moves into an artificial intelligence-enhanced future.
Recognizing animal personhood in compassionate conservation
Compassionate conservation is based on the ethical position that actions taken to protect biodiversity should be guided by compassion for all sentient beings. Critics argue that harming animals is acceptable in conservation programs for three core reasons: the primary purpose of conservation is biodiversity protection; conservation is already compassionate to animals; and conservation should prioritize compassion for humans. We used argument analysis to clarify the values and logics underlying the debate around compassionate conservation. We found that objections to compassionate conservation are expressions of human exceptionalism, the view that humans are of a categorically separate and higher moral status than all other species. In contrast, compassionate conservationists believe that conservation should expand its moral community by recognizing all sentient beings as persons. Personhood, in an ethical sense, implies that the individual is owed respect and should not be treated merely as a means to other ends. On scientific and ethical grounds, there are good reasons to extend personhood to sentient animals, particularly in conservation. The moral exclusion or subordination of members of other species legitimates the ongoing manipulation and exploitation of the living world, the very reason conservation was needed in the first place. Embracing compassion can help dismantle human exceptionalism, recognize nonhuman personhood, and navigate a more expansive moral space.
Developing Non-Stochastic Privacy-Preserving Policies Using Agglomerative Clustering
(Institute of Electrical and Electronics Engineers, 2020-06-15)
We consider a non-stochastic privacy-preserving problem in which an adversary aims to infer sensitive information S from publicly accessible data X without using statistics. We consider the problem of generating and releasing a quantization X^ of X to minimize the privacy leakage of S to X^ while maintaining a certain level of utility (or, inversely, the quantization loss). The variables S and X are treated as bounded and non-probabilistic, but are otherwise general. We consider two existing non-stochastic privacy measures, namely the maximum uncertainty reduction L0(S→X^) and the refined information I∗(S;X^) (also called the maximin information) of S. For each privacy measure, we propose a corresponding agglomerative clustering algorithm that converges to a locally optimal quantization solution X^ by iteratively merging elements in the alphabet of X. To instantiate the solution, we consider two specific utility measures: the worst-case resolution of X by observing X^, and the maximal distortion of the released data X^. We show that the value of the maximin information I∗(S;X^) can be determined by dividing the confusability graph into connected subgraphs. Hence, I∗(S;X^) can be reduced by merging nodes that connect subgraphs. The relation to probabilistic information-theoretic privacy is also studied by noting that the Gács-Körner common information is the stochastic version of I∗ and indicates the attainability of statistical indistinguishability.
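The connected-subgraph characterization can be sketched as follows: treating confusability as edges, the distinguishable groups correspond to connected components, so merging nodes that bridge components lowers their count. (The toy graph and the log2-of-component-count convention below are illustrative assumptions, not the paper's exact construction.)

```python
from math import log2

def connected_components(nodes, edges):
    """Partition nodes into connected components via union-find."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)
    components = {}
    for v in nodes:
        components.setdefault(find(v), set()).add(v)
    return list(components.values())

# Toy confusability graph over outcomes of X: an edge means two outcomes
# cannot be told apart by the adversary.
nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("c", "d")]

comps = connected_components(nodes, edges)
# Under a log-cardinality convention, the information revealed scales with
# log2 of the number of distinguishable groups.
print(len(comps), log2(len(comps)))  # 3 components
```

Merging, say, "b" and "c" into one released symbol would add an edge bridging two components, reducing the component count and hence the leaked information.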
Cutoff Scanning Matrix (CSM): structural classification and function prediction by protein inter-residue distance patterns
BACKGROUND: The unforgiving pace of growth of available biological data has increased the demand for efficient and scalable paradigms, models and methodologies for automatic annotation. In this paper, we present a novel structure-based protein function prediction and structural classification method: Cutoff Scanning Matrix (CSM). CSM generates feature vectors that represent distance patterns between protein residues. These feature vectors are then used as evidence for classification. Singular value decomposition is used as a preprocessing step to reduce dimensionality and noise. The aspect of protein function considered in the present work is enzyme activity. A series of experiments was performed on datasets based on Enzyme Commission (EC) numbers and mechanistically different enzyme superfamilies, as well as other datasets derived from SCOP release 1.75. RESULTS: CSM was able to achieve a precision of up to 99% after SVD preprocessing for a database derived from manually curated protein superfamilies and up to 95% for a dataset of the 950 most-populated EC numbers. Moreover, we conducted experiments to verify our ability to assign SCOP class, superfamily, family and fold to protein domains. An experiment using the whole set of domains found in the latest SCOP version yielded high levels of precision and recall (up to 95%). Finally, we compared our structural classification results with those in the literature to place this work into context. Our method was capable of significantly improving the recall of a previous study while preserving a compatible precision level. CONCLUSIONS: We showed that the patterns derived from CSMs can effectively be used to predict protein function and thus help with automatic function annotation. We also demonstrated that our method is effective in structural classification tasks. These facts reinforce the idea that the pattern of inter-residue distances is an important component of family structural signatures. Furthermore, singular value decomposition provided a consistent increase in precision and recall, which makes it an important preprocessing step when dealing with noisy data.
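The feature-vector construction at the heart of CSM can be sketched as follows: scan a range of distance cutoffs and, for each cutoff, count residue pairs within it. (The toy coordinates and the cutoff range are illustrative assumptions.)

```python
import math

def cutoff_scanning_vector(coords, cutoffs):
    """For each distance cutoff, count residue pairs whose Euclidean
    distance is at or below the cutoff; the counts form the feature vector."""
    n = len(coords)
    dists = [
        math.dist(coords[i], coords[j])
        for i in range(n)
        for j in range(i + 1, n)
    ]
    return [sum(d <= c for d in dists) for c in cutoffs]

# Toy C-alpha coordinates for a 4-residue fragment (angstroms, invented).
coords = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0), (3.8, 3.8, 0.0)]
# Scan cutoffs from 2 to 10 angstroms in 2-angstrom steps, as an example.
vector = cutoff_scanning_vector(coords, cutoffs=[2.0, 4.0, 6.0, 8.0, 10.0])
print(vector)  # [0, 3, 5, 6, 6]
```

In the full method, one such vector per protein is assembled into a matrix, SVD reduces its dimensionality and noise, and the result feeds a classifier.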
GASS-WEB: a web server for identifying enzyme active sites based on genetic algorithms
(OXFORD UNIV PRESS, 2017-07-03)
Enzyme active sites are important and conserved functional regions of proteins whose identification can be an invaluable step toward protein function prediction. Most of the existing methods for this task are based on active site similarity and present limitations, including performing only exact matches on template residues, restrictions on template size, and an inability to find inter-domain active sites. To fill this gap, we propose GASS-WEB, a user-friendly web server that uses GASS (Genetic Active Site Search), a method based on an evolutionary algorithm that searches for similar active sites in proteins. GASS-WEB can be used under two different scenarios: (i) given a protein of interest, to match a set of specific active site templates; or (ii) given an active site template, to look for it in a database of protein structures. The method has been shown to be very effective in a range of experiments and was able to correctly identify >90% of the catalogued active sites from the Catalytic Site Atlas. It also achieved a Matthews correlation coefficient of 0.63 on the Critical Assessment of protein Structure Prediction (CASP 10) dataset, ranking fourth among 18 methods in our analysis. GASS-WEB is freely available at http://gass.unifei.edu.br/.
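The evolutionary-search idea behind GASS can be sketched minimally: evolve candidate residue triplets whose pairwise distances best match a template's pairwise distances. (The toy structure, fitness function, and GA operators below are illustrative assumptions, not GASS's actual implementation.)

```python
import itertools
import math
import random

def pairwise_dists(points):
    return [math.dist(a, b) for a, b in itertools.combinations(points, 2)]

def fitness(candidate, coords, template_dists):
    """Lower is better: total deviation of the candidate's sorted pairwise
    distances from the template's sorted pairwise distances."""
    d = pairwise_dists([coords[i] for i in candidate])
    return sum(abs(x - y) for x, y in zip(sorted(d), sorted(template_dists)))

def gass_sketch(coords, template_dists, pop_size=30, generations=40, seed=0):
    """Toy genetic algorithm over triplets of residue indices."""
    rng = random.Random(seed)
    n = len(coords)
    pop = [tuple(rng.sample(range(n), 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, coords, template_dists))
        survivors = pop[: pop_size // 2]  # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = list(dict.fromkeys(a + b))[:3]  # crossover: merge parents
            if rng.random() < 0.3:  # mutation: swap in a random residue
                child[rng.randrange(3)] = rng.randrange(n)
            if len(set(child)) == 3:
                children.append(tuple(child))
        pop = survivors + children
    return min(pop, key=lambda c: fitness(c, coords, template_dists))

# Toy structure: residues 0, 2 and 5 form the "active site" we plant.
coords = [(0, 0, 0), (9, 1, 2), (3, 0, 0), (7, 7, 7), (1, 8, 4), (0, 4, 0)]
template = pairwise_dists([coords[0], coords[2], coords[5]])
best = gass_sketch(coords, template)
print(sorted(best), fitness(best, coords, template))
```

Because the search operates on residue indices rather than sequence windows, this style of search can in principle recover sites whose residues are far apart in sequence, which is the motivation for the evolutionary formulation.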
Availability of structured and unstructured clinical data for comparative effectiveness research and quality improvement: a multisite assessment.
(Ubiquity Press, Ltd., 2014)
INTRODUCTION: A key attribute of a learning health care system is the ability to collect and analyze routinely collected clinical data in order to quickly generate new clinical evidence, and to monitor the quality of the care provided. To achieve this vision, clinical data must be easy to extract and stored in computer-readable formats. We conducted this study across multiple organizations to assess the availability of such data specifically for comparative effectiveness research (CER) and quality improvement (QI) on surgical procedures. SETTING: This study was conducted in the context of the data needed for the already established Surgical Care and Outcomes Assessment Program (SCOAP), a clinician-led, performance benchmarking, and QI registry for surgical and interventional procedures in Washington State. METHODS: We selected six hospitals, managed by two Health Information Technology (HIT) groups, and assessed the ease of automated extraction of the data required to complete the SCOAP data collection forms. Each data element was classified as easy, moderate, or complex to extract. RESULTS: Overall, a significant proportion of the data required to automatically complete the SCOAP forms was not stored in structured computer-readable formats, with more than 75 percent of all data elements being classified as moderately complex or complex to extract. The distribution differed significantly between the health care systems studied. CONCLUSIONS: Although highly desirable, a learning health care system does not automatically emerge from the implementation of electronic health records (EHRs). Innovative methods to improve the structured capture of clinical data are needed to facilitate the use of routinely collected clinical data for patient phenotyping.