School of Mathematics and Statistics - Research Publications
Predicting qualitative phenotypes from microarray data - the Eadgene pig data set.
(Springer Science and Business Media LLC, 2009-07-16)
BACKGROUND: The aim of this work was to study the performance of two predictive statistical tools on a data set distributed to all participants of the Eadgene-SABRE Post Analyses Working Group, namely the Pig data set of Hazard et al. (2008). The data consisted of 3686 gene expression measurements on 24 animals partitioned into two genotypes and two treatments. The objective was to find biomarkers characterizing the genotypes and the treatments within the whole set of genes. METHODS: We first considered the Random Forest approach, which enables the selection of predictive variables. We then compared classical Partial Least Squares regression (PLS) with a novel approach called sparse PLS, a variant of PLS that adapts lasso penalization to allow for the selection of a subset of variables. RESULTS: All methods performed well on this data set. Sparse PLS outperformed PLS in terms of prediction performance and improved the interpretability of the results. CONCLUSION: We recommend the use of machine learning methods such as Random Forest and multivariate methods such as sparse PLS for prediction purposes. Both approaches are well adapted to transcriptomic data, where the number of features is much greater than the number of individuals.
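As a concrete illustration of the Random Forest arm of this comparison, here is a minimal Python sketch (scikit-learn) that ranks genes by importance on synthetic data standing in for the 3686-gene, 24-animal Pig set; sparse PLS itself is available in, e.g., the R package mixOmics. The data, labels and the ten planted informative genes below are all illustrative.

```python
# Sketch: Random Forest variable selection on a p >> n expression matrix.
# Synthetic data stands in for the 3686-gene, 24-animal Pig data set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_animals, n_genes = 24, 3686
X = rng.normal(size=(n_animals, n_genes))      # gene expression matrix
y = np.repeat([0, 1], n_animals // 2)          # genotype labels
X[y == 1, :10] += 1.5                          # plant 10 informative genes

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:20]
print("Candidate biomarker genes:", top)
```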
Modelling collective navigation via non-local communication
(ROYAL SOC, 2021-09-29)
Collective migration occurs throughout the animal kingdom, and demands both the interpretation of navigational cues and the perception of other individuals within the group. Navigational cues orient individuals towards a destination, while communication between individuals has been demonstrated to enhance navigation through a reduction in orientation error. We develop a mathematical model of collective navigation that synthesizes navigational cues and perception of other individuals. Crucially, this approach incorporates into the decision-making process the uncertainty inherent in cue interpretation and perception, which can arise in noisy environments. We demonstrate that collective navigation is more efficient than individual navigation, provided a threshold number of other individuals are perceptible. This benefit is even more pronounced in environments with little navigation information. In navigation 'blindspots', where no information is available, navigation is enhanced through a relay that connects individuals in information-poor regions to individuals in information-rich regions. As an expository case study, we apply our framework to minke whale migration in the northeast Atlantic Ocean and quantify the decrease in navigation ability due to anthropogenic noise pollution.
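A minimal sketch of the core mechanism, under assumed parameter values: each individual reorients by sampling a von Mises-distributed heading centred on a combination of the goal direction (the navigational cue) and the mean heading of neighbours within a perception radius. The equal cue/social weighting and the concentration parameter kappa below are placeholders, not the paper's calibrated model.

```python
# Sketch of non-local collective navigation: each individual samples a new
# heading from a von Mises distribution centred on a combination of the
# goal direction and the mean heading of perceptible neighbours.
import numpy as np

rng = np.random.default_rng(1)
n, steps, radius, kappa = 50, 200, 5.0, 2.0   # kappa: certainty (assumed)
pos = rng.normal(scale=2.0, size=(n, 2))
theta = rng.uniform(-np.pi, np.pi, size=n)
goal = np.array([100.0, 0.0])

for _ in range(steps):
    for i in range(n):
        dx, dy = goal - pos[i]
        cue = np.arctan2(dy, dx)                         # direction to target
        near = np.linalg.norm(pos - pos[i], axis=1) < radius
        social = np.arctan2(np.sin(theta[near]).mean(),
                            np.cos(theta[near]).mean())  # mean neighbour heading
        mean_dir = np.arctan2(np.sin(cue) + np.sin(social),
                              np.cos(cue) + np.cos(social))
        theta[i] = rng.vonmises(mean_dir, kappa)         # noisy reorientation
    pos += np.column_stack([np.cos(theta), np.sin(theta)])

print("mean distance to goal:", np.linalg.norm(pos - goal, axis=1).mean())
```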
Modular assembly of dynamic models in systems biology
(PUBLIC LIBRARY SCIENCE, 2021-10-01)
It is widely acknowledged that the construction of large-scale dynamic models in systems biology requires complex modelling problems to be broken up into more manageable pieces. To this end, both modelling and software frameworks are required to enable modular modelling. While there has been consistent progress in the development of software tools to enhance model reusability, there has been relatively little consideration of how underlying biophysical principles can be applied in this space. Bond graphs combine modularity with physics-based modelling. In this paper, we argue that bond graphs are compatible with recent developments in modularity and abstraction in systems biology, and are thus a desirable framework for constructing large-scale models. We use two examples to illustrate the utility of bond graphs in this context: a model of a mitogen-activated protein kinase (MAPK) cascade to illustrate the reusability of modules, and a model of glycolysis to illustrate the ability to modify model granularity.
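The module-reuse argument can be illustrated without the full bond-graph machinery. The sketch below assembles a MAPK-like cascade by instantiating one phosphorylation-cycle module three times, using plain mass-action ODEs with illustrative rate constants rather than bond-graph energetics.

```python
# Sketch of modular model assembly: a single phosphorylation-cycle module is
# reused three times to build a MAPK-like cascade (plain mass-action ODEs,
# not full bond-graph energetics; rate constants are illustrative).
from scipy.integrate import solve_ivp

def cycle(active, inactive, kinase, k_on=1.0, k_off=0.3):
    """One reusable module: activation driven by an upstream kinase."""
    return k_on * kinase * inactive - k_off * active

def cascade(t, y):
    a1, a2, a3 = y                      # active fractions of the three tiers
    signal = 1.0                        # constant upstream stimulus (assumed)
    return [cycle(a1, 1 - a1, signal),  # tier 1 driven by the signal
            cycle(a2, 1 - a2, a1),      # tier 2 driven by tier 1
            cycle(a3, 1 - a3, a2)]      # tier 3 driven by tier 2

sol = solve_ivp(cascade, (0, 50), [0.0, 0.0, 0.0])
print("steady-state activities:", sol.y[:, -1])
```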
Training for object recognition with increasing spatial frequency: A comparison of deep learning with human vision
(SAGE PUBLICATIONS LTD, 2021-12-01)
The ontogenetic development of human vision and the real-time neural processing of visual input exhibit a striking similarity: a sensitivity toward spatial frequencies that progresses in a coarse-to-fine manner. During early human development, sensitivity to higher spatial frequencies increases with age. In adulthood, when humans receive new visual input, low spatial frequencies are typically processed first, before subsequent processing of higher spatial frequencies. We investigated to what extent this coarse-to-fine progression might impact visual representations in artificial vision and compared this to adult human representations. We simulated the coarse-to-fine progression of image processing in deep convolutional neural networks (CNNs) by gradually increasing spatial frequency information during training. We compared CNN performance after standard and coarse-to-fine training with a wide range of datasets from behavioral and neuroimaging experiments. In contrast to humans, CNNs trained using the standard protocol are very insensitive to low spatial frequency information, performing very poorly when classifying such object images. By training CNNs with our coarse-to-fine method, we improved their classification accuracy from 0% to 32% on low-pass-filtered images taken from the ImageNet dataset. The coarse-to-fine training also made the CNNs more sensitive to low spatial frequencies in hybrid images with conflicting information in different frequency bands. When comparing differently trained networks on images containing full spatial frequency information, we saw no representational differences. Overall, this integration of computational, neural, and behavioral findings shows the relevance of exposure to, and processing of, inputs varying in spatial frequency content for some aspects of high-level object representations.
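The coarse-to-fine curriculum itself is simple to express. The sketch below low-pass filters each training batch with a Gaussian whose width shrinks over epochs; the linear schedule and the sigma range are assumptions, not the paper's exact settings.

```python
# Sketch of a coarse-to-fine curriculum: training images are low-pass
# filtered with a Gaussian whose cutoff rises (sigma falls) during training.
import numpy as np
from scipy.ndimage import gaussian_filter

def coarse_to_fine_batch(images, epoch, n_epochs, sigma_max=8.0):
    """Blur a batch (N, H, W) with a sigma that decays linearly to 0."""
    sigma = sigma_max * (1.0 - epoch / (n_epochs - 1))
    if sigma <= 0:
        return images
    return np.stack([gaussian_filter(img, sigma) for img in images])

batch = np.random.rand(4, 224, 224)          # placeholder training images
for epoch in range(5):
    filtered = coarse_to_fine_batch(batch, epoch, n_epochs=5)
    # ... feed `filtered` to the CNN optimiser step for this epoch ...
    print(f"epoch {epoch}: filtered batch std {filtered.std():.3f}")
```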
Orthogonal Representations of Object Shape and Category in Deep Convolutional Neural Networks and Human Visual Cortex
(NATURE RESEARCH, 2020-02-12)
Deep Convolutional Neural Networks (CNNs) are gaining traction as the benchmark model of visual object recognition, with performance now surpassing that of humans. While CNNs can accurately assign one image to potentially thousands of categories, network performance could be the result of layers that are tuned to represent the visual shape of objects rather than object category, since both are often confounded in natural images. Using two stimulus sets that explicitly dissociate shape from category, we correlate these two types of information with each layer of multiple CNNs. We also compare CNN output with fMRI activation along the human visual ventral stream by correlating artificial with neural representations. We find that CNNs encode category information independently from shape, peaking at the final fully connected layer in all tested CNN architectures. Comparing CNNs with fMRI brain data, we find that early visual cortex (V1) and early CNN layers encode shape information, while anterior ventral temporal cortex encodes category information, which correlates best with the final CNN layer. The interaction between shape and category found along the human visual ventral pathway is echoed in multiple deep networks. Our results suggest that CNNs represent category information independently from shape, much like the human visual system.
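The layer-wise correlation logic follows standard representational similarity analysis. The sketch below compares a layer's representational dissimilarity matrix (RDM) against binary model RDMs for shape and category; the random activations and the 2x2 crossed stimulus design are placeholder assumptions.

```python
# Sketch of the representational similarity logic: correlate a layer's RDM
# with binary model RDMs coding shape and category. Activations are random
# placeholders standing in for real CNN layer outputs.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stim = 16
shape = np.repeat([0, 1], 8)              # two shape groups (assumed design)
category = np.tile([0, 1], 8)             # two categories, crossed with shape

acts = rng.normal(size=(n_stim, 512))     # placeholder layer activations
rdm = pdist(acts, metric="correlation")   # condensed stimulus-pair RDM
shape_rdm = pdist(shape[:, None])         # 1 where shape differs, else 0
cat_rdm = pdist(category[:, None])

print("shape correlation:   ", spearmanr(rdm, shape_rdm)[0])
print("category correlation:", spearmanr(rdm, cat_rdm)[0])
```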
An exponential filter model predicts lightness illusions
(FRONTIERS MEDIA SA, 2015-06-24)
Lightness, or the perceived reflectance of a surface, is influenced by surrounding context. This is demonstrated by the Simultaneous Contrast Illusion (SCI), where a gray patch is perceived lighter against a black background and vice versa. Conversely, assimilation occurs when the lightness of the target patch moves toward that of the bounding areas, as demonstrated in White's effect. Blakeslee and McCourt (1999) introduced an oriented difference-of-Gaussian (ODOG) model that is able to account for both contrast and assimilation in a number of lightness illusions and that has subsequently been improved using localized normalization techniques. We introduce a model inspired by image statistics that is based on a family of exponential filters, with kernels spanning multiple sizes and shapes. We include an optional second stage of normalization based on contrast gain control. Our model was tested on a well-known set of lightness illusions that have previously been used to evaluate ODOG and its variants, and model lightness values were compared with typical human data. We investigate whether predictive success depends on filters of a particular size or shape and whether pooling information across filters can improve performance. The best single filter correctly predicted the direction of lightness effects for 21 of 27 illusions. Combining two filters increased the best performance to 23, with asymptotic performance at 24 for an arbitrarily large combination of filter outputs. While normalization improved prediction magnitudes, it only slightly improved overall scores for direction predictions. The prediction performance of 24 out of 27 illusions equals that of the best-performing ODOG variant, with greater parsimony. Our model shows that V1-style orientation selectivity is not necessary to account for lightness illusions and that a low-level model based on image statistics can account for a wide range of both contrast and assimilation effects.
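A minimal sketch of the filtering idea: convolve the stimulus with isotropic exponential kernels exp(-r/s) at several scales and read out a centre-minus-surround response at the target patch. The subtractive readout, the kernel scales and the stimulus below are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: exponential kernels at several scales applied to a simultaneous
# contrast stimulus; the patch response relative to the local filtered mean
# serves as a simple predicted-lightness readout (an assumed readout rule).
import numpy as np
from scipy.signal import fftconvolve

def exp_kernel(size, scale):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-np.sqrt(xx**2 + yy**2) / scale)
    return k / k.sum()

# Identical gray patches on a dark left half and a light right half.
img = np.zeros((100, 200))
img[:, 100:] = 1.0
img[40:60, 40:60] = 0.5
img[40:60, 140:160] = 0.5

for scale in (2.0, 8.0, 32.0):
    local_mean = fftconvolve(img, exp_kernel(65, scale), mode="same")
    response = img - local_mean          # centre-minus-surround readout
    print(f"scale {scale:5.1f}: patch on dark {response[50, 50]:+.3f}, "
          f"patch on light {response[50, 150]:+.3f}")
```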
Complex cells decrease errors for the Müller-Lyer illusion in a model of the visual ventral stream
(FRONTIERS RESEARCH FOUNDATION, 2014-09-24)
To improve robustness in object recognition, many artificial visual systems imitate the way in which the human visual cortex encodes object information as a hierarchical set of features. These systems are usually evaluated in terms of their ability to accurately categorize well-defined, unambiguous objects and scenes. In the real world, however, not all objects and scenes are presented clearly, with well-defined labels and interpretations. Visual illusions demonstrate a disparity between perception and objective reality, allowing psychophysicists to methodically manipulate stimuli and study our interpretation of the environment. One prominent effect, the Müller-Lyer illusion, is demonstrated when the perceived length of a line is contracted (or expanded) by the addition of arrowheads (or arrow-tails) to its ends. HMAX, a benchmark object recognition system, consistently produces a bias when classifying Müller-Lyer images. HMAX is a hierarchical, artificial neural network that imitates the "simple" and "complex" cell layers found in the visual ventral stream. In this study, we perform two experiments to explore the Müller-Lyer illusion in HMAX, asking: (1) How do simple vs. complex cell operations within HMAX affect illusory bias and precision? (2) How does varying the position of the figures in the input image affect classification using HMAX? In our first experiment, we assessed classification after traversing each layer of HMAX and found that, in general, kernel operations performed by simple cells increase bias and uncertainty while max-pooling operations executed by complex cells decrease bias and uncertainty. In our second experiment, we increased variation in the positions of figures in the input images, which reduced bias and uncertainty in HMAX. Our findings suggest that the Müller-Lyer illusion is exacerbated by the vulnerability of simple cell operations to positional fluctuations, but ameliorated by the robustness of complex cell responses to such variance.
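The simple-versus-complex contrast can be shown in a few lines: a template match (convolution, the simple-cell operation) is sensitive to a positional shift that a max-pool over the response map (the complex-cell operation) absorbs. The filter, stimulus and shift below are toy choices, not HMAX's actual parameters.

```python
# Sketch contrasting the two HMAX-style operations: a "simple cell" template
# match is position sensitive, while a "complex cell" max-pool over the same
# response map is tolerant to the shift.
import numpy as np
from scipy.signal import correlate2d

bar = np.zeros((32, 32))
bar[14:18, 8:24] = 1.0                                 # horizontal bar
shifted = np.roll(bar, 5, axis=1)                      # same bar, shifted right
template = np.ones((4, 16)) / 64.0                     # bar-shaped filter

for name, img in [("original", bar), ("shifted", shifted)]:
    s_map = correlate2d(img, template, mode="valid")   # simple-cell layer
    centre = s_map[s_map.shape[0] // 2, s_map.shape[1] // 2]
    print(f"{name}: simple cell at centre {centre:.2f}, "
          f"complex cell (max-pool) {s_map.max():.2f}")
```

Running this, the centred simple-cell response drops under the shift while the max-pooled response is unchanged, which is the robustness the abstract attributes to complex cells.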
The Müller-Lyer Illusion in a Computational Model of Biological Object Recognition
(PUBLIC LIBRARY SCIENCE, 2013-02-15)
Studying illusions provides insight into the way the brain processes information. The Müller-Lyer Illusion (MLI) is a classical geometrical illusion of size, in which perceived line length is decreased by arrowheads and increased by arrow-tails. Many theories have been put forward to explain the MLI, such as misapplied size constancy scaling, the statistics of image-source relationships and the filtering properties of signal processing in primary visual areas. Artificial models of the ventral visual processing stream allow us to isolate factors hypothesised to cause the illusion and test how these affect classification performance. We trained a feed-forward feature-hierarchical model, HMAX, to perform a dual-category line-length judgment task (short versus long) with over 90% accuracy. We then tested the system on its ability to judge relative line lengths for images in a control set versus images that induce the MLI in humans. Results from the computational model show an overall illusory effect similar to that experienced by human subjects. No natural images were used for training, implying that misapplied size constancy and image-source statistics are not necessary factors for generating the illusion. A post-hoc analysis of response weights within a representative trained network ruled out the possibility that the illusion is caused by a reliance on information at low spatial frequencies. Our results suggest that the MLI can be produced using only feed-forward, neurophysiological connections.
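The training-and-probe protocol can be sketched with toy stimuli and a linear pixel classifier standing in for HMAX: train on plain short versus long lines, then probe with an intermediate-length shaft carrying inward or outward fins. The stimulus geometry and classifier below are illustrative, and any bias this toy exhibits is not calibrated to the paper's results.

```python
# Sketch of the length-judgement protocol: a linear classifier is trained to
# label plain lines short vs long, then probed with Muller-Lyer-style figures
# whose shafts are identical apart from fin direction.
import numpy as np
from sklearn.linear_model import LogisticRegression

def line_img(length, fins=0):
    """Shaft of given pixel length; fins=-1 arrowheads, +1 arrow-tails."""
    img = np.zeros((24, 64))
    x0 = 32 - length // 2
    img[12, x0:x0 + length] = 1.0                      # the shaft
    if fins:
        for x, d in ((x0, -fins), (x0 + length - 1, fins)):
            for k in range(1, 5):                      # diagonal fins
                img[12 - k, x + d * k] = 1.0
                img[12 + k, x + d * k] = 1.0
    return img.ravel()

lengths = list(range(16, 25)) + list(range(32, 41))    # short vs long classes
X = np.array([line_img(L) for L in lengths])
y = np.array([0] * 9 + [1] * 9)
clf = LogisticRegression(max_iter=1000).fit(X, y)

probe = 28                                             # ambiguous shaft length
for fins, name in ((-1, "arrowheads"), (1, "arrow-tails")):
    p = clf.predict_proba([line_img(probe, fins)])[0, 1]
    print(f"{name}: P(long) = {p:.2f}")
```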
Retraction of a peer reviewed article suggests ongoing problems with Australian forensic science.
(Elsevier BV, 2021)
We describe events arising from the case of Joby Rowe, convicted of the homicide of his three-month-old daughter, and explore what they illustrate about systemic problems in the forensic science community in Australia. A peer-reviewed journal article that scrutinized the forensic evidence presented in the Rowe case was retracted by a forensic science journal for reasons unrelated to quality or accuracy, under pressure from forensic medical experts criticized in the article. Details of the retraction obtained through freedom-of-information mechanisms reveal improper pressure and subversion of publishing processes in order to avoid scrutiny. The retraction was supported by the editorial board and two Australian forensic science societies, which is indicative of serious deficiencies in the leadership of forensic science in Australia. We propose paths forward, including blind peer review, publication of expert reports, and a criminal cases review authority, which would help stimulate a culture that encourages scrutiny and relies on evidence-based rather than eminence-based knowledge.
Home-based pulmonary rehabilitation early after hospitalisation in COPD (early HomeBase): protocol for a randomised controlled trial
(BMJ PUBLISHING GROUP, 2021-11-01)
INTRODUCTION: Chronic obstructive pulmonary disease (COPD) is characterised by exacerbations of respiratory disease, frequently requiring hospital admission. Pulmonary rehabilitation can reduce the likelihood of future hospitalisation, but programme uptake is poor. This study aims to compare hospital readmission rates, clinical outcomes and costs between people with COPD who undertake a home-based pulmonary rehabilitation programme commenced early (within 2 weeks of hospital discharge) and those who receive usual care. METHODS: A multisite randomised controlled trial, powered for superiority, will be conducted in Australia. Eligible patients admitted to one of the participating sites for an exacerbation of COPD will be invited to participate. Participants will be randomised 1:1. Intervention group participants will undertake an 8-week programme of home-based pulmonary rehabilitation commencing within 2 weeks of hospital discharge. Control group participants will receive usual care and a weekly phone call for attention control. Outcomes will be measured by a blinded assessor at baseline, after the intervention (weeks 9-10 post hospital discharge), and at 12 months follow-up. The primary outcome is hospital readmission at 12 months follow-up. ETHICS AND DISSEMINATION: Human Research Ethics approval for all sites was provided by Alfred Health (Project 51216). Findings will be disseminated in peer-reviewed journals, at conferences and in lay publications. TRIAL REGISTRATION NUMBER: ACTRN12619001122145.
Map and model: moving from observation to prediction in toxicogenomics.
(Oxford University Press (OUP), 2019-06-01)
BACKGROUND: Chemicals induce compound-specific changes in the transcriptome of an organism (toxicogenomic fingerprints). This provides potential insights into the cellular or physiological responses to chemical exposure and adverse effects, which is needed for the assessment of chemical-related hazards and environmental health. Comparing or connecting different experiments therefore becomes important when interpreting toxicogenomic data, but because response dynamics are often not captured, comparability is limited. In this study, we aim to overcome these constraints. RESULTS: We developed an experimental design and bioinformatic analysis strategy to infer time- and concentration-resolved toxicogenomic fingerprints. We projected the fingerprints onto a universal coordinate system (the toxicogenomic universe) based on a self-organizing map of toxicogenomic data retrieved from public databases. Genes clustering together in regions of the map indicate functional relatedness due to co-expression under chemical exposure. To allow for quantitative description and extrapolation of the gene expression responses, we developed a time- and concentration-dependent regression model. We applied the analysis strategy in a microarray case study exposing zebrafish embryos to 3 selected model compounds, including 2 cyclooxygenase inhibitors. After identifying key responses in the transcriptome, we could compare and characterize their association with developmental, toxicokinetic, and toxicodynamic processes using the parameter estimates for affected gene clusters. Furthermore, we discuss an association of toxicogenomic effects with measured internal concentrations. CONCLUSIONS: The design and analysis pipeline described here could serve as a blueprint for creating comparable toxicogenomic fingerprints of chemicals. It integrates, aggregates, and models time- and concentration-resolved toxicogenomic data.
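The time- and concentration-dependent regression can be sketched as a Hill term in concentration multiplied by first-order kinetics in time. The functional form, design points and parameter values below are illustrative assumptions, not the paper's fitted model.

```python
# Sketch of a time- and concentration-dependent regression for a gene
# cluster's response: Hill dependence on concentration times a first-order
# rise in time, fitted to synthetic noisy observations.
import numpy as np
from scipy.optimize import curve_fit

def response(ct, emax, ec50, hill, k):
    c, t = ct
    return emax * c**hill / (c**hill + ec50**hill) * (1 - np.exp(-k * t))

rng = np.random.default_rng(3)
conc = np.tile([0.1, 1.0, 10.0, 100.0], 6)             # exposure levels
time = np.repeat([2, 4, 8, 24, 48, 72], 4)             # sampling times (h)
true = response((conc, time), 2.0, 5.0, 1.5, 0.1)
obs = true + rng.normal(scale=0.05, size=true.size)    # noisy log fold changes

params, _ = curve_fit(response, (conc, time), obs,
                      p0=[1.0, 1.0, 1.0, 0.05], maxfev=10000)
print("emax, EC50, hill, k:", np.round(params, 2))
```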