School of Mathematics and Statistics Theses
http://hdl.handle.net/11343/295
2020-09-14T17:04:30Z

A study of optimised network flows for prediction of force transmission and crack propagation in bonded granular media
http://hdl.handle.net/11343/242052
Kahagalage, Sanath Darshana
2020
A study of optimised network flows for prediction of force transmission and crack propagation in bonded granular media
This thesis focuses on the study of bonded granular materials. We mainly analyse discrete element method simulation data for unconfined concrete specimens subjected to uniaxial tension and compression. In these systems, the contacts can support compressive, tensile and shear forces. Thus, under applied loads, a member grain can transmit tensile and/or compressive forces to its neighbours, resulting in a highly heterogeneous contact force network.
The objective of this thesis is twofold. The first objective is to develop algorithms for the identification and characterisation of two classes of force transmission patterns in these systems: (a) force chains and (b) force (energy) bottlenecks. The former comprises a subgroup of grains that transmit the majority of the load through the sample, while the latter comprises a subgroup of contacts that are prone to force congestion and damage. These two classes are related and coevolve as the loading history proceeds. Here this coevolution is characterised quantitatively to gain new insights into the interdependence between force transmission and failure in bonded grain assemblies.
The second objective is to establish the extent to which the ultimate (dominant) crack location can be predicted early in the pre-failure regime for disordered and heterogeneous bonded granular media, based on known microstructural features. To achieve this, a new data-driven model is developed within the framework of Network Flow Theory, which takes as input data on the contact network and contact strengths. We tested this model for a range of samples undergoing quasi-brittle failure subject to various loading conditions (i.e., uniaxial tension, uniaxial compression), as well as field-scale data for an open-pit mine. In all cases, the locations of the ultimate (primary) macrocrack/failure zone and of the other secondary cracks are predicted early in the pre-failure regime.
We uncovered optimised force transmission and damage propagation in the pre-failure regime, especially using data from uniaxial tension tests on concrete samples. Tensile force chains emerged along routes that transmit the global transmission capacity of the contact network through the shortest transmission pathways. Macrocracks developed along force/energy bottlenecks. We brought some of the commonly used optimisation-based fracture criteria into a single framework and showed how heterogeneity and disorder in the contact network affect the prediction.
© 2020 Sanath Darshana Kahagalage
PhD thesis
2020-01-01T00:00:00Z

Seiberg-Witten Theory and Topological Recursion
http://hdl.handle.net/11343/241752
Chaimanowong, Wee
2020
Seiberg-Witten Theory and Topological Recursion
Kontsevich-Soibelman (2017) reformulated Eynard-Orantin topological recursion (2007) in terms of Airy structures, which provides some geometrical insight into the relationship between the moduli space of curves and topological recursion.
In this work, we investigate the analytical approach to this relationship using the Seiberg-Witten family of curves as the main example. In particular, we show that the formula computing the special Kähler prepotential of Hitchin systems from the genus-zero part of topological recursion, obtained by Baraglia-Huang (2017), can be generalized to a more general family of curves embedded inside a foliated symplectic surface, including the Seiberg-Witten family. Consequently, we obtain a similar formula relating the Seiberg-Witten prepotential to the genus-zero part of topological recursion on a Seiberg-Witten curve.
Finally, we investigate the connection between Seiberg-Witten theory and Frobenius manifolds, which may enable the generalization of the current result to include the higher-genus parts of topological recursion in the future.
© 2020 Wee Chaimanowong
PhD thesis
2020-01-01T00:00:00Z

Risk Analysis and Probabilistic Decision Making for Censored Failure Data
http://hdl.handle.net/11343/241687
Attanayake, Dona Nayomi Sandarekha
2019
Risk Analysis and Probabilistic Decision Making for Censored Failure Data
Operation and maintenance of a fleet always require a high level of readiness, reduced cost, and improved safety. In order to achieve these goals, it is essential to develop and determine an appropriate maintenance programme for the components in use. A failure analysis involving failure model selection, robust parameter estimation, probabilistic decision making, and an assessment of the cost-effectiveness of the decisions is key to the selection of a proper maintenance programme. Two significant challenges faced in failure analysis studies are minimizing the uncertainty associated with model selection and making strategic decisions based on few observed failures. In this thesis, we try to resolve some of these problems and evaluate the cost-effectiveness of the selections. We focus on choosing the best model from a model space and on robust estimation of quantiles, leading to the selection of optimal repair and replacement times of units. We first explore the repair and replacement cost of a unit in a system. We design a simulation study to assess the performance of two parameter estimation methods, maximum likelihood estimation (MLE) and median rank regression (MRR), in estimating quantiles of the Weibull distribution. Then, we compare the Weibull, gamma, lognormal, log-logistic, and inverse-Gaussian models in failure analysis. With an example, we show that the Weibull and gamma distributions provide competing fits to the failure data. Next, we demonstrate the use of Bayesian model averaging in accounting for that model uncertainty. We derive an average model for the failure observations with respective posterior model probabilities. Then, we illustrate the cost-effectiveness of the selected model by comparing the distributions of the total replacement and repair cost. In the second part of the thesis, we discuss the prior information.
Initially, we assume the parameters of the Weibull distribution are dependent through a function of the form rho = sigma/mu, and reparameterize the Weibull distribution accordingly. We then propose a new Jeffreys’ prior for the parameters mu and rho. Finally, we design a simulation study to assess the performance of the new Jeffreys’ prior compared to the MLE.
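One of the two classical estimators compared in the simulation study can be sketched briefly. Below is a minimal, illustrative implementation of median rank regression (via Bernard's approximation) for the two-parameter Weibull distribution, together with a quantile estimate; the synthetic data and all constants are assumptions for illustration, not the thesis's actual study design.

```python
import math
import random

def weibull_mrr(failures):
    """Median rank regression (MRR) for the two-parameter Weibull
    distribution: regress ln(-ln(1 - F_i)) on ln(t_i), where F_i is
    Bernard's median-rank approximation. Returns (shape, scale)."""
    t = sorted(failures)
    n = len(t)
    xs = [math.log(ti) for ti in t]
    ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))
          for i in range(1, n + 1)]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    shape = sxy / sxx                      # OLS slope = Weibull shape k
    scale = math.exp(xbar - ybar / shape)  # intercept = -k ln(scale)
    return shape, scale

# Synthetic check: sample from Weibull(shape=2, scale=100) by inversion
random.seed(1)
sample = [100.0 * (-math.log(1.0 - random.random())) ** 0.5
          for _ in range(500)]
shape, scale = weibull_mrr(sample)
# Quantile t_p = scale * (-ln(1 - p))^(1/shape); p = 0.10 is the B10 life
q10 = scale * (-math.log(1.0 - 0.10)) ** (1.0 / shape)
```

The estimated shape and scale should land near the true values (2 and 100) for a sample of this size; MLE would be implemented separately and compared on the same samples.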
© 2019 Dona Nayomi Sandarekha Attanayake
PhD thesis
2019-01-01T00:00:00Z

Mathematical models of calcium signalling in the context of cardiac hypertrophy
http://hdl.handle.net/11343/241392
Hunt, Hilary
2020
Mathematical models of calcium signalling in the context of cardiac hypertrophy
Throughout the average human lifespan, our hearts beat over 2 billion times. With each beat, calcium floods the cytoplasm of every heart cell, causing it to contract until calcium reuptake allows the heart to relax, ready for the next beat. However, calcium is known to be critical in other cell functions, including growth. Calcium plays a central role in mediating hypertrophic signalling in ventricular cardiomyocytes on top of its contractile function. How intracellular calcium can encode several different, specific signals at once is not well understood.
In heart cells, calcium release from ryanodine receptors (RyRs) triggers contraction. Under hypertrophic stimulation, calcium release from inositol 1,4,5-trisphosphate receptor (IP3R) channels modifies the calcium contraction signal, triggering dephosphorylation and nuclear import of the transcription factor nuclear factor of activated T cells (NFAT), with resulting gene expression linked to cell growth.
Several hypotheses have been proposed as to how the modified cytosolic calcium contraction signal transmits the hypertrophic signal to downstream signalling proteins, including changes to amplitude, duration, duty cycle, and signal localisation. We investigate the form of these signals within the cardiac myocyte using mathematical modelling. Using a compartmental heart cell model, we show that the effect of calcium channel interaction on the global calcium signal supports the idea that increased calcium duty cycle is a plausible mechanism for IP3-dependent hypertrophic signalling in cardiomyocytes.
A corresponding calcium signal within the nucleus must be present to maintain NFAT in the nucleus and thus allow NFAT to alter gene expression, initiating hypertrophic remodelling. Yet the nuclear membrane is permeable to calcium and this must all occur on a background of rising and falling calcium with each heartbeat. The mechanisms shaping calcium dynamics within the nucleus remain unclear.
We use a spatial model of calcium diffusion into the nucleus to determine the effects of buffers and cytosolic transient shape on nuclear calcium dynamics. Using experimental data, we estimate the diffusion coefficient and the effects of buffers on nuclear [Ca2+]. Additionally, we explore the effects of altered cytosolic calcium transients and calcium release on nuclear calcium. To approximate experimental measurements of nuclear calcium, we find that there must be perinuclear Ca2+ release and nonlinear diffusion. Comparisons of 1D and 3D models of calcium in the nucleus suggest that spatial variation in calcium concentration within the nucleus will not have a large effect on calciummediated gene regulation.
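As a rough illustration of the kind of spatial model involved (not the thesis's actual model), the snippet below sketches 1-D diffusion of calcium from a clamped cytosolic boundary into the nucleus using an explicit finite-difference scheme. The diffusion coefficient, geometry, and boundary conditions are illustrative assumptions; buffering and the nonlinear diffusion discussed above are omitted.

```python
# 1-D explicit finite-difference diffusion sketch: calcium enters at a
# clamped boundary (the nuclear envelope side) and diffuses inward.
D = 1.0                          # diffusion coefficient (arbitrary units)
length, n = 1.0, 51              # domain length and grid points
dx = length / (n - 1)
dt = 0.4 * dx * dx / D           # respects the stability bound dt <= dx^2 / (2D)

c = [0.0] * n                    # initial nuclear calcium profile
c[0] = 1.0                       # clamped (cytosolic) concentration at the boundary

for _ in range(2000):
    new = c[:]
    for i in range(1, n - 1):
        # discrete Laplacian update: c_t = D * c_xx
        new[i] = c[i] + D * dt / dx**2 * (c[i + 1] - 2 * c[i] + c[i - 1])
    new[-1] = new[-2]            # no-flux condition at the far end
    new[0] = 1.0                 # hold the boundary concentration fixed
    c = new
```

After the run, the profile decays monotonically away from the boundary, the discrete analogue of calcium gradients across the nucleus.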
This work brings us closer to understanding the signalling pathway that leads to pathological hypertrophic cardiac remodelling.
© 2020 Hilary Hunt
PhD thesis
2020-01-01T00:00:00Z

Understanding the regulation of epidermal tissue thickness by cellular and subcellular processes using multiscale modelling
http://hdl.handle.net/11343/241286
Miller, Claire Margaret
2020
Understanding the regulation of epidermal tissue thickness by cellular and subcellular processes using multiscale modelling
The epidermis is the outermost layer of the skin, providing a protective barrier for our bodies. Two important aspects of the barrier function of the epidermis are maintenance of its barrier layer and constant cell turnover. The main barrier layer in the epidermis is the outermost layer, called the stratum corneum. This layer blocks both the entry of antigens and the loss of internal water and solutes. If antigens do enter the system, cell turnover has been hypothesised to propel them out of the system by providing a constant upwards velocity of cells which carry the toxins with them.
The majority of severe diseases of the epidermis relate to a reduction in thickness of the stratum corneum. Decreased thickness reduces the barrier function of the layer, causing discomfort and inflammation. Due to its importance to barrier function, the maintenance of stratum corneum thickness, and consequently overall tissue thickness, is the focus of this thesis.
In order to maintain both stratum corneum thickness and overall tissue thickness it is necessary for the system to balance cell proliferation and cell loss. Cell loss in the epidermis occurs when dead cells at the top of the tissue are lost to the environment through a process called desquamation. Cell proliferation occurs in the base, or basal, layer. As the basal cells proliferate, cells above them are pushed upwards through the tissue, causing constant upwards movement in the tissue. Not only does this contribute directly to the barrier function through the cell turnover as discussed above, but the velocity of the cells is likely to be key in regulating the tissue thickness. Assuming the cell loss occurs at a fairly constant rate, the combination of the velocity and the loss rate determine tissue thickness.
In order to investigate these processes we develop a three-dimensional, discrete, multiscale, multicellular model, focussing on maintenance of cell proliferation and desquamation. Using this model, we are able to investigate how subcellular and cellular level processes interact to maintain a homeostatic tissue.
Our model is able to reproduce a system that self-regulates its thickness. The first aspect of this regulation is maintaining a constant rate of proliferation in the epidermis, and consequently a constant upwards velocity of cells. The second aspect is a maintained rate of desquamation. The model shows that hypothesised biological models for the degradation of cell-cell adhesion from the literature are able to provide a consistent rate of cell loss which balances proliferation. An investigation into a disorder which disrupts this desquamation model shows reduced tissue thickness, consequently diminishing the protective role of the tissue.
In developing the multiscale model we have begun to delve deeper into the relationship between subcellular and cellular processes and epidermal tissue structure. The model is developed with scope for the integration of further subcellular processes. This provides it with the potential for further experiments into the causes and effects of behaviours and diseases of the epidermis, with much higher time and cost efficiency than other experimental methods.
© 2020 Claire Margaret Miller
PhD thesis
2020-01-01T00:00:00Z

Biorthogonal Polynomial Sequences and the Asymmetric Simple Exclusion Process
http://hdl.handle.net/11343/240391
Moore, William Barton
2019
Biorthogonal Polynomial Sequences and the Asymmetric Simple Exclusion Process
The diffusion algebra equations of the stationary state of the three-parameter Asymmetric Simple Exclusion Process are represented as a linear functional acting on a tensor algebra. From the linear functional, a pair of sequences (P and Q) of monic polynomials is constructed which are biorthogonal; that is, they are orthogonal with respect to each other and not necessarily themselves. The existence and uniqueness of the pair of sequences arise from the determinant of the bimoment matrix, whose elements satisfy a pair of q-recurrence relations. The determinant is evaluated using an LDU decomposition. If the action of the linear functional is represented as an inner product, then the action of the polynomials Q on a boundary vector V generates a basis whose orthogonal dual vectors are given by the action of P on the dual boundary vector W. This basis gives the representation of the algebra associated with the Al-Salam-Chihara polynomials obtained by Sasamoto.
Several theorems associated with the three-parameter asymmetric simple exclusion process are proven combinatorially. The theorems involve the linear functional which, for the three-parameter case, is a substitution morphism on a q-Weyl algebra. The two polynomial sequences, P and Q, are represented in terms of q-binomial lattice paths.
A combinatorial representation for the value of the linear functional defining the matrix elements of the bimoment matrix is established in terms of the value of a q-rook polynomial, and is utilised to provide combinatorial proofs for results pertaining to the linear functional. Combinatorial proofs are provided for theorems in terms of the p,q-binomial coefficients, which are closely related to the combinatorics of the three-parameter ASEP.
The results for the three-parameter diffusion algebra of the Asymmetric Simple Exclusion Process are extended to five parameters. A pair of basis changes is derived from the LDU decomposition of the bimoment matrix. In order to derive the LDU decomposition, a recurrence relation satisfied by the lower-triangular matrix elements is conjectured. Associated with this pair of bases are three sequences of orthogonal polynomials. The first two sequences generate the new basis vectors (the boundary basis) by their action on the boundary vectors (written in the standard basis), whilst the third sequence is essentially the Askey-Wilson polynomials. All these results are ultimately related to the LDU decomposition of a matrix.
© 2019 William Barton Moore
PhD thesis
2019-01-01T00:00:00Z

Exploring the statistical aspects of expert elicited experiments
http://hdl.handle.net/11343/238547
Dharmarathne, Hetti Arachchige Sameera Gayan
2020
Exploring the statistical aspects of expert elicited experiments
In this study, we explore the statistical aspects of some known methods for analysing experts’ elicited data, with the aim of improving the accuracy of their outcomes. Potential correlation structures induced in the probability predictions by the characteristics of experimental designs are commonly ignored when computing experts’ Brier scores. In the second chapter of this thesis, we show that the accuracy of the standard error estimates of experts’ Brier scores can be improved by incorporating the within-question correlations of probability predictions. Missing probability predictions of events can affect the assessment of prediction accuracy when experts are compared on different sets of events (Merkle et al., 2016; Hanea et al., 2018). The third chapter shows that a multiple imputation method, using a mixed-effects model with question effects as random effects, can effectively estimate missing predictions and enhance the comparability of experts’ Brier scores.
Experts’ calibration in eliciting credible intervals of unknown quantities is often tested using hit rates: the observed proportions of elicited intervals that contain the realised values of the given quantities (McBride, Fidler, and Burgman, 2012). This approach has low power to correctly identify well-calibrated experts and, more importantly, the power tends to decrease as the number of elicited intervals increases. As shown in the fourth chapter, the equivalence test of a single binomial proportion can be used to overcome these problems. Furthermore, the way experts’ calibration is assessed in Cooke’s classical model (Cooke, 1991) to derive experts’ weights can allocate higher weights to some experts who are not well calibrated. In the fifth chapter, we show that the multinomial equivalence test can be used to overcome this problem.
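A minimal sketch of an equivalence test for a single binomial proportion, applied to hit rates, is given below (a normal-approximation two-one-sided-tests procedure; the margin, nominal coverage, and significance level are illustrative assumptions, not the thesis's choices).

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tost_binomial(hits, n, p0=0.8, margin=0.1, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of a binomial
    proportion to p0 within +/- margin, using the normal approximation.
    Returns True (declare well calibrated) iff both one-sided null
    hypotheses (p <= p0 - margin, p >= p0 + margin) are rejected."""
    phat = hits / n
    se = math.sqrt(phat * (1.0 - phat) / n)
    z_low = (phat - (p0 - margin)) / se     # test H0: p <= p0 - margin
    z_high = ((p0 + margin) - phat) / se    # test H0: p >= p0 + margin
    p_low = 1.0 - norm_cdf(z_low)
    p_high = 1.0 - norm_cdf(z_high)
    return max(p_low, p_high) < alpha

# Expert elicits nominal 80% intervals; 80 of 100 contained the truth
calibrated = tost_binomial(80, 100)
```

Unlike a naive hit-rate check, the power of such an equivalence test grows with the number of elicited intervals, which is the property exploited in the fourth chapter.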
Experts’ weights derived from experiments, used to combine experts’ elicited subjective probability distributions into aggregated probability distributions of unknown quantities (O’Hagan, 2019), are random variables subject to uncertainty. In the sixth chapter, we derive shrinkage experts’ weights with reduced mean squared errors, to enhance the precision of the resulting aggregated distributions of quantities.
© 2020 Hetti Arachchige Sameera Gayan Dharmarathne
PhD thesis
2020-01-01T00:00:00Z

Nonparametric estimation for streaming data
http://hdl.handle.net/11343/237540
Mao, Jiadong
2020
Nonparametric estimation for streaming data
Streaming data are a type of high-frequency and non-stationary time series data. The collection of streaming data is sequential and potentially never-ending. Examples of streaming data, including data from sensor networks, mobile devices and the Internet, are prevalent in our daily lives. An estimator for streaming data needs to be computationally efficient so that it is relatively easy to update the estimator using newly arrived data. In addition, the estimator has to be adaptive to the non-stationarity of the data. These constraints make streaming data analysis more challenging than analysing conventional non-streaming data sets.
Although streaming data analysis has been discussed in the machine learning community for more than two decades, it has received limited attention from statistical researchers. Estimation methods that are both computationally efficient and theoretically justified are still lacking. In this thesis, we propose nonparametric density and regression estimation methods for streaming data, where the smoothing parameters are chosen in a computationally efficient and fully data-driven way. These methods extend some classical kernel smoothing techniques, such as the kernel density estimator and the Nadaraya-Watson regression estimator, to address the theoretical and computational challenges arising from streaming data analysis. Asymptotic analyses provide these methods with theoretical justification. Numerical studies have shown the superiority of our methods over conventional ones. Through some real-data examples, we show that these methods are potentially useful in modelling real-world problems. Finally, we discuss some directions for future research, including extending these methods to model higher-dimensional streaming data and to streaming data classification.
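As a rough illustration of the kind of estimator involved (not the proposed method, whose smoothing parameters are chosen in a fully data-driven way), the sketch below maintains a kernel density estimate over a stream with a bounded window and an exponential forgetting factor, a simple device for adapting to non-stationarity; the bandwidth, window size, and forgetting factor are all illustrative assumptions.

```python
import math
import random

class StreamingKDE:
    """Gaussian-kernel density estimate maintained over a data stream.
    Old observations are down-weighted by an exponential forgetting
    factor, and a sliding window keeps memory and update cost bounded."""
    def __init__(self, bandwidth=0.3, window=500, forget=0.995):
        self.h = bandwidth
        self.window = window
        self.forget = forget
        self.points = []   # list of (value, weight) pairs

    def update(self, x):
        # decay existing weights, append the new point, trim the window
        self.points = [(v, w * self.forget) for v, w in self.points]
        self.points.append((x, 1.0))
        if len(self.points) > self.window:
            self.points.pop(0)

    def density(self, x):
        wsum = sum(w for _, w in self.points)
        if wsum == 0.0:
            return 0.0
        k = sum(w * math.exp(-0.5 * ((x - v) / self.h) ** 2)
                for v, w in self.points)
        return k / (wsum * self.h * math.sqrt(2.0 * math.pi))

# Feed 1000 standard-normal draws; mass should concentrate near zero
random.seed(0)
kde = StreamingKDE()
for _ in range(1000):
    kde.update(random.gauss(0.0, 1.0))
```

Each arriving observation triggers an O(window) update, in contrast to refitting a conventional kernel density estimator on the full history.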
© 2020 Jiadong Mao
PhD thesis
2020-01-01T00:00:00Z

Stress testing mixed integer programming solvers through new test instance generation methods
http://hdl.handle.net/11343/233868
Bowly, Simon Andrew
2019
Stress testing mixed integer programming solvers through new test instance generation methods
Optimisation algorithms require careful tuning and analysis to perform well in practice. Their performance is strongly affected by algorithm parameter choices, software, and hardware and must be analysed empirically. To conduct such analysis, researchers and developers require high-quality libraries of test instances. Improving the diversity of these test sets is essential to driving the development of well-tested algorithms.
This thesis is focused on producing synthetic test sets for Mixed Integer Programming (MIP) solvers. Synthetic data should be carefully designed to be unbiased and diverse with respect to measurable features of instances, to have tunable properties that replicate real-world problems, and to challenge the vast array of algorithms available. This thesis outlines a framework, methods and algorithms developed to ensure these requirements can be met with synthetically generated data for a given problem.
Over many years of development, MIP solvers have become increasingly complex. Their overall performance depends on the interactions of many different components. To cope with this complexity, we propose several extensions over existing approaches to generating optimisation test cases. First, we develop alternative encodings for problem instances which restrict consideration to relevant instances. This approach provides more control over instance features and reduces the computational effort required when we have to resort to search-based generation approaches. Second, we consider more detailed performance metrics for MIP solvers in order to produce test cases which are not only challenging but from which useful insights can be gained.
This work makes several key contributions:
1. Performance metrics are identified which are relevant to component algorithms in MIP solvers. This helps to define a more comprehensive performance metric space which looks beyond benchmarking statistics such as CPU time required to solve a problem. Using these more detailed performance metrics we aim to produce explainable and insightful predictions of algorithm performance in terms of instance features.
2. A framework is developed for encoding problem instances to support the design of new instance generators. The concepts of completeness and correctness defined in this framework guide the design process and ensure all problem instances of potential interest are captured in the scheme. Instance encodings can be generalised to develop search algorithms in problem space with the same guarantees as the generator.
3. Using this framework new generators are defined for LP and MIP instances which control feasibility and boundedness of the LP relaxation, and integer feasibility of the resulting MIP. Key features of the LP relaxation solution, which are directly controlled by the generator, are shown to affect problem difficulty in our analysis of the results. The encodings used to control these properties are extended into problem space search operators to generate further instances which discriminate between solver configurations.
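One simple constructive idea for controlling feasibility can be sketched as follows (a strongly simplified illustration, not the thesis's actual encoding): draw a nonnegative solution first, then set the constraint right-hand sides so that the solution satisfies every constraint by construction. All dimensions and distributions below are assumptions.

```python
import random

def generate_feasible_lp(m=5, n=4, seed=0):
    """Generate a random LP  max c^T x  s.t.  A x <= b, x >= 0  that is
    feasible by construction: a nonnegative point x* is drawn first and
    b is set to A x* plus nonnegative slack, so x* satisfies every
    constraint. (Boundedness of the LP is not controlled here.)"""
    rng = random.Random(seed)
    x_star = [rng.uniform(0.0, 10.0) for _ in range(n)]
    A = [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(m)]
    slack = [rng.uniform(0.0, 5.0) for _ in range(m)]
    b = [sum(A[i][j] * x_star[j] for j in range(n)) + slack[i]
         for i in range(m)]
    c = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    return A, b, c, x_star

A, b, c, x_star = generate_feasible_lp()
# x_star satisfies A x <= b by construction
feasible = all(sum(Ai[j] * x_star[j] for j in range(len(x_star))) <= bi
               for Ai, bi in zip(A, b))
```

Generating instances this way gives direct control over features of the relaxation solution (here, the planted point and its slacks), the kind of control the generators in this thesis exercise far more systematically.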
This work represents the early stages of an iterative methodology required to generate diverse test sets which continue to challenge the state of the art. The framework, algorithms and codes developed in this thesis are intended to support continuing development in this area.
© 2019 Simon Andrew Bowly
PhD thesis
2019-01-01T00:00:00Z

Intelligent Management of Elective Surgery Patient Flow
http://hdl.handle.net/11343/230922
Kumar, Ashwani
2019
Intelligent Management of Elective Surgery Patient Flow
Rapidly growing demand and soaring costs for healthcare services in Australia and across the world are jeopardising the sustainability of government-funded healthcare systems. We need to be innovative and more efficient in delivering healthcare services in order to keep the system sustainable. In this thesis, we utilise a number of scientific tools to improve the patient flow in a surgical suite of a hospital and subsequently develop a structured approach for intelligent patient flow management. First, we analyse and understand the patient flow process in a surgical suite. Then we obtain data from the partner hospital and extract valuable information from a large database. Next, we use machine learning techniques, such as classification and regression tree analysis, random forest, and k-nearest neighbour regression, to classify patients into lower-variability resource user groups and fit discrete phase-type distributions to the clustered length of stay data.
We use length of stay scenarios sampled from the fitted distributions in our sequential stochastic mixed-integer programming model for tactical master surgery scheduling. Our mixed-integer programming model has the particularity that the scenarios are utilised in a chronologically sequential manner, not in parallel. Moreover, we exploit the randomness in the sample path to reduce the requirement of optimising the process for many scenarios, which helps us obtain high-quality schedules while keeping the problem algorithmically tractable. Lastly, we model the patient flow process in a healthcare facility as a stochastic process and develop a model to predict the probability of the healthcare facility exceeding capacity the next day, as a function of the number of inpatients and the next day's scheduled patients, their resource user groups, and their elapsed length of stay. We evaluate the model's performance using the receiver operating characteristic curve and illustrate the computation of the optimal threshold probability by using cost-benefit analysis, which helps the hospital management make decisions.
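The exceedance-probability idea can be illustrated with a toy Monte Carlo sketch. This is an assumption-laden simplification: in the thesis, the stay probabilities come from the fitted phase-type length-of-stay distributions together with each patient's resource user group and elapsed stay, and all numbers below are made up for illustration.

```python
import random

def prob_exceed_capacity(stay_probs, n_scheduled, capacity,
                         sims=20000, seed=42):
    """Monte Carlo estimate of next-day demand exceeding capacity:
    current inpatient i remains overnight with probability
    stay_probs[i], and all scheduled patients are assumed to arrive."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(sims):
        staying = sum(rng.random() < p for p in stay_probs)
        if staying + n_scheduled > capacity:
            exceed += 1
    return exceed / sims

# 20 inpatients, each staying with probability 0.7; 8 scheduled
# arrivals tomorrow; 25 beds available
p = prob_exceed_capacity([0.7] * 20, 8, 25)
```

A manager would then compare this probability against a threshold chosen by cost-benefit analysis, e.g. cancelling electives only when the estimated exceedance probability crosses it.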
© 2019 Ashwani Kumar
PhD thesis
2019-01-01T00:00:00Z

Copula-based spatio-temporal modelling for count data
http://hdl.handle.net/11343/230863
Qiao, Pu Xue
2019
Copula-based spatio-temporal modelling for count data
Modelling of spatio-temporal count data has received considerable attention in recent statistical research. However, the presence of massive correlation between locations, time points and variables imposes a great computational challenge. In the existing literature, latent models under the Bayesian framework are predominantly used. Despite numerous theoretical and practical advantages, likelihood analysis of spatio-temporal models for count data is less widespread, due to the difficulty of identifying a general class of multivariate distributions for discrete responses.
In this thesis, we propose a Gaussian copula regression model (copSTM) for the analysis of multivariate spatio-temporal data on a lattice. Temporal effects are modelled through the conditional marginal expectations of the response variables using an observation-driven time series model, while spatial and cross-variable correlations are captured in a block dependence structure, allowing for both positive and negative correlations. The proposed copSTM model is flexible and sufficiently generalizable to many situations. We provide pairwise composite likelihood inference tools. Numerical examples suggest that the proposed composite likelihood estimator produces satisfactory estimation performance.
While variable selection for generalized linear models is a well-developed topic, model subsetting in applications of Gaussian copula models remains a relatively open research area. The main reason is the computational burden, which is already quite heavy for simply fitting the model; it is therefore not computationally affordable to evaluate many candidate submodels. This makes penalized likelihood approaches extremely inefficient, because they need to search through different levels of penalty strength; moreover, our numerical experience suggests that optimization of penalized composite likelihoods with many popular penalty terms (e.g., LASSO and SCAD) usually does not converge in copula models. Thus, we propose a criterion-based selection approach that borrows strength from the Gibbs sampling technique. The methodology is guaranteed to converge to the model with the lowest criterion value, without searching through all possible models exhaustively.
Finally, we present an R package implementing the estimation and selection of the copSTM model in C++. We show examples comparing our package to many available R packages (on some special cases of the copSTM), confirming the correctness and efficiency of the package functions. The copSTM package provides a competitive toolkit for the analysis of spatio-temporal count data on a lattice, in terms of both model flexibility and computational efficiency.
© 2019 Pu Xue Qiao
PhD thesis
2019-01-01T00:00:00Z

Singular vectors for the W_N algebras and the BRST cohomology for relaxed highest-weight L_k(sl(2)) modules
http://hdl.handle.net/11343/228926
Siu, Steve Wai Chun
2019
Singular vectors for the W_N algebras and the BRST cohomology for relaxed highest-weight L_k(sl(2)) modules
This thesis presents the computation of singular vectors of the W_n algebras and the BRST cohomology of modules of the simple vertex operator algebra L_k(sl2) associated to the affine Lie algebra of sl2 in the relaxed category.
We will first recall some general theory on vertex operator algebras. We will then introduce the module categories that are relevant for conformal field theory. They are the category O of highest-weight modules and the relaxed category, which contains O as well as the relaxed highest-weight modules with spectral flow and non-split extensions. We will then introduce the W_n algebras and the simple vertex operator algebra L_k(sl2). Properties of the Heisenberg algebra and the bosonic and fermionic ghosts will be discussed, as they are required in the free field realisations of W_n and L_k(sl2) as well as in the construction of the BRST complex.
We will then compute explicitly the singular vectors of W_n algebras in their Fock representations. In particular, singular vectors can be realised as the image of screening operators of the W_n algebras. One can then realise screening operators in terms of Jack functions when acting on a highestweight state, thereby obtaining explicit formulae of the singular vectors in terms of symmetric functions.
We will then discuss the BRST construction and the BRST cohomology for modules in category O. Lastly, we compute the BRST cohomology for L_k(sl2) modules in the relaxed category. In particular, we compute the BRST cohomology for the highest-weight modules with positive spectral flow for all degrees, and the BRST cohomology for the highest-weight modules with negative spectral flow for one degree.
© 2019 Steve Wai Chun Siu
PhD thesis
2019-01-01T00:00:00Z

Missing data analysis, combinatorial model selection and structure learning
http://hdl.handle.net/11343/228925
Kwok, Chun Fung
2019
Missing data analysis, combinatorial model selection and structure learning
This thesis examines three problems in statistics: the missing data problem in the context of extracting trends from time series data, the combinatorial model selection problem in regression analysis, and the structure learning problem in graphical modelling / system identification.
The goal of the first problem is to study how uncertainty in the missing data affects trend extraction. This work derives an analytical bound characterising the error of the estimated trend in terms of the error of the imputation. It holds for any imputation method and various trend-extraction methods, including a large subclass of linear filters and the Seasonal-Trend decomposition based on Loess (STL).
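As a hedged illustration of the linear-filter case of such a bound (a minimal sketch with made-up data, not the thesis's actual estimator or bound), the error of a moving-average trend is controlled by the largest pointwise imputation error via linearity:

```python
import numpy as np

# Illustrative sketch (not the thesis's bound): for a linear filter with
# weights w, the trend error is at most ||w||_1 times the largest pointwise
# imputation error, by linearity and the triangle inequality.

def moving_average(x, window=5):
    """Centred moving-average trend filter (a simple linear filter)."""
    w = np.ones(window) / window
    return np.convolve(x, w, mode="valid")

rng = np.random.default_rng(0)
t = np.arange(100)
truth = 0.05 * t + np.sin(2 * np.pi * t / 12)        # trend + seasonality
series = truth + rng.normal(scale=0.2, size=t.size)

imputed = series.copy()
missing = [20, 21, 50]                               # hypothetical gap positions
imputed[missing] = series[missing] + rng.normal(scale=0.5, size=len(missing))

trend_true = moving_average(series)
trend_imp = moving_average(imputed)

max_imputation_error = np.max(np.abs(imputed - series))
l1_weights = 1.0                                     # ||w||_1 for a moving average
bound = l1_weights * max_imputation_error

assert np.max(np.abs(trend_imp - trend_true)) <= bound + 1e-12
```

The same argument applies to any filter whose output is a fixed linear combination of the observations; STL requires the more general treatment developed in the thesis.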
The second problem is to tackle the combinatorial complexity that arises from best-subset selection in regression analysis. Given p variables, a model can be formed by taking a subset of the variables, so the total number of models is $2^p$. This work shows that if a hierarchical structure can be established on the model space, then the proposed algorithm, Gibbs Stochastic Search (GSS), recovers the true model with probability one in the limit, and with high probability given finite samples. The core idea is that when a hierarchical structure exists, every evaluation of a wrong model gives information about the correct model. By aggregating this information, one may recover the correct model without exhausting the model space. As an extension, parallelisation of the algorithm is also considered.
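A minimal sketch of the $2^p$ model space that best-subset selection must navigate (exhaustive BIC search on synthetic data; GSS itself avoids this enumeration, and all data and parameters here are hypothetical):

```python
import itertools
import numpy as np

# Hypothetical illustration of the 2^p model space in best-subset selection:
# exhaustive search by BIC on a small synthetic problem. The thesis's GSS
# algorithm avoids this enumeration; this only shows the combinatorial
# object it navigates.

rng = np.random.default_rng(42)
n, p = 100, 6
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=n)

def bic(subset):
    """BIC of the least-squares fit using the given subset of columns."""
    if subset:
        beta, *_ = np.linalg.lstsq(X[:, subset], y, rcond=None)
        rss = np.sum((y - X[:, subset] @ beta) ** 2)
    else:
        rss = np.sum(y ** 2)
    return n * np.log(rss / n) + len(subset) * np.log(n)

models = [list(s) for k in range(p + 1)
          for s in itertools.combinations(range(p), k)]
assert len(models) == 2 ** p          # 64 candidate models for p = 6
best = min(models, key=bic)
print(best)                           # the true variables 0 and 2 are always kept
```

Already at p = 30 the space exceeds a billion models, which is the motivation for a guided stochastic search.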
The third problem is about inferring from data the systemic relationships between a set of variables. This work proposes a flexible class of multivariate distributions in the form of a directed acyclic graphical model, in which each node is modelled conditional on the others using a Generalised Linear Model (GLM). It shows that while the number of possible graphs is $\Omega(2^{p \choose 2})$, a hierarchical structure exists and the GSS algorithm applies; hence, a systemic relationship may be recovered from the data. Other applications, such as imputing missing data and simulating data with complex covariance structure, are also investigated.
© 2019 Chun Fung Kwok
PhD thesis
2019-01-01T00:00:00Z

Exact solutions in multi-species exclusion processes
http://hdl.handle.net/11343/228841
Chen, Zeying
2019
Exact solutions in multi-species exclusion processes
The exclusion process has become a default model for transport phenomena. One fundamental problem is to compute exact formulae analytically. Such formulae enable us to obtain limiting distributions through asymptotic analysis; they also allow us to uncover relationships between different processes, and even between very different systems. Extensive results have been reported for single-species systems, but few for multi-component systems and mixtures. In this thesis, we focus on multi-species exclusion processes, and propose two approaches for obtaining exact solutions.
The first approach is via duality, which is defined by a function that co-varies in time with respect to the evolution of two processes. It relates physical quantities, such as the particle flow, in a system with many particles to those in a system with few particles, so that the quantity of interest in the first process can be calculated explicitly via the second. Historically, published dualities have mostly been found by trial and error; only very recently have attempts been made to derive these functions algebraically. We propose a new method to derive dualities systematically, by exploiting the mathematical structure provided by the deformed quantum Knizhnik-Zamolodchikov equation. With this method, we not only recover the well-known self-duality of single-species asymmetric simple exclusion processes (ASEPs), but also obtain the duality for two-species ASEPs.
Solving the master equation is an alternative method. We consider an exclusion process with two species of particles, the AHR (Arndt-Heinzl-Rittenberg) model, and give a full derivation of its Green's function via the coordinate Bethe ansatz. Using the Green's function, we obtain an integral formula for its joint current distributions, and then study its limiting distribution under step-type initial conditions. We show that the long-time behaviour is governed by a product of the Gaussian and the Gaussian unitary ensemble (GUE) Tracy-Widom distributions, which is related to random matrix theory. This result agrees with the prediction made by nonlinear fluctuating hydrodynamic theory (NLFHD), and is the first analytic verification of an NLFHD prediction in a multi-species system.
© 2019 Zeying Chen
PhD thesis
2019-01-01T00:00:00Z

Yang-Baxter integrable dimers and fused restricted-solid-on-solid lattice models
http://hdl.handle.net/11343/227744
Vittorini Orgeas, Alessandra
2019
Yang-Baxter integrable dimers and fused restricted-solid-on-solid lattice models
The main objects of investigation in this thesis are two Yang-Baxter integrable lattice models of statistical mechanics in two dimensions: non-unitary RSOS models and dimers. At criticality they admit continuum descriptions in terms of non-unitary conformal field theories (CFTs) in (1+1) dimensions. CFTs are quantum field theories invariant under conformal transformations. They play a major role in the theory of phase transitions and critical phenomena. In quantum field theory, unitarity is the requirement that probability is conserved; hence realistic physical problems are associated with unitary quantum field theories. Nevertheless, in statistical mechanics this property loses its physical meaning, and statistical systems like polymers and percolation, which model physical problems with long-range interactions, give rise to non-unitary conformal field theories in the continuum scaling limit.
Both the non-unitary RSOS models and dimers are defined on a two-dimensional square lattice. Restricted solid-on-solid (RSOS) models are so called because their degrees of freedom form a finite (therefore restricted) set of heights which live on the sites of the lattice, and their interactions take place between the four sites around each face of the lattice (solid-on-solid). Each allowed configuration of heights maps to a specific Boltzmann weight. RSOS models are integrable in the sense that their Boltzmann weights and transfer matrices satisfy the Yang-Baxter equation. The CFTs associated to critical RSOS models are minimal models, the simplest family of rational conformal field theories. The process of fusion on elementary RSOS models has a different outcome on the CFT side depending on both the level of fusion and the value of the crossing parameter λ. Precisely, in the interval 0 < λ < π/2, the 2x2 fused RSOS models correspond to higher-level conformal cosets with integer fusion level equal to two, whereas in the complementary interval π/n < λ < π the 2x2 fused RSOS models are related to minimal models with integer fusion level equal to one. To prove this conjecture, one-dimensional sums, derived from the well-known Yang-Baxter corner transfer matrix method, have been calculated, extended to the continuum limit, and ultimately compared with the conformal characters of the related minimal models.
The dimer model has long been an object of scientific study, first as a simple prototype of diatomic molecules and then as an equivalent formulation of the well-known domino tiling problem of statistical mechanics. Only more recently has it attracted attention as a conformal field theory, thanks to its relation with another famous integrable lattice model, the six-vertex model. What is particularly interesting about dimers is that they form a free-fermion model while at the same time showing non-local properties, due to the long-range steric effects propagating from the boundaries. This non-locality translates into the dependence of their bulk free energy on the boundary conditions. We formulate the dimer model as a Yang-Baxter integrable free-fermion six-vertex model. This model is integrable in different geometries (cylinder, torus and strip) and with a variety of different integrable boundary conditions. The exact solution for the partition function arises from the complete spectrum of eigenvalues of the transfer matrix. This is obtained by solving functional equations, in the form of inversion identities, usually associated with the transfer matrix of the free-fermion six-vertex model, and by using the physical combinatorics of the pattern of zeros of the transfer matrix eigenvalues to classify the eigenvalues according to their degeneracies. In the case of the cylinder and torus, the transfer matrix can be diagonalised, while, in the other cases, we observe that in a certain representation the double-row transfer matrix exhibits non-trivial Jordan cells. Remarkably, the spectra of eigenvalues of dimers and critical dense polymers agree sector by sector. The similarity with critical dense polymers, which is a logarithmic field theory, raises the question of whether the free-fermion dimer model also manifests logarithmic behaviour in the continuum scaling limit. The debate is still open.
However, in our papers we provide a final answer and argue that the type of conformal field theory which best describes dimers is a logarithmic field theory, as follows from numerical estimates of the finite-size corrections to the critical free energy of the free-fermion six-vertex-equivalent dimer model.
The thesis is organized as follows. The first chapter is an introduction covering the basics of statistical mechanics on one side and CFTs on the other, with a specific focus on the two lattice models studied (non-unitary RSOS models and dimers) and the theories associated with their continuum description at criticality (minimal models and logarithmic CFTs). The second chapter considers the family of non-unitary RSOS models with π/n < λ < π and develops the discussion of the one-dimensional sums of the elementary and fused models, and the associated conformal characters in the continuum scaling limit.
The third and fourth chapters are dedicated to dimers, starting with periodic conditions on a cylinder and torus, and then more general integrable boundary conditions on a strip. In each case, a combinatorial analysis of the pattern of zeros of the transfer matrix eigenvalues is presented and extensively treated. This is followed by an analysis of the finite-size corrections to the critical free energy. Finally, the central charges and minimal conformal dimensions of critical dimers are discussed in depth, with concluding remarks about the logarithmic hypothesis. The thesis closes with a conclusion in which the main results of these studies are summarised and put into perspective with possible future research goals.
© 2019 Alessandra Vittorini Orgeas
PhD thesis
2019-01-01T00:00:00Z

Consolidation problems in freight transportation systems: mathematical models and algorithms
http://hdl.handle.net/11343/227683
Belin Castellucci, Pedro
2019
Consolidation problems in freight transportation systems: mathematical models and algorithms
Freight distribution systems are under stress. With the world population growing, the migration of people to urban areas, and technologies that allow purchases from virtually anywhere, efficient freight distribution can be challenging. Inefficient movement of goods may render businesses economically unviable, and it also has negative social and environmental effects. An important strategy for freight distribution systems is the consolidation of goods, i.e., grouping goods by destination. This strategy increases vehicle utilisation, reducing the number of vehicles and trips required for distribution and, consequently, costs, traffic, noise and air pollution. In this thesis, we explore consolidation in three different contexts (or cases) from an optimisation point of view. Each context is related to optimisation problems for which we developed mathematical programming models and solution methods.
The first case in which we explore consolidation is container loading problems (CLPs). CLPs are a class of packing problems that aim to position three-dimensional boxes inside a container efficiently. The literature has incorporated many practical aspects into container loading solution methods (e.g. restricting the orientation of boxes, stability, and weight distribution). However, to the best of our knowledge, the case of more dynamic systems (e.g. cross-docking), in which goods may have a schedule of arrival, had yet to be contemplated by the literature. We define an extension of the CLP, which we call the Container Loading Problem with Time Availability Constraints (CLPTAC), in which boxes are not always available for loading. We propose an extension of a CLP model suitable for the CLPTAC, together with solution methods that can also handle cases with uncertainty in the arrival schedule of the boxes.
The second case takes a broader view of the network, considering an open vehicle routing problem with cross-dock selection. The traditional vehicle routing problem has been widely studied, but its open version (i.e. with routes that start and end at different points) has not received the same attention. We propose a version of the open vehicle routing problem in which some nodes of the network are consolidation centres. Instead of shippers sending goods directly to their consumers, they must send them to one of the available consolidation centres, where goods are re-sorted and forwarded to their destinations. For this problem, we propose a mixed integer linear programming model for cost minimisation and a solution method based on the Benders decomposition framework.
The third case in which we explore consolidation is collaborative logistics. In particular, we focus on the shared use of the currently available infrastructure. We define a hub-selection problem in which one of the suppliers is selected as a hub. At the hub facility, other suppliers can meet to exchange their goods, allowing one supplier to satisfy the demand of others. For this problem, we propose a mixed integer linear programming model and a heuristic based on the model. Moreover, we compare a traditional distribution strategy, in which each supplier handles its own demand, against the collaborative one.
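A deliberately simplified sketch of the hub-selection idea (a 1-median choice over made-up supplier coordinates; the thesis's actual model is a mixed integer program with demands and costs):

```python
import math

# Simplified hub selection (not the thesis's MILP): pick the supplier whose
# location minimises total Euclidean distance to all suppliers, i.e. a
# 1-median problem. All coordinates are hypothetical.

suppliers = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (4.0, 3.0), "D": (9.0, 3.0)}

def total_distance(hub):
    """Sum of distances from every supplier to the candidate hub."""
    hx, hy = suppliers[hub]
    return sum(math.hypot(x - hx, y - hy) for x, y in suppliers.values())

hub = min(suppliers, key=total_distance)
print(hub, round(total_distance(hub), 2))   # supplier B minimises total travel here
```

The real problem adds demands, vehicle capacities and exchange constraints, which is why an exact MILP and a heuristic are needed.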
In this thesis, we explore these three consolidation-related cases for improving the efficiency of freight distribution systems. We extend some problems (e.g. versions of the CLP) to apply them in a more dynamic setting, and we also define optimisation problems for networks with consolidation centres. Furthermore, we propose solution methods for each of the defined problems and evaluate them using randomly generated instances, benchmarks from the literature, and cases based on real-world characteristics.
Completed under a Cotutelle arrangement between the University of Melbourne and Universidade de Sao Paulo, Brazil; © 2019 Pedro Belin Castellucci
PhD thesis
2019-01-01T00:00:00Z

Models of infectious disease transmission to explore the effects of immune boosting
http://hdl.handle.net/11343/227531
Leung, Ngo Nam
2019
Models of infectious disease transmission to explore the effects of immune boosting
Despite advances in prevention and control, infectious diseases continue to be a burden to human health. Many factors, including the waning and boosting of immunity, are involved in the spread of disease. The link between immunological processes at an individual level and populationlevel immunity is complex and subtle. Mathematical models are a useful tool to understand the biological mechanisms behind the observed epidemiological patterns of an infectious disease.
Here I construct deterministic compartment models of infectious disease transmission to study the effects of waning and boosting of immunity on infectious disease dynamics. While waning immunity has been studied in many models, the incorporation of immune boosting in models of transmission has largely been neglected. In my study, I look at three different aspects of immune boosting: (i) the influence of immune boosting on the critical vaccination thresholds required for disease control; (ii) the effect of immune boosting and cross-immunity on infectious disease dynamics; and (iii) the influence on infection prevalence of differentiating vaccine-acquired immunity from natural (infection-acquired) immunity, and of different mechanisms of immune boosting.
Models can provide support for public health control measures in terms of critical vaccination thresholds. There is a direct relationship, from mathematical theory, between the critical vaccination threshold and the basic reproduction number, R0. Key epidemiological quantities, such as R0, are measured from data, but the selection of the model used to infer these quantities matters. I show how the inferred values of R0, and thus the critical vaccination thresholds, can vary when immune boosting is taken into account.
I also investigate the effects of interactions between immune boosting and cross-immunity on infectious disease dynamics, using a two-pathogen transmission model. Immunity to one pathogen that confers immunity to another pathogen, or to another strain of a given pathogen, is known as cross-immunity. Varying levels of susceptibility to infection conferred by cross-immunity are included in the model. Using a combination of numerical simulations and bifurcation analyses, I show that strong immune boosting can lead to recurrent epidemics (periodic solutions) independently of cross-immunity. Where immune boosting is weak, cross-immunity allows the model to generate periodic solutions.
For some diseases, infection-acquired immunity and vaccine-acquired immunity differ. I explore the effect of vaccination and immune boosting on the epidemiological patterns of infectious disease. I construct and analyse a model that differentiates vaccine-acquired immunity from infection-acquired immunity through the duration of protection, and that distinguishes between primary and secondary infections. I show that vaccination is effective at reducing primary infections but not necessarily secondary infections, which can maintain overall transmission. Two different mechanisms through which immune boosting provides protection are also explored: whether immune boosting delays or bypasses a primary infection can determine whether primary or secondary infections contribute most to transmission.
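A minimal sketch of a generic compartment model with waning and immune boosting (an SIRWS-type structure with assumed illustrative rates, not the thesis's fitted models):

```python
# Generic SIRWS-type model (illustrative parameters, not the thesis's):
# S -> I (infection), I -> R (recovery), R -> W (waning of immunity),
# W -> R (immune boosting on re-exposure, at rate nu*beta*I),
# W -> S (complete loss of immunity). Forward-Euler integration.

beta, gamma, kappa, nu = 0.5, 0.1, 0.01, 1.0   # assumed rates
S, I, R, W = 0.99, 0.01, 0.0, 0.0
dt, steps = 0.1, 5000

for _ in range(steps):
    dS = -beta * S * I + kappa * W
    dI = beta * S * I - gamma * I
    dR = gamma * I + nu * beta * W * I - kappa * R
    dW = kappa * R - kappa * W - nu * beta * W * I
    S, I, R, W = S + dt * dS, I + dt * dI, R + dt * dR, W + dt * dW

assert abs(S + I + R + W - 1.0) < 1e-9          # population is conserved
```

Varying the boosting strength nu in such a toy model changes whether partially immune individuals return to full protection or become susceptible again, which is the mechanism whose dynamical consequences the thesis analyses rigorously.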
© 2019 Ngo Nam Leung
PhD thesis
2019-01-01T00:00:00Z

Quantitative Epidemiology: A Bayesian Perspective
http://hdl.handle.net/11343/227413
Zarebski, Alexander Eugene
2019
Quantitative Epidemiology: A Bayesian Perspective
Influenza inflicts a substantial burden on society, but accurate and timely forecasts of seasonal epidemics can help mitigate this burden by informing interventions to reduce transmission. Recently, both statistical (correlative) and mechanistic (causal) models have been used to forecast epidemics. However, since mechanistic models are based on the causal process underlying the epidemic, they are poised to be more useful in the design of intervention strategies. This study investigates approaches to improve epidemic forecasting using mechanistic models. In particular, it reports on efforts to improve a forecasting system targeting seasonal influenza epidemics in major cities across Australia.
To improve the forecasting system, we first needed a way to benchmark its performance. We investigate model selection in the context of forecasting, deriving a novel method which extends the notion of Bayes factors to a predictive setting. Applying this methodology, we found that accounting for seasonal variation in absolute humidity improves forecasts of seasonal influenza in Melbourne, Australia. This result holds even when accounting for the uncertainty in predicting seasonal variation in absolute humidity.
Our initial attempts to forecast influenza transmission with mechanistic models were hampered by high levels of uncertainty in forecasts produced early in the season. While substantial uncertainty seems inextricable from long-term prediction, it seemed plausible that historical data could assist in reducing this uncertainty. We define a class of prior distributions which simplify the process of incorporating existing knowledge into an analysis, and in doing so offer a refined interpretation of the prior distribution. As an example, we used historical time series of influenza epidemics to reduce initial uncertainty in forecasts for Sydney, Australia. We also explore potential pitfalls that may be encountered when using this class of prior distribution.
Deviating from the theme of forecasting, we consider the use of branching processes to model early transmission in an epidemic. An inhomogeneous branching process is derived which allows the study of transmission dynamics early in an epidemic. A generation-dependent offspring distribution allows the branching process to have subexponential growth on average. The multiscale nature of a branching process allows us to utilise both time series of incidence and infection networks. This methodology is applied to data collected during the 2014–2016 Ebola epidemic in West Africa, leading to the inference that transmission grew subexponentially in Guinea, Liberia and Sierra Leone.
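A hedged sketch of a branching process with a generation-dependent offspring distribution (assumed Poisson offspring and illustrative parameters, not the thesis's Ebola analysis):

```python
import math
import random

# Illustrative sketch: a branching process with generation-dependent Poisson
# offspring mean m(g) = r0 * (g + 1)**(-alpha). Because the per-generation
# growth factor decays, the expected generation sizes grow subexponentially.
# All parameter values are assumptions for illustration.

r0, alpha, generations = 2.5, 0.5, 8

# Expected size of generation g is the product of the offspring means.
expected = [1.0]
for g in range(generations):
    expected.append(expected[-1] * r0 * (g + 1) ** (-alpha))

# One simulated path of the same process.
random.seed(1)

def poisson(mean):
    """Knuth's algorithm; adequate for small means."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

sizes = [1]
for g in range(generations):
    mean = r0 * (g + 1) ** (-alpha)
    sizes.append(sum(poisson(mean) for _ in range(sizes[-1])))

print(expected[-1], sizes[-1])
```

The strictly decreasing growth factors m(g) are what distinguish this from the constant-mean Galton-Watson process, whose expected sizes grow geometrically.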
Throughout this thesis, we demonstrate the utility of mechanistic models in epidemiology and how a Bayesian approach to statistical inference is complementary to this.
© Alexander Eugene Zarebski
PhD thesis
2019-01-01T00:00:00Z

Integrated Wishart bridges and their applications
http://hdl.handle.net/11343/227173
Leung, Jason
2018
Integrated Wishart bridges and their applications
This thesis focuses on the study of Wishart processes, which can be considered matrix-valued square-root processes. In mathematical finance, square-root processes find applications in interest rate modelling (the Cox-Ingersoll-Ross model) and in the Heston volatility model, where a square-root process models the stochastic volatility of a risky asset.
The main results of this thesis concern the change of measure and time integrals of Wishart processes, which we call integrated Wishart processes, as well as the generalised Hartman-Watson law of Wishart processes. In particular, we are interested in the joint conditional Laplace transform of the time integral of a Wishart process and its generalised Hartman-Watson law. Applications of the integrated Wishart processes to Monte Carlo simulation and to path simulation of multi-factor stochastic volatility processes are also discussed.
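A hedged sketch of the scalar square-root (CIR) process mentioned above, simulated by Euler-Maruyama with full truncation (illustrative parameters only; the thesis's matrix-valued Wishart setting is substantially more involved):

```python
import numpy as np

# Euler-Maruyama sketch of the scalar square-root (CIR) process
#   dX_t = kappa * (theta - X_t) dt + sigma * sqrt(X_t) dW_t,
# using full truncation to keep the argument of sqrt non-negative.
# Parameters are illustrative, not taken from the thesis.

rng = np.random.default_rng(7)
kappa, theta, sigma = 2.0, 1.0, 0.3
dt, steps = 0.01, 20000
x = np.empty(steps + 1)
x[0] = theta

for i in range(steps):
    xp = max(x[i], 0.0)                 # full truncation of the sqrt argument
    x[i + 1] = (x[i] + kappa * (theta - xp) * dt
                + sigma * np.sqrt(xp) * rng.normal() * np.sqrt(dt))

# Under the Feller condition 2*kappa*theta >= sigma**2 the true process stays
# positive; the long-run average of the path should be close to theta.
print(x.mean())
```

A Wishart process replaces the scalar x by a positive semidefinite matrix, with the mean-reversion and diffusion terms becoming matrix-valued, which is where the thesis's Laplace-transform machinery comes in.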
© 2018 Jason Leung
PhD thesis
2018-01-01T00:00:00Z

Coset construction for the N=2 and osp(1|2) minimal models
http://hdl.handle.net/11343/227092
Liu, Tianshu
2019
Coset construction for the N=2 and osp(1|2) minimal models
The thesis presents a study of the N=2 and osp(1|2) minimal models at admissible levels using the method of coset constructions. These sophisticated minimal models are rich in mathematical structure and come with various interesting features for us to investigate. First, some general principles of conformal field theory are reviewed and the notations used throughout the thesis are established. The ideas are then illustrated with three examples of bosonic conformal field theories, namely the free boson, the Virasoro minimal models, and the admissible-level Wess-Zumino-Witten models of affine sl(2). The concept of supersymmetry is then introduced, and examples of fermionic conformal field theories are discussed.
Of the two minimal models of interest, the N=2 minimal model, tensored with a free boson, can be extended into an sl(2) minimal model tensored with a pair of fermionic ghosts, whereas an osp(1|2) minimal model is an extension of the tensor product of certain Virasoro and sl(2) minimal models. We can therefore induce the known structures of the representations of the coset components and obtain a rather complete picture of the minimal models we want to investigate. In particular, the irreducible highest-weight modules (including the relaxed highest-weight modules, which result in a continuous spectrum) are classified, and their characters and Grothendieck fusion rules are computed. The genuine fusion products and the projective covers of the irreducibles are conjectured.
The thesis concludes with a vision of how this method can be used for the study of other affine superalgebras. This provides a promising approach to solving superconformal field theories that are currently little known in the literature.
© 2019 Dr. Tianshu Liu
PhD thesis
2019-01-01T00:00:00Z

Sparse composite likelihood approaches for high dimensional data
http://hdl.handle.net/11343/227052
Huang, Zhendong
2019
Sparse composite likelihood approaches for high dimensional data
The idea of the likelihood function, which plays an important role in the history of statistics, has been widely used in many areas of parametric statistics. Composite likelihood approaches are useful tools for making statistical inferences about parametric models when the classical full-likelihood methods fail due to difficulties related to model complexity, computational burden, etc. In this thesis, new methodologies regarding composite likelihoods with sparse composition rules are developed and used to address specific problems in the fields of biology, environmental science and engineering.
A new method for composite likelihood estimation with a sparse and continuous composition rule is proposed in Chapter 3, with its asymptotic properties and performance thoroughly studied. An algorithm for simultaneously searching over composition rules and estimating parameters is also introduced. The framework of composite likelihoods and the sparse approach are further extended and improved for application to extremes data and functional magnetic resonance imaging data. In general, the results of our research show that the proposed composite likelihood methods are useful tools for handling high-dimensional data in many applications and have great potential for further development. A summary and future directions are also discussed.
© 2019 Dr. Zhendong Huang
PhD thesis
2019-01-01T00:00:00Z

The coupling time for the Ising heat-bath dynamics & efficient optimization for statistical inference
http://hdl.handle.net/11343/225723
Hyndman, Timothy Luke
2019
The coupling time for the Ising heat-bath dynamics & efficient optimization for statistical inference
In this thesis we consider two separate topics of study. The first concerns the Ising heat-bath Glauber dynamics. These dynamics describe a continuous-time Markov chain whose states are assignments of spins to each vertex of a given graph. We define a coupling of two such Markov chains, as well as the coupling time, which is the time it takes for these chains to reach the same spin configuration. We prove that on certain graphs, at certain temperatures, the distribution of the coupling time converges to a Gumbel distribution. We begin by proving this for the one-dimensional cycle at all temperatures. We then extend our proof to a certain class of transitive graphs at sufficiently high temperatures. Fundamental to our proofs are the promising new framework of information percolation, used by Lubetzky and Sly to prove the existence of cutoff for the Ising model, and compound Poisson approximation. We also prove a general result relating the coupling times of the discrete and continuous dynamics.
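A minimal sketch of such a coupling for discrete-time heat-bath dynamics on a small cycle (illustrative parameters; the thesis treats the continuous-time dynamics and proves the Gumbel limit):

```python
import math
import random

# Sketch of a grand (monotone) coupling for discrete-time heat-bath Glauber
# dynamics of the Ising model on a cycle of n spins: both chains use the
# same random site and the same uniform at every step, so once they meet
# they stay together. The values of n and beta are illustrative.

n, beta = 10, 0.3
random.seed(0)

def heat_bath_step(spins, site, u):
    """Resample spins[site] from its conditional distribution given its
    two neighbours on the cycle, using the shared uniform u."""
    field = spins[(site - 1) % n] + spins[(site + 1) % n]
    p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
    spins[site] = 1 if u < p_plus else -1

top = [1] * n        # all-plus start
bot = [-1] * n       # all-minus start
t = 0
while top != bot:
    site, u = random.randrange(n), random.random()
    heat_bath_step(top, site, u)
    heat_bath_step(bot, site, u)
    t += 1

print(t)             # the coupling time, in single-site updates
```

Because p_plus is increasing in the local field and both chains share the same uniforms, the all-plus chain dominates the all-minus chain at every step, which guarantees eventual coalescence.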
The second topic of this thesis concerns two optimization problems that arise in statistical inference: maximum likelihood mixtures and a deconvolution technique. In both problems, we solve an optimization problem to find a discrete probability distribution, and we often find that the solution has surprisingly few points of support. We explore this phenomenon empirically for each problem. For maximum likelihood mixtures, we discuss the results of Lindsay concerning the number of points of support in the maximizing distribution, and then prove some new results which extend them. For the deconvolution problem, we propose a new method that takes advantage of this phenomenon, based on our empirical exploration. We use this method in our new R package ‘deconvolve’.
© 2019 Dr. Timothy Luke Hyndman
PhD thesis
2019-01-01T00:00:00Z

An explained sum of squares approach to nonparametric regression with measurement error
http://hdl.handle.net/11343/225665
Tran, Jason GiaSon
2019
An explained sum of squares approach to nonparametric regression with measurement error
We introduce a new method for nonparametric regression problems in the presence of measurement error, known as the explained sum of squares. We discuss its theoretical properties and demonstrate its practical application in both univariate and multivariate settings. We show that our estimator is theoretically consistent and offers a novel alternative to the deconvolution methods popular in the measurement error literature.
© 2019 Jason GiaSon Tran
Masters Research thesis
2019-01-01T00:00:00Z

Stochastic spatial-temporal models for rainfall processes
http://hdl.handle.net/11343/225623
Aryal, Nanda Ram
2018
Stochastic spatial-temporal models for rainfall processes
Clustered rainfall models have typically been fitted using the Generalized Method of Moments (GMM), because they usually have intractable likelihood functions. GMM fitting matches theoretical and observed moments of the process and is thus restricted to models for which analytic expressions for the moments are available. We show that Approximate Bayesian Computation (ABC) can also be used to fit clustered rainfall models, and we demonstrate that ABC readily adapts to more general, and thus more realistic, variants of spatial-temporal rainfall models.
ABC fitting compares the observed process with simulations and hence places no restrictions on the statistics used for the comparison. This opens up the possibility of fitting much more realistic stochastic rainfall models. The penalty we pay for this increased flexibility is an increase in computational time. The Simulated Method of Moments (SMM) is used to initialise the ABC; it can also be used to estimate the weights of the distance measure in the ABC-MCMC setting. We found that our method requires much less computation time than the approach suggested by previous authors, which uses a separate ABC step for initialisation.
A spatial-temporal rainfall model based on a cluster process is constructed by taking a primary process, called the storm arrival process, and attaching to each storm centre a finite secondary process, called a cell process. The total intensity at a point in R2 × [0, ∞) is the sum of the intensities of all cells active at that point. Typically, the model parameters are interdependent. This dependency complicates model fitting procedures and has also restricted further extension of the model, particularly the derivation of theoretical expressions for the moments. Fortunately, ABC can be applied without analytical expressions for the moments. We reparameterised the models and log-transformed the parameters to reduce dependence and skewness, which also simplifies the chain proposal in the MCMC steps.
We also present two new stochastic spatial-temporal rainfall models that yield a better representation of observed rainfall processes and capture the dependence between size and intensity for rain cells.
© 2018 Dr. Nanda Ram Aryal
PhD thesis
2018-01-01T00:00:00Z

Enumerative problems in algebraic geometry motivated from physics
http://hdl.handle.net/11343/225589
Leigh, Oliver
2019
Enumerative problems in algebraic geometry motivated from physics
This thesis contains two chapters which reflect the two main viewpoints of modern enumerative geometry.
In chapter 1 we develop a theory for stable maps to curves with divisible ramification. For a fixed integer r>0, we show that the condition that every ramification locus is divisible by r is equivalent to the existence of an r-th root of a canonical section. We consider this condition for both absolute and relative stable maps and construct natural moduli spaces in these situations. We construct an analogue of the Fantechi-Pandharipande branch morphism, and when the domain curves have genus zero we construct a virtual fundamental class. This theory is anticipated to have applications to r-spin Hurwitz theory. In particular, it is expected to provide a proof of the r-spin ELSV formula [SSZ'15, Conj. 1.4] when combined with virtual localisation.
In chapter 2 we further the study of the Donaldson-Thomas theory of the banana threefolds, which were recently discovered and studied in [Bryan'19]. These are smooth proper Calabi-Yau threefolds which are fibred by Abelian surfaces such that the singular locus of a singular fibre is a non-normal toric curve known as a "banana configuration". In [Bryan'19] the Donaldson-Thomas partition function for the rank 3 sublattice generated by the banana configurations is calculated. In this chapter we provide calculations with a view towards the rank 4 sublattice generated by a section and the banana configurations. We relate the findings to the Pandharipande-Thomas theory for a rational elliptic surface and present new Gopakumar-Vafa invariants for the banana threefold.
© 2019 Dr. Oliver Leigh; Completed under a Cotutelle arrangement between the University of Melbourne and The University of British Columbia.
PhD thesis
2019-01-01T00:00:00Z

Comparison theorems for torus-equivariant elliptic cohomology theories
http://hdl.handle.net/11343/225560
Spong, Matthew James
2019
Comparison theorems for torus-equivariant elliptic cohomology theories
In 1994, Grojnowski gave a construction of an equivariant elliptic cohomology theory associated to an elliptic curve over the complex numbers. Grojnowski's construction has seen numerous applications in algebraic topology and geometric representation theory; however, the construction is somewhat ad hoc, and there has been significant interest in the question of its geometric interpretation.
We show that there are two global models for Grojnowski's theory, which shed light on its geometric meaning. The first model is constructed as the Borel-equivariant cohomology of a double free loop space, and is a holomorphic version of a construction of Rezk from 2016. The second model is constructed as the loop-group-equivariant K-theory of a free loop space, and is a slight modification of a construction given in 2014 by Kitchloo, motivated by ideas in conformal field theory. We investigate the properties of each model and establish their precise relationship to Grojnowski's theory.
© 2019 Dr. Matthew James Spong
PhD thesis
2019-01-01T00:00:00Z

CAT(0) structures on link exteriors: variations on a theme
http://hdl.handle.net/11343/224389
Dow, Ana J.
2019
CAT(0) structures on link exteriors: variations on a theme
This thesis adapts the cubical CAT(0) Aitchison complex, A(L), of alternating link exteriors to construct CAT(0) polyhedral metric structures, A′(L), on the exteriors of various links from cubes and prisms. The first class of links for which this is done is a subclass of adequate links, called suitable adequate links. This class is characterised by the fact that it decomposes into strongly alternating tangles that satisfy certain mild conditions. An example of a suitable adequate link, used throughout the thesis to illustrate the theory, is 1533593. The positive and negative state sum surfaces of adequate links are essential. Put another way, the checkerboard surfaces of the projection of an adequate link onto its Turaev surface are essential. These two surfaces provide a topological way of characterising an alternating link, and they form totally geodesic surfaces in the Aitchison complex. In the suitable adequate case, these essential state sum surfaces are shown to be combinatorially generalised pleated. Elaborating on the technique for constructing A′(L) for suitable adequate links, CAT(0) semi-cubical complexes are constructed for the exteriors of links belonging to a class of amenable links, which includes the class of suitable adequate links but also links that have no adequate diagram, such as 11n07. Innovations on the boundary pattern of the polyhedral decomposition of these link exteriors allow the technique to be adapted once more, so that a CAT(0) semi-cubing can be placed on the exterior of the adequate Montesinos link (3, −3, 3, −2) and the planar 2-cable of the figure eight knot.
Finally, by increasing the number of pieces in the polyhedral decomposition of the link exterior, a CAT(0) semi-cubing can be placed on the exterior of a class of links dubbed 'stacked links': links whose projection onto a higher-genus Heegaard surface decomposes into strongly alternating tangles satisfying a variant of the mild conditions imposed on the tangles of the class of suitable adequate links.
© 2019 Dr. Ana J. Dow
PhD thesis
2019-01-01T00:00:00Z

Transport equations and boundary conditions for oscillatory rarefied gas flows
http://hdl.handle.net/11343/224064
Liu, Nicholas Zhixian
2018
Transport equations and boundary conditions for oscillatory rarefied gas flows
A wide range of flow phenomena in everyday life can be modelled accurately using classical continuum theory: the Navier-Stokes equations with associated no-slip conditions. However, oscillatory flows generated by nanoscale devices violate the basic assumptions underpinning continuum theory. Study of such flows, under the assumption of small perturbations from equilibrium, requires analysis of the unsteady Boltzmann equation. At sufficiently high oscillation frequencies, the flow can exhibit wave propagation. This thesis presents a rigorous asymptotic analysis of slightly rarefied wave motion. Particular emphasis is placed on the boundary layer structure, transport equations and associated slip boundary conditions, valid for a general curved oscillating boundary with a velocity-independent body force.
© 2018 Nicholas Zhixian Liu
Masters Coursework thesis
2018-01-01T00:00:00Z

A combinatorial curvature flow for ideal triangulations
http://hdl.handle.net/11343/222445
Yang, Tianyu
2019
A combinatorial curvature flow for ideal triangulations
We investigate a combinatorial analogue of the Ricci curvature flow for 3-dimensional hyperbolic cone structures, obtained by gluing together hyperbolic ideal tetrahedra. Our aim is to find a hyperbolic structure for triangulated 3-dimensional cusped manifolds such as the figure eight knot complement, by fixing the "no shearing" condition around edges and the condition of "completeness" at each cusp, and varying the edge lengths to adjust the cone angles around the edges. To achieve this goal, we first work out a combinatorial curvature flow which only deals with a single ideal tetrahedron. We then deal with this kind of flow on a bipyramid (or double tetrahedron): two ideal tetrahedra glued together along a triangular face. Using the single and double tetrahedron cases as building blocks, we study a general combinatorial curvature flow for the 3-manifold case with the geometric metric as the equilibrium for the flow. We then apply theorems worked out in this general case to two-tetrahedron manifolds, such as the figure eight knot complement, and to the three-tetrahedron manifold m011. Most of the tools, such as evolution equations for curvatures, the covolume function and the analysis of the flat boundary and degenerating cases, can be used repeatedly in different situations. The conclusion is that for geometric triangulations (with all dihedral angles positive for the complete hyperbolic structure), the flow converges to the geometric metric exponentially fast as long as the flat boundary can be avoided, as in the m011 case in Chapter 6.
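As a caricature of the single-tetrahedron case (a hedged sketch, not the flow constructed in the thesis), one can use the fact that the dihedral angles of an ideal hyperbolic tetrahedron come in three equal pairs (α, β, γ) with α + β + γ = π, and relax the angles toward the regular value π/3 while preserving the angle-sum constraint. The linear driving term below is an illustrative assumption standing in for the actual curvature.

```python
import numpy as np

def toy_angle_flow(angles, steps=200, dt=0.05):
    """Toy curvature flow on one ideal tetrahedron: dihedral angles
    (a, b, g) satisfy a + b + g = pi; flow each angle toward pi/3 while
    projecting the step onto the constraint plane, so the equilibrium
    is the regular ideal tetrahedron."""
    x = np.array(angles, dtype=float)
    target = np.pi / 3
    for _ in range(steps):
        grad = -(x - target)      # linearised stand-in for the curvature
        grad -= grad.mean()       # keep a + b + g = pi along the flow
        x += dt * grad
    return x

final = toy_angle_flow([0.5, 1.0, np.pi - 1.5])
print(final)   # ≈ [pi/3, pi/3, pi/3]
```

Since each deviation from π/3 is damped by a factor (1 − dt) per step, the toy flow reaches its equilibrium exponentially fast, mirroring the convergence behaviour described for the genuine flow.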
© 2019 Dr. Tianyu Yang
PhD thesis
2019-01-01T00:00:00Z

Degree-bounded geometric spanning trees with a bottleneck objective function
http://hdl.handle.net/11343/221999
Andersen, Patrick
2019
Degree-bounded geometric spanning trees with a bottleneck objective function
We introduce the geometric $\delta$-minimum bottleneck spanning tree problem ($\delta$-MBST), which is the problem of finding a spanning tree for a set of points in a geometric space (e.g., the Euclidean plane) such that no vertex in the tree has a degree that exceeds $\delta$, and the length of the longest edge in the tree is minimum. We give complexity results for this problem and describe several approximation algorithms whose performance we investigate, both analytically and through computational experiments.
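To make the problem concrete, here is a simple Kruskal-style greedy heuristic (an illustration only, not one of the approximation algorithms analysed in the thesis): edges are scanned in order of increasing length and accepted whenever they join two components without pushing either endpoint past degree δ; the longest accepted edge gives a heuristic bottleneck value.

```python
import itertools, math

def delta_mbst_heuristic(points, delta):
    """Greedy heuristic for the delta-MBST: scan edges by length,
    accepting an edge when it joins two components and neither endpoint
    has reached degree delta.  Returns (tree edges, bottleneck length),
    or None if the greedy scan fails to connect all points."""
    n = len(points)
    parent = list(range(n))
    def find(i):                          # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    deg = [0] * n
    dist = lambda a, b: math.dist(points[a], points[b])
    edges = sorted(itertools.combinations(range(n), 2), key=lambda e: dist(*e))
    tree, bottleneck = [], 0.0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv and deg[u] < delta and deg[v] < delta:
            parent[ru] = rv
            deg[u] += 1; deg[v] += 1
            tree.append((u, v))
            bottleneck = max(bottleneck, dist(u, v))
    return (tree, bottleneck) if len(tree) == n - 1 else None

pts = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
tree, b = delta_mbst_heuristic(pts, delta=3)
print(len(tree), b)   # 5 edges and the heuristic bottleneck length
```

Note that the degree cap can strand the greedy scan (hence the None return), and the result is only an upper bound on the true $\delta$-MBST bottleneck; the hardness results in the thesis explain why an exact efficient algorithm is unlikely for small $\delta$.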
© 2019 Dr Patrick Andersen
PhD thesis
2019-01-01T00:00:00Z