School of Mathematics and Statistics
http://hdl.handle.net/11343/293
2019-08-21T07:07:19Z

Integrated Wishart bridges and their applications
Leung, Jason
http://hdl.handle.net/11343/227173

This thesis focuses on the study of Wishart processes, which can be regarded as matrix-valued square-root processes. In mathematical finance, square-root processes find applications in interest rate modelling (the Cox-Ingersoll-Ross model) and in the Heston model, where a square-root process models the stochastic volatility of a risky asset. The main results of this thesis concern the change of measure and the time integrals of Wishart processes, which we shall call integrated Wishart processes, as well as the generalised Hartman-Watson law of Wishart processes. In particular, we are interested in the joint conditional Laplace transform of the time integral of a Wishart process and its generalised Hartman-Watson law. Applications of integrated Wishart processes to Monte Carlo simulation and to path simulation of multi-factor stochastic volatility processes are also discussed.

© 2019 Jason Leung
2018-01-01T00:00:00Z

Coset construction for the N=2 and osp(1|2) minimal models
Liu, Tianshu
http://hdl.handle.net/11343/227092

The thesis presents the study of the N=2 and osp(1|2) minimal models at admissible levels using the method of coset constructions. These sophisticated minimal models are rich in mathematical structure and come with various interesting features to investigate. First, some general principles of conformal field theory are reviewed and the notation used throughout the thesis is established. The ideas are then illustrated with three examples of bosonic conformal field theories, namely the free boson, the Virasoro minimal models, and the admissible-level Wess-Zumino-Witten models of affine sl(2).
The concept of supersymmetry is then introduced, and examples of fermionic conformal field theories are discussed. Of the two minimal models of interest, the N=2 minimal model, tensored with a free boson, can be extended into an sl(2) minimal model tensored with a pair of fermionic ghosts, whereas an osp(1|2) minimal model is an extension of the tensor product of certain Virasoro and sl(2) minimal models. We can therefore induce the known structures of the representations of the coset components and obtain a rather complete picture of the minimal models under investigation. In particular, the irreducible highest-weight modules (including the relaxed highest-weight modules, which give rise to a continuous spectrum) are classified, and their characters and Grothendieck fusion rules are computed. The genuine fusion products and the projective covers of the irreducibles are conjectured. The thesis concludes with a vision of how this method can be used to study other affine superalgebras, providing a promising approach to solving superconformal field theories that are currently little known in the literature.

© 2019 Dr. Tianshu Liu
2019-01-01T00:00:00Z

Sparse composite likelihood approaches for high dimensional data
Huang, Zhendong
http://hdl.handle.net/11343/227052

The idea of the likelihood function, which plays an important role in the history of statistics, has been widely used in many areas of parametric statistics. Composite likelihood approaches are useful tools for making statistical inferences about parametric models when classical full-likelihood methods fail due to difficulties related to model complexity, computational burden, etc. In this thesis, new methodologies for composite likelihoods with sparse composition rules are developed and used to address specific problems in the fields of biology, environmental science and engineering.
A new method for composite likelihood estimation with a sparse and continuous composition rule is proposed in Chapter 3, and its asymptotic properties and performance are thoroughly studied. An algorithm for simultaneously searching for composition rules and estimating parameters is also introduced. The framework of composite likelihoods and the sparse approach are further extended and improved for applications to extreme-value data and functional magnetic resonance imaging data. In general, the results of our research show that the proposed composite likelihood methods are useful tools for handling high-dimensional data in many applications and have great potential for further development. A summary and future directions are also discussed.

© 2019 Dr. Zhendong Huang
2019-01-01T00:00:00Z

The coupling time for the Ising heat-bath dynamics & efficient optimization for statistical inference
Hyndman, Timothy Luke
http://hdl.handle.net/11343/225723

In this thesis we consider two separate topics of study. The first topic concerns the Ising heat-bath Glauber dynamics. These dynamics describe a continuous-time Markov chain whose states are assignments of spins to each vertex of a given graph. We define a coupling of two such Markov chains, as well as the coupling time, which is the time it takes for these chains to reach the same spin configuration. We prove that on certain graphs, at certain temperatures, the distribution of the coupling time converges to a Gumbel distribution. We begin by proving this for the one-dimensional cycle at all temperatures. We then extend our proof to a certain class of transitive graphs at sufficiently high temperatures. Fundamental to our proofs are compound Poisson approximation and the promising new framework of information percolation, used by Lubetzky and Sly to prove the existence of cutoff for the Ising model.
We also prove a general result relating the coupling times of the discrete and continuous dynamics. The second topic of this thesis concerns two optimization problems that arise in statistical inference. The first is maximum likelihood mixtures, and the second is a deconvolution technique. In both problems, we solve an optimization problem to find a discrete probability distribution, and we often find that the solution has surprisingly few points of support. We explore this phenomenon empirically for each problem. For maximum likelihood mixtures, we discuss the results of Lindsay concerning the number of points of support in the maximizing distribution, and we then prove some new results which extend Lindsay's results. For the deconvolution problem, we propose a new method that takes advantage of this phenomenon, based on our empirical exploration. We use this method in our new R package ‘deconvolve’.

© 2019 Dr. Timothy Luke Hyndman
2019-01-01T00:00:00Z
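The heat-bath dynamics and coupling described in the Hyndman abstract can be illustrated with a small simulation. The sketch below is not code from the thesis: it is a toy discrete-time implementation of heat-bath Glauber dynamics for the Ising model on the n-cycle, coupling two chains started from the all-plus and all-minus configurations by sharing randomness, and recording the (single-site-update) time at which they first agree. The function name `heat_bath_couple` and all parameter defaults are hypothetical choices for illustration only.

```python
import math
import random

def heat_bath_couple(n=20, beta=0.3, seed=0, max_steps=200000):
    """Couple two Ising heat-bath chains on the n-cycle, started from
    all +1 and all -1, by sharing the site choice and the uniform used
    at each update (a monotone 'grand coupling').  Returns the number
    of single-site updates until the configurations agree, or None if
    they have not coupled within max_steps.  Illustrative toy code."""
    rng = random.Random(seed)
    x = [1] * n    # chain started from the all-plus configuration
    y = [-1] * n   # chain started from the all-minus configuration
    for t in range(1, max_steps + 1):
        i = rng.randrange(n)   # uniformly chosen site, shared by both chains
        u = rng.random()       # shared uniform random variable
        for s in (x, y):
            # local field at site i from its two neighbours on the cycle
            h = s[(i - 1) % n] + s[(i + 1) % n]
            # heat-bath probability of spin +1: e^{beta h}/(e^{beta h}+e^{-beta h})
            p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
            s[i] = 1 if u < p_plus else -1
        if x == y:
            return t
    return None
```

Because the heat-bath update probability is monotone in the local field, sharing the uniform `u` preserves the pointwise ordering of the two chains, so once they meet they stay together; the returned value is one sample of the coupling time whose limiting Gumbel fluctuations the thesis studies (in continuous time, and from worst-case starting pairs).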