School of Mathematics and Statistics - Research Publications

Search Results

Now showing 1 - 10 of 25
  • Item
    Off-lattice and parallel implementations of the pivot algorithm
    Clisby, N ; Ho, DTC (IOP Publishing, 2021-12-09)
    The pivot algorithm is the most efficient known method for sampling polymer configurations for self-avoiding walks and related models. Here we introduce two recent improvements to an efficient binary-tree implementation of the pivot algorithm: an extension to an off-lattice model, and a parallel implementation.
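As a hedged illustration of the underlying method (not the paper's binary-tree implementation), here is a minimal single-move sketch of the pivot algorithm for a 2D square-lattice self-avoiding walk, with a naive O(N) self-avoidance check; all names are illustrative:

```python
import random

# The eight symmetries of the square lattice (rotations and reflections).
SYMMETRIES = [
    lambda x, y: (x, y),
    lambda x, y: (-y, x),
    lambda x, y: (-x, -y),
    lambda x, y: (y, -x),
    lambda x, y: (x, -y),
    lambda x, y: (-x, y),
    lambda x, y: (y, x),
    lambda x, y: (-y, -x),
]

def pivot_move(walk):
    """Attempt one pivot move; return the new walk, or the old one if rejected."""
    n = len(walk)
    k = random.randrange(1, n - 1)          # pivot site (not an endpoint)
    g = random.choice(SYMMETRIES)           # random lattice symmetry
    px, py = walk[k]
    head = walk[:k + 1]
    # Apply the symmetry about the pivot site to the tail of the walk.
    tail = [(px + gx, py + gy)
            for gx, gy in (g(x - px, y - py) for x, y in walk[k + 1:])]
    new_walk = head + tail
    # Accept only if the result is still self-avoiding.
    if len(set(new_walk)) == n:
        return new_walk
    return walk

# Usage: start from a straight rod and equilibrate by repeated pivots.
walk = [(i, 0) for i in range(20)]
for _ in range(1000):
    walk = pivot_move(walk)
assert len(set(walk)) == len(walk)          # still self-avoiding
```

The paper's contribution is making each such move far cheaper than this naive check, and running moves in parallel; the sketch only shows the move itself.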
  • Item
    Semiglobal Practical Stability of a Class of Parameterized Networked Control Systems
    Wang, B ; Nesic, D (IEEE, 2012-01-01)
    This paper studies a class of parameterized networked control systems that are designed via an emulation procedure. In the first step, a controller is designed ignoring the network, so that semiglobal practical stability is achieved for the closed-loop system. In the second step, it is shown that if the same controller is emulated and implemented over a large class of networks, then the networked control system is also semiglobally practically asymptotically stable; in this case, the controller parameter needs to be sufficiently small and the communication bandwidth sufficiently high.
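The emulation idea can be illustrated with a toy scalar example (not the paper's system class; all constants are illustrative): a controller u = -2x stabilizes dx/dt = x + u in continuous time, and the same controller implemented over a "network" that refreshes u only every tau seconds remains practically stable when tau is small enough:

```python
# Hedged toy of controller emulation over a network: the control input u is
# held constant between network transmissions, which occur every tau seconds.
# For small tau the sampled closed loop still converges near the origin
# (practical stability); for tau too large it diverges.

def simulate(tau, t_end=10.0, dt=1e-3, x0=1.0):
    x, u, t_last, t = x0, -2.0 * x0, 0.0, 0.0
    while t < t_end:
        if t - t_last >= tau:              # network transmits a fresh u
            u, t_last = -2.0 * x, t
        x += dt * (x + u)                  # Euler step of dx/dt = x + u
        t += dt
    return abs(x)

assert simulate(tau=0.01) < 1e-2           # fast network: settles near zero
assert simulate(tau=1.2) > 1.0             # slow network: instability
```

Between updates the open-loop dynamics are unstable, so the update rate (the "bandwidth") must be high enough relative to the plant, mirroring the abstract's condition.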
  • Item
    Dynamics of undeforming regions in the lead up to failure: jumping scales from lab to field
    Tordesillas, A ; Zhou, S ; Campbell, L ; Bellett, P ; Aguirre, MA ; Luding, S ; Pugnaloni, LA ; Soto, R (EDP Sciences, 2021)
    Knowledge transfer from micromechanics of granular media failure to geohazard forecasting and mitigation has been slow. But in the face of a rapidly expanding data infrastructure on the motion of individual grains for laboratory samples – and ground motion data at the field scale – opportunities to accelerate this knowledge transfer are emerging. In particular, such data assets coupled with data-driven approaches enable ‘new eyes’ to re-examine granular failure. To this end, effective strategies that can jump scales from bench to field are urgently needed. Here we demonstrate one strategy that focusses on the study of deformation patterns in the precursory failure regime using kinematic data. Unlike previous studies which focus on regions of high strains, here we probe the development and evolution of near-undeforming regions through the lens of explosive percolation. We find a common dynamical signature in which undeforming regions, which are initially transient in the precursory failure regime, become persistent from the time of imminent failure. We demonstrate the robustness of these findings for data on individual grain motions in a classical laboratory test and ground motion in two real landslides at vastly different scales.
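A hedged toy of the first analysis step described above: flag near-undeforming sites (kinematic magnitude below a threshold) and group neighbouring flagged sites into clusters whose growth can then be tracked through time. The grid layout and threshold are illustrative, not the paper's pipeline:

```python
from collections import deque
import numpy as np

rng = np.random.default_rng(0)
motion = rng.uniform(size=(20, 20))        # per-site kinematic magnitude
quiet = motion < 0.2                       # near-undeforming sites

def clusters(mask):
    """Label 4-connected clusters of True cells; return the cluster sizes."""
    seen = np.zeros_like(mask, dtype=bool)
    sizes = []
    rows, cols = mask.shape
    for i in range(rows):
        for j in range(cols):
            if mask[i, j] and not seen[i, j]:
                size, queue = 0, deque([(i, j)])
                seen[i, j] = True
                while queue:                # flood fill one cluster
                    a, b = queue.popleft()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < rows and 0 <= nb < cols
                                and mask[na, nb] and not seen[na, nb]):
                            seen[na, nb] = True
                            queue.append((na, nb))
                sizes.append(size)
    return sizes

sizes = clusters(quiet)
assert sum(sizes) == quiet.sum()           # every quiet site is in one cluster
```

Tracking how the largest such cluster persists or dissolves over successive time steps is the kind of signal the abstract describes becoming persistent near failure.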
  • Item
    Differential operators on modular forms mod p
    Ghitza, A (RIMS, 2019)
    We give a survey of recent work on the construction of differential operators on various types of modular forms (mod p). We also discuss a framework for determining the effect of such operators on the mod p Galois representations attached to Hecke eigenforms.
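The prototypical such operator is the classical theta operator θ = q·d/dq on q-expansions, which sends ∑ aₙqⁿ to ∑ n·aₙqⁿ. A minimal sketch on truncated coefficient lists mod p (the sample coefficients are illustrative, not a specific eigenform):

```python
# The theta operator on a q-expansion given as a coefficient list
# [a_0, a_1, a_2, ...], working mod p: the n-th coefficient is scaled by n.

def theta(coeffs, p):
    """Apply theta = q * d/dq to a truncated q-expansion, mod p."""
    return [(n * a) % p for n, a in enumerate(coeffs)]

# Illustrative expansion mod 5.
f = [1, 2, 3, 4]
assert theta(f, 5) == [0, 2, 1, 2]         # [0*1, 1*2, 2*3, 3*4] mod 5
```

Note how the constant term is always killed, which is one reason theta interacts nontrivially with the weight filtration mod p.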
  • Item
    Analytic evaluation of Hecke eigenvalues for Siegel modular forms of degree two
    Ghitza, A ; Colman, O ; Ryan, NC (Mathematical Sciences Publishers, 2019)
    The standard approach to evaluate Hecke eigenvalues of a Siegel modular eigenform F is to determine a large number of Fourier coefficients of F and then compute the Hecke action on those coefficients. We present a new method based on the numerical evaluation of F at explicit points in the upper half-space and of its image under the Hecke operators. The approach is more efficient than the standard method and has the potential for further optimization by identifying good candidates for the points of evaluation, or finding ways of lowering the truncation bound. A limitation of the algorithm is that it returns floating point numbers for the eigenvalues; however, the working precision can be adjusted at will to yield as close an approximation as needed.
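A hedged toy of the key numerical idea (not the paper's algorithm, which works with Siegel modular forms on the upper half-space): if F is an eigenfunction of an operator T, the eigenvalue can be read off as (T F)(z) / F(z) at a single generic evaluation point, with no need to know all of F's coefficients. Here T is a simple averaging operator with cosine eigenfunctions:

```python
import math

# For the discrete averaging operator (T f)(x) = f(x + 1) + f(x - 1),
# f(x) = cos(k x) is an eigenfunction with eigenvalue 2 cos(k), since
# cos(k(x+1)) + cos(k(x-1)) = 2 cos(k) cos(k x).

def apply_T(f, x):
    return f(x + 1.0) + f(x - 1.0)

k = 0.7
f = lambda x: math.cos(k * x)

x0 = 0.3                                   # any generic point (f(x0) != 0)
eigenvalue = apply_T(f, x0) / f(x0)        # numerical eigenvalue, as a float
assert abs(eigenvalue - 2.0 * math.cos(k)) < 1e-12
```

As in the abstract, the answer comes back as a floating-point number whose accuracy is controlled by the working precision of the evaluations, not by a count of Fourier coefficients.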
  • Item
    Logic and the 2-Simplicial Transformer
    Murfet, D ; Clift, J ; Doryn, D ; Wallbridge, J (International Conference on Learning Representations, 2020)
    We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising the dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.
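A hedged, simplified sketch of the higher-dimensional attention idea (single head, no learned projections, and an elementwise product of value vectors standing in for the tensor product; the real architecture differs in detail): each query attends over *pairs* of entities with a trilinear score.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8                                # entities, model dimension
q  = rng.normal(size=(n, d))               # queries
k1 = rng.normal(size=(n, d))               # first key per entity
k2 = rng.normal(size=(n, d))               # second key per entity
v1 = rng.normal(size=(n, d))
v2 = rng.normal(size=(n, d))

# Trilinear logits a[i, j, k] = sum_c q[i, c] * k1[j, c] * k2[k, c],
# generalising the bilinear dot-product score of ordinary attention.
logits = np.einsum('ic,jc,kc->ijk', q, k1, k2) / np.sqrt(d)
weights = np.exp(logits - logits.max(axis=(1, 2), keepdims=True))
weights /= weights.sum(axis=(1, 2), keepdims=True)   # softmax over pairs (j, k)

# Attended update: weighted combination of pairwise value combinations.
pair_values = np.einsum('jc,kc->jkc', v1, v2)        # combine v1[j] with v2[k]
update = np.einsum('ijk,jkc->ic', weights, pair_values)
assert update.shape == (n, d)
```

The point of the construction is that the score and value now depend jointly on two other entities, which is the inductive bias the abstract claims helps logical reasoning.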
  • Item
    Data-Driven Approach to Multiple-Source Domain Adaptation
    Stojanov, P ; Gong, M ; Carbonell, J ; Zhang, K (PMLR, 2019)
    A key problem in domain adaptation is determining what to transfer across different domains. We propose a data-driven method to represent these changes across multiple source domains and perform unsupervised domain adaptation. We assume that the joint distributions follow a specific generating process and have a small number of identifiable changing parameters, and develop a data-driven method to identify the changing parameters by learning low-dimensional representations of the changing class-conditional distributions across multiple source domains. The learned low-dimensional representations enable us to reconstruct the target-domain joint distribution from unlabeled target-domain data, and further enable predicting the labels in the target domain. We demonstrate the efficacy of this method by conducting experiments on synthetic and real datasets.
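A hedged toy of the assumed setting (much simpler than the paper's class-conditional model): across source domains, the distributions differ only through a small number of changing parameters, here a single mean shift per domain, which can be estimated to give a low-dimensional representation of how domains vary.

```python
import numpy as np

rng = np.random.default_rng(0)
true_shifts = [0.0, 1.0, 2.5]              # the "changing parameter" per domain
domains = [rng.normal(loc=s, scale=1.0, size=2000) for s in true_shifts]

# The learned low-dimensional representation here is just each domain's
# sample mean; the paper learns such representations for class-conditional
# distributions and uses them to reconstruct the target-domain distribution.
estimated = [x.mean() for x in domains]
for s_hat, s in zip(estimated, true_shifts):
    assert abs(s_hat - s) < 0.1
```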
  • Item
    Geometry-Consistent Generative Adversarial Networks for One-Sided Unsupervised Domain Mapping
    Fu, H ; Gong, M ; Wang, C ; Batmanghelich, K ; Zhang, K ; Tao, D (IEEE, 2019)
    Unsupervised domain mapping aims to learn a function GXY that translates a domain X to a domain Y in the absence of paired examples. Finding the optimal GXY without paired data is an ill-posed problem, so appropriate constraints are required to obtain reasonable solutions. One of the most prominent constraints is cycle consistency, which enforces that an image translated by GXY can be mapped back to the input image by an inverse mapping GYX. While cycle consistency requires the simultaneous training of GXY and GYX, recent studies have shown that one-sided domain mapping can be achieved by preserving pairwise distances between images. Although cycle consistency and distance preservation successfully constrain the solution space, they overlook the special property that simple geometric transformations do not change the semantic structure of images. Based on this property, we develop a geometry-consistent generative adversarial network (GcGAN), which enables one-sided unsupervised domain mapping. GcGAN takes the original image and its counterpart transformed by a predefined geometric transformation as inputs and generates two images in the new domain, coupled with the corresponding geometry-consistency constraint. The geometry-consistency constraint reduces the space of possible solutions while keeping the correct solutions in the search space. Quantitative and qualitative comparisons with the baseline (GAN alone) and state-of-the-art methods, including CycleGAN and DistanceGAN, demonstrate the effectiveness of our method.
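A hedged sketch of the geometry-consistency constraint: for a predefined geometric transformation f (here a 90-degree rotation), the generator's outputs should commute with f, i.e. G(f(x)) should match f(G(x)). The "generator" below is a fixed elementwise function standing in for a trained network:

```python
import numpy as np

def rot90(img):
    """The predefined geometric transformation f: rotate by 90 degrees."""
    return np.rot90(img, k=1, axes=(0, 1))

def G(img):                                # placeholder "generator"
    return np.tanh(img) + 0.1

def geometry_consistency_loss(x):
    """Penalty on the mismatch between G(f(x)) and f(G(x))."""
    return np.abs(G(rot90(x)) - rot90(G(x))).mean()

x = np.random.default_rng(1).normal(size=(8, 8, 3))
# An elementwise G commutes exactly with rotation, so the loss is zero here;
# during GAN training this term is minimized alongside the adversarial loss.
assert geometry_consistency_loss(x) < 1e-12
```

The constraint needs no inverse mapping GYX, which is what makes the training one-sided.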
  • Item
    Causal Discovery with Linear Non-Gaussian Models under Measurement Error: Structural Identifiability Results.
    Zhang, K ; Gong, M ; Ramsey, J ; Batmanghelich, K ; Spirtes, P ; Glymour, C (Association for Uncertainty in Artificial Intelligence (AUAI), 2018)
    Causal discovery methods aim to recover the causal process that generated purely observational data. Despite their successes on a number of real problems, the presence of measurement error in the observed data can produce serious mistakes in the output of various causal discovery methods. Given the ubiquity of measurement error caused by instruments or proxies used in the measuring process, this problem is one of the main obstacles to reliable causal discovery. It is still unknown to what extent the causal structure of the relevant variables can be identified in principle. This study aims to take a step towards filling that void. We assume that the underlying process, over the measurement-error-free variables, follows a linear, non-Gaussian causal model, and show that the so-called ordered group decomposition of the causal model, which contains major causal information, is identifiable. The identifiability of the causal structure is further improved with different types of sparsity constraints on the causal structure. Finally, we give rather mild conditions under which the whole causal structure is fully identifiable.
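A hedged sketch of the assumed data-generating process: a linear causal model with non-Gaussian noise over latent (measurement-error-free) variables, observed only through additive measurement error. The chain structure and coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent causal process X1 -> X2 -> X3 with non-Gaussian (uniform) noise.
x1 = rng.uniform(-1, 1, n)
x2 = 0.8 * x1 + rng.uniform(-1, 1, n)
x3 = -0.5 * x2 + rng.uniform(-1, 1, n)

# Observed variables = latent variables + independent measurement error.
e = rng.normal(0, 0.3, size=(3, n))
y1, y2, y3 = x1 + e[0], x2 + e[1], x3 + e[2]

# Measurement error attenuates the observed dependence relative to the
# latent one, which is what misleads standard causal discovery methods.
latent_corr = abs(np.corrcoef(x1, x2)[0, 1])
observed_corr = abs(np.corrcoef(y1, y2)[0, 1])
assert observed_corr < latent_corr
```

The paper's question is what about the latent X-structure can still be identified when only the noisy Y-variables are available.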
  • Item
    Deep Ordinal Regression Network for Monocular Depth Estimation
    Fu, H ; Gong, M ; Wang, C ; Batmanghelich, K ; Tao, D (IEEE, 2018)
    Monocular depth estimation, which plays a crucial role in understanding 3D scene geometry, is an ill-posed problem. Recent methods have gained significant improvement by exploring image-level information and hierarchical features from deep convolutional neural networks (DCNNs). These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions. Moreover, existing depth estimation networks employ repeated spatial pooling operations, resulting in undesirably low-resolution feature maps. To obtain high-resolution depth maps, skip connections or multilayer deconvolution networks are required, which complicates network training and consumes much more computation. To eliminate, or at least largely reduce, these problems, we introduce a spacing-increasing discretization (SID) strategy to discretize depth and recast depth network learning as an ordinal regression problem. By training the network with an ordinal regression loss, our method achieves much higher accuracy and faster convergence. Furthermore, we adopt a multi-scale network structure that avoids unnecessary spatial pooling and captures multi-scale information in parallel. The proposed deep ordinal regression network (DORN) achieves state-of-the-art results on three challenging benchmarks, i.e., KITTI [16], Make3D [49], and NYU Depth v2 [41], and outperforms existing methods by a large margin.
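The spacing-increasing discretization (SID) can be sketched directly: depth in [alpha, beta] is cut into K bins whose edges are uniform in log space, so bins widen with distance and large absolute errors at long range count for less. The depth range below is illustrative:

```python
import math

def sid_thresholds(alpha, beta, K):
    """SID bin edges: uniform in log space between alpha and beta."""
    return [math.exp(math.log(alpha) + i * (math.log(beta) - math.log(alpha)) / K)
            for i in range(K + 1)]

def depth_to_bin(d, alpha, beta, K):
    """Map a depth value in [alpha, beta) to its ordinal bin index in 0..K-1."""
    return min(K - 1, int(K * (math.log(d) - math.log(alpha))
                          / (math.log(beta) - math.log(alpha))))

edges = sid_thresholds(1.0, 80.0, 10)      # e.g. a KITTI-like depth range
assert abs(edges[0] - 1.0) < 1e-9 and abs(edges[-1] - 80.0) < 1e-9
widths = [b - a for a, b in zip(edges, edges[1:])]
assert all(w2 > w1 for w1, w2 in zip(widths, widths[1:]))   # widths increase
assert depth_to_bin(1.0, 1.0, 80.0, 10) == 0
```

The network is then trained to predict the ordinal bin index rather than the raw depth, which is the recasting as ordinal regression described in the abstract.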