School of Mathematics and Statistics: Research Publications
Now showing 1–10 of 24 results

Item: Off-lattice and parallel implementations of the pivot algorithm
Clisby, N; Ho, DTC (IOP Publishing, 2021-12-09)
Abstract: The pivot algorithm is the most efficient known method for sampling polymer configurations for self-avoiding walks and related models. Here we introduce two recent improvements to an efficient binary-tree implementation of the pivot algorithm: an extension to an off-lattice model, and a parallel implementation.
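For orientation, a minimal single-move sketch of the basic lattice pivot algorithm (a toy version, not the paper's binary-tree or off-lattice implementation) might look like:

```python
import random

# Toy pivot move on the square lattice: rotate the tail of the walk about
# a randomly chosen pivot site and accept the move only if the result is
# still self-avoiding.

ROTATIONS = [
    lambda x, y: (-y, x),   # rotate 90 degrees
    lambda x, y: (-x, -y),  # rotate 180 degrees
    lambda x, y: (y, -x),   # rotate 270 degrees
]

def pivot_step(walk):
    """Attempt one pivot move on a walk given as a list of (x, y) sites."""
    i = random.randrange(1, len(walk) - 1)  # pivot site (not an endpoint)
    px, py = walk[i]
    rot = random.choice(ROTATIONS)
    tail = []
    for x, y in walk[i + 1:]:
        rx, ry = rot(x - px, y - py)
        tail.append((px + rx, py + ry))
    proposal = walk[:i + 1] + tail
    # Keep the proposal iff it is self-avoiding; otherwise keep the old walk.
    return proposal if len(set(proposal)) == len(proposal) else walk
```

Iterating `pivot_step` yields a Markov chain on fixed-length self-avoiding walks; the naive self-avoidance check above is the expensive part that the paper's binary-tree data structure accelerates.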

Item: Semi-global Practical Stability of a Class of Parameterized Networked Control Systems
Wang, B; Nesic, D (IEEE, 2012-01-01)
Abstract: This paper studies a class of parameterized networked control systems that are designed via an emulation procedure. In the first step, a controller is designed ignoring the network, so that semi-global practical stability is achieved for the closed loop. In the second step, it is shown that if the same controller is emulated and implemented over a large class of networks, then the networked control system is also semi-globally practically asymptotically stable; in this case, the controller parameter needs to be sufficiently small and the communication bandwidth sufficiently high.

Item: Differential operators on modular forms mod p
Ghitza, A (RIMS, 2019)
Abstract: We give a survey of recent work on the construction of differential operators on various types of modular forms (mod p). We also discuss a framework for determining the effect of such operators on the mod p Galois representations attached to Hecke eigenforms.

Item: Analytic evaluation of Hecke eigenvalues for Siegel modular forms of degree two
Ghitza, A; Colman, O; Ryan, NC (Mathematical Sciences Publishers, 2019)
Abstract: The standard approach to evaluating Hecke eigenvalues of a Siegel modular eigenform F is to determine a large number of Fourier coefficients of F and then compute the Hecke action on those coefficients. We present a new method based on the numerical evaluation of F, and of its image under the Hecke operators, at explicit points in the upper half-space. The approach is more efficient than the standard method and has the potential for further optimization, by identifying good candidates for the points of evaluation or by finding ways of lowering the truncation bound. A limitation of the algorithm is that it returns floating-point numbers for the eigenvalues; however, the working precision can be adjusted at will to yield as close an approximation as needed.
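The underlying principle can be illustrated with a toy example from plain complex analysis (not Siegel modular forms): if T f = λf, the eigenvalue can be read off numerically as (T f)(z) / f(z) at any point where f does not vanish.

```python
import cmath

def eigenvalue_by_evaluation(f, Tf, z):
    """Estimate the eigenvalue of f under the operator T from point values."""
    fz = f(z)
    if abs(fz) < 1e-12:
        raise ValueError("choose a point where f(z) != 0")
    return Tf(z) / fz

# Example: f(z) = exp(2*pi*i*z) is an eigenfunction of d/dz with
# eigenvalue 2*pi*i; approximate T = d/dz by a central difference.
f = lambda z: cmath.exp(2j * cmath.pi * z)
h = 1e-6
Tf = lambda z: (f(z + h) - f(z - h)) / (2 * h)
lam = eigenvalue_by_evaluation(f, Tf, 0.3 + 0.7j)  # close to 2*pi*i
```

As in the paper's setting, the working precision of the evaluations controls how close the returned floating-point value is to the true eigenvalue.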

Item: Logic and the 2-Simplicial Transformer
Murfet, D; Clift, J; Doryn, D; Wallbridge, J (International Conference on Learning Representations, 2020)
Abstract: We introduce the 2-simplicial Transformer, an extension of the Transformer which includes a form of higher-dimensional attention generalising dot-product attention, and uses this attention to update entity representations with tensor products of value vectors. We show that this architecture is a useful inductive bias for logical reasoning in the context of deep reinforcement learning.
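A rough numerical sketch of the higher-order attention idea (the shapes and scoring below are my simplifications, not the paper's exact parameterization): each query attends to pairs of entities via a trilinear score, and the update mixes flattened tensor products of two value vectors.

```python
import numpy as np

def two_simplicial_attention(Q, K1, K2, V1, V2):
    """Q, K1, K2, V1, V2: arrays of shape (n, d); returns (n, d*d)."""
    n, d = Q.shape
    # Trilinear logits over (query i, entity j, entity k).
    scores = np.einsum("id,jd,kd->ijk", Q, K1, K2) / np.sqrt(d)
    # Softmax jointly over the pair (j, k).
    w = np.exp(scores - scores.max(axis=(1, 2), keepdims=True))
    w /= w.sum(axis=(1, 2), keepdims=True)
    # Tensor product of value vectors, flattened to length d*d.
    vv = np.einsum("jd,ke->jkde", V1, V2).reshape(n, n, d * d)
    return np.einsum("ijk,jkf->if", w, vv)
```

Compared with ordinary dot-product attention, which scores (query, entity) pairs, the scores here live on triangles (query, entity, entity), which is the "2-simplicial" generalisation the title refers to.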

Item: Data-Driven Approach to Multiple-Source Domain Adaptation
Stojanov, P; Gong, M; Carbonell, J; Zhang, K (PMLR, 2019)
Abstract: A key problem in domain adaptation is determining what to transfer across different domains. We propose a data-driven method to represent these changes across multiple source domains and perform unsupervised domain adaptation. We assume that the joint distributions follow a specific generating process with a small number of identifiable changing parameters, and develop a data-driven method to identify those parameters by learning low-dimensional representations of the changing class-conditional distributions across multiple source domains. The learned low-dimensional representations enable us to reconstruct the target-domain joint distribution from unlabeled target-domain data, and in turn to predict labels in the target domain. We demonstrate the efficacy of this method with experiments on synthetic and real datasets.

Item: Geometry-Consistent Generative Adversarial Networks for One-Sided Unsupervised Domain Mapping
Fu, H; Gong, M; Wang, C; Batmanghelich, K; Zhang, K; Tao, D (IEEE, 2019)
Abstract: Unsupervised domain mapping aims to learn a function G_XY that translates domain X to domain Y in the absence of paired examples. Finding the optimal G_XY without paired data is an ill-posed problem, so appropriate constraints are required to obtain reasonable solutions. One of the most prominent constraints is cycle consistency, which enforces that an image translated by G_XY can be mapped back to the input image by an inverse mapping G_YX. While cycle consistency requires the simultaneous training of G_XY and G_YX, recent studies have shown that one-sided domain mapping can be achieved by preserving pairwise distances between images. Although cycle consistency and distance preservation successfully constrain the solution space, they overlook the special property that simple geometric transformations do not change the semantic structure of images. Based on this property, we develop a geometry-consistent generative adversarial network (GcGAN), which enables one-sided unsupervised domain mapping. GcGAN takes the original image and its counterpart transformed by a predefined geometric transformation as inputs and generates two images in the new domain, coupled with the corresponding geometry-consistency constraint. The geometry-consistency constraint reduces the space of possible solutions while keeping the correct solutions in the search space. Quantitative and qualitative comparisons with the baseline (GAN alone) and state-of-the-art methods, including CycleGAN and DistanceGAN, demonstrate the effectiveness of our method.
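A minimal sketch of the geometry-consistency constraint (the generators below are stand-in functions, not the paper's networks): for a predefined geometric transform T, here a 90-degree rotation, translation should commute with the transform, i.e. G(T(x)) should match T(G(x)).

```python
import numpy as np

def rot90(img):
    """The predefined geometric transform T: rotate the image plane."""
    return np.rot90(img, k=1, axes=(0, 1))

def geometry_consistency_loss(G, x):
    """L1 penalty on the mismatch between G(T(x)) and T(G(x))."""
    return np.abs(G(rot90(x)) - rot90(G(x))).mean()

# A pixelwise generator commutes with rotation, so its loss is zero;
# a spatially varying one does not.
x = np.random.default_rng(0).standard_normal((8, 8, 3))
G_pixelwise = lambda img: np.tanh(img)
G_spatial = lambda img: img + np.arange(8).reshape(8, 1, 1)
```

In training, a penalty of this shape would be added to the adversarial loss to prune geometrically inconsistent mappings without needing an inverse generator.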

Item: Causal Discovery with Linear Non-Gaussian Models under Measurement Error: Structural Identifiability Results
Zhang, K; Gong, M; Ramsey, J; Batmanghelich, K; Spirtes, P; Glymour, C (Association for Uncertainty in Artificial Intelligence (AUAI), 2018)
Abstract: Causal discovery methods aim to recover the causal process that generated purely observational data. Despite their successes on a number of real problems, the presence of measurement error in the observed data can produce serious mistakes in the output of various causal discovery methods. Given the ubiquity of measurement error caused by instruments or proxies used in the measuring process, this problem is one of the main obstacles to reliable causal discovery. It is still unknown to what extent the causal structure of the relevant variables can be identified in principle. This study aims to take a step towards filling that void. We assume that the underlying process, i.e., the measurement-error-free variables, follows a linear, non-Gaussian causal model, and show that the so-called ordered group decomposition of the causal model, which contains major causal information, is identifiable. The causal structure identifiability is further improved with different types of sparsity constraints on the causal structure. Finally, we give rather mild conditions under which the whole causal structure is fully identifiable.

Item: Deep Ordinal Regression Network for Monocular Depth Estimation
Fu, H; Gong, M; Wang, C; Batmanghelich, K; Tao, D (IEEE, 2018)
Abstract: Monocular depth estimation, which plays a crucial role in understanding 3D scene geometry, is an ill-posed problem. Recent methods have gained significant improvement by exploring image-level information and hierarchical features from deep convolutional neural networks (DCNNs). These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions. In addition, existing depth estimation networks employ repeated spatial pooling operations, resulting in undesirable low-resolution feature maps. To obtain high-resolution depth maps, skip connections or multi-layer deconvolution networks are required, which complicates network training and consumes much more computation. To eliminate, or at least largely reduce, these problems, we introduce a spacing-increasing discretization (SID) strategy to discretize depth and recast depth network learning as an ordinal regression problem. By training the network with an ordinal regression loss, our method achieves both much higher accuracy and faster convergence. Furthermore, we adopt a multi-scale network structure which avoids unnecessary spatial pooling and captures multi-scale information in parallel. The proposed deep ordinal regression network (DORN) achieves state-of-the-art results on three challenging benchmarks, i.e., KITTI [16], Make3D [49], and NYU Depth v2 [41], and outperforms existing methods by a large margin.
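The spacing-increasing discretization described above can be sketched directly: split a depth range [alpha, beta] into K intervals whose widths grow with depth by placing the thresholds uniformly in log space (the inverse lookup below is my addition for illustration).

```python
import math

def sid_thresholds(alpha, beta, K):
    """Return the K + 1 bin edges, uniformly spaced in log(depth)."""
    return [math.exp(math.log(alpha) + i * math.log(beta / alpha) / K)
            for i in range(K + 1)]

def depth_to_bin(d, alpha, beta, K):
    """Ordinal label (0 .. K-1) for a depth value d, clamped to the range."""
    d = min(max(d, alpha), beta)
    return min(int(K * math.log(d / alpha) / math.log(beta / alpha)), K - 1)
```

With a range like 1 m to 80 m, near bins come out much narrower than far ones, matching the intuition that relative rather than absolute depth error should be uniform across the range.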

Item: Causal discovery and forecasting in nonstationary environments with state-space models
Huang, B; Zhang, K; Gong, M; Glymour, C; Chaudhuri, K; Salakhutdinov, R (ICML Press, 2019)
Abstract: In many scientific fields, such as economics and neuroscience, we are often faced with nonstationary time series, and are concerned both with finding causal relations and with forecasting the values of variables of interest, both of which are particularly challenging in nonstationary environments. In this paper, we study causal discovery and forecasting for nonstationary time series. By exploiting a particular type of state-space model to represent the processes, we show that nonstationarity helps to identify the causal structure, and that forecasting naturally benefits from the learned causal knowledge. Specifically, we allow changes in both causal strengths and noise variances in the nonlinear state-space models, which, interestingly, renders both the causal structure and the model parameters identifiable. Given the causal model, we treat forecasting as a Bayesian inference problem in the causal model, which exploits the time-varying property of the data and adapts to new observations in a principled manner. Experimental results on synthetic and real-world data sets demonstrate the efficacy of the proposed methods.
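A toy simulation in the spirit of this setup (my simplification, not the paper's model or estimator): the causal strength b_t from y to x and the noise variance both drift over time, and a one-step forecast refits the coefficient locally so it adapts to the drift.

```python
import numpy as np

def simulate(T, seed=0):
    """Simulate y -> x with a time-varying causal strength and noise std."""
    rng = np.random.default_rng(seed)
    x, y = np.zeros(T), np.zeros(T)
    for t in range(1, T):
        b_t = 0.5 + 0.4 * np.sin(2 * np.pi * t / T)  # drifting strength
        s_t = 0.10 + 0.05 * t / T                    # drifting noise std
        y[t] = 0.8 * y[t - 1] + rng.normal(0, 0.1)
        x[t] = b_t * y[t - 1] + rng.normal(0, s_t)
    return x, y

def forecast_next(x, y, window=50):
    """One-step forecast of x from a locally fitted causal coefficient."""
    ys, xs = y[-window - 1:-1], x[-window:]  # pairs (y[t-1], x[t])
    b_hat = (ys * xs).sum() / (ys * ys).sum()
    return b_hat * y[-1]
```

The paper replaces this ad hoc windowing with principled Bayesian inference in the learned state-space model, but the same intuition applies: tracking the time-varying parameters is what makes forecasting work under nonstationarity.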