Electrical and Electronic Engineering - Research Publications

Search Results

  • Fast Rate Generalization Error Bounds: Variations on a Theme
    Wu, X ; Manton, JH ; Aickelin, U ; Zhu, J (IEEE, 2022)
    A recent line of work, initiated by [1] and [2], has shown that the generalization error of a learning algorithm can be upper bounded by information measures. In most of the relevant works, the convergence rate of the expected generalization error is of the form O(√(λI/n)), where λ is an assumption-dependent coefficient and I is an information-theoretic quantity such as the mutual information between the data sample and the learned hypothesis. However, such a learning rate is typically considered "slow" compared to the "fast rate" of O(1/n) attainable in many learning scenarios. In this work, we first show that the square root does not necessarily imply a slow rate, and that a fast rate result can still be obtained from this bound by evaluating λ under an appropriate assumption. Furthermore, we identify the key condition needed for a fast-rate generalization error, which we call the (η, c)-central condition. Under this condition, we give information-theoretic bounds on the generalization error and excess risk, with a convergence rate of O(1/n) for specific learning algorithms such as empirical risk minimization. Finally, analytical examples are given to show the effectiveness of the bounds.
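    For context, the canonical slow-rate bound in this line of work (as established by Xu and Raginsky) is sketched below in LaTeX; it assumes a σ-sub-Gaussian loss, so that 2σ² plays the role of the coefficient λ above, with S the n-sample training set and W the learned hypothesis.

        \left| \mathbb{E}\!\left[ \operatorname{gen}(\mu, P_{W \mid S}) \right] \right|
            \;\le\; \sqrt{\frac{2\sigma^{2}\, I(S; W)}{n}}

    Evaluating the assumption-dependent coefficient under a stronger condition is what lets the paper sharpen this O(√(I/n)) form to an O(1/n) rate.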
  • Modeling the respiratory central pattern generator with resonate-and-fire Izhikevich-Neurons
    Tolmachev, P ; Dhingra, RR ; Pauley, M ; Dutschmann, M ; Manton, JH ; Cheng, L ; Leung, ACS ; Ozawa, S (Springer Nature, 2018-01-01)
    Computational models of the respiratory central pattern generator (rCPG) are usually based on biologically plausible Hodgkin-Huxley (HH) neuron models. Such models require numerous parameters and are thus prone to overfitting. The HH approach is motivated by the assumption that the biophysical properties of neurons determine the network dynamics. Here, we implement the rCPG using simpler Izhikevich resonate-and-fire neurons. Our rCPG model generates a three-phase respiratory motor pattern based on established connectivities and can reproduce previous experimental and theoretical observations. Further, we demonstrate the flexibility of the model by testing whether intrinsic bursting properties are necessary for rhythmogenesis. Our simulations show that replacing the predicted mandatory bursting properties of pre-inspiratory neurons with spike-adapting properties yields a model that generates comparable respiratory activity patterns. This supports our view that the importance of the exact modeling parameters of specific respiratory neurons is overestimated.
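    As a point of reference, below is a minimal Python sketch of a single Izhikevich neuron integrated with forward Euler. It is not the authors' rCPG network; the parameters a, b, c, d are illustrative values for a resonator-like regime, not values taken from the paper.

        # A single Izhikevich neuron integrated with forward Euler. This is not
        # the authors' rCPG network; the parameters below are illustrative
        # values for a resonator-like regime, not values from the paper.

        def izhikevich(a=0.1, b=0.26, c=-65.0, d=2.0, I=10.0, T=200.0, dt=0.25):
            """Simulate one neuron for T ms; return the list of spike times (ms)."""
            v, u = c, b * c              # membrane potential (mV), recovery variable
            spikes, t = [], 0.0
            while t < T:
                v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
                u += dt * a * (b * v - u)
                if v >= 30.0:            # spike detected: reset v, bump recovery u
                    spikes.append(t)
                    v, u = c, u + d
                t += dt
            return spikes

        print(izhikevich())              # spike times under constant drive I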
  • New Insights on Learning Rules for Hopfield Networks: Memory and Objective Function Minimisation
    Tolmachev, P ; Manton, JH (IEEE, 2020-07-01)
    Hopfield neural networks are a possible basis for modelling associative memory in living organisms. After summarising previous studies in the field, we take a new look at learning rules, exhibiting them as descent-type algorithms for various cost functions. We also propose several new cost functions suitable for learning. We discuss the role of biases (the external inputs) in the learning process in Hopfield networks. Furthermore, we apply Newton's method for learning memories, and experimentally compare the performance of the various learning rules. Finally, to add to the debate about whether allowing connections of a neuron to itself enhances memory capacity, we numerically investigate the effects of self-coupling.
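    For orientation, the following is a minimal Python sketch of the classical Hebbian (outer-product) rule with synchronous sign-update recall. It is only the textbook baseline behind the learning rules discussed above, not the new cost-function-based rules proposed in the paper; the self_coupling flag mirrors the self-connection question the paper investigates.

        import numpy as np

        # Classical Hebbian (outer-product) storage and synchronous recall for a
        # Hopfield network: the textbook baseline, not the paper's new rules.

        def hebbian_weights(patterns, self_coupling=False):
            """patterns: (P, N) array of +/-1 memories; returns (N, N) weights."""
            P, N = patterns.shape
            W = patterns.T @ patterns / N
            if not self_coupling:
                np.fill_diagonal(W, 0.0)   # the self-connection choice studied above
            return W

        def recall(W, state, steps=20):
            for _ in range(steps):
                state = np.sign(W @ state)
                state[state == 0] = 1      # break ties deterministically
            return state

        rng = np.random.default_rng(0)
        memories = rng.choice([-1, 1], size=(3, 100))
        W = hebbian_weights(memories)
        probe = memories[0].copy()
        probe[:10] *= -1                   # corrupt 10 of the 100 bits
        print(np.array_equal(recall(W, probe), memories[0]))  # expect True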
  • Information-theoretic analysis for transfer learning
    Wu, X ; Manton, JH ; Aickelin, U ; Zhu, J (IEEE, 2020)
    Transfer learning, or domain adaptation, is concerned with machine learning problems in which the training and testing data come from possibly different distributions (denoted μ and μ', respectively). In this work, we give an information-theoretic analysis of the generalization error and the excess risk of transfer learning algorithms, following a line of work initiated by Russo and Zou. Our results suggest, perhaps as expected, that the Kullback-Leibler (KL) divergence D(μ‖μ') plays an important role in characterizing the generalization error in the domain adaptation setting. Specifically, we provide generalization error upper bounds for general transfer learning algorithms, and extend the results to a specific empirical risk minimization (ERM) algorithm where data from both distributions are available in the training phase. We further apply the method to iterative, noisy gradient descent algorithms, and obtain upper bounds that can be calculated easily, using only parameters of the learning algorithms. A few illustrative examples are provided to demonstrate the usefulness of the results. In particular, for certain classification problems our bound is tighter than the bound derived using Rademacher complexity.
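    To make the central quantity concrete, here is a small illustrative Python computation of the KL divergence D(μ‖μ') between two discrete distributions; the numbers are toy values, not taken from the paper.

        import numpy as np

        # An illustrative computation of the KL divergence D(mu || mu') between
        # two discrete distributions, the quantity the bounds above revolve
        # around. Toy numbers only; nothing here is from the paper.

        def kl_divergence(p, q):
            """D(p || q) for discrete distributions given as probability arrays."""
            p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
            mask = p > 0                       # convention: 0 * log(0/q) = 0
            return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

        mu       = [0.5, 0.3, 0.2]             # source distribution mu
        mu_prime = [0.4, 0.4, 0.2]             # shifted target distribution mu'
        print(kl_divergence(mu, mu_prime))     # small shift -> small KL penalty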
  • Design of continuous-time flows on intertwined orbit spaces
    Absil, PA ; Lageman, C ; Manton, JH (IEEE, 2007-01-01)
  • Spiking Neuron Channel
    Ikeda, S ; Manton, JH (IEEE, 2009)