Electrical and Electronic Engineering - Research Publications

Search Results

Now showing 1 - 10 of 30
  • Item
    On Distributed Nonconvex Optimisation via Modified ADMM
    Mafakheri, B ; Manton, JH ; Shames, I (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2023)
  • Item
    Display of Native Antigen on cDC1 That Have Spatial Access to Both T and B Cells Underlies Efficient Humoral Vaccination.
    Kato, Y ; Steiner, TM ; Park, H-Y ; Hitchcock, RO ; Zaid, A ; Hor, JL ; Devi, S ; Davey, GM ; Vremec, D ; Tullett, KM ; Tan, PS ; Ahmet, F ; Mueller, SN ; Alonso, S ; Tarlinton, DM ; Ploegh, HL ; Kaisho, T ; Beattie, L ; Manton, JH ; Fernandez-Ruiz, D ; Shortman, K ; Lahoud, MH ; Heath, WR ; Caminschi, I (American Association of Immunologists, 2020-10-01)
    Follicular dendritic cells and macrophages have been strongly implicated in presentation of native Ag to B cells. This property has also occasionally been attributed to conventional dendritic cells (cDC) but is generally masked by their essential role in T cell priming. cDC can be divided into two main subsets, cDC1 and cDC2, with recent evidence suggesting that cDC2 are primarily responsible for initiating B cell and T follicular helper responses. This conclusion is, however, at odds with evidence that targeting Ag to Clec9A (DNGR1), expressed by cDC1, induces strong humoral responses. In this study, we reveal that murine cDC1 interact extensively with B cells at the border of B cell follicles and, when Ag is targeted to Clec9A, can display native Ag for B cell activation. This leads to efficient induction of humoral immunity. Our findings indicate that surface display of native Ag on cDC with access to both T and B cells is key to efficient humoral vaccination.
  • Item
    A Bayesian approach to (online) transfer learning: Theory and algorithms
    Wu, X ; Manton, JH ; Aickelin, U ; Zhu, J (Elsevier BV, 2023-11)
    Transfer learning is a machine learning paradigm in which knowledge from one problem is used to solve a new but related problem. While it is conceivable that knowledge from one task could help solve a related task, a transfer learning algorithm that is not executed properly can impair learning performance instead of improving it, a phenomenon commonly known as negative transfer. In this paper, we use a parametric statistical model to study transfer learning from a Bayesian perspective. Specifically, we study three variants of the transfer learning problem: instantaneous, online, and time-variant transfer learning. For each variant we define an appropriate objective function and provide either exact expressions or upper bounds on the learning performance in terms of information-theoretic quantities, which admit simple and explicit characterizations when the sample size becomes large. Furthermore, examples show that the derived bounds are accurate even for small sample sizes. The obtained bounds give valuable insights into the effect of prior knowledge on transfer learning, at least with respect to our Bayesian formulation of the problem. In particular, we formally characterize the conditions under which negative transfer occurs. Lastly, we devise several (online) transfer learning algorithms that are amenable to practical implementation, some of which do not require the parametric assumption. We demonstrate the effectiveness of our algorithms on real data sets, focusing primarily on settings where the source and target data are strongly similar.
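    The negative transfer described above can be seen in even the simplest conjugate model. The following is a minimal illustrative sketch, not the paper's algorithm: a Gaussian mean is estimated from a few target samples under a prior centred at a source estimate, and a badly mismatched source degrades the estimate relative to the plain sample mean.

```python
# Minimal sketch of negative transfer in a conjugate Gaussian model
# (illustrative only; not the algorithm from the paper above).
import numpy as np

rng = np.random.default_rng(0)

def posterior_mean(target, source_mean, prior_var, noise_var=1.0):
    """Posterior mean of a Gaussian mean under the prior N(source_mean, prior_var)."""
    n = len(target)
    w = prior_var / (prior_var + noise_var / n)  # weight placed on the target data
    return w * target.mean() + (1 - w) * source_mean

theta_target = 1.0
target = rng.normal(theta_target, 1.0, size=5)   # only a few target samples

for source_mean in (1.1, 4.0):                   # similar vs dissimilar source
    est = posterior_mean(target, source_mean, prior_var=0.5)
    print(f"source mean {source_mean:4.1f}: "
          f"MLE error {abs(target.mean() - theta_target):.3f}, "
          f"transfer error {abs(est - theta_target):.3f}")
```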
  • Item
    Tracking and Regret Bounds for Online Zeroth-Order Euclidean and Riemannian Optimization
    Maass, A ; Manzie, C ; Nesic, D ; Manton, JH ; Shames, I (SIAM PUBLICATIONS, 2022)
  • Item
    Low-Complexity Multi-Task Learning Aided Neural Networks for Equalization in Short-Reach Optical Interconnects
    Xu, Z ; Dong, S ; Manton, JH ; Shieh, W (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2022-01-01)
  • Item
    Fast Rate Generalization Error Bounds: Variations on a Theme
    Wu, X ; Manton, JH ; Aickelin, U ; Zhu, J (IEEE, 2022)
    A recent line of work, initiated by [1] and [2], has shown that the generalization error of a learning algorithm can be upper bounded by information measures. In most of the relevant works, the convergence rate of the expected generalization error is of the form O(√(λI/n)), where λ is an assumption-dependent coefficient and I is an information-theoretic quantity such as the mutual information between the data sample and the learned hypothesis. However, such a learning rate is typically considered "slow" compared to a "fast" rate of O(1/n) in many learning scenarios. In this work, we first show that the square root does not necessarily imply a slow rate, and a fast rate result can still be obtained from this bound by evaluating λ under an appropriate assumption. Furthermore, we identify the key conditions needed for the fast rate generalization error, which we call the (η, c)-central condition. Under this condition, we give information-theoretic bounds on the generalization error and excess risk, with a convergence rate of O(1/n) for specific learning algorithms such as empirical risk minimization. Finally, analytical examples are given to show the effectiveness of the bounds.
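    Schematically, the two regimes contrasted in this abstract can be written as follows (symbols as in the abstract; the constants and precise assumptions are those of the paper):

```latex
% Slow regime: the generic information-theoretic bound, with I(S; W) the
% mutual information between the data sample S and the learned hypothesis W.
\[
  \mathbb{E}\left[\operatorname{gen}\right] \;\le\; \sqrt{\frac{\lambda\, I(S; W)}{n}}
\]
% Fast regime: under the $(\eta, c)$-central condition, e.g. for empirical
% risk minimization, the generalization error and excess risk satisfy
\[
  \mathbb{E}\left[\operatorname{gen}\right] \;=\; O\!\left(\frac{1}{n}\right).
\]
```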
  • Item
    An Information-Theoretic Analysis for Transfer Learning: Error Bounds and Applications
    Wu, X ; Manton, JH ; Aickelin, U ; Zhu, J (2022-07-12)
    Transfer learning, or domain adaptation, is concerned with machine learning problems in which training and testing data come from possibly different probability distributions. In this work, we give an information-theoretic analysis of the generalization error and excess risk of transfer learning algorithms, following a line of work initiated by Russo and Xu. Our results suggest, perhaps as expected, that the Kullback-Leibler (KL) divergence D(μ||μ′) plays an important role in the characterizations, where μ and μ′ denote the distributions of the training and testing data, respectively. Specifically, we provide generalization error upper bounds for the empirical risk minimization (ERM) algorithm where data from both distributions are available in the training phase. We further apply the analysis to approximate ERM methods such as the Gibbs algorithm and stochastic gradient descent. We then generalize the mutual information bound using the ϕ-divergence and the Wasserstein distance. These generalizations lead to tighter bounds and can handle the case where μ is not absolutely continuous with respect to μ′. Furthermore, we apply a new set of techniques to obtain an alternative upper bound that gives a fast (and optimal) learning rate for some learning problems. Finally, inspired by the derived bounds, we propose the InfoBoost algorithm, in which the importance weights for source and target data are adjusted adaptively in accordance with information measures. Empirical results show the effectiveness of the proposed algorithm.
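    As a generic stand-in for the importance-weighting idea (not the paper's InfoBoost algorithm, which adapts the weights using information measures), the sketch below fits a least-squares slope to pooled source and target data with a fixed down-weight on the source points:

```python
# Hypothetical sketch of importance-weighted ERM on pooled source and
# target data; the fixed source weight `alpha` stands in for the adaptive,
# information-measure-driven weights of the InfoBoost algorithm.
import numpy as np

rng = np.random.default_rng(1)

# Source and target: 1-d linear models with slightly different slopes.
x_s = rng.uniform(-1, 1, 200); y_s = 1.5 * x_s + 0.1 * rng.normal(size=200)
x_t = rng.uniform(-1, 1, 20);  y_t = 2.0 * x_t + 0.1 * rng.normal(size=20)

def weighted_erm(alpha):
    """Least-squares slope with weight alpha on source points and 1 on target points."""
    x = np.concatenate([x_s, x_t]); y = np.concatenate([y_s, y_t])
    w = np.concatenate([np.full(len(x_s), alpha), np.ones(len(x_t))])
    return np.sum(w * x * y) / np.sum(w * x * x)

for alpha in (0.0, 0.2, 1.0):  # target only, down-weighted source, naive pooling
    print(f"alpha={alpha:.1f}: fitted slope {weighted_erm(alpha):.3f} (target truth 2.0)")
```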
  • Item
    Tracking and regret bounds for online zeroth-order Euclidean and Riemannian optimisation
    Maass, AI ; Manzie, C ; Nesic, D ; Manton, JH ; Shames, I (2020-10-01)
    We study numerical optimisation algorithms that use zeroth-order information to minimise time-varying geodesically convex cost functions on Riemannian manifolds. In the Euclidean setting, zeroth-order algorithms have received considerable attention in both the time-varying and time-invariant cases. However, the extension to Riemannian manifolds is much less developed. We focus on Hadamard manifolds, a special class of Riemannian manifolds with global nonpositive curvature that offer convenient grounds for generalising convexity notions. Specifically, we derive bounds on the expected instantaneous tracking error, and we provide algorithm parameter values that minimise these bounds. Our results illustrate how the sectional curvature of the manifold affects the bounds. Additionally, we provide dynamic regret bounds for this online optimisation setting; to the best of our knowledge, these are the first regret bounds even for the Euclidean version of the problem. Lastly, via numerical simulations, we demonstrate the applicability of our algorithm on an online Karcher mean problem.
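    For intuition, the sketch below runs a standard two-point zeroth-order gradient estimator on a time-varying quadratic in the plain Euclidean setting; the paper's algorithm, its tuned parameter values, and the Hadamard-manifold machinery are substantially more involved.

```python
# Euclidean sketch of online zeroth-order optimisation (illustrative only;
# not the algorithm or parameter choices analysed in the paper above).
import numpy as np

rng = np.random.default_rng(2)
d, delta, step = 3, 1e-3, 0.2

def cost(x, t):
    """Time-varying cost f_t(x) = ||x - x*(t)||^2 with a drifting minimiser x*(t)."""
    target = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 0.1 * t])
    return np.sum((x - target) ** 2)

x = np.zeros(d)
for t in range(100):
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)  # random direction on the unit sphere
    # Two-point estimator of the gradient from function values only.
    g = d * (cost(x + delta * u, t) - cost(x - delta * u, t)) / (2 * delta) * u
    x -= step * g
    if t % 25 == 0:
        print(f"t={t:3d}: tracking error {np.sqrt(cost(x, t)):.3f}")
```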
  • Item
    On Riemannian and non-Riemannian Optimisation, and Optimisation Geometry
    Lefevre, J ; Bouchard, F ; Said, S ; Le Bihan, N ; Manton, JH (ELSEVIER, 2021)
  • Item
    Hidden Markov chains and fields with observations in Riemannian manifolds
    Said, S ; Le Bihan, N ; Manton, JH (ELSEVIER, 2021)