Electrical and Electronic Engineering - Research Publications

Search Results

Now showing 1 - 10 of 40
  • Item
    An Information-Theoretic Analysis for Transfer Learning: Error Bounds and Applications
    Wu, X ; Manton, JH ; Aickelin, U ; Zhu, J ( 2022-07-12)
    Transfer learning, or domain adaptation, is concerned with machine learning problems in which training and testing data come from possibly different probability distributions. In this work, we give an information-theoretic analysis of the generalization error and excess risk of transfer learning algorithms, following a line of work initiated by Russo and Xu. Our results suggest, perhaps as expected, that the Kullback-Leibler (KL) divergence D(μ||μ′) plays an important role in the characterizations, where μ and μ′ denote the distributions of the training and test data, respectively. Specifically, we provide generalization error upper bounds for the empirical risk minimization (ERM) algorithm where data from both distributions are available in the training phase. We further apply the analysis to approximated ERM methods such as the Gibbs algorithm and the stochastic gradient descent method. We then generalize the mutual information bound with ϕ-divergence and Wasserstein distance. These generalizations lead to tighter bounds and can handle the case when μ is not absolutely continuous with respect to μ′. Furthermore, we apply a new set of techniques to obtain an alternative upper bound which gives a fast (and optimal) learning rate for some learning problems. Finally, inspired by the derived bounds, we propose the InfoBoost algorithm, in which the importance weights for source and target data are adjusted adaptively in accordance with information measures. The empirical results show the effectiveness of the proposed algorithm.
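    A minimal sketch of the weighted empirical-risk idea underlying the InfoBoost algorithm described above, assuming a generic loss and a fixed, hand-picked trade-off weight alpha (the paper adjusts such weights adaptively from information measures; all names and data below are illustrative):
        import numpy as np

        def weighted_erm_objective(theta, source, target, loss, alpha=0.5):
            # Convex combination of source- and target-domain empirical risks.
            # alpha is a fixed, hand-picked weight here; an adaptive scheme
            # would tune it from information measures between the two domains.
            xs, ys = source
            xt, yt = target
            risk_src = np.mean([loss(theta, x, y) for x, y in zip(xs, ys)])
            risk_tgt = np.mean([loss(theta, x, y) for x, y in zip(xt, yt)])
            return (1.0 - alpha) * risk_src + alpha * risk_tgt

        # Example with a squared loss for a linear predictor.
        sq_loss = lambda theta, x, y: (y - np.dot(theta, x)) ** 2
        rng = np.random.default_rng(0)
        src = (rng.normal(size=(100, 3)), rng.normal(size=100))          # source domain
        tgt = (rng.normal(0.5, 1.0, size=(20, 3)), rng.normal(size=20))  # shifted target
        print(weighted_erm_objective(np.zeros(3), src, tgt, sq_loss, alpha=0.7))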
  • Item
    Achieving QoS for Real-Time Bursty Applications over Passive Optical Networks
    Roy, D ; Rao, AS ; Alpcan, T ; Das, G ; Palaniswami, M ( 2021-09-05)
    Emerging real-time applications such as those classified under ultra-reliable low latency communication (uRLLC) generate bursty traffic and have strict Quality of Service (QoS) requirements. The Passive Optical Network (PON) is a popular access network technology, which is envisioned to handle such applications at the access segment of the network. However, the existing standards cannot handle strict QoS constraints. The available solutions rely on instantaneous heuristic decisions and maintain QoS constraints (mostly bandwidth) only in an average sense. Existing works with optimal strategies are computationally complex and are not suitable for uRLLC applications. This paper presents a novel, computationally efficient, far-sighted bandwidth allocation policy design for facilitating bursty traffic in a PON framework while satisfying the strict QoS (age of information/delay and bandwidth) requirements of modern applications. To this end, we first design a delay-tracking mechanism which allows us to model the resource allocation problem from a control-theoretic viewpoint as Model Predictive Control (MPC). MPC helps in taking far-sighted decisions regarding resource allocation and captures the time-varying dynamics of the network. We provide computationally efficient polynomial-time solutions and show their implementation in the PON framework. Compared to existing approaches, MPC reduces delay violations by approximately 15% for a delay-constrained application with a 1 ms target. Our approach is also robust to varying traffic arrivals.
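    As a rough illustration of the MPC viewpoint described above, the sketch below solves one receding-horizon step of a toy bandwidth-allocation problem with cvxpy; the single-queue model, the backlog cap standing in for the delay constraint, and all numbers are illustrative assumptions rather than the paper's formulation:
        import cvxpy as cp
        import numpy as np

        H = 8                   # prediction horizon (scheduling cycles)
        arrivals = np.array([3., 5., 2., 8., 1., 4., 6., 2.])  # predicted arrivals (Mb)
        capacity = 6.0          # bandwidth that can be granted per cycle (Mb)
        q0 = 4.0                # current queue backlog (Mb)
        q_max = 10.0            # backlog cap acting as a proxy for the delay bound

        grant = cp.Variable(H, nonneg=True)      # bandwidth granted in each cycle
        queue = cp.Variable(H + 1, nonneg=True)  # predicted backlog trajectory

        constraints = [queue[0] == q0, grant <= capacity, queue[1:] <= q_max]
        for k in range(H):
            # Backlog evolution: previous backlog plus arrivals minus served traffic.
            constraints.append(queue[k + 1] >= queue[k] + arrivals[k] - grant[k])

        # Penalize backlog (delay proxy) and, lightly, the total granted bandwidth.
        cp.Problem(cp.Minimize(cp.sum(queue[1:]) + 0.1 * cp.sum(grant)), constraints).solve()
        print("grant to apply in the next cycle:", round(float(grant.value[0]), 2))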
  • Item
    Achieving AI-enabled Robust End-to-End Quality of Experience over Radio Access Networks
    Roy, D ; Rao, AS ; Alpcan, T ; Das, G ; Palaniswami, M ( 2022-01-13)
    Emerging applications such as Augmented Reality, the Internet of Vehicles and Remote Surgery require both computing and networking functions working in harmony. The end-to-end (E2E) quality of experience (QoE) for these applications depends on the synchronous allocation of networking and computing resources. However, the relationship between the resources and the E2E QoE outcomes is typically stochastic and non-linear. In order to make efficient resource allocation decisions, it is essential to model these relationships. This article presents a novel machine-learning based approach to learn these relationships and concurrently orchestrate both resources. The machine learning models further help make allocation decisions that are robust to stochastic variations and reduce the robust optimization to a conventional constrained optimization. When resources are insufficient to accommodate all application requirements, our framework supports executing some of the applications with minimal (graceful) degradation of E2E QoE. We also show how the learning and optimization methods can be implemented in a distributed fashion using Software-Defined Networking (SDN) and Kubernetes. Our results show that deep learning-based modelling captures the E2E QoE with approximately 99.8% accuracy, and that our robust joint-optimization technique allocates resources efficiently compared to existing differentiated services alternatives.
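    A minimal sketch of the two-stage idea above: learn a resource-to-QoE model from data, then pick an allocation by constrained optimization against the learned model. The synthetic data, the small scikit-learn regressor standing in for a deep network, and the brute-force search over the budget are illustrative stand-ins only:
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Synthetic training data: (bandwidth, cpu) -> observed QoE in [0, 1].
        X = rng.uniform(0.0, 1.0, size=(500, 2))
        qoe = (1.0 - 0.5 * np.exp(-4.0 * X[:, 0]) - 0.5 * np.exp(-4.0 * X[:, 1])
               + rng.normal(0.0, 0.02, size=500))          # stochastic QoE response
        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, qoe)

        # Allocate under a joint resource budget: maximize the predicted QoE
        # subject to bandwidth + cpu <= budget (brute-force grid for illustration).
        budget = 1.0
        grid = np.array([(b, budget - b) for b in np.linspace(0.0, budget, 101)])
        best_bw, best_cpu = grid[int(np.argmax(model.predict(grid)))]
        print(f"allocate bandwidth={best_bw:.2f}, cpu={best_cpu:.2f}")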
  • Item
    Online Slice Reconfiguration for End-to-End QoE in 6G Applications
    Roy, D ; Rao, AS ; Alpcan, T ; Wick, A ; Das, G ; Palaniswami, M ( 2022-01-13)
  • Item
    Parameter and state estimation of nonlinear systems using a multi-observer under the supervisory framework
    Chong, MS ; Nešić, D ; Postoyan, R ; Kuhlmann, L ( 2014-03-18)
    We present a hybrid scheme for the parameter and state estimation of nonlinear continuous-time systems, which is inspired by the supervisory setup used for control. State observers are synthesized for a set of nominal parameter values, and a criterion is designed to select one of these observers at any given time instant, which provides the state and parameter estimates. Assuming that a persistency of excitation condition holds, the convergence of the parameter and state estimation errors to zero is ensured up to a margin, which can be made as small as desired by increasing the number of observers. To reduce the potential computational complexity of the scheme, we explain how the sampling of the parameter set can be dynamically updated using a zoom-in procedure. This strategy typically requires fewer observers for a given estimation error margin than a static sampling policy. The results are shown to be applicable to linear systems and to a class of nonlinear systems. We illustrate the applicability of the approach by estimating the synaptic gains and the mean membrane potentials of a neural mass model.
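    An illustrative discrete-time version of the multi-observer scheme described above, for a scalar plant with one unknown parameter: one observer is run per sampled parameter value and the observer with the smallest discounted output-error monitoring signal is selected (the plant, observer gain and monitoring signal are toy choices, not the paper's construction):
        import numpy as np

        a_true = 0.8                         # unknown parameter of x+ = a*x + u
        a_grid = np.linspace(0.5, 1.0, 11)   # sampled nominal parameter values
        L = 0.5                              # common output-injection gain

        x = 1.0
        xhat = np.zeros_like(a_grid)         # one state estimate per observer
        cost = np.zeros_like(a_grid)         # discounted output-error monitoring signals

        for k in range(300):
            u = np.sin(0.3 * k)              # known, persistently exciting input
            y = x                            # measured output (full state here)
            err = y - xhat                   # output estimation error of each observer
            cost = 0.98 * cost + err ** 2    # monitoring signal used for selection
            xhat = a_grid * xhat + u + L * err   # one observer copy per nominal a_i
            x = a_true * x + u               # true plant

        best = int(np.argmin(cost))
        print("selected parameter:", a_grid[best], "state estimate:", round(float(xhat[best]), 3))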
  • Item
    Hands-Off Control as Green Control
    Nagahara, M ; Quevedo, DE ; Nesic, D ( 2014-07-09)
    In this article, we introduce a new paradigm of control, called hands-off control, which can save energy and reduce CO2 emissions in control systems. A hands-off control is defined as a control whose support is much shorter than the horizon length. The maximum hands-off control is the minimum-support (or sparsest) control among all admissible controls. With maximum hands-off control, actuators in the feedback control system can be stopped during time intervals over which the control values are zero. We show that the maximum hands-off control is given by L1-optimal control, for which we also give numerical computation formulas.
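    A small discrete-time illustration of the link stated above between hands-off control and L1-optimal control: minimizing the L1 norm of the input that steers a double integrator to the origin, solved here with cvxpy, yields a control that is exactly zero over most of the horizon. The dynamics, horizon and input bound are illustrative choices, not the article's continuous-time formulation:
        import cvxpy as cp
        import numpy as np

        # Discretized double integrator x+ = A x + B u.
        dt = 0.1
        A = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([[0.5 * dt ** 2], [dt]])
        N = 60                            # horizon length
        x0 = np.array([1.0, 0.0])         # initial position and velocity

        u = cp.Variable((1, N))
        x = cp.Variable((2, N + 1))
        constraints = [x[:, 0] == x0, x[:, N] == 0, cp.abs(u) <= 1]
        for k in range(N):
            constraints.append(x[:, k + 1] == A @ x[:, k] + B @ u[:, k])

        # L1 ("hands-off") objective: most of the optimal input comes out exactly zero.
        cp.Problem(cp.Minimize(cp.sum(cp.abs(u))), constraints).solve()
        print("nonzero control samples:", int(np.sum(np.abs(u.value) > 1e-6)), "of", N)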
  • Item
    Maximum Hands-Off Control: A Paradigm of Control Effort Minimization
    Nagahara, M ; Quevedo, DE ; Nesic, D ( 2014-08-13)
    In this paper, we propose a paradigm of control, called maximum hands-off control. A hands-off control is defined as a control that has a short support per unit time. The maximum hands-off control has the minimum support (i.e., is the sparsest) per unit time among all controls that achieve the control objectives. For finite-horizon continuous-time control, we show the equivalence between the maximum hands-off control and L1-optimal control under a uniqueness assumption called normality. This result justifies the use of L1 optimality in computing a maximum hands-off control. The same result is obtained for discrete-time hands-off control. We also propose an L1/L2-optimal control to obtain a smooth hands-off control. Furthermore, we give a self-triggered feedback control algorithm for linear time-invariant systems which achieves a given sparsity rate and practical stability in the presence of plant disturbances. An example is included to illustrate the effectiveness of the proposed control.
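    To illustrate the L1/L2-optimal variant mentioned above, the sketch below reuses the same illustrative double-integrator setup as in the previous sketch but adds a quadratic term to the objective, trading some sparsity for a smoother input; the weight lam is an arbitrary illustrative choice:
        import cvxpy as cp
        import numpy as np

        dt, N = 0.1, 60
        A = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([[0.5 * dt ** 2], [dt]])
        x0 = np.array([1.0, 0.0])

        u = cp.Variable((1, N))
        x = cp.Variable((2, N + 1))
        constraints = [x[:, 0] == x0, x[:, N] == 0, cp.abs(u) <= 1]
        for k in range(N):
            constraints.append(x[:, k + 1] == A @ x[:, k] + B @ u[:, k])

        lam = 0.05   # trade-off between sparsity (L1 term) and smoothness (L2 term)
        cp.Problem(cp.Minimize(cp.sum(cp.abs(u)) + lam * cp.sum_squares(u)), constraints).solve()
        print("largest input magnitude:", round(float(np.max(np.abs(u.value))), 3))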
  • Item
    Stabilization of nonlinear systems using event-triggered output feedback controllers
    Abdelrahim, M ; Postoyan, R ; Daafouz, J ; Nešić, D ( 2014-08-25)
    The objective is to design output feedback event-triggered controllers to stabilize a class of nonlinear systems. One of the main difficulties of the problem is to ensure the existence of a minimum amount of time between two consecutive transmissions, which is essential in practice. We solve this issue by combining techniques from event-triggered and time-triggered control. The idea is to turn on the event-triggering mechanism only after a fixed amount of time has elapsed since the last transmission. This time is computed based on results on the stabilization of time-driven sampled-data systems. The overall strategy ensures an asymptotic stability property for the closed-loop system. The results are proved to be applicable to linear time-invariant (LTI) systems as a particular case.
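    A toy simulation of the combined time- and event-triggered mechanism described above, assuming a scalar unstable plant with static output feedback: the event condition is only checked after a dwell time T_min has elapsed since the last transmission, so a minimum inter-transmission time holds by construction (the plant, gain and threshold are illustrative):
        import numpy as np

        dt = 0.001                 # simulation step (s)
        T_min = 0.02               # enforced minimum time between transmissions (s)
        sigma = 0.05               # relative triggering threshold
        K = -2.0                   # static output-feedback gain for x_dot = x + u

        x = 1.0
        y_held = x                 # last transmitted output (zero-order hold)
        since_tx = 0.0             # time elapsed since the last transmission
        transmissions = 0

        for _ in range(int(5.0 / dt)):
            u = K * y_held                         # controller uses the held output
            x += dt * (x + u)                      # open-loop-unstable plant x_dot = x + u
            since_tx += dt
            # The event check is enabled only after the dwell time T_min has passed.
            if since_tx >= T_min and abs(x - y_held) > sigma * abs(x):
                y_held = x                         # transmit a fresh measurement
                since_tx = 0.0
                transmissions += 1

        print(f"state after 5 s: {x:.4f}, transmissions: {transmissions}")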
  • Item
    Co-design of output feedback laws and event-triggering conditions for linear systems
    Abdelrahim, M ; Postoyan, R ; Daafouz, J ; Nešić, D ( 2014-08-26)
    We present a procedure to simultaneously design the output feedback law and the event-triggering condition to stabilize linear systems. The closed-loop system is shown to satisfy a global asymptotic stability property, and the existence of a strictly positive minimum amount of time between two transmissions is guaranteed. The event-triggered controller is obtained by solving linear matrix inequalities (LMIs). We then exploit the flexibility of the method to maximize the guaranteed minimum amount of time between two transmissions. Finally, we provide a (heuristic) method to reduce the number of transmissions, which is supported by numerical simulations.
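    The controller above is obtained by solving linear matrix inequalities. As a minimal illustration of posing an LMI of this general flavour in cvxpy, here is a plain Lyapunov feasibility problem (a generic example, not the paper's co-design conditions):
        import cvxpy as cp
        import numpy as np

        # Certify stability of x_dot = A x by finding P > 0 with A^T P + P A < 0.
        A = np.array([[0.0, 1.0], [-2.0, -3.0]])
        eps = 1e-3

        P = cp.Variable((2, 2), symmetric=True)
        lmi = [P >> eps * np.eye(2), A.T @ P + P @ A << -eps * np.eye(2)]
        cp.Problem(cp.Minimize(0), lmi).solve(solver=cp.SCS)
        print("Lyapunov certificate P =\n", P.value)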
  • Item
    Optimization Methods on Riemannian Manifolds via Extremum Seeking Algorithms
    Taringoo, F ; Dower, PM ; Nesic, D ; Tan, Y ( 2014-12-09)
    This paper formulates the problem of extremum seeking for the optimization of cost functions defined on Riemannian manifolds. We extend conventional extremum seeking algorithms for optimization problems in Euclidean spaces to the optimization of cost functions defined on smooth Riemannian manifolds. This problem falls within the category of online optimization methods. We introduce the notion of geodesic dithers, which are perturbations of the optimizing trajectory in the tangent bundle of the ambient state manifold, and obtain the extremum seeking closed loop as a perturbation of the averaged gradient system. The main results are obtained by applying closeness-of-solutions and averaging theory on Riemannian manifolds. The main results are further extended to optimization on Lie groups. Numerical examples on the Riemannian manifolds (Lie groups) SO(3) and SE(3) are also presented at the end of the paper.
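    A rough numerical sketch of extremum seeking with a tangent-space ("geodesic") dither on the unit sphere S^2, in the spirit of the record above; the cost, retraction, washout filter and tuning constants are illustrative assumptions rather than the paper's construction:
        import numpy as np

        def cost(x):
            # Illustrative cost on the unit sphere S^2, minimized at x = [0, 0, 1].
            return 1.0 - x[2]

        def tangent_basis(x):
            # Orthonormal basis (e1, e2) of the tangent plane at x.
            v = np.array([1.0, 0.0, 0.0]) if abs(x[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
            e1 = v - np.dot(v, x) * x
            e1 /= np.linalg.norm(e1)
            return e1, np.cross(x, e1)

        def retract(x, v):
            # Projection retraction back onto the sphere (an exponential-map
            # step along a geodesic could be used instead).
            y = x + v
            return y / np.linalg.norm(y)

        x = np.array([1.0, 0.0, 0.0])          # initial point on S^2
        a, w, k, dt = 0.1, 50.0, 0.3, 0.001    # dither amplitude/frequency, gain, step
        J_avg = cost(x)                        # washout (slow average) of the cost

        for n in range(30000):
            t = n * dt
            e1, e2 = tangent_basis(x)
            # Dither in the tangent plane, then measure the cost at the perturbed point.
            J = cost(retract(x, a * (np.sin(w * t) * e1 + np.cos(w * t) * e2)))
            J_avg += dt * 5.0 * (J - J_avg)    # washout filter removes the slow part
            # Demodulation estimates the Riemannian gradient components in (e1, e2).
            g1 = (2.0 / a) * (J - J_avg) * np.sin(w * t)
            g2 = (2.0 / a) * (J - J_avg) * np.cos(w * t)
            x = retract(x, -k * dt * (g1 * e1 + g2 * e2))   # averaged gradient descent

        print("converged near", np.round(x, 2), "with cost", round(cost(x), 3))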