Electrical and Electronic Engineering - Research Publications


Search Results

Now showing 1 - 10 of 27
  • Item
    Zero-Error Feedback Capacity for Bounded Stabilization and Finite-State Additive Noise Channels
    Saberi, A ; Farokhi, F ; Nair, GN (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2022-10)
  • Item
    Bounded Estimation Over Finite-State Channels: Relating Topological Entropy and Zero-Error Capacity
    Saberi, A ; Farokhi, F ; Nair, GN (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2022-08)
  • Item
    Feedback control using a strategic sensor
    Farokhi, F (TAYLOR & FRANCIS LTD, 2021-01-02)
    A dynamic estimation and control problem with a strategic sensor is considered. The strategic sensor may provide corrupted messages about the state measurements of a discrete-time linear time-invariant dynamical system to the system operator (or the controller). The system operator then uses this information to construct an estimate of the state of the system (and perhaps private variables of the sensor). The estimate is used to control the system to achieve the operator's desired objective, which might conflict with that of the strategic sensor; the problem is therefore formulated as a game. An equilibrium of the game is computed and its properties are investigated.
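    As a rough illustration of the kind of loop described above, the following sketch simulates a scalar LTI plant whose state is reported by a sensor with a fixed bias that the operator debiases. All numerical values, the fixed bias, and the deadbeat controller are illustrative assumptions, not the paper's model or its equilibrium computation.

```python
import numpy as np

# Minimal sketch of a strategic-sensor loop (illustrative assumptions
# throughout; not the paper's model or equilibrium computation).
a, b, T = 0.9, 1.0, 50          # scalar plant x[k+1] = a x[k] + b u[k] + w[k]
bias = 0.5                      # hypothetical equilibrium bias of the sensor
rng = np.random.default_rng(0)

x, states = rng.normal(), []
for k in range(T):
    y = x + bias + 0.1 * rng.normal()   # sensor's (possibly corrupted) message
    x_hat = y - bias                    # operator debiases at equilibrium
    u = -(a / b) * x_hat                # deadbeat feedback on the estimate
    x = a * x + b * u + 0.1 * rng.normal()
    states.append(x)

print("closed-loop state variance:", np.var(states))
```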
  • Item
    Structured preconditioning of conjugate gradients for path-graph network optimal control problems
    Zafar, A ; Cantoni, M ; Farokhi, F (IEEE, 2021-01-01)
    A structured preconditioned conjugate gradient (PCG) based linear system solver is developed for implementing Newton updates in second-order methods for a class of constrained network optimal control problems. Of specific interest are problems with discrete-time dynamics arising from the path-graph interconnection of N heterogeneous sub-systems. The arithmetic complexity of each PCG step is O(NT), where T is the length of the time horizon. The proposed preconditioning involves a fixed number of block Jacobi iterations per PCG step. A decreasing analytic bound on the effective conditioning is given in terms of this number. The computations are decomposable across the spatial and temporal dimensions of the optimal control problem into sub-problems of size independent of N and T. Numerical results are provided for two example systems.
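    A stripped-down sketch of the idea on a generic symmetric positive-definite system: PCG in which each preconditioner application runs a fixed number of block Jacobi sweeps over a block partition. The test matrix, the contiguous blocks (standing in for the sub-system/time-step structure), and the sweep count are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def block_jacobi_apply(A, r, blocks, sweeps=3):
    # Approximate A^{-1} r with a fixed number of block Jacobi sweeps,
    # the role the abstract assigns to the preconditioner.
    z = np.zeros_like(r)
    for _ in range(sweeps):
        resid = r - A @ z
        for idx in blocks:
            z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], resid[idx])
    return z

def pcg(A, b, precond, tol=1e-8, max_iter=500):
    # Standard preconditioned conjugate gradients.
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p, rz = z.copy(), r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x, r = x + alpha * p, r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p, rz = z + (rz_new / rz) * p, rz_new
    return x

# Toy SPD system with path-graph-like banded coupling (illustrative).
n = 120
A = 4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
blocks = np.array_split(np.arange(n), 10)   # stand-in spatial/temporal blocks
x = pcg(A, b, lambda r: block_jacobi_apply(A, r, blocks))
print("residual norm:", np.linalg.norm(b - A @ x))
```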
  • Item
    Do Auto-Regressive Models Protect Privacy? Inferring Fine-Grained Energy Consumption From Aggregated Model Parameters
    Sheikh, NU ; Asghar, HJ ; Farokhi, F ; Kaafar, MA (IEEE COMPUTER SOC, 2022-11-01)
    We investigate the extent to which statistical predictive models leak information about their training data. More specifically, based on the use case of household (electrical) energy consumption, we evaluate whether white-box access to auto-regressive (AR) models trained on such data, together with background information such as household energy data aggregates (e.g., monthly billing information) and publicly available weather data, can lead to inferring fine-grained energy data of any particular household. We construct two adversarial models aiming to infer fine-grained energy consumption patterns. Both threat models use monthly billing information of target households. The second adversary also has access to the AR model for a cluster of households containing the target household. Using two real-world energy datasets, we demonstrate that this adversary can apply maximum a posteriori estimation to reconstruct the daily consumption of target households with significantly lower error than the first adversary, which serves as a baseline. Such fine-grained data can expose private information, such as occupancy levels. Finally, we use differential privacy (DP) to alleviate the privacy concerns raised by the adversary's disaggregation of energy data. Our evaluations show that differentially private model parameters offer strong privacy protection against the adversary with moderate utility, captured in terms of model fitness to the cluster.
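    To make the protected release concrete, here is a minimal sketch: fit an AR model to a synthetic consumption series by least squares, then perturb the fitted parameters with the Gaussian mechanism before release. The series, the privacy budget, and the parameter sensitivity bound are assumed values; in practice the sensitivity must be derived or enforced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a cluster's consumption series (illustrative).
T, p = 500, 3
series = np.zeros(T)
for t in range(1, T):
    series[t] = 0.7 * series[t - 1] + rng.normal(0, 0.2)

# Least-squares AR(p) fit: y[t] ~ c + sum_i a_i * y[t-i].
X = np.column_stack([np.ones(T - p)] +
                    [series[p - 1 - i:T - 1 - i] for i in range(p)])
y = series[p:]
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Gaussian mechanism on the released parameters. `sensitivity` is an
# assumed l2 bound on one household's influence on theta.
epsilon, delta, sensitivity = 1.0, 1e-5, 0.1
sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
theta_private = theta + rng.normal(0, sigma, theta.shape)
print("released AR parameters:", theta_private)
```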
  • Item
    Noiseless Privacy: Definition, Guarantees, and Applications
    Farokhi, F (Institute of Electrical and Electronics Engineers (IEEE), 2021)
    In this paper, we define noiseless privacy as a non-stochastic rival to differential privacy, requiring that the outputs of a mechanism (i.e., the function composition of a privacy-preserving mapping and a query) attain only a few values while the data of an individual varies (the logarithm of the number of distinct values is bounded by the privacy budget). Therefore, the output of the mechanism is not fully informative of the data of the individuals in the dataset. We prove several guarantees for noiselessly-private mechanisms. The information content of the output about the data of an individual, even if an adversary knows all the other entries of the private dataset, is bounded by the privacy budget. The zero-error capacity of memoryless channels using noiselessly-private mechanisms for transmission is upper bounded by the privacy budget. The performance of a non-stochastic hypothesis-testing adversary is again bounded by the privacy budget. Assuming that an adversary has access to a stochastic prior on the dataset, we prove that the estimation error of the adversary for individual entries of the dataset is lower bounded by a decreasing function of the privacy budget. In this case, we also show that the maximal leakage is bounded by the privacy budget. In addition to privacy guarantees, we prove that noiselessly-private mechanisms admit a composition theorem and that post-processing does not weaken their privacy guarantees. We prove that quantization or binning can ensure noiseless privacy if the number of quantization levels is appropriately selected based on the sensitivity of the query and the privacy budget. Finally, we illustrate the merits of noiseless privacy using multiple datasets in energy, transport, and finance.
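    A tiny sketch of the quantization mechanism mentioned at the end of the abstract, assuming a mean query over entries in [0, 1]: if the bin width is at least the query's sensitivity, changing one entry can move the output across at most one bin boundary, so the released value takes at most two distinct values, i.e., a budget of log 2 in the abstract's terms. The query and data are illustrative assumptions.

```python
import numpy as np

def binned_release(data, query, bin_width):
    # Non-stochastic mechanism: report the bin centre of the query value.
    q = query(data)
    return bin_width * np.floor(q / bin_width) + bin_width / 2

# Mean query over entries in [0, 1]; per-entry sensitivity is 1/n.
rng = np.random.default_rng(2)
data = rng.uniform(0, 1, 100)
s = 1.0 / len(data)

# Bin width >= sensitivity: varying one individual's entry moves the
# query by at most s, so the released value attains at most 2 distinct
# values as that entry varies (privacy budget log 2). Assumed setup.
release = binned_release(data, np.mean, bin_width=s)
print("released value:", release)
```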
  • Item
    Distributionally-robust machine learning using locally differentially-private data
    Farokhi, F (SPRINGER HEIDELBERG, 2022-05)
    We consider machine learning, particularly regression, using locally differentially private datasets. The Wasserstein distance is used to define an ambiguity set centered at the empirical distribution of the dataset corrupted by local differential privacy noise. The radius of the ambiguity set is selected based on the privacy budget, the spread of the data, and the size of the problem. Machine learning with the private dataset is rewritten as a distributionally-robust optimization. For general distributions, the distributionally-robust optimization problem can be relaxed as a regularized machine learning problem with the Lipschitz constant of the machine learning model as a regularizer. For Gaussian data, the distributionally-robust optimization problem can be solved exactly to find an optimal regularizer. Training with this regularizer can be posed as a semi-definite program.
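    A compact sketch of the relaxation for a linear model, where the Lipschitz constant reduces to the parameter norm, using cvxpy. The local noise scale, the ambiguity radius, and the data are illustrative assumptions; the paper ties the radius to the privacy budget, data spread, and problem size.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Local differential privacy: each record is perturbed before leaving
# its owner (Laplace noise with an assumed scale here).
X_priv = X + rng.laplace(scale=0.5, size=X.shape)

# Relaxed distributionally-robust regression: empirical risk on the
# privatized data plus the Wasserstein radius times the Lipschitz
# constant of the model, which for a linear model is the weight norm.
rho = 0.3   # ambiguity radius (illustrative choice)
w = cp.Variable(d)
objective = cp.sum_squares(X_priv @ w - y) / n + rho * cp.norm(w, 2)
cp.Problem(cp.Minimize(objective)).solve()
print("robust weights:", w.value)
```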
  • Item
    A game-theoretic approach to adversarial linear Gaussian classification
    Farokhi, F (Elsevier BV, 2021-09)
    We employ a game-theoretic model to analyze the interaction between an adversary and a classifier. There are two (i.e., positive and negative) classes to which data points can belong. The adversary wants to maximize the probability of missed detection for the positive class (i.e., the false negative probability) while not modifying the data point so much that it loses the favourable traits of the original class. The classifier, on the other hand, wants to maximize the probability of correct detection for the positive class (i.e., the true positive probability) subject to a lower bound on the probability of correct detection for the negative class (i.e., the true negative probability). For conditionally Gaussian data points (conditioned on the class) and linear support vector machine classifiers, we rewrite the optimization problems of the adversary and the classifier as convex problems and use best response dynamics to learn an equilibrium of the game. This results in computing a linear support vector machine classifier that is robust against adversarial input manipulations.
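    The loop below sketches best-response dynamics for a toy 2-D version of the game, with the adversary's move simplified to a fixed-budget shift of the positive points against the classifier's normal vector. The budget, the data, and the fixed iteration count are illustrative assumptions; the paper solves both players' responses as convex programs.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n, budget = 200, 0.5
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)),    # negative class
               rng.normal(+1.0, 1.0, (n, 2))])   # positive class
y = np.r_[np.zeros(n), np.ones(n)]

X_play = X.copy()
for _ in range(10):                    # best-response dynamics (fixed length)
    clf = LinearSVC().fit(X_play, y)   # classifier's best response
    w = clf.coef_.ravel()
    # Adversary's response: move positive points against the decision
    # direction within a norm budget to raise false negatives.
    X_play = X.copy()
    X_play[y == 1] -= budget * w / np.linalg.norm(w)

print("equilibrium-candidate weights:", w)
```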
  • Item
    Why Does Regularization Help with Mitigating Poisoning Attacks?
    Farokhi, F (Springer, 2021)
    We use distributionally-robust optimization for machine learning to mitigate the effect of data poisoning attacks. We provide performance guarantees for the trained model on the original data (not including the poison records) by training the model for the worst-case distribution on a neighbourhood around the empirical distribution (extracted from the training dataset corrupted by a poisoning attack) defined using the Wasserstein distance. We relax the distributionally-robust machine learning problem by finding an upper bound for the worst-case fitness based on the empirical sample-averaged fitness and the Lipschitz constant of the fitness function (on the data for given model parameters) as a regularizer. For regression models, we prove that this regularizer is equal to the dual norm of the model parameters.
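    For linear regression under l-infinity-bounded data perturbations, the dual-norm regularizer from the last sentence is the l1 norm. The cvxpy sketch below trains on poisoned data with that regularizer; the poisoning pattern and the radius are assumed for illustration.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
n, d = 150, 4
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Hypothetical poisoning attack: a handful of corrupted records.
X_train = np.vstack([X, rng.normal(5.0, 1.0, (10, d))])
y_train = np.r_[y, -10.0 * np.ones(10)]

# Regularized relaxation of the distributionally-robust problem:
# empirical fitness plus radius * dual norm of the parameters
# (l_inf perturbations on the data give an l_1 regularizer here).
rho = 0.5  # Wasserstein radius (illustrative)
w = cp.Variable(d)
obj = (cp.sum_squares(X_train @ w - y_train) / len(y_train)
       + rho * cp.norm(w, 1))
cp.Problem(cp.Minimize(obj)).solve()
print("regularized weights:", w.value)
```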
  • Item
    A Fundamental Bound on Performance of Non-Intrusive Load Monitoring Algorithms with Application to Smart-Meter Privacy
    Farokhi, F (Elsevier BV, 2020)
    We prove that the expected estimation error of non-intrusive load monitoring algorithms is lower bounded by the trace of the inverse of the cross-correlation matrix between the derivatives of the load profiles of the appliances. We use this fundamental bound to develop privacy-preserving policies. In particular, we devise a load-scheduling policy by maximizing this lower bound on the expected estimation error of non-intrusive load monitoring algorithms.
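    The bound itself is simple to evaluate. The snippet below computes it for synthetic appliance load profiles; the profiles and the discrete-derivative/correlation conventions are assumptions made for illustration, not the paper's data or notation.

```python
import numpy as np

rng = np.random.default_rng(6)
T, m = 1000, 3          # time samples and number of appliances

# Synthetic appliance load profiles (columns), e.g. on/off cycling.
t = np.arange(T)
loads = np.column_stack([np.where(np.sin(0.01 * (k + 1) * t) > 0, 1.0, 0.0)
                         for k in range(m)]) + 0.05 * rng.normal(size=(T, m))

# Cross-correlation matrix of the load-profile derivatives.
dloads = np.diff(loads, axis=0)
R = dloads.T @ dloads / dloads.shape[0]

# Lower bound on the expected NILM estimation error from the abstract.
# A privacy-preserving schedule would pick switching times that make R
# nearly singular, inflating this trace.
bound = np.trace(np.linalg.inv(R))
print("error lower bound:", bound)
```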