Electrical and Electronic Engineering - Research Publications

  • Item
    Zero-Error Feedback Capacity for Bounded Stabilization and Finite-State Additive Noise Channels
    Saberi, A; Farokhi, F; Nair, GN (IEEE, 2022-10)
  • Item
    Bounded Estimation Over Finite-State Channels: Relating Topological Entropy and Zero-Error Capacity
    Saberi, A; Farokhi, F; Nair, GN (IEEE, 2022-08)
  • Item
    Distributionally Robust Optimization With Noisy Data for Discrete Uncertainties Using Total Variation Distance
    Farokhi, F (IEEE, 2023)
    Stochastic programs in which the uncertainty distribution must be inferred from noisy data samples are considered. They are approximated by distributionally robust optimizations that minimize the worst-case expected cost over ambiguity sets, i.e., sets of distributions that are sufficiently compatible with the observed data. The ambiguity sets capture the probability distributions whose convolution with the noise distribution lies within a total-variation ball centered at the empirical distribution of the noisy data samples. With this ambiguity set, the solutions of the distributionally robust optimizations converge to the solutions of the original stochastic programs as the number of data samples grows to infinity, so the proposed distributionally robust optimization problems are asymptotically consistent. They can, moreover, be cast as tractable optimization problems.
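    As a rough illustration of the worst case over a total-variation ambiguity set, here is a minimal sketch (Python, assuming NumPy) that computes the worst-case expected cost of a discrete distribution by greedily shifting probability mass toward the most expensive outcome. It centers the ball directly at the empirical distribution and omits the convolution-with-noise construction from the abstract; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def worst_case_expected_cost(costs, p_hat, radius):
    # Solves max_q q . costs over distributions q with
    # TV(q, p_hat) = 0.5 * ||q - p_hat||_1 <= radius.
    # Greedy: move up to `radius` probability mass from the
    # cheapest outcomes onto the most expensive outcome.
    costs = np.asarray(costs, dtype=float)
    q = np.array(p_hat, dtype=float)
    worst = int(np.argmax(costs))
    budget = radius
    for i in np.argsort(costs):  # cheapest outcomes first
        if i == worst or budget <= 0:
            continue
        take = min(q[i], budget)
        q[i] -= take
        budget -= take
    q[worst] += radius - budget  # total mass actually moved
    return float(q @ costs)

# Example: moving 0.1 mass from the cheapest to the costliest
# scenario raises the expected cost from 28.0 to 37.0.
print(worst_case_expected_cost([10.0, 40.0, 100.0], [0.6, 0.3, 0.1], 0.1))
```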
  • Item
    Do Auto-Regressive Models Protect Privacy? Inferring Fine-Grained Energy Consumption From Aggregated Model Parameters
    Sheikh, NU; Asghar, HJ; Farokhi, F; Kaafar, MA (IEEE Computer Society, 2022-11-01)
    We investigate the extent to which statistical predictive models leak information about their training data. Specifically, for the use case of household (electrical) energy consumption, we evaluate whether white-box access to auto-regressive (AR) models trained on such data, together with background information such as household energy aggregates (e.g., monthly billing information) and publicly available weather data, allows inferring the fine-grained energy data of a particular household. We construct two adversarial models aiming to infer fine-grained energy-consumption patterns. Both threat models use the monthly billing information of target households; the second adversary additionally has access to the AR model for a cluster of households containing the target household. Using two real-world energy datasets, we demonstrate that this second adversary can apply maximum a posteriori estimation to reconstruct the daily consumption of target households with significantly lower error than the first adversary, which serves as a baseline. Such fine-grained data can expose private information, such as occupancy levels. Finally, we use differential privacy (DP) to mitigate the risk of an adversary disaggregating energy data. Our evaluations show that differentially private model parameters offer strong protection against this adversary while retaining moderate utility, measured in terms of model fitness to the cluster.
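    As a toy illustration of the second adversary's MAP step, here is a sketch (Python with NumPy) that reconstructs a daily profile from an AR(1) coefficient and a monthly billing total. With Gaussian innovations, the MAP estimate minimizes the sum of squared AR residuals subject to the billing constraint, an equality-constrained least-squares problem solved via its KKT system. The real attack also conditions on weather data and a cluster-level model; the AR(1) simplification and all names here are ours.

```python
import numpy as np

def map_daily_from_monthly(a, total, T):
    # Residual operator of an AR(1) model: (D x)_t = x_{t+1} - a * x_t.
    D = np.zeros((T - 1, T))
    for t in range(T - 1):
        D[t, t] = -a
        D[t, t + 1] = 1.0
    A = D.T @ D  # Gaussian innovations => MAP minimizes ||D x||^2.
    # KKT system for min ||D x||^2 s.t. sum(x) = total:
    # [A 1; 1^T 0] [x; lam] = [0; total].
    K = np.zeros((T + 1, T + 1))
    K[:T, :T] = A
    K[:T, T] = 1.0
    K[T, :T] = 1.0
    rhs = np.zeros(T + 1)
    rhs[T] = total
    return np.linalg.solve(K, rhs)[:T]

# Example: a 30-day month billed at 300 kWh, AR coefficient 0.9.
daily = map_daily_from_monthly(a=0.9, total=300.0, T=30)
print(daily.sum())  # 300.0, by construction
```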
  • Item
    Distributionally-robust machine learning using locally differentially-private data
    Farokhi, F (Springer, 2022-05)
    We consider machine learning, particularly regression, using locally differentially-private datasets. The Wasserstein distance is used to define an ambiguity set centered at the empirical distribution of the dataset corrupted by local differential privacy noise, with a radius selected based on the privacy budget, the spread of the data, and the size of the problem. Machine learning with the private dataset is then rewritten as a distributionally-robust optimization. For general distributions, this problem can be relaxed to a regularized machine learning problem with the Lipschitz constant of the machine learning model as the regularizer. For Gaussian data, the distributionally-robust optimization problem can be solved exactly to find an optimal regularizer, and training with this regularizer can be posed as a semi-definite program.
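    The general-distribution relaxation admits a compact sketch: minimize the empirical risk plus the Wasserstein radius times the Lipschitz constant of the model, which for a linear model under a 1-Lipschitz loss is the norm of the weight vector. The sketch below (Python, assuming NumPy and SciPy) takes the radius as an input, whereas the paper selects it from the privacy budget, data spread, and problem size; the function name and solver choice are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def dro_regression(X, y, radius):
    # Empirical absolute-loss risk plus radius * ||w||_2, the
    # Lipschitz-constant regularizer of the linear model x -> w . x.
    def objective(w):
        return np.mean(np.abs(y - X @ w)) + radius * np.linalg.norm(w)
    # Powell handles the non-smooth objective for small dimensions.
    return minimize(objective, np.zeros(X.shape[1]), method="Powell").x

# Synthetic noisy (e.g., privatized) data: a larger privacy-driven
# radius shrinks the learned weights toward zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.laplace(scale=1.0, size=200)
print(dro_regression(X, y, radius=0.1))
```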