Electrical and Electronic Engineering - Research Publications

Search Results

Now showing 1 - 10 of 36
  • Item
    Information-theoretic privacy through chaos synchronization and optimal additive noise
    Murguia, C ; Shames, I ; Farokhi, F ; Nešić, D (Springer, 2020)
    We study the problem of maximizing privacy of data sets by adding random vectors generated via synchronized chaotic oscillators. In particular, we consider the setup where information about data sets, in the form of queries, is sent through public (unsecured) communication channels to a remote station. To hide private features (specific entries) within the data set, we corrupt the response to queries by adding random vectors. We send the distorted query (the sum of the requested query and the random vector) through the public channel. The distribution of the additive random vector is designed to minimize the mutual information (our privacy metric) between private entries of the data set and the distorted query. We cast the synthesis of this distribution as a convex program in the probabilities of the additive random vector. Once we have the optimal distribution, we propose an algorithm to generate pseudorandom realizations from this distribution using trajectories of a chaotic oscillator. At the other end of the channel, a second chaotic oscillator is used to generate realizations from the same distribution. If we obtain the same realizations on both sides of the channel, we can simply subtract the realization from the distorted query to recover the requested query. To generate equal realizations, the two chaotic oscillators must be synchronized, i.e., they must generate exactly the same trajectories on both sides of the channel synchronously in time. We force the two chaotic oscillators into exponential synchronization using a driving signal. Simulations are presented to illustrate our results.
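The recovery scheme in this abstract can be sketched in a few lines. In the sketch below, a logistic map stands in for the synchronized chaotic oscillator, and inverse-CDF sampling maps its iterates to realizations of a noise distribution; the support, probabilities, map parameters, and initial state are all placeholder assumptions, not the paper's optimal design.

```python
import numpy as np

def logistic_trajectory(x0, n, r=3.99):
    """Iterate the logistic map; stands in for a synchronized chaotic oscillator."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def sample_from_distribution(u, support, pmf):
    """Inverse-CDF sampling: map chaotic iterates u in (0,1) to realizations."""
    cdf = np.cumsum(pmf)
    idx = np.searchsorted(cdf, u)
    return support[np.minimum(idx, len(support) - 1)]

# Hypothetical noise distribution (placeholder for the convex-program solution).
support = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
pmf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

query = np.array([3.0, 5.0, 7.0])

# Both ends run oscillators from the same (synchronized) state, so they
# produce identical trajectories and hence identical noise realizations.
sender_traj = logistic_trajectory(0.123456, len(query))
receiver_traj = logistic_trajectory(0.123456, len(query))

noise_tx = sample_from_distribution(sender_traj, support, pmf)
distorted = query + noise_tx                  # sent over the public channel
noise_rx = sample_from_distribution(receiver_traj, support, pmf)
recovered = distorted - noise_rx              # exact recovery under synchronization
```

Under exact synchronization the subtraction cancels the noise, so `recovered` equals `query`; the paper's contribution is achieving that synchronization exponentially fast with a driving signal.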
  • Item
    Fisher information privacy with application to smart meter privacy using HVAC units
    Farokhi, F ; Sandberg, H (Springer, 2020)
    In this chapter, we use Heating, Ventilation, and Air Conditioning (HVAC) units to preserve the privacy of households with smart meters in addition to regulating indoor temperature. We model the effect of the HVAC unit as an additive noise in the household consumption. The Cramér-Rao bound is used to relate the trace of the inverse of the Fisher information matrix to the quality of an adversary's estimate of the household's private consumption from the aggregate consumption of the household with the HVAC unit. This establishes the Fisher information as the measure of privacy leakage. We compute the optimal privacy-preserving policy for controlling the HVAC unit by minimizing a weighted sum of the Fisher information and the cost of operating the HVAC unit. The optimization problem also includes constraints on the temperatures of the house.
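A scalar toy version makes the Fisher-information trade-off concrete. If the HVAC contribution acts as additive Gaussian noise with variance sigma2 on the metered signal, the Fisher information about the private consumption (a location parameter) is 1/sigma2, and the Cramér-Rao bound says the adversary's error variance is at least sigma2. The weighted objective and cost coefficients below are illustrative assumptions, not the chapter's actual HVAC cost model.

```python
import numpy as np

def fisher_information_gaussian(sigma2):
    """Fisher information of a location parameter under additive N(0, sigma2) noise."""
    return 1.0 / sigma2

def cramer_rao_lower_bound(sigma2):
    """Lower bound on the adversary's estimation error variance."""
    return 1.0 / fisher_information_gaussian(sigma2)

def weighted_objective(sigma2, weight=1.0, cost_per_unit=0.1):
    """Hypothetical privacy/cost trade-off: leakage (Fisher information)
    plus a linear proxy for the cost of injecting HVAC variance sigma2."""
    return weight * fisher_information_gaussian(sigma2) + cost_per_unit * sigma2

# Sweep the injected variance and pick the minimizer of the weighted sum.
sigmas = np.linspace(0.5, 10.0, 200)
best = sigmas[np.argmin([weighted_objective(s) for s in sigmas])]
# Analytically, w/s + c*s is minimized at s = sqrt(w/c) = sqrt(10) here.
```

More injected variance means less Fisher information (more privacy) but higher operating cost; the optimum balances the two, which is the shape of the policy-design problem the chapter solves with temperature constraints added.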
  • Item
    Feedback control using a strategic sensor
    Farokhi, F (TAYLOR & FRANCIS LTD, 2021-01-02)
    A dynamic estimation and control problem with a strategic sensor is considered. The strategic sensor may provide corrupted messages about the state measurements of a discrete-time linear time-invariant dynamical system to the system operator (or the controller). The system operator then uses this information to construct an estimate of the state of the system (and perhaps private variables of the sensor). The estimate is used to control the system to achieve the operator's desired objective, which might conflict with that of the strategic sensor. The problem is formulated as a game. An equilibrium of the game is computed and its properties are investigated.
  • Item
    Measuring Information Leakage in Non-stochastic Brute-Force Guessing
    Farokhi, F ; Ding, N (IEEE, 2021)
    We propose an operational measure of information leakage in a non-stochastic setting to formalize privacy against a brute-force guessing adversary. We use uncertain variables, non-probabilistic counterparts of random variables, to construct a guessing framework in which an adversary is interested in determining private information based on uncertain reports. We consider brute-force trial-and-error guessing in which an adversary can potentially check all the possibilities of the private information that are compatible with the available outputs to find the actual private realization. The ratio of the worst-case number of guesses for the adversary in the presence of the output and in the absence of it captures the reduction in the adversary's guessing complexity and is thus used as a measure of private information leakage. We investigate the relationship between the newly-developed measure of information leakage with maximin information and stochastic maximal leakage that are shown to arise in one-shot guessing.
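The guessing-ratio measure admits a small finite illustration. The sketch below reconstructs the idea for deterministic reports: leakage is the log of the worst-case number of guesses without the output (all secrets) divided by the worst-case number of secrets compatible with an observed output. The secret space and `report` functions are hypothetical examples, not from the paper.

```python
import math

def worst_case_leakage(secrets, report):
    """Illustrative non-stochastic brute-force leakage: log2 of the ratio
    between the worst-case guess count absent the output and the largest
    set of secrets compatible with any single output value."""
    compatible = {}
    for s in secrets:
        compatible.setdefault(report(s), []).append(s)
    worst_with_output = max(len(v) for v in compatible.values())
    return math.log2(len(secrets) / worst_with_output)

secrets = list(range(16))
leakage_coarse = worst_case_leakage(secrets, lambda s: s // 8)  # 2 buckets
leakage_fine = worst_case_leakage(secrets, lambda s: s // 2)    # 8 buckets
# Finer reports shrink the compatible sets more, so they leak more:
# 1 bit for the coarse report versus 3 bits for the fine one.
```

This matches the abstract's intuition: the output reduces the adversary's guessing complexity, and the reduction factor is the leakage.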
  • Item
    Structured preconditioning of conjugate gradients for path-graph network optimal control problems
    Zafar, A ; Cantoni, M ; Farokhi, F (IEEE, 2021-01-01)
    A structured preconditioned conjugate gradient (PCG) based linear system solver is developed for implementing Newton updates in second-order methods for a class of constrained network optimal control problems. Of specific interest are problems with discrete-time dynamics arising from the path-graph interconnection of N heterogeneous sub-systems. The arithmetic complexity of each PCG step is O(NT), where T is the length of the time horizon. The proposed preconditioning involves a fixed number of block Jacobi iterations per PCG step. A decreasing analytic bound on the effective conditioning is given in terms of this number. The computations are decomposable across the spatial and temporal dimensions of the optimal control problem into sub-problems of size independent of N and T. Numerical results are provided for two example systems.
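A generic one-sweep block Jacobi preconditioner inside CG can be sketched as follows for a block tridiagonal SPD system, the kind of structure a path-graph interconnection induces. This is a plain dense sketch for illustration, not the paper's decomposable O(NT) scheme; the block size and test system are assumptions.

```python
import numpy as np

def block_jacobi_pcg(A, b, block_size, tol=1e-8, max_iter=500):
    """Conjugate gradients preconditioned by the inverse of A's diagonal
    blocks (one block Jacobi sweep per PCG step). Illustrative sketch only."""
    n = len(b)
    # Precompute the diagonal-block inverses: the block Jacobi preconditioner.
    inv_blocks = []
    for i in range(0, n, block_size):
        j = min(i + block_size, n)
        inv_blocks.append(np.linalg.inv(A[i:j, i:j]))

    def apply_M_inv(r):
        z = np.empty_like(r)
        for k, i in enumerate(range(0, n, block_size)):
            j = min(i + block_size, n)
            z[i:j] = inv_blocks[k] @ r[i:j]
        return z

    x = np.zeros(n)
    r = b - A @ x
    z = apply_M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD block tridiagonal system: 3 "subsystems" of size 4 on a path.
rng = np.random.default_rng(0)
n, bs = 12, 4
A = np.diag(4.0 + rng.random(n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = -1.0
b = rng.random(n)
x = block_jacobi_pcg(A, b, bs)
```

Each preconditioner application touches only per-block data, which is what makes this style of preconditioning decomposable across subsystems.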
  • Item
    Do Auto-regressive Models Protect Privacy? Inferring Fine-grained Energy Consumption from Aggregated Model Parameters
    Sheikh, NU ; Asghar, HJ ; Farokhi, F ; Kaafar, MA (Institute of Electrical and Electronics Engineers (IEEE), 2021-01-01)
    We investigate the extent to which statistical predictive models leak information about their training data. More specifically, based on the use case of household (electrical) energy consumption, we evaluate whether white-box access to auto-regressive (AR) models trained on such data, together with background information such as household energy data aggregates (e.g., monthly billing information) and publicly-available weather data, can lead to inferring fine-grained energy data of any particular household. We construct two adversarial models aiming to infer fine-grained energy consumption patterns. Both threat models use monthly billing information of target households. The second adversary additionally has access to the AR model for a cluster of households containing the target household. Using two real-world energy datasets, we demonstrate that this adversary can apply maximum a posteriori estimation to reconstruct daily consumption of target households with significantly lower error than the first adversary, which serves as a baseline. Such fine-grained data can expose private information, such as occupancy levels. Finally, we use differential privacy (DP) to mitigate the risk of the adversary disaggregating energy data. Our evaluations show that differentially private model parameters offer strong privacy protection against the adversary with moderate utility, captured in terms of model fitness to the cluster.
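The core MAP step has a clean closed form in the Gaussian case: condition a prior over daily consumption on the observed monthly total (a linear aggregate). In the sketch below the prior mean and covariance are placeholders standing in for what a cluster-level AR model would supply; the numbers are hypothetical.

```python
import numpy as np

days = 30
mu = np.full(days, 10.0)        # prior daily mean (hypothetical AR-model output)
Sigma = 4.0 * np.eye(days)      # prior covariance (hypothetical)
ones = np.ones(days)

true_daily = 10.0 + np.sin(np.arange(days))   # unseen ground truth
monthly_total = ones @ true_daily             # the billing information

# MAP estimate of a Gaussian prior conditioned on the noiseless linear
# observation 1' x = monthly_total (standard Gaussian conditioning formula).
gain = Sigma @ ones / (ones @ Sigma @ ones)
x_map = mu + gain * (monthly_total - ones @ mu)
```

The estimate always reproduces the bill exactly, and the prior covariance determines how the discrepancy between the bill and the prior mean is spread across the days; a sharper (AR-informed) prior is what lets the paper's second adversary reconstruct daily profiles accurately.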
  • Item
    Noiseless Privacy: Definition, Guarantees, and Applications
    Farokhi, F (Institute of Electrical and Electronics Engineers (IEEE), 2021)
    In this paper, we define noiseless privacy as a non-stochastic rival to differential privacy, requiring that the outputs of a mechanism (i.e., the function composition of a privacy-preserving mapping and a query) take only a few values as the data of an individual varies (the logarithm of the number of distinct values is bounded by the privacy budget). Therefore, the output of the mechanism is not fully informative of the data of the individuals in the dataset. We prove several guarantees for noiselessly-private mechanisms. The information content of the output about the data of an individual, even if an adversary knows all the other entries of the private dataset, is bounded by the privacy budget. The zero-error capacity of memoryless channels using noiselessly-private mechanisms for transmission is upper bounded by the privacy budget. The performance of a non-stochastic hypothesis-testing adversary is likewise bounded by the privacy budget. Assuming that an adversary has access to a stochastic prior on the dataset, we prove that the estimation error of the adversary for individual entries of the dataset is lower bounded by a decreasing function of the privacy budget. In this case, we also show that the maximal leakage is bounded by the privacy budget. In addition to these privacy guarantees, we prove that noiselessly-private mechanisms admit a composition theorem and that post-processing does not weaken their privacy guarantees. We prove that quantization or binning can ensure noiseless privacy if the number of quantization levels is appropriately selected based on the sensitivity of the query and the privacy budget. Finally, we illustrate the merits of noiseless privacy using multiple datasets in energy, transport, and finance.
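The binning construction can be illustrated directly: quantize the query output so it takes at most 2**budget distinct values, making the log of the number of distinct outputs (in bits) at most the budget. The way the sensitivity enters the bin width below is an assumption for illustration, not the paper's exact selection rule.

```python
def noiseless_private_mechanism(query_value, sensitivity, budget, value_range):
    """Illustrative binning mechanism: map the query output to the midpoint
    of one of at most 2**budget bins, so log2(#distinct outputs) <= budget.
    Flooring the bin width at `sensitivity` is an illustrative assumption."""
    lo, hi = value_range
    n_bins = int(2 ** budget)
    width = max((hi - lo) / n_bins, sensitivity)
    idx = min(int((query_value - lo) / width), n_bins - 1)
    return lo + (idx + 0.5) * width  # release only the bin midpoint

# With budget 3 bits, a query over [0, 1] yields at most 8 distinct outputs.
outputs = {noiseless_private_mechanism(v / 10, 0.1, 3, (0.0, 1.0))
           for v in range(11)}
```

Since the released value determines only which bin the data fell in, an adversary's distinguishing power is capped by the number of bins, which is the mechanism's whole privacy argument.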
  • Item
    Non-Stochastic Private Function Evaluation
    Farokhi, F ; Nair, G (IEEE, 2021-04-11)
    We consider private function evaluation to provide query responses based on private data of multiple untrusted entities in such a way that each cannot learn something substantially new about the data of others. First, we introduce perfect non-stochastic privacy in a two-party scenario. Perfect privacy amounts to conditional unrelatedness of the query response and the private uncertain variable of other individuals conditioned on the uncertain variable of a given entity. We show that perfect privacy can be achieved for queries that are functions of the common uncertain variable, a generalization of the common random variable. We compute the closest approximation of the queries that do not take this form. To provide a trade-off between privacy and utility, we relax the notion of perfect privacy. We define almost perfect privacy and show that this new definition equates to using conditional disassociation instead of conditional unrelatedness in the definition of perfect privacy. Then, we generalize the definitions to multi-party function evaluation (more than two data entities). We prove that uniform quantization of query responses, where the quantization resolution is a function of privacy budget and sensitivity of the query (cf., differential privacy), achieves function evaluation privacy.
  • Item
    Distributionally-robust machine learning using locally differentially-private data
    Farokhi, F (SPRINGER HEIDELBERG, 2021-06-10)
    We consider machine learning, particularly regression, using locally-differentially private datasets. The Wasserstein distance is used to define an ambiguity set centered at the empirical distribution of the dataset corrupted by local differential privacy noise. The radius of the ambiguity set is selected based on the privacy budget, the spread of the data, and the size of the problem. Machine learning with the private dataset is then rewritten as a distributionally-robust optimization. For general distributions, the distributionally-robust optimization problem can be relaxed to a regularized machine learning problem with the Lipschitz constant of the machine learning model as a regularizer. For Gaussian data, the distributionally-robust optimization problem can be solved exactly to find an optimal regularizer. Training with this regularizer can be posed as a semi-definite program.
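The relaxation described here has a simple concrete form for linear regression: empirical risk plus the Wasserstein radius times the model's Lipschitz constant, which for a linear model is the norm of the weights. The sketch below uses plain gradient descent and an arbitrary radius; in the paper's setup the radius would come from the privacy budget, data spread, and problem size.

```python
import numpy as np

def dro_regularized_regression(X, y, radius, lr=0.05, epochs=2000):
    """Illustrative sketch of the DRO relaxation for linear regression:
    minimize mean squared error + radius * ||w||_2, where ||w||_2 is the
    Lipschitz constant of the linear model. Gradient descent with a
    subgradient for the norm term; not the paper's SDP formulation."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        resid = X @ w - y
        grad = 2 * X.T @ resid / n          # gradient of the empirical risk
        norm = np.linalg.norm(w)
        if norm > 0:
            grad = grad + radius * w / norm  # subgradient of radius * ||w||
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
w_plain = dro_regularized_regression(X, y, radius=0.0)
w_robust = dro_regularized_regression(X, y, radius=1.0)
# A larger ambiguity radius (more privacy noise to hedge against)
# shrinks the learned weights, i.e., enforces a smaller Lipschitz constant.
```

The design choice mirrors the abstract: more privacy noise enlarges the ambiguity set, which shows up as heavier regularization of the model's Lipschitz constant.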