Electrical and Electronic Engineering Research Publications
Showing 1-10 of 36 results

Item: Information-theoretic privacy through chaos synchronization and optimal additive noise
Murguia, C.; Shames, I.; Farokhi, F.; Nešić, D. (Springer, 2020)
We study the problem of maximizing the privacy of data sets by adding random vectors generated via synchronized chaotic oscillators. In particular, we consider the setup where information about data sets (queries) is sent through public (unsecured) communication channels to a remote station. To hide private features (specific entries) within the data set, we corrupt the response to queries by adding random vectors. We send the distorted query (the sum of the requested query and the random vector) through the public channel. The distribution of the additive random vector is designed to minimize the mutual information (our privacy metric) between private entries of the data set and the distorted query. We cast the synthesis of this distribution as a convex program in the probabilities of the additive random vector. Once we have the optimal distribution, we propose an algorithm to generate pseudorandom realizations from this distribution using trajectories of a chaotic oscillator. At the other end of the channel, we have a second chaotic oscillator, which we use to generate realizations from the same distribution. If we obtain the same realizations on both sides of the channel, we can simply subtract the realization from the distorted query to recover the requested query. To generate equal realizations, the two chaotic oscillators must be synchronized, i.e., they must generate exactly the same trajectories on both sides of the channel synchronously in time. We force the two chaotic oscillators into exponential synchronization using a driving signal. Simulations are presented to illustrate our results.
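As a toy illustration of the noise-design step (not the authors' algorithm, which solves a convex program), the sketch below grid-searches over noise distributions on a small cyclic alphabet and picks the one minimizing the mutual information between the data X and the distorted query (X + N) mod m. The alphabet size, grid resolution, and data distribution are all hypothetical.

```python
import itertools
import math

def mutual_information(p_x, p_n, m):
    """I(X; Y) in bits for Y = (X + N) mod m, with X, N independent
    and supported on {0, ..., m - 1}."""
    p_y = [sum(p_x[x] * p_n[(y - x) % m] for x in range(m)) for y in range(m)]
    mi = 0.0
    for x in range(m):
        for y in range(m):
            p_xy = p_x[x] * p_n[(y - x) % m]
            if p_xy > 0:
                mi += p_xy * math.log2(p_xy / (p_x[x] * p_y[y]))
    return mi

def best_noise(p_x, m, step=0.25):
    """Coarse grid search over noise pmfs -- a stand-in for the paper's
    convex program in the noise probabilities."""
    ticks = [i * step for i in range(int(1 / step) + 1)]
    best, best_mi = None, float("inf")
    for cand in itertools.product(ticks, repeat=m):
        if abs(sum(cand) - 1.0) > 1e-9:
            continue  # not a probability vector
        mi = mutual_information(p_x, cand, m)
        if mi < best_mi:
            best, best_mi = cand, mi
    return best, best_mi

# Non-uniform data distribution on a 4-letter alphabet (illustrative).
p_x = [0.5, 0.25, 0.125, 0.125]
noise, leak = best_noise(p_x, 4)
```

On a cyclic alphabet, uniform additive noise makes the distorted query independent of the data, so the search drives the leakage to zero; with real-valued queries and utility constraints the optimal distribution is generally non-trivial.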

Item: Fisher information privacy with application to smart meter privacy using HVAC units
Farokhi, F.; Sandberg, H. (Springer, 2020)
In this chapter, we use Heating, Ventilation, and Air Conditioning (HVAC) units to preserve the privacy of households with smart meters in addition to regulating indoor temperature. We model the effect of the HVAC unit as additive noise in the household consumption. The Cramér-Rao bound is used to relate the inverse of the trace of the Fisher information matrix to an adversary's error in estimating the household's private consumption from the aggregate consumption of the household with the HVAC unit. This establishes the Fisher information as the measure of privacy leakage. We compute the optimal privacy-preserving policy for controlling the HVAC unit by minimizing a weighted sum of the Fisher information and the cost of operating the HVAC unit. The optimization problem also includes constraints on the temperatures of the house.
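The trade-off can be caricatured in a few lines: for additive Gaussian noise of standard deviation sigma, the Fisher information about a location parameter is 1/sigma^2, and a quadratic operating-cost model yields a one-dimensional weighted objective. The weight and cost model below are illustrative assumptions, not values from the chapter.

```python
def fisher_information(sigma):
    # For additive Gaussian noise of std sigma, the Fisher information
    # about a location (mean) parameter is 1 / sigma^2 (Cramer-Rao).
    return 1.0 / sigma ** 2

def operating_cost(sigma, cost_coeff=0.1):
    # Hypothetical cost model: operating cost grows with the noise
    # power injected by the HVAC unit.
    return cost_coeff * sigma ** 2

def best_noise_level(weight=1.0, cost_coeff=0.1):
    """Minimize weight * Fisher info + operating cost over a sigma grid."""
    sigmas = [0.1 * k for k in range(1, 200)]
    return min(sigmas, key=lambda s: weight * fisher_information(s)
                                     + operating_cost(s, cost_coeff))

sigma_star = best_noise_level()
# For this toy objective the minimizer satisfies sigma^4 = weight / cost_coeff.
```

The actual policy in the chapter also respects temperature constraints, which this unconstrained sketch omits.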

Item: Feedback control using a strategic sensor
Farokhi, F. (Taylor & Francis, 2021-01-02)
A dynamic estimation and control problem with a strategic sensor is considered. The strategic sensor may provide corrupted messages about the state measurements of a discrete-time linear time-invariant dynamical system to the system operator (or the controller). The system operator then uses this information to construct an estimate of the state of the system (and perhaps private variables of the sensor). The estimate is used to control the system to achieve the operator's desired objective, which might conflict with that of the strategic sensor. The problem is formulated as a game; an equilibrium of the game is computed and its properties are investigated.

Item: Measuring Information Leakage in Non-Stochastic Brute-Force Guessing
Farokhi, F.; Ding, N. (IEEE, 2021)
We propose an operational measure of information leakage in a non-stochastic setting to formalize privacy against a brute-force guessing adversary. We use uncertain variables, non-probabilistic counterparts of random variables, to construct a guessing framework in which an adversary is interested in determining private information based on uncertain reports. We consider brute-force trial-and-error guessing, in which an adversary can potentially check all the possibilities of the private information that are compatible with the available outputs to find the actual private realization. The ratio of the worst-case number of guesses for the adversary in the absence of the output to that in its presence captures the reduction in the adversary's guessing complexity and is thus used as a measure of private information leakage. We investigate the relationship between the newly developed measure of information leakage and both maximin information and stochastic maximal leakage, which are shown to arise in one-shot guessing.
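One simplified reading of the guessing measure, sketched here for a deterministic report map with the leakage taken as the log-ratio of worst-case guess counts without and with the output (an illustrative interpretation, not the paper's exact uncertain-variable formalism):

```python
import math

def compatible(xs, channel, y):
    """Private values x whose report set could have produced output y."""
    return {x for x in xs if y in channel[x]}

def nonstochastic_leakage(xs, channel):
    """Leakage in bits: log2(guesses needed without the output /
    worst-case guesses needed with it). A brute-force adversary needs
    at most |S| guesses to hit a value known to lie in S."""
    outputs = set().union(*channel.values())
    worst_with = max(len(compatible(xs, channel, y)) for y in outputs)
    return math.log2(len(xs) / worst_with)

# Toy example: 8 private values; the report reveals the value's parity.
xs = set(range(8))
channel = {x: {x % 2} for x in xs}  # each x admits exactly one output
leak_bits = nonstochastic_leakage(xs, channel)
```

Revealing parity halves the candidate set in the worst case, so the sketch reports a leakage of one bit.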

Item: Structured preconditioning of conjugate gradients for path-graph network optimal control problems
Zafar, A.; Cantoni, M.; Farokhi, F. (IEEE, 2021-01-01)
A structured preconditioned conjugate gradient (PCG) based linear system solver is developed for implementing Newton updates in second-order methods for a class of constrained network optimal control problems. Of specific interest are problems with discrete-time dynamics arising from the path-graph interconnection of N heterogeneous subsystems. The arithmetic complexity of each PCG step is O(NT), where T is the length of the time horizon. The proposed preconditioning involves a fixed number of block Jacobi iterations per PCG step. A decreasing analytic bound on the effective conditioning is given in terms of this number. The computations are decomposable across the spatial and temporal dimensions of the optimal control problem into subproblems of size independent of N and T. Numerical results are provided for two example systems.
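A generic preconditioned-conjugate-gradient loop with a diagonal (Jacobi) preconditioner, as a minimal stand-in for the structured block Jacobi preconditioner in the paper; the 3-by-3 system below is purely illustrative.

```python
def pcg(A, b, precondition, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for A x = b (A symmetric
    positive definite). `precondition(r)` approximately solves M z = r."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x with x = 0
    z = precondition(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = precondition(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Jacobi (diagonal) preconditioner: a one-term caricature of the paper's
# fixed number of block Jacobi iterations per PCG step.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]
x = pcg(A, b, jacobi)
```

In the paper, the preconditioner exploits the path-graph structure so that each step decomposes into subproblems independent of N and T; the dense matrix-vector products here do not.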

Item: Do Autoregressive Models Protect Privacy? Inferring Fine-Grained Energy Consumption from Aggregated Model Parameters
Sheikh, N. U.; Asghar, H. J.; Farokhi, F.; Kaafar, M. A. (Institute of Electrical and Electronics Engineers (IEEE), 2021-01-01)
We investigate the extent to which statistical predictive models leak information about their training data. More specifically, based on the use case of household (electrical) energy consumption, we evaluate whether white-box access to autoregressive (AR) models trained on such data, together with background information such as household energy data aggregates (e.g., monthly billing information) and publicly available weather data, can lead to inferring fine-grained energy data of a particular household. We construct two adversarial models aiming to infer fine-grained energy consumption patterns. Both threat models use monthly billing information of target households. The second adversary additionally has access to the AR model for a cluster of households containing the target household. Using two real-world energy datasets, we demonstrate that this adversary can apply maximum a posteriori estimation to reconstruct the daily consumption of target households with significantly lower error than the first adversary, which serves as a baseline. Such fine-grained data can expose private information, such as occupancy levels. Finally, we use differential privacy (DP) to mitigate the privacy risk of disaggregating energy data. Our evaluations show that differentially private model parameters offer strong privacy protection against the adversary at moderate utility, captured in terms of model fitness to the cluster.
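The DP defense can be sketched with a generic Laplace mechanism on the released parameters. The AR coefficients, sensitivity, and epsilon below are hypothetical placeholders, not values from the paper.

```python
import math
import random

def laplace_sample(scale, rng):
    """Draw from Laplace(0, scale) by inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize(params, sensitivity, epsilon, seed=0):
    """epsilon-DP release of model parameters via the Laplace mechanism.
    `sensitivity` is the L1 sensitivity of the parameter vector to one
    household's data (a hypothetical value in the demo below)."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [p + laplace_sample(scale, rng) for p in params]

# Hypothetical AR(2) coefficients for a cluster of households.
ar_params = [0.6, 0.3]
released = privatize(ar_params, sensitivity=0.05, epsilon=1.0)
```

Smaller epsilon injects more noise into the released coefficients, trading model fitness to the cluster for protection against the disaggregation adversary.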

Item: Noiseless Privacy: Definition, Guarantees, and Applications
Farokhi, F. (Institute of Electrical and Electronics Engineers (IEEE), 2021)
In this paper, we define noiseless privacy as a non-stochastic rival to differential privacy, requiring that the outputs of a mechanism (i.e., the function composition of a privacy-preserving mapping and a query) attain only a few values while the data of an individual varies (the logarithm of the number of distinct values is bounded by the privacy budget). Therefore, the output of the mechanism is not fully informative about the data of the individuals in the dataset. We prove several guarantees for noiselessly-private mechanisms. The information content of the output about the data of an individual, even if an adversary knows all the other entries of the private dataset, is bounded by the privacy budget. The zero-error capacity of memoryless channels that use noiselessly-private mechanisms for transmission is upper bounded by the privacy budget. The performance of a non-stochastic hypothesis-testing adversary is again bounded by the privacy budget. Assuming that an adversary has access to a stochastic prior on the dataset, we prove that the adversary's estimation error for individual entries of the dataset is lower bounded by a decreasing function of the privacy budget. In this case, we also show that the maximal leakage is bounded by the privacy budget. In addition to privacy guarantees, we prove that noiselessly-private mechanisms admit a composition theorem and that post-processing does not weaken their privacy guarantees. We prove that quantization or binning can ensure noiseless privacy if the number of quantization levels is appropriately selected based on the sensitivity of the query and the privacy budget. Finally, we illustrate the privacy merits of noiseless privacy using multiple datasets in energy, transport, and finance.
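The quantization guarantee admits a compact sketch: if varying one individual's entry moves the query by at most its sensitivity, an interval of that length overlaps at most floor(sensitivity / q) + 1 bins of width q, so choosing q at least sensitivity / (2**epsilon - 1) keeps the released value within 2**epsilon distinct values. This is a simplified reading with illustrative constants, not the paper's exact construction.

```python
import math

def bin_width(sensitivity, epsilon):
    # Smallest bin width guaranteeing at most 2**epsilon distinct
    # outputs as one individual's entry varies (counting argument above).
    return sensitivity / (2 ** epsilon - 1)

def release(query_value, q):
    # Deterministic (noiseless) release: snap the query to its bin.
    return q * math.floor(query_value / q)

# Sum query with sensitivity 1 and a privacy budget of 1 bit.
q = bin_width(sensitivity=1.0, epsilon=1)
# Perturbing one entry sweeps the query across an interval of length < 1.
outputs = {release(42.0 + 0.1 * k, q) for k in range(10)}
```

Unlike differential privacy, no randomness is added: the guarantee comes purely from how few values the binned output can take.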

Item: Non-Stochastic Private Function Evaluation
Farokhi, F.; Nair, G. (IEEE, 2021-04-11)
We consider private function evaluation: providing query responses based on the private data of multiple untrusted entities in such a way that no entity can learn something substantially new about the data of the others. First, we introduce perfect non-stochastic privacy in a two-party scenario. Perfect privacy amounts to conditional unrelatedness of the query response and the private uncertain variable of other individuals, conditioned on the uncertain variable of a given entity. We show that perfect privacy can be achieved for queries that are functions of the common uncertain variable, a generalization of the common random variable. We compute the closest approximation for queries that do not take this form. To provide a trade-off between privacy and utility, we relax the notion of perfect privacy. We define almost perfect privacy and show that this new definition equates to using conditional disassociation instead of conditional unrelatedness in the definition of perfect privacy. We then generalize the definitions to multi-party function evaluation (more than two data entities). We prove that uniform quantization of query responses, where the quantization resolution is a function of the privacy budget and the sensitivity of the query (cf. differential privacy), achieves function-evaluation privacy.


Item: Distributionally-robust machine learning using locally differentially-private data
Farokhi, F. (Springer Heidelberg, 2021-06-10)
We consider machine learning, particularly regression, using locally differentially-private datasets. The Wasserstein distance is used to define an ambiguity set centered at the empirical distribution of the dataset corrupted by local differential-privacy noise. The radius of the ambiguity set is selected based on the privacy budget, the spread of the data, and the size of the problem. Machine learning with the private dataset is then rewritten as a distributionally-robust optimization. For general distributions, the distributionally-robust optimization problem can be relaxed as a regularized machine learning problem with the Lipschitz constant of the machine learning model as the regularizer. For Gaussian data, the distributionally-robust optimization problem can be solved exactly to find an optimal regularizer. Training with this regularizer can be posed as a semidefinite program.
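For a linear model, the Lipschitz constant is the parameter norm, so the relaxation amounts to norm-regularized regression. The subgradient-descent sketch below is a hypothetical instantiation: the radius, data, and step size are made up, and the paper's rule for setting the radius from the privacy budget is not reproduced.

```python
def fit_dro(xs, ys, radius, lr=0.01, steps=5000):
    """Subgradient descent on
        mean squared error + radius * ||w||_2,
    the Lipschitz-regularized relaxation of the Wasserstein DRO problem.
    In the paper, `radius` would be set from the privacy budget, the
    spread of the data, and the problem size; here it is free."""
    d, n = len(xs[0]), len(xs)
    w = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d
        for x, y in zip(xs, ys):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for j in range(d):
                grad[j] += 2.0 * err * x[j] / n  # MSE gradient
        norm = sum(wi * wi for wi in w) ** 0.5
        if norm > 0:
            for j in range(d):
                grad[j] += radius * w[j] / norm  # subgradient of ||w||
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

# Noisy 1-D data from y ~ 2x; a positive radius shrinks the slope.
xs = [[0.0], [1.0], [2.0], [3.0]]
ys = [0.1, 2.0, 3.9, 6.1]
w_plain = fit_dro(xs, ys, radius=0.0)
w_robust = fit_dro(xs, ys, radius=1.0)
```

A larger radius (heavier privacy noise) yields a smaller Lipschitz constant, hedging the model against the distributional shift induced by the local DP mechanism.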