Electrical and Electronic Engineering Research Publications
Search Results
Now showing 1-10 of 25

Item: Feedback control using a strategic sensor
Farokhi, F (TAYLOR & FRANCIS LTD, 2021-01-02)
A dynamic estimation and control problem with a strategic sensor is considered. The strategic sensor may provide corrupted messages about the state measurements of a discrete-time linear time-invariant dynamical system to the system operator (or the controller). The system operator then uses this information to construct an estimate of the state of the system (and perhaps private variables of the sensor). The estimate is used to control the system to achieve the operator's desired objective, which might conflict with that of the strategic sensor. The problem is formulated as a game. An equilibrium of the game is computed and its properties are investigated.

Item: Structured preconditioning of conjugate gradients for path-graph network optimal control problems
Zafar, A; Cantoni, M; Farokhi, F (IEEE, 2021-01-01)
A structured preconditioned conjugate gradient (PCG) based linear system solver is developed for implementing Newton updates in second-order methods for a class of constrained network optimal control problems. Of specific interest are problems with discrete-time dynamics arising from the path-graph interconnection of N heterogeneous subsystems. The arithmetic complexity of each PCG step is O(NT), where T is the length of the time horizon. The proposed preconditioning involves a fixed number of block-Jacobi iterations per PCG step. A decreasing analytic bound on the effective conditioning is given in terms of this number. The computations are decomposable across the spatial and temporal dimensions of the optimal control problem into subproblems of size independent of N and T. Numerical results are provided for two example systems.
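The PCG iteration at the core of the abstract above can be sketched generically. A minimal sketch, assuming a small dense symmetric positive-definite system: the paper's structured block-Jacobi preconditioner is replaced here by a plain diagonal (Jacobi) preconditioner, and the matrix and right-hand side are made up for illustration.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a symmetric
    positive-definite A. M_inv(r) applies the inverse of the
    preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system; a diagonal (Jacobi) preconditioner stands in for
# the structured block-Jacobi scheme described in the abstract.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

Swapping the `M_inv` callback for a routine that applies a fixed number of block-Jacobi sweeps exploiting the path-graph structure would recover the flavour of the proposed scheme.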

Item: Do Autoregressive Models Protect Privacy? Inferring Fine-grained Energy Consumption from Aggregated Model Parameters
Sheikh, NU; Asghar, HJ; Farokhi, F; Kaafar, MA (Institute of Electrical and Electronics Engineers (IEEE), 2021-01-01)
We investigate the extent to which statistical predictive models leak information about their training data. More specifically, based on the use case of household (electrical) energy consumption, we evaluate whether white-box access to autoregressive (AR) models trained on such data, together with background information such as household energy data aggregates (e.g., monthly billing information) and publicly available weather data, can lead to inferring fine-grained energy data of any particular household. We construct two adversarial models aiming to infer fine-grained energy consumption patterns. Both threat models use monthly billing information of target households. The second adversary has access to the AR model for a cluster of households containing the target household. Using two real-world energy datasets, we demonstrate that this adversary can apply maximum a posteriori estimation to reconstruct daily consumption of target households with significantly lower error than the first adversary, which serves as a baseline. Such fine-grained data can essentially expose private information, such as occupancy levels. Finally, we use differential privacy (DP) to alleviate the privacy concerns of the adversary in disaggregating energy data. Our evaluations show that differentially private model parameters offer strong privacy protection against the adversary with moderate utility, captured in terms of model fitness to the cluster.
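The two ingredients in the abstract, an AR model fitted to consumption data and differentially private release of its parameters, can be made concrete with a toy sketch. Everything below is illustrative: the AR(1) data is synthetic, and the `sensitivity` and `epsilon` values are made up; a real deployment would need the estimator's sensitivity derived formally.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic AR(1) series y_t = 0.8 * y_{t-1} + e_t.
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.1)

# Least-squares estimate of the AR(1) coefficient.
a_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

# Laplace output perturbation as one common way to release a
# differentially private parameter; both values are placeholders.
epsilon, sensitivity = 1.0, 0.05
a_private = a_hat + rng.laplace(scale=sensitivity / epsilon)
```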

Item: Noiseless Privacy: Definition, Guarantees, and Applications
Farokhi, F (Institute of Electrical and Electronics Engineers (IEEE), 2021)
In this paper, we define noiseless privacy, as a non-stochastic rival to differential privacy, requiring that the outputs of a mechanism (i.e., the function composition of a privacy-preserving mapping and a query) attain only a few values while varying the data of an individual (the logarithm of the number of distinct values is bounded by the privacy budget). Therefore, the output of the mechanism is not fully informative of the data of the individuals in the dataset. We prove several guarantees for noiselessly-private mechanisms. The information content of the output about the data of an individual, even if an adversary knows all the other entries of the private dataset, is bounded by the privacy budget. The zero-error capacity of memoryless channels using noiselessly-private mechanisms for transmission is upper bounded by the privacy budget. The performance of a non-stochastic hypothesis-testing adversary is bounded again by the privacy budget. Assuming that an adversary has access to a stochastic prior on the dataset, we prove that the estimation error of the adversary for individual entries of the dataset is lower bounded by a decreasing function of the privacy budget. In this case, we also show that the maximal leakage is bounded by the privacy budget. In addition to privacy guarantees, we prove that noiselessly-private mechanisms admit a composition theorem and that post-processing does not weaken their privacy guarantees. We prove that quantization or binning can ensure noiseless privacy if the number of quantization levels is appropriately selected based on the sensitivity of the query and the privacy budget. Finally, we illustrate the privacy merits of noiseless privacy using multiple datasets in energy, transport, and finance.
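In symbols (a paraphrase of the abstract in illustrative notation, not necessarily the paper's), a mechanism $M$ would be $\epsilon$-noiselessly private if, for every dataset $x$ and every individual $i$,

```latex
\log \Big|\big\{\, M(x') \;:\; x' \text{ agrees with } x \text{ on all entries except possibly entry } i \,\big\}\Big| \;\le\; \epsilon,
```

i.e., varying one individual's data can move the output across at most $e^{\epsilon}$ distinct values, so the output alone cannot pin down that individual's entry.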

Item: Distributionally-robust machine learning using locally differentially-private data
Farokhi, F (SPRINGER HEIDELBERG, 2021-06-10)
We consider machine learning, particularly regression, using locally differentially-private datasets. The Wasserstein distance is used to define an ambiguity set centered at the empirical distribution of the dataset corrupted by local differential privacy noise. The radius of the ambiguity set is selected based on the privacy budget, the spread of the data, and the size of the problem. Machine learning with the private dataset is rewritten as a distributionally-robust optimization. For general distributions, the distributionally-robust optimization problem can be relaxed as a regularized machine learning problem with the Lipschitz constant of the machine learning model as a regularizer. For Gaussian data, the distributionally-robust optimization problem can be solved exactly to find an optimal regularizer. Training with this regularizer can be posed as a semidefinite program.
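A minimal numerical sketch of the relaxation described above, assuming a linear model, squared loss, and Laplace noise as the local-DP mechanism (all of these choices, and the data, are illustrative): the Lipschitz constant of $x \mapsto w^\top x$ is $\|w\|_2$, which enters the objective as a regularizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data, then Laplace noise as a stand-in for a
# local differential privacy mechanism on the features.
n = 200
X = rng.normal(size=(n, 2))
y = X @ np.array([1.5, -2.0]) + 0.1 * rng.normal(size=n)
eps = 1.0                                   # illustrative budget
X_priv = X + rng.laplace(scale=1.0 / eps, size=X.shape)

def loss(w, lam):
    # Empirical risk on the privatized data plus the Lipschitz
    # constant of the linear model (the norm of w) as regularizer.
    return np.mean((X_priv @ w - y) ** 2) + lam * np.linalg.norm(w)

# Subgradient descent on the regularized objective.
w = np.zeros(2)
lam, lr = 0.5, 0.01
for _ in range(2000):
    grad = 2 * X_priv.T @ (X_priv @ w - y) / n
    grad += lam * w / max(np.linalg.norm(w), 1e-12)
    w -= lr * grad
```

Note the regularizer is the norm itself, not its square, mirroring the abstract's "Lipschitz constant as regularizer" relaxation rather than ridge regression.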

Item: A game-theoretic approach to adversarial linear Gaussian classification
Farokhi, F (Elsevier BV, 2021-09)
We employ a game-theoretic model to analyze the interaction between an adversary and a classifier. There are two (i.e., positive and negative) classes to which data points can belong. The adversary wants to maximize the probability of mis-detection for the positive class (i.e., the false negative probability) while not significantly modifying the data point, so that it still maintains favourable traits of the original class. The classifier, on the other hand, wants to maximize the probability of correct detection for the positive class (i.e., the true positive probability) subject to a lower bound on the probability of correct detection for the negative class (i.e., the true negative probability). For conditionally Gaussian data points (conditioned on the class) and linear support vector machine classifiers, we rewrite the optimization problems of the adversary and the classifier as convex problems and use best response dynamics to learn an equilibrium of the game. This results in computing a linear support vector machine classifier that is robust against adversarial input manipulations.
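One building block of the best-response dynamics is easy to make concrete: against a fixed linear classifier, the adversary's smallest Euclidean perturbation that pushes a positive point onto the decision boundary is an orthogonal projection. The function and numbers below are illustrative only, not the paper's formulation (which additionally trades off deviation from the original class).

```python
import numpy as np

def adversarial_shift(x, w, b, margin=0.0):
    """Smallest L2 perturbation moving x onto the side of the
    hyperplane w.x + b = margin preferred by the adversary; this is
    an orthogonal projection along the normal direction w."""
    score = w @ x + b
    if score <= margin:
        return x.copy()   # already classified as the adversary wants
    return x - (score - margin) * w / (w @ w)

# Fixed linear classifier and a positive point (w.x + b = 3 > 0).
w = np.array([1.0, 1.0])
b = -1.0
x = np.array([2.0, 2.0])
x_adv = adversarial_shift(x, w, b)
```

In the game of the abstract, the classifier would then refit against such shifted points, and the two best responses alternate until an equilibrium is reached.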

Item: Why Does Regularization Help with Mitigating Poisoning Attacks?
Farokhi, F (Springer, 2021)
We use distributionally-robust optimization for machine learning to mitigate the effect of data poisoning attacks. We provide performance guarantees for the trained model on the original data (not including the poison records) by training the model for the worst-case distribution on a neighbourhood around the empirical distribution (extracted from the training dataset corrupted by a poisoning attack) defined using the Wasserstein distance. We relax the distributionally-robust machine learning problem by finding an upper bound for the worst-case fitness based on the empirical sample-averaged fitness and the Lipschitz constant of the fitness function (on the data for given model parameters) as regularizer. For regression models, we prove that this regularizer is equal to the dual norm of the model parameters.
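The upper bound described in the abstract takes, for a loss that is Lipschitz in the data, roughly the following form (a sketch in illustrative notation):

```latex
\sup_{Q \,:\, W_1(Q,\, \hat{P}_n) \le \rho} \mathbb{E}_{Q}\!\left[\ell(w;\xi)\right]
\;\le\;
\frac{1}{n}\sum_{i=1}^{n} \ell(w;\xi_i) \;+\; \rho\,\mathrm{Lip}\!\left(\ell(w;\cdot)\right),
```

where $\hat{P}_n$ is the (possibly poisoned) empirical distribution, $W_1$ the Wasserstein distance, and $\rho$ the radius of the ambiguity set. For regression with a linear model, $\mathrm{Lip}(\ell(w;\cdot))$ scales with the dual norm $\|w\|_*$, which is the abstract's dual-norm regularizer.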

Item: A Fundamental Bound on Performance of Non-Intrusive Load Monitoring Algorithms with Application to Smart-Meter Privacy
Farokhi, F (Elsevier BV, 2020)
We prove that the expected estimation error of non-intrusive load monitoring algorithms is lower bounded by the trace of the inverse of the cross-correlation matrix between the derivatives of the load profiles of the appliances. We use this fundamental bound to develop privacy-preserving policies. Particularly, we devise a load-scheduling policy by maximizing the lower bound on the expected estimation error of non-intrusive load monitoring algorithms.
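In illustrative notation (the symbols here are an assumption, not the paper's), with $\Gamma$ the cross-correlation matrix of the derivatives $\dot{x}_i$ of the appliance load profiles, the bound reads roughly:

```latex
\mathbb{E}\!\left[\|\hat{x} - x\|_2^2\right] \;\ge\; \operatorname{tr}\!\left(\Gamma^{-1}\right),
\qquad \Gamma_{ij} = \mathbb{E}\!\left[\dot{x}_i\, \dot{x}_j\right],
```

so the load-scheduling policy in the abstract shapes the appliance schedules to make $\operatorname{tr}(\Gamma^{-1})$ as large as possible, degrading any disaggregation algorithm's accuracy.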

Item: Linear quadratic control computation for systems with a directed tree structure
Zafar, A; Farokhi, F; Cantoni, M (ELSEVIER, 2020-01-01)

Item: Privacy Against State Estimation: An Optimization Framework based on the Data Processing Inequality
Murguia, C; Shames, I; Farokhi, F; Nesic, D (ELSEVIER, 2020-01-01)
Page 1 of 3