Electrical and Electronic Engineering - Research Publications

Search Results

Now showing 1 - 10 of 14
  • Item
    On Privacy of Quantized Sensor Measurements through Additive Noise
    Murguia, C ; Shames, I ; Farokhi, F ; Nesic, D (IEEE, 2018-01-01)
    We study the problem of maximizing the privacy of quantized sensor measurements by adding random variables. In particular, we consider the setting where information about the state of a process is obtained using noisy sensor measurements. This information is quantized and sent to a remote station through an unsecured communication network. It is desired to keep the state of the process private; however, because the network is not secure, adversaries might have access to sensor information, which could be used to estimate the process state. To prevent accurate state estimation, we add random numbers to the quantized sensor measurements and send the sum to the remote station instead. The distribution of these random variables is designed to minimize the mutual information between the sum and the quantized sensor measurements for a desired level of distortion, i.e., how different the sum and the quantized sensor measurements are allowed to be. Simulations are presented to illustrate our results.
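The mechanism in this abstract can be illustrated with a small simulation. The sketch below is a simplification: the paper designs the noise distribution by solving an optimization problem, whereas here we simply add discrete uniform noise as a stand-in and estimate the resulting mutual information from samples.

```python
import math
import random
from collections import Counter

random.seed(1)

def quantize(x, step=1.0):
    """Mid-tread uniform quantizer returning an integer level."""
    return round(x / step)

def empirical_mi_bits(a, b):
    """Plug-in estimate of the mutual information I(A; B) in bits."""
    n = len(a)
    pab, pa, pb = Counter(zip(a, b)), Counter(a), Counter(b)
    return sum((c / n) * math.log2(c * n / (pa[x] * pb[y]))
               for (x, y), c in pab.items())

# Quantized noisy measurements of a scalar process state.
q = [quantize(random.gauss(0.0, 2.0) + random.gauss(0.0, 0.3))
     for _ in range(20000)]

# Privacy-preserving report: add independent discrete noise before sending.
z = [qi + random.randint(-3, 3) for qi in q]

mi_clean = empirical_mi_bits(q, q)   # reference: sending q leaks H(Q) bits
mi_noisy = empirical_mi_bits(z, q)   # leakage after randomization is lower
print(mi_noisy < mi_clean)
```

Sending the noisy sum in place of the quantized measurement strictly reduces the information an eavesdropper obtains, at the cost of distortion proportional to the noise range.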
  • Item
    Security Versus Privacy
    Farokhi, F ; Esfahani, PM (IEEE, 2019-01-18)
    Linear queries can be submitted to a server containing private data. The server provides responses to the queries, corrupted by additive noise, to preserve the privacy of those whose data is stored on the server. A measure of privacy is defined that is inversely proportional to the trace of the Fisher information matrix. It is assumed that an adversary can inject a false bias into the responses. Thus, a measure of security is defined based on the Kullback-Leibler divergence between the probability density functions of the response with and without the bias. An optimization problem for balancing privacy and security is proposed and solved. It is shown that the level of guaranteed privacy times the level of security is always upper bounded by a constant. Therefore, by increasing the level of privacy, the security guarantees can only be weakened, and vice versa.
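The constant privacy-security product is easy to see in a scalar toy case (the paper treats the general matrix setting). For a response corrupted by Gaussian noise N(0, s^2), the Fisher information is 1/s^2, so the privacy measure is s^2; the KL divergence between the biased and unbiased response densities is b^2/(2 s^2). Their product is b^2/2 regardless of the noise level:

```python
def privacy_gaussian(sigma):
    """Privacy measure for a scalar response corrupted by N(0, sigma^2)
    noise: the inverse of the Fisher information 1/sigma^2."""
    return sigma ** 2

def security_gaussian(sigma, bias):
    """Security measure: KL divergence between biased and unbiased
    responses, KL(N(bias, s^2) || N(0, s^2)) = bias^2 / (2 s^2)."""
    return bias ** 2 / (2 * sigma ** 2)

bias = 2.0
products = [privacy_gaussian(s) * security_gaussian(s, bias)
            for s in (0.5, 1.0, 4.0)]
print(products)  # constant bias^2 / 2 = 2.0 for every noise level
```

Raising the noise level buys privacy but dilutes the detectability of the injected bias by exactly the same factor, which is the trade-off the abstract describes.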
  • Item
    Measuring Information Leakage in Non-stochastic Brute-Force Guessing
    Farokhi, F ; Ding, N (IEEE, 2021)
    We propose an operational measure of information leakage in a non-stochastic setting to formalize privacy against a brute-force guessing adversary. We use uncertain variables, non-probabilistic counterparts of random variables, to construct a guessing framework in which an adversary is interested in determining private information based on uncertain reports. We consider brute-force trial-and-error guessing in which an adversary can potentially check all the possibilities of the private information that are compatible with the available outputs to find the actual private realization. The ratio of the worst-case number of guesses for the adversary in the presence of the output and in the absence of it captures the reduction in the adversary's guessing complexity and is thus used as a measure of private information leakage. We investigate the relationship between the newly developed measure of information leakage and maximin information as well as stochastic maximal leakage, which are shown to arise in one-shot guessing.
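The guess-count ratio is concrete for a finite domain and a deterministic report y = f(x); the minimal sketch below (names and the example functions are illustrative, not from the paper) computes it:

```python
def brute_force_leakage(domain, f):
    """Worst-case guessing reduction for a deterministic report y = f(x).

    Without the output the adversary may need to try every x in the domain;
    given the output y, only the x's compatible with y remain.  The ratio
    of the two worst-case guess counts measures the leakage."""
    guesses_blind = len(list(domain))
    compatible = {}
    for x in domain:
        compatible.setdefault(f(x), []).append(x)
    guesses_with_output = max(len(xs) for xs in compatible.values())
    return guesses_blind / guesses_with_output

domain = range(16)
leak_coarse = brute_force_leakage(domain, lambda x: x // 4)  # 4 x's per output
leak_exact = brute_force_leakage(domain, lambda x: x)        # output reveals x
print(leak_coarse, leak_exact)  # 4.0 16.0
```

A coarser report leaves more compatible candidates per output and therefore leaks less; taking log2 of the ratio expresses the reduction in bits.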
  • Item
    Non-Stochastic Private Function Evaluation
    Farokhi, F ; Nair, G (IEEE, 2021-04-11)
    We consider private function evaluation to provide query responses based on private data of multiple untrusted entities in such a way that each cannot learn something substantially new about the data of others. First, we introduce perfect non-stochastic privacy in a two-party scenario. Perfect privacy amounts to conditional unrelatedness of the query response and the private uncertain variable of other individuals conditioned on the uncertain variable of a given entity. We show that perfect privacy can be achieved for queries that are functions of the common uncertain variable, a generalization of the common random variable. We compute the closest approximation of the queries that do not take this form. To provide a trade-off between privacy and utility, we relax the notion of perfect privacy. We define almost perfect privacy and show that this new definition equates to using conditional disassociation instead of conditional unrelatedness in the definition of perfect privacy. Then, we generalize the definitions to multi-party function evaluation (more than two data entities). We prove that uniform quantization of query responses, where the quantization resolution is a function of privacy budget and sensitivity of the query (cf., differential privacy), achieves function evaluation privacy.
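The closing claim, that uniform quantization with a resolution tied to the privacy budget and the query's sensitivity achieves function evaluation privacy, can be sketched as follows. The specific rule delta = sensitivity / budget is an illustrative choice, not the paper's exact formula:

```python
def private_response(query_value, sensitivity, budget):
    """Uniformly quantized query response.  The quantization resolution
    grows with the query's sensitivity and shrinks with the privacy
    budget (delta = sensitivity / budget is an illustrative rule)."""
    delta = sensitivity / budget
    return delta * round(query_value / delta)

# Two datasets whose query values differ by no more than the sensitivity
# can land in the same quantization cell, hiding which one was used.
a = private_response(10.2, sensitivity=1.0, budget=0.25)  # delta = 4
b = private_response(11.1, sensitivity=1.0, budget=0.25)
print(a, b)  # 12.0 12.0
```

A smaller budget forces a coarser grid, so more of the underlying data values become indistinguishable from the response, mirroring the sensitivity-scaled noise of differential privacy in a non-stochastic way.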
  • Item
    Using Rényi-divergence and Arimoto-Rényi Information to Quantify Membership Information Leakage
    Farokhi, F (IEEE, 2021)
    Membership inference attacks, i.e., adversarial attacks inferring whether a data record was used for training a machine learning model, have recently been shown to pose a legitimate privacy risk in the machine learning literature. In this paper, we propose two measures of information leakage for investigating membership inference attacks, backed by results on binary hypothesis testing in the information theory literature. The first measure of information leakage is defined using the Rényi α-divergence between the distributions of the output of a machine learning model for data records that are in and out of the training dataset. The second measure of information leakage is based on the Arimoto-Rényi α-information between the membership random variable (whether the data record is in or out of the training dataset) and the output of the machine learning model. These measures of leakage are shown to be related to each other. We compare the proposed measures of information leakage with α-leakage from the information-theoretic privacy literature to establish some useful properties. We establish an upper bound on the α-divergence information leakage as a function of the privacy budget for differentially private machine learning models.
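The first leakage measure reduces to a standard formula for discrete distributions, D_alpha(P || Q) = 1/(alpha-1) * log(sum_i p_i^alpha q_i^(1-alpha)). The toy distributions below are invented stand-ins for a model's output distribution on training ("in") and non-training ("out") records:

```python
import math

def renyi_divergence(p, q, alpha):
    """Renyi alpha-divergence D_alpha(P || Q) in bits, for discrete
    distributions given as equal-length probability lists (alpha != 1)."""
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q))
    return math.log2(s) / (alpha - 1)

# Hypothetical model-output distributions for in/out records.
p_in = [0.7, 0.2, 0.1]
p_out = [0.4, 0.3, 0.3]

d2 = renyi_divergence(p_in, p_out, alpha=2.0)
print(d2 > 0.0)  # identical in/out distributions would give zero leakage
```

The further apart the in- and out-of-training output distributions are, the larger the divergence, and the easier the hypothesis test "was this record in the training set?" becomes.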
  • Item
    Secure Control of Nonlinear Systems Using Semi-Homomorphic Encryption
    Lin, Y ; Farokhi, F ; Shames, I ; Nesic, D (IEEE, 2018-01-01)
    A secure nonlinear networked control system (NCS) design using semi-homomorphic encryption, namely Paillier encryption, is studied. Under certain assumptions, semi-homomorphic encryption allows the control signal to be computed directly from encrypted signals. Thus, the security of the NCS is further enhanced by concealing information on the controller side. However, compared to standard NCSs, this induces additional technical difficulties in the design and analysis. In this paper, the stabilization of a discrete-time nonlinear NCS is considered. More specifically, sufficient conditions on the encryption parameters that guarantee stability of the NCS are provided, and a trade-off between the encryption parameters and the ultimate bound on the state is shown.
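The additive homomorphism that makes this possible is easy to demonstrate with a toy Paillier implementation. The primes below are tiny and deliberately insecure; a real deployment uses primes of over a thousand bits, and the controller arithmetic in the paper is more involved than this single gain:

```python
import math
import random

def keygen(p, q):
    """Toy Paillier key generation from two small (insecure) primes,
    using the common choice g = n + 1."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # modular inverse; valid because g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    u = pow(c, lam, n * n)
    return (((u - 1) // n) * mu) % n

pub, priv = keygen(1009, 1013)
n2 = pub[0] ** 2

# Multiplying ciphertexts adds plaintexts; raising a ciphertext to a
# plaintext power scales it -- enough to evaluate a linear control law
# u = K * x on the encrypted state without ever decrypting it.
c_sum = (encrypt(pub, 3) * encrypt(pub, 4)) % n2
c_gain = pow(encrypt(pub, 5), 7, n2)
print(decrypt(priv, c_sum), decrypt(priv, c_gain))  # 7 35
```

Because only additions and plaintext scalings are available (hence "semi"-homomorphic), gains must be quantized to integers, which is where the paper's conditions on the encryption parameters enter.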
  • Item
    Secure and Private Cloud-Based Control Using Semi-Homomorphic Encryption
    Farokhi, F ; Shames, I ; Batterham, N (Elsevier, 2016)
    Networked control systems with encrypted sensor measurements are considered. Semi-homomorphic encryption is used so that the controller can perform the required computations on the encrypted data. Specifically, in this paper, the Paillier encryption technique is utilized, which allows summation of plaintext data to be performed through multiplication of the corresponding ciphertexts. Conditions on the parameters of the encryption technique are provided that guarantee the stability of the closed-loop system and ensure certain bounds on the closed-loop performance.
  • Item
    Compressive Sensing in Fault Detection
    Farokhi, F ; Shames, I (IEEE, 2018-08-09)
    Randomly generated tests are used to identify faulty sensors in large-scale discrete-time linear time-invariant dynamical systems with high probability. It is proved that the number of required tests for successfully identifying the location of the faulty sensors (with high probability) scales logarithmically with the number of sensors and quadratically with the maximum number of faulty sensors. It is also proved that the problem of decoding the identity of the faulty sensors from the random tests can be cast as a linear program, and can therefore be solved reliably and efficiently even for large-scale systems. A numerical example based on automated irrigation networks demonstrates the results.
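The flavor of the result can be conveyed with a much-simplified group-testing sketch. The paper's tests come from the dynamical system's outputs and are decoded by linear programming; here we substitute an OR-type test model and the elementary combinatorial decoder, which is not the paper's method:

```python
import random

random.seed(7)

def identify_faulty(n_sensors, faulty, n_tests):
    """Simplified stand-in for randomized fault tests: each test probes a
    random subset of sensors and fires iff it touches a faulty one.  Any
    sensor touched by a silent (non-firing) test is provably healthy, so
    whatever remains suspect after all tests is declared faulty."""
    suspects = set(range(n_sensors))
    for _ in range(n_tests):
        probe = {s for s in range(n_sensors) if random.random() < 0.5}
        if not (probe & faulty):   # test stays silent
            suspects -= probe      # everything it probed is healthy
    return suspects

faulty = {3, 11}
decoded = identify_faulty(n_sensors=40, faulty=faulty, n_tests=200)
print(sorted(decoded))
```

As in the paper, the number of tests needed for reliable recovery grows only logarithmically in the number of sensors, so random testing remains practical at large scale.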
  • Item
    An Explicit Formula for the Zero-Error Feedback Capacity of a Class of Finite-State Additive Noise Channels
    Saberi, A ; Farokhi, F ; Nair, GN (IEEE, 2020)
    It is known that for a discrete channel with correlated additive noise, the ordinary capacity, with or without feedback, equals log q − H(Z), where H(Z) is the entropy rate of the noise process Z and q is the alphabet size. In this paper, a class of finite-state additive noise channels is introduced. It is shown that the zero-error feedback capacity of such channels is either zero or C_0f = log q − h(Z), where h(Z) is the topological entropy of the noise process. Moreover, the zero-error capacity without feedback is lower bounded by log q − 2h(Z). We explicitly compute the zero-error feedback capacity for several examples, including channels with isolated errors and a Gilbert-Elliott channel.
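For a finite-state noise process, the topological entropy in the capacity formula is log2 of the spectral radius of the noise process's transition graph, so the formula is directly computable. The sketch below uses the isolated-errors constraint (no two consecutive error symbols) as an assumed example of the paper's channel class:

```python
import math

def spectral_radius(A, iters=200):
    """Dominant eigenvalue of a nonnegative matrix via power iteration."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

def zero_error_feedback_capacity(q, transition):
    """C_0f = log2(q) - h(Z), with h(Z) the topological entropy of the
    noise process: log2 of the spectral radius of its transition graph."""
    return math.log2(q) - math.log2(spectral_radius(transition))

# Isolated errors: an error symbol may not follow an error symbol.
# Allowed transitions over the states {after no-error, after error}.
T = [[1, 1],
     [1, 0]]
cap = zero_error_feedback_capacity(q=2, transition=T)
print(round(cap, 3))  # log2(2) - log2(golden ratio) = 0.306 bits/use
```

The admissible noise sequences here form the golden-mean shift, whose entropy is log2 of the golden ratio, so constraining the noise strictly increases the zero-error feedback capacity above zero.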