Publications

Forthcoming
T. Chanyaswad, A. Dytso, H. V. Poor, and P. Mittal, “A Differential Privacy Mechanism Design Under Matrix-Valued Query,” Forthcoming. Preprint Version. Abstract:
Traditionally, differential privacy mechanism design has been tailored for a scalar-valued query function. Although many mechanisms such as the Laplace and Gaussian mechanisms can be extended to a matrix-valued query function by adding i.i.d. noise to each element of the matrix, this method is often suboptimal as it forfeits an opportunity to exploit the structural characteristics typically associated with matrix analysis. In this work, we consider the design of a differential privacy mechanism specifically for a matrix-valued query function. The proposed solution is to utilize matrix-variate noise, as opposed to the traditional scalar-valued noise. In particular, we propose a novel differential privacy mechanism called the Matrix-Variate Gaussian (MVG) mechanism, which adds matrix-valued noise drawn from a matrix-variate Gaussian distribution. We prove that the MVG mechanism preserves (ϵ,δ)-differential privacy, and show that it allows the structural characteristics of the matrix-valued query function to be exploited naturally. Furthermore, due to the multi-dimensional nature of the MVG mechanism and the matrix-valued query, we introduce the concept of directional noise, which can be utilized to mitigate the impact the noise has on the utility of the query. Finally, we demonstrate the performance of the MVG mechanism and the advantages of directional noise using three matrix-valued queries on three privacy-sensitive datasets. We find that the MVG mechanism notably outperforms four previous state-of-the-art approaches, and provides comparable utility to the non-private baseline. Our work thus presents a promising prospect for both future research and implementation of differential privacy for matrix-valued query functions.
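
For readers unfamiliar with matrix-variate noise, the following is a minimal sketch of the sampling step only: it perturbs a matrix-valued query answer with a draw from a matrix-variate Gaussian specified by row and column covariances. The covariances `row_cov` and `col_cov` are assumed inputs; calibrating them to the query's sensitivity and to (ϵ,δ) is the substance of the MVG mechanism and is not reproduced here.

```python
import numpy as np

def mvg_perturb(query_output, row_cov, col_cov, rng=None):
    """Add noise drawn from a matrix-variate Gaussian MN(0, row_cov, col_cov)."""
    rng = np.random.default_rng() if rng is None else rng
    n, m = query_output.shape
    z = rng.standard_normal((n, m))      # i.i.d. standard normal entries
    a = np.linalg.cholesky(row_cov)      # a @ a.T == row_cov
    b = np.linalg.cholesky(col_cov).T    # b.T @ b == col_cov
    return query_output + a @ z @ b      # noise with the specified row/column covariances

# Illustrative only: a 3x4 query answer with isotropic covariances.
answer = np.arange(12, dtype=float).reshape(3, 4)
noisy = mvg_perturb(answer, row_cov=0.5 * np.eye(3), col_cov=0.5 * np.eye(4))
```
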
T. Chanyaswad, A. Dytso, H. V. Poor, and P. Mittal, “MVG Mechanism: Differential Privacy under Matrix-Valued Query,” Forthcoming. Preprint Version. Abstract:
Differential privacy mechanism design has traditionally been tailored for a scalar-valued query function. Although many mechanisms such as the Laplace and Gaussian mechanisms can be extended to a matrix-valued query function by adding i.i.d. noise to each element of the matrix, this method is often suboptimal as it forfeits an opportunity to exploit the structural characteristics typically associated with matrix analysis. To address this challenge, we propose a novel differential privacy mechanism called the Matrix-Variate Gaussian (MVG) mechanism, which adds a matrix-valued noise drawn from a matrix-variate Gaussian distribution, and we rigorously prove that the MVG mechanism preserves (ϵ,δ)-differential privacy. Furthermore, we introduce the concept of directional noise made possible by the design of the MVG mechanism. Directional noise allows the impact of the noise on the utility of the matrix-valued query function to be moderated. Finally, we experimentally demonstrate the performance of our mechanism using three matrix-valued queries on three privacy-sensitive datasets. We find that the MVG mechanism notably outperforms four previous state-of-the-art approaches, and provides comparable utility to the non-private baseline. Our work thus presents a promising prospect for both future research and implementation of differential privacy for matrix-valued query functions.
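
As a companion to the sketch above, the lines below show one way the directional-noise idea can be encoded: a row covariance whose eigenvectors are chosen directions and whose eigenvalues assign less variance to utility-critical directions. The directions and variance levels are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Assumed, illustrative directions: an arbitrary orthonormal basis for R^3.
directions, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((3, 3)))
variances = np.array([0.1, 1.0, 1.0])    # less noise along the first (utility-critical) direction
row_cov = directions @ np.diag(variances) @ directions.T

# row_cov can then be handed to a matrix-variate Gaussian sampler such as
# mvg_perturb() from the sketch above.
```
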
T. Chanyaswad, C. Liu, and P. Mittal, “Coupling Random Orthonormal Projection with Gaussian Generative Model for Non-Interactive Private Data Release,” Forthcoming. Preprint Version. Abstract:

A key challenge facing the design of differential privacy in the non-interactive setting is to maintain the utility of the released data. To overcome this challenge, we utilize the Diaconis-Freedman-Meckes (DFM) effect, which states that most projections of high-dimensional data are nearly Gaussian. Hence, we propose the RON-Gauss model that leverages the novel combination of dimensionality reduction via random orthonormal (RON) projection and the Gaussian generative model for synthesizing differentially-private data. We analyze how RON-Gauss benefits from the DFM effect, and present multiple algorithms for a range of machine learning applications, including both unsupervised and supervised learning. Furthermore, we rigorously prove that (a) our algorithms satisfy the strong ϵ-differential privacy guarantee, and (b) RON projection can lower the level of perturbation required for differential privacy. Finally, we illustrate the effectiveness of RON-Gauss under three common machine learning applications -- clustering, classification, and regression -- on three large real-world datasets. Our empirical results show that (a) RON-Gauss outperforms previous approaches by up to an order of magnitude, and (b) loss in utility compared to the non-private real data is small. Thus, RON-Gauss can serve as a key enabler for real-world deployment of privacy-preserving data release.
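
Below is a minimal sketch of the pipeline described above, under simplifying assumptions: project the data onto a random orthonormal (RON) basis, fit a Gaussian model to the projection, perturb its mean and covariance, and sample synthetic records. The Laplace noise scale is a placeholder; the paper calibrates the perturbation to ϵ through the sensitivity of the mean and covariance estimates.

```python
import numpy as np

def ron_gauss_synthesize(X, reduced_dim, noise_scale, n_samples, rng=None):
    """Project onto a random orthonormal basis, fit a perturbed Gaussian, sample synthetic data."""
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1]
    W, _ = np.linalg.qr(rng.standard_normal((d, reduced_dim)))   # RON projection (d x p)
    Z = X @ W                                                     # projected data (n x p)
    mu = Z.mean(axis=0) + rng.laplace(scale=noise_scale, size=reduced_dim)
    cov = np.cov(Z, rowvar=False) + rng.laplace(scale=noise_scale,
                                                size=(reduced_dim, reduced_dim))
    vals, vecs = np.linalg.eigh((cov + cov.T) / 2)                # symmetrize, then
    cov = vecs @ np.diag(np.clip(vals, 1e-6, None)) @ vecs.T      # clip to keep it positive definite
    return rng.multivariate_normal(mu, cov, size=n_samples), W
```
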

2018
M. Al, T. Chanyaswad, and S. Y. Kung, “Multi-Kernel, Deep Neural Network, and Hybrid Models for Privacy-Preserving Machine Learning,” 2018 International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
T. Chanyaswad, M. Al, and S. Y. Kung, “Outlier Removal for Enhancing Kernel-Based Classifier via the Discriminant Information,” 2018 International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
2017
T. Chanyaswad, M. Al, J. M. Chang, and S. Y. Kung, “Differential Mutual Information Forward Search For Multi-Kernel Discriminant-Component Selection With An Application To Privacy-Preserving Classification,” IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2017. Publisher's Version. Abstract:
In machine learning, feature engineering has been a pivotal stage in building a high-quality predictor. In particular, this work explores the multiple Kernel Discriminant Component Analysis (mKDCA) feature-map and its variants. However, seeking the right subset of kernels for the mKDCA feature-map can be challenging. Therefore, we consider the problem of kernel selection, and propose an algorithm based on Differential Mutual Information (DMI) and incremental forward search. DMI serves as an effective metric for selecting kernels, as it is theoretically grounded in mutual information and Fisher's discriminant analysis. On the other hand, incremental forward search plays a role in removing redundancy among kernels. Finally, we illustrate the potential of the method via an application in privacy-aware classification, and show on three mobile-sensing datasets that selecting an effective set of kernels for mKDCA feature-maps can enhance the utility classification performance while successfully preserving data privacy. Specifically, the results show that the proposed DMI forward search method can perform better than the state-of-the-art, and, with much smaller computational cost, can perform as well as the optimal, yet computationally expensive, exhaustive search.
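
The incremental forward search itself is straightforward to sketch; the DMI criterion is the paper's contribution and is represented below only by a caller-supplied scoring function (e.g., cross-validated utility accuracy), so this is a generic greedy selection, not the paper's exact algorithm.

```python
def forward_kernel_search(kernel_names, score_fn, max_kernels):
    """Greedily add the candidate kernel that most improves score_fn(selected subset)."""
    selected, best_score = [], float("-inf")
    while len(selected) < max_kernels:
        best_candidate = None
        for name in kernel_names:
            if name in selected:
                continue
            score = score_fn(selected + [name])
            if score > best_score:
                best_score, best_candidate = score, name
        if best_candidate is None:          # no remaining kernel improves the score
            break
        selected.append(best_candidate)
    return selected

# Hypothetical usage: candidate kernels identified by name and scored by a
# caller-provided criterion (DMI in the paper).
# chosen = forward_kernel_search(["rbf_0.1", "rbf_1.0", "poly_2"], my_score_fn, max_kernels=2)
```
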
A. Filipowicz, T. Chanyaswad, and S. Y. Kung, “Desensitized RDCA Subspaces for Compressive Privacy in Machine Learning,” arXiv preprint arXiv:1707.07770 [cs.CR], 2017. Publisher's Version. Abstract:

The quest for better data analysis and artificial intelligence has led to more and more data being collected and stored. As a consequence, more data are exposed to malicious entities. This paper examines the problem of privacy in machine learning for classification. We utilize Ridge Discriminant Component Analysis (RDCA) to desensitize data with respect to a privacy label. Based on five experiments, we show that desensitization by RDCA can effectively protect privacy (i.e., low accuracy on the privacy label) with small loss in utility. On the HAR and CMU Faces datasets, the use of desensitized data results in random-guess-level accuracies for privacy, at the cost of average drops of 5.14% and 0.04% in the utility accuracies. For the Semeion Handwritten Digit dataset, accuracies of the privacy-sensitive digits are almost zero, while the accuracies for the utility-relevant digits drop by 7.53% on average. This presents a promising solution to the problem of privacy in machine learning for classification.
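
As a rough illustration of desensitization through a discriminant subspace (a simplified stand-in, not the paper's exact RDCA formulation), the sketch below computes discriminant directions with respect to the privacy label and keeps only the least-discriminant ones; the ridge value and retained dimensionality are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def desensitize(X, privacy_labels, keep_dim, ridge=1e-3):
    """Project X onto directions that discriminate the privacy label the least."""
    privacy_labels = np.asarray(privacy_labels)
    d = X.shape[1]
    mu = X.mean(axis=0)
    Sb = np.zeros((d, d))                  # between-class scatter w.r.t. the privacy label
    Sw = ridge * np.eye(d)                 # within-class scatter, ridge-regularized
    for c in np.unique(privacy_labels):
        Xc = X[privacy_labels == c]
        diff = Xc.mean(axis=0) - mu
        Sb += len(Xc) * np.outer(diff, diff)
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
    # Generalized eigenproblem Sb v = lambda Sw v; eigenvalues are returned in
    # ascending order, so the leading columns span the least-discriminant subspace.
    _, vecs = eigh(Sb, Sw)
    W = vecs[:, :keep_dim]
    return X @ W, W
```
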

T. Chanyaswad, M. J. Chang, and S. Y. Kung, “A Compressive Multi-Kernel Method for Privacy-Preserving Machine Learning,” International Joint Conference on Neural Networks (IJCNN) 2017. IEEE, 2017. Publisher's Version. Abstract:

As analytic tools become more powerful and more data are generated on a daily basis, the issue of data privacy arises. This motivates the design of privacy-preserving machine learning algorithms. Given two objectives, namely, utility maximization and privacy-loss minimization, this work builds on two previously non-intersecting regimes: Compressive Privacy and the multi-kernel method. Compressive Privacy is a privacy framework that employs a utility-preserving lossy-encoding scheme to protect the privacy of the data, while the multi-kernel method is a kernel-based machine learning regime that explores the idea of using multiple kernels for building better predictors. In relation to the neural-network architecture, the multi-kernel method can be described as a two-hidden-layered network with its width proportional to the number of kernels. The proposed compressive multi-kernel method consists of two stages: the compression stage and the multi-kernel stage. The compression stage follows the Compressive Privacy paradigm to provide the desired privacy protection. Each kernel matrix is compressed with a lossy projection matrix derived from the Discriminant Component Analysis (DCA). The multi-kernel stage uses the signal-to-noise ratio (SNR) score of each kernel to non-uniformly combine multiple compressive kernels. The proposed method is evaluated on two mobile-sensing datasets, MHEALTH and HAR, where activity recognition is defined as utility and person identification is defined as privacy. The results show that the compression regime is successful in privacy preservation, as the privacy classification accuracies are almost at the random-guess level in all experiments. On the other hand, the novel SNR-based multi-kernel combination improves utility classification accuracy over the state-of-the-art on both datasets. These results indicate a promising direction for research in privacy-preserving machine learning.
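
The multi-kernel stage described above amounts to a non-uniform convex combination of kernel matrices; a minimal sketch follows, with the compression stage (DCA-derived in the paper) and the SNR scores themselves taken as given inputs.

```python
import numpy as np

def combine_kernels(kernel_mats, snr_scores):
    """Weight each (already compressed) kernel matrix by its normalized SNR score and sum."""
    weights = np.asarray(snr_scores, dtype=float)
    weights = weights / weights.sum()
    return sum(w * K for w, K in zip(weights, kernel_mats))

# Hypothetical usage with two precomputed kernel matrices:
# K_combined = combine_kernels([K_rbf, K_poly], snr_scores=[2.3, 0.7])
```
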

S. Y. Kung, T. Chanyaswad, J. M. Chang, and P.-Y. Wu, “Collaborative PCA/DCA Learning Methods for Compressive Privacy,” ACM Transactions on Embedded Computing Systems (TECS), vol. 16, no. 3, pp. 76, 2017. Publisher's Version. Abstract:

In the internet era, the data being collected on consumers like us are growing exponentially and attacks on our privacy are becoming a real threat. To better assure our privacy, it is safer to let the data owner control the data to be uploaded to the network, as opposed to taking chances with data servers or third parties. To this end, we propose a privacy-preserving technique, named Compressive Privacy (CP), to enable the data creator to compress data via collaborative learning, so that the compressed data uploaded onto the internet will be useful only for the intended utility and will not be easily diverted to malicious applications.

For data in a high-dimensional feature vector space, a common approach to data compression is dimension reduction or, equivalently, subspace projection. The most prominent tool is Principal Component Analysis (PCA). For unsupervised learning, PCA can best recover the original data given a specific reduced dimensionality. However, in a supervised learning environment, it is more effective to adopt a supervised counterpart of PCA, known as Discriminant Component Analysis (DCA), in order to maximize the discriminant capability.

The DCA subspace analysis embraces two different subspaces. The signal subspace components of DCA are associated with the discriminant distance/power (related to the classification effectiveness), while the noise subspace components of DCA are tightly coupled with the recoverability and/or privacy protection. This paper will present three DCA-related data compression methods useful for privacy-preserving applications.

  • Utility-driven DCA: Because the rank of the signal subspace is limited by the number of classes, DCA can effectively support classification using a relatively small dimensionality (i.e., high compression).
  • Desensitized PCA: Incorporating a signal-subspace ridge into DCA leads to a variant that is especially effective for extracting privacy-preserving components. In this case, the eigenvalues of the noise space are made insensitive to the privacy labels and are ordered according to their corresponding component powers.
  • Desensitized K-means/SOM: Since revealing the K-means or SOM cluster structure could leak sensitive information, it is safer to perform K-means or SOM clustering on the desensitized PCA subspace.
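
To make the PCA-versus-DCA contrast above concrete, the toy sketch below compares unsupervised PCA with an LDA-style discriminant projection, used here only as a simplified stand-in for DCA; the paper's DCA additionally employs ridge regularization and a noise-subspace treatment not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))                                # toy data
y = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(int)    # toy utility label

Z_pca = PCA(n_components=2).fit_transform(X)                      # best reconstruction, label-agnostic
Z_lda = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)  # label-aware projection
```
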
2016
T. Chanyaswad, M. J. Chang, P. Mittal, and S. Y. Kung, “Discriminant-Component Eigenfaces for Privacy-Preserving Face Recognition,” IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP). IEEE, 2016. Publisher's Version. Abstract:

Over the past decades, face recognition has been a problem of critical interest in the machine learning and signal processing communities. However, conventional approaches such as eigenfaces do not protect the privacy of user data, which is emerging as an important design consideration in today's society. In this work, we leverage a supervised-learning subspace projection method called Discriminant Component Analysis (DCA) for privacy-preserving face recognition. By projecting the data onto the lower-dimensional signal subspace prescribed by DCA, high face recognition performance is achievable without compromising the privacy of the data owners. We evaluate our approach on three image datasets: the Yale, Olivetti, and Glasses datasets, the last of which is derived from the first two. Our approach can serve as a key enabler for real-world deployment of privacy-preserving face recognition applications, and provides a promising direction for researchers and the private sector.
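
A minimal sketch of the recognition side of the pipeline, assuming the projection matrix W (derived in the paper from DCA's signal subspace) is already available; the nearest-centroid matcher is an illustrative choice, not necessarily the classifier used in the paper.

```python
import numpy as np

def recognize(train_faces, train_ids, test_faces, W):
    """Project flattened face images with W and assign each test face to the nearest class centroid."""
    train_ids = np.asarray(train_ids)
    Z_train, Z_test = train_faces @ W, test_faces @ W        # project onto the signal subspace
    ids = np.unique(train_ids)
    centroids = np.stack([Z_train[train_ids == i].mean(axis=0) for i in ids])
    dists = np.linalg.norm(Z_test[:, None, :] - centroids[None, :, :], axis=2)
    return ids[dists.argmin(axis=1)]                          # predicted identity per test face
```
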

2012
V. V. Kulkarni, T. Chanyaswad, M. Riedel, and J. Kim, “Robust Tunable in vitro Transcriptional Oscillator Networks,” 50th Annual Allerton Conference on Communication, Control, and Computing. IEEE, pp. 114-119, 2012. Publisher's Version. Abstract:

Synthetic biology is facilitating novel methods and components to build in vivo and in vitro circuits to better understand and re-engineer biological networks. Circadian oscillators serve as molecular clocks that govern several important cellular processes such as cell division and apoptosis. Hence, the successful demonstration of synthetic oscillators has become a primary design target for many synthetic biology endeavors. Recently, three synthetic transcriptional oscillators were demonstrated by Kim and Winfree utilizing a modular architecture of synthetic gene analogues and a few enzymes. However, the periods and amplitudes of these synthetic oscillators were sensitive to initial conditions and allowed limited tunability. In addition, because the implementation is a closed system, the oscillations were observed to die out after a certain period of time. To increase the tunability and robustness of synthetic biochemical oscillators in the face of disturbances and modeling uncertainties, a control-theoretic approach for real-time adjustment of oscillator behaviors would be required. In this paper, assuming an open-system implementation is feasible, we demonstrate how dynamic inversion techniques can be used to synthesize the required controllers.
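
For readers unfamiliar with the control-theoretic tool mentioned above, the generic dynamic (feedback) inversion relation is summarized below; this is the standard textbook form for a control-affine system with an invertible input map, not the paper's specific oscillator model.

```latex
% Generic dynamic inversion (textbook form, not the paper's oscillator model):
% the inverting law cancels the nonlinearity so the virtual input v can be
% chosen to track a desired trajectory x_d.
\dot{x} = f(x) + g(x)\,u, \qquad
u = g(x)^{-1}\bigl(v - f(x)\bigr) \;\;\Longrightarrow\;\; \dot{x} = v,
\qquad v = \dot{x}_d - K\,(x - x_d).
```
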