Spring 2023 CS Cyber Security Reading Group
Members: Shubham Ayachit, Yashwanth Bandala, Dominika Bobik, Ethan Brinks, Yu Cai, Bo Chen (co-advisor), Haoyang Chen, Niusen Chen, John Claassen, Brandon Cox, Dev Sanghani, Josh Dafoe, Matthew Gervasi, Thomas Grifka, Trevor Hornsby, Stuart Hoxie, Ryan Klemm, Xinyu Lei (co-advisor), Xinyun Liu, Jean Mayo (co-advisor), Jacson Ott, Aditya Patil, Caleb Rother, Shuo Sun, Harsh Singh, Yuchen Wang, Drew Youngstrom, Xiaoyong Yuan
Detailed schedule:
Time: 4:00 – 5:00pm Friday, February 3, 2023
Location: Rekhi Hall G009/Zoom
Presenter: Haoyang Chen
Title: ML-DOCTOR: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models, Yugeng Liu, Rui Wen, et al., USENIX Security 2022.
Abstract: Inference attacks against Machine Learning (ML) models allow adversaries to learn sensitive information about training data, model parameters, etc. While researchers have studied, in depth, several kinds of attacks, they have done so in isolation. As a result, we lack a comprehensive picture of the risks caused by the attacks, e.g., the different scenarios they can be applied to, the common factors that influence their performance, the relationship among them, or the effectiveness of possible defenses. In this paper, we fill this gap by presenting a first-of-its-kind holistic risk assessment of different inference attacks against machine learning models. We concentrate on four attacks – namely, membership inference, model inversion, attribute inference, and model stealing – and establish a threat model taxonomy.
Our extensive experimental evaluation, run on five model architectures and four image datasets, shows that the complexity of the training dataset plays an important role with respect to the attack’s performance, while the effectiveness of model stealing and membership inference attacks are negatively correlated. We also show that defenses like DP-SGD and Knowledge Distillation can only mitigate some of the inference attacks. Our analysis relies on modular, reusable software, ML-DOCTOR, which enables ML model owners to assess the risks of deploying their models, and equally serves as a benchmark tool for researchers and practitioners.
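Membership inference, the first of the four attacks the paper assesses, can be illustrated with the simplest baseline: threshold the target model's confidence on a sample, since models tend to be more confident on their own training members. This is a hypothetical sketch of that baseline, not ML-DOCTOR's actual code; the function names and the threshold value are illustrative.

```python
import numpy as np

def membership_score(probs):
    """Confidence-based membership score: the model's max posterior
    probability for the queried sample."""
    return float(np.max(probs))

def infer_member(probs, threshold=0.9):
    """Guess 'training member' when the model is unusually confident."""
    return membership_score(probs) >= threshold
```

In practice the threshold is calibrated on shadow models or held-out data; stronger attacks train a classifier on the full posterior vector instead of a single statistic.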
Time: 4:00 – 5:00pm Friday, February 17, 2023
Location: Rekhi Hall G009/Zoom
Presenter: Xinyun Liu
Title: FLAME: Taming Backdoors in Federated Learning, Thien Duc Nguyen et al., USENIX Security 2022.
Abstract: Federated Learning (FL) is a collaborative machine learning approach allowing participants to jointly train a model without having to share their private, potentially sensitive local datasets with others. Despite its benefits, FL is vulnerable to so-called backdoor attacks, in which an adversary injects manipulated model updates into the federated model aggregation process so that the resulting model will provide targeted false predictions for specific adversary-chosen inputs. Proposed defenses against backdoor attacks based on detecting and filtering out malicious model updates consider only very specific and limited attacker models, whereas defenses based on differential privacy-inspired noise injection significantly deteriorate the benign performance of the aggregated model. To address these deficiencies, we introduce FLAME, a defense framework that estimates the sufficient amount of noise to be injected to ensure the elimination of backdoors. To minimize the required amount of noise, FLAME uses a model clustering and weight clipping approach. This ensures that FLAME can maintain the benign performance of the aggregated model while effectively eliminating adversarial backdoors. Our evaluation of FLAME on several datasets stemming from application areas including image classification, word prediction, and IoT intrusion detection demonstrates that FLAME removes backdoors effectively with a negligible impact on the benign performance of the models.
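The clipping-and-noise step described in the abstract can be sketched as follows. This is a minimal illustration in the spirit of FLAME, not the authors' implementation: it assumes the updates have already passed the clustering stage, uses the median update norm as the clip bound, and takes the noise scale as a free parameter.

```python
import numpy as np

def clip_and_noise_aggregate(updates, noise_scale=0.01, rng=None):
    """Aggregate client model updates with norm clipping plus Gaussian
    noise, so a single outsized (possibly backdoored) update cannot
    dominate the federated average.

    updates: list of 1-D numpy arrays (flattened model updates),
             assumed to be the benign-looking cluster.
    """
    rng = rng or np.random.default_rng(0)
    norms = [np.linalg.norm(u) for u in updates]
    clip_bound = np.median(norms)  # median norm as the clipping bound
    # Scale down any update whose norm exceeds the bound
    clipped = [u * min(1.0, clip_bound / (n + 1e-12))
               for u, n in zip(updates, norms)]
    aggregate = np.mean(clipped, axis=0)
    # Gaussian noise proportional to the clip bound smooths out
    # residual backdoor contributions
    aggregate += rng.normal(0.0, noise_scale * clip_bound,
                            size=aggregate.shape)
    return aggregate
```

With `noise_scale=0` this reduces to plain clipped averaging; the paper's contribution is estimating how much noise is actually sufficient to eliminate backdoors.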
Time: 4:00 – 5:00pm Friday, March 3, 2023
Location: Rekhi Hall G009/Zoom
Presenter: Ryan Klemm
Title and abstract to be added.
Time: 4:00 – 5:00pm Friday, March 17, 2023
Location: Rekhi Hall G009/Zoom
Presenter: Caleb Rother
Title: Storj: A Peer-to-Peer Cloud Storage Network (Storj white paper)
Abstract: A peer-to-peer cloud storage network implementing end-to-end encryption would allow users to transfer and share data without reliance on a third-party data provider. The removal of central controls would eliminate most traditional data failures and outages, as well as significantly increase security, privacy, and data control. A peer-to-peer network and basic encryption solve most of these problems, but users must be properly incentivized to participate in the network. We propose a solution to these additional problems using a challenge algorithm. In this way we can periodically and cryptographically check the integrity and availability of a file, and offer direct rewards to those maintaining it. In the absence of a peer-to-peer network, the described methods may be used to let users control, migrate, and validate their data on third-party data providers without the provider having direct access to the data.
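The challenge algorithm mentioned in the abstract can be illustrated with a simple keyed-digest audit: the data owner precomputes a set of one-time challenges before handing off the file, and later asks the storage node to reproduce a digest it could only compute if it still holds the intact file. This is an illustrative sketch, not Storj's actual protocol (which uses Merkle-tree-based proofs); all names here are hypothetical.

```python
import hashlib
import hmac
import secrets

def make_challenges(file_bytes, n=4):
    """Owner side: precompute n (key, expected-digest) pairs for later audits.
    Keys stay secret with the owner; the file is handed to the storage node."""
    challenges = []
    for _ in range(n):
        key = secrets.token_bytes(16)
        digest = hmac.new(key, file_bytes, hashlib.sha256).hexdigest()
        challenges.append((key, digest))
    return challenges

def respond(file_bytes, key):
    """Storage node side: answer a challenge over the file it stores."""
    return hmac.new(key, file_bytes, hashlib.sha256).hexdigest()

def audit(challenge, response):
    """Owner side: verify the node still holds the intact, available file."""
    _, expected = challenge
    return hmac.compare_digest(expected, response)
```

Each challenge is usable once, which is why schemes like this precompute a batch; a passing audit is the evidence on which the node's reward is paid.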
Time: 3:00 – 4:00pm Friday, March 31, 2023
Location: Rekhi Hall 214/Zoom
Presenter: Dr. Qiben Yan (Michigan State University)
Title: Securing Cyber Physical Systems Against Novel Physical-Layer Attacks
Abstract: Society’s broad adoption of Internet-of-Things (IoT) and Cyber Physical Systems (CPS) has been driven, in part, by decades of research and development in advanced sensing technologies, which now provide fine-grained physical data to IoT devices. These devices are revolutionizing every industry and transforming our lives. However, their heterogeneous sensing modalities and tight coupling with the physical world introduce new attack vectors that could compromise cyber and/or physical information through physical interfaces. In this talk, I will describe our recent work on the attack and defense of cyber physical systems. Specifically, I will describe our work on attacking voice assistant systems through physical-layer signal injection via solid media and charging cables. I will discuss adversarial attacks against AI-driven voice systems. I will then present our research on the security of camera systems in drones and unmanned ground vehicles; our proposed attacks exploit vulnerabilities in the vision systems to create fake obstacles. Finally, I will present our research on threat detection in smart-home IoT systems. I will conclude with a cross-layer view of, and practice in, IoT security, which is important for addressing practical security challenges in a wide range of real-world scenarios.
Time: 3:00 – 4:00pm Friday, April 21, 2023
Location: Rekhi Hall G009/Zoom
Presenter: Niusen Chen
Title: HiPDS: A Storage Hardware-independent Plausibly Deniable Storage System
Abstract: To be added.