Update the codebase for gradient leakage attacks #377

Merged
merged 375 commits into main from gradient-leakage-attack
Sep 18, 2024

Conversation

silviafeiwang
Collaborator

A general update to the code under examples/gradient_leakage_attacks.

Description

  • Refactored the code
  • Fixed bugs
  • Incorporated more models such as ViTs
  • Ported malicious gradient leakage attacks such as the fishing attack (a minimal sketch of the underlying gradient-matching loop follows this list)
  • Updated .yml configuration examples
  • Added requirement.txt listing the pip packages required for gradient leakage attacks
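
All of these attacks share the same core idea: optimize dummy inputs so that their gradients match the gradients leaked from a client. The following is a minimal, illustrative sketch of that gradient-matching loop in PyTorch; the function name, shapes, and hyperparameters are placeholders rather than the actual API of dlg.py.

```python
import torch
import torch.nn.functional as F


def reconstruct(model, target_grads, input_shape, num_classes, steps=300, lr=0.1):
    """Optimize dummy data so that its gradients match the leaked gradients."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

    for _ in range(steps):

        def closure():
            optimizer.zero_grad()
            pred = model(dummy_x)
            # Soft-label cross-entropy, as in the original DLG formulation.
            loss = torch.mean(
                torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1), dim=-1)
            )
            grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
            # Gradient-matching objective: squared L2 distance to the leaked gradients.
            rec_loss = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
            rec_loss.backward()
            return rec_loss

        optimizer.step(closure)

    return dummy_x.detach(), dummy_y.detach()
```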

How has this been tested?

Ran experiments with `python dlg.py -c untrained_eval.yml` while varying hyperparameters for datasets, models, attacks, defenses, etc.
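
For reference, one way to automate that kind of sweep is to rewrite the YAML configuration and re-invoke dlg.py for each combination. The sketch below assumes this workflow; the config keys (model, attack) and the model/attack names are illustrative, not the repository's actual configuration schema.

```python
import copy
import subprocess

import yaml

with open("untrained_eval.yml") as f:
    base_config = yaml.safe_load(f)

# Illustrative sweep over models and attacks; keys and values are placeholders.
for model in ["resnet18", "vit_b_16"]:
    for attack in ["dlg", "fishing"]:
        config = copy.deepcopy(base_config)
        config["model"] = model
        config["attack"] = attack
        with open("sweep_tmp.yml", "w") as f:
            yaml.safe_dump(config, f)
        subprocess.run(["python", "dlg.py", "-c", "sweep_tmp.yml"], check=True)
```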

Types of changes

  • Bug fix (non-breaking change which fixes an issue) Fixes #
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)

Checklist:

  • My code has been formatted using Black and checked using PyLint.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.

EthanHugh and others added 30 commits July 31, 2022 21:50
* Removed the hard-coded ground-truth plots for now; will fix how they look in another patch

* Added our defense (Outpost) as an accepted defense

* Added proper handling for exceptions when the loss becomes NaN (a minimal NaN-handling sketch follows this commit list)
* Changed the ground-truth plots to be saved as a PDF by default; added customisations to the plot for multiple images, with edge cases handled as well

* Added the config file
…e same layout as the ground truth (which is customisable) (#16)
Still need to comment out the mps device: `rec_loss.backward()` raises a "derivative for aten::mps_linear_backward is not implemented" error
to avoid changing the basic trainer
train_step_start() and perform_forward_and_backward_passes()
and improve some function docstrings
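
As a side note on the NaN handling mentioned in the commit list above, the guard is conceptually just a finiteness check before backpropagation. The sketch below is illustrative and not the repository's actual implementation.

```python
import torch


def safe_backward(rec_loss: torch.Tensor) -> bool:
    """Backpropagate only when the reconstruction loss is finite."""
    if not torch.isfinite(rec_loss):
        # Skip this optimization step (or re-initialize the dummy data)
        # rather than propagating NaN/Inf gradients.
        return False
    rec_loss.backward()
    return True
```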

netlify bot commented Sep 18, 2024

Deploy Preview for platodocs canceled.

🔨 Latest commit: b944697
🔍 Latest deploy log: https://app.netlify.com/sites/platodocs/deploys/66eb4a86a8afb50008cfe071

@baochunli baochunli merged commit 9f0af62 into main Sep 18, 2024
7 checks passed
@baochunli baochunli deleted the gradient-leakage-attack branch September 18, 2024 22:08
@silviafeiwang silviafeiwang restored the gradient-leakage-attack branch September 20, 2024 19:47