Defense Homework

For this homework, you will implement the adversarial training defense against white-box attacks.
The task is the same as in Homework 1: MNIST classification.
You are provided with an implementation of standard (non-adversarial) training, which yields a fairly accurate model (around 98% test accuracy).
We also provide an implementation of an untargeted white-box attack, the Fast Gradient Sign Method (FGSM). Against the normally trained model, FGSM succeeds around 60% of the time (i.e., roughly 40% accuracy on adversarial examples).
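For reference, FGSM perturbs each input by a single step in the direction of the sign of the loss gradient with respect to that input. Below is a minimal PyTorch sketch of such an attack; the epsilon value and the [0, 1] pixel range are illustrative assumptions and may differ from the provided attack code.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.2):
        # Untargeted FGSM (illustrative sketch): one gradient-sign step on the input.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Move each pixel in the direction that increases the loss, then clip to a valid range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return torch.clamp(x_adv, 0.0, 1.0).detach()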
To run the code, all you need to do is run: python Evaluator.py. There is no server/app.py in this assignment.
Your objective is to train a model (using adversarial training) that achieves similar accuracy (around 97-98%) on the original test data but is much more robust to the attack (at most a 10% FGSM success rate, i.e., at least 90% accuracy on adversarial examples).
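A common way to implement adversarial training is to generate adversarial examples on the fly for each training batch and include them in the loss. The sketch below is one such recipe, not the required implementation: it reuses the fgsm_attack helper sketched above, and the model, optimizer, train_loader, epsilon, and the clean/adversarial loss mix are all assumptions you would tune against the provided code.

    import torch.nn.functional as F

    def adversarial_train_epoch(model, train_loader, optimizer, epsilon=0.2, device="cpu"):
        # One epoch of adversarial training (illustrative sketch).
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            # Craft adversarial examples against the current model parameters
            # (uses the fgsm_attack helper from the sketch above).
            x_adv = fgsm_attack(model, x, y, epsilon)
            optimizer.zero_grad()
            # Train on both clean and adversarial inputs to keep clean accuracy high.
            loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()

Training on a combined clean + adversarial loss is one way to keep the ~97-98% clean accuracy while improving robustness; training on adversarial examples alone tends to give up more clean accuracy.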
Important change: for submission on Gradescope, you must submit both the trained model (the .pth file) and your Python code. Gradescope allows uploading multiple files, so upload both: defense_homework.py and defense_homework-model.pth.
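If you train with PyTorch, saving the learned parameters with torch.save on the model's state_dict is the usual way to produce the .pth file. The snippet below is a sketch using the filename from the submission instructions; check how the provided Evaluator.py actually loads the model (full model object vs. state_dict).

    import torch

    # 'model' is your trained network instance (hypothetical name).
    torch.save(model.state_dict(), "defense_homework-model.pth")

    # Later, e.g. in evaluation: rebuild the same architecture, then load the weights.
    model.load_state_dict(torch.load("defense_homework-model.pth"))
    model.eval()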
We will evaluate your model on its accuracy on both the original and the adversarial data (as in the first homework, you get full credit if you meet the expectations; otherwise we review your code).
Source code (you should only need to add/change between the TODO lines):
https://github.com/ucinlp/maestro-class/tree/main/playground/defense_homework