1 Competition: Resilience to Adversarial Attack
This is a student competition addressing two key problems in modern deep learning:
O1: how to find better adversarial attacks;
O2: how to train deep learning models that are more robust against adversarial attacks.
We provide a template script (219ass.py) that contains two code blocks, corresponding to training and attacking respectively. Both blocks are filled with the simplest possible implementations, which serve as the baseline methods. Participants are expected to replace the baselines with their own implementations in order to achieve better performance on O1 and O2.
1.1 Task Requirements
In the end, we will collect the students' submissions and rank them according to a pre-specified metric that takes both O1 and O2 into account. Suppose that n students take part in the competition, so that we have a set S of submissions.
Each student with student number i submits a package i.zip containing two files:
1. i.pt, the file that stores the trained model.
2. competition_i.py, the script obtained after updating the two code blocks in 219ass.py with your own implementations.
1.2 Code
Load packages First, the following piece of code imports the packages that are needed.
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
import torch.optim as optim
import torchvision
from torchvision import transforms
from torch.autograd import Variable
import argparse
import time
Note: You can add necessary packages for your implementation.
Define competition ID The line of code below defines the student number. Replace it with your own student number, and the script will automatically output the file i.pt once you have trained a model.
id_ = 1000
Set training parameters The following code sets the hyper-parameters for training. It considers, e.g., the batch size, the number of epochs, whether to use CUDA, the learning rate, and the random seed. You may change them if needed.
parser = argparse.ArgumentParser(description='PyTorch MNIST Training')
parser.add_argument('--batch-size', type=int, default=128, metavar='N',
                    help='input batch size for training (default: 128)')
parser.add_argument('--test-batch-size', type=int, default=128, metavar='N',
                    help='input batch size for testing (default: 128)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
                    help='number of epochs to train')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
                    help='learning rate')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
                    help='random seed (default: 1)')
args = parser.parse_args(args=[])
Toggle GPU/CPU Depending on whether your computer has a GPU, you may toggle between devices with the code below. Note that, for this competition, an ordinary CPU is sufficient and a GPU is not needed.
use_cuda = not args.no_cuda and torch.cuda.is_available()
#device = torch.device("cuda" if use_cuda else "cpu")
device = torch.device("cpu")
torch.manual_seed(args.seed)
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}

Loading the dataset and defining the network structure In this competition, everyone uses the same dataset (FashionMNIST) and the same network architecture. The following code specifies how to load the dataset and how to construct a fully connected neural network with three hidden layers. Please do not change this part of the code.
##########################################################################
train_set = torchvision.datasets.FashionMNIST(root='data', train=True,
    download=True, transform=transforms.Compose([transforms.ToTensor()]))
train_loader = DataLoader(train_set, batch_size=args.batch_size, shuffle=True)
test_set = torchvision.datasets.FashionMNIST(root='data', train=False,
    download=True, transform=transforms.Compose([transforms.ToTensor()]))
test_loader = DataLoader(test_set, batch_size=args.batch_size, shuffle=True)

# define fully connected network
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(28*28, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 32)
        self.fc4 = nn.Linear(32, 10)

    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc3(x)
        x = F.relu(x)
        x = self.fc4(x)
        output = F.log_softmax(x, dim=1)
        return output
##########################################################################
Adversarial Attack This part is where your implementation for O1 is needed. In the template code, it contains a baseline method that uses random sampling to find adversarial examples. You may only replace the middle part of the function with your own implementation (as indicated in the code) and are not allowed to change the rest; an illustrative alternative is sketched after the baseline code.
def adv_attack(model, X, y, device):
    X_adv = Variable(X.data)
    ##################################################################
    random_noise = torch.FloatTensor(*X_adv.shape).uniform_(-0.1, 0.1).to(device)
    X_adv = Variable(X_adv.data + random_noise)
    ##################################################################
    return X_adv
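As an illustration only (not part of the official template, and not necessarily a strong attack), the middle block could for example be replaced by a one-step Fast Gradient Sign Method (FGSM) perturbation. The sketch below assumes a step size epsilon = 0.1 so that the perturbation stays below the required bound of 0.11.

def adv_attack(model, X, y, device):
    X_adv = Variable(X.data)
    ##################################################################
    # Illustrative FGSM-style replacement (an assumption, not the official
    # baseline): one signed-gradient step of size epsilon = 0.1.
    epsilon = 0.1
    X_adv.requires_grad_(True)
    with torch.enable_grad():                      # gradients are needed even when
        loss = F.cross_entropy(model(X_adv), y)    # called inside torch.no_grad()
        grad = torch.autograd.grad(loss, X_adv)[0]
    X_adv = Variable(X_adv.data + epsilon * grad.sign())
    X_adv = Variable(torch.clamp(X_adv.data, 0.0, 1.0))   # keep valid pixel range
    ##################################################################
    return X_adv

A multi-step (PGD-style) variant would repeat the gradient step several times, projecting back into the L∞ ball of radius epsilon after each step.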
Evaluation Functions Below are two supplementary functions that return the loss and accuracy over the test dataset and over the adversarially attacked test dataset, respectively. Note that the function adv_attack is used in the second function. You are not allowed to change these two functions.
def eval_test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            data = data.view(data.size(0), 28*28)
            output = model(data)
            test_loss += F.cross_entropy(output, target, size_average=False).item()
            pred = output.max(1, keepdim=True)[1]
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    test_accuracy = correct / len(test_loader.dataset)
    return test_loss, test_accuracy

def eval_adv_test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            data = data.view(data.size(0), 28*28)
            adv_data = adv_attack(model, data, target, device=device)
            output = model(adv_data)
            test_loss += F.cross_entropy(output, target, size_average=False).item()
            pred = output.max(1, keepdim=True)[1]
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    test_accuracy = correct / len(test_loader.dataset)
    return test_loss, test_accuracy
Adversarial Training Below is the second place where your implementation is needed, for O2. The template code contains a baseline method; you may replace the relevant part of the code as indicated in the code (an illustrative alternative is sketched after it).
def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        data = data.view(data.size(0), 28*28)

        # use adversarial data to train the defense model
        #adv_data = adv_attack(model, data, target, device=device)

        # clear gradients
        optimizer.zero_grad()

        # compute loss
        #loss = F.cross_entropy(model(adv_data), target)
        loss = F.cross_entropy(model(data), target)

        # get gradients and update
        loss.backward()
        optimizer.step()

def train_model():
    model = Net().to(device)
    optimizer = optim.SGD(model.parameters(), lr=args.lr)
    for epoch in range(1, args.epochs + 1):
        start_time = time.time()

        # training
        train(args, model, device, train_loader, optimizer, epoch)

        # get training loss/accuracy and adversarial loss/accuracy
        trnloss, trnacc = eval_test(model, device, train_loader)
        advloss, advacc = eval_adv_test(model, device, train_loader)

        # print them
        print('Epoch '+str(epoch)+': '+str(int(time.time()-start_time))+'s', end=', ')
        print('trn_loss: {:.4f}, trn_acc: {:.2f}%'.format(trnloss, 100. * trnacc), end=', ')
        print('adv_loss: {:.4f}, adv_acc: {:.2f}%'.format(advloss, 100. * advacc))
    ############################################################
    # save the model
    torch.save(model.state_dict(), str(id_)+'.pt')
    return model
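As one illustrative direction (an assumption, not the official baseline), the train function could be changed to train on the perturbed inputs produced by adv_attack, for example by mixing the clean and adversarial losses as sketched below; the 50/50 weighting is an arbitrary choice.

def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        data = data.view(data.size(0), 28*28)

        # generate perturbed inputs with the attack defined above
        adv_data = adv_attack(model, data, target, device=device)

        optimizer.zero_grad()
        # hypothetical mixed objective: average of the clean and adversarial losses
        loss = (0.5 * F.cross_entropy(model(data), target)
                + 0.5 * F.cross_entropy(model(adv_data), target))
        loss.backward()
        optimizer.step()

The simplest variant, already hinted at by the commented lines in the template, is to train on the adversarial loss alone; mixing it with the clean loss is one way to trade robustness against clean accuracy.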
Define Distance Metric In this competition, we take the L∞ norm as the distance measure. You are not allowed to change this code.
def p_distance(model, train_loader, device):
    p = []
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        data = data.view(data.size(0), 28*28)
        adv_data = adv_attack(model, data, target, device=device)
        p.append(torch.norm(data - adv_data, float('inf')))
    print('epsilon p: ', max(p))
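For reference, the quantity printed by p_distance can be written as the following constraint (a restatement of the code above in formula form; the bound 0.11 comes from the submission requirements in Section 1.3):

    \epsilon_p \;=\; \max_{x \in \text{training set}} \lVert x - x^{\mathrm{adv}} \rVert_\infty \;<\; 0.11

where x^adv denotes the output of adv_attack on x.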
Supplementary Code for Test Purposes In addition to the above code, we also provide two lines of code for testing purposes. You must comment them out in your submission. The first line calls the train_model() method to train a new model, and the second checks the quality of the attack on a given model.
model = train_model()
p_distance(model, train_loader, device)

1.3 Submission Requirements
The steps needed to complete a submission are summarised below:
1. You must assign the variable id_ your student ID i.
2. You need to update the adv_attack function with your own adversarial attack method.
3. You may change the hyper-parameters defined in the parser if needed.
4. You must make sure that the perturbation distance is smaller than 0.11 (it can be computed with the p_distance function).
5. You need to update the train_model function (and some of the other functions it calls, such as train) with your own training method.
6. You need to run the line "model = train_model()" to train the model and check that there is a file i.pt storing the weights of the trained model.
7. You must submit i.zip, which contains the two files i.pt (the saved model) and competition_i.py (see the packaging sketch after this list).
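Purely as a hypothetical packaging sketch (the file names below assume student number 1000; replace them with your own ID, and this helper is not part of the required script), the archive can be created with Python's standard zipfile module:

import zipfile

sid = 1000                                       # replace with your student ID
with zipfile.ZipFile(str(sid) + '.zip', 'w') as zf:
    zf.write(str(sid) + '.pt')                   # the saved model weights
    zf.write('competition_' + str(sid) + '.py')  # the updated script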

Please make sure that the following constraints are satisfied:
• Submission files: please follow the requirements above.
• Make sure that your code runs smoothly.
• Comment out the two lines "model = train_model()"
and "p_distance(model, train_loader, device)".

1.4 Evaluation Criteria
Suppose that, among the set S of submissions, there are n submissions that run smoothly and correctly. We obtain the model M_i by loading the file i.pt. Then, we collect the matrix of mutual evaluation scores (s_ij), where s_ij measures the attack Atk_j (defined in the adv_attack function) against the model M_i: the score s_ij is the test accuracy obtained when attacking the model from i.pt with the adv_attack function from competition_j.py.
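Purely as an illustration of how one entry s_ij might be computed (the function mutual_score below and this exact workflow are assumptions, not part of the template):

import importlib

def mutual_score(i, j, test_loader, device):
    # hypothetical: accuracy of model M_i (loaded from i.pt) under the
    # adv_attack implemented in competition_j.py
    model_i = Net().to(device)
    model_i.load_state_dict(torch.load(str(i) + '.pt', map_location=device))
    model_i.eval()
    attack_j = importlib.import_module('competition_' + str(j)).adv_attack
    correct, total = 0, 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            data = data.view(data.size(0), 28*28)
            adv_data = attack_j(model_i, data, target, device=device)
            pred = model_i(adv_data).max(1, keepdim=True)[1]
            correct += pred.eq(target.view_as(pred)).sum().item()
            total += target.size(0)
    return correct / total          # the test accuracy s_ij

Note that importing competition_j.py executes its top-level code, which is one reason why the two testing lines must be commented out in your submission.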

Let AttackAbility denote the vector of the values AttackAbility_j, and let DefenceAbility denote the vector of the values DefenceAbility_i. Then, we apply the softmax function to the vectors AttackAbility and DefenceAbility to normalise them. The final score FinalScore_i of submission i is then obtained from these normalised vectors.
To reduce the effect of randomness, we may run the above procedure for 3 rounds and take the average FinalScore_i of each submission.

