[Question] pytorch load model accuracy is very poor (post deleted)

Board: Python · Author: Kuba4ma · Posted 4 years ago (2020/09/05 19:17), edited 4 years ago · Score +1 (1 push, 0 boo)
1 comment, 1 participant, latest 4 years ago · Thread 1/1
Article URL: https://www.ptt.cc/bbs/Python/M.1599304652.A.A5D.html
https://reurl.cc/9XOYEY

I followed the official method (link above), and I have tried both approaches it describes. What I want to test is this: first train the model on the training data and save it, then open a separate Jupyter notebook, load the saved model, and compute the accuracy on the test data. But the accuracy is only 10% (it is 52% if everything is run in one go). Where did this go wrong?

(Train model section)

import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
import numpy as np

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Conv2d(in_channels: int, out_channels: int, kernel_size: n*n)
        self.conv1=nn.Conv2d(3, 6, 5)
        self.conv2=nn.Conv2d(6, 16, 5)
        # MaxPool2d(kernel_size: n*n, stride: int)
        self.pool=nn.MaxPool2d(2, 2)
        # Linear(in_features: int, out_features: int)
        self.fc1=nn.Linear(16*5*5, 120)
        self.fc2=nn.Linear(120, 84)
        self.fc3=nn.Linear(84, 10)

    def forward(self, x):
        x=self.pool(F.relu(self.conv1(x)))
        x=self.pool(F.relu(self.conv2(x)))
        x=x.view(-1, 16*5*5)
        x=F.relu(self.fc1(x))
        x=F.relu(self.fc2(x))
        x=self.fc3(x)
        return x

net=Net()

transform=transforms.Compose([transforms.ToTensor(),
                              transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))])
trainset=torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader=torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)

criterion=nn.CrossEntropyLoss()
optimizer=optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # loop over the dataset multiple times
    running_loss=0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels=data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs=net(inputs)
        loss=criterion(outputs, labels)
        loss.backward()
        optimizer.step()

PATH = "state_dict_model.pt"
torch.save(net.state_dict(), PATH)

(Load model section)

import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
import numpy as np

transform=transforms.Compose([transforms.ToTensor(),
                              transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))])
testset=torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader=torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False)

PATH = "state_dict_model.pt"
model = Net()
model.load_state_dict(torch.load(PATH))
model.eval()

correct=0
total=0
with torch.no_grad():
    for data in testloader:
        images, labels=data
        outputs=net(images)
        # torch.max: when a dimension is given, it returns two tensors, max_prob and predicted below
        # max_prob: the highest score over all classes
        # predicted: the class corresponding to that highest score
        max_prob, predicted=torch.max(outputs, 1)
        # batch_size is 4, so each loop iteration receives 4 samples
        total+=labels.size(0)
        # count how many of these 4 samples were predicted correctly
        correct+=(predicted==labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (100*correct/total))
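For comparison, below is a minimal sketch of the save/load-and-evaluate round trip described above, assuming the same Net class is defined (or importable) in the second notebook; the module name model_def is hypothetical. The point of the pattern is that the evaluation loop calls the instance the weights were loaded into (named model here), not a freshly constructed network.

(Load-and-evaluate sketch)

import torch
import torchvision
import torchvision.transforms as transforms

# Assumption: the same Net class used for training is importable here
# (the module name model_def is hypothetical).
from model_def import Net

PATH = "state_dict_model.pt"

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False)

# Rebuild the architecture, then copy the trained weights into it.
model = Net()
model.load_state_dict(torch.load(PATH))
model.eval()  # inference mode (matters for dropout/batchnorm layers)

correct = 0
total = 0
with torch.no_grad():  # no gradients needed during evaluation
    for images, labels in testloader:
        outputs = model(images)               # run the loaded model on the batch
        _, predicted = torch.max(outputs, 1)  # index of the highest score = predicted class
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy: %.2f %%' % (100 * correct / total))

If the weights load correctly, this evaluation should give roughly the same accuracy as testing in the original training session.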

1F (09/05 20:42, 4 years ago):
So where's your code?
Article ID (AID): #1VKtFCfT (Python)