

Getting Started with PyTorch: An MNIST Classification Example


This article shares the full code of a beginner-level PyTorch MNIST classification example for your reference. The details are as follows.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
__author__ = 'denny'
__time__ = '2017-9-9 9:03'

import torch
import torchvision
import torch.utils.data as Data

# Download MNIST and convert the images to tensors.
train_data = torchvision.datasets.MNIST(
    './mnist', train=True,
    transform=torchvision.transforms.ToTensor(), download=True
)
test_data = torchvision.datasets.MNIST(
    './mnist', train=False,
    transform=torchvision.transforms.ToTensor()
)
print("train_data:", train_data.data.size())
print("train_labels:", train_data.targets.size())
print("test_data:", test_data.data.size())

train_loader = Data.DataLoader(dataset=train_data, batch_size=64, shuffle=True)
test_loader = Data.DataLoader(dataset=test_data, batch_size=64)


class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Three conv blocks: each keeps the spatial size (3x3 kernel, padding 1)
        # and then halves it with max pooling: 28 -> 14 -> 7 -> 3.
        self.conv1 = torch.nn.Sequential(
            torch.nn.Conv2d(1, 32, 3, 1, 1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2)
        )
        self.conv2 = torch.nn.Sequential(
            torch.nn.Conv2d(32, 64, 3, 1, 1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2)
        )
        self.conv3 = torch.nn.Sequential(
            torch.nn.Conv2d(64, 64, 3, 1, 1),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(2)
        )
        # Fully connected head: 64 channels * 3 * 3 positions -> 10 classes.
        self.dense = torch.nn.Sequential(
            torch.nn.Linear(64 * 3 * 3, 128),
            torch.nn.ReLU(),
            torch.nn.Linear(128, 10)
        )

    def forward(self, x):
        conv1_out = self.conv1(x)
        conv2_out = self.conv2(conv1_out)
        conv3_out = self.conv3(conv2_out)
        res = conv3_out.view(conv3_out.size(0), -1)  # flatten to (batch, 576)
        out = self.dense(res)
        return out


model = Net()
print(model)

optimizer = torch.optim.Adam(model.parameters())
loss_func = torch.nn.CrossEntropyLoss()

for epoch in range(10):
    print('epoch {}'.format(epoch + 1))
    # training -----------------------------
    model.train()
    train_loss = 0.
    train_acc = 0.
    for batch_x, batch_y in train_loader:
        out = model(batch_x)
        loss = loss_func(out, batch_y)
        train_loss += loss.item()
        pred = torch.max(out, 1)[1]
        train_correct = (pred == batch_y).sum()
        train_acc += train_correct.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print('Train Loss: {:.6f}, Acc: {:.6f}'.format(
        train_loss / len(train_data), train_acc / len(train_data)))
    # evaluation --------------------------------
    model.eval()
    eval_loss = 0.
    eval_acc = 0.
    with torch.no_grad():  # no gradients needed during evaluation
        for batch_x, batch_y in test_loader:
            out = model(batch_x)
            loss = loss_func(out, batch_y)
            eval_loss += loss.item()
            pred = torch.max(out, 1)[1]
            num_correct = (pred == batch_y).sum()
            eval_acc += num_correct.item()
    print('Test Loss: {:.6f}, Acc: {:.6f}'.format(
        eval_loss / len(test_data), eval_acc / len(test_data)))
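As a quick sanity check on the architecture above, the short sketch below (it assumes the model instance created in the code) pushes a dummy 28x28 image through the three convolutional blocks. Each MaxPool2d(2) halves the spatial size with floor rounding, 28 -> 14 -> 7 -> 3, which is where the 64 * 3 * 3 input size of the first Linear layer comes from.

# Minimal shape check (assumes the `model` instance defined above).
dummy = torch.zeros(1, 1, 28, 28)                  # one fake MNIST image
features = model.conv3(model.conv2(model.conv1(dummy)))
print(features.shape)                              # torch.Size([1, 64, 3, 3])
print(features.view(features.size(0), -1).shape)   # torch.Size([1, 576]) == 64 * 3 * 3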

That is all for this article. We hope it helps with your study, and we also hope you will continue to support 武林站長站.
