Linear Regression in Practice
Defining a linear regression model in PyTorch generally takes the following steps:
1. Design the network architecture
2. Build the loss function (loss) and the optimizer
3. Train: forward pass (forward), backpropagation (backward), and parameter update (update)
# author: yuquanle
# date: 2018.2.5
# Study of linear regression using PyTorch
import torch

# training data: y = 2x
x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear = torch.nn.Linear(1, 1)  # one input feature, one output

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

# our model
model = Model()

criterion = torch.nn.MSELoss(reduction='sum')  # loss function (sum of squared errors)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # optimizer

# Training loop: forward, loss, backward, step
for epoch in range(50):  # the runs below use 10 and 50 iterations
    y_pred = model(x_data)            # forward pass
    loss = criterion(y_pred, y_data)  # compute loss
    print(epoch, loss.item())
    optimizer.zero_grad()             # zero gradients
    loss.backward()                   # backward pass
    optimizer.step()                  # update weights

# After training
hour_var = torch.tensor([[4.0]])
print("predict (after training)", 4, model(hour_var).item())

Printed results when the loop runs for 10 iterations:
0 123.87958526611328
1 55.19491195678711
2 24.61777114868164
3 11.005026817321777
4 4.944361686706543
5 2.2456750869750977
6 1.0436556339263916
7 0.5079189538955688
8 0.2688019871711731
9 0.16174012422561646
predict (after training) 4 7.487752914428711
The loss is still falling, so the prediction for the input 4 is not yet very accurate.
With the iteration count set to 50:
0 35.38422393798828
5 0.6207122802734375
10 0.012768605723977089
15 0.0020055510103702545
20 0.0016929294215515256
25 0.0015717096393927932
30 0.0014619173016399145
35 0.0013598509831354022
40 0.0012649153359234333
45 0.00117658288218081
50 0.001094428705982864
predict (after training) 4 8.038028717041016
At this point the model fits the data quite well.
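To confirm the fit directly, we can read the learned weight and bias off the model instead of only probing it with a test input; for data generated by y = 2x, the weight should approach 2 and the bias 0. The sketch below reproduces the training loop above (with a fixed seed, an addition of ours, so the run is repeatable):

```python
import torch

torch.manual_seed(0)  # fix the random init so the run is repeatable

x_data = torch.tensor([[1.0], [2.0], [3.0]])
y_data = torch.tensor([[2.0], [4.0], [6.0]])

model = torch.nn.Linear(1, 1)
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(50):
    loss = criterion(model(x_data), y_data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

w = model.weight.item()
b = model.bias.item()
print(f"w = {w:.4f}, b = {b:.4f}")  # w should be close to 2, b close to 0
```

Inspecting the parameters this way makes it easy to see how far the model still is from the true line, rather than judging it from a single prediction.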
Running it again:
0 159.48605346679688
5 2.827991485595703
10 0.08624256402254105
15 0.03573693335056305
20 0.032463930547237396
25 0.030183646827936172
30 0.02807590737938881
35 0.026115568354725838
40 0.02429218217730522
45 0.022596003487706184
50 0.0210183784365654
predict (after training) 4 7.833342552185059
Both runs iterate 50 times, yet the predictions for the input 4 differ. The reason lies in how the model is defined: torch.nn.Linear(1, 1) only specifies the input and output dimensions, while the weight and bias inside are initialized randomly (by default PyTorch draws them from a uniform distribution bounded by 1/sqrt(in_features)). Each run therefore starts from a different point, the loss descends along a different path, the parameter updates differ, and the final results are not identical.
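This is easy to verify: two freshly constructed Linear layers almost surely start with different weights, and fixing the random seed before construction makes the initialization, and hence the whole run, reproducible. A minimal sketch:

```python
import torch

# Two otherwise identical layers get different random initial weights.
a = torch.nn.Linear(1, 1)
b = torch.nn.Linear(1, 1)
print(a.weight.item(), b.weight.item())  # almost surely different

# Seeding the RNG before construction makes the init reproducible.
torch.manual_seed(42)
c = torch.nn.Linear(1, 1)
torch.manual_seed(42)
d = torch.nn.Linear(1, 1)
print(c.weight.item() == d.weight.item())  # identical
```

Calling torch.manual_seed once at the top of the training script is enough to make repeated runs of the model above produce the same loss curve and the same prediction.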