An important step in updating a neural network's weights and biases is the BackPropagation algorithm. Concretely, backpropagation propagates the error backwards through the network to compute the derivatives of the cost function with respect to w (the weights) and b (the biases); the parameters are then updated by subtracting these partial derivatives (scaled by a learning rate) from the current w and b. In the Python gradient-descent implementation I wrote about last time, the function backprop(x, y) implements this algorithm. (Note: the code is not my own summary; an implementation is available on GitHub at https://github.com/LCAIZJ/neural-networks-and-deep-learning)
def backprop(self, x, y):
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    # Feedforward: starting from the input x, compute the activation of every layer
    activation = x
    activations = [x]  # stores the activations of all layers
    zs = []            # stores the weighted inputs z of all layers
    for b, w in zip(self.biases, self.weights):
        z = np.dot(w, activation) + b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    # Compute the error of the output layer
    delta = self.cost_derivative(activations[-1], y) * sigmoid_prime(zs[-1])
    nabla_b[-1] = delta
    nabla_w[-1] = np.dot(delta, activations[-2].transpose())
    # Propagate the error backwards through the remaining layers
    for l in range(2, self.num_layers):
        z = zs[-l]
        sp = sigmoid_prime(z)
        delta = np.dot(self.weights[-l + 1].transpose(), delta) * sp
        nabla_b[-l] = delta
        nabla_w[-l] = np.dot(delta, activations[-l - 1].transpose())
    return (nabla_b, nabla_w)
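For reference, the function above follows the standard backpropagation equations for a sigmoid network with the quadratic cost, where δ^l is the error of layer l (the variable delta in the code), a^l the activations, z^l the weighted inputs, and ⊙ the element-wise product:

\delta^L = (a^L - y) \odot \sigma'(z^L)
\delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l)
\frac{\partial C}{\partial b^l} = \delta^l
\frac{\partial C}{\partial w^l} = \delta^l (a^{l-1})^T

The first equation is the output-layer error computed before the loop; the second is what each iteration of the backward loop computes; the last two are what get stored in nabla_b and nabla_w.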
Here, the x and y passed in are a single training example.
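To make the update step described at the start concrete, here is a minimal sketch of how the gradients returned by backprop could be averaged over a mini-batch and subtracted from the current parameters. It assumes the same Network class as the linked repository; the method name update_mini_batch and the learning rate eta are illustrative:

def update_mini_batch(self, mini_batch, eta):
    # Accumulate the gradients of every example in the mini-batch
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw + dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    # Gradient-descent step: subtract the averaged partial derivatives,
    # scaled by the learning rate eta
    self.weights = [w - (eta / len(mini_batch)) * nw
                    for w, nw in zip(self.weights, nabla_w)]
    self.biases = [b - (eta / len(mini_batch)) * nb
                   for b, nb in zip(self.biases, nabla_b)]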
def cost_derivative(self, output_activation, y):
    # Derivative of the quadratic cost with respect to the output activations
    return (output_activation - y)

def sigmoid(z):
    # The sigmoid activation function
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # Derivative of the sigmoid function
    return sigmoid(z) * (1 - sigmoid(z))
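As a quick way to exercise the helper functions and backprop together, the following sketch checks that the returned gradients have the same shapes as the parameters. It assumes the Network constructor from the linked repository, which sets up self.biases, self.weights, and self.num_layers for the given layer sizes:

import numpy as np

net = Network([2, 3, 1])        # illustrative instance: 2 inputs, 3 hidden units, 1 output
x = np.random.randn(2, 1)       # one input column vector
y = np.array([[1.0]])           # its target output
nabla_b, nabla_w = net.backprop(x, y)
for b, nb in zip(net.biases, nabla_b):
    assert b.shape == nb.shape  # each bias gradient matches its bias vector
for w, nw in zip(net.weights, nabla_w):
    assert w.shape == nw.shape  # each weight gradient matches its weight matrix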
That is all for this article. I hope it helps with your study, and I hope you will continue to support 武林站長站.