1. tf.train.exponential_decay(): exponentially decaying learning rate
# tf.train.exponential_decay(learning_rate, global_step, decay_steps, decay_rate, staircase=True/False)
# Exponentially decaying learning rate.
# learning_rate: the initial learning rate
# global_step: the current training step
# decay_steps: iterations needed for one full pass over the training data (= total training samples / batch size)
# decay_rate: the decay factor
# staircase: decay mode; if True, the learning rate is recomputed only once every decay_steps steps (stepwise decay); if False, it is updated at every step

initial_learning_rate = 0.001
global_step = tf.Variable(0, trainable=False)
decay_steps = 100
decay_rate = 0.95
total_loss = slim.losses.get_total_loss()
learning_rate = tf.train.exponential_decay(initial_learning_rate, global_step, decay_steps, decay_rate, True, name='learning_rate')
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss, global_step)
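Under the hood, exponential_decay computes decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps), with the exponent floored to an integer when staircase=True. A minimal plain-Python sketch of that schedule (the function name and sample values are illustrative, not part of the TF API):

def decayed_lr(initial_lr, step, decay_steps, decay_rate, staircase=True):
    # The exponent counts how many decay periods have elapsed;
    # integer division makes the rate drop in discrete steps.
    exponent = step // decay_steps if staircase else step / decay_steps
    return initial_lr * (decay_rate ** exponent)

# With the settings above: 0.001 for steps 0-99, then 0.001 * 0.95 = 0.00095, and so on.
print(decayed_lr(0.001, 150, 100, 0.95))  # 0.00095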
2. tf.train.ExponentialMovingAverage(decay, num_updates): updating parameters with a moving average
initial_learning_rate = 0.001
global_step = tf.Variable(0, trainable=False)
decay_steps = 100
decay_rate = 0.95
total_loss = slim.losses.get_total_loss()
learning_rate = tf.train.exponential_decay(initial_learning_rate, global_step, decay_steps, decay_rate, True, name='learning_rate')
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss, global_step)
ema = tf.train.ExponentialMovingAverage(decay=0.9999)
# tf.trainable_variables() returns the list of variables to be trained
averages_op = ema.apply(tf.trainable_variables())
# Apply the gradient update first, then refresh the moving averages
with tf.control_dependencies([optimizer]):
    train_op = tf.group(averages_op)
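Note that ema.apply() only maintains shadow copies; training and prediction still use the raw weights. To evaluate with the averaged weights, the shadow values are typically loaded back into the model variables via variables_to_restore(). A minimal sketch, assuming the TF 1.x Saver API (the checkpoint path is illustrative):

ema = tf.train.ExponentialMovingAverage(decay=0.9999)
# Maps each model variable to the name its shadow (averaged) copy was saved under
variables_to_restore = ema.variables_to_restore()
saver = tf.train.Saver(variables_to_restore)
with tf.Session() as sess:
    saver.restore(sess, '/path/to/model.ckpt')  # illustrative path
    # Inference now runs with the moving-averaged weights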
That wraps up this share of common optimization techniques for gradient descent in TensorFlow. I hope it gives everyone a useful reference, and thanks for your continued support of 武林站長站.