
tf.train

Published: 2023-09-18 18:51:55


Preface

tf.train belongs to the Training module and is generally used for gradient computation. Since I have been using it, this post explores the basic usage of tf.train.

Official Documentation

1. tf.train.GradientDescentOptimizer

Optimizer that implements the gradient descent algorithm.

tf.train.GradientDescentOptimizer.__init__(learning_rate, use_locking=False, name='GradientDescent')

Construct a new gradient descent optimizer.

Args:

  • learning_rate: A Tensor or a floating point value. The learning rate to use.
  • use_locking: If True, use locks for update operations.
  • name: Optional name prefix for the operations created when applying gradients. Defaults to “GradientDescent”.

Returns:
A new GradientDescentOptimizer instance.
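The update rule this optimizer implements can be sketched in a few lines of plain Python (an illustrative toy, not TensorFlow itself): each step moves every variable against its gradient, scaled by the learning rate. The quadratic loss below is made up for the demonstration.

```python
# Sketch of the rule GradientDescentOptimizer applies to each variable:
#   w <- w - learning_rate * dL/dw
# Toy loss L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
learning_rate = 0.1

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0
for _ in range(100):
    w -= learning_rate * grad(w)

print(w)  # approaches the minimizer w = 3
```

With this learning rate the error shrinks by a constant factor each step, which is why the loop converges to the minimum at w = 3.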

2. tf.train.Optimizer.minimize

tf.train.Optimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, name=None)

Add operations to minimize ‘loss’ by updating ‘var_list’.

This method simply combines calls to compute_gradients() and apply_gradients(). If you want to process the gradients before applying them, call compute_gradients() and apply_gradients() explicitly instead of using this function.

Args:

  • loss: A Tensor containing the value to minimize.
  • global_step: Optional Variable to increment by one after the variables have been updated.
  • var_list: Optional list of Variable objects to update to minimize ‘loss’. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
  • gate_gradients: How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
  • name: Optional name for the returned operation.

Returns:
An Operation that updates the variables in ‘var_list’. If ‘global_step’ was not None, that operation also increments global_step.
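The two phases that minimize() combines can be sketched in plain Python. The function names below mirror the TF API for readability, but this is a hypothetical NumPy stand-in, not TensorFlow; the toy loss and the clipping step are made up to show where gradient processing would go.

```python
import numpy as np

def compute_gradients(grad_fn, var_list):
    # Returns (gradient, variable) pairs, mirroring the TF method's shape.
    return [(grad_fn(v), v) for v in var_list]

def apply_gradients(grads_and_vars, learning_rate):
    # Applies one gradient descent step to each variable.
    return [v - learning_rate * g for g, v in grads_and_vars]

# Toy loss L = sum((v - 1)^2) over two scalar "variables".
grad_fn = lambda v: 2.0 * (v - 1.0)
variables = [0.0, 5.0]

for _ in range(100):
    gvs = compute_gradients(grad_fn, variables)
    # Processing the gradients before applying them, e.g. clipping:
    gvs = [(np.clip(g, -1.0, 1.0), v) for g, v in gvs]
    variables = apply_gradients(gvs, learning_rate=0.1)

print(variables)  # both values approach the minimizer 1.0
```

Splitting the two phases like this is exactly what the documentation above recommends when you need to transform gradients (clip, scale, accumulate) before the update.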

Example

import tensorflow as tf
import numpy as np

# Create data
X_Data = np.random.rand(100).astype(np.float32)
Y_Data = X_Data * 0.1 + 0.3

# Create TF structure start #
Weight = tf.Variable(tf.random_uniform([1], -1.0, 1.0))  # must be a Variable so it can be trained
biases = tf.Variable(tf.zeros([1]))
y = Weight * X_Data + biases
loss = tf.reduce_mean(tf.square(y - Y_Data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()
# Create TF structure end #

sess = tf.Session()
sess.run(init)  # Important
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(Weight), sess.run(biases))

Result

(Screenshot of the training output: the printed Weight and biases converge toward the true values 0.1 and 0.3.)

Understanding

  • The loss argument of tf.train.Optimizer.minimize is usually computed with tf.reduce_mean
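
What tf.reduce_mean(tf.square(y - Y_Data)) computes is just the mean squared error; the NumPy equivalent below makes that explicit. The sample values are made up for illustration.

```python
import numpy as np

# Mean squared error, the NumPy analogue of
# tf.reduce_mean(tf.square(y - Y_Data)):
y_pred = np.array([0.30, 0.35, 0.42], dtype=np.float32)
y_true = np.array([0.30, 0.40, 0.40], dtype=np.float32)

mse = np.mean(np.square(y_pred - y_true))
print(mse)
```

Reducing the per-sample squared errors to a single scalar is what makes the result usable as the loss argument of minimize().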

Reference

  • Official documentation: Training