1. Neural network functions
Name | Meaning |
---|---|
tf.nn.relu | Rectified Linear Unit (ReLU) activation |
tf.map_fn() | applies a function, passed in as an argument, over a tensor's elements (see the sketch below) |
tf.cast() | type conversion, e.g. tf.cast(x, tf.float32) |
tf.size() | number of elements in a tensor |
tf.shape() | shape of a tensor, evaluated dynamically at run time |
tf.get_shape() | static shape of a tensor, known when the graph is built |
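As a quick illustration, here is a minimal sketch (TF 1.x style, matching the snippets below) that exercises tf.nn.relu, tf.map_fn() and tf.cast(); the tensor values are made up for demonstration:
import tensorflow as tf

x = tf.constant([-2.0, -1.0, 0.0, 1.0, 2.0])
relu_x = tf.nn.relu(x)                   # element-wise max(x, 0) -> [0., 0., 0., 1., 2.]
squared = tf.map_fn(lambda e: e * e, x)  # applies the lambda over dim 0 of x
as_int = tf.cast(relu_x, tf.int32)       # float32 -> int32

with tf.Session() as sess:
    print(sess.run([relu_x, squared, as_int]))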
tf.size:
import tensorflow as tf

t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
with tf.Session() as sess:
    print(sess.run(tf.size(t)))  # 12 -- total number of elements
tf.shape:
sess = tf.Session()
a = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(sess.run(tf.shape(a)[0]))  # 3 -- size of dimension 0 (number of rows)
print(sess.run(tf.shape(a)[1]))  # 2 -- size of dimension 1 (number of columns)
tf.get_shape():
c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(c.get_shape())
==> (2, 3)
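To make the static/dynamic distinction concrete, here is a small sketch assuming a TF 1.x placeholder whose batch dimension is unknown at graph-construction time:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])

print(x.get_shape())  # (?, 3) -- static shape; the batch dim is unknown at graph time

with tf.Session() as sess:
    # tf.shape is evaluated at run time, once concrete data is fed in
    print(sess.run(tf.shape(x), feed_dict={x: [[1, 2, 3], [4, 5, 6]]}))  # [2 3]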
tf.reduce_max:
Returns the maximum value along the given dimension(s). The example below uses it to compute an overflow-resistant L2 norm: dividing by alpha first keeps the squared values small, and mathematically alpha * sqrt(sum((t1/alpha)^2)) equals sqrt(sum(t1^2)).
t1 = [[[1, 2, 3], [41, 5, 6]], [[11, 2, 3], [4, 5, 9]]]
with tf.Session() as sess:
    alpha = tf.reduce_max(tf.abs(t1), (1, 2), keepdims=True)
    l2_norm = tf.cast(alpha, tf.float64) * \
        tf.sqrt(tf.reduce_sum(tf.pow(t1 / alpha, 2), (1, 2), keepdims=True))
    print(sess.run(alpha))
    print(sess.run(l2_norm))
output (per 2x3 block, approximately):
alpha = [[[41]], [[11]]]
l2_norm ≈ [[[41.9047]], [[16.0]]]
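For reference, a simpler sketch of tf.reduce_max along different axes (values chosen arbitrarily):
import tensorflow as tf

t = tf.constant([[1, 2, 3], [41, 5, 6]])
with tf.Session() as sess:
    print(sess.run(tf.reduce_max(t)))          # 41 -- max over all elements
    print(sess.run(tf.reduce_max(t, axis=0)))  # [41  5  6] -- max down each column
    print(sess.run(tf.reduce_max(t, axis=1)))  # [ 3 41] -- max within each row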
2. A few concepts
global_step: a variable that is automatically incremented by one per batch (each training step). It is initialized as follows:
global_step = tf.Variable(0, name='global_step',trainable=False)
Optimizers generally accept a global_step argument in minimize():
minimize(loss, global_step=None, var_list=None, gate_gradients=GATE_OP,
         aggregation_method=None, colocate_gradients_with_ops=False,
         name=None, grad_loss=None)
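A minimal sketch of how global_step gets incremented when passed to minimize(); the toy loss and learning rate are chosen arbitrarily:
import tensorflow as tf

x = tf.Variable(1.0)
loss = tf.square(x)
global_step = tf.Variable(0, name='global_step', trainable=False)

# Passing global_step here makes the optimizer increment it once per training step.
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(train_op)
        print(sess.run(global_step))  # prints 1, then 2, then 3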