Confusion about tensor shapes and ranks in TensorFlow
I know there are already questions with similar titles, but before you report this as a duplicate, please note that the answers to those questions are all very case-specific and do not apply to my problem.
I am having a hard time understanding why I cannot multiply two tensors (technically a matrix-vector multiplication) in TensorFlow. I have a tensor v of shape (1000, 1000) and another tensor h_previous of shape (1000). I do plenty of matrix multiplications elsewhere in the program where the two tensors look exactly the same, but this one just throws a cryptic error. Here are the key parts of the graph:
# Variables
# Encoder input
X = tf.placeholder(tf.float32, shape=[k, None])
we = tf.Variable(tf.truncated_normal([500, k], -0.1, 0.1))
# Encoder update gate
wz = tf.Variable(tf.truncated_normal([1000, 500], -0.1, 0.1))
uz = tf.Variable(tf.truncated_normal([1000, 1000], -0.1, 0.1))
# Encoder reset gate
wr = tf.Variable(tf.truncated_normal([1000, 500], -0.1, 0.1))
ur = tf.Variable(tf.truncated_normal([1000, 1000], -0.1, 0.1))
# Encoder h~ [find name]
w = tf.Variable(tf.truncated_normal([1000, 500], -0.1, 0.1))
u = tf.Variable(tf.truncated_normal([1000, 1000], -0.1, 0.1))
# Encoder representation weight
v = tf.Variable(tf.truncated_normal([1000, 1000], -0.1, 0.1))
# Encoder
h_previous = tf.zeros([1000])
for t in range(N):
    # Current vector and its embedding
    xt = tf.reshape(tf.slice(X, [t, 0], [1, k]), [k])
    e = tf.matmul(we, xt)
    # Reset calculation
    r = tf.sigmoid(tf.matmul(wr, e) + tf.matmul(ur, h_previous))
    # Update calculation
    z = tf.sigmoid(tf.matmul(wz, e) + tf.matmul(uz, h_previous))
    # Hidden-tilde calculation
    h_tilde = tf.tanh(tf.matmul(w, e) + tf.matmul(u, r * h_previous))
    # Hidden calculation
    one = tf.ones([1000])
    h = z * h_previous + (one - z) * h_tilde
    h_previous = h
c = tf.tanh(tf.matmul(v, h_previous))
I'm stumped. Does anyone have any clue? Thanks in advance. :)
Answer:
I have fixed your code in a few places, and now it works (see below). In general, the inputs to tf.matmul should be two 2-D matrices (see the documentation here), whereas you passed a 2-D matrix (of size 1000x1000) and a 1-D tensor (of size 1000). If you reshape the second one to 1000x1 or 1x1000, matmul will work.
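For illustration only, here is a minimal sketch of the reshape idea on its own (the small shapes and variable names are placeholders I chose, not taken from the original code):

import tensorflow as tf

m = tf.ones([4, 4])             # rank-2 tensor, shape (4, 4)
vec = tf.ones([4])              # rank-1 tensor, shape (4,)
# tf.matmul(m, vec)             # fails: tf.matmul does not accept a rank-1 argument
col = tf.reshape(vec, [4, 1])   # rank-2 column vector, shape (4, 1)
out = tf.matmul(m, col)         # works, result has shape (4, 1)

And here is the corrected version of the code from the question: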
k = 77
N = 17
# Variables
# Encoder input
X = tf.placeholder(tf.float32, shape=[k, None])
we = tf.Variable(tf.truncated_normal([500, k], -0.1, 0.1))
# Encoder update gate
wz = tf.Variable(tf.truncated_normal([1000, 500], -0.1, 0.1))
uz = tf.Variable(tf.truncated_normal([1000, 1000], -0.1, 0.1))
# Encoder reset gate
wr = tf.Variable(tf.truncated_normal([1000, 500], -0.1, 0.1))
ur = tf.Variable(tf.truncated_normal([1000, 1000], -0.1, 0.1))
# Encoder h~ [find name]
w = tf.Variable(tf.truncated_normal([1000, 500], -0.1, 0.1))
u = tf.Variable(tf.truncated_normal([1000, 1000], -0.1, 0.1))
# Encoder representation weight
v = tf.Variable(tf.truncated_normal([1000, 1000], -0.1, 0.1))
# Encoder
h_previous = tf.zeros([1000, 1])
for t in range(N):
    # Current vector and its embedding
    xt = tf.reshape(tf.slice(X, [t, 0], [1, k]), [k, 1])
    e = tf.matmul(we, xt)
    # Reset calculation
    r = tf.sigmoid(tf.matmul(wr, e) + tf.matmul(ur, h_previous))
    # Update calculation
    z = tf.sigmoid(tf.matmul(wz, e) + tf.matmul(uz, h_previous))
    # Hidden-tilde calculation
    h_tilde = tf.tanh(tf.matmul(w, e) + tf.matmul(u, r * h_previous))
    # Hidden calculation
    one = tf.ones([1000, 1])  # keep the [1000, 1] shape consistent with z and h_tilde
    h = z * h_previous + (one - z) * h_tilde
    h_previous = h
c = tf.tanh(tf.matmul(v, h_previous))
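A possible alternative (a sketch of my own, not part of the original answer) is to keep the hidden state as a rank-1 tensor and wrap each matrix-vector product in a small helper that adds the extra dimension before tf.matmul and removes it afterwards; matvec is a hypothetical name:

def matvec(m, vec):
    # promote vec to a column, multiply, then squeeze the result back to rank 1
    return tf.squeeze(tf.matmul(m, tf.expand_dims(vec, 1)), [1])

With such a helper, a line like r = tf.sigmoid(tf.matmul(wr, e) + tf.matmul(ur, h_previous)) could instead read r = tf.sigmoid(matvec(wr, e) + matvec(ur, h_previous)), with e and h_previous left as rank-1 tensors.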