
Getting the output of an intermediate layer in TensorFlow/Keras

Andrew Foulds · 2 months ago



I am trying to get the output of an intermediate layer in Keras. Here is my code:

XX = model.input # Keras Sequential() model object
YY = model.layers[0].output
F = K.function([XX], [YY]) # K refers to keras.backend


Xaug = X_train[:9]
Xresult = F([Xaug.astype('float32')])

Running this, I get the following error:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'dropout_1/keras_learning_phase' with dtype bool

I understand that, because I use a dropout layer in my model, I have to specify learning_phase(), per the Keras documentation. I changed my code to the following:

XX = model.input
YY = model.layers[0].output
F = K.function([XX, K.learning_phase()], [YY])


Xaug = X_train[:9]
Xresult = F([Xaug.astype('float32'), 0])

Now I get a new error that I cannot figure out:

TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a int into a Tensor.

Any help would be greatly appreciated.
PS: I am new to TensorFlow and Keras.

Edit 1: Below is the full code I am using. I am working from this NIPS paper, and this is the Keras implementation.

input_shape =  X_train.shape[1:]

# initial weights
b = np.zeros((2, 3), dtype='float32')
b[0, 0] = 1
b[1, 1] = 1
W = np.zeros((100, 6), dtype='float32')
weights = [W, b.flatten()]

locnet = Sequential()
locnet.add(Convolution2D(64, (3, 3), input_shape=input_shape, padding='same'))
locnet.add(Activation('relu'))
locnet.add(Convolution2D(64, (3, 3), padding='same'))
locnet.add(Activation('relu'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Convolution2D(128, (3, 3), padding='same'))
locnet.add(Activation('relu'))
locnet.add(Convolution2D(128, (3, 3), padding='same'))
locnet.add(Activation('relu'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Convolution2D(256, (3, 3), padding='same'))
locnet.add(Activation('relu'))
locnet.add(Convolution2D(256, (3, 3), padding='same'))
locnet.add(Activation('relu'))
locnet.add(MaxPooling2D(pool_size=(2, 2)))
locnet.add(Dropout(0.5))
locnet.add(Flatten())
locnet.add(Dense(100))
locnet.add(Activation('relu'))
locnet.add(Dense(6, weights=weights))


model = Sequential()

model.add(SpatialTransformer(localization_net=locnet,
                             output_size=(128, 128), input_shape=input_shape))

model.add(Convolution2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(128, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Convolution2D(128, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(256, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Convolution2D(256, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(256, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Convolution2D(256, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(256))
model.add(Activation('relu'))

model.add(Dense(num_classes))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

#==============================================================================
# Start Training
#==============================================================================
#define training results logger callback
csv_logger = keras.callbacks.CSVLogger(training_logs_path+'.csv')
model.fit(X_train, y_train,
          batch_size=batch_size,
          epochs=20,
          validation_data=(X_valid, y_valid),
          shuffle=True,
          callbacks=[SaveModelCallback(), csv_logger])




#==============================================================================
# Visualize what Transformer layer has learned
#==============================================================================

XX = model.input
YY = model.layers[0].output
F = K.function([XX, K.learning_phase()], [YY])


Xaug = X_train[:9]
Xresult = F([Xaug.astype('float32'), 0])

# input images
plt.figure()
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(np.squeeze(Xaug[i]))
    plt.axis('off')

# transformer output, in a separate figure so it does not
# overwrite the input grid above
plt.figure()
for i in range(9):
    plt.subplot(3, 3, i + 1)
    plt.imshow(np.squeeze(Xresult[0][i]))
    plt.axis('off')
plt.show()
Replies
  • This should work. 1) Can you show us your model? 2) Can you try adding another layer to the model? 3) If it's not too much trouble, could you try building the model in the functional style?

  • The simplest way is to create a new model in Keras, without calling the backend at all. You need the functional Model API for this:

    from keras.models import Model
    
    XX = model.input 
    YY = model.layers[0].output
    new_model = Model(XX, YY)
    
    Xaug = X_train[:9]
    Xresult = new_model.predict(Xaug)
    
  • It seems predict ignores the batch_normalization layers. Is there a way to include those layers and still get the intermediate layer's output?
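  • In TF2-style Keras you can keep dropout/batch-norm in their training behavior by calling the sub-model directly with `training=True` instead of using `predict`. A minimal sketch (the toy model below is an illustrative stand-in for the one in the question; only the presence of dropout/batch-norm layers matters):

    ```python
    import numpy as np
    from tensorflow.keras import layers, models

    # Toy stand-in model containing batch-norm and dropout layers
    inp = layers.Input(shape=(4,))
    h = layers.Dense(8)(inp)
    h = layers.BatchNormalization()(h)
    out = layers.Dropout(0.5)(h)
    model = models.Model(inp, out)

    # Sub-model up to (and including) the batch-norm layer
    # (layers[0] is the InputLayer, so index 2 is the batch-norm layer)
    sub = models.Model(model.input, model.layers[2].output)

    x = np.random.rand(3, 4).astype('float32')
    out_infer = sub(x, training=False)  # BN uses its moving statistics
    out_train = sub(x, training=True)   # BN normalizes with the batch statistics
    ```

    The `training` flag replaces the old `K.learning_phase()` mechanism; it propagates to every dropout and batch-norm layer inside the sub-model.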

  • You could try:

    model1 = tf.keras.models.Sequential(base_model.layers[:1])
    model2 = tf.keras.models.Sequential(base_model.layers[1:])
    
    Xaug = X_train[:9]
    out = model1(Xaug)
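
    A self-contained sketch of this splitting approach (the toy model and the split index are illustrative assumptions, not the model from the question); the two halves compose back to the original model's output:

    ```python
    import numpy as np
    import tensorflow as tf

    # Toy base model standing in for the trained model above
    base_model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
        tf.keras.layers.Dense(2),
    ])

    # Split into "up to the intermediate layer" and "the rest"
    model1 = tf.keras.Sequential(base_model.layers[:1])
    model2 = tf.keras.Sequential(base_model.layers[1:])

    x = np.random.rand(3, 4).astype('float32')
    mid = model1(x)    # intermediate activations, shape (3, 8)
    out = model2(mid)  # same result as base_model(x)
    ```

    Because both halves share the original layer objects, no weights are copied; `model1` simply reuses the trained parameters.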
    