The original model is as follows:
import logging
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv1D, Conv1DTranspose, Concatenate

tf.get_logger().setLevel(logging.DEBUG)
logging.basicConfig(level=logging.DEBUG, filename='testLog1.log', filemode='w',
                    format='%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s')
inp = Input(shape=(batching_size,1))
# Encoder: five strided Conv1D layers
c1 = Conv1D(2,32,2,'same',activation='relu')(inp)
c2 = Conv1D(4,32,2,'same',activation='relu')(c1)
c3 = Conv1D(8,32,2,'same',activation='relu')(c2)
c4 = Conv1D(16,32,2,'same',activation='relu')(c3)
c5 = Conv1D(32,32,2,'same',activation='relu')(c4)
# Decoder: Conv1DTranspose layers with skip connections to the encoder
dc1 = Conv1DTranspose(32,32,1,padding='same')(c5)
conc1 = Concatenate()([c5,dc1])
dc2 = Conv1DTranspose(16,32,2,padding='same')(conc1)
conc2 = Concatenate()([c4,dc2])
dc3 = Conv1DTranspose(8,32,2,padding='same')(conc2)
conc3 = Concatenate()([c3,dc3])
dc4 = Conv1DTranspose(4,32,2,padding='same')(conc3)
conc4 = Concatenate()([c2,dc4])
dc5 = Conv1DTranspose(2,32,2,padding='same')(conc4)
conc5 = Concatenate()([c1,dc5])
dc6 = Conv1DTranspose(1,32,2,padding='same')(conc5)
conc6 = Concatenate()([inp,dc6])
dc7 = Conv1DTranspose(1,32,1,padding='same',activation='linear')(conc6)
model = tf.keras.models.Model(inp,dc7)
model.compile(optimizer=tf.keras.optimizers.Adam(0.002),loss=tf.keras.losses.MeanAbsoluteError())
history = model.fit(train_dataset, epochs=1)
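For reference, representative_data_gen is used by the converter below but not shown here; a minimal sketch of such a generator, assuming train_dataset yields (input, target) pairs shaped (batch, batching_size, 1), would be:

def representative_data_gen():
    # Sketch only: yield ~100 single-sample input batches drawn from the training data.
    # Assumes train_dataset yields (input, target) pairs; adjust to your own pipeline.
    for input_batch, _ in train_dataset.take(100):
        yield [tf.cast(input_batch[:1], tf.float32)]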
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model.optimizations = [tf.lite.Optimize.DEFAULT]
# Full integer post-training quantization.
tflite_model.representative_dataset = representative_data_gen
tflite_model.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,        # enable TensorFlow ops
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8, # use both select ops and built-ins
]
tflite_model.inference_input_type = tf.int8
tflite_model.inference_output_type = tf.int8
tflite_model_quant_INT8 = tflite_model.convert()
Both the Keras model and the TFLite model work fine.
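For what it's worth, checking the int8 TFLite model by hand looks roughly like this (a sketch; the zero-filled sample is only a placeholder):

import numpy as np

interpreter = tf.lite.Interpreter(model_content=tflite_model_quant_INT8)
interpreter.allocate_tensors()
inp_det = interpreter.get_input_details()[0]
out_det = interpreter.get_output_details()[0]
scale, zero_point = inp_det['quantization']                 # input quantization parameters
sample = np.zeros((1, batching_size, 1), dtype=np.float32)  # placeholder sample for illustration
interpreter.set_tensor(inp_det['index'],
                       np.round(sample / scale + zero_point).astype(np.int8))
interpreter.invoke()
out = interpreter.get_tensor(out_det['index'])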
For some reason, I have to replace the 'Conv1DTranspose' operators with UpSampling1D + Conv1D. I am doing this on a Colab T4 (12.7 GB system RAM, 15.0 GB GPU RAM). When I run it, it crashes after exhausting all the memory and the Colab session restarts.
Since the crash is memory related, I tried reducing the number of replaced ops. But even if I replace only the last layer 'dc7', as shown below, it still crashes.
from tensorflow.keras.layers import UpSampling1D

def conv1d_transpose(x, filters, kernel_size, strides=1, padding='same', activation=None):
    # Replacement for Conv1DTranspose: upsample by the stride, then apply a regular Conv1D.
    x = UpSampling1D(size=strides)(x)
    x = Conv1D(filters=filters, kernel_size=kernel_size, padding=padding, activation=activation)(x)
    return x
.....
dc7 = conv1d_transpose(conc6, 1, 32, 1, padding='same', activation='linear') # Replacement
....
I think the trainable parameters are the same as for Conv1DTranspose (a quick check is sketched below), so I did not expect this to fail for resource reasons. The Keras model with the replacement runs fine at inference time; it only crashes when I try "full integer post-training quantization". By contrast, the default dynamic-range post-training quantization seems to complete normally. 'logging' shows nothing here.
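The parameter-count check mentioned above (a sketch with throwaway single-layer models; conc6 has 2 channels, so both variants come out to 32*2*1 weights + 1 bias = 65 parameters):

probe = Input(shape=(batching_size, 2))  # same channel count as conc6 (inp + dc6)
orig = tf.keras.Model(probe, Conv1DTranspose(1, 32, 1, padding='same', activation='linear')(probe))
repl = tf.keras.Model(probe, conv1d_transpose(probe, 1, 32, 1, padding='same', activation='linear'))
print(orig.count_params(), repl.count_params())  # both print 65

And for comparison, the dynamic-range conversion that does complete is just the converter without the representative dataset and the int8 I/O settings (sketch):

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # model with the replaced dc7
converter.optimizations = [tf.lite.Optimize.DEFAULT]         # dynamic-range quantization only
tflite_model_dynamic = converter.convert()                   # this completes without crashing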
Any hints or guidance would be appreciated. Thanks.