I am running inference on the ImageNet 2012 validation dataset with a ResNet50 model through the TensorFlow C++ API. I saved the pretrained ResNet50 model in SavedModel (.pb) format with `model = tf.keras.applications.ResNet50(weights='imagenet')`, `save_path = '/home/parveen/Models/models/resnet50_v1_saved_model'`, and `tf.saved_model.save(model, save_path)`.
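For reference, here is the export step above together with a quick check of the serving signature; the exact names printed depend on the TensorFlow/Keras version, so this is only a sketch for confirming the input/output tensor names that the C++ code feeds and fetches later:

```python
import tensorflow as tf

# Export step as described above.
model = tf.keras.applications.ResNet50(weights='imagenet')
save_path = '/home/parveen/Models/models/resnet50_v1_saved_model'
tf.saved_model.save(model, save_path)

# Reload and print the serving signature to confirm the tensor names the C++ code
# will feed ("serving_default_input_1:0") and fetch ("StatefulPartitionedCall:0").
loaded = tf.saved_model.load(save_path)
infer = loaded.signatures['serving_default']
print(infer.structured_input_signature)
print(infer.structured_outputs)
# The same information is available from the command line:
#   saved_model_cli show --dir /home/parveen/Models/models/resnet50_v1_saved_model --all
```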
Now I load the saved model with the TensorFlow C++ API and run inference. Since this is the validation set, I have the ground-truth class labels for the images, but my predictions do not match the ground-truth file. I am new to deep learning and this is not my field, so please help me figure out why!
Here is where I preprocess the image:

```cpp
#include <iostream>
#include <string>

#include <opencv2/opencv.hpp>
#include "tensorflow/core/framework/tensor.h"

using namespace tensorflow;

// Load an image with OpenCV, resize it to 224x224, scale it to [0, 1], and copy it
// into a {1, 224, 224, 3} float tensor (NHWC). Note that cv::imread returns pixels
// in BGR channel order, and that order is preserved in the tensor.
Tensor LoadAndPreprocessImage(const std::string& image_path) {
  cv::Mat image = cv::imread(image_path, cv::IMREAD_COLOR);
  // Dummy tensor returned when the image cannot be read.
  Tensor errorTensor(DT_FLOAT, TensorShape({1}));
  if (image.empty()) {
    std::cerr << "Error: Image not loaded. Check the file path and file existence: "
              << image_path << std::endl;
    return errorTensor;
  }
  cv::resize(image, image, cv::Size(224, 224));
  image.convertTo(image, CV_32F);
  image = image / 255.0;
  Tensor input_tensor(DT_FLOAT, TensorShape({1, 224, 224, 3}));
  auto tensor_mapped = input_tensor.tensor<float, 4>();
  for (int y = 0; y < 224; ++y) {
    for (int x = 0; x < 224; ++x) {
      for (int c = 0; c < 3; ++c) {
        tensor_mapped(0, y, x, c) = image.at<cv::Vec3f>(y, x)[c];
      }
    }
  }
  return input_tensor;
}
```
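As a cross-check of this preprocessing (not part of my C++ program), one option is to reproduce it in Python and compare the result against running the same image through the Keras model with its own `tf.keras.applications.resnet50.preprocess_input`; the image path below is a placeholder:

```python
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights='imagenet')
image_path = '/home/parveen/Models/dataset/val_imagenet12/SOME_IMAGE.JPEG'  # placeholder

# Mirror the C++ pipeline: BGR image from cv::imread, resized to 224x224, scaled to [0, 1].
bgr = cv2.resize(cv2.imread(image_path, cv2.IMREAD_COLOR), (224, 224)).astype(np.float32)
x_cpp_like = (bgr / 255.0)[np.newaxis, ...]

# Reference pipeline: RGB image passed through the preprocessing the Keras ResNet50 ships with.
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
x_keras = tf.keras.applications.resnet50.preprocess_input(rgb[np.newaxis, ...].copy())

for name, x in [('C++-style', x_cpp_like), ('Keras preprocess_input', x_keras)]:
    preds = model.predict(x, verbose=0)
    print(name, tf.keras.applications.resnet50.decode_predictions(preds, top=3)[0])
```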
Here is the inference code:

```cpp
#include <algorithm>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <iterator>
#include <map>
#include <string>
#include <vector>

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/core/framework/tensor.h"

namespace fs = std::filesystem;
using namespace tensorflow;

int main() {
  std::string model_dir = "/home/parveen/Models/saved_models/resnet50_v1_saved_model";
  std::string dataset_dir = "/home/parveen/Models/dataset/val_imagenet12";
  // Load the model
  SavedModelBundleLite bundle;
  SessionOptions session_options;
  RunOptions run_options;
  Status status = LoadSavedModel(session_options, run_options, model_dir, {"serve"}, &bundle);
  if (!status.ok()) {
    std::cerr << "Error loading model: " << status << std::endl;
    return -1;
  }
  // Temporary storage for results (image path -> predicted class index)
  std::map<std::string, int> results_map;
  // Iterate over all images in the dataset directory
  for (const auto& entry : fs::directory_iterator(dataset_dir)) {
    std::string image_path = entry.path().string();
    // Load and preprocess the image
    Tensor input_tensor = LoadAndPreprocessImage(image_path);
    // Run the model: feed the serving signature's input and fetch its output scores
    std::vector<Tensor> outputs;
    status = bundle.GetSession()->Run({{"serving_default_input_1:0", input_tensor}},
                                      {"StatefulPartitionedCall:0"}, {}, &outputs);
    if (!status.ok()) {
      std::cerr << "Error running the model on image: " << image_path
                << " Error: " << status << std::endl;
      continue;  // Skip to next image
    }
    // Top-1 classification result: index of the highest score in the 1000-way output
    auto scores = outputs[0].flat<float>();
    int max_index = std::distance(scores.data(),
                                  std::max_element(scores.data(), scores.data() + scores.size()));
    // Store the result in the map
    results_map[image_path] = max_index;
  }
  // Open output file for saving sorted results
  std::ofstream results_file("inference_results.txt");
  if (!results_file.is_open()) {
    std::cerr << "Error: Unable to open results file." << std::endl;
    return -1;
  }
  // Write results (sorted by image path, since std::map is ordered) to the file
  for (const auto& [image_path, class_index] : results_map) {
    results_file << image_path << " Predicted class index: " << class_index << std::endl;
  }
  results_file.close();
  return 0;
}
```
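A sketch of one way to score `inference_results.txt` against a ground-truth file follows; the ground-truth filename and its `"<filename> <label>"` format are assumptions, and the 0-999 class indices produced by the Keras ResNet50 are not necessarily the same numbering used in the ILSVRC 2012 devkit ground-truth file:

```python
import os

# Assumed format: one "<filename> <integer label>" pair per line.
ground_truth = {}
with open('val_ground_truth.txt') as f:  # hypothetical filename
    for line in f:
        name, label = line.split()
        ground_truth[name] = int(label)

correct = 0
total = 0
with open('inference_results.txt') as f:
    for line in f:
        # Each line looks like: "<image_path> Predicted class index: <index>"
        path_part, _, index_part = line.rpartition('Predicted class index:')
        name = os.path.basename(path_part.strip())
        if name in ground_truth:
            total += 1
            correct += int(index_part) == ground_truth[name]

print(f'Top-1 accuracy: {correct / max(total, 1):.4f} over {total} images')
```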