You can also use the Llama 3 model on SageMaker JumpStart, as shown below:
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="meta-textgeneration-llama-3-70b-instruct")
# Deployment requires accepting the Llama 3 EULA: change this to accept_eula=True once you have reviewed it.
predictor = model.deploy(accept_eula=False)
response = predictor.predict({
    "inputs": "this is where you place your prompt",
    # do_sample must be a JSON boolean, not the string "true"
    "parameters": {"max_new_tokens": 128, "do_sample": True},
})
However, how can I improve latency and/or throughput (SageMaker MultiDataModel)?
I am working on a project that requires detecting and masking flowers in images using OpenCV and adaptive thresholding. Despite my efforts, the results are inconsistent: some flowers are masked well, while others are only partially masked or not detected at all. I am using the Oxford Flowers 102 dataset in TensorFlow for this task. Here is the code I am using:
import cv2
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds

# Note: the larger 'test' split is deliberately used for training here.
(train_dataset, test_dataset, validation_dataset), ds_info = tfds.load(
    'oxford_flowers102',
    split=['test', 'train', 'validation'],
    with_info=True,
    as_supervised=True)

def normalize_img(image, label):
    image = tf.image.resize(image, (256, 256))
    return tf.cast(image, tf.float32) / 255.0, label

train_dataset = train_dataset.map(normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
train_dataset = train_dataset.cache()
# The shuffle buffer should match the split actually used for training ('test').
train_dataset = train_dataset.shuffle(ds_info.splits['test'].num_examples)
train_dataset = train_dataset.batch(32)
train_dataset = train_dataset.prefetch(tf.data.AUTOTUNE)
def detect_flowers_and_mask(image):
    # Convert the normalized float tensor back to a uint8 RGB array for OpenCV.
    image_np = (image.numpy() * 255).astype(np.uint8)
    gray_image = cv2.cvtColor(image_np, cv2.COLOR_RGB2GRAY)
    binary_image = cv2.adaptiveThreshold(gray_image, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                         cv2.THRESH_BINARY_INV, 11, 2)
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Assume the largest contour is the flower and fill it into a mask.
        max_contour = max(contours, key=cv2.contourArea)
        mask = np.zeros_like(gray_image, dtype=np.uint8)
        cv2.drawContours(mask, [max_contour], -1, 255, thickness=cv2.FILLED)
        masked_image = cv2.bitwise_and(image_np, image_np, mask=mask)
    else:
        masked_image = image_np
    return masked_image
plt.figure(figsize=(15, 15))
# Take one batch and display the first 20 masked images; the original loop
# took 20 batches but reused the same subplot index for every image in a batch.
for image_batch, label_batch in train_dataset.take(1):
    for i, (image, label) in enumerate(zip(image_batch[:20], label_batch[:20])):
        masked_image = detect_flowers_and_mask(image)
        plt.subplot(4, 5, i + 1)
        plt.imshow(masked_image)
        plt.title(ds_info.features['label'].int2str(label.numpy()))
        plt.axis("off")
plt.tight_layout()
plt.show()
You could try thresholding on the dark background and then inverting: use cv2.inRange() on the background colour range, or simply threshold at some very low, near-black value. I don't think adaptive thresholding is the right tool here.