How to use IceVisionTransformAdapter properly in an Object Detection task? #1269
-
I am trying to use IceVisionTransformAdapter for an object detection task, but I'm not sure how to set it up properly.
Replies: 1 comment 1 reply
-
Hi @IncubatorShokuhou! The `IceVisionTransformAdapter` is intended to let you adapt the transforms from IceVision / Albumentations to work with the object / keypoint detection and instance segmentation tasks. Internally, it just changes the format of your samples so that the boxes, masks, etc. are all transformed correctly. So it's not quite the same as `albumentations.Compose`. A general recipe using `alb.Compose` would look like this:

```python
from dataclasses import dataclass

import albumentations as alb
from icevision.tfms import A  # IceVision's Albumentations namespace; A.RandomCrop etc. are albumentations transforms
from flash import InputTransform
from flash.core.integrations.icevision.transforms import IceVisionTransformAdapter
from flash.image import ObjectDetectionData

# Training-time augmentations
train_transform = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

# Evaluation-time transform (validation / test / predict)
transform = A.Compose([
    A.CenterCrop(width=256, height=256),
])

@dataclass
class CustomTransform(InputTransform):
    # Applied at validation / test / predict time
    def per_sample_transform(self):
        return IceVisionTransformAdapter([transform])

    # Applied at train time
    def train_per_sample_transform(self):
        return IceVisionTransformAdapter([train_transform])

datamodule = ObjectDetectionData.from_coco(
    train_folder="data/coco128/images/train2017/",
    train_ann_file="data/coco128/annotations/instances_train2017.json",
    val_split=0.1,
    batch_size=4,
    # NOTE: the below will be simplified in 0.8.0
    train_transform=CustomTransform,
    val_transform=CustomTransform,
    test_transform=CustomTransform,
    predict_transform=CustomTransform,
    transform_kwargs={},
)
```

Hope that helps 😃 Let me know if you have any specific example requests.
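For reference, here is a minimal sketch of how the resulting datamodule could then be used for fine-tuning. The `ObjectDetector` arguments below (`head`, `backbone`, `num_classes`, `image_size`) are an assumption based on the pre-0.8.0 Flash API, so adjust them to whatever your installed version expects:

```python
# Minimal fine-tuning sketch (assumes the pre-0.8.0 flash.image.ObjectDetector API;
# the head/backbone choices are illustrative, not prescriptive).
import flash
from flash.image import ObjectDetector

model = ObjectDetector(
    head="efficientdet",
    backbone="d0",
    num_classes=datamodule.num_classes,
    image_size=512,
)

trainer = flash.Trainer(max_epochs=1)
trainer.finetune(model, datamodule=datamodule, strategy="freeze")
```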