Everything is classified wrongly (it seems I am making some systematic error) #152
Comments
Running
|
I compared the scripts, and the meaningful difference is that process_image.py doesn't use … If I employ …, is this expected? I mean, the original image is 2300x2300 and the resize reduces it to 224x224, which is a big loss. But the problem here is that … |
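(For reference, a minimal sketch, not from the thread, of what the explicit resize step changes; the file name is hypothetical, and whether process_image.py uses exactly this transform is an assumption.)

```python
import torchxrayvision as xrv
import torchvision
import skimage.io

# Hypothetical file name, used only for illustration.
img = skimage.io.imread("chest_xray_2300x2300.jpg")
img = xrv.datasets.normalize(img, 255)   # 8-bit values -> [-1024, 1024]
img = img.mean(2)[None, ...]             # single channel, shape (1, H, W)

crop_only = torchvision.transforms.Compose([xrv.datasets.XRayCenterCrop()])
crop_and_resize = torchvision.transforms.Compose([
    xrv.datasets.XRayCenterCrop(),
    xrv.datasets.XRayResizer(224),   # the explicit downsampling step discussed above
])

print("crop only:    ", crop_only(img).shape)        # keeps the original resolution
print("crop + resize:", crop_and_resize(img).shape)  # (1, 224, 224)
```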
It seems the two different resize operations, one with skimage and one with PyTorch upsample, are changing the range of the pixel values. I'll look more into it. |
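To check this, here is a small diagnostic, not from the thread, that resizes the same synthetic [-1024, 1024] image once with skimage and once with PyTorch bilinear interpolation and prints the resulting value ranges (the exact settings used inside the library's own resize paths are assumptions, so this only shows how one could compare them):

```python
import numpy as np
import skimage.transform
import torch
import torch.nn.functional as F

# Synthetic image in the [-1024, 1024] range that torchxrayvision models expect.
img = np.random.uniform(-1024, 1024, size=(2300, 2300)).astype(np.float32)

# Path 1: skimage resize (roughly what an XRayResizer-style transform would do).
sk = skimage.transform.resize(img, (224, 224), preserve_range=True, anti_aliasing=True)

# Path 2: PyTorch bilinear interpolation (the "upsample" path mentioned above).
t = torch.from_numpy(img)[None, None, ...]   # shape (1, 1, 2300, 2300)
pt = F.interpolate(t, size=(224, 224), mode="bilinear", align_corners=False)

print("original:", img.min(), img.max())
print("skimage: ", sk.min(), sk.max())
print("pytorch: ", pt.min().item(), pt.max().item())
```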
I tried the same code without explicit resizing and with …:

```python
import torchxrayvision as xrv
import skimage, torch, torchvision

print(xrv.__file__)

# Prepare the image:
#img = skimage.io.imread("16747_3_1.jpg")
img = skimage.io.imread("covid-19-pneumonia-58-prior.jpg")
#img = skimage.io.imread("test2.png")
img = xrv.datasets.normalize(img, 255)  # convert 8-bit image to [-1024, 1024] range
img = img.mean(2)[None, ...]  # Make single color channel

#transform = torchvision.transforms.Compose([xrv.datasets.XRayCenterCrop(), xrv.datasets.XRayResizer(224)])
transform = torchvision.transforms.Compose([xrv.datasets.XRayCenterCrop()])
#transform = torchvision.transforms.Compose([xrv.datasets.XRayCenterCrop(), xrv.datasets.XRayResizer(512)])
img = transform(img)
img = torch.from_numpy(img)

# Load model and process image
#model = xrv.models.DenseNet(weights="densenet121-res224-all")
model = xrv.models.ResNet(weights="resnet50-res512-all")
#model = xrv.baseline_models.jfhealthcare.DenseNet()
outputs = model(img[None, ...])  # or model.features(img[None, ...])

# Print results
cpu_tensor = outputs[0].cpu()
result = zip(model.pathologies, cpu_tensor.detach().numpy())
result_sorted = sorted(result, key=lambda x: x[1], reverse=True)
for finding, percentage in result_sorted:
    print(f"{finding}: {percentage * 100:.0f}%")
```
|
I'm using this code, which is pretty much the same as the code from the README, but the classification of the test image is completely wrong; the image shows pneumonia, so why does this happen?