
understanding feature vector extraction for an image ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 2048]) #2

shubhamturai opened this issue Jun 5, 2019 · 4 comments


shubhamturai commented Jun 5, 2019

I am trying to extract feature vectors from your trained network. Firstly, I am not able to get a feature vector for a single image, and I do not understand the reason behind it.

```python
import torch
from torch.autograd import Variable
from backbone import Network_D
from PIL import Image
from torchvision.transforms import ToTensor
import numpy
import time

model = Network_D()
model.load_state_dict(torch.load('/home/shubam/Documents/person_reidentification/SphereReID/res/model_final.pkl'))

img_path = '/home/shubam/Documents/person_reidentification/SphereReID/dataset/Market-1501-v15.09.15/bounding_box_test/0000_c3s2_105228_06.jpg'
image = Image.open(img_path)
image = ToTensor()(image).unsqueeze(0)  # unsqueeze to add an artificial batch dimension
image = Variable(image)

#print("the size of image is {}".format(image))
t1 = time.time()
feature = model.forward(image).detach().cpu().numpy()
print("the feature is {}".format(feature[0, :]))
t2 = time.time()
print("time for cpu is {}".format(t2 - t1))
feature = model.forward(image).detach().cuda()  # .numpy()
print("the feature is {}".format(feature[0, :]))
t3 = time.time()
print("time for gpu is {}".format(t3 - t2))
```
I get this output:
```
Traceback (most recent call last):
  File "/home/shubam/Documents/person_reidentification/SphereReID/model_converter.py", line 18, in <module>
    feature = model.forward(image).detach().cpu().numpy()
  File "/home/shubam/Documents/person_reidentification/SphereReID/backbone.py", line 54, in forward
    x = self.bn2(x)
  File "/home/shubam/.virtualenv/tracker/local/lib/python2.7/site-packages/torch-1.0.1.post2-py2.7-linux-x86_64.egg/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/shubam/.virtualenv/tracker/local/lib/python2.7/site-packages/torch-1.0.1.post2-py2.7-linux-x86_64.egg/torch/nn/modules/batchnorm.py", line 76, in forward
    exponential_average_factor, self.eps)
  File "/home/shubam/.virtualenv/tracker/local/lib/python2.7/site-packages/torch-1.0.1.post2-py2.7-linux-x86_64.egg/torch/nn/functional.py", line 1619, in batch_norm
    raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 2048])
```
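
For reference, the same error can be reproduced in isolation with just an nn.BatchNorm1d layer fed a batch of one while it is in training mode (a minimal sketch, not the repo's code):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(2048)  # a freshly constructed module is in training mode
x = torch.randn(1, 2048)   # a batch of one, the same shape as in the traceback
bn(x)  # raises: ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 2048])
```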

Secondly, I tried concatenating the image with itself and with zeros to increase the batch dimension, so that the output has shape torch.Size([2, 2048]):

```python
image = Variable(image)

image = torch.cat((image, image), 0)
image = torch.cat((image, torch.zeros_like(image)), 0)

#print("the size of image is {}".format(image))
t1 = time.time()
feature = model.forward(image).detach().cpu().numpy()
print("the feature is {}".format(feature))
t2 = time.time()
print("time for cpu is {}".format(t2 - t1))
feature = model.forward(image).detach().cuda()  # .numpy()
print("the feature is {}".format(feature))
t3 = time.time()
print("time for gpu is {}".format(t3 - t2))
```
the output is:
```
the feature is [[ 0.4966124   0.57359195  0.683048   ... -0.6020484  -0.46173286
   0.577405  ]
 [-0.4953278  -0.60051304 -0.6328474  ...  0.6000462   0.48753467
  -0.6212439 ]]
time for cpu is 0.0762948989868
the feature is tensor([[-0.4958,  0.5337, -0.6106,  ..., -0.5830,  0.4491,  0.5779],
        [ 0.4971, -0.5606,  0.6608,  ...,  0.5810, -0.4233, -0.6217]],
       device='cuda:0')
time for gpu is 0.0752019882202
```

Here you can see that the two feature vectors differ, and that the two ways of computing them give different values. Could you please suggest a way to obtain a feature vector for a given image? Thank you.

Also, notice that the all-zero image still gets non-zero values in its feature vector.

@CoinCheung (Owner)

Have you tried adding model.eval()?

This is standard PyTorch behavior: if you run nn.BatchNorm1d in training mode, you need to make sure the batch size is greater than 1. For inference, you should call model.eval() first.
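
For example, a minimal sketch of single-image extraction (the checkpoint path and input resolution below are placeholders, not values taken from this repo):

```python
import torch
from backbone import Network_D

model = Network_D()
model.load_state_dict(torch.load('res/model_final.pkl'))  # placeholder checkpoint path
model.eval()  # BatchNorm layers now use their running statistics, so a batch of 1 is fine

with torch.no_grad():  # no gradients needed for feature extraction
    image = torch.randn(1, 3, 288, 144)  # stand-in for one preprocessed image
    feature = model(image)               # shape: (1, 2048)

print(feature.shape)
```

Calling model.eval() once after loading the weights is enough; switch back with model.train() only if you fine-tune afterwards.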

@shubhamturai (Author)

Thank you for your response. Yes, I tried that and got what I wanted. I would like to ask whether the CMC reported after evaluation is actually rank-1. I got CMC: 0.935867, mAP: 0.828507856565 after 200 epochs on the Market-1501 dataset. I would also appreciate suggestions on how to adapt your work to the MARS dataset.

@CoinCheung (Owner)

Hi,

For the first question, the answer is yes: it is rank-1.

As for the MARS dataset, I have to confess that I have no experience with it, so I cannot offer concrete suggestions. Perhaps you could read the dataset's specification more carefully to understand its structure; then you will know how to use it properly.

@shubhamturai (Author)

Thanks, Mr. Cheung. I have tried training on the MARS dataset in the same style as your code. However, the matrix multiplication in evaluate.py becomes so large that my machine cannot compute it, so I reduced the evaluation set to 20% of the available test images and query images. With that, I get CMC: 0.63501203, mAP: 0.671273149781.

| | Market-1501 | MARS |
| --- | --- | --- |
| train images | 12,937 | 509,914 |
| test images | 19,733 | 681,089 (only 20% used: 139,982) |
| query images | 3,369 | 114,493, extracted from the test images themselves (only 20% used: 23,490, because of the matmul in evaluate.py) |
| results after 200 epochs | CMC: 0.935867, mAP: 0.828507856565 | CMC: 0.63501203, mAP: 0.671273149781 |

Now, what would be the effect of decreasing the number of test images on the evaluation score, given that the number of training images stays the same? Could you please suggest a way to use all of the test and query images instead of only 20% of them?
The model training itself seems to be fine, but I am not sure how good the accuracy really is. Looking forward to your suggestions. Thank you.
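
One workaround I am considering is computing the query-gallery scores in chunks instead of one huge matmul, and accumulating the CMC/mAP statistics chunk by chunk so the full matrix never has to fit in memory. A rough sketch (the variable names are placeholders, not the actual evaluate.py code):

```python
import numpy as np

def iter_score_chunks(query_embs, gallery_embs, chunk_size=1024):
    """Yield similarity scores one query chunk at a time, so the full
    (num_query x num_gallery) matrix never has to be held in memory at once.
    Assumes both embedding matrices are already L2-normalised, so the dot
    product is the cosine similarity."""
    num_query = query_embs.shape[0]
    for start in range(0, num_query, chunk_size):
        end = min(start + chunk_size, num_query)
        # (chunk, dim) @ (dim, num_gallery) -> one slice of the score matrix
        yield start, end, query_embs[start:end] @ gallery_embs.T

# Usage: rank the gallery per chunk, update the metrics, then discard the chunk.
# for start, end, scores in iter_score_chunks(query_embs, gallery_embs):
#     ranking = np.argsort(-scores, axis=1)
#     ...  # accumulate CMC / mAP for queries start..end here
```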
