
torch/install/bin/luajit: bad argument #3 to '?' (number expected, got userdata) #3

shay86 opened this issue Jun 8, 2017 · 3 comments


shay86 commented Jun 8, 2017

Hi,
I posted previously about the issue (input and gradOutput shapes do not match), and you recommended installing a specific Torch with CUDA 5 or 6. I did, but unfortunately it didn't work out. Then I tried your second recommendation, to reshape the data spatially, as below (trainer.lua):

        -- evaluate function for complete mini batch
        for i = 1,#inputs do
            -- estimate f
            local output = model:forward(inputs[i])
            model:add(nn.Reshape(inputs[i]))  -- here
            --clip extra output from shift and stitch
            -- output = tablex.map(function(t) return t[{{1,targets[i][1]:size(1)},{1,t:size(2)}}] end, output)
            -- print(inputs[i])
            -- print(output)

            output = output[{ {1, targets[i]:size(1)},{} }] 
            -- print(targets[i])
            local err
            if cuda then
                err = criterion:forward(output:cuda(), targets[i]:cuda())
                model:add(nn.Reshape(targets[i]))  -- here
            else
                err = criterion:forward(output, targets[i])
                model:add(nn.Reshape(targets[i]))  -- here
            end
            f = f + err

            -- estimate df/dW

This time it gave me this error:
/torch/install/bin/luajit: bad argument #3 to '?' (number expected, got userdata)
stack traceback:
[C]: at 0x7fa0ce476830
[C]: in function '__newindex'
/home/shayan/torch/install/share/lua/5.1/nn/Reshape.lua:19: in function '__init'
/home/shayan/torch/install/share/lua/5.1/torch/init.lua:91: in function </home/shayan/torch/install/share/lua/5.1/torch/init.lua:87>
[C]: in function 'Reshape'
./Trainer.lua:71: in function 'opfunc'
/home/shayan/torch/install/share/lua/5.1/optim/sgd.lua:44: in function 'optimMethod'
./Trainer.lua:111: in function 'train'
main.lua:93: in function 'main'
main.lua:126: in main chunk
[C]: in function 'dofile'
...ayan/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

It is asking for the input shape numbers, and I don't know them. Moreover, I read the code and couldn't find where the input shape is set. The only thing I am sure of is that the input is always larger than the gradOutput by one or two, as you can see from the errors below:

5.1/nn/THNN.lua:110: input and gradOutput shapes do not match: input [524 x 2], gradOutput [522 x 2]
5.1/nn/THNN.lua:110: input and gradOutput shapes do not match: input [162 x 2], gradOutput [161 x 2]
5.1/nn/THNN.lua:110: input and gradOutput shapes do not match: input [566 x 2], gradOutput [564 x 2]
5.1/nn/THNN.lua:110: input and gradOutput shapes do not match: input [254 x 2], gradOutput [253 x 2]

But there is no specific number.
Can you help me solve this problem? I am trying to find another solution, but I am new to Torch and Lua (to be honest, I am learning them just for your code). I am studying a lot of tutorials, but I don't fully understand your code yet.

Thanks in advance.

@ebetica
Collaborator

ebetica commented Jun 13, 2017

Do not use nn.Reshape, since it does not allow dynamic reshaping. Just use torch.reshape to get the outputs and inputs into the same shape. The last few elements of the input do not matter; they are just padding, so it should be okay to simply remove them so that the input and gradOutput are the same size.
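As a rough sketch (this is not code from the repo; `model`, `criterion`, `inputs[i]`, and `targets[i]` are assumed to be the trainer's existing variables), trimming the padding at runtime could look like:

```lua
-- Hypothetical sketch: trim the output at runtime so its first dimension
-- matches the target, instead of constructing an nn.Reshape module.
-- nn.Reshape expects fixed numeric sizes at construction time, which is
-- why passing it a tensor produces "number expected, got userdata".
local output = model:forward(inputs[i])      -- e.g. 524 x 2
local n = targets[i]:size(1)                 -- e.g. 522
output = output:narrow(1, 1, n)              -- drop the trailing padding rows
local err = criterion:forward(output, targets[i])
```

Here `narrow(1, 1, n)` keeps rows 1..n along the first dimension without copying; unlike a reshape, it does not require the total element counts to match, which is what makes it suitable for dropping padding.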

@shay86
Author

shay86 commented Jun 14, 2017

Hi,
this is what I tried:

        -- evaluate function for complete mini batch
        for i = 1,#inputs do
            -- estimate f
            --local output = model:forward(inputs[i])
            local x=torch.reshape(inputs[i],161,2)
            local output = model:forward(x[i])
            --clip extra output from shift and stitch
            -- output = tablex.map(function(t) return t[{{1,targets[i][1]:size(1)},{1,t:size(2)}}] end, output)
            -- print(inputs[i])
            -- print(output)

            output = output[{ {1, targets[i]:size(1)},{} }] 
            -- print(targets[i])
            local err
            if cuda then
                local y=torch.reshape(targets[i],161,2)
                err = criterion:forward(output:cuda(), y[i]:cuda())
                --err = criterion:forward(output:cuda(), targets[i]:cuda())
            else
                local y=torch.reshape(targets[i],161,2)
                err = criterion:forward(output, y[i])
               
               -- err = criterion:forward(output, targets[i])
            end

but I got this error:
n.ClassNLLCriterion
Epoch 0: training...
/home/shayan/torch/install/bin/luajit: inconsistent tensor size at /home/shayan/torch/pkg/torch/lib/TH/generic/THTensorCopy.c:17
stack traceback:
[C]: at 0x7ff3ba267440
[C]: in function 'reshape'
./Trainer.lua:70: in function 'opfunc'
/home/shayan/torch/install/share/lua/5.1/optim/sgd.lua:44: in function 'optimMethod'
./Trainer.lua:114: in function 'train'
main.lua:92: in function 'main'
main.lua:125: in main chunk
[C]: in function 'dofile'
...ayan/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

Do you have specific numbers for the input and output shapes, or are they dynamic? If they are dynamic, how do I make my reshape dynamic here too?
Thanks for your time and help.
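One hedged guess at making the sizes dynamic (replacing the hard-coded 161 x 2 above; the variable names are assumptions, not the repo's code) is to read each sample's size from the tensors themselves rather than fixing it:

```lua
-- Hypothetical sketch: derive per-sample sizes at runtime instead of
-- hard-coding 161 x 2, which fails with "inconsistent tensor size"
-- whenever a sample has a different length.
local output = model:forward(inputs[i])
local n = math.min(output:size(1), targets[i]:size(1))
local trimmedOutput = output:narrow(1, 1, n)   -- keep rows 1..n
local trimmedTarget = targets[i]:narrow(1, 1, n)
local err = criterion:forward(trimmedOutput, trimmedTarget)
```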

@shay86
Author

shay86 commented Jun 18, 2017

Hi,
I have a question: how do you use the PSI-BLAST PSSM features in the code? Do you enter them together with the amino acid sequence as input to the CNN, or as a three-dimensional image? I mean, is the amino acid sequence a two-dimensional array, with the PSI-BLAST PSSM as the third dimension, which here is 20 dimensions?
