Higher-order derivatives of ffjord on gpu fail #652
Comments
@DhairyaLGandhi any good way around this?
Just curious whether there has been any movement on this? Or whether there could be an alternative way to compute the Laplacian using other functions / AD packages.
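One alternative that might be worth trying, at least on CPU, is to take the Laplacian as the trace of a ForwardDiff Hessian rather than going through `Zygote.diaghessian`. A minimal sketch follows; `laplacian` and `logpdf_fn` are illustrative names, the stand-in density is not the FFJORD model, and whether this composes with the GPU forward pass is exactly the open question in this issue:

```julia
using ForwardDiff
using LinearAlgebra: tr

# Laplacian of a scalar function f at x, computed as the trace of the
# ForwardDiff (forward-over-forward) Hessian. No Zygote involved.
laplacian(f, x::AbstractVector) = tr(ForwardDiff.hessian(f, x))

# Stand-in log-density (2-D standard normal), purely to show the call pattern;
# it is NOT the FFJORD model from this issue.
logpdf_fn(x) = -0.5f0 * sum(abs2, x) - 0.5f0 * length(x) * log(2f0 * π)

x0 = randn(Float32, 2)
laplacian(logpdf_fn, x0)  # ≈ -2 for a 2-D standard normal (Hessian of log p is -I)
```

Computing the full Hessian just to take its trace scales quadratically in the input dimension, so this is only attractive for low-dimensional inputs like the 2-D case here.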
Not sure if it's related, but if I simply try to call the loss of this FFJORD code, I get a scalar-indexing-on-a-GPU-array error. It seems to point to the `[:, :, end]` slice on the solve in `forward_ffjord`.

Code:

```julia
using DiffEqFlux, OrdinaryDiffEq, Flux, CUDA, Distributions, Zygote

# Small network parameterizing the FFJORD dynamics, moved to the GPU
nn = Chain(
    Dense(2, 32, tanh),
    Dense(32, 2),
) |> gpu
tspan = (0.0f0, 1.0f0)
ffjord_mdl = FFJORD(nn, tspan, Tsit5())

# Log-density of the flow at x; the trace-estimation noise e is also on the GPU
function loss(x)
    e = randn(Float32, size(x)) |> gpu
    logpx, λ₁, λ₂ = ffjord_mdl(x, ffjord_mdl.p, e)
    return logpx
end

# Diagonal of the Hessian of the summed log-density w.r.t. the inputs
function lapl(x)
    return Zygote.diaghessian(x -> sum(loss(x)), x)
end

data_dist = Normal(0.0f0, 1.0f0)
train_data = gpu(rand(data_dist, 2))
loss(train_data)
```

Error:
Environment status:
#614 is probably the solution when it's finished.
A small update: FluxML/NNlibCUDA.jl#48 fixes the original bug in this issue. However, there remains another bug (which now looks Zygote-related) in the `diaghessian` call. The scalar indexing in the forward call of the loss also remains.
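In the meantime, one possible stopgap for the forward call (an assumption, not a verified fix) is to permit scalar indexing around it so the slice inside `forward_ffjord` does not throw; this falls back to very slow element-wise GPU access and only papers over the problem:

```julia
using CUDA

# Stopgap sketch: allow scalar indexing just for this call so the
# [:, :, end] slice inside forward_ffjord does not raise an error.
# This is a slow workaround, not a fix for the underlying issue.
logpx = CUDA.@allowscalar loss(train_data)
```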
I need to calculate the Laplacian of the densities modelled by a normalizing flow w.r.t. the inputs. On CPU, I can e.g. use the following code (which works, but seems to scale poorly with the number of samples):
However, when I attempt to run this code on GPU, I get the following error:
These are my package versions:
Any help would be appreciated!