out of memory cuda #283
Replies: 10 comments 15 replies
-
Try lowering aabb_scale to 1 (open the JSON file).
-
I got the same issue; it only happens with the fox scene.
-
Hi, I'm having the same issue. Do you have any idea how to solve it? Thank you!!
-
Hi! In ProjectPath/data/nerf/fox/transforms.json, at line 14, the value "aabb_scale" defaults to 4; change it to 1.
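For reference, a minimal Python sketch of that edit (the path is an assumption, taken relative to the instant-ngp repository root; adjust it to wherever your scene lives):

```python
# Set aabb_scale to 1 in the fox scene's transforms.json.
import json

path = "data/nerf/fox/transforms.json"  # placeholder path, matching the ProjectPath above

with open(path, "r") as f:
    transforms = json.load(f)

transforms["aabb_scale"] = 1  # default is 4; a smaller scene bounding box needs less GPU memory

with open(path, "w") as f:
    json.dump(transforms, f, indent=2)

print("aabb_scale is now", transforms["aabb_scale"])
```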
-
Same issue. I modified aabb_scale to 1 and reduced the number of images, but probably my GPU is not enough (GTX 1050, notebook version).
-
To get a better idea of where memory is allocated and where to cut in order to fit this model on your GPU, define TCNN_VERBOSE_MEMORY_ALLOCS. If you are on an older GPU, as I am, you may find that most of the memory footprint is sensitive to the multi-resolution hash encoding hyperparameters, which affect memory usage exponentially; I believe memory grows O(n^3) with some of those hyperparameters.
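To make that sensitivity concrete, here is a rough back-of-the-envelope sketch (an estimate, not instant-ngp's actual allocator) of the hash-grid parameter count, using the values the testbed prints in the log further down (Nmin=16, b=1.66248, F=2, T=2^19, L=16):

```python
# Approximate parameter count of the multi-resolution hash encoding.
# Coarse levels fit in a dense grid; fine levels are clamped to the hash table size T.
def hashgrid_params(n_min=16, b=1.66248, n_features=2, table_size=2**19, n_levels=16, dims=3):
    total = 0
    for level in range(n_levels):
        res = int(n_min * b ** level)        # per-axis resolution at this level
        dense_entries = (res + 1) ** dims    # entries a dense grid would need
        total += min(dense_entries, table_size) * n_features
    return total

print(hashgrid_params())                   # ~13.6M, close to total_encoding_params in the log below
print(hashgrid_params(table_size=2**15))   # a smaller hash table shrinks this sharply
```

Lowering log2_hashmap_size, n_levels, or n_features_per_level in configs/nerf/base.json is the corresponding knob in the config.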
-
8 images of size 216x384, with aabb_scale modded to 1 and leventt's tweaks: VRAM usage was at 3.9 GB and the result looked unrecognizable. Non-RTX GPUs are out of luck.
-
Hello, I am trying to run the testbed on bunny.obj using SDF and keep getting this error.
-
I am running an RTX 3080, set it to 1, and still nothing. To be fair, the images are huge... but I'm not sure how to process such images in the future if it crashes. Can we not process large photos, or many of them? Seems like a big limitation... hmm.
-
INFO Loading NeRF dataset from
INFO data\nerf\fox\transforms.json
SUCCESS Loaded 8 images of size 216x384 after 0s
INFO cam_aabb=[min=[1.13457,0.406441,0.398352], max=[1.84657,1.63633,0.546498]]
INFO Loading network config from: configs\nerf\base.json
INFO GridEncoding: Nmin=16 b=1.66248 F=2 T=2^19 L=16
Warning: FullyFusedMLP is not supported for the selected architecture 52. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
Warning: FullyFusedMLP is not supported for the selected architecture 52. Falling back to CutlassMLP. For maximum performance, raise the target GPU architecture to 75+.
INFO Density model: 3--[HashGrid]-->32--[FullyFusedMLP(neurons=64,layers=3)]-->1
INFO Color model: 3--[Composite]-->16+16--[FullyFusedMLP(neurons=64,layers=4)]-->3
INFO total_encoding_params=13623184 total_network_params=9728
ERROR Uncaught exception: E:\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/gpu_memory.h:531 cuMemSetAccess(m_base_address + m_size, n_bytes_to_allocate, &access_desc, 1) failed with error CUDA_ERROR_OUT_OF_MEMORY
Could not free memory: E:\instant-ngp\dependencies\tiny-cuda-nn\include\tiny-cuda-nn/gpu_memory.h:452 cuMemAddressFree(m_base_address, m_max_size) failed with error CUDA_ERROR_INVALID_VALUE
I reduced the number of images as well as their resolution and created a new JSON, yet I still get the memory error. Tried the --width / --height trick as well. I'm on a GTX 970 and I allowed testbed, python, and the related executables with Optimize CUDA in the NVIDIA Control Panel. CUDA 11.6 + OptiX 7.3 (no issues with building) + Python 3.8 + Windows 11.
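In case it helps, a minimal sketch of the resolution reduction described above (paths and the scale factor are placeholders; requires Pillow), to run on the source photos before regenerating transforms.json with scripts/colmap2nerf.py:

```python
# Downscale every source photo so the dataset needs less GPU memory.
# src/dst paths and the 0.25 factor are assumptions; adapt them to your scene.
from pathlib import Path
from PIL import Image

src = Path("data/nerf/myscene/images")
dst = Path("data/nerf/myscene/images_small")
dst.mkdir(parents=True, exist_ok=True)

scale = 0.25
for img_path in sorted(src.glob("*.jpg")):
    img = Image.open(img_path)
    small = img.resize((int(img.width * scale), int(img.height * scale)), Image.LANCZOS)
    small.save(dst / img_path.name)
    print("wrote", dst / img_path.name)
```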