When I used this environment for stable-baselines training, the following error occurred: numpy.core._exceptions.MemoryError: Unable to allocate 183. GiB for an array with shape (1000000, 1, 3, 256, 256) and data type uint8
I know that directly using camera, lidar, and birdeye images as input requires a lot of memory, so I added a feature extractor to test.py to convert the images into one-dimensional vectors. The code is from the stable-baselines3 documentation and is meant to handle dictionary-type observation input:
import gym
from gym import spaces
import torch as th
from torch import nn

from stable_baselines3.common.torch_layers import BaseFeaturesExtractor


class CustomCombinedExtractor(BaseFeaturesExtractor):
    def __init__(self, observation_space: spaces.Dict):
        # We do not know features-dim here before going over all the items,
        # so put something dummy for now. PyTorch requires calling
        # nn.Module.__init__ before adding modules.
        super().__init__(observation_space, features_dim=1)

        extractors = {}
        total_concat_size = 0
        # We need to know the size of the output of this extractor,
        # so go over all the spaces and compute output feature sizes.
        for key, subspace in observation_space.spaces.items():
            if key == "image":
                # We will just downsample one channel of the image by 4x4 and flatten.
                # Assume the image is single-channel (subspace.shape[0] == 1).
                extractors[key] = nn.Sequential(nn.MaxPool2d(4), nn.Flatten())
                total_concat_size += subspace.shape[1] // 4 * subspace.shape[2] // 4
            elif key == "vector":
                # Run through a simple MLP.
                extractors[key] = nn.Linear(subspace.shape[0], 16)
                total_concat_size += 16

        self.extractors = nn.ModuleDict(extractors)

        # Update the features dim manually.
        self._features_dim = total_concat_size

    def forward(self, observations) -> th.Tensor:
        encoded_tensor_list = []
        # self.extractors contains nn.Modules that do all the processing.
        for key, extractor in self.extractors.items():
            encoded_tensor_list.append(extractor(observations[key]))
        # Return a (B, self._features_dim) PyTorch tensor, where B is the batch dimension.
        return th.cat(encoded_tensor_list, dim=1)
I pass this class via the policy_kwargs of the algorithm, but it still reports the same error: numpy.core._exceptions.MemoryError: Unable to allocate 183. GiB for an array with shape (1000000, 1, 3, 256, 256) and data type uint8.
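A quick back-of-the-envelope check suggests why the extractor does not help here: the reported shape looks like stable-baselines3's off-policy replay buffer, (buffer_size, n_envs, C, H, W), with the default buffer_size of 1,000,000 (this is an assumption based on the shape, not confirmed from the code). The buffer stores raw uint8 observations before the feature extractor ever runs, so the extractor cannot shrink this allocation; reducing buffer_size or the image resolution would.

```python
# Sketch: verify that the failed allocation matches a raw uint8 replay buffer
# of shape (buffer_size, n_envs, C, H, W). These values are read off the
# error message; buffer_size = 1_000_000 is assumed to be the SB3 default.
buffer_size = 1_000_000
n_envs, c, h, w = 1, 3, 256, 256

bytes_needed = buffer_size * n_envs * c * h * w  # uint8 -> 1 byte per element
gib = bytes_needed / 2**30
print(f"{gib:.1f} GiB")  # ~183.1 GiB, matching the error message
```

Since the feature extractor only runs on minibatches sampled from this buffer, the memory cost is fixed at allocation time regardless of policy_kwargs.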