AttributeError: 'tuple' object has no attribute 'to' #27

Open

Yisikanadai opened this issue Dec 21, 2024 · 2 comments

Comments

Yisikanadai commented Dec 21, 2024

/home/ubuntu417/anaconda3/envs/go2_rl/bin/python unitree_rl_gym-main/legged_gym/scripts/train.py --task=go2
Importing module 'gym_38' (/home/ubuntu417/unitree_go2/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_38.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/ubuntu417/unitree_go2/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 1.13.0+cu116
Using /home/ubuntu417/.cache/torch_extensions/py38_cu116 as PyTorch extensions root...
Emitting ninja build file /home/ubuntu417/.cache/torch_extensions/py38_cu116/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
Loading extension module gymtorch...
Device count 1
/home/ubuntu417/unitree_go2/isaacgym/python/isaacgym/_bindings/src/gymtorch
ninja: no work to do.
Setting seed: 1
Not connected to PVD
+++ Using GPU PhysX
Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
Traceback (most recent call last):
  File "unitree_rl_gym-main/legged_gym/scripts/train.py", line 18, in <module>
    train(args)
  File "unitree_rl_gym-main/legged_gym/scripts/train.py", line 14, in train
    ppo_runner.learn(num_learning_iterations=train_cfg.runner.max_iterations, init_at_random_ep_len=True)
  File "/home/ubuntu417/unitree_go2/rsl_rl-1.0.2/rsl_rl/runners/on_policy_runner.py", line 94, in learn
    obs = obs.to(self.device)
AttributeError: 'tuple' object has no attribute 'to'

I printed the obs:

(tensor([ 0.3188, -0.7567, 0.4211, 0.2648, -0.1023, 0.1416, 0.0095, -0.0221,
-1.0391, 1.7729, -1.4516, -0.2500, 0.0288, -0.0544, -0.2970, -0.0498,
0.1930, -0.0523, 0.0227, 0.2263, -0.0120, 0.0081, -0.2732, 0.0702,
0.0331, -0.0871, 0.8401, 0.0096, -0.1943, 0.2382, -0.0160, -0.2962,
0.2006, 0.0689, 0.3692, -0.2438, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
device='cuda:0'), tensor([ 0.5073, 0.1631, -0.3483, -0.1181, -0.0402, 0.0271, -0.0341, -0.0391,
-0.9990, 0.4094, -1.3506, 0.1190, 0.0476, -0.1898, -0.2205, -0.0270,
0.1521, -0.2034, -0.0153, 0.2692, 0.3511, -0.0042, -0.0101, -0.0339,
-0.0519, 0.1259, 0.5441, -0.0084, -0.2618, 0.6837, 0.0273, -0.0903,
-0.6574, 0.0496, 0.0299, 0.1390, 0.0000, 0.0000, 0.0000, 0.0000,
0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
device='cuda:0'))
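
For context: .to() is a torch.Tensor method, so calling it on a tuple fails exactly this way. A minimal standalone sketch (not the project's code) of the failure, and of moving each element individually:

import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# A tuple of two 48-dim tensors, mirroring the printout above
obs = (torch.randn(48), torch.randn(48))

# obs.to(device)  # AttributeError: 'tuple' object has no attribute 'to'

# Tuples have no .to(); each tensor has to be moved on its own
obs = tuple(o.to(device) for o in obs)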

craipy-hub (Collaborator) commented

Check the shape of the network input, and check whether the network type is LSTM or MLP.

Yisikanadai (Author) commented Dec 21, 2024

> Check the shape of the network input, and check whether the network type is LSTM or MLP.

Thanks for the reply!

The network type is MLP. Here is the network:

Actor MLP: Sequential(
  (0): Linear(in_features=48, out_features=512, bias=True)
  (1): ELU(alpha=1.0)
  (2): Linear(in_features=512, out_features=256, bias=True)
  (3): ELU(alpha=1.0)
  (4): Linear(in_features=256, out_features=128, bias=True)
  (5): ELU(alpha=1.0)
  (6): Linear(in_features=128, out_features=12, bias=True)
)
Critic MLP: Sequential(
  (0): Linear(in_features=48, out_features=512, bias=True)
  (1): ELU(alpha=1.0)
  (2): Linear(in_features=512, out_features=256, bias=True)
  (3): ELU(alpha=1.0)
  (4): Linear(in_features=256, out_features=128, bias=True)
  (5): ELU(alpha=1.0)
  (6): Linear(in_features=128, out_features=1, bias=True)
)

Here is the network input:

obs[0]shape: torch.Size([48]) ,obs[1]shape: torch.Size([48])
critic_obs[0]shape: torch.Size([48]) ,critic_obs[1]shape: torch.Size([48])
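
One thing worth noting: nn.Linear accepts any leading batch shape, including none at all, so an unbatched [48] observation passes through the MLP without error and silently yields a [12] action instead of a [4096, 12] batch; the failure only surfaces later. A quick sketch (a truncated stand-in for the actor above, not the real network):

import torch
import torch.nn as nn

# Truncated stand-in for the Actor MLP above (48 -> ... -> 12)
actor = nn.Sequential(nn.Linear(48, 512), nn.ELU(), nn.Linear(512, 12))

print(actor(torch.randn(4096, 48)).shape)  # torch.Size([4096, 12]) -- batched input
print(actor(torch.randn(48)).shape)        # torch.Size([12])       -- unbatched, still no error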

The learn code is here:

    def learn(self, num_learning_iterations, init_at_random_ep_len=False):
        # initialize writer
        if self.log_dir is not None and self.writer is None:
            self.writer = SummaryWriter(log_dir=self.log_dir, flush_secs=10)
        if init_at_random_ep_len:
            self.env.episode_length_buf = torch.randint_like(self.env.episode_length_buf, high=int(self.env.max_episode_length))
        obs = self.env.get_observations()
        privileged_obs = self.env.get_privileged_observations()
        critic_obs = privileged_obs if privileged_obs is not None else obs
        print("obs[0]shape:",obs[0].shape,",obs[1]shape:",obs[1].shape)
        print("critic_obs[0]shape:",obs[0].shape,",critic_obs[1]shape:",obs[1].shape)
        obs = obs[0].to(self.device)      ## change
        critic_obs = critic_obs[0].to(self.device)         ## change
        self.alg.actor_critic.train() # switch to train mode (for dropout for example)

        ep_infos = []
        rewbuffer = deque(maxlen=100)
        lenbuffer = deque(maxlen=100)
        cur_reward_sum = torch.zeros(self.env.num_envs, dtype=torch.float, device=self.device)
        cur_episode_length = torch.zeros(self.env.num_envs, dtype=torch.float, device=self.device)

        tot_iter = self.current_learning_iteration + num_learning_iterations
        for it in range(self.current_learning_iteration, tot_iter):
            start = time.time()
            # Rollout
            with torch.inference_mode():
                for i in range(self.num_steps_per_env):
                    actions = self.alg.act(obs, critic_obs)
                    obs, privileged_obs, rewards, dones, infos = self.env.step(actions)
                    critic_obs = privileged_obs if privileged_obs is not None else obs
                    obs, critic_obs, rewards, dones = obs.to(self.device), critic_obs.to(self.device), rewards.to(self.device), dones.to(self.device)
                    self.alg.process_env_step(rewards, dones, infos)
.......
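
For comparison, a defensive transfer that tolerates both a bare tensor and a tuple of tensors could look like this (a hypothetical helper, not part of rsl_rl):

import torch

def to_device(x, device):
    # Hypothetical helper: move a Tensor, or every element of a tuple, to device
    if isinstance(x, torch.Tensor):
        return x.to(device)
    return tuple(to_device(t, device) for t in x)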

I tried changing obs = obs.to(self.device) to obs = obs[0].to(self.device),
but got a new error:

Traceback (most recent call last):
  File "legged_gym/scripts/train.py", line 18, in <module>
    train(args)
  File "legged_gym/scripts/train.py", line 14, in train
    ppo_runner.learn(num_learning_iterations=train_cfg.runner.max_iterations, init_at_random_ep_len=True)
  File "/home/ubuntu417/unitree_go2/rsl_rl-1.0.2/rsl_rl/runners/on_policy_runner.py", line 114, in learn
    obs, privileged_obs, rewards, dones, infos = self.env.step(actions)
  File "/home/ubuntu417/unitree_go2/unitree_rl_gym-main/legged_gym/envs/base/legged_robot.py", line 74, in step
    self.post_physics_step()
  File "/home/ubuntu417/unitree_go2/unitree_rl_gym-main/legged_gym/envs/base/legged_robot.py", line 110, in post_physics_step
    self.compute_observations() # in some cases a simulation step might be required to refresh some obs (for example body positions)
  File "/home/ubuntu417/unitree_go2/unitree_rl_gym-main/legged_gym/envs/base/legged_robot.py", line 182, in compute_observations
    self.obs_buf = torch.cat((  self.base_lin_vel * self.obs_scales.lin_vel,
RuntimeError: Tensors must have same number of dimensions: got 2 and 1

I found the code in question here:

    def compute_observations(self):
        """ Computes observations
        """
        print((self.base_lin_vel * self.obs_scales.lin_vel).shape)
        print((self.base_ang_vel  * self.obs_scales.ang_vel).shape)
        print((self.commands[:, :3] * self.commands_scale).shape)
        print(((self.dof_pos - self.default_dof_pos) * self.obs_scales.dof_pos).shape)
        print((self.dof_vel * self.obs_scales.dof_vel).shape)
        print((self.actions).shape)
        self.obs_buf = torch.cat((  self.base_lin_vel * self.obs_scales.lin_vel,
                                    self.base_ang_vel  * self.obs_scales.ang_vel,
                                    self.projected_gravity,
                                    self.commands[:, :3] * self.commands_scale,
                                    (self.dof_pos - self.default_dof_pos) * self.obs_scales.dof_pos,
                                    self.dof_vel * self.obs_scales.dof_vel,
                                    self.actions
                                    ),dim=-1)
        # add perceptive inputs if not blind
        # add noise if needed
        if self.add_noise:
            self.obs_buf += (2 * torch.rand_like(self.obs_buf) - 1) * self.noise_scale_vec

The output is:

torch.Size([4096, 3])
torch.Size([4096, 3])
torch.Size([4096, 3])
torch.Size([4096, 12])
torch.Size([4096, 12])
torch.Size([12])
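
These six shapes correspond to six of the seven concatenated tensors (self.projected_gravity is not printed), and the last one shows the problem: self.actions is 1-D ([12]) instead of 2-D ([4096, 12]), most likely because the unbatched [48] observation produced an unbatched action. torch.cat along dim=-1 requires every tensor to have the same number of dimensions, which reproduces the error exactly (a standalone sketch, with shapes taken from the printout above):

import torch

base_lin_vel = torch.zeros(4096, 3)
actions = torch.zeros(12)  # 1-D, as in the printout; should be (4096, 12)

# torch.cat((base_lin_vel, actions), dim=-1)
# RuntimeError: Tensors must have same number of dimensions: got 2 and 1

# Keeping the observation batched upstream avoids this; purely as a shape
# check, a 1-D action can be broadcast back to a batch:
actions = actions.unsqueeze(0).expand(4096, -1)  # -> torch.Size([4096, 12])
obs_buf = torch.cat((base_lin_vel, actions), dim=-1)
print(obs_buf.shape)  # torch.Size([4096, 15])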
