
PhyDRL-Locomotion (A Physics Deep Reinforcement Learning Framework for Quadruped Robot Locomotion)

This repository implements a physics-regulated Deep Reinforcement Learning framework that improves stability and provides safety-critical guarantees for quadruped robots during walking. It includes:

  • Fairly precise locomotion controller interfaces that run both in simulation and on the real robot, designed for convenient code reuse.
  • A policy training pipeline in simulation using PhyDRL, and its sim-to-real deployment on the Unitree A1 quadruped robot.

Code Structure

This version is adapted from the codebase of fast_and_efficient, with a modularized structure. All configuration settings live under the config folder, and the parameters are spawned hierarchically for each instance:

├── agents
│      ├── cmaes                                      <- CMA-ES framework
│      └── phydrl                                     <- PhyDRL framework
│           ├── ...
│           ├── networks                              <- Taylor neural network & DDPG
│           ├── replay_buffers                        <- Replay memory
│           ├── trainer                               <- Training environments
│           ├── utils                                 <- Utilities (logger/add training noise)
│           └── a1_ddpg.py                            <- Training framework
├── config                                            
│      ├── phydrl
│           ├── ddpg_params.py                        <- DDPG parameters
│           ├── logger_params.py                      <- Logger parameters
│           ├── taylor_params.py                      <- Taylor neural network parameters
│           └── trainer_params.py                     <- PhyDRL trainer parameters
│      ├── locomotion
│           ├── controllers                           <- Low-level controller parameters
│           ├── gait_generator                        <- Gait generator parameters
│           └── robots                                <- Robot model and motor parameters
│      ├── a1_real_params.py                          <- Config for real A1 robot
│      ├── a1_sim_params.py                           <- Config for simulated A1 robot
│      └── a1_trainer_params.py                       <- Config for A1 robot training
├── examples                                
│      ├── a1_exercise_example.py                     <- Robot performs a sinusoidal motion
│      ├── a1_sim_to_real_example.py                  <- Robot sim-to-real (for testing)
│      ├── a1_mpc_controller_example.py               <- Running MPC controller in simulator/real plant
│      ├── main_drl.py                                <- Training A1 with PhyDRL
│      └── main_mpc.py                                <- Testing trained PhyDRL policy
├── locomotion
│      ├── gait_generator                             
│           ├── gait_generator.py                     <- An abstract class
│           └── offset_gait_generator.py              <- Actual gait generator
│      ├── low_level_controllers                      
│           ├── mpc_osqp.cc                           <- OSQP library for stance state controller
│           ├── qp_torque_optimizer.py                <- QP solver for stance acceleration controller
│           ├── stance_leg_controller_mpc.py          <- Stance controller (objective func -> state)
│           ├── stance_leg_controller_quadprog.py     <- Stance controller (objective func -> acceleration)
│           └── swing_leg_controller.py               <- Swing controller (using Raibert formula)
│      ├── robots
│           ├── ...
│           ├── a1.py                                 <- A1 robot (for simulation)
│           ├── a1_robot.py                           <- A1 robot (for real plant)
│           ├── a1_robot_phydrl.py                    <- A1 robot (for PhyDRL training)
│           ├── motors.py                             <- A1 motor model
│           └── quadruped.py                          <- An abstract base class for all robots
│      ├── state_estimators
│           ├── a1_robot_state_estimator.py           <- State estimator for real A1
│           ├── com_velocity_estimator.py             <- CoM velocity estimator (simulator/real plant)
│           └── moving_window_fillter.py              <- A filter used in CoM velocity estimator
│      └── locomotion_controller.py        
├── ...
├── logs                                              <- Log files for training
├── models                                            <- Trained model saved path
├── sim_envs                                          <- Environments for simulation
├── third_party                                       <- Code by third parties (unitree, qpsolver, etc.)
├── requirements.txt                                  <- Dependencies for the code environment
├── setup.py
└── utils.py                         
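The swing controller listed above is based on the Raibert formula for foot placement. As a rough, self-contained illustration of that heuristic (not the repository's actual implementation; the function name, gain value, and signature here are assumptions), the swing foot is placed ahead of the hip by half the expected stance-phase travel, plus a correction proportional to the velocity-tracking error:

```python
import numpy as np

def raibert_foot_placement(com_velocity, desired_velocity,
                           stance_duration, feedback_gain=0.03):
    """Raibert heuristic sketch: nominal placement covers half the distance
    traveled during stance; the feedback term corrects velocity errors.

    All quantities are 2D (x, y) vectors in the horizontal plane.
    """
    com_velocity = np.asarray(com_velocity, dtype=float)
    desired_velocity = np.asarray(desired_velocity, dtype=float)
    # Nominal placement: half of the distance covered during the stance phase.
    nominal = com_velocity * stance_duration / 2.0
    # Feedback term nudges the foot to correct velocity tracking errors.
    correction = feedback_gain * (com_velocity - desired_velocity)
    return nominal + correction
```

With zero tracking error the correction vanishes and the foot lands at the nominal point; see swing_leg_controller.py for the version actually used.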

Environment Setup

Setup for Local PC

It is recommended to create a separate virtualenv or conda environment to avoid conflicting with existing system packages. The required packages have been tested under Python 3.8.5, though they should be compatible with other Python versions.

Follow the steps below to build the Python environment:

  1. First, install all dependent packages by running:

    pip install -r requirements.txt
  2. Second, install the C++ binding for the convex MPC controller:

    python setup.py install
  3. Lastly, build and install the interface to Unitree's SDK. The Unitree repo keeps releasing new SDK versions. For convenience, we have included the version that we used in third_party/unitree_legged_sdk.

    First, make sure the required packages are installed, following Unitree's guide. Most notably, please make sure to install Boost and LCM:

    sudo apt install libboost-all-dev liblcm-dev

    Then, go to third_party/unitree_legged_sdk and create a build folder:

    cd third_party/unitree_legged_sdk
    mkdir build && cd build

    Now, build the libraries and move them to the main directory by running:

    cmake ..
    make
    mv robot_interface* ../../..

Setup for Real Robot

Follow these steps if you want to run controllers on the real robot:

  1. Set up correct permissions for a non-sudo user (optional)

    Since the Unitree SDK requires memory locking and high process priority, root privileges (via sudo) are usually required to execute commands. To run the SDK without sudo, add the following to /etc/security/limits.d/90-unitree.conf:

    <username> soft memlock unlimited
    <username> hard memlock unlimited
    <username> soft nice eip
    <username> hard nice eip

    Log out and log back in for the above changes to take effect.

  2. Connect to the real robot

    Configure the robot's wireless connection following the manual, and make sure you can ping the robot's low-level controller (IP: 192.168.123.10) from your local PC.

PhyDRL Runtime

PhyDRL adopts a model-based control approach with DDPG as its learning strategy. By combining a residual control framework with a CLF (Control Lyapunov Function)-like reward design, it achieves strong performance and holds promise for addressing safety-critical control challenges.
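As a minimal sketch of these two ideas (the function names and the matrix P here are illustrative assumptions, not the repository's actual code): residual control sums a model-based action with a learned correction, and a CLF-like reward is positive when a quadratic, Lyapunov-style energy V(s) = sᵀPs decreases between consecutive states:

```python
import numpy as np

def combined_action(model_based_action, drl_residual):
    """Residual control sketch: the DRL policy learns only a correction
    on top of a model-based (e.g. MPC) action."""
    return np.asarray(model_based_action) + np.asarray(drl_residual)

def clf_like_reward(state, next_state, P):
    """CLF-like reward sketch: reward the decrease of the Lyapunov-style
    energy V(s) = s^T P s from one state to the next."""
    s, s_next = np.asarray(state), np.asarray(next_state)
    return float(s @ P @ s - s_next @ P @ s_next)
```

A policy maximizing this reward is driven to shrink V(s) along trajectories, which is the stability intuition behind the CLF-like design.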

Training

To train the A1 using PhyDRL, run the following command:

python -m examples.main_drl --gpu --mode=train --id={your phydrl name}

Testing

To test the trained PhyDRL policy, run the following command:

python -m examples.main_drl --gpu --mode=test --id={your phydrl name}

Running Convex MPC Controller

We introduce two kinds of MPC controllers for the stance leg: one optimizes over the state in its objective function, while the other optimizes over acceleration. To switch between the two variants, modify the value of objective_function in the config file.
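The mapping between the two objective choices and the stance controllers in locomotion/low_level_controllers could be sketched as follows (a hypothetical illustration; the actual field name and dispatch logic are defined in the repository's config files):

```python
# Hypothetical sketch of selecting the stance controller variant from the
# objective_function setting; check the config files for the real mechanism.
OBJECTIVE_FUNCTIONS = {
    "state": "stance_leg_controller_mpc",              # objective over state
    "acceleration": "stance_leg_controller_quadprog",  # objective over acceleration
}

def select_stance_controller(objective_function):
    try:
        return OBJECTIVE_FUNCTIONS[objective_function]
    except KeyError:
        raise ValueError(f"unknown objective_function: {objective_function!r}")
```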

In Simulation

The config parameters are loaded from config/a1_sim_params.py. You can also change the running speed in a1_mpc_controller_example.py after instantiating the parameter object. To run it in simulation:

python -m examples.a1_mpc_controller_example --show_gui=True --max_time_secs=10 --use_real_robot=False --world=plane
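Overriding the running speed after instantiating the parameters could look like the following sketch. The dataclass below is an illustrative stand-in for the real parameter object, and the field names are assumptions; check config/a1_sim_params.py for the actual class and fields:

```python
from dataclasses import dataclass

# Illustrative stand-in for the parameter object built in
# a1_mpc_controller_example.py; real field names may differ.
@dataclass
class SimParams:
    desired_vx: float = 0.0  # forward speed, m/s (illustrative field)
    desired_wz: float = 0.0  # yaw rate, rad/s (illustrative field)

params = SimParams()
params.desired_vx = 0.5  # override the running speed after instantiation
```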

In Real A1 Robot

The config parameters are loaded from config/a1_real_params.py. You can also change the running speed in a1_mpc_controller_example.py after instantiating the parameter object. To run it on the real A1 robot:

  1. Start the robot. After it stands up, put it into joint-damping mode by pressing L2+B on the remote controller; the robot should then lie down on the ground.

  2. Connect your local machine to the A1 hotspot (UnitreeRoboticsA1-xxxx) and verify communication with the onboard low-level controller.

  3. Run the following command:

    python -m examples.a1_mpc_controller_example --show_gui=False --max_time_secs=15 --use_real_robot=True 

Troubleshooting

If an onboard motor is damaged for unknown reasons, refer to the instruction manual to replace it.
