
Walking - the more classic approach #1474

Open
pejotejo opened this issue Oct 22, 2024 · 2 comments
@pejotejo
Contributor

We could try to build a backlash (PID) compensation like B-Human. One could also train the joint angle predictions (maybe with data from the ITL). In addition, one could try to compensate joint angles that are not high enough with other joints.
We should also test our walking on different terrains (soft, hard, angled, uneven, ...).
If we go to the ITL, we could validate our IMU filtering.

@pejotejo pejotejo added this to the Seasongoal 2025 milestone Oct 22, 2024
@pejotejo pejotejo moved this to Open in Development Oct 22, 2024
@Vivituhh Vivituhh self-assigned this Oct 23, 2024
@philipniklas-r

> If we go to the ITL, we could validate our IMU filtering.

If I understand it correctly, here https://github.com/HULKs/hulk/blob/main/crates/control/src/orientation_filter.rs#L92 you calibrate the length of the gravity vector and assume that its direction can be trusted.
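To make the assumption concrete, here is a minimal sketch (not the HULKs implementation; function name and frames are invented for illustration) of what "trusting the direction" means: tilt is read straight off the measured gravity vector, and the vector's length cancels out entirely.

```rust
// Hypothetical sketch: if the accelerometer's *direction* is trusted,
// roll and pitch follow directly from the gravity vector; its length
// (the part being calibrated) cancels out in the ratios.
fn tilt_from_accelerometer(acc: [f32; 3]) -> (f32, f32) {
    let [ax, ay, az] = acc;
    // Roll around x, pitch around y, from the gravity direction alone.
    let roll = ay.atan2(az);
    let pitch = (-ax).atan2((ay * ay + az * az).sqrt());
    (roll, pitch)
}

fn main() {
    // A level robot measures gravity straight along +z.
    let (roll, pitch) = tilt_from_accelerometer([0.0, 0.0, 9.81]);
    assert!(roll.abs() < 1e-6 && pitch.abs() < 1e-6);
}
```

Any fixed misalignment of the sensor axes feeds directly into these angles, which is exactly the error source discussed below.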

I investigated this assumption back in 2021 (https://b-human.de/downloads/publications/2022/WalkStepAdjustment.pdf, chapter 4, "Foot Support Rectangle Calibration") by letting the robot tip itself over to compare the expected support polygon with the measured one. The result is shown below for all our V6 robots at the time.

[Image: expected vs. measured support polygons for all V6 robots]

These errors occur because we assumed the accelerometer values were at least calibrated in their direction. This is not the case, but the angles from the IMU are calibrated.
Therefore we now use the IMU angles to calibrate the accelerometer and gyroscope directions, which in turn fixed our estimation of the robot orientation, and the errors in the graphic disappeared.
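The core of that calibration idea can be sketched as follows. This is not B-Human's actual code; the function names and the averaging scheme are assumptions. The trusted IMU angles predict where gravity should point in the sensor frame, and the angle between that prediction and the raw accelerometer reading is the misalignment to calibrate away.

```rust
// Hypothetical sketch: use the trusted IMU angles as a reference for
// the raw accelerometer direction.
fn expected_gravity(roll: f32, pitch: f32) -> [f32; 3] {
    // Unit gravity direction in the sensor frame for the given
    // trusted roll/pitch angles (ZYX convention).
    [-pitch.sin(), pitch.cos() * roll.sin(), pitch.cos() * roll.cos()]
}

fn misalignment_angle(acc: [f32; 3], roll: f32, pitch: f32) -> f32 {
    let g = expected_gravity(roll, pitch);
    let norm = (acc[0].powi(2) + acc[1].powi(2) + acc[2].powi(2)).sqrt();
    let dot = (acc[0] * g[0] + acc[1] * g[1] + acc[2] * g[2]) / norm;
    // Angle between measured and predicted gravity direction;
    // averaged over many static poses this gives the fixed offset.
    dot.clamp(-1.0, 1.0).acos()
}

fn main() {
    // A well-aligned, level accelerometer shows zero misalignment.
    let angle = misalignment_angle([0.0, 0.0, 9.81], 0.0, 0.0);
    assert!(angle < 1e-3);
}
```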

This change was very helpful for the walk, because previously the heels would be 10 to 20 mm too short and the tip of the toe 10 to 20 mm too long, which made everything a nightmare to parameterize :D

(As a side note, it also benefited the camera calibration.)

> In addition, one could try to compensate joint angles that are not high enough with other joints

I recommend looking into logs to find out why the robots fell over (without posting the numbers directly, I counted the falls per team at https://b-human.informatik.uni-bremen.de/public/Statistics/2024/ 👀 also in color if opened with LibreOffice Calc).
I am still highly confident that your high rotation speed is one large contributor to the falls, judging from the videos. Apart from that, in my experience I often looked into the joint commands in cases where our robots fell, and far too often found bugs or design flaws.

> We should also test our walking on different terrains (soft, hard, angled, uneven, ...)

Thin wooden plates (2-4 mm) from the hardware store (Baumarkt), or field carpets stacked on top of each other, are good tests.

Other helpful references: #411 and https://docs.b-human.de/master/motion/motion-walking/#list-of-design-decisions

@philipniklas-r

> We could try to build a backlash (PID) compensation like B-Human

If you are interested, I could link the files that handle this part in our code. The short summary is that it has three parts:

  • a reset of the requested position at the start of a new walking step (which I am currently rewriting, because our current solution is ugly and hacky 😅. But I can explain our new approach too)
  • handling the joint errors while executing the walk step, primarily by reducing the changes (e.g. the speed) of other joints, and also by reducing the turn step size if necessary (which I can highly recommend)
  • measuring an artificial value between 0 and 1 to evaluate how good the robot is (done automatically while the robot walks). This value is then used to scale walk parameters (e.g. walk speed)
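The third point can be sketched like this. All names, the quadratic penalty on turning, and the `joint_play` input are invented for illustration, not B-Human's actual implementation: a quality value in [0, 1] simply scales the allowed walk parameters down when joint tracking is poor.

```rust
// Hypothetical sketch: scale walk limits by a measured quality value.
struct WalkLimits {
    max_forward: f32, // m/s
    max_turn: f32,    // rad/s
}

fn scaled_limits(base: &WalkLimits, joint_play: f32) -> WalkLimits {
    // joint_play: 0.0 = joints track perfectly, 1.0 = heavy backlash.
    let quality = (1.0 - joint_play).clamp(0.0, 1.0);
    WalkLimits {
        max_forward: base.max_forward * quality,
        // Turn steps suffer most from backlash, so reduce them harder
        // (a design choice for this sketch, not a known B-Human detail).
        max_turn: base.max_turn * quality * quality,
    }
}

fn main() {
    let base = WalkLimits { max_forward: 0.3, max_turn: 1.0 };
    let limited = scaled_limits(&base, 0.5);
    assert!((limited.max_forward - 0.15).abs() < 1e-6);
    assert!((limited.max_turn - 0.25).abs() < 1e-6);
}
```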
