Replies: 1 comment 2 replies
-
Hi @famora2. It is possible to use any environment to retrain the policy in the real world. The only constraint is that the environment follows one of the supported environment interfaces: OpenAI Gym, DeepMind, or Isaac Gym. In addition, it is recommended that the real-world environment produce, as far as possible, the same states/observations and rewards as the simulated one. It is also a good idea to store all the models involved in the training in simulation (not only the policy); then you can load the checkpoints and continue the training in the real world. Let me know any comments and we will continue the discussion :)
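To illustrate the constraint above, here is a minimal sketch of a real-world environment that exposes the Gym-style `reset`/`step` interface, with the observation and reward layout intended to mirror the simulated environment. The hardware hooks (`_read_robot_state`, `_send_robot_command`, `_compute_reward`) are hypothetical placeholders, not part of any real API:

```python
# Minimal sketch of a real-world environment following the Gym-style
# interface. The hardware-specific methods below are placeholders that
# would be replaced with actual sensor reads and robot commands.

class RealWorldEnv:
    """Environment whose observation and reward structure should match,
    as closely as possible, the simulated environment used for the
    initial training, so the policy can be retrained without changes."""

    def __init__(self, num_observations=4, num_actions=2, max_steps=200):
        self.num_observations = num_observations
        self.num_actions = num_actions
        self.max_steps = max_steps
        self._step_count = 0

    def _read_robot_state(self):
        # Placeholder: query the robot's sensors/driver here.
        return [0.0] * self.num_observations

    def _send_robot_command(self, action):
        # Placeholder: forward the action to the real robot here.
        pass

    def _compute_reward(self, obs, action):
        # Placeholder: reproduce the simulated reward function as
        # faithfully as possible to minimize the sim2real gap.
        return 0.0

    def reset(self):
        self._step_count = 0
        return self._read_robot_state()

    def step(self, action):
        self._send_robot_command(action)
        self._step_count += 1
        obs = self._read_robot_state()
        reward = self._compute_reward(obs, action)
        done = self._step_count >= self.max_steps
        return obs, reward, done, {}
```

An instance of this class could then be wrapped for skrl and passed to a trainer together with an agent whose checkpoints were loaded from the simulation run; the key point is only that `reset` and `step` keep the same observation and reward semantics as in simulation.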
-
Hello,
I have seen your script "environment.py" in the discussion below, which gives a rough baseline for evaluating the trained policy on a real-world setup. I would like to ask whether there is a way to extend this script so that the trained policy can be retrained on the real-world setup in order to minimize the existing sim2real gap.
Discussed in #10
Originally posted by AntonBock May 2, 2022
Hello,
We have trained a policy that we would like to test on a real-world setup. Does SKRL have any built-in support for this, or do you have any recommended method of doing this?
-Anton