Currently the trainer simply saves all params to the database, assuming it is the only process.
This makes running several parallel training processes useless (you effectively get a kind of bootstrapped DQN at a higher price).
What is needed here:
- whether to also LOAD params every `save_period` to synchronize with the other trainers;
- a coefficient by which to change params on the server (default 1), passed to `save_all_params`;
- a flag here that allows partially updating params on the server (default 1; warn if > 1). If it is != 1, also make the trainer load weights from the server (previous point); see the sketch after this list.
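A minimal sketch of what such a partial (soft) update could look like, assuming params are stored as a dict of name -> numpy array; the `soft_update` name and signature are made up for illustration, not existing API:

```python
import warnings
import numpy as np

def soft_update(server_params, local_params, alpha=1.0):
    """Blend local param values into the server copy.

    alpha=1.0 reproduces the current behaviour (overwrite everything);
    alpha<1.0 only moves the server params part of the way towards the
    local ones, which is what the partial-update coefficient would do.
    All names here are hypothetical.
    """
    if alpha > 1.0:
        warnings.warn("alpha > 1 extrapolates past the local params")
    return {
        name: (1.0 - alpha) * server_value + alpha * np.asarray(local_params[name])
        for name, server_value in server_params.items()
    }
```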
It may also be wise to avoid locks in case someone wants this to work in 100500 processes, or at least to measure the time lost waiting on locks and make sure it is small; one way to measure it is sketched below.
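One way to measure the lock overhead, assuming database access is guarded by some lock-like object with `acquire()`/`release()` (all names below are hypothetical):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed_lock(lock, wait_times):
    """Acquire `lock`, recording how long we waited for it in `wait_times`."""
    t0 = time.time()
    lock.acquire()                        # this is where time is lost
    wait_times.append(time.time() - t0)   # seconds spent waiting
    try:
        yield
    finally:
        lock.release()
```

Wrapping each save in `with timed_lock(db_lock, wait_times): ...` and comparing `sum(wait_times)` against total training time should tell whether locking is cheap enough.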
It would be super nice if you first created an implementation with max readability / min lines of code.
There are, however, techniques that allow parallel updates with periodic synchronization, e.g. the parameter server approach: https://www.cs.cmu.edu/~muli/file/parameter_server_osdi14.pdf. It may or may not be used here. The goal is to allow such parallelism with a minimum number of lines of code.
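For reference, a periodic-synchronization training loop in that spirit could look roughly like this (a sketch only: `train_step`, `get_params`, `set_params`, `load_all_params`, and the `db` object are hypothetical, and it reuses the `soft_update` sketch from above; only `save_all_params` and `save_period` come from the issue itself):

```python
def train_forever(agent, db, save_period=100, alpha=0.5):
    """Hypothetical trainer loop with periodic push/pull synchronization."""
    step = 0
    while True:
        agent.train_step()                    # one local training update
        step += 1
        if step % save_period == 0:
            server = db.load_all_params()     # pull the current server copy
            merged = soft_update(server, agent.get_params(), alpha=alpha)
            db.save_all_params(merged)        # push the blended params back
            agent.set_params(merged)          # keep the local copy in sync
```

With several such processes running, each one only nudges the shared params by `alpha` instead of overwriting them, so the trainers gradually agree rather than clobbering each other.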