because of network latency, the task model lags when work is divided across too many nodes.
it may make sense to use the service model, in which min-cpu-threads would dictate how many threads the work is parallelized across on a single, presumably hefty, provider.
but the (simple) service model implies multiple work items processed consecutively, so for a single file the task model makes sense. the service model may come into play with archiving... see comments below for plans on optimizing single file compression in the current task model
initial heuristics model:
if min-cpu-threads meets or exceeds the number of divisions, the division count may be reduced, since one provider can then run several (or all) divisions in parallel on its own threads.
suppose divisions is 10 (default) and min-cpu-threads is 24:
then divisions should be adjusted down to 1, as one node covers all 10 divisions.
suppose divisions is 10 (default) and min-cpu-threads is 11:
then divisions should likewise be adjusted down to 1.
suppose divisions is 10 (default) and min-cpu-threads is 8:
then divisions should be adjusted down to 2, since 2 x 8 = 16 is the first multiple of 8 at or above the division count.
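the heuristic above reduces to a single ceiling division. a minimal sketch, assuming a standalone helper (the function name and signature are illustrative, not the project's actual API):

```python
import math


def adjusted_divisions(divisions: int, min_cpu_threads: int) -> int:
    """Collapse the division count so that each provider node, offering at
    least min_cpu_threads threads, runs a full batch of divisions in parallel.

    Hypothetical sketch of the heuristic described above.
    """
    # one node covers all divisions when it has at least that many threads;
    # otherwise use just enough nodes that nodes * min_cpu_threads reaches
    # the first multiple of min_cpu_threads at or above the division count
    return math.ceil(divisions / min_cpu_threads)
```

with the defaults from the examples: `adjusted_divisions(10, 24)` and `adjusted_divisions(10, 11)` both give 1, while `adjusted_divisions(10, 8)` gives 2.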
krunch3r76 changed the title "reinterpret min-cpu-threads in a service model" to "reinterpret min-cpu-threads in task or potentially service model" (Apr 29, 2022)
krunch3r76 changed the title "reinterpret min-cpu-threads in task or potentially service model" to "reinterpret min-cpu-threads in task model" (Apr 29, 2022)
krunch3r76 changed the title "reinterpret min-cpu-threads in task model" to "reinterpret min-cpu-threads in task model for optimal division count" (Apr 29, 2022)
krunch3r76 changed the title "reinterpret min-cpu-threads in task model for optimal division count" to "optimally choose division count by interpreting min-cpu-threads in task model" (Apr 29, 2022)
currently work is divided into 64 MiB blocks, each processed by 1 thread. the thread count should be increased to 2 for files 128 MiB or larger. when this is done, this issue shall be resolved.
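scaling the thread count with file size comes down to counting full 64 MiB blocks, with a floor of one thread. a minimal sketch under that assumption (the helper name is illustrative, not the project's code):

```python
BLOCK_SIZE = 64 * 2**20  # 64 MiB handled per thread


def threads_for(file_size: int) -> int:
    """Return one thread per full 64 MiB block of the file, minimum one.

    Illustrative sketch: a 128 MiB file gets 2 threads, anything
    under 128 MiB stays at 1.
    """
    return max(1, file_size // BLOCK_SIZE)
```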