0.18.36
Vultr
Cluster placement
The `vultr` backend can now provision fleets with cluster placement.
```yaml
type: fleet
nodes: 4
placement: cluster
resources:
  gpu: MI300X:8
backends: [vultr]
```
Nodes in such a cluster will be interconnected and can be used to run distributed tasks.
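As an illustration, a distributed task can target such a fleet by setting `nodes`. This is a minimal sketch, not from this release: the task name and `train.py` entrypoint are hypothetical placeholders, while `DSTACK_NODES_NUM`, `DSTACK_NODE_RANK`, and `DSTACK_MASTER_NODE_IP` are environment variables dstack injects into each job.

```yaml
type: task
# Hypothetical name and entrypoint, for illustration only
name: train-distrib
# Run one job on each node of the 4-node cluster fleet
nodes: 4
commands:
  - torchrun --nnodes=$DSTACK_NODES_NUM --node_rank=$DSTACK_NODE_RANK --master_addr=$DSTACK_MASTER_NODE_IP --nproc_per_node=8 train.py
resources:
  gpu: MI300X:8
```

Once the fleet is provisioned with `dstack apply`, submitting the task schedules one job per node, with dstack supplying the rank and master-address variables needed to bootstrap the distributed run.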
Performance
The update optimizes the performance of `dstack server`, allowing a single server replica to handle up to 150 active runs, jobs, and instances. Capacity can be further increased by using PostgreSQL and running multiple server replicas.
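As a sketch of that multi-replica setup, each replica can point at the same PostgreSQL database through the `DSTACK_DATABASE_URL` environment variable; the host and credentials below are placeholders.

```shell
# Point the server at a shared PostgreSQL database (placeholder credentials),
# then start a replica; repeat on each replica host.
export DSTACK_DATABASE_URL="postgresql+asyncpg://dstack:password@db-host:5432/dstack"
dstack server
```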
Lastly, fetching instance offers from backends when you run `dstack apply` has also been optimized and now takes less time.
What's changed
- Increase max active resources supported by server by @r4victor in #2189
- Implement bridge network mode for jobs by @un-def in #2191
- [Internal] Fix `python-json-logger` deprecation warning by @jvstme in #2201
- Fix local backend by @r4victor in #2203
- Implement offers cache by @r4victor in #2197
- Add `/api/instances/list` by @jvstme in #2199
- Allow getting by ID in `/api/project/_/fleets/get` by @jvstme in #2200
- Add termination reason and message to the runner API by @r4victor in #2204
- Add vpc cluster support in Vultr by @Bihan in #2196
- Fix `instance_types` not respected for pool instances by @r4victor in #2205
- Delete manually created empty fleets by @r4victor in #2206
- Return repo errors from runner by @r4victor in #2207
- Fix caching offers with GPU requirements by @jvstme in #2210
- Fix filtering idle instances by instance type by @jvstme in #2214
- Add more project URLs on PyPI by @jvstme in #2215
Full changelog: 0.18.35...0.18.36