
[Feature]: Simple Data Parallelism in vLLM #9206

Open
simon-mo opened this issue Oct 9, 2024 · 7 comments

@simon-mo
Collaborator

simon-mo commented Oct 9, 2024

🚀 The feature, motivation and pitch

It is common for folks to want to deploy multiple vLLM instances on a single machine because the machine has several GPUs (commonly 8). The work can then be sharded across replicated instances. This issue describes the intended UX for such a feature. Notably, we might not want to tackle large distributed settings (100s of parallel vLLM instances), which are better handled by higher layers.

  • Offline use case: for the LLM class, add a new argument data_parallel_size and support dispatching requests to one engine per GPU (or per tensor-parallel group); see the sketch after the example below.
from vllm import LLM

llm = LLM(model="...", data_parallel_size=X) # spawn X engine processes and shard the work among them
llm = LLM(model="...", data_parallel_size=X, tensor_parallel_size=Y) # supported as long as X*Y <= total number of GPUs

For the server, the same argument would route requests to different engine processes. We can start with simple round-robin load balancing; a good stretch goal is session affinity or prefix-aware routing. A client-side sketch of the round-robin idea follows the command below.

vllm serve ... --data-parallel-size X
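
Until such a flag exists, here is a rough client-side illustration of the round-robin idea (assumptions: two independent vllm serve instances are already listening on ports 8000 and 8001 and expose the OpenAI-compatible /v1/completions endpoint; the complete helper is hypothetical):

# Client-side round-robin across independently launched vLLM servers.
# The proposed --data-parallel-size flag would do this routing inside vLLM itself.
import itertools
import requests

BACKENDS = itertools.cycle(["http://localhost:8000", "http://localhost:8001"])

def complete(prompt: str, model: str) -> str:
    backend = next(BACKENDS)  # pick the next engine in round-robin order
    resp = requests.post(
        f"{backend}/v1/completions",
        json={"model": model, "prompt": prompt, "max_tokens": 64},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

Session affinity or prefix-aware routing would replace next(BACKENDS) with a choice keyed on the session or prompt prefix.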

Alternatives

  • LiteLLM + manually creating replicas
  • Using ray.data or ray serve to scale out

Additional context

No response

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@archthegit

Hi, I'll be working on this issue

@BKitor
Contributor

BKitor commented Oct 9, 2024

I like this idea, providing flexibility around deployment is great!
Being able to combine multiple types of parallelism would be ideal, e.g. -pp 2 -tp 4 -dp 2 across 16 GPUs would be beneficial.
It would also be good to have fine-grained control over process-to-device topology mapping, but that might be a stretch goal for a different ticket...

@andoorve
Collaborator

Related issue: #9198

@Imss27
Contributor

Imss27 commented Oct 10, 2024

Previously, I encountered an issue: when running vLLM (an older version) on a machine with multiple GPUs (a CI environment) without specifying CUDA_VISIBLE_DEVICES, it kept allocating memory on a single GPU, which then caused an OOM error. This might be related?

@archthegit and I chatted a little and plan to investigate and potentially work together on this. 😄

@noooop
Contributor

noooop commented Oct 10, 2024

Specifying CUDA_VISIBLE_DEVICES before starting vLLM lets multiple vLLM instances started on one machine use different cards. But multiple vLLM instances may still have port conflicts, so the best way to run multiple vLLM instances on a single machine is to use Docker.
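
For example (illustrative only; the model name and port numbers are placeholders), pinning each instance to its own GPU and port avoids both problems:

CUDA_VISIBLE_DEVICES=0 vllm serve facebook/opt-125m --port 8000
CUDA_VISIBLE_DEVICES=1 vllm serve facebook/opt-125m --port 8001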

The current implementation of pp/tp is coupled with the model layers, so it is very difficult to implement dp. If you don't use pp/tp, you can patch GroupCoordinator to implement dp, but without asynchronous scheduling there is little point in implementing dp. Therefore, asynchronous scheduling should be implemented before dp.

If you have many GPUs doing decoder-only model inference, a disaggregated prefill/decoding architecture has higher throughput than dp; refer to DistServe and Mooncake.

@noooop
Contributor

noooop commented Oct 10, 2024

#8452

A prototype: first implement asynchronous scheduling, then implement dp by patching GroupCoordinator.

@tensorflowt

#8452

A prototype: first implement asynchronous scheduling, then implement dp by patching GroupCoordinator.

What does patching GroupCoordinator mean? Can I verify the performance of dp through benchmark_throughput.py?
