Deployed 41d578f to master with MkDocs 1.6.0 and mike 2.1.1
github-actions[bot] committed May 17, 2024
1 parent 17f1fc9 commit 0eac541
Showing 5 changed files with 188 additions and 179 deletions.
17 changes: 13 additions & 4 deletions master/modelserving/v1beta1/llm/huggingface/index.html
@@ -1179,7 +1179,7 @@ <h1 id="deploy-the-llama2-model-with-hugging-face-llm-serving-runtime">Deploy the Llama2 model with Hugging Face LLM Serving Runtime<a class="headerlink" href="#deploy-the-llama2-model-with-hugging-face-llm-serving-runtime" title="Permanent link"></a></h1>
<p>In this example, we deploy a Llama2 model from Hugging Face by running an <code>InferenceService</code> with the <a href="https://github.com/kserve/kserve/tree/master/python/huggingfaceserver">Hugging Face Serving runtime</a>. Given the performance requirements of large language models, KServe performs inference with a more optimized engine such as <a href="https://github.com/vllm-project/vllm">vLLM</a> for text generation models.</p>
<h3 id="serve-the-hugging-face-llm-model-using-vllm">Serve the Hugging Face LLM model using vLLM<a class="headerlink" href="#serve-the-hugging-face-llm-model-using-vllm" title="Permanent link"></a></h3>
<p>KServe Hugging Face runtime by default uses vLLM to serve LLM models: it delivers faster inference and higher throughput than the Hugging Face API, using paged attention, continuous batching, and optimized CUDA kernels.
You can still pass <code>--backend=huggingface</code> in the container args to fall back to performing the inference with the Hugging Face API.</p>
<div class="tabbed-set tabbed-alternate" data-tabs="1:1"><input checked="checked" id="__tabbed_1_1" name="__tabbed_1" type="radio"><div class="tabbed-labels"><label for="__tabbed_1_1">Yaml</label></div>
<div class="tabbed-content">
<div class="tabbed-block">
@@ -1223,13 +1223,22 @@ <h3 id="perform-model-inference">Perform Model Inference<a class="headerlink" href="#perform-model-inference" title="Permanent link"></a></h3>
</div>
<div class="no-copy highlight"><pre><span></span><code><span class="w"> </span><span class="o">{</span><span class="s2">"predictions"</span>:<span class="o">[</span><span class="s2">"Where is Eiffel Tower?\nEiffel Tower is located in Paris, France. It is one of the most iconic landmarks in the world and stands at 324 meters (1,063 feet) tall. The tower was built for the 1889 World's Fair in Paris and was designed by Gustave Eiffel. It is made of iron and has four pillars that support the tower. The Eiffel Tower is a popular tourist destination and offers stunning views of the city of Paris."</span><span class="o">]}</span>
</code></pre></div>
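The prediction call above can also be issued from Python. A minimal sketch using only the standard library; the host, port, hostname, and model name are placeholder assumptions mirroring the curl examples, and the <code>instances</code> payload shape is an illustration of the KServe v1 protocol rather than the exact request collapsed out of this diff:

```python
import json
import urllib.request

def build_v1_predict_request(host, port, service_hostname, model_name, instances):
    """Build a urllib Request for the KServe v1 predict endpoint."""
    url = f"http://{host}:{port}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "Host": service_hostname},
        method="POST",
    )

# Placeholder values; substitute your INGRESS_HOST, INGRESS_PORT,
# SERVICE_HOSTNAME and MODEL_NAME.
req = build_v1_predict_request(
    "192.168.1.10", 80, "huggingface-llama2.default.example.com",
    "llama2", ["Where is Eiffel Tower?"],
)
# urllib.request.urlopen(req) would send it against a live cluster
print(req.full_url)  # -> http://192.168.1.10:80/v1/models/llama2:predict
```

Setting the <code>Host</code> header explicitly plays the same role as curl's <code>-H "Host: ..."</code> when routing through the ingress gateway.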
<p>KServe Hugging Face vLLM runtime supports the <a href="https://github.com/kserve/open-inference-protocol/blob/main/specification/protocol/generate_rest.yaml">/generate</a> endpoint schema for the text generation endpoint.</p>
<div class="highlight"><pre><span></span><code>curl<span class="w"> </span>-H<span class="w"> </span><span class="s2">"content-type:application/json"</span><span class="w"> </span>-H<span class="w"> </span><span class="s2">"Host: </span><span class="si">${</span><span class="nv">SERVICE_HOSTNAME</span><span class="si">}</span><span class="s2">"</span><span class="w"> </span>-v<span class="w"> </span>http://<span class="si">${</span><span class="nv">INGRESS_HOST</span><span class="si">}</span>:<span class="si">${</span><span class="nv">INGRESS_PORT</span><span class="si">}</span>/v2/models/<span class="si">${</span><span class="nv">MODEL_NAME</span><span class="si">}</span>/generate<span class="w"> </span>-d<span class="w"> </span><span class="s1">'{"text_input": "The capital of france is [MASK]." }'</span>
</code></pre></div>
<div class="admonition success">
<p class="admonition-title">Expected Output</p>
</div>
<div class="no-copy highlight"><pre><span></span><code><span class="w"> </span><span class="o">{</span><span class="s2">"text_output"</span>:<span class="s2">"Where is Eiffel Tower?\nThe Eiffel Tower is located in the 7th arrondissement of Paris, France. It stands on the Champ de Mars, a large public park next to the Seine River. The tower's exact address is:\n\n2 Rue du Champ de Mars, 75007 Paris, France."</span>,<span class="s2">"model_name"</span>:<span class="s2">"llama2"</span>,<span class="s2">"model_version"</span>:null,<span class="s2">"details"</span>:null<span class="o">}</span>
</code></pre></div>
<p>KServe Hugging Face vLLM runtime also supports the OpenAI <code>/v1/completions</code> and <code>/v1/chat/completions</code> endpoints for inference.</p>
<p>Sample OpenAI Completions request:</p>
<div class="highlight"><pre><span></span><code>curl<span class="w"> </span>-H<span class="w"> </span><span class="s2">"content-type:application/json"</span><span class="w"> </span>-H<span class="w"> </span><span class="s2">"Host: </span><span class="si">${</span><span class="nv">SERVICE_HOSTNAME</span><span class="si">}</span><span class="s2">"</span><span class="w"> </span>-v<span class="w"> </span>http://<span class="si">${</span><span class="nv">INGRESS_HOST</span><span class="si">}</span>:<span class="si">${</span><span class="nv">INGRESS_PORT</span><span class="si">}</span>/openai/v1/completions<span class="w"> </span>-d<span class="w"> </span><span class="s1">'{"model": "${MODEL_NAME}", "prompt": "&lt;prompt&gt;", "stream":false, "max_tokens": 30 }'</span>
</code></pre></div>
<div class="admonition success">
<p class="admonition-title">Expected Output</p>
</div>
<div class="no-copy highlight"><pre><span></span><code><span class="w"> </span><span class="o">{</span><span class="s2">"id"</span>:<span class="s2">"cmpl-7c654258ab4d4f18b31f47b553439d96"</span>,<span class="s2">"choices"</span>:<span class="o">[{</span><span class="s2">"finish_reason"</span>:<span class="s2">"length"</span>,<span class="s2">"index"</span>:0,<span class="s2">"logprobs"</span>:null,<span class="s2">"text"</span>:<span class="s2">"&lt;generated_text&gt;"</span><span class="o">}]</span>,<span class="s2">"created"</span>:1715353182,<span class="s2">"model"</span>:<span class="s2">"llama2"</span>,<span class="s2">"system_fingerprint"</span>:null,<span class="s2">"object"</span>:<span class="s2">"text_completion"</span>,<span class="s2">"usage"</span>:<span class="o">{</span><span class="s2">"completion_tokens"</span>:26,<span class="s2">"prompt_tokens"</span>:4,<span class="s2">"total_tokens"</span>:30<span class="o">}}</span>
</code></pre></div>
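The completions request can be built and its response parsed in Python as well. A minimal sketch: the payload fields mirror the curl <code>-d</code> body above, and the parsing follows the sample response shown; the model name, prompt, and sample text are placeholders:

```python
import json

def build_completions_payload(model, prompt, max_tokens=30, stream=False):
    """Build the JSON body sent to /openai/v1/completions (mirrors curl -d)."""
    return json.dumps({"model": model, "prompt": prompt,
                       "stream": stream, "max_tokens": max_tokens})

def first_completion_text(response):
    """Pull the generated text out of an OpenAI-style completions response."""
    return response["choices"][0]["text"]

payload = build_completions_payload("llama2", "Where is Eiffel Tower?")

# Shape taken from the expected output above, with placeholder text.
sample = {"object": "text_completion",
          "choices": [{"finish_reason": "length", "index": 0,
                       "logprobs": None, "text": "Paris, France."}]}
print(first_completion_text(sample))  # -> Paris, France.
```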
<p>Sample OpenAI Chat request:</p>
<div class="highlight"><pre><span></span><code>curl<span class="w"> </span>-H<span class="w"> </span><span class="s2">"content-type:application/json"</span><span class="w"> </span>-H<span class="w"> </span><span class="s2">"Host: </span><span class="si">${</span><span class="nv">SERVICE_HOSTNAME</span><span class="si">}</span><span class="s2">"</span><span class="w"> </span>-v<span class="w"> </span>http://<span class="si">${</span><span class="nv">INGRESS_HOST</span><span class="si">}</span>:<span class="si">${</span><span class="nv">INGRESS_PORT</span><span class="si">}</span>/openai/v1/chat/completions<span class="w"> </span>-d<span class="w"> </span><span class="s1">'{"model": "${MODEL_NAME}", "messages": [{"role": "user","content": "&lt;message&gt;"}], "stream":false }'</span>
</code></pre></div>
<div class="admonition success">
<p class="admonition-title">Expected Output</p>
</div>
<div class="no-copy highlight"><pre><span></span><code><span class="w"> </span><span class="o">{</span><span class="s2">"id"</span>:<span class="s2">"cmpl-87ee252062934e2f8f918dce011e8484"</span>,<span class="s2">"choices"</span>:<span class="o">[{</span><span class="s2">"finish_reason"</span>:<span class="s2">"length"</span>,<span class="s2">"index"</span>:0,<span class="s2">"message"</span>:<span class="o">{</span><span class="s2">"content"</span>:<span class="s2">"&lt;generated_response&gt;"</span>,<span class="s2">"tool_calls"</span>:null,<span class="s2">"role"</span>:<span class="s2">"assistant"</span>,<span class="s2">"function_call"</span>:null<span class="o">}</span>,<span class="s2">"logprobs"</span>:null<span class="o">}]</span>,<span class="s2">"created"</span>:1715353461,<span class="s2">"model"</span>:<span class="s2">"llama2"</span>,<span class="s2">"system_fingerprint"</span>:null,<span class="s2">"object"</span>:<span class="s2">"chat.completion"</span>,<span class="s2">"usage"</span>:<span class="o">{</span><span class="s2">"completion_tokens"</span>:30,<span class="s2">"prompt_tokens"</span>:3,<span class="s2">"total_tokens"</span>:33<span class="o">}}</span>
</code></pre></div>
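The chat request differs from plain completions in that the prompt is wrapped in a <code>messages</code> list and the reply comes back under <code>choices[0].message</code>. A minimal Python sketch mirroring the curl body and the sample response above (model name and message text are placeholders):

```python
import json

def build_chat_payload(model, user_message, stream=False):
    """Build the JSON body sent to /openai/v1/chat/completions (mirrors curl -d)."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,
    })

def assistant_reply(response):
    """Extract the assistant message content from a chat.completion response."""
    return response["choices"][0]["message"]["content"]

payload = build_chat_payload("llama2", "Where is Eiffel Tower?")

# Shape taken from the expected output above, with placeholder content.
sample = {"object": "chat.completion",
          "choices": [{"finish_reason": "length", "index": 0,
                       "message": {"role": "assistant",
                                   "content": "It is in Paris."}}]}
print(assistant_reply(sample))  # -> It is in Paris.
```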
</article>
</div>
4 changes: 2 additions & 2 deletions master/python_runtime_api/docs/api/index.html
@@ -2778,7 +2778,7 @@ <h2 class="doc doc-heading" id="kserve.model.BaseKServeModel">
<a class="headerlink" href="#kserve.model.BaseKServeModel" title="Permanent link">¶</a></h2>
<div class="doc doc-contents">
<p class="doc doc-class-bases">
Bases: <code><span title="abc.ABC">ABC</span></code></p>
<p>A base class to inherit all of the kserve models from.</p>
<p>This class implements the expectations of model repository and model server.</p>
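The contract this base class defines can be sketched as follows. Since <code>kserve</code> may not be installed in every environment, the sketch uses a hand-written stand-in mirroring the load/predict/ready shape documented here; real code would import and subclass <code>kserve.Model</code> instead, and the echo behavior is purely illustrative:

```python
# Stand-in for kserve.Model, mirroring its load/predict/ready contract.
# In real code: from kserve import Model, ModelServer
class Model:
    def __init__(self, name: str):
        self.name = name
        self.ready = False   # the model server reports readiness from this flag

    def load(self):
        self.ready = True

    def predict(self, payload, headers=None):
        raise NotImplementedError

class EchoModel(Model):
    """Toy model: echoes its inputs back as predictions."""
    def load(self):
        # a real model would load weights / tokenizers here
        self.ready = True

    def predict(self, payload, headers=None):
        # v1 protocol shape: {"instances": [...]} -> {"predictions": [...]}
        return {"predictions": payload["instances"]}

m = EchoModel("echo")
m.load()
print(m.predict({"instances": [1, 2]}))  # -> {'predictions': [1, 2]}
```

In a real deployment the model instance is handed to the model server (e.g. <code>ModelServer().start([model])</code>), which routes prediction requests to <code>predict</code> once <code>ready</code> is true.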
<details class="quote">
@@ -2907,7 +2907,7 @@ <h2 class="doc doc-heading" id="kserve.model.Model">
<a class="headerlink" href="#kserve.model.Model" title="Permanent link">¶</a></h2>
<div class="doc doc-contents">
<p class="doc doc-class-bases">
Bases: <code><a class="autorefs autorefs-internal" title="kserve.model.BaseKServeModel" href="#kserve.model.BaseKServeModel">BaseKServeModel</a></code></p>
<details class="quote">
<summary>Source code in <code>kserve/model.py</code></summary>
<div class="highlight"><table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre><span></span><span class="normal">110</span>
2 changes: 1 addition & 1 deletion master/search/search_index.json

Large diffs are not rendered by default.


0 comments on commit 0eac541
