remove deployment from apps that don't need it (#1001)
* remove deployment from apps that don't need it

* adds back in the deployment of 100k cbxes, used by load-testing example

* adds back deployment of trtllm_llama, used by dbt example
charlesfrye authored Dec 4, 2024
1 parent 9b8a786 commit aeb71ab
Showing 12 changed files with 3 additions and 15 deletions.
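For context on what is being deleted: each example in this repo opens with a `# ---`-delimited comment frontmatter block (`deploy`, `cmd`, `tags`, etc.) that the repo's tooling reads. The helper below is a hypothetical sketch of how such a block could be parsed — the repo's real tooling, key set, and value syntax may differ; the format is inferred only from the frontmatter visible in this commit's diffs.

```python
import json


def parse_frontmatter(source: str) -> dict:
    """Parse a leading '# ---' ... '# ---' comment block into a dict.

    Values that look like JSON (lists, booleans) are decoded; anything
    else is kept as a raw string. Sketch only, not the repo's tooling.
    """
    lines = source.splitlines()
    if not lines or lines[0].strip() != "# ---":
        return {}  # no frontmatter block at the top of the file
    meta = {}
    for line in lines[1:]:
        stripped = line.strip()
        if stripped == "# ---":
            break  # closing delimiter ends the block
        # Each entry looks like "# key: value".
        body = stripped.lstrip("#").strip()
        key, _, value = body.partition(":")
        value = value.strip()
        try:
            meta[key.strip()] = json.loads(value)
        except json.JSONDecodeError:
            meta[key.strip()] = value
    return meta


example = """# ---
# cmd: ["modal", "serve", "06_gpu_and_ml/comfyui/comfyapp.py"]
# deploy: true
# ---
"""
print(parse_frontmatter(example))
```

Under this reading, removing the `# deploy: true` line simply drops the key from the parsed metadata, so the example is no longer picked up for deployment.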
1 change: 0 additions & 1 deletion 06_gpu_and_ml/comfyui/comfyapp.py
@@ -1,6 +1,5 @@
 # ---
 # cmd: ["modal", "serve", "06_gpu_and_ml/comfyui/comfyapp.py"]
-# deploy: true
 # ---
 #
 # # Run Flux on ComfyUI interactively and as an API
1 change: 0 additions & 1 deletion 06_gpu_and_ml/controlnet/controlnet_gradio_demos.py
@@ -1,6 +1,5 @@
 # ---
 # cmd: ["modal", "serve", "06_gpu_and_ml/controlnet/controlnet_gradio_demos.py"]
-# deploy: false
 # tags: ["use-case-image-video-3d", "featured"]
 # ---
 #
1 change: 0 additions & 1 deletion 06_gpu_and_ml/hyperparameter-sweep/hp_sweep_gpt.py
@@ -1,5 +1,4 @@
 # ---
-# deploy: true
 # cmd: ["modal", "run", "06_gpu_and_ml/hyperparameter-sweep/hp_sweep_gpt.py", "--n-steps", "200", "--n-steps-before-checkpoint", "50", "--n-steps-before-eval", "50"]
 # ---
 
4 changes: 0 additions & 4 deletions 06_gpu_and_ml/llm-serving/chat_with_pdf_vision.py
@@ -1,7 +1,3 @@
-# ---
-# deploy: true
-# ---
-
 # # Chat with PDF: RAG with ColQwen2
 
 # In this example, we demonstrate how to use the [ColQwen2](https://huggingface.co/vidore/colqwen2-v0.1) model to build a simple
1 change: 0 additions & 1 deletion 06_gpu_and_ml/llm-serving/sgl_vlm.py
@@ -1,5 +1,4 @@
 # ---
-# deploy: true
 # tags: ["use-case-lm-inference", "use-case-image-video-3d"]
 # ---
 # # Run LLaVA-Next on SGLang for Visual QA
1 change: 0 additions & 1 deletion 06_gpu_and_ml/llm-serving/vllm_inference.py
@@ -1,5 +1,4 @@
 # ---
-# deploy: true
 # cmd: ["modal", "serve", "06_gpu_and_ml/llm-serving/vllm_inference.py"]
 # pytest: false
 # tags: ["use-case-lm-inference", "featured"]
2 changes: 1 addition & 1 deletion 06_gpu_and_ml/obj_detection_webcam/webcam.py
@@ -16,7 +16,7 @@
 #
 # ## Live demo
 #
-# [Take a look at the deployed app](https://modal-labs-example-webcam-object-detection-fastapi-app.modal.run/).
+# [Take a look at the deployed app](https://modal-labs--example-webcam-object-detection-fastapi-app.modal.run/).
 #
 # A couple of caveats:
 # * This is not optimized for latency: every prediction takes about 1s, and
1 change: 0 additions & 1 deletion 06_gpu_and_ml/stable_diffusion/text_to_image.py
@@ -2,7 +2,6 @@
 # output-directory: "/tmp/stable-diffusion"
 # args: ["--prompt", "A 1600s oil painting of the New York City skyline"]
 # tags: ["use-case-image-video-3d"]
-# deploy: true
 # ---
 
 # # Run Stable Diffusion 3.5 Large Turbo as a CLI, API, and web UI
1 change: 0 additions & 1 deletion 07_web_endpoints/count_faces.py
@@ -1,5 +1,4 @@
 # ---
-# deploy: true
 # cmd: ["modal", "serve", "07_web_endpoints/count_faces.py"]
 # ---
 
2 changes: 1 addition & 1 deletion 07_web_endpoints/fasthtml-checkboxes/fasthtml_checkboxes.py
@@ -1,6 +1,6 @@
 # ---
-# deploy: true
 # cmd: ["modal", "serve", "07_web_endpoints.fasthtml-checkboxes.fasthtml_checkboxes"]
+# deploy: true
 # mypy: ignore-errors
 # ---
 
1 change: 0 additions & 1 deletion 07_web_endpoints/fasthtml_app.py
@@ -1,5 +1,4 @@
 # ---
-# deploy: true
 # cmd: ["modal", "serve", "07_web_endpoints/fasthtml_app.py"]
 # ---
 
2 changes: 1 addition & 1 deletion 10_integrations/streamlit/serve_streamlit.py
@@ -82,5 +82,5 @@ def run():
 # modal deploy serve_streamlit.py
 # ```
 #
-# If successful, this will print a URL for your app, that you can navigate to from
+# If successful, this will print a URL for your app that you can navigate to from
 # your browser 🎉 .