
Commit

New examples naming format (#479)
gongy authored Oct 26, 2023
1 parent 8a473b0 commit edbd867
Showing 12 changed files with 20 additions and 16 deletions.
2 changes: 1 addition & 1 deletion 02_building_containers/screenshot.py
@@ -76,5 +76,5 @@ def main(url: str = "https://modal.com"):
print(f"wrote {len(data)} bytes to {filename}")


-# And we're done! Please also see our [introductory guide](/docs/guide/web-scraper) for another
+# And we're done! Please also see our [introductory guide](/docs/examples/web-scraper) for another
# example of a web scraper, with more in-depth logic.
2 changes: 1 addition & 1 deletion 06_gpu_and_ml/falcon_bitsandbytes.py
@@ -11,7 +11,7 @@
# to the sheer size of the model, the cold start time on Modal is around 2 minutes.
#
# For faster cold start at the expense of inference speed, check out
-# [Running Falcon-40B with AutoGPTQ](/docs/guide/ex/falcon_gptq).
+# [Running Falcon-40B with AutoGPTQ](/docs/examples/falcon_gptq).
#
# ## Setup
#
Expand Down
4 changes: 2 additions & 2 deletions 06_gpu_and_ml/falcon_gptq.py
@@ -8,8 +8,8 @@
# cold start time on Modal is around 25s.
#
# For faster inference at the expense of a slower cold start, check out
-# [Running Falcon-40B with `bitsandbytes` quantization](/docs/guide/ex/falcon_bitsandbytes). You can also
-# run a smaller, 7-billion-parameter model with the [OpenLLaMa example](/docs/guide/ex/openllama).
+# [Running Falcon-40B with `bitsandbytes` quantization](/docs/examples/falcon_bitsandbytes). You can also
+# run a smaller, 7-billion-parameter model with the [OpenLLaMa example](/docs/examples/openllama).
#
# ## Setup
#
2 changes: 1 addition & 1 deletion 06_gpu_and_ml/llm-frontend/index.html
@@ -15,7 +15,7 @@
<section x-data="state()" class="max-w-2xl mx-auto pt-16 px-4">
<div class="text-xs font-semibold tracking-wide uppercase text-center text-white">
<a
-href="https://modal.com/docs/guide/ex/text_generation_inference"
+href="https://modal.com/docs/examples/text_generation_inference"
class="inline-flex gap-x-1 items-center bg-lime-400 py-0.5 px-3 rounded-full hover:text-lime-400 hover:ring hover:ring-lime-400 hover:bg-white focus:outline-neutral-400"
target="_blank"
>
2 changes: 1 addition & 1 deletion 06_gpu_and_ml/openllama.py
@@ -141,6 +141,6 @@ def main():
# you could use OpenLLaMa to perform a more useful downstream task.
#
# If you're looking for useful responses out-of-the-box like ChatGPT, you could try Vicuna-13B, which is larger and has been instruction-tuned.
-# However, note that this model is not permissively licensed due to the dataset it was trained on. Refer to our [LLM voice chat](/docs/guide/llm-voice-chat)
+# However, note that this model is not permissively licensed due to the dataset it was trained on. Refer to our [LLM voice chat](/docs/examples/llm-voice-chat)
# post for how to build a complete voice chat app using Vicuna, or go straight to the [file](https://github.com/modal-labs/quillman/blob/main/src/llm_vicuna.py)
# if you want to run it by itself.
@@ -6,7 +6,7 @@
# Example by [@maxscheel](https://github.com/maxscheel)
#
# This example shows the Stable Diffusion 2.1 compiled with [AITemplate](https://github.com/facebookincubator/AITemplate) to run faster on Modal.
-# There is also a [Stable Diffusion CLI example](/docs/guide/ex/stable_diffusion_cli).
+# There is also a [Stable Diffusion CLI example](/docs/examples/stable_diffusion_cli).
#
# #### Upsides
# - Image generation improves over the CLI example to about 550ms per image generated (A10G, 10 steps, 512x512, png).
4 changes: 2 additions & 2 deletions 06_gpu_and_ml/stable_diffusion/stable_diffusion_cli.py
@@ -9,14 +9,14 @@
# that makes it run faster on Modal. The example takes about 10s to cold start
# and about 1.0s per image generated.
#
-# To use the new XL 1.0 model, see the example posted [here](/docs/guide/ex/stable_diffusion_xl).
+# To use the new XL 1.0 model, see the example posted [here](/docs/examples/stable_diffusion_xl).
#
# For instance, here are 9 images produced by the prompt
# `An 1600s oil painting of the New York City skyline`
#
# ![stable diffusion slackbot](./stable_diffusion_montage.png)
#
-# There is also a [Stable Diffusion Slack bot example](/docs/guide/ex/stable_diffusion_slackbot)
+# There is also a [Stable Diffusion Slack bot example](/docs/examples/stable_diffusion_slackbot)
# which does not have all the optimizations, but shows how you can set up a Slack command to
# trigger Stable Diffusion.
#
2 changes: 1 addition & 1 deletion 06_gpu_and_ml/stable_diffusion/stable_diffusion_onnx.py
@@ -1,6 +1,6 @@
# # Stable Diffusion with ONNX Runtime
#
-# This example is similar to the [Stable Diffusion CLI](/docs/guide/ex/stable_diffusion_cli)
+# This example is similar to the [Stable Diffusion CLI](/docs/examples/stable_diffusion_cli)
# example, but it runs inference using [ONNX Runtime](https://onnxruntime.ai/) instead of PyTorch.


2 changes: 1 addition & 1 deletion 06_gpu_and_ml/stable_diffusion/stable_diffusion_xl.py
@@ -5,7 +5,7 @@
# ---
# # Stable Diffusion XL 1.0
#
-# This example is similar to the [Stable Diffusion CLI](/docs/guide/ex/stable_diffusion_cli)
+# This example is similar to the [Stable Diffusion CLI](/docs/examples/stable_diffusion_cli)
# example, but it generates images from the larger XL 1.0 model. Specifically, it runs the
# first set of steps with the base model, followed by the refiner model.
#
4 changes: 2 additions & 2 deletions 09_job_queues/doc_ocr_jobs.py
@@ -6,7 +6,7 @@
#
# This tutorial shows you how to use Modal as an infinitely scalable job queue
# that can service async tasks from a web app. For the purpose of this tutorial,
-# we've also built a [React + FastAPI web app on Modal](/docs/guide/ex/doc_ocr_webapp)
+# we've also built a [React + FastAPI web app on Modal](/docs/examples/doc_ocr_webapp)
# that works together with it, but note that you don't need a web app running on Modal
# to use this pattern. You can submit async tasks to Modal from any Python
# application (for example, a regular Django app running on Kubernetes).
@@ -117,7 +117,7 @@ def parse_receipt(image: bytes):
#
# Modal will auto-scale to handle all the tasks queued, and
# then scale back down to 0 when there's no work left. To see how you could use this from a Python web
-# app, take a look at the [receipt parser frontend](/docs/guide/ex/doc_ocr_webapp)
+# app, take a look at the [receipt parser frontend](/docs/examples/doc_ocr_webapp)
# tutorial.

# ## Run manually
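For context, here is a minimal sketch of what submitting work to this job queue from an outside Python process could look like. It is not part of this commit: the deployed app name `example-doc-ocr-jobs` and the local file name are assumptions, while `parse_receipt` is the function shown in the hunk above.

```python
# Minimal sketch, assuming the job-queue app is deployed as "example-doc-ocr-jobs"
# and exposes the parse_receipt function from the doc_ocr_jobs example.
import modal

parse_receipt = modal.Function.lookup("example-doc-ocr-jobs", "parse_receipt")

with open("receipt.png", "rb") as f:          # any local receipt image
    call = parse_receipt.spawn(f.read())      # enqueue the job; returns immediately

print("submitted job:", call.object_id)       # keep the ID to poll for the result later
```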
4 changes: 2 additions & 2 deletions 09_job_queues/doc_ocr_webapp.py
@@ -9,7 +9,7 @@
# [React](https://reactjs.org/) + [FastAPI](https://fastapi.tiangolo.com/) application.
# We're going to build a simple "Receipt Parser" web app that submits OCR transcription
# tasks to a separate Modal app defined in the [Job Queue
-# tutorial](/docs/guide/ex/doc_ocr_jobs), polls until the task is completed, and displays
+# tutorial](/docs/examples/doc_ocr_jobs), polls until the task is completed, and displays
# the results. Try it out for yourself
# [here](https://modal-labs-example-doc-ocr-webapp-wrapper.modal.run/).
#
@@ -38,7 +38,7 @@
# and another to poll for the results of the job.
#
# In `parse`, we're going to submit tasks to the function defined in the [Job
-# Queue tutorial](/docs/guide/ex/doc_ocr_jobs), so we import it first using
+# Queue tutorial](/docs/examples/doc_ocr_jobs), so we import it first using
# [`Function.lookup`](/docs/reference/modal.Function#lookup).
#
# We call [`.spawn()`](/docs/reference/modal.Function#spawn) on the function handle
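As a rough companion sketch (not taken from this commit), polling for a spawned job's result by ID could look like the snippet below; the helper name `poll_result` is illustrative, while `FunctionCall.from_id` and `.get(timeout=0)` are the calls the webapp's polling endpoint relies on.

```python
# Hypothetical polling helper: looks up a FunctionCall by the ID returned from .spawn()
# and returns its output once the job has finished, or None while it is still running.
import modal
from modal.functions import FunctionCall


def poll_result(call_id: str):
    call = FunctionCall.from_id(call_id)
    try:
        return call.get(timeout=0)  # non-blocking check for a finished result
    except TimeoutError:
        return None                 # job not done yet; poll again later
```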
6 changes: 5 additions & 1 deletion 11_notebooks/basic.ipynb
@@ -17,7 +17,8 @@
"outputs": [],
"source": [
"import modal\n",
-"assert modal.__version__ > '0.49.0'"
+"\n",
+"assert modal.__version__ > \"0.49.0\""
]
},
{
@@ -50,6 +51,7 @@
"def double(x: int) -> int:\n",
" return x + x\n",
"\n",
"\n",
"double(5)"
]
},
@@ -79,13 +81,15 @@
"def double_with_modal(x: int) -> int:\n",
" return x + x\n",
"\n",
"\n",
"@stub.function()\n",
"def quadruple(x: int) -> int:\n",
" if x <= 1_000_000:\n",
" return double(x) + double(x)\n",
" else:\n",
" return double_with_modal.remote(x) + double_with_modal.remote(x)\n",
"\n",
"\n",
"with stub.run():\n",
" print(quadruple(100))\n",
" print(quadruple.remote(100)) # run remotely\n",
