diff --git a/docs/operations/security-overview.md b/docs/operations/security-overview.md
index fa01e1b7e948..279a1327b97d 100644
--- a/docs/operations/security-overview.md
+++ b/docs/operations/security-overview.md
@@ -183,8 +183,6 @@ the extension used in the examples above.
 * [Kerberos](../development/extensions-core/druid-kerberos.md) for Kerberos authentication.
 * [User authentication and authorization](security-user-auth.md) for details about permissions.
 * [SQL permissions](security-user-auth.md#sql-permissions) for permissions on SQL system tables.
-* [The `druidapi` Python library](../tutorials/tutorial-jupyter-index.md),
-  provided as part of the Druid tutorials, to set up users and roles for learning how security works.
 
 ## Enable authorizers
 
diff --git a/docs/querying/tips-good-queries.md b/docs/querying/tips-good-queries.md
index 8b718d9b76ff..adbba8d59bed 100644
--- a/docs/querying/tips-good-queries.md
+++ b/docs/querying/tips-good-queries.md
@@ -23,7 +23,9 @@ sidebar_label: "Tips for writing good queries"
   ~ under the License.
   -->
 
-This topic includes tips and examples that can help you investigate and improve query performance and accuracy using [Apache Druid SQL](./sql.md). Use this topic as a companion to the Jupyter Notebook tutorial [Learn the basics of Druid SQL](https://github.com/apache/druid/blob/master/examples/quickstart/jupyter-notebooks/notebooks/03-query/00-using-sql-with-druidapi.ipynb).
+This topic includes tips and examples that can help you investigate and improve query performance and accuracy using [Apache Druid SQL](./sql.md).
+
+For an interactive tutorial on Druid SQL, see [Learn the basics of Druid SQL](https://github.com/implydata/learn-druid/tree/main/notebooks) within the [Learn Druid repo](https://github.com/implydata/learn-druid).
 
 Your ability to effectively query your data depends in large part on the way you've ingested and stored the data in Apache Druid. This document assumes that you've followed the best practices described in [Schema design tips and best practices](../ingestion/schema-design.md#general-tips-and-best-practices) when modeling your data.
 
@@ -68,7 +70,8 @@ When possible, design your SQL queries in such a way that they match the rules f
 
 Note that TopN queries are approximate in that each data process ranks its top K results and only returns those top K results to the Broker.
 
-You can follow the tutorial [Using TopN approximation in Druid queries](https://github.com/apache/druid/blob/master/examples/quickstart/jupyter-notebooks/notebooks/03-query/02-approxRanking.ipynb) to work through some examples with approximation turned on and off. The tutorial [Get to know Query view](../tutorials/tutorial-sql-query-view.md) demonstrates running aggregate queries in the Druid console.
+You can follow the tutorial [Using TopN approximation in Druid queries](https://github.com/implydata/learn-druid/tree/main/notebooks) within the [Learn Druid repo](https://github.com/implydata/learn-druid) to work through some examples with approximation turned on and off.
+The tutorial [Get to know Query view](../tutorials/tutorial-sql-query-view.md) demonstrates running aggregate queries in the Druid console.
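+
+For example, the following query (a minimal sketch that assumes the `wikipedia` datasource from the Druid quickstart) follows those rules: it groups on a single dimension, orders by an aggregate, and applies a LIMIT. To compare exact and approximate results, run it with the SQL query context parameter `useApproximateTopN` set to `true` and then to `false`:
+
+```sql
+-- Assumes the example wikipedia datasource from the Druid quickstart.
+-- Grouping on a single dimension, ordering by an aggregate, and applying a LIMIT
+-- lets Druid plan this as a TopN query when useApproximateTopN is true (the default).
+SELECT channel, COUNT(*) AS edits
+FROM wikipedia
+GROUP BY channel
+ORDER BY COUNT(*) DESC
+LIMIT 10
+```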
 
 ### Manually tune your queries
 
diff --git a/docs/tutorials/tutorial-jupyter-docker.md b/docs/tutorials/tutorial-jupyter-docker.md
deleted file mode 100644
index a1091f0ab7a4..000000000000
--- a/docs/tutorials/tutorial-jupyter-docker.md
+++ /dev/null
@@ -1,252 +0,0 @@
----
-id: tutorial-jupyter-docker
-title: "Docker for Jupyter Notebook tutorials"
-sidebar_label: "Docker for tutorials"
----
-
-
-
-Apache Druid provides a custom Jupyter container that contains the prerequisites
-for all [Jupyter-based Druid tutorials](tutorial-jupyter-index.md), as well as all of the tutorials themselves.
-You can run the Jupyter container, as well as containers for Druid and Apache Kafka,
-using the Docker Compose file provided in the Druid GitHub repository.
-
-You can run the following combination of applications:
-* [Jupyter only](#start-only-the-jupyter-container)
-* [Jupyter and Druid](#start-jupyter-and-druid)
-* [Jupyter, Druid, and Kafka](#start-jupyter-druid-and-kafka)
-* [Kafka and Jupyter](#start-kafka-and-jupyter)
-
-## Prerequisites
-
-Jupyter in Docker requires that you have **Docker** and **Docker Compose**.
-We recommend installing these through [Docker Desktop](https://docs.docker.com/desktop/).
-
-For ARM-based devices, see [Tutorial setup for ARM-based devices](#tutorial-setup-for-arm-based-devices).
-
-## Launch the Docker containers
-
-You run Docker Compose to launch Jupyter and optionally Druid or Kafka.
-Docker Compose references the configuration in `docker-compose.yaml`.
-Running Druid in Docker also requires the `environment` file, which
-sets the configuration properties for the Druid services.
-To get started, download both `docker-compose.yaml` and `environment` from
-[`tutorial-jupyter-docker.zip`](https://github.com/apache/druid/blob/master/examples/quickstart/jupyter-notebooks/docker-jupyter/tutorial-jupyter-docker.zip).
-
-Alternatively, you can clone the [Apache Druid repo](https://github.com/apache/druid) and
-access the files in `druid/examples/quickstart/jupyter-notebooks/docker-jupyter`.
-
-### Start only the Jupyter container
-
-If you already have Druid running locally or on another machine, you can run the Docker containers for Jupyter only.
-In the same directory as `docker-compose.yaml`, start the application:
-
-```bash
-docker compose --profile jupyter up -d
-```
-
-The Docker Compose file assigns `8889` for the Jupyter port.
-You can override the port number by setting the `JUPYTER_PORT` environment variable before starting the Docker application.
-
-If Druid is running local to the same machine as Jupyter, open the tutorial and set the `host` variable to `host.docker.internal` before starting. For example:
-```python
-host = "host.docker.internal"
-```
-
-### Start Jupyter and Druid
-
-Running Druid in Docker requires the `environment` file as well as an environment variable named `DRUID_VERSION`,
-which determines the version of Druid to use. The Druid version references the Docker tag to pull from the
-[Apache Druid Docker Hub](https://hub.docker.com/r/apache/druid/tags).
-
-In the same directory as `docker-compose.yaml` and `environment`, start the application:
-
-```bash
-DRUID_VERSION={{DRUIDVERSION}} docker compose --profile druid-jupyter up -d
-```
-
-### Start Jupyter, Druid, and Kafka
-
-Running Druid in Docker requires the `environment` file as well as the `DRUID_VERSION` environment variable.
-
-In the same directory as `docker-compose.yaml` and `environment`, start the application:
-
-```bash
-DRUID_VERSION={{DRUIDVERSION}} docker compose --profile all-services up -d
-```
-
-### Start Kafka and Jupyter
-
-If you already have Druid running externally, such as an existing cluster or a dedicated infrastructure for Druid, you can run the Docker containers for Kafka and Jupyter only.
-
-In the same directory as `docker-compose.yaml` and `environment`, start the application:
-
-```bash
-DRUID_VERSION={{DRUIDVERSION}} docker compose --profile kafka-jupyter up -d
-```
-
-If you have an external Druid instance running on a different machine than the one hosting the Docker Compose environment, change the `host` variable in the notebook tutorial to the hostname or address of the machine where Druid is running.
-
-If Druid is running local to the same machine as Jupyter, open the tutorial and set the `host` variable to `host.docker.internal` before starting. For example:
-
-```python
-host = "host.docker.internal"
-```
-
-To enable Druid to ingest data from Kafka within the Docker Compose environment, update the `bootstrap.servers` property in the Kafka ingestion spec to `localhost:9094` before ingesting. For reference, see [Consumer properties](../development/extensions-core/kafka-supervisor-reference.md#consumer-properties).
-
-### Update image from Docker Hub
-
-If you already have a local cache of the Jupyter image, you can update the image before running the application using the following command:
-
-```bash
-docker compose pull jupyter
-```
-
-### Use locally built image
-
-The default Docker Compose file pulls the custom Jupyter Notebook image from a third party Docker Hub.
-If you prefer to build the image locally from the official source, do the following:
-1. Clone the Apache Druid repository.
-2. Navigate to `examples/quickstart/jupyter-notebooks/docker-jupyter`.
-3. Start the services using `-f docker-compose-local.yaml` in the `docker compose` command. For example:
-
-```bash
-DRUID_VERSION={{DRUIDVERSION}} docker compose --profile all-services -f docker-compose-local.yaml up -d
-```
-
-## Access Jupyter-based tutorials
-
-The following steps show you how to access the Jupyter notebook tutorials from the Docker container.
-At startup, Docker creates and mounts a volume to persist data from the container to your local machine.
-This way you can save your work completed within the Docker container.
-
-1. Navigate to the notebooks at http://localhost:8889.
-:::info
- If you set `JUPYTER_PORT` to another port number, replace `8889` with the value of the Jupyter port.
-:::
-
-2. Select a tutorial. If you don't plan to save your changes, you can use the notebook directly as is. Otherwise, continue to the next step.
-
-3. Optional: To save a local copy of your tutorial work,
-select **File > Save as...** from the navigation menu. Then enter `work/.ipynb`.
-If the notebook still displays as read only, you may need to refresh the page in your browser.
-Access the saved files in the `notebooks` folder in your local working directory.
-
-## View the Druid web console
-
-To access the Druid web console in Docker, go to http://localhost:8888/unified-console.html.
-Use the web console to view datasources and ingestion tasks that you create in the tutorials.
-
-## Stop Docker containers
-
-Shut down the Docker application using the following command:
-
-```bash
-docker compose down -v
-```
-
-## Tutorial setup without using Docker
-
-To use the Jupyter Notebook-based tutorials without using Docker, do the following:
-
-1. Clone the Apache Druid repo, or download the [tutorials](tutorial-jupyter-index.md#tutorials)
-as well as the [Python client for Druid](tutorial-jupyter-index.md#python-api-for-druid).
-
-2. Install the prerequisite Python packages with the following commands:
-
-   ```bash
-   # Install requests
-   pip install requests
-   ```
-
-   ```bash
-   # Install JupyterLab
-   pip install jupyterlab
-
-   # Install Jupyter Notebook
-   pip install notebook
-   ```
-
-   Individual notebooks may list additional packages you need to install to complete the tutorial.
-
-3. In your Druid source repo, install `druidapi` with the following commands:
-
-   ```bash
-   cd examples/quickstart/jupyter-notebooks/druidapi
-   pip install .
-   ```
-
-4. Start Jupyter, in the same directory as the tutorials, using either JupyterLab or Jupyter Notebook:
-   ```bash
-   # Start JupyterLab on port 3001
-   jupyter lab --port 3001
-
-   # Start Jupyter Notebook on port 3001
-   jupyter notebook --port 3001
-   ```
-
-5. Start Druid. You can use the [Quickstart (local)](./index.md) instance. The tutorials
-   assume that you are using the quickstart, so no authentication or authorization
-   is expected unless explicitly mentioned.
-
-   If you contribute to Druid, and work with Druid integration tests, you can use a test cluster.
-   Assume you have an environment variable, `DRUID_DEV`, which identifies your Druid source repo.
-
-   ```bash
-   cd $DRUID_DEV
-   ./it.sh build
-   ./it.sh image
-   ./it.sh up
-   ```
-
-   Replace `` with one of the available integration test categories. See the integration
-   test `README.md` for details.
-
-You should now be able to access and complete the tutorials.
-
-## Tutorial setup for ARM-based devices
-
-For ARM-based devices, follow this setup to start Druid externally, while keeping Kafka and Jupyter within the Docker Compose environment:
-
-1. Start Druid using the `start-druid` script. You can follow [Quickstart (local)](./index.md) instructions. The tutorials
-   assume that you are using the quickstart, so no authentication or authorization is expected unless explicitly mentioned.
-2. Start either Jupyter only or Jupyter and Kafka using the following commands in the same directory as `docker-compose.yaml` and `environment`:
-
-   ```bash
-   # Start only Jupyter
-   docker compose --profile jupyter up -d
-
-   # Start Kafka and Jupyter
-   DRUID_VERSION={{DRUIDVERSION}} docker compose --profile kafka-jupyter up -d
-   ```
-
-3. If Druid is running local to the same machine as Jupyter, open the tutorial and set the `host` variable to `host.docker.internal` before starting. For example:
-   ```python
-   host = "host.docker.internal"
-   ```
-4. If using Kafka to handle the data stream that will be ingested into Druid and Druid is running local to the same machine, update the consumer property `bootstrap.servers` to `localhost:9094`.
-
-## Learn more
-
-See the following topics for more information:
-* [Jupyter Notebook tutorials](tutorial-jupyter-index.md) for the available Jupyter Notebook-based tutorials for Druid
-* [Tutorial: Run with Docker](docker.md) for running Druid from a Docker container
\ No newline at end of file
diff --git a/docs/tutorials/tutorial-jupyter-index.md b/docs/tutorials/tutorial-jupyter-index.md
index a0e14a5885fa..26ea7aa0b02d 100644
--- a/docs/tutorials/tutorial-jupyter-index.md
+++ b/docs/tutorials/tutorial-jupyter-index.md
@@ -23,53 +23,9 @@ sidebar_label: Jupyter Notebook tutorials
   ~ under the License.
   -->
-
+You can try out the Druid APIs using interactive Jupyter notebook tutorials.
-You can try out the Druid APIs using the Jupyter Notebook-based tutorials. These
-tutorials provide snippets of Python code that you can use to run calls against
-the Druid API to complete the tutorial.
+For ease of use, the tutorials are contained within their own open source [repo](https://github.com/implydata/learn-druid).
+See the [notebook index](https://github.com/implydata/learn-druid/tree/main/notebooks) for a list of available tutorials.
-
-## Prerequisites
-The simplest way to get started is to use Docker. In this case, you only need to set up Docker Desktop.
-For more information, see [Docker for Jupyter Notebook tutorials](tutorial-jupyter-docker.md).
-
-Otherwise, you can install the prerequisites on your own. Here's what you need:
-
-- An available Druid instance.
-- Python 3.7 or later
-- JupyterLab (recommended) or Jupyter Notebook running on a non-default port.
-By default, Druid and Jupyter both try to use port `8888`, so start Jupyter on a different port.
-- The `requests` Python package
-- The `druidapi` Python package
-
-For setup instructions, see [Tutorial setup without using Docker](tutorial-jupyter-docker.md#tutorial-setup-without-using-docker).
-Individual tutorials may require additional Python packages, such as for visualization or streaming ingestion.
-
-## Python API for Druid
-
-The `druidapi` Python package is a REST API for Druid.
-One of the notebooks shows how to use the Druid REST API. The others focus on other
-topics and use a simple set of Python wrappers around the underlying REST API. The
-wrappers reside in the `druidapi` package within the notebooks directory. While the package
-can be used in any Python program, the key purpose, at present, is to support these
-notebooks. See
-[Introduction to the Druid Python API](https://raw.githubusercontent.com/apache/druid/master/examples/quickstart/jupyter-notebooks/notebooks/01-introduction/01-druidapi-package-intro.ipynb)
-for an overview of the Python API.
-
-The `druidapi` package is already installed in the custom Jupyter Docker container for Druid tutorials.
-
-## Tutorials
-
-The notebooks are located in the [apache/druid repo](https://github.com/apache/druid/tree/master/examples/quickstart/jupyter-notebooks/). You can either clone the repo or download the notebooks you want individually.
-
-The links that follow are the raw GitHub URLs, so you can use them to download the notebook directly, such as with `wget`, or manually through your web browser. Note that if you save the file from your web browser, make sure to remove the `.txt` extension.
-
-- [Introduction to the Druid REST API](https://raw.githubusercontent.com/apache/druid/master/examples/quickstart/jupyter-notebooks/notebooks/04-api/00-getting-started.ipynb) walks you through some of the
-  basics related to the Druid REST API and several endpoints.
-- [Introduction to the Druid Python API](https://raw.githubusercontent.com/apache/druid/master/examples/quickstart/jupyter-notebooks/notebooks/01-introduction/01-druidapi-package-intro.ipynb) walks you through some of the
-  basics related to the Druid API using the Python wrapper API.
-- [Learn the basics of Druid SQL](https://raw.githubusercontent.com/apache/druid/master/examples/quickstart/jupyter-notebooks/notebooks/03-query/00-using-sql-with-druidapi.ipynb) introduces you to the unique aspects of Druid SQL with the primary focus on the SELECT statement.
-- [Ingest and query data from Apache Kafka](https://raw.githubusercontent.com/apache/druid/master/examples/quickstart/jupyter-notebooks/notebooks/02-ingestion/01-streaming-from-kafka.ipynb) walks you through ingesting an event stream from Kafka.
diff --git a/docs/tutorials/tutorial-sql-query-view.md b/docs/tutorials/tutorial-sql-query-view.md
index beeb08e15d4e..a313c7a300c2 100644
--- a/docs/tutorials/tutorial-sql-query-view.md
+++ b/docs/tutorials/tutorial-sql-query-view.md
@@ -26,7 +26,7 @@ sidebar_label: Get to know Query view
 
 This tutorial demonstrates some useful features built into Query view in Apache Druid.
 
-Query view lets you run [Druid SQL queries](../querying/sql.md) and [native (JSON-based) queries](../querying/querying.md) against ingested data. Try out the [Introduction to Druid SQL](./tutorial-jupyter-index.md#tutorials) tutorial to learn more about Druid SQL.
+Query view lets you run [Druid SQL queries](../querying/sql.md) and [native (JSON-based) queries](../querying/querying.md) against ingested data.
 
 You can use Query view to test and tune queries before you use them in API requests—for example, to perform [SQL-based ingestion](../api-reference/sql-ingestion-api.md). You can also ingest data directly in Query view.
 
@@ -193,3 +193,5 @@ For more information on ingestion and querying data, see the following topics:
 - [Ingestion](../ingestion/index.md) for an overview of ingestion and the ingestion methods available in Druid.
 - [SQL-based ingestion](../multi-stage-query/index.md) for an overview of SQL-based ingestion.
 - [SQL-based ingestion query examples](../multi-stage-query/examples.md) for examples of SQL-based ingestion for various use cases.
+- [Introduction to Druid SQL](https://github.com/implydata/learn-druid/tree/main/notebooks) to learn more about Druid SQL.
+
diff --git a/website/sidebars.json b/website/sidebars.json
index a38292bfafef..55974a9a887a 100644
--- a/website/sidebars.json
+++ b/website/sidebars.json
@@ -26,7 +26,6 @@
       "tutorials/tutorial-unnest-arrays",
       "tutorials/tutorial-query-deep-storage",
       "tutorials/tutorial-jupyter-index",
-      "tutorials/tutorial-jupyter-docker",
       "tutorials/tutorial-jdbc"
     ],
     "Design": [