diff --git a/cloud/connect-to-a-project.mdx b/cloud/connect-to-a-project.mdx index 0b6cd9d1..f9d165c6 100644 --- a/cloud/connect-to-a-project.mdx +++ b/cloud/connect-to-a-project.mdx @@ -26,7 +26,7 @@ To connect with any local clients, follow the steps below: * Alternatively, you can create a new user, RisingWave Cloud offers `psql`, `Connection String`, `Parameters Only`, `Java`, `Node.js`, `Python`, and `Golang` as connection options. -To connect via `psql`, you need to [Install psql](/docs/current/install-psql-without-postgresql/) in your environment. `psql` is a command-line interface for interacting with PostgreSQL databases, including RisingWave. +To connect via `psql`, you need to [Install psql](/deploy/install-psql-without-postgresql) in your environment. `psql` is a command-line interface for interacting with PostgreSQL databases, including RisingWave. 3. You may need to set up a CA certificate to enable SSL connections. See the instructions displayed on the portal for more details. diff --git a/cloud/create-a-connection.mdx b/cloud/create-a-connection.mdx index eafbd83b..374a255a 100644 --- a/cloud/create-a-connection.mdx +++ b/cloud/create-a-connection.mdx @@ -63,4 +63,4 @@ We aim to automate this process in the future to make it even easier. Now, you can create a source or sink with the PrivateLink connection using SQL. -For details on how to use the VPC endpoint to create a source with the PrivateLink connection, see [Create source with PrivateLink connection](/docs/current/ingest-from-kafka/#create-source-with-privatelink-connection); for creating a sink, see [Create sink with PrivateLink connection](/integrations/destinations/apache-kafka#create-sink-with-privatelink-connection). +For details on how to use the VPC endpoint to create a source with the PrivateLink connection, see [Create source with PrivateLink connection](/integrations/sources/kafka#create-source-with-privatelink-connection); for creating a sink, see [Create sink with PrivateLink connection](/integrations/destinations/apache-kafka#create-sink-with-privatelink-connection). diff --git a/cloud/develop-overview.mdx b/cloud/develop-overview.mdx index 9efd8e46..3288f42c 100644 --- a/cloud/develop-overview.mdx +++ b/cloud/develop-overview.mdx @@ -48,7 +48,7 @@ Select the version of the corresponding docs when using the RisingWave user docs ### Ecosystem - See how RisingWave can integrate with your existing data stack. Vote for your favorite data tools and streaming services to help us prioritize the integration development. + See how RisingWave can integrate with your existing data stack. Vote for your favorite data tools and streaming services to help us prioritize the integration development. Connect to and ingest data from external sources such as databases and message brokers. See supported data sources. Stream processed data out of RisingWave to message brokers and databases. See supported data destinations. @@ -90,14 +90,14 @@ Select the version of the corresponding docs when using the RisingWave user docs RisingWave offers support for popular PostgreSQL drivers, enabling seamless integration with your applications for interacting with it.

-Java
+Java

-Node.js
+Node.js

-Python
+Python

Go diff --git a/demos/fast-twitter-events-processing.mdx b/demos/fast-twitter-events-processing.mdx index 3e1d0487..89d9b816 100644 --- a/demos/fast-twitter-events-processing.mdx +++ b/demos/fast-twitter-events-processing.mdx @@ -92,7 +92,7 @@ CREATE SOURCE twitter ( ) FORMAT PLAIN ENCODE JSON; ``` -Note that the SQL statement uses the STRUCT data type. For details about the STRUCT data type, please see [Data types](/docs/current/sql-data-types/). +Note that the SQL statement uses the STRUCT data type. For details about the STRUCT data type, please see [Data types](/sql/data-types/overview). ## Step 3: Define a materialized view and analyze data diff --git a/deploy/migrate-to-sql-backend.mdx b/deploy/migrate-to-sql-backend.mdx index cbda6add..2a607050 100644 --- a/deploy/migrate-to-sql-backend.mdx +++ b/deploy/migrate-to-sql-backend.mdx @@ -32,7 +32,7 @@ Make sure the SQL backend service is operational and you have the necessary cred ### Back up etcd data -The migration process from etcd to a SQL backend is performed offline, so we recommend taking a backup of your current etcd data to avoid any data loss before the migration. Refer to the [meta-backup](/docs/current/meta-backup/) for detailed instructions. +The migration process from etcd to a SQL backend is performed offline, so we recommend taking a backup of your current etcd data to avoid any data loss before the migration. Refer to the [meta-backup](/operate/meta-backup) for detailed instructions. ## Procedure diff --git a/deploy/node-specific-configurations.mdx b/deploy/node-specific-configurations.mdx index 16c46e97..bc682bf1 100644 --- a/deploy/node-specific-configurations.mdx +++ b/deploy/node-specific-configurations.mdx @@ -59,7 +59,7 @@ recent_filter_rotate_interval_ms = 10000 When setting up configurations, please be extra careful with those items prefixed by `unsafe_`. Typically these configurations can cause system or data damage if wrongly configured. You may want to contact our technical support before changing the `unsafe_` prefixed configurations. ### System configurations -System configurations are used to **initialize** the [system parameters](/docs/current/view-configure-system-parameters/) at the first startup. Once the system has started, the system parameters are managed by Meta service and can be altered using the `ALTER SYSTEM SET` command. +System configurations are used to **initialize** the [system parameters](/operate/view-configure-system-parameters) at the first startup. Once the system has started, the system parameters are managed by Meta service and can be altered using the `ALTER SYSTEM SET` command. Example for the system configuration section: @@ -73,7 +73,7 @@ backup_storage_url = "minio://hummockadmin:hummockadmin@127.0.0.1:9301/hummock00 backup_storage_directory = "hummock_001/backup" ``` -For more information on system parameters, please refer to [View and configure system parameters](/docs/current/view-configure-system-parameters/). +For more information on system parameters, please refer to [View and configure system parameters](/operate/view-configure-system-parameters). ### Streaming configurations @@ -158,4 +158,4 @@ Below is an example of the cache refill configuration for your reference. #### Other storage configurations -Except for the above, RisingWave also provides some other storage configurations to help control the overall buffer and cache limits. Please see [Dedicated compute node](/docs/current/dedicated-compute-node/) for more. 
+Except for the above, RisingWave also provides some other storage configurations to help control the overall buffer and cache limits. Please see [Dedicated compute node](/operate/dedicated-compute-node) for more. diff --git a/deploy/risingwave-docker-compose.mdx b/deploy/risingwave-docker-compose.mdx index cdbe31ed..f4bd30dc 100644 --- a/deploy/risingwave-docker-compose.mdx +++ b/deploy/risingwave-docker-compose.mdx @@ -6,7 +6,7 @@ description: This topic describes how to start RisingWave using Docker Compose o In this option, RisingWave functions as an all-in-one service. All components of RisingWave, including the compute node, meta node, and compactor node, are put into a single process. They are executed in different threads, eliminating the need to start each component as a separate process. -However, please be aware that certain critical features, such as failover and resource management, are not implemented in this mode. Therefore, this option is not recommended for production deployments. For production deployments, please consider [RisingWave Cloud](/docs/current/risingwave-cloud/), [Kubernetes with Helm](/docs/current/risingwave-k8s-helm/), or [Kubernetes with Operator](/docs/current/risingwave-kubernetes/). +However, please be aware that certain critical features, such as failover and resource management, are not implemented in this mode. Therefore, this option is not recommended for production deployments. For production deployments, please consider [RisingWave Cloud](/deploy/risingwave-cloud), [Kubernetes with Helm](/deploy/risingwave-k8s-helm), or [Kubernetes with Operator](/deploy/risingwave-kubernetes). This option uses a pre-defined Docker Compose configuration file to set up a RisingWave cluster. @@ -167,7 +167,7 @@ Remember to replace the `docker-compose-with-storage_backend_name.yml` with the ## Connect to RisingWave -After RisingWave is up and running, you need to connect to it via the Postgres interactive terminal `psql` so that you can issue queries to RisingWave and see the query results. If you don't have `psql` installed, [install psql](/docs/current/install-psql-without-postgresql/) first. +After RisingWave is up and running, you need to connect to it via the Postgres interactive terminal `psql` so that you can issue queries to RisingWave and see the query results. If you don't have `psql` installed, [install psql](/deploy/install-psql-without-postgresql) first. ```bash psql -h localhost -p 4566 -d dev -U root diff --git a/deploy/risingwave-k8s-helm.mdx b/deploy/risingwave-k8s-helm.mdx index d4cba9e2..a1cec17f 100644 --- a/deploy/risingwave-k8s-helm.mdx +++ b/deploy/risingwave-k8s-helm.mdx @@ -8,7 +8,7 @@ sidebarTitle: Kubernetes with Helm * Ensure you have Helm 3.7 + installed in your environment. For details about how to install Helm, see the [Helm documentation](https://helm.sh/docs/intro/install/). * Ensure you have [Kubernetes](https://kubernetes.io/) 1.24 or higher installed in your environment. -* Ensure you allocate enough resources for the deployment. For details, see [Hardware requirements](/docs/current/hardware-requirements/). +* Ensure you allocate enough resources for the deployment. For details, see [Hardware requirements](/deploy/hardware-requirements). ## Step 1: Start Kubernetes @@ -112,7 +112,7 @@ psql -h localhost -p 4567 -d dev -U root ## Step 4: Monitor performance -You can monitor the RisingWave cluster using the monitoring stack. For details, see [Monitoring a RisingWave cluster](/docs/current/monitor-risingwave-cluster/). 
+You can monitor the RisingWave cluster using the monitoring stack. For details, see [Monitoring a RisingWave cluster](/operate/monitor-risingwave-cluster). ## Optional: Resize a node @@ -134,4 +134,4 @@ compactorComponent: memory: 64Mi ``` -Please note that increasing the CPU resource will not automatically increase the parallelism of existing materialized views. When scaling up (adding more CPU cores) a compute node, you should perform the scaling by following the instructions in [Cluster scaling](/docs/current/k8s-cluster-scaling/). +Please note that increasing the CPU resource will not automatically increase the parallelism of existing materialized views. When scaling up (adding more CPU cores) a compute node, you should perform the scaling by following the instructions in [Cluster scaling](/deploy/k8s-cluster-scaling). diff --git a/deploy/risingwave-kubernetes.mdx b/deploy/risingwave-kubernetes.mdx index 2d7855b5..c45e0597 100644 --- a/deploy/risingwave-kubernetes.mdx +++ b/deploy/risingwave-kubernetes.mdx @@ -10,11 +10,11 @@ The Operator is a deployment and management system for RisingWave. It runs on to * **[Install kubectl](http://pwittrock.github.io/docs/tasks/tools/install-kubectl/)** Ensure that the Kubernetes command-line tool [kubectl](https://kubernetes.io/docs/reference/kubectl/) is installed in your environment. -* **[Install psql](/docs/current/install-psql-without-postgresql/)** +* **[Install psql](/deploy/install-psql-without-postgresql)** Ensure that the PostgreSQL interactive terminal [psql](https://www.postgresql.org/docs/current/app-psql.html) is installed in your environment. * **[Install and run Docker](https://docs.docker.com/get-docker/)** Ensure that [Docker](https://docs.docker.com/desktop/) is installed in your environment and running. -* Ensure you allocate enough resources for the deployment. For details, see [Hardware requirements](/docs/current/hardware-requirements/). +* Ensure you allocate enough resources for the deployment. For details, see [Hardware requirements](/deploy/hardware-requirements). ## Create a Kubernetes cluster diff --git a/faq/faq-using-risingwave.mdx b/faq/faq-using-risingwave.mdx index 44d0156d..b18188a9 100644 --- a/faq/faq-using-risingwave.mdx +++ b/faq/faq-using-risingwave.mdx @@ -7,7 +7,7 @@ mode: wide Don't worry, this is by design. RisingWave uses memory for in-memory cache of streaming queries, such as data structures like hash tables, etc., to optimize streaming computation performance. By default, RisingWave will utilize all available memory (unless specifically configured through `RW_TOTAL_MEMORY_BYTES`/`--total-memory-bytes`). This is why setting memory limits is required in Kubernetes/Docker deployments. -During the instance running, RisingWave will keep memory usage below this limit. If you encounter unexpected issues like OOM (Out-of-memory), please refer to [Troubleshoot out-of-memory](/docs/current/troubleshoot-oom/) for assistance. +During the instance running, RisingWave will keep memory usage below this limit. If you encounter unexpected issues like OOM (Out-of-memory), please refer to [Troubleshoot out-of-memory](/troubleshoot/troubleshoot-oom) for assistance. As part of its design, RisingWave allocates part of the total memory in the compute node as reserved memory. This reserved memory is specifically set aside for system usage, such as the stack and code segment of processes, allocation overhead, and network buffer. 
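To make the memory discussion above concrete, the following is a minimal sketch of bounding RisingWave's memory by hand rather than through Kubernetes or Docker limits. It only reuses the `RW_TOTAL_MEMORY_BYTES`/`--total-memory-bytes` knob named in the FAQ; the environment-variable form, the single-process `risingwave` launcher, and the 8 GiB value are illustrative assumptions, not a verified command line.

```bash
# Illustrative sketch only: bound RisingWave's memory with the knob named in the FAQ above.
# Assumptions: the environment-variable form is honored by this launcher and 8 GiB suits the host.
RW_TOTAL_MEMORY_BYTES=$((8 * 1024 * 1024 * 1024)) risingwave
```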
@@ -48,7 +48,7 @@ By continuously improving the reserved memory feature, we strive to offer a more The execution time for the `CREATE MATERIALIZED VIEW` statement can vary based on several factors. Here are two common reasons: 1. **Backfilling of historical data**: RisingWave ensures consistent snapshots across materialized views (MVs). So when a new MV is created, it backfills all historical data from the upstream MV or tables and calculate them, which takes some time. And the created DDL statement will only end when the backfill ends. You can run `SHOW JOBS;` in SQL to check the DDL progress. If you want the create statement to not wait for the process to finish and not block the session, you can execute `SET BACKGROUND_DDL=true;` before running the `CREATE MATERIALIZED VIEW` statement. See details in [SET BACKGROUND\_DDL](/sql/commands/sql-set-background-ddl). But please notice that the newly created MV is still invisible in the catalog until the end of backfill when `BACKGROUND_DDL=true`. -2. **High cluster latency**: If the cluster experiences high latency, it may take longer to apply changes to the streaming graph. If the `Progress` in the `SHOW JOBS;` result stays at 0.0%, high latency could be the cause. See details in [Troubleshoot high latency](/docs/current/troubleshoot-high-latency/) +2. **High cluster latency**: If the cluster experiences high latency, it may take longer to apply changes to the streaming graph. If the `Progress` in the `SHOW JOBS;` result stays at 0.0%, high latency could be the cause. See details in [Troubleshoot high latency](/troubleshoot/troubleshoot-high-latency) Memory usage is divided into the following components: diff --git a/faq/risingwave-flink-comparison.mdx b/faq/risingwave-flink-comparison.mdx index 4b207455..c14021d8 100644 --- a/faq/risingwave-flink-comparison.mdx +++ b/faq/risingwave-flink-comparison.mdx @@ -8,7 +8,7 @@ We periodically update this article to keep up with the rapidly evolving landsca ## Summary -| Apache Flink | RisingWave | | +| | Apache Flink | RisingWave | | :------------------------------- | :-------------------------------------------------------------------- | :------------------------------------------------------------------------- | | Version | 1.17 | Latest version | | License | Apache License 2.0 | Apache License 2.0 | @@ -82,7 +82,7 @@ RisingWave is a SQL streaming database that offers PostgreSQL-style SQL to its u Apache Flink is a programming framework that does not support any language clients. To use Apache Flink, users must either write Java/Scala/Python programs or use Flink’s own SQL client. -RisingWave is compatible with the PostgreSQL wire protocol and can work with the majority of PostgreSQL's client libraries. This means that RisingWave can communicate in any programming language that is supported by the [PostgreSQL driver](https://wiki.postgresql.org/wiki/Client%5FLibraries), such as [Java](/docs/current/java-client-libraries/), [Python](/docs/current/python-client-libraries/), and [Node.js](/docs/current/nodejs-client-libraries/). Additionally, users can interact with RisingWave using `psql`, the official PostgreSQL terminal. +RisingWave is compatible with the PostgreSQL wire protocol and can work with the majority of PostgreSQL's client libraries. 
This means that RisingWave can communicate in any programming language that is supported by the [PostgreSQL driver](https://wiki.postgresql.org/wiki/Client%5FLibraries), such as [Java](/client-libraries/java), [Python](/client-libraries/python), and [Node.js](/client-libraries/nodejs). Additionally, users can interact with RisingWave using `psql`, the official PostgreSQL terminal. ## State management @@ -127,7 +127,7 @@ RisingWave differs from Apache Flink in that it was specifically designed for th RisingWave can function as both a stream processing system and a database system. As a database system, it is compatible with PostgreSQL clients, making it a natural fit for the PostgreSQL ecosystem. Users can program in different languages such as Python, Java, and Node.js using existing libraries. Additionally, users can easily find tools that work with RisingWave, such as DBeaver. -For a complete list of RisingWave integrations, see [Integrations](/docs/current/rw-integration-summary/). +For a complete list of RisingWave integrations, check what's listed under [Integrations](/integrations/overview). ## Learning curve diff --git a/get-started/quickstart.mdx b/get-started/quickstart.mdx index f801e8cb..2a6688e8 100644 --- a/get-started/quickstart.mdx +++ b/get-started/quickstart.mdx @@ -8,7 +8,7 @@ description: "This guide aims to provide a quick and easy way to get started wit The following options start RisingWave in the standalone mode. In this mode, data is stored in the file system and the metadata is stored in the embedded SQLite database. See [About RisingWave standalone mode](#about-risingwave-standalone-mode) for more details. -For extensive testing or single-machine deployment, consider [starting RisingWave via Docker Compose](/docs/current/risingwave-docker-compose/). For production environments, consider [RisingWave Cloud](/docs/current/risingwave-cloud/), our fully managed service, or [deployment on Kubernetes using the Operator](/docs/current/risingwave-kubernetes/) or [Helm Chart](/docs/current/risingwave-k8s-helm/). +For extensive testing or single-machine deployment, consider [starting RisingWave via Docker Compose](/deploy/risingwave-docker-compose). For production environments, consider [RisingWave Cloud](/deploy/risingwave-cloud), our fully managed service, or [deployment on Kubernetes using the Operator](/deploy/risingwave-kubernetes) or [Helm Chart](/deploy/risingwave-k8s-helm). ### Script installation @@ -44,7 +44,7 @@ risingwave ## Step 2: Connect to RisingWave -Ensure you have `psql` installed in your environment. To learn about how to install it, see [Install psql without PostgreSQL](/docs/current/install-psql-without-postgresql/). +Ensure you have `psql` installed in your environment. To learn about how to install it, see [Install psql without PostgreSQL](/deploy/install-psql-without-postgresql). Open a new terminal window and run: @@ -133,13 +133,13 @@ SELECT * FROM average_exam_scores; RisingWave standalone mode is a simplified deployment mode for RisingWave. It is designed to be minimal, easy to install, and configure. -Unlike other deployment modes, for instance [Docker Compose](/docs/current/risingwave-docker-compose/) or [Kubernetes](/docs/current/risingwave-kubernetes/), RisingWave standalone mode starts the cluster as a single process. This means that services like `compactor`, `frontend`, `compute` and `meta` are all embedded in this process. 
+Unlike other deployment modes, for instance [Docker Compose](/deploy/risingwave-docker-compose) or [Kubernetes](/deploy/risingwave-kubernetes), RisingWave standalone mode starts the cluster as a single process. This means that services like `compactor`, `frontend`, `compute` and `meta` are all embedded in this process. For state store, we will use the embedded `LocalFs` Object Store, eliminating the need for an external service like `minio` or `s3`; for meta store, we will use the embedded `SQLite` database, eliminating the need for an external service like `etcd`. By default, the RisingWave standalone mode will store its data in `~/risingwave`, which includes both `Metadata` and `State Data`. -For a batteries-included setup, with `monitoring` tools and external services like `kafka` fully included, you can use [Docker Compose](/docs/current/risingwave-docker-compose/) instead. If you would like to set up these external services manually, you may check out RisingWave's [Docker Compose](https://github.com/risingwavelabs/risingwave/blob/main/docker/docker-compose.yml), and run these services using the same configurations. +For a batteries-included setup, with `monitoring` tools and external services like `kafka` fully included, you can use [Docker Compose](/deploy/risingwave-docker-compose) instead. If you would like to set up these external services manually, you may check out RisingWave's [Docker Compose](https://github.com/risingwavelabs/risingwave/blob/main/docker/docker-compose.yml), and run these services using the same configurations. ## Configure RisingWave standalone mode diff --git a/get-started/rw-premium-edition-intro.mdx b/get-started/rw-premium-edition-intro.mdx index bef2e689..ac345763 100644 --- a/get-started/rw-premium-edition-intro.mdx +++ b/get-started/rw-premium-edition-intro.mdx @@ -24,11 +24,11 @@ RisingWave Premium 1.0 is the first major release of this new edition with sever * Automatic schema mapping to the source tables for [PostgreSQL CDC](/integrations/sources/postgresql-cdc#automatically-map-upstream-table-schema) and [MySQL CDC](/integrations/sources/mysql-cdc#automatically-map-upstream-table-schema) * [Automatic schema change for MySQL CDC](/integrations/sources/mysql-cdc#automatically-change-schema) -* [AWS Glue Schema Registry](/docs/current/ingest-from-kafka/#read-schemas-from-aws-glue-schema-registry) +* [AWS Glue Schema Registry](/integrations/sources/kafka#read-schemas-from-aws-glue-schema-registry) ### Connectors - + For users who are already using these features in 1.9.x or earlier versions, rest assured that the functionality of these features will be intact if you stay on the version. If you choose to upgrade to v2.0 or later versions, an error will show up to indicate you need a license to use the features. diff --git a/get-started/use-cases.mdx b/get-started/use-cases.mdx index 6b87ad28..e3af57f8 100644 --- a/get-started/use-cases.mdx +++ b/get-started/use-cases.mdx @@ -238,7 +238,7 @@ SELECT * FROM bidding_feature_vectors WHERE ad_id = 'specific_ad_id'; 4. Real-time inference -As new bidding data arrives, you can continuously update your feature vectors and use them for real-time inference, ensuring your bids are always informed by the most recent data. For instance, you can create a [User-defined function](/docs/current/user-defined-functions/), `PREDICT_BID`, that predicts the next bid given the most recent data. 
+As new bidding data arrives, you can continuously update your feature vectors and use them for real-time inference, ensuring your bids are always informed by the most recent data. For instance, you can create a [User-defined function](/sql/udfs/user-defined-functions), `PREDICT_BID`, that predicts the next bid given the most recent data. ```sql CREATE MATERIALIZED VIEW live_predictions AS diff --git a/ingestion/change-data-capture-with-risingwave.mdx b/ingestion/change-data-capture-with-risingwave.mdx index 83f5bb22..d9f44475 100644 --- a/ingestion/change-data-capture-with-risingwave.mdx +++ b/ingestion/change-data-capture-with-risingwave.mdx @@ -9,4 +9,4 @@ You can use event streaming systems like Apache Kafka, Pulsar, or Kinesis to str RisingWave also provides native MySQL and PostgreSQL CDC connectors. With these CDC connectors, you can ingest CDC data from these databases directly, without setting up additional services like Kafka. For complete step-to-step guides about using the native CDC connector to ingest MySQL and PostgreSQL data, see [Ingest data from MySQL](/integrations/sources/mysql-cdc) and [Ingest data from PostgreSQL](/integrations/sources/postgresql-cdc). This topic only describes the configurations for using RisingWave to ingest CDC data from an event streaming system. -For the supported sources and corresponding formats, see [Supported sources and formats](/docs/current/supported-sources-and-formats/). +For the supported sources and corresponding formats, see [Supported sources and formats](/ingestion/supported-sources-and-formats). diff --git a/ingestion/format-and-encode-parameters.mdx b/ingestion/format-and-encode-parameters.mdx index f87b46de..e83a1555 100644 --- a/ingestion/format-and-encode-parameters.mdx +++ b/ingestion/format-and-encode-parameters.mdx @@ -1,6 +1,6 @@ --- title: "FORMAT and ENCODE parameters" -description: "When creating a source or table using a connector, you need to specify the `FORMAT` and `ENCODE` section of the [CREATE SOURCE](/docs/current/sql-create-source/) or [CREATE TABLE](/docs/current/sql-create-source/) statement. This topic provides an overview of the formats and encoding options. For the complete list of formats we support, see [Supported sources and formats](/docs/current/supported-sources-and-formats/)" +description: "When creating a source or table using a connector, you need to specify the `FORMAT` and `ENCODE` section of the [CREATE SOURCE](/docs/current/sql-create-source/) or [CREATE TABLE](/docs/current/sql-create-source/) statement. This topic provides an overview of the formats and encoding options. For the complete list of formats we support, see [Supported sources and formats](/ingestion/supported-sources-and-formats)" sidebarTitle: Formats and encoding mode: wide --- diff --git a/ingestion/generate-test-data.mdx b/ingestion/generate-test-data.mdx index 5c8b70be..b34631cc 100644 --- a/ingestion/generate-test-data.mdx +++ b/ingestion/generate-test-data.mdx @@ -68,7 +68,7 @@ Specify the following fields for every column in the source you are creating. | column\_parameter | Description | Value | Required? | | :---------------- | :--------------- | :------------- | :----------------- | | kind | Generator type. | Set to `random`. | False. Default: `random` | -| max\_past | Specify the maximum deviation from the baseline timestamp or timestamptz to determine the earliest possible timestamp or timestamptz that can be generated. | An [interval](/docs/current/sql-data-types/). Example: `2h 37min` | False. 
Default: `1 day` | +| max\_past | Specify the maximum deviation from the baseline timestamp or timestamptz to determine the earliest possible timestamp or timestamptz that can be generated. | An [interval](/sql/data-types/overview). Example: `2h 37min` | False. Default: `1 day` | | max\_past\_mode | Specify the baseline timestamp or timestamptz. The range for generated timestamps or timestamptzs is \[base time - `max_past`, base time\] | `absolute` — The base time is set to the execution time of the generator. The base time is fixed for each generation. `relative` — The base time is the system time obtained each time a new record is generated. | False. Default: `absolute` | | basetime | If set, the generator will ignore max\_past\_mode and use the specified time as the base time. | A [date and time string](https://docs.rs/chrono/latest/chrono/struct.DateTime.html#method.parse%5Ffrom%5Frfc3339). Example: `2023-04-01T16:39:57-08:00` | False. Default: generator execution time | | seed | A seed number that initializes the random load generator. The sequence of the generated timestamps or timestamptzs is determined by the seed value. If given the same seed number, the generator will produce the same sequence of timestamps or timestamptzs. | A positive integer. Example: `3` | False. If not specified, a fixed sequence of timestamps or timestamptzs will be generated (if the system time is constant). | @@ -89,7 +89,7 @@ Specify the following fields for every column in the source you are creating. -The generator supports generating data in a [struct](/docs/current/data-type-struct/). A column of `struct` type can contain multiple nested columns of different types. +The generator supports generating data in a [struct](/sql/data-types/struct). A column of `struct` type can contain multiple nested columns of different types. The following statement creates a load generator source which contains one column, `v1`. `v1` consists of two nested columns `v2` and `v3`. @@ -114,7 +114,7 @@ When you configure a nested column, use `column.nested_column` to specify it. Fo -The generator supports generating data in an [array](/docs/current/data-type-array/). An array is a list of elements of the same type. Append `[]` to the data type of the column when creating the source. +The generator supports generating data in an [array](/sql/data-types/array-type). An array is a list of elements of the same type. Append `[]` to the data type of the column when creating the source. The following statement creates a load generator source which contains one column, `c1`. `c1` is an array of `varchar`. diff --git a/ingestion/modify-source-or-table-schemas.mdx b/ingestion/modify-source-or-table-schemas.mdx index ffcb5831..a7e2256d 100644 --- a/ingestion/modify-source-or-table-schemas.mdx +++ b/ingestion/modify-source-or-table-schemas.mdx @@ -21,7 +21,7 @@ Similarly, to add a column to a table, use this command: ALTER TABLE ADD COLUMN ; ``` -For details about these two commands, see [ALTER SOURCE](/docs/current/sql-alter-source/) and [ALTER TABLE](/docs/current/sql-alter-table/). +For details about these two commands, see [ALTER SOURCE](/sql/commands/sql-alter-source) and [ALTER TABLE](/sql/commands/sql-alter-table). Note that you cannot add a primary key column to a source or table in RisingWave. To modify the primary key of a source or table, you need to recreate the table. 
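To illustrate the note above about recreating a table when the primary key has to change, here is a hypothetical sketch; the table and column names are invented, and the final step assumes `ALTER TABLE ... RENAME TO` is available in your version.

```sql
-- Hypothetical sketch: declare the desired primary key on a new table,
-- copy the existing rows over, then swap the tables. All names are illustrative.
CREATE TABLE orders_v2 (order_id INT PRIMARY KEY, amount DOUBLE PRECISION);
INSERT INTO orders_v2 SELECT order_id, amount FROM orders;
DROP TABLE orders;
ALTER TABLE orders_v2 RENAME TO orders; -- assumes RENAME TO is supported in your version
```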
@@ -59,7 +59,7 @@ ALTER TABLE table_name DROP COLUMN column_name; ### Source -At present, combined with the [ALTER SOURCE command](/docs/current/sql-alter-source/#format-and-encode-options), you can refresh the schema registry of a source by refilling its [FORMAT and ENCODE options](/docs/current/formats-and-encode-parameters/). The syntax is: +At present, combined with the [ALTER SOURCE command](/sql/commands/sql-alter-source#format-and-encode-options), you can refresh the schema registry of a source by refilling its [FORMAT and ENCODE options](/ingestion/format-and-encode-parameters). The syntax is: ```sql ALTER SOURCE source_name FORMAT data_format ENCODE data_encode [ ( @@ -95,7 +95,7 @@ ALTER SOURCE src_user FORMAT PLAIN ENCODE PROTOBUF( Currently, it is not supported to modify the `data_format` and `data_encode`. Furthermore, when refreshing the schema registry of a source, it is not allowed to drop columns or change types. -In addition, when the [FORMAT and ENCODE options](/docs/current/formats-and-encode-parameters/) are not changed, the `REFRESH SCHEMA` clause of `ALTER SOURCE` can also be used to refresh the schema of a source. +In addition, when the [FORMAT and ENCODE options](/ingestion/format-and-encode-parameters) are not changed, the `REFRESH SCHEMA` clause of `ALTER SOURCE` can also be used to refresh the schema of a source. ```sql ALTER SOURCE source_name REFRESH SCHEMA; @@ -130,7 +130,7 @@ For more details about this example, see our [test file](https://github.com/risi ### Table -Similarly, you can use the following statement to refresh the schema of a table with connectors. For more details, see [ALTER TABLE](/docs/current/sql-alter-table/#refresh-schema). +Similarly, you can use the following statement to refresh the schema of a table with connectors. For more details, see [ALTER TABLE](/sql/commands/sql-alter-table#refresh-schema). Refresh schema of table @@ -144,6 +144,6 @@ If a downstream fragment references a column that is either missing or has under ## See also -* [ALTER SOURCE command](/docs/current/sql-alter-source/) -* [ALTER TABLE command](/docs/current/sql-alter-table/) -* [ALTER SCHEMA command](/docs/current/sql-alter-schema/) +* [ALTER SOURCE command](/sql/commands/sql-alter-source) +* [ALTER TABLE command](/sql/commands/sql-alter-table) +* [ALTER SCHEMA command](/sql/commands/sql-alter-schema) diff --git a/ingestion/overview.mdx b/ingestion/overview.mdx index 0a5e29cf..e530459b 100644 --- a/ingestion/overview.mdx +++ b/ingestion/overview.mdx @@ -42,7 +42,7 @@ SELECT * FROM kafka_source; ``` -* Also, queries can be executed directly on the source, and **ad-hoc ingestion** will happen during the query's processing, see more information in [directly query Kafka](/docs/current/ingest-from-kafka/#query-kafka-timestamp). +* Also, queries can be executed directly on the source, and **ad-hoc ingestion** will happen during the query's processing, see more information in [directly query Kafka](/integrations/sources/kafka#query-kafka-timestamp). ```sql SELECT * FROM source_name WHERE _rw_kafka_timestamp > now() - interval '10 minute'; @@ -125,8 +125,8 @@ INSERT INTO t1 SELECT * FROM source_iceberg_t1; The information presented above provides a brief overview of the data ingestion process in RisingWave. To gain a more comprehensive understanding of this process, the following topics in this section will delve more deeply into the subject matter. 
Here is a brief introduction to what you can expect to find in each topic: * Among different types of sources, we have abstracted a series of common syntax and features. - * For more detailed information about the types, formats, and encoding options of sources, see [Formats and encoding](/docs/current/formats-and-encode-parameters/). - * For the complete list of the sources and formats supported in RisingWave, see [Supported sources and formats](/docs/current/supported-sources-and-formats/). + * For more detailed information about the types, formats, and encoding options of sources, see [Formats and encoding](/ingestion/format-and-encode-parameters). + * For the complete list of the sources and formats supported in RisingWave, see [Supported sources and formats](/ingestion/supported-sources-and-formats). * To learn about how to manage schemas and ingest additional fields from sources : * [Modify source or table schemas](/docs/current/modify-schemas/) * [Ingest additional fields with INCLUDE clause](/docs/current/include-clause/) diff --git a/ingestion/supported-sources-and-formats.mdx b/ingestion/supported-sources-and-formats.mdx index a6a644e0..6cd846ae 100644 --- a/ingestion/supported-sources-and-formats.mdx +++ b/ingestion/supported-sources-and-formats.mdx @@ -12,7 +12,7 @@ To ingest data in formats marked with "T", you need to create tables (with conne | Connector | Version | Format | | :------------ | :------------ | :------------------- | -| [Kafka](/docs/current/ingest-from-kafka/) | 3.1.0 or later versions | [Avro](#avro), [JSON](#json), [protobuf](#protobuf), [Debezium JSON](#debezium-json) (T), [Debezium AVRO](#debezium-avro) (T), [DEBEZIUM\_MONGO\_JSON](#debezium-mongo-json) (T), [Maxwell JSON](#maxwell-json) (T), [Canal JSON](#canal-json) (T), [Upsert JSON](#upsert-json) (T), [Upsert AVRO](#upsert-avro) (T), [Bytes](#bytes) | +| [Kafka](/integrations/sources/kafka) | 3.1.0 or later versions | [Avro](#avro), [JSON](#json), [protobuf](#protobuf), [Debezium JSON](#debezium-json) (T), [Debezium AVRO](#debezium-avro) (T), [DEBEZIUM\_MONGO\_JSON](#debezium-mongo-json) (T), [Maxwell JSON](#maxwell-json) (T), [Canal JSON](#canal-json) (T), [Upsert JSON](#upsert-json) (T), [Upsert AVRO](#upsert-avro) (T), [Bytes](#bytes) | | [Redpanda](/docs/current/ingest-from-redpanda/) | Latest | [Avro](#avro), [JSON](#json), [protobuf](#protobuf) | | [Pulsar](/integrations/sources/pulsar) | 2.8.0 or later versions | [Avro](#avro), [JSON](#json), [protobuf](#protobuf), [Debezium JSON](#debezium-json) (T), [Maxwell JSON](#maxwell-json) (T), [Canal JSON](#canal-json) (T) | | [Kinesis](/docs/current/ingest-from-kinesis/) | Latest | [Avro](#avro), [JSON](#json), [protobuf](#protobuf), [Debezium JSON](#debezium-json) (T), [Maxwell JSON](#maxwell-json) (T), [Canal JSON](#canal-json) (T) | @@ -34,7 +34,7 @@ When creating a source, you need to specify the data and encoding formats in the ### Avro -For data in Avro format, you must specify a message and a schema registry. For Kafka data in Avro, you need to provide a Confluent Schema Registry that RisingWave can get the schema from. For more details about using Schema Registry for Kafka data, see [Read schema from Schema Registry](/docs/current/ingest-from-kafka/#read-schemas-from-schema-registry). +For data in Avro format, you must specify a message and a schema registry. For Kafka data in Avro, you need to provide a Confluent Schema Registry that RisingWave can get the schema from. 
For more details about using Schema Registry for Kafka data, see [Read schema from Schema Registry](/integrations/sources/kafka#read-schemas-from-confluent-schema-registry). `schema.registry` can accept multiple addresses. RisingWave will send requests to all URLs and return the first successful result. @@ -65,7 +65,7 @@ Note that for `map.handling.mode = 'jsonb'`, the value types can only be: `null` ### Debezium AVRO -When creating a source from streams in with Debezium AVRO, the schema of the source does not need to be defined in the `CREATE TABLE` statement as it can be inferred from the `SCHEMA REGISTRY`. This means that the schema file location must be specified. The schema file location can be an actual Web location, which is in `http://...`, `https://...`, or `S3://...` format, or a Confluent Schema Registry. For more details about using Schema Registry for Kafka data, see [Read schema from Schema Registry](/docs/current/ingest-from-kafka/#read-schemas-from-schema-registry). +When creating a source from streams in with Debezium AVRO, the schema of the source does not need to be defined in the `CREATE TABLE` statement as it can be inferred from the `SCHEMA REGISTRY`. This means that the schema file location must be specified. The schema file location can be an actual Web location, which is in `http://...`, `https://...`, or `S3://...` format, or a Confluent Schema Registry. For more details about using Schema Registry for Kafka data, see [Read schema from Schema Registry](/integrations/sources/kafka#read-schemas-from-confluent-schema-registry). `schema.registry` can accept multiple addresses. RisingWave will send requests to all URLs and return the first successful result. @@ -183,7 +183,7 @@ ENCODE JSON [ ( ### Protobuf -For data in protobuf format, you must specify a message (fully qualified by package path) and a schema location. The schema location can be an actual Web location that is in `http://...`, `https://...`, or `S3://...` format. For Kafka data in protobuf, instead of providing a schema location, you can provide a Confluent Schema Registry that RisingWave can get the schema from. For more details about using Schema Registry for Kafka data, see [Read schema from Schema Registry](/docs/current/ingest-from-kafka/#read-schemas-from-schema-registry). +For data in protobuf format, you must specify a message (fully qualified by package path) and a schema location. The schema location can be an actual Web location that is in `http://...`, `https://...`, or `S3://...` format. For Kafka data in protobuf, instead of providing a schema location, you can provide a Confluent Schema Registry that RisingWave can get the schema from. For more details about using Schema Registry for Kafka data, see [Read schema from Schema Registry](/integrations/sources/kafka#read-schemas-from-confluent-schema-registry). `schema.registry` can accept multiple addresses. RisingWave will send requests to all URLs and return the first successful result. diff --git a/integrations/destinations/apache-doris.mdx b/integrations/destinations/apache-doris.mdx index 92d2116f..60cd8bdc 100644 --- a/integrations/destinations/apache-doris.mdx +++ b/integrations/destinations/apache-doris.mdx @@ -8,7 +8,7 @@ description: "This guide describes how to sink data from RisingWave to Apache Do * Ensure that RisingWave can access the network where the Doris backend and frontend are located. 
For more details, see [Synchronize Data Through External Table](https://doris.apache.org/docs/dev/data-operate/import/import-scenes/external-table-load/). * Ensure you have an upstream materialized view or source that you can sink data from. For more details, see [CREATE SOURCE](/docs/current/sql-create-source/) or [CREATE MATERIALIZED VIEW](/docs/current/sql-create-mv/). -* Ensure that for `struct` elements, the name and type are the same in Doris and RisingWave. If they are not the same, the values will be set to `NULL` or to default values. For more details on the `struct` data type, see [Struct](/docs/current/data-type-struct/). +* Ensure that for `struct` elements, the name and type are the same in Doris and RisingWave. If they are not the same, the values will be set to `NULL` or to default values. For more details on the `struct` data type, see [Struct](/sql/data-types/struct). ## Syntax @@ -75,7 +75,7 @@ WITH ( ## Data type mapping -The following table shows the corresponding data types between RisingWave and Doris that should be specified when creating a sink. For details on native RisingWave data types, see [Overview of data types](/docs/current/sql-data-types/). +The following table shows the corresponding data types between RisingWave and Doris that should be specified when creating a sink. For details on native RisingWave data types, see [Overview of data types](/sql/data-types/overview). In regards to `decimal` types, RisingWave will round to the nearest decimal place to ensure that its precision matches that of Doris. Ensure that the length of decimal types being imported into Doris does not exceed Doris's decimal length. Otherwise, it will fail to import. diff --git a/integrations/destinations/snowflake.mdx b/integrations/destinations/snowflake.mdx index fa410e0c..f0c23376 100644 --- a/integrations/destinations/snowflake.mdx +++ b/integrations/destinations/snowflake.mdx @@ -58,7 +58,7 @@ All parameters are required unless specified otherwise. ## Data type mapping -The following table shows the corresponding data types between RisingWave and Snowflake. For details on native RisingWave data types, see [Overview of data types](/docs/current/sql-data-types/). +The following table shows the corresponding data types between RisingWave and Snowflake. For details on native RisingWave data types, see [Overview of data types](/sql/data-types/overview). | RisingWave type | Snowflake type | | :-------------- | :---------------------------------------------------------------- | diff --git a/integrations/destinations/sql-server.mdx b/integrations/destinations/sql-server.mdx index 2e8cc291..873bf9c7 100644 --- a/integrations/destinations/sql-server.mdx +++ b/integrations/destinations/sql-server.mdx @@ -46,7 +46,7 @@ WITH ( ## Data type mapping -The following table shows the corresponding data types between RisingWave and SQL Server that should be specified when creating a sink. For details on native RisingWave data types, see [Overview of data types](/docs/current/sql-data-types/). +The following table shows the corresponding data types between RisingWave and SQL Server that should be specified when creating a sink. For details on native RisingWave data types, see [Overview of data types](/sql/data-types/overview). 
| SQL Server type | RisingWave type | | :-------------- | :-------------------------- | diff --git a/integrations/destinations/starrocks.mdx b/integrations/destinations/starrocks.mdx index 1e072696..85ea9b36 100644 --- a/integrations/destinations/starrocks.mdx +++ b/integrations/destinations/starrocks.mdx @@ -66,7 +66,7 @@ FROM bhv_mv WITH ( ``` ## Data type mapping -The following table shows the corresponding data type in RisingWave that should be specified when creating a sink. For details on native RisingWave data types, see [Overview of data types](/docs/current/sql-data-types/). +The following table shows the corresponding data type in RisingWave that should be specified when creating a sink. For details on native RisingWave data types, see [Overview of data types](/sql/data-types/overview). | StarRocks type | RisingWave type | | :------------- | :---------------------------------- | diff --git a/integrations/destinations/tidb.mdx b/integrations/destinations/tidb.mdx index d218ca4e..1c4325b6 100644 --- a/integrations/destinations/tidb.mdx +++ b/integrations/destinations/tidb.mdx @@ -8,7 +8,7 @@ For the syntax, settings, and examples, see [Sink data from RisingWave to MySQL ### Data type mapping -The following table shows the corresponding data types between RisingWave and TiDB. For details on native RisingWave data types, see [Overview of data types](/docs/current/sql-data-types/). +The following table shows the corresponding data types between RisingWave and TiDB. For details on native RisingWave data types, see [Overview of data types](/sql/data-types/overview). | RisingWave type | TiDB type | | :-------------- | :------------------------------------------------- | diff --git a/integrations/sources/amazon-msk.mdx b/integrations/sources/amazon-msk.mdx index 1e005647..7b5d8d58 100644 --- a/integrations/sources/amazon-msk.mdx +++ b/integrations/sources/amazon-msk.mdx @@ -163,7 +163,7 @@ psql -h localhost -p 4566 -d dev -U root ### Create a source in RisingWave -To learn about the specific syntax used to consume data from a Kafka topic, see [Ingest data from Kafka](/docs/current/ingest-from-kafka/). +To learn about the specific syntax used to consume data from a Kafka topic, see [Ingest data from Kafka](/integrations/sources/kafka). For example, the following query creates a table that consumes data from an MSK topic connected to Kafka. diff --git a/integrations/sources/confluent-cloud.mdx b/integrations/sources/confluent-cloud.mdx index 05ca26d3..a7f0d5fd 100644 --- a/integrations/sources/confluent-cloud.mdx +++ b/integrations/sources/confluent-cloud.mdx @@ -54,7 +54,7 @@ Create a table in RisingWave to ingest data from the Kafka topic created in Conf The following query will create a table that connects to the data generator created in Confluent. Remember to fill in the authentication parameters accordingly. -See the [Ingest data from Kafka](/docs/current/ingest-from-kafka/) topic for more details on the syntax and connection parameters. +See the [Ingest data from Kafka](/integrations/sources/kafka) topic for more details on the syntax and connection parameters. 
```sql CREATE TABLE s ( diff --git a/integrations/sources/mysql-cdc.mdx b/integrations/sources/mysql-cdc.mdx index 4aa58ba4..7ba04220 100644 --- a/integrations/sources/mysql-cdc.mdx +++ b/integrations/sources/mysql-cdc.mdx @@ -275,7 +275,7 @@ cdc_offset | {"MySql": {"filename": "binlog.000005", "position": 60946679 ## Data type mapping -The following table shows the corresponding data type in RisingWave that should be specified when creating a source. For details on native RisingWave data types, see [Overview of data types](/docs/current/sql-data-types/). +The following table shows the corresponding data type in RisingWave that should be specified when creating a source. For details on native RisingWave data types, see [Overview of data types](/sql/data-types/overview). RisingWave data types marked with an asterisk indicate that while there is no corresponding RisingWave data type, the ingested data can still be consumed as the listed type. diff --git a/integrations/sources/overview.mdx b/integrations/sources/overview.mdx index 7fd667fc..a471e311 100644 --- a/integrations/sources/overview.mdx +++ b/integrations/sources/overview.mdx @@ -5,4 +5,4 @@ mode: wide sidebarTitle: Overview --- - 6 items 5 items 1 item 3 items 3 item + 6 items 5 items 1 item 3 items 3 item diff --git a/integrations/sources/postgresql-cdc.mdx b/integrations/sources/postgresql-cdc.mdx index f0b5072a..035c5d20 100644 --- a/integrations/sources/postgresql-cdc.mdx +++ b/integrations/sources/postgresql-cdc.mdx @@ -290,7 +290,7 @@ To check the progress of backfilling historical data, find the corresponding int ## Data type mapping -The following table shows the corresponding data type in RisingWave that should be specified when creating a source. For details on native RisingWave data types, see [Overview of data types](/docs/current/sql-data-types/). +The following table shows the corresponding data type in RisingWave that should be specified when creating a source. For details on native RisingWave data types, see [Overview of data types](/sql/data-types/overview). RisingWave data types marked with an asterisk indicate that while there is no corresponding RisingWave data type, the ingested data can still be consumed as the listed type. diff --git a/integrations/sources/redhat-amq-streams.mdx b/integrations/sources/redhat-amq-streams.mdx index f3510dff..ac19e394 100644 --- a/integrations/sources/redhat-amq-streams.mdx +++ b/integrations/sources/redhat-amq-streams.mdx @@ -14,7 +14,7 @@ Before ingesting data from RedHat AMQ Streams into RisingWave, please ensure the * Create the AMQ Streams topic from which you want to ingest data. * Ensure that your RisingWave cluster is running. -For example, we create a topic named `financial-transactions` with the following sample data from various financial transactions data, formatted as JSON. Each sample represents a unique transaction with distinct transaction IDs, sender and receiver accounts, amounts, currencies, and timestamps. Hence AMQ Streams is compatible with Apache Kafka. For more information, refer to [Apache Kafka](https://docs.risingwave.com/docs/current/ingest-from-kafka/). +For example, we create a topic named `financial-transactions` with the following sample data from various financial transactions data, formatted as JSON. Each sample represents a unique transaction with distinct transaction IDs, sender and receiver accounts, amounts, currencies, and timestamps. Hence AMQ Streams is compatible with Apache Kafka. 
For more information, refer to [Apache Kafka](https://docs.risingwave.com/integrations/sources/kafka). ```bash {"tx_id": "TX1004", "sender_account": "ACC1004", "receiver_account": "ACC2004", "amount": 2000.00, "currency": "USD", "tx_timestamp": "2024-03-29T12:36:00Z"} diff --git a/integrations/sources/redpanda.mdx b/integrations/sources/redpanda.mdx index f1cf34cd..5910ab81 100644 --- a/integrations/sources/redpanda.mdx +++ b/integrations/sources/redpanda.mdx @@ -5,4 +5,4 @@ mode: wide sidebarTitle: Redpanda --- -For the syntax, settings, and examples, see [Ingest data from Kafka](/docs/current/ingest-from-kafka/). +For the syntax, settings, and examples, see [Ingest data from Kafka](/integrations/sources/kafka). diff --git a/integrations/sources/sql-server-cdc.mdx b/integrations/sources/sql-server-cdc.mdx index 6647339b..93333c9d 100644 --- a/integrations/sources/sql-server-cdc.mdx +++ b/integrations/sources/sql-server-cdc.mdx @@ -223,7 +223,7 @@ To check the progress of backfilling historical data, find the corresponding int ## Data type mapping -The following table shows the corresponding data type in RisingWave that should be specified when creating a CDC table. For details on native RisingWave data types, see [Overview of data types](/docs/current/sql-data-types/). +The following table shows the corresponding data type in RisingWave that should be specified when creating a CDC table. For details on native RisingWave data types, see [Overview of data types](/sql/data-types/overview). RisingWave data types marked with an asterisk indicate that while there is no corresponding RisingWave data type, the ingested data can still be consumed as the listed type. diff --git a/integrations/visualization/overview.mdx b/integrations/visualization/overview.mdx index 4babcf58..00495113 100644 --- a/integrations/visualization/overview.mdx +++ b/integrations/visualization/overview.mdx @@ -4,4 +4,4 @@ description: "You can use a variety of visualization tools, such as [Apache Supe mode: wide --- -If the visualization tool you are using is not listed in the official RisingWave documentation, you may attempt to connect to RisingWave directly using the PostgreSQL driver. You can let us know your interest in a particular system by clicking the thumb-up button on [Integrations](/docs/current/rw-integration-summary/). +If the visualization tool you are using is not listed in the official RisingWave documentation, you may attempt to connect to RisingWave directly using the PostgreSQL driver. You can let us know your interest in a particular system by clicking the thumb-up button on [Integrations](/integrations/overview). diff --git a/operate/alter-streaming.mdx b/operate/alter-streaming.mdx index b556d19d..bb289b4e 100644 --- a/operate/alter-streaming.mdx +++ b/operate/alter-streaming.mdx @@ -5,7 +5,7 @@ description: "This document explains how to modify the logic in streaming pipeli ## Alter a table or source -To add or drop columns from a table or source, simply use the [ALTER TABLE](/docs/current/sql-alter-table/) or [ALTER SOURCE](/docs/current/sql-alter-source/) command. For example: +To add or drop columns from a table or source, simply use the [ALTER TABLE](/sql/commands/sql-alter-table) or [ALTER SOURCE](/sql/commands/sql-alter-source) command. 
For example: ```sql ALTER TABLE customers ADD COLUMN birth_date date; diff --git a/operate/manage-a-large-number-of-streaming-jobs.mdx b/operate/manage-a-large-number-of-streaming-jobs.mdx index 7809f40b..9ce034e8 100644 --- a/operate/manage-a-large-number-of-streaming-jobs.mdx +++ b/operate/manage-a-large-number-of-streaming-jobs.mdx @@ -25,7 +25,7 @@ The adaptive parallelism feature in version 1.7.0 ensures that every streaming j ### Limit the concurrency of creating stream jobs -If you want to create multiple streaming jobs at once using scripts or tools such as DBT, the [system parameter](/docs/current/view-configure-system-parameters/) `max_concurrent_creating_streaming_jobs` is helpful. It controls the maximum number of streaming jobs created concurrently. However, please do not set it too high, as it may introduce excessive pressure on the cluster. +If you want to create multiple streaming jobs at once using scripts or tools such as DBT, the [system parameter](/operate/view-configure-system-parameters) `max_concurrent_creating_streaming_jobs` is helpful. It controls the maximum number of streaming jobs created concurrently. However, please do not set it too high, as it may introduce excessive pressure on the cluster. ## Tuning an existing cluster @@ -39,7 +39,7 @@ If the number exceeds 50000, please pay close attention and check the following ### Decrease the parallelism -When the total number of actors in the cluster is large, excessive parallelism can be counterproductive. After v1.7.0, you can check the parallelism number of the running streaming jobs in the system table `rw_fragment_parallelism`, and you can alter the streaming jobs's parallelism with the `ALTER` statement. For more information, refer to [Cluster scaling](/docs/current/k8s-cluster-scaling/). +When the total number of actors in the cluster is large, excessive parallelism can be counterproductive. After v1.7.0, you can check the parallelism number of the running streaming jobs in the system table `rw_fragment_parallelism`, and you can alter the streaming jobs's parallelism with the `ALTER` statement. For more information, refer to [Cluster scaling](/deploy/k8s-cluster-scaling). Here is an example of how to adjust the parallelism. diff --git a/operate/meta-backup.mdx b/operate/meta-backup.mdx index 8c087959..a6ebb728 100644 --- a/operate/meta-backup.mdx +++ b/operate/meta-backup.mdx @@ -13,7 +13,7 @@ Before you can create a meta snapshot, you need to set the `backup_storage_url` Be careful not to set the `backup_storage_url` and `backup_storage_directory` when there are snapshots. However, it's not strictly forbidden. If you insist on doing so, please note the snapshots taken before the setting will all be invalidated and cannot be used in restoration anymore. -To learn about how to configure system parameters, see [How to configure system parameters](/docs/current/view-configure-system-parameters/#how-to-configure-system-parameters). +To learn about how to configure system parameters, see [How to configure system parameters](/operate/view-configure-system-parameters#how-to-configure-system-parameters). 
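As a sketch of what configuring those two parameters can look like before taking a snapshot, the statements below reuse the example storage values that appear in the node-specific configuration section of this changeset; substitute your own object store URL and directory.

```sql
-- Illustrative values only, copied from the node-specific configuration example above.
ALTER SYSTEM SET backup_storage_url = 'minio://hummockadmin:hummockadmin@127.0.0.1:9301/hummock001';
ALTER SYSTEM SET backup_storage_directory = 'hummock_001/backup';
```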
## Create a meta snapshot diff --git a/processing/time-travel-queries.mdx b/processing/time-travel-queries.mdx index c69b2b3f..87afaf18 100644 --- a/processing/time-travel-queries.mdx +++ b/processing/time-travel-queries.mdx @@ -17,9 +17,9 @@ This feature is in the public preview stage, meaning it's nearing the final prod ## Prerequisites -Time travel requires the meta store type to be [SQL-compatible](/docs/current/risingwave-docker-compose/introduction#customize-meta-store). We recommend reserving at least 50 GB of disk space for the meta store. +Time travel requires the meta store type to be [SQL-compatible](/deploy/risingwave-docker-compose#customize-meta-store). We recommend reserving at least 50 GB of disk space for the meta store. -The system parameter `time_travel_retention_ms` controls time travel functionality. By default, it's set to `0`, which means time travel is turned off. To enable time travel, you need to [alter this system parameter](/docs/current/view-configure-system-parameters/#how-to-configure-system-parameters) to a non-zero value. +The system parameter `time_travel_retention_ms` controls time travel functionality. By default, it's set to `0`, which means time travel is turned off. To enable time travel, you need to [alter this system parameter](/operate/view-configure-system-parameters#how-to-configure-system-parameters) to a non-zero value. For example, you can set `time_travel_retention_ms` to `86400000` (1 day). Then historical data older than this period will be deleted and no longer accessible. diff --git a/processing/watermarks.mdx b/processing/watermarks.mdx index 52b2e4d7..d57cc80d 100644 --- a/processing/watermarks.mdx +++ b/processing/watermarks.mdx @@ -47,7 +47,7 @@ WATERMARK FOR time_col as time_col ```sql WATERMARK FOR time_col as time_col - INTERVAL 'string' time_unit ``` -Supported `time_unit` values include: second, minute, hour, day, month, and year. For more details, see the `interval` data type under [Overview of data types](/docs/current/sql-data-types/). +Supported `time_unit` values include: second, minute, hour, day, month, and year. For more details, see the `interval` data type under [Overview of data types](/sql/data-types/overview). Currently, RisingWave only supports using one of the columns from the table as the watermark column. To use nested fields (e.g., fields in `STRUCT`), or perform expression evaluation on the input rows (e.g., casting data types), please refer to [generated columns](/docs/current/query-syntax-generated-columns/). diff --git a/reference/key-concepts.mdx b/reference/key-concepts.mdx index d23adf71..08292f80 100644 --- a/reference/key-concepts.mdx +++ b/reference/key-concepts.mdx @@ -43,7 +43,7 @@ A node is a logical collection of IT resources that handles specific workloads b ### Parallelism[](#parallelism "Direct link to Parallelism") -Parallelism refers to the technique of simultaneously executing multiple database operations or queries to improve performance and increase efficiency. It involves dividing a database workload into smaller tasks and executing them concurrently on multiple processors or machines. In RisingWave, you can set the parallelism of streaming jobs, like [tables](/docs/current/sql-alter-table/#set-parallelism), [materialized views](/docs/current/sql-alter-materialized-view/#set-parallelism), and [sinks](/docs/current/sql-alter-sink/#set-parallelism). 
+Parallelism refers to the technique of simultaneously executing multiple database operations or queries to improve performance and increase efficiency. It involves dividing a database workload into smaller tasks and executing them concurrently on multiple processors or machines. In RisingWave, you can set the parallelism of streaming jobs, like [tables](/sql/commands/sql-alter-table#set-parallelism), [materialized views](/docs/current/sql-alter-materialized-view/#set-parallelism), and [sinks](/docs/current/sql-alter-sink/#set-parallelism). ### Sinks[](#sinks "Direct link to Sinks") diff --git a/sql/commands/overview.mdx b/sql/commands/overview.mdx index 4c8f285d..1db8b32b 100644 --- a/sql/commands/overview.mdx +++ b/sql/commands/overview.mdx @@ -51,11 +51,11 @@ sidebarTitle: Overview title="ALTER SCHEMA" icon="diagram-project" iconType="solid" - href="/docs/current/sql-alter-schema/" + href="/sql/commands/sql-alter-schema" > Modify the properties of a schema. - Modify the properties of a sink. Modify the properties of a source. Modify a server configuration parameter. Modify the properties of a table. Modify the properties of a user. Modify the properties of a view. Convert stream into an append-only changelog. Start a transaction. Cancel specific streaming jobs. Add comments on tables or columns. Commit the current transaction. Create a user-defined aggregate function. Create a connection between VPCs. Create a new database. Create a user-defined function. Create an index on a column of a table or a materialized view to speed up data retrieval. Create a materialized view. Create a new schema. Create a secret to store credentials. Create a sink into RisingWave's table. Create a sink. Supported data sources and how to connect RisingWave to the sources. Create a table. Create a new user account. Create a non-materialized view. + Modify the properties of a sink. Modify the properties of a source. Modify a server configuration parameter. Modify the properties of a table. Modify the properties of a user. Modify the properties of a view. Convert stream into an append-only changelog. Start a transaction. Cancel specific streaming jobs. Add comments on tables or columns. Commit the current transaction. Create a user-defined aggregate function. Create a connection between VPCs. Create a new database. Create a user-defined function. Create an index on a column of a table or a materialized view to speed up data retrieval. Create a materialized view. Create a new schema. Create a secret to store credentials. Create a sink into RisingWave's table. Create a sink. Supported data sources and how to connect RisingWave to the sources. Create a table. Create a new user account. Create a non-materialized view. Remove rows from a table. Get information about the columns in a table, source, sink, view, or materialized view. Discard session state. Drop a user-defined aggregate function. Remove a connection. Remove a database. Drop a user-defined function. Remove an index. Remove a materialized view. Remove a schema. Drop a secret. Remove a sink. Remove a source. Remove a table. Remove a user. Drop a view. Show the execution plan of a statement. Commit pending data changes and persists updated data to storage. Grant a user privileges. Insert new rows of data into a table. Trigger recovery manually. Revoke privileges from a user. Retrieve data from a table or a materialized view. Run Data Definition Language (DDL) operations in the background. Enable or disable implicit flushes after batch operations. 
Set time zone. Change a run-time parameter. Show the details of your RisingWave cluster. Show columns in a table, source, sink, view or materialized view. Show existing connections. Show the query used to create the specified index. Show the query used to create the specified materialized view. Show the query used to create the specified sink. Show the query used to create the specified source. Show the query used to create the specified table. Show the query used to create the specified view. Show all cursors in the current session. Show existing databases. Show all user-defined functions. Show existing indexes from a particular table. Show internal tables to learn about the existing internal states. Show all streaming jobs. Show existing materialized views. Show the details of the system parameters. Display system current workload. Show existing schemas. Shows all sinks. Show existing sources. Show all subscription cursors in the current session. Show existing tables. Show existing views. Start a transaction. Modify existing rows in a table. diff --git a/sql/commands/sql-alter-source.mdx b/sql/commands/sql-alter-source.mdx index 3a479ef9..05216969 100644 --- a/sql/commands/sql-alter-source.mdx +++ b/sql/commands/sql-alter-source.mdx @@ -34,7 +34,7 @@ ALTER SOURCE src1 ``` -* To alter columns in a source created with a schema registry, see [FORMAT and ENCODE options](/docs/current/sql-alter-source/#format-and-encode-options). +* To alter columns in a source created with a schema registry, see [FORMAT and ENCODE options](/sql/commands/sql-alter-source#format-and-encode-options). * You cannot add a primary key column to a source or table in RisingWave. To modify the primary key of a source or table, you need to recreate the table. * You cannot remove a column from a source in RisingWave. If you intend to remove a column from a source, you'll need to drop the source and create the source again. @@ -93,7 +93,7 @@ ALTER SOURCE test_source SET SCHEMA test_schema; ### `FORMAT and ENCODE options` -At present, combined with the `ALTER SOURCE` command, you can refresh the schema registry of a source by refilling the FORMAT and ENCODE options. For more details about these options, see [FORMAT and ENCODE parameters](/docs/current/formats-and-encode-parameters/). +At present, combined with the `ALTER SOURCE` command, you can refresh the schema registry of a source by refilling the FORMAT and ENCODE options. For more details about these options, see [FORMAT and ENCODE parameters](/ingestion/format-and-encode-parameters). ```sql ALTER SOURCE source_name FORMAT data_format ENCODE data_encode [ ( diff --git a/sql/commands/sql-alter-system.mdx b/sql/commands/sql-alter-system.mdx index 42dacf5e..057336d0 100644 --- a/sql/commands/sql-alter-system.mdx +++ b/sql/commands/sql-alter-system.mdx @@ -3,7 +3,7 @@ title: "ALTER SYSTEM" description: "The `ALTER SYSTEM` command modifies the value of a server configuration parameter." --- -You can use this command to configure some parameters, like the [system parameters](/docs/current/view-configure-system-parameters/#how-to-configure-system-parameters) and [runtime parameters](/docs/current/view-configure-runtime-parameters/#how-to-configure-runtime-parameters). +You can use this command to configure some parameters, like the [system parameters](/operate/view-configure-system-parameters#how-to-configure-system-parameters) and [runtime parameters](/docs/current/view-configure-runtime-parameters/#how-to-configure-runtime-parameters). 
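For a quick illustration (the parameter and value here are only examples; the general syntax follows below):

```sql
-- Example only: allow up to 4 streaming jobs to be created concurrently.
ALTER SYSTEM SET max_concurrent_creating_streaming_jobs TO 4;
```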
```sql Syntax ALTER SYSTEM SET configuration_parameter { TO | = } { value [, ...] | DEFAULT } diff --git a/sql/commands/sql-create-aggregate.mdx b/sql/commands/sql-create-aggregate.mdx index a6188e7e..acbcb127 100644 --- a/sql/commands/sql-create-aggregate.mdx +++ b/sql/commands/sql-create-aggregate.mdx @@ -1,6 +1,6 @@ --- title: "CREATE AGGREGATE" -description: "The `CREATE AGGREGATE` command can be used to create [user-defined aggregate functions](/docs/current/user-defined-functions/) (UDAFs). Currently, UDAFs are only supported in Python and JavaScript as embedded UDFs." +description: "The `CREATE AGGREGATE` command can be used to create [user-defined aggregate functions](/sql/udfs/user-defined-functions) (UDAFs). Currently, UDAFs are only supported in Python and JavaScript as embedded UDFs." --- ## Syntax diff --git a/sql/commands/sql-create-connection.mdx b/sql/commands/sql-create-connection.mdx index 0b5ebcd4..77ca441b 100644 --- a/sql/commands/sql-create-connection.mdx +++ b/sql/commands/sql-create-connection.mdx @@ -62,5 +62,5 @@ CREATE CONNECTION connection_name WITH ( ); ``` 7. Create a source or sink with AWS PrivateLink connection. - * Use the `CREATE SOURCE/TABLE` command to create a Kafka source with PrivateLink connection. For more details, see [Create source with AWS PrivateLink connection](/docs/current/ingest-from-kafka/#create-source-with-vpc-connection). + * Use the `CREATE SOURCE/TABLE` command to create a Kafka source with PrivateLink connection. For more details, see [Create source with AWS PrivateLink connection](/integrations/sources/kafka#create-source-with-privatelink-connection). * Use the `CREATE SINK` command to create a Kafka sink with PrivateLink connection. For more details, see [Create sink with AWS PrivateLink connection](/integrations/destinations/apache-kafka#create-sink-with-vpc-connection). diff --git a/sql/commands/sql-create-function.mdx b/sql/commands/sql-create-function.mdx index 97f86a2a..6ea8f64f 100644 --- a/sql/commands/sql-create-function.mdx +++ b/sql/commands/sql-create-function.mdx @@ -1,6 +1,6 @@ --- title: "CREATE FUNCTION" -description: "The `CREATE FUNCTION` command can be used to create [user-defined functions](/docs/current/user-defined-functions/) (UDFs)." +description: "The `CREATE FUNCTION` command can be used to create [user-defined functions](/sql/udfs/user-defined-functions) (UDFs)." --- There are three ways to create UDFs in RisingWave: UDFs as external functions, embedded UDFs and SQL UDFs. `CREATE FUNCTION` can be used for them with different syntax. diff --git a/sql/commands/sql-create-source.mdx b/sql/commands/sql-create-source.mdx index 577146ef..58afc3d8 100644 --- a/sql/commands/sql-create-source.mdx +++ b/sql/commands/sql-create-source.mdx @@ -3,7 +3,7 @@ title: "CREATE SOURCE" description: "A source is a resource that RisingWave can read data from. You can create a source in RisingWave using the `CREATE SOURCE` command." --- -For the full list of the sources we support, see [Supported sources](/docs/current/supported-sources-and-formats/#supported-sources). +For the full list of the sources we support, see [Supported sources](/ingestion/supported-sources-and-formats#supported-sources). If you choose to persist the data from the source in RisingWave, use the [CREATE TABLE](/sql/commands/sql-create-table) command with connector settings. Or if you need to create the primary key (which is required by some formats like FORMAT UPSERT/DEBEZIUM), you have to use `CREATE TABLE` too. 
For more details about the differences between sources and tables, see [here](/docs/current/data-ingestion/#table-with-connectors). @@ -67,8 +67,8 @@ The generated column is created in RisingWave and will not be accessed through t | _generation\_expression_ | The expression for the generated column. For details about generated columns, see [Generated columns](/docs/current/query-syntax-generated-columns/). | | _watermark\_clause_ | A clause that defines the watermark for a timestamp column. The syntax is WATERMARK FOR column\_name as expr. For details about watermarks, refer to [Watermarks](/docs/current/watermarks/). | | **INCLUDE** clause | Extract fields not included in the payload as separate columns. For more details on its usage, see [INCLUDE clause](/docs/current/include-clause/). | -| **WITH** clause | Specify the connector settings here if trying to store all the source data. See [Supported sources](/docs/current/supported-sources-and-formats/#supported-sources) for the full list of supported source as well as links to specific connector pages detailing the syntax for each source. | -| **FORMAT** and **ENCODE** options | Specify the data format and the encoding format of the source data. To learn about the supported data formats, see [Supported formats](/docs/current/supported-sources-and-formats/#supported-formats). | +| **WITH** clause | Specify the connector settings here if trying to store all the source data. See [Supported sources](/ingestion/supported-sources-and-formats#supported-sources) for the full list of supported source as well as links to specific connector pages detailing the syntax for each source. | +| **FORMAT** and **ENCODE** options | Specify the data format and the encoding format of the source data. To learn about the supported data formats, see [Supported formats](/ingestion/supported-sources-and-formats#supported-formats). | Please distinguish between the parameters set in the FORMAT and ENCODE options and those set in the WITH clause. Ensure that you place them correctly and avoid any misuse. @@ -197,7 +197,7 @@ Shared sources do not support `ALTER SOURCE`. Use non-shared sources if you requ title="ALTER SOURCE" icon="pen-to-square" iconType="solid" - href="/docs/current/sql-alter-source/" + href="/sql/commands/sql-alter-source" > Modify a source diff --git a/sql/commands/sql-drop-aggregate.mdx b/sql/commands/sql-drop-aggregate.mdx index 1c64c6d6..f025743a 100644 --- a/sql/commands/sql-drop-aggregate.mdx +++ b/sql/commands/sql-drop-aggregate.mdx @@ -1,6 +1,6 @@ --- title: "DROP AGGREGATE" -description: "Use the `DROP AGGREGATE` command to remove an existing [user-defined aggregate function (UDAF)](/docs/current/user-defined-functions/). The usage is similar to `DROP FUNCTION`, except that it's for aggregate functions." +description: "Use the `DROP AGGREGATE` command to remove an existing [user-defined aggregate function (UDAF)](/sql/udfs/user-defined-functions). The usage is similar to `DROP FUNCTION`, except that it's for aggregate functions." --- ## Syntax diff --git a/sql/commands/sql-drop-function.mdx b/sql/commands/sql-drop-function.mdx index b74b50e4..ae388af9 100644 --- a/sql/commands/sql-drop-function.mdx +++ b/sql/commands/sql-drop-function.mdx @@ -2,7 +2,7 @@ title: "DROP FUNCTION" --- -Use the `DROP FUNCTION` command to remove an existing [user-defined function (UDF)](/docs/current/user-defined-functions/). +Use the `DROP FUNCTION` command to remove an existing [user-defined function (UDF)](/sql/udfs/user-defined-functions). 
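As a minimal sketch (assuming a UDF named `f1` with a single overload already exists, matching the example further down this page):

```sql
-- Remove the user-defined function f1.
DROP FUNCTION f1;
```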
## Syntax @@ -85,7 +85,7 @@ DROP FUNCTION f1; title="User-defined functions" icon="code" iconType="solid" - href="/docs/current/user-defined-functions/" + href="/sql/udfs/user-defined-functions" > A step-by-step guide for using UDFs in RisingWave: installing the RisingWave UDF API, defining functions in a Python file, starting the UDF server, and declaring UDFs in RisingWave. diff --git a/sql/commands/sql-show-functions.mdx b/sql/commands/sql-show-functions.mdx index 47a1178d..51bae471 100644 --- a/sql/commands/sql-show-functions.mdx +++ b/sql/commands/sql-show-functions.mdx @@ -1,6 +1,6 @@ --- title: "SHOW FUNCTIONS" -description: "Run `SHOW FUNCTIONS` to get a list of existing [user-defined functions](/docs/current/user-defined-functions/). The returned information includes the name, argument types, return type, language, and server address of each function." +description: "Run `SHOW FUNCTIONS` to get a list of existing [user-defined functions](/sql/udfs/user-defined-functions). The returned information includes the name, argument types, return type, language, and server address of each function." --- ## Syntax diff --git a/sql/commands/sql-show-parameters.mdx b/sql/commands/sql-show-parameters.mdx index 7f911a75..cc0bd52a 100644 --- a/sql/commands/sql-show-parameters.mdx +++ b/sql/commands/sql-show-parameters.mdx @@ -1,6 +1,6 @@ --- title: "SHOW PARAMETERS" -description: "You can use the `SHOW PARAMETERS` command to view the [system parameters](/docs/current/view-configure-system-parameters/), along with their current values." +description: "You can use the `SHOW PARAMETERS` command to view the [system parameters](/operate/view-configure-system-parameters), along with their current values." --- ```bash Examples diff --git a/sql/query-syntax/value-exp.mdx b/sql/query-syntax/value-exp.mdx index e1bdfcf2..0254b550 100644 --- a/sql/query-syntax/value-exp.mdx +++ b/sql/query-syntax/value-exp.mdx @@ -28,7 +28,7 @@ The `DISTINCT` keyword, which is only available in the second form, cannot be us AGGREGATE:function_name ``` -where the `AGGREGATE:` prefix converts a [builtin array function](/docs/current/sql-function-array/) (e.g. `array_sum`) or an [user-defined function](/docs/current/user-defined-functions/), to an aggregate function. The function being converted must accept exactly one argument of an [array type](/docs/current/data-type-array/). After the conversion, a function like `foo ( array of T ) -> U` becomes an aggregate function like `AGGREGATE:foo ( T ) -> U`. +where the `AGGREGATE:` prefix converts a [builtin array function](/docs/current/sql-function-array/) (e.g. `array_sum`) or a [user-defined function](/sql/udfs/user-defined-functions) to an aggregate function. The function being converted must accept exactly one argument of an [array type](/sql/data-types/array-type). After the conversion, a function like `foo ( array of T ) -> U` becomes an aggregate function like `AGGREGATE:foo ( T ) -> U`. ## Window function calls diff --git a/sql/system-catalogs/rw-catalog.mdx b/sql/system-catalogs/rw-catalog.mdx index 5f8eab26..c8367745 100644 --- a/sql/system-catalogs/rw-catalog.mdx +++ b/sql/system-catalogs/rw-catalog.mdx @@ -104,7 +104,7 @@ SELECT name, initialized_at, created_at FROM rw_sources; | rw\_indexes | Contains information about indexes in the database, including their IDs, names, schema identifiers, definitions, and more. | | rw\_internal\_tables | Contains information about internal tables in the database.
Internal tables are tables that store intermediate results (also known as internal states) of queries. Equivalent to the [SHOW INTERNAL TABLES](/docs/current/sql-show-internal-tables/) command. | | rw\_materialized\_views | Contains information about materialized views in the database, including their unique IDs, names, schema IDs, owner IDs, definitions, append-only information, access control lists, initialization and creation timestamps, and the cluster version when the materialized view was initialized and created. | -| rw\_meta\_snapshot | Contains information about existing snapshots of the RisingWave meta service. You can use this relation to get IDs of meta snapshots and then restore the meta service from a snapshot. For details, see [Back up and restore meta service](/docs/current/meta-backup/). | +| rw\_meta\_snapshot | Contains information about existing snapshots of the RisingWave meta service. You can use this relation to get IDs of meta snapshots and then restore the meta service from a snapshot. For details, see [Back up and restore meta service](/operate/meta-backup). | | rw\_parallel\_units | Contains information about parallel worker units used for executing database operations, including their unique IDs, worker IDs, and primary keys. | | rw\_relation\_info | Contains low-level relation information about tables, sources, materialized views, and indexes that are available in the database. | | rw\_relations | Contains information about relations in the database, including their unique IDs, names, types, schema IDs, and owners. | diff --git a/troubleshoot/meta-failure.mdx b/troubleshoot/meta-failure.mdx index 5964c0f9..dec92e9c 100644 --- a/troubleshoot/meta-failure.mdx +++ b/troubleshoot/meta-failure.mdx @@ -16,10 +16,9 @@ The observed issue is most likely a result of ETCD experiencing fluctuations, wh ## Solutions -1. Check the [notes about disks for etcd in our documentation](/docs/current/hardware-requirements/#etcd). -2. Check etcd configures, whether `-auto-compaction-mode`, `-max-request-bytes` are set properly. -3. If only one meta node is deployed, you can set the parameter `meta_leader_lease_secs` to `86400` to avoid impact on leader election by the disk performance. For multi-node deployment, you can also increase the value of this parameter. -4. For better performance and stability of the cluster, it is recommended to use higher-performance disks and configure etcd correctly. +1. Check the etcd configuration to confirm that `-auto-compaction-mode` and `-max-request-bytes` are set properly. +2. If only one meta node is deployed, you can set the parameter `meta_leader_lease_secs` to `86400` to prevent disk performance from affecting leader election. For multi-node deployment, you can also increase the value of this parameter. +3. For better performance and stability of the cluster, it is recommended to use higher-performance disks and configure etcd correctly. ## Further explanation diff --git a/troubleshoot/node-failure.mdx b/troubleshoot/node-failure.mdx index bf68281d..9328382e 100644 --- a/troubleshoot/node-failure.mdx +++ b/troubleshoot/node-failure.mdx @@ -50,7 +50,7 @@ Since compaction is an append-only operation and does not modify files in place, RisingWave supports two types of metadata storage backends: etcd and relational databases (Postgres by default). -etcd is designed to be a highly available and consistent key-value storage solution.
However, after equipping etcd in the production environment for a while, we learned that etcd can be quite demanding for the quality of the disk it operates on. You can find more details about [etcd's hardware requirements](/docs/current/hardware-requirements/#etcd) in our documentation. +etcd is designed to be a highly available and consistent key-value storage solution. However, after operating etcd in production environments for a while, we learned that etcd places high demands on the quality of the disk it runs on. Therefore, we have decided to make RDS the default metadata storage backend starting from version v1.9.0 of RisingWave. Over time, we will gradually deprecate the support for etcd. This decision is based on the following factors: diff --git a/troubleshoot/overview.mdx b/troubleshoot/overview.mdx index 4dc578df..3bb46447 100644 --- a/troubleshoot/overview.mdx +++ b/troubleshoot/overview.mdx @@ -19,7 +19,7 @@ You can access RisingWave Dashboard at `http://localhost:5691` by default. You can monitor the performance metrics of a RisingWave cluster, including the usage of resources like CPU, memory, and network, and the status of different nodes. -RisingWave uses Prometheus for collecting data, and Grafana for visualization and alerting. This monitoring stack requires configuration. To configure the monitoring stack, follow the steps detailed in [Monitor a RisingWave cluster](/docs/current/monitor-risingwave-cluster/). +RisingWave uses Prometheus for collecting data, and Grafana for visualization and alerting. This monitoring stack requires configuration. To configure the monitoring stack, follow the steps detailed in [Monitor a RisingWave cluster](/operate/monitor-risingwave-cluster). After you complete the configuration, go to [http://localhost:3000](http://localhost:3000) to access Grafana from a local machine, or `http://<ip-address>:3000` to access Grafana from a different host, where `<ip-address>` is the IP address of the machine running the Grafana service. When prompted, enter the default credentials (username: `admin`; password: `prom-operator`). diff --git a/troubleshoot/troubleshoot-oom.mdx b/troubleshoot/troubleshoot-oom.mdx index c19116f3..630ba2e2 100644 --- a/troubleshoot/troubleshoot-oom.mdx +++ b/troubleshoot/troubleshoot-oom.mdx @@ -44,7 +44,7 @@ Barrier latency can be observed from Grafana dashboard - Barrier latency panel. Instead of solely addressing the memory problem, we recommend investigating why the barrier is getting stuck. This issue could be caused by heavy streaming jobs, sudden impact of input traffic, or even some temporary issues. -Please refer to [High latency](/docs/current/troubleshoot-high-latency/) for more details. +Please refer to [High latency](/troubleshoot/troubleshoot-high-latency) for more details. ## OOM during prefetching diff --git a/troubleshoot/troubleshoot-recovery-failure.mdx b/troubleshoot/troubleshoot-recovery-failure.mdx index d475f046..4085e5a1 100644 --- a/troubleshoot/troubleshoot-recovery-failure.mdx +++ b/troubleshoot/troubleshoot-recovery-failure.mdx @@ -19,7 +19,7 @@ It’s important to identify the root cause of the issue. Some common reasons fo How to identify: 1. When the meta node continues to enter the recovery state or when the actor keeps exiting during the recovery process. -2. Check if the CN node is continuously restarting due to OOM, refer to: [Out-of-memory](/docs/current/troubleshoot-oom/). +2. Check whether the CN node is continuously restarting due to OOM; refer to [Out-of-memory](/troubleshoot/troubleshoot-oom).
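On Kubernetes deployments, one quick way to spot such a restart loop is to watch the pod restart counts (the namespace below is illustrative):

```bash
# Watch the RESTARTS column for compute-node pods; "risingwave" is an example namespace.
kubectl -n risingwave get pods --watch
```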
Two solutions: @@ -27,7 +27,7 @@ Two solutions: 2. Decrease the parallelism of the running streaming jobs or drop problematic streaming jobs. 1. `alter system set pause_on_next_bootstrap to true;` 2. Reboot the meta service, then the cluster will enter safe mode after recovery. - 3. Drop the problematic streaming jobs or scale in them using `risectl` , refer to: [Cluster scaling](/docs/current/k8s-cluster-scaling/). + 3. Drop the problematic streaming jobs or scale them in using `risectl`. For details, refer to [Cluster scaling](/deploy/k8s-cluster-scaling). 4. Restart the meta node, or resume the cluster by: `risectl meta resume`. ### Unconventional CN scaling down