Commit 5ad708a

save work

WanYixian committed Nov 25, 2024
1 parent 0107005 commit 5ad708a
Showing 51 changed files with 119 additions and 115 deletions.
2 changes: 1 addition & 1 deletion cloud/create-a-connection.mdx
@@ -63,4 +63,4 @@ We aim to automate this process in the future to make it even easier.

Now, you can create a source or sink with the PrivateLink connection using SQL.

-For details on how to use the VPC endpoint to create a source with the PrivateLink connection, see [Create source with PrivateLink connection](/docs/current/ingest-from-kafka/#create-source-with-privatelink-connection); for creating a sink, see [Create sink with PrivateLink connection](/docs/current/create-sink-kafka/#create-sink-with-privatelink-connection).
+For details on how to use the VPC endpoint to create a source with the PrivateLink connection, see [Create source with PrivateLink connection](/docs/current/ingest-from-kafka/#create-source-with-privatelink-connection); for creating a sink, see [Create sink with PrivateLink connection](/integrations/destinations/apache-kafka#create-sink-with-privatelink-connection).
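
Once the connection exists, the source or sink DDL references it through `privatelink.*` parameters. Below is a minimal sketch for a Kafka source; the endpoint IP, broker addresses, topic, and columns are placeholders, and the exact parameter set is documented in the linked guides.

```sql
-- Sketch only: adjust names and values for your deployment.
CREATE SOURCE kafka_pl_source (id INT, payload VARCHAR)
WITH (
    connector = 'kafka',
    topic = 'my_topic',
    properties.bootstrap.server = 'broker1.example.com:9092,broker2.example.com:9092',
    scan.startup.mode = 'latest',
    -- One target per broker, in the same order as the bootstrap servers.
    privatelink.targets = '[{"port": 9092}, {"port": 9092}]',
    privatelink.endpoint = '10.0.0.2'
) FORMAT PLAIN ENCODE JSON;
```
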
2 changes: 1 addition & 1 deletion cloud/manage-sinks.mdx
@@ -9,7 +9,7 @@ For the complete list of supported sink connectors and data formats, see [Data d

You can create a sink using SQL command to deliver processed data to an external target.

-Refer to [CREATE SINK](/docs/current/sql-create-sink/) in the RisingWave Database documentation.
+Refer to [CREATE SINK](/sql/commands/sql-create-sink) in the RisingWave Database documentation.
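
The statement follows this general shape (a sketch of the documented grammar, not a runnable example; connector-specific parameters go in the `WITH` clause, and the `FORMAT ... ENCODE ...` clause applies only to connectors that need it):

```sql
CREATE SINK [ IF NOT EXISTS ] sink_name
[ FROM sink_from | AS select_query ]
WITH (
    connector = '...',   -- e.g. 'kafka', 'jdbc', 'iceberg'
    ...                  -- connector-specific parameters
)
[ FORMAT ... ENCODE ... ];
```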

## Check a sink

30 changes: 15 additions & 15 deletions delivery/overview.mdx
@@ -4,50 +4,50 @@ description: "RisingWave supports delivering data to downstream systems via its
sidebarTitle: Overview
---

-To stream data out of RisingWave, you must create a sink. A sink is an external target that you can send data to. Use the [CREATE SINK](/docs/current/sql-create-sink/) statement to create a sink. You need to specify what data to be exported, the format, and the sink parameters.
+To stream data out of RisingWave, you must create a sink. A sink is an external target that you can send data to. Use the [CREATE SINK](/sql/commands/sql-create-sink) statement to create a sink. You need to specify what data to be exported, the format, and the sink parameters.
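
As a concrete illustration of those three parts, here is a sketch of a Kafka sink that exports an existing materialized view as JSON (the view name, broker address, and topic are placeholders):

```sql
CREATE SINK sales_summary_sink
FROM sales_summary                    -- what data to export
WITH (
    connector = 'kafka',              -- the sink parameters
    properties.bootstrap.server = 'localhost:9092',
    topic = 'sales_summary'
)
FORMAT PLAIN ENCODE JSON;             -- the output format
```
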

-Sinks become visible right after you create them, regardless of the backfilling status. Therefore, it's important to understand that the data in the sinks may not immediately reflect the latest state of their upstream sources due to the latency of the sink, connector, and backfilling process. To determine whether the process is complete and the data in the sink is consistent, refer to [Monitor statement progress](/docs/current/monitor-statement-progress/).
+Sinks become visible right after you create them, regardless of the backfilling status. Therefore, it's important to understand that the data in the sinks may not immediately reflect the latest state of their upstream sources due to the latency of the sink, connector, and backfilling process. To determine whether the process is complete and the data in the sink is consistent, refer to [Monitor statement progress](/operate/monitor-statement-progress).

Currently, RisingWave supports the following sink connectors:

* Apache Doris sink connector (`connector = 'doris'`)
-With this connector, you can sink data from RisingWave to Apache Doris. For details about the syntax and parameters, see [Sink data to Apache Doris](/docs/current/sink-to-doris/).
+With this connector, you can sink data from RisingWave to Apache Doris. For details about the syntax and parameters, see [Sink data to Apache Doris](/integrations/destinations/apache-doris).
* Apache Iceberg sink connector (`connector = 'iceberg'`)
With this connector, you can sink data from RisingWave to Apache Iceberg. For details about the syntax and parameters, see [Sink data to Apache Iceberg](/integrations/destinations/apache-iceberg).
* AWS Kinesis sink connector (`connector = 'kinesis'`)
With this connector, you can sink data from RisingWave to AWS Kinesis. For details about the syntax and parameters, see [Sink data to AWS Kinesis](/integrations/destinations/aws-kinesis).
* Cassandra and ScyllaDB sink connector (`connector = 'cassandra'`)
With this connector, you can sink data from RisingWave to Cassandra or ScyllaDB. For details about the syntax and parameters, see [Sink data to Cassandra or ScyllaDB](/integrations/destinations/cassandra-or-scylladb).
* ClickHouse sink connector (`connector = 'clickhouse'`)
-With this connector, you can sink data from RisingWave to ClickHouse. For details about the syntax and parameters, see [Sink data to ClickHouse](/docs/current/sink-to-clickhouse/).
+With this connector, you can sink data from RisingWave to ClickHouse. For details about the syntax and parameters, see [Sink data to ClickHouse](/integrations/destinations/clickhouse).
* CockroachDB sink connector (`connector = 'jdbc'`)
-With this connector, you can sink data from RisingWave to CockroachDB. For details about the syntax and parameters, see [Sink data to CockroachDB](/docs/current/sink-to-cockroach/).
+With this connector, you can sink data from RisingWave to CockroachDB. For details about the syntax and parameters, see [Sink data to CockroachDB](/integrations/destinations/cockroachdb).
* Delta Lake sink connector (`connector = 'deltalake'`)
-With this connector, you can sink data from RisingWave to Delta Lake. For details about the syntax and parameters, see [Sink data to Delta Lake](/docs/current/sink-to-delta-lake/).
+With this connector, you can sink data from RisingWave to Delta Lake. For details about the syntax and parameters, see [Sink data to Delta Lake](/integrations/destinations/delta-lake).
* Elasticsearch sink connector (`connector = 'elasticsearch'`)
With this connector, you can sink data from RisingWave to Elasticsearch. For details about the syntax and parameters, see [Sink data to Elasticsearch](/integrations/destinations/elasticsearch).
* Google BigQuery sink connector (`connector = 'bigquery'`)
With this connector, you can sink data from RisingWave to Google BigQuery. For details about the syntax and parameters, see [Sink data to Google BigQuery](/integrations/destinations/bigquery).
* Google Pub/Sub sink connector (`connector = 'google_pubsub'`)
-With this connector, you can sink data from RisingWave to Google Pub/Sub. For details about the syntax and parameters, see [Sink data to Google Pub/Sub](/docs/current/sink-to-google-pubsub/).
+With this connector, you can sink data from RisingWave to Google Pub/Sub. For details about the syntax and parameters, see [Sink data to Google Pub/Sub](/integrations/destinations/google-pub-sub).
* JDBC sink connector for MySQL, PostgreSQL, or TiDB (`connector = 'jdbc'`)
-With this connector, you can sink data from RisingWave to JDBC-available databases, such as MySQL, PostgreSQL, or TiDB. When sinking to a database with a JDBC driver, ensure that the corresponding table created in RisingWave has the same schema as the table in the database you are sinking to. For details about the syntax and parameters, see [Sink to MySQL](/docs/current/sink-to-mysql-with-jdbc/), [Sink to PostgreSQL](/docs/current/sink-to-postgres/), or [Sink to TiDB](/docs/current/sink-to-tidb/).
+With this connector, you can sink data from RisingWave to JDBC-available databases, such as MySQL, PostgreSQL, or TiDB. When sinking to a database with a JDBC driver, ensure that the corresponding table created in RisingWave has the same schema as the table in the database you are sinking to. For details about the syntax and parameters, see [Sink to MySQL](/integrations/destinations/mysql), [Sink to PostgreSQL](/integrations/destinations/postgresql), or [Sink to TiDB](/integrations/destinations/tidb).
* Kafka sink connector (`connector = 'kafka'`)
-With this connector, you can sink data from RisingWave to Kafka topics. For details about the syntax and parameters, see [Sink data to Kafka](/docs/current/create-sink-kafka/).
+With this connector, you can sink data from RisingWave to Kafka topics. For details about the syntax and parameters, see [Sink data to Kafka](/integrations/destinations/apache-kafka).
* MQTT sink connector (`connector = 'mqtt'`)
-With this connector, you can sink data from RisingWave to MQTT topics. For details about the syntax and parameters, see [Sink data to MQTT](/docs/current/sink-to-mqtt/).
+With this connector, you can sink data from RisingWave to MQTT topics. For details about the syntax and parameters, see [Sink data to MQTT](/integrations/destinations/mqtt).
* NATS sink connector (`connector = 'nats'`)
With this connector, you can sink data from RisingWave to NATS. For details about the syntax and parameters, see [Sink data to NATS](/integrations/destinations/nats-and-nats-jetstream).
* Pulsar sink connector (`connector = 'pulsar'`)
With this connector, you can sink data from RisingWave to Pulsar. For details about the syntax and parameters, see [Sink data to Pulsar](/integrations/destinations/apache-pulsar).
* Redis sink connector (`connector = 'redis'`)
-With this connector, you can sink data from RisingWave to Redis. For details about the syntax and parameters, see [Sink data to Redis](/docs/current/sink-to-redis/).
+With this connector, you can sink data from RisingWave to Redis. For details about the syntax and parameters, see [Sink data to Redis](/integrations/destinations/redis).
* Snowflake sink connector (`connector = 'snowflake'`)
With this connector, you can sink data from RisingWave to Snowflake. For details about the syntax and parameters, see [Sink data to Snowflake](/integrations/destinations/snowflake).
* StarRocks sink connector (`connector = 'starrocks'`)
-With this connector, you can sink data from RisingWave to StarRocks. For details about the syntax and parameters, see [Sink data to StarRocks](/docs/current/sink-to-starrocks/).
+With this connector, you can sink data from RisingWave to StarRocks. For details about the syntax and parameters, see [Sink data to StarRocks](/integrations/destinations/starrocks).
* Microsoft SQL Server sink connector(`connector = 'sqlserver'`)
-With this connector, you can sink data from RisingWave to Microsoft SQL Server. For details about the syntax and parameters, see [Sink data to SQL Server](/docs/current/sink-to-sqlserver/).
+With this connector, you can sink data from RisingWave to Microsoft SQL Server. For details about the syntax and parameters, see [Sink data to SQL Server](/integrations/destinations/sql-server).
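
As the JDBC item in the list above notes, the destination table must already exist with a schema matching the RisingWave relation. A hedged sketch for PostgreSQL, where the URL, credentials, and object names are placeholders:

```sql
CREATE SINK postgres_sink
FROM user_activity
WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://localhost:5432/mydb?user=postgres&password=secret',
    table.name = 'user_activity',   -- must match the RisingWave schema
    type = 'upsert',
    primary_key = 'user_id'
);
```
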

## Sink decoupling

@@ -58,7 +58,7 @@ Sink decoupling introduces a buffering queue between a RisingWave sink and the d
<Note>
**PUBLIC PREVIEW**

-This feature is in the public preview stage, meaning it's nearing the final product but is not yet fully stable. If you encounter any issues or have feedback, please contact us through our [Slack channel](https://www.risingwave.com/slack). Your input is valuable in helping us improve the feature. For more information, see our [Public preview feature list](/product-lifecycle/#features-in-the-public-preview-stage).
+This feature is in the public preview stage, meaning it's nearing the final product but is not yet fully stable. If you encounter any issues or have feedback, please contact us through our [Slack channel](https://www.risingwave.com/slack). Your input is valuable in helping us improve the feature. For more information, see our [Public preview feature list](/changelog/product-lifecycle#features-in-the-public-preview-stage).
</Note>

The `sink_decouple` session variable can be specified to enable or disable sink decoupling. The default value for the session variable is `default`.
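
For example (session-level settings; `default` defers to the connector's own behavior):

```sql
SET sink_decouple = true;     -- force-enable sink decoupling
SET sink_decouple = false;    -- force-disable it
SET sink_decouple = default;  -- fall back to the connector default
```
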
@@ -105,7 +105,7 @@ When creating an `upsert` sink, note whether or not you need to specify the prim
<Note>
**PUBLIC PREVIEW**

-Sink data in parquet encode is in the public preview stage, meaning it's nearing the final product but is not yet fully stable. If you encounter any issues or have feedback, please contact us through our [Slack channel](https://www.risingwave.com/slack). Your input is valuable in helping us improve the feature. For more information, see our [Public preview feature list](/product-lifecycle/#features-in-the-public-preview-stage).
+Sink data in parquet encode is in the public preview stage, meaning it's nearing the final product but is not yet fully stable. If you encounter any issues or have feedback, please contact us through our [Slack channel](https://www.risingwave.com/slack). Your input is valuable in helping us improve the feature. For more information, see our [Public preview feature list](/changelog/product-lifecycle#features-in-the-public-preview-stage).
</Note>

RisingWave supports sinking data in Parquet or JSON encode to file systems including S3, Google Cloud Storage (GCS), Azure Blob Storage, and WebHDFS. This eliminates the need for complex data lake setups. Once the data is saved, the files can be queried using the batch processing engine of RisingWave through the `file_scan` API. You can also leverage third-party OLAP query engines for further data processing.
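
A sketch of such a file sink writing Parquet to S3; the `s3.*` parameter names follow RisingWave's S3 connector conventions but should be checked against the file sink reference, and the bucket and credentials are placeholders:

```sql
CREATE SINK events_s3_sink
FROM events
WITH (
    connector = 's3',
    s3.region_name = 'us-east-1',
    s3.bucket_name = 'my-bucket',
    s3.path = 'events/',              -- prefix for the output files
    s3.credentials.access = 'xxx',
    s3.credentials.secret = 'xxx',
    type = 'append-only'
)
FORMAT PLAIN ENCODE PARQUET;
```
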
2 changes: 1 addition & 1 deletion delivery/risingwave-as-postgres-fdw.mdx
@@ -7,7 +7,7 @@ description: "A foreign data wrapper in PostgreSQL allows you to directly virtua
<Note>
**PUBLIC PREVIEW**

-This feature is in the public preview stage, meaning it's nearing the final product but is not yet fully stable. If you encounter any issues or have feedback, please contact us through our [Slack channel](https://www.risingwave.com/slack). Your input is valuable in helping us improve the feature. For more information, see our [Public preview feature list](/product-lifecycle/#features-in-the-public-preview-stage).
+This feature is in the public preview stage, meaning it's nearing the final product but is not yet fully stable. If you encounter any issues or have feedback, please contact us through our [Slack channel](https://www.risingwave.com/slack). Your input is valuable in helping us improve the feature. For more information, see our [Public preview feature list](/changelog/product-lifecycle#features-in-the-public-preview-stage).
</Note>

## Prerequisites
2 changes: 1 addition & 1 deletion delivery/subscription.mdx
@@ -10,7 +10,7 @@ This feature allows you to monitor all data changes without relying on external
<Note>
**PUBLIC PREVIEW**

-This feature is in the public preview stage, meaning it's nearing the final product but is not yet fully stable. If you encounter any issues or have feedback, please contact us through our [Slack channel](https://www.risingwave.com/slack). Your input is valuable in helping us improve the feature. For more information, see our [Public preview feature list](/product-lifecycle/#features-in-the-public-preview-stage).
+This feature is in the public preview stage, meaning it's nearing the final product but is not yet fully stable. If you encounter any issues or have feedback, please contact us through our [Slack channel](https://www.risingwave.com/slack). Your input is valuable in helping us improve the feature. For more information, see our [Public preview feature list](/changelog/product-lifecycle#features-in-the-public-preview-stage).
</Note>

## Manage subscription
4 changes: 2 additions & 2 deletions demos/clickstream-analysis.mdx
@@ -80,7 +80,7 @@ First, the `tumble()` function will map each event into a 10-minute window to cr

Next, the `hop()` function will create 24-hour time windows every 10 minutes. Each event will be mapped to corresponding windows. Finally, they will be grouped by `target_id` and `window_time` to calculate the total number of clicks of each thread within 24 hours.

-Please refer to [Time window functions](/docs/current/sql-function-time-window/) for an explanation of the tumble and hop functions and aggregations.
+Please refer to [Time window functions](/processing/sql/time-windows) for an explanation of the tumble and hop functions and aggregations.
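
Before the full view definition below, a minimal sketch of what the two functions return (the `events` table and its columns are hypothetical):

```sql
-- tumble(): each event lands in exactly one fixed 10-minute window.
SELECT target_id, window_start, window_end
FROM tumble(events, event_time, INTERVAL '10 MINUTES');

-- hop(): each event lands in every 24-hour window that covers it,
-- with a new window starting every 10 minutes.
SELECT target_id, window_start, window_end
FROM hop(events, event_time, INTERVAL '10 MINUTES', INTERVAL '24 HOURS');
```
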

```sql
CREATE MATERIALIZED VIEW thread_view_count AS WITH t AS (
@@ -142,7 +142,7 @@ The result may look like this:
(5 rows)
```

-We can also query results by specifying a time interval. To learn more about data and time functions and operators, see [Date and time](/docs/current/sql-function-datetime/).
+We can also query results by specifying a time interval. To learn more about data and time functions and operators, see [Date and time](/sql/functions/datetime).

```sql
SELECT * FROM thread_view_count
10 changes: 5 additions & 5 deletions demos/overview.mdx
@@ -11,29 +11,29 @@ Try out the following runnable demos in these different industries:
## Capital markets

<CardGroup>
<Card title="Market data enhancement and transformation" icon="magnifying-glass-chart" href="market-data-enrichment" iconType="solid" >
<Card title="Market data enhancement and transformation" icon="magnifying-glass-chart" href="/demos/market-data-enrichment" iconType="solid" >
Transform raw market data in real-time to provide insights into market trends, asset health, and trade opportunities.
</Card>
<Card title="Market and trade surveillance" icon="circle-exclamation" href="market-trade-surveillance" iconType="solid" >
<Card title="Market and trade surveillance" icon="circle-exclamation" href="/demos/market-trade-surveillance" iconType="solid" >
Detect suspicious patterns, compliance breaches, and anomalies from trading activities in real-time.
</Card>
</CardGroup>

## Sports betting

<CardGroup>
<Card title="Risk and profit analysis in sports betting" icon="football" iconType="solid" href="sports-risk-profit-analysis" >
<Card title="Risk and profit analysis in sports betting" icon="football" iconType="solid" href="/demos/sports-risk-profit-analysis" >
Manage your sports betting positions in real-time by using RisingWave to monitor exposure and risk.
</Card>
<Card title="User betting behavior analysis" icon="users-viewfinder" iconType="solid" href="betting-behavior-analysis" >
<Card title="User betting behavior analysis" icon="users-viewfinder" iconType="solid" href="/demos/betting-behavior-analysis" >
Identify high-risk and high-value users by analyzing and identifying trends in user betting patterns.
</Card>
</CardGroup>

## Logistics

<CardGroup>
<Card title="Inventory management and demand forecast" icon="boxes-stacked" iconType="solid" href="inventory-management-forecase" >
<Card title="Inventory management and demand forecast" icon="boxes-stacked" iconType="solid" href="/demos/inventory-management-forecast" >
Track inventory levels and forecast demand to prevent shortages and optimize restocking schedules.
</Card>
</CardGroup>
2 changes: 1 addition & 1 deletion demos/server-performance-anomaly-detection.mdx
@@ -95,7 +95,7 @@ In this tutorial, we will create a few different materialized views. The first v

First, we will create the materialized view that contains all relevant TCP values. We use the tumble function to map all events into 1-minute windows and calculate the average metric value for each device within each time window. Next, the average TCP and NIC metrics are calculated separately before joining on device names and time windows. We will keep the records measuring the volume of bytes transferred by the interface and where the average utilization is greater than or equal to 50.

-Please refer to this [guide](/docs/current/sql-function-time-window/) for an explanation of the tumble function and aggregations.
+Please refer to this [guide](/processing/sql/time-windows) for an explanation of the tumble function and aggregations.

```sql
CREATE MATERIALIZED VIEW high_util_tcp_metrics AS
2 changes: 1 addition & 1 deletion demos/use-risingwave-to-monitor-risingwave-metrics.mdx
@@ -92,7 +92,7 @@ We have connected RisingWave to the streams, but RisingWave has not started to c

## Step 3: Create a materialized view

-Now, create a materialized view that tracks the average metric values every 30 seconds. We will split the stream into 30 seconds windows and calculate the average metric value over each window. Here we use the [tumble window](/docs/current/sql-function-time-window/) functionality to support window slicing.
+Now, create a materialized view that tracks the average metric values every 30 seconds. We will split the stream into 30 seconds windows and calculate the average metric value over each window. Here we use the [tumble window](/processing/sql/time-windows) functionality to support window slicing.

```sql
CREATE MATERIALIZED VIEW metric_avg_30s AS