
Update apache-kafka.mdx (#109)
Signed-off-by: Bohan Zhang <[email protected]>
tabVersion authored Dec 4, 2024
1 parent db39c55 commit 508dabd
Showing 1 changed file with 1 addition and 2 deletions.
3 changes: 1 addition & 2 deletions integrations/destinations/apache-kafka.mdx
@@ -83,7 +83,7 @@ These options should be set in `FORMAT data_format ENCODE data_encode (key = 'va
| force\_append\_only | If true, forces the sink to be `PLAIN` (also known as append-only), even if it cannot be. |
| timestamptz.handling.mode | Controls the timestamptz output format. This parameter specifically applies to append-only or upsert sinks using JSON encoding. <ul><li>If omitted, the output format of timestamptz is `2023-11-11T18:30:09.453000Z` which includes the UTC suffix `Z`.</li><li>When `utc_without_suffix` is specified, the format is changed to `2023-11-11 18:30:09.453000`.</li></ul> |
| schemas.enable | Only configurable for upsert JSON sinks. By default, this value is false for upsert JSON sinks and true for debezium JSON sinks. If true, RisingWave will sink the data with the schema to the Kafka sink. This is not referring to a schema registry containing a JSON schema, but rather schema formats defined using [Kafka Connect](https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/#json-schemas). |
-| key\_encode | Optional. When specified, the key encode can only be TEXT, and the primary key should be one and only one of the following types: `varchar`, `bool`, `smallint`, `int`, and `bigint`; When absent, both key and value will use the same setting of `ENCODE data_encode ( ... )`. |
+| key\_encode | Optional. When specified, the key encode can only be TEXT or BYTES. If set to TEXT, the primary key must be exactly one column of type `varchar`, `bool`, `smallint`, `int`, or `bigint`; if set to BYTES, the primary key must be exactly one column of type `bytea`. When absent, both key and value use the same setting of `ENCODE data_encode ( ... )`. |
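
For context on how the format parameters in this table are used, here is a minimal sketch of a `CREATE SINK` statement; the sink name `my_kafka_sink`, the materialized view `my_mv`, and the broker/topic values are placeholders, not part of this commit:

```sql
CREATE SINK IF NOT EXISTS my_kafka_sink FROM my_mv
WITH (
    connector = 'kafka',
    properties.bootstrap.server = 'broker1:9092',  -- placeholder broker address
    topic = 'my_topic'                             -- placeholder topic name
)
FORMAT PLAIN ENCODE JSON (
    -- Force an append-only (PLAIN) sink even if the upstream query may emit updates.
    force_append_only = 'true',
    -- Emit timestamptz values as '2023-11-11 18:30:09.453000' (no trailing 'Z').
    timestamptz.handling.mode = 'utc_without_suffix'
);
```

The `key_encode` option changed in this commit belongs to the same option list described above the table; with `TEXT` or `BYTES` it affects only the key encoding, subject to the primary key type restrictions listed in the table.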

### Avro specific parameters

@@ -241,7 +241,6 @@ To create a Kafka sink with a PrivateLink connection, in the WITH section of you
| :------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| privatelink.targets | The PrivateLink targets that correspond to the Kafka brokers. The targets should be in JSON format. Note that each target listed corresponds to each broker specified in the properties.bootstrap.server field. If the order is incorrect, there will be connectivity issues. |
| privatelink.endpoint | The DNS name of the VPC endpoint. If you're using RisingWave Cloud, you can find the auto-generated endpoint after you created a connection. See details in [Create a VPC connection](/cloud/create-a-connection/#whats-next). |
-| connection.name | The name of the connection, which comes from the connection created using the [CREATE CONNECTION](/sql/commands/sql-create-connection) statement. Omit this parameter if you have provisioned a VPC endpoint using privatelink.endpoint (recommended). |

Here is an example of creating a Kafka sink using a PrivateLink connection. Notice that `{"port": 8001}` corresponds to the broker `ip1:9092`, and `{"port": 8002}` corresponds to the broker `ip2:9092`.
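
The full statement is not shown in this hunk; below is a minimal sketch of what such a statement could look like, assuming a placeholder materialized view `my_mv`, topic `my_topic`, and endpoint address `10.0.0.1`:

```sql
CREATE SINK kafka_privatelink_sink FROM my_mv
WITH (
    connector = 'kafka',
    -- Brokers listed in the same order as the PrivateLink targets below.
    properties.bootstrap.server = 'ip1:9092,ip2:9092',
    topic = 'my_topic',                 -- placeholder topic name
    privatelink.endpoint = '10.0.0.1',  -- placeholder VPC endpoint DNS name or IP
    -- {"port": 8001} maps to ip1:9092; {"port": 8002} maps to ip2:9092.
    privatelink.targets = '[{"port": 8001}, {"port": 8002}]'
)
FORMAT PLAIN ENCODE JSON (
    force_append_only = 'true'
);
```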

