Processing Avro messages with embedded schema #86
If your case is to store data in a table like:
CREATE TABLE your_topic_name (
    id INT PRIMARY KEY NOT NULL,
    json_value JSON NOT NULL,
    jsonb_value JSONB NOT NULL,
    uuid_value UUID NOT NULL
)
then the Avro value schema should look like:

{
    "type": "record",
    "name": "pg_sql_types",
    "fields": [
        {"name": "id", "type": "int"},
        {"name": "json_value", "type": "string"},
        {"name": "jsonb_value", "type": "string"},
        {"name": "uuid_value", "type": "string"}
    ]
}

So in other words, the connector can save JSON, JSONB, and UUID values that are delivered as Avro strings.
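For example, a hypothetical record matching that schema (the values below are made up for illustration) would carry the JSON and UUID content as plain strings:

{
    "id": 1,
    "json_value": "{\"a\": 1}",
    "jsonb_value": "{\"b\": [2, 3]}",
    "uuid_value": "123e4567-e89b-12d3-a456-426614174000"
}

Presumably PostgreSQL then casts each string into the column's JSON, JSONB, or UUID type on insert.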
Thank you for the explanation. I have a couple more questions, please. I can either send my messages to Kafka as JSON with schema or as Avro with schema. Can you please let me know what converter settings I should use in the standalone kafka-connect file for these two cases? The options I am talking about are "key.converter" and "value.converter". If I am including the schema in each Avro message, do I need a schema registry with the JDBC sink connector? Thanks
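For reference, a minimal sketch of the two converter setups in a standalone worker properties file (choose one case, not both; the registry URL is a placeholder, and the Avro case assumes Confluent's AvroConverter, which uses the Schema Registry wire format and therefore needs a registry regardless of whether the producer embedded the schema in each message):

# Case 1: JSON with schema. No registry needed, but every message
# must wrap its data in "schema" and "payload" envelope fields.
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true

# Case 2: Avro. Confluent's AvroConverter resolves schemas through
# a Schema Registry (http://localhost:8081 is a placeholder URL).
key.converter=io.confluent.connect.avro.AvroConverter
value.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter.schema.registry.url=http://localhost:8081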
Sorry for the late answer.
Hello:
I converted my raw array of JSON messages to an Avro message with embedded schema. How can I configure the sink to process them?
Thanks,