Right now, when connecting via SSL, the user is expected to input 4 fields (example below):

- `properties.ssl.ca.location`
- `properties.ssl.certificate.location`
- `properties.ssl.key.location`
- `properties.ssl.key.password`
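For context, a minimal sketch of what this looks like today (the topic, broker address, and file paths are hypothetical):

```sql
-- A minimal sketch of the current file-based SSL setup; topic, broker
-- address, and paths are hypothetical. The three *.location configs
-- must point at files that already exist on the pod's file system.
CREATE SOURCE kafka_src (id INT, v VARCHAR)
WITH (
    connector = 'kafka',
    topic = 'my_topic',
    properties.bootstrap.server = 'broker:9093',
    properties.security.protocol = 'SSL',
    properties.ssl.ca.location = '/mnt/certs/ca.pem',
    properties.ssl.certificate.location = '/mnt/certs/client.pem',
    properties.ssl.key.location = '/mnt/certs/client.key',
    properties.ssl.key.password = 'key_password'
) FORMAT PLAIN ENCODE JSON;
```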
When RW is deployed in a cloud environment, the files first need to be mounted into the pod's file system, and only then can the user specify the paths in the 3 `location` configs.
Since we already support `CREATE SECRET`, it is easier if the user can just copy & paste the content of these files into RW as a secret. The Cloud user can still upload a file, but the upload is just a widget on the Cloud UI that copies & pastes the content. The change is that there is no longer a need to mount the file to the pod.
To support this, the Kafka API, i.e. librdkafka, needs to support configs that read in credential content directly.
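With secrets, the user could paste the PEM content directly. A rough sketch, assuming the existing `CREATE SECRET` syntax with the `meta` backend (the secret name is hypothetical):

```sql
-- Store the pasted PEM content as a secret; no file mount needed.
-- The secret name is hypothetical.
CREATE SECRET kafka_ca WITH (backend = 'meta') AS '-----BEGIN CERTIFICATE-----
...certificate body pasted here...
-----END CERTIFICATE-----';
```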
Upvote for the option changes, but they mostly matter for on-prem users. Cloud Kafka vendors seldom require custom certs to connect to brokers, so we need to be aware of that case.
I will keep the current ones and add the `*.pem` ones for all three options, which is what other DB/ETL SaaS products offer. We will also expose these fields in RW Cloud.
I will not add the ones without an extension; they take the form of special DER-encoded binary bytes.
By checking https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md, we can see that each of the first three options has a corresponding config for reading in credential content directly:
- `properties.ssl.ca.location` -> `ssl.ca.pem` / `ssl_ca`
- `properties.ssl.certificate.location` -> `ssl.certificate.pem` / `ssl_certificate`
- `properties.ssl.key.location` -> `ssl.key.pem` / `ssl_key`
This makes cloud deployment and usage easier. A sketch of the resulting configuration follows.
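Assuming RW exposes the new options under the same `properties.` prefix (the exact RW-side field names and the `secret` reference syntax shown here are assumptions, not final), the mounted files could be replaced like so:

```sql
-- Sketch of the proposed secret-based SSL configuration. Secret names
-- are hypothetical; the properties.ssl.*.pem fields map onto
-- librdkafka's ssl.ca.pem / ssl.certificate.pem / ssl.key.pem.
CREATE SOURCE kafka_src (id INT, v VARCHAR)
WITH (
    connector = 'kafka',
    topic = 'my_topic',
    properties.bootstrap.server = 'broker:9093',
    properties.security.protocol = 'SSL',
    properties.ssl.ca.pem = secret kafka_ca,
    properties.ssl.certificate.pem = secret kafka_cert,
    properties.ssl.key.pem = secret kafka_key,
    properties.ssl.key.password = secret kafka_key_password
) FORMAT PLAIN ENCODE JSON;
```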