diff --git a/README.md b/README.md index d1a28883..85162329 100644 --- a/README.md +++ b/README.md @@ -74,7 +74,7 @@ Kafka Connector allows you to move high-volume data out of Kafka by leveraging the b To rapidly showcase the functioning of the Lightstreamer Kafka Connector, the [`examples/quickstart`](examples/quickstart/) folder hosts everything required to set up a quickstart app to display real-time market data received from Lightstreamer Server. The app is a modified version of the [Stock List](https://github.com/Lightstreamer/Lightstreamer-example-StockList-client-javascript?tab=readme-ov-file#basic-stock-list-demo---html-client) demo. -![quick-start-diagram](quickstart-diagram.png) +![quick-start-diagram](pictures/quickstart-diagram.png) As you can see from the diagram above, in this variant the stream of simulated market events is injected from Kafka to the web client through the Lightstreamer Kafka Connector. @@ -288,7 +288,7 @@ where you have to replace `API.key` and `secret` with the _API Key_ and _secret_ java -jar deploy/lightstreamer-kafka-connector-samples-producer-all-.jar --bootstrap-servers --topic stocks ``` - ![producer_video](producer.gif) + ![producer_video](pictures/producer.gif) #### Publishing with Confluent Cloud @@ -312,7 +312,7 @@ where you have to replace `API.key` and `secret` with the _API Key_ and _secret_ After starting the publisher, you should immediately see the real-time updates flowing from the consumer shell: - ![consumer_video](consumer.gif) + ![consumer_video](pictures/consumer.gif) ## Configuration @@ -762,9 +762,9 @@ In particular, Kafka Connector supports message validation for _Avro_ and _JSON_ Kafka Connector allows independent deserialization of keys and values, which means that: -- Key and value can have different formats. -- Message validation against the Confluent Schema Registry can be enabled separately for the Kafka key and Kafka value (through [`record.key.evaluator.schema.registry.enable` and `record.value.evaluator.schema.registry.enable`](#recordkeyevaluatorschemaregistryenable-and-recordvalueevaluatorschemaregistryenable)). -- Message validation against local schema files must be specified separately for key and value (through the [`record.key.evaluator.schema.path` and `record.value.evaluator.schema.path`](#recordkeyevaluatorschemapath-and-recordvalueevaluatorschemapath)) +- key and value can have different formats +- message validation against the Confluent Schema Registry can be enabled separately for the Kafka key and Kafka value (through [`record.key.evaluator.schema.registry.enable` and `record.value.evaluator.schema.registry.enable`](#recordkeyevaluatorschemaregistryenable-and-recordvalueevaluatorschemaregistryenable)) +- message validation against local schema files must be specified separately for key and value (through [`record.key.evaluator.schema.path` and `record.value.evaluator.schema.path`](#recordkeyevaluatorschemapath-and-recordvalueevaluatorschemapath)) > [!IMPORTANT] > For Avro, schema validation is mandatory; therefore, either a local schema file must be provided or the Confluent Schema Registry must be enabled. @@ -880,6 +880,8 @@ This configuration enables the implementation of various mapping scenarios, as s sample-item ``` + ![one-to-one](pictures/one-to-one.png) + This is the most straightforward scenario one may think of: every record published to the Kafka topic `sample-topic` will simply be routed to the Lightstreamer item `sample-item`.
Therefore, messages will be immediately broadcast as real-time updates to all clients subscribed to such an item. - _One To Many_ @@ -888,6 +890,8 @@ This configuration enables the implementation of various mapping scenarios, as s sample-item1,sample-item2,sample-item3 ``` + ![one-to-many](pictures/one-to-many.png) + Every record published to the Kafka topic `sample-topic` will be routed to the Lightstreamer items `sample-item1`, `sample-item2`, and `sample-item3`. This scenario may activate unicast and multicast messaging, as it is possible to specify which item can be subscribed to by which user or group of users. To do that, you need to provide a customized extension of the factory Metadata Adapter class (see the [example](examples/custom-kafka-connector-adapter/)), in which every subscription must be validated against the user identity. @@ -900,12 +904,16 @@ This configuration enables the implementation of various mapping scenarios, as s sample-item ``` + ![many-to-one](pictures/many-to-one.png) + With this scenario, every message published to the different topics (`sample-topic1`, `sample-topic2`, `sample-topic3`) is broadcast to all clients subscribed to a single item (`sample-item`). ##### Record Mapping (`field.`) To forward real-time updates to the Lightstreamer clients, a Kafka record must be mapped to Lightstreamer fields, which define the _schema_ of any Lightstreamer item. +![record-mapping](pictures/record-mapping.png) + To configure the mapping, you define the set of all subscribable fields through parameters with the prefix `field.`: ```xml @@ -923,11 +931,12 @@ To write an extraction expression, Kafka Connector provides the _Data Extraction - expressions must be enclosed within `#{...}` - expressions use _Extraction Keys_, a set of predefined constants that reference specific parts of the record structure: - - `#{KEY}`, the key - - `#{VALUE}`, the value - - `#{TIMESTAMP}`, the timestamp - - `#{OFFSET}`, the offset + - `#{KEY}`: the key + - `#{VALUE}`: the value + - `#{TOPIC}`: the topic + - `#{TIMESTAMP}`: the timestamp - - `#{PARTITION}`, the partition + - `#{PARTITION}`: the partition + - `#{OFFSET}`: the offset - the _dot notation_ is used to access attributes or fields of record keys and record values serialized in JSON or Avro formats: @@ -996,6 +1005,8 @@ Besides mapping topics to statically predefined items, Kafka Connector allows yo which specify the rule needed to decide if a message can be forwarded to the clients, thus enabling a _filtered routing_. The item template leverages the Data Extraction Language to extract data from Kafka records and match them against the _parameterized_ subscribed items. +![filtered-routing](pictures/filtered-routing.png) + To configure an item template, use the parameter `item-template.`: ```xml @@ -1044,9 +1055,9 @@ Finally, the message will be mapped and routed only in case the subscribed item Consider the following configuration: ```xml -user-#{firstName=VALUE.name,lastName=VALUE.surname} -user-#{age=VALUE.age} -item-template.user-by-name,item-template.user-by-age +user-#{firstName=VALUE.name,lastName=VALUE.surname} +user-#{age=VALUE.age} +item-template.by-name,item-template.by-age ``` which specifies how to route records published from the topic `user` to item templates defined to extract some personal data.
@@ -1073,10 +1084,10 @@ Now, let's see how filtered routing works for the following incoming Kafka recor } ``` - | Template | Expansion | Matched Subscribed Item | Routed to Client | - | -------------- | -------------------------------------- | ----------------------- | -----------------| - | `user-by-name` | `user-[firstName=James,lastName=Kirk]` | _SA1_ | _Client A_ | - | `user-by-age` | `user-[age=37]` | _SC1_ | _Client C_ | + | Template | Expansion | Matched Subscribed Item | Routed to Client | + | ----------| -------------------------------------- | ----------------------- | -----------------| + | `by-name` | `user-[firstName=James,lastName=Kirk]` | _SA1_ | _Client A_ | + | `by-age` | `user-[age=37]` | _SC1_ | _Client C_ | - Record 2: @@ -1090,10 +1101,10 @@ Now, let's see how filtered routing works for the following incoming Kafka recor } ``` - | Template | Expansion | Matched Subscribed Item | Routed to Client | - | -------------- | --------------------------------------------- | ----------------------- | -----------------| - | `user-by-name` | `user-[firstName=Montgomery,lastName=Scotty]` | _SB1_ | _Client B_ | - | `user-by-age` | `user-[age=45]` | _SA2_ | _Client A_ | + | Template | Expansion | Matched Subscribed Item | Routed to Client | + | --------- | --------------------------------------------- | ----------------------- | -----------------| + | `by-name` | `user-[firstName=Montgomery,lastName=Scotty]` | _SB1_ | _Client B_ | + | `by-age` | `user-[age=45]` | _SA2_ | _Client A_ | - Record 3: ```js @@ -1106,10 +1117,10 @@ Now, let's see how filtered routing works for the following incoming Kafka recor } ``` - | Template | Expansion | Matched Subscribed Item | Routed to Client | - | -------------- | ---------------------------------------- | ----------------------- | -----------------| - | `user-by-name` | `user-[firstName=Nyota,lastName=Uhura]` | _None_ | _None_ | - | `user-by-age` | `user-[age=37]` | _SC1_ | _Client C_ | + | Template | Expansion | Matched Subscribed Item | Routed to Client | + | ----------| --------------------------------------- | ----------------------- | -----------------| + | `by-name` | `user-[firstName=Nyota,lastName=Uhura]` | _None_ | _None_ | + | `by-age` | `user-[age=37]` | _SC1_ | _Client C_ | diff --git a/consumer.gif b/pictures/consumer.gif similarity index 100% rename from consumer.gif rename to pictures/consumer.gif diff --git a/pictures/filtered-routing.png b/pictures/filtered-routing.png new file mode 100644 index 00000000..323ba9a6 Binary files /dev/null and b/pictures/filtered-routing.png differ diff --git a/pictures/many-to-one.png b/pictures/many-to-one.png new file mode 100644 index 00000000..724eaaa6 Binary files /dev/null and b/pictures/many-to-one.png differ diff --git a/pictures/one-to-many.png b/pictures/one-to-many.png new file mode 100644 index 00000000..11859df0 Binary files /dev/null and b/pictures/one-to-many.png differ diff --git a/pictures/one-to-one.png b/pictures/one-to-one.png new file mode 100644 index 00000000..3b0b3cfa Binary files /dev/null and b/pictures/one-to-one.png differ diff --git a/producer.gif b/pictures/producer.gif similarity index 100% rename from producer.gif rename to pictures/producer.gif diff --git a/pictures/quickstart-diagram.png b/pictures/quickstart-diagram.png new file mode 100644 index 00000000..eace15c8 Binary files /dev/null and b/pictures/quickstart-diagram.png differ diff --git a/pictures/record-mapping.png b/pictures/record-mapping.png new file mode 100644 index 00000000..947a6835 Binary files 
/dev/null and b/pictures/record-mapping.png differ diff --git a/quickstart-diagram.png b/quickstart-diagram.png deleted file mode 100644 index bd47d3d3..00000000 Binary files a/quickstart-diagram.png and /dev/null differ
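
For context on the configuration this diff touches, here is a minimal sketch of how the renamed item templates, the topic routing, and a few record-to-field mappings could fit together. It assumes the `<param name="...">value</param>` element syntax of Lightstreamer adapter configuration files and the `map.<topic>.to` parameter form; the `field.*` names chosen below (`firstName`, `lastName`, `age`, `topic`, `ts`) are illustrative only and do not come from the repository.

```xml
<!-- Sketch only: the <param> element syntax and the map.<topic>.to form are assumptions
     based on Lightstreamer adapter configuration conventions; field names are illustrative. -->

<!-- Item templates renamed in this change (user-by-name/user-by-age -> by-name/by-age) -->
<param name="item-template.by-name">user-#{firstName=VALUE.name,lastName=VALUE.surname}</param>
<param name="item-template.by-age">user-#{age=VALUE.age}</param>

<!-- Route every record published to the "user" topic through both templates -->
<param name="map.user.to">item-template.by-name,item-template.by-age</param>

<!-- Field mappings built with the Extraction Keys listed in the diff -->
<param name="field.firstName">#{VALUE.name}</param>
<param name="field.lastName">#{VALUE.surname}</param>
<param name="field.age">#{VALUE.age}</param>
<param name="field.topic">#{TOPIC}</param>
<param name="field.ts">#{TIMESTAMP}</param>
```

Under this sketch, a record with `name=James`, `surname=Kirk`, and `age=37` published to the `user` topic would expand `by-name` to `user-[firstName=James,lastName=Kirk]` and `by-age` to `user-[age=37]`, consistent with the routing tables shown in the last hunk of the diff.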