
2021 Kafka Streams Player Matches Aggregator

This application receives match events from a Kafka topic and aggregates them into a per-player record containing the matches that player has played.

The example below is for a player who has played two matches:

{
  "gameId": "qohs1SNJwul3R0746IYOn",
  "playerId": "7S724N3jjIDdUQH3XxDMX",
  "matches": ["2YFdCqkRoTBQsTCLDNTyl", "FZNQcjGn_kg8c7JV6qHzJ"]
}
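Internally this is a standard Kafka Streams aggregation: events are grouped by key and folded into the per-player record above. The following is only a minimal sketch of what such a topology could look like; the topic names (match-events, player-matches), the store name, the keying scheme, and the POJO shapes are assumptions for illustration, not this repository's actual code.

import java.util.ArrayList;
import java.util.List;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.state.KeyValueStore;

import io.quarkus.kafka.client.serialization.ObjectMapperSerde;

public class PlayerMatchesTopology {

    // Assumed shape of an incoming match event.
    public static class MatchEvent {
        public String gameId;
        public String playerId;
        public String matchId;
    }

    // Aggregated per-player record, matching the JSON shown above.
    public static class PlayerMatches {
        public String gameId;
        public String playerId;
        public List<String> matches = new ArrayList<>();
    }

    public Topology buildTopology() {
        ObjectMapperSerde<MatchEvent> eventSerde = new ObjectMapperSerde<>(MatchEvent.class);
        ObjectMapperSerde<PlayerMatches> aggSerde = new ObjectMapperSerde<>(PlayerMatches.class);

        StreamsBuilder builder = new StreamsBuilder();
        builder
            // Assumed input topic; events are assumed to be keyed so that
            // all events for one player in one game share the same key.
            .stream("match-events", Consumed.with(Serdes.String(), eventSerde))
            .groupByKey()
            // Fold each event's matchId into the per-player record.
            .aggregate(
                PlayerMatches::new,
                (key, event, agg) -> {
                    agg.gameId = event.gameId;
                    agg.playerId = event.playerId;
                    agg.matches.add(event.matchId);
                    return agg;
                },
                Materialized.<String, PlayerMatches, KeyValueStore<Bytes, byte[]>>as("player-matches-store")
                    .withKeySerde(Serdes.String())
                    .withValueSerde(aggSerde))
            .toStream()
            // Assumed output topic for the aggregated records.
            .to("player-matches", Produced.with(Serdes.String(), aggSerde));

        return builder.build();
    }
}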

HTTP API for Player Data

First, you need to know the current/latest game ID. You can find it at the /game endpoint:

curl http://localhost:8080/game/

{"gameId":"c538ddcce22a3a58"}

NOTE: The /game endpoint can return a null value for gameId if Kafka Streams has not yet processed data.

To fetch data for a given player, pass the gameId and playerId as URL path parameters. A sample curl command and its response are shown below:

export GAME_ID=qohs1SNJwul3R0746IYOn
export PLAYER_ID=7S724N3jjIDdUQH3XxDMX
curl http://localhost:8080/game/$GAME_ID/player-matches/$PLAYER_ID

{"gameId":"qohs1SNJwul3R0746IYOn","playerId":"7S724N3jjIDdUQH3XxDMX","matches":["2YFdCqkRoTBQsTCLDNTyl"]}

Building

To build a container image using Docker:

./scripts/build.sh

Running

./scripts/build.sh

export KAFKA_SVC_USERNAME=username
export KAFKA_SVC_PASSWORD=password
export KAFKA_BOOTSTRAP_URL=hostname.kafka.devshift.org:443

./scripts/run.sh
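The run script presumably forwards these variables into the application's Kafka client configuration. As a rough illustration of what that mapping commonly looks like for a SASL-secured managed broker on port 443 (the exact mechanism and wiring in this repo may differ), a sketch:

import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.streams.StreamsConfig;

public class KafkaConfigFromEnv {

    public static Properties fromEnv() {
        Properties props = new Properties();
        // KAFKA_BOOTSTRAP_URL -> bootstrap.servers
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, System.getenv("KAFKA_BOOTSTRAP_URL"));
        // Managed Kafka endpoints on :443 are typically SASL over TLS;
        // SASL/PLAIN here is an assumption, not confirmed by this repo.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG, String.format(
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"%s\" password=\"%s\";",
                System.getenv("KAFKA_SVC_USERNAME"),
                System.getenv("KAFKA_SVC_PASSWORD")));
        return props;
    }
}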

Running Locally

For development purposes it can be handy to run the aggregator application directly on your local machine instead of via Docker. For that purpose, a separate Docker Compose file, docker-compose-local.yaml, is provided; it starts just Apache Kafka and ZooKeeper, configured to be accessible from your host system. Open this file in an editor and change the value of the KAFKA_ADVERTISED_LISTENERS variable so it contains your host machine's name or IP address. Then run:

docker-compose -f docker-compose-local.yaml up

mvn quarkus:dev -Dquarkus.http.port=8081 -f aggregator/pom.xml

Any changes to the aggregator application will be picked up instantly, and the stream processing application will be reloaded when the next Kafka message is processed.
