This project provides a scalable and fault-tolerant real-time data pipeline using Apache Kafka for ingesting, processing, and analyzing streaming data.
Before getting started, make sure you have the following installed on your machine:
- Docker
- Docker Compose
- Git
Clone this repository to your local machine:
```bash
git clone https://github.com/yourusername/Real-time-Data-Pipeline-with-Apache-Kafka.git
```
Navigate to the project directory:
```bash
cd Real-time-Data-Pipeline-with-Apache-Kafka
```
Start the Kafka cluster with Docker Compose:
```bash
docker-compose up
```
This command downloads the required Docker images, creates and starts the containers for ZooKeeper and the Kafka brokers, and sets up the cluster according to the configuration in the Docker Compose file.
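If you prefer not to keep a terminal attached, you can run the cluster in the background and check on it afterwards (the `kafka1` service name below is an assumption; use the service names from your own Compose file):

```bash
# Start the cluster in the background instead of the foreground
docker-compose up -d

# List the containers with their state and port mappings
docker-compose ps

# Tail the logs of one broker to confirm it started cleanly
docker-compose logs -f kafka1
```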
Verify the Kafka Cluster:
- Use Kafka command-line tools to interact with the Kafka cluster.
- Create topics, produce messages, and consume messages to verify the cluster's functionality.
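As a concrete sketch of that round trip, the commands below create a topic, produce one message, and read it back. The `kafka1` container name and topic name are assumptions based on a typical Compose file, and depending on the image the tools may instead be named `kafka-topics.sh`, `kafka-console-producer.sh`, and `kafka-console-consumer.sh`:

```bash
# Create a topic replicated across both brokers
docker exec kafka1 kafka-topics --create --topic test-topic \
  --bootstrap-server localhost:9092 --partitions 2 --replication-factor 2

# Produce a single message from stdin
echo "hello" | docker exec -i kafka1 kafka-console-producer \
  --topic test-topic --bootstrap-server localhost:9092

# Consume it back from the beginning of the topic
docker exec kafka1 kafka-console-consumer --topic test-topic \
  --bootstrap-server localhost:9092 --from-beginning --max-messages 1
```

If the consumer prints the message you produced, replication and both listeners are working.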
- The `docker-compose.yml` file defines the configuration for ZooKeeper and the Kafka brokers.
- Additional Kafka broker services can be added by following the same pattern as the existing services.
- ZooKeeper: `localhost:2181`
- Kafka Broker 1: `localhost:9092`
- Kafka Broker 2: `localhost:9093`
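For reference, a minimal two-broker Compose file matching the ports above might look like the sketch below. This is an illustration, not the project's actual file: the Confluent images, version tags, and listener names are assumptions, and your `docker-compose.yml` may differ.

```yaml
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.5.0
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka1:
    image: confluentinc/cp-kafka:7.5.0
    depends_on: [zookeeper]
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Separate listeners: brokers talk to each other on the Docker network,
      # clients on the host connect via localhost
      KAFKA_LISTENERS: INTERNAL://0.0.0.0:29092,EXTERNAL://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka1:29092,EXTERNAL://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      # With only two brokers, internal topics cannot use the default factor of 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
  kafka2:
    image: confluentinc/cp-kafka:7.5.0
    depends_on: [zookeeper]
    ports:
      - "9093:9093"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INTERNAL://0.0.0.0:29093,EXTERNAL://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka2:29093,EXTERNAL://localhost:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
```

Adding a third broker follows the same pattern: copy a broker service, bump the broker ID, and map a new host port.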
To stop and remove the Docker containers, use the following command:
```bash
docker-compose down
```
Feel free to contribute to this project by creating issues or submitting pull requests.
This project is licensed under the MIT License - see the LICENSE file for details.