Undebate ELK Data Pipeline
We want to be able to visualize our log data so that we can make better decisions about where to focus our time and effort to most effectively improve this application.
So far @epg323 has made some progress toward this goal with a Node app that reads the log data directly from MongoDB and prints some results to the command line, which surfaced a few changes that needed to be made in the application itself.
This ticket is an attempt to create a deployable pipeline using the open-source ELK stack (Elasticsearch, Logstash, and Kibana) that will accomplish the same goal. We will use Logstash to pipe the data from Mongo to Elasticsearch, then visualize the data in Elasticsearch using Kibana.
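As a starting point, the Mongo-to-Elasticsearch leg might look like the sketch below. Note that Logstash has no official MongoDB input, so this assumes the community `logstash-input-mongodb` plugin; the connection string, collection name, and index pattern are placeholders, not settings from our app.

```conf
input {
  # Community plugin; install with: bin/logstash-plugin install logstash-input-mongodb
  mongodb {
    uri => "mongodb://localhost:27017/undebate"    # hypothetical connection string
    placeholder_db_dir => "/opt/logstash-mongodb"  # plugin's local bookkeeping dir
    placeholder_db_name => "logstash_sqlite.db"
    collection => "logs"                           # hypothetical collection name
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "undebate-logs-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }  # echo events while we debug the pipeline
}
```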
Our hope is to be able to deploy this pipeline with Docker on AWS. When we tear down the pipeline, we will store the historical log data in S3. Every time we deploy the pipeline, we will first read in historical data from S3, then read in the new data from Mongo. This way we have cheap storage for our historical data and don't have to worry about our short log history on Mongo.
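For the Docker side, a minimal single-node Compose file for the three ELK services could look like this (image tags and the pipeline volume path are assumptions to be pinned down, and a real AWS deployment would need memory/heap settings on top of this):

```yaml
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    environment:
      - discovery.type=single-node   # dev-style single node, no cluster
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.9.2
    volumes:
      - ./pipeline:/usr/share/logstash/pipeline  # mount our .conf files
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:7.9.2
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```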
Tasks Remaining
Use Logstash to pipe logs data from Mongo to Elasticsearch
Use Kibana to visualize logs data in Elasticsearch
Offload historical data from Elasticsearch to S3
Merge historical data from S3 with new data from Mongo without duplicating logs
Make the pipeline easily deployable with Docker on AWS
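For the no-duplicates merge above, one simple approach is to key every record on its original Mongo `_id` and keep only the first copy seen. A minimal sketch (function and field names are hypothetical, not part of the app yet):

```python
# Sketch of the S3 + Mongo merge step, deduplicating by Mongo's `_id`.
# Assumes each log record is a dict that still carries its original `_id`.

def merge_logs(historical, fresh, key="_id"):
    """Merge two iterables of log records, keeping the first record
    seen for each key so historical and fresh data never duplicate."""
    seen = set()
    merged = []
    for record in list(historical) + list(fresh):
        record_id = record[key]
        if record_id not in seen:
            seen.add(record_id)
            merged.append(record)
    return merged
```

In practice we could get the same effect for free by indexing into Elasticsearch with the document `_id` set to the Mongo `_id`, which makes re-indexing the overlap idempotent instead of duplicating it.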