| Bro IDS | Elasticsearch + Kibana | RabbitMQ |
|---------|------------------------|----------|
| 2.5     | 2.4 + 4.6              | 3.5.7    |
Integrates Bro IDS 2.5 (built from git) with Elasticsearch 2.4 and Kibana 4.6. Bro is compiled with broker, rocksdb and pybroker (full featured), and can write directly into Elasticsearch without Logstash. The Bro scripts have been modified to produce output that Elasticsearch accepts. The example below uses three Elasticsearch nodes: the bro-xinetd container writes to the master, Kibana reads from node02, and the command-line Bro uses node01. AMQP (RabbitMQ) consume/publish roles are included, built on the Debian amqp-tools.
The simplest way to start all nodes is with docker-compose. DOCKERHOST is the IP and port the user sees in Kibana; the port in the compose file is 8080.
export DOCKERHOST="<ip>:8080"
wget https://raw.githubusercontent.com/danielguerra69/bro-debian-elasticsearch/master/docker-compose.yml
docker-compose pull
docker-compose up
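A quick sanity check that the stack came up (a sketch; it strips the :8080 port from DOCKERHOST to reach Kibana on its default port 5601):
docker-compose ps   # all services should show State "Up"
curl -s -o /dev/null -w '%{http_code}\n' "http://${DOCKERHOST%%:*}:5601/"   # expect HTTP 200 from Kibana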
This compose file starts a /role/xinetd-forensic container, which currently supports pcap and extracted-file access from Kibana. It listens on port 1969 for pcap files.
nc <dockerhost-ip> 1969 < my.pcap
tcpdump -i eth0 -s 0 -w - not host <dockerhost-ip> | nc <dockerhost-ip> 1969
View Kibana in your browser at http://<dockerhost-ip>:5601/.
The pcap and extracted data can be reached over TCP port 8080.
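As a rough check from the command line (the exact URL layout is an assumption; browse the root to explore):
curl -s "http://$DOCKERHOST/" | head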
The develop tag is the full version, with all tools and sources used to build this project; the sources are in /tmp.
docker pull danielguerra/bro-debian-elasticsearch:develop
Before you begin, I recommend pulling fresh images.
docker pull danielguerra/bro-debian-elasticsearch
docker pull elasticsearch   # or pin the tested version, elasticsearch:2.4
docker pull kibana          # or pin the tested version, kibana:4.6
docker pull rabbitmq:3.5.6-management
Create empty Elasticsearch data volumes (optional; if you skip this, remove --volumes-from ... from the commands below).
docker create -v /usr/share/elasticsearch/data --name elastic-data-master danielguerra/empty-elastic-data /bin/true
docker create -v /usr/share/elasticsearch/data --name elastic-data-node01 danielguerra/empty-elastic-data /bin/true
docker create -v /usr/share/elasticsearch/data --name elastic-data-node02 danielguerra/empty-elastic-data /bin/true
Run three Elasticsearch nodes (minimal):
docker run -d --volumes-from elastic-data-master --hostname=elasticsearch-master --name elasticsearch-master elasticsearch -Des.network.bind_host=elasticsearch-master --cluster.name=bro --node.name=elasticsearch-master --discovery.zen.ping.multicast.enabled=false --network.host=elasticsearch-master
docker run -d --volumes-from elastic-data-node01 --hostname=elasticsearch-node01 --name elasticsearch-node01 --link=elasticsearch-master:master elasticsearch -Des.network.bind_host=elasticsearch-node01 --cluster.name=bro --node.name=elasticsearch-node01 --discovery.zen.ping.unicast.hosts=master:9300 --network.host=elasticsearch-node01
docker run -d --volumes-from elastic-data-node02 --hostname=elasticsearch-node02 --name elasticsearch-node02 --link=elasticsearch-master:master elasticsearch -Des.network.bind_host=elasticsearch-node02 --cluster.name=bro --node.name=elasticsearch-node02 --discovery.zen.ping.unicast.hosts=master:9300 --network.host=elasticsearch-node02
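To verify the cluster formed, query the health API through a linked throwaway container (a sketch; it assumes curl is available inside the bro image):
# expect status green/yellow and number_of_nodes: 3
docker run --rm --link elasticsearch-master:elasticsearch danielguerra/bro-debian-elasticsearch curl -s 'http://elasticsearch:9200/_cluster/health?pretty'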
Once the Elasticsearch cluster is running, start a command-line Bro and apply the Bro mapping:
docker run --link elasticsearch-master:elasticsearch --rm danielguerra/bro-debian-elasticsearch /scripts/bro-mapping.sh
Configure Kibana
docker run --rm --link elasticsearch-master:elasticsearch danielguerra/bro-kibana-config
Start Kibana
docker run -d -p 5601:5601 --link=elasticsearch-node02:elasticsearch --hostname=kibana --name kibana kibana
Point your browser to http://<dockerhost-ip>:5601
Command line with local file logging (mount your pcap directory; here /Users/PCAP):
docker run -ti -v /Users/PCAP:/pcap --name bro-log danielguerra/bro-debian-elasticsearch
Command line with logging to Elasticsearch:
docker run -ti --link elasticsearch-node01:elasticsearch -v /Users/PCAP:/pcap --name bro danielguerra/bro-debian-elasticsearch /role/cmd-elasticsearch
Read pcap files from the Bro command line:
bro -r /pcap/mydump.pcap
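To confirm events reached Elasticsearch, list the indices from inside the linked container (assumes curl is present in the image; _cat/indices is part of Elasticsearch 2.x):
curl -s 'http://elasticsearch:9200/_cat/indices?v'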
Bro develop version (all sources are in /tmp):
docker run -ti --link elasticsearch-node01:elasticsearch -v /Users/PCAP:/pcap --name bro danielguerra/bro-debian-elasticsearch:develop /role/cmd-elasticsearch
When /role/xinetd-elasticsearch is used, no local logs are written; all logs go to Elasticsearch:
docker run -d -p 1969:1969 --link elasticsearch-master:elasticsearch --name bro-xinetd --hostname bro-xinetd danielguerra/bro-debian-elasticsearch /role/xinetd-elasticsearch
tcpdump to your container from a remote host (replace dockerhost with your Docker host's IP):
tcpdump -i eth0 -s 0 -w /dev/stdout | nc dockerhost 1969
Or send a pcap file to your container:
nc dockerhost 1969 < mydump.pcap
When /role/xinetd-forensic is used, pcap and extracted files are available from Kibana.
docker run -d -p 1969:1969 -p 8080:80 --link elasticsearch-master:elasticsearch --name bro-xinetd-forensic --hostname bro-xinetd-forensic danielguerra/bro-debian-elasticsearch /role/xinetd-forensic
For Bro cluster nodes, or just for remote key-based authentication, create an empty SSH volume:
docker create -v /root/.ssh --name ssh-container danielguerra/ssh-container /bin/true
Generate your own keys on your own machine (the extra flags keep ssh-keygen non-interactive: -f sets the output file, -N '' sets an empty passphrase):
docker run --volumes-from ssh-container debian:jessie ssh-keygen -q -f /root/.ssh/id_rsa -N ''
Add your public key to the authorized_keys file:
docker run --volumes-from ssh-container debian:jessie cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
Create a copy in your current directory:
docker run --volumes-from ssh-container -v $(pwd):/backup debian:jessie cp -R /root/.ssh/* /backup
Start Bro as an SSH daemon (mount the keys created above and link the Elasticsearch master):
docker run -d -p 1922:22 --volumes-from ssh-container --link elasticsearch-master:elasticsearch --name bro-dev danielguerra/bro-debian-elasticsearch /role/sshd
ssh -p 1922 -i id_rsa root@dockerhost
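The same channel works for one-off commands, for example:
ssh -p 1922 -i id_rsa root@dockerhost bro --version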
Bro can be used with AMQP input, writing either to Elasticsearch or back to AMQP. First we need an AMQP broker, in this case RabbitMQ:
docker run -d -p 8080:15672 --name=rabbitmq --hostname=rabbitmq rabbitmq:3.5.6-management
docker inspect rabbitmq   # to get the IP
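A convenient one-liner to capture that IP in a shell variable (standard docker inspect templating, default bridge network):
AMQP_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' rabbitmq)
echo "$AMQP_IP"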
Now we can start a Bro xinetd service which outputs to RabbitMQ:
docker run -d -p 1970:1969 --name bro-xinetd-amqp --hostname bro-xinetd-amqp danielguerra/bro-debian-elasticsearch /role/xinetd-amqp
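Feed it pcaps just like the other xinetd roles, only on the remapped port:
nc <dockerhost-ip> 1970 < mydump.pcap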
Or a Bro that reads pcap files from AMQP and outputs to AMQP:
docker run -d --name=bro-amqp-amqp --hostname=bro-amqp-amqp danielguerra/bro-debian-elasticsearch /role/amqp-amqp <user> <pass> <ip> <queue> <user> <pass> <ip> <exchange>
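For illustration only, a hypothetical invocation using RabbitMQ's default guest account, the $AMQP_IP captured above, and made-up queue/exchange names:
# "pcap-in" and "bro-logs" are hypothetical names; guest/guest is RabbitMQ's default account
docker run -d --name=bro-amqp-amqp --hostname=bro-amqp-amqp danielguerra/bro-debian-elasticsearch /role/amqp-amqp guest guest "$AMQP_IP" pcap-in guest guest "$AMQP_IP" bro-logs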
And publish a pcap file from the bro-dev command line:
cat <pcap-file> | amqp-publish --url=amqp://<user>:<pass>@<amqp-ip> --exchange=<exchange>
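The counterpart from the same amqp-tools package, amqp-consume, lets you watch the output side (the queue name is whatever you configured above):
# consume messages from <queue> and print each message body via cat
amqp-consume --url=amqp://<user>:<pass>@<amqp-ip> -q <queue> cat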
Start a bro-xinetd, then run the following (replace <container-to-dump> with the name of the container you want to capture, and <bro-xinetd-ip> with the bro-xinetd container's IP):
docker run --rm --net=container:<container-to-dump> crccheck/tcpdump -i eth0 -w - | nc <bro-xinetd-ip> 1969 &
docker run --rm --net=container:<container-to-dump> danielguerra/bro-debian-elasticsearch:develop /role/dump-elasticsearch
Helper scripts:
- elastic-indices.sh: shows the Elasticsearch indices
- bro-mapping.sh: Bro mapping for Kibana, including the geo_point mapping
- remove-mapping.sh: removes the mapping
- clean-elastic.sh: cleans Bro data out of Elasticsearch
- update-intel.sh: updates the intel data for Bro
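These scripts live in /scripts inside the image (the same path used for bro-mapping.sh above); for example, to list the Bro indices:
docker run --rm --link elasticsearch-master:elasticsearch danielguerra/bro-debian-elasticsearch /scripts/elastic-indices.sh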