Simplify deployment process #10

Open · wants to merge 5 commits into base: master
39 changes: 27 additions & 12 deletions README.md
@@ -55,26 +55,41 @@ user = [email protected]
password = somepassword
```

4. Enable IP geolocation by installing the [GeoIP Update software](https://github.com/maxmind/geoipupdate), and edit `docker-compose.yml` to give the containers access to the MaxMind databases on your host system.
```
volumes:
- ./parsedmarc/parsedmarc.ini:/etc/parsedmarc.ini:z
- /path/to/GeoIP:/usr/share/GeoIP
```
4. Geolocation data

Create an account and generate a license key at
[MaxMind](https://www.maxmind.com/en/accounts/current/license-key), then
update `GEOIPUPDATE_ACCOUNT_ID` and `GEOIPUPDATE_LICENSE_KEY` in
`docker-compose.yml` (see the sketch below).

For more information, refer to the [GeoIP Update
software](https://github.com/maxmind/geoipupdate) GitHub page.
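
A minimal sketch of the relevant `geoipupdate` service block. The variable names match the compose file in this PR; the account ID, license key, and frequency values below are placeholders to replace with your own:

```
geoipupdate:
  image: maxmindinc/geoipupdate
  environment:
    - GEOIPUPDATE_ACCOUNT_ID=123456                  # placeholder: your MaxMind account ID
    - GEOIPUPDATE_LICENSE_KEY=your_license_key_here  # placeholder: your MaxMind license key
    - 'GEOIPUPDATE_EDITION_IDS=GeoLite2-ASN GeoLite2-City GeoLite2-Country'
    - GEOIPUPDATE_FREQUENCY=72                       # hours between update checks
  volumes:
    - geoipupdate_data:/usr/share/GeoIP
```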

5. Create `nginx/htpasswd` to provide Basic Authentication for Nginx.
Change `dnf` to your package manager and `anyusername` to whatever username you need.
At the end you will be prompted to enter a password on the console.
```
dnf install -y httpd-tools
htpasswd -c nginx/htpasswd anyusername
docker-compose run nginx htpasswd -c /etc/nginx-secrets/htpasswd anyusername
```

6. Generate your SSL keypair and put `kibana.crt` and `kibana.key` into the `nginx/ssl` folder.
You will be prompted for a password.

6. Generate SSL certificates

The following example leverages the Cloudflare API, but you can
similarly use any of the other options provided by acme.sh. If you do,
don't forget to create a pull request with the verified steps :).

Update `docker-compose.yml` with your Cloudflare credentials (a placeholder sketch follows the commands below), and then run:

```
docker-compose run acme.sh --issue -d parsedmarc.your.domain --dns dns_cf
docker-compose run acme.sh --install-cert -d parsedmarc.your.domain --cert-file /installed_certs/kibana.crt --key-file /installed_certs/kibana.key
```
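
The acme.sh service reads the Cloudflare credentials from environment variables in `docker-compose.yml`. A minimal sketch with placeholder values (take the real token, account ID, and zone ID from your Cloudflare dashboard):

```
acme.sh:
  image: neilpang/acme.sh
  command: daemon
  environment:
    # Cloudflare API credentials -- placeholders, replace with your own values
    - CF_Token=your_cloudflare_api_token
    - CF_Account_ID=your_cloudflare_account_id
    - CF_Zone_ID=your_cloudflare_zone_id
```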

There are too many possible solutions, like [Let's Encrypt](https://letsencrypt.org/docs/client-options/), a private PKI, or [self-hosted](https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-16-04) certificates.
From now on, the acme.sh container will take care of certificate
renewal. Nginx does not get restarted automatically, but it will
reload certificates once a week by means of a cron job; a manual renewal and reload example is shown below.

It is all up to you what to use. Note: for Let's Encrypt you need to modify the nginx configs to support it. You can use a local ACME client or modify the docker-compose image.
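
As a quick sanity check you can list, renew, and reload manually. This is a sketch based on standard acme.sh and nginx commands, not steps verified in this repository:

```
# list certificates managed by the acme.sh container
docker-compose run acme.sh --list

# force a renewal for the domain (normally the daemon handles this on schedule)
docker-compose run acme.sh --renew -d parsedmarc.your.domain --force

# tell the running nginx container to reload its configuration and certificates
docker-compose exec nginx nginx -s reload
```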

7. Create needed folders and configure permissions.
```
49 changes: 43 additions & 6 deletions docker-compose.yml
@@ -1,4 +1,4 @@
version: '2.4'
version: '3.4'

services:
parsedmarc:
@@ -10,14 +10,28 @@ services:
tty: true
volumes:
- ./parsedmarc/parsedmarc.ini:/etc/parsedmarc.ini:z
#- /path/to/GeoIP:/usr/share/GeoIP
- geoipupdate_data:/usr/share/GeoIP
restart: unless-stopped
networks:
- parsedmarc-network
depends_on:
elasticsearch:
condition: service_healthy

geoipupdate:
container_name: geoipupdate
image: maxmindinc/geoipupdate
restart: unless-stopped
environment:
- GEOIPUPDATE_ACCOUNT_ID=
- GEOIPUPDATE_LICENSE_KEY=
- 'GEOIPUPDATE_EDITION_IDS=GeoLite2-ASN GeoLite2-City GeoLite2-Country'
- GEOIPUPDATE_FREQUENCY=72
networks:
- geoipupdate
volumes:
- geoipupdate_data:/usr/share/GeoIP

elasticsearch:
container_name: "elasticsearch"
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
@@ -31,7 +45,7 @@ services:
soft: -1
hard: -1
volumes:
- ./elasticsearch/data/:/usr/share/elasticsearch/data/:z
- elasticsearch_data:/usr/share/elasticsearch/data/
restart: "unless-stopped"
networks:
- parsedmarc-network
@@ -61,19 +75,42 @@ services:

nginx:
container_name: "nginx"
image: nginx:alpine
build:
context: .
dockerfile: nginx/Dockerfile
restart: unless-stopped
tty: true
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/conf.d/:/etc/nginx/conf.d/:z
- ./nginx/ssl/:/etc/nginx/ssl/:z
- ./nginx/htpasswd:/etc/nginx/htpasswd:z
- acme_certs:/etc/nginx/ssl/
- nginx_secrets:/etc/nginx-secrets
networks:
- parsedmarc-network

acme.sh:
image: neilpang/acme.sh
container_name: acme.sh
command: daemon
volumes:
- acmeout:/acme.sh
- acme_certs:/installed_certs
environment:

# CloudFlare
- CF_Token=
- CF_Account_ID=
- CF_Zone_ID=
networks:
parsedmarc-network:
driver: bridge
geoipupdate:

volumes:
acmeout:
elasticsearch_data:
geoipupdate_data:
acme_certs:
nginx_secrets:
5 changes: 5 additions & 0 deletions nginx/Dockerfile
@@ -0,0 +1,5 @@
FROM nginx:alpine

# apache2-utils provides the htpasswd tool used to manage Basic Auth users;
# the weekly cron entry reloads nginx so renewed certificates are picked up
RUN apk add --no-cache --virtual .checksum-deps \
    apache2-utils && \
    echo "0 0 * * 0 nginx -s reload" | crontab -
2 changes: 1 addition & 1 deletion nginx/conf.d/kibana.conf
@@ -16,7 +16,7 @@ server {
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
auth_basic "Login required";
auth_basic_user_file /etc/nginx/htpasswd;
auth_basic_user_file /etc/nginx-secrets/htpasswd;

location / {
proxy_pass http://kibana:5601;
2 changes: 2 additions & 0 deletions parsedmarc/parsedmarc.ini
@@ -6,6 +6,8 @@ save_forensic = True
host = imap.example.com
user = [email protected]
password = somepassword

[mailbox]
watch = True

[elasticsearch]