This installer uses a Bash script that has been tested only on macOS. It will likely also work on Linux, or in a Linux VM on Windows.
It has only been tested and documented for the following setup:
- Kubernetes 1.15+ cluster
- Running in AWS
- But not an Amazon EKS cluster, which does not play very well with OpenID Connect (OIDC).
- Other providers may work with small adjustments but have not been tested.
- In particular, the StorageClasses provided by Moondog Engine are AWS-specific.
- OIDC auth provided by GitHub teams
- Other providers (Active Directory, Google Apps, etc.) will probably work with minimal effort, but again, they have not yet been tested.
- Amazon S3 for various backups
We plan to support alternatives as quickly as possible and will update these docs accordingly.
If you're reading this, you almost certainly have kubectl already, but in case you don't: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Install Helm if you do not already have it: https://helm.sh/docs/intro/install/
Install kubeseal if you do not already have it: https://github.com/bitnami-labs/sealed-secrets#homebrew
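If you're on macOS with Homebrew, all three tools can be installed in one shot (a sketch; formula names are current as of this writing):
brew install kubectl helm kubeseal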
- Create an Amazon S3 bucket that Moondog Engine apps will use for storing backups of Kubernetes resources and volumes.
- Create an Amazon S3 bucket that Moondog Engine apps will use for storing backups of KubeDB databases.
- Create IAM credentials that can write to each S3 bucket you created above. (See the example commands after this list.)
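If you manage AWS from the command line, creating the buckets and backup credentials might look like the following sketch. The bucket names, IAM user name, and policy file (which should grant read/write access to both buckets) are all hypothetical:
aws s3 mb s3://my-cluster-resource-backups --region us-east-1
aws s3 mb s3://my-cluster-kubedb-backups --region us-east-1
aws iam create-user --user-name moondog-backups
aws iam put-user-policy --user-name moondog-backups --policy-name moondog-backups --policy-document file://backup-policy.json
aws iam create-access-key --user-name moondog-backups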
Kubernetes resources will be automatically created and updated when you commit yaml files to this repo.
You should have a private repository and credentials to access it. (For GitHub, generate a personal access token: https://github.com/settings/tokens/new)
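Once the repo and credentials are in place, deploying a change is a normal Git workflow, for example (the file path here is hypothetical):
git add apps/my-app.yaml
git commit -m "Add my-app"
git push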
https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/
[TODO: MORE INSTRUCTIONS HERE ABOUT CALLBACK URL AND WHATNOT]
git clone https://github.com/revelrylabs/moondog-engine.git
cd moondog-engine
config/
  pki/
    ca.crt
    etcd/
      ca.crt
      healthcheck-client.crt
      healthcheck-client.key
  values.yaml
config/pki contains certificate and key files. config/values.yaml contains your configuration values.
You will need to download some files from your Kubernetes masters and put them in their respective directories in ./config/pki.
If you used kubeadm, more information is here: https://kubernetes.io/docs/setup/best-practices/certificates/#where-certificates-are-stored
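With a kubeadm cluster, for example, copying them from a master over SSH might look like this (user and host are hypothetical; /etc/kubernetes/pki is kubeadm's default certificate directory):
scp admin@master1:/etc/kubernetes/pki/ca.crt ./config/pki/ca.crt
scp admin@master1:/etc/kubernetes/pki/etcd/ca.crt ./config/pki/etcd/ca.crt
scp admin@master1:/etc/kubernetes/pki/etcd/healthcheck-client.crt ./config/pki/etcd/healthcheck-client.crt
scp admin@master1:/etc/kubernetes/pki/etcd/healthcheck-client.key ./config/pki/etcd/healthcheck-client.key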
For the most part, you should be able to copy values.yaml to config/values.yaml and then edit it, following the instructions and comments contained therein.
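That is (assuming your preferred editor is set in $EDITOR):
cp values.yaml config/values.yaml
$EDITOR config/values.yaml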
For a more detailed description and usage guide for each configuration section, refer to the full docs.
./install.sh
Read any notes that are printed after successful installation, and follow any additional instructions.
Some of the objects installed require DNS in order to start up and work correctly. After running the installer script, query your nginx service to discover the DNS address of your AWS Elastic Load Balancer (ELB) and add it as a wildcard record for your domain, since this is where all of the cluster's web traffic will go.
If you're using AWS for your DNS and installed the nginx ingress controller using the default settings, this can be done by invoking the wildcard script with your cluster's base domain as an argument:
./bin/wildcard.sh my-cluster.example.com
Otherwise, to make the DNS entry manually, start by finding the service with:
❯❯❯ kubectl get svc -n md-nginx-ingress
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
md-nginx-ingress-controller LoadBalancer 10.96.228.202 somelongdns.us-east-1.elb.amazonaws.com 80:31638/TCP,443:31495/TCP
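You can also pull the ELB hostname straight out of the service with kubectl's jsonpath output (assuming the default names shown above):
kubectl get svc md-nginx-ingress-controller -n md-nginx-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'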
Then, for example, in your DNS create a CNAME for *.foo.example.com pointing to somelongdns.us-east-1.elb.amazonaws.com.
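If your zone is hosted in Route 53, the record can be created with the AWS CLI (the hosted zone ID here is hypothetical):
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE --change-batch '{
  "Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
    "Name": "*.foo.example.com", "Type": "CNAME", "TTL": 300,
    "ResourceRecords": [{"Value": "somelongdns.us-east-1.elb.amazonaws.com"}]}}]
}'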
This will allow traffic to reach the cluster and complete the installation process. At this point, certificates should begin to be issued via cert-manager. Wait a bit and then check if you can reach one of the web apps.
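To watch issuance progress, you can list cert-manager's Certificate resources (cert-manager is installed by Moondog Engine, so its Certificate CRD should be available):
kubectl get certificates --all-namespaces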
To delete everything that Moondog Engine installed and start over:
./uninstall.sh
Hopefully the installer will exit with a helpful error. If not, the line it exits on is your best clue as to where to look to debug further. Ensure the helmreleases deployed correctly by running
helm list --all-namespaces
which should show a status of deployed for all services.
If any of the components appear to be missing, you can describe the helmrelease to try to see what went wrong with its installation. Failing that, you can look at the logs of pods in the individual namespaces the installer creates for each helmrelease.
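For example (the release and namespace names here are hypothetical, following the md- prefix shown above):
kubectl get helmreleases --all-namespaces
kubectl describe helmrelease md-nginx-ingress -n md-nginx-ingress
kubectl get pods -n md-nginx-ingress
kubectl logs -n md-nginx-ingress <pod-name>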
If all else fails, feel free to open an issue and we will try to help you!