Deployment
(NOTE: This is a work in progress.)
The intent of this page is to describe what a production deployment of API Umbrella looks like.
A production deployment consists of three types of nodes:
- API Umbrella Router (including a gatekeeper and optionally a static website and/or admin portal)
- MongoDB nodes for storing configuration and operational data
- Elasticsearch nodes for storing usage analytics
The embedded MongoDB and Elasticsearch servers included in the all-in-one package should not be used in a production deployment. Instead, these two services should each be configured independently according to their respective best practices.
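For example, to give the routers something to fail over between, the MongoDB nodes would typically run as a replica set. A minimal sketch of the relevant `/etc/mongod.conf` excerpt, assuming a replica set named `api-umbrella` (the name here is arbitrary and hypothetical):

```yaml
# Hypothetical /etc/mongod.conf excerpt: join this node to a replica set.
# Apply on every MongoDB node, then run rs.initiate() from the mongo
# shell on one member to form the set.
replication:
  replSetName: "api-umbrella"
```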
(Are we recommending the all-in-one package for setting up the router in a production deploy?)
The Router is the core of the API Umbrella deployment. In production, there should be two or more routers, fronted by a load balancer. Each router is essentially stateless, although it does queue usage data locally before syncing it to MongoDB/Elasticsearch, so if a router is not shut down cleanly, there is the potential to lose up to about a minute of usage data.
In the current architecture, the router is made up of several moving parts:
- Various Nginx listeners, providing:
  - Routing between the gatekeeper and the admin web interface
  - Load balancing across internal gatekeeper listeners
  - Configurable external load balancing across API backends
- Gatekeeper Node.js server, which enforces API usage policy
- Redis data store, which temporarily stores local data to minimize network calls for each request
- Varnish cache for caching API results
However, for deployment purposes, they should all be treated as a single component. Each router has its own independent set of these subcomponents to minimize the network communication required for each request. These elements cannot and should not be split out onto their own nodes.
The `/etc/api-umbrella/api-umbrella.yml` configuration file for the router nodes should look something like the following:
```yaml
services:
  - router
  - web
web:
  admin:
    initial_superusers:
      - <insert email of admin here>
mongodb:
  url: "mongodb://<user>:<password>@<host1>:<port1>,<host2>:<port2>[,...]/<database>"
elasticsearch:
  hosts:
    - "http://<server1>:<port1>"
    - "http://<server2>:<port2>"
...
```
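After changing this configuration, API Umbrella needs to be restarted for the new settings to take effect. With a package-based install, something like the following should work (the exact service management command may vary by platform and install method):

```sh
sudo /etc/init.d/api-umbrella restart
```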
You will need a hardware or software load balancer to distribute traffic across your router nodes and fail over between them. Software options include HAProxy or, when running in AWS, Amazon Elastic Load Balancer (ELB).
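As an illustration, a minimal HAProxy sketch for balancing traffic across two router nodes might look like the following (the hostnames, ports, and backend names are placeholders; the health check assumes API Umbrella's `/api-umbrella/v1/health` endpoint is reachable on each router):

```
# Hypothetical HAProxy config: round-robin traffic across two router
# nodes, dropping any node whose health check fails.
frontend api_umbrella
    mode http
    bind *:80
    default_backend api_umbrella_routers

backend api_umbrella_routers
    mode http
    balance roundrobin
    option httpchk GET /api-umbrella/v1/health
    server router1 10.0.0.11:80 check
    server router2 10.0.0.12:80 check
```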