Introduced Release Candidate v3.0
CHANGES:
* Introduced HTTPClient
* Integrating DUH-RPC
* Updated tests with new client
* Fixed some lint issues
* GRPC is no more
* OTEL tracing for new client and handler
* Updated benchmarks
* Isolate the trace benchmark
thrawn01 committed Dec 7, 2023
1 parent 885519d commit e885db6
Showing 75 changed files with 2,551 additions and 4,069 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -10,3 +10,5 @@ coverage.out
coverage.html
/gubernator
/gubernator-cli
.run/
tmp/
2 changes: 1 addition & 1 deletion Makefile
@@ -37,7 +37,7 @@ clean:

.PHONY: proto
proto:
scripts/proto.sh
buf build

.PHONY: certs
certs:
50 changes: 14 additions & 36 deletions README.md
@@ -15,7 +15,6 @@ Gubernator is a distributed, high performance, cloud native and stateless rate-l
kubernetes or nomad trivial.
* Gubernator holds no state on disk. Its configuration is passed to it by the
client on a per-request basis.
* Gubernator provides both GRPC and HTTP access to the API.
* It can be run as a sidecar to services that need rate limiting or as a separate service.
* It can be used as a library to implement a domain-specific rate limiting service.
* Supports optional eventually consistent rate limit distribution for extremely
@@ -38,11 +37,13 @@ $ docker-compose up -d
```
Now you can make rate limit requests via cURL
```
# Hit the HTTP API at localhost:9080 (GRPC is at 9081)
$ curl http://localhost:9080/v1/HealthCheck
# Hit the HTTP API at localhost:9080
$ curl http://localhost:9080/v1/health.check
# TODO: Update this example
# Make a rate limit request
$ curl http://localhost:9080/v1/GetRateLimits \
$ curl http://localhost:9080/v1/rate-limits.check \
--header 'Content-Type: application/json' \
--data '{
"requests": [
@@ -59,7 +60,7 @@ $ curl http://localhost:9080/v1/GetRateLimits \

### ProtoBuf Structure

An example rate limit request sent via GRPC might look like the following
An example rate limit request sent with protobuf might look like the following
```yaml
rate_limits:
# Scopes the request to a specific rate limit
@@ -189,7 +190,7 @@ limiting service.

When you use the library, your service becomes a full member of the cluster
participating in the same consistent hashing and caching as a stand alone
Gubernator server would. All you need to do is provide the GRPC server instance
Gubernator server would. All you need to do is provide the server instance
and tell Gubernator where the peers in your cluster are located. The
`cmd/gubernator/main.go` is a great example of how to use Gubernator as a
library.
@@ -213,29 +214,22 @@ to support rate limit durations longer than a minute, day or month, calls to
those rate limits that have durations over a self determined limit.

### API
All methods are accessed via GRPC but are also exposed via HTTP using the
[GRPC Gateway](https://github.com/grpc-ecosystem/grpc-gateway)

#### Health Check
Health check returns `unhealthy` in the event a peer is reported by etcd or kubernetes
as `up` but the server instance is unable to contact that peer via its advertised address.

###### GRPC
```grpc
rpc HealthCheck (HealthCheckReq) returns (HealthCheckResp)
```

###### HTTP
```
GET /v1/HealthCheck
GET /v1/health.check
```
Example response:
```json
{
"status": "healthy",
"peer_count": 3
"peerCount": 3
}
```
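
The same check from Go, as a minimal sketch using only the standard library. The `healthCheckResponse` struct below is illustrative, not a type exported by Gubernator; in practice the generated types used by the new HTTPClient would be preferred.
```go
// Illustrative only: a plain net/http call to the health-check endpoint shown above.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// healthCheckResponse is a hypothetical struct matching the example response.
type healthCheckResponse struct {
	Status    string `json:"status"`
	PeerCount int    `json:"peerCount"`
}

func main() {
	resp, err := http.Get("http://localhost:9080/v1/health.check")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var hc healthCheckResponse
	if err := json.NewDecoder(resp.Body).Decode(&hc); err != nil {
		panic(err)
	}
	fmt.Printf("status=%s peers=%d\n", hc.Status, hc.PeerCount)
}
```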

@@ -244,14 +238,9 @@ Rate limits can be applied or retrieved using this interface. If the client
makes a request to the server with `hits: 0`, then the current state of the rate
limit is retrieved but not incremented.

###### GRPC
```grpc
rpc GetRateLimits (GetRateLimitsReq) returns (GetRateLimitsResp)
```

###### HTTP
```
POST /v1/GetRateLimits
POST /v1/rate-limit.check
```

Example Payload
@@ -289,20 +278,10 @@ Example response:
```
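
A minimal Go sketch of issuing the same request over plain `net/http`. The JSON field names (`name`, `uniqueKey`, `hits`, `limit`, `duration`) are assumptions based on the v2 request schema and may differ in the final v3 encoding.
```go
// Illustrative only: POST a rate-limit check to the endpoint shown above.
// The payload field names are assumptions based on the v2 request schema.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload := []byte(`{
	  "requests": [{
	    "name": "requests_per_sec",
	    "uniqueKey": "account:12345",
	    "hits": 1,
	    "limit": 10,
	    "duration": 1000
	  }]
	}`)

	resp, err := http.Post("http://localhost:9080/v1/rate-limit.check",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```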

### Deployment
NOTE: Gubernator uses `etcd`, Kubernetes or round-robin DNS to discover peers and
NOTE: Gubernator uses `memberlist`, Kubernetes or round-robin DNS to discover peers and
establish a cluster. If you don't have any of these, the docker-compose method is the
simplest way to try gubernator out.


##### Docker with existing etcd cluster
```bash
$ docker run -p 8081:81 -p 9080:80 -e GUBER_ETCD_ENDPOINTS=etcd1:2379,etcd2:2379 \
ghcr.io/mailgun/gubernator:latest

# Hit the HTTP API at localhost:9080
$ curl http://localhost:9080/v1/HealthCheck
```

##### Kubernetes
```bash
# Download the kubernetes deployment spec
@@ -321,14 +300,13 @@ you can use same fully-qualified domain name to both let your business logic con
instances to find `gubernator` and for `gubernator` containers/instances to find each other.

##### TLS
Gubernator supports TLS for both HTTP and GRPC connections. You can see an example with
self signed certs by running `docker-compose-tls.yaml`
Gubernator supports TLS. You can see an example with self signed certs by running `docker-compose-tls.yaml`
```bash
# Run docker compose
$ docker-compose -f docker-compose-tls.yaml up -d

# Hit the HTTP API at localhost:9080 (GRPC is at 9081)
$ curl --cacert certs/ca.cert --cert certs/gubernator.pem --key certs/gubernator.key https://localhost:9080/v1/HealthCheck
# Hit the HTTP API at localhost:9080
$ curl -X POST --cacert certs/ca.cert --cert certs/gubernator.pem --key certs/gubernator.key https://localhost:9080/v1/health.check
```
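
The same call from Go, sketched with the standard library TLS stack and the certificate paths from the curl example above:
```go
// Illustrative only: an HTTPS client configured with the self-signed CA and
// client certificate generated for the docker-compose-tls setup.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("certs/ca.cert")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	clientCert, err := tls.LoadX509KeyPair("certs/gubernator.pem", "certs/gubernator.key")
	if err != nil {
		panic(err)
	}

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				RootCAs:      pool,
				Certificates: []tls.Certificate{clientCert},
			},
		},
	}

	resp, err := client.Post("https://localhost:9080/v1/health.check", "application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```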

### Configuration
21 changes: 9 additions & 12 deletions algorithms.go
@@ -27,7 +27,7 @@ import (
)

// Implements token bucket algorithm for rate limiting. https://en.wikipedia.org/wiki/Token_bucket
func tokenBucket(ctx context.Context, s Store, c Cache, r *RateLimitReq) (resp *RateLimitResp, err error) {
func tokenBucket(ctx context.Context, s Store, c Cache, r *RateLimitRequest) (resp *RateLimitResponse, err error) {

tokenBucketTimer := prometheus.NewTimer(metricFuncTimeDuration.WithLabelValues("tokenBucket"))
defer tokenBucketTimer.ObserveDuration()
@@ -75,7 +75,7 @@ func tokenBucket(ctx context.Context, s Store, c Cache, r *RateLimitReq) (resp *
if s != nil {
s.Remove(ctx, hashKey)
}
return &RateLimitResp{
return &RateLimitResponse{
Status: Status_UNDER_LIMIT,
Limit: r.Limit,
Remaining: r.Limit,
@@ -112,7 +112,7 @@ func tokenBucket(ctx context.Context, s Store, c Cache, r *RateLimitReq) (resp *
t.Limit = r.Limit
}

rl := &RateLimitResp{
rl := &RateLimitResponse{
Status: t.Status,
Limit: r.Limit,
Remaining: t.Remaining,
@@ -194,7 +194,7 @@ func tokenBucket(ctx context.Context, s Store, c Cache, r *RateLimitReq) (resp *
}

// Called by tokenBucket() when adding a new item in the store.
func tokenBucketNewItem(ctx context.Context, s Store, c Cache, r *RateLimitReq) (resp *RateLimitResp, err error) {
func tokenBucketNewItem(ctx context.Context, s Store, c Cache, r *RateLimitRequest) (resp *RateLimitResponse, err error) {
now := MillisecondNow()
expire := now + r.Duration

@@ -220,7 +220,7 @@ func tokenBucketNewItem(ctx context.Context, s Store, c Cache, r *RateLimitReq)
ExpireAt: expire,
}

rl := &RateLimitResp{
rl := &RateLimitResponse{
Status: Status_UNDER_LIMIT,
Limit: r.Limit,
Remaining: t.Remaining,
@@ -246,10 +246,7 @@ func tokenBucketNewItem(ctx context.Context, s Store, c Cache, r *RateLimitReq)
}

// Implements leaky bucket algorithm for rate limiting https://en.wikipedia.org/wiki/Leaky_bucket
func leakyBucket(ctx context.Context, s Store, c Cache, r *RateLimitReq) (resp *RateLimitResp, err error) {
leakyBucketTimer := prometheus.NewTimer(metricFuncTimeDuration.WithLabelValues("V1Instance.getRateLimit_leakyBucket"))
defer leakyBucketTimer.ObserveDuration()

func leakyBucket(ctx context.Context, s Store, c Cache, r *RateLimitRequest) (resp *RateLimitResponse, err error) {
if r.Burst == 0 {
r.Burst = r.Limit
}
@@ -359,7 +356,7 @@ func leakyBucket(ctx context.Context, s Store, c Cache, r *RateLimitReq) (resp *
b.Remaining = float64(b.Burst)
}

rl := &RateLimitResp{
rl := &RateLimitResponse{
Limit: b.Limit,
Remaining: int64(b.Remaining),
Status: Status_UNDER_LIMIT,
@@ -412,7 +409,7 @@ func leakyBucket(ctx context.Context, s Store, c Cache, r *RateLimitReq) (resp *
}

// Called by leakyBucket() when adding a new item in the store.
func leakyBucketNewItem(ctx context.Context, s Store, c Cache, r *RateLimitReq) (resp *RateLimitResp, err error) {
func leakyBucketNewItem(ctx context.Context, s Store, c Cache, r *RateLimitRequest) (resp *RateLimitResponse, err error) {
now := MillisecondNow()
duration := r.Duration
rate := float64(duration) / float64(r.Limit)
@@ -436,7 +433,7 @@ func leakyBucketNewItem(ctx context.Context, s Store, c Cache, r *RateLimitReq)
Burst: r.Burst,
}

rl := RateLimitResp{
rl := RateLimitResponse{
Status: Status_UNDER_LIMIT,
Limit: b.Limit,
Remaining: r.Burst - r.Hits,
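
For context on the `tokenBucket()` changes above, here is a self-contained, illustrative sketch of the core token-bucket check. It is not the implementation in this file, which also handles pluggable stores and caches, resets, bursts, and millisecond timestamps.
```go
// Illustrative only: a minimal fixed-window token bucket in the spirit of
// tokenBucket(). Each window starts with `limit` tokens; every hit spends
// tokens until the window expires and the bucket refills.
package main

import (
	"fmt"
	"time"
)

type bucket struct {
	limit     int64
	remaining int64
	duration  time.Duration
	resetAt   time.Time
}

// check spends `hits` tokens and reports whether the request is under the limit.
func (b *bucket) check(now time.Time, hits int64) bool {
	if now.After(b.resetAt) {
		// Window expired: refill the bucket and start a new window.
		b.remaining = b.limit
		b.resetAt = now.Add(b.duration)
	}
	if hits > b.remaining {
		return false // over the limit; remaining tokens are left untouched
	}
	b.remaining -= hits
	return true
}

func main() {
	b := &bucket{limit: 10, remaining: 10, duration: time.Second, resetAt: time.Now().Add(time.Second)}
	for i := 0; i < 12; i++ {
		fmt.Printf("hit %d under_limit=%v remaining=%d\n", i+1, b.check(time.Now(), 1), b.remaining)
	}
}
```
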
2 changes: 1 addition & 1 deletion benchmark_cache_test.go
@@ -6,7 +6,7 @@ import (
"testing"
"time"

gubernator "github.com/mailgun/gubernator/v2"
"github.com/mailgun/gubernator/v3"
"github.com/mailgun/holster/v4/clock"
)
