Federation 2 is an evolution of the original Apollo Federation with an improved shared ownership model, enhanced type merging, and cleaner syntax for a smoother developer experience. It’s backwards compatible, requiring no major changes to your subgraphs. Try the GA release today!
- Welcome
- Prerequisites
- Build your first graph
- Local development
- OpenTelemetry
- Composition examples
- Apollo Router
- Apollo Router Custom Image and Rhai Script
- Apollo Router Custom Plugin
Apollo Federation is an architecture for declaratively composing APIs into a unified graph. Each team can own their slice of the graph independently, empowering them to deliver autonomously and incrementally.
Designed in collaboration with the GraphQL community, Federation 2 is a clean-sheet implementation of the core composition and query-planning engine at the heart of Federation, to:
- streamline common tasks - like extending a type
- simplify advanced workflows - like migrating a field across subgraphs with no downtime
- improve the developer experience - by adding deeper static analysis, cleaner error messages, and new composition hints that help you catch errors sooner and understand how your schema impacts performance.
Federation 2 adds:
- first-class support for shared interfaces, enums, and other value types
- cleaner syntax for common tasks -- without the use of special keywords or directives
- flexible value type merging
- improved shared ownership for federated types
- deeper static analysis, better error messages and a new generalized composition model
- new composition hints that let you understand how your schema impacts performance
- lots more!
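For example, with first-class value-type support, an enum can simply be defined in more than one subgraph and composed without any special keywords (a minimal sketch; the type name and subgraphs are illustrative, not from this demo's schemas):

```graphql
# Defined identically in both the products and users subgraphs;
# Federation 2 merges the definitions during composition.
enum Currency {
  USD
  EUR
}
```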
Learn more:
Let's get started!
You'll need:
To install rover:

curl -sSL https://rover.apollo.dev/nix/latest | sh

For help with rover, see installing the Rover CLI.
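Before continuing, it can help to verify the tooling is in place. A minimal sanity-check sketch (the tool list is an assumption based on the sections below):

```shell
# Check that the tools used in this demo are on the PATH
missing=0
for tool in curl docker docker-compose rover; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```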
- Create a free Apollo Studio account
- Select Register a deployed graph (free forever)
- Create your user & org
- Follow the prompt to add your first graph with the Deployed option selected and the Supergraph architecture selected.
- Click Next
Then publish the 3 subgraph schemas to the registry in Apollo Studio.
# build a supergraph from 3 subgraphs: products, users, inventory
make publish
It will prompt you for your APOLLO_KEY and your APOLLO_GRAPH_REF, which you can obtain from the screen above.
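These values end up in the graph-api.env file referenced by the compose files below. A sketch of its contents (the values shown are placeholders, not real credentials; the key format follows Apollo's standard service:&lt;graph-id&gt;:&lt;hash&gt; shape):

```shell
# graph-api.env as written by `make publish` (placeholder values)
APOLLO_KEY="service:My-Graph-3-vh40el:<your-api-key>"
APOLLO_GRAPH_REF="My-Graph-3-vh40el@current"
```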
The subgraphs will be published to the Apollo Registry:
---------------------------------------
subgraph: pandas
---------------------------------------
+ rover subgraph publish My-Graph-3-vh40el@current --routing-url http://pandas:4000/graphql --schema subgraphs/pandas/pandas.graphql --name pandas --convert
Publishing SDL to My-Graph-3-vh40el@current (subgraph: pandas) using credentials from the default profile.
A new subgraph called 'pandas' for the 'My-Graph-3-vh40el@current' graph was created
The gateway for the 'My-Graph-3-vh40el@current' graph was updated with a new schema, composed from the updated 'pandas' subgraph
Monitor your schema delivery progress on studio: https://studio.apollographql.com/graph/My-Graph-3-vh40el/launches/5bbeb91e-c6bd-4fdf-b8af-5c330f26d618?variant=current
Click See schema changes
Now that Federation 2 is enabled we can start a v2 Gateway that uses the graph composed by Apollo Studio.
This can be done with a single command, or step by step with the instructions that follow:
make demo
make demo does the following:
make docker-up uses docker-compose.managed.yml:
version: '3'
services:
apollo-gateway:
container_name: apollo-gateway
build: ./gateway
env_file: # created automatically during `make publish`
- graph-api.env
ports:
- "4000:4000"
products:
container_name: products
build: ./subgraphs/products
inventory:
container_name: inventory
build: ./subgraphs/inventory
users:
container_name: users
build: ./subgraphs/users
which shows:
docker-compose -f docker-compose.managed.yml up -d
Creating network "supergraph-demo_default" with the default driver
Creating apollo-gateway ... done
Starting Apollo Gateway in managed mode ...
Apollo usage reporting starting! See your graph at https://studio.apollographql.com/graph/supergraph-router@dev/
🚀 Server ready at http://localhost:4000/
make query
which issues the following query that fetches across 3 subgraphs:
query Query {
allProducts {
id
sku
createdBy {
email
totalProductsCreated
}
}
}
with results like:
{
data: {
allProducts: [
{
id: "apollo-federation",
sku: "federation",
createdBy: {
email: "[email protected]",
totalProductsCreated: 1337
}
},{
id: "apollo-studio",
sku: "studio",
createdBy:{
email: "[email protected]",
totalProductsCreated: 1337
}
}
]
}
}
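You can also issue the same query yourself from the command line (a sketch; it assumes the gateway from make docker-up is listening on http://localhost:4000/, and falls back to a message if it is not running):

```shell
# POST the demo query to the local gateway; print the JSON response
resp=$(curl -s -X POST http://localhost:4000/ \
  -H 'Content-Type: application/json' \
  --data '{ "query": "{ allProducts { id, sku, createdBy { email, totalProductsCreated } } }" }' \
  || echo '{"errors":"gateway not reachable on localhost:4000"}')
echo "$resp"
```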
Apollo Explorer helps you explore the schemas you've published and create queries using the query builder.
Getting started with Apollo Explorer:
- Ensure the graph we previously started with make docker-up is still running
- Configure Explorer to use the local v2 Gateway running on http://localhost:4000/
- Use the same query as before, but this time in Apollo Explorer:
query Query {
allProducts {
id
sku
createdBy {
email
totalProductsCreated
}
}
}
Once we're done we can shut down the v2 Gateway and the 3 subgraphs:
docker-compose down
That's it!
This section assumes you have docker, docker-compose, and the rover core binary installed from the Prerequisites section above.
See also: Apollo Federation docs
You can federate multiple subgraphs into a supergraph using:
make demo-local
which does the following:
# build a supergraph from 3 subgraphs: products, users, inventory
make supergraph
which runs:
rover supergraph compose --config ./supergraph.yaml > supergraph.graphql
and then runs:
make docker-up-local
Creating apollo-gateway ... done
Creating inventory ... done
Creating users ... done
Creating products ... done
Starting Apollo Gateway in local mode ...
Using local: supergraph.graphql
🚀 Graph Router ready at http://localhost:4000/
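For reference, the supergraph.yaml consumed by rover supergraph compose above looks roughly like this (the routing URLs and schema paths are assumptions based on the compose files and publish output shown earlier):

```yaml
federation_version: 2
subgraphs:
  products:
    routing_url: http://products:4000/graphql
    schema:
      file: ./subgraphs/products/products.graphql
  users:
    routing_url: http://users:4000/graphql
    schema:
      file: ./subgraphs/users/users.graphql
  inventory:
    routing_url: http://inventory:4000/graphql
    schema:
      file: ./subgraphs/inventory/inventory.graphql
```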
make demo-local then issues a curl request to the graph router via:
make query
which issues the following query that fetches across 3 subgraphs:
query Query {
allProducts {
id
sku
createdBy {
email
totalProductsCreated
}
}
}
with results like:
{
data: {
allProducts: [
{
id: "apollo-federation",
sku: "federation",
createdBy: {
email: "[email protected]",
totalProductsCreated: 1337
}
},{
id: "apollo-studio",
sku: "studio",
createdBy:{
email: "[email protected]",
totalProductsCreated: 1337
}
}
]
}
}
make demo-local then shuts down the graph router:
docker-compose down
make docker-up-local
- Open http://localhost:4000/
- Click Query your server
- Run a query:
query Query {
allProducts {
id
sku
createdBy {
email
totalProductsCreated
}
}
}
View the results, then shut down:
docker-compose down
To see where time is being spent on a request, we can use OpenTelemetry distributed tracing for Apollo Federation.
make docker-up-otel-collector
make smoke
browse to http://localhost:9411/
make docker-down-otel-collector
You can send OpenTelemetry data from the Gateway to Honeycomb with the following collector-config.yml:
receivers:
otlp:
protocols:
grpc:
http:
cors_allowed_origins:
- http://*
- https://*
exporters:
otlp:
endpoint: "api.honeycomb.io:443"
headers:
"x-honeycomb-team": "your-api-key"
"x-honeycomb-dataset": "your-dataset-name"
service:
pipelines:
traces:
receivers: [otlp]
exporters: [otlp]
Once the cluster is up and has queries against it (via make smoke), browse to http://localhost:9090/ and begin querying against metrics pulled from the trace spans.
Example queries:
- P99 by service:
histogram_quantile(.99, sum(rate(latency_bucket[5m])) by (le, service_name))
- Average latency by service and operation (e.g. router/graphql.validate):
sum by (operation, service_name)(rate(latency_sum{}[1m])) / sum by (operation, service_name)(rate(latency_count{}[1m]))
- RPM by service:
sum(rate(calls_total{operation="HTTP POST"}[1m])) by (service_name)
- Docs: OpenTelemetry for Apollo Federation
- Docker compose file: docker-compose.otel-collector.yml
- Helper library: supergraph-demo-opentelemetry
- See usage in:
The Apollo Router is our next-generation GraphQL Federation runtime written in Rust, and it is fast.
As a Graph Router, the Apollo Router plays the same role as the Apollo Gateway. The same subgraph schemas and composed supergraph schema can be used in both the Router and the Gateway.
This demo shows using the Apollo Router with a Federation 2 supergraph schema, composed using the Fed 2 rover supergraph compose
command. To see the Router working with Federation 1 composition, check out the Apollo Router section of apollographql/supergraph-demo.
Early benchmarks show that the Router adds less than 10ms of latency to each operation, and it can process 8x the load of the JavaScript Apollo Gateway.
See the Apollo Router Docs for details.
make demo-router
which uses this docker-compose.router-managed.yml file:
version: '3'
services:
apollo-router:
container_name: apollo-router
image: ghcr.io/apollographql/router:v0.9.1
volumes:
- ./router.yaml:/dist/config/router.yaml
command: [ "-c", "config/router.yaml", "--log", "info" ]
env_file: # create with make graph-api-env
- graph-api.env
ports:
- "4000:4000"
products:
container_name: products
build: ./subgraphs/products
inventory:
container_name: inventory
build: ./subgraphs/inventory
users:
container_name: users
build: ./subgraphs/users
pandas:
container_name: pandas
build: ./subgraphs/pandas
which uses the published Router Docker image created from this Dockerfile.
Prerequisites: Local development
make demo-local-router
which uses this docker-compose.router.yml file
Similar to OpenTelemetry with the Gateway.
If using Docker for Mac to try on your laptop, for the best experience:
- Docker for Mac 4.6.1+
- Enable these experimental features:
- New Virtualization framework
- VirtioFS accelerated directory sharing
- macOS Monterey 12.3+
make docker-up-router-otel
make load
browse to http://localhost:9411/
make docker-down-router
The published Router Docker image should work for the majority of use cases.
A custom Docker image can also be used:
make docker-build-router-image
make docker-up-local-router-custom-image
make smoke
Which uses a Router custom image Dockerfile like this:
FROM --platform=linux/amd64 debian:bullseye
RUN apt-get update && apt-get install -y \
ca-certificates \
curl
WORKDIR /dist
COPY ./router.rhai .
RUN curl -sSL https://router.apollo.dev/download/nix/latest | sh
# for faster docker shutdown
STOPSIGNAL SIGINT
# set the startup command to run your binary
# note: if you want sh you can override the entrypoint using docker run -it --entrypoint=sh my-router-image
ENTRYPOINT ["./router"]
Which uses this router.rhai script:
fn router_service(service) {
// Define a closure to process our response
let f = |response| {
let start = apollo_start.elapsed;
// ... Do some processing
let duration = apollo_start.elapsed - start;
print(`response processing took: ${duration}`);
// Log out any errors we may have
print(response.body.errors);
};
// Map our response using our closure
service.map_response(f);
}
To see the INFO-level print() statements:
make docker-logs-local-router-custom-image
Then cleanup:
make docker-down-router
Docs and examples:
This is based on the hello-world native Rust plugin example.
The router/custom-plugin folder in this repo has the contents of the custom Router docker image used in the steps below.
git clone [email protected]:apollographql/supergraph-demo-fed2.git
cd supergraph-demo-fed2
make docker-build-router-plugin
make docker-up-local-router-custom-plugin
make smoke
make docker-down-router
Which uses a Router custom plugin Dockerfile like this:
FROM --platform=linux/amd64 rust:1.60 as build
ENV NODE_VERSION=16.13.0
RUN apt-get update && apt-get install -y curl
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
ENV NVM_DIR=/root/.nvm
RUN . "$NVM_DIR/nvm.sh" && nvm install ${NODE_VERSION}
RUN . "$NVM_DIR/nvm.sh" && nvm use v${NODE_VERSION}
RUN . "$NVM_DIR/nvm.sh" && nvm alias default v${NODE_VERSION}
ENV PATH="/root/.nvm/versions/node/v${NODE_VERSION}/bin/:${PATH}"
RUN node --version
RUN npm --version
ENV RUST_BACKTRACE=full
# create a new empty shell project
RUN USER=root cargo new --bin acme_router
WORKDIR /acme_router
RUN rustup component add rustfmt
# copy over your manifests
COPY ./Cargo.lock ./Cargo.lock
COPY ./Cargo.toml ./Cargo.toml
# this build step will cache your dependencies
RUN cargo build --release
RUN rm src/*.rs
# copy your source tree
COPY ./src ./src
# build for release
RUN rm ./target/release/deps/acme_router*
RUN cargo build --release
RUN mkdir -p /dist/config && mkdir -p /dist/schema
# our final image uses distroless
FROM --platform=linux/amd64 gcr.io/distroless/cc-debian11
# copy the build artifact from the build stage
COPY --from=build /dist /dist
COPY --from=build --chown=root:root /acme_router/target/release/acme_router /dist
WORKDIR /dist
# set the startup command to run your binary
ENTRYPOINT ["./acme_router"]
- Blog Post
- Docs
- GitHub
- Discussions -- we'd love to hear what you think!
- Community Forum
- Blog Post
- Docs
- GitHub
- Community Forum -- we'd love to hear what you think!