This guide provides details about the configuration setup for our application, including logger, services (specifically entitlements), and server configurations.
- Logger Configuration
- Server Configuration
- Database Configuration
- OPA Configuration
- Services Configuration
- Install HashiCorp Vault on your local machine:

  ```shell
  brew tap hashicorp/tap
  brew install hashicorp/tap/vault
  ```
- Enable the LDAP auth method in Vault. Start a new session with the Vault container using the Vault root token:

  ```shell
  export VAULT_TOKEN="myroot"
  export VAULT_ADDR="http://localhost:8200"

  vault auth enable ldap

  vault write auth/ldap/config \
      url="ldap://openldap" \
      binddn="cn=admin,dc=example,dc=com" \
      bindpass="admin" \
      userattr="cn" \
      userdn="ou=users,dc=example,dc=com" \
      groupdn="ou=groups,dc=example,dc=com" \
      insecure_tls=true
  ```
- Add a role that maps to LDAP groups, and enable the PKI secrets engine:

  ```shell
  vault write auth/ldap/groups/developers policies=default

  vault secrets enable pki
  vault secrets tune -max-lease-ttl=87600h pki
  ```
- Generate the root certificate (outside the container):

  ```shell
  export VAULT_TOKEN="myroot"
  export VAULT_ADDR="http://localhost:8200"

  vault write -field=certificate pki/root/generate/internal \
      common_name="root" \
      ttl=87600h > CA_cert.crt
  ```
- Configure the issuing certificate and CRL distribution URLs:

  ```shell
  export VAULT_TOKEN="myroot"
  export VAULT_ADDR="http://localhost:8200"

  vault write pki/config/urls \
      issuing_certificates="http://localhost:8200/v1/pki/ca" \
      crl_distribution_points="http://localhost:8200/v1/pki/crl"
  ```
- Create a role that determines what the engine will issue:

  ```shell
  export VAULT_TOKEN="myroot"
  export VAULT_ADDR="http://localhost:8200"

  vault write pki/roles/example-dot-com \
      allowed_domains="example.com" \
      allow_subdomains=true \
      max_ttl="768h"
  ```
- Now you can issue certificates with the following commands:

  ```shell
  vault write -format=json pki/issue/example-dot-com common_name="localhost" ttl="768h" > server.json
  cat server.json | jq -r '.data.certificate' > server.crt
  cat server.json | jq -r '.data.private_key' > server.key
  cat server.json | jq -r '.data.ca_chain[]' > ca.crt
  ```

  or

  ```shell
  vault write -format=json pki/issue/example-dot-com common_name="pep.example.com" ttl="768h" > pep.json
  cat pep.json | jq -r '.data.certificate' > pep.crt
  cat pep.json | jq -r '.data.private_key' > pep.key
  ```
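A quick way to confirm that an issued certificate and its private key belong together is to compare their public-key digests. The keypair below is a throwaway self-signed one (an assumption, so the example runs without a Vault server); substitute `server.crt` and `server.key` from the output above:

```shell
# Throwaway keypair standing in for server.crt / server.key from Vault.
openssl req -x509 -newkey rsa:2048 -nodes -keyout server_demo.key \
  -out server_demo.crt -subj "/CN=localhost" -days 1 2>/dev/null

# A certificate and key match when their public keys hash to the same digest.
cert_digest=$(openssl x509 -in server_demo.crt -noout -pubkey | openssl sha256)
key_digest=$(openssl pkey -in server_demo.key -pubout | openssl sha256)
[ "$cert_digest" = "$key_digest" ] && echo "certificate and key match"
```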
The logger configuration is used to define how the application logs its output.
| Field | Description | Default |
| --- | --- | --- |
| `level` | The logging level. | `info` |
| `type` | The format of the log output. | `json` |
| `output` | The output destination for logs. | `stdout` |
Example:

```yaml
logger:
  level: debug
  type: text
  output: stdout
```
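With `type: json`, each log record is one JSON object per line, which makes the output easy to filter with `jq`. The record below is a fabricated illustration; the field names (`level`, `msg`) are assumptions, not the application's documented schema:

```shell
# Hypothetical json-format log line; field names are assumed for illustration.
echo '{"time":"2024-01-01T00:00:00Z","level":"DEBUG","msg":"starting server"}' |
  jq -r 'select(.level == "DEBUG") | .msg'
# → starting server
```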
The server configuration is used to define how the application runs its server.
| Field | Description | Default |
| --- | --- | --- |
| `port` | The port number for the server. | `9000` |
| `host` | The host address for the server. | `""` |
| `grpc.reflection` | Enable gRPC server reflection. | `true` |
| `tls.enabled` | Enable TLS. | `false` |
| `tls.cert` | The path to the TLS certificate. | |
| `tls.key` | The path to the TLS key. | |
| `auth.audience` | The audience for the IdP. | |
| `auth.issuer` | The issuer for the IdP. | |
| `auth.clients` | A list of client IDs that are allowed. | |
Example:

```yaml
server:
  grpc:
    reflection: true
  port: 8081
  tls:
    enabled: true
    cert: /path/to/cert
    key: /path/to/key
  auth:
    enabled: true
    audience: https://example.com
    issuer: https://example.com
    clients:
      - client_id
      - client_id2
```
The database configuration is used to define how the application connects to its database.
| Field | Description | Default |
| --- | --- | --- |
| `host` | The host address for the database. | `localhost` |
| `port` | The port number for the database. | `5432` |
| `database` | The name of the database. | `opentdf` |
| `user` | The username for the database. | `postgres` |
| `password` | The password for the database. | `changeme` |
| `sslmode` | The SSL mode for the database. | `prefer` |
| `schema` | The schema for the database. | `opentdf` |
| `runMigration` | Whether to run the database migration or not. | `true` |
Example:

```yaml
db:
  host: localhost
  port: 5432
  database: opentdf
  user: postgres
  password: changeme
  sslmode: require
  schema: opentdf
  runMigration: false
```
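These settings map onto a standard libpq-style connection URL; the sketch below assembles one from the example values above (the URL shape follows PostgreSQL conventions and is an assumption, not something the application itself emits):

```shell
# Build a PostgreSQL connection URL from the example config values above.
host=localhost; port=5432; database=opentdf
user=postgres; password=changeme; sslmode=require
echo "postgres://${user}:${password}@${host}:${port}/${database}?sslmode=${sslmode}"
# → postgres://postgres:changeme@localhost:5432/opentdf?sslmode=require
```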
The OPA configuration is used to define how the application runs OPA (Open Policy Agent).

| Field | Description | Default |
| --- | --- | --- |
| `embedded` | Whether to use the embedded OPA bundle server or not. This is only used for local development. | `false` |
| `path` | The path to the OPA configuration file. | `./opa/opa.yaml` |
Example:

```yaml
opa:
  embedded: true # Only for local development
  path: ./opa/opa.yaml
```
Key Access Server (KAS):

| Field | Description | Default |
| --- | --- | --- |
| `enabled` | Enable the Key Access Server. | `true` |
Example:

```yaml
services:
  kas:
    enabled: true
```
Policy:

| Field | Description | Default |
| --- | --- | --- |
| `enabled` | Enable the Policy Service. | `true` |
Example:

```yaml
services:
  policy:
    enabled: true
```
Authorization:

| Field | Description | Default |
| --- | --- | --- |
| `enabled` | Enable the Authorization Service. | |
- Go (see go.mod for specific version)
- Container runtime
- Compose - used to manage multi-container applications
- Buf - used for managing protobuf files
  - install with `go install github.com/bufbuild/buf/cmd/buf`
- grpcurl - used for testing gRPC services
  - install with `go install github.com/fullstorydev/grpcurl/cmd/grpcurl`
- golangci-lint - used for ensuring good coding practices
  - install with `brew install golangci-lint`
- SoftHSM - used to emulate hardware security modules (aka PKCS #11)

On macOS, these can be installed with brew:

```shell
brew install buf grpcurl openssl pkcs11-tools softhsm golangci-lint
```
Note: Migrations are handled automatically by the server. This can be disabled via the config file, as needed. They can also be run manually using the `migrate` command (`make go.work; go run github.com/arkavo-org/opentdf-platform/service migrate up`).
```shell
docker-compose up
```
- Create an OpenTDF config file: `opentdf.yaml`
  - The `opentdf-example.yaml` file is a good starting point, but you may need to modify it to match your environment.
  - The `opentdf-example-no-kas.yaml` file configures the platform to run insecurely without KAS and without endpoint auth.
- Provision Keycloak:

  ```shell
  go run github.com/arkavo-org/opentdf-platform/service provision keycloak
  ```
- Configure KAS keys and your HSM with:

  ```shell
  .github/scripts/hsm-init-temporary-keys.sh
  ```
- Run the server:

  ```shell
  go run github.com/arkavo-org/opentdf-platform/service start
  ```
- The server is now running on `localhost:8080` (or the port specified in the config file).
Note: support was added to provision a set of fixture data into the database. Run `go run github.com/arkavo-org/opentdf-platform/service provision fixtures -h` for more information.
Our native gRPC service functions are generated from proto definitions using Buf. The `Makefile` provides command scripts to invoke Buf with the `buf.gen.yaml` config, including OpenAPI docs, gRPC docs, and the generated code. For convenience, the `make pre-build` script checks whether you have the necessary dependencies for proto -> gRPC generation.
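For reference, a minimal `buf.gen.yaml` for Go and gRPC code generation typically looks like the sketch below; the plugin names and output paths are illustrative assumptions, not this repository's actual config:

```yaml
# Hypothetical minimal buf.gen.yaml; plugins and paths are illustrative.
version: v1
plugins:
  - plugin: buf.build/protocolbuffers/go
    out: gen
    opt: paths=source_relative
  - plugin: buf.build/grpc/go
    out: gen
    opt: paths=source_relative
```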
A KAS controls access to TDF-protected content. To enable KAS, you must have a working PKCS #11 library on your system. For development, we use the SoftHSM library, which presents a PKCS #11 interface to on-CPU cryptography libraries.
```shell
export OPENTDF_SERVER_CRYPTOPROVIDER_HSM_PIN=12345
export OPENTDF_SERVER_CRYPTOPROVIDER_HSM_MODULEPATH=/opt/homebrew/Cellar/softhsm/2.6.1/lib/softhsm/libsofthsm2.so
export OPENTDF_SERVER_CRYPTOPROVIDER_HSM_KEYS_EC_LABEL=kas-ec
export OPENTDF_SERVER_CRYPTOPROVIDER_HSM_KEYS_RSA_LABEL=kas-rsa

pkcs11-tool --module $OPENTDF_SERVER_CRYPTOPROVIDER_HSM_MODULEPATH \
    --login --pin ${OPENTDF_SERVER_CRYPTOPROVIDER_HSM_PIN} \
    --write-object kas-private.pem --type privkey \
    --label kas-rsa

pkcs11-tool --module $OPENTDF_SERVER_CRYPTOPROVIDER_HSM_MODULEPATH \
    --login --pin ${OPENTDF_SERVER_CRYPTOPROVIDER_HSM_PIN} \
    --write-object kas-cert.pem --type cert \
    --label kas-rsa

pkcs11-tool --module $OPENTDF_SERVER_CRYPTOPROVIDER_HSM_MODULEPATH \
    --login --pin ${OPENTDF_SERVER_CRYPTOPROVIDER_HSM_PIN} \
    --write-object ec-private.pem --type privkey \
    --label kas-ec

pkcs11-tool --module $OPENTDF_SERVER_CRYPTOPROVIDER_HSM_MODULEPATH \
    --login --pin ${OPENTDF_SERVER_CRYPTOPROVIDER_HSM_PIN} \
    --write-object ec-cert.pem --type cert \
    --label kas-ec
```
To see how to generate key pairs that KAS can use, review the temp keys init script.
The policy service is responsible for managing policy configurations. It provides a gRPC API for creating, updating, and deleting policy configurations.