Subscription watch tracks usage and capacity at the account level. Account-level reporting means that subscriptions are not directly associated with machines, containers, or service instances.
Subscription watch can be thought of as several services that provide related functionality.
Networking diagrams show how requests are routed.
There are currently three different ways to deploy the components, with running them locally as the preferred development workflow.
Local Development
First, ensure you have podman-compose, podman and java 11 installed:
sudo dnf install -y podman-compose podman java-11-openjdk-devel
NOTE: You can also use docker if you don't want to or are unable to use podman. Make sure docker and docker-compose are installed.
Ensure the checkout has the HBI submodule initialized:
git submodule update --init --recursive
NOTE: in order to deploy insights-inventory (not always useful), you'll need to log in to quay.io first.
NOTE: To run any of the following commands using docker, replace podman-compose with docker compose and podman with docker.
Start via:
podman-compose up -d
If using docker, start via:
docker compose up -d
NOTE: if the DB hasn't finished starting up (likely), HBI will fail to start. To remedy: podman start rhsm-subscriptions_inventory_1
For more details about which services are defined, see docker-compose.yml.
Note that the compose file assumes that none of the services are already running locally (hint: you might need to sudo systemctl stop postgresql). If you want to use only some of the services via podman-compose, then podman-compose up --no-start can be used to define the services; you can then manually start containers for the services you wish to run locally.
If you prefer to use a local postgresql service, you can use init_dbs.sh.
podman-compose deploys a kafka instance with a UI at http://localhost:3030.
Two environment variables can be used to manipulate the offsets of the kafka consumers:
- KAFKA_SEEK_OVERRIDE_END: when set to true, seeks to the very end.
- KAFKA_SEEK_OVERRIDE_TIMESTAMP: when set to an OffsetDateTime, seeks the queue to this position.
These changes are permanent; they are committed the next time the kafka consumer is detected as idle.
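For example, to rewind the consumers to a specific point in time (the timestamp below is purely illustrative):
KAFKA_SEEK_OVERRIDE_TIMESTAMP=2021-01-01T00:00:00Z ./gradlew bootRun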
Build and run the application via:
./gradlew :bootRun
Spring Boot defines many properties that can be overridden via args or environment variables (we prefer environment variables). To determine the environment variable name, uppercase the property name, remove dashes, and replace . with _ (per the Spring docs).
We also define a number of service-specific properties (see Environment Variables below).
For example, the server.port property (or SERVER_PORT env var) changes the listening port:
SERVER_PORT=9090 ./gradlew :bootRun
We have a number of profiles. Each profile activates a subset of components in the codebase:
- api: Run the user-facing API
- capacity-ingress: Run the internal-only capacity ingress API
- capture-hourly-snapshots: Run the tally job for hourly snapshots
- capture-snapshots: Run the tally job and exit
- kafka-queue: Run with a kafka queue (instead of the default in-memory queue)
- liquibase-only: Run the Liquibase migrations and stop
- rh-marketplace: Run the worker responsible for processing tally summaries and emitting usage to Red Hat Marketplace
- metering-jmx: Expose the JMX bean to create metering jobs
- metering-job: Create metering jobs and place them on the job queue
- openshift-metering-worker: Process OpenShift metering jobs off the job queue
- purge-snapshots: Run the retention job and exit
- worker: Process jobs off the job queue
These can be specified most easily via the SPRING_PROFILES_ACTIVE
environment variable. For example:
SPRING_PROFILES_ACTIVE=capture-snapshots,kafka-queue ./gradlew bootRun
Each profile has a @Configuration class that controls which components get activated; see ApplicationConfiguration for more details. If no profiles are specified, the default profiles list in application.yaml is applied.
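As a rough sketch of how this works (the class and bean names below are hypothetical and not taken from this repo), a profile-gated @Configuration only contributes its beans when its profile is active:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Only picked up when the "worker" profile is active,
// e.g. SPRING_PROFILES_ACTIVE=worker,kafka-queue ./gradlew bootRun
@Configuration
@Profile("worker")
public class WorkerProfileSketch {

  @Bean
  public Runnable tallyTaskProcessor() {
    // Hypothetical placeholder bean; the real worker profile wires up the
    // components that process jobs off the job queue.
    return () -> System.out.println("processing tally tasks");
  }
}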
RHSM Subscriptions is meant to be deployed under the context path "/". The locations of app-specific resources are then controlled by the rhsm-subscriptions.package_uri_mappings.org.candlepin.insights property.
This unusual configuration is due to external requirements that our
application base its context path on the value of an environment
variable. Using "/" as the context path means that we can have certain
resources (such as health checks) with a known, static name while others
can vary based on an environment variable given to the pod.
The management endpoints are served on port 9000. When running locally, you can access them via http://localhost:9000.
- /jolokia - REST access to JMX beans via Jolokia
- /hawtio - Admin UI interface to JMX beans and more
- /health - A Spring Boot Actuator endpoint that we use as the k8s liveness/readiness probe.
- /info - An actuator endpoint that reads the information from META-INF/build-info.properties and reports it. The response includes things like the version number.
Both the health actuator and info actuator can be modified, expanded, or extended. Please see the Spring Boot Actuator documentation for a discussion of extension points.
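For example, assuming the application is running locally with the default management port, the probes can be checked with curl:
curl http://localhost:9000/health
curl http://localhost:9000/info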
rhsm-subscriptions uses an RBAC service to determine application authorization. The RBAC service can be configured via environment variables (see below).
For development purposes, the RBAC service can be stubbed out so that the connection to the RBAC service is bypassed and all users receive the 'subscriptions:*:*' role. This can be enabled by setting RHSM_RBAC_USE_STUB=true:
RHSM_RBAC_USE_STUB=true ./gradlew bootRun
Environment Variables
- DEV_MODE: disable anti-CSRF, account filtering, and RBAC role check
- DEVTEST_SUBSCRIPTION_EDITING_ENABLED: allow subscription/offering edits via JMX
- DEVTEST_EVENT_EDITING_ENABLED: allow event edits via JMX
- PRETTY_PRINT_JSON: configure Jackson to indent outputted JSON
- APP_NAME: application name for URLs (default: rhsm-subscriptions)
- PATH_PREFIX: path prefix in the URLs (default: api)
- INVENTORY_USE_STUB: use stubbed inventory REST API
- INVENTORY_API_KEY: API key for inventory service
- INVENTORY_HOST_LAST_SYNC_THRESHOLD: reject hosts that haven't checked in since this duration (e.g. 24h)
- INVENTORY_DATABASE_HOST: inventory DB host
- INVENTORY_DATABASE_DATABASE: inventory DB database
- INVENTORY_DATABASE_USERNAME: inventory DB user
- INVENTORY_DATABASE_PASSWORD: inventory DB password
- PRODUCT_ALLOWLIST_RESOURCE_LOCATION: location of the product allowlist
- ACCOUNT_LIST_RESOURCE_LOCATION: location of the account list (opt-in used otherwise)
- DATABASE_HOST: DB host
- DATABASE_PORT: DB port
- DATABASE_DATABASE: DB database
- DATABASE_USERNAME: DB username
- DATABASE_PASSWORD: DB password
- CAPTURE_SNAPSHOT_SCHEDULE: cron schedule for capturing tally snapshots
- ACCOUNT_BATCH_SIZE: number of accounts to tally at once
- TALLY_RETENTION_HOURLY: number of hourly tallies to keep
- TALLY_RETENTION_DAILY: number of daily tallies to keep
- TALLY_RETENTION_WEEKLY: number of weekly tallies to keep
- TALLY_RETENTION_MONTHLY: number of monthly tallies to keep
- TALLY_RETENTION_QUARTERLY: number of quarterly tallies to keep
- TALLY_RETENTION_YEARLY: number of yearly tallies to keep
- KAFKA_TOPIC: topic for rhsm-subscriptions tasks
- KAFKA_GROUP_ID: kafka consumer group ID
- KAFKA_CONSUMER_MAX_POLL_INTERVAL_MS: kafka max poll interval in milliseconds
- KAFKA_MESSAGE_THREADS: number of consumer threads
- KAFKA_BOOTSTRAP_HOST: kafka bootstrap host
- KAFKA_BOOTSTRAP_PORT: kafka bootstrap port
- KAFKA_CONSUMER_RECONNECT_BACKOFF_MS: kafka consumer reconnect backoff in milliseconds
- KAFKA_CONSUMER_RECONNECT_BACKOFF_MAX_MS: kafka consumer reconnect max backoff in milliseconds
- KAFKA_API_RECONNECT_TIMEOUT_MS: kafka connection timeout in milliseconds
- KAFKA_SCHEMA_REGISTRY_SCHEME: avro schema server scheme (http or https)
- KAFKA_SCHEMA_REGISTRY_HOST: kafka schema server host
- KAFKA_SCHEMA_REGISTRY_PORT: kafka schema server port
- KAFKA_AUTO_REGISTER_SCHEMAS: enable auto registration of schemas
- RHSM_RBAC_USE_STUB: stub out the RBAC service
- RHSM_RBAC_APPLICATION_NAME: name of the RBAC permission application (<APP_NAME>:*:*); by default this property is set to 'subscriptions'
- RHSM_RBAC_HOST: RBAC service hostname
- RHSM_RBAC_PORT: RBAC service port
- RHSM_RBAC_MAX_CONNECTIONS: max concurrent connections to the RBAC service
- CLOUDIGRADE_ENABLED: set to true to query cloudigrade for RHEL usage
- CLOUDIGRADE_MAX_ATTEMPTS: maximum number of attempts to query cloudigrade
- CLOUDIGRADE_HOST: cloudigrade service host
- CLOUDIGRADE_PORT: cloudigrade service port
- CLOUDIGRADE_INTERNAL_HOST: cloudigrade internal services host
- CLOUDIGRADE_INTERNAL_PORT: cloudigrade internal services port
- CLOUDIGRADE_MAX_CONNECTIONS: max concurrent connections to the cloudigrade service
- CLOUDIGRADE_PSK: pre-shared key for cloudigrade authentication
- SWATCH_*_PSK: pre-shared keys for internal service-to-service authentication, where the * represents the name of an authorized service
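For example, several of these can be combined on the command line for a local run (an illustrative combination, not a required configuration):
DEV_MODE=true INVENTORY_USE_STUB=true RHSM_RBAC_USE_STUB=true ./gradlew bootRun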
Clowder
Clowder exposes the services it provides in an Openshift config map. This config map appears in the container as a JSON file located by default at the path defined by the ACG_CONFIG environment variable (typically /cdapp/cdappconfig.json). The ClowderJsonEnvironmentPostProcessor takes this JSON file and flattens it into Java-style properties (prefixed with the clowder namespace).
For example,
{
  "kafka": {
    "brokers": [{
      "hostname": "localhost"
    }]
  }
}
becomes clowder.kafka.brokers[0].hostname. These properties are then passed into the Spring Environment and may be used elsewhere (the ClowderJsonEnvironmentPostProcessor runs before most other environment processing classes).
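As a minimal sketch of the flattening idea (illustrative only; the actual ClowderJsonEnvironmentPostProcessor in this repo may differ in its details), an EnvironmentPostProcessor can read the ACG_CONFIG file with Jackson and contribute the flattened keys as a property source:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.env.EnvironmentPostProcessor;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MapPropertySource;

// In a real application this would be registered via META-INF/spring.factories
// so that it runs early during environment preparation.
public class ClowderFlatteningSketch implements EnvironmentPostProcessor {

  @Override
  public void postProcessEnvironment(ConfigurableEnvironment environment,
      SpringApplication application) {
    String path = System.getenv("ACG_CONFIG");
    if (path == null) {
      return; // no Clowder config present, e.g. local development
    }
    try {
      JsonNode root = new ObjectMapper().readTree(new File(path));
      Map<String, Object> flattened = new LinkedHashMap<>();
      flatten("clowder", root, flattened);
      // flattened now contains keys such as clowder.kafka.brokers[0].hostname
      environment.getPropertySources().addFirst(new MapPropertySource("clowder", flattened));
    } catch (IOException e) {
      throw new IllegalStateException("Unable to read Clowder config at " + path, e);
    }
  }

  private void flatten(String prefix, JsonNode node, Map<String, Object> target) {
    if (node.isObject()) {
      node.fields().forEachRemaining(
          entry -> flatten(prefix + "." + entry.getKey(), entry.getValue(), target));
    } else if (node.isArray()) {
      for (int i = 0; i < node.size(); i++) {
        flatten(prefix + "[" + i + "]", node.get(i), target);
      }
    } else {
      target.put(prefix, node.asText());
    }
  }
}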
The pattern we follow is to assign the Clowder-style properties to an intermediate property that follows Spring Boot's environment variable binding conventions.
It is important to note that this intermediate property must be given a default via the ${value:default} syntax. If a default is not provided and the Clowder JSON is not available (such as in development runs), Spring will fail to start because the clowder-prefixed property will not resolve to anything.
An example of an intermediate property would be
KAFKA_BOOTSTRAP_HOST=${clowder.kafka.brokers[0].hostname:localhost}
This pattern has the useful property of allowing us to override any Clowder settings (in development, for example) with environment variables, since a value specified in the environment has a higher precedence than values defined in config data files (e.g. application.properties).
The intermediate property is then assigned to any actual property that we wish to use, e.g. spring.kafka.bootstrap-servers. Thus, it is trivial to have a value specified by Clowder, overridden from Clowder via environment variable, or not given by Clowder at all and instead based on a default.
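For illustration only (the actual wiring lives in this project's application.yaml and may differ; the clowder.kafka.brokers[0].port key is assumed here), the pattern looks roughly like this:
# intermediate properties with defaults, resolvable from Clowder or the environment
KAFKA_BOOTSTRAP_HOST=${clowder.kafka.brokers[0].hostname:localhost}
KAFKA_BOOTSTRAP_PORT=${clowder.kafka.brokers[0].port:9092}
# the actual Spring property is then based on the intermediate values
spring.kafka.bootstrap-servers=${KAFKA_BOOTSTRAP_HOST}:${KAFKA_BOOTSTRAP_PORT}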
A Clowder environment can be simulated in development by pointing the ACG_CONFIG environment variable to a mock Clowder JSON file.
E.g.
$ ACG_CONFIG=$(pwd)/swatch-core/src/test/resources/test-clowder-config.json ./gradlew bootRun
- Get a token and log in via oc login.
- Switch to the ephemeral namespace via oc project $namespace.
- Remotely exec kafka-console-consumer.sh with the desired topic (replace $topic below):
oc rsh \
$(oc get pod -o name -l app.kubernetes.io/name=kafka) \
bin/kafka-console-consumer.sh \
--topic $topic \
--from-beginning \
--bootstrap-server localhost:9092
Deploy to Openshift via Templates
Prerequisite secrets:
- pinhead: secret with keystore.jks - keystore for HTTPS communication with the RHSM API (formerly Pinhead).
- rhsm-db: DB connection info, having db.host, db.port, db.user, db.password, and db.name properties.
- host-inventory-db-readonly: inventory read-only clone DB connection info, having db.host, db.port, db.user, db.password, and db.name properties.
- ingress: secret with keystore.jks and truststore.jks - keystores for mTLS communication with subscription-conduit.
- tls: having keystore.password, the password used for capacity ingress.
Prerequisite configmaps:
- capacity-allowlist: having product-allowlist.txt, which is a newline-separated list of SKUs that have been approved for capacity ingress.
Adjust as desired:
oc process -f templates/rhsm-subscriptions-api.yml | oc create -f -
oc process -f templates/rhsm-subscriptions-capacity-ingress.yml | oc create -f -
oc process -f templates/rhsm-subscriptions-scheduler.yml | oc create -f -
oc process -f templates/rhsm-subscriptions-worker.yml | oc create -f -
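Template parameters can also be overridden at processing time with -p (the parameter name below is purely illustrative; see the templates themselves for the actual parameter names):
oc process -f templates/rhsm-subscriptions-worker.yml -p IMAGE_TAG=latest | oc create -f -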
Merges to main
will trigger deployment to a preprod environment. Production
deployments will be handled in an internal App-SRE automation repo.
See App-SRE documentation on updating dashboards for more info.
Essentially:
- Edit the dashboard on the stage grafana instance.
- Export the dashboard, choosing to "export for sharing externally", save JSON to a file.
- Export the dashboard again, this time not selecting the external sharing option and save that JSON to a file.
- For both pieces of JSON, drop them into the subscription-watch.json section under data in grafana-dashboard-subscription-watch.configmap.yaml and update the indentation.
- Do a git diff and select the export that makes the most sense. In my experience, not selecting the "external sharing" option leads to more correct results. An export formatted for sharing has an __inputs section that hardcodes some values we don't want hardcoded.
- Rename the file to subscription-watch.json.
OR
- Edit the dashboard on the stage grafana instance.
- Navigate to Dashboard Settings (cogwheel at top right of page).
- Navigate to JSON Model (left nav).
- Save the contents of the JSON Model into a file named subscription-watch.json.
Use the following command to update the configmap YAML:
oc create configmap grafana-dashboard-subscription-watch --from-file=subscription-watch.json -o yaml --dry-run=client > ./grafana-dashboard-subscription-watch.configmap.yaml
cat << EOF >> ./grafana-dashboard-subscription-watch.configmap.yaml
  annotations:
    grafana-folder: /grafana-dashboard-definitions/Insights
  labels:
    grafana_dashboard: "true"
EOF
Possibly useful, to extract the JSON from the k8s configmap file:
oc extract -f dashboards/grafana-dashboard-subscription-watch.configmap.yaml --confirm
Once you extract it from the .yaml that's checked into this repo, you can import it into the stage instance of grafana by going to Create -> Import from the left nav.
Subscription watch components are licensed GPLv3 (see LICENSE for more details).