For Go, we try to follow Google's code style. For Python, we try to follow the PEP 8 style guide.
We use the pre-commit package to enforce our pre-commit hooks. To install:

```sh
brew install pre-commit               # install the package
pre-commit install                    # install the pre-commit hooks
pre-commit run --all-files --verbose  # run it!
```
If you need to change the default values in the configuration (`ES_HOST`, `ES_PORT`, `ES_USERNAME`, `ES_PASSWORD`), you can also create the deployment file yourself:

Self-Managed Kubernetes:

```sh
just create-vanilla-deployment-file
```

Self-Managed Kubernetes without a certificate:

```sh
just create-vanilla-deployment-file-nocert
```

EKS:

```sh
just create-eks-deployment-file
```
To stop this example and clean up the pod, run:

```sh
just delete-cloudbeat
```

Or, when running without a certificate:

```sh
just delete-cloudbeat-nocert
```
Build & deploy the remote-debug Docker image:

```sh
just build-deploy-cloudbeat-debug
```

Or without a certificate:

```sh
just build-deploy-cloudbeat-debug-nocert
```
After running the pod, expose the relevant ports:

```sh
just expose-ports
```

The app will wait for the debugger to connect before starting.

Note: Use your favorite IDE (for example, GoLand) to connect to the debugger on `localhost:40000`.
Updating the cloudbeat configuration on a running elastic-agent can be done by running the script. The script still requires a second step: triggering the agent to re-run cloudbeat. This can be done in the Fleet UI by changing the agent's log level.
To update your local configuration of cloudbeat, use `mage config`. For example, to control the policy type you can pass the following environment variable:

```sh
POLICY_TYPE=cloudbeat/cis_eks mage config
```

The default `POLICY_TYPE` is set to `cloudbeat/cis_k8s` in `_meta/config/cloudbeat.common.yml.tmpl`.
Cloudbeat has various sets of tests. This guide should help you understand how the different test suites work, how they are used, and how new tests are added.
In general there are two major test suites:
- Unit tests written in Go
- Integration tests written in Python (using pytest)
The tests written in Go use the Go testing package. The tests written in Python depend on pytest and require a compiled, executable binary built from the Go code. The Python tests run a beat with a specific config and params, and check either that the output is as expected or that the correct entries show up in the logs.
Integration tests in Beats are tests that require an external system, such as Elasticsearch, to verify that the integration with that service works as expected. Beats provides Docker containers and docker-compose files in its test suite to start these environments, but a developer can also run the required services locally.
For more information, see our testing docs.
Cloudbeat uses mockery as its mocking test framework. Mockery provides an easy way to generate mocks for Go interfaces.
Some tests use the new expecter interface the library provides. For example, given an interface such as:

```go
type Requester interface {
	Get(path string) (string, error)
}
```
You can use the type-safe expecter interface as follows:

```go
requesterMock := Requester{}
requesterMock.EXPECT().Get("some path").Return("result", nil)
requesterMock.EXPECT().
	Get(mock.Anything).
	Run(func(path string) { fmt.Println(path, "was called") }).
	// Can still use return functions by getting the embedded mock.Call
	Call.Return(func(path string) string { return "result for " + path }, nil)
```
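To make the call-recording and canned-return behavior concrete, here is a self-contained, hand-rolled stand-in for the kind of mock mockery generates. The `requesterStub` type and its fallback behavior are illustrative assumptions, not generated code:

```go
package main

import "fmt"

// Requester matches the interface from the example above.
type Requester interface {
	Get(path string) (string, error)
}

// requesterStub is a hand-rolled sketch of what a generated mock does:
// it records each call and returns canned or computed values.
type requesterStub struct {
	calls  []string          // every path Get was called with
	canned map[string]string // fixed return values per path
}

func (r *requesterStub) Get(path string) (string, error) {
	r.calls = append(r.calls, path)
	if v, ok := r.canned[path]; ok {
		return v, nil
	}
	// Fallback mirrors the Return function in the expecter example.
	return "result for " + path, nil
}

func main() {
	var req Requester = &requesterStub{
		canned: map[string]string{"some path": "result"},
	}
	v, _ := req.Get("some path")
	fmt.Println(v) // result
	v, _ = req.Get("other/path")
	fmt.Println(v) // result for other/path
}
```

A real mockery mock adds expectation checking on top of this (failing the test when an unexpected call arrives), which is why generating mocks is preferable to hand-rolling them.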
To easily generate mocks, run:

```sh
just generate-mocks
```
The GitHub `act` tool allows you to run GitHub Actions locally. When you run `act`, it reads your GitHub Actions workflows from `.github/workflows/` and determines the set of actions that need to be run. It uses the Docker API to pull or build the necessary images, as defined in your workflow files, and finally determines the execution path based on the dependencies that were defined.
Full information can be found in the tool's readme docs. `act` depends on Docker to run workflows.
Homebrew (Linux/macOS):

```sh
brew install act
```

Or, if you want to install a version based on the latest commit, run the command below (it requires a compiler to be installed, but Homebrew will suggest how to install one if you don't have it):

```sh
brew install act --HEAD
```
You can provide default configuration flags to `act` by creating either a `./.actrc` or a `~/.actrc` file. Any flags in the files will be applied before any flags provided directly on the command line. For example, a file like the one below will always use the `catthehacker/ubuntu:act-20.04` image for the `ubuntu-20.04` runner:

```sh
# sample .actrc file
-P ubuntu-20.04=catthehacker/ubuntu:act-20.04
```
Additionally, `act` supports loading environment variables from an `.env` file. By default it looks for the file in the working directory, but this can be overridden with:

```sh
act --env-file my.env
```

`.env`:

```sh
MY_ENV_VAR=MY_ENV_VAR_VALUE
MY_2ND_ENV_VAR="my 2nd env var value"
```
To run `act` with secrets, you can enter them interactively, supply them as environment variables, or load them from a file. The following options are available for providing secrets:

- `act -s MY_SECRET=somevalue` - use `somevalue` as the value for `MY_SECRET`.
- `act -s MY_SECRET` - check for an environment variable named `MY_SECRET` and use it if it exists. If the environment variable is not defined, prompt the user for a value.
- `act --secret-file my.secrets` - load secret values from the `my.secrets` file. The secrets file format is the same as the `.env` format.
The GitHub upload-artifact action requires a simple server running on the local machine. The solution is described in this issue.

In the project's root folder, create a `docker-compose.yml`:

```yaml
artifact-server:
  image: ghcr.io/jefuller/artifact-server:latest
  environment:
    AUTH_KEY: foo
  ports:
    - "8080:8080"
```

Then update `.actrc`:

```sh
--env ACTIONS_CACHE_URL=http://localhost:8080/
--env ACTIONS_RUNTIME_URL=http://localhost:8080/
--env ACTIONS_RUNTIME_TOKEN=foo
```

Then start the artifact server:

```sh
docker-compose up
```