
Vagrant

Prerequisites

macOS (Intel processors)

brew install --cask virtualbox
brew install --cask vagrant

macOS (Apple Silicon processors)

  1. Install prerequisites:
brew install qemu
brew install --cask vagrant
  2. Install the QEMU Vagrant provider:
vagrant plugin install vagrant-qemu
  3. Set the default Vagrant provider to QEMU:
echo 'export VAGRANT_DEFAULT_PROVIDER=qemu' >> ~/.zshrc
source ~/.zshrc
  4. Enable SMB sharing (a quick verification follows below):
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.smbd.plist
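
As an optional sanity check (not part of the official steps), you can verify that the SMB daemon is loaded:

sudo launchctl list | grep smbd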

Other

Please install Vagrant and a Vagrant-compatible provider such as VirtualBox.
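
For example, on Debian or Ubuntu both can usually be installed from the distribution packages (package names may vary between distributions and releases):

sudo apt-get update
sudo apt-get install virtualbox vagrant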

Setting up

You can set up the Vagrant environment with just one command:

vagrant up

If asked for a username and password, enter your macOS user credentials.
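
You can inspect the machine state at any time with the standard Vagrant CLI:

vagrant status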

If you experience the following error (macOS specific):

There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["hostonlyif", "create"]

Stderr: 0%...
Progress state: NS_ERROR_FAILURE
VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
VBoxManage: error: Context: "RTEXITCODE handleCreate(HandlerArg *)" at line 95 of file VBoxManageHostonly.cpp

go to System Preferences > Security & Privacy, then click "Allow" for Oracle VirtualBox.

After successful installation you can SSH to the virtual machine with:

vagrant ssh

NOTICE: The sumo-kubernetes-collection repository directory on the host is synced with the /sumologic/ directory on the virtual machine.
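
As a quick check that the sync works, you can list the directory from the guest (the output depends on your checkout):

vagrant ssh -c "ls /sumologic"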

Collector

To install or upgrade the collector, run:

sumo-make upgrade

or

/sumologic/vagrant/Makefile upgrade

This command prepares the environment (namespaces, receiver-mock, etc.) and then installs or upgrades the collector in the Vagrant environment.
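
Once the command finishes, you can verify from inside the VM that the collection pods are up (assuming kubectl is available in the VM and the release is installed into the sumologic namespace, as the error messages later in this document suggest):

kubectl get pods -n sumologic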

To remove the collector, use:

sumo-make clean

or

/sumologic/vagrant/Makefile clean

List of other useful targets:

  • expose-prometheus - exposes Prometheus on port 9090 of the virtual machine (see the example after this list)
  • expose-grafana - exposes Grafana on port 8080 of the virtual machine
  • apply-avalanche - runs a one-pod deployment of avalanche (a metrics generator)
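
For example, after exposing Prometheus you should be able to reach its readiness endpoint from inside the virtual machine (expose-prometheus likely runs a port-forward in the foreground, so use a second shell for the curl):

sumo-make expose-prometheus
# in another shell inside the VM:
curl -s http://localhost:9090/-/ready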

Test

To quickly test whether sumo-kubernetes-collection works, you can use receiver-mock.

To check receiver-mock logs, use:

sumo-make test-receiver-mock-logs

or

/sumologic/vagrant/Makefile test-receiver-mock-logs

To check metrics exposed by receiver-mock, use:

sumo-make test-receiver-mock-metrics

or

/sumologic/vagrant/Makefile test-receiver-mock-metrics

Istio

To set up Istio, use the following commands:

# clone istio repository
sumo-make istio-clone
# generate Istio certs and enable Istio in microk8s
sumo-make istio-certs istio-enable
# upgrade sumologic
sumo-make upgrade
# patch sumologic
sumo-make istio-patch restart-pods

NOTE: To prevent overriding the patches, use sumo-make helm-upgrade instead of sumo-make upgrade.

Configuration

Prepare the sumologic configuration (in vagrant/values.local.yaml).
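
For illustration only, a minimal values.local.yaml could toggle a couple of top-level chart options (the keys below are examples, not required settings):

sumologic:
  logs:
    enabled: true
  metrics:
    enabled: true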

Then upgrade the collection with the following command:

sumo-make helm-upgrade

Adjust kube-prometheus-stack configuration

To tell kube-prometheus-stack how to scrape metrics under Istio, add the following modifications:

kube-prometheus-stack:
  kube-state-metrics:
    podAnnotations:
      # fix readiness and liveness probes
      sidecar.istio.io/rewriteAppHTTPProbers: "true"
      # fix scraping metrics
      traffic.sidecar.istio.io/excludeInboundPorts: "8080"
  grafana:
    podAnnotations:
      # fix readiness and liveness probes
      sidecar.istio.io/rewriteAppHTTPProbers: "true"
      # fix scraping metrics
      traffic.sidecar.istio.io/excludeInboundPorts: "3000"
  prometheusOperator:
    podAnnotations:
      # fix scraping metrics
      traffic.sidecar.istio.io/excludeInboundPorts: "8080"
  prometheus:
    prometheusSpec:
      podMetadata:
        annotations:
          traffic.sidecar.istio.io/includeInboundPorts: ""   # do not intercept any inbound ports
          traffic.sidecar.istio.io/includeOutboundIPRanges: ""  # do not intercept any outbound traffic
          proxy.istio.io/config: |  # configure an env variable `OUTPUT_CERTS` to write certificates to the given folder
            proxyMetadata:
              OUTPUT_CERTS: /etc/istio-output-certs
          sidecar.istio.io/userVolumeMount: '[{"name": "istio-certs", "mountPath": "/etc/istio-output-certs"}]' # mount the shared volume at sidecar proxy
      volumes:
        - emptyDir:
            medium: Memory
          name: istio-certs
      volumeMounts:
        - mountPath: /etc/prom-certs/
          name: istio-certs
    # https://istio.io/latest/docs/ops/integrations/prometheus/#tls-settings
    additionalServiceMonitors:
      - ...
        endpoints:
          - ...
            # https://istio.io/latest/docs/ops/integrations/prometheus/#tls-settings
            scheme: https
            tlsConfig:
              caFile: /etc/prom-certs/root-cert.pem
              certFile: /etc/prom-certs/cert-chain.pem
              keyFile: /etc/prom-certs/key.pem
              insecureSkipVerify: true
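
With this setup, the Istio sidecar writes its certificates to the shared istio-certs volume (via OUTPUT_CERTS), and Prometheus reads them from /etc/prom-certs/ to scrape targets over mutual TLS. As a quick check, you can list the mounted certificates in the Prometheus container (placeholders in angle brackets):

kubectl exec -n <namespace> <prometheus pod> -c prometheus -- ls /etc/prom-certs/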

Adjust receiver-mock configuration

The patch for receiver-mock contains two significant changes:

  • an additional volume /etc/prom-certs, which makes it possible to mimic Prometheus behavior:

    curl -k --key /etc/prom-certs/key.pem --cert /etc/prom-certs/cert-chain.pem https://10.1.126.170:24231/metrics
  • an additional service port 3002, which is not managed by Istio but points to the standard port 3000. This change is required for the setup job to work correctly outside of Istio (sketched below).
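
A sketch of the extra Service port described above (field values are illustrative; the actual patch lives in the Vagrant tooling):

ports:
  - name: http
    port: 3000
    targetPort: 3000
  - name: http-non-istio  # bypasses Istio, used by the setup job
    port: 3002
    targetPort: 3000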

Adjust setup job configuration

The setup job disables the Istio sidecar, because the job can finish before the sidecar is ready, which causes it to fail. This is done with the following configuration:

sumologic:
  setup:
    job:
      podAnnotations:
        # Disable istio sidecar for setup job
        sidecar.istio.io/inject: "false"
  # Use the non-Istio port of receiver-mock
  endpoint: http://receiver-mock.receiver-mock:3002/terraform/api/

Troubleshooting

Replace Prometheus with OpenTelemetry

You may face the following error when replacing Prometheus with OpenTelemetry on Vagrant:

Error: UPGRADE FAILED: resource mapping not found for name: "collection-sumologic-metrics" namespace: "sumologic" from "": no matches for kind "OpenTelemetryCollector" in version "opentelemetry.io/v1alpha1"
ensure CRDs are installed first
make: *** [/sumologic/vagrant/Makefile:52: helm-upgrade] Error 1

To fix this issue, remove the collectors by running:

sumo-make clean

and then try the Helm upgrade again by running:

sumo-make upgrade

Tips and tricks

  • To manually fetch Fluentd metrics using receiver-mock, use the following command from the receiver-mock container (a tip on finding the IP and port follows after the command):

    export IP_ADDRESS=<fluentd metrics ip>
    export PORT=<Fluentd metrics port>
    curl --http1.1 -k --key /etc/prom-certs/key.pem --cert /etc/prom-certs/cert-chain.pem  https://${IP_ADDRESS}:${PORT}/metrics
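
To find values for IP_ADDRESS and PORT, you can list the pods with their IPs (assuming the collection runs in the sumologic namespace):

kubectl get pods -n sumologic -o wide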