diff --git a/README.md b/README.md index 43182dc875..4f3333337b 100644 --- a/README.md +++ b/README.md @@ -726,7 +726,7 @@ partial_name: palette-setup This is how you set up Palette in {props.cloud}. -This is a `. +This is an . ``` The path of the link should be the path of the destination file from the root directory, without any back operators diff --git a/_partials/_authenticate-palette-cli.mdx b/_partials/_authenticate-palette-cli.mdx new file mode 100644 index 0000000000..2abfac4829 --- /dev/null +++ b/_partials/_authenticate-palette-cli.mdx @@ -0,0 +1,40 @@ +--- +partial_category: pcg-vmware +partial_name: authenticate-palette-cli +--- + +The initial step to deploy a PCG using Palette CLI involves authenticating with your Palette environment using the + command. +In your terminal, execute the following command. + +```bash +palette login +``` + +Once issued, you will be prompted for several parameters to complete the authentication. The table below outlines the +required parameters along with the values that will be utilized in this tutorial. If a parameter is specific to your +environment and Palette account, such as your Palette API key, ensure to input the value according to your environment. +Check out the guide for +more information. option. + +| **Parameter** | **Value** | **Environment-Specific** | +| ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ | +| **Spectro Cloud Console** | `https://console.spectrocloud.com`. If using a self-hosted instance of Palette, enter the URL for that instance. | No | +| **Allow Insecure Connection** | `Y`. Enabling this option bypasses x509 CA verification. In production environments, enter `Y` if you are using a self-hosted Palette or VerteX instance with self-signed TLS certificates and need to provide a file path to the instance CA. Otherwise, enter `N`. | No | +| **Spectro Cloud API Key** | Enter your Palette API Key. | Yes | +| **Spectro Cloud Organization** | Select your Palette Organization name. | Yes | +| **Spectro Cloud Project** | `None (TenantAdmin)` | No | +| **Acknowledge** | Accept the login banner message. messages are only displayed if the tenant admin enabled a login banner. | Yes | + +After accepting the login banner message, you will receive the following output confirming you have successfully +authenticated with Palette. + +```text hideClipboard +Welcome to Spectro Cloud Palette +``` + +The video below demonstrates Palette's authentication process. Ensure you utilize values specific to your environment, +such as the correct Palette URL. Contact your Palette administrator for the correct URL if you use a self-hosted Palette +or VerteX instance. + + diff --git a/_partials/_aws-static-credentials-setup.mdx b/_partials/_aws-static-credentials-setup.mdx new file mode 100644 index 0000000000..492603afcd --- /dev/null +++ b/_partials/_aws-static-credentials-setup.mdx @@ -0,0 +1,35 @@ +--- +partial_category: palette-setup +partial_name: aws-static-credentials +--- + +1. Create an IAM Role or IAM User for Palette. Use the following resources if you need additional help. + + - [IAM Role creation guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html). 
+ - [IAM User creation guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html). + +2. In the AWS console, assign the Palette-required IAM policies to the IAM role or the IAM user that Palette will use. + +3. Log in to [Palette](https://console.spectrocloud.com) as tenant admin. + +4. From the left **Main Menu**, click on **Tenant Settings**. + +5. Select **Cloud Accounts**, and click **+Add AWS Account**. + +6. In the cloud account creation wizard provide the following information: + + - **Account Name:** Custom name for the cloud account. + + - **Description:** Optional description for the cloud account. + - **Partition:** Choose **AWS** from the **drop-down Menu**. + + - **Credentials:** + - AWS Access key + - AWS Secret access key + +7. Click the **Validate** button to validate the credentials. + +8. Once the credentials are validated, the **Add IAM Policies** toggle displays. Toggle **Add IAM Policies** on. + +9. Use the **drop-down Menu**, which lists available IAM policies in your AWS account, to select any desired IAM + policies you want to assign to Palette IAM role or IAM user. diff --git a/_partials/_azure-cloud-account-setup.mdx b/_partials/_azure-cloud-account-setup.mdx new file mode 100644 index 0000000000..1afed5e292 --- /dev/null +++ b/_partials/_azure-cloud-account-setup.mdx @@ -0,0 +1,32 @@ +--- +partial_category: palette-setup +partial_name: azure-cloud-account +--- + +Use the following steps to add an Azure or Azure Government account in Palette or Palette VerteX. + +1. Log in to [Palette](https://console.spectrocloud.com) or Palette VerteX as a tenant admin. + +2. From the left **Main Menu**, select **Tenant Settings**. + +3. Next, select **Cloud Accounts** in the **Tenant Settings Menu**. + +4. Locate **Azure**, and click **+ Add Azure Account**. + +5. Fill out the following information, and click **Confirm** to complete the registration. + +| **Basic Information** | **Description** | +| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| **Account Name** | A custom account name. | +| **Tenant ID** | Unique tenant ID from Azure Management Portal. | +| **Client ID** | Unique client ID from Azure Management Portal. | +| **Client Secret** | Azure secret for authentication. Refer to Microsoft's reference guide for creating a [Client Secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#create-an-azure-active-directory-application). | +| **Cloud** | Select **Azure Public Cloud** or **Azure US Government**. | +| **Tenant Name** | An optional tenant name. | +| **Disable Properties** | This option prevents Palette and VerteX from creating Azure Virtual Networks (VNets) and other network resources on your behalf for static placement deployments. If you enable this option, all users must manually specify a pre-existing VNet, subnets, and security groups when creating clusters. | +| **Connect Private Cloud Gateway** | Select this option to connect to a Private Cloud Gateway (PCG) if you have a PCG deployed in your environment. Refer to the PCG page to learn more about a PCG. | + +6. After providing the required values, click the **Validate** button. 
If the client secret you provided is correct, a
+   _Credentials validated_ success message with a green check is displayed.
+
+7. Click **Confirm** to complete the registration.
diff --git a/_partials/_create-tenant-api-key.mdx b/_partials/_create-tenant-api-key.mdx
new file mode 100644
index 0000000000..42b66330cd
--- /dev/null
+++ b/_partials/_create-tenant-api-key.mdx
@@ -0,0 +1,34 @@
+---
+partial_category: palette-setup
+partial_name: create-tenant-api-key
+---
+
+1. Log in to [Palette](https://console.spectrocloud.com) as a tenant admin.
+
+2. Switch to the **Tenant Admin** scope.
+
+3. Navigate to the left **Main Menu** and select **Tenant Settings**.
+
+4. From the **Tenant Settings Menu**, select **API Keys**.
+
+5. Click on **Add New API key**.
+
+6. Fill out the following input fields:
+
+| **Input Field**     | **Description**                                                                                                    |
+| ------------------- | ------------------------------------------------------------------------------------------------------------------ |
+| **API Key Name**    | Assign a name to the API key.                                                                                      |
+| **Description**     | Provide a description for the API key.                                                                             |
+| **User Name**       | Select the user to assign the API key.                                                                             |
+| **Expiration Date** | Select an expiration date from the available options. You can also specify a custom date by selecting **Custom**.  |
+
+7. Click the **Generate** button.
+
+8. Copy the API key and save it in a secure location, such as a password manager. Share the API key with the user you
+   created the API key for.
+
+:::warning
+
+Ensure you save the API key in a secure location. You will not be able to view the API key again.
+
+:::
diff --git a/_partials/_create-upload-ssh-key.mdx b/_partials/_create-upload-ssh-key.mdx
new file mode 100644
index 0000000000..4bd7834101
--- /dev/null
+++ b/_partials/_create-upload-ssh-key.mdx
@@ -0,0 +1,64 @@
+---
+partial_category: palette-setup
+partial_name: generate-ssh-key
+---
+
+1. Open the terminal on your computer.
+
+2. Check for existing SSH keys by invoking the following command.
+
+   ```shell
+   ls -la ~/.ssh
+   ```
+
+   If you see files named **id_rsa** and **id_rsa.pub**, you already have an SSH key pair and can skip to step 8. If
+   not, proceed to step 3.
+
+3. Generate a new SSH key pair by issuing the following command.
+
+   ```shell
+   ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
+   ```
+
+   Replace `your_email@example.com` with your actual email address.
+
+4. Press Enter to accept the default file location for the key pair.
+
+5. Enter a passphrase (optional) and confirm it. We recommend using a strong passphrase for added security.
+
+6. Copy the public SSH key value. Use the `cat` command to display the public key.
+
+   ```shell
+   cat ~/.ssh/id_rsa.pub
+   ```
+
+   Copy the entire key, including the `ssh-rsa` prefix and your email address at the end.
+
+7. Log in to [Palette](https://console.spectrocloud.com).
+
+8. Navigate to the left **Main Menu**, select **Project Settings**, and then the **SSH Keys** tab.
+
+9. Open the **Add New SSH Key** tab and complete the **Add Key** input form:
+
+   - **Name**: Provide a unique name for the SSH key.
+
+   - **SSH Key**: Paste the SSH public key contents from the key pair generated earlier.
+
+10. Click **Confirm** to complete the wizard.
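For reference, a copied RSA public key is a single line similar to the following example; the key material is shortened here for illustration only.

```text hideClipboard
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQ...truncated...dQ== your_email@example.com
```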
+ +:::info + +You can edit or delete SSH keys later by using the **three-dot Menu** to the right of each key. + +::: + +During cluster creation, assign your SSH key to a cluster. You can use multiple keys to a project, but only one key can +be assigned to an individual cluster. diff --git a/_partials/_delete-pcg-vmware.mdx b/_partials/_delete-pcg-vmware.mdx new file mode 100644 index 0000000000..d01166f6ac --- /dev/null +++ b/_partials/_delete-pcg-vmware.mdx @@ -0,0 +1,21 @@ +--- +partial_category: pcg-vmware +partial_name: delete-pcg-ui +--- + +After deleting your VMware cluster and cluster profile, proceed with the PCG deletion. Log in to Palette as a tenant +admin, navigate to the left **Main Menu** and select **Tenant Settings**. Next, from the **Tenant Settings Menu**, click +on **Private Cloud Gateways**. Identify the PCG you want to delete, click on the **Three-Dot Menu** at the end of the +PCG row, and select **Delete**. Click **OK** to confirm the PCG deletion. + +![Delete PCG image](/clusters_pcg_deploy-app-pcg_pcg-delete.webp) + +Palette will delete the PCG and the Palette services deployed on the PCG node. However, the underlying infrastructure +resources, such as the virtual machine, must be removed manually from VMware vSphere. + +Log in to your VMware vSphere server and select the VM representing the PCG node named `gateway-tutorial-cp`. Click on +the **Three-Dot Actions** button, select **Power**, and **Power Off** to power off the machine. Once the machine is +powered off, click on the **Three-Dot Actions** button again and select **Delete from Disk** to remove the machine from +your VMware vSphere environment. + +![Delete VMware VM](/clusters_pcg_deploy-app-pcg_vmware-delete.webp) diff --git a/_partials/_deploy-pcg-palette-vmware.mdx b/_partials/_deploy-pcg-palette-vmware.mdx new file mode 100644 index 0000000000..f12e7fb25c --- /dev/null +++ b/_partials/_deploy-pcg-palette-vmware.mdx @@ -0,0 +1,157 @@ +--- +partial_category: pcg-vmware +partial_name: deploy-pcg-palette-cli +--- + +After authenticating with Palette, you can proceed with the PCG creation process. Issue the command below to start the +PCG installation. + +```bash +palette pcg install +``` + +The `palette pcg install` command will prompt you for information regarding your PCG cluster, vSphere environment, and +resource configurations. The following tables display the required parameters along with the values that will be used in +this tutorial. Enter the provided values when prompted. If a parameter is specific to your environment, such as your +vSphere endpoint, enter the corresponding value according to your environment. For detailed information about each +parameter, refer to the +guide. + +:::info + +The PCG to be deployed in this tutorial is intended for educational purposes only and is not recommended for production +environments. + +::: + +1. **PCG General Information** + + Configure the PCG general information, including the **Cloud Type** and **Private Cloud Gateway Name**, as shown in + the table below. 
+ + | **Parameter** | **Value** | **Environment-Specific** | + | :--------------------------------------------------- | ------------------ | ------------------------ | + | **Management Plane Type** | `Palette` | No | + | **Enable Ubuntu Pro (required for production)** | `N` | No | + | **Select an image registry type** | `Default` | No | + | **Cloud Type** | `VMware vSphere` | No | + | **Private Cloud Gateway Name** | `gateway-tutorial` | No | + | **Share PCG Cloud Account across platform Projects** | `Y` | No | + +2. **Environment Configuration** + + Enter the environment configuration information, such as the **Pod CIDR** and **Service IP Range** according to the + table below. + + | **Parameter** | **Value** | **Environment-Specific** | + | :------------------- | ------------------------------------------------------------------------------------------------------------------- | ------------------------ | + | **HTTPS Proxy** | Skip. | No | + | **HTTP Proxy** | Skip. | No | + | **Pod CIDR** | `172.16.0.0/20`. The pod IP addresses should be unique and not overlap with any machine IPs in the environment. | No | + | **Service IP Range** | `10.155.0.0/24`. The service IP addresses should be unique and not overlap with any machine IPs in the environment. | No | + +3. **vSphere Account Information** + + Enter the information specific to your vSphere account. + + | **Parameter** | **Value** | **Environment-Specific** | + | -------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------ | + | **vSphere Endpoint** | Your vSphere endpoint. You can specify a Full Qualified Domain Name (FQDN) or an IP address. Make sure you specify the endpoint without the HTTP scheme `https://` or `http://`. Example: `vcenter.mycompany.com`. | Yes | + | **vSphere Username** | Your vSphere account username. | Yes | + | **vSphere Password** | Your vSphere account password. | Yes | + | **Allow Insecure Connection (Bypass x509 Verification)** | `Y`. Enabling this option bypasses x509 CA verification. In production environments, enter `N` if using a custom registry with self-signed SSL certificates. Otherwise, enter `Y`. | No | + +4. **vSphere Cluster Configuration** + + Enter the PCG cluster configuration information. For example, specify the vSphere **Resource Pool** to be targeted by + the PCG cluster. + + | **Parameter** | **Value** | **Environment-Specific** | + | -------------------------------------------------------- | ---------------------------------------------------------------------- | ------------------------ | + | **Datacenter** | The vSphere data center to target when deploying the PCG cluster. | Yes | + | **Folder** | The vSphere folder to target when deploying the PCG cluster. | Yes | + | **Network** | The port group to which the PCG cluster will be connected. | Yes | + | **Resource Pool** | The vSphere resource pool to target when deploying the PCG cluster. | Yes | + | **Cluster** | The vSphere compute cluster to use for the PCG deployment. | Yes | + | **Select specific Datastore or use a VM Storage Policy** | `Datastore` | No | + | **Datastore** | The vSphere datastore to use for the PCG deployment. | Yes | + | **Add another Fault Domain** | `N` | No | + | **NTP Servers** | Skip. 
| No | + | **SSH Public Keys** | Provide a public OpenSSH key to be used to connect to the PCG cluster. | Yes | + +5. **PCG Cluster Size** + + This tutorial will deploy a one-node PCG with dynamic IP placement (DDNS). If needed, you can convert a single-node + PCG to a multi-node PCG to provide additional capacity. Refer to the + guide for more + information. + + | **Parameter** | **Value** | **Environment-Specific** | + | ------------------- | ---------------------------------------------------------------------------- | ------------------------ | + | **Number of Nodes** | `1` | No | + | **Placement Type** | `DDNS` | No | + | **Search domains** | Comma-separated list of DNS search domains. For example, `spectrocloud.dev`. | Yes | + +6. **Cluster Settings** + + Set the parameter **Patch OS on boot** to `N`, meaning the OS of the PCG hosts will not be patched on the first boot. + + | **Parameter** | **Value** | **Environment-Specific** | + | -------------------- | --------- | ------------------------ | + | **Patch OS on boot** | `N` | No | + +7. **vSphere Machine Configuration** + + Set the size of the PCG as small (**S**) as this PCG will not be used in production environments. + + | **Parameter** | **Value** | **Environment-Specific** | + | ------------- | --------------------------------------------- | ------------------------ | + | **S** | `4 CPU, 4 GB of Memory, and 60 GB of Storage` | No | + +8. **Node Affinity Configuration Information** + + Set **Node Affinity** to `N`, indicating no affinity between Palette pods and control plane nodes. + + | **Parameter** | **Value** | **Environment-Specific** | + | ----------------- | --------- | ------------------------ | + | **Node Affinity** | `N` | No | + +After answering the prompts of the `pcg install` command, a new PCG configuration file is generated, and its location is +displayed on the console. + +```text hideClipboard +==== PCG config saved ==== Location: /home/ubuntu/.palette/pcg/pcg-20240313152521/pcg.yaml +``` + +Next, Palette CLI will create a local [kind](https://kind.sigs.k8s.io/) cluster that will be used to bootstrap the PCG +cluster deployment in your VMware environment. Once installed, the PCG registers itself with Palette and creates a +VMware cloud account with the same name as the PCG. + +The following recording demonstrates the `pcg install` command with the `--config-only` flag. When using this flag, a +reusable configuration file named **pcg.yaml** is created under the path **.palette/pcg**. You can then utilize this +file to install a PCG with predefined values using the command `pcg install` with the `--config-file` flag. Refer to the + page for further information +about the command. + + + +
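For reference, the two-step flow described above looks similar to the following commands. Both flags come from the description above; the configuration file path is the one shown in the earlier sample output and will differ in your environment.

```bash
# Generate and save the PCG configuration file without deploying the PCG.
palette pcg install --config-only

# Install a PCG using the previously saved configuration file.
# Replace the path with the location printed when the configuration was generated.
palette pcg install --config-file /home/ubuntu/.palette/pcg/pcg-20240313152521/pcg.yaml
```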
+
+ +You can monitor the PCG cluster creation by logging into Palette and switching to the **Tenant Admin** scope. Next, +click on **Tenant Settings** from the left **Main Menu** and select **Private Cloud Gateways**. Then, click on the PCG +cluster you just created and check the deployment progress under the **Events** tab. + +![PCG Events page.](/clusters_pcg_deploy-app-pcg_pcg-events.webp) + +You can also track the PCG deployment progress from your terminal. Depending on the PCG size and infrastructure +environment, the deployment might take up to 30 minutes. Upon completion, the local kind cluster is automatically +deleted from your machine. + +![Palette CLI PCG deployment](/clusters_pcg_deploy-app-pcg_pcg-cli.webp) + +Next, log in to Palette as a tenant admin. Navigate to the left **Main Menu** and select **Tenant Settings**. Click on +**Private Cloud Gateways** from the **Tenant Settings Menu** and select the PCG you just created. Ensure that the PCG +cluster status is **Running** and **Healthy** before proceeding. + +![PCG Overview page.](/clusters_pcg_deploy-app-pcg_pcg-health.webp) diff --git a/_partials/_gcp-cloud-account-setup.mdx b/_partials/_gcp-cloud-account-setup.mdx new file mode 100644 index 0000000000..1b0087caf8 --- /dev/null +++ b/_partials/_gcp-cloud-account-setup.mdx @@ -0,0 +1,28 @@ +--- +partial_category: palette-setup +partial_name: gcp-cloud-account +--- + +1. Log in to [Palette](https://console.spectrocloud.com) as Tenant admin. + +2. Navigate to the left **Main Menu** and select **Tenant Settings**. + +3. Select **Cloud Accounts** and click on **Add GCP Account**. + +4. In the cloud account creation wizard, provide the following information: + + - **Account Name:** Custom name for the cloud account. + + - **JSON Credentials:** The JSON credentials object. + +
+ + :::info + + You can use the **Upload** button to upload the JSON file you downloaded from the GCP console. + + ::: + +5. Click the **Validate** button to validate the credentials. + +6. When the credentials are validated, click on **Confirm** to save your changes. diff --git a/_partials/getting-started/_cluster_observability.mdx b/_partials/getting-started/_cluster_observability.mdx new file mode 100644 index 0000000000..43ba13bede --- /dev/null +++ b/_partials/getting-started/_cluster_observability.mdx @@ -0,0 +1,17 @@ +--- +partial_category: getting-started +partial_name: cluster-observability +--- + +As we have seen throughout this tutorial, Palette exposes a set of workload metrics out-of-the-box to help cluster +administrators better understand the resource utilization of the cluster. The in Palette are a snapshot in +time and do not provide alerting capabilities. + +We recommend using a dedicated monitoring system in order to gain a better picture of resource utilization in your +environments. Several are available in the monitoring category that +you can use to add additional monitoring capabilities to your cluster. + +Refer to the +guide to learn how to deploy a monitoring stack using the open-source tool +[Prometheus](https://prometheus.io/docs/introduction/overview/) and how to configure a host cluster to forward metrics +to the monitoring stack. \ No newline at end of file diff --git a/_partials/getting-started/_cluster_profile_import_aws.mdx b/_partials/getting-started/_cluster_profile_import_aws.mdx new file mode 100644 index 0000000000..ffdd99792a --- /dev/null +++ b/_partials/getting-started/_cluster_profile_import_aws.mdx @@ -0,0 +1,109 @@ +--- +partial_category: getting-started +partial_name: import-hello-uni-aws +--- + +```json +{ + "metadata": { + "name": "aws-profile", + "description": "Cluster profile to deploy to AWS.", + "labels": {} + }, + "spec": { + "version": "1.0.0", + "template": { + "type": "cluster", + "cloudType": "aws", + "packs": [ + { + "name": "ubuntu-aws", + "type": "spectro", + "layer": "os", + "version": "22.04", + "tag": "22.04", + "values": "# Spectro Golden images includes most of the hardening as per CIS Ubuntu Linux 22.04 LTS Server L1 v1.0.0 standards\n\n# Uncomment below section to\n# 1. Include custom files to be copied over to the nodes and/or\n# 2. Execute list of commands before or after kubeadm init/join is executed\n#\n#kubeadmconfig:\n# preKubeadmCommands:\n# - echo \"Executing pre kube admin config commands\"\n# - update-ca-certificates\n# - 'systemctl restart containerd; sleep 3'\n# - 'while [ ! 
-S /var/run/containerd/containerd.sock ]; do echo \"Waiting for containerd...\"; sleep 1; done'\n# postKubeadmCommands:\n# - echo \"Executing post kube admin config commands\"\n# files:\n# - targetPath: /usr/local/share/ca-certificates/mycom.crt\n# targetOwner: \"root:root\"\n# targetPermissions: \"0644\"\n# content: |\n# -----BEGIN CERTIFICATE-----\n# MIICyzCCAbOgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl\n# cm5ldGVzMB4XDTIwMDkyMjIzNDMyM1oXDTMwMDkyMDIzNDgyM1owFTETMBEGA1UE\n# AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMdA\n# nZYs1el/6f9PgV/aO9mzy7MvqaZoFnqO7Qi4LZfYzixLYmMUzi+h8/RLPFIoYLiz\n# qiDn+P8c9I1uxB6UqGrBt7dkXfjrUZPs0JXEOX9U/6GFXL5C+n3AUlAxNCS5jobN\n# fbLt7DH3WoT6tLcQefTta2K+9S7zJKcIgLmBlPNDijwcQsbenSwDSlSLkGz8v6N2\n# 7SEYNCV542lbYwn42kbcEq2pzzAaCqa5uEPsR9y+uzUiJpv5tDHUdjbFT8tme3vL\n# 9EdCPODkqtMJtCvz0hqd5SxkfeC2L+ypaiHIxbwbWe7GtliROvz9bClIeGY7gFBK\n# jZqpLdbBVjo0NZBTJFUCAwEAAaMmMCQwDgYDVR0PAQH/BAQDAgKkMBIGA1UdEwEB\n# /wQIMAYBAf8CAQAwDQYJKoZIhvcNAQELBQADggEBADIKoE0P+aVJGV9LWGLiOhki\n# HFv/vPPAQ2MPk02rLjWzCaNrXD7aPPgT/1uDMYMHD36u8rYyf4qPtB8S5REWBM/Y\n# g8uhnpa/tGsaqO8LOFj6zsInKrsXSbE6YMY6+A8qvv5lPWpJfrcCVEo2zOj7WGoJ\n# ixi4B3fFNI+wih8/+p4xW+n3fvgqVYHJ3zo8aRLXbXwztp00lXurXUyR8EZxyR+6\n# b+IDLmHPEGsY9KOZ9VLLPcPhx5FR9njFyXvDKmjUMJJgUpRkmsuU1mCFC+OHhj56\n# IkLaSJf6z/p2a3YjTxvHNCqFMLbJ2FvJwYCRzsoT2wm2oulnUAMWPI10vdVM+Nc=\n# -----END CERTIFICATE-----", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "kubernetes", + "type": "spectro", + "layer": "k8s", + "version": "1.29.8", + "tag": "1.29.x", + "values": "# spectrocloud.com/enabled-presets: Kube Controller Manager:loopback-ctrlmgr,Kube Scheduler:loopback-scheduler\npack:\n content:\n images:\n - image: registry.k8s.io/coredns/coredns:v1.11.1\n - image: registry.k8s.io/etcd:3.5.12-0\n - image: registry.k8s.io/kube-apiserver:v1.29.8\n - image: registry.k8s.io/kube-controller-manager:v1.29.8\n - image: registry.k8s.io/kube-proxy:v1.29.8\n - image: registry.k8s.io/kube-scheduler:v1.29.8\n - image: registry.k8s.io/pause:3.9\n - image: registry.k8s.io/pause:3.8\n #CIDR Range for Pods in cluster\n # Note : This must not overlap with any of the host or service network\n podCIDR: \"192.168.0.0/16\"\n #CIDR notation IP range from which to assign service cluster IPs\n # Note : This must not overlap with any IP ranges assigned to nodes for pods.\n serviceClusterIpRange: \"10.96.0.0/12\"\n # serviceDomain: \"cluster.local\"\n\nkubeadmconfig:\n apiServer:\n extraArgs:\n # Note : secure-port flag is used during kubeadm init. 
Do not change this flag on a running cluster\n secure-port: \"6443\"\n anonymous-auth: \"true\"\n profiling: \"false\"\n disable-admission-plugins: \"AlwaysAdmit\"\n default-not-ready-toleration-seconds: \"60\"\n default-unreachable-toleration-seconds: \"60\"\n enable-admission-plugins: \"AlwaysPullImages,NamespaceLifecycle,ServiceAccount,NodeRestriction,PodSecurity\"\n admission-control-config-file: \"/etc/kubernetes/pod-security-standard.yaml\"\n audit-log-path: /var/log/apiserver/audit.log\n audit-policy-file: /etc/kubernetes/audit-policy.yaml\n audit-log-maxage: \"30\"\n audit-log-maxbackup: \"10\"\n audit-log-maxsize: \"100\"\n authorization-mode: RBAC,Node\n tls-cipher-suites: \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"\n extraVolumes:\n - name: audit-log\n hostPath: /var/log/apiserver\n mountPath: /var/log/apiserver\n pathType: DirectoryOrCreate\n - name: audit-policy\n hostPath: /etc/kubernetes/audit-policy.yaml\n mountPath: /etc/kubernetes/audit-policy.yaml\n readOnly: true\n pathType: File\n - name: pod-security-standard\n hostPath: /etc/kubernetes/pod-security-standard.yaml\n mountPath: /etc/kubernetes/pod-security-standard.yaml\n readOnly: true\n pathType: File\n controllerManager:\n extraArgs:\n profiling: \"false\"\n terminated-pod-gc-threshold: \"25\"\n use-service-account-credentials: \"true\"\n feature-gates: \"RotateKubeletServerCertificate=true\"\n scheduler:\n extraArgs:\n profiling: \"false\"\n kubeletExtraArgs:\n read-only-port: \"0\"\n event-qps: \"0\"\n feature-gates: \"RotateKubeletServerCertificate=true\"\n protect-kernel-defaults: \"true\"\n rotate-server-certificates: \"true\"\n tls-cipher-suites: \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"\n files:\n - path: hardening/audit-policy.yaml\n targetPath: /etc/kubernetes/audit-policy.yaml\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n - path: hardening/90-kubelet.conf\n targetPath: /etc/sysctl.d/90-kubelet.conf\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n - targetPath: /etc/kubernetes/pod-security-standard.yaml\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n content: |\n apiVersion: apiserver.config.k8s.io/v1\n kind: AdmissionConfiguration\n plugins:\n - name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1\n kind: PodSecurityConfiguration\n defaults:\n enforce: \"baseline\"\n enforce-version: \"v1.29\"\n audit: \"baseline\"\n audit-version: \"v1.29\"\n warn: \"restricted\"\n warn-version: \"v1.29\"\n audit: \"restricted\"\n audit-version: \"v1.29\"\n exemptions:\n # Array of authenticated usernames to exempt.\n usernames: []\n # Array of runtime class names to exempt.\n runtimeClasses: []\n # Array of namespaces to exempt.\n namespaces: [kube-system]\n\n preKubeadmCommands:\n # For enabling 'protect-kernel-defaults' flag to kubelet, kernel parameters changes are required\n - 'echo \"====> Applying kernel parameters for Kubelet\"'\n - 'sysctl -p /etc/sysctl.d/90-kubelet.conf'\n \n postKubeadmCommands:\n - 'chmod 600 
/var/lib/kubelet/config.yaml'\n # - 'echo \"List of post kubeadm commands to be executed\"'\n\n# Client configuration to add OIDC based authentication flags in kubeconfig\n#clientConfig:\n #oidc-issuer-url: \"{{ .spectro.pack.kubernetes.kubeadmconfig.apiServer.extraArgs.oidc-issuer-url }}\"\n #oidc-client-id: \"{{ .spectro.pack.kubernetes.kubeadmconfig.apiServer.extraArgs.oidc-client-id }}\"\n #oidc-client-secret: 1gsranjjmdgahm10j8r6m47ejokm9kafvcbhi3d48jlc3rfpprhv\n #oidc-extra-scope: profile,email", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "cni-calico", + "type": "spectro", + "layer": "cni", + "version": "3.28.0", + "tag": "3.28.0", + "values": "# spectrocloud.com/enabled-presets: Microk8s:microk8s-false\npack:\n content:\n images:\n - image: gcr.io/spectro-images-public/packs/calico/3.28.0/cni:v3.28.0\n - image: gcr.io/spectro-images-public/packs/calico/3.28.0/node:v3.28.0\n - image: gcr.io/spectro-images-public/packs/calico/3.28.0/kube-controllers:v3.28.0\n\nmanifests:\n calico:\n microk8s: \"false\"\n images:\n cni: \"\"\n node: \"\"\n kubecontroller: \"\"\n # IPAM type to use. Supported types are calico-ipam, host-local\n ipamType: \"calico-ipam\"\n\n calico_ipam:\n assign_ipv4: true\n assign_ipv6: false\n\n # Should be one of CALICO_IPV4POOL_IPIP or CALICO_IPV4POOL_VXLAN \n encapsulationType: \"CALICO_IPV4POOL_IPIP\"\n\n # Should be one of Always, CrossSubnet, Never\n encapsulationMode: \"Always\"\n\n env:\n # Additional env variables for calico-node\n calicoNode:\n #IPV6: \"autodetect\"\n #FELIX_IPV6SUPPORT: \"true\"\n #CALICO_IPV6POOL_NAT_OUTGOING: \"true\"\n #CALICO_IPV4POOL_CIDR: \"192.168.0.0/16\"\n #IP_AUTODETECTION_METHOD: \"first-found\"\n\n # Additional env variables for calico-kube-controller deployment\n calicoKubeControllers:\n #LOG_LEVEL: \"info\"\n #SYNC_NODE_LABELS: \"true\"", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "csi-aws-ebs", + "type": "spectro", + "layer": "csi", + "version": "1.30.0", + "tag": "1.30.0", + "values": "# spectrocloud.com/enabled-presets: Microk8s:microk8s-false\npack:\n content:\n images:\n - image: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/aws-ebs-csi-driver:v1.30.0\n - image: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/external-provisioner:v4.0.1-eks-1-30-2\n - image: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/external-attacher:v4.5.1-eks-1-30-2\n - image: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/external-resizer:v1.10.1-eks-1-30-2\n - image: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/livenessprobe:v2.12.0-eks-1-30-2\n - image: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/node-driver-registrar:v2.10.1-eks-1-30-2\n - image: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/external-snapshotter/csi-snapshotter:v7.0.2-eks-1-30-2\n - image: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/volume-modifier-for-k8s:v0.3.0\n charts:\n - repo: https://kubernetes-sigs.github.io/aws-ebs-csi-driver \n name: aws-ebs-csi-driver\n version: 2.30.0\n namespace: \"kube-system\"\n\ncharts:\n aws-ebs-csi-driver:\n storageClasses: \n # Default Storage Class\n - name: spectro-storage-class\n # annotation metadata\n annotations:\n storageclass.kubernetes.io/is-default-class: \"true\"\n # label metadata\n # labels:\n 
# my-label-is: supercool\n # defaults to WaitForFirstConsumer\n volumeBindingMode: WaitForFirstConsumer\n # defaults to Delete\n reclaimPolicy: Delete\n parameters:\n # File system type: xfs, ext2, ext3, ext4\n csi.storage.k8s.io/fstype: \"ext4\"\n # EBS volume type: io1, io2, gp2, gp3, sc1, st1, standard\n type: \"gp2\"\n # I/O operations per second per GiB. Required when io1 or io2 volume type is specified.\n # iopsPerGB: \"\"\n # Applicable only when io1 or io2 volume type is specified\n # allowAutoIOPSPerGBIncrease: false\n # I/O operations per second. Applicable only for gp3 volumes.\n # iops: \"\"\n # Throughput in MiB/s. Applicable only for gp3 volumes.\n # throughput: \"\"\n # Whether the volume should be encrypted or not\n # encrypted: \"\"\n # The full ARN of the key to use when encrypting the volume. When not specified, the default KMS key is used.\n # kmsKeyId: \"\"\n # Additional Storage Class \n # - name: addon-storage-class\n # annotations:\n # storageclass.kubernetes.io/is-default-class: \"false\"\n # labels:\n # my-label-is: supercool\n # volumeBindingMode: WaitForFirstConsumer\n # reclaimPolicy: Delete\n # parameters:\n # csi.storage.k8s.io/fstype: \"ext4\"\n # type: \"gp2\"\n # iopsPerGB: \"\"\n # allowAutoIOPSPerGBIncrease: false\n # iops: \"\"\n # throughput: \"\"\n # encrypted: \"\"\n # kmsKeyId: \"\"\n\n image:\n repository: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/aws-ebs-csi-driver\n # Overrides the image tag whose default is v{{ .Chart.AppVersion }}\n tag: \"v1.30.0\"\n pullPolicy: IfNotPresent\n \n # -- Custom labels to add into metadata\n customLabels:\n {}\n # k8s-app: aws-ebs-csi-driver\n \n sidecars:\n provisioner:\n env: []\n image:\n pullPolicy: IfNotPresent\n repository: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/external-provisioner\n tag: \"v4.0.1-eks-1-30-2\"\n logLevel: 2\n # Additional parameters provided by external-provisioner.\n additionalArgs: []\n # Grant additional permissions to external-provisioner\n additionalClusterRoleRules:\n resources: {}\n # Tune leader lease election for csi-provisioner.\n # Leader election is on by default.\n leaderElection:\n enabled: true\n # Optional values to tune lease behavior.\n # The arguments provided must be in an acceptable time.ParseDuration format.\n # Ref: https://pkg.go.dev/flag#Duration\n # leaseDuration: \"15s\"\n # renewDeadline: \"10s\"\n # retryPeriod: \"5s\"\n securityContext:\n readOnlyRootFilesystem: true\n allowPrivilegeEscalation: false\n attacher:\n env: []\n image:\n pullPolicy: IfNotPresent\n repository: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/external-attacher\n tag: \"v4.5.1-eks-1-30-2\"\n # Tune leader lease election for csi-attacher.\n # Leader election is on by default.\n leaderElection:\n enabled: true\n # Optional values to tune lease behavior.\n # The arguments provided must be in an acceptable time.ParseDuration format.\n # Ref: https://pkg.go.dev/flag#Duration\n # leaseDuration: \"15s\"\n # renewDeadline: \"10s\"\n # retryPeriod: \"5s\"\n logLevel: 2\n # Additional parameters provided by external-attacher.\n additionalArgs: []\n # Grant additional permissions to external-attacher\n additionalClusterRoleRules: []\n resources: {}\n securityContext:\n readOnlyRootFilesystem: true\n allowPrivilegeEscalation: false\n snapshotter:\n # Enables the snapshotter sidecar even if the snapshot CRDs are not installed\n forceEnable: false\n env: []\n image:\n pullPolicy: IfNotPresent\n repository: 
gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/external-snapshotter/csi-snapshotter\n tag: \"v7.0.2-eks-1-30-2\"\n logLevel: 2\n # Additional parameters provided by csi-snapshotter.\n additionalArgs: []\n # Grant additional permissions to csi-snapshotter\n additionalClusterRoleRules: []\n resources: {}\n securityContext:\n readOnlyRootFilesystem: true\n allowPrivilegeEscalation: false\n livenessProbe:\n image:\n pullPolicy: IfNotPresent\n repository: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/livenessprobe\n tag: \"v2.12.0-eks-1-30-2\"\n # Additional parameters provided by livenessprobe.\n additionalArgs: []\n resources: {}\n securityContext:\n readOnlyRootFilesystem: true\n allowPrivilegeEscalation: false\n resizer:\n env: []\n image:\n pullPolicy: IfNotPresent\n repository: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/external-resizer\n tag: \"v1.10.1-eks-1-30-2\"\n # Tune leader lease election for csi-resizer.\n # Leader election is on by default.\n leaderElection:\n enabled: true\n # Optional values to tune lease behavior.\n # The arguments provided must be in an acceptable time.ParseDuration format.\n # Ref: https://pkg.go.dev/flag#Duration\n # leaseDuration: \"15s\"\n # renewDeadline: \"10s\"\n # retryPeriod: \"5s\"\n logLevel: 2\n # Additional parameters provided by external-resizer.\n additionalArgs: []\n # Grant additional permissions to external-resizer\n additionalClusterRoleRules: []\n resources: {}\n securityContext:\n readOnlyRootFilesystem: true\n allowPrivilegeEscalation: false\n nodeDriverRegistrar:\n env: []\n image:\n pullPolicy: IfNotPresent\n repository: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/node-driver-registrar\n tag: \"v2.10.1-eks-1-30-2\"\n logLevel: 2\n # Additional parameters provided by node-driver-registrar.\n additionalArgs: []\n resources: {}\n securityContext:\n readOnlyRootFilesystem: true\n allowPrivilegeEscalation: false\n livenessProbe:\n exec:\n command:\n - /csi-node-driver-registrar\n - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)\n - --mode=kubelet-registration-probe\n initialDelaySeconds: 30\n periodSeconds: 90\n timeoutSeconds: 15\n volumemodifier:\n env: []\n image:\n pullPolicy: IfNotPresent\n repository: gcr.io/spectro-images-public/packs/csi-aws-ebs/1.30.0/volume-modifier-for-k8s\n tag: \"v0.3.0\"\n leaderElection:\n enabled: true\n # Optional values to tune lease behavior.\n # The arguments provided must be in an acceptable time.ParseDuration format.\n # Ref: https://pkg.go.dev/flag#Duration\n # leaseDuration: \"15s\"\n # renewDeadline: \"10s\"\n # retryPeriod: \"5s\"\n logLevel: 2\n # Additional parameters provided by volume-modifier-for-k8s.\n additionalArgs: []\n resources: {}\n securityContext:\n readOnlyRootFilesystem: true\n allowPrivilegeEscalation: false\n \n proxy:\n http_proxy:\n no_proxy:\n \n imagePullSecrets: []\n nameOverride:\n fullnameOverride:\n \n awsAccessSecret:\n name: aws-secret\n keyId: key_id\n accessKey: access_key\n \n controller:\n batching: true\n volumeModificationFeature:\n enabled: false\n # Additional parameters provided by aws-ebs-csi-driver controller.\n additionalArgs: []\n sdkDebugLog: false\n loggingFormat: text\n affinity:\n nodeAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 1\n preference:\n matchExpressions:\n - key: eks.amazonaws.com/compute-type\n operator: NotIn\n values:\n - fargate\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - podAffinityTerm:\n labelSelector:\n matchExpressions:\n - key: app\n 
operator: In\n values:\n - ebs-csi-controller\n topologyKey: kubernetes.io/hostname\n weight: 100\n # The default filesystem type of the volume to provision when fstype is unspecified in the StorageClass.\n # If the default is not set and fstype is unset in the StorageClass, then no fstype will be set\n defaultFsType: ext4\n env: []\n # Use envFrom to reference ConfigMaps and Secrets across all containers in the deployment\n envFrom: []\n # If set, add pv/pvc metadata to plugin create requests as parameters.\n extraCreateMetadata: true\n # Extra volume tags to attach to each dynamically provisioned volume.\n # ---\n # extraVolumeTags:\n # key1: value1\n # key2: value2\n extraVolumeTags: {}\n httpEndpoint:\n # (deprecated) The TCP network address where the prometheus metrics endpoint\n # will run (example: `:8080` which corresponds to port 8080 on local host).\n # The default is empty string, which means metrics endpoint is disabled.\n # ---\n enableMetrics: false\n serviceMonitor:\n # Enables the ServiceMonitor resource even if the prometheus-operator CRDs are not installed\n forceEnable: false\n # Additional labels for ServiceMonitor object\n labels:\n release: prometheus\n # If set to true, AWS API call metrics will be exported to the following\n # TCP endpoint: \"0.0.0.0:3301\"\n # ---\n # ID of the Kubernetes cluster used for tagging provisioned EBS volumes (optional).\n k8sTagClusterId:\n logLevel: 2\n userAgentExtra: \"helm\"\n nodeSelector: {}\n deploymentAnnotations: {}\n podAnnotations: {}\n podLabels: {}\n priorityClassName: system-cluster-critical\n # AWS region to use. If not specified then the region will be looked up via the AWS EC2 metadata\n # service.\n # ---\n # region: us-east-1\n region:\n replicaCount: 2\n revisionHistoryLimit: 10\n socketDirVolume:\n emptyDir: {}\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: 1\n # type: RollingUpdate\n # rollingUpdate:\n # maxSurge: 0\n # maxUnavailable: 1\n resources:\n requests:\n cpu: 10m\n memory: 40Mi\n limits:\n cpu: 100m\n memory: 256Mi\n serviceAccount:\n # A service account will be created for you if set to true. 
Set to false if you want to use your own.\n create: true\n name: ebs-csi-controller-sa\n annotations: {}\n ## Enable if EKS IAM for SA is used\n # eks.amazonaws.com/role-arn: arn::iam:::role/ebs-csi-role\n automountServiceAccountToken: true\n tolerations:\n - key: CriticalAddonsOnly\n operator: Exists\n - effect: NoExecute\n operator: Exists\n tolerationSeconds: 300\n # TSCs without the label selector stanza\n #\n # Example:\n #\n # topologySpreadConstraints:\n # - maxSkew: 1\n # topologyKey: topology.kubernetes.io/zone\n # whenUnsatisfiable: ScheduleAnyway\n # - maxSkew: 1\n # topologyKey: kubernetes.io/hostname\n # whenUnsatisfiable: ScheduleAnyway\n topologySpreadConstraints: []\n # securityContext on the controller pod\n securityContext:\n runAsNonRoot: true\n runAsUser: 1000\n runAsGroup: 1000\n fsGroup: 1000\n # Add additional volume mounts on the controller with controller.volumes and controller.volumeMounts\n volumes: []\n # Add additional volumes to be mounted onto the controller:\n # - name: custom-dir\n # hostPath:\n # path: /path/to/dir\n # type: Directory\n volumeMounts: []\n # And add mount paths for those additional volumes:\n # - name: custom-dir\n # mountPath: /mount/path\n # ---\n # securityContext on the controller container (see sidecars for securityContext on sidecar containers)\n containerSecurityContext:\n readOnlyRootFilesystem: true\n allowPrivilegeEscalation: false\n initContainers: []\n # containers to be run before the controller's container starts.\n #\n # Example:\n #\n # - name: wait\n # image: busybox\n # command: [ 'sh', '-c', \"sleep 20\" ]\n # Enable opentelemetry tracing for the plugin running on the daemonset\n otelTracing: {}\n # otelServiceName: ebs-csi-controller\n # otelExporterEndpoint: \"http://localhost:4317\"\n \n node:\n env: []\n envFrom: []\n kubeletPath: /var/lib/kubelet\n loggingFormat: text\n logLevel: 2\n priorityClassName:\n additionalArgs: []\n affinity:\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: eks.amazonaws.com/compute-type\n operator: NotIn\n values:\n - fargate\n - key: node.kubernetes.io/instance-type\n operator: NotIn\n values:\n - a1.medium\n - a1.large\n - a1.xlarge\n - a1.2xlarge\n - a1.4xlarge\n nodeSelector: {}\n daemonSetAnnotations: {}\n podAnnotations: {}\n podLabels: {}\n tolerateAllTaints: true\n tolerations:\n - operator: Exists\n effect: NoExecute\n tolerationSeconds: 300\n resources:\n requests:\n cpu: 10m\n memory: 40Mi\n limits:\n cpu: 100m\n memory: 256Mi\n revisionHistoryLimit: 10\n probeDirVolume:\n emptyDir: {}\n serviceAccount:\n create: true\n name: ebs-csi-node-sa\n annotations: {}\n ## Enable if EKS IAM for SA is used\n # eks.amazonaws.com/role-arn: arn::iam:::role/ebs-csi-role\n automountServiceAccountToken: true\n # Enable the linux daemonset creation\n enableLinux: true\n enableWindows: false\n # The number of attachment slots to reserve for system use (and not to be used for CSI volumes)\n # When this parameter is not specified (or set to -1), the EBS CSI Driver will attempt to determine the number of reserved slots via heuristic\n # Cannot be specified at the same time as `node.volumeAttachLimit`\n reservedVolumeAttachments:\n # The \"maximum number of attachable volumes\" per node\n # Cannot be specified at the same time as `node.reservedVolumeAttachments`\n volumeAttachLimit:\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n maxUnavailable: \"10%\"\n hostNetwork: false\n # securityContext on the node pod\n 
securityContext:\n # The node pod must be run as root to bind to the registration/driver sockets\n runAsNonRoot: false\n runAsUser: 0\n runAsGroup: 0\n fsGroup: 0\n # Add additional volume mounts on the node pods with node.volumes and node.volumeMounts\n volumes: []\n # Add additional volumes to be mounted onto the node pods:\n # - name: custom-dir\n # hostPath:\n # path: /path/to/dir\n # type: Directory\n volumeMounts: []\n # And add mount paths for those additional volumes:\n # - name: custom-dir\n # mountPath: /mount/path\n # ---\n # securityContext on the node container (see sidecars for securityContext on sidecar containers)\n containerSecurityContext:\n readOnlyRootFilesystem: true\n privileged: true\n # Enable opentelemetry tracing for the plugin running on the daemonset\n otelTracing: {}\n # otelServiceName: ebs-csi-node\n # otelExporterEndpoint: \"http://localhost:4317\"\n \n additionalDaemonSets:\n # Additional node DaemonSets, using the node config structure\n # See docs/additional-daemonsets.md for more information\n #\n # example:\n # nodeSelector:\n # node.kubernetes.io/instance-type: c5.large\n # volumeAttachLimit: 15\n \n # Enable compatibility for the A1 instance family via use of an AL2-based image in a separate DaemonSet\n # a1CompatibilityDaemonSet: true\n \n # storageClasses: []\n # Add StorageClass resources like:\n # - name: ebs-sc\n # # annotation metadata\n # annotations:\n # storageclass.kubernetes.io/is-default-class: \"true\"\n # # label metadata\n # labels:\n # my-label-is: supercool\n # # defaults to WaitForFirstConsumer\n # volumeBindingMode: WaitForFirstConsumer\n # # defaults to Delete\n # reclaimPolicy: Retain\n # parameters:\n # encrypted: \"true\"\n \n volumeSnapshotClasses: []\n # Add VolumeSnapshotClass resources like:\n # - name: ebs-vsc\n # # annotation metadata\n # annotations:\n # snapshot.storage.kubernetes.io/is-default-class: \"true\"\n # # label metadata\n # labels:\n # my-label-is: supercool\n # # deletionPolicy must be specified\n # deletionPolicy: Delete\n # parameters:\n \n # Use old CSIDriver without an fsGroupPolicy set\n # Intended for use with older clusters that cannot easily replace the CSIDriver object\n # This parameter should always be false for new installations\n useOldCSIDriver: false", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "hello-universe", + "type": "oci", + "layer": "addon", + "version": "1.2.0", + "tag": "1.2.0", + "values": "# spectrocloud.com/enabled-presets: Backend:disable-api\npack:\n content:\n images:\n - image: ghcr.io/spectrocloud/hello-universe:1.2.0\n spectrocloud.com/install-priority: 0\n\nmanifests:\n hello-universe:\n images:\n hellouniverse: ghcr.io/spectrocloud/hello-universe:1.2.0\n apiEnabled: false\n namespace: hello-universe\n port: 8080\n replicas: 1", + "registry": { + "metadata": { + "uid": "64eaff5630402973c4e1856a", + "name": "Palette Community Registry", + "kind": "oci", + "isPrivate": true, + "providerType": "pack" + } + } + } + ] + }, + "variables": [] + } +} +``` \ No newline at end of file diff --git a/_partials/getting-started/_cluster_profile_import_azure.mdx b/_partials/getting-started/_cluster_profile_import_azure.mdx new file mode 100644 index 0000000000..45e50ae595 --- /dev/null +++ b/_partials/getting-started/_cluster_profile_import_azure.mdx @@ -0,0 +1,109 @@ +--- +partial_category: getting-started +partial_name: import-hello-uni-azure +--- + 
+```json +{ + "metadata": { + "name": "azure-profile", + "description": "Cluster profile to deploy to Azure.", + "labels": {} + }, + "spec": { + "version": "1.0.0", + "template": { + "type": "cluster", + "cloudType": "azure", + "packs": [ + { + "name": "ubuntu-azure", + "type": "oci", + "layer": "os", + "version": "22.04", + "tag": "22.04", + "values": "# Spectro Golden images includes most of the hardening as per CIS Ubuntu Linux 22.04 LTS Server L1 v1.0.0 standards\n# Uncomment below section to\n# 1. Include custom files to be copied over to the nodes and/or\n# 2. Execute list of commands before or after kubeadm init/join is executed\n#\n#kubeadmconfig:\n# preKubeadmCommands:\n# - echo \"Executing pre kube admin config commands\"\n# - update-ca-certificates\n# - 'systemctl restart containerd; sleep 3'\n# - 'while [ ! -S /var/run/containerd/containerd.sock ]; do echo \"Waiting for containerd...\"; sleep 1; done'\n# postKubeadmCommands:\n# - echo \"Executing post kube admin config commands\"\n# files:\n# - targetPath: /usr/local/share/ca-certificates/mycom.crt\n# targetOwner: \"root:root\"\n# targetPermissions: \"0644\"\n# content: |\n# -----BEGIN CERTIFICATE-----\n# MIICyzCCAbOgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl\n# cm5ldGVzMB4XDTIwMDkyMjIzNDMyM1oXDTMwMDkyMDIzNDgyM1owFTETMBEGA1UE\n# AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMdA\n# nZYs1el/6f9PgV/aO9mzy7MvqaZoFnqO7Qi4LZfYzixLYmMUzi+h8/RLPFIoYLiz\n# qiDn+P8c9I1uxB6UqGrBt7dkXfjrUZPs0JXEOX9U/6GFXL5C+n3AUlAxNCS5jobN\n# fbLt7DH3WoT6tLcQefTta2K+9S7zJKcIgLmBlPNDijwcQsbenSwDSlSLkGz8v6N2\n# 7SEYNCV542lbYwn42kbcEq2pzzAaCqa5uEPsR9y+uzUiJpv5tDHUdjbFT8tme3vL\n# 9EdCPODkqtMJtCvz0hqd5SxkfeC2L+ypaiHIxbwbWe7GtliROvz9bClIeGY7gFBK\n# jZqpLdbBVjo0NZBTJFUCAwEAAaMmMCQwDgYDVR0PAQH/BAQDAgKkMBIGA1UdEwEB\n# /wQIMAYBAf8CAQAwDQYJKoZIhvcNAQELBQADggEBADIKoE0P+aVJGV9LWGLiOhki\n# HFv/vPPAQ2MPk02rLjWzCaNrXD7aPPgT/1uDMYMHD36u8rYyf4qPtB8S5REWBM/Y\n# g8uhnpa/tGsaqO8LOFj6zsInKrsXSbE6YMY6+A8qvv5lPWpJfrcCVEo2zOj7WGoJ\n# ixi4B3fFNI+wih8/+p4xW+n3fvgqVYHJ3zo8aRLXbXwztp00lXurXUyR8EZxyR+6\n# b+IDLmHPEGsY9KOZ9VLLPcPhx5FR9njFyXvDKmjUMJJgUpRkmsuU1mCFC+OHhj56\n# IkLaSJf6z/p2a3YjTxvHNCqFMLbJ2FvJwYCRzsoT2wm2oulnUAMWPI10vdVM+Nc=\n# -----END CERTIFICATE-----", + "registry": { + "metadata": { + "uid": "64eaff453040297344bcad5d", + "name": "Palette Registry", + "kind": "oci", + "isPrivate": true, + "providerType": "pack" + } + } + }, + { + "name": "kubernetes", + "type": "oci", + "layer": "k8s", + "version": "1.27.16", + "tag": "1.27.x", + "values": "# spectrocloud.com/enabled-presets: Kube Controller Manager:loopback-ctrlmgr,Kube Scheduler:loopback-scheduler\npack:\n content:\n images:\n - image: registry.k8s.io/coredns/coredns:v1.10.1\n - image: registry.k8s.io/etcd:3.5.12-0\n - image: registry.k8s.io/kube-apiserver:v1.27.15\n - image: registry.k8s.io/kube-controller-manager:v1.27.15\n - image: registry.k8s.io/kube-proxy:v1.27.15\n - image: registry.k8s.io/kube-scheduler:v1.27.15\n - image: registry.k8s.io/pause:3.9\n - image: registry.k8s.io/pause:3.8\n #CIDR Range for Pods in cluster\n # Note : This must not overlap with any of the host or service network\n podCIDR: \"192.168.0.0/16\"\n #CIDR notation IP range from which to assign service cluster IPs\n # Note : This must not overlap with any IP ranges assigned to nodes for pods.\n serviceClusterIpRange: \"10.96.0.0/12\"\n # serviceDomain: \"cluster.local\"\n\nkubeadmconfig:\n apiServer:\n extraArgs:\n # Note : secure-port flag is used during kubeadm init. 
Do not change this flag on a running cluster\n secure-port: \"6443\"\n anonymous-auth: \"true\"\n profiling: \"false\"\n disable-admission-plugins: \"AlwaysAdmit\"\n default-not-ready-toleration-seconds: \"60\"\n default-unreachable-toleration-seconds: \"60\"\n enable-admission-plugins: \"AlwaysPullImages,NamespaceLifecycle,ServiceAccount,NodeRestriction,PodSecurity\"\n admission-control-config-file: \"/etc/kubernetes/pod-security-standard.yaml\"\n audit-log-path: /var/log/apiserver/audit.log\n audit-policy-file: /etc/kubernetes/audit-policy.yaml\n audit-log-maxage: \"30\"\n audit-log-maxbackup: \"10\"\n audit-log-maxsize: \"100\"\n authorization-mode: RBAC,Node\n tls-cipher-suites: \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"\n extraVolumes:\n - name: audit-log\n hostPath: /var/log/apiserver\n mountPath: /var/log/apiserver\n pathType: DirectoryOrCreate\n - name: audit-policy\n hostPath: /etc/kubernetes/audit-policy.yaml\n mountPath: /etc/kubernetes/audit-policy.yaml\n readOnly: true\n pathType: File\n - name: pod-security-standard\n hostPath: /etc/kubernetes/pod-security-standard.yaml\n mountPath: /etc/kubernetes/pod-security-standard.yaml\n readOnly: true\n pathType: File\n controllerManager:\n extraArgs:\n profiling: \"false\"\n terminated-pod-gc-threshold: \"25\"\n use-service-account-credentials: \"true\"\n feature-gates: \"RotateKubeletServerCertificate=true\"\n scheduler:\n extraArgs:\n profiling: \"false\"\n kubeletExtraArgs:\n read-only-port : \"0\"\n event-qps: \"0\"\n feature-gates: \"RotateKubeletServerCertificate=true\"\n protect-kernel-defaults: \"true\"\n rotate-server-certificates: \"true\"\n tls-cipher-suites: \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"\n files:\n - path: hardening/audit-policy.yaml\n targetPath: /etc/kubernetes/audit-policy.yaml\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n - path: hardening/90-kubelet.conf\n targetPath: /etc/sysctl.d/90-kubelet.conf\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n - targetPath: /etc/kubernetes/pod-security-standard.yaml\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n content: |\n apiVersion: apiserver.config.k8s.io/v1\n kind: AdmissionConfiguration\n plugins:\n - name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1\n kind: PodSecurityConfiguration\n defaults:\n enforce: \"baseline\"\n enforce-version: \"v1.27\"\n audit: \"baseline\"\n audit-version: \"v1.27\"\n warn: \"restricted\"\n warn-version: \"v1.27\"\n audit: \"restricted\"\n audit-version: \"v1.27\"\n exemptions:\n # Array of authenticated usernames to exempt.\n usernames: []\n # Array of runtime class names to exempt.\n runtimeClasses: []\n # Array of namespaces to exempt.\n namespaces: [kube-system]\n\n preKubeadmCommands:\n # For enabling 'protect-kernel-defaults' flag to kubelet, kernel parameters changes are required\n - 'echo \"====> Applying kernel parameters for Kubelet\"'\n - 'sysctl -p /etc/sysctl.d/90-kubelet.conf'\n postKubeadmCommands:\n - 'chmod 600 
/var/lib/kubelet/config.yaml'\n #- 'echo \"List of post kubeadm commands to be executed\"'\n\n# Client configuration to add OIDC based authentication flags in kubeconfig\n#clientConfig:\n #oidc-issuer-url: \"{{ .spectro.pack.kubernetes.kubeadmconfig.apiServer.extraArgs.oidc-issuer-url }}\"\n #oidc-client-id: \"{{ .spectro.pack.kubernetes.kubeadmconfig.apiServer.extraArgs.oidc-client-id }}\"\n #oidc-client-secret: 1gsranjjmdgahm10j8r6m47ejokm9kafvcbhi3d48jlc3rfpprhv\n #oidc-extra-scope: profile,email", + "registry": { + "metadata": { + "uid": "64eaff453040297344bcad5d", + "name": "Palette Registry", + "kind": "oci", + "isPrivate": true, + "providerType": "pack" + } + } + }, + { + "name": "cni-calico-azure", + "type": "oci", + "layer": "cni", + "version": "3.26.3", + "tag": "3.26.x", + "values": "pack:\n content:\n images:\n - image: gcr.io/spectro-images-public/calico/cni:v3.26.3\n - image: gcr.io/spectro-images-public/calico/node:v3.26.3\n - image: gcr.io/spectro-images-public/calico/kube-controllers:v3.26.3\n\nmanifests:\n calico:\n images:\n cni: \"\"\n node: \"\"\n kubecontroller: \"\" \n # IPAM type to use. Supported types are calico-ipam, host-local\n ipamType: \"calico-ipam\"\n\n calico_ipam:\n assign_ipv4: true\n assign_ipv6: false\n\n # Should be one of CALICO_IPV4POOL_IPIP or CALICO_IPV4POOL_VXLAN \n encapsulationType: \"CALICO_IPV4POOL_VXLAN\"\n\n # Should be one of Always, CrossSubnet, Never\n encapsulationMode: \"Always\"\n\n env:\n # Additional env variables for calico-node\n calicoNode:\n #IPV6: \"autodetect\"\n #FELIX_IPV6SUPPORT: \"true\"\n #CALICO_IPV6POOL_NAT_OUTGOING: \"true\"\n #CALICO_IPV4POOL_CIDR: \"192.168.0.0/16\"\n #IP_AUTODETECTION_METHOD: \"first-found\"\n\n # Additional env variables for calico-kube-controller deployment\n calicoKubeControllers:\n #LOG_LEVEL: \"info\"\n #SYNC_NODE_LABELS: \"true\"", + "registry": { + "metadata": { + "uid": "64eaff453040297344bcad5d", + "name": "Palette Registry", + "kind": "oci", + "isPrivate": true, + "providerType": "pack" + } + } + }, + { + "name": "csi-azure", + "type": "oci", + "layer": "csi", + "version": "1.28.3", + "tag": "1.28.x", + "values": "pack:\n content:\n images:\n - image: mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.28.3\n - image: mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.5.0\n - image: mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v4.3.0\n - image: mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.8.0\n - image: mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.10.0\n - image: mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.8.0\n charts:\n - repo: https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/charts\n name: azuredisk-csi-driver\n version: 1.28.3\n namespace: \"kube-system\"\n\ncharts:\n azuredisk-csi-driver:\n storageclass:\n # Azure storage account Sku tier. 
Default is empty\n storageaccounttype: \"StandardSSD_LRS\"\n\n # Possible values are shared (default), dedicated, and managed\n kind: \"managed\"\n\n #Allowed reclaim policies are Delete, Retain\n reclaimPolicy: \"Delete\"\n\n #Toggle for Volume expansion\n allowVolumeExpansion: \"true\"\n\n #Toggle for Default class\n isDefaultClass: \"true\"\n\n #Supported binding modes are Immediate, WaitForFirstConsumer\n #Setting binding mode to WaitForFirstConsumer, so that the volumes gets created in the same AZ as that of the pods\n volumeBindingMode: \"WaitForFirstConsumer\"\n\n image:\n baseRepo: mcr.microsoft.com\n azuredisk:\n repository: /oss/kubernetes-csi/azuredisk-csi\n tag: v1.28.3\n pullPolicy: IfNotPresent\n csiProvisioner:\n repository: /oss/kubernetes-csi/csi-provisioner\n tag: v3.5.0\n pullPolicy: IfNotPresent\n csiAttacher:\n repository: /oss/kubernetes-csi/csi-attacher\n tag: v4.3.0\n pullPolicy: IfNotPresent\n csiResizer:\n repository: /oss/kubernetes-csi/csi-resizer\n tag: v1.8.0\n pullPolicy: IfNotPresent\n livenessProbe:\n repository: /oss/kubernetes-csi/livenessprobe\n tag: v2.10.0\n pullPolicy: IfNotPresent\n nodeDriverRegistrar:\n repository: /oss/kubernetes-csi/csi-node-driver-registrar\n tag: v2.8.0\n pullPolicy: IfNotPresent\n \n serviceAccount:\n create: true # When true, service accounts will be created for you. Set to false if you want to use your own.\n controller: csi-azuredisk-controller-sa # Name of Service Account to be created or used\n node: csi-azuredisk-node-sa # Name of Service Account to be created or used\n snapshotController: csi-snapshot-controller-sa # Name of Service Account to be created or used\n \n rbac:\n create: true\n name: azuredisk\n \n controller:\n name: csi-azuredisk-controller\n cloudConfigSecretName: azure-cloud-provider\n cloudConfigSecretNamespace: kube-system\n allowEmptyCloudConfig: false\n enableTrafficManager: false\n trafficManagerPort: 7788\n replicas: 2\n metricsPort: 29604\n livenessProbe:\n healthPort: 29602\n runOnMaster: false\n runOnControlPlane: false\n disableAvailabilitySetNodes: false\n vmType: \"\"\n provisionerWorkerThreads: 100\n attacherWorkerThreads: 1000\n vmssCacheTTLInSeconds: -1\n logLevel: 5\n tolerations:\n - key: \"node-role.kubernetes.io/master\"\n operator: \"Exists\"\n effect: \"NoSchedule\"\n - key: \"node-role.kubernetes.io/controlplane\"\n operator: \"Exists\"\n effect: \"NoSchedule\"\n - key: \"node-role.kubernetes.io/control-plane\"\n operator: \"Exists\"\n effect: \"NoSchedule\"\n hostNetwork: false # this setting could be disabled if controller does not depend on MSI setting\n labels: {}\n annotations: {}\n podLabels: {}\n podAnnotations: {}\n nodeSelector: {}\n affinity: {}\n resources:\n csiProvisioner:\n limits:\n memory: 500Mi\n requests:\n cpu: 10m\n memory: 20Mi\n csiAttacher:\n limits:\n memory: 500Mi\n requests:\n cpu: 10m\n memory: 20Mi\n csiResizer:\n limits:\n memory: 500Mi\n requests:\n cpu: 10m\n memory: 20Mi\n csiSnapshotter:\n limits:\n memory: 200Mi\n requests:\n cpu: 10m\n memory: 20Mi\n livenessProbe:\n limits:\n memory: 100Mi\n requests:\n cpu: 10m\n memory: 20Mi\n azuredisk:\n limits:\n memory: 500Mi\n requests:\n cpu: 10m\n memory: 20Mi\n \n node:\n cloudConfigSecretName: azure-cloud-provider\n cloudConfigSecretNamespace: kube-system\n supportZone: true\n allowEmptyCloudConfig: true\n getNodeIDFromIMDS: false\n maxUnavailable: 1\n logLevel: 5\n livenessProbe:\n healthPort: 29603\n \n snapshot:\n enabled: false\n name: csi-snapshot-controller\n image:\n csiSnapshotter:\n repository: 
/oss/kubernetes-csi/csi-snapshotter\n tag: v6.2.2\n pullPolicy: IfNotPresent\n csiSnapshotController:\n repository: /oss/kubernetes-csi/snapshot-controller\n tag: v6.2.2\n pullPolicy: IfNotPresent\n snapshotController:\n name: csi-snapshot-controller\n replicas: 2\n labels: {}\n annotations: {}\n podLabels: {}\n podAnnotations: {}\n resources:\n limits:\n memory: 300Mi\n requests:\n cpu: 10m\n memory: 20Mi\n VolumeSnapshotClass:\n enabled: false\n name: csi-azuredisk-vsc\n deletionPolicy: Delete\n parameters:\n incremental: '\"true\"' # available values: \"true\", \"false\" (\"true\" by default for Azure Public Cloud, and \"false\" by default for Azure Stack Cloud)\n resourceGroup: \"\" # available values: EXISTING RESOURCE GROUP (If not specified, snapshot will be stored in the same resource group as source Azure disk)\n tags: \"\" # tag format: 'key1=val1,key2=val2'\n additionalLabels: {}\n \n feature:\n enableFSGroupPolicy: true\n \n driver:\n name: disk.csi.azure.com\n # maximum number of attachable volumes per node,\n # maximum number is defined according to node instance type by default(-1)\n volumeAttachLimit: -1\n customUserAgent: \"\"\n userAgentSuffix: \"OSS-helm\"\n azureGoSDKLogLevel: \"\" # available values: \"\"(no logs), DEBUG, INFO, WARNING, ERROR\n httpsProxy: \"\"\n httpProxy: \"\"\n noProxy: \"\"\n \n linux:\n enabled: true\n dsName: csi-azuredisk-node # daemonset name\n kubelet: /var/lib/kubelet\n distro: debian # available values: debian, fedora\n enablePerfOptimization: true\n enableRegistrationProbe: true\n tolerations:\n - operator: \"Exists\"\n hostNetwork: true # this setting could be disabled if perfProfile is `none`\n getNodeInfoFromLabels: false # get node info from node labels instead of IMDS\n labels: {}\n annotations: {}\n podLabels: {}\n podAnnotations: {}\n nodeSelector: {}\n affinity: {}\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: type\n operator: NotIn\n values:\n - virtual-kubelet\n resources:\n livenessProbe:\n limits:\n memory: 100Mi\n requests:\n cpu: 10m\n memory: 20Mi\n nodeDriverRegistrar:\n limits:\n memory: 100Mi\n requests:\n cpu: 10m\n memory: 20Mi\n azuredisk:\n limits:\n memory: 200Mi\n requests:\n cpu: 10m\n memory: 20Mi\n \n windows:\n enabled: true\n useHostProcessContainers: false\n dsName: csi-azuredisk-node-win # daemonset name\n kubelet: 'C:\\var\\lib\\kubelet'\n getNodeInfoFromLabels: false # get node info from node labels instead of IMDS\n enableRegistrationProbe: true\n tolerations:\n - key: \"node.kubernetes.io/os\"\n operator: \"Exists\"\n effect: \"NoSchedule\"\n labels: {}\n annotations: {}\n podLabels: {}\n podAnnotations: {}\n nodeSelector: {}\n affinity: {}\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: type\n operator: NotIn\n values:\n - virtual-kubelet\n resources:\n livenessProbe:\n limits:\n memory: 150Mi\n requests:\n cpu: 10m\n memory: 40Mi\n nodeDriverRegistrar:\n limits:\n memory: 150Mi\n requests:\n cpu: 30m\n memory: 40Mi\n azuredisk:\n limits:\n memory: 200Mi\n requests:\n cpu: 10m\n memory: 40Mi\n \n cloud: AzurePublicCloud\n \n ## Reference to one or more secrets to be used when pulling images\n ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/\n ##\n imagePullSecrets: []\n # - name: \"image-pull-secret\"\n \n workloadIdentity:\n clientID: \"\"\n # [optional] If the AAD application or user-assigned managed identity is not in 
the same tenant as the cluster\n # then set tenantID with the application or user-assigned managed identity tenant ID\n tenantID: \"\"", + "registry": { + "metadata": { + "uid": "64eaff453040297344bcad5d", + "name": "Palette Registry", + "kind": "oci", + "isPrivate": true, + "providerType": "pack" + } + } + }, + { + "name": "hello-universe", + "type": "oci", + "layer": "addon", + "version": "1.2.0", + "tag": "1.2.0", + "values": "# spectrocloud.com/enabled-presets: Backend:disable-api\npack:\n content:\n images:\n - image: ghcr.io/spectrocloud/hello-universe:1.2.0\n spectrocloud.com/install-priority: 0\n\nmanifests:\n hello-universe:\n images:\n hellouniverse: ghcr.io/spectrocloud/hello-universe:1.2.0\n apiEnabled: false\n namespace: hello-universe\n port: 8080\n replicas: 1", + "registry": { + "metadata": { + "uid": "64eaff5630402973c4e1856a", + "name": "Palette Community Registry", + "kind": "oci", + "isPrivate": true, + "providerType": "pack" + } + } + } + ] + }, + "variables": [] + } +} +``` \ No newline at end of file diff --git a/_partials/getting-started/_cluster_profile_import_gcp.mdx b/_partials/getting-started/_cluster_profile_import_gcp.mdx new file mode 100644 index 0000000000..d8d833f044 --- /dev/null +++ b/_partials/getting-started/_cluster_profile_import_gcp.mdx @@ -0,0 +1,109 @@ +--- +partial_category: getting-started +partial_name: import-hello-uni-gcp +--- + +```json +{ + "metadata": { + "name": "gcp-profile", + "description": "Cluster profile to deploy to GCP.", + "labels": {} + }, + "spec": { + "version": "1.0.0", + "template": { + "type": "cluster", + "cloudType": "gcp", + "packs": [ + { + "name": "ubuntu-gcp", + "type": "spectro", + "layer": "os", + "version": "22.04", + "tag": "22.04", + "values": "# Spectro Golden images includes most of the hardening as per CIS Ubuntu Linux 22.04 LTS Server L1 v1.0.0 standards\n\n# Uncomment below section to\n# 1. Include custom files to be copied over to the nodes and/or\n# 2. Execute list of commands before or after kubeadm init/join is executed\n#\n#kubeadmconfig:\n# preKubeadmCommands:\n# - echo \"Executing pre kube admin config commands\"\n# - update-ca-certificates\n# - 'systemctl restart containerd; sleep 3'\n# - 'while [ ! 
-S /var/run/containerd/containerd.sock ]; do echo \"Waiting for containerd...\"; sleep 1; done'\n# postKubeadmCommands:\n# - echo \"Executing post kube admin config commands\"\n# files:\n# - targetPath: /usr/local/share/ca-certificates/mycom.crt\n# targetOwner: \"root:root\"\n# targetPermissions: \"0644\"\n# content: |\n# -----BEGIN CERTIFICATE-----\n# MIICyzCCAbOgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl\n# cm5ldGVzMB4XDTIwMDkyMjIzNDMyM1oXDTMwMDkyMDIzNDgyM1owFTETMBEGA1UE\n# AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMdA\n# nZYs1el/6f9PgV/aO9mzy7MvqaZoFnqO7Qi4LZfYzixLYmMUzi+h8/RLPFIoYLiz\n# qiDn+P8c9I1uxB6UqGrBt7dkXfjrUZPs0JXEOX9U/6GFXL5C+n3AUlAxNCS5jobN\n# fbLt7DH3WoT6tLcQefTta2K+9S7zJKcIgLmBlPNDijwcQsbenSwDSlSLkGz8v6N2\n# 7SEYNCV542lbYwn42kbcEq2pzzAaCqa5uEPsR9y+uzUiJpv5tDHUdjbFT8tme3vL\n# 9EdCPODkqtMJtCvz0hqd5SxkfeC2L+ypaiHIxbwbWe7GtliROvz9bClIeGY7gFBK\n# jZqpLdbBVjo0NZBTJFUCAwEAAaMmMCQwDgYDVR0PAQH/BAQDAgKkMBIGA1UdEwEB\n# /wQIMAYBAf8CAQAwDQYJKoZIhvcNAQELBQADggEBADIKoE0P+aVJGV9LWGLiOhki\n# HFv/vPPAQ2MPk02rLjWzCaNrXD7aPPgT/1uDMYMHD36u8rYyf4qPtB8S5REWBM/Y\n# g8uhnpa/tGsaqO8LOFj6zsInKrsXSbE6YMY6+A8qvv5lPWpJfrcCVEo2zOj7WGoJ\n# ixi4B3fFNI+wih8/+p4xW+n3fvgqVYHJ3zo8aRLXbXwztp00lXurXUyR8EZxyR+6\n# b+IDLmHPEGsY9KOZ9VLLPcPhx5FR9njFyXvDKmjUMJJgUpRkmsuU1mCFC+OHhj56\n# IkLaSJf6z/p2a3YjTxvHNCqFMLbJ2FvJwYCRzsoT2wm2oulnUAMWPI10vdVM+Nc=\n# -----END CERTIFICATE-----", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "kubernetes", + "type": "spectro", + "layer": "k8s", + "version": "1.27.16", + "tag": "1.27.x", + "values": "# spectrocloud.com/enabled-presets: Kube Controller Manager:loopback-ctrlmgr,Kube Scheduler:loopback-scheduler\npack:\n content:\n images:\n - image: registry.k8s.io/coredns/coredns:v1.10.1\n - image: registry.k8s.io/etcd:3.5.12-0\n - image: registry.k8s.io/kube-apiserver:v1.27.15\n - image: registry.k8s.io/kube-controller-manager:v1.27.15\n - image: registry.k8s.io/kube-proxy:v1.27.15\n - image: registry.k8s.io/kube-scheduler:v1.27.15\n - image: registry.k8s.io/pause:3.9\n - image: registry.k8s.io/pause:3.8\n #CIDR Range for Pods in cluster\n # Note : This must not overlap with any of the host or service network\n podCIDR: \"192.168.0.0/16\"\n #CIDR notation IP range from which to assign service cluster IPs\n # Note : This must not overlap with any IP ranges assigned to nodes for pods.\n serviceClusterIpRange: \"10.96.0.0/12\"\n # serviceDomain: \"cluster.local\"\n\nkubeadmconfig:\n apiServer:\n extraArgs:\n # Note : secure-port flag is used during kubeadm init. 
Do not change this flag on a running cluster\n secure-port: \"6443\"\n anonymous-auth: \"true\"\n profiling: \"false\"\n disable-admission-plugins: \"AlwaysAdmit\"\n default-not-ready-toleration-seconds: \"60\"\n default-unreachable-toleration-seconds: \"60\"\n enable-admission-plugins: \"AlwaysPullImages,NamespaceLifecycle,ServiceAccount,NodeRestriction,PodSecurity\"\n admission-control-config-file: \"/etc/kubernetes/pod-security-standard.yaml\"\n audit-log-path: /var/log/apiserver/audit.log\n audit-policy-file: /etc/kubernetes/audit-policy.yaml\n audit-log-maxage: \"30\"\n audit-log-maxbackup: \"10\"\n audit-log-maxsize: \"100\"\n authorization-mode: RBAC,Node\n tls-cipher-suites: \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"\n extraVolumes:\n - name: audit-log\n hostPath: /var/log/apiserver\n mountPath: /var/log/apiserver\n pathType: DirectoryOrCreate\n - name: audit-policy\n hostPath: /etc/kubernetes/audit-policy.yaml\n mountPath: /etc/kubernetes/audit-policy.yaml\n readOnly: true\n pathType: File\n - name: pod-security-standard\n hostPath: /etc/kubernetes/pod-security-standard.yaml\n mountPath: /etc/kubernetes/pod-security-standard.yaml\n readOnly: true\n pathType: File\n controllerManager:\n extraArgs:\n profiling: \"false\"\n terminated-pod-gc-threshold: \"25\"\n use-service-account-credentials: \"true\"\n feature-gates: \"RotateKubeletServerCertificate=true\"\n scheduler:\n extraArgs:\n profiling: \"false\"\n kubeletExtraArgs:\n read-only-port : \"0\"\n event-qps: \"0\"\n feature-gates: \"RotateKubeletServerCertificate=true\"\n protect-kernel-defaults: \"true\"\n rotate-server-certificates: \"true\"\n tls-cipher-suites: \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"\n files:\n - path: hardening/audit-policy.yaml\n targetPath: /etc/kubernetes/audit-policy.yaml\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n - path: hardening/90-kubelet.conf\n targetPath: /etc/sysctl.d/90-kubelet.conf\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n - targetPath: /etc/kubernetes/pod-security-standard.yaml\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n content: |\n apiVersion: apiserver.config.k8s.io/v1\n kind: AdmissionConfiguration\n plugins:\n - name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1\n kind: PodSecurityConfiguration\n defaults:\n enforce: \"baseline\"\n enforce-version: \"v1.27\"\n audit: \"baseline\"\n audit-version: \"v1.27\"\n warn: \"restricted\"\n warn-version: \"v1.27\"\n audit: \"restricted\"\n audit-version: \"v1.27\"\n exemptions:\n # Array of authenticated usernames to exempt.\n usernames: []\n # Array of runtime class names to exempt.\n runtimeClasses: []\n # Array of namespaces to exempt.\n namespaces: [kube-system]\n\n preKubeadmCommands:\n # For enabling 'protect-kernel-defaults' flag to kubelet, kernel parameters changes are required\n - 'echo \"====> Applying kernel parameters for Kubelet\"'\n - 'sysctl -p /etc/sysctl.d/90-kubelet.conf'\n postKubeadmCommands:\n - 'chmod 600 
/var/lib/kubelet/config.yaml'\n #- 'echo \"List of post kubeadm commands to be executed\"'\n\n# Client configuration to add OIDC based authentication flags in kubeconfig\n#clientConfig:\n #oidc-issuer-url: \"{{ .spectro.pack.kubernetes.kubeadmconfig.apiServer.extraArgs.oidc-issuer-url }}\"\n #oidc-client-id: \"{{ .spectro.pack.kubernetes.kubeadmconfig.apiServer.extraArgs.oidc-client-id }}\"\n #oidc-client-secret: 1gsranjjmdgahm10j8r6m47ejokm9kafvcbhi3d48jlc3rfpprhv\n #oidc-extra-scope: profile,email", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "cni-calico", + "type": "spectro", + "layer": "cni", + "version": "3.27.2", + "tag": "3.27.x", + "values": "# spectrocloud.com/enabled-presets: Microk8s:microk8s-false\npack:\n content:\n images:\n - image: gcr.io/spectro-images-public/packs/calico/3.27.2/cni:v3.27.2\n - image: gcr.io/spectro-images-public/packs/calico/3.27.2/node:v3.27.2\n - image: gcr.io/spectro-images-public/packs/calico/3.27.2/kube-controllers:v3.27.2\n\nmanifests:\n calico:\n microk8s: \"false\"\n images:\n cni: \"\"\n node: \"\"\n kubecontroller: \"\"\n # IPAM type to use. Supported types are calico-ipam, host-local\n ipamType: \"calico-ipam\"\n\n calico_ipam:\n assign_ipv4: true\n assign_ipv6: false\n\n # Should be one of CALICO_IPV4POOL_IPIP or CALICO_IPV4POOL_VXLAN \n encapsulationType: \"CALICO_IPV4POOL_IPIP\"\n\n # Should be one of Always, CrossSubnet, Never\n encapsulationMode: \"Always\"\n\n env:\n # Additional env variables for calico-node\n calicoNode:\n #IPV6: \"autodetect\"\n #FELIX_IPV6SUPPORT: \"true\"\n #CALICO_IPV6POOL_NAT_OUTGOING: \"true\"\n #CALICO_IPV4POOL_CIDR: \"192.168.0.0/16\"\n #IP_AUTODETECTION_METHOD: \"first-found\"\n\n # Additional env variables for calico-kube-controller deployment\n calicoKubeControllers:\n #LOG_LEVEL: \"info\"\n #SYNC_NODE_LABELS: \"true\"", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "csi-gcp-driver", + "type": "spectro", + "layer": "csi", + "version": "1.12.4", + "tag": "1.12.x", + "values": "pack:\n content:\n images:\n - image: gcr.io/spectro-images-public/packs/csi-gcp-driver/1.12.4/csi-provisioner:v3.6.2\n - image: gcr.io/spectro-images-public/packs/csi-gcp-driver/1.12.4/csi-attacher:v4.4.2\n - image: gcr.io/spectro-images-public/packs/csi-gcp-driver/1.12.4/csi-resizer:v1.9.2\n - image: gcr.io/spectro-images-public/packs/csi-gcp-driver/1.12.4/csi-snapshotter:v6.3.2\n - image: gcr.io/spectro-images-public/packs/csi-gcp-driver/1.12.4/gcp-compute-persistent-disk-csi-driver:v1.12.4\n - image: gcr.io/spectro-images-public/packs/csi-gcp-driver/1.12.4/csi-node-driver-registrar:v2.9.2\n \nmanifests:\n storageclass:\n #Flag to denote if this should be the default storage class for dynamic provisioning\n isDefaultClass: \"true\"\n\n parameters:\n #Possible values : pd-standard or pd-ssd\n type: \"pd-standard\"\n \n #Possible values: none or regional-pd\n replication-type: \"none\"\n \n #Supported binding modes are Immediate, WaitForFirstConsumer\n volumeBindingMode: \"WaitForFirstConsumer\"\n\n #Set this flag to true to enable volume expansion\n allowVolumeExpansion: true\n\n #Allowed reclaim policies are Delete, Retain\n reclaimPolicy: \"Delete\"\n\n #allowedTopologies\n zones:\n #- us-central1-a\n #- us-central1-b\n\n k8sVersion: \"{{ 
.spectro.system.kubernetes.version }}\"\n\n controller:\n args:\n csiProvisioner:\n - \"--v=5\"\n - \"--csi-address=/csi/csi.sock\"\n - \"--feature-gates=Topology=true\"\n - \"--http-endpoint=:22011\"\n - \"--leader-election-namespace=$(PDCSI_NAMESPACE)\"\n - \"--timeout=250s\"\n - \"--extra-create-metadata\"\n #- \"--run-controller-service=false\" # disable the controller service of the CSI driver\n #- \"--run-node-service=false\" # disable the node service of the CSI driver\n - \"--leader-election\"\n - \"--default-fstype=ext4\"\n - \"--controller-publish-readonly=true\"\n \n csiAttacher:\n - \"--v=5\"\n - \"--csi-address=/csi/csi.sock\"\n - \"--http-endpoint=:22012\"\n - \"--leader-election\"\n - \"--leader-election-namespace=$(PDCSI_NAMESPACE)\"\n - \"--timeout=250s\"\n\n csiResizer:\n - \"--v=5\"\n - \"--csi-address=/csi/csi.sock\"\n - \"--http-endpoint=:22013\"\n - \"--leader-election\"\n - \"--leader-election-namespace=$(PDCSI_NAMESPACE)\"\n - \"--handle-volume-inuse-error=false\"\n\n csiSnapshotter:\n - \"--v=5\"\n - \"--csi-address=/csi/csi.sock\"\n - \"--metrics-address=:22014\"\n - \"--leader-election\"\n - \"--leader-election-namespace=$(PDCSI_NAMESPACE)\"\n - \"--timeout=300s\"\n\n csiDriver:\n - \"--v=5\"\n - \"--endpoint=unix:/csi/csi.sock\"", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "hello-universe", + "type": "oci", + "layer": "addon", + "version": "1.2.0", + "tag": "1.2.0", + "values": "# spectrocloud.com/enabled-presets: Backend:disable-api\npack:\n content:\n images:\n - image: ghcr.io/spectrocloud/hello-universe:1.2.0\n spectrocloud.com/install-priority: 0\n\nmanifests:\n hello-universe:\n images:\n hellouniverse: ghcr.io/spectrocloud/hello-universe:1.2.0\n apiEnabled: false\n namespace: hello-universe\n port: 8080\n replicas: 1", + "registry": { + "metadata": { + "uid": "64eaff5630402973c4e1856a", + "name": "Palette Community Registry", + "kind": "oci", + "isPrivate": true, + "providerType": "pack" + } + } + } + ] + }, + "variables": [] + } +} +``` \ No newline at end of file diff --git a/_partials/getting-started/_cluster_profile_import_vmware.mdx b/_partials/getting-started/_cluster_profile_import_vmware.mdx new file mode 100644 index 0000000000..0ce3cd162c --- /dev/null +++ b/_partials/getting-started/_cluster_profile_import_vmware.mdx @@ -0,0 +1,126 @@ +--- +partial_category: getting-started +partial_name: import-hello-uni-vmware +--- + +```json +{ + "metadata": { + "name": "vmware-profile", + "description": "Cluster profile to deploy to VMware.", + "labels": {} + }, + "spec": { + "version": "1.0.0", + "template": { + "type": "cluster", + "cloudType": "vsphere", + "packs": [ + { + "name": "ubuntu-vsphere", + "type": "spectro", + "layer": "os", + "version": "22.04", + "tag": "22.04", + "values": "# Spectro Golden images includes most of the hardening as per CIS Ubuntu Linux 22.04 LTS Server L1 v1.0.0 standards\n\n# Uncomment below section to\n# 1. Include custom files to be copied over to the nodes and/or\n# 2. Execute list of commands before or after kubeadm init/join is executed\n#\n#kubeadmconfig:\n# preKubeadmCommands:\n# - echo \"Executing pre kube admin config commands\"\n# - update-ca-certificates\n# - 'systemctl restart containerd; sleep 3'\n# - 'while [ ! 
-S /var/run/containerd/containerd.sock ]; do echo \"Waiting for containerd...\"; sleep 1; done'\n# postKubeadmCommands:\n# - echo \"Executing post kube admin config commands\"\n# files:\n# - targetPath: /usr/local/share/ca-certificates/mycom.crt\n# targetOwner: \"root:root\"\n# targetPermissions: \"0644\"\n# content: |\n# -----BEGIN CERTIFICATE-----\n# MIICyzCCAbOgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl\n# cm5ldGVzMB4XDTIwMDkyMjIzNDMyM1oXDTMwMDkyMDIzNDgyM1owFTETMBEGA1UE\n# AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMdA\n# nZYs1el/6f9PgV/aO9mzy7MvqaZoFnqO7Qi4LZfYzixLYmMUzi+h8/RLPFIoYLiz\n# qiDn+P8c9I1uxB6UqGrBt7dkXfjrUZPs0JXEOX9U/6GFXL5C+n3AUlAxNCS5jobN\n# fbLt7DH3WoT6tLcQefTta2K+9S7zJKcIgLmBlPNDijwcQsbenSwDSlSLkGz8v6N2\n# 7SEYNCV542lbYwn42kbcEq2pzzAaCqa5uEPsR9y+uzUiJpv5tDHUdjbFT8tme3vL\n# 9EdCPODkqtMJtCvz0hqd5SxkfeC2L+ypaiHIxbwbWe7GtliROvz9bClIeGY7gFBK\n# jZqpLdbBVjo0NZBTJFUCAwEAAaMmMCQwDgYDVR0PAQH/BAQDAgKkMBIGA1UdEwEB\n# /wQIMAYBAf8CAQAwDQYJKoZIhvcNAQELBQADggEBADIKoE0P+aVJGV9LWGLiOhki\n# HFv/vPPAQ2MPk02rLjWzCaNrXD7aPPgT/1uDMYMHD36u8rYyf4qPtB8S5REWBM/Y\n# g8uhnpa/tGsaqO8LOFj6zsInKrsXSbE6YMY6+A8qvv5lPWpJfrcCVEo2zOj7WGoJ\n# ixi4B3fFNI+wih8/+p4xW+n3fvgqVYHJ3zo8aRLXbXwztp00lXurXUyR8EZxyR+6\n# b+IDLmHPEGsY9KOZ9VLLPcPhx5FR9njFyXvDKmjUMJJgUpRkmsuU1mCFC+OHhj56\n# IkLaSJf6z/p2a3YjTxvHNCqFMLbJ2FvJwYCRzsoT2wm2oulnUAMWPI10vdVM+Nc=\n# -----END CERTIFICATE-----", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "kubernetes", + "type": "spectro", + "layer": "k8s", + "version": "1.27.15", + "tag": "1.27.x", + "values": "# spectrocloud.com/enabled-presets: Kube Controller Manager:loopback-ctrlmgr,Kube Scheduler:loopback-scheduler\npack:\n content:\n images:\n - image: registry.k8s.io/coredns/coredns:v1.10.1\n - image: registry.k8s.io/etcd:3.5.12-0\n - image: registry.k8s.io/kube-apiserver:v1.27.15\n - image: registry.k8s.io/kube-controller-manager:v1.27.15\n - image: registry.k8s.io/kube-proxy:v1.27.15\n - image: registry.k8s.io/kube-scheduler:v1.27.15\n - image: registry.k8s.io/pause:3.9\n - image: registry.k8s.io/pause:3.8\n #CIDR Range for Pods in cluster\n # Note : This must not overlap with any of the host or service network\n podCIDR: \"192.168.0.0/16\"\n #CIDR notation IP range from which to assign service cluster IPs\n # Note : This must not overlap with any IP ranges assigned to nodes for pods.\n serviceClusterIpRange: \"10.96.0.0/12\"\n # serviceDomain: \"cluster.local\"\n\nkubeadmconfig:\n apiServer:\n extraArgs:\n # Note : secure-port flag is used during kubeadm init. 
Do not change this flag on a running cluster\n secure-port: \"6443\"\n anonymous-auth: \"true\"\n profiling: \"false\"\n disable-admission-plugins: \"AlwaysAdmit\"\n default-not-ready-toleration-seconds: \"60\"\n default-unreachable-toleration-seconds: \"60\"\n enable-admission-plugins: \"AlwaysPullImages,NamespaceLifecycle,ServiceAccount,NodeRestriction,PodSecurity\"\n admission-control-config-file: \"/etc/kubernetes/pod-security-standard.yaml\"\n audit-log-path: /var/log/apiserver/audit.log\n audit-policy-file: /etc/kubernetes/audit-policy.yaml\n audit-log-maxage: \"30\"\n audit-log-maxbackup: \"10\"\n audit-log-maxsize: \"100\"\n authorization-mode: RBAC,Node\n tls-cipher-suites: \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"\n extraVolumes:\n - name: audit-log\n hostPath: /var/log/apiserver\n mountPath: /var/log/apiserver\n pathType: DirectoryOrCreate\n - name: audit-policy\n hostPath: /etc/kubernetes/audit-policy.yaml\n mountPath: /etc/kubernetes/audit-policy.yaml\n readOnly: true\n pathType: File\n - name: pod-security-standard\n hostPath: /etc/kubernetes/pod-security-standard.yaml\n mountPath: /etc/kubernetes/pod-security-standard.yaml\n readOnly: true\n pathType: File\n controllerManager:\n extraArgs:\n profiling: \"false\"\n terminated-pod-gc-threshold: \"25\"\n use-service-account-credentials: \"true\"\n feature-gates: \"RotateKubeletServerCertificate=true\"\n scheduler:\n extraArgs:\n profiling: \"false\"\n kubeletExtraArgs:\n read-only-port : \"0\"\n event-qps: \"0\"\n feature-gates: \"RotateKubeletServerCertificate=true\"\n protect-kernel-defaults: \"true\"\n rotate-server-certificates: \"true\"\n tls-cipher-suites: \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256\"\n files:\n - path: hardening/audit-policy.yaml\n targetPath: /etc/kubernetes/audit-policy.yaml\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n - path: hardening/90-kubelet.conf\n targetPath: /etc/sysctl.d/90-kubelet.conf\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n - targetPath: /etc/kubernetes/pod-security-standard.yaml\n targetOwner: \"root:root\"\n targetPermissions: \"0600\"\n content: |\n apiVersion: apiserver.config.k8s.io/v1\n kind: AdmissionConfiguration\n plugins:\n - name: PodSecurity\n configuration:\n apiVersion: pod-security.admission.config.k8s.io/v1\n kind: PodSecurityConfiguration\n defaults:\n enforce: \"baseline\"\n enforce-version: \"v1.27\"\n audit: \"baseline\"\n audit-version: \"v1.27\"\n warn: \"restricted\"\n warn-version: \"v1.27\"\n audit: \"restricted\"\n audit-version: \"v1.27\"\n exemptions:\n # Array of authenticated usernames to exempt.\n usernames: []\n # Array of runtime class names to exempt.\n runtimeClasses: []\n # Array of namespaces to exempt.\n namespaces: [kube-system]\n\n preKubeadmCommands:\n # For enabling 'protect-kernel-defaults' flag to kubelet, kernel parameters changes are required\n - 'echo \"====> Applying kernel parameters for Kubelet\"'\n - 'sysctl -p /etc/sysctl.d/90-kubelet.conf'\n postKubeadmCommands:\n - 'chmod 600 
/var/lib/kubelet/config.yaml'\n #- 'echo \"List of post kubeadm commands to be executed\"'\n\n# Client configuration to add OIDC based authentication flags in kubeconfig\n#clientConfig:\n #oidc-issuer-url: \"{{ .spectro.pack.kubernetes.kubeadmconfig.apiServer.extraArgs.oidc-issuer-url }}\"\n #oidc-client-id: \"{{ .spectro.pack.kubernetes.kubeadmconfig.apiServer.extraArgs.oidc-client-id }}\"\n #oidc-client-secret: 1gsranjjmdgahm10j8r6m47ejokm9kafvcbhi3d48jlc3rfpprhv\n #oidc-extra-scope: profile,email", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "cni-calico", + "type": "spectro", + "layer": "cni", + "version": "3.27.2", + "tag": "3.27.x", + "values": "# spectrocloud.com/enabled-presets: Microk8s:microk8s-false\npack:\n content:\n images:\n - image: gcr.io/spectro-images-public/packs/calico/3.27.2/cni:v3.27.2\n - image: gcr.io/spectro-images-public/packs/calico/3.27.2/node:v3.27.2\n - image: gcr.io/spectro-images-public/packs/calico/3.27.2/kube-controllers:v3.27.2\n\nmanifests:\n calico:\n microk8s: \"false\"\n images:\n cni: \"\"\n node: \"\"\n kubecontroller: \"\"\n # IPAM type to use. Supported types are calico-ipam, host-local\n ipamType: \"calico-ipam\"\n\n calico_ipam:\n assign_ipv4: true\n assign_ipv6: false\n\n # Should be one of CALICO_IPV4POOL_IPIP or CALICO_IPV4POOL_VXLAN \n encapsulationType: \"CALICO_IPV4POOL_IPIP\"\n\n # Should be one of Always, CrossSubnet, Never\n encapsulationMode: \"Always\"\n\n env:\n # Additional env variables for calico-node\n calicoNode:\n #IPV6: \"autodetect\"\n #FELIX_IPV6SUPPORT: \"true\"\n #CALICO_IPV6POOL_NAT_OUTGOING: \"true\"\n #CALICO_IPV4POOL_CIDR: \"192.168.0.0/16\"\n #IP_AUTODETECTION_METHOD: \"first-found\"\n\n # Additional env variables for calico-kube-controller deployment\n calicoKubeControllers:\n #LOG_LEVEL: \"info\"\n #SYNC_NODE_LABELS: \"true\"", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "csi-vsphere-csi", + "type": "spectro", + "layer": "csi", + "version": "3.1.2", + "tag": "3.1.x", + "values": "pack:\n content:\n images:\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.28.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.22.9\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.23.5\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.26.2\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.24.6\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.25.3\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/cpi-manager:v1.27.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-attacher:v4.3.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-resizer:v1.8.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/livenessprobe:v2.10.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-provisioner:v3.5.0\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-snapshotter:v6.2.2\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-driver:v3.1.2\n - image: gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-syncer:v3.1.2\n - image: 
gcr.io/spectro-images-public/packs/csi-vsphere-csi/3.1.2/csi-node-driver-registrar:v2.8.0\n\nmanifests:\n #Storage class config\n vsphere:\n\n #Toggle for Default class\n isDefaultClass: \"true\"\n\n #Specifies file system type\n fstype: \"ext4\"\n\n #Allowed reclaim policies are Delete, Retain\n reclaimPolicy: \"Delete\"\n\n #Specifies the URL of the datastore on which the container volume needs to be provisioned.\n datastoreURL: \"\"\n\n #Specifies the storage policy for datastores on which the container volume needs to be provisioned.\n storagePolicyName: \"\"\n\n volumeBindingMode: \"WaitForFirstConsumer\"\n\n #Set this flag to true to enable volume expansion\n allowVolumeExpansion: true\n\n vsphere-cloud-controller-manager:\n k8sVersion: \"{{ .spectro.system.kubernetes.version }}\"\n # Override CPI image\n image: \"\"\n extraArgs:\n - \"--cloud-provider=vsphere\"\n - \"--v=2\"\n - \"--cloud-config=/etc/cloud/vsphere.conf\"\n\n vsphere-csi-driver:\n replicas: 3\n livenessProbe:\n csiController:\n initialDelaySeconds: 30\n timeoutSeconds: 10\n periodSeconds: 180\n failureThreshold: 3\n # Override CSI component images\n csiAttacherImage: \"\"\n csiResizerImage: \"\"\n csiControllerImage: \"\"\n csiLivenessProbeImage: \"\"\n csiSyncerImage: \"\"\n csiProvisionerImage: \"\"\n csiSnapshotterImage: \"\"\n nodeDriverRegistrarImage: \"\"\n vsphereCsiNodeImage: \"\"\n extraArgs:\n csiAttacher:\n - \"--v=4\"\n - \"--timeout=300s\"\n - \"--csi-address=$(ADDRESS)\"\n - \"--leader-election\"\n - \"--leader-election-lease-duration=120s\"\n - \"--leader-election-renew-deadline=60s\"\n - \"--leader-election-retry-period=30s\"\n - \"--kube-api-qps=100\"\n - \"--kube-api-burst=100\"\n csiResizer:\n - \"--v=4\"\n - \"--timeout=300s\"\n - \"--handle-volume-inuse-error=false\"\n - \"--csi-address=$(ADDRESS)\"\n - \"--kube-api-qps=100\"\n - \"--kube-api-burst=100\"\n - \"--leader-election\"\n - \"--leader-election-lease-duration=120s\"\n - \"--leader-election-renew-deadline=60s\"\n - \"--leader-election-retry-period=30s\"\n csiController:\n - \"--fss-name=internal-feature-states.csi.vsphere.vmware.com\"\n - \"--fss-namespace=$(CSI_NAMESPACE)\"\n csiLivenessProbe:\n - \"--v=4\"\n - \"--csi-address=/csi/csi.sock\"\n csiSyncer:\n - \"--leader-election\"\n - \"--leader-election-lease-duration=30s\"\n - \"--leader-election-renew-deadline=20s\"\n - \"--leader-election-retry-period=10s\"\n - \"--fss-name=internal-feature-states.csi.vsphere.vmware.com\"\n - \"--fss-namespace=$(CSI_NAMESPACE)\"\n csiProvisioner:\n - \"--v=4\"\n - \"--timeout=300s\"\n - \"--csi-address=$(ADDRESS)\"\n - \"--kube-api-qps=100\"\n - \"--kube-api-burst=100\"\n - \"--leader-election\"\n - \"--leader-election-lease-duration=120s\"\n - \"--leader-election-renew-deadline=60s\"\n - \"--leader-election-retry-period=30s\"\n - \"--default-fstype=ext4\"\n # needed only for topology aware setup\n - \"--feature-gates=Topology=true\"\n - \"--strict-topology\"\n csiSnapshotter:\n - \"--v=4\"\n - \"--kube-api-qps=100\"\n - \"--kube-api-burst=100\"\n - \"--timeout=300s\"\n - \"--csi-address=$(ADDRESS)\"\n - \"--leader-election\"\n - \"--leader-election-lease-duration=120s\"\n - \"--leader-election-renew-deadline=60s\"\n - \"--leader-election-retry-period=30s\"", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "lb-metallb-helm", + "type": "spectro", + "layer": "addon", + "version": "0.14.8", + "tag": "0.14.8", + "values": 
"pack:\n content:\n images:\n - image: gcr.io/spectro-images-public/packs/metallb/0.14.8/controller:v0.14.8\n - image: gcr.io/spectro-images-public/packs/metallb/0.14.8/speaker:v0.14.8\n - image: gcr.io/spectro-images-public/packs/metallb/0.14.8/frr:9.1.0\n - image: gcr.io/spectro-images-public/packs/metallb/0.14.8/kube-rbac-proxy:v0.12.0\n charts:\n - repo: https://metallb.github.io/metallb\n name: metallb\n version: 0.14.8\n namespace: metallb-system\n namespaceLabels:\n \"metallb-system\": \"pod-security.kubernetes.io/enforce=privileged,pod-security.kubernetes.io/enforce-version=v{{ .spectro.system.kubernetes.version | substr 0 4 }}\" # Do not change this namespace, since CRDs expect the namespace to be metallb-system\n spectrocloud.com/install-priority: 0\n\ncharts:\n metallb-full:\n configuration:\n ipaddresspools:\n first-pool:\n spec:\n addresses:\n - 192.168.10.0/24\n # - 192.168.100.50-192.168.100.60\n avoidBuggyIPs: true\n autoAssign: true\n\n l2advertisements:\n default:\n spec:\n ipAddressPools:\n - first-pool\n\n bgpadvertisements: {}\n # external:\n # spec:\n # ipAddressPools:\n # - bgp-pool\n # # communities:\n # # - vpn-only\n\n bgppeers: {}\n # bgp-peer-1:\n # spec:\n # myASN: 64512\n # peerASN: 64512\n # peerAddress: 172.30.0.3\n # peerPort: 180\n # # BFD profiles can only be used in FRR mode\n # # bfdProfile: bfd-profile-1\n\n communities: {}\n # community-1:\n # spec:\n # communities:\n # - name: vpn-only\n # value: 1234:1\n\n bfdprofiles: {}\n # bfd-profile-1:\n # spec:\n # receiveInterval: 380\n # transmitInterval: 270\n\n metallb:\n # Default values for metallb.\n # This is a YAML-formatted file.\n # Declare variables to be passed into your templates.\n\n imagePullSecrets: []\n nameOverride: \"\"\n fullnameOverride: \"\"\n loadBalancerClass: \"\"\n\n # To configure MetalLB, you must specify ONE of the following two\n # options.\n\n rbac:\n # create specifies whether to install and use RBAC rules.\n create: true\n\n prometheus:\n # scrape annotations specifies whether to add Prometheus metric\n # auto-collection annotations to pods. See\n # https://github.com/prometheus/prometheus/blob/release-2.1/documentation/examples/prometheus-kubernetes.yml\n # for a corresponding Prometheus configuration. Alternatively, you\n # may want to use the Prometheus Operator\n # (https://github.com/coreos/prometheus-operator) for more powerful\n # monitoring configuration. If you use the Prometheus operator, this\n # can be left at false.\n scrapeAnnotations: false\n\n # port both controller and speaker will listen on for metrics\n metricsPort: 7472\n\n # if set, enables rbac proxy on the controller and speaker to expose\n # the metrics via tls.\n # secureMetricsPort: 9120\n\n # the name of the secret to be mounted in the speaker pod\n # to expose the metrics securely. If not present, a self signed\n # certificate to be used.\n speakerMetricsTLSSecret: \"\"\n\n # the name of the secret to be mounted in the controller pod\n # to expose the metrics securely. 
If not present, a self signed\n # certificate to be used.\n controllerMetricsTLSSecret: \"\"\n\n # prometheus doens't have the permission to scrape all namespaces so we give it permission to scrape metallb's one\n rbacPrometheus: true\n\n # the service account used by prometheus\n # required when \" .Values.prometheus.rbacPrometheus == true \" and \" .Values.prometheus.podMonitor.enabled=true or prometheus.serviceMonitor.enabled=true \"\n serviceAccount: \"\"\n\n # the namespace where prometheus is deployed\n # required when \" .Values.prometheus.rbacPrometheus == true \" and \" .Values.prometheus.podMonitor.enabled=true or prometheus.serviceMonitor.enabled=true \"\n namespace: \"\"\n\n # the image to be used for the kuberbacproxy container\n rbacProxy:\n repository: gcr.io/spectro-images-public/packs/metallb/0.14.8/kube-rbac-proxy\n tag: v0.12.0\n pullPolicy:\n\n # Prometheus Operator PodMonitors\n podMonitor:\n # enable support for Prometheus Operator\n enabled: false\n\n # optional additionnal labels for podMonitors\n additionalLabels: {}\n\n # optional annotations for podMonitors\n annotations: {}\n\n # Job label for scrape target\n jobLabel: \"app.kubernetes.io/name\"\n\n # Scrape interval. If not set, the Prometheus default scrape interval is used.\n interval:\n\n # \tmetric relabel configs to apply to samples before ingestion.\n metricRelabelings: []\n # - action: keep\n # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'\n # sourceLabels: [__name__]\n\n # \trelabel configs to apply to samples before ingestion.\n relabelings: []\n # - sourceLabels: [__meta_kubernetes_pod_node_name]\n # separator: ;\n # regex: ^(.*)$\n # target_label: nodename\n # replacement: $1\n # action: replace\n\n # Prometheus Operator ServiceMonitors. To be used as an alternative\n # to podMonitor, supports secure metrics.\n serviceMonitor:\n # enable support for Prometheus Operator\n enabled: false\n\n speaker:\n # optional additional labels for the speaker serviceMonitor\n additionalLabels: {}\n # optional additional annotations for the speaker serviceMonitor\n annotations: {}\n # optional tls configuration for the speaker serviceMonitor, in case\n # secure metrics are enabled.\n tlsConfig:\n insecureSkipVerify: true\n\n controller:\n # optional additional labels for the controller serviceMonitor\n additionalLabels: {}\n # optional additional annotations for the controller serviceMonitor\n annotations: {}\n # optional tls configuration for the controller serviceMonitor, in case\n # secure metrics are enabled.\n tlsConfig:\n insecureSkipVerify: true\n\n # Job label for scrape target\n jobLabel: \"app.kubernetes.io/name\"\n\n # Scrape interval. 
If not set, the Prometheus default scrape interval is used.\n interval:\n\n # \tmetric relabel configs to apply to samples before ingestion.\n metricRelabelings: []\n # - action: keep\n # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'\n # sourceLabels: [__name__]\n\n # \trelabel configs to apply to samples before ingestion.\n relabelings: []\n # - sourceLabels: [__meta_kubernetes_pod_node_name]\n # separator: ;\n # regex: ^(.*)$\n # target_label: nodename\n # replacement: $1\n # action: replace\n\n # Prometheus Operator alertmanager alerts\n prometheusRule:\n # enable alertmanager alerts\n enabled: false\n\n # optional additionnal labels for prometheusRules\n additionalLabels: {}\n\n # optional annotations for prometheusRules\n annotations: {}\n\n # MetalLBStaleConfig\n staleConfig:\n enabled: true\n labels:\n severity: warning\n\n # MetalLBConfigNotLoaded\n configNotLoaded:\n enabled: true\n labels:\n severity: warning\n\n # MetalLBAddressPoolExhausted\n addressPoolExhausted:\n enabled: true\n labels:\n severity: alert\n\n addressPoolUsage:\n enabled: true\n thresholds:\n - percent: 75\n labels:\n severity: warning\n - percent: 85\n labels:\n severity: warning\n - percent: 95\n labels:\n severity: alert\n\n # MetalLBBGPSessionDown\n bgpSessionDown:\n enabled: true\n labels:\n severity: alert\n\n extraAlerts: []\n\n # controller contains configuration specific to the MetalLB cluster\n # controller.\n controller:\n enabled: true\n # -- Controller log level. Must be one of: `all`, `debug`, `info`, `warn`, `error` or `none`\n logLevel: info\n # command: /controller\n # webhookMode: enabled\n image:\n repository: gcr.io/spectro-images-public/packs/metallb/0.14.8/controller\n tag: v0.14.8\n pullPolicy:\n ## @param controller.updateStrategy.type Metallb controller deployment strategy type.\n ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy\n ## e.g:\n ## strategy:\n ## type: RollingUpdate\n ## rollingUpdate:\n ## maxSurge: 25%\n ## maxUnavailable: 25%\n ##\n strategy:\n type: RollingUpdate\n serviceAccount:\n # Specifies whether a ServiceAccount should be created\n create: true\n # The name of the ServiceAccount to use. If not set and create is\n # true, a name is generated using the fullname template\n name: \"\"\n annotations: {}\n securityContext:\n runAsNonRoot: true\n # nobody\n runAsUser: 65534\n fsGroup: 65534\n resources: {}\n # limits:\n # cpu: 100m\n # memory: 100Mi\n nodeSelector: {}\n tolerations: []\n priorityClassName: \"\"\n runtimeClassName: \"\"\n affinity: {}\n podAnnotations: {}\n labels: {}\n livenessProbe:\n enabled: true\n failureThreshold: 3\n initialDelaySeconds: 10\n periodSeconds: 10\n successThreshold: 1\n timeoutSeconds: 1\n readinessProbe:\n enabled: true\n failureThreshold: 3\n initialDelaySeconds: 10\n periodSeconds: 10\n successThreshold: 1\n timeoutSeconds: 1\n tlsMinVersion: \"VersionTLS12\"\n tlsCipherSuites: \"\"\n\n extraContainers: []\n\n # speaker contains configuration specific to the MetalLB speaker\n # daemonset.\n speaker:\n enabled: true\n # command: /speaker\n # -- Speaker log level. 
Must be one of: `all`, `debug`, `info`, `warn`, `error` or `none`\n logLevel: info\n tolerateMaster: true\n memberlist:\n enabled: true\n mlBindPort: 7946\n mlBindAddrOverride: \"\"\n mlSecretKeyPath: \"/etc/ml_secret_key\"\n excludeInterfaces:\n enabled: true\n # ignore the exclude-from-external-loadbalancer label (required for 1-node clusters are all-control-plane clusters)\n ignoreExcludeLB: false\n\n image:\n repository: gcr.io/spectro-images-public/packs/metallb/0.14.8/speaker\n tag: v0.14.8\n pullPolicy:\n ## @param speaker.updateStrategy.type Speaker daemonset strategy type\n ## ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/\n ##\n updateStrategy:\n ## StrategyType\n ## Can be set to RollingUpdate or OnDelete\n ##\n type: RollingUpdate\n serviceAccount:\n # Specifies whether a ServiceAccount should be created\n create: true\n # The name of the ServiceAccount to use. If not set and create is\n # true, a name is generated using the fullname template\n name: \"\"\n annotations: {}\n securityContext: {}\n ## Defines a secret name for the controller to generate a memberlist encryption secret\n ## By default secretName: {{ \"metallb.fullname\" }}-memberlist\n ##\n # secretName:\n resources: {}\n # limits:\n # cpu: 100m\n # memory: 100Mi\n nodeSelector: {}\n tolerations: []\n priorityClassName: \"\"\n affinity: {}\n ## Selects which runtime class will be used by the pod.\n runtimeClassName: \"\"\n podAnnotations: {}\n labels: {}\n livenessProbe:\n enabled: true\n failureThreshold: 3\n initialDelaySeconds: 10\n periodSeconds: 10\n successThreshold: 1\n timeoutSeconds: 1\n readinessProbe:\n enabled: true\n failureThreshold: 3\n initialDelaySeconds: 10\n periodSeconds: 10\n successThreshold: 1\n timeoutSeconds: 1\n startupProbe:\n enabled: true\n failureThreshold: 30\n periodSeconds: 5\n # frr contains configuration specific to the MetalLB FRR container,\n # for speaker running alongside FRR.\n frr:\n enabled: false\n image:\n repository: gcr.io/spectro-images-public/packs/metallb/0.14.8/frr\n tag: 9.1.0\n pullPolicy:\n metricsPort: 7473\n resources: {}\n # if set, enables a rbac proxy sidecar container on the speaker to\n # expose the frr metrics via tls.\n # secureMetricsPort: 9121\n\n\n reloader:\n resources: {}\n\n frrMetrics:\n resources: {}\n\n extraContainers: []\n\n crds:\n enabled: true\n validationFailurePolicy: Fail\n\n # frrk8s contains the configuration related to using an frrk8s instance\n # (github.com/metallb/frr-k8s) as the backend for the BGP implementation.\n # This allows configuring additional frr parameters in combination to those\n # applied by MetalLB.\n frrk8s:\n # if set, enables frrk8s as a backend. 
This is mutually exclusive to frr\n # mode.\n enabled: false\n external: false\n namespace: \"\"", + "registry": { + "metadata": { + "uid": "5eecc89d0b150045ae661cef", + "name": "Public Repo", + "kind": "pack", + "isPrivate": false, + "providerType": "" + } + } + }, + { + "name": "hello-universe", + "type": "oci", + "layer": "addon", + "version": "1.2.0", + "tag": "1.2.0", + "values": "# spectrocloud.com/enabled-presets: Backend:disable-api\npack:\n content:\n images:\n - image: ghcr.io/spectrocloud/hello-universe:1.2.0\n spectrocloud.com/install-priority: 0\n\nmanifests:\n hello-universe:\n images:\n hellouniverse: ghcr.io/spectrocloud/hello-universe:1.2.0\n apiEnabled: false\n namespace: hello-universe\n port: 8080\n replicas: 1", + "registry": { + "metadata": { + "uid": "64eaff5630402973c4e1856a", + "name": "Palette Community Registry", + "kind": "oci", + "isPrivate": true, + "providerType": "pack" + } + } + } + ] + }, + "variables": [] + } +} +``` \ No newline at end of file diff --git a/_partials/getting-started/_getting-started_create-cluster-profile_spacetastic-end.mdx b/_partials/getting-started/_getting-started_create-cluster-profile_spacetastic-end.mdx new file mode 100644 index 0000000000..4c89d296f4 --- /dev/null +++ b/_partials/getting-started/_getting-started_create-cluster-profile_spacetastic-end.mdx @@ -0,0 +1,14 @@ +--- +partial_category: getting-started +partial_name: spacetastic-create-cluster-profile-end +--- + +Wren and Kai have created their first Palette cluster profile by following the steps described in this guide. They are +in good spirits, as the process has gone smoothly. + +> "The visual representation of cluster profiles in Palette is much clearer than our whiteboard." says Kai, glancing +> back at the list they have created. "I can keep track of which versions we are using in production just by reviewing +> the profile. What are your thoughts, Wren? Have you remained a Palette skeptic?" +> +> Wren laughs. "Yes, I admit cluster profiles are very convenient. I'm not convinced yet, but I am already starting to +> understand how Palette could make us more productive. Let's keep exploring and get something deployed with it!" \ No newline at end of file diff --git a/_partials/getting-started/_getting-started_create-cluster-profile_spacetastic-intro.mdx b/_partials/getting-started/_getting-started_create-cluster-profile_spacetastic-intro.mdx new file mode 100644 index 0000000000..e699fe9119 --- /dev/null +++ b/_partials/getting-started/_getting-started_create-cluster-profile_spacetastic-intro.mdx @@ -0,0 +1,16 @@ +--- +partial_category: getting-started +partial_name: spacetastic-create-cluster-profile-intro +--- + +The team are busy exploring and evaluating Palette. In order to prepare for a migration to any external platform, they +begin to map out all the dependencies of their systems and infrastructure. + +> Wren begins creating the list. "Our tech stack has grown, as we have added features and capabilities. I remember +> making a lot of design decisions myself, as I was Spacetastic's Founding Engineer. It's really interesting to look +> back on how much we've built and grown since those days!" +> +> Kai smiles and nods. "It's definitely been an out of this world ride!" they say. "I have a similar feeling when I +> think about the infrastructure that I built in the early days as Platform Engineer. I will add our infrastructure +> layers to your list. 
This process has the added bonus of giving us a chance to review which dependencies need to be +> updated, so Meera, our security expert, will be happy too." \ No newline at end of file diff --git a/_partials/getting-started/_getting-started_deploy-cluster-tf_spacetastic-end.mdx b/_partials/getting-started/_getting-started_deploy-cluster-tf_spacetastic-end.mdx new file mode 100644 index 0000000000..ebe4ebd208 --- /dev/null +++ b/_partials/getting-started/_getting-started_deploy-cluster-tf_spacetastic-end.mdx @@ -0,0 +1,14 @@ +--- +partial_category: getting-started +partial_name: spacetastic-deploy-cluster-tf-end +--- + +Wren and Kai have followed this tutorial and have learned how Palette supports IaC through Terraform. They found the +essentials covered to be a great introduction to IaC, giving them the confidence to kick off this initiative at +Spacetastic. + +> "I'd say that deploying Palette clusters with Terraform is even more convenient than through the UI." says Kai. "The +> Palette Terraform provider includes a lot of the same functionality that the UI provides." +> +> "Yes! I definitely agree. I'm a Terraform novice and I could follow along with this tutorial." says Wren. "This has +> definitely inspired me to make our IaC adoption a priority in the medium-term future." diff --git a/_partials/getting-started/_getting-started_deploy-cluster-tf_spacetastic-intro.mdx b/_partials/getting-started/_getting-started_deploy-cluster-tf_spacetastic-intro.mdx new file mode 100644 index 0000000000..0bb001ce83 --- /dev/null +++ b/_partials/getting-started/_getting-started_deploy-cluster-tf_spacetastic-intro.mdx @@ -0,0 +1,20 @@ +--- +partial_category: getting-started +partial_name: spacetastic-deploy-cluster-tf-intro +--- + +After following the tutorials in the Getting Started section, the Spacetastic team have been impressed with +Palette's capabilities. Wren, Founding Engineer, and Kai, Platform Engineer, have been discussing adopting IaC workflows and +have been upskilling with Terraform throughout the past year. They are interested in learning if Palette can support IaC +workflows too. + +> "While we're on the topic of platform improvements, it would be great to kick off our adoption of Infrastructure as +> Code at Spacetastic." says Wren. "I've been wanting to roll this out for a while, but we don't have that much in-house +> expertise." +> +> "Yes, this would definitely be a big improvement to our processes." says Kai, Platform Engineer. "Some people might +> think that it slows down the development and release processes, due to the extra code reviews. However, the ability to +> revert in the case of an outage more than makes up for this small drop in velocity." +> +> Wren nods knowingly. "Let's explore Palette's IaC capabilities and maybe we can apply some learnings to our +> infrastructure." diff --git a/_partials/getting-started/_getting-started_deploy-cluster_spacetastic-end.mdx b/_partials/getting-started/_getting-started_deploy-cluster_spacetastic-end.mdx new file mode 100644 index 0000000000..eaa1c44596 --- /dev/null +++ b/_partials/getting-started/_getting-started_deploy-cluster_spacetastic-end.mdx @@ -0,0 +1,22 @@ +--- +partial_category: getting-started +partial_name: spacetastic-deploy-cluster-end +--- + +Wren and Kai have deployed their first cluster profile by following the steps described in this tutorial. They were +impressed by how streamlined the process was and how the cluster profiles provided them with a deployment blueprint.
+ +> "Deploying our first cluster with Palette was intuitive." says Wren. "It's ideal to find an external partner that can +> take care of our Kubernetes infrastructure and free us up to deliver more educational features. I definitely think +> that Palette has the capabilities to take care of all the Kubernetes heavy lifting for us." +> +> "I agree with you and I'm glad to hear you're not as skeptical anymore." says Kai, nodding and laughing. "From a +> platform engineering perspective, I can say that cluster profiles will provide us with reliable deployments across +> environments and even clouds, so I'm much more confident about our testing and deployment strategy." +> +> Meera, Head of Cybersecurity, walks in holding a file. "I've done our security due diligence and I'm happy to report +> that Spectro Cloud adheres to the highest security standards. I'm happy to approve Palette for use in our +> organization." +> +> "It seems like we've found a great platform that can support us. Let's explore the rest of the Getting Started section +> to understand what else Palette has to offer." says Kai turning back to their monitor. diff --git a/_partials/getting-started/_getting-started_deploy-cluster_spacetastic-intro.mdx b/_partials/getting-started/_getting-started_deploy-cluster_spacetastic-intro.mdx new file mode 100644 index 0000000000..9792b01100 --- /dev/null +++ b/_partials/getting-started/_getting-started_deploy-cluster_spacetastic-intro.mdx @@ -0,0 +1,17 @@ +--- +partial_category: getting-started +partial_name: spacetastic-deploy-cluster-intro +--- + +After successfully creating their first cluster profile and mapping out their entire technology stack, Wren, Founding +Engineer and Kai, Platform Engineer, continue their Palette onboarding process. They are evaluating Palette as a +potential platform orchestration tool for all the production workloads at Spacetastic, who provide an astronomy +education platform deployed on Kubernetes. + +> "The Getting Started section is a great way to learn about Palette. The hands-on approach is just what we need to get +> our first cluster deployed." says Kai, scrolling through the Spectro Cloud Docs. "Wren, do you have time to continue +> our onboarding and get our first cluster deployed?" +> +> Wren sits down next to Kai and sips on a cup of coffee. "Now, we'll get a hands-on feel of the Palette developer +> experience. You know me, I'm a champion for developer tooling and always supportive of investing in our platform. +> Let's follow this tutorial and deploy a cluster using the Palette UI." \ No newline at end of file diff --git a/_partials/getting-started/_getting-started_landing-page_spacetastic-intro.mdx b/_partials/getting-started/_getting-started_landing-page_spacetastic-intro.mdx new file mode 100644 index 0000000000..d8f04786e5 --- /dev/null +++ b/_partials/getting-started/_getting-started_landing-page_spacetastic-intro.mdx @@ -0,0 +1,44 @@ +--- +partial_category: getting-started +partial_name: spacetastic-landing-intro +--- + +Spacetastic Ltd., our fictional example company, is on a mission to teach its users about space. They have assembled a +team of bright minds who are passionate about astronomy and the universe. They are a startup that is gaining popularity, +as they expand their dashboards and grow their subscribers. Their small team has been in charge of developing new +features alongside scaling and maintaining their infrastructure, but they are dedicated to providing the best astronomy +education platform on Planet Earth. 
+ +> "I'm the resident space expert around here!" says Anya, Lead Astrophycist, with a beaming smile. "My mission is to +> make astrophysics, the science of space, accessible to everyone." +> +> "I'm here to support you and your mission. I build all the dashboards, pages and features that bring your vast space +> knowledge to our users in a beautiful visual format!" says Wren, Founding Engineer. +> +> Kai smiles and nods. "I work closely with both Wren and Anya. As Platform Engineer, I ensure that our platform is +> reliable and scalable for everyone around the world, and beyond!" +> +> Meera, Head of Cybersecurity, is the final member of the Spacetastic team. "Let's not forget about the security of our +> platform. I make sure that our systems are designed and implemented with security in mind, the true SecDevOps way." + +![Meet the Spacetastic team](/getting-started/getting-started_landing_meet-the-team.webp) + +The team has deployed their services to a single cloud provider. They rely on Kubernetes for the reliability and +scalability of their systems. The team must ensure the systems are secure, patched regularly, scalable, and meet a +reliability SLA of at least 99% uptime. The following diagram presents an overview of their systems. + +![Spacetastic system diagram](/getting-started/getting-started_landing_spacetastic-systems.webp) + +While the system architecture they have chosen was a great place to start, the team soon face common challenges that +many growing organizations encounter with Kubernetes. + +> Wren hurriedly walks into the office, looking at their phone with a worried expression. "Users are reporting on social +> media that our systems are down! This must be related to the new feature we have just released." +> +> Meera looks up from their monitor. "I've also received an alert about a new zero-day vulnerability. We need to patch +> our services without further downtime, as soon as you are able to stabilize our platform." +> +> "Team, we need to rethink our platform engineering tools. We need a solution that can help us scale and deploy with +> confidence, ultimately supporting the growth of our company." says Kai with a determined look. + +![Kubernetes challenges](/getting-started/getting-started_landing_kubernetes-challenges.webp) diff --git a/_partials/getting-started/_getting-started_scale-secure-cluster_spacetastic-end.mdx b/_partials/getting-started/_getting-started_scale-secure-cluster_spacetastic-end.mdx new file mode 100644 index 0000000000..7c3ca2973a --- /dev/null +++ b/_partials/getting-started/_getting-started_scale-secure-cluster_spacetastic-end.mdx @@ -0,0 +1,23 @@ +--- +partial_category: getting-started +partial_name: spacetastic-scale-secure-cluster-end +--- + +After going through the steps in the tutorial, Kai is confident in Palette's upgrade and scanning capabilities. + +> "What have you found out, Kai?" says Meera walking over to Kai's desk. "Can I rely on Palette when a zero-day +> vulnerability comes in?" +> +> "Yes, I know how stressful it is when those are reported." says Kai with a sympathetic nod. "I found out that Palette +> has our security covered through their pack updates and scanning capabilities. Relying on this kind of tooling is +> invaluable to security conscious engineers like us." +> +> "Excellent! These capabilities will be a great addition to our existing systems at Spacetastic." says Meera with a big +> grin. +> +> "I'm so glad that we found a platform that can support everyone!" says Kai. 
"There is so much more to explore though. +> I will keep reading through the Getting Started section and find out what additional capabilities Palette provides." +> +> "Good thinking, Kai." says Meera, nodding. "We should maximize all of Palette's features now that we have implemented +> it in production. We've got big ideas and goals on our company roadmap, so let's find out how Palette can help us +> deliver them." diff --git a/_partials/getting-started/_getting-started_scale-secure-cluster_spacetastic-intro.mdx b/_partials/getting-started/_getting-started_scale-secure-cluster_spacetastic-intro.mdx new file mode 100644 index 0000000000..e9857ffba3 --- /dev/null +++ b/_partials/getting-started/_getting-started_scale-secure-cluster_spacetastic-intro.mdx @@ -0,0 +1,21 @@ +--- +partial_category: getting-started +partial_name: spacetastic-scale-secure-cluster-intro +--- + +The team have been impressed with Palette's capabilities and decide to become a Spectro Cloud customer. The last piece +of the puzzle is to learn how to handle Day-2 operations, which become increasingly more important as the Spacetastic +platform matures. They must ensure that their systems are patched, upgraded, scaled, and scanned for vulnerabilities. +These maintenance tasks must be automated and applied on a schedule, as the entire team wants to focus on providing +Spacetastic features. + +> "I've read your report on Palette adoption at Spacetastic." says Meera, who provides the security expertise at +> Spacetastic. I was impressed with the ability to roll out updates to all clusters using the same cluster profile. This +> will streamline our system upgrades and cluster patching. Keeping up with security best practices has never been more +> important, now that we are growing faster than ever!" +> +> "I agree. No matter how safe our coding practices are, we need to periodically review, patch and upgrade our +> dependencies." says Wren, who leads the engineering team at Spacetastic. +> +> Kai nods, scrolling through the Palette Docs. "Team, Palette has more security and Day-2 operation support than we +> have explored so far. I will continue their Getting Started section and report back with my findings." \ No newline at end of file diff --git a/_partials/getting-started/_getting-started_setup_spacetastic-end.mdx b/_partials/getting-started/_getting-started_setup_spacetastic-end.mdx new file mode 100644 index 0000000000..e08c9e233c --- /dev/null +++ b/_partials/getting-started/_getting-started_setup_spacetastic-end.mdx @@ -0,0 +1,13 @@ +--- +partial_category: getting-started +partial_name: spacetastic-setup-end +--- + +After following the detailed Palette setup instructions, the Spacetastic team have added their cloud accounts on the +Palette dashboard. They are ready to learn about Palette. + +> "The Spectro Cloud team has provided our Palette accounts" says Kai. "I have followed their setup guide and have added +> our cloud accounts. I can already tell at a first glance that they offer many Kubernetes customization features." +> +> Wren joins Kai in looking at the Palette dashboard. "I'm interested to learn more, but I never believe in _magic_ +> solutions. We should review their Getting Started material in detail to ensure that Palette is a good fit for us." 
diff --git a/_partials/getting-started/_getting-started_setup_spacetastic-intro.mdx b/_partials/getting-started/_getting-started_setup_spacetastic-intro.mdx new file mode 100644 index 0000000000..9795aabfa9 --- /dev/null +++ b/_partials/getting-started/_getting-started_setup_spacetastic-intro.mdx @@ -0,0 +1,25 @@ +--- +partial_category: getting-started +partial_name: spacetastic-setup-intro +--- + +The Spacetastic team decide to look for an external solution that can help them scale and manage their Kubernetes +services. Partnering with a team of Kubernetes experts allows them to focus on expanding their astronomy education +platform, instead of spending countless hours migrating and rehosting their services. They identify the following list +of benefits that their new platform should provide. + +- Simplified Kubernetes cluster deployment processes across cloud providers. +- Cluster maintenance and security patching across environments. +- Monitoring and observability of Kubernetes workloads. + +> "I have so many ideas for new features for our backlog." says Anya, Lead Astrophycist. "Our community of space +> explorers want to keep learning, so we shouldn't slow down our implementation cycle. We need to keep expanding our +> astronomy education product." +> +> Kai nods knowingly. As a Platform Engineer, they agree with Anya's concerns. "I've done some research on Kubernetes +> orchestration solutions. It seems that Palette has all the capabilities we need to help us grow." +> +> "I agree with both of you, but I want to review the developer experience in detail before we agree to implement a new +> solution in production." says Wren, whose main concern as Founding Engineer is to ensure development velocity does not +> decrease. "Let's reach out to Spectro Cloud to create an account. Then, we can make an informed decision after we +> complete their Getting Started tutorials." \ No newline at end of file diff --git a/_partials/getting-started/_getting-started_update-cluster_spacetastic-end.mdx b/_partials/getting-started/_getting-started_update-cluster_spacetastic-end.mdx new file mode 100644 index 0000000000..fc57bcb5f0 --- /dev/null +++ b/_partials/getting-started/_getting-started_update-cluster_spacetastic-end.mdx @@ -0,0 +1,13 @@ +--- +partial_category: getting-started +partial_name: spacetastic-update-cluster-end +--- + +Wren and Kai have followed this tutorial and now have a great understanding of what cluster profile updates mean to +deployed clusters. They are impressed with Palette's cluster management capabilities. + +> "Neat! Palette's cluster profiles allow us to review all updates we apply to our clusters." says Kai. "I can finally +> take my vacation days, once we can safely maintain our clusters." +> +> "Don't I know the feeling?" laughs Wren. "I think we could all use more vacations, quiet weekends and less excitement +> when it comes to the Spacetastic platform." diff --git a/_partials/getting-started/_getting-started_update-cluster_spacetastic-intro.mdx b/_partials/getting-started/_getting-started_update-cluster_spacetastic-intro.mdx new file mode 100644 index 0000000000..9531a8216f --- /dev/null +++ b/_partials/getting-started/_getting-started_update-cluster_spacetastic-intro.mdx @@ -0,0 +1,21 @@ +--- +partial_category: getting-started +partial_name: spacetastic-update-cluster-intro +--- + +The recent outages of their platform have highlighted the need to mature their systems and establish the future vision +of the Spacetastic platform and infrastructure. 
The team have identified the following areas of improvement. + +- Automated deployments across cloud providers. +- Scalable infrastructure that can support 10x the amount of current subscribers. +- Safe updates and releases without any downtime. + +> Wren, Founding Engineer, and Kai, Platform Engineer, have been learning and experimenting with Palette. +> +> "The streamlined deployment process is just one part of the improvements we've got planned for our platform." says +> Kai. "I'm interested to learn how Palette's cluster profiles behave when applying updates and other changes to our +> clusters." +> +> Wren nods, knowingly. "Yes, that's critical to avoid future outages like the incidents we’ve had when rolling out new +> features. After all, not every service is greenfield development, so we want services that have streamlined management +> processes too." \ No newline at end of file diff --git a/docs/docs-content/clusters/cluster-management/ssh-keys.md b/docs/docs-content/clusters/cluster-management/ssh-keys.md index 1e62cf30fc..bf22c290be 100644 --- a/docs/docs-content/clusters/cluster-management/ssh-keys.md +++ b/docs/docs-content/clusters/cluster-management/ssh-keys.md @@ -25,68 +25,7 @@ you need a public SSH key registered in Palette. ## Create and Upload an SSH Key -Follow these steps to create an SSH key using the terminal and upload it to Palette: - -1. Open the terminal on your computer. - -2. Check for existing SSH keys by invoking the following command. - -
- - ```shell - ls -la ~/.ssh - ``` - - If you see files named **id_rsa** and **id_rsa.pub**, you already have an SSH key pair and can skip to step 8. If - not, proceed to step 3. - -3. Generate a new SSH key pair by issuing the following command. - -
- - ```shell - ssh-keygen -t rsa -b 4096 -C "your_email@example.com" - ``` - - Replace `your_email@example.com` with your actual email address. - -4. Press Enter to accept the default file location for the key pair. - -5. Enter a passphrase (optional) and confirm it. We recommend using a strong passphrase for added security. - -6. Copy the public SSH key value. Use the `cat` command to display the public key. - -
- - ```shell - cat ~/.ssh/id_rsa.pub - ``` - - Copy the entire key, including the `ssh-rsa` prefix and your email address at the end. - -7. Log in to [Palette](https://console.spectrocloud.com). - -8. Navigate to the left **Main Menu**, select **Project Settings**, and then the **SSH Keys** tab. - -9. Open the **Add New SSH Key** tab and complete the **Add Key** input form: - - - **Name**: Provide a unique name for the SSH key. - - - **SSH Key**: Paste the SSH public key contents from the key pair generated earlier. - -10. Click **Confirm** to complete the wizard. - -
- -:::info - -You can edit or delete SSH keys later by using the **three-dot Menu** to the right of each key. - -::: - -During cluster creation, assign your SSH key to a cluster. You can use multiple keys to a project, but only one key can -be assigned to an individual cluster. - + ## Validate You can validate that the SSH public key is available in Palette by attempting to deploy a host cluster. During the host diff --git a/docs/docs-content/clusters/edge/edge-configuration/installer-reference.md b/docs/docs-content/clusters/edge/edge-configuration/installer-reference.md index 5e80480533..5bfb52f8f6 100644 --- a/docs/docs-content/clusters/edge/edge-configuration/installer-reference.md +++ b/docs/docs-content/clusters/edge/edge-configuration/installer-reference.md @@ -42,6 +42,11 @@ listed in alphabetical order. You can point the Edge Installer to a non-default registry to load content from another source. Use the `registryCredentials` parameter object to specify the registry configurations. +If you are using an external registry and want to use content bundles when deploying your Edge cluster, you must also +enable the local Harbor registry. For more information, refer to +[Build Content Bundles](../edgeforge-workflow/palette-canvos/build-content-bundle.md) and +[Enable Local Harbor Registry](../site-deployment/deploy-custom-registries/local-registry.md). + | Parameter | Description | Default | | -------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | | `stylus.registryCredentials.domain` | The domain of the registry. You can use an IP address plus the port or a domain name. | | diff --git a/docs/docs-content/clusters/edge/edgeforge-workflow/palette-canvos/build-content-bundle.md b/docs/docs-content/clusters/edge/edgeforge-workflow/palette-canvos/build-content-bundle.md index 29d39705c7..575d281046 100644 --- a/docs/docs-content/clusters/edge/edgeforge-workflow/palette-canvos/build-content-bundle.md +++ b/docs/docs-content/clusters/edge/edgeforge-workflow/palette-canvos/build-content-bundle.md @@ -41,6 +41,15 @@ Creating a content bundle provides several benefits that may address common use - Organizations that want better control over the software used by their Edge hosts can use content bundles to ensure that only approved software is consumed. +## Limitation + +- You cannot use content bundles with an external registry if you do not enable the local Harbor registry on your Edge + host. If you specify a external registry without enabling the local Harbor registry, the images will be downloaded + from the external registry even if you provide a content bundle, and deployment will fail if the necessary images + cannot be located in the external registry. For more information, refer to + [Deploy Cluster with External Registry](../../site-deployment/deploy-custom-registries/deploy-external-registry.md) + and [Enable Local Harbor Registry](../../site-deployment/deploy-custom-registries/local-registry.md). + ## Prerequisites - Linux Machine (Physical or VM) with an AMD64 architecture. 
diff --git a/docs/docs-content/clusters/edge/site-deployment/deploy-custom-registries/deploy-external-registry.md b/docs/docs-content/clusters/edge/site-deployment/deploy-custom-registries/deploy-external-registry.md index f1d34cad87..a55bd4c638 100644 --- a/docs/docs-content/clusters/edge/site-deployment/deploy-custom-registries/deploy-external-registry.md +++ b/docs/docs-content/clusters/edge/site-deployment/deploy-custom-registries/deploy-external-registry.md @@ -38,6 +38,13 @@ information, refer to [Enable Local Harbor Registry](./local-registry.md). - Palette Edge supports basic username/password authentication. Token authentication schemes used by services such as AWS ECR and Google Artifact Registry are not supported. +- You cannot use content bundles with an external registry if you do not enable the local Harbor registry on your Edge + host. If you specify a external registry without enabling the local Harbor registry, the images will be downloaded + from the external registry even if you provide a content bundle, and deployment will fail if the necessary images + cannot be located in the external registry. For more information, refer to + [Build Content Bundles](../../edgeforge-workflow/palette-canvos/build-content-bundle.md) and + [Enable Local Harbor Registry](../../site-deployment/deploy-custom-registries/local-registry.md). + ## Prerequisites - Specifying the external registry and providing credentials happens during the EdgeForge process. You should become diff --git a/docs/docs-content/clusters/public-cloud/aws/add-aws-accounts.md b/docs/docs-content/clusters/public-cloud/aws/add-aws-accounts.md index 3295f4c048..675e16ebb1 100644 --- a/docs/docs-content/clusters/public-cloud/aws/add-aws-accounts.md +++ b/docs/docs-content/clusters/public-cloud/aws/add-aws-accounts.md @@ -35,36 +35,7 @@ Use the steps below to add an AWS cloud account using static access credentials. #### Add AWS Account to Palette -1. Create an IAM Role or IAM User for Palette. Use the following resources if you need additional help. - - - [IAM Role creation guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html). - - [IAM User creation guide](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html). - -2. In the AWS console, assign the Palette-required IAM policies to the IAM role or the IAM user that Palette will use. - -3. Log in to [Palette](https://console.spectrocloud.com) as tenant admin. - -4. From the left **Main Menu**, click on **Tenant Settings**. - -5. Select **Cloud Accounts**, and click **+Add AWS Account**. - -6. In the cloud account creation wizard provide the following information: - - - **Account Name:** Custom name for the cloud account. - - - **Description:** Optional description for the cloud account. - - **Partition:** Choose **AWS** from the **drop-down Menu**. - - - **Credentials:** - - AWS Access key - - AWS Secret access key - -7. Click the **Validate** button to validate the credentials. - -8. Once the credentials are validated, the **Add IAM Policies** toggle displays. Toggle **Add IAM Policies** on. - -9. Use the **drop-down Menu**, which lists available IAM policies in your AWS account, to select any desired IAM - policies you want to assign to Palette IAM role or IAM user. 
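If you prefer to script the IAM setup described in the steps above, the following is a minimal sketch using the AWS CLI. The user name and policy ARN are placeholders shown for illustration only; attach the Palette-required IAM policies that apply to your account instead.

```bash
# Create an IAM user for Palette. The name "palette-sa" is an example.
aws iam create-user --user-name palette-sa

# Attach a policy to the user. The ARN below is a placeholder; repeat this
# command for each Palette-required IAM policy in your AWS account.
aws iam attach-user-policy \
  --user-name palette-sa \
  --policy-arn arn:aws:iam::123456789012:policy/ExamplePalettePolicy

# Generate the access key and secret access key that you enter in Palette.
aws iam create-access-key --user-name palette-sa
```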
+ #### Validate diff --git a/docs/docs-content/clusters/public-cloud/azure/azure-cloud.md b/docs/docs-content/clusters/public-cloud/azure/azure-cloud.md index 3867710b0e..8f15d5ff74 100644 --- a/docs/docs-content/clusters/public-cloud/azure/azure-cloud.md +++ b/docs/docs-content/clusters/public-cloud/azure/azure-cloud.md @@ -24,41 +24,7 @@ authentication methods to register your cloud account. ## Add Azure Cloud Account -Use the following steps to add an Azure or Azure Government account in Palette or Palette VerteX. - -1. Log in to [Palette](https://console.spectrocloud.com) or Palette VerteX as a tenant admin. - -2. From the left **Main Menu**, select **Tenant Settings**. - -3. Next, select **Cloud Accounts** in the **Tenant Settings Menu**. - -4. Locate **Azure**, and click **+ Add Azure Account**. - -5. Fill out the following information, and click **Confirm** to complete the registration. - - | **Basic Information** | **Description** | - | --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | - | **Account Name** | A custom account name. | - | **Tenant ID** | Unique tenant ID from Azure Management Portal. | - | **Client ID** | Unique client ID from Azure Management Portal. | - | **Client Secret** | Azure secret for authentication. Refer to Microsoft's reference guide for creating a [Client Secret](https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-create-service-principal-portal#create-an-azure-active-directory-application). | - | **Cloud** | Select **Azure Public Cloud** or **Azure US Government**. | - | **Tenant Name** | An optional tenant name. | - | **Disable Properties** | This option prevents Palette and VerteX from creating Azure Virtual Networks (VNets) and other network resources on your behalf for static placement deployments. If you enable this option, all users must manually specify a pre-existing VNet, subnets, and security groups when creating clusters. | - | **Connect Private Cloud Gateway** | Select this option to connect to a Private Cloud Gateway (PCG) if you have a PCG deployed in your environment. Refer to the PCG [Architecture](../../pcg/architecture.md) page to learn more about a PCG. | - - :::info - - If you want to provide network proxy configurations to the Kubernetes clusters deployed through Palette, you must use - a PCG cluster. Check out the [Proxy Configuration](./architecture.md#proxy-configuration) section in the Architecture - page for more information. - - ::: - -6. After providing the required values, click the **Validate** button. If the client secret you provided is correct, a - _Credentials validated_ success message with a green check is displayed. - -7. Click **Confirm** to complete the registration. + ## Validate diff --git a/docs/docs-content/clusters/public-cloud/gcp/add-gcp-accounts.md b/docs/docs-content/clusters/public-cloud/gcp/add-gcp-accounts.md index d88e6be5be..aa5f8c59c8 100644 --- a/docs/docs-content/clusters/public-cloud/gcp/add-gcp-accounts.md +++ b/docs/docs-content/clusters/public-cloud/gcp/add-gcp-accounts.md @@ -44,29 +44,7 @@ account in Palette. ## Create Account -1. Log in to [Palette](https://console.spectrocloud.com) as Tenant admin. - -2. Navigate to the left **Main Menu** and select **Tenant Settings**. - -3. 
Select **Cloud Accounts** and click on **Add GCP Account**. - -4. In the cloud account creation wizard, provide the following information: - - - **Account Name:** Custom name for the cloud account. - - - **JSON Credentials:** The JSON credentials object. - -
- - :::info - - You can use the **Upload** button to upload the JSON file you downloaded from the GCP console. - - ::: - -5. Click the **Validate** button to validate the credentials. - -6. When the credentials are validated, click on **Confirm** to save your changes. + ## Validate diff --git a/docs/docs-content/enterprise-version/system-management/ssl-certificate-management.md b/docs/docs-content/enterprise-version/system-management/ssl-certificate-management.md index 7657c8e16a..ff7b308347 100644 --- a/docs/docs-content/enterprise-version/system-management/ssl-certificate-management.md +++ b/docs/docs-content/enterprise-version/system-management/ssl-certificate-management.md @@ -12,8 +12,8 @@ keywords: ["self-hosted", "enterprise"] Palette uses Secure Sockets Layer (SSL) certificates to secure internal and external communication with Hypertext Transfer Protocol Secure (HTTPS). External Palette endpoints, such as the [system console](../system-management/system-management.md#system-console), -[Palette dashboard](../../getting-started/dashboard.md), Palette API, and gRPC endpoints, are enabled by default with -HTTPS using an auto-generated self-signed certificate. +[Palette dashboard](../../introduction/dashboard.md), Palette API, and gRPC endpoints, are enabled by default with HTTPS +using an auto-generated self-signed certificate. ## Update System Address and Certificates diff --git a/docs/docs-content/getting-started/additional-capabilities.md b/docs/docs-content/getting-started/additional-capabilities.md deleted file mode 100644 index 66f64592b2..0000000000 --- a/docs/docs-content/getting-started/additional-capabilities.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -sidebar_label: "Additional Capabilities" -title: "Additional Capabilities" -description: "Learn more about Palette's Additional Capabilities." -icon: "" -hide_table_of_contents: false -sidebar_position: 80 -tags: ["getting-started"] ---- - -Palette offers a range of additional capabilities designed to enable its users to deploy, scale, and effectively manage -Kubernetes workloads across a wide variety of environments and deployment options. - -This section will introduce you to some of Palette's additional capabilities, which include: - -- Managing thousands of clusters in remote locations with [Edge](./additional-capabilities.md#edge). -- Supporting high-security requirements with our FIPS-validated [VerteX](./additional-capabilities.md#palette-vertex) - edition. -- Self-hosting the Palette management plane in your own environment with - [Self-Hosted Palette](./additional-capabilities.md#self-hosted-palette). -- Integrating virtual machine workloads into Kubernetes environments with - [Virtual Machine Orchestrator](./additional-capabilities.md#virtual-machine-orchestrator). - -![A drawing of Palette with humans interacting](/getting-started/getting-started_additional-capabilities_palette.webp) - -## Edge - -Palette Edge enables you to deploy Kubernetes workloads in remote locations characterized by limited or intermittent -connectivity and limited compute infrastructure. This means you can deploy Kubernetes clusters at scale and ensure -application performance, availability, security, and lifecycle management across a diverse range of edge locations. -These locations include hospitals, retail stores, Telco environments, restaurants, manufacturing facilities, rural -areas, and many more. - -Palette Edge supports both VM and container-based workloads, multiple Kubernetes distributions, and Intel and ARM -hardware architectures. 
It is built on top of the open-source project [Kairos](https://kairos.io/), which enables the -creation and customization of immutable versions of operating systems. Additionally, Palette Edge is designed to scale -to tens of thousands of locations while enforcing policies locally within each cluster. - -Edge clusters are Kubernetes clusters set up on Edge hosts. These hosts can be bare metal or virtual machines located in -isolated locations. Palette deploys and manages workload clusters at the Edge, and the services continue operating even -when the connection to the management plane is lost. You can manage Edge clusters locally on-site through Local UI, or -centrally through the Palette management plane. Palette Edge is able to meet your needs, regardless of the network -topology your deployments face. Check out the [Palette Edge](../clusters/edge/edge.md) page to learn more about Edge and -its features. - -## Self-Hosted Palette - -By default, the Palette management plane is available as a multi-tenant SaaS deployment in a public cloud with multiple -availability zones. Should you need it, Palette is also offered as a dedicated SaaS instance, as well as a fully -self-hosted option that allows your teams to directly deploy and manage a private instance of the Palette management -plane in your data center or public cloud provider. - -Self-hosted Palette puts you in full control of the management plane, including its configuration and the timing of -upgrades. A self-hosted instance may be necessary to meet compliance requirements or your organization's security -policies. You may also need to deploy an instance of Palette within an airgapped facility to manage clusters where -access to any outside service is not possible. Explore more on the -[Self-Hosted Palette](https://docs.spectrocloud.com/enterprise-version/) page. - -## Palette VerteX - -Palette VerteX offers a simple, flexible, and secure way for government and regulated industries to deploy and manage -Kubernetes workloads containing sensitive and classified information. It is available as a self-hosted platform offering -that you can install in your data center or public cloud provider. - -Palette VerteX is fully proven in operational environments as it has a Technology Readiness Level (TRL) 9 designation, -making it suitable for use in high-security production environments up to Impact Levels (IL) 5, 6, and 6+. It enables -you to deploy and manage the life cycle of multiple Kubernetes clusters in various environments. These include -virtualized and bare metal data centers (such as [VMware vSphere](https://www.vmware.com/products/vsphere.html) and -[Nutanix](https://www.nutanix.com/)), clouds (including [AWS](https://aws.amazon.com/govcloud-us/) and -[Azure](https://azure.microsoft.com/en-ca/explore/global-infrastructure/government) government clouds), and edge -locations (including air-gapped setups), which makes VerteX also appropriate for addressing challenges like intermittent -connectivity or low bandwidth. - -Additionally, VerteX incorporates validated Federal Information Processing Standards (FIPS) 140-2 cryptographic modules -into its management plane and the Kubernetes clusters it deploys. It secures data in motion through encrypted Transport -Layer Security (TLS) communication channels, includes a suite of scanning tools, and offers CONUS support from a -dedicated public sector team. These capabilities ensure robust data protection for your organization’s infrastructure -and applications. 
To learn more, check out the [Palette VerteX](../vertex/vertex.md) page. - -## Virtual Machine Orchestrator - -Palette Virtual Machine Orchestrator (VMO) allows you to deploy, manage, and scale traditional VM workloads within a -modern Kubernetes environment, side by side with your containerized applications. It lets you apply to VMs the same -lifecycle management capabilities as Palette applies to containers, including backups. - -VMO uses the CNCF project [KubeVirt](https://kubevirt.io) to manage VMs as Kubernetes pods, ensuring complete mapping -between the VM and Kubernetes concepts. This solution also has near complete feature parity with -[VMware vSphere](https://www.vmware.com/products/vsphere.html), including capabilities such as live migration. - -Palette VMO can be used on edge hosts, giving the ability to deploy VM workloads at the edge without the overhead of a -hypervisor layer. This is achieved by leveraging [Canonical MAAS](https://maas.io). Additionally, VMO can also be used -in self-hosted, airgapped, and in our SaaS environments. Learn more on the -[Virtual Machine Orchestrator](../vm-management/vm-management.md) page. diff --git a/docs/docs-content/getting-started/additional-capabilities/_category_.json b/docs/docs-content/getting-started/additional-capabilities/_category_.json new file mode 100644 index 0000000000..79a194a9b1 --- /dev/null +++ b/docs/docs-content/getting-started/additional-capabilities/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 80 +} diff --git a/docs/docs-content/getting-started/additional-capabilities/additional-capabilities.md b/docs/docs-content/getting-started/additional-capabilities/additional-capabilities.md new file mode 100644 index 0000000000..2d19746ff4 --- /dev/null +++ b/docs/docs-content/getting-started/additional-capabilities/additional-capabilities.md @@ -0,0 +1,80 @@ +--- +sidebar_label: "Additional Capabilities" +title: "Additional Capabilities" +description: "Learn more about Palette's Additional Capabilities." +icon: "" +hide_table_of_contents: false +sidebar_position: 10 +tags: ["getting-started"] +--- + +Palette offers a range of additional capabilities designed to enable its users to deploy, scale, and effectively manage +Kubernetes workloads across a wide variety of environments and deployment options. + +This section introduces you to some of Palette's additional capabilities, which include: + +- Managing thousands of clusters in remote locations with [Edge](./edge.md). +- Supporting high-security requirements with our FIPS-validated [VerteX](./self-hosted.md#palette-vertex) edition. +- Self-hosting the Palette management plane in your own environment with + [Self-Hosted Palette](./self-hosted.md#self-hosted-palette). +- Integrating virtual machine workloads into Kubernetes environments with [Virtual Machine Orchestrator](./vmo.md). + +![A drawing of Palette with humans interacting](/getting-started/getting-started_additional-capabilities_palette.webp) + +The concepts you learn about in the Getting Started section are centered around a fictional case study company, +Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + +Spacetastic has been a Palette customer for a few months. In this time, they have become the leading astronomy education +platform. They want to keep pushing the limits of their platform and implement some innovative capabilities. To support +this growth, they will need to expand their team, infrastructure, and systems. 
They continue exploring the Getting +Started section to learn how they can grow with Palette and have a long-term relationship with Spectro Cloud. + +> "Wouldn’t bringing some astronomy into everyone's home be great?" says Anya, Lead Astrophycist, who has always the +> dreamer of the team. "I wonder how we could make that possible." +> +> Kai is in charge of scaling the Spacetastic platform. "That would be a great dream and a challenge for us, Anya." they +> say. +> +> "You know, we might be able to make your dream happen!" says Wren, Founding Engineer. "Palette's edge capabilities +> could make it possible for us to bring Spacetastic to many devices." +> +> "I can't believe my ears!" says Kai laughing. "Wren, our resident Palette skeptic, has well and truly embraced our new +> platform solution." +> +> Wren laughs and quickly responds. "Oh and one more thing! Palette doesn't lock us into a single tech stack or cloud +> provider, so we can deploy our services in many locations." +> +> "Palette makes Kubernetes just as secure on edge devices as it is in large data centers." says Meera, Head of +> Cybersecurity, joining in. "We'll make your dream a reality and bring Spacetastic to everyone soon enough, Anya. The +> sky's the limit for us!" + +## The Journey Continues + +In this section, you get an overview of other parts of Palette not yet covered by your Getting Started journey so far. +Explore more through the following pages. + + diff --git a/docs/docs-content/getting-started/additional-capabilities/edge.md b/docs/docs-content/getting-started/additional-capabilities/edge.md new file mode 100644 index 0000000000..dccb29d402 --- /dev/null +++ b/docs/docs-content/getting-started/additional-capabilities/edge.md @@ -0,0 +1,48 @@ +--- +sidebar_label: "Palette Edge" +title: "Palette Edge" +description: "Learn more about Palette's Edge Capabilities." +icon: "" +hide_table_of_contents: false +sidebar_position: 10 +tags: ["getting-started"] +--- + +Palette Edge enables you to deploy Kubernetes workloads in remote locations characterized by limited or intermittent +connectivity and limited compute infrastructure. This means you can deploy Kubernetes clusters at scale and ensure +application performance, availability, security, and lifecycle management across a diverse range of edge locations. +These locations include hospitals, retail stores, Telco environments, restaurants, manufacturing facilities, rural +areas, and many more. + +Palette Edge supports both VM and container-based workloads, multiple Kubernetes distributions, and Intel and ARM +hardware architectures. It is built on top of the open-source project [Kairos](https://kairos.io/), which enables the +creation and customization of immutable versions of operating systems. Additionally, Palette Edge is designed to scale +to tens of thousands of locations while enforcing policies locally within each cluster. + +Edge clusters are Kubernetes clusters set up on Edge hosts. These hosts can be bare metal or virtual machines located in +isolated locations. Palette deploys and manages workload clusters at the Edge, and the services continue operating even +when the connection to the management plane is lost. You can manage Edge clusters locally on-site through Local UI, or +centrally through the Palette management plane. Palette Edge is able to meet your needs, regardless of the network +topology your deployments face. 
+ +Palette Edge also allows you to be confident that all software operating on your Edge hosts is authenticated software +verified through cryptographic signatures. [Trusted Boot](../../clusters/edge/trusted-boot/trusted-boot.md) is the +security feature that ensures the authenticity of the boot processes. In the event that an Edge device is lost or +stolen, the +[Trusted Platform Module (TPM)](https://www.intel.com/content/www/us/en/business/enterprise-computers/resources/trusted-platform-module.html) +will not release the key to decrypt the disk encryption if the boot process is tampered with, ensuring your user data +remains encrypted. + +## Resources + +To learn more about Palette Edge, review the [Edge](../../clusters/edge/edge.md) section to learn more about Edge and +its features. Then, follow the [Deploy an Edge Cluster on VMware](../../tutorials/edge/deploy-cluster.md) tutorial to +learn how to build Edge artifacts, prepare VMware VMs as Edge hosts using the Edge installer ISO, create a cluster +profile referencing a provider image, and deploy a cluster. + +Check out the following video for a quick overview of how you can provision and manage thousands of edge Kubernetes +clusters with Palette. + +
+ + diff --git a/docs/docs-content/getting-started/additional-capabilities/self-hosted.md b/docs/docs-content/getting-started/additional-capabilities/self-hosted.md new file mode 100644 index 0000000000..953f67afe4 --- /dev/null +++ b/docs/docs-content/getting-started/additional-capabilities/self-hosted.md @@ -0,0 +1,57 @@ +--- +sidebar_label: "VerteX and Self-Hosted Palette" +title: "VerteX and Self-Hosted Palette" +description: "Learn more about VerteX and Self-Hosted Palette." +icon: "" +hide_table_of_contents: false +sidebar_position: 20 +tags: ["getting-started"] +--- + +## Self-Hosted Palette + +By default, the Palette management plane is available as a multi-tenant SaaS deployment in a public cloud with multiple +availability zones. Should you need it, Palette is also offered as a dedicated SaaS instance, as well as a fully +self-hosted option that allows your teams to directly deploy and manage a private instance of the Palette management +plane in your data center or public cloud provider. + +Self-hosted Palette puts you in full control of the management plane, including its configuration and the timing of +upgrades. A self-hosted instance may be necessary to meet compliance requirements or your organization's security +policies. You may also need to deploy an instance of Palette within an airgapped facility to manage clusters where +access to any outside service is not possible. + +## Palette VerteX + +Palette VerteX offers a simple, flexible, and secure way for government and regulated industries to deploy and manage +Kubernetes workloads containing sensitive and classified information. It is available as a self-hosted platform offering +that you can install in your data center or public cloud provider. + +Palette VerteX is fully proven in operational environments as it has a Technology Readiness Level (TRL) 9 designation, +making it suitable for use in high-security production environments up to Impact Levels (IL) 5, 6, and 6+. It enables +you to deploy and manage the life cycle of multiple Kubernetes clusters in various environments. These include +virtualized and bare metal data centers (such as [VMware vSphere](https://www.vmware.com/products/vsphere.html) and +[Nutanix](https://www.nutanix.com/)), clouds (including [AWS](https://aws.amazon.com/govcloud-us/) and +[Azure](https://azure.microsoft.com/en-ca/explore/global-infrastructure/government) government clouds), and edge +locations (including air-gapped setups), which makes VerteX also appropriate for addressing challenges like intermittent +connectivity or low bandwidth. + +Additionally, VerteX incorporates validated Federal Information Processing Standards (FIPS) 140-2 cryptographic modules +into its management plane and the Kubernetes clusters it deploys. It secures data in motion through encrypted Transport +Layer Security (TLS) communication channels, includes a suite of scanning tools, and offers CONUS support from a +dedicated public sector team. These capabilities ensure robust data protection for your organization’s infrastructure +and applications. + +## Resources + +Check out the [Self-Hosted Palette](../../enterprise-version/enterprise-version.md) section to learn how to install the +self-hosted version of Palette in your data centers or public cloud providers. + +Review the [Palette VerteX](../../vertex/vertex.md) section to learn how to install and configure VerteX in your data +centers or public cloud providers. 
+ +Check out the following video for a tour of Palette VerteX, our tailor-made Kubernetes management solution for +government and regulated industries. + +
+ + diff --git a/docs/docs-content/getting-started/additional-capabilities/vmo.md b/docs/docs-content/getting-started/additional-capabilities/vmo.md new file mode 100644 index 0000000000..a73bd415d7 --- /dev/null +++ b/docs/docs-content/getting-started/additional-capabilities/vmo.md @@ -0,0 +1,36 @@ +--- +sidebar_label: "Virtual Machine Orchestrator" +title: "Virtual Machine Orchestrator" +description: "Learn more about the Palette Virtual Machine Orchestrator (VMO)." +icon: "" +hide_table_of_contents: false +sidebar_position: 30 +tags: ["getting-started"] +--- + +Palette Virtual Machine Orchestrator (VMO) allows you to deploy, manage, and scale traditional VM workloads within a +modern Kubernetes environment, side by side with your containerized applications. It lets you apply to VMs the same +lifecycle management capabilities as Palette applies to containers, including backups. + +VMO uses the CNCF project [KubeVirt](https://kubevirt.io) to manage VMs as Kubernetes pods, ensuring complete mapping +between the VM and Kubernetes concepts. This solution also has near complete feature parity with +[VMware vSphere](https://www.vmware.com/products/vsphere.html), including capabilities such as live migration. + +Palette VMO can be used on edge hosts, giving the ability to deploy VM workloads at the edge without the overhead of a +hypervisor layer. This is achieved by leveraging [Canonical MAAS](https://maas.io). Additionally, VMO can also be used +in self-hosted, airgapped, and in our SaaS environments. Learn more on the +[Virtual Machine Orchestrator](../../vm-management/vm-management.md) page. + +## Resources + +To learn more about Palette VMO, review the [Architecture](../../vm-management/architecture.md) page to learn about the +components involved in enabling VMO for your infrastructure. Then, review the +[Create a VMO Profile](../../vm-management/create-vmo-profile.md) guide to prepare everything you need to deploy your +first cluster with VMO. + +Check out the following video for a tour of Palette's Virtual Machine Orchestrator (VMO) capability. It shows how you +can model, deploy, and manage VM workloads alongside containers in your clusters. + +
+ + diff --git a/docs/docs-content/getting-started/aws/_category_.json b/docs/docs-content/getting-started/aws/_category_.json new file mode 100644 index 0000000000..e7e7c54966 --- /dev/null +++ b/docs/docs-content/getting-started/aws/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 40 +} diff --git a/docs/docs-content/getting-started/aws/aws.md b/docs/docs-content/getting-started/aws/aws.md new file mode 100644 index 0000000000..2b4d53c7a4 --- /dev/null +++ b/docs/docs-content/getting-started/aws/aws.md @@ -0,0 +1,64 @@ +--- +sidebar_label: "Deploy a Cluster to AWS" +title: "Deploy a Cluster to Amazon Web Services (AWS)" +description: "Spectro Cloud Getting Started with AWS" +hide_table_of_contents: false +sidebar_custom_props: + icon: "" +tags: ["getting-started", "aws"] +--- + +Palette supports integration with [Amazon Web Services](https://aws.amazon.com). You can deploy and manage +[Host Clusters](../../glossary-all.md#host-cluster) in AWS. The concepts you learn about in the Getting Started section +are centered around a fictional case study company. This approach gives you a solution focused approach, while +introducing you with Palette workflows and capabilities. + +## 🧑‍🚀 Welcome to Spacetastic! + + + +## Get Started + +In this section, you learn how to create a cluster profile. Then, you deploy a cluster to AWS by using Palette. Once +your cluster is deployed, you can update it using cluster profile updates. + + diff --git a/docs/docs-content/getting-started/aws/create-cluster-profile.md b/docs/docs-content/getting-started/aws/create-cluster-profile.md new file mode 100644 index 0000000000..5fe506d6c6 --- /dev/null +++ b/docs/docs-content/getting-started/aws/create-cluster-profile.md @@ -0,0 +1,120 @@ +--- +sidebar_label: "Create a Cluster Profile" +title: "Create a Cluster Profile" +description: "Learn to create a full cluster profile in Palette for AWS." +icon: "" +hide_table_of_contents: false +sidebar_position: 20 +tags: ["getting-started", "aws"] +--- + +Palette offers profile-based management for Kubernetes, enabling consistency, repeatability, and operational efficiency +across multiple clusters. A cluster profile allows you to customize the cluster infrastructure stack, allowing you to +choose the desired Operating System (OS), Kubernetes, Container Network Interfaces (CNI), Container Storage Interfaces +(CSI). You can further customize the stack with add-on application layers. For more information about cluster profile +types, refer to [Cluster Profiles](../introduction.md#cluster-profiles). + +In this tutorial, you create a full profile directly from the Palette dashboard. Then, you add a layer to your cluster +profile by using a [community pack](../../integrations/community_packs.md) to deploy a web application. The concepts you +learn about in the Getting Started section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +- Follow the steps described in the [Set up Palette with AWS](./setup.md) guide to authenticate Palette for use with + your AWS cloud account. +- Ensure that the [Palette Community Registry](../../registries-and-packs/registries/registries.md#default-registries) + is available in your Palette environment. Refer to the + [Add OCI Packs Registry](../../registries-and-packs/registries/oci-registry/add-oci-packs.md) guide for additional + guidance. + +## Create a Full Cluster Profile + +Log in to [Palette](https://console.spectrocloud.com) and navigate to the left **Main Menu**. 
Select **Profiles** to +view the cluster profile page. You can view the list of available cluster profiles. To create a cluster profile, click +on **Add Cluster Profile**. + +Follow the wizard to create a new profile. + +In the **Basic Information** section, assign the name **aws-profile**, a brief profile description, select the type as +**Full**, and assign the tag **env:aws**. You can leave the version empty if you want to. Just be aware that the version +defaults to **1.0.0**. Click on **Next**. + +**Cloud Type** allows you to choose the infrastructure provider with which this cluster profile is associated. Select +**AWS** and click on **Next**. + +The **Profile Layers** step is where you specify the packs that compose the profile. There are four required +infrastructure packs and several optional add-on packs you can choose from. Every pack requires you to select the **Pack +Type**, **Registry**, and **Pack Name**. + +For this tutorial, use the following packs: + +| Pack Name | Version | Layer | +| -------------- | ------- | ---------------- | +| ubuntu-aws LTS | 22.4.x | Operating System | +| Kubernetes | 1.29.x | Kubernetes | +| cni-calico | 3.27.x | Network | +| csi-aws-ebs | 1.26.x | Storage | + +As you fill out the information for each layer, click on **Next** to proceed to the next layer. + +Click on **Confirm** after you have completed filling out all the core layers. + +![A view of the cluster profile stack](/getting-started/aws/getting-started_create-cluster-profile_clusters_parameters.webp) + +The review section gives an overview of the cluster profile configuration you selected. Click on **Finish +Configuration** to create the cluster profile. + +## Add a Pack + +Navigate to the left **Main Menu** and select **Profiles**. Select the cluster profile you created earlier. + +Click on **Add New Pack** at the top of the page. + +Select the **Palette Community Registry** from the **Registry** dropdown. Then, click on the latest **Hello Universe** +pack with version **v1.2.0**. + +![Screenshot of hello universe pack](/getting-started/aws/getting-started_create-cluster-profile_add-pack.webp) + +Once you have selected the pack, Palette will display its README, which provides you with additional guidance for usage +and configuration options. The pack you added will deploy the +[_hello-universe_](https://github.com/spectrocloud/hello-universe) application. + +![Screenshot of pack readme](/getting-started/aws/getting-started_create-cluster-profile_pack-readme.webp) + +Click on **Values** to edit the pack manifest. Click on **Presets** on the right-hand side. + +This pack has two configured presets: + +1. **Disable Hello Universe API** configures the [_hello-universe_](https://github.com/spectrocloud/hello-universe) + application as a standalone frontend application. This is the default preset selection. +2. **Enable Hello Universe API** configures the [_hello-universe_](https://github.com/spectrocloud/hello-universe) + application as a three-tier application with a frontend, API server, and Postgres database. + +Select the **Enable Hello Universe API** preset. The pack manifest changes according to this preset. + +![Screenshot of pack presets](/getting-started/aws/getting-started_create-cluster-profile_pack-presets.webp) + +The pack requires two values to be replaced for the authorization token and for the database password when using this +preset. Replace these values with your own base64 encoded values. 
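As a minimal sketch, you can produce the base64 encoded values in a terminal. The token and password below are placeholders that you would substitute with your own values before encoding.

```bash
# Placeholder values shown for illustration only; use your own token and password.
echo -n "REPLACE_WITH_AUTH_TOKEN" | base64
echo -n "REPLACE_WITH_DB_PASSWORD" | base64
```

Paste each output into the corresponding field of the pack manifest.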
The +[_hello-universe_](https://github.com/spectrocloud/hello-universe?tab=readme-ov-file#single-load-balancer) repository +provides a token that you can use. + +Click on **Confirm Updates**. The manifest editor closes. + +Click on **Confirm & Create** to save the manifest. Then, click on **Save Changes** to save this new layer to the +cluster profile. + +## Wrap-Up + +In this tutorial, you created a cluster profile, which is a template that contains the core layers required to deploy a +host cluster using Amazon Web Services (AWS). You added a community pack to your profile to deploy a custom workload. We +recommend that you continue to the [Deploy a Cluster](./deploy-k8s-cluster.md) tutorial to deploy this cluster profile +to a host cluster onto AWS. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/aws/deploy-k8s-cluster.md b/docs/docs-content/getting-started/aws/deploy-k8s-cluster.md new file mode 100644 index 0000000000..f965ad819e --- /dev/null +++ b/docs/docs-content/getting-started/aws/deploy-k8s-cluster.md @@ -0,0 +1,185 @@ +--- +sidebar_label: "Deploy a Cluster" +title: "Deploy a Cluster" +description: "Learn to deploy a Palette host cluster." +icon: "" +hide_table_of_contents: false +sidebar_position: 30 +tags: ["getting-started", "aws"] +--- + +This tutorial will teach you how to deploy a host cluster with Palette using Amazon Web Services (AWS). You will learn +about _Cluster Mode_ and _Cluster Profiles_ and how these components enable you to deploy customized applications to +Kubernetes with minimal effort. + +As you navigate the tutorial, refer to this diagram to help you understand how Palette uses a cluster profile as a +blueprint for the host cluster you deploy. Palette clusters have the same node pools you may be familiar with: _control +plane nodes_ and _worker nodes_ where you will deploy applications. The result is a host cluster that Palette manages. +The concepts you learn about in the Getting Started section are centered around a fictional case study company, +Spacetastic Ltd. + +![A view of Palette managing the Kubernetes lifecycle](/getting-started/getting-started_deploy-k8s-cluster_application.webp) + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, you will need the following. + +- Follow the steps described in the [Set up Palette with AWS](./setup.md) guide to authenticate Palette for use with + your AWS cloud account. + +- A Palette cluster profile. Follow the [Create a Cluster Profile](./create-cluster-profile.md) tutorial to create the + required AWS cluster profile. + +## Deploy a Cluster + +The following steps will guide you through deploying the cluster infrastructure. + +Navigate to the left **Main Menu** and select **Clusters**. Click on **Create Cluster**. + +![Palette clusters overview page](/getting-started/getting-started_deploy-k8s-cluster_new_cluster.webp) + +Palette will prompt you to select the type of cluster. Select **AWS IaaS** and click the **Start AWS IaaS +Configuration** button. Use the following steps to create a host cluster in AWS. + +In the **Basic information** section, insert the general information about the cluster, such as the Cluster name, +Description, Tags, and Cloud account. Click on **Next**. + +![Palette clusters basic information](/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_basic_info.webp) + +Click on **Add Cluster Profile**. A list is displayed of available profiles you can choose to deploy to AWS. 
Select the +cluster profile you created in the [Create a Cluster Profile](./create-cluster-profile.md) tutorial, named +**aws-profile**, and click on **Confirm**. + +The **Cluster Profile** section displays all the layers in the cluster profile. + +![Palette clusters parameters](/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_creation_parameters.webp) + +Each layer has a pack manifest file with the deploy configurations. The pack manifest file is in a YAML format. Each +pack contains a set of default values. You can change the manifest values if needed. Click on **Next** to proceed. + +The **Cluster Config** section allows you to select the **Region** in which to deploy the host cluster and specify other +options such as the **SSH Key Pair** to assign to the cluster. All clusters require you to select an SSH key. After you +have selected the **Region** and your **SSH Key Pair Name**, click on **Next**. + +The **Nodes Config** section allows you to configure the nodes that make up the control plane and worker nodes of the +host cluster. + +Before you proceed to next section, review the following parameters. + +- **Number of nodes in the pool** - This option sets the number of control plane or worker nodes in the control plane or + worker pool. For this tutorial, set the count to one for the control plane pool and two for the worker pool. + +- **Allow worker capability** - This option allows the control plane node to also accept workloads. This is useful when + spot instances are used as worker nodes. You can check this box if you want to. + +- **Instance Type** - Select the compute type for the node pool. Each instance type displays the amount of CPU, RAM, and + hourly cost of the instance. Select `m4.2xlarge`. + +- **Availability zones** - Used to specify the availability zones in which the node pool can place nodes. Select an + availability zone. + +- **Disk size** - Set the disk size to **60 GiB**. + +- **Instance Option** - This option allows you to choose + [on-demand instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-on-demand-instances.html) or + [spot instance](https://aws.amazon.com/ec2/spot/) for worker nodes. Select **On Demand**. + +![Palette clusters basic information](/getting-started/aws/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp) + +Select **Next** to proceed with the cluster deployment. + +In the **Cluster Settings** section, you can configure advanced options such as when to patch the OS, enable security +scans, manage backups, add role-based access control (RBAC) bindings, and more. + +For this tutorial, you can use the default settings. Click on **Validate** to continue. + +The **Review** section allows you to review the cluster configuration prior to deploying the cluster. Review all the +settings and click on **Finish Configuration** to deploy the cluster. + +![Configuration overview of newly created AWS cluster](/getting-started/aws/getting-started_deploy-k8s-cluster_profile_cluster_profile_review.webp) + +Navigate to the left **Main Menu** and select **Clusters**. + +![Update the cluster](/getting-started/aws/getting-started_deploy-k8s-cluster_create_cluster.webp) + +The cluster deployment process can take 15 to 30 min. The deployment time varies depending on the cloud provider, +cluster profile, cluster size, and the node pool configurations provided. You can learn more about the deployment +progress by reviewing the event log. Click on the **Events** tab to view the log. 
+ +![Update the cluster](/getting-started/aws/getting-started_deploy-k8s-cluster_event_log.webp) + +## Verify the Application + +Navigate to the left **Main Menu** and select **Clusters**. + +Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic, +indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the +Hello Universe application. + +![Cluster details page with service URL highlighted](/getting-started/aws/getting-started_deploy-k8s-cluster_service_url.webp) + +
+ +:::warning + +It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few +moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request. + +::: + +
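If you prefer to confirm from a terminal that the URL resolves and the application responds before opening it in your browser, standard tools such as `nslookup` and `curl` work well. The hostname below is a placeholder; replace it with the service URL that Palette displays.

```bash
# Replace the placeholder with the service hostname shown in Palette.
SERVICE_HOST="<service-hostname>"

# Confirm the load balancer hostname resolves.
nslookup "$SERVICE_HOST"

# Confirm the application returns an HTTP status code on port 8080.
curl --silent --output /dev/null --write-out "%{http_code}\n" "http://$SERVICE_HOST:8080"
```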
+ +![Image that shows the cluster overview of the Hello Universe Frontend Cluster](/getting-started/getting-started_deploy-k8s-cluster_hello-universe-with-api.webp) + +Welcome to Spacetastic's astronomy education platform. Feel free to explore the pages and learn more about space. The +statistics page offers information on visitor counts on your deployed cluster. + +You have deployed your first application to a cluster managed by Palette. Your first application is a three-tier +application with a frontend, API server, and Postgres database. + +## Cleanup + +Use the following steps to remove all the resources you created for the tutorial. + +To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to +delete to access its details page. + +Click on **Settings** to expand the menu, and select **Delete Cluster**. + +![Delete cluster](/getting-started/aws/getting-started_deploy-k8s-cluster_delete-cluster-button.webp) + +You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name to proceed with +the delete step. The deletion process takes several minutes to complete. + +
+ +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force +delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette +automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +
+ +Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you +created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the +selection to remove the cluster profile. + +## Wrap-Up + +In this tutorial, you used the cluster profile you created in the previous +[Create a Cluster Profile](./create-cluster-profile.md) tutorial to deploy a host cluster onto AWS. After the cluster +deployed, you verified the Hello Universe application was successfully deployed. + +We recommend that you continue to the [Deploy Cluster Profile Updates](./update-k8s-cluster.md) tutorial to learn how to +update your host cluster. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/aws/deploy-manage-k8s-cluster-tf.md b/docs/docs-content/getting-started/aws/deploy-manage-k8s-cluster-tf.md new file mode 100644 index 0000000000..406bfa7894 --- /dev/null +++ b/docs/docs-content/getting-started/aws/deploy-manage-k8s-cluster-tf.md @@ -0,0 +1,752 @@ +--- +sidebar_label: "Cluster Management with Terraform" +title: "Cluster Management with Terraform" +description: "Learn how to deploy and update a Palette host cluster to AWS with Terraform." +icon: "" +hide_table_of_contents: false +sidebar_position: 50 +toc_max_heading_level: 2 +tags: ["getting-started", "aws", "terraform"] +--- + +The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider +allows you to create and manage Palette resources using Infrastructure as Code (IaC). With IaC, you can automate the +provisioning of resources, collaborate on changes, and maintain a single source of truth for your infrastructure. + +This tutorial will teach you how to use Terraform to deploy and update an Amazon Web Services (AWS) host cluster. You +will learn how to create two versions of a cluster profile with different demo applications, update the deployed cluster +with the new cluster profile version, and then perform a rollback. The concepts you learn about in the Getting Started +section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, you will need the following items in place: + +- Follow the steps described in the [Set up Palette with AWS](./setup.md) guide to authenticate Palette for use with + your AWS cloud account and create a Palette API key. +- [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [Podman](https://podman.io/docs/installation) + installed if you choose to follow along using the tutorial container. +- If you choose to clone the repository instead of using the tutorial container, make sure you have the following + software installed: + - [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) v1.9.0 or greater + - [Git](https://git-scm.com/downloads) + - [Kubectl](https://kubernetes.io/docs/tasks/tools/) + +## Set Up Local Environment + +You can clone the [Tutorials](https://github.com/spectrocloud/tutorials) repository locally or follow along by +downloading a container image that includes the tutorial code and all dependencies. + + + + + +Start Docker Desktop and ensure that the Docker daemon is available by issuing the following command. + +```bash +docker ps +``` + +Next, download the tutorial image, start the container, and open a bash session into it. 
+ +```shell +docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.9 bash +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + +:::warning + +Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress. + +::: + + + + + +If you are not using a Linux operating system, create and start the Podman Machine in your local environment. Otherwise, +skip this step. + +```bash +podman machine init +podman machine start +``` + +Use the following command and ensure you receive an output displaying the installation information. + +```bash +podman info +``` + +Next, download the tutorial image, start the container, and open a bash session into it. + +```shell +podman run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.9 bash +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + +:::warning + +Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress. + +::: + + + + + +Open a terminal window and download the tutorial code from GitHub. + +```shell +git clone https://github.com/spectrocloud/tutorials.git +``` + +Change the directory to the tutorial folder. + +```shell +cd tutorials/ +``` + +Check out the following git tag. + +```shell +git checkout v1.1.9 +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + + + + + +## Resources Review + +To help you get started with Terraform, the tutorial code is structured to support deploying a cluster to either AWS, +Azure, GCP, or VMware vSphere. Before you deploy a host cluster to AWS, review the following files in the folder +structure. + +| **File** | **Description** | +| ----------------------- | ---------------------------------------------------------------------------------------------------------------------- | +| **provider.tf** | This file contains the Terraform providers that are used to support the deployment of the cluster. | +| **inputs.tf** | This file contains all the Terraform variables required for the deployment logic. | +| **data.tf** | This file contains all the query resources that perform read actions. | +| **cluster_profiles.tf** | This file contains the cluster profile definitions for each cloud provider. | +| **clusters.tf** | This file has the cluster configurations required to deploy a host cluster to one of the cloud providers. | +| **terraform.tfvars** | Use this file to target a specific cloud provider and customize the deployment. This is the only file you must modify. | +| **ippool.tf** | This file contains the configuration required for VMware deployments that use static IP placement. | +| **ssh-key.tf** | This file has the SSH key resource definition required for Azure and VMware deployments. | +| **outputs.tf** | This file contains the content that will be displayed in the terminal after a successful Terraform `apply` action. | + +The following section reviews the core Terraform resources more closely. + +#### Provider + +The **provider.tf** file contains the Terraform providers used in the tutorial and their respective versions. 
This +tutorial uses four providers: + +- [Spectro Cloud](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) +- [TLS](https://registry.terraform.io/providers/hashicorp/tls/latest) +- [vSphere](https://registry.terraform.io/providers/hashicorp/vsphere/latest) +- [Local](https://registry.terraform.io/providers/hashicorp/local/latest) + +Note how the project name is specified in the `provider "spectrocloud" {}` block. You can change the target project by +modifying the value of the `palette-project` variable in the **terraform.tfvars** file. + +```hcl +terraform { + required_providers { + spectrocloud = { + version = ">= 0.20.6" + source = "spectrocloud/spectrocloud" + } + + tls = { + source = "hashicorp/tls" + version = "4.0.4" + } + + vsphere = { + source = "hashicorp/vsphere" + version = ">= 2.6.1" + } + + local = { + source = "hashicorp/local" + version = "2.4.1" + } + } + + required_version = ">= 1.9" +} + +provider "spectrocloud" { + project_name = var.palette-project +} +``` + +#### Cluster Profile + +The next file you should become familiar with is the **cluster_profiles.tf** file. The `spectrocloud_cluster_profile` +resource allows you to create a cluster profile and customize its layers. You can specify the packs and versions to use +or add a manifest or Helm chart. + +The cluster profile resource is declared eight times in the **cluster-profiles.tf** file, with each pair of resources +being designated for a specific provider. In this tutorial, two versions of the AWS cluster profile are deployed: +version `1.0.0` deploys the [Hello Universe](https://github.com/spectrocloud/hello-universe) pack, while version `1.1.0` +deploys the [Kubecost](https://www.kubecost.com/) pack along with the +[Hello Universe](https://github.com/spectrocloud/hello-universe) application. + +The cluster profiles include layers for the Operating System (OS), Kubernetes, container network interface, and +container storage interface. The first `pack {}` block in the list equates to the bottom layer of the cluster profile. +Ensure you define the bottom layer of the cluster profile - the OS layer - first in the list of `pack {}` blocks, as the +order in which you arrange the contents of the `pack {}` blocks plays an important role in the cluster profile creation. +The table below displays the packs deployed in each version of the cluster profile. + +| **Pack Type** | **Pack Name** | **Version** | **Cluster Profile v1.0.0** | **Cluster Profile v1.1.0** | +| ------------- | --------------- | ----------- | -------------------------- | -------------------------- | +| OS | `ubuntu-aws` | `22.04` | :white_check_mark: | :white_check_mark: | +| Kubernetes | `kubernetes` | `1.29.0` | :white_check_mark: | :white_check_mark: | +| Network | `cni-calico` | `3.27.0` | :white_check_mark: | :white_check_mark: | +| Storage | `csi-aws-ebs` | `1.26.1` | :white_check_mark: | :white_check_mark: | +| App Services | `hellouniverse` | `1.2.0` | :white_check_mark: | :white_check_mark: | +| App Services | `cost-analyzer` | `1.103.3` | :x: | :white_check_mark: | + +The Hello Universe pack has two configured [presets](../../glossary-all.md#presets). The first preset deploys a +standalone frontend application, while the second one deploys a three-tier application with a frontend, API server, and +Postgres database. This tutorial deploys the three-tier version of the +[Hello Universe](https://github.com/spectrocloud/hello-universe) pack. 
The preset selection in the Terraform code is +specified within the Hello Universe pack block with the `values` field and by using the **values-3tier.yaml** file. +Below is an example of version `1.0.0` of the AWS cluster profile Terraform resource. + +```hcl +resource "spectrocloud_cluster_profile" "aws-profile" { + count = var.deploy-aws ? 1 : 0 + + name = "tf-aws-profile" + description = "A basic cluster profile for AWS" + tags = concat(var.tags, ["env:aws"]) + cloud = "aws" + type = "cluster" + version = "1.0.0" + + pack { + name = data.spectrocloud_pack.aws_ubuntu.name + tag = data.spectrocloud_pack.aws_ubuntu.version + uid = data.spectrocloud_pack.aws_ubuntu.id + values = data.spectrocloud_pack.aws_ubuntu.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.aws_k8s.name + tag = data.spectrocloud_pack.aws_k8s.version + uid = data.spectrocloud_pack.aws_k8s.id + values = data.spectrocloud_pack.aws_k8s.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.aws_cni.name + tag = data.spectrocloud_pack.aws_cni.version + uid = data.spectrocloud_pack.aws_cni.id + values = data.spectrocloud_pack.aws_cni.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.aws_csi.name + tag = data.spectrocloud_pack.aws_csi.version + uid = data.spectrocloud_pack.aws_csi.id + values = data.spectrocloud_pack.aws_csi.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.hellouniverse.name + tag = data.spectrocloud_pack.hellouniverse.version + uid = data.spectrocloud_pack.hellouniverse.id + values = templatefile("manifests/values-3tier.yaml", { + namespace = var.app_namespace, + port = var.app_port, + replicas = var.replicas_number + db_password = base64encode(var.db_password), + auth_token = base64encode(var.auth_token) + }) + type = "oci" + } +} +``` + +#### Data Resources + +Each `pack {}` block contains references to a data resource. +[Data resources](https://developer.hashicorp.com/terraform/language/data-sources) are used to perform read actions in +Terraform. The Spectro Cloud Terraform provider exposes several data resources to help you make your Terraform code more +dynamic. The data resource used in the cluster profile is `spectrocloud_pack`. This resource enables you to query +Palette for information about a specific pack, such as its unique ID, registry ID, available versions, and YAML values. + +Below is the data resource used to query Palette for information about the Kubernetes pack for version `1.29.0`. + +```hcl +data "spectrocloud_pack" "aws_k8s" { + name = "kubernetes" + version = "1.29.0" + registry_uid = data.spectrocloud_registry.public_registry.id +} +``` + +Using the data resource helps you avoid manually entering the parameter values required by the cluster profile's +`pack {}` block. + +#### Cluster + +The **clusters.tf** file contains the definitions required for deploying a host cluster to one of the infrastructure +providers. To create an AWS host cluster, you must set the `deploy-aws` variable in the **terraform.tfvars** file to +true. + +When deploying a cluster using Terraform, you must provide the same parameters as those available in the Palette UI for +the cluster deployment step, such as the instance size and number of nodes. You can learn more about each parameter by +reviewing the +[AWS cluster resource](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_aws) +documentation. 
+ +```hcl +resource "spectrocloud_cluster_aws" "aws-cluster" { + count = var.deploy-aws ? 1 : 0 + + name = "aws-cluster" + tags = concat(var.tags, ["env:aws"]) + cloud_account_id = data.spectrocloud_cloudaccount_aws.account[0].id + + cloud_config { + region = var.aws-region + ssh_key_name = var.aws-key-pair-name + } + + cluster_profile { + id = var.deploy-aws && var.deploy-aws-kubecost ? resource.spectrocloud_cluster_profile.aws-profile-kubecost[0].id : resource.spectrocloud_cluster_profile.aws-profile[0].id + } + + machine_pool { + control_plane = true + control_plane_as_worker = true + name = "control-plane-pool" + count = var.aws_control_plane_nodes.count + instance_type = var.aws_control_plane_nodes.instance_type + disk_size_gb = var.aws_control_plane_nodes.disk_size_gb + azs = var.aws_control_plane_nodes.availability_zones + } + + machine_pool { + name = "worker-pool" + count = var.aws_worker_nodes.count + instance_type = var.aws_worker_nodes.instance_type + disk_size_gb = var.aws_worker_nodes.disk_size_gb + azs = var.aws_worker_nodes.availability_zones + } + + timeouts { + create = "30m" + delete = "15m" + } +} +``` + +## Terraform Tests + +Before starting the cluster deployment, test the Terraform code to ensure the resources will be provisioned correctly. +Issue the following command in your terminal. + +```bash +terraform test +``` + +A successful test execution will output the following. + +```text hideClipboard +Success! 16 passed, 0 failed. +``` + +## Input Variables + +To deploy a cluster using Terraform, you must first modify the **terraform.tfvars** file. Open it in the editor of your +choice. The tutorial container includes the editor [Nano](https://www.nano-editor.org). + +The file is structured with different sections. Each provider has a section with variables that need to be filled in, +identified by the placeholder `REPLACE_ME`. Additionally, there is a toggle variable named `deploy-` +available for each provider, which you can use to select the deployment environment. + +In the **Palette Settings** section, modify the name of the `palette-project` variable if you wish to deploy to a +Palette project different from the default one. + +```hcl {4} +##################### +# Palette Settings +##################### +palette-project = "Default" # The name of your project in Palette. +``` + +Next, in the **Hello Universe Configuration** section, provide values for the database password and authentication token +for the Hello Universe pack. For example, you can use the value `password` for the database password and the default +token provided in the +[Hello Universe](https://github.com/spectrocloud/hello-universe/tree/main?tab=readme-ov-file#reverse-proxy-with-kubernetes) +repository for the authentication token. + +```hcl {7-8} +############################## +# Hello Universe Configuration +############################## +app_namespace = "hello-universe" # The namespace in which the application will be deployed. +app_port = 8080 # The cluster port number on which the service will listen for incoming traffic. +replicas_number = 1 # The number of pods to be created. +db_password = "REPLACE ME" # The database password to connect to the API database. +auth_token = "REPLACE ME" # The auth token for the API connection. +``` + +Locate the AWS provider section and change `deploy-aws = false` to `deploy-aws = true`. 
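If you prefer to flip this toggle from the command line instead of an editor, a one-line `sed` command works in the tutorial container (GNU sed). Note that it only changes the `deploy-aws` value; the remaining variables still need to be edited manually.

```bash
# Set deploy-aws to true in terraform.tfvars (GNU sed, as available in the tutorial container).
sed -i '/^deploy-aws /s/false/true/' terraform.tfvars
```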
Additionally, replace all +occurrences of `REPLACE_ME` with their corresponding values, such as those for the `aws-cloud-account-name`, +`aws-region`, `aws-key-pair-name`, and `availability_zones` variables. You can also update the values for the nodes in +the control plane or worker node pools as needed. + +:::warning + +Ensure that the SSH key pair specified in `aws-key-pair-name` is available in the same region specified by `aws-region`. +For example, if `aws-region` is set to `us-east-1`, use the name of a key pair that exists in the `us-east-1` region. + +::: + +```hcl {4,7-9,16,24} +########################### +# AWS Deployment Settings +########################### +deploy-aws = false # Set to true to deploy to AWS. +deploy-aws-kubecost = false # Set to true to deploy to AWS and include Kubecost to your cluster profile. + +aws-cloud-account-name = "REPLACE ME" +aws-region = "REPLACE ME" +aws-key-pair-name = "REPLACE ME" + +aws_control_plane_nodes = { + count = "1" + control_plane = true + instance_type = "m4.xlarge" + disk_size_gb = "60" + availability_zones = ["REPLACE ME"] # If you want to deploy to multiple AZs, add them here. Example: ["us-east-1a", "us-east-1b"]. +} + +aws_worker_nodes = { + count = "1" + control_plane = false + instance_type = "m4.xlarge" + disk_size_gb = "60" + availability_zones = ["REPLACE ME"] # If you want to deploy to multiple AZs, add them here. Example: ["us-east-1a", "us-east-1b"]. +} +``` + +When you are done making the required changes, save the file. + +## Deploy the Cluster + +Before starting the cluster provisioning, export your [Palette API key](./setup.md#create-a-palette-api-key) as an +environment variable. This step allows the Terraform code to authenticate with the Palette API. + +```bash +export SPECTROCLOUD_APIKEY= +``` + +Next, issue the following command to initialize Terraform. The `init` command initializes the working directory that +contains the Terraform files. + +```shell +terraform init +``` + +```text hideClipboard +Terraform has been successfully initialized! +``` + +:::warning + +Before deploying the resources, ensure that there are no active clusters named `aws-cluster` or cluster profiles named +`tf-aws-profile` in your Palette project. + +::: + +Issue the `plan` command to preview the resources that Terraform will create. + +```shell +terraform plan +``` + +The output indicates that three new resources will be created: two versions of the AWS cluster profile and the host +cluster. The host cluster will use version `1.0.0` of the cluster profile. + +```shell +Plan: 3 to add, 0 to change, 0 to destroy. +``` + +To deploy the resources, use the `apply` command. + +```shell +terraform apply -auto-approve +``` + +To check that the cluster profile was created correctly, log in to [Palette](https://console.spectrocloud.com), and +click **Profiles** from the left **Main Menu**. Locate the cluster profile named `tf-aws-profile`. Click on the cluster +profile to review its layers and versions. + +![A view of the cluster profile](/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp) + +You can also check the cluster creation process by selecting **Clusters** from the left **Main Menu**. + +![Update the cluster](/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp) + +Select your cluster to review its details page, which contains the status, cluster profile, event logs, and more. 
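After the `apply` command completes, you can also review what was created without leaving your terminal. Both commands below are standard Terraform CLI commands; `terraform output` prints the values defined in **outputs.tf**.

```bash
# List every resource recorded in the Terraform state.
terraform state list

# Display the output values defined in outputs.tf.
terraform output
```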
+ +The cluster deployment may take 15 to 30 minutes depending on the cloud provider, cluster profile, cluster size, and the +node pool configurations provided. You can learn more about the deployment progress by reviewing the event log. Click on +the **Events** tab to check the log. + +![Update the cluster](/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp) + +### Verify the Application + +In Palette, navigate to the left **Main Menu** and select **Clusters**. + +Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic, +indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the +Hello Universe application. + +:::warning + +It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few +moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request. + +::: + +![Deployed application](/getting-started/aws/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp) + +Welcome to Spacetastic's astronomy education platform. Feel free to explore the pages and learn more about space. The +statistics page offers information on visitor counts on your deployed cluster. + +## Version Cluster Profiles + +Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with +better change visibility and control over the layers in your host clusters. Profile versions are commonly used for +adding or removing layers and pack configuration updates. + +The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. In this +tutorial, you used Terraform to deploy two versions of an AWS cluster profile. The snippet below displays a segment of +the Terraform cluster profile resource version `1.0.0` that was deployed. + +```hcl {4,9} +resource "spectrocloud_cluster_profile" "aws-profile" { + count = var.deploy-aws ? 1 : 0 + + name = "tf-aws-profile" + description = "A basic cluster profile for AWS" + tags = concat(var.tags, ["env:aws"]) + cloud = "aws" + type = "cluster" + version = "1.0.0" +``` + +Open the **terraform.tfvars** file, set the `deploy-aws-kubecost` variable to true, and save the file. Once applied, the +host cluster will use version `1.1.0` of the cluster profile with the Kubecost pack. + +The snippet below displays the segment of the Terraform resource that creates the cluster profile version `1.1.0`. Note +how the name `tf-aws-profile` is the same as in the first cluster profile resource, but the version is different. + +```hcl {4,9} +resource "spectrocloud_cluster_profile" "aws-profile-kubecost" { + count = var.deploy-aws-kubecost ? 1 : 0 + + name = "tf-aws-profile" + description = "A basic cluster profile for AWS with Kubecost" + tags = concat(var.tags, ["env:aws"]) + cloud = "aws" + type = "cluster" + version = "1.1.0" +``` + +In the terminal window, issue the following command to plan the changes. + +```bash +terraform plan +``` + +The output states that one resource will be modified. The deployed cluster will now use version `1.1.0` of the cluster +profile. + +```text hideClipboard +Plan: 0 to add, 1 to change, 0 to destroy. +``` + +Issue the `apply` command to deploy the changes. 
+ +```bash +terraform apply -auto-approve +``` + +Palette will now reconcile the current state of your workloads with the desired state specified by the new cluster +profile version. + +To visualize the reconciliation behavior, log in to [Palette](https://console.spectrocloud.com), and click **Clusters** +from the left **Main Menu**. + +Select the cluster named `aws-cluster`. Click on the **Events** tab. Note how a cluster reconciliation action was +triggered due to cluster profile changes. + +![Image that shows the cluster profile reconciliation behavior](/getting-started/aws/getting-started_deploy-manage-k8s-cluster_reconciliation.webp) + +Next, click on the **Profile** tab. Observe that the cluster is now using version `1.1.0` of the `tf-aws-profile` +cluster profile. + +![Image that shows the new cluster profile version with Kubecost](/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp) + +Once the changes have been completed, Palette marks the cluster layers with a green status indicator. Click the +**Overview** tab to verify that the Kubecost pack was successfully deployed. + +![Image that shows the cluster with Kubecost](/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp) + +Next, download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette +UI. This file enables you and other users to issue `kubectl` commands against the host cluster. + +![Image that shows the cluster's kubeconfig file location](/getting-started/aws/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp) + +Open a new terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded. + +```bash +export KUBECONFIG=~/Downloads/admin.aws-cluster.kubeconfig +``` + +Forward the Kubecost UI to your local network. The Kubecost dashboard is not exposed externally by default, so the +command below will allow you to access it locally on port **9090**. If port 9090 is already taken, you can choose a +different one. + +```bash +kubectl port-forward --namespace kubecost deployment/cost-analyzer-cost-analyzer 9090 +``` + +Open your browser window and navigate to `http://localhost:9090`. The Kubecost UI provides you with a variety of cost +information about your cluster. Read more about +[Navigating the Kubecost UI](https://docs.kubecost.com/using-kubecost/navigating-the-kubecost-ui) to make the most of +the cost analyzer pack. + +![Image that shows the Kubecost UI](/getting-started/aws/getting-started_deploy-manage-k8s-cluster_kubecost.webp) + +Once you are done exploring the Kubecost dashboard, stop the `kubectl port-forward` command by closing the terminal +window it is executing from. + +## Roll Back Cluster Profiles + +One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of +previously known working states. The ability to roll back to a previously working cluster profile in one action shortens +the time to recovery in the event of an incident. + +The process of rolling back to a previous version using Terraform is similar to the process of applying a new version. + +Open the **terraform.tfvars** file, set the `deploy-aws-kubecost` variable to false, and save the file. Once applied, +this action will make the active cluster use version **1.0.0** of the cluster profile again. + +In the terminal window, issue the following command to plan the changes. 
+ +```bash +terraform plan +``` + +The output states that the deployed cluster will now use version `1.0.0` of the cluster profile. + +```text hideClipboard +Plan: 0 to add, 1 to change, 0 to destroy. +``` + +Issue the `apply` command to deploy the changes. + +```bash +terraform apply -auto-approve +``` + +Palette now makes the changes required for the cluster to return to the state specified in version `1.0.0` of your +cluster profile. Once your changes have completed, Palette marks your layers with the green status indicator. + +![Image that shows the cluster using version 1.0.0 of the cluster profile](/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp) + +## Cleanup + +Use the following steps to clean up the resources you created for the tutorial. Use the `destroy` command to remove all +the resources you created through Terraform. + +```shell +terraform destroy --auto-approve +``` + +A successful execution of `terraform destroy` will output the following. + +```shell +Destroy complete! Resources: 3 destroyed. +``` + +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for force delete. To trigger a force +delete action, navigate to the cluster’s details page and click on **Settings**. Click on **Force Delete Cluster** to +delete the cluster. Palette automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +If you are using the tutorial container, type `exit` in your terminal session and press the **Enter** key. Next, issue +the following command to stop and remove the container. + + + + + +```shell +docker stop tutorialContainer && \ +docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.9 +``` + + + + + +```shell +podman stop tutorialContainer && \ +podman rmi --force ghcr.io/spectrocloud/tutorials:1.1.9 +``` + + + + + +## Wrap-Up + +In this tutorial, you learned how to create different versions of a cluster profile using Terraform. You deployed a host +AWS cluster and then updated it to use a different version of a cluster profile. Finally, you learned how to perform +cluster profile roll backs. + +We encourage you to check out the [Scale, Upgrade, and Secure Clusters](./scale-secure-cluster.md) tutorial to learn how +to perform common Day-2 operations on your deployed clusters. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/aws/scale-secure-cluster.md b/docs/docs-content/getting-started/aws/scale-secure-cluster.md new file mode 100644 index 0000000000..333b1d07b4 --- /dev/null +++ b/docs/docs-content/getting-started/aws/scale-secure-cluster.md @@ -0,0 +1,525 @@ +--- +sidebar_label: "Scale, Upgrade, and Secure Clusters" +title: "Scale, Upgrade, and Secure Clusters" +description: "Learn how to scale, upgrade, and secure Palette host clusters deployed to AWS." +icon: "" +hide_table_of_contents: false +sidebar_position: 60 +tags: ["getting-started", "aws", "tutorial"] +--- + +Palette has in-built features to help with the automation of Day-2 operations. Upgrading and maintaining a deployed +cluster is typically complex because you need to consider any possible impact on service availability. Palette provides +out-of-the-box functionality for upgrades, observability, granular Role Based Access Control (RBAC), backup and security +scans. + +This tutorial will teach you how to use the Palette UI to perform scale and maintenance tasks on your clusters. 
You will +learn how to create Palette projects and teams, import a cluster profile, safely upgrade the Kubernetes version of a +deployed cluster and scale up your cluster nodes. The concepts you learn about in the Getting Started section are +centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, follow the steps described in the [Set up Palette with AWS](./setup.md) guide to authenticate +Palette for use with your AWS cloud account. + +Additionally, you should install kubectl locally. Use the Kubernetes +[Install Tools](https://kubernetes.io/docs/tasks/tools/) page for further guidance. + +## Create Palette Projects + +Palette projects help you organize and manage cluster resources, providing logical groupings. They also allow you to +manage user access control through Role Based Access Control (RBAC). You can assign users and teams with specific roles +to specific projects. All resources created within a project are scoped to that project and only available to that +project, but a tenant can have multiple projects. + +Log in to [Palette](https://console.spectrocloud.com). + +Click on the **drop-down Menu** at the top of the page and switch to the **Tenant Admin** scope. Palette provides the +**Default** project out-of-the-box. + +![Image that shows how to select tenant admin scope](/getting-started/getting-started_scale-secure-cluster_switch-tenant-admin-scope.webp) + +Navigate to the left **Main Menu** and click on **Projects**. Click on the **Create Project** button. The **Create a new +project** dialog appears. + +Fill out the input fields with values from the table below to create a project. + +| Field | Description | Value | +| ----------- | ----------------------------------- | --------------------------------------------------------- | +| Name | The name of the project. | `Project-ScaleSecureTutorial` | +| Description | A brief description of the project. | Project for Scale, Upgrade, and Secure Clusters tutorial. | +| Tags | Add tags to the project. | `env:dev` | + +Click **Confirm** to create the project. Once Palette finishes creating the project, a new card appears on the +**Projects** page. + +Navigate to the left **Main Menu** and click on **Users & Teams**. + +Select the **Teams** tab. Then, click on **Create Team**. + +Fill in the **Team Name** with **scale-secure-tutorial-team**. Click on **Confirm**. + +Once Palette creates the team, select it from the **Teams** list. The **Team Details** pane opens. + +On the **Project Roles** tab, click on **New Project Role**. The list of project roles appears. + +Select the **Project-ScaleSecureTutorial** from the **Projects** drop-down. Then, select the **Cluster Profile Viewer** +and **Cluster Viewer** roles. Click on **Confirm**. + +![Image that shows how to select team roles](/getting-started/getting-started_scale-secure-cluster_select-team-roles.webp) + +Any users that you add to this team inherit the project roles assigned to it. Roles are the foundation of Palette's RBAC +enforcement. They allow a single user to have different types of access control based on the resource being accessed. In +this scenario, any user added to this team will have access to view any cluster profiles and clusters in the +**Project-ScaleSecureTutorial** project, but not modify them. Check out the +[Palette RBAC](../../user-management/palette-rbac/palette-rbac.md) section for more details. + +Navigate to the left **Main Menu** and click on **Projects**. 
+ +Click on **Open project** on the **Project-ScaleSecureTutorial** card. + +![Image that shows how to open the tutorial project](/getting-started/getting-started_scale-secure-cluster_open-tutorial-project.webp) + +Your scope changes from **Tenant Admin** to **Project-ScaleSecureTutorial**. All further resources you create will be +part of this project. + +## Import a Cluster Profile + +Palette provides three resource contexts. They help you customize your environment to your organizational needs, as well +as control the scope of your settings. + +| Context | Description | +| ------- | ---------------------------------------------------------------------------------------- | +| System | Resources are available at the system level and to all tenants in the system. | +| Tenant | Resources are available at the tenant level and to all projects belonging to the tenant. | +| Project | Resources are available within a project and not available to other projects. | + +All of the resources you have created as part of your Getting Started journey have used the **Project** context. They +are only visible in the **Default** project. Therefore, you will need to create a new cluster profile in +**Project-ScaleSecureTutorial**. + +Navigate to the left **Main Menu** and click on **Profiles**. Click on **Import Cluster Profile**. The **Import Cluster +Profile** pane opens. + +Paste the following in the text editor. Click on **Validate**. The **Select repositories** dialog appears. + + + +Click on **Confirm**. Then, click on **Confirm** on the **Import Cluster Profile** pane. Palette creates a new cluster +profile named **aws-profile**. + +On the **Profiles** list, select **Project** from the **Contexts** drop-down. Your newly created cluster profile +displays. The Palette UI confirms that the cluster profile was created in the scope of the +**Project-ScaleSecureTutorial**. + +![Image that shows the cluster profile ](/getting-started/aws/getting-started_scale-secure-cluster_cluster-profile-created.webp) + +Select the cluster profile to view its details. The cluster profile summary appears. + +This cluster profile deploys the [Hello Universe](https://github.com/spectrocloud/hello-universe) application using a +pack. Click on the **hellouniverse 1.2.0** layer. The pack manifest editor appears. + +Click on **Presets** on the right-hand side. You can learn more about the pack presets on the pack README, which is +available in the Palette UI. Select the **Enable Hello Universe API** preset. The pack manifest changes accordingly. + +![Screenshot of pack presets](/getting-started/aws/getting-started_scale-secure-cluster_pack-presets.webp) + +The pack requires two values to be replaced for the authorization token and for the database password when using this +preset. Replace these values with your own base64 encoded values. The +[_hello-universe_](https://github.com/spectrocloud/hello-universe?tab=readme-ov-file#single-load-balancer) repository +provides a token that you can use. + +Click on **Confirm Updates**. The manifest editor closes. Then, click on **Save Changes** to save your updates. + +## Deploy a Cluster + +Navigate to the left **Main Menu** and select **Clusters**. Click on **Create Cluster**. + +Palette will prompt you to select the type of cluster. Select **AWS IaaS** and click on **Start AWS IaaS +Configuration**. + +Continue with the rest of the cluster deployment flow using the cluster profile you created in the +[Import a Cluster Profile](#import-a-cluster-profile) section, named **aws-profile**. 
Refer to the
+[Deploy a Cluster](./deploy-k8s-cluster.md#deploy-a-cluster) tutorial for additional guidance or if you need a refresher
+of the Palette deployment flow.
+
+### Verify the Application
+
+Navigate to the left **Main Menu** and select **Clusters**.
+
+Select your cluster to view its **Overview** tab.
+
+When the application is deployed and ready for network traffic, Palette exposes the service URL in the **Services**
+field. Click on the URL for port **:8080** to access the Hello Universe application.
+
+![Cluster details page with service URL highlighted](/getting-started/aws/getting-started_scale-secure-cluster_service_url.webp)
+
+## Upgrade Kubernetes Versions
+
+Regularly upgrading your Kubernetes version is an important part of maintaining a good security posture. New versions
+may contain important patches to security vulnerabilities and bugs that could affect the integrity and availability of
+your clusters.
+
+Palette supports four minor Kubernetes versions at any given time. We support the current release and the three
+previous minor version releases, also known as N-3. For example, if the current release is 1.29, we also support 1.28,
+1.27, and 1.26.
+
+:::warning
+
+Once you upgrade your cluster to a new Kubernetes version, you will not be able to downgrade.
+
+:::
+
+We recommend using cluster profile versions to safely upgrade any layer of your cluster profile and maintain the
+security of your clusters. Expand the following section to learn how to create a new cluster profile version with a
+Kubernetes upgrade.
+
+
+ +Upgrade Kubernetes using Cluster Profile Versions + +Navigate to the left **Main Menu** and click on **Profiles**. Select the cluster profile that you used to deploy your +cluster, named **aws-profile**. The cluster profile details page appears. + +Click on the version drop-down and select **Create new version**. The version creation dialog appears. + +Fill in **1.1.0** in the **Version** input field. Then, click on **Confirm**. The new cluster profile version is created +with the same layers as version **1.0.0**. + +Select the **kubernetes 1.29.x** layer of the profile. The pack manifest editor appears. + +Click on the **Pack Version** dropdown. All of the available versions of the **Palette eXtended Kubernetes** pack +appear. The cluster profile is configured to use the latest patch version of **Kubernetes 1.29**. + +![Cluster profile with all Kubernetes versions](/getting-started/aws/getting-started_scale-secure-cluster_kubernetes-versions.webp) + +The official guidelines for Kubernetes upgrades recommend upgrading one minor version at a time. For example, if you are +using Kubernetes version 1.26, you should upgrade to 1.27, before upgrading to version 1.28. You can learn more about +the official Kubernetes upgrade guidelines in the +[Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/) page. + +Select **1.30.x** from the version dropdown. This selection follows the Kubernetes upgrade guidelines as the cluster +profile is using **1.29.x**. + +The manifest editor highlights the changes made by this upgrade. Once you have verified that the upgrade changes +versions as expected, click on **Confirm changes**. + +Click on **Confirm Updates**. Then, click on **Save Changes** to persist your updates. + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Profile** tab. Your cluster is currently using the **1.0.0** version of your cluster profile. + +Change the cluster profile version by selecting **1.1.0** from the version drop-down. Click on **Review & Save**. The +**Changes Summary** dialog appears. + +Click on **Review changes in Editor**. The **Review Update Changes** dialog displays the same Kubernetes version +upgrades as the cluster profile editor previously did. Click on **Update**. + +
+ +Upgrading the Kubernetes version of your cluster modifies an infrastructure layer. Therefore, Kubernetes needs to +replace its nodes. This is known as a repave. Check out the +[Node Pools](../../clusters/cluster-management/node-pool.md#repave-behavior-and-configuration) page to learn more about +the repave behavior and configuration. + +Click on the **Nodes** tab. You can follow along with the node upgrades on this screen. Palette replaces the nodes +configured with the old Kubernetes version with newly upgraded ones. This may affect the performance of your +application, as Kubernetes swaps the workloads to the upgraded nodes. + +![Node repaves in progress](/getting-started/aws/getting-started_scale-secure-cluster_node-repaves.webp) + +### Verify the Application + +The cluster update completes when the Palette UI marks the cluster profile layers as green and the cluster is in a +**Healthy** state. The cluster **Overview** page also displays the Kubernetes version as **1.30**. Click on the URL for +port **:8080** to access the application and verify that your upgraded cluster is functional. + +![Kubernetes upgrade applied](/getting-started/aws/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp) + +## Scan Clusters + +Palette provides compliance, security, conformance, and Software Bill of Materials (SBOM) scans on tenant clusters. +These scans ensure cluster adherence to specific compliance and security standards, as well as detect potential +vulnerabilities. You can perform four types of scans on your cluster. + +| Scan | Description | +| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Kubernetes Configuration Security | This scan examines the compliance of deployed security features against the CIS Kubernetes Benchmarks, which are consensus-driven security guidelines for Kubernetes. By default, the test set will execute based on the cluster Kubernetes version. | +| Kubernetes Penetration Testing | This scan evaluates Kubernetes-related open-ports for any configuration issues that can leave the tenant clusters exposed to attackers. It hunts for security issues in your clusters and increases visibility of the security controls in your Kubernetes environments. | +| Kubernetes Conformance Testing | This scan validates your Kubernetes configuration to ensure that it conforms to CNCF specifications. Palette leverages an open-source tool called [Sonobuoy](https://sonobuoy.io) to perform this scan. | +| Software Bill of Materials (SBOM) | This scan details the various third-party components and dependencies used by your workloads and helps to manage security and compliance risks associated with those components. | + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Scan** tab. The list of all the available cluster scans appears. Palette indicates that you have never +scanned your cluster. + +![Scans never performed on the cluster](/getting-started/aws/getting-started_scale-secure-cluster_never-scanned-cluster.webp) + +Click **Run Scan** on the **Kubernetes configuration security** and **Kubernetes penetration testing** scans. Palette +schedules and executes these scans on your cluster, which may take a few minutes. 
Once they complete, you can download +the report in PDF, CSV or view the results directly in the Palette UI. + +![Scans completed on the cluster](/getting-started/aws/getting-started_scale-secure-cluster_scans-completed.webp) + +Click on **Configure Scan** on the **Software Bill of Materials (SBOM)** scan. The **Configure SBOM Scan** dialog +appears. + +Leave the default selections on this screen and click on **Confirm**. Optionally, you can configure an S3 bucket to save +your report into. Refer to the +[Configure an SBOM Scan](../../clusters/cluster-management/compliance-scan.md#configure-an-sbom-scan) guide to learn +more about the configuration options of this scan. + +Once the scan completes, click on the report to view it within the Palette UI. The third-party dependencies that your +workloads rely on are evaluated for potential security vulnerabilities. Reviewing the SBOM enables organizations to +track vulnerabilities, perform regular software maintenance, and ensure compliance with regulatory requirements. + +:::info + +The scan reports highlight any failed checks, based on Kubernetes community standards and CNCF requirements. We +recommend that you prioritize the rectification of any identified issues. + +::: + +As you have seen so far, Palette scans are crucial when maintaining your security posture. Palette provides the ability +to schedule your scans and periodically evaluate your clusters. In addition, it keeps a history of previous scans for +comparison purposes. Expand the following section to learn how to configure scan schedules for your cluster. + +
+
+Configure Cluster Scan Schedules
+
+Click on **Settings**. Then, select **Cluster Settings**. The **Settings** pane appears.
+
+Select the **Schedule Scans** option. You can configure schedules for your cluster scans. Palette provides common scan
+schedules, or you can provide a custom time. We recommend choosing a schedule when you expect the usage of your cluster
+to be lowest. Otherwise, the scans may impact the performance of your nodes.
+
+![Scan schedules](/getting-started/aws/getting-started_scale-secure-cluster_scans-schedules.webp)
+
+Palette will automatically scan your cluster according to your configured schedule.
+
+
+ +## Scale a Cluster + +A node pool is a group of nodes within a cluster that all have the same configuration. You can use node pools for +different workloads. For example, you can create a node pool for your production workloads and another for your +development workloads. You can update node pools for active clusters or create a new one for the cluster. + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Nodes** tab. Your cluster has a **control-plane-pool** and a **worker-pool**. Each pool contains one node. + +Select the **Overview** tab. Download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file. + +![kubeconfig download](/getting-started/aws/getting-started_scale-secure-cluster_download-kubeconfig.webp) + +Open a terminal window and set the environment variable `KUBECONFIG` to point to the file you downloaded. + +```shell +export KUBECONFIG=~/Downloads/admin.aws-cluster.kubeconfig +``` + +Execute the following command in your terminal to view the nodes of your cluster. + +```shell +kubectl get nodes +``` + +The output reveals two nodes, one for the worker pool and one for the control plane. Make a note of the name of your +worker node, which is the node that does not have the `control-plane` role. In the example below, +`ip-10-0-1-133.ec2.internal` is the name of the worker node. + +```shell +NAME STATUS ROLES AGE VERSION +ip-10-0-1-133.ec2.internal Ready 46m v1.30.4 +ip-10-0-1-95.ec2.internal Ready control-plane 51m v1.30.4 +``` + +The Hello Universe pack deploys three pods in the `hello-universe` namespace. Execute the following command to verify +where these pods have been scheduled. + +```shell +kubectl get pods --namespace hello-universe --output wide +``` + +The output verifies that all of the pods have been scheduled on the worker node you made a note of previously. + +```shell +NAME READY STATUS RESTARTS AGE NODE +api-7db799cf85-5w5l6 1/1 Running 1 (20m ago) 20m ip-10-0-1-133.ec2.internal +postgres-698d7ff8f4-vbktf 1/1 Running 0 20m ip-10-0-1-133.ec2.internal +ui-5f777c76df-pplcv 1/1 Running 0 20m ip-10-0-1-133.ec2.internal +``` + +Navigate back to the Palette UI in your browser. Select the **Nodes** tab. + +Click on **New Node Pool**. The **Add node pool** dialog appears. This workflow allows you to create a new worker pool +for your cluster. Fill in the following configuration. + +| Field | Value | Description | +| --------------------- | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Node pool name** | `worker-pool-2` | The name of your worker pool. | +| **Enable Autoscaler** | Enabled | Whether Palette should scale the pool horizontally based on its per-node workload counts. The **Minimum size** parameter specifies the lower bound of nodes in the pool and the **Maximum size** specifies the upper bound. By default, **Minimum size** is `1` and **Maximum size** is `3`. | +| **Instance Type** | `m4.2xlarge` | Set the compute size equal to the already provisioned nodes. | +| **Availability Zone** | _Availability zone of your choice_ | Set the availability zone the same as the already provisioned nodes. | + +Click on **Confirm**. The dialog closes. Palette begins provisioning your node pool. 
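If you want to follow the provisioning from your terminal as well, you can watch for the new worker node to register with the cluster. This assumes the `KUBECONFIG` environment variable still points to the kubeconfig file you downloaded earlier in this section.

```bash
# Watch the cluster nodes. The new worker node appears once the pool is provisioned.
kubectl get nodes --watch
```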
Once the process completes, your +three node pools appear in a healthy state. + +![New worker pool provisioned](/getting-started/aws/getting-started_scale-secure-cluster_third-node-pool.webp) + +Navigate back to your terminal and execute the following command in your terminal to view the nodes of your cluster. + +```shell +kubectl get nodes +``` + +The output reveals three nodes, two for worker pools and one for the control plane. Make a note of the names of your +worker nodes. In the example below, `ip-10-0-1-133.ec2.internal` and `ip-10-0-1-32.ec2.internal` are the worker nodes. + +```shell +NAME STATUS ROLES AGE VERSION +ip-10-0-1-32.ec2.internal Ready 16m v1.30.4 +ip-10-0-1-133.ec2.internal Ready 46m v1.30.4 +ip-10-0-1-95.ec2.internal Ready control-plane 51m v1.30.4 +``` + +It is common to dedicate node pools to a particular type of workload. One way to specify this is through the use of +Kubernetes taints and tolerations. + +Taints provide nodes with the ability to repel a set of pods, allowing you to mark nodes as unavailable for certain +pods. Tolerations are applied to pods and allow the pods to schedule onto nodes with matching taints. Once configured, +nodes do not accept any pods that do not tolerate the taints. + +The animation below provides a visual representation of how taints and tolerations can be used to specify which +workloads execute on which nodes. + +![Taints repel pods to a new node](/getting-started/getting-started_scale-secure-cluster_taints-in-action.gif) + +Switch back to Palette in your web browser. Navigate to the left **Main Menu** and select **Profiles**. Select the +cluster profile deployed to your cluster, named `aws-profile`. Ensure that the **1.1.0** version is selected. + +Click on the **hellouniverse 1.2.0** layer. The manifest editor appears. Set the +`manifests.hello-universe.ui.useTolerations` field on line 20 to `true`. Then, set the +`manifests.hello-universe.ui.effect` field on line 22 to `NoExecute`. This toleration describes that the UI pods of +Hello Universe will tolerate the taint with the key `app`, value `ui` and effect `NoExecute`. The tolerations of the UI +pods should be as below. + +```yaml +ui: + useTolerations: true + tolerations: + effect: NoExecute + key: app + value: ui +``` + +Click on **Confirm Updates**. The manifest editor closes. Then, click on **Save Changes** to persist your changes. + +Navigate to the left **Main Menu** and select **Clusters**. Select your deployed cluster, named **aws-cluster**. + +Due to the changes you have made to the cluster profile, this cluster has a pending update. Click on **Updates**. The +**Changes Summary** dialog appears. + +Click on **Review Changes in Editor**. The **Review Update Changes** dialog appears. The toleration changes appear as +incoming configuration. + +Click on **Apply Changes** to apply the update to your cluster. + +Select the **Nodes** tab. Click on **Edit** on the first worker pool, named **worker-pool**. The **Edit node pool** +dialog appears. + +Click on **Add New Taint** in the **Taints** section. Fill in `app` for the **Key**, `ui` for the **Value** and select +`NoExecute` for the **Effect**. These values match the toleration you specified in your cluster profile earlier. + +![Add taint to worker pool](/getting-started/getting-started_scale-secure-cluster_add-taint.webp) + +Click on **Confirm** to save your changes. The nodes in the `worker-pool` can now only execute the UI pods that have a +toleration matching the configured taint. + +Switch back to your terminal. 
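Before checking the pods, you can optionally confirm that the taint is now present on the node. The node name below
comes from the example output earlier in this tutorial; substitute the name of your own first worker node.

```shell
kubectl describe node ip-10-0-1-133.ec2.internal | grep Taints
```

The output should list a taint of the form `app=ui:NoExecute`.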
Execute the following command again to verify where the Hello Universe pods have been +scheduled. + +```shell +kubectl get pods --namespace hello-universe --output wide +``` + +The output verifies that the UI pods have remained scheduled on their original node named `ip-10-0-1-133.ec2.internal`, +while the other two pods have been moved to the node of the second worker pool named `ip-10-0-1-32.ec2.internal`. + +```shell +NAME READY STATUS RESTARTS AGE NODE +api-7db799cf85-5w5l6 1/1 Running 1 (20m ago) 20m ip-10-0-1-32.ec2.internal +postgres-698d7ff8f4-vbktf 1/1 Running 0 20m ip-10-0-1-32.ec2.internal +ui-5f777c76df-pplcv 1/1 Running 0 20m ip-10-0-1-133.ec2.internal +``` + +Taints and tolerations are a common way of creating nodes dedicated to certain workloads, once the cluster has scaled +accordingly through its provisioned node pools. Refer to the +[Taints and Tolerations](../../clusters/cluster-management/taints.md) guide to learn more. + +### Verify the Application + +Select the **Overview** tab. Click on the URL for port **:8080** to access the Hello Universe application and verify +that the application is functioning correctly. + +## Cleanup + +Use the following steps to remove all the resources you created for the tutorial. + +To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to +delete to access its details page. + +Click on **Settings** to expand the menu, and select **Delete Cluster**. + +![Delete cluster](/getting-started/aws/getting-started_scale-secure-cluster_delete-cluster-button.webp) + +You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name `aws-cluster` to +proceed with the delete step. The deletion process takes several minutes to complete. + +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force +delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette +automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you +created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the +selection to remove the cluster profile. + +Click on the **drop-down Menu** at the top of the page and switch to **Tenant Admin** scope. + +Navigate to the left **Main Menu** and click on **Projects**. + +Click on the **three-dot Menu** of the **Project-ScaleSecureTutorial** and select **Delete**. A pop-up box will ask you +to confirm the action. Confirm the deletion. + +Navigate to the left **Main Menu** and click on **Users & Teams**. Select the **Teams** tab. + +Click on **scale-secure-tutorial-team** list entry. The **Team Details** pane appears. Click on **Delete Team**. A +pop-up box will ask you to confirm the action. Confirm the deletion. + +## Wrap-up + +In this tutorial, you learned how to perform very important operations relating to the scalability and availability of +your clusters. First, you created a project and team. Next, you imported a cluster profile and deployed a host AWS +cluster. Then, you upgraded the Kubernetes version of your cluster and scanned your clusters using Palette's scanning +capabilities. Finally, you scaled your cluster's nodes and used taints to select which Hello Universe pods execute on +them. 
+ +We encourage you to check out the [Additional Capabilities](../additional-capabilities/additional-capabilities.md) to +explore other Palette functionalities. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/aws/setup.md b/docs/docs-content/getting-started/aws/setup.md new file mode 100644 index 0000000000..5e9c6dc12c --- /dev/null +++ b/docs/docs-content/getting-started/aws/setup.md @@ -0,0 +1,66 @@ +--- +sidebar_label: "Set up Palette" +title: "Set up Palette with AWS" +description: "Learn how to set up Palette with AWS." +icon: "" +hide_table_of_contents: false +sidebar_position: 10 +tags: ["getting-started", "aws"] +--- + +In this guide, you will learn how to set up Palette for use with your AWS cloud account. These steps are required in +order to authenticate Palette and allow it to deploy host clusters. The concepts you learn about in the Getting Started +section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +- A Palette account with [tenant admin](../../tenant-settings/tenant-settings.md) access. + +- Sign up to a public cloud account from + [AWS](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account). The AWS cloud account + must have the required [IAM policies](../../clusters/public-cloud/aws/required-iam-policies.md). + +- An SSH key pair available in the region where you want to deploy the cluster. Check out the + [Create EC2 SSH Key Pair](https://docs.aws.amazon.com/ground-station/latest/ug/create-ec2-ssh-key-pair.html) for + guidance. + +## Enablement + +Palette needs access to your AWS cloud account in order to create and manage AWS clusters and resources. + +### Static Credentials Access + + + +### Create a Palette API Key + +Follow the steps below to create a Palette API key. This is required for the +[Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) tutorial. + + + +## Validate + +You can verify your account is added. + +1. Log in to [Palette](https://console.spectrocloud.com). + +2. From the left **Main Menu**, select **Tenant Settings**. + +3. Next, on the **Tenant Settings Menu**, select **Cloud Accounts**. + +4. The added cloud account is listed under **AWS** with all other available AWS cloud accounts. + +## Next Steps + +Now that you set up Palette for use with AWS, you can start deploying Kubernetes clusters to your AWS account. To learn +how to get started with deploying Kubernetes clusters to AWS, we recommend that you continue to the +[Create a Cluster Profile](./create-cluster-profile.md) tutorial to create a full cluster profile for your host cluster. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/aws/update-k8s-cluster.md b/docs/docs-content/getting-started/aws/update-k8s-cluster.md new file mode 100644 index 0000000000..da614f354e --- /dev/null +++ b/docs/docs-content/getting-started/aws/update-k8s-cluster.md @@ -0,0 +1,297 @@ +--- +sidebar_label: "Deploy Cluster Profile Updates" +title: "Deploy Cluster Profile Updates" +description: "Learn how to update your deployed clusters using Palette Cluster Profiles." +icon: "" +hide_table_of_contents: false +sidebar_position: 40 +tags: ["getting-started", "aws"] +--- + +Palette provides cluster profiles, which allow you to specify layers for your workloads using packs, Helm charts, Zarf +packages, or cluster manifests. 
Packs serve as blueprints to the provisioning and deployment process, as they contain +the versions of the container images that Palette will install for you. Cluster profiles provide consistency across +environments during the cluster creation process, as well as when maintaining your clusters. Check out +[Cluster Profiles](../introduction.md#cluster-profiles) to learn more. Once provisioned, there are three main ways to +update your Palette deployments. + +| Method | Description | Cluster application process | +| ------------------------ | ---------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Cluster profile versions | Create a new version of the cluster profile with your updates. | Select the new version of the cluster profile. Apply this new profile version to the clusters you want to update. | +| Cluster profile updates | Change the cluster profile in place. | Palette detects the difference between the provisioned resources and this profile. A pending update is available to clusters using this profile. Apply pending updates to the clusters you want to update. | +| Cluster overrides | Change the configuration of a single deployed cluster outside its cluster profile. | Save and apply the changes you've made to your cluster. | + +This tutorial will teach you how to update a cluster deployed with Palette to Amazon Web Services (AWS). You will +explore each cluster update method and learn how to apply these changes using Palette. The concepts you learn about in +the Getting Started section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, follow the steps described in the [Set up Palette with AWS](./setup.md) guide to authenticate +Palette for use with your AWS cloud account. + +Additionally, you should install Kubectl locally. Use the Kubernetes +[Install Tools](https://kubernetes.io/docs/tasks/tools/) page for further guidance. + +Follow the instructions of the [Deploy a Cluster](./deploy-k8s-cluster.md) tutorial to deploy a cluster with the +[_hello-universe_](https://github.com/spectrocloud/hello-universe) application. Your cluster should be successfully +provisioned and in a healthy state. + +The cluster profile name is `aws-profile` and the cluster name is `aws-cluster`. + +![Cluster details page with service URL highlighted](/getting-started/aws/getting-started_deploy-k8s-cluster_service_url.webp) + +## Tag and Filter Clusters + +Palette provides the ability to add tags to your cluster profiles and clusters. This helps you organize and categorize +your clusters based on your custom criteria. You can add tags during the creation process or by editing the resource +after it has been created. + +Adding tags to your clusters helps you find and identify your clusters, without having to rely on cluster naming. This +is especially important when operating with many clusters or multiple cloud deployments. + +Navigate to the left **Main Menu** and select **Clusters** to view your deployed clusters. Find the `aws-cluster` you +deployed with the _hello-universe_ application. Click on it to view its **Overview** tab. + +Click on the **Settings** drop-down Menu in the upper right corner and select **Cluster Settings**. 
+ +Fill **service:hello-universe-frontend** in the **Tags (Optional)** input box. Click on **Save Changes**. Close the +panel. + +![Image that shows how to add a cluster tag](/getting-started/aws/getting-started_update-k8s-cluster_add-service-tag.webp) + +Navigate to the left **Main Menu** and select **Clusters** to view your deployed clusters. Click on **Add Filter**, then +select the **Add custom filter** option. + +Use the drop-down boxes to fill in the values of the filter. Select **Tags** in the left-hand **drop-down Menu**. Select +**is** in the middle **drop-down Menu**. Fill in **service:hello-universe-frontend** in the right-hand input box. + +Click on **Apply Filter**. + +![Image that shows how to add a frontend service filter](/getting-started/aws/getting-started_update-k8s-cluster_apply-frontend-filter.webp) + +Once you apply the filter, only the `aws-cluster` with this tag is displayed. + +## Version Cluster Profiles + +Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with +better change visibility and control over the layers in your host clusters. Profile versions are commonly used for +adding or removing layers and pack configuration updates. + +The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. If you +do not specify a version for your cluster profile, it defaults to **1.0.0**. + +Navigate to the left **Main Menu** and select **Profiles** to view the cluster profile page. Find the cluster profile +corresponding to your _hello-universe-frontend_ cluster. It should be named `aws-profile`. Select it to view its +details. + +![Image that shows the frontend cluster profile with cluster linked to it](/getting-started/aws/getting-started_update-k8s-cluster_profile-with-cluster.webp) + +The current version is displayed in the **drop-down Menu** next to the profile name. This profile has the default value +of **1.0.0**, as you did not specify another value when you created it. The cluster profile also shows the host clusters +that are currently deployed with this cluster profile version. + +Click on the version **drop-down Menu**. Select the **Create new version** option. + +A dialog box appears. Fill in the **Version** input with **1.1.0**. Click on **Confirm**. + +Palette creates a new cluster profile version and opens it. The version dropdown displays the newly created **1.1.0** +profile. This profile version is not deployed to any host clusters. + +![Image that shows cluster profile version 1.1.0](/getting-started/aws/getting-started_update-k8s-cluster_new-version-overview.webp) + +The version **1.1.0** has the same layers as the version **1.0.0** it was created from. + +Click on **Add New Pack**. Select the **Public Repo** registry and scroll down to the **Monitoring** section. Find the +**Kubecost** pack and select it. Alternatively, you can use the search function with the pack name **Kubecost**. + +![Image that shows how to select the Kubecost pack](/getting-started/aws/getting-started_update-k8s-cluster_select-kubecost-pack.webp) + +Once selected, the pack manifest is displayed in the manifest editor. + +Click on **Confirm & Create**. The manifest editor closes. + +Click on **Save Changes** to finish the configuration of this cluster profile version. + +Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the +**service:hello-universe-frontend** tag. Select it to view its **Overview** tab. 
+ +Select the **Profile** tab of this cluster. You can select a new version of your cluster profile by using the version +dropdown. + +Select the **1.1.0** version. + +![Image that shows how to select a new profile version for the cluster](/getting-started/aws/getting-started_update-k8s-cluster_profile-version-selection.webp) + +Click on **Save** to confirm your profile version selection. + +:::warning + +Palette has backup and restore capabilities available for your mission critical workloads. Ensure that you have adequate +backups before you make any cluster profile version changes in your production environments. You can learn more in the +[Backup and Restore](../../clusters/cluster-management/backup-restore/backup-restore.md) section. + +::: + +Palette now makes the required changes to your cluster according to the specifications of the configured cluster profile +version. Once your changes have completed, Palette marks your layers with the green status indicator. The Kubecost pack +will be successfully deployed. + +![Image that shows completed cluster profile updates](/getting-started/aws/getting-started_update-k8s-cluster_completed-cluster-updates.webp) + +Download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette UI. +This file enables you and other users to issue kubectl commands against the host cluster. + +![Image that the kubeconfig file](/getting-started/aws/getting-started_update-k8s-cluster_download-kubeconfig.webp) + +Open a terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded. + +```shell +export KUBECONFIG=~/Downloads/admin.aws-cluster.kubeconfig +``` + +Forward the Kubecost UI to your local network. The Kubecost dashboard is not exposed externally by default, so the +command below will allow you to access it locally on port **9090**. If port 9090 is already taken, you can choose a +different one. + +```shell +kubectl port-forward --namespace kubecost deployment/cost-analyzer-cost-analyzer 9090 +``` + +Open your browser window and navigate to `http://localhost:9090`. The Kubecost UI provides you with a variety of cost +visualization tools. Read more about +[Navigating the Kubecost UI](https://docs.kubecost.com/using-kubecost/navigating-the-kubecost-ui) to make the most of +the cost analyzer. + +![Image that shows the Kubecost UI](/getting-started/getting-started_update-k8s-cluster_kubecost-ui.webp) + +Once you are done exploring locally, you can stop the `kubectl port-forward` command by closing the terminal window it +is executing from. + +## Roll Back Cluster Profiles + +One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of +previously known working states. The ability to roll back to a previously working cluster profile in one action shortens +the time to recovery in the event of an incident. + +The process to roll back to a previous version is identical to the process for applying a new version. + +Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the +**service:hello-universe-frontend** tag. Select it to view its **Overview** tab. + +Select the **Profile** tab. This cluster is currently deployed using cluster profile version **1.1.0**. Select the +option **1.0.0** in the version dropdown. This process is the reverse of what you have done in the previous section, +[Version Cluster Profiles](#version-cluster-profiles). + +Click on **Save** to confirm your changes. 
+

Palette now makes the changes required for the cluster to return to the state specified in version **1.0.0** of your
cluster profile. Once your changes are complete, Palette marks your layers with the green status indicator.

![Cluster details page with service URL highlighted](/getting-started/aws/getting-started_deploy-k8s-cluster_service_url.webp)

## Pending Updates

Cluster profiles can also be updated in place, without the need to create a new cluster profile version. Palette
monitors the state of your clusters and notifies you when updates are available for your host clusters. You may then
choose to apply your changes at a convenient time.

The previous state of the cluster profile will not be saved once it is overwritten.

Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the tag
**service:hello-universe-frontend**. Select it to view its **Overview** tab.

Select the **Profile** tab. Then, select the **hello-universe** pack. Change the `replicas` field to `2` on line `15`.
Click on **Save**. The editor closes.

This cluster now contains an override over its cluster profile. Palette uses the configuration you have just provided
for the single cluster over its cluster profile and begins making the appropriate changes.

Once these changes are complete, select the **Workloads** tab. Then, select the **hello-universe** namespace.

Two **ui** pods are available, instead of the one specified by your cluster profile. Your override has been successfully
applied.

Navigate to the left **Main Menu** and select **Profiles** to view the cluster profile page. Find the cluster profile
corresponding to your _hello-universe-frontend_ cluster. It is named `aws-profile`.

Click on it to view its details. Select **1.0.0** in the version dropdown.

Select the **hello-universe** pack. The editor appears. Change the `replicas` field to `3` on line `15`. Click on
**Confirm Updates**. The editor closes.

Click on **Save Changes** to confirm the changes you have made to your profile.

Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the
**service:hello-universe-frontend** tag. Palette indicates that the cluster associated with the cluster profile you
updated has updates available.

![Image that shows the pending updates](/getting-started/aws/getting-started_update-k8s-cluster_pending-update-clusters-view.webp)

Select this cluster to open its **Overview** tab. Click on **Updates** to begin the cluster update.

![Image that shows the Updates button](/getting-started/aws/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp)

A dialog appears that shows the changes made in this update. Review the changes and ensure the only change is the
`replicas` field value. The pending update removes your cluster override and sets the `replicas` field to `3`. At this
point, you can choose to apply the pending changes or keep your cluster override by modifying the right-hand side of
the dialog.

![Image that shows the available updates dialog](/getting-started/aws/getting-started_update-k8s-cluster_available-updates-dialog.webp)

Click on **Apply Changes** once you have finished reviewing your changes.

Palette updates your cluster according to the cluster profile specifications. Once these changes are complete, select
the **Workloads** tab. Then, select the **hello-universe** namespace.

Three **ui** pods are available. The cluster profile update is now reflected by your cluster.
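If the terminal you used earlier still has the `KUBECONFIG` environment variable set for this cluster, you can confirm
the same result from the command line.

```shell
kubectl get pods --namespace hello-universe
```

The output should now include three `ui` pods.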
+ +## Cluster Observability + + + +## Cleanup + +Use the following steps to remove all the resources you created for the tutorial. + +To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to +delete to access its details page. + +Click on **Settings** to expand the menu, and select **Delete Cluster**. + +![Delete cluster](/getting-started/aws/getting-started_deploy-k8s-cluster_delete-cluster-button.webp) + +You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name `aws-cluster` to +proceed with the delete step. The deletion process takes several minutes to complete. + +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force +delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette +automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you +created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the +selection to remove the cluster profile. + +## Wrap-Up + +In this tutorial, you created deployed cluster profile updates. After the cluster was deployed to AWS, you updated the +cluster profile through three different methods: create a new cluster profile version, update a cluster profile in +place, and cluster profile overrides. After you made your changes, the Hello Universe application functioned as a +three-tier application with a REST API backend server. + +Cluster profiles provide consistency during the cluster creation process, as well as when maintaining your clusters. +They can be versioned to keep a record of previously working cluster states, giving you visibility when updating or +rolling back workloads across your environments. + +We recommend that you continue to the [Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) page to +learn about how you can use Palette with Terraform. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/azure/_category_.json b/docs/docs-content/getting-started/azure/_category_.json new file mode 100644 index 0000000000..ae9ddb024d --- /dev/null +++ b/docs/docs-content/getting-started/azure/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 50 +} diff --git a/docs/docs-content/getting-started/azure/azure.md b/docs/docs-content/getting-started/azure/azure.md new file mode 100644 index 0000000000..1040e56734 --- /dev/null +++ b/docs/docs-content/getting-started/azure/azure.md @@ -0,0 +1,64 @@ +--- +sidebar_label: "Deploy a Cluster to Azure" +title: "Deploy a Cluster to Microsoft Azure" +description: "Spectro Cloud Getting Started with Azure" +hide_table_of_contents: false +sidebar_custom_props: + icon: "" +tags: ["getting-started", "azure"] +--- + +Palette supports integration with [Microsoft Azure](https://azure.microsoft.com/en-us). You can deploy and manage +[Host Clusters](../../glossary-all.md#host-cluster) in Azure or Azure Government. The concepts you learn about in the +Getting Started section are centered around a fictional case study company. This approach gives you a solution focused +approach, while introducing you with Palette workflows and capabilities. + +## 🧑‍🚀 Welcome to Spacetastic! 
+ + + +## Get Started + +In this section, you learn how to create a cluster profile. Then, you deploy a cluster to Azure by using Palette. Once +your cluster is deployed, you can update it using cluster profile updates. + + diff --git a/docs/docs-content/getting-started/azure/create-cluster-profile.md b/docs/docs-content/getting-started/azure/create-cluster-profile.md new file mode 100644 index 0000000000..6504405a73 --- /dev/null +++ b/docs/docs-content/getting-started/azure/create-cluster-profile.md @@ -0,0 +1,120 @@ +--- +sidebar_label: "Create a Cluster Profile" +title: "Create a Cluster Profile" +description: "Learn to create a full cluster profile in Palette." +icon: "" +hide_table_of_contents: false +sidebar_position: 20 +tags: ["getting-started", "azure"] +--- + +Palette offers profile-based management for Kubernetes, enabling consistency, repeatability, and operational efficiency +across multiple clusters. A cluster profile allows you to customize the cluster infrastructure stack, allowing you to +choose the desired Operating System (OS), Kubernetes, Container Network Interfaces (CNI), Container Storage Interfaces +(CSI). You can further customize the stack with add-on application layers. For more information about cluster profile +types, refer to [Cluster Profiles](../introduction.md#cluster-profiles). + +In this tutorial, you create a full profile directly from the Palette dashboard. Then, you add a layer to your cluster +profile by using a [community pack](../../integrations/community_packs.md) to deploy a web application. The concepts you +learn about in the Getting Started section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +- Follow the steps described in the [Set up Palette with Azure](./setup.md) guide to authenticate Palette for use with + your Azure cloud account. +- Ensure that the [Palette Community Registry](../../registries-and-packs/registries/registries.md#default-registries) + is available in your Palette environment. Refer to the + [Add OCI Packs Registry](../../registries-and-packs/registries/oci-registry/add-oci-packs.md) guide for additional + guidance. + +## Create a Full Cluster Profile + +Log in to Palette and navigate to the left **Main Menu**. Select **Profiles** to view the cluster profile page. You can +view the list of available cluster profiles. To create a cluster profile, click on **Add Cluster Profile**. + +Follow the wizard to create a new profile. + +In the **Basic Information** section, assign the name **azure-profile**, a brief profile description, select the type as +**Full**, and assign the tag **env:azure**. You can leave the version empty if you want to. Just be aware that the +version defaults to **1.0.0**. Click on **Next**. + +**Cloud Type** allows you to choose the infrastructure provider with which this cluster profile is associated. Select +**Azure** and click on **Next**. + +The **Profile Layers** step is where you specify the packs that compose the profile. There are four required +infrastructure packs and several optional add-on packs you can choose from. Every pack requires you to select the **Pack +Type**, **Registry**, and **Pack Name**. 
+ +For this tutorial, use the following packs: + +| Pack Name | Version | Layer | +| ---------------- | ------- | ---------------- | +| ubuntu-azure LTS | 22.4.x | Operating System | +| Kubernetes | 1.30.x | Kubernetes | +| cni-calico-azure | 3.26.x | Network | +| Azure Disk | 1.28.x | Storage | + +As you fill out the information for each layer, click on **Next** to proceed to the next layer. + +Click on **Confirm** after you have completed filling out all the core layers. + +![Azure cluster profile overview page](/getting-started/azure/getting-started_create-cluster-profile_cluster_profile_stack.webp) + +The review section gives an overview of the cluster profile configuration you selected. Click on **Finish +Configuration** to finish creating the cluster profile. + +## Add a Pack + +Navigate to the left **Main Menu** and select **Profiles**. Select the cluster profile you created earlier. + +Click on **Add New Pack** at the top of the page. + +Select the **Palette Community Registry** from the **Registry** dropdown. Then, click on the latest **Hello Universe** +pack with version **v1.2.0**. + +![Screenshot of hello universe pack](/getting-started/azure/getting-started_create-cluster-profile_add-pack.webp) + +Once you have selected the pack, Palette will display its README, which provides you with additional guidance for usage +and configuration options. The pack you added will deploy the +[_hello-universe_](https://github.com/spectrocloud/hello-universe) application. + +![Screenshot of pack readme](/getting-started/azure/getting-started_create-cluster-profile_pack-readme.webp) + +Click on **Values** to edit the pack manifest. Click on **Presets** on the right-hand side. + +This pack has two configured presets: + +1. **Disable Hello Universe API** configures the [_hello-universe_](https://github.com/spectrocloud/hello-universe) + application as a standalone frontend application. This is the default preset selection. +2. **Enable Hello Universe API** configures the [_hello-universe_](https://github.com/spectrocloud/hello-universe) + application as a three-tier application with a frontend, API server, and Postgres database. + +Select the **Enable Hello Universe API** preset. The pack manifest changes according to this preset. + +![Screenshot of pack presets](/getting-started/azure/getting-started_create-cluster-profile_pack-presets.webp) + +The pack requires two values to be replaced for the authorization token and for the database password when using this +preset. Replace these values with your own base64 encoded values. The +[_hello-universe_](https://github.com/spectrocloud/hello-universe?tab=readme-ov-file#single-load-balancer) repository +provides a token that you can use. + +Click on **Confirm Updates**. The manifest editor closes. + +Click on **Confirm & Create** to save the manifest. Then, click on **Save Changes** to save this new layer to the +cluster profile. + +## Wrap-Up + +In this tutorial, you created a cluster profile, which is a template that contains the core layers required to deploy a +host cluster using Microsoft Azure. You added a community pack to your profile to deploy a custom workload. + +We recommend that you continue to the [Deploy a Cluster](./deploy-k8s-cluster.md) tutorial to deploy this cluster +profile to a host cluster onto Azure. 
+ +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/azure/deploy-k8s-cluster.md b/docs/docs-content/getting-started/azure/deploy-k8s-cluster.md new file mode 100644 index 0000000000..9fd4965e30 --- /dev/null +++ b/docs/docs-content/getting-started/azure/deploy-k8s-cluster.md @@ -0,0 +1,184 @@ +--- +sidebar_label: "Deploy a Cluster" +title: "Deploy a Cluster" +description: "Learn to deploy a Palette host cluster." +icon: "" +hide_table_of_contents: false +sidebar_position: 30 +tags: ["getting-started", "azure"] +--- + +This tutorial will teach you how to deploy a host cluster with Palette using Microsoft Azure. You will learn about +_Cluster Mode_ and _Cluster Profiles_ and how these components enable you to deploy customized applications to +Kubernetes with minimal effort. + +As you navigate the tutorial, refer to this diagram to help you understand how Palette uses a cluster profile as a +blueprint for the host cluster you deploy. Palette clusters have the same node pools you may be familiar with: _control +plane nodes_ and _worker nodes_ where you will deploy applications. The result is a host cluster that Palette manages. +The concepts you learn about in the Getting Started section are centered around a fictional case study company, +Spacetastic Ltd. + +![A view of Palette managing the Kubernetes lifecycle](/getting-started/getting-started_deploy-k8s-cluster_application.webp) + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, you will need the following. + +- Follow the steps described in the [Set up Palette with Azure](./setup.md) guide to authenticate Palette for use with + your Azure cloud account. + +- A Palette cluster profile. Follow the [Create a Cluster Profile](./create-cluster-profile.md) tutorial to create the + required Azure cluster profile. + +## Deploy a Cluster + +The following steps will guide you through deploying the cluster infrastructure. + +Navigate to the left **Main Menu** and select **Clusters**. Click on **Create Cluster**. + +![Palette clusters overview page](/getting-started/getting-started_deploy-k8s-cluster_new_cluster.webp) + +Palette will prompt you to select the type of cluster. Select **Azure IaaS** and click the **Start Azure IaaS +Configuration** button. Use the following steps to create a host cluster in Azure. + +In the **Basic information** section, insert the general information about the cluster, such as the Cluster name, +Description, Tags, and Cloud account. Click on **Next**. + +![Palette clusters basic information](/getting-started/azure/getting-started_deploy-k8s-cluster_clusters_basic_info.webp) + +Click on **Add Cluster Profile**. A list is displayed of available profiles you can choose to deploy to Azure. Select +the cluster profile you created in the [Create a Cluster Profile](./create-cluster-profile.md) tutorial, named +**azure-profile**, and click on **Confirm**. + +The **Cluster Profile** section displays all the layers in the cluster profile. + +![palette clusters basic information](/getting-started/azure/getting-started_deploy-k8s-cluster_parameters.webp) + +Each layer has a pack manifest file with the deploy configurations. The pack manifest file is in a YAML format. Each +pack contains a set of default values. You can change the manifest values if needed. Click on **Next** to proceed. + +The **Cluster Config** section allows you to select the **Subscription**, **Region**, **Resource Group**, **Storage +account**, and **SSH Key** to apply to the host cluster. 
All clusters require you to assign an SSH key. Refer to the +[SSH Keys](../../clusters/cluster-management/ssh-keys.md) guide for information about uploading an SSH key. + +When you are done selecting a **Subscription**, **Region**, **Resource Group**, **Storage account** and **SSH Key**, +click on **Next**. + +The **Nodes Config** section allows you to configure the nodes that compose the control plane nodes and worker nodes of +the Kubernetes cluster. + +Refer to the [Node Pool](../../clusters/cluster-management/node-pool.md) guide for a list and description of parameters. + +Before you proceed to next section, review the following parameters. + +- **Number of nodes in the pool** - This option sets the number of control plane or worker nodes in the control plane or + worker pool. For this tutorial, set the count to one for both the control plane and worker pools. + +- **Allow worker capability** - This option allows the control plane node to also accept workloads. This is useful when + spot instances are used as worker nodes. You can check this box if you want to. + +- **Instance Type** - Select the compute type for the node pool. Each instance type displays the amount of CPU, RAM, and + hourly cost of the instance. Select **Standard_A8_v2**. + +- **Managed disk** - Used to select the storage class. Select **Standard LRS** and set the disk size to **60**. + +- **Availability zones** - Used to specify the availability zones in which the node pool can place nodes. Select an + availability zone. + +![Palette clusters nodes configuration](/getting-started/azure/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp) + +In the **Cluster Settings** section, you can configure advanced options such as when to patch the OS, enable security +scans, manage backups, add Role-Based Access Control (RBAC) bindings, and more. + +For this tutorial, you can use the default settings. Click on **Validate** to continue. + +The Review section allows you to review the cluster configuration before deploying the cluster. Review all the settings +and click on **Finish Configuration** to deploy the cluster. + +![Configuration overview of newly created Azure cluster](/getting-started/azure/getting-started_deploy-k8s-cluster_profile_review.webp) + +Navigate to the left **Main Menu** and select **Clusters**. + +![Update the cluster](/getting-started/azure/getting-started_deploy-k8s-cluster_create_cluster.webp) + +The cluster deployment process can take 15 to 30 min. The deployment time varies depending on the cloud provider, +cluster profile, cluster size, and the node pool configurations provided. You can learn more about the deployment +progress by reviewing the event log. Click on the **Events** tab to view the log. + +![Update the cluster](/getting-started/azure/getting-started_deploy-k8s-cluster_event_log.webp) + +## Verify the Application + +Navigate to the left **Main Menu** and select **Clusters**. + +Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic, +indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the +Hello Universe application. + +![Cluster details page with service URL highlighted](/getting-started/azure/getting-started_deploy-k8s-cluster_service_url.webp) + +
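As an alternative to the Palette UI, you can look up the exposed endpoint with kubectl. This assumes you have
downloaded the cluster's kubeconfig file from the **Overview** tab and pointed the `KUBECONFIG` environment variable at
it.

```shell
kubectl get services --namespace hello-universe
```

The load balancer service in the output exposes the same endpoint as the URL shown by Palette.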
+ +:::warning + +It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few +moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request. + +::: + +
+ +![Image that shows the cluster overview of the Hello Universe Frontend Cluster](/getting-started/azure/getting-started_deploy-k8s-cluster_hello-universe-with-api.webp) + +Welcome to Spacetastic's astronomy education platform. Feel free to explore the pages and learn more about space. The +statistics page offers information on visitor counts on your deployed service. + +You have deployed your first application to a cluster managed by Palette. Your first application is a three-tier +application with a frontend, API server, and Postgres database. + +## Cleanup + +Use the following steps to remove all the resources you created for the tutorial. + +To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to +delete to access its details page. + +Click on **Settings** to expand the menu, and select **Delete Cluster**. + +![Delete cluster](/getting-started/azure/getting-started_deploy-k8s-cluster_delete-cluster-button.webp) + +You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name to proceed with +the delete step. The deletion process takes several minutes to complete. + +
+ +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force +delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette +automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +
+ +Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you +created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the +selection to remove the cluster profile. + +## Wrap-Up + +In this tutorial, you used the cluster profile you created in the previous +[Create a Cluster Profile](./create-cluster-profile.md) tutorial to deploy a host cluster onto your preferred cloud +service provider. After the cluster deployed, you verified the Hello Universe application was successfully deployed. + +We recommend that you continue to the [Deploy Cluster Profile Updates](./update-k8s-cluster.md) tutorial to learn how to +update your host cluster. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/azure/deploy-manage-k8s-cluster-tf.md b/docs/docs-content/getting-started/azure/deploy-manage-k8s-cluster-tf.md new file mode 100644 index 0000000000..f55fded01e --- /dev/null +++ b/docs/docs-content/getting-started/azure/deploy-manage-k8s-cluster-tf.md @@ -0,0 +1,756 @@ +--- +sidebar_label: "Cluster Management with Terraform" +title: "Cluster Management with Terraform" +description: "Learn how to deploy and update a Palette host cluster to Azure with Terraform." +icon: "" +hide_table_of_contents: false +sidebar_position: 50 +toc_max_heading_level: 2 +tags: ["getting-started", "azure", "terraform"] +--- + +The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider +allows you to create and manage Palette resources using Infrastructure as Code (IaC). With IaC, you can automate the +provisioning of resources, collaborate on changes, and maintain a single source of truth for your infrastructure. + +This tutorial will teach you how to use Terraform to deploy and update an Azure host cluster. You will learn how to +create two versions of a cluster profile with different demo applications, update the deployed cluster with the new +cluster profile version, and then perform a rollback. The concepts you learn about in the Getting Started section are +centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, you will need the following items in place: + +- Follow the steps described in the [Set up Palette with Azure](./setup.md) guide to authenticate Palette for use with + your Azure cloud account and create a Palette API key. +- [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [Podman](https://podman.io/docs/installation) + installed if you choose to follow along using the tutorial container. +- If you choose to clone the repository instead of using the tutorial container, make sure you have the following + software installed: + - [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) v1.9.0 or greater + - [Git](https://git-scm.com/downloads) + - [Kubectl](https://kubernetes.io/docs/tasks/tools/) + +## Set Up Local Environment + +You can clone the [Tutorials](https://github.com/spectrocloud/tutorials) repository locally or follow along by +downloading a container image that includes the tutorial code and all dependencies. + + + + + +Start Docker Desktop and ensure that the Docker daemon is available by issuing the following command. + +```bash +docker ps +``` + +Next, download the tutorial image, start the container, and open a bash session into it. 
+ +```shell +docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.9 bash +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + +:::warning + +Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress. + +::: + + + + + +If you are not using a Linux operating system, create and start the Podman Machine in your local environment. Otherwise, +skip this step. + +```bash +podman machine init +podman machine start +``` + +Use the following command and ensure you receive an output displaying the installation information. + +```bash +podman info +``` + +Next, download the tutorial image, start the container, and open a bash session into it. + +```shell +podman run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.9 bash +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + +:::warning + +Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress. + +::: + + + + + +Open a terminal window and download the tutorial code from GitHub. + +```shell +git clone https://github.com/spectrocloud/tutorials.git +``` + +Change the directory to the tutorial folder. + +```shell +cd tutorials/ +``` + +Check out the following git tag. + +```shell +git checkout v1.1.9 +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + + + + + +## Resources Review + +To help you get started with Terraform, the tutorial code is structured to support deploying a cluster to either AWS, +Azure, GCP, or VMware vSphere. Before you deploy a host cluster to Azure, review the following files in the folder +structure. + +| **File** | **Description** | +| ----------------------- | ---------------------------------------------------------------------------------------------------------------------- | +| **provider.tf** | This file contains the Terraform providers that are used to support the deployment of the cluster. | +| **inputs.tf** | This file contains all the Terraform variables required for the deployment logic. | +| **data.tf** | This file contains all the query resources that perform read actions. | +| **cluster_profiles.tf** | This file contains the cluster profile definitions for each cloud provider. | +| **clusters.tf** | This file has the cluster configurations required to deploy a host cluster to one of the cloud providers. | +| **terraform.tfvars** | Use this file to target a specific cloud provider and customize the deployment. This is the only file you must modify. | +| **ippool.tf** | This file contains the configuration required for VMware deployments that use static IP placement. | +| **ssh-key.tf** | This file has the SSH key resource definition required for Azure and VMware deployments. | +| **outputs.tf** | This file contains the content that will be displayed in the terminal after a successful Terraform `apply` action. | + +The following section reviews the core Terraform resources more closely. + +#### Provider + +The **provider.tf** file contains the Terraform providers used in the tutorial and their respective versions. 
This +tutorial uses four providers: + +- [Spectro Cloud](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) +- [TLS](https://registry.terraform.io/providers/hashicorp/tls/latest) +- [vSphere](https://registry.terraform.io/providers/hashicorp/vsphere/latest) +- [Local](https://registry.terraform.io/providers/hashicorp/local/latest) + +Note how the project name is specified in the `provider "spectrocloud" {}` block. You can change the target project by +modifying the value of the `palette-project` variable in the **terraform.tfvars** file. + +```hcl +terraform { + required_providers { + spectrocloud = { + version = ">= 0.20.6" + source = "spectrocloud/spectrocloud" + } + + tls = { + source = "hashicorp/tls" + version = "4.0.4" + } + + vsphere = { + source = "hashicorp/vsphere" + version = ">= 2.6.1" + } + + local = { + source = "hashicorp/local" + version = "2.4.1" + } + } + + required_version = ">= 1.9" +} + +provider "spectrocloud" { + project_name = var.palette-project +} +``` + +#### Cluster Profile + +The next file you should become familiar with is the **cluster_profiles.tf** file. The `spectrocloud_cluster_profile` +resource allows you to create a cluster profile and customize its layers. You can specify the packs and versions to use +or add a manifest or Helm chart. + +The cluster profile resource is declared eight times in the **cluster-profiles.tf** file, with each pair of resources +being designated for a specific provider. In this tutorial, two versions of the Azure cluster profile are deployed: +version `1.0.0` deploys the [Hello Universe](https://github.com/spectrocloud/hello-universe) pack, while version `1.1.0` +deploys the [Kubecost](https://www.kubecost.com/) pack along with the +[Hello Universe](https://github.com/spectrocloud/hello-universe) application. + +The cluster profiles include layers for the Operating System (OS), Kubernetes, container network interface, and +container storage interface. The first `pack {}` block in the list equates to the bottom layer of the cluster profile. +Ensure you define the bottom layer of the cluster profile - the OS layer - first in the list of `pack {}` blocks, as the +order in which you arrange the contents of the `pack {}` blocks plays an important role in the cluster profile creation. +The table below displays the packs deployed in each version of the cluster profile. + +| **Pack Type** | **Pack Name** | **Version** | **Cluster Profile v1.0.0** | **Cluster Profile v1.1.0** | +| ------------- | ------------------ | ----------- | -------------------------- | -------------------------- | +| OS | `ubuntu-azure` | `22.04` | :white_check_mark: | :white_check_mark: | +| Kubernetes | `kubernetes` | `1.30.4` | :white_check_mark: | :white_check_mark: | +| Network | `cni-calico-azure` | `3.26.1` | :white_check_mark: | :white_check_mark: | +| Storage | `csi-azure` | `1.28.3` | :white_check_mark: | :white_check_mark: | +| App Services | `hellouniverse` | `1.2.0` | :white_check_mark: | :white_check_mark: | +| App Services | `cost-analyzer` | `1.103.3` | :x: | :white_check_mark: | + +The Hello Universe pack has two configured [presets](../../glossary-all.md#presets). The first preset deploys a +standalone frontend application, while the second one deploys a three-tier application with a frontend, API server, and +Postgres database. This tutorial deploys the three-tier version of the +[Hello Universe](https://github.com/spectrocloud/hello-universe) pack. 
The preset selection in the Terraform code is +specified within the Hello Universe pack block with the `values` field and by using the **values-3tier.yaml** file. +Below is an example of version `1.0.0` of the Azure cluster profile Terraform resource. + +```hcl +resource "spectrocloud_cluster_profile" "azure-profile" { + count = var.deploy-azure ? 1 : 0 + + name = "tf-azure-profile" + description = "A basic cluster profile for Azure" + tags = concat(var.tags, ["env:azure"]) + cloud = "azure" + type = "cluster" + version = "1.0.0" + + pack { + name = data.spectrocloud_pack.azure_ubuntu.name + tag = data.spectrocloud_pack.azure_ubuntu.version + uid = data.spectrocloud_pack.azure_ubuntu.id + values = data.spectrocloud_pack.azure_ubuntu.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.azure_k8s.name + tag = data.spectrocloud_pack.azure_k8s.version + uid = data.spectrocloud_pack.azure_k8s.id + values = data.spectrocloud_pack.azure_k8s.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.azure_cni.name + tag = data.spectrocloud_pack.azure_cni.version + uid = data.spectrocloud_pack.azure_cni.id + values = data.spectrocloud_pack.azure_cni.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.azure_csi.name + tag = data.spectrocloud_pack.azure_csi.version + uid = data.spectrocloud_pack.azure_csi.id + values = data.spectrocloud_pack.azure_csi.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.hellouniverse.name + tag = data.spectrocloud_pack.hellouniverse.version + uid = data.spectrocloud_pack.hellouniverse.id + values = templatefile("manifests/values-3tier.yaml", { + namespace = var.app_namespace, + port = var.app_port, + replicas = var.replicas_number + db_password = base64encode(var.db_password), + auth_token = base64encode(var.auth_token) + }) + type = "oci" + } +} +``` + +#### Data Resources + +Each `pack {}` block contains references to a data resource. +[Data resources](https://developer.hashicorp.com/terraform/language/data-sources) are used to perform read actions in +Terraform. The Spectro Cloud Terraform provider exposes several data resources to help you make your Terraform code more +dynamic. The data resource used in the cluster profile is `spectrocloud_pack`. This resource enables you to query +Palette for information about a specific pack, such as its unique ID, registry ID, available versions, and YAML values. + +Below is the data resource used to query Palette for information about the Kubernetes pack for version `1.30.4`. + +```hcl +data "spectrocloud_pack" "azure_k8s" { + name = "kubernetes" + version = "1.30.4" + registry_uid = data.spectrocloud_registry.public_registry.id +} +``` + +Using the data resource helps you avoid manually entering the parameter values required by the cluster profile's +`pack {}` block. + +#### Cluster + +The **clusters.tf** file contains the definitions required for deploying a host cluster to one of the infrastructure +providers. To create an Azure host cluster, you must set the `deploy-azure` variable in the **terraform.tfvars** file to +true. + +When deploying a cluster using Terraform, you must provide the same parameters as those available in the Palette UI for +the cluster deployment step, such as the instance size and number of nodes. You can learn more about each parameter by +reviewing the +[Azure cluster resource](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_azure) +documentation. 
+ +```hcl +resource "spectrocloud_cluster_azure" "azure-cluster" { + count = var.deploy-azure ? 1 : 0 + + name = "azure-cluster" + tags = concat(var.tags, ["env:azure"]) + cloud_account_id = data.spectrocloud_cloudaccount_azure.account[0].id + + cloud_config { + subscription_id = var.azure_subscription_id + resource_group = var.azure_resource_group + region = var.azure-region + ssh_key = tls_private_key.tutorial_ssh_key_azure[0].public_key_openssh + } + + cluster_profile { + id = var.deploy-azure && var.deploy-azure-kubecost ? resource.spectrocloud_cluster_profile.azure-profile-kubecost[0].id : resource.spectrocloud_cluster_profile.azure-profile[0].id + } + + machine_pool { + control_plane = true + control_plane_as_worker = true + name = "control-plane-pool" + count = var.azure_control_plane_nodes.count + instance_type = var.azure_control_plane_nodes.instance_type + azs = var.azure-use-azs ? var.azure_control_plane_nodes.azs : [""] + is_system_node_pool = var.azure_control_plane_nodes.is_system_node_pool + disk { + size_gb = var.azure_control_plane_nodes.disk_size_gb + type = "Standard_LRS" + } + } + + machine_pool { + name = "worker-basic" + count = var.azure_worker_nodes.count + instance_type = var.azure_worker_nodes.instance_type + azs = var.azure-use-azs ? var.azure_worker_nodes.azs : [""] + is_system_node_pool = var.azure_worker_nodes.is_system_node_pool + } + + timeouts { + create = "30m" + delete = "15m" + } +} +``` + +## Terraform Tests + +Before starting the cluster deployment, test the Terraform code to ensure the resources will be provisioned correctly. +Issue the following command in your terminal. + +```bash +terraform test +``` + +A successful test execution will output the following. + +```text hideClipboard +Success! 16 passed, 0 failed. +``` + +## Input Variables + +To deploy a cluster using Terraform, you must first modify the **terraform.tfvars** file. Open it in the editor of your +choice. The tutorial container includes the editor [Nano](https://www.nano-editor.org). + +The file is structured with different sections. Each provider has a section with variables that need to be filled in, +identified by the placeholder `REPLACE_ME`. Additionally, there is a toggle variable named `deploy-` +available for each provider, which you can use to select the deployment environment. + +In the **Palette Settings** section, modify the name of the `palette-project` variable if you wish to deploy to a +Palette project different from the default one. + +```hcl {4} +##################### +# Palette Settings +##################### +palette-project = "Default" # The name of your project in Palette. +``` + +Next, in the **Hello Universe Configuration** section, provide values for the database password and authentication token +for the Hello Universe pack. For example, you can use the value `password` for the database password and the default +token provided in the +[Hello Universe](https://github.com/spectrocloud/hello-universe/tree/main?tab=readme-ov-file#reverse-proxy-with-kubernetes) +repository for the authentication token. + +```hcl {7-8} +############################## +# Hello Universe Configuration +############################## +app_namespace = "hello-universe" # The namespace in which the application will be deployed. +app_port = 8080 # The cluster port number on which the service will listen for incoming traffic. +replicas_number = 1 # The number of pods to be created. +db_password = "REPLACE ME" # The database password to connect to the API database. 
+auth_token = "REPLACE ME" # The auth token for the API connection. +``` + +Locate the Azure provider section and change `deploy-azure = false` to `deploy-azure = true`. Additionally, replace all +occurrences of `REPLACE_ME` with their corresponding values, such as those for the `azure-cloud-account-name`, +`azure-region`, `azure_subscription_id`, and `azure_resource_group` variables. You can also update the values for the +nodes in the control plane or worker node pools as needed. + +```hcl {4,8-11} +########################### +# Azure Deployment Settings +############################ +deploy-azure = false # Set to true to deploy to Azure. +deploy-azure-kubecost = false # Set to true to deploy to Azure and include Kubecost to your cluster profile. +azure-use-azs = true # Set to false when you deploy to a region without AZs. + +azure-cloud-account-name = "REPLACE ME" +azure-region = "REPLACE ME" +azure_subscription_id = "REPLACE ME" +azure_resource_group = "REPLACE ME" + + +azure_control_plane_nodes = { + count = "1" + control_plane = true + instance_type = "Standard_A8_v2" + disk_size_gb = "60" + azs = ["1"] # If you want to deploy to multiple AZs, add them here. + is_system_node_pool = false +} + +azure_worker_nodes = { + count = "1" + control_plane = false + instance_type = "Standard_A8_v2" + disk_size_gb = "60" + azs = ["1"] # If you want to deploy to multiple AZs, add them here. + is_system_node_pool = false +} +``` + +When you are done making the required changes, save the file. + +## Deploy the Cluster + +Before starting the cluster provisioning, export your [Palette API key](./setup.md#create-a-palette-api-key) as an +environment variable. This step allows the Terraform code to authenticate with the Palette API. + +```bash +export SPECTROCLOUD_APIKEY= +``` + +Next, issue the following command to initialize Terraform. The `init` command initializes the working directory that +contains the Terraform files. + +```shell +terraform init +``` + +```text hideClipboard +Terraform has been successfully initialized! +``` + +:::warning + +Before deploying the resources, ensure that there are no active clusters named `azure-cluster` or cluster profiles named +`tf-azure-profile` in your Palette project. + +::: + +Issue the `plan` command to preview the resources that Terraform will create. + +```shell +terraform plan +``` + +The output indicates that four new resources will be created: two versions of the Azure cluster profile, the host +cluster, and an SSH key pair. The host cluster will use version `1.0.0` of the cluster profile. + +```shell +Plan: 4 to add, 0 to change, 0 to destroy. +``` + +To deploy the resources, use the `apply` command. + +```shell +terraform apply -auto-approve +``` + +To check that the cluster profile was created correctly, log in to [Palette](https://console.spectrocloud.com), and +click **Profiles** from the left **Main Menu**. Locate the cluster profile named `tf-azure-profile`. Click on the +cluster profile to review its layers and versions. + +![A view of the cluster profile](/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp) + +You can also check the cluster creation process by selecting **Clusters** from the left **Main Menu**. + +![Update the cluster](/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp) + +Select your cluster to review its details page, which contains the status, cluster profile, event logs, and more. 
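If you prefer the terminal, you can also confirm what Terraform provisioned by listing the objects tracked in its
state. The command below is read-only and does not modify any resources.

```shell
terraform state list
```

The output should list the two Azure cluster profile versions, the host cluster, and the SSH key resource created by
this tutorial, along with the data sources that the configuration reads from Palette.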
+ +The cluster deployment may take 15 to 30 minutes depending on the cloud provider, cluster profile, cluster size, and the +node pool configurations provided. You can learn more about the deployment progress by reviewing the event log. Click on +the **Events** tab to check the log. + +![Update the cluster](/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp) + +### Verify the Application + +In Palette, navigate to the left **Main Menu** and select **Clusters**. + +Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic, +indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the +Hello Universe application. + +:::warning + +It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few +moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request. + +::: + +![Deployed application](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp) + +Welcome to Spacetastic's astronomy education platform. Feel free to explore the pages and learn more about space. The +statistics page offers information on visitor counts on your deployed service. + +## Version Cluster Profiles + +Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with +better change visibility and control over the layers in your host clusters. Profile versions are commonly used for +adding or removing layers and pack configuration updates. + +The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. In this +tutorial, you used Terraform to deploy two versions of an Azure cluster profile. The snippet below displays a segment of +the Terraform cluster profile resource version `1.0.0` that was deployed. + +```hcl {4,9} +resource "spectrocloud_cluster_profile" "azure-profile" { + count = var.deploy-azure ? 1 : 0 + + name = "tf-azure-profile" + description = "A basic cluster profile for Azure" + tags = concat(var.tags, ["env:azure"]) + cloud = "azure" + type = "cluster" + version = "1.0.0" +``` + +Open the **terraform.tfvars** file, set the `deploy-azure-kubecost` variable to true, and save the file. Once applied, +the host cluster will use version `1.1.0` of the cluster profile with the Kubecost pack. + +The snippet below displays the segment of the Terraform resource that creates the cluster profile version `1.1.0`. Note +how the name `tf-azure-profile` is the same as in the first cluster profile resource, but the version is different. + +```hcl {4,9} +resource "spectrocloud_cluster_profile" "azure-profile-kubecost" { + count = var.deploy-azure ? 1 : 0 + + name = "tf-azure-profile" + description = "A basic cluster profile for Azure with Kubecost" + tags = concat(var.tags, ["env:azure"]) + cloud = "azure" + type = "cluster" + version = "1.1.0" +``` + +In the terminal window, issue the following command to plan the changes. + +```bash +terraform plan +``` + +The output states that one resource will be modified. The deployed cluster will now use version `1.1.0` of the cluster +profile. + +```text hideClipboard +Plan: 0 to add, 1 to change, 0 to destroy. +``` + +Issue the `apply` command to deploy the changes. 
+ +```bash +terraform apply -auto-approve +``` + +Palette will now reconcile the current state of your workloads with the desired state specified by the new cluster +profile version. + +To visualize the reconciliation behavior, log in to [Palette](https://console.spectrocloud.com), and click **Clusters** +from the left **Main Menu**. + +Select the cluster named `azure-cluster`. Click on the **Events** tab. Note how a cluster reconciliation action was +triggered due to cluster profile changes. + +![Image that shows the cluster profile reconciliation behavior](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_reconciliation.webp) + +Next, click on the **Profile** tab. Observe that the cluster is now using version `1.1.0` of the `tf-azure-profile` +cluster profile. + +![Image that shows the new cluster profile version with Kubecost](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp) + +Once the changes have been completed, Palette marks the cluster layers with a green status indicator. Click the +**Overview** tab to verify that the Kubecost pack was successfully deployed. + +![Image that shows the cluster with Kubecost](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp) + +Next, download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette +UI. This file enables you and other users to issue `kubectl` commands against the host cluster. + +![Image that shows the cluster's kubeconfig file location](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp) + +Open a new terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded. + +```bash +export KUBECONFIG=~/Downloads/admin.azure-cluster.kubeconfig +``` + +Forward the Kubecost UI to your local network. The Kubecost dashboard is not exposed externally by default, so the +command below will allow you to access it locally on port **9090**. If port 9090 is already taken, you can choose a +different one. + +```bash +kubectl port-forward --namespace kubecost deployment/cost-analyzer-cost-analyzer 9090 +``` + +Open your browser window and navigate to `http://localhost:9090`. The Kubecost UI provides you with a variety of cost +information about your cluster. Read more about +[Navigating the Kubecost UI](https://docs.kubecost.com/using-kubecost/navigating-the-kubecost-ui) to make the most of +the cost analyzer pack. + +![Image that shows the Kubecost UI](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubecost.webp) + +Once you are done exploring the Kubecost dashboard, stop the `kubectl port-forward` command by closing the terminal +window it is executing from. + +## Roll Back Cluster Profiles + +One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of +previously known working states. The ability to roll back to a previously working cluster profile in one action shortens +the time to recovery in the event of an incident. + +The process of rolling back to a previous version using Terraform is similar to the process of applying a new version. + +Open the **terraform.tfvars** file, set the `deploy-azure-kubecost` variable to false, and save the file. Once applied, +this action will make the active cluster use version **1.0.0** of the cluster profile again. + +In the terminal window, issue the following command to plan the changes. 
+ +```bash +terraform plan +``` + +The output states that the deployed cluster will now use version `1.0.0` of the cluster profile. + +```text hideClipboard +Plan: 0 to add, 1 to change, 0 to destroy. +``` + +Issue the `apply` command to deploy the changes. + +```bash +terraform apply -auto-approve +``` + +Palette now makes the changes required for the cluster to return to the state specified in version `1.0.0` of your +cluster profile. Once your changes have completed, Palette marks your layers with the green status indicator. + +![Image that shows the cluster using version 1.0.0 of the cluster profile](/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp) + +## Cleanup + +Use the following steps to clean up the resources you created for the tutorial. Use the `destroy` command to remove all +the resources you created through Terraform. + +```shell +terraform destroy --auto-approve +``` + +A successful execution of `terraform destroy` will output the following. + +```shell +Destroy complete! Resources: 4 destroyed. +``` + +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for force delete. To trigger a force +delete action, navigate to the cluster’s details page and click on **Settings**. Click on **Force Delete Cluster** to +delete the cluster. Palette automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +If you are using the tutorial container, type `exit` in your terminal session and press the **Enter** key. Next, issue +the following command to stop and remove the container. + + + + + +```shell +docker stop tutorialContainer && \ +docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.9 +``` + + + + + +```shell +podman stop tutorialContainer && \ +podman rmi --force ghcr.io/spectrocloud/tutorials:1.1.9 +``` + + + + + +## Wrap-Up + +In this tutorial, you learned how to create different versions of a cluster profile using Terraform. You deployed a host +Azure cluster and then updated it to use a different version of a cluster profile. Finally, you learned how to perform +cluster profile roll backs. + +We encourage you to check out the [Scale, Upgrade, and Secure Clusters](./scale-secure-cluster.md) tutorial to learn how +to perform common Day-2 operations on your deployed clusters. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/azure/scale-secure-cluster.md b/docs/docs-content/getting-started/azure/scale-secure-cluster.md new file mode 100644 index 0000000000..3f1ca29d83 --- /dev/null +++ b/docs/docs-content/getting-started/azure/scale-secure-cluster.md @@ -0,0 +1,527 @@ +--- +sidebar_label: "Scale, Upgrade, and Secure Clusters" +title: "Scale, Upgrade, and Secure Clusters" +description: "Learn how to scale, upgrade, and secure Palette host clusters deployed to Azure." +icon: "" +hide_table_of_contents: false +sidebar_position: 60 +tags: ["getting-started", "azure", "tutorial"] +--- + +Palette has in-built features to help with the automation of Day-2 operations. Upgrading and maintaining a deployed +cluster is typically complex because you need to consider any possible impact on service availability. Palette provides +out-of-the-box functionality for upgrades, observability, granular Role Based Access Control (RBAC), backup and security +scans. + +This tutorial will teach you how to use the Palette UI to perform scale and maintenance tasks on your clusters. 
You will +learn how to create Palette projects and teams, import a cluster profile, safely upgrade the Kubernetes version of a +deployed cluster and scale up your cluster nodes. The concepts you learn about in the Getting Started section are +centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, follow the steps described in the [Set up Palette with Azure](./setup.md) guide to +authenticate Palette for use with your Azure cloud account. + +Additionally, you should install kubectl locally. Use the Kubernetes +[Install Tools](https://kubernetes.io/docs/tasks/tools/) page for further guidance. + +## Create Palette Projects + +Palette projects help you organize and manage cluster resources, providing logical groupings. They also allow you to +manage user access control through Role Based Access Control (RBAC). You can assign users and teams with specific roles +to specific projects. All resources created within a project are scoped to that project and only available to that +project, but a tenant can have multiple projects. + +Log in to [Palette](https://console.spectrocloud.com). + +Click on the **drop-down Menu** at the top of the page and switch to the **Tenant Admin** scope. Palette provides the +**Default** project out-of-the-box. + +![Image that shows how to select tenant admin scope](/getting-started/getting-started_scale-secure-cluster_switch-tenant-admin-scope.webp) + +Navigate to the left **Main Menu** and click on **Projects**. Click on the **Create Project** button. The **Create a new +project** dialog appears. + +Fill out the input fields with values from the table below to create a project. + +| Field | Description | Value | +| ----------- | ----------------------------------- | --------------------------------------------------------- | +| Name | The name of the project. | `Project-ScaleSecureTutorial` | +| Description | A brief description of the project. | Project for Scale, Upgrade, and Secure Clusters tutorial. | +| Tags | Add tags to the project. | `env:dev` | + +Click **Confirm** to create the project. Once Palette finishes creating the project, a new card appears on the +**Projects** page. + +Navigate to the left **Main Menu** and click on **Users & Teams**. + +Select the **Teams** tab. Then, click on **Create Team**. + +Fill in the **Team Name** with **scale-secure-tutorial-team**. Click on **Confirm**. + +Once Palette creates the team, select it from the **Teams** list. The **Team Details** pane opens. + +On the **Project Roles** tab, click on **New Project Role**. The list of project roles appears. + +Select the **Project-ScaleSecureTutorial** from the **Projects** drop-down. Then, select the **Cluster Profile Viewer** +and **Cluster Viewer** roles. Click on **Confirm**. + +![Image that shows how to select team roles](/getting-started/getting-started_scale-secure-cluster_select-team-roles.webp) + +Any users that you add to this team inherit the project roles assigned to it. Roles are the foundation of Palette's RBAC +enforcement. They allow a single user to have different types of access control based on the resource being accessed. In +this scenario, any user added to this team will have access to view any cluster profiles and clusters in the +**Project-ScaleSecureTutorial** project, but not modify them. Check out the +[Palette RBAC](../../user-management/palette-rbac/palette-rbac.md) section for more details. 
+ +Navigate to the left **Main Menu** and click on **Projects**. + +Click on **Open project** on the **Project-ScaleSecureTutorial** card. + +![Image that shows how to open the tutorial project](/getting-started/getting-started_scale-secure-cluster_open-tutorial-project.webp) + +Your scope changes from **Tenant Admin** to **Project-ScaleSecureTutorial**. All further resources you create will be +part of this project. + +## Import a Cluster Profile + +Palette provides three resource contexts. They help you customize your environment to your organizational needs, as well +as control the scope of your settings. + +| Context | Description | +| ------- | ---------------------------------------------------------------------------------------- | +| System | Resources are available at the system level and to all tenants in the system. | +| Tenant | Resources are available at the tenant level and to all projects belonging to the tenant. | +| Project | Resources are available within a project and not available to other projects. | + +All of the resources you have created as part of your Getting Started journey have used the **Project** context. They +are only visible in the **Default** project. Therefore, you will need to create a new cluster profile in +**Project-ScaleSecureTutorial**. + +Navigate to the left **Main Menu** and click on **Profiles**. Click on **Import Cluster Profile**. The **Import Cluster +Profile** pane opens. + +Paste the following in the text editor. Click on **Validate**. The **Select repositories** dialog appears. + + + +Click on **Confirm**. Then, click on **Confirm** on the **Import Cluster Profile** pane. Palette creates a new cluster +profile named **azure-profile**. + +On the **Profiles** list, select **Project** from the **Contexts** drop-down. Your newly created cluster profile +displays. The Palette UI confirms that the cluster profile was created in the scope of the +**Project-ScaleSecureTutorial**. + +![Image that shows the cluster profile ](/getting-started/azure/getting-started_scale-secure-cluster_cluster-profile-created.webp) + +Select the cluster profile to view its details. The cluster profile summary appears. + +This cluster profile deploys the [Hello Universe](https://github.com/spectrocloud/hello-universe) application using a +pack. Click on the **hellouniverse 1.2.0** layer. The pack manifest editor appears. + +Click on **Presets** on the right-hand side. You can learn more about the pack presets on the pack README, which is +available in the Palette UI. Select the **Enable Hello Universe API** preset. The pack manifest changes accordingly. + +![Screenshot of pack presets](/getting-started/azure/getting-started_scale-secure-cluster_pack-presets.webp) + +The pack requires two values to be replaced for the authorization token and for the database password when using this +preset. Replace these values with your own base64 encoded values. The +[_hello-universe_](https://github.com/spectrocloud/hello-universe?tab=readme-ov-file#single-load-balancer) repository +provides a token that you can use. + +Click on **Confirm Updates**. The manifest editor closes. Then, click on **Save Changes** to save your updates. + +## Deploy a Cluster + +Navigate to the left **Main Menu** and select **Clusters**. Click on **Create Cluster**. + +Palette will prompt you to select the type of cluster. Select **Azure IaaS** and click on **Start Azure IaaS +Configuration**. 

Continue with the rest of the cluster deployment flow using the cluster profile you created in the
[Import a Cluster Profile](#import-a-cluster-profile) section, named **azure-profile**. Refer to the
[Deploy a Cluster](./deploy-k8s-cluster.md#deploy-a-cluster) tutorial for additional guidance or if you need a refresher
of the Palette deployment flow.

### Verify the Application

Navigate to the left **Main Menu** and select **Clusters**.

Select your cluster to view its **Overview** tab.

When the application is deployed and ready for network traffic, Palette exposes the service URL in the **Services**
field. Click on the URL for port **:8080** to access the Hello Universe application.

![Cluster details page with service URL highlighted](/getting-started/azure/getting-started_scale-secure-cluster_service_url.webp)

## Upgrade Kubernetes Versions

Regularly upgrading your Kubernetes version is an important part of maintaining a good security posture. New versions
may contain important patches for security vulnerabilities and bugs that could affect the integrity and availability of
your clusters.

Palette supports four minor Kubernetes versions at any given time: the current release and the three previous minor
version releases, also known as N-3. For example, if the current release is 1.29, Palette also supports 1.28, 1.27, and
1.26.

:::warning

Once you upgrade your cluster to a new Kubernetes version, you will not be able to downgrade.

:::

We recommend using cluster profile versions to safely upgrade any layer of your cluster profile and maintain the
security of your clusters. Expand the following section to learn how to create a new cluster profile version with a
Kubernetes upgrade.
+ +Upgrade Kubernetes using Cluster Profile Versions + +Navigate to the left **Main Menu** and click on **Profiles**. Select the cluster profile that you used to deploy your +cluster, named **azure-profile**. The cluster profile details page appears. + +Click on the version drop-down and select **Create new version**. The version creation dialog appears. + +Fill in **1.1.0** in the **Version** input field. Then, click on **Confirm**. The new cluster profile version is created +with the same layers as version **1.0.0**. + +Select the **kubernetes 1.27.x** layer of the profile. The pack manifest editor appears. + +Click on the **Pack Version** dropdown. All of the available versions of the **Palette eXtended Kubernetes** pack +appear. The cluster profile is configured to use the latest patch version of **Kubernetes 1.27**. + +![Cluster profile with all Kubernetes versions](/getting-started/azure/getting-started_scale-secure-cluster_kubernetes-versions.webp) + +The official guidelines for Kubernetes upgrades recommend upgrading one minor version at a time. For example, if you are +using Kubernetes version 1.26, you should upgrade to 1.27, before upgrading to version 1.28. You can learn more about +the official Kubernetes upgrade guidelines in the +[Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/) page. + +Select **1.28.x** from the version dropdown. This selection follows the Kubernetes upgrade guidelines as the cluster +profile is using **1.27.x**. + +The manifest editor highlights the changes made by this upgrade. Once you have verified that the upgrade changes +versions as expected, click on **Confirm changes**. + +Click on **Confirm Updates**. Then, click on **Save Changes** to persist your updates. + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Profile** tab. Your cluster is currently using the **1.0.0** version of your cluster profile. + +Change the cluster profile version by selecting **1.1.0** from the version drop-down. Click on **Review & Save**. The +**Changes Summary** dialog appears. + +Click on **Review changes in Editor**. The **Review Update Changes** dialog displays the same Kubernetes version +upgrades as the cluster profile editor previously did. Click on **Update**. + +
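Optionally, if you have already downloaded the cluster's kubeconfig file and exported it as your `KUBECONFIG`
environment variable, as covered later in this tutorial, you can follow the upgrade from your terminal while Palette
rolls out the new profile version.

```shell
kubectl get nodes --watch
```

The `VERSION` column shows the Kubernetes version of each node, so you can watch replacement nodes join with the
upgraded version while the nodes running the old version are drained and removed.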
+ +Upgrading the Kubernetes version of your cluster modifies an infrastructure layer. Therefore, Kubernetes needs to +replace its nodes. This is known as a repave. Check out the +[Node Pools](../../clusters/cluster-management/node-pool.md#repave-behavior-and-configuration) page to learn more about +the repave behavior and configuration. + +Click on the **Nodes** tab. You can follow along with the node upgrades on this screen. Palette replaces the nodes +configured with the old Kubernetes version with newly upgraded ones. This may affect the performance of your +application, as Kubernetes swaps the workloads to the upgraded nodes. + +![Node repaves in progress](/getting-started/azure/getting-started_scale-secure-cluster_node-repaves.webp) + +### Verify the Application + +The cluster update completes when the Palette UI marks the cluster profile layers as green and the cluster is in a +**Healthy** state. The cluster **Overview** page also displays the Kubernetes version as **1.28**. Click on the URL for +port **:8080** to access the application and verify that your upgraded cluster is functional. + +![Kubernetes upgrade applied](/getting-started/azure/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp) + +## Scan Clusters + +Palette provides compliance, security, conformance, and Software Bill of Materials (SBOM) scans on tenant clusters. +These scans ensure cluster adherence to specific compliance and security standards, as well as detect potential +vulnerabilities. You can perform four types of scans on your cluster. + +| Scan | Description | +| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Kubernetes Configuration Security | This scan examines the compliance of deployed security features against the CIS Kubernetes Benchmarks, which are consensus-driven security guidelines for Kubernetes. By default, the test set will execute based on the cluster Kubernetes version. | +| Kubernetes Penetration Testing | This scan evaluates Kubernetes-related open-ports for any configuration issues that can leave the tenant clusters exposed to attackers. It hunts for security issues in your clusters and increases visibility of the security controls in your Kubernetes environments. | +| Kubernetes Conformance Testing | This scan validates your Kubernetes configuration to ensure that it conforms to CNCF specifications. Palette leverages an open-source tool called [Sonobuoy](https://sonobuoy.io) to perform this scan. | +| Software Bill of Materials (SBOM) | This scan details the various third-party components and dependencies used by your workloads and helps to manage security and compliance risks associated with those components. | + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Scan** tab. The list of all the available cluster scans appears. Palette indicates that you have never +scanned your cluster. + +![Scans never performed on the cluster](/getting-started/azure/getting-started_scale-secure-cluster_never-scanned-cluster.webp) + +Click **Run Scan** on the **Kubernetes configuration security** and **Kubernetes penetration testing** scans. Palette +schedules and executes these scans on your cluster, which may take a few minutes. 
Once they complete, you can download the report as a PDF or CSV file, or view the results directly in the Palette UI.

![Scans completed on the cluster](/getting-started/azure/getting-started_scale-secure-cluster_scans-completed.webp)

Click on **Configure Scan** on the **Software Bill of Materials (SBOM)** scan. The **Configure SBOM Scan** dialog
appears.

Leave the default selections on this screen and click on **Confirm**. Optionally, you can configure an S3 bucket to save
your report into. Refer to the
[Configure an SBOM Scan](../../clusters/cluster-management/compliance-scan.md#configure-an-sbom-scan) guide to learn
more about the configuration options of this scan.

Once the scan completes, click on the report to view it within the Palette UI. The third-party dependencies that your
workloads rely on are evaluated for potential security vulnerabilities. Reviewing the SBOM enables organizations to
track vulnerabilities, perform regular software maintenance, and ensure compliance with regulatory requirements.

:::info

The scan reports highlight any failed checks, based on Kubernetes community standards and CNCF requirements. We
recommend that you prioritize the rectification of any identified issues.

:::

As you have seen so far, Palette scans are crucial for maintaining your security posture. Palette provides the ability
to schedule your scans and periodically evaluate your clusters. In addition, it keeps a history of previous scans for
comparison purposes. Expand the following section to learn how to configure scan schedules for your cluster.

Configure Cluster Scan Schedules

Click on **Settings**. Then, select **Cluster Settings**. The **Settings** pane appears.

Select the **Schedule Scans** option. You can configure schedules for your cluster scans. Palette provides common scan
schedules, or you can provide a custom time. We recommend choosing a schedule when you expect the usage of your cluster
to be lowest. Otherwise, the scans may impact the performance of your nodes.

![Scan schedules](/getting-started/azure/getting-started_scale-secure-cluster_scans-schedules.webp)

Palette will automatically scan your cluster according to your configured schedule.
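If none of the predefined schedules fit your maintenance window, you can provide a custom one. The exact input field
depends on your Palette version, but custom schedules are commonly expressed as a standard five-field cron expression,
so treat the value below as an illustrative sketch rather than the definitive format. For example, the following
expression describes a weekly run every Sunday at 3:00 AM.

```text
0 3 * * 0
```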
+ +## Scale a Cluster + +A node pool is a group of nodes within a cluster that all have the same configuration. You can use node pools for +different workloads. For example, you can create a node pool for your production workloads and another for your +development workloads. You can update node pools for active clusters or create a new one for the cluster. + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Nodes** tab. Your cluster has a **control-plane-pool** and a **worker-pool**. Each pool contains one node. + +Select the **Overview** tab. Download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file. + +![kubeconfig download](/getting-started/azure/getting-started_scale-secure-cluster_download-kubeconfig.webp) + +Open a terminal window and set the environment variable `KUBECONFIG` to point to the file you downloaded. + +```shell +export KUBECONFIG=~/Downloads/admin.azure-cluster.kubeconfig +``` + +Execute the following command in your terminal to view the nodes of your cluster. + +```shell +kubectl get nodes +``` + +The output reveals two nodes, one for the worker pool and one for the control plane. Make a note of the name of your +worker node, which is the node that does not have the `control-plane` role. In the example below, +`azure-cluster-worker-pool-6058-7tk4b` is the name of the worker node. + +```shell +NAME STATUS ROLES AGE VERSION +azure-cluster-cp-75841-bmt5v Ready control-plane 56m v1.28.13 +azure-cluster-worker-pool-6058-7tk4b Ready 42m v1.28.13 +``` + +The Hello Universe pack deploys three pods in the `hello-universe` namespace. Execute the following command to verify +where these pods have been scheduled. + +```shell +kubectl get pods --namespace hello-universe --output wide +``` + +The output verifies that all of the pods have been scheduled on the worker node you made a note of previously. + +```shell +NAME READY STATUS AGE NODE +api-7db799cf85-5w5l6 1/1 Running 20m azure-cluster-worker-pool-6058-7tk4b +postgres-698d7ff8f4-vbktf 1/1 Running 20m azure-cluster-worker-pool-6058-7tk4b +ui-5f777c76df-pplcv 1/1 Running 20m azure-cluster-worker-pool-6058-7tk4b +``` + +Navigate back to the Palette UI in your browser. Select the **Nodes** tab. + +Click on **New Node Pool**. The **Add node pool** dialog appears. This workflow allows you to create a new worker pool +for your cluster. Fill in the following configuration. + +| Field | Value | Description | +| --------------------- | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Node pool name** | `worker-pool-2` | The name of your worker pool. | +| **Enable Autoscaler** | Enabled | Whether Palette should scale the pool horizontally based on its per-node workload counts. The **Minimum size** parameter specifies the lower bound of nodes in the pool and the **Maximum size** specifies the upper bound. By default, **Minimum size** is `1` and **Maximum size** is `3`. | +| **Instance Type** | `Standard_A8_v2` | Set the compute size equal to the already provisioned nodes. | +| **Availability Zone** | _Availability zone of your choice_ | Set the availability zone the same as the already provisioned nodes. | + +Click on **Confirm**. The dialog closes. 
Palette begins provisioning your node pool. Once the process completes, your +three node pools appear in a healthy state. + +![New worker pool provisioned](/getting-started/azure/getting-started_scale-secure-cluster_third-node-pool.webp) + +Navigate back to your terminal and execute the following command in your terminal to view the nodes of your cluster. + +```shell +kubectl get nodes +``` + +The output reveals three nodes, two for worker pools and one for the control plane. Make a note of the names of your +worker nodes. In the example below, `azure-cluster-worker-pool-e54e-64fwj` and `azure-cluster-worker-pool-2-6895-pbfnm` +are the worker nodes. + +```shell +NAME STATUS ROLES AGE VERSION +azure-cluster-cp-77030-5szc5 Ready control-plane 114m v1.28.13 +azure-cluster-worker-pool-2-6895-pbfnm Ready 99m v1.28.13 +azure-cluster-worker-pool-e54e-64fwj Ready 102m v1.28.13 +``` + +It is common to dedicate node pools to a particular type of workload. One way to specify this is through the use of +Kubernetes taints and tolerations. + +Taints provide nodes with the ability to repel a set of pods, allowing you to mark nodes as unavailable for certain +pods. Tolerations are applied to pods and allow the pods to schedule onto nodes with matching taints. Once configured, +nodes do not accept any pods that do not tolerate the taints. + +The animation below provides a visual representation of how taints and tolerations can be used to specify which +workloads execute on which nodes. + +![Taints repel pods to a new node](/getting-started/getting-started_scale-secure-cluster_taints-in-action.gif) + +Switch back to Palette in your web browser. Navigate to the left **Main Menu** and select **Profiles**. Select the +cluster profile deployed to your cluster, named `azure-profile`. Ensure that the **1.1.0** version is selected. + +Click on the **hellouniverse 1.2.0** layer. The manifest editor appears. Set the +`manifests.hello-universe.ui.useTolerations` field on line 20 to `true`. Then, set the +`manifests.hello-universe.ui.effect` field on line 22 to `NoExecute`. This toleration describes that the UI pods of +Hello Universe will tolerate the taint with the key `app`, value `ui` and effect `NoExecute`. The tolerations of the UI +pods should be as below. + +```yaml +ui: + useTolerations: true + tolerations: + effect: NoExecute + key: app + value: ui +``` + +Click on **Confirm Updates**. The manifest editor closes. Then, click on **Save Changes** to persist your changes. + +Navigate to the left **Main Menu** and select **Clusters**. Select your deployed cluster, named **azure-cluster**. + +Due to the changes you have made to the cluster profile, this cluster has a pending update. Click on **Updates**. The +**Changes Summary** dialog appears. + +Click on **Review Changes in Editor**. The **Review Update Changes** dialog appears. The toleration changes appear as +incoming configuration. + +Click on **Apply Changes** to apply the update to your cluster. + +Select the **Nodes** tab. Click on **Edit** on the first worker pool, named **worker-pool**. The **Edit node pool** +dialog appears. + +Click on **Add New Taint** in the **Taints** section. Fill in `app` for the **Key**, `ui` for the **Value** and select +`NoExecute` for the **Effect**. These values match the toleration you specified in your cluster profile earlier. + +![Add taint to worker pool](/getting-started/getting-started_scale-secure-cluster_add-taint.webp) + +Click on **Confirm** to save your changes. 
The nodes in the `worker-pool` can now only execute the UI pods that have a +toleration matching the configured taint. + +Switch back to your terminal. Execute the following command again to verify where the Hello Universe pods have been +scheduled. + +```shell +kubectl get pods --namespace hello-universe --output wide +``` + +The output verifies that the UI pods have remained scheduled on their original node named +`azure-cluster-worker-pool-6058-7tk4b`, while the other two pods have been moved to the node of the second worker pool +named `azure-cluster-worker-pool-2-6895-pbfnm`. + +```shell +NAME READY STATUS AGE NODE +api-7db799cf85-5w5l6 1/1 Running 20m azure-cluster-worker-pool-2-6895-pbfnm +postgres-698d7ff8f4-vbktf 1/1 Running 20m azure-cluster-worker-pool-2-6895-pbfnm +ui-5f777c76df-pplcv 1/1 Running 20m azure-cluster-worker-pool-6058-7tk4b +``` + +Taints and tolerations are a common way of creating nodes dedicated to certain workloads, once the cluster has scaled +accordingly through its provisioned node pools. Refer to the +[Taints and Tolerations](../../clusters/cluster-management/taints.md) guide to learn more. + +### Verify the Application + +Select the **Overview** tab. Click on the URL for port **:8080** to access the Hello Universe application and verify +that the application is functioning correctly. + +## Cleanup + +Use the following steps to remove all the resources you created for the tutorial. + +To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to +delete to access its details page. + +Click on **Settings** to expand the menu, and select **Delete Cluster**. + +![Delete cluster](/getting-started/azure/getting-started_scale-secure-cluster_delete-cluster-button.webp) + +You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name `azure-cluster` +to proceed with the delete step. The deletion process takes several minutes to complete. + +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force +delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette +automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you +created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the +selection to remove the cluster profile. + +Click on the **drop-down Menu** at the top of the page and switch to **Tenant Admin** scope. + +Navigate to the left **Main Menu** and click on **Projects**. + +Click on the **three-dot Menu** of the **Project-ScaleSecureTutorial** and select **Delete**. A pop-up box will ask you +to confirm the action. Confirm the deletion. + +Navigate to the left **Main Menu** and click on **Users & Teams**. Select the **Teams** tab. + +Click on **scale-secure-tutorial-team** list entry. The **Team Details** pane appears. Click on **Delete Team**. A +pop-up box will ask you to confirm the action. Confirm the deletion. + +## Wrap-up + +In this tutorial, you learned how to perform very important operations relating to the scalability and availability of +your clusters. First, you created a project and team. Next, you imported a cluster profile and deployed a host Azure +cluster. 
Then, you upgraded the Kubernetes version of your cluster and scanned your clusters using Palette's scanning +capabilities. Finally, you scaled your cluster's nodes and used taints to select which Hello Universe pods execute on +them. + +We encourage you to check out the [Additional Capabilities](../additional-capabilities/additional-capabilities.md) to +explore other Palette functionalities. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/azure/setup.md b/docs/docs-content/getting-started/azure/setup.md new file mode 100644 index 0000000000..9c9529d76f --- /dev/null +++ b/docs/docs-content/getting-started/azure/setup.md @@ -0,0 +1,73 @@ +--- +sidebar_label: "Set up Palette" +title: "Set up Palette with Azure" +description: "Learn how to set up Palette with Azure." +icon: "" +hide_table_of_contents: false +sidebar_position: 10 +tags: ["getting-started", "azure"] +--- + +In this guide, you will learn how to set up Palette for use with your Azure cloud account. These steps are required in +order to authenticate Palette and allow it to deploy host clusters. The concepts you learn about in the Getting Started +section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +- A Palette account with [tenant admin](../../tenant-settings/tenant-settings.md) access. + +- Sign up to a public cloud account from + [Azure](https://learn.microsoft.com/en-us/training/modules/create-an-azure-account). The Azure cloud account must have + the [required permissions](../../clusters/public-cloud/azure/required-permissions.md). + +- Access to a terminal window. + +- The utility `ssh-keygen` or similar SSH key generator software. + +## Enablement + +Palette needs access to your Azure cloud account in order to create and manage Azure clusters and resources. + +### Add Azure Cloud Account + + + +### Create and Upload an SSH Key + +Follow the steps below to create an SSH key using the terminal and upload it to Palette. This step is not required for +the [Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) tutorial. + + + +### Create a Palette API Key + +Follow the steps below to create a Palette API key. This is required for the +[Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) tutorial. + + + +## Validate + +You can verify your account is added. + +1. Log in to [Palette](https://console.spectrocloud.com). + +2. From the left **Main Menu**, select **Tenant Settings**. + +3. Next, on the **Tenant Settings Menu**, select **Cloud Accounts**. + +4. The added cloud account is listed under **Azure** with all other available Azure cloud accounts. + +## Next Steps + +Now that you set up Palette for use with Azure, you can start deploying Kubernetes clusters to your Azure account. To +learn how to get started with deploying Kubernetes clusters to Azure, we recommend that you continue to the +[Create a Cluster Profile](./create-cluster-profile.md) tutorial to create a full cluster profile for your host cluster. 
+ +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/azure/update-k8s-cluster.md b/docs/docs-content/getting-started/azure/update-k8s-cluster.md new file mode 100644 index 0000000000..5f85ae0bdd --- /dev/null +++ b/docs/docs-content/getting-started/azure/update-k8s-cluster.md @@ -0,0 +1,299 @@ +--- +sidebar_label: "Deploy Cluster Profile Updates" +title: "Deploy Cluster Profile Updates" +description: "Learn how to update your deployed clusters using Palette Cluster Profiles." +icon: "" +hide_table_of_contents: false +sidebar_position: 40 +tags: ["getting-started", "azure"] +--- + +Palette provides cluster profiles, which allow you to specify layers for your workloads using packs, Helm charts, Zarf +packages, or cluster manifests. Packs serve as blueprints to the provisioning and deployment process, as they contain +the versions of the container images that Palette will install for you. Cluster profiles provide consistency across +environments during the cluster creation process, as well as when maintaining your clusters. Check out +[Cluster Profiles](../introduction.md#cluster-profiles) to learn more. Once provisioned, there are three main ways to +update your Palette deployments. + +| Method | Description | Cluster application process | +| ------------------------ | ---------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Cluster profile versions | Create a new version of the cluster profile with your updates. | Select the new version of the cluster profile. Apply this new profile version to the clusters you want to update. | +| Cluster profile updates | Change the cluster profile in place. | Palette detects the difference between the provisioned resources and this profile. A pending update is available to clusters using this profile. Apply pending updates to the clusters you want to update. | +| Cluster overrides | Change the configuration of a single deployed cluster outside its cluster profile. | Save and apply the changes you've made to your cluster. | + +This tutorial will teach you how to update a cluster deployed with Palette to Microsoft Azure. You will explore each +cluster update method and learn how to apply these changes using Palette. The concepts you learn about in the Getting +Started section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, follow the steps described in the [Set up Palette with Azure](./setup.md) guide to +authenticate Palette for use with your Azure cloud account. + +Additionally, you should install Kubectl locally. Use the Kubernetes +[Install Tools](https://kubernetes.io/docs/tasks/tools/) page for further guidance. + +Follow the instructions of the [Deploy a Cluster](./deploy-k8s-cluster.md) tutorial to deploy a cluster with the +[_hello-universe_](https://github.com/spectrocloud/hello-universe) application. Your cluster should be successfully +provisioned and in a healthy state. + +The cluster profile name is `azure-profile` and the cluster name is `azure-cluster`. + +![Cluster details page](/getting-started/azure/getting-started_update-k8s-cluster_cluster-healthy.webp) + +## Tag and Filter Clusters + +Palette provides the ability to add tags to your cluster profiles and clusters. 
This helps you organize and categorize +your clusters based on your custom criteria. You can add tags during the creation process or by editing the resource +after it has been created. + +Adding tags to your clusters helps you find and identify your clusters, without having to rely on cluster naming. This +is especially important when operating with many clusters or multiple cloud deployments. + +Navigate to the left **Main Menu** and select **Clusters** to view your deployed clusters. Find the `azure-cluster` you +deployed with the _hello-universe_ application. Click on it to view its **Overview** tab. + +Click on the **Settings** drop-down Menu in the upper right corner and select **Cluster Settings**. + +Fill **service:hello-universe-frontend** in the **Tags (Optional)** input box. Click on **Save Changes**. Close the +panel. + +![Image that shows how to add a cluster tag](/getting-started/azure/getting-started_update-k8s-cluster_add-service-tag.webp) + +Navigate to the left **Main Menu** and select **Clusters** to view your deployed clusters. Click on **Add Filter**, then +select the **Add custom filter** option. + +Use the drop-down boxes to fill in the values of the filter. Select **Tags** in the left-hand **drop-down Menu**. Select +**is** in the middle **drop-down Menu**. Fill in **service:hello-universe-frontend** in the right-hand input box. + +Click on **Apply Filter**. + +![Image that shows how to add a frontend service filter](/getting-started/azure/getting-started_update-k8s-cluster_apply-frontend-filter.webp) + +Once you apply the filter, only the `azure-cluster` with this tag is displayed. + +## Version Cluster Profiles + +Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with +better change visibility and control over the layers in your host clusters. Profile versions are commonly used for +adding or removing layers and pack configuration updates. + +The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. If you +do not specify a version for your cluster profile, it defaults to **1.0.0**. + +Navigate to the left **Main Menu** and select **Profiles** to view the cluster profile page. Find the cluster profile +corresponding to your _hello-universe-frontend_ cluster. It should be named `azure-profile`. Select it to view its +details. + +![Image that shows the frontend cluster profile with cluster linked to it](/getting-started/azure/getting-started_update-k8s-cluster_profile-with-cluster.webp) + +The current version is displayed in the **drop-down Menu** next to the profile name. This profile has the default value +of **1.0.0**, as you did not specify another value when you created it. The cluster profile also shows the host clusters +that are currently deployed with this cluster profile version. + +Click on the version **drop-down Menu**. Select the **Create new version** option. + +A dialog box appears. Fill in the **Version** input with **1.1.0**. Click on **Confirm**. + +Palette creates a new cluster profile version and opens it. The version dropdown displays the newly created **1.1.0** +profile. This profile version is not deployed to any host clusters. + +![Image that shows cluster profile version 1.1.0](/getting-started/azure/getting-started_update-k8s-cluster_new-version-overview.webp) + +The version **1.1.0** has the same layers as the version **1.0.0** it was created from. + +Click on **Add New Pack**. 
Select the **Public Repo** registry and scroll down to the **Monitoring** section. Find the +**Kubecost** pack and select it. Alternatively, you can use the search function with the pack name **Kubecost**. + +![Image that shows how to select the Kubecost pack](/getting-started/azure/getting-started_update-k8s-cluster_select-kubecost-pack.webp) + +Once selected, the pack manifest is displayed in the manifest editor. + +Click on **Confirm & Create**. The manifest editor closes. + +Click on **Save Changes** to finish the configuration of this cluster profile version. + +Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the +**service:hello-universe-frontend** tag. Select it to view its **Overview** tab. + +Select the **Profile** tab of this cluster. You can select a new version of your cluster profile by using the version +dropdown. + +Select the **1.1.0** version. + +![Image that shows how to select a new profile version for the cluster](/getting-started/azure/getting-started_update-k8s-cluster_profile-version-selection.webp) + +Click on **Save** to confirm your profile version selection. + +:::warning + +Palette has backup and restore capabilities available for your mission critical workloads. Ensure that you have adequate +backups before you make any cluster profile version changes in your production environments. You can learn more in the +[Backup and Restore](../../clusters/cluster-management/backup-restore/backup-restore.md) section. + +::: + +Palette now makes the required changes to your cluster according to the specifications of the configured cluster profile +version. Once your changes have completed, Palette marks your layers with the green status indicator. The Kubecost pack +will be successfully deployed. + +![Image that shows completed cluster profile updates](/getting-started/azure/getting-started_update-k8s-cluster_completed-cluster-updates.webp) + +Download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette UI. +This file enables you and other users to issue kubectl commands against the host cluster. + +![Image that the kubeconfig file](/getting-started/azure/getting-started_update-k8s-cluster_download-kubeconfig.webp) + +Open a terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded. + +```shell +export KUBECONFIG=~/Downloads/admin.azure-cluster.kubeconfig +``` + +Forward the Kubecost UI to your local network. The Kubecost dashboard is not exposed externally by default, so the +command below will allow you to access it locally on port **9090**. If port 9090 is already taken, you can choose a +different one. + +```shell +kubectl port-forward --namespace kubecost deployment/cost-analyzer-cost-analyzer 9090 +``` + +Open your browser window and navigate to `http://localhost:9090`. The Kubecost UI provides you with a variety of cost +visualization tools. Read more about +[Navigating the Kubecost UI](https://docs.kubecost.com/using-kubecost/navigating-the-kubecost-ui) to make the most of +the cost analyzer. + +![Image that shows the Kubecost UI](/getting-started/azure/getting-started_update-k8s-cluster_kubecost-ui.webp) + +Once you are done exploring locally, you can stop the `kubectl port-forward` command by closing the terminal window it +is executing from. + +## Roll Back Cluster Profiles + +One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of +previously known working states. 
The ability to roll back to a previously working cluster profile in one action shortens +the time to recovery in the event of an incident. + +The process to roll back to a previous version is identical to the process for applying a new version. + +Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the +**service:hello-universe-frontend** tag. Select it to view its **Overview** tab. + +Select the **Profile** tab. This cluster is currently deployed using cluster profile version **1.1.0**. Select the +option **1.0.0** in the version dropdown. This process is the reverse of what you have done in the previous section, +[Version Cluster Profiles](#version-cluster-profiles). + +Click on **Save** to confirm your changes. + +Palette now makes the changes required for the cluster to return to the state specified in version **1.0.0** of your +cluster profile. Once your changes have completed, Palette marks your layers with the green status indicator. + +![Cluster details page with service URL highlighted](/getting-started/azure/getting-started_update-k8s-cluster_rollback.webp) + +## Pending Updates + +Cluster profiles can also be updated in place, without the need to create a new cluster profile version. Palette +monitors the state of your clusters and notifies you when updates are available for your host clusters. You may then +choose to apply your changes at a convenient time. + +The previous state of the cluster profile will not be saved once it is overwritten. + +Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the tag +**service:hello-universe-frontend**. Select it to view its **Overview** tab. + +Select the **Profile** tab. Then, select the **hello-universe** pack. Change the `replicas` field to `2` on line `15`. +Click on **Save**. The editor closes. + +This cluster now contains an override over its cluster profile. Palette uses the configuration you have just provided +for the single cluster over its cluster profile and begins making the appropriate changes. + +Once these changes are complete, select the **Workloads** tab. Then, select the **hello-universe** namespace. + +Two **ui** pods are available, instead of the one specified by your cluster profile. Your override has been successfully +applied. + +Navigate to the left **Main Menu** and select **Profiles** to view the cluster profile page. Find the cluster profile +corresponding to your _hello-universe-frontend_ cluster, named `azure-profile`. + +Click on it to view its details. Select **1.0.0** in the version dropdown. + +Select the **hello-universe** pack. The editor appears. Change the `replicas` field to `3` on line `15`. Click on +**Confirm Updates**. The editor closes. + +Click on **Save Changes** to confirm the changes you have made to your profile. + +Navigate to the left **Main Menu** and select **Clusters**. Filter for the with the **service:hello-universe-frontend** +tag. Palette indicates that the cluster associated with the cluster profile you updated has updates available. + +![Image that shows the pending updates ](/getting-started/azure/getting-started_update-k8s-cluster_pending-update-clusters-view.webp) + +Select this cluster to open its **Overview** tab. Click on **Updates** to begin the cluster update. + +![Image that shows the Updates button](/getting-started/azure/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp) + +A dialog appears which shows the changes made in this update. 
Click on **Review changes in Editor**. As previously, +Palette displays the changes, with the current configuration on the left and the incoming configuration on the right. + +Review the changes and ensure the only change is the `replicas` field value. You can choose to maintain your cluster +override or apply the incoming cluster profile update. + +![Image that shows the available updates dialog ](/getting-started/azure/getting-started_update-k8s-cluster_available-updates-dialog.webp) + +Click on **Apply Changes** once you have finished reviewing your changes. This removes your cluster override. + +Palette updates your cluster according to cluster profile specifications. Once these changes are complete, select the +**Workloads** tab. Then, select the **hello-universe** namespace. + +Three **ui** pods are available. The cluster profile update is now reflected by your cluster. + +## Cluster Observability + + + +## Cleanup + +Use the following steps to remove all the resources you created for the tutorial. + +To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to +delete to access its details page. + +Click on **Settings** to expand the menu, and select **Delete Cluster**. + +![Delete cluster](/getting-started/azure/getting-started_deploy-k8s-cluster_delete-cluster-button.webp) + +You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name to proceed with +the delete step. The deletion process takes several minutes to complete. + +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force +delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette +automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you +created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the +selection to remove the cluster profile. + +## Wrap-Up + +In this tutorial, you created deployed cluster profile updates. After the cluster was deployed to Azure, you updated the +cluster profile through three different methods: create a new cluster profile version, update a cluster profile in +place, and cluster profile overrides. After you made your changes, the Hello Universe application functioned as a +three-tier application with a REST API backend server. + +Cluster profiles provide consistency during the cluster creation process, as well as when maintaining your clusters. +They can be versioned to keep a record of previously working cluster states, giving you visibility when updating or +rolling back workloads across your environments. + +We recommend that you continue to the [Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) page to +learn about how you can use Palette with Terraform. 
+ +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/cluster-profiles.md b/docs/docs-content/getting-started/cluster-profiles.md deleted file mode 100644 index 09c24edfd3..0000000000 --- a/docs/docs-content/getting-started/cluster-profiles.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -sidebar_label: "Cluster Profiles" -title: "Cluster Profiles" -description: "Spectro Cloud Palette Cluster Profiles" -icon: "" -hide_table_of_contents: false -sidebar_position: 30 -tags: ["getting-started"] ---- - -Cluster profiles are the declarative, full-stack models that Palette follows when it provisions, scales, and maintains -your clusters. Cluster profiles are composed of layers using packs, Helm charts, Zarf packages, or cluster manifests to -meet specific types of workloads on your Palette cluster deployments. You can create as many profiles as needed for your -workloads. - -Cluster profiles provide you with a repeatable deployment process for all of your development and production -environments. They also give you visibility on the layers, packages and versions present on your deployed clusters. - -Finally, if you want to update or maintain your deployed workloads, cluster profiles give you the flexibility to make -changes to all clusters deployed with the profile by removing, swapping or adding a new layer. Palette will then -reconcile the current state of your workloads with the desired state specified by the profile. - -Below are cluster profile types you can create: - -- _Infrastructure_ profiles provide the essential components for workload cluster deployments within a - [tenant](../glossary-all.md#tenant): Operating System (OS), Kubernetes, Network, and Storage. Collectively, these - layers form the infrastructure for your cluster. - -- _Add-on_ profiles are exclusively composed of add-on layers. They usually do not contain infrastructure components and - are instead designed for reusability across multiple clusters and multiple projects within a tenant. Since they - provide the flexibility to configure clusters based on specific requirements, _add-on_ profiles can be added to - _infrastructure_ profiles to create what we call a _full profile_. - -- _Full profiles_ combine infrastructure packs with add-on layers. By adding layers, you can enhance cluster - functionality. For example, you might add system apps, authentication, monitoring, ingress, load balancers, and more - to your cluster. - -The diagram below illustrates the components of these profile types and how you can build on infrastructure layers with -add-on layers to create a full cluster profile. You can also create separate add-on profiles to reuse among multiple -clusters. - -![A flow diagram that shows how you can add layers to an infrastructure profile to create a full profile.](/getting-started/getting-started_cluster-profiles_cluster-profiles.webp) - -## Packs - -Packs are the smallest component of a cluster profile. Each layer of a cluster profile is made up of a specific pack. -Palette provides packs that are tailored for specific uses to support the core infrastructure a cluster needs. You can -also use add-on packs, or create your own custom pack to extend Kubernetes functionality. - -The diagram below illustrates some of the popular technologies that you can use in your cluster profile layers. Check -out the [Packs List](../integrations/integrations.mdx) page to learn more about individual packs. 
- -![Diagram of stack grouped as a unit](/getting-started/getting-started_cluster-profiles_stack-grouped-packs.webp) diff --git a/docs/docs-content/getting-started/create-cluster-profile.md b/docs/docs-content/getting-started/create-cluster-profile.md deleted file mode 100644 index 06b407e3ff..0000000000 --- a/docs/docs-content/getting-started/create-cluster-profile.md +++ /dev/null @@ -1,224 +0,0 @@ ---- -sidebar_label: "Create a Cluster Profile" -title: "Create a Cluster Profile" -description: "Learn to create a full cluster profile in Palette." -icon: "" -hide_table_of_contents: false -sidebar_position: 40 -tags: ["getting-started"] ---- - -Palette offers profile-based management for Kubernetes, enabling consistency, repeatability, and operational efficiency -across multiple clusters. A cluster profile allows you to customize the cluster infrastructure stack, allowing you to -choose the desired Operating System (OS), Kubernetes, Container Network Interfaces (CNI), Container Storage Interfaces -(CSI). You can further customize the stack with add-on application layers. For more information about cluster profile -types, refer to [Cluster Profiles](./cluster-profiles.md). - -In this tutorial, you create a full profile directly from the Palette dashboard. Then, you add a layer to your cluster -profile by using a manifest to deploy a web application. Adding custom manifests to your cluster profile allows you to -customize and configure clusters based on specific requirements. - -## Prerequisites - -- Your Palette account role must have the `clusterProfile.create` permission to create a cluster profile. Refer to the - [Roles and Permissions](../user-management/palette-rbac/project-scope-roles-permissions.md#cluster-profile-admin) - documentation for more information. - -## Create a Full Cluster Profile - - - - -Log in to [Palette](https://console.spectrocloud.com) and navigate to the left **Main Menu**. Select **Profiles** to -view the cluster profile page. You can view the list of available cluster profiles. To create a cluster profile, click -on **Add Cluster Profile**. - -![View of the cluster Profiles page](/getting-started/getting-started_create-cluster-profile_profile_list_view.webp) - -Follow the wizard to create a new profile. - -In the **Basic Information** section, assign the name **aws-profile**, a brief profile description, select the type as -**Full**, and assign the tag **env:aws**. You can leave the version empty if you want to. Just be aware that the version -defaults to **1.0.0**. Click on **Next**. - -**Cloud Type** allows you to choose the infrastructure provider with which this cluster profile is associated. Select -**AWS** and click on **Next**. - -The **Profile Layers** step is where you specify the packs that compose the profile. There are four required -infrastructure packs and several optional add-on packs you can choose from. Every pack requires you to select the **Pack -Type**, **Registry**, and **Pack Name**. - -For this tutorial, use the following packs: - -| Pack Name | Version | Layer | -| -------------- | ------- | ---------------- | -| ubuntu-aws LTS | 22.4.x | Operating System | -| Kubernetes | 1.27.x | Kubernetes | -| cni-calico | 3.26.x | Network | -| csi-aws-ebs | 1.22.x | Storage | - -As you fill out the information for each layer, click on **Next** to proceed to the next layer. - -Click on **Confirm** after you have completed filling out all the core layers. 
- -![A view of the cluster profile stack](/getting-started/aws/getting-started_create-cluster-profile_clusters_parameters.webp) - -The review section gives an overview of the cluster profile configuration you selected. Click on **Finish -Configuration** to create the cluster profile. - - - - - -Log in to Palette and navigate to the left **Main Menu**. Select **Profiles** to view the cluster profile page. You can -view the list of available cluster profiles. To create a cluster profile, click on **Add Cluster Profile**. - -![View of the cluster Profiles page](/getting-started/getting-started_create-cluster-profile_profile_list_view.webp) - -Follow the wizard to create a new profile. - -In the **Basic Information** section, assign the name **azure-profile**, a brief profile description, select the type as -**Full**, and assign the tag **env:azure**. You can leave the version empty if you want to. Just be aware that the -version defaults to **1.0.0**. Click on **Next**. - -**Cloud Type** allows you to choose the infrastructure provider with which this cluster profile is associated. Select -**Azure** and click on **Next**. - -The **Profile Layers** step is where you specify the packs that compose the profile. There are four required -infrastructure packs and several optional add-on packs you can choose from. Every pack requires you to select the **Pack -Type**, **Registry**, and **Pack Name**. - -For this tutorial, use the following packs: - -| Pack Name | Version | Layer | -| ---------------- | ------- | ---------------- | -| ubuntu-azure LTS | 22.4.x | Operating System | -| Kubernetes | 1.27.x | Kubernetes | -| cni-calico-azure | 3.26.x | Network | -| Azure Disk | 1.28.x | Storage | - -As you fill out the information for each layer, click on **Next** to proceed to the next layer. - -Click on **Confirm** after you have completed filling out all the core layers. - -![Azure cluster profile overview page](/getting-started/azure/getting-started_create-cluster-profile_cluster_profile_stack.webp) - -The review section gives an overview of the cluster profile configuration you selected. Click on **Finish -Configuration** to finish creating the cluster profile. - - - - -Log in to [Palette](https://console.spectrocloud.com) and navigate to the left **Main Menu**. Select **Profiles** to -view the cluster profile page. You can view the list of available cluster profiles. To create a cluster profile, click -on **Add Cluster Profile**. - -![View of the cluster Profiles page](/getting-started/getting-started_create-cluster-profile_profile_list_view.webp) - -Follow the wizard to create a new profile. - -In the **Basic Information** section, assign the name **gcp-profile**, provide a profile description, select the type as -**Full**, and assign the tag **env:gcp**. You can leave the version empty if you want to. Just be aware that the version -defaults to **1.0.0**. Click on **Next**. - -Cloud Type allows you to choose the infrastructure provider with which this cluster profile is associated. Select -**Google Cloud** and click on **Next**. - -The **Profile Layers** step is where you specify the packs that compose the profile. There are four required -infrastructure packs and several optional add-on packs you can choose from. Every pack requires you to select the **Pack -Type**, **Registry**, and **Pack Name**. 
- -For this tutorial, use the following packs: - -| Pack Name | Version | Layer | -| -------------- | ------- | ---------------- | -| ubuntu-gcp LTS | 22.4.x | Operating System | -| Kubernetes | 1.27.x | Kubernetes | -| cni-calico | 3.26.x | Network | -| csi-gcp-driver | 1.8.x | Storage | - -As you fill out the information for each layer, click on **Next** to proceed to the next layer. - -Click on **Confirm** after you have completed filling out all the core layers. - -![GCP cluster profile view](/getting-started/gcp/getting-started_create-cluster-profile_cluster_profile_stack.webp) - -The review section gives an overview of the cluster profile configuration you selected. Click on **Finish -Configuration** to create the cluster profile. - - - - - -## Add a Manifest - -Navigate to the left **Main Menu** and select **Profiles**. Select the cluster profile you created earlier. - -Click on **Add Manifest** at the top of the page and fill out the following input fields. - -- **Layer name** - The name of the layer. Assign the name **application**. -- **Manifests** - Add your manifest by giving it a name and clicking the **New Manifest** button. Assign a name to the - internal manifest and click on the blue button. An empty editor will be displayed on the right side of the screen. - -![Screenshot of unopened manifest editor](/getting-started/getting-started_create-cluster-profile_manifest_blue_btn.webp) - -In the manifest editor, insert the following content. - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: hello-universe-service -spec: - type: LoadBalancer - ports: - - protocol: TCP - port: 8080 - targetPort: 8080 - selector: - app: hello-universe ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: hello-universe-deployment -spec: - replicas: 2 - selector: - matchLabels: - app: hello-universe - template: - metadata: - labels: - app: hello-universe - spec: - containers: - - name: hello-universe - image: ghcr.io/spectrocloud/hello-universe:1.1.0 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 8080 -``` - -The code snippet you added will deploy the [_hello-universe_](https://github.com/spectrocloud/hello-universe) -application. You may have noticed that the code snippet you added is a Kubernetes configuration. Manifest files are a -method you can use to achieve more granular customization of your Kubernetes cluster. You can add any valid Kubernetes -configuration to a manifest file. - -![Screenshot of manifest in the editor](/getting-started/getting-started_create-cluster-profile_manifest.webp) - -The manifest defines a replica set for the application to simulate a distributed environment with a web application -deployed to Kubernetes. The application is assigned a load balancer. Using a load balancer, you can expose a single -access point and distribute the workload to both containers. - -Click on **Confirm & Create** to save the manifest. Click on **Save Changes** to save this new layer to the cluster -profile. - -## Wrap-Up - -In this tutorial, you created a cluster profile, which is a template that contains the core layers required to deploy a -host cluster using Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) cloud providers. You added -a custom manifest to your profile to deploy a custom workload. - -We recommend that you continue to the [Deploy a Cluster](./deploy-k8s-cluster.md) tutorial to deploy this cluster -profile to a host cluster onto your preferred cloud service provider. 
diff --git a/docs/docs-content/getting-started/deploy-k8s-cluster.md b/docs/docs-content/getting-started/deploy-k8s-cluster.md deleted file mode 100644 index 3374a6a93f..0000000000 --- a/docs/docs-content/getting-started/deploy-k8s-cluster.md +++ /dev/null @@ -1,355 +0,0 @@ ---- -sidebar_label: "Deploy a Cluster" -title: "Deploy a Cluster" -description: "Learn to deploy a Palette host cluster." -icon: "" -hide_table_of_contents: false -sidebar_position: 50 -tags: ["getting-started"] ---- - -This tutorial will teach you how to deploy a host cluster with Palette using Amazon Web Services (AWS), Microsoft Azure, -or Google Cloud Platform (GCP) cloud providers. You will learn about _Cluster Mode_ and _Cluster Profiles_ and how these -components enable you to deploy customized applications to Kubernetes with minimal effort. - -As you navigate the tutorial, refer to this diagram to help you understand how Palette uses a cluster profile as a -blueprint for the host cluster you deploy. Palette clusters have the same node pools you may be familiar with: _control -plane nodes_ and _worker nodes_ where you will deploy applications. The result is a host cluster that Palette manages. - -![A view of Palette managing the Kubernetes lifecycle](/getting-started/getting-started_deploy-k8s-cluster_application.webp) - -## Prerequisites - -To complete this tutorial, you will need the following. - -- A public cloud account from one of these providers: - - - [AWS](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account) - - [Azure](https://learn.microsoft.com/en-us/training/modules/create-an-azure-account) - - [GCP](https://cloud.google.com/docs/get-started) - -- Register the cloud account in Palette. The following resources provide additional guidance. - - - [Register and Manage AWS Accounts](../clusters/public-cloud/aws/add-aws-accounts.md) - - [Register and Manage Azure Cloud Accounts](../clusters/public-cloud/azure/azure-cloud.md) - - [Register and Manage GCP Accounts](../clusters/public-cloud/gcp/add-gcp-accounts.md) - -- An SSH Key Pair. Use the [Create and Upload an SSH Key](../clusters/cluster-management/ssh-keys.md) guide to learn how - to create an SSH key and upload it to Palette. - - - AWS users must create an AWS Key pair before starting the tutorial. If you need additional guidance, check out the - [Create EC2 SSH Key Pair](https://docs.aws.amazon.com/ground-station/latest/ug/create-ec2-ssh-key-pair.html) - tutorial. - -- A Palette cluster profile. Follow the [Create a Cluster Profile](./create-cluster-profile.md) tutorial to create the - required cluster profile for your chosen cloud provider. - -## Deploy a Cluster - -The following steps will guide you through deploying the cluster infrastructure. - - - - - -Navigate to the left **Main Menu** and select **Cluster**. From the clusters page, click on **Add New Cluster**. - -![Palette clusters overview page](/getting-started/getting-started_deploy-k8s-cluster_new_cluster.webp) - -Palette will prompt you to either deploy a new cluster or import an existing one. Click on **Deploy New Cluster** to -access the cluster deployment wizard. Select **AWS** and click the **Start AWS Configuration** button. Use the following -steps to create a host cluster in AWS. - -In the **Basic information** section, insert the general information about the cluster, such as the Cluster name, -Description, Tags, and Cloud account. Click on **Next**. 
- -![Palette clusters basic information](/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_basic_info.webp) - -A list is displayed of available profiles you can choose to deploy to AWS. Select the cluster profile you created in the -[Create a Cluster Profile](./create-cluster-profile.md) tutorial, named **aws-profile**, and click on **Next**. - -The **Parameters** section displays all the layers in the cluster profile. - -![Palette clusters parameters](/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_creation_parameters.webp) - -Each layer has a pack manifest file with the deploy configurations. The pack manifest file is in a YAML format. Each -pack contains a set of default values. You can change the manifest values if needed. Click on **Next** to proceed. - -The **Cluster config** section allows you to select the **Region** in which to deploy the host cluster and specify other -options such as the **SSH Key Pair** to assign to the cluster. All clusters require you to select an SSH key. After you -have selected the **Region** and your **SSH Key Pair Name**, click on **Next**. - -The **Nodes config** section allows you to configure the nodes that make up the control plane and worker nodes of the -host cluster. - -Before you proceed to next section, review the following parameters. - -- **Number of nodes in the pool** - This option sets the number of control plane or worker nodes in the control plane or - worker pool. For this tutorial, set the count to one for the control plane pool and two for the worker pool. - -- **Allow worker capability** - This option allows the control plane node to also accept workloads. This is useful when - spot instances are used as worker nodes. You can check this box if you want to. - -- **Instance Type** - Select the compute type for the node pool. Each instance type displays the amount of CPU, RAM, and - hourly cost of the instance. Select `m4.2xlarge`. - -- **Availability zones** - Used to specify the availability zones in which the node pool can place nodes. Select an - availability zone. - -- **Disk size** - Set the disk size to **60 GiB**. - -- **Instance Option** - This option allows you to choose - [on-demand instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-on-demand-instances.html) or - [spot instance](https://aws.amazon.com/ec2/spot/) for worker nodes. Select **On Demand**. - -![Palette clusters basic information](/getting-started/aws/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp) - -Select **Next** to proceed with the cluster deployment. - -In the **Settings** section, you can configure advanced options such as when to patch the OS, enable security scans, -manage backups, add role-based access control (RBAC) bindings, and more. - -For this tutorial, you can use the default settings. Click on **Validate** to continue. - -The **Review** section allows you to review the cluster configuration prior to deploying the cluster. Review all the -settings and click on **Finish Configuration** to deploy the cluster. - -![Configuration overview of newly created AWS cluster](/getting-started/aws/getting-started_deploy-k8s-cluster_profile_cluster_profile_review.webp) - -Navigate to the left **Main Menu** and select **Clusters**. - -![Update the cluster](/getting-started/aws/getting-started_deploy-k8s-cluster_create_cluster.webp) - - - - - -Navigate to the left **Main Menu** and select **Clusters**. Click on **Add New Cluster**. 
- -![Palette clusters overview page](/getting-started/getting-started_deploy-k8s-cluster_new_cluster.webp) - -Click on **Deploy New Cluster** to access the cluster deployment wizard. Select **Azure** and click the **Start Azure -Configuration** button. Use the following steps to create a host cluster in Azure. - -In the **Basic information** section, insert the general information about the cluster, such as the Cluster name, -Description, Tags, and Cloud account. Click on **Next**. - -![Palette clusters basic information](/getting-started/azure/getting-started_deploy-k8s-cluster_clusters_basic_info.webp) - -A list is displayed of available profiles you can choose to deploy to Azure. Select the cluster profile you created in -the [Create a Cluster Profile](./create-cluster-profile.md) tutorial, named **azure-profile**, and click on **Next**. - -The **Parameters** section displays all the layers in the cluster profile. - -![palette clusters basic information](/getting-started/azure/getting-started_deploy-k8s-cluster_parameters.webp) - -Each layer has a pack manifest file with the deploy configurations. The pack manifest file is in a YAML format. Each -pack contains a set of default values. You can change the manifest values if needed. Click on **Next** to proceed. - -The **Cluster config** section allows you to select the **Subscription**, **Region**, **Resource Group**, **Storage -account**, and **SSH Key** to apply to the host cluster. All clusters require you to assign an SSH key. Refer to the -[SSH Keys](../clusters/cluster-management/ssh-keys.md) guide for information about uploading an SSH key. - -When you are done selecting a **Subscription**, **Region**, **Resource Group**, **Storage account** and **SSH Key**, -click on **Next**. - -The **Nodes config** section allows you to configure the nodes that compose the control plane nodes and worker nodes of -the Kubernetes cluster. - -Refer to the [Node Pool](../clusters/cluster-management/node-pool.md) guide for a list and description of parameters. - -Before you proceed to next section, review the following parameters. - -- **Number of nodes in the pool** - This option sets the number of control plane or worker nodes in the control plane or - worker pool. For this tutorial, set the count to one for both the control plane and worker pools. - -- **Allow worker capability** - This option allows the control plane node to also accept workloads. This is useful when - spot instances are used as worker nodes. You can check this box if you want to. - -- **Instance Type** - Select the compute type for the node pool. Each instance type displays the amount of CPU, RAM, and - hourly cost of the instance. Select **Standard_A8_v2**. - -- **Managed disk** - Used to select the storage class. Select **Standard LRS** and set the disk size to **60**. - -- **Availability zones** - Used to specify the availability zones in which the node pool can place nodes. Select an - availability zone. - -![Palette clusters nodes configuration](/getting-started/azure/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp) - -In the **Settings** section, you can configure advanced options such as when to patch the OS, enable security scans, -manage backups, add Role-Based Access Control (RBAC) bindings, and more. - -For this tutorial, you can use the default settings. Click on **Validate** to continue. - -The Review section allows you to review the cluster configuration before deploying the cluster. 
Review all the settings -and click on **Finish Configuration** to deploy the cluster. - -![Configuration overview of newly created Azure cluster](/getting-started/azure/getting-started_deploy-k8s-cluster_profile_review.webp) - -Navigate to the left **Main Menu** and select **Clusters**. - -![Update the cluster](/getting-started/azure/getting-started_deploy-k8s-cluster_create_cluster.webp) - - - - - -Navigate to the left **Main Menu** and select **Cluster**. Click on **Add New Cluster**. - -![Palette clusters overview page](/getting-started/getting-started_deploy-k8s-cluster_new_cluster.webp) - -Click on **Deploy New Cluster** to access the cluster deployment wizard. Select **Google Cloud** and click the **Start -Google Cloud Configuration** button. Use the following steps to create a host cluster in Google Cloud. - -In the **Basic information** section, insert the general information about the cluster, such as the **Cluster name**, -**Description**, **Tags**, and **Cloud account**. Click on **Next**. - -![Palette clusters basic information](/getting-started/gcp/getting-started_deploy-k8s-cluster_basic_info.webp) - -A list is displayed of available profiles you can choose to deploy to GCP. Select the cluster profile you created in the -[Create a Cluster Profile](./create-cluster-profile.md) tutorial, named **gcp-profile**, and click on **Next**. - -The **Parameters** section displays all the layers in the cluster profile. - -![Palette clusters basic information](/getting-started/gcp/getting-started_deploy-k8s-cluster_clusters_parameters.webp) - -Each layer has a pack manifest file with the deploy configurations. The pack manifest file is in a YAML format. Each -pack contains a set of default values. You can change the manifest values if needed. Click on **Next** to proceed. - -The **Cluster config** section allows you to select the **Project**, **Region**, and **SSH Key** to apply to the host -cluster. All clusters require you to assign an SSH key. Refer to the [SSH Keys](/clusters/cluster-management/ssh-keys) -guide for information about uploading an SSH key. - -After selecting a **Project**, **Region**, and **SSH Key**, click on **Next**. - -The **Nodes config** section allows you to configure the nodes that make up the control plane and worker nodes of the -host cluster. - -Before you proceed to the next section, review the following parameters. - -Refer to the [Node Pool](../clusters/cluster-management/node-pool.md) guide for a list and description of parameters. - -Before you proceed to next section, review the following parameters. - -- **Number of nodes in the pool** - This option sets the number of control plane or worker nodes in the control plane or - worker pool. For this tutorial, set the count to one for the control plane pool and two for the worker pool. - -- **Allow worker capability** - This option allows the control plane node to also accept workloads. This is useful when - spot instances are used as worker nodes. You can check this box if you want to. - -- **Instance Type** - Select the compute type for the node pool. Each instance type displays the amount of CPU, RAM, and - hourly cost of the instance. Select **n1-standard-4**. - -- **Disk size** - Set the disk size to **60**. - -- **Availability zones** - Used to specify the availability zones in which the node pool can place nodes. Select an - availability zone. 
- -![Palette clusters nodes configuration](/getting-started/gcp/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp) - -Select **Next** to proceed with the cluster deployment. - -In the **Settings** section, you can configure advanced options such as when to patch the OS, enable security scans, -manage backups, add Role-Based Access Control (RBAC) bindings, and more. - -For this tutorial, you can use the default settings. Click on **Validate** to continue. - -The **Review** section allows you to review the cluster configuration before deploying the cluster. Review all the -settings and click on **Finish Configuration** to deploy the cluster. - -![Newly created GCP cluster](/getting-started/gcp/getting-started_deploy-k8s-cluster_profile_review.webp) - -Navigate to the left **Main Menu** and select **Clusters**. - -![Update the cluster](/getting-started/gcp/getting-started_deploy-k8s-cluster_new_cluster.webp) - - - - - -The cluster deployment process can take 15 to 30 min. The deployment time varies depending on the cloud provider, -cluster profile, cluster size, and the node pool configurations provided. You can learn more about the deployment -progress by reviewing the event log. Click on the **Events** tab to view the log. - -![Update the cluster](/getting-started/getting-started_deploy-k8s-cluster_event_log.webp) - -
- -While you wait for the cluster deployment process to complete, feel free to check out a video where we discuss the -growing pains of using Kubernetes and how Palette can help your address these pain points. - -
- - - -## Verify the Application - -Navigate to the left **Main Menu** and select **Clusters**. - -Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic, -indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the -Hello Universe application. - -![Cluster details page with service URL highlighted](/getting-started/getting-started_deploy-k8s-cluster_service_url.webp) - -
- -:::warning - -It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few -moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request. - -::: - -
- -![Image that shows the cluster overview of the Hello Universe Frontend Cluster](/getting-started/getting-started_deploy-k8s-cluster_hello-universe-without-api.webp) - -Welcome to Hello Universe, a demo application to help you learn more about Palette and its features. Feel free to click -on the logo to increase the counter and for a fun image change. - -You have deployed your first application to a cluster managed by Palette. Your first application is a single container -application with no upstream dependencies. - -## Cleanup - -Use the following steps to remove all the resources you created for the tutorial. - -To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to -delete to access its details page. - -Click on **Settings** to expand the menu, and select **Delete Cluster**. - -![Delete cluster](/getting-started/getting-started_deploy-k8s-cluster_delete-cluster-button.webp) - -You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name to proceed with -the delete step. The deletion process takes several minutes to complete. - -
- -:::info - -If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force -delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette -automatically removes clusters stuck in the cluster deletion phase for over 24 hours. - -::: - -
- -Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you -created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the -selection to remove the cluster profile. - -## Wrap-Up - -In this tutorial, you used the cluster profile you created in the previous -[Create a Cluster Profile](./create-cluster-profile.md) tutorial to deploy a host cluster onto your preferred cloud -service provider. After the cluster deployed, you verified the Hello Universe application was successfully deployed. - -We recommend that you continue to the -[Deploy Cluster Profile Updates](../tutorials/cluster-management/update-maintain/update-k8s-cluster.md) tutorial to -learn how to update your host cluster. diff --git a/docs/docs-content/getting-started/gcp/_category_.json b/docs/docs-content/getting-started/gcp/_category_.json new file mode 100644 index 0000000000..c82af61e53 --- /dev/null +++ b/docs/docs-content/getting-started/gcp/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 60 +} diff --git a/docs/docs-content/getting-started/gcp/create-cluster-profile.md b/docs/docs-content/getting-started/gcp/create-cluster-profile.md new file mode 100644 index 0000000000..94b8766175 --- /dev/null +++ b/docs/docs-content/getting-started/gcp/create-cluster-profile.md @@ -0,0 +1,121 @@ +--- +sidebar_label: "Create a Cluster Profile" +title: "Create a Cluster Profile" +description: "Learn to create a full cluster profile in Palette." +icon: "" +hide_table_of_contents: false +sidebar_position: 20 +tags: ["getting-started", "gcp"] +--- + +Palette offers profile-based management for Kubernetes, enabling consistency, repeatability, and operational efficiency +across multiple clusters. A cluster profile allows you to customize the cluster infrastructure stack, allowing you to +choose the desired Operating System (OS), Kubernetes, Container Network Interfaces (CNI), Container Storage Interfaces +(CSI). You can further customize the stack with add-on application layers. For more information about cluster profile +types, refer to [Cluster Profiles](../introduction.md#cluster-profiles). + +In this tutorial, you create a full profile directly from the Palette dashboard. Then, you add a layer to your cluster +profile by using a [community pack](../../integrations/community_packs.md) to deploy a web application. The concepts you +learn about in the Getting Started section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +- Follow the steps described in the [Set up Palette with GCP](./setup.md) guide to authenticate Palette for use with + your GCP cloud account. +- Ensure that the [Palette Community Registry](../../registries-and-packs/registries/registries.md#default-registries) + is available in your Palette environment. Refer to the + [Add OCI Packs Registry](../../registries-and-packs/registries/oci-registry/add-oci-packs.md) guide for additional + guidance. + +## Create a Full Cluster Profile + +Log in to [Palette](https://console.spectrocloud.com) and navigate to the left **Main Menu**. Select **Profiles** to +view the cluster profile page. You can view the list of available cluster profiles. To create a cluster profile, click +on **Add Cluster Profile**. + +Follow the wizard to create a new profile. 
+ +In the **Basic Information** section, assign the name **gcp-profile**, provide a profile description, select the type as +**Full**, and assign the tag **env:gcp**. You can leave the version empty if you want to. Just be aware that the version +defaults to **1.0.0**. Click on **Next**. + +Cloud Type allows you to choose the infrastructure provider with which this cluster profile is associated. Select +**Google Cloud** and click on **Next**. + +The **Profile Layers** step is where you specify the packs that compose the profile. There are four required +infrastructure packs and several optional add-on packs you can choose from. Every pack requires you to select the **Pack +Type**, **Registry**, and **Pack Name**. + +For this tutorial, use the following packs: + +| Pack Name | Version | Layer | +| -------------- | ------- | ---------------- | +| ubuntu-gcp LTS | 22.4.x | Operating System | +| Kubernetes | 1.28.x | Kubernetes | +| cni-calico | 3.27.x | Network | +| csi-gcp-driver | 1.12.x | Storage | + +As you fill out the information for each layer, click on **Next** to proceed to the next layer. + +Click on **Confirm** after you have completed filling out all the core layers. + +![GCP cluster profile view](/getting-started/gcp/getting-started_create-cluster-profile_cluster_profile_stack.webp) + +The review section gives an overview of the cluster profile configuration you selected. Click on **Finish +Configuration** to create the cluster profile. + +## Add a Pack + +Navigate to the left **Main Menu** and select **Profiles**. Select the cluster profile you created earlier. + +Click on **Add New Pack** at the top of the page. + +Select the **Palette Community Registry** from the **Registry** dropdown. Then, click on the latest **Hello Universe** +pack with version **v1.2.0**. + +![Screenshot of hello universe pack](/getting-started/gcp/getting-started_create-cluster-profile_add-pack.webp) + +Once you have selected the pack, Palette will display its README, which provides you with additional guidance for usage +and configuration options. The pack you added will deploy the +[_hello-universe_](https://github.com/spectrocloud/hello-universe) application. + +![Screenshot of pack readme](/getting-started/gcp/getting-started_create-cluster-profile_pack-readme.webp) + +Click on **Values** to edit the pack manifest. Click on **Presets** on the right-hand side. + +This pack has two configured presets: + +1. **Disable Hello Universe API** configures the [_hello-universe_](https://github.com/spectrocloud/hello-universe) + application as a standalone frontend application. This is the default preset selection. +2. **Enable Hello Universe API** configures the [_hello-universe_](https://github.com/spectrocloud/hello-universe) + application as a three-tier application with a frontend, API server, and Postgres database. + +Select the **Enable Hello Universe API** preset. The pack manifest changes according to this preset. + +![Screenshot of pack presets](/getting-started/gcp/getting-started_create-cluster-profile_pack-presets.webp) + +The pack requires two values to be replaced for the authorization token and for the database password when using this +preset. Replace these values with your own base64 encoded values. The +[_hello-universe_](https://github.com/spectrocloud/hello-universe?tab=readme-ov-file#single-load-balancer) repository +provides a token that you can use. + +Click on **Confirm Updates**. The manifest editor closes. + +Click on **Confirm & Create** to save the manifest. 
Then, click on **Save Changes** to save this new layer to the +cluster profile. + +## Wrap-Up + +In this tutorial, you created a cluster profile, which is a template that contains the core layers required to deploy a +host cluster using GCP. You added a community pack to your profile to deploy a custom workload. + +We recommend that you continue to the [Deploy a Cluster](./deploy-k8s-cluster.md) tutorial to deploy this cluster +profile to a host cluster onto GCP. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/gcp/deploy-k8s-cluster.md b/docs/docs-content/getting-started/gcp/deploy-k8s-cluster.md new file mode 100644 index 0000000000..95ae5b6420 --- /dev/null +++ b/docs/docs-content/getting-started/gcp/deploy-k8s-cluster.md @@ -0,0 +1,185 @@ +--- +sidebar_label: "Deploy a Cluster" +title: "Deploy a Cluster" +description: "Learn to deploy a Palette host cluster." +icon: "" +hide_table_of_contents: false +sidebar_position: 30 +tags: ["getting-started", "gcp"] +--- + +This tutorial will teach you how to deploy a host cluster with Palette using Google Cloud Platform (GCP). You will learn +about _Cluster Mode_ and _Cluster Profiles_ and how these components enable you to deploy customized applications to +Kubernetes with minimal effort. + +As you navigate the tutorial, refer to this diagram to help you understand how Palette uses a cluster profile as a +blueprint for the host cluster you deploy. Palette clusters have the same node pools you may be familiar with: _control +plane nodes_ and _worker nodes_ where you will deploy applications. The result is a host cluster that Palette manages. +The concepts you learn about in the Getting Started section are centered around a fictional case study company, +Spacetastic Ltd. + +![A view of Palette managing the Kubernetes lifecycle](/getting-started/getting-started_deploy-k8s-cluster_application.webp) + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, you will need the following. + +- Follow the steps described in the [Set up Palette with GCP](./setup.md) guide to authenticate Palette for use with + your GCP cloud account. + +- A Palette cluster profile. Follow the [Create a Cluster Profile](./create-cluster-profile.md) tutorial to create the + required GCP cluster profile. + +## Deploy a Cluster + +The following steps will guide you through deploying the cluster infrastructure. + +Navigate to the left **Main Menu** and select **Clusters**. Click on **Create Cluster**. + +![Palette clusters overview page](/getting-started/getting-started_deploy-k8s-cluster_new_cluster.webp) + +Palette will prompt you to select the type of cluster. Select **GCP IaaS** and click the **Start GCP IaaS +Configuration** button. Use the following steps to create a host cluster in Google Cloud. + +In the **Basic information** section, insert the general information about the cluster, such as the **Cluster name**, +**Description**, **Tags**, and **Cloud account**. Click on **Next**. + +![Palette clusters basic information](/getting-started/gcp/getting-started_deploy-k8s-cluster_basic_info.webp) + +Click on **Add Cluster Profile**. A list is displayed of available profiles you can choose to deploy to GCP. Select the +cluster profile you created in the [Create a Cluster Profile](./create-cluster-profile.md) tutorial, named +**gcp-profile**, and click on **Confirm**. + +The **Cluster Profile** section displays all the layers in the cluster profile. 
+ +![Palette clusters profile](/getting-started/gcp/getting-started_deploy-k8s-cluster_clusters_parameters.webp) + +Each layer has a pack manifest file with the deploy configurations. The pack manifest file is in a YAML format. Each +pack contains a set of default values. You can change the manifest values if needed. Click on **Next** to proceed. + +The **Cluster Config** section allows you to select the **Project** and **Region** to apply to the host cluster. + +After selecting a **Project** and a **Region**, click on **Next**. + +The **Nodes Config** section allows you to configure the nodes that make up the control plane and worker nodes of the +host cluster. + +Before you proceed to the next section, review the following parameters. + +Refer to the [Node Pool](../../clusters/cluster-management/node-pool.md) guide for a list and description of parameters. + +Before you proceed to next section, review the following parameters. + +- **Number of nodes in the pool** - This option sets the number of control plane or worker nodes in the control plane or + worker pool. For this tutorial, set the count to one for the control plane pool and two for the worker pool. + +- **Allow worker capability** - This option allows the control plane node to also accept workloads. This is useful when + spot instances are used as worker nodes. You can check this box if you want to. + +- **Instance Type** - Select the compute type for the node pool. Each instance type displays the amount of CPU, RAM, and + hourly cost of the instance. Select **n1-standard-4**. + +- **Disk size** - Set the disk size to **60**. + +- **Availability zones** - Used to specify the availability zones in which the node pool can place nodes. Select an + availability zone. + +![Palette clusters nodes configuration](/getting-started/gcp/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp) + +Select **Next** to proceed with the cluster deployment. + +In the **Cluster Settings** section, you can configure advanced options such as when to patch the OS, enable security +scans, manage backups, add Role-Based Access Control (RBAC) bindings, and more. + +For this tutorial, you can use the default settings. Click on **Validate** to continue. + +The **Review** section allows you to review the cluster configuration before deploying the cluster. Review all the +settings and click on **Finish Configuration** to deploy the cluster. + +![Newly created GCP cluster](/getting-started/gcp/getting-started_deploy-k8s-cluster_profile_review.webp) + +Navigate to the left **Main Menu** and select **Clusters**. + +![Update the cluster](/getting-started/gcp/getting-started_deploy-k8s-cluster_new_cluster.webp) + +The cluster deployment process can take 15 to 30 min. The deployment time varies depending on the cloud provider, +cluster profile, cluster size, and the node pool configurations provided. You can learn more about the deployment +progress by reviewing the event log. Click on the **Events** tab to view the log. + +![Update the cluster](/getting-started/gcp/getting-started_deploy-k8s-cluster_event_log.webp) + +## Verify the Application + +Navigate to the left **Main Menu** and select **Clusters**. + +Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic, +indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the +Hello Universe application. 
+ +![Cluster details page with service URL highlighted](/getting-started/gcp/getting-started_deploy-k8s-cluster_service_url.webp) + +
+ +:::warning + +It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few +moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request. + +::: + +
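While you wait for DNS to resolve, you can optionally confirm that the load balancer service has been provisioned by
querying the cluster directly. The commands below are a minimal example: they assume you have downloaded the cluster
[kubeconfig](../../clusters/cluster-management/kubeconfig.md) file and that the Hello Universe resources are deployed to
the **hello-universe** namespace. Adjust the kubeconfig path and namespace to match your environment.

```shell
# Point kubectl at the kubeconfig downloaded from Palette. Adjust the path to your download location.
export KUBECONFIG=~/Downloads/admin.gcp-cluster.kubeconfig

# List the services created for the application. The namespace may differ in your environment.
kubectl get services --namespace hello-universe
```

Once provisioning completes, the **EXTERNAL-IP** column displays the address of the public load balancer. A value of
`<pending>` means the cloud provider is still allocating the load balancer.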
+ +![Image that shows the cluster overview of the Hello Universe Frontend Cluster](/getting-started/getting-started_deploy-k8s-cluster_hello-universe-with-api.webp) + +Welcome to Spacetastic's astronomy education platform. Feel free to explore the pages and learn more about space. The +statistics page offers information on visitor counts on your deployed cluster. + +You have deployed your first application to a cluster managed by Palette. Your first application is a three-tier +application with a frontend, API server, and Postgres database. + +## Cleanup + +Use the following steps to remove all the resources you created for the tutorial. + +To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to +delete to access its details page. + +Click on **Settings** to expand the menu, and select **Delete Cluster**. + +![Delete cluster](/getting-started/gcp/getting-started_deploy-k8s-cluster_delete-cluster-button.webp) + +You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name to proceed with +the delete step. The deletion process takes several minutes to complete. + +
+ +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force +delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette +automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +
+ +Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you +created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the +selection to remove the cluster profile. + +## Wrap-Up + +In this tutorial, you used the cluster profile you created in the previous +[Create a Cluster Profile](./create-cluster-profile.md) tutorial to deploy a host cluster onto your preferred cloud +service provider. After the cluster deployed, you verified the Hello Universe application was successfully deployed. + +We recommend that you continue to the [Deploy Cluster Profile Updates](./update-k8s-cluster.md) tutorial to learn how to +update your host cluster. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/gcp/deploy-manage-k8s-cluster-tf.md b/docs/docs-content/getting-started/gcp/deploy-manage-k8s-cluster-tf.md new file mode 100644 index 0000000000..4fc650a09d --- /dev/null +++ b/docs/docs-content/getting-started/gcp/deploy-manage-k8s-cluster-tf.md @@ -0,0 +1,745 @@ +--- +sidebar_label: "Cluster Management with Terraform" +title: "Cluster Management with Terraform" +description: "Learn how to deploy and update a Palette host cluster to GCP with Terraform." +icon: "" +hide_table_of_contents: false +sidebar_position: 50 +toc_max_heading_level: 2 +tags: ["getting-started", "gcp", "terraform"] +--- + +The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider +allows you to create and manage Palette resources using Infrastructure as Code (IaC). With IaC, you can automate the +provisioning of resources, collaborate on changes, and maintain a single source of truth for your infrastructure. + +This tutorial will teach you how to use Terraform to deploy and update a Google Cloud Platform (GCP) host cluster. You +will learn how to create two versions of a cluster profile with different demo applications, update the deployed cluster +with the new cluster profile version, and then perform a rollback. The concepts you learn about in the Getting Started +section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, you will need the following items in place: + +- Follow the steps described in the [Set up Palette with GCP](./setup.md) guide to authenticate Palette for use with + your GCP cloud account and create a Palette API key. +- [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [Podman](https://podman.io/docs/installation) + installed if you choose to follow along using the tutorial container. +- If you choose to clone the repository instead of using the tutorial container, make sure you have the following + software installed: + - [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) v1.9.0 or greater + - [Git](https://git-scm.com/downloads) + - [Kubectl](https://kubernetes.io/docs/tasks/tools/) + +## Set Up Local Environment + +You can clone the [Tutorials](https://github.com/spectrocloud/tutorials) repository locally or follow along by +downloading a container image that includes the tutorial code and all dependencies. + + + + + +Start Docker Desktop and ensure that the Docker daemon is available by issuing the following command. + +```bash +docker ps +``` + +Next, download the tutorial image, start the container, and open a bash session into it. 
+ +```shell +docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.9 bash +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + +:::warning + +Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress. + +::: + + + + + +If you are not using a Linux operating system, create and start the Podman Machine in your local environment. Otherwise, +skip this step. + +```bash +podman machine init +podman machine start +``` + +Use the following command and ensure you receive an output displaying the installation information. + +```bash +podman info +``` + +Next, download the tutorial image, start the container, and open a bash session into it. + +```shell +podman run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.9 bash +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + +:::warning + +Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress. + +::: + + + + + +Open a terminal window and download the tutorial code from GitHub. + +```shell +git clone https://github.com/spectrocloud/tutorials.git +``` + +Change the directory to the tutorial folder. + +```shell +cd tutorials/ +``` + +Check out the following git tag. + +```shell +git checkout v1.1.9 +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + + + + + +## Resources Review + +To help you get started with Terraform, the tutorial code is structured to support deploying a cluster to either AWS, +Azure, GCP, or VMware vSphere. Before you deploy a host cluster to GCP, review the following files in the folder +structure. + +| **File** | **Description** | +| ----------------------- | ---------------------------------------------------------------------------------------------------------------------- | +| **provider.tf** | This file contains the Terraform providers that are used to support the deployment of the cluster. | +| **inputs.tf** | This file contains all the Terraform variables required for the deployment logic. | +| **data.tf** | This file contains all the query resources that perform read actions. | +| **cluster_profiles.tf** | This file contains the cluster profile definitions for each cloud provider. | +| **clusters.tf** | This file has the cluster configurations required to deploy a host cluster to one of the cloud providers. | +| **terraform.tfvars** | Use this file to target a specific cloud provider and customize the deployment. This is the only file you must modify. | +| **ippool.tf** | This file contains the configuration required for VMware deployments that use static IP placement. | +| **ssh-key.tf** | This file has the SSH key resource definition required for Azure and VMware deployments. | +| **outputs.tf** | This file contains the content that will be displayed in the terminal after a successful Terraform `apply` action. | + +The following section reviews the core Terraform resources more closely. + +#### Provider + +The **provider.tf** file contains the Terraform providers used in the tutorial and their respective versions. 
This +tutorial uses four providers: + +- [Spectro Cloud](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) +- [TLS](https://registry.terraform.io/providers/hashicorp/tls/latest) +- [vSphere](https://registry.terraform.io/providers/hashicorp/vsphere/latest) +- [Local](https://registry.terraform.io/providers/hashicorp/local/latest) + +Note how the project name is specified in the `provider "spectrocloud" {}` block. You can change the target project by +modifying the value of the `palette-project` variable in the **terraform.tfvars** file. + +```hcl +terraform { + required_providers { + spectrocloud = { + version = ">= 0.20.6" + source = "spectrocloud/spectrocloud" + } + + tls = { + source = "hashicorp/tls" + version = "4.0.4" + } + + vsphere = { + source = "hashicorp/vsphere" + version = ">= 2.6.1" + } + + local = { + source = "hashicorp/local" + version = "2.4.1" + } + } + + required_version = ">= 1.9" +} + +provider "spectrocloud" { + project_name = var.palette-project +} +``` + +#### Cluster Profile + +The next file you should become familiar with is the **cluster_profiles.tf** file. The `spectrocloud_cluster_profile` +resource allows you to create a cluster profile and customize its layers. You can specify the packs and versions to use +or add a manifest or Helm chart. + +The cluster profile resource is declared eight times in the **cluster-profiles.tf** file, with each pair of resources +being designated for a specific provider. In this tutorial, two versions of the GCP cluster profile are deployed: +version `1.0.0` deploys the [Hello Universe](https://github.com/spectrocloud/hello-universe) pack, while version `1.1.0` +deploys the [Kubecost](https://www.kubecost.com/) pack along with the +[Hello Universe](https://github.com/spectrocloud/hello-universe) application. + +The cluster profiles include layers for the Operating System (OS), Kubernetes, container network interface, and +container storage interface. The first `pack {}` block in the list equates to the bottom layer of the cluster profile. +Ensure you define the bottom layer of the cluster profile - the OS layer - first in the list of `pack {}` blocks, as the +order in which you arrange the contents of the `pack {}` blocks plays an important role in the cluster profile creation. +The table below displays the packs deployed in each version of the cluster profile. + +| **Pack Type** | **Pack Name** | **Version** | **Cluster Profile v1.0.0** | **Cluster Profile v1.1.0** | +| ------------- | ---------------- | ----------- | -------------------------- | -------------------------- | +| OS | `ubuntu-gcp` | `22.04` | :white_check_mark: | :white_check_mark: | +| Kubernetes | `kubernetes` | `1.28.3` | :white_check_mark: | :white_check_mark: | +| Network | `cni-calico` | `3.27.0` | :white_check_mark: | :white_check_mark: | +| Storage | `csi-gcp-driver` | `1.12.4` | :white_check_mark: | :white_check_mark: | +| App Services | `hellouniverse` | `1.2.0` | :white_check_mark: | :white_check_mark: | +| App Services | `cost-analyzer` | `1.103.3` | :x: | :white_check_mark: | + +The Hello Universe pack has two configured [presets](../../glossary-all.md#presets). The first preset deploys a +standalone frontend application, while the second one deploys a three-tier application with a frontend, API server, and +Postgres database. This tutorial deploys the three-tier version of the +[Hello Universe](https://github.com/spectrocloud/hello-universe) pack. 
The preset selection in the Terraform code is +specified within the Hello Universe pack block with the `values` field and by using the **values-3tier.yaml** file. +Below is an example of version `1.0.0` of the GCP cluster profile Terraform resource. + +```hcl +resource "spectrocloud_cluster_profile" "gcp-profile" { + count = var.deploy-gcp ? 1 : 0 + + name = "tf-gcp-profile" + description = "A basic cluster profile for GCP" + tags = concat(var.tags, ["env:GCP"]) + cloud = "gcp" + type = "cluster" + version = "1.0.0" + + pack { + name = data.spectrocloud_pack.gcp_ubuntu.name + tag = data.spectrocloud_pack.gcp_ubuntu.version + uid = data.spectrocloud_pack.gcp_ubuntu.id + values = data.spectrocloud_pack.gcp_ubuntu.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.gcp_k8s.name + tag = data.spectrocloud_pack.gcp_k8s.version + uid = data.spectrocloud_pack.gcp_k8s.id + values = data.spectrocloud_pack.gcp_k8s.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.gcp_cni.name + tag = data.spectrocloud_pack.gcp_cni.version + uid = data.spectrocloud_pack.gcp_cni.id + values = data.spectrocloud_pack.gcp_cni.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.gcp_csi.name + tag = data.spectrocloud_pack.gcp_csi.version + uid = data.spectrocloud_pack.gcp_csi.id + values = data.spectrocloud_pack.gcp_csi.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.hellouniverse.name + tag = data.spectrocloud_pack.hellouniverse.version + uid = data.spectrocloud_pack.hellouniverse.id + values = templatefile("manifests/values-3tier.yaml", { + namespace = var.app_namespace, + port = var.app_port, + replicas = var.replicas_number + db_password = base64encode(var.db_password), + auth_token = base64encode(var.auth_token) + }) + type = "oci" + } +} +``` + +#### Data Resources + +Each `pack {}` block contains references to a data resource. +[Data resources](https://developer.hashicorp.com/terraform/language/data-sources) are used to perform read actions in +Terraform. The Spectro Cloud Terraform provider exposes several data resources to help you make your Terraform code more +dynamic. The data resource used in the cluster profile is `spectrocloud_pack`. This resource enables you to query +Palette for information about a specific pack, such as its unique ID, registry ID, available versions, and YAML values. + +Below is the data resource used to query Palette for information about the Kubernetes pack for version `1.28.3`. + +```hcl +data "spectrocloud_pack" "gcp_k8s" { + name = "kubernetes" + version = "1.28.3" + registry_uid = data.spectrocloud_registry.public_registry.id +} +``` + +Using the data resource helps you avoid manually entering the parameter values required by the cluster profile's +`pack {}` block. + +#### Cluster + +The **clusters.tf** file contains the definitions required for deploying a host cluster to one of the infrastructure +providers. To create a GCP host cluster, you must set the `deploy-gcp` variable in the **terraform.tfvars** file to +true. + +When deploying a cluster using Terraform, you must provide the same parameters as those available in the Palette UI for +the cluster deployment step, such as the instance size and number of nodes. You can learn more about each parameter by +reviewing the +[GCP cluster resource](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_gcp) +documentation. 
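+
+Note that the cluster resource shown below obtains the ID of the GCP cloud account you registered in Palette from a
+`spectrocloud_cloudaccount_gcp` data resource, which is defined with the other query resources in the **data.tf** file.
+The snippet below is a minimal sketch of such a lookup, assuming the account name is supplied through the
+`gcp-cloud-account-name` variable.
+
+```hcl
+# Minimal sketch: query Palette for the GCP cloud account registered under the provided name.
+data "spectrocloud_cloudaccount_gcp" "account" {
+  count = var.deploy-gcp ? 1 : 0
+
+  name = var.gcp-cloud-account-name
+}
+```
+
+The cluster resource then references the account ID as `data.spectrocloud_cloudaccount_gcp.account[0].id`.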
+ +```hcl +resource "spectrocloud_cluster_gcp" "gcp-cluster" { + count = var.deploy-gcp ? 1 : 0 + + name = "gcp-cluster" + tags = concat(var.tags, ["env:gcp"]) + cloud_account_id = data.spectrocloud_cloudaccount_gcp.account[0].id + + cloud_config { + project = var.gcp_project_name + region = var.gcp-region + } + + cluster_profile { + id = var.deploy-gcp && var.deploy-gcp-kubecost ? resource.spectrocloud_cluster_profile.gcp-profile-kubecost[0].id : resource.spectrocloud_cluster_profile.gcp-profile[0].id + } + + machine_pool { + control_plane = true + control_plane_as_worker = true + name = "control-plane-pool" + count = var.gcp_control_plane_nodes.count + instance_type = var.gcp_control_plane_nodes.instance_type + disk_size_gb = var.gcp_control_plane_nodes.disk_size_gb + azs = var.gcp_control_plane_nodes.availability_zones + } + + machine_pool { + name = "worker-pool" + count = var.gcp_worker_nodes.count + instance_type = var.gcp_worker_nodes.instance_type + disk_size_gb = var.gcp_worker_nodes.disk_size_gb + azs = var.gcp_worker_nodes.availability_zones + } + + timeouts { + create = "30m" + delete = "15m" + } +} +``` + +## Terraform Tests + +Before starting the cluster deployment, test the Terraform code to ensure the resources will be provisioned correctly. +Issue the following command in your terminal. + +```bash +terraform test +``` + +A successful test execution will output the following. + +```text hideClipboard +Success! 16 passed, 0 failed. +``` + +## Input Variables + +To deploy a cluster using Terraform, you must first modify the **terraform.tfvars** file. Open it in the editor of your +choice. The tutorial container includes the editor [Nano](https://www.nano-editor.org). + +The file is structured with different sections. Each provider has a section with variables that need to be filled in, +identified by the placeholder `REPLACE_ME`. Additionally, there is a toggle variable named `deploy-` +available for each provider, which you can use to select the deployment environment. + +In the **Palette Settings** section, modify the name of the `palette-project` variable if you wish to deploy to a +Palette project different from the default one. + +```hcl {4} +##################### +# Palette Settings +##################### +palette-project = "Default" # The name of your project in Palette. +``` + +Next, in the **Hello Universe Configuration** section, provide values for the database password and authentication token +for the Hello Universe pack. For example, you can use the value `password` for the database password and the default +token provided in the +[Hello Universe](https://github.com/spectrocloud/hello-universe/tree/main?tab=readme-ov-file#reverse-proxy-with-kubernetes) +repository for the authentication token. + +```hcl {7-8} +############################## +# Hello Universe Configuration +############################## +app_namespace = "hello-universe" # The namespace in which the application will be deployed. +app_port = 8080 # The cluster port number on which the service will listen for incoming traffic. +replicas_number = 1 # The number of pods to be created. +db_password = "REPLACE ME" # The database password to connect to the API database. +auth_token = "REPLACE ME" # The auth token for the API connection. +``` + +Locate the GCP provider section and change `deploy-gcp = false` to `deploy-gcp = true`. 
Additionally, replace all +occurrences of `REPLACE_ME` with their corresponding values, such as those for the `gcp-cloud-account-name`, +`gcp-region`, `gcp_project_name`, and `availability_zones` variables. You can also update the values for the nodes in +the control plane or worker node pools as needed. + +```hcl {4,7-9,16,24} +########################### +# GCP Deployment Settings +############################ +deploy-gcp = false # Set to true to deploy to GCP. +deploy-gcp-kubecost = false # Set to true to deploy to GCP and include Kubecost to your cluster profile. + +gcp-cloud-account-name = "REPLACE ME" +gcp-region = "REPLACE ME" +gcp_project_name = "REPLACE ME" + +gcp_control_plane_nodes = { + count = "1" + control_plane = true + instance_type = "n1-standard-4" + disk_size_gb = "60" + availability_zones = ["REPLACE ME"] # If you want to deploy to multiple AZs, add them here. Example: ["us-central1-a", "us-central1-b"]. +} + +gcp_worker_nodes = { + count = "1" + control_plane = false + instance_type = "n1-standard-4" + disk_size_gb = "60" + availability_zones = ["REPLACE ME"] # If you want to deploy to multiple AZs, add them here. Example: ["us-central1-a", "us-central1-b"]. +} +``` + +When you are done making the required changes, save the file. + +## Deploy the Cluster + +Before starting the cluster provisioning, export your [Palette API key](./setup.md#create-a-palette-api-key) as an +environment variable. This step allows the Terraform code to authenticate with the Palette API. + +```bash +export SPECTROCLOUD_APIKEY= +``` + +Next, issue the following command to initialize Terraform. The `init` command initializes the working directory that +contains the Terraform files. + +```shell +terraform init +``` + +```text hideClipboard +Terraform has been successfully initialized! +``` + +:::warning + +Before deploying the resources, ensure that there are no active clusters named `gcp-cluster` or cluster profiles named +`tf-gcp-profile` in your Palette project. + +::: + +Issue the `plan` command to preview the resources that Terraform will create. + +```shell +terraform plan +``` + +The output indicates that three new resources will be created: two versions of the GCP cluster profile and the host +cluster. The host cluster will use version `1.0.0` of the cluster profile. + +```shell +Plan: 3 to add, 0 to change, 0 to destroy. +``` + +To deploy the resources, use the `apply` command. + +```shell +terraform apply -auto-approve +``` + +To check that the cluster profile was created correctly, log in to [Palette](https://console.spectrocloud.com), and +click **Profiles** from the left **Main Menu**. Locate the cluster profile named `tf-gcp-profile`. Click on the cluster +profile to review its layers and versions. + +![A view of the cluster profile](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp) + +You can also check the cluster creation process by selecting **Clusters** from the left **Main Menu**. + +![Update the cluster](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp) + +Select your cluster to review its details page, which contains the status, cluster profile, event logs, and more. + +The cluster deployment may take 15 to 30 minutes depending on the cloud provider, cluster profile, cluster size, and the +node pool configurations provided. You can learn more about the deployment progress by reviewing the event log. Click on +the **Events** tab to check the log. 
+ +![Update the cluster](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp) + +### Verify the Application + +In Palette, navigate to the left **Main Menu** and select **Clusters**. + +Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic, +indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the +Hello Universe application. + +:::warning + +It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few +moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request. + +::: + +![Deployed application](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp) + +Welcome to Spacetastic's astronomy education platform. Feel free to explore the pages and learn more about space. The +statistics page offers information on visitor counts on your deployed cluster. + +## Version Cluster Profiles + +Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with +better change visibility and control over the layers in your host clusters. Profile versions are commonly used for +adding or removing layers and pack configuration updates. + +The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. In this +tutorial, you used Terraform to deploy two versions of a GCP cluster profile. The snippet below displays a segment of +the Terraform cluster profile resource version `1.0.0` that was deployed. + +```hcl {4,9} +resource "spectrocloud_cluster_profile" "gcp-profile" { + count = var.deploy-gcp ? 1 : 0 + + name = "tf-gcp-profile" + description = "A basic cluster profile for GCP" + tags = concat(var.tags, ["env:GCP"]) + cloud = "gcp" + type = "cluster" + version = "1.0.0" +``` + +Open the **terraform.tfvars** file, set the `deploy-gcp-kubecost` variable to true, and save the file. Once applied, the +host cluster will use version `1.1.0` of the cluster profile with the Kubecost pack. + +The snippet below displays the segment of the Terraform resource that creates the cluster profile version `1.1.0`. Note +how the name `tf-gcp-profile` is the same as in the first cluster profile resource, but the version is different. + +```hcl {4,9} +resource "spectrocloud_cluster_profile" "gcp-profile-kubecost" { + count = var.deploy-gcp ? 1 : 0 + + name = "tf-gcp-profile" + description = "A basic cluster profile for GCP with Kubecost" + tags = concat(var.tags, ["env:GCP"]) + cloud = "gcp" + type = "cluster" + version = "1.1.0" +``` + +In the terminal window, issue the following command to plan the changes. + +```bash +terraform plan +``` + +The output states that one resource will be modified. The deployed cluster will now use version `1.1.0` of the cluster +profile. + +```text hideClipboard +Plan: 0 to add, 1 to change, 0 to destroy. +``` + +Issue the `apply` command to deploy the changes. + +```bash +terraform apply -auto-approve +``` + +Palette will now reconcile the current state of your workloads with the desired state specified by the new cluster +profile version. + +To visualize the reconciliation behavior, log in to [Palette](https://console.spectrocloud.com), and click **Clusters** +from the left **Main Menu**. + +Select the cluster named `gcp-cluster`. Click on the **Events** tab. 
Note how a cluster reconciliation action was +triggered due to cluster profile changes. + +![Image that shows the cluster profile reconciliation behavior](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_reconciliation.webp) + +Next, click on the **Profile** tab. Observe that the cluster is now using version `1.1.0` of the `tf-gcp-profile` +cluster profile. + +![Image that shows the new cluster profile version with Kubecost](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp) + +Once the changes have been completed, Palette marks the cluster layers with a green status indicator. Click the +**Overview** tab to verify that the Kubecost pack was successfully deployed. + +![Image that shows the cluster with Kubecost](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp) + +Next, download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette +UI. This file enables you and other users to issue `kubectl` commands against the host cluster. + +![Image that shows the cluster's kubeconfig file location](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp) + +Open a new terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded. + +```bash +export KUBECONFIG=~/Downloads/admin.gcp-cluster.kubeconfig +``` + +Forward the Kubecost UI to your local network. The Kubecost dashboard is not exposed externally by default, so the +command below will allow you to access it locally on port **9090**. If port 9090 is already taken, you can choose a +different one. + +```bash +kubectl port-forward --namespace kubecost deployment/cost-analyzer-cost-analyzer 9090 +``` + +Open your browser window and navigate to `http://localhost:9090`. The Kubecost UI provides you with a variety of cost +information about your cluster. Read more about +[Navigating the Kubecost UI](https://docs.kubecost.com/using-kubecost/navigating-the-kubecost-ui) to make the most of +the cost analyzer pack. + +![Image that shows the Kubecost UI](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubecost.webp) + +Once you are done exploring the Kubecost dashboard, stop the `kubectl port-forward` command by closing the terminal +window it is executing from. + +## Roll Back Cluster Profiles + +One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of +previously known working states. The ability to roll back to a previously working cluster profile in one action shortens +the time to recovery in the event of an incident. + +The process of rolling back to a previous version using Terraform is similar to the process of applying a new version. + +Open the **terraform.tfvars** file, set the `deploy-gcp-kubecost` variable to false, and save the file. Once applied, +this action will make the active cluster use version **1.0.0** of the cluster profile again. + +In the terminal window, issue the following command to plan the changes. + +```bash +terraform plan +``` + +The output states that the deployed cluster will now use version `1.0.0` of the cluster profile. + +```text hideClipboard +Plan: 0 to add, 1 to change, 0 to destroy. +``` + +Issue the `apply` command to deploy the changes. + +```bash +terraform apply -auto-approve +``` + +Palette now makes the changes required for the cluster to return to the state specified in version `1.0.0` of your +cluster profile. 
Once your changes have completed, Palette marks your layers with the green status indicator.
+
+![Image that shows the cluster using version 1.0.0 of the cluster profile](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp)
+
+## Cleanup
+
+Use the following steps to clean up the resources you created for the tutorial. Use the `destroy` command to remove all
+the resources you created through Terraform.
+
+```shell
+terraform destroy --auto-approve
+```
+
+A successful execution of `terraform destroy` will output the following.
+
+```shell
+Destroy complete! Resources: 3 destroyed.
+```
+
+:::info
+
+If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for force delete. To trigger a force
+delete action, navigate to the cluster’s details page and click on **Settings**. Click on **Force Delete Cluster** to
+delete the cluster. Palette automatically removes clusters stuck in the cluster deletion phase for over 24 hours.
+
+:::
+
+If you are using the tutorial container, type `exit` in your terminal session and press the **Enter** key. Next, issue
+the following command to stop and remove the container.
+
+
+
+```shell
+docker stop tutorialContainer && \
+docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.9
+```
+
+
+
+```shell
+podman stop tutorialContainer && \
+podman rmi --force ghcr.io/spectrocloud/tutorials:1.1.9
+```
+
+
+
+## Wrap-Up
+
+In this tutorial, you learned how to create different versions of a cluster profile using Terraform. You deployed a host
+GCP cluster and then updated it to use a different version of a cluster profile. Finally, you learned how to perform
+cluster profile rollbacks.
+
+We encourage you to check out the [Scale, Upgrade, and Secure Clusters](./scale-secure-cluster.md) tutorial to learn how
+to perform common Day-2 operations on your deployed clusters.
+
+## 🧑‍🚀 Catch up with Spacetastic
+
+
diff --git a/docs/docs-content/getting-started/gcp/gcp.md b/docs/docs-content/getting-started/gcp/gcp.md
new file mode 100644
index 0000000000..b3657d821e
--- /dev/null
+++ b/docs/docs-content/getting-started/gcp/gcp.md
@@ -0,0 +1,64 @@
+---
+sidebar_label: "Deploy a Cluster to GCP"
+title: "Deploy a Cluster to Google Cloud Platform (GCP)"
+description: "Spectro Cloud Getting Started with GCP"
+hide_table_of_contents: false
+sidebar_custom_props:
+  icon: ""
+tags: ["getting-started", "gcp"]
+---
+
+Palette supports integration with [Google Cloud Platform](https://cloud.google.com/). You can deploy and manage
+[Host Clusters](../../glossary-all.md#host-cluster) in GCP. The concepts you learn about in the Getting Started section
+are centered around a fictional case study company. This approach gives you a solution-focused learning path while
+introducing you to Palette workflows and capabilities.
+
+## 🧑‍🚀 Welcome to Spacetastic!
+
+
+
+## Get Started
+
+In this section, you learn how to create a cluster profile. Then, you deploy a cluster to GCP using Palette. Once
+your cluster is deployed, you can update it using cluster profile updates.
+ + diff --git a/docs/docs-content/getting-started/gcp/scale-secure-cluster.md b/docs/docs-content/getting-started/gcp/scale-secure-cluster.md new file mode 100644 index 0000000000..64a1a13ebf --- /dev/null +++ b/docs/docs-content/getting-started/gcp/scale-secure-cluster.md @@ -0,0 +1,527 @@ +--- +sidebar_label: "Scale, Upgrade, and Secure Clusters" +title: "Scale, Upgrade, and Secure Clusters" +description: "Learn how to scale, upgrade, and secure Palette host clusters deployed to GCP." +icon: "" +hide_table_of_contents: false +sidebar_position: 60 +tags: ["getting-started", "gcp", "tutorial"] +--- + +Palette has in-built features to help with the automation of Day-2 operations. Upgrading and maintaining a deployed +cluster is typically complex because you need to consider any possible impact on service availability. Palette provides +out-of-the-box functionality for upgrades, observability, granular Role Based Access Control (RBAC), backup and security +scans. + +This tutorial will teach you how to use the Palette UI to perform scale and maintenance tasks on your clusters. You will +learn how to create Palette projects and teams, import a cluster profile, safely upgrade the Kubernetes version of a +deployed cluster and scale up your cluster nodes. The concepts you learn about in the Getting Started section are +centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, follow the steps described in the [Set up Palette with GCP](./setup.md) guide to authenticate +Palette for use with your GCP cloud account. + +Additionally, you should install kubectl locally. Use the Kubernetes +[Install Tools](https://kubernetes.io/docs/tasks/tools/) page for further guidance. + +## Create Palette Projects + +Palette projects help you organize and manage cluster resources, providing logical groupings. They also allow you to +manage user access control through Role Based Access Control (RBAC). You can assign users and teams with specific roles +to specific projects. All resources created within a project are scoped to that project and only available to that +project, but a tenant can have multiple projects. + +Log in to [Palette](https://console.spectrocloud.com). + +Click on the **drop-down Menu** at the top of the page and switch to the **Tenant Admin** scope. Palette provides the +**Default** project out-of-the-box. + +![Image that shows how to select tenant admin scope](/getting-started/getting-started_scale-secure-cluster_switch-tenant-admin-scope.webp) + +Navigate to the left **Main Menu** and click on **Projects**. Click on the **Create Project** button. The **Create a new +project** dialog appears. + +Fill out the input fields with values from the table below to create a project. + +| Field | Description | Value | +| ----------- | ----------------------------------- | --------------------------------------------------------- | +| Name | The name of the project. | `Project-ScaleSecureTutorial` | +| Description | A brief description of the project. | Project for Scale, Upgrade, and Secure Clusters tutorial. | +| Tags | Add tags to the project. | `env:dev` | + +Click **Confirm** to create the project. Once Palette finishes creating the project, a new card appears on the +**Projects** page. + +Navigate to the left **Main Menu** and click on **Users & Teams**. + +Select the **Teams** tab. Then, click on **Create Team**. + +Fill in the **Team Name** with **scale-secure-tutorial-team**. Click on **Confirm**. 
+ +Once Palette creates the team, select it from the **Teams** list. The **Team Details** pane opens. + +On the **Project Roles** tab, click on **New Project Role**. The list of project roles appears. + +Select the **Project-ScaleSecureTutorial** from the **Projects** drop-down. Then, select the **Cluster Profile Viewer** +and **Cluster Viewer** roles. Click on **Confirm**. + +![Image that shows how to select team roles](/getting-started/getting-started_scale-secure-cluster_select-team-roles.webp) + +Any users that you add to this team inherit the project roles assigned to it. Roles are the foundation of Palette's RBAC +enforcement. They allow a single user to have different types of access control based on the resource being accessed. In +this scenario, any user added to this team will have access to view any cluster profiles and clusters in the +**Project-ScaleSecureTutorial** project, but not modify them. Check out the +[Palette RBAC](../../user-management/palette-rbac/palette-rbac.md) section for more details. + +Navigate to the left **Main Menu** and click on **Projects**. + +Click on **Open project** on the **Project-ScaleSecureTutorial** card. + +![Image that shows how to open the tutorial project](/getting-started/getting-started_scale-secure-cluster_open-tutorial-project.webp) + +Your scope changes from **Tenant Admin** to **Project-ScaleSecureTutorial**. All further resources you create will be +part of this project. + +## Import a Cluster Profile + +Palette provides three resource contexts. They help you customize your environment to your organizational needs, as well +as control the scope of your settings. + +| Context | Description | +| ------- | ---------------------------------------------------------------------------------------- | +| System | Resources are available at the system level and to all tenants in the system. | +| Tenant | Resources are available at the tenant level and to all projects belonging to the tenant. | +| Project | Resources are available within a project and not available to other projects. | + +All of the resources you have created as part of your Getting Started journey have used the **Project** context. They +are only visible in the **Default** project. Therefore, you will need to create a new cluster profile in +**Project-ScaleSecureTutorial**. + +Navigate to the left **Main Menu** and click on **Profiles**. Click on **Import Cluster Profile**. The **Import Cluster +Profile** pane opens. + +Paste the following in the text editor. Click on **Validate**. The **Select repositories** dialog appears. + + + +Click on **Confirm**. Then, click on **Confirm** on the **Import Cluster Profile** pane. Palette creates a new cluster +profile named **gcp-profile**. + +On the **Profiles** list, select **Project** from the **Contexts** drop-down. Your newly created cluster profile +displays. The Palette UI confirms that the cluster profile was created in the scope of the +**Project-ScaleSecureTutorial**. + +![Image that shows the cluster profile ](/getting-started/gcp/getting-started_scale-secure-cluster_cluster-profile-created.webp) + +Select the cluster profile to view its details. The cluster profile summary appears. + +This cluster profile deploys the [Hello Universe](https://github.com/spectrocloud/hello-universe) application using a +pack. Click on the **hellouniverse 1.2.0** layer. The pack manifest editor appears. + +Click on **Presets** on the right-hand side. 
You can learn more about the pack presets on the pack README, which is available in the Palette UI. Select the **Enable Hello Universe API** preset. The pack manifest changes accordingly.
+
+![Screenshot of pack presets](/getting-started/gcp/getting-started_scale-secure-cluster_pack-presets.webp)
+
+When using this preset, the pack requires you to replace two values: the authorization token and the database password.
+Replace these values with your own base64 encoded values. The
+[_hello-universe_](https://github.com/spectrocloud/hello-universe?tab=readme-ov-file#single-load-balancer) repository
+provides a token that you can use.
+
+Click on **Confirm Updates**. The manifest editor closes. Then, click on **Save Changes** to save your updates.
+
+## Deploy a Cluster
+
+Navigate to the left **Main Menu** and select **Clusters**. Click on **Create Cluster**.
+
+Palette will prompt you to select the type of cluster. Select **GCP IaaS** and click on **Start GCP IaaS
+Configuration**.
+
+Continue with the rest of the cluster deployment flow using the cluster profile you created in the
+[Import a Cluster Profile](#import-a-cluster-profile) section, named **gcp-profile**. Refer to the
+[Deploy a Cluster](./deploy-k8s-cluster.md#deploy-a-cluster) tutorial for additional guidance or if you need a refresher
+on the Palette deployment flow.
+
+### Verify the Application
+
+Navigate to the left **Main Menu** and select **Clusters**.
+
+Select your cluster to view its **Overview** tab.
+
+When the application is deployed and ready for network traffic, Palette exposes the service URL in the **Services**
+field. Click on the URL for port **:8080** to access the Hello Universe application.
+
+![Cluster details page with service URL highlighted](/getting-started/gcp/getting-started_scale-secure-cluster_service_url.webp)
+
+## Upgrade Kubernetes Versions
+
+Regularly upgrading your Kubernetes version is an important part of maintaining a good security posture. New versions
+may contain important patches to security vulnerabilities and bugs that could affect the integrity and availability of
+your clusters.
+
+Palette supports the current Kubernetes minor release and the three previous minor releases, also known as N-3. For
+example, if the current release is 1.29, Palette also supports 1.28, 1.27, and 1.26.
+
+:::warning
+
+Once you upgrade your cluster to a new Kubernetes version, you will not be able to downgrade.
+
+:::
+
+We recommend using cluster profile versions to safely upgrade any layer of your cluster profile and maintain the
+security of your clusters. Expand the following section to learn how to create a new cluster profile version with a
+Kubernetes upgrade.
+
+ +Upgrade Kubernetes using Cluster Profile Versions + +Navigate to the left **Main Menu** and click on **Profiles**. Select the cluster profile that you used to deploy your +cluster, named **gcp-profile**. The cluster profile details page appears. + +Click on the version drop-down and select **Create new version**. The version creation dialog appears. + +Fill in **1.1.0** in the **Version** input field. Then, click on **Confirm**. The new cluster profile version is created +with the same layers as version **1.0.0**. + +Select the **kubernetes 1.27.x** layer of the profile. The pack manifest editor appears. + +Click on the **Pack Version** dropdown. All of the available versions of the **Palette eXtended Kubernetes** pack +appear. The cluster profile is configured to use the latest patch version of **Kubernetes 1.27**. + +![Cluster profile with all Kubernetes versions](/getting-started/gcp/getting-started_scale-secure-cluster_kubernetes-versions.webp) + +The official guidelines for Kubernetes upgrades recommend upgrading one minor version at a time. For example, if you are +using Kubernetes version 1.26, you should upgrade to 1.27, before upgrading to version 1.28. You can learn more about +the official Kubernetes upgrade guidelines in the +[Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/) page. + +Select **1.28.x** from the version dropdown. This selection follows the Kubernetes upgrade guidelines as the cluster +profile is using **1.27.x**. + +The manifest editor highlights the changes made by this upgrade. Once you have verified that the upgrade changes +versions as expected, click on **Confirm changes**. + +Click on **Confirm Updates**. Then, click on **Save Changes** to persist your updates. + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Profile** tab. Your cluster is currently using the **1.0.0** version of your cluster profile. + +Change the cluster profile version by selecting **1.1.0** from the version drop-down. Click on **Review & Save**. The +**Changes Summary** dialog appears. + +Click on **Review changes in Editor**. The **Review Update Changes** dialog displays the same Kubernetes version +upgrades as the cluster profile editor previously did. Click on **Update**. + +
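+
+After the update completes and the cluster returns to a **Healthy** state, you can optionally confirm the new
+Kubernetes version from a terminal. The command below is a quick check that assumes you have downloaded the cluster's
+kubeconfig file and exported its path in the `KUBECONFIG` environment variable, as shown later in this tutorial.
+
+```shell
+kubectl get nodes
+```
+
+Each node should report a `v1.28.x` version in the `VERSION` column once the upgrade and node replacement finish.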
+ +Upgrading the Kubernetes version of your cluster modifies an infrastructure layer. Therefore, Kubernetes needs to +replace its nodes. This is known as a repave. Check out the +[Node Pools](../../clusters/cluster-management/node-pool.md#repave-behavior-and-configuration) page to learn more about +the repave behavior and configuration. + +Click on the **Nodes** tab. You can follow along with the node upgrades on this screen. Palette replaces the nodes +configured with the old Kubernetes version with newly upgraded ones. This may affect the performance of your +application, as Kubernetes swaps the workloads to the upgraded nodes. + +![Node repaves in progress](/getting-started/gcp/getting-started_scale-secure-cluster_node-repaves.webp) + +### Verify the Application + +The cluster update completes when the Palette UI marks the cluster profile layers as green and the cluster is in a +**Healthy** state. The cluster **Overview** page also displays the Kubernetes version as **1.28**. Click on the URL for +port **:8080** to access the application and verify that your upgraded cluster is functional. + +![Kubernetes upgrade applied](/getting-started/gcp/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp) + +## Scan Clusters + +Palette provides compliance, security, conformance, and Software Bill of Materials (SBOM) scans on tenant clusters. +These scans ensure cluster adherence to specific compliance and security standards, as well as detect potential +vulnerabilities. You can perform four types of scans on your cluster. + +| Scan | Description | +| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Kubernetes Configuration Security | This scan examines the compliance of deployed security features against the CIS Kubernetes Benchmarks, which are consensus-driven security guidelines for Kubernetes. By default, the test set will execute based on the cluster Kubernetes version. | +| Kubernetes Penetration Testing | This scan evaluates Kubernetes-related open-ports for any configuration issues that can leave the tenant clusters exposed to attackers. It hunts for security issues in your clusters and increases visibility of the security controls in your Kubernetes environments. | +| Kubernetes Conformance Testing | This scan validates your Kubernetes configuration to ensure that it conforms to CNCF specifications. Palette leverages an open-source tool called [Sonobuoy](https://sonobuoy.io) to perform this scan. | +| Software Bill of Materials (SBOM) | This scan details the various third-party components and dependencies used by your workloads and helps to manage security and compliance risks associated with those components. | + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Scan** tab. The list of all the available cluster scans appears. Palette indicates that you have never +scanned your cluster. + +![Scans never performed on the cluster](/getting-started/gcp/getting-started_scale-secure-cluster_never-scanned-cluster.webp) + +Click **Run Scan** on the **Kubernetes configuration security** and **Kubernetes penetration testing** scans. Palette +schedules and executes these scans on your cluster, which may take a few minutes. 
Once they complete, you can download +the report in PDF, CSV or view the results directly in the Palette UI. + +![Scans completed on the cluster](/getting-started/gcp/getting-started_scale-secure-cluster_scans-completed.webp) + +Click on **Configure Scan** on the **Software Bill of Materials (SBOM)** scan. The **Configure SBOM Scan** dialog +appears. + +Leave the default selections on this screen and click on **Confirm**. Optionally, you can configure an S3 bucket to save +your report into. Refer to the +[Configure an SBOM Scan](../../clusters/cluster-management/compliance-scan.md#configure-an-sbom-scan) guide to learn +more about the configuration options of this scan. + +Once the scan completes, click on the report to view it within the Palette UI. The third-party dependencies that your +workloads rely on are evaluated for potential security vulnerabilities. Reviewing the SBOM enables organizations to +track vulnerabilities, perform regular software maintenance, and ensure compliance with regulatory requirements. + +:::info + +The scan reports highlight any failed checks, based on Kubernetes community standards and CNCF requirements. We +recommend that you prioritize the rectification of any identified issues. + +::: + +As you have seen so far, Palette scans are crucial when maintaining your security posture. Palette provides the ability +to schedule your scans and periodically evaluate your clusters. In addition, it keeps a history of previous scans for +comparison purposes. Expand the following section to learn how to configure scan schedules for your cluster. + +
+
+Configure Cluster Scan Schedules
+
+Click on **Settings**. Then, select **Cluster Settings**. The **Settings** pane appears.
+
+Select the **Schedule Scans** option. You can configure schedules for your cluster scans. Palette provides common scan
+schedules, or you can provide a custom time. We recommend choosing a schedule when you expect the usage of your cluster
+to be lowest. Otherwise, the scans may impact the performance of your nodes.
+
+![Scan schedules](/getting-started/gcp/getting-started_scale-secure-cluster_scans-schedules.webp)
+
+Palette will automatically scan your cluster according to your configured schedule.
+
+ +## Scale a Cluster + +A node pool is a group of nodes within a cluster that all have the same configuration. You can use node pools for +different workloads. For example, you can create a node pool for your production workloads and another for your +development workloads. You can update node pools for active clusters or create a new one for the cluster. + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Nodes** tab. Your cluster has a **control-plane-pool** and a **worker-pool**. Each pool contains one node. + +Select the **Overview** tab. Download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file. + +![kubeconfig download](/getting-started/gcp/getting-started_scale-secure-cluster_download-kubeconfig.webp) + +Open a terminal window and set the environment variable `KUBECONFIG` to point to the file you downloaded. + +```shell +export KUBECONFIG=~/Downloads/admin.gcp-cluster.kubeconfig +``` + +Execute the following command in your terminal to view the nodes of your cluster. + +```shell +kubectl get nodes +``` + +The output reveals two nodes, one for the worker pool and one for the control plane. Make a note of the name of your +worker node, which is the node that does not have the `control-plane` role. In the example below, +`gcp-cluster-worker-pool-us-east1-b-17dc-6mqrv` is the name of the worker node. + +```shell +NAME STATUS ROLES AGE VERSION +gcp-cluster-cp-67943-pnh7m Ready control-plane 30m v1.28.13 +gcp-cluster-worker-pool-us-east1-b-17dc-6mqrv Ready 22m v1.28.13 +``` + +The Hello Universe pack deploys three pods in the `hello-universe` namespace. Execute the following command to verify +where these pods have been scheduled. + +```shell +kubectl get pods --namespace hello-universe --output wide +``` + +The output verifies that all of the pods have been scheduled on the worker node you made a note of previously. + +```shell +NAME READY STATUS AGE NODE +api-7db799cf85-5w5l6 1/1 Running 20m gcp-cluster-worker-pool-us-east1-b-17dc-6mqrv +postgres-698d7ff8f4-vbktf 1/1 Running 20m gcp-cluster-worker-pool-us-east1-b-17dc-6mqrv +ui-5f777c76df-pplcv 1/1 Running 20m gcp-cluster-worker-pool-us-east1-b-17dc-6mqrv +``` + +Navigate back to the Palette UI in your browser. Select the **Nodes** tab. + +Click on **New Node Pool**. The **Add node pool** dialog appears. This workflow allows you to create a new worker pool +for your cluster. Fill in the following configuration. + +| Field | Value | Description | +| --------------------- | ---------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Node pool name** | `worker-pool-2` | The name of your worker pool. | +| **Enable Autoscaler** | Enabled | Whether Palette should scale the pool horizontally based on its per-node workload counts. The **Minimum size** parameter specifies the lower bound of nodes in the pool and the **Maximum size** specifies the upper bound. By default, **Minimum size** is `1` and **Maximum size** is `3`. | +| **Instance Type** | `n1-standard-4` | Set the compute size equal to the already provisioned nodes. | +| **Availability Zone** | _Availability zone of your choice_ | Set the availability zone the same as the already provisioned nodes. | + +Click on **Confirm**. 
The dialog closes. Palette begins provisioning your node pool. Once the process completes, your +three node pools appear in a healthy state. + +![New worker pool provisioned](/getting-started/gcp/getting-started_scale-secure-cluster_third-node-pool.webp) + +Navigate back to your terminal and execute the following command in your terminal to view the nodes of your cluster. + +```shell +kubectl get nodes +``` + +The output reveals three nodes, two for worker pools and one for the control plane. Make a note of the names of your +worker nodes. In the example below, `gcp-cluster-worker-pool-us-east1-b-17dc-6mqrv ` and +`gcp-cluster-worker-pool-2-us-east1-b-2612-4bcck` are the worker nodes. + +```shell +NAME STATUS ROLES AGE VERSION +gcp-cluster-cp-67943-pnh7m Ready control-plane 36m v1.28.13 +gcp-cluster-worker-pool-2-us-east1-b-2612-4bcck Ready 3m5s v1.28.13 +gcp-cluster-worker-pool-us-east1-b-17dc-6mqrv Ready 29m v1.28.13 +``` + +It is common to dedicate node pools to a particular type of workload. One way to specify this is through the use of +Kubernetes taints and tolerations. + +Taints provide nodes with the ability to repel a set of pods, allowing you to mark nodes as unavailable for certain +pods. Tolerations are applied to pods and allow the pods to schedule onto nodes with matching taints. Once configured, +nodes do not accept any pods that do not tolerate the taints. + +The animation below provides a visual representation of how taints and tolerations can be used to specify which +workloads execute on which nodes. + +![Taints repel pods to a new node](/getting-started/getting-started_scale-secure-cluster_taints-in-action.gif) + +Switch back to Palette in your web browser. Navigate to the left **Main Menu** and select **Profiles**. Select the +cluster profile deployed to your cluster, named `gcp-profile`. Ensure that the **1.1.0** version is selected. + +Click on the **hellouniverse 1.2.0** layer. The manifest editor appears. Set the +`manifests.hello-universe.ui.useTolerations` field on line 20 to `true`. Then, set the +`manifests.hello-universe.ui.effect` field on line 22 to `NoExecute`. This toleration describes that the UI pods of +Hello Universe will tolerate the taint with the key `app`, value `ui` and effect `NoExecute`. The tolerations of the UI +pods should be as below. + +```yaml +ui: + useTolerations: true + tolerations: + effect: NoExecute + key: app + value: ui +``` + +Click on **Confirm Updates**. The manifest editor closes. Then, click on **Save Changes** to persist your changes. + +Navigate to the left **Main Menu** and select **Clusters**. Select your deployed cluster, named **gcp-cluster**. + +Due to the changes you have made to the cluster profile, this cluster has a pending update. Click on **Updates**. The +**Changes Summary** dialog appears. + +Click on **Review Changes in Editor**. The **Review Update Changes** dialog appears. The toleration changes appear as +incoming configuration. + +Click on **Apply Changes** to apply the update to your cluster. + +Select the **Nodes** tab. Click on **Edit** on the first worker pool, named **worker-pool**. The **Edit node pool** +dialog appears. + +Click on **Add New Taint** in the **Taints** section. Fill in `app` for the **Key**, `ui` for the **Value** and select +`NoExecute` for the **Effect**. These values match the toleration you specified in your cluster profile earlier. + +![Add taint to worker pool](/getting-started/getting-started_scale-secure-cluster_add-taint.webp) + +Click on **Confirm** to save your changes. 
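+
+Palette applies the taint to the nodes in the **worker-pool** node pool. If you would like to verify the taint from
+your terminal, you can list each node's taints with kubectl. The command below is a quick check that assumes your
+`KUBECONFIG` environment variable still points to the kubeconfig file you downloaded earlier.
+
+```shell
+kubectl get nodes --output custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
+```
+
+The **worker-pool** node should list the `app=ui` taint with the `NoExecute` effect, while the **worker-pool-2** node
+should display `<none>`.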
The nodes in the `worker-pool` can now only execute the UI pods that have a +toleration matching the configured taint. + +Switch back to your terminal. Execute the following command again to verify where the Hello Universe pods have been +scheduled. + +```shell +kubectl get pods --namespace hello-universe --output wide +``` + +The output verifies that the UI pods have remained scheduled on their original node named +`gcp-cluster-worker-pool-us-east1-b-17dc-6mqrv`, while the other two pods have been moved to the node of the second +worker pool named `gcp-cluster-worker-pool-2-us-east1-b-2612-4bcck`. + +```shell +NAME READY STATUS AGE NODE +api-7db799cf85-5w5l6 1/1 Running 20m gcp-cluster-worker-pool-2-us-east1-b-2612-4bcck +postgres-698d7ff8f4-vbktf 1/1 Running 20m gcp-cluster-worker-pool-2-us-east1-b-2612-4bcck +ui-5f777c76df-pplcv 1/1 Running 20m gcp-cluster-worker-pool-us-east1-b-17dc-6mqrv +``` + +Taints and tolerations are a common way of creating nodes dedicated to certain workloads, once the cluster has scaled +accordingly through its provisioned node pools. Refer to the +[Taints and Tolerations](../../clusters/cluster-management/taints.md) guide to learn more. + +### Verify the Application + +Select the **Overview** tab. Click on the URL for port **:8080** to access the Hello Universe application and verify +that the application is functioning correctly. + +## Cleanup + +Use the following steps to remove all the resources you created for the tutorial. + +To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to +delete to access its details page. + +Click on **Settings** to expand the menu, and select **Delete Cluster**. + +![Delete cluster](/getting-started/gcp/getting-started_scale-secure-cluster_delete-cluster-button.webp) + +You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name `gcp-cluster` to +proceed with the delete step. The deletion process takes several minutes to complete. + +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force +delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette +automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you +created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the +selection to remove the cluster profile. + +Click on the **drop-down Menu** at the top of the page and switch to **Tenant Admin** scope. + +Navigate to the left **Main Menu** and click on **Projects**. + +Click on the **three-dot Menu** of the **Project-ScaleSecureTutorial** and select **Delete**. A pop-up box will ask you +to confirm the action. Confirm the deletion. + +Navigate to the left **Main Menu** and click on **Users & Teams**. Select the **Teams** tab. + +Click on **scale-secure-tutorial-team** list entry. The **Team Details** pane appears. Click on **Delete Team**. A +pop-up box will ask you to confirm the action. Confirm the deletion. + +## Wrap-up + +In this tutorial, you learned how to perform very important operations relating to the scalability and availability of +your clusters. First, you created a project and team. Next, you imported a cluster profile and deployed a host GCP +cluster. 
Then, you upgraded the Kubernetes version of your cluster and scanned your clusters using Palette's scanning +capabilities. Finally, you scaled your cluster's nodes and used taints to select which Hello Universe pods execute on +them. + +We encourage you to check out the [Additional Capabilities](../additional-capabilities/additional-capabilities.md) to +explore other Palette functionalities. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/gcp/setup.md b/docs/docs-content/getting-started/gcp/setup.md new file mode 100644 index 0000000000..f5ffe5cf2e --- /dev/null +++ b/docs/docs-content/getting-started/gcp/setup.md @@ -0,0 +1,61 @@ +--- +sidebar_label: "Set up Palette" +title: "Set up Palette with GCP" +description: "Learn how to set up Palette with GCP." +icon: "" +hide_table_of_contents: false +sidebar_position: 10 +tags: ["getting-started", "gcp"] +--- + +In this guide, you will learn how to set up Palette for use with your Google Cloud Platform (GCP) cloud account. These +steps are required in order to authenticate Palette and allow it to deploy host clusters. The concepts you learn about +in the Getting Started section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +- A Palette account with [tenant admin](../../tenant-settings/tenant-settings.md) access. + +- Sign up to a service account from [GCP](https://cloud.google.com/docs/get-started). The GCP account must have the + required [IAM permissions](../../clusters/public-cloud/gcp/required-permissions.md). + +## Enablement + +Palette needs access to your GCP cloud account in order to create and manage GCP clusters and resources. + +### Add Cloud Account + + + +### Create a Palette API Key + +Follow the steps below to create a Palette API key. This is required for the +[Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) tutorial. + + + +## Validate + +You can verify your account is added. + +1. Log in to [Palette](https://console.spectrocloud.com). + +2. From the left **Main Menu**, select **Tenant Settings**. + +3. Next, on the **Tenant Settings Menu**, select **Cloud Accounts**. + +4. The added cloud account is listed under **GCP** with all other available GCP cloud accounts. + +## Next Steps + +Now that you set up Palette for use with Google Cloud, you can start deploying Kubernetes clusters to your GCP account. +To learn how to get started with deploying Kubernetes clusters to GCP, we recommend that you continue to the +[Create a Cluster Profile](./create-cluster-profile.md) tutorial to create a full cluster profile for your host cluster. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/gcp/update-k8s-cluster.md b/docs/docs-content/getting-started/gcp/update-k8s-cluster.md new file mode 100644 index 0000000000..6b340b7fcf --- /dev/null +++ b/docs/docs-content/getting-started/gcp/update-k8s-cluster.md @@ -0,0 +1,298 @@ +--- +sidebar_label: "Deploy Cluster Profile Updates" +title: "Deploy Cluster Profile Updates" +description: "Learn how to update your deployed clusters using Palette Cluster Profiles." +icon: "" +hide_table_of_contents: false +sidebar_position: 40 +tags: ["getting-started", "gcp"] +--- + +Palette provides cluster profiles, which allow you to specify layers for your workloads using packs, Helm charts, Zarf +packages, or cluster manifests. 
Packs serve as blueprints to the provisioning and deployment process, as they contain +the versions of the container images that Palette will install for you. Cluster profiles provide consistency across +environments during the cluster creation process, as well as when maintaining your clusters. Check out +[Cluster Profiles](../introduction.md#cluster-profiles) to learn more. Once provisioned, there are three main ways to +update your Palette deployments. + +| Method | Description | Cluster application process | +| ------------------------ | ---------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Cluster profile versions | Create a new version of the cluster profile with your updates. | Select the new version of the cluster profile. Apply this new profile version to the clusters you want to update. | +| Cluster profile updates | Change the cluster profile in place. | Palette detects the difference between the provisioned resources and this profile. A pending update is available to clusters using this profile. Apply pending updates to the clusters you want to update. | +| Cluster overrides | Change the configuration of a single deployed cluster outside its cluster profile. | Save and apply the changes you've made to your cluster. | + +This tutorial will teach you how to update a cluster deployed with Palette to Google Cloud Platform (GCP). You will +explore each cluster update method and learn how to apply these changes using Palette. The concepts you learn about in +the Getting Started section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, follow the steps described in the [Set up Palette with GCP](./setup.md) guide to authenticate +Palette for use with your GCP cloud account. + +Additionally, you should install Kubectl locally. Use the Kubernetes +[Install Tools](https://kubernetes.io/docs/tasks/tools/) page for further guidance. + +Follow the instructions of the [Deploy a Cluster](./deploy-k8s-cluster.md) tutorial to deploy a cluster with the +[_hello-universe_](https://github.com/spectrocloud/hello-universe) application. Your cluster should be successfully +provisioned and in a healthy state. + +The cluster profile name is `gcp-profile` and the cluster name is `gcp-cluster`. + +![Cluster details page with service URL highlighted](/getting-started/gcp/getting-started_deploy-k8s-cluster_service_url.webp) + +## Tag and Filter Clusters + +Palette provides the ability to add tags to your cluster profiles and clusters. This helps you organize and categorize +your clusters based on your custom criteria. You can add tags during the creation process or by editing the resource +after it has been created. + +Adding tags to your clusters helps you find and identify your clusters, without having to rely on cluster naming. This +is especially important when operating with many clusters or multiple cloud deployments. + +Navigate to the left **Main Menu** and select **Clusters** to view your deployed clusters. Find the `gcp-cluster` you +deployed with the _hello-universe_ application. Click on it to view its **Overview** tab. + +Click on the **Settings** drop-down Menu in the upper right corner and select **Cluster Settings**. 
+ +Fill **service:hello-universe-frontend** in the **Tags (Optional)** input box. Click on **Save Changes**. Close the +panel. + +![Image that shows how to add a cluster tag](/getting-started/gcp/getting-started_update-k8s-cluster_add-service-tag.webp) + +Navigate to the left **Main Menu** and select **Clusters** to view your deployed clusters. Click on **Add Filter**, then +select the **Add custom filter** option. + +Use the drop-down boxes to fill in the values of the filter. Select **Tags** in the left-hand **drop-down Menu**. Select +**is** in the middle **drop-down Menu**. Fill in **service:hello-universe-frontend** in the right-hand input box. + +Click on **Apply Filter**. + +![Image that shows how to add a frontend service filter](/getting-started/gcp/getting-started_update-k8s-cluster_apply-frontend-filter.webp) + +Once you apply the filter, only the `gcp-cluster` with this tag is displayed. + +## Version Cluster Profiles + +Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with +better change visibility and control over the layers in your host clusters. Profile versions are commonly used for +adding or removing layers and pack configuration updates. + +The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. If you +do not specify a version for your cluster profile, it defaults to **1.0.0**. + +Navigate to the left **Main Menu** and select **Profiles** to view the cluster profile page. Find the cluster profile +corresponding to your _hello-universe-frontend_ cluster. It should be named `gcp-profile`. Select it to view its +details. + +![Image that shows the frontend cluster profile with cluster linked to it](/getting-started/gcp/getting-started_update-k8s-cluster_profile-with-cluster.webp) + +The current version is displayed in the **drop-down Menu** next to the profile name. This profile has the default value +of **1.0.0**, as you did not specify another value when you created it. The cluster profile also shows the host clusters +that are currently deployed with this cluster profile version. + +Click on the version **drop-down Menu**. Select the **Create new version** option. + +A dialog box appears. Fill in the **Version** input with **1.1.0**. Click on **Confirm**. + +Palette creates a new cluster profile version and opens it. The version dropdown displays the newly created **1.1.0** +profile. This profile version is not deployed to any host clusters. + +![Image that shows cluster profile version 1.1.0](/getting-started/gcp/getting-started_update-k8s-cluster_new-version-overview.webp) + +The version **1.1.0** has the same layers as the version **1.0.0** it was created from. + +Click on **Add New Pack**. Select the **Public Repo** registry and scroll down to the **Monitoring** section. Find the +**Kubecost** pack and select it. Alternatively, you can use the search function with the pack name **Kubecost**. + +![Image that shows how to select the Kubecost pack](/getting-started/gcp/getting-started_update-k8s-cluster_select-kubecost-pack.webp) + +Once selected, the pack manifest is displayed in the manifest editor. + +Click on **Confirm & Create**. The manifest editor closes. + +Click on **Save Changes** to finish the configuration of this cluster profile version. + +Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the +**service:hello-universe-frontend** tag. Select it to view its **Overview** tab. 
+ +Select the **Profile** tab of this cluster. You can select a new version of your cluster profile by using the version +dropdown. + +Select the **1.1.0** version. + +![Image that shows how to select a new profile version for the cluster](/getting-started/gcp/getting-started_update-k8s-cluster_profile-version-selection.webp) + +Click on **Save** to confirm your profile version selection. + +:::warning + +Palette has backup and restore capabilities available for your mission critical workloads. Ensure that you have adequate +backups before you make any cluster profile version changes in your production environments. You can learn more in the +[Backup and Restore](../../clusters/cluster-management/backup-restore/backup-restore.md) section. + +::: + +Palette now makes the required changes to your cluster according to the specifications of the configured cluster profile +version. Once your changes have completed, Palette marks your layers with the green status indicator. The Kubecost pack +will be successfully deployed. + +![Image that shows completed cluster profile updates](/getting-started/gcp/getting-started_update-k8s-cluster_completed-cluster-updates.webp) + +Download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette UI. +This file enables you and other users to issue kubectl commands against the host cluster. + +![Image that the kubeconfig file](/getting-started/gcp/getting-started_update-k8s-cluster_download-kubeconfig.webp) + +Open a terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded. + +```shell +export KUBECONFIG=~/Downloads/admin.gcp-cluster.kubeconfig +``` + +Forward the Kubecost UI to your local network. The Kubecost dashboard is not exposed externally by default, so the +command below will allow you to access it locally on port **9090**. If port 9090 is already taken, you can choose a +different one. + +```shell +kubectl port-forward --namespace kubecost deployment/cost-analyzer-cost-analyzer 9090 +``` + +Open your browser window and navigate to `http://localhost:9090`. The Kubecost UI provides you with a variety of cost +visualization tools. Read more about +[Navigating the Kubecost UI](https://docs.kubecost.com/using-kubecost/navigating-the-kubecost-ui) to make the most of +the cost analyzer. + +![Image that shows the Kubecost UI](/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubecost.webp) + +Once you are done exploring locally, you can stop the `kubectl port-forward` command by closing the terminal window it +is executing from. + +## Roll Back Cluster Profiles + +One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of +previously known working states. The ability to roll back to a previously working cluster profile in one action shortens +the time to recovery in the event of an incident. + +The process to roll back to a previous version is identical to the process for applying a new version. + +Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the +**service:hello-universe-frontend** tag. Select it to view its **Overview** tab. + +Select the **Profile** tab. This cluster is currently deployed using cluster profile version **1.1.0**. Select the +option **1.0.0** in the version dropdown. This process is the reverse of what you have done in the previous section, +[Version Cluster Profiles](#version-cluster-profiles). 
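If you would like to watch the rollback take effect from your terminal, you can list the Kubecost pods before and after
the change. This assumes the `KUBECONFIG` environment variable still points to the kubeconfig file you downloaded
earlier and that the Kubecost pack was deployed to the **kubecost** namespace, as in the previous section.

```shell
kubectl get pods --namespace kubecost
```

Once the rollback completes, issuing the same command again should show the Kubecost pods terminating or removed,
because version **1.0.0** of the cluster profile does not include the Kubecost pack.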
Click on **Save** to confirm your changes.

Palette now makes the changes required for the cluster to return to the state specified in version **1.0.0** of your
cluster profile. Once your changes have completed, Palette marks your layers with the green status indicator.

![Cluster details page with service URL highlighted](/getting-started/gcp/getting-started_deploy-k8s-cluster_service_url.webp)

## Pending Updates

Cluster profiles can also be updated in place, without the need to create a new cluster profile version. Palette
monitors the state of your clusters and notifies you when updates are available for your host clusters. You may then
choose to apply your changes at a convenient time.

The previous state of the cluster profile will not be saved once it is overwritten.

Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the tag
**service:hello-universe-frontend**. Select it to view its **Overview** tab.

Select the **Profile** tab. Then, select the **hello-universe** pack. Change the `replicas` field to `2` on line `15`.
Click on **Save**. The editor closes.

This cluster now contains an override over its cluster profile. Palette uses the configuration you have just provided
for the single cluster over its cluster profile and begins making the appropriate changes.

Once these changes are complete, select the **Workloads** tab. Then, select the **hello-universe** namespace.

Two **ui** pods are available, instead of the one specified by your cluster profile. Your override has been successfully
applied.

Navigate to the left **Main Menu** and select **Profiles** to view the cluster profile page. Find the cluster profile
corresponding to your _hello-universe-frontend_ cluster, named `gcp-profile`.

Click on it to view its details. Select **1.0.0** in the version dropdown.

Select the **hello-universe** pack. The editor appears. Change the `replicas` field to `3` on line `15`. Click on
**Confirm Updates**. The editor closes.

Click on **Save Changes** to confirm the changes you have made to your profile.

Navigate to the left **Main Menu** and select **Clusters**. Filter for the clusters with the **service** tag. Your
cluster matches this filter. Palette indicates that the cluster associated with the cluster profile you updated has
updates available.

![Image that shows the pending updates](/getting-started/gcp/getting-started_update-k8s-cluster_pending-update-clusters-view.webp)

Select this cluster to open its **Overview** tab. Click on **Updates** to begin the cluster update.

![Image that shows the Updates button](/getting-started/gcp/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp)

A dialog appears which shows the changes made in this update. Review the changes and ensure the only change is the
`replicas` field value. The pending update removes your cluster override and sets the `replicas` field to `3`. At this
point, you can choose to apply the pending changes or keep your cluster override by modifying the right-hand side of
the dialog.

![Image that shows the available updates dialog](/getting-started/gcp/getting-started_update-k8s-cluster_available-updates-dialog.webp)

Click on **Apply Changes** once you have finished reviewing your changes.

Palette updates your cluster according to the cluster profile specifications. Once these changes are complete, select
the **Workloads** tab. Then, select the **hello-universe** namespace.
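If you still have the kubeconfig file from the Kubecost step configured in your terminal, you can also confirm the new
replica count with kubectl.

```shell
kubectl get pods --namespace hello-universe
```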
Three **ui** pods are available. The cluster profile update is now reflected by your cluster.

## Cluster Observability

## Cleanup

Use the following steps to remove all the resources you created for the tutorial.

To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to
delete to access its details page.

Click on **Settings** to expand the menu, and select **Delete Cluster**.

![Delete cluster](/getting-started/gcp/getting-started_deploy-k8s-cluster_delete-cluster-button.webp)

You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name `gcp-cluster` to
proceed with the delete step. The deletion process takes several minutes to complete.

:::info

If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force
delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette
automatically removes clusters stuck in the cluster deletion phase for over 24 hours.

:::

Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you
created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the
selection to remove the cluster profile.

## Wrap-Up

In this tutorial, you deployed cluster profile updates. After the cluster was deployed to GCP, you updated the cluster
profile through three different methods: creating a new cluster profile version, updating the cluster profile in place,
and applying a cluster profile override. After you made your changes, the Hello Universe application functioned as a
three-tier application with a REST API backend server.

Cluster profiles provide consistency during the cluster creation process, as well as when maintaining your clusters.
They can be versioned to keep a record of previously working cluster states, giving you visibility when updating or
rolling back workloads across your environments.

We recommend that you continue to the [Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) page to
learn how you can use Palette with Terraform.

## 🧑‍🚀 Catch up with Spacetastic

diff --git a/docs/docs-content/getting-started/getting-started.md b/docs/docs-content/getting-started/getting-started.md
index 2b3cff4f32..1fdc586690 100644
--- a/docs/docs-content/getting-started/getting-started.md
+++ b/docs/docs-content/getting-started/getting-started.md
@@ -58,40 +58,28 @@ Explore more through the following pages.
relativeURL: "./introduction", }, { - title: "Palette Dashboard", - description: "Tour the Palette Project and Tenant Admin dashboards.", + title: "Deploy a Cluster to Amazon Web Services (AWS)", + description: "Deploy and update a Palette host cluster to AWS.", buttonText: "Learn more", - relativeURL: "./dashboard", + relativeURL: "./aws", }, { - title: "Cluster Profiles", - description: "Learn about Palette Cluster Profiles and Packs.", + title: "Deploy a Cluster to Microsoft Azure", + description: "Deploy and update a Palette host cluster to Azure.", buttonText: "Learn more", - relativeURL: "./cluster-profiles", + relativeURL: "./azure", }, { - title: "Create a Cluster Profile", - description: "Create a full cluster profile in Palette.", + title: "Deploy a Cluster to Google Cloud Platform (GCP)", + description: "Deploy and update a Palette host cluster to Google Cloud.", buttonText: "Learn more", - relativeURL: "./create-cluster-profile", + relativeURL: "./gcp", }, { - title: "Deploy a Cluster", - description: "Deploy a Palette host cluster in AWS, Azure or Google Cloud.", + title: "Deploy a Cluster to VMware", + description: "Deploy and update a Palette host cluster to VMware vSphere.", buttonText: "Learn more", - relativeURL: "./deploy-k8s-cluster", - }, - { - title: "Deploy Cluster Profile Updates", - description: "Update your deployed clusters using Palette Cluster Profiles.", - buttonText: "Learn more", - relativeURL: "./update-k8s-cluster", - }, - { - title: "Deploy a Cluster with Terraform", - description: "Deploy a Palette host cluster with Terraform.", - buttonText: "Learn more", - relativeURL: "./terraform", + relativeURL: "./vmware", }, { title: "Additional Capabilities", diff --git a/docs/docs-content/getting-started/introduction.md b/docs/docs-content/getting-started/introduction.md index 00b5d30f9d..4d5ad8d29b 100644 --- a/docs/docs-content/getting-started/introduction.md +++ b/docs/docs-content/getting-started/introduction.md @@ -9,39 +9,15 @@ tags: ["getting-started"] --- Palette is a complete and integrated platform that enables organizations to effectively manage the entire lifecycle of -any combination of new or existing, simple or complex, small or large Kubernetes environments, whether in a data center -or the cloud. +any combination of new or existing Kubernetes environments, whether in a data center or the cloud. With a unique approach to managing multiple clusters, Palette gives IT teams complete control, visibility, and production-scale efficiencies to provide developers with highly curated Kubernetes stacks and tools based on their specific needs, with granular governance and enterprise-grade security. -Palette VerteX edition is also available to meet the stringent requirements of regulated industries such as government -and public sector organizations. Palette VerteX integrates Spectro Cloud’s Federal Information Processing Standards -(FIPS) 140-2 cryptographic modules. To learn more about FIPS-enabled Palette, check out -[Palette VerteX](../vertex/vertex.md). - ![Palette product high level overview eager-load](/getting-started/getting-started_introduction_product-overview.webp) -## What Makes Palette Different? - -Palette provides benefits to developers and platform engineers who maintain Kubernetes environments. 
- -### Full-Stack Management - -Unlike rigid and prepackaged Kubernetes solutions, Palette allows users to construct flexible stacks from OS, -Kubernetes, container network interfaces (CNI), and container storage interfaces (CSI) to additional add-on application -services. As a result, the entire stack - not just the infrastructure - of Kubernetes is deployed, updated, and managed -as one unit, without split responsibility from virtual machines, base OS, Kubernetes infra, and add-ons. - -### End-to-End Declarative Lifecycle Management - -Palette offers the most comprehensive profile-based management for Kubernetes. It enables teams to drive consistency, -repeatability, and operational efficiency across multiple clusters in multiple environments with comprehensive day 0 - -day 2 management. Check out the [Cluster Profiles](./cluster-profiles.md) page to learn more about how cluster profiles -simplifies cluster deployment and maintenance. - -### Any Environment +## Supported Environments Palette has the richest coverage in supported environments that includes: @@ -50,3 +26,52 @@ Palette has the richest coverage in supported environments that includes: - Data Centers: VMware, Nutanix, and OpenStack - Bare Metal: Canonical MAAS - Edge + +The Getting Started section covers deployment flows for clusters hosted in [AWS](./aws/aws.md), +[Azure](./azure/azure.md), [Google Cloud](./gcp/gcp.md) and [VMware vSphere](./vmware/vmware.md). + +## Cluster Profiles + +Cluster profiles are the declarative, full-stack models that Palette follows when it provisions, scales, and maintains +your clusters. Cluster profiles are composed of layers using packs, Helm charts, Zarf packages, or cluster manifests to +meet specific types of workloads on your Palette cluster deployments. You can create as many profiles as needed for your +workloads. + +Cluster profiles provide you with a repeatable deployment process for all of your development and production +environments. They also give you visibility on the layers, packages and versions present on your deployed clusters. + +Finally, if you want to update or maintain your deployed workloads, cluster profiles give you the flexibility to make +changes to all clusters deployed with the profile by removing, swapping or adding a new layer. Palette will then +reconcile the current state of your workloads with the desired state specified by the profile. + +Below are cluster profile types you can create: + +- _Infrastructure_ profiles provide the essential components for workload cluster deployments within a + [tenant](../glossary-all.md#tenant): Operating System (OS), Kubernetes, Network, and Storage. Collectively, these + layers form the infrastructure for your cluster. + +- _Add-on_ profiles are exclusively composed of add-on layers. They usually do not contain infrastructure components and + are instead designed for reusability across multiple clusters and multiple projects within a tenant. Since they + provide the flexibility to configure clusters based on specific requirements, _add-on_ profiles can be added to + _infrastructure_ profiles to create what we call a _full profile_. + +- _Full profiles_ combine infrastructure packs with add-on layers. By adding layers, you can enhance cluster + functionality. For example, you might add system apps, authentication, monitoring, ingress, load balancers, and more + to your cluster. 
+ +The diagram below illustrates the components of these profile types and how you can build on infrastructure layers with +add-on layers to create a full cluster profile. You can also create separate add-on profiles to reuse among multiple +clusters. + +![A flow diagram that shows how you can add layers to an infrastructure profile to create a full profile.](/getting-started/getting-started_cluster-profiles_cluster-profiles.webp) + +## Packs + +Packs are the smallest component of a cluster profile. Each layer of a cluster profile is made up of a specific pack. +Palette provides packs that are tailored for specific uses to support the core infrastructure a cluster needs. You can +also use add-on packs, or create your own custom pack to extend Kubernetes functionality. + +The diagram below illustrates some of the popular technologies that you can use in your cluster profile layers. Check +out the [Packs List](../integrations/integrations.mdx) page to learn more about individual packs. + +![Diagram of stack grouped as a unit](/getting-started/getting-started_cluster-profiles_stack-grouped-packs.webp) diff --git a/docs/docs-content/getting-started/terraform.md b/docs/docs-content/getting-started/terraform.md deleted file mode 100644 index e8d8515519..0000000000 --- a/docs/docs-content/getting-started/terraform.md +++ /dev/null @@ -1,541 +0,0 @@ ---- -sidebar_label: "Deploy a Cluster with Terraform" -title: "Deploy a Cluster with Terraform" -description: "Learn to deploy a Palette host cluster with Terraform." -icon: "" -hide_table_of_contents: false -sidebar_position: 70 -tags: ["getting-started"] ---- - -The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider -enables you to create and manage Palette resources in a codified manner by leveraging Infrastructure as Code (IaC). Some -notable reasons why you would want to utilize IaC are: - -- The ability to automate infrastructure. - -- Improved collaboration in making infrastructure changes. - -- Self-documentation of infrastructure through code. - -- Allows tracking all infrastructure in a single source of truth. - -If want to become more familiar with Terraform, we recommend you check out the -[Terraform](https://developer.hashicorp.com/terraform/intro) learning resources from HashiCorp. - -This tutorial will teach you how to deploy a host cluster with Terraform using Amazon Web Services (AWS), Microsoft -Azure, or Google Cloud Platform (GCP) cloud providers. You will learn about _Cluster Mode_ and _Cluster Profiles_ and -how these components enable you to deploy customized applications to Kubernetes with minimal effort using the -[Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider. - -## Prerequisites - -To complete this tutorial, you will need the following items - -- Basic knowledge of containers. -- [Docker Desktop](https://www.docker.com/products/docker-desktop/), [Podman](https://podman.io/docs/installation) or - another container management tool. -- Create a Cloud account from one of the following providers. - - - [AWS](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account) - - [Azure](https://learn.microsoft.com/en-us/training/modules/create-an-azure-account) - - [GCP](https://cloud.google.com/docs/get-started) - -- Register the [cloud account with Palette](https://console.spectrocloud.com/auth/signup). Use the following resource - for additional guidance. 
- - - [Register and Manage AWS Accounts](../clusters/public-cloud/aws/add-aws-accounts.md) - - [Register and Manage Azure Cloud Accounts](../clusters/public-cloud/azure/azure-cloud.md) - - [Register and Manage GCP Accounts](../clusters/public-cloud/gcp/add-gcp-accounts.md) - -## Set Up Local Environment - -You can clone the tutorials repository locally or follow along by downloading a Docker image that contains the tutorial -code and all dependencies. - -
- -:::warning - -If you choose to clone the repository instead of using the tutorial container make sure you have Terraform v1.4.0 or -greater installed. - -::: - -
- - - - - -Ensure Docker Desktop on your local machine is available. Use the following command and ensure you receive an output -displaying the version number. - -```bash -docker version -``` - - - - - -Navigate to the tutorial code. - -```shell -cd /terraform/iaas-cluster-deployment-tf -``` - - - - - -If you are not running a Linux operating system, create and start the Podman Machine in your local environment. -Otherwise, skip this step. - -```bash -podman machine init -podman machine start -``` - -Use the following command and ensure you receive an output displaying the installation information. - -```bash -podman info -``` - - - - - -Navigate to the tutorial code. - -```shell -cd /terraform/iaas-cluster-deployment-tf -``` - - - - - -Open a terminal window and download the tutorial code from GitHub. - -```shell -git@github.com:spectrocloud/tutorials.git -``` - -Change the directory to the tutorial folder. - -```shell -cd tutorials/ -``` - - - -Change the directory to the tutorial code. - -```shell -cd terraform/iaas-cluster-deployment-tf/ -``` - - - - - -## Create an API Key - -Before you can get started with the Terraform code, you need a Spectro Cloud API key. - -To create an API key, log in to [Palette](https://console.spectrocloud.com) and click on the user **User Menu** and -select **My API Keys**. - -![Image that points to the user drop-down Menu and points to the API key link](/tutorials/deploy-clusters/clusters_public-cloud_deploy-k8s-cluster_create_api_key.webp) - -Next, click on **Add New API Key**. Fill out the required input field, **API Key Name**, and the **Expiration Date**. -Click on **Confirm** to create the API key. Copy the key value to your clipboard, as you will use it shortly. - -
- -In your terminal session, issue the following command to export the API key as an environment variable. - -
- -```shell -export SPECTROCLOUD_APIKEY=YourAPIKeyHere -``` - -The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider -requires credentials to interact with the Palette API. The Spectro Cloud Terraform provider will use the environment -variable to authenticate with the Spectro Cloud API endpoint. - -## Resources Review - -To help you get started with Terraform, the tutorial code is structured to support deploying a cluster to either Azure, -GCP, or AWS. Before you deploy a host cluster to your target provider, take a few moments to review the following files -in the folder structure. - -- **providers.tf** - This file contains the Terraform providers that are used to support the deployment of the cluster. - -- **inputs.tf** - This file contains all the Terraform variables for the deployment logic. - -- **data.tf** - This file contains all the query resources that perform read actions. - -- **cluster_profiles.tf** - This file contains the cluster profile definitions for each cloud provider. - -- **cluster.tf** - This file has all the required cluster configurations to deploy a host cluster to one of the cloud - providers. - -- **terraform.tfvars** - Use this file to customize the deployment and target a specific cloud provider. This is the - primary file you will modify. - -- **outputs.tf** - This file contains content that will be output in the terminal session upon a successful Terraform - `apply` action. - -The following section allows you to review the core Terraform resources more closely. - -#### Provider - -The **provider.tf** file contains the Terraform providers and their respective versions. The tutorial uses two -providers - the Spectro Cloud Terraform provider and the TLS Terraform provider. Note how the project name is specified -in the `provider "spectrocloud" {}` block. You can change the target project by changing the value specified in the -`project_name` parameter. - -```hcl -terraform { - required_providers { - spectrocloud = { - version = ">= 0.13.1" - source = "spectrocloud/spectrocloud" - } - tls = { - source = "hashicorp/tls" - version = "4.0.4" - } - } -} - -provider "spectrocloud" { - project_name = "Default" -} -``` - -The next file you should become familiar with is the **cluster-profiles.tf** file. - -The Spectro Cloud Terraform provider has several resources available for use. When creating a cluster profile, use -`spectrocloud_cluster_profile`. This resource can be used to customize all layers of a cluster profile. You can specify -all the different packs and versions to use and add a manifest or Helm chart. - -In the **cluster-profiles.tf** file, the cluster profile resource is declared three times. Each instance of the resource -is for a specific cloud provider. Using the AWS cluster profile as an example, note how the **cluster-profiles.tf** file -uses `pack {}` blocks to specify each layer of the profile. The order in which you arrange contents of the `pack {}` -blocks plays an important role, as each layer maps to the core infrastructure in a cluster profile. - -The first listed `pack {}` block must be the OS, followed by Kubernetes, the container network interface, and the -container storage interface. The first `pack {}` block in the list equates to the bottom layer of the cluster profile. -Ensure you define the bottom layer of the cluster profile - the OS layer - first in the list of `pack {}` blocks. 
- -```hcl -resource "spectrocloud_cluster_profile" "aws-profile" { - name = "tf-aws-profile" - description = "A basic cluster profile for AWS" - tags = concat(var.tags, ["env:aws"]) - cloud = "aws" - type = "cluster" - - pack { - name = data.spectrocloud_pack.aws_ubuntu.name - tag = data.spectrocloud_pack.aws_ubuntu.version - uid = data.spectrocloud_pack.aws_ubuntu.id - values = data.spectrocloud_pack.aws_ubuntu.values - } - - pack { - name = data.spectrocloud_pack.aws_k8s.name - tag = data.spectrocloud_pack.aws_k8s.version - uid = data.spectrocloud_pack.aws_k8s.id - values = data.spectrocloud_pack.aws_k8s.values - } - - pack { - name = data.spectrocloud_pack.aws_cni.name - tag = data.spectrocloud_pack.aws_cni.version - uid = data.spectrocloud_pack.aws_cni.id - values = data.spectrocloud_pack.aws_cni.values - } - - pack { - name = data.spectrocloud_pack.aws_csi.name - tag = data.spectrocloud_pack.aws_csi.version - uid = data.spectrocloud_pack.aws_csi.id - values = data.spectrocloud_pack.aws_csi.values - } - - pack { - name = "hello-universe" - type = "manifest" - tag = "1.0.0" - values = "" - manifest { - name = "hello-universe" - content = file("manifests/hello-universe.yaml") - } - } -} -``` - -The last `pack {}` block contains a manifest file with all the Kubernetes configurations for the -[Hello Universe](https://github.com/spectrocloud/hello-universe) application. Including the application in the profile -ensures the application is installed during cluster deployment. If you wonder what all the data resources are for, head -to the next section to review them. - -You may have noticed that each `pack {}` block contains references to a data resource. - -```hcl - pack { - name = data.spectrocloud_pack.aws_csi.name - tag = data.spectrocloud_pack.aws_csi.version - uid = data.spectrocloud_pack.aws_csi.id - values = data.spectrocloud_pack.aws_csi.values - } -``` - -[Data resources](https://developer.hashicorp.com/terraform/language/data-sources) are used to perform read actions in -Terraform. The Spectro Cloud Terraform provider exposes several data resources to help you make your Terraform code more -dynamic. The data resource used in the cluster profile is `spectrocloud_pack`. This resource enables you to query -Palette for information about a specific pack. You can get information about the pack using the data resource such as -unique ID, registry ID, available versions, and the pack's YAML values. - -Below is the data resource used to query Palette for information about the Kubernetes pack for version `1.27.5`. - -```hcl -data "spectrocloud_pack" "aws_k8s" { - name = "kubernetes" - version = "1.27.5" -} -``` - -Using the data resource, you avoid manually typing in the parameter values required by the cluster profile's `pack {}` -block. - -The **clusters.tf** file contains the definitions for deploying a host cluster to one of the cloud providers. To create -a host cluster, you must use a cluster resource for the cloud provider you are targeting. - -In this tutorial, the following Terraform cluster resources are used. 
- -| Terraform Resource | Platform | -| ------------------------------------------------------------------------------------------------------------------------------------- | -------- | -| [`spectrocloud_cluster_aws`](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_aws) | AWS | -| [`spectrocloud_cluster_azure`](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_azure) | Azure | -| [`spectrocloud_cluster_gcp`](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_gcp) | GCP | - -Using the `spectrocloud_cluster_azure` resource in this tutorial as an example, note how the resource accepts a set of -parameters. When deploying a cluster, you can change the same parameters in the Palette user interface (UI). You can -learn more about each parameter by reviewing the resource documentation page hosted in the Terraform registry. - -```hcl -resource "spectrocloud_cluster_azure" "cluster" { - name = "azure-cluster" - tags = concat(var.tags, ["env:azure"]) - cloud_account_id = data.spectrocloud_cloudaccount_azure.account[0].id - - cloud_config { - subscription_id = var.azure_subscription_id - resource_group = var.azure_resource_group - region = var.azure-region - ssh_key = tls_private_key.tutorial_ssh_key[0].public_key_openssh - } - - cluster_profile { - id = spectrocloud_cluster_profile.azure-profile[0].id - } - - machine_pool { - control_plane = true - control_plane_as_worker = true - name = "control-plane-pool" - count = var.azure_control_plane_nodes.count - instance_type = var.azure_control_plane_nodes.instance_type - azs = var.azure_control_plane_nodes.azs - is_system_node_pool = var.azure_control_plane_nodes.is_system_node_pool - disk { - size_gb = var.azure_control_plane_nodes.disk_size_gb - type = "Standard_LRS" - } - } - - machine_pool { - name = "worker-basic" - count = var.azure_worker_nodes.count - instance_type = var.azure_worker_nodes.instance_type - azs = var.azure_worker_nodes.azs - is_system_node_pool = var.azure_worker_nodes.is_system_node_pool - } - - timeouts { - create = "30m" - delete = "15m" - } -} -``` - -To deploy a cluster using Terraform, you must first modify the **terraform.tfvars** file. Open the **terraform.tfvars** -file in the editor of your choice, and locate the cloud provider you will use to deploy a host cluster. - -To simplify the process, we added a toggle variable in the Terraform template, that you can use to select the deployment -environment. Each cloud provider has a section in the template that contains all the variables you must populate. -Variables to populate are identified with `REPLACE_ME`. - -In the example AWS section below, you would change `deploy-aws = false` to `deploy-aws = true` to deploy to AWS. -Additionally, you would replace all the variables with a value `REPLACE_ME`. You can also update the values for nodes in -the control plane pool or worker pool. 
- -```hcl -########################### -# AWS Deployment Settings -############################ -deploy-aws = false # Set to true to deploy to AWS - -aws-cloud-account-name = "REPLACE_ME" -aws-region = "REPLACE_ME" -aws-key-pair-name = "REPLACE_ME" - -aws_control_plane_nodes = { - count = "1" - control_plane = true - instance_type = "m4.2xlarge" - disk_size_gb = "60" - availability_zones = ["REPLACE_ME"] # If you want to deploy to multiple AZs, add them here -} - -aws_worker_nodes = { - count = "1" - control_plane = false - instance_type = "m4.2xlarge" - disk_size_gb = "60" - availability_zones = ["REPLACE_ME"] # If you want to deploy to multiple AZs, add them here -} -``` - -When you are done making the required changes, issue the following command to initialize Terraform. - -```shell -terraform init -``` - -Next, issue the `plan` command to preview the changes. - -```shell -terraform plan -``` - -Output: - -```shell -Plan: 2 to add, 0 to change, 0 to destroy. -``` - -If you change the desired cloud provider's toggle variable to `true,` you will receive an output message that two new -resources will be created. The two resources are your cluster profile and the host cluster. - -To deploy all the resources, use the `apply` command. - -```shell -terraform apply -auto-approve -``` - -To check out the cluster profile creation in Palette, log in to [Palette](https://console.spectrocloud.com), and from -the left **Main Menu** click on **Profiles**. Locate the cluster profile with the name pattern -`tf-[cloud provier]-profile`. Click on the cluster profile to review its details, such as layers, packs, and versions. - -![A view of the cluster profile](/getting-started/aws/getting-started_deploy-k8s-cluster_profile_cluster_profile_review.webp) - -You can also check the cluster creation process by navigating to the left **Main Menu** and selecting **Clusters**. - -![Update the cluster](/getting-started/aws/getting-started_deploy-k8s-cluster_create_cluster.webp) - -Select your cluster to review its details page, which contains the status, cluster profile, event logs, and more. - -The cluster deployment may take several minutes depending on the cloud provider, node count, node sizes used, and the -cluster profile. You can learn more about the deployment progress by reviewing the event log. Click on the **Events** -tab to check the event log. - -![Update the cluster](/getting-started/getting-started_deploy-k8s-cluster_event_log.webp) - -## Verify the Application - -When the cluster deploys, you can access the Hello Universe application. From the cluster's **Overview** page, click on -the URL for port **:8080** next to the **hello-universe-service** in the **Services** row. This URL will take you to the -application landing page. - -:::warning - -It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few -moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request. - -::: - -![Deployed application](/getting-started/getting-started_deploy-k8s-cluster_hello-universe-without-api.webp) - -Welcome to Hello Universe, a demo application to help you learn more about Palette and its features. Feel free to click -on the logo to increase the counter and for a fun image change. - -You have deployed your first application to a cluster managed by Palette through Terraform. Your first application is a -single container application with no upstream dependencies. 
- -## Cleanup - -Use the following steps to clean up the resources you created for the tutorial. Use the `destroy` command to remove all -the resources you created through Terraform. - -```shell -terraform destroy --auto-approve -``` - -Output: - -```shell -Destroy complete! Resources: 2 destroyed. -``` - -:::info - -If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for force delete. To trigger a force -delete, navigate to the cluster’s details page and click on **Settings**. Click on **Force Delete Cluster** to delete -the cluster. Palette automatically removes clusters stuck in the cluster deletion phase for over 24 hours. - -::: - -If you are using the tutorial container and want to exit the container, type `exit` in your terminal session and press -the **Enter** key. Next, issue the following command to stop the container. - - - - - - - - - - - - - - - - - -## Wrap-Up - -In this tutorial, you created a cluster profile, which is a template that contains the core layers required to deploy a -host cluster. You then deployed a host cluster onto your preferred cloud service provider using Terraform. - -We encourage you to check out the -[Deploy an Application using Palette Dev Engine](../tutorials/cluster-deployment/pde/deploy-app.md) tutorial to learn -more about Palette. Palette Dev Engine can help you deploy applications more quickly through the usage of -[virtual clusters](../glossary-all.md#palette-virtual-cluster). Feel free to check out the reference links below to -learn more about Palette. - -- [Palette Modes](../introduction/palette-modes.md) - -- [Palette Clusters](../clusters/clusters.md) - -- [Hello Universe GitHub repository](https://github.com/spectrocloud/hello-universe) diff --git a/docs/docs-content/getting-started/update-k8s-cluster.md b/docs/docs-content/getting-started/update-k8s-cluster.md deleted file mode 100644 index 049b85f25f..0000000000 --- a/docs/docs-content/getting-started/update-k8s-cluster.md +++ /dev/null @@ -1,462 +0,0 @@ ---- -sidebar_label: "Deploy Cluster Profile Updates" -title: "Deploy Cluster Profile Updates" -description: "Learn how to update your deployed clusters using Palette Cluster Profiles." -icon: "" -hide_table_of_contents: false -sidebar_position: 60 -tags: ["getting-started"] ---- - -Palette provides cluster profiles, which allow you to specify layers for your workloads using packs, Helm charts, Zarf -packages, or cluster manifests. Packs serve as blueprints to the provisioning and deployment process, as they contain -the versions of the container images that Palette will install for you. Cluster profiles provide consistency across -environments during the cluster creation process, as well as when maintaining your clusters. Check out the -[cluster profiles](./cluster-profiles.md) page to learn more. Once provisioned, there are three main ways to update your -Palette deployments. - -| Method | Description | Cluster application process | -| ------------------------ | ---------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Cluster profile versions | Create a new version of the cluster profile with your updates. | Select the new version of the cluster profile. Apply this new profile version to the clusters you want to update. 
| -| Cluster profile updates | Change the cluster profile in place. | Palette detects the difference between the provisioned resources and this profile. A pending update is available to clusters using this profile. Apply pending updates to the clusters you want to update. | -| Cluster overrides | Change the configuration of a single deployed cluster outside its cluster profile. | Save and apply the changes you've made to your cluster. | - -This tutorial will teach you how to update a cluster deployed with Palette to Amazon Web Services (AWS), Microsoft -Azure, or Google Cloud Platform (GCP) cloud providers. You will explore each cluster update method and learn how to -apply these changes using Palette. - -## Prerequisites - -This tutorial builds upon the resources and steps outlined in the [Deploy a Cluster](./deploy-k8s-cluster.md) tutorial -for creating initial clusters. To complete it, you will need the following items. - -- A public cloud account from one of these providers: - - - [AWS](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account) - - [Azure](https://learn.microsoft.com/en-us/training/modules/create-an-azure-account) - - [GCP](https://cloud.google.com/docs/get-started) - -- Register the [cloud account with Palette](https://console.spectrocloud.com/auth/signup). Use the following resource - for additional guidance. - - - [Register and Manage AWS Accounts](../clusters/public-cloud/aws/add-aws-accounts.md) - - [Register and Manage Azure Cloud Accounts](../clusters/public-cloud/azure/azure-cloud.md) - - [Register and Manage GCP Accounts](../clusters/public-cloud/gcp/add-gcp-accounts.md) - -- An SSH Key Pair. Use the [Create and Upload an SSH Key](../clusters/cluster-management/ssh-keys.md) guide to learn how - to create an SSH key and upload it to Palette. - - - AWS users must create an AWS Key pair before starting the tutorial. If you need additional guidance, check out the - [Create EC2 SSH Key Pair](https://docs.aws.amazon.com/ground-station/latest/ug/create-ec2-ssh-key-pair.html) - tutorial. - -## Set Up Clusters - -Follow the instructions of the [Deploy a Cluster](./deploy-k8s-cluster.md) tutorial to create a cluster profile and -cluster with the [_hello-universe_](https://github.com/spectrocloud/hello-universe) application. Your cluster should be -successfully provisioned and in a healthy state in the cloud of your choosing. - -The cluster profile name follows the pattern `[cloud provider]-profile`. The cluster name follows the pattern -`[cloud provider]-cluster`. This tutorial uses Azure for illustration purposes. - -Navigate to the left **Main Menu** and select **Profiles** to view the cluster profile page. Find the cluster profile -corresponding to your cluster in the list of profiles. Click on the **three-dot Menu** and select **Clone**. - -A dialog appears to confirm the details of the cloned cluster profile. Fill in the **Name** input using the pattern -`[cloud provider]-profile-api`. Click on **Confirm** to create the profile. - -The list of cluster profiles appears. Select the cloned cluster profile to view its details. - -Select the **hello-universe** manifest. The editor appears. In the manifest editor, replace the existing code with the -following content. 
- -```yaml -apiVersion: v1 -kind: Namespace -metadata: - name: hello-universe-api ---- -apiVersion: v1 -kind: Service -metadata: - name: hello-universe-api-service - namespace: hello-universe-api -spec: - type: LoadBalancer - ports: - - protocol: TCP - port: 3000 - targetPort: 3000 - selector: - app: hello-universe-api ---- -apiVersion: v1 -kind: Service -metadata: - name: hello-universe-db-service - namespace: hello-universe-api -spec: - type: ClusterIP - ports: - - protocol: TCP - port: 5432 - targetPort: 5432 - selector: - app: hello-universe-db ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: hello-universe-api-deployment - namespace: hello-universe-api -spec: - replicas: 1 - selector: - matchLabels: - app: hello-universe-api - template: - metadata: - labels: - app: hello-universe-api - spec: - containers: - - name: hello-universe-api - image: ghcr.io/spectrocloud/hello-universe-api:1.0.9 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 3000 - env: - - name: DB_HOST - value: "hello-universe-db-service.hello-universe-api.svc.cluster.local" ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: hello-universe-db-deployment - namespace: hello-universe-api -spec: - replicas: 1 - selector: - matchLabels: - app: hello-universe-db - template: - metadata: - labels: - app: hello-universe-db - spec: - containers: - - name: hello-universe-db - image: ghcr.io/spectrocloud/hello-universe-db:1.0.0 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 5432 -``` - -The code snippet you added deploys the [_hello-universe-api_](https://github.com/spectrocloud/hello-universe-api) and -[_hello-universe-db_](https://github.com/spectrocloud/hello-universe-db) applications. These applications serve as the -API server and database for the [_hello-universe_](https://github.com/spectrocloud/hello-universe) application. - -Click on **Confirm Updates** and close the editor. - -Click on **Save Changes** to confirm your updates. - -Deploy this cluster profile to a new cluster using the same steps outlined in the -[Deploy a Cluster](./deploy-k8s-cluster.md) tutorial. - -Once you have completed these steps and the host cluster creation process has finished, navigate to the left **Main -Menu** and select **Clusters** to view your deployed clusters. You should have two healthy clusters. - -![Image that shows the two clusters in the clusters list](/getting-started/getting-started_update-k8s-cluster_deployed-clusters-start-setup.webp) - -## Tag and Filter Clusters - -Palette provides the ability to add tags to your cluster profiles and clusters. This helps you organize and categorize -your clusters based on your custom criteria. You can add tags during the creation process or by editing the resource -after it has been created. - -Adding tags to your clusters helps you find and identify your clusters, without having to rely on cluster naming. This -is especially important when operating with many clusters or multiple cloud deployments. - -Navigate to the left **Main Menu** and select **Clusters** to view your deployed clusters. Find the -`[cloud provider]-cluster` you deployed with the _hello-universe_ application. Click on it to view its **Overview** tab. - -Click on the **Settings** drop-down Menu in the upper right corner and select **Cluster Settings**. - -Fill **service:hello-universe-frontend** in the **Tags (Optional)** input box. Click on **Save Changes**. Close the -panel. 
- -![Image that shows how to add a cluster tag](/getting-started/getting-started_update-k8s-cluster_add-service-tag.webp) - -Repeat the steps above for the `[cloud provider]-cluster-api` cluster you deployed with the _hello-universe-api_. Add -the **service:hello-universe-backend** tag to it. - -Navigate to the left **Main Menu** and select **Clusters** to view your deployed clusters. Click on **Add Filter**, then -select the **Add custom filter** option. - -Use the drop-down boxes to fill in the values of the filter. Select **Tags** in the left-hand **drop-down Menu**. Select -**is** in the middle **drop-down Menu**. Fill in **service:hello-universe-frontend** in the right-hand input box. - -Click on **Apply Filter**. - -![Image that shows how to add a frontend service filter](/getting-started/getting-started_update-k8s-cluster_apply-frontend-filter.webp) - -Once you apply the filter, only the `[cloud provider]-cluster` with this tag is displayed. - -## Version Cluster Profiles - -Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with -better change visibility and control over the layers in your host clusters. Profile versions are commonly used for -adding or removing layers and pack configuration updates. - -The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. If you -do not specify a version for your cluster profile, it defaults to **1.0.0**. - -Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the -**service:hello-universe-backend** tag. You can review how to filter your clusters in the -[Tag and Filter Clusters](#tag-and-filter-clusters) section. - -Select cluster to open its **Overview** tab. Make a note of the IP address of the **hello-universe-api-service** present -in this cluster. You can find it by opening the **:3000** URL. - -Navigate to the left **Main Menu** and select **Profiles** to view the cluster profile page. Find the cluster profile -corresponding to your _hello-universe-frontend_ cluster. It should be named using the pattern -`[cloud provider]-profile`. Select it to view its details. - -![Image that shows the frontend cluster profile with cluster linked to it](/getting-started/getting-started_update-k8s-cluster_profile-with-cluster.webp) - -The current version is displayed in the **drop-down Menu** next to the profile name. This profile has the default value -of **1.0.0**, as you did not specify another value when you created it. The cluster profile also shows the host clusters -that are currently deployed with this cluster profile version. - -Click on the version **drop-down Menu**. Select the **Create new version** option. - -A dialog box appears. Fill in the **Version** input with **1.1.0**. Click on **Confirm**. - -Palette creates a new cluster profile version and opens it. The version dropdown displays the newly created **1.1.0** -profile. This profile version is not deployed to any host clusters. - -![Image that shows cluster profile version 1.1.0](/getting-started/getting-started_update-k8s-cluster_new-version-overview.webp) - -The version **1.1.0** has the same layers as the version **1.0.0** it was created from. Click on the **hello-universe** -manifest layer. The manifest editor appears. - -Replace the code in the editor with the following content. 
- -```yaml {41,42,43} -apiVersion: v1 -kind: Namespace -metadata: - name: hello-universe ---- -apiVersion: v1 -kind: Service -metadata: - name: hello-universe-service - namespace: hello-universe -spec: - type: LoadBalancer - ports: - - protocol: TCP - port: 8080 - targetPort: 8080 - selector: - app: hello-universe ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: hello-universe-deployment - namespace: hello-universe -spec: - replicas: 2 - selector: - matchLabels: - app: hello-universe - template: - metadata: - labels: - app: hello-universe - spec: - containers: - - name: hello-universe - image: ghcr.io/spectrocloud/hello-universe:1.1.0 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 8080 - env: - - name: API_URI - value: "http://REPLACE_ME:3000" -``` - -The code snippet you added deploys the [_hello-universe_](https://github.com/spectrocloud/hello-universe) application -with the extra environment variable `API_URI`. This environment variable allows you to specify a hostname and port for -the _hello-universe_ API server. Check out the -[_hello-universe_ readme](https://github.com/spectrocloud/hello-universe?tab=readme-ov-file#connecting-to-api-server) to -learn more about how to expand the capabilities of the _hello-universe_ application with an API Server. - -Replace the _REPLACE_ME_ placeholder in the code snippet provided with the IP address of the -_hello-universe-api-service_ that you made a note of earlier. - -Click on **Confirm Updates**. The manifest editor closes. - -Click on **Save Changes** to finish the configuration of this cluster profile version. - -Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the -**service:hello-universe-frontend** tag. Select it to view its **Overview** tab. - -Select the **Profile** tab of this cluster. You can select a new version of your cluster profile by using the version -dropdown. - -Select the **1.1.0** version. - -![Image that shows how to select a new profile version for the cluster](/tutorials/deploy-cluster-profile-updates/clusters_cluster-management_deploy-cluster-profile-updates_profile-version-selection.webp) - -Click **Review & Save**. Palette prompts you to preview the change summary. - -Click **Review changes in Editor**. Palette displays the changes, with the current configuration on the left and the -incoming configuration on the right. - -Click **Apply Changes**. - -![Palette Editor that displays changes coming from the profile version update.](/getting-started/getting-started_update-k8s-cluster_editor-changes.webp) - -:::warning - -Palette has backup and restore capabilities available for your mission critical workloads. Ensure that you have adequate -backups before you make any cluster profile version changes in your production environments. You can learn more in the -[Backup and Restore](../clusters/cluster-management/backup-restore/backup-restore.md) section. - -::: - -Palette now makes the required changes to your cluster according to the specifications of the configured cluster profile -version. Once your changes have completed, Palette marks your layers with the green status indicator. - -![Image that shows completed cluster profile updates](/getting-started/getting-started_update-k8s-cluster_completed-cluster-updates.webp) - -Click on the URL for port **:8080** to access the Hello Universe application. The landing page of the application -indicates that it is connected to the API server. 
- -![Image that shows hello-universe with API server](/getting-started/getting-started_update-k8s-cluster_hello-universe-with-api.webp) - -## Roll Back Cluster Profiles - -One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of -previously known working states. The ability to roll back to a previously working cluster profile in one action shortens -the time to recovery in the event of an incident. - -The process to roll back to a previous version is identical to the process for applying a new version. - -Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the -**service:hello-universe-frontend** tag. Select it to view its **Overview** tab. - -Select the **Profile** tab. This cluster is currently deployed using cluster profile version **1.1.0**. Select the -option **1.0.0** in the version dropdown. This process is the reverse of what you have done in the previous section, -[Version Cluster Profiles](#version-cluster-profiles). Click on **Save** to confirm your changes. - -Palette now makes the changes required for the cluster to return to the state specified in version **1.0.0** of your -cluster profile. Once your changes have completed, Palette marks your layers with the green status indicator. - -Click on the URL for port **:8080** to access the Hello Universe application. The landing page of the application -indicates that the application has returned to its original state and is no longer connected to the API server. - -## Pending Updates - -Cluster profiles can also be updated in place, without the need to create a new cluster profile version. Palette -monitors the state of your clusters and notifies you when updates are available for your host clusters. You may then -choose to apply your changes at a convenient time. - -The previous state of the cluster profile will not be saved once it is overwritten. - -Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the tag -**service:hello-universe-frontend**. Select it to view its **Overview** tab. - -Select the **Profile** tab. Then, select the **hello-universe** manifest. Change the `replicas` field to `1` on line -`26`. Click on **Save**. The editor closes. - -This cluster now contains an override over its cluster profile. Palette uses the configuration you have just provided -for the single cluster over its cluster profile and begins making the appropriate changes. - -Once these changes are complete, select the **Workloads** tab. Then, select the **hello-universe** namespace. - -One replica of the **hello-universe-deployment** is available, instead of the two specified by your cluster profile. -Your override has been successfully applied. - -Navigate to the left **Main Menu** and select **Profiles** to view the cluster profile page. Find the cluster profile -corresponding to your _hello-universe-frontend_ cluster. Its name follows the pattern `[cloud provider]-profile`. - -Click on it to view its details. Select **1.0.0** in the version dropdown. - -Select the **hello-universe** manifest. The editor appears. Change the `replicas` field to `3` on line `26`. Click on -**Confirm Updates**. The editor closes. - -Click on **Save Changes** to confirm the changes you have made to your profile. - -Navigate to the left **Main Menu** and select **Clusters**. Filter for the clusters with the **service** tag. Both of -your clusters match this filter. 
Palette indicates that the cluster associated with the cluster profile you updated has -updates available. - -![Image that shows the pending updates ](/getting-started/getting-started_update-k8s-cluster_pending-update-clusters-view.webp) - -Select this cluster to open its **Overview** tab. Click on **Updates** to begin the cluster update. - -![Image that shows the Updates button](/getting-started/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp) - -A dialog appears which shows the changes made in this update. Click on **Review changes in Editor**. As previously, -Palette displays the changes, with the current configuration on the left and the incoming configuration on the right. - -Review the changes and ensure the only change is the `replicas` field value. You can choose to maintain your cluster -override or apply the incoming cluster profile update. - -![Image that shows the available updates dialog ](/getting-started/getting-started_update-k8s-cluster_available-updates-dialog.webp) - -Click on **Apply Changes** once you have finished reviewing your changes. This removes your cluster override. - -Palette updates your cluster according to cluster profile specifications. Once these changes are complete, select the -**Workloads** tab. Then, select the **hello-universe** namespace. - -Three replicas of the **hello-universe-deployment** are available. The cluster profile update is now reflected by your -cluster. - -## Cleanup - -Use the following steps to remove all the resources you created for the tutorial. - -To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to -delete to access its details page. - -Click on **Settings** to expand the menu, and select **Delete Cluster**. - -![Delete cluster](/getting-started/getting-started_deploy-k8s-cluster_delete-cluster-button.webp) - -You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name to proceed with -the delete step. The deletion process takes several minutes to complete. - -Repeat the same steps for the other cluster. - -:::info - -If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force -delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette -automatically removes clusters stuck in the cluster deletion phase for over 24 hours. - -::: - -Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you -created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the -selection to remove the cluster profile. - -Repeat the same steps to the delete the cluster profile named with the pattern `[cloud provider]-profile-api`. - -## Wrap-Up - -In this tutorial, you created two clusters and cluster profiles. After the clusters deployed to your chosen cloud -provider, you updated one cluster profile in through three different methods: create a new cluster profile version, -update a cluster profile in place, and cluster profile overrides. After you made your changes, the Hello Universe -application functioned as a three-tier application with a REST API backend server. - -Cluster profiles provide consistency during the cluster creation process, as well as when maintaining your clusters. 
-They can be versioned to keep a record of previously working cluster states, giving you visibility when updating or -rolling back workloads across your environments. - -We recommend that you continue to the [Terraform Support](./terraform.md) page to learn about how you can use Palette -with Terraform. diff --git a/docs/docs-content/getting-started/vmware/_category_.json b/docs/docs-content/getting-started/vmware/_category_.json new file mode 100644 index 0000000000..0b49ba8465 --- /dev/null +++ b/docs/docs-content/getting-started/vmware/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 70 +} diff --git a/docs/docs-content/getting-started/vmware/create-cluster-profile.md b/docs/docs-content/getting-started/vmware/create-cluster-profile.md new file mode 100644 index 0000000000..40e5530f02 --- /dev/null +++ b/docs/docs-content/getting-started/vmware/create-cluster-profile.md @@ -0,0 +1,147 @@ +--- +sidebar_label: "Create a Cluster Profile" +title: "Create a Cluster Profile" +description: "Learn to create a full cluster profile in Palette." +icon: "" +hide_table_of_contents: false +sidebar_position: 30 +tags: ["getting-started", "vmware"] +--- + +Palette offers profile-based management for Kubernetes, enabling consistency, repeatability, and operational efficiency +across multiple clusters. A cluster profile allows you to customize the cluster infrastructure stack, allowing you to +choose the desired Operating System (OS), Kubernetes, Container Network Interfaces (CNI), Container Storage Interfaces +(CSI). You can further customize the stack with add-on application layers. For more information about cluster profile +types, refer to [Cluster Profiles](../introduction.md#cluster-profiles). + +In this tutorial, you create a full profile directly from the Palette dashboard. Then, you add a layer to your cluster +profile by using a [community pack](../../integrations/community_packs.md) to deploy a web application. The concepts you +learn about in the Getting Started section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +- Follow the steps described in the [Set up Palette with VMware](./setup.md) guide to authenticate Palette for use with + your VMware user account. +- Refer to the [Deploy a PCG with Palette CLI](./deploy-pcg.md) guide to create the required infrastructure that enables + communication with your cluster. +- Ensure that the [Palette Community Registry](../../registries-and-packs/registries/registries.md#default-registries) + is available in your Palette environment. Refer to the + [Add OCI Packs Registry](../../registries-and-packs/registries/oci-registry/add-oci-packs.md) guide for additional + guidance. + +## Create a Full Cluster Profile + +Log in to [Palette](https://console.spectrocloud.com) and navigate to the left **Main Menu**. Select **Profiles** to +view the cluster profile page. You can view the list of available cluster profiles. To create a cluster profile, click +on **Add Cluster Profile**. + +Follow the wizard to create a new profile. + +In the **Basic Information** section, assign the name **vmware-profile**, provide a profile description, select the type +as **Full**, and assign the tag **env:vmware**. You can leave the version empty if you want to. Just be aware that the +version defaults to **1.0.0**. Click on **Next**. + +Cloud Type allows you to choose the infrastructure provider with which this cluster profile is associated. Select +**VMware** and click on **Next**. 
+ +The **Profile Layers** step is where you specify the packs that compose the profile. There are four required +infrastructure packs and several optional add-on packs you can choose from. Every pack requires you to select the **Pack +Type**, **Registry**, and **Pack Name**. + +For this tutorial, use the following packs: + +| Pack Name | Version | Layer | +| --------------- | ------- | ---------------- | +| ubuntu-vsphere | 22.4.x | Operating System | +| kubernetes | 1.28.x | Kubernetes | +| cni-calico | 3.27.x | Network | +| csi-vsphere-csi | 3.1.x | Storage | + +As you fill out the information for each layer, click on **Next** to proceed to the next layer. + +Click on **Confirm** after you have completed filling out all the core layers. + +![VMware core layers](/getting-started/vmware/getting-started_create-cluster-profile_cluster-profile-core-stack.webp) + +The review section gives an overview of the cluster profile configuration you selected. Click on **Finish +Configuration** to create the cluster profile. + +## Add Packs + +Navigate to the left **Main Menu** and select **Profiles**. Select the cluster profile you created earlier. + +Click on **Add New Pack** at the top of the page. + + + +Add the **MetalLB (Helm)** pack to your profile. The pack provides a +load-balancer implementation for your Kubernetes cluster, as VMware does not offer a load balancer solution natively. +The load balancer is required to help the _LoadBalancer_ service specified in the Hello Universe application manifest +obtain an IP address, so that you can access the application from your browser. + + + +| Pack Name | Version | Layer | +| --------------- | ------- | ------------- | +| lb-metallb-helm | 0.14.x | Load Balancer | + +Now, under **Pack Details**, click on **Values** and replace the predefined `192.168.10.0/24` IP CIDR listed below the +**addresses** line with a valid IP address or IP range from your VMware environment to be assigned to your load +balancer. Next, click **Confirm & Create** to add the MetalLB pack. + +![Metallb Helm-based pack.](/getting-started/vmware/getting-started_create-cluster-profile_metallb-pack.webp) + +Click on **Confirm & Create** to save the pack. + +Click on **Add New Pack** at the top of the page. + +Select the **Palette Community Registry** from the **Registry** dropdown. Then, click on the latest **Hello Universe** +pack with version **v1.2.0**. + +![Screenshot of hello universe pack](/getting-started/vmware/getting-started_create-cluster-profile_add-pack.webp) + +Once you have selected the pack, Palette will display its README, which provides you with additional guidance for usage +and configuration options. The pack you added will deploy the +[_hello-universe_](https://github.com/spectrocloud/hello-universe) application. + +![Screenshot of pack readme](/getting-started/vmware/getting-started_create-cluster-profile_pack-readme.webp) + +Click on **Values** to edit the pack manifest. Click on **Presets** on the right-hand side. + +This pack has two configured presets: + +1. **Disable Hello Universe API** configures the [_hello-universe_](https://github.com/spectrocloud/hello-universe) + application as a standalone frontend application. This is the default preset selection. +2. **Enable Hello Universe API** configures the [_hello-universe_](https://github.com/spectrocloud/hello-universe) + application as a three-tier application with a frontend, API server, and Postgres database. + +Select the **Enable Hello Universe API** preset. 
The pack manifest changes according to this preset. + +![Screenshot of pack presets](/getting-started/vmware/getting-started_create-cluster-profile_pack-presets.webp) + +The pack requires two values to be replaced for the authorization token and for the database password when using this +preset. Replace these values with your own base64 encoded values. The +[_hello-universe_](https://github.com/spectrocloud/hello-universe?tab=readme-ov-file#single-load-balancer) repository +provides a token that you can use. + +Click on **Confirm Updates**. The manifest editor closes. + +Click on **Confirm & Create** to save the manifest. Then, click on **Save Changes** to save this new layer to the +cluster profile. + +## Wrap-Up + +In this tutorial, you created a cluster profile, which is a template that contains the core layers required to deploy a +host cluster using VMware vSphere. Then, you added a load balancer and a community pack to your profile to deploy a +custom workload. + +We recommend that you continue to the [Deploy a Cluster](./deploy-k8s-cluster.md) tutorial to deploy this cluster +profile to a host cluster onto VMware. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/vmware/deploy-k8s-cluster.md b/docs/docs-content/getting-started/vmware/deploy-k8s-cluster.md new file mode 100644 index 0000000000..297e66a71f --- /dev/null +++ b/docs/docs-content/getting-started/vmware/deploy-k8s-cluster.md @@ -0,0 +1,187 @@ +--- +sidebar_label: "Deploy a Cluster" +title: "Deploy a Cluster" +description: "Learn to deploy a Palette host cluster." +icon: "" +hide_table_of_contents: false +sidebar_position: 30 +tags: ["getting-started", "vmware"] +--- + +This tutorial will teach you how to deploy a host cluster with Palette using VMware vSphere and a Private Cloud Gateway +(PCG). You will learn about _Cluster Mode_ and _Cluster Profiles_ and how these components enable you to deploy +customized applications to Kubernetes with minimal effort. + +As you navigate the tutorial, refer to this diagram to help you understand how Palette uses a cluster profile as a +blueprint for the host cluster you deploy. Palette clusters have the same node pools you may be familiar with: _control +plane nodes_ and _worker nodes_ where you will deploy applications. The result is a host cluster that Palette manages. +The concepts you learn about in the Getting Started section are centered around a fictional case study company, +Spacetastic Ltd. + +![A view of Palette managing the Kubernetes lifecycle](/getting-started/getting-started_deploy-k8s-cluster_application.webp) + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, you will need the following. + +- Follow the steps described in the [Set up Palette with VMware](./setup.md) guide to authenticate Palette for use with + your VMware user account. + +- A successfully deployed PCG. Follow the steps described in the [Deploy a PCG with Palette CLI](./deploy-pcg.md) + tutorial to deploy a PCG using the Palette CLI. + +- A Palette cluster profile. Follow the [Create a Cluster Profile](./create-cluster-profile.md) tutorial to create the + required VMware cluster profile. + +## Deploy a Cluster + +The following steps will guide you through deploying the cluster infrastructure. + +Navigate to the left **Main Menu** and select **Clusters**. Click on **Create Cluster**. 
+ +![Palette clusters overview page](/getting-started/getting-started_deploy-k8s-cluster_new_cluster.webp) + +Palette will prompt you to select the type of cluster. Select **VMware** and click the **Start VMware Configuration** +button. Use the following steps to create a host cluster in VMware. + +In the **Basic Information** section, enter the general information about the cluster, such as the **Cluster name**, +**Description**, and **Tags**. + +Select the VMware cloud account that was registered with Palette during the PCG creation. The cloud account has the same +name as the PCG. In this tutorial, the cloud account is called `gateway-tutorial`. + +Click on **Next**. + +![Palette clusters basic information](/getting-started/vmware/getting-started_deploy-k8s-cluster_basic_info.webp) + +Click on **Add Cluster Profile**. A list of available profiles that you can deploy to VMware is displayed. Select +the cluster profile you created in the [Create a Cluster Profile](./create-cluster-profile.md) tutorial, named +**vmware-profile**, and click on **Confirm**. + +The **Cluster Profile** section displays all the layers in the cluster profile. + +![Palette cluster profile layers](/getting-started/vmware/getting-started_deploy-k8s-cluster_clusters_parameters.webp) + +Each layer has a pack manifest file with the deploy configurations. The pack manifest file is in a YAML format. Each +pack contains a set of default values. You can change the manifest values if needed. Click on **Next** to proceed. + +The **Cluster Config** section allows you to provide specific information about your VMware vSphere environment. First, +select the **Datacenter** and **Deployment Folder** where the cluster nodes will be launched. Next, select the **Image +Template Folder** to which the Spectro templates are imported, and choose **DHCP** as the **Network Type**. Finally, +provide the **SSH key** for accessing the cluster nodes. Proceed by clicking **Next** to advance to the **Nodes +Configuration** section. + +The **Nodes Config** section allows you to configure the control plane and worker nodes of the +host cluster. + +Provide the details for the nodes of the control plane and worker pools. + +| **Field** | **Control Plane Pool** | **Worker Pool** | +| --------------------------- | ---------------------- | --------------- | +| Node pool name | control-plane-pool | worker-pool | +| Number of nodes in the pool | `1` | `1` | +| Allow worker capability | No | Not applicable | +| Enable Autoscaler | Not applicable | No | +| Rolling update | Not applicable | Expand First | + +Keep the **Cloud Configuration** settings the same for both pools, with **CPU** set to 4 cores, **memory** allocated at +8 GB, and **disk** space at 60 GB. Next, populate the **Compute cluster**, **Resource Pool**, **Datastore**, and +**Network** fields according to your VMware vSphere environment. Click **Next** to proceed with the cluster deployment. + +The **Cluster Settings** section offers advanced options for OS patching, scheduled scans, scheduled backups, and +cluster role binding. For this tutorial, you can use the default settings. Click on **Validate** to continue. + +The **Review** section allows you to review the cluster configuration before deploying the cluster. Review all the +settings and click on **Finish Configuration** to deploy the cluster.
+ +![Newly created cluster](/getting-started/vmware/getting-started_deploy-k8s-cluster_profile_review.webp) + +Navigate to the left **Main Menu** and select **Clusters**. + +![Deployed cluster in the Clusters list](/getting-started/vmware/getting-started_deploy-k8s-cluster_new_cluster.webp) + +The cluster deployment process can take 15 to 30 minutes. The deployment time varies depending on the cloud provider, +cluster profile, cluster size, and the node pool configurations provided. You can learn more about the deployment +progress by reviewing the event log. Click on the **Events** tab to view the log. + +![Cluster event log](/getting-started/vmware/getting-started_deploy-k8s-cluster_event_log.webp) + +## Verify the Application + +Navigate to the left **Main Menu** and select **Clusters**. + +Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic, +indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the +Hello Universe application. + +![Cluster details page with service URL highlighted](/getting-started/vmware/getting-started_deploy-k8s-cluster_service_url.webp) + +
+ +:::warning + +It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few +moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request. + +::: + +
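Optionally, you can also verify the application service from the command line. The check below is a minimal sketch that
assumes you have `kubectl` installed, have downloaded the cluster's **kubeconfig** file from the cluster details page,
and that the Hello Universe pack deploys into its default **hello-universe** namespace.

```shell
# List the services in the hello-universe namespace. The EXTERNAL-IP column of the
# LoadBalancer service should show an address from the MetalLB range configured in
# your cluster profile.
kubectl get services --namespace hello-universe
```

If the **EXTERNAL-IP** column displays `<pending>`, MetalLB has not yet assigned an address, and the service URL will
not be reachable until it does.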
+ +![Image that shows the cluster overview of the Hello Universe Frontend Cluster](/getting-started/getting-started_deploy-k8s-cluster_hello-universe-with-api.webp) + +Welcome to Spacetastic's astronomy education platform. Feel free to explore the pages and learn more about space. The +statistics page offers information on visitor counts on your deployed service. + +You have deployed your first application to a cluster managed by Palette. Your first application is a three-tier +application with a frontend, API server, and Postgres database. + +## Cleanup + +Use the following steps to remove all the resources you created for the tutorial. + +To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to +delete to access its details page. + +Click on **Settings** to expand the menu, and select **Delete Cluster**. + +![Delete cluster](/getting-started/vmware/getting-started_deploy-k8s-cluster_delete-cluster-button.webp) + +You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name to proceed with +the delete step. The deletion process takes several minutes to complete. + +
+ +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force +delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette +automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +
+ +Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you +created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the +selection to remove the cluster profile. + + + +## Wrap-Up + +In this tutorial, you used the cluster profile you created in the previous +[Create a Cluster Profile](./create-cluster-profile.md) tutorial to deploy a host cluster onto VMware vSphere. After the +cluster deployed, you verified the Hello Universe application was successfully deployed. + +We recommend that you continue to the [Deploy Cluster Profile Updates](./update-k8s-cluster.md) tutorial to learn how to +update your host cluster. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/vmware/deploy-manage-k8s-cluster-tf.md b/docs/docs-content/getting-started/vmware/deploy-manage-k8s-cluster-tf.md new file mode 100644 index 0000000000..d96c58a811 --- /dev/null +++ b/docs/docs-content/getting-started/vmware/deploy-manage-k8s-cluster-tf.md @@ -0,0 +1,805 @@ +--- +sidebar_label: "Cluster Management with Terraform" +title: "Cluster Management with Terraform" +description: "Learn how to deploy and update a Palette host cluster to VMware vSphere with Terraform." +icon: "" +hide_table_of_contents: false +sidebar_position: 50 +toc_max_heading_level: 2 +tags: ["getting-started", "vmware", "terraform"] +--- + +The [Spectro Cloud Terraform](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) provider +allows you to create and manage Palette resources using Infrastructure as Code (IaC). With IaC, you can automate the +provisioning of resources, collaborate on changes, and maintain a single source of truth for your infrastructure. + +This tutorial will teach you how to use Terraform to deploy and update a VMware vSphere host cluster. You will learn how +to create two versions of a cluster profile with different demo applications, update the deployed cluster with the new +cluster profile version, and then perform a rollback. The concepts you learn about in the Getting Started section are +centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, you will need the following items in place: + +- Follow the steps described in the [Set up Palette with VMware](./setup.md) guide to authenticate Palette for use with + your VMware vSphere account. +- Follow the steps described in the [Deploy a PCG](./deploy-pcg.md) tutorial to deploy a VMware vSphere Private Cloud + Gateway (PCG). +- [Docker Desktop](https://www.docker.com/products/docker-desktop/) or [Podman](https://podman.io/docs/installation) + installed if you choose to follow along using the tutorial container. +- If you choose to clone the repository instead of using the tutorial container, make sure you have the following + software installed: + - [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) v1.9.0 or greater + - [Git](https://git-scm.com/downloads) + - [Kubectl](https://kubernetes.io/docs/tasks/tools/) + +## Set Up Local Environment + +You can clone the [Tutorials](https://github.com/spectrocloud/tutorials) repository locally or follow along by +downloading a container image that includes the tutorial code and all dependencies. + + + + + +Start Docker Desktop and ensure that the Docker daemon is available by issuing the following command. 
+ +```bash +docker ps +``` + +Next, download the tutorial image, start the container, and open a bash session into it. + +```shell +docker run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.9 bash +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + +:::warning + +Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress. + +::: + + + + + +If you are not using a Linux operating system, create and start the Podman Machine in your local environment. Otherwise, +skip this step. + +```bash +podman machine init +podman machine start +``` + +Use the following command and ensure you receive an output displaying the installation information. + +```bash +podman info +``` + +Next, download the tutorial image, start the container, and open a bash session into it. + +```shell +podman run --name tutorialContainer --interactive --tty ghcr.io/spectrocloud/tutorials:1.1.9 bash +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + +:::warning + +Do not exit the container until the tutorial is complete. Otherwise, you may lose your progress. + +::: + + + + + +Open a terminal window and download the tutorial code from GitHub. + +```shell +git clone https://github.com/spectrocloud/tutorials.git +``` + +Change the directory to the tutorial folder. + +```shell +cd tutorials/ +``` + +Check out the following git tag. + +```shell +git checkout v1.1.9 +``` + +Navigate to the folder that contains the tutorial code. + +```shell +cd /terraform/getting-started-deployment-tf +``` + + + + + +## Resources Review + +To help you get started with Terraform, the tutorial code is structured to support deploying a cluster to either AWS, +Azure, GCP, or VMware vSphere. Before you deploy a host cluster to VMware vSphere, review the following files in the +folder structure. + +| **File** | **Description** | +| ----------------------- | ---------------------------------------------------------------------------------------------------------------------- | +| **provider.tf** | This file contains the Terraform providers that are used to support the deployment of the cluster. | +| **inputs.tf** | This file contains all the Terraform variables required for the deployment logic. | +| **data.tf** | This file contains all the query resources that perform read actions. | +| **cluster_profiles.tf** | This file contains the cluster profile definitions for each cloud provider. | +| **clusters.tf** | This file has the cluster configurations required to deploy a host cluster to one of the cloud providers. | +| **terraform.tfvars** | Use this file to target a specific cloud provider and customize the deployment. This is the only file you must modify. | +| **ippool.tf** | This file contains the configuration required for VMware deployments that use static IP placement. | +| **ssh-key.tf** | This file has the SSH key resource definition required for Azure and VMware deployments. | +| **outputs.tf** | This file contains the content that will be displayed in the terminal after a successful Terraform `apply` action. | + +The following section reviews the core Terraform resources more closely. + +#### Provider + +The **provider.tf** file contains the Terraform providers used in the tutorial and their respective versions. 
This +tutorial uses four providers: + +- [Spectro Cloud](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs) +- [TLS](https://registry.terraform.io/providers/hashicorp/tls/latest) +- [vSphere](https://registry.terraform.io/providers/hashicorp/vsphere/latest) +- [Local](https://registry.terraform.io/providers/hashicorp/local/latest) + +Note how the project name is specified in the `provider "spectrocloud" {}` block. You can change the target project by +modifying the value of the `palette-project` variable in the **terraform.tfvars** file. + +```hcl +terraform { + required_providers { + spectrocloud = { + version = ">= 0.20.6" + source = "spectrocloud/spectrocloud" + } + + tls = { + source = "hashicorp/tls" + version = "4.0.4" + } + + vsphere = { + source = "hashicorp/vsphere" + version = ">= 2.6.1" + } + + local = { + source = "hashicorp/local" + version = "2.4.1" + } + } + + required_version = ">= 1.9" +} + +provider "spectrocloud" { + project_name = var.palette-project +} +``` + +#### Cluster Profile + +The next file you should become familiar with is the **cluster_profiles.tf** file. The `spectrocloud_cluster_profile` +resource allows you to create a cluster profile and customize its layers. You can specify the packs and versions to use +or add a manifest or Helm chart. + +The cluster profile resource is declared eight times in the **cluster-profiles.tf** file, with each pair of resources +being designated for a specific provider. In this tutorial, two versions of the VMware vSphere cluster profile are +deployed: version `1.0.0` deploys the [Hello Universe](https://github.com/spectrocloud/hello-universe) pack, while +version `1.1.0` deploys the [Kubecost](https://www.kubecost.com/) pack along with the +[Hello Universe](https://github.com/spectrocloud/hello-universe) application. + +The cluster profiles also include layers for the Operating System (OS), Kubernetes, container network interface, +container storage interface, and load balancer implementation for bare-metal clusters. The first `pack {}` block in the +list equates to the bottom layer of the cluster profile. Ensure you define the bottom layer of the cluster profile - the +OS layer - first in the list of `pack {}` blocks, as the order in which you arrange the contents of the `pack {}` blocks +plays an important role in the cluster profile creation. The table below displays the packs deployed in each version of +the cluster profile. + +| **Pack Type** | **Pack Name** | **Version** | **Cluster Profile v1.0.0** | **Cluster Profile v1.1.0** | +| ------------- | ----------------- | ----------- | -------------------------- | -------------------------- | +| OS | `ubuntu-vsphere` | `22.04` | :white_check_mark: | :white_check_mark: | +| Kubernetes | `kubernetes` | `1.28.3` | :white_check_mark: | :white_check_mark: | +| Network | `cni-calico` | `3.26.3` | :white_check_mark: | :white_check_mark: | +| Storage | `csi-vsphere-csi` | `3.0.2` | :white_check_mark: | :white_check_mark: | +| Load Balancer | `lb-metallb-helm` | `0.14.8` | :white_check_mark: | :white_check_mark: | +| App Services | `hellouniverse` | `1.2.0` | :white_check_mark: | :white_check_mark: | +| App Services | `cost-analyzer` | `1.103.3` | :x: | :white_check_mark: | + +The Hello Universe pack has two configured [presets](../../glossary-all.md#presets). The first preset deploys a +standalone frontend application, while the second one deploys a three-tier application with a frontend, API server, and +Postgres database. 
This tutorial deploys the three-tier version of the +[Hello Universe](https://github.com/spectrocloud/hello-universe) pack. The preset selection in the Terraform code is +specified within the Hello Universe pack block with the `values` field and by using the **values-3tier.yaml** file. +Below is an example of version `1.0.0` of the VMware vSphere cluster profile Terraform resource. + +```hcl +resource "spectrocloud_cluster_profile" "vmware-profile" { + count = var.deploy-vmware ? 1 : 0 + + name = "tf-vmware-profile" + description = "A basic cluster profile for VMware" + tags = concat(var.tags, ["env:VMware"]) + cloud = "vsphere" + type = "cluster" + version = "1.0.0" + + pack { + name = data.spectrocloud_pack.vmware_ubuntu.name + tag = data.spectrocloud_pack.vmware_ubuntu.version + uid = data.spectrocloud_pack.vmware_ubuntu.id + values = data.spectrocloud_pack.vmware_ubuntu.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.vmware_k8s.name + tag = data.spectrocloud_pack.vmware_k8s.version + uid = data.spectrocloud_pack.vmware_k8s.id + values = data.spectrocloud_pack.vmware_k8s.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.vmware_cni.name + tag = data.spectrocloud_pack.vmware_cni.version + uid = data.spectrocloud_pack.vmware_cni.id + values = data.spectrocloud_pack.vmware_cni.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.vmware_csi.name + tag = data.spectrocloud_pack.vmware_csi.version + uid = data.spectrocloud_pack.vmware_csi.id + values = data.spectrocloud_pack.vmware_csi.values + type = "spectro" + } + + pack { + name = data.spectrocloud_pack.vmware_metallb.name + tag = data.spectrocloud_pack.vmware_metallb.version + uid = data.spectrocloud_pack.vmware_metallb.id + values = replace(data.spectrocloud_pack.vmware_metallb.values, "192.168.10.0/24", var.metallb_ip) + type = "oci" + } + + pack { + name = data.spectrocloud_pack.hellouniverse.name + tag = data.spectrocloud_pack.hellouniverse.version + uid = data.spectrocloud_pack.hellouniverse.id + values = templatefile("manifests/values-3tier.yaml", { + namespace = var.app_namespace, + port = var.app_port, + replicas = var.replicas_number, + db_password = base64encode(var.db_password), + auth_token = base64encode(var.auth_token) + }) + type = "oci" + } +} +``` + +#### Data Resources + +Each `pack {}` block contains references to a data resource. +[Data resources](https://developer.hashicorp.com/terraform/language/data-sources) are used to perform read actions in +Terraform. The Spectro Cloud Terraform provider exposes several data resources to help you make your Terraform code more +dynamic. The data resource used in the cluster profile is `spectrocloud_pack`. This resource enables you to query +Palette for information about a specific pack, such as its unique ID, registry ID, available versions, and YAML values. + +Below is the data resource used to query Palette for information about the Kubernetes pack for version `1.28.3`. + +```hcl +data "spectrocloud_pack" "vmware_k8s" { + name = "kubernetes" + version = "1.28.3" + registry_uid = data.spectrocloud_registry.public_registry.id +} +``` + +Using the data resource helps you avoid manually entering the parameter values required by the cluster profile's +`pack {}` block. + +#### Cluster + +The **clusters.tf** file contains the definitions required for deploying a host cluster to one of the infrastructure +providers. 
To create a VMware vSphere host cluster, you must set the `deploy-vmware` variable in the +**terraform.tfvars** file to true. + +When deploying a cluster using Terraform, you must provide the same parameters as those available in the Palette UI for +the cluster deployment step, such as the instance size and number of nodes. You can learn more about each parameter by +reviewing the +[VMware vSphere cluster resource](https://registry.terraform.io/providers/spectrocloud/spectrocloud/latest/docs/resources/cluster_vsphere) +documentation. + +```hcl +resource "spectrocloud_cluster_vsphere" "vmware-cluster" { + count = var.deploy-vmware ? 1 : 0 + + name = "vmware-cluster" + tags = concat(var.tags, ["env:vmware"]) + cloud_account_id = data.spectrocloud_cloudaccount_vsphere.account[0].id + + cloud_config { + ssh_keys = [local.ssh_public_key] + datacenter = var.datacenter_name + folder = var.folder_name + static_ip = var.deploy-vmware-static # If true, the cluster will use static IP placement. If false, the cluster will use DDNS. + network_search_domain = var.search_domain + } + + cluster_profile { + id = var.deploy-vmware && var.deploy-vmware-kubecost ? resource.spectrocloud_cluster_profile.vmware-profile-kubecost[0].id : resource.spectrocloud_cluster_profile.vmware-profile[0].id + } + + scan_policy { + configuration_scan_schedule = "0 0 * * SUN" + penetration_scan_schedule = "0 0 * * SUN" + conformance_scan_schedule = "0 0 1 * *" + } + + machine_pool { + name = "control-plane-pool" + count = 1 + control_plane = true + control_plane_as_worker = true + + instance_type { + cpu = 4 + disk_size_gb = 60 + memory_mb = 8000 + } + + placement { + cluster = var.vsphere_cluster + datastore = var.datastore_name + network = var.network_name + resource_pool = var.resource_pool_name + # Required for static IP placement. + static_ip_pool_id = var.deploy-vmware-static ? resource.spectrocloud_privatecloudgateway_ippool.ippool[0].id : null + } + + } + + machine_pool { + name = "worker-pool" + count = 1 + control_plane = false + + instance_type { + cpu = 4 + disk_size_gb = 60 + memory_mb = 8000 + } + + placement { + cluster = var.vsphere_cluster + datastore = var.datastore_name + network = var.network_name + resource_pool = var.resource_pool_name + # Required for static IP placement. + static_ip_pool_id = var.deploy-vmware-static ? resource.spectrocloud_privatecloudgateway_ippool.ippool[0].id : null + } + } +} +``` + +## Terraform Tests + +Before starting the cluster deployment, test the Terraform code to ensure the resources will be provisioned correctly. +Issue the following command in your terminal. + +```bash +terraform test +``` + +A successful test execution will output the following. + +```text hideClipboard +Success! 16 passed, 0 failed. +``` + +## Input Variables + +To deploy a cluster using Terraform, you must first modify the **terraform.tfvars** file. Open it in the editor of your +choice. The tutorial container includes the editor [Nano](https://www.nano-editor.org). + +The file is structured with different sections. Each provider has a section with variables that need to be filled in, +identified by the placeholder `REPLACE_ME`. Additionally, there is a toggle variable named `deploy-` +available for each provider, which you can use to select the deployment environment. + +In the **Palette Settings** section, modify the name of the `palette-project` variable if you wish to deploy to a +Palette project different from the default one. 
+ +```hcl {4} +##################### +# Palette Settings +##################### +palette-project = "Default" # The name of your project in Palette. +``` + +Next, in the **Hello Universe Configuration** section, provide values for the database password and authentication token +for the Hello Universe pack. For example, you can use the value `password` for the database password and the default +token provided in the +[Hello Universe](https://github.com/spectrocloud/hello-universe/tree/main?tab=readme-ov-file#reverse-proxy-with-kubernetes) +repository for the authentication token. + +```hcl {7-8} +############################## +# Hello Universe Configuration +############################## +app_namespace = "hello-universe" # The namespace in which the application will be deployed. +app_port = 8080 # The cluster port number on which the service will listen for incoming traffic. +replicas_number = 1 # The number of pods to be created. +db_password = "REPLACE ME" # The database password to connect to the API database. +auth_token = "REPLACE ME" # The auth token for the API connection. +``` + +Locate the VMware vSphere provider section and change `deploy-vmware = false` to `deploy-vmware = true`. Additionally, +replace all occurrences of `REPLACE_ME` with the required variable values. + +- **metallb_ip** - Range of IP addresses for your MetalLB load balancer. If using static IP placement, this range must + be included in the PCG's static IP pool range. +- **pcg_name** - Name of the PCG that will be used to deploy the Palette cluster. +- **datacenter_name** - Name of the data center in vSphere. +- **folder_name** - Name of the folder in vSphere. +- **search_domain** - Name of the network search domain. +- **vsphere_cluster** - Name of the cluster as it appears in vSphere. +- **datastore_name** - Name of the datastore as it appears in vSphere. +- **network_name** - Name of the network as it appears in vSphere. +- **resource_pool_name** - Name of the resource pool as it appears in vSphere. +- **ssh_key** - Path to a public SSH key. If not provided, a new key pair will be created. +- **ssh_key_private** - Path to a private SSH key. If not provided, a new key pair will be created. + +```hcl {4,7-15} +############################ +# VMware Deployment Settings +############################ +deploy-vmware = false # Set to true to deploy to VMware. +deploy-vmware-kubecost = false # Set to true to deploy to VMware and include Kubecost to your cluster profile. + +metallb_ip = "REPLACE ME" +pcg_name = "REPLACE ME" +datacenter_name = "REPLACE ME" +folder_name = "REPLACE ME" +search_domain = "REPLACE ME" +vsphere_cluster = "REPLACE ME" +datastore_name = "REPLACE ME" +network_name = "REPLACE ME" +resource_pool_name = "REPLACE ME" +ssh_key = "" +ssh_key_private = "" +``` + +:::info + +If you deployed the PCG using static IP placement, you must create an +[IPAM pool](../../clusters/pcg/manage-pcg/create-manage-node-pool.md) before deploying clusters. Set the +`deploy-vmware-static` variable to true and provide the required values for the variables under the **Static IP Pool +Variables** section. + +::: + +When you are done making the required changes, save the file. + +## Deploy the Cluster + +Before starting the cluster provisioning, export your [Palette API key](./setup.md#create-a-palette-api-key) as an +environment variable. This step allows the Terraform code to authenticate with the Palette API. + +```bash +export SPECTROCLOUD_APIKEY= +``` + +Next, issue the following command to initialize Terraform. 
The `init` command initializes the working directory that +contains the Terraform files. + +```shell +terraform init +``` + +```text hideClipboard +Terraform has been successfully initialized! +``` + +:::warning + +Before deploying the resources, ensure that there are no active clusters named `vmware-cluster` or cluster profiles +named `tf-vmware-profile` in your Palette project. + +::: + +Issue the `plan` command to preview the resources that Terraform will create. + +```shell +terraform plan +``` + +The output indicates that six new resources will be created: two versions of the VMware vSphere cluster profile, the +host cluster, and the files associated with the SSH key pair if you have not provided one. The host cluster will use +version `1.0.0` of the cluster profile. + +```shell +Plan: 6 to add, 0 to change, 0 to destroy. +``` + +To deploy the resources, use the `apply` command. + +```shell +terraform apply -auto-approve +``` + +To check that the cluster profile was created correctly, log in to [Palette](https://console.spectrocloud.com), and +click **Profiles** from the left **Main Menu**. Locate the cluster profile named `tf-vmware-profile`. Click on the +cluster profile to review its layers and versions. + +![A view of the cluster profile](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp) + +You can also check the cluster creation process by selecting **Clusters** from the left **Main Menu**. + +![Update the cluster](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp) + +Select your cluster to review its details page, which contains the status, cluster profile, event logs, and more. + +The cluster deployment may take 15 to 30 minutes depending on the cloud provider, cluster profile, cluster size, and the +node pool configurations provided. You can learn more about the deployment progress by reviewing the event log. Click on +the **Events** tab to check the log. + +![Update the cluster](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp) + +### Verify the Application + +In Palette, navigate to the left **Main Menu** and select **Clusters**. + +Select your cluster to view its **Overview** tab. When the application is deployed and ready for network traffic, +indicated in the **Services** field, Palette exposes the service URL. Click on the URL for port **:8080** to access the +Hello Universe application. + +:::warning + +It can take up to three minutes for DNS to properly resolve the public load balancer URL. We recommend waiting a few +moments before clicking on the service URL to prevent the browser from caching an unresolved DNS request. + +::: + +![Deployed application](/getting-started/getting-started_deploy-k8s-cluster_hello-universe-with-api.webp) + +Welcome to Spacetastic's astronomy education platform. Feel free to explore the pages and learn more about space. The +statistics page offers information on visitor counts on your deployed service. + +## Version Cluster Profiles + +Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with +better change visibility and control over the layers in your host clusters. Profile versions are commonly used for +adding or removing layers and pack configuration updates. + +The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. 
In this +tutorial, you used Terraform to deploy two versions of a VMware vSphere cluster profile. The snippet below displays a +segment of the Terraform cluster profile resource version `1.0.0` that was deployed. + +```hcl {4,9} +resource "spectrocloud_cluster_profile" "vmware-profile" { + count = var.deploy-vmware ? 1 : 0 + + name = "tf-vmware-profile" + description = "A basic cluster profile for VMware" + tags = concat(var.tags, ["env:VMware"]) + cloud = "vsphere" + type = "cluster" + version = "1.0.0" +``` + +Open the **terraform.tfvars** file, set the `deploy-vmware-kubecost` variable to true, and save the file. Once applied, +the host cluster will use version `1.1.0` of the cluster profile with the Kubecost pack. + +The snippet below displays the segment of the Terraform resource that creates the cluster profile version `1.1.0`. Note +how the name `tf-vmware-profile` is the same as in the first cluster profile resource, but the version is different. + +```hcl {4,9} +resource "spectrocloud_cluster_profile" "vmware-profile-kubecost" { + count = var.deploy-vmware ? 1 : 0 + + name = "tf-vmware-profile" + description = "A basic cluster profile for VMware with Kubecost" + tags = concat(var.tags, ["env:VMware"]) + cloud = "vsphere" + type = "cluster" + version = "1.1.0" +``` + +In the terminal window, issue the following command to plan the changes. + +```bash +terraform plan +``` + +The output states that one resource will be modified. The deployed cluster will now use version `1.1.0` of the cluster +profile. + +```text hideClipboard +Plan: 0 to add, 1 to change, 0 to destroy. +``` + +Issue the `apply` command to deploy the changes. + +```bash +terraform apply -auto-approve +``` + +Palette will now reconcile the current state of your workloads with the desired state specified by the new cluster +profile version. + +To visualize the reconciliation behavior, log in to [Palette](https://console.spectrocloud.com), and click **Clusters** +from the left **Main Menu**. + +Select the cluster named `vmware-cluster`. Click on the **Events** tab. Note how a cluster reconciliation action was +triggered due to cluster profile changes. + +![Image that shows the cluster profile reconciliation behavior](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_reconciliation.webp) + +Next, click on the **Profile** tab. Observe that the cluster is now using version `1.1.0` of the `tf-vmware-profile` +cluster profile. + +![Image that shows the new cluster profile version with Kubecost](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp) + +Once the changes have been completed, Palette marks the cluster layers with a green status indicator. Click the +**Overview** tab to verify that the Kubecost pack was successfully deployed. + +![Image that shows the cluster with Kubecost](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp) + +Next, download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette +UI. This file enables you and other users to issue `kubectl` commands against the host cluster. + +![Image that shows the cluster's kubeconfig file location](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp) + +Open a new terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded. 
+ +```bash +export KUBECONFIG=~/Downloads/admin.vmware-cluster.kubeconfig +``` + +Forward the Kubecost UI to your local network. The Kubecost dashboard is not exposed externally by default, so the +command below will allow you to access it locally on port **9090**. If port 9090 is already taken, you can choose a +different one. + +```bash +kubectl port-forward --namespace kubecost deployment/cost-analyzer-cost-analyzer 9090 +``` + +Open your browser window and navigate to `http://localhost:9090`. The Kubecost UI provides you with a variety of cost +information about your cluster. + +To use Kubecost in VMware vSphere clusters, you must enable the +[custom pricing](https://docs.kubecost.com/architecture/pricing-sources-matrix#cloud-provider-on-demand-api) option in +the Kubecost UI and manually set the monthly cluster costs. + +Read more about [Navigating the Kubecost UI](https://docs.kubecost.com/using-kubecost/navigating-the-kubecost-ui) to +make the most of the cost analyzer pack. + +![Image that shows the Kubecost UI](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubecost.webp) + +Once you are done exploring the Kubecost dashboard, stop the `kubectl port-forward` command by closing the terminal +window it is executing from. + +## Roll Back Cluster Profiles + +One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of +previously known working states. The ability to roll back to a previously working cluster profile in one action shortens +the time to recovery in the event of an incident. + +The process of rolling back to a previous version using Terraform is similar to the process of applying a new version. + +Open the **terraform.tfvars** file, set the `deploy-vmware-kubecost` variable to false, and save the file. Once applied, +this action will make the active cluster use version **1.0.0** of the cluster profile again. + +In the terminal window, issue the following command to plan the changes. + +```bash +terraform plan +``` + +The output states that the deployed cluster will now use version `1.0.0` of the cluster profile. + +```text hideClipboard +Plan: 0 to add, 1 to change, 0 to destroy. +``` + +Issue the `apply` command to deploy the changes. + +```bash +terraform apply -auto-approve +``` + +Palette now makes the changes required for the cluster to return to the state specified in version `1.0.0` of your +cluster profile. Once your changes have completed, Palette marks your layers with the green status indicator. + +![Image that shows the cluster using version 1.0.0 of the cluster profile](/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp) + +## Cleanup + +Use the following steps to clean up the resources you created for the tutorial. Use the `destroy` command to remove all +the resources you created through Terraform. + +```shell +terraform destroy --auto-approve +``` + +A successful execution of `terraform destroy` will output the following. + +```shell +Destroy complete! Resources: 6 destroyed. +``` + +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for force delete. To trigger a force +delete action, navigate to the cluster’s details page and click on **Settings**. Click on **Force Delete Cluster** to +delete the cluster. Palette automatically removes clusters stuck in the cluster deletion phase for over 24 hours. 
+ +::: + +If you are using the tutorial container, type `exit` in your terminal session and press the **Enter** key. Next, issue +the following command to stop and remove the container. + + + + + +```shell +docker stop tutorialContainer && \ +docker rmi --force ghcr.io/spectrocloud/tutorials:1.1.9 +``` + + + + + +```shell +podman stop tutorialContainer && \ +podman rmi --force ghcr.io/spectrocloud/tutorials:1.1.9 +``` + + + + + +## Wrap-Up + +In this tutorial, you learned how to create different versions of a cluster profile using Terraform. You deployed a host +VMware vSphere cluster and then updated it to use a different version of a cluster profile. Finally, you learned how to +perform cluster profile roll backs. + +We encourage you to check out the [Scale, Upgrade, and Secure Clusters](./scale-secure-cluster.md) tutorial to learn how +to perform common Day-2 operations on your deployed clusters. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/vmware/deploy-pcg.md b/docs/docs-content/getting-started/vmware/deploy-pcg.md new file mode 100644 index 0000000000..9dcd5aeb25 --- /dev/null +++ b/docs/docs-content/getting-started/vmware/deploy-pcg.md @@ -0,0 +1,73 @@ +--- +sidebar_label: "Deploy a PCG" +title: "Deploy a PCG with Palette CLI" +description: "Learn to deploy a PCG with Palette CLI." +icon: "" +hide_table_of_contents: false +sidebar_position: 20 +tags: ["getting-started", "vmware"] +--- + +Palette Private Cloud Gateway (PCG) is a crucial infrastructure support component that acts as a bridge between your +private cloud environment or data center and Palette. + +A PCG is required in environments lacking direct network access to Palette. For example, many infrastructure +environments reside within private networks that restrict external connections, preventing internal devices and +resources from reaching Palette directly. + +Upon installation, the PCG initiates a connection from inside the private network to Palette, serving as an endpoint for +Palette to communicate with the infrastructure environment. The PCG continuously polls Palette for instructions to +either deploy or delete Kubernetes clusters within the environment. This connection uses a secure communication channel +that is encrypted using the Transport Layer Security (TLS) protocol. Once a cluster is deployed, the PCG is no longer +involved in the communication between Palette and the deployed cluster. The cluster then communicates directly with +Palette through the Palette agent available within each cluster, which originates all network requests outbound toward +Palette. Refer to the [PCG Architecture](../../clusters/pcg/architecture.md) section for more information. + +In this tutorial, you will deploy a VMware PCG using Palette CLI. + +### Prerequisites + +Follow the steps described in the [Set up Palette with VMware](./setup.md) guide to authenticate Palette for use with +your VMware user account. + +You will need a Linux x86-64 machine with access to a terminal and Internet, as well as connection to both Palette and +VMware vSphere. + + - The following IP address requirements must be met in your VMware vSphere environment: + - One IP address available for the single-node PCG deployment. Refer to the [PCG Sizing](../../clusters/pcg/manage-pcg/scale-pcg-nodes.md) section for more information on sizing. + - One IP address reserved for cluster repave operations. + - One IP address for the Virtual IP (VIP). + - DNS must be able to resolve the domain `api.spectrocloud.com`. 
+ - NTP server must be reachable from the PCG. + - The following minimum resources must be available in your VMware vSphere environment: + - CPU: 4 cores. + - Memory: 4 GiB. + - Storage: 60 GiB. + +
+ + :::info + + In production environments, we recommend deploying a three-node PCG, each node with 8 cores of CPU, 8 GiB of memory, and 100 GiB of storage. + + ::: + + - Ensure the following software is installed and available on your Linux machine. + - [Palette CLI](../../automation/palette-cli/install-palette-cli.md). + - [Docker](https://docs.docker.com/desktop). + - [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation). + - [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git). + +## Authenticate with Palette + + + +## Deploy a PCG + + + +## Next Steps + +In this tutorial, you deployed a PCG to connect Palette to your VMware vSphere environment. To learn how to get started +with deploying Kubernetes clusters to VMware, we recommend that you continue to the +[Create a Cluster Profile](./create-cluster-profile.md) tutorial to create a full cluster profile for your host cluster. diff --git a/docs/docs-content/getting-started/vmware/scale-secure-cluster.md b/docs/docs-content/getting-started/vmware/scale-secure-cluster.md new file mode 100644 index 0000000000..5e3142f6d9 --- /dev/null +++ b/docs/docs-content/getting-started/vmware/scale-secure-cluster.md @@ -0,0 +1,542 @@ +--- +sidebar_label: "Scale, Upgrade, and Secure Clusters" +title: "Scale, Upgrade, and Secure Clusters" +description: "Learn how to scale, upgrade, and secure Palette host clusters deployed to VMware." +icon: "" +hide_table_of_contents: false +sidebar_position: 60 +tags: ["getting-started", "vmware", "tutorial"] +--- + +Palette has in-built features to help with the automation of Day-2 operations. Upgrading and maintaining a deployed +cluster is typically complex because you need to consider any possible impact on service availability. Palette provides +out-of-the-box functionality for upgrades, observability, granular Role Based Access Control (RBAC), backup and security +scans. + +This tutorial will teach you how to use the Palette UI to perform scale and maintenance tasks on your clusters. You will +learn how to create Palette projects and teams, import a cluster profile, safely upgrade the Kubernetes version of a +deployed cluster and scale up your cluster nodes. The concepts you learn about in the Getting Started section are +centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, follow the steps described in the [Set up Palette with VMware](./setup.md) guide to +authenticate Palette for use with your VMware vSphere account. + +Follow the steps described in the [Deploy a PCG](./deploy-pcg.md) tutorial to deploy a VMware vSphere Private Cloud +Gateway (PCG). + +Additionally, you should install kubectl locally. Use the Kubernetes +[Install Tools](https://kubernetes.io/docs/tasks/tools/) page for further guidance. + +## Create Palette Projects + +Palette projects help you organize and manage cluster resources, providing logical groupings. They also allow you to +manage user access control through Role Based Access Control (RBAC). You can assign users and teams with specific roles +to specific projects. All resources created within a project are scoped to that project and only available to that +project, but a tenant can have multiple projects. + +Log in to [Palette](https://console.spectrocloud.com). + +Click on the **drop-down Menu** at the top of the page and switch to the **Tenant Admin** scope. Palette provides the +**Default** project out-of-the-box. 
+ +![Image that shows how to select tenant admin scope](/getting-started/getting-started_scale-secure-cluster_switch-tenant-admin-scope.webp) + +Navigate to the left **Main Menu** and click on **Projects**. Click on the **Create Project** button. The **Create a new +project** dialog appears. + +Fill out the input fields with values from the table below to create a project. + +| Field | Description | Value | +| ----------- | ----------------------------------- | --------------------------------------------------------- | +| Name | The name of the project. | `Project-ScaleSecureTutorial` | +| Description | A brief description of the project. | Project for Scale, Upgrade, and Secure Clusters tutorial. | +| Tags | Add tags to the project. | `env:dev` | + +Click **Confirm** to create the project. Once Palette finishes creating the project, a new card appears on the +**Projects** page. + +Navigate to the left **Main Menu** and click on **Users & Teams**. + +Select the **Teams** tab. Then, click on **Create Team**. + +Fill in the **Team Name** with **scale-secure-tutorial-team**. Click on **Confirm**. + +Once Palette creates the team, select it from the **Teams** list. The **Team Details** pane opens. + +On the **Project Roles** tab, click on **New Project Role**. The list of project roles appears. + +Select the **Project-ScaleSecureTutorial** from the **Projects** drop-down. Then, select the **Cluster Profile Viewer** +and **Cluster Viewer** roles. Click on **Confirm**. + +![Image that shows how to select team roles](/getting-started/getting-started_scale-secure-cluster_select-team-roles.webp) + +Any users that you add to this team inherit the project roles assigned to it. Roles are the foundation of Palette's RBAC +enforcement. They allow a single user to have different types of access control based on the resource being accessed. In +this scenario, any user added to this team will have access to view any cluster profiles and clusters in the +**Project-ScaleSecureTutorial** project, but not modify them. Check out the +[Palette RBAC](../../user-management/palette-rbac/palette-rbac.md) section for more details. + +Navigate to the left **Main Menu** and click on **Projects**. + +Click on **Open project** on the **Project-ScaleSecureTutorial** card. + +![Image that shows how to open the tutorial project](/getting-started/getting-started_scale-secure-cluster_open-tutorial-project.webp) + +Your scope changes from **Tenant Admin** to **Project-ScaleSecureTutorial**. All further resources you create will be +part of this project. + +## Import a Cluster Profile + +Palette provides three resource contexts. They help you customize your environment to your organizational needs, as well +as control the scope of your settings. + +| Context | Description | +| ------- | ---------------------------------------------------------------------------------------- | +| System | Resources are available at the system level and to all tenants in the system. | +| Tenant | Resources are available at the tenant level and to all projects belonging to the tenant. | +| Project | Resources are available within a project and not available to other projects. | + +All of the resources you have created as part of your Getting Started journey have used the **Project** context. They +are only visible in the **Default** project. Therefore, you will need to create a new cluster profile in +**Project-ScaleSecureTutorial**. + +Navigate to the left **Main Menu** and click on **Profiles**. Click on **Import Cluster Profile**. 
The **Import Cluster +Profile** pane opens. + +Paste the following in the text editor. Click on **Validate**. The **Select repositories** dialog appears. + + + +Click on **Confirm**. Then, click on **Confirm** on the **Import Cluster Profile** pane. Palette creates a new cluster +profile named **vmware-profile**. + +On the **Profiles** list, select **Project** from the **Contexts** drop-down. Your newly created cluster profile +displays. The Palette UI confirms that the cluster profile was created in the scope of the +**Project-ScaleSecureTutorial**. + +![Image that shows the cluster profile ](/getting-started/vmware/getting-started_scale-secure-cluster_cluster-profile-created.webp) + +Select the cluster profile to view its details. The cluster profile summary appears. + +This cluster profile deploys the [Hello Universe](https://github.com/spectrocloud/hello-universe) application using a +pack. Click on the **hellouniverse 1.2.0** layer. The pack manifest editor appears. + +Click on **Presets** on the right-hand side. You can learn more about the pack presets on the pack README, which is +available in the Palette UI. Select the **Enable Hello Universe API** preset. The pack manifest changes accordingly. + +![Screenshot of pack presets](/getting-started/vmware/getting-started_scale-secure-cluster_pack-presets.webp) + +The pack requires two values to be replaced for the authorization token and for the database password when using this +preset. Replace these values with your own base64 encoded values. The +[_hello-universe_](https://github.com/spectrocloud/hello-universe?tab=readme-ov-file#single-load-balancer) repository +provides a token that you can use. + +Click on **Confirm Updates**. The manifest editor closes. + +Click on the **lb-metallb-helm** layer. The pack manifest editor appears. + +Replace the predefined `192.168.10.0/24` IP CIDR listed below the **addresses** line with a valid IP address or IP range +from your VMware environment to be assigned to your load balancer. + +![Metallb Helm-based pack.](/getting-started/vmware/getting-started_scale-secure-cluster_metallb-pack.webp) + +Click on **Confirm Updates**. The manifest editor closes. Then, click on **Save Changes** to save your updates. + +## Deploy a Cluster + +Navigate to the left **Main Menu** and select **Clusters**. Click on **Create Cluster**. + +Palette will prompt you to select the type of cluster. Select **VMware** and click on **Start VMware Configuration**. + +Continue with the rest of the cluster deployment flow using the cluster profile you created in the +[Import a Cluster Profile](#import-a-cluster-profile) section, named **vmware-profile**. Refer to the +[Deploy a Cluster](./deploy-k8s-cluster.md#deploy-a-cluster) tutorial for additional guidance or if you need a refresher +of the Palette deployment flow. + +### Verify the Application + +Navigate to the left **Main Menu** and select **Clusters**. + +Select your cluster to view its **Overview** tab. + +When the application is deployed and ready for network traffic, Palette exposes the service URL in the **Services** +field. Click on the URL for port **:8080** to access the Hello Universe application. + +![Cluster details page with service URL highlighted](/getting-started/vmware/getting-started_scale-secure-cluster_service_url.webp) + +## Upgrade Kubernetes Versions + +Regularly upgrading your Kubernetes version is an important part of maintaining a good security posture. 
New versions
+may contain important patches to security vulnerabilities and bugs that could affect the integrity and availability of
+your clusters.
+
+Palette supports the current Kubernetes release and the three previous minor version releases, also known as N-3. For
+example, if the current release is 1.29, we also support 1.28, 1.27, and 1.26.
+
+:::warning
+
+Once you upgrade your cluster to a new Kubernetes version, you will not be able to downgrade.
+
+:::
+
+We recommend using cluster profile versions to safely upgrade any layer of your cluster profile and maintain the
+security of your clusters. Expand the following section to learn how to create a new cluster profile version with a
+Kubernetes upgrade.
+
+
+ +Upgrade Kubernetes using Cluster Profile Versions + +Navigate to the left **Main Menu** and click on **Profiles**. Select the cluster profile that you used to deploy your +cluster, named **vmware-profile**. The cluster profile details page appears. + +Click on the version drop-down and select **Create new version**. The version creation dialog appears. + +Fill in **1.1.0** in the **Version** input field. Then, click on **Confirm**. The new cluster profile version is created +with the same layers as version **1.0.0**. + +Select the **kubernetes 1.27.x** layer of the profile. The pack manifest editor appears. + +Click on the **Pack Version** dropdown. All of the available versions of the **Palette eXtended Kubernetes** pack +appear. The cluster profile is configured to use the latest patch version of **Kubernetes 1.27**. + +![Cluster profile with all Kubernetes versions](/getting-started/vmware/getting-started_scale-secure-cluster_kubernetes-versions.webp) + +The official guidelines for Kubernetes upgrades recommend upgrading one minor version at a time. For example, if you are +using Kubernetes version 1.26, you should upgrade to 1.27, before upgrading to version 1.28. You can learn more about +the official Kubernetes upgrade guidelines in the +[Version Skew Policy](https://kubernetes.io/releases/version-skew-policy/) page. + +Select **1.28.x** from the version dropdown. This selection follows the Kubernetes upgrade guidelines as the cluster +profile is using **1.27.x**. + +The manifest editor highlights the changes made by this upgrade. Once you have verified that the upgrade changes +versions as expected, click on **Confirm changes**. + +Click on **Confirm Updates**. Then, click on **Save Changes** to persist your updates. + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Profile** tab. Your cluster is currently using the **1.0.0** version of your cluster profile. + +Change the cluster profile version by selecting **1.1.0** from the version drop-down. Click on **Review & Save**. The +**Changes Summary** dialog appears. + +Click on **Review changes in Editor**. The **Review Update Changes** dialog displays the same Kubernetes version +upgrades as the cluster profile editor previously did. Click on **Update**. + +
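+
+In addition to the Palette UI, you can track the upgrade from a terminal. The commands below are a minimal check that
+assumes you have already downloaded the cluster kubeconfig file from Palette, as described in the
+[Scale a Cluster](#scale-a-cluster) section later in this tutorial. The file path is an example and may differ in your
+environment.
+
+```shell
+# Point kubectl at the tutorial cluster using the kubeconfig file downloaded from Palette.
+export KUBECONFIG=~/Downloads/admin.vmware-cluster.kubeconfig
+
+# List the nodes and the Kubernetes version each one is running. While the upgrade is in progress,
+# replacement nodes running the new version appear alongside the nodes they replace.
+kubectl get nodes
+```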
+ +Upgrading the Kubernetes version of your cluster modifies an infrastructure layer. Therefore, Kubernetes needs to +replace its nodes. This is known as a repave. Check out the +[Node Pools](../../clusters/cluster-management/node-pool.md#repave-behavior-and-configuration) page to learn more about +the repave behavior and configuration. + +Click on the **Nodes** tab. You can follow along with the node upgrades on this screen. Palette replaces the nodes +configured with the old Kubernetes version with newly upgraded ones. This may affect the performance of your +application, as Kubernetes swaps the workloads to the upgraded nodes. + +![Node repaves in progress](/getting-started/vmware/getting-started_scale-secure-cluster_node-repaves.webp) + +### Verify the Application + +The cluster update completes when the Palette UI marks the cluster profile layers as green and the cluster is in a +**Healthy** state. The cluster **Overview** page also displays the Kubernetes version as **1.28**. Click on the URL for +port **:8080** to access the application and verify that your upgraded cluster is functional. + +![Kubernetes upgrade applied](/getting-started/vmware/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp) + +## Scan Clusters + +Palette provides compliance, security, conformance, and Software Bill of Materials (SBOM) scans on tenant clusters. +These scans ensure cluster adherence to specific compliance and security standards, as well as detect potential +vulnerabilities. You can perform four types of scans on your cluster. + +| Scan | Description | +| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| Kubernetes Configuration Security | This scan examines the compliance of deployed security features against the CIS Kubernetes Benchmarks, which are consensus-driven security guidelines for Kubernetes. By default, the test set will execute based on the cluster Kubernetes version. | +| Kubernetes Penetration Testing | This scan evaluates Kubernetes-related open-ports for any configuration issues that can leave the tenant clusters exposed to attackers. It hunts for security issues in your clusters and increases visibility of the security controls in your Kubernetes environments. | +| Kubernetes Conformance Testing | This scan validates your Kubernetes configuration to ensure that it conforms to CNCF specifications. Palette leverages an open-source tool called [Sonobuoy](https://sonobuoy.io) to perform this scan. | +| Software Bill of Materials (SBOM) | This scan details the various third-party components and dependencies used by your workloads and helps to manage security and compliance risks associated with those components. | + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Scan** tab. The list of all the available cluster scans appears. Palette indicates that you have never +scanned your cluster. + +![Scans never performed on the cluster](/getting-started/vmware/getting-started_scale-secure-cluster_never-scanned-cluster.webp) + +Click **Run Scan** on the **Kubernetes configuration security** and **Kubernetes penetration testing** scans. Palette +schedules and executes these scans on your cluster, which may take a few minutes. 
Once they complete, you can download +the report in PDF, CSV or view the results directly in the Palette UI. + +![Scans completed on the cluster](/getting-started/vmware/getting-started_scale-secure-cluster_scans-completed.webp) + +Click on **Configure Scan** on the **Software Bill of Materials (SBOM)** scan. The **Configure SBOM Scan** dialog +appears. + +Leave the default selections on this screen and click on **Confirm**. Optionally, you can configure an S3 bucket to save +your report into. Refer to the +[Configure an SBOM Scan](../../clusters/cluster-management/compliance-scan.md#configure-an-sbom-scan) guide to learn +more about the configuration options of this scan. + +Once the scan completes, click on the report to view it within the Palette UI. The third-party dependencies that your +workloads rely on are evaluated for potential security vulnerabilities. Reviewing the SBOM enables organizations to +track vulnerabilities, perform regular software maintenance, and ensure compliance with regulatory requirements. + +:::info + +The scan reports highlight any failed checks, based on Kubernetes community standards and CNCF requirements. We +recommend that you prioritize the rectification of any identified issues. + +::: + +As you have seen so far, Palette scans are crucial when maintaining your security posture. Palette provides the ability +to schedule your scans and periodically evaluate your clusters. In addition, it keeps a history of previous scans for +comparison purposes. Expand the following section to learn how to configure scan schedules for your cluster. + +
+
+Configure Cluster Scan Schedules
+
+Click on **Settings**. Then, select **Cluster Settings**. The **Settings** pane appears.
+
+Select the **Schedule Scans** option. You can configure schedules for your cluster scans. Palette provides common scan
+schedules, or you can provide a custom time. We recommend choosing a schedule when you expect the usage of your cluster
+to be lowest. Otherwise, the scans may impact the performance of your nodes.
+
+![Scan schedules](/getting-started/vmware/getting-started_scale-secure-cluster_scans-schedules.webp)
+
+Palette will automatically scan your cluster according to your configured schedule.
+
+
+ +## Scale a Cluster + +A node pool is a group of nodes within a cluster that all have the same configuration. You can use node pools for +different workloads. For example, you can create a node pool for your production workloads and another for your +development workloads. You can update node pools for active clusters or create a new one for the cluster. + +Navigate to the left **Main Menu** and select **Clusters**. Select your cluster to view its **Overview** tab. + +Select the **Nodes** tab. Your cluster has a **control-plane-pool** and a **worker-pool**. Each pool contains one node. + +Select the **Overview** tab. Download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file. + +![kubeconfig download](/getting-started/vmware/getting-started_scale-secure-cluster_download-kubeconfig.webp) + +Open a terminal window and set the environment variable `KUBECONFIG` to point to the file you downloaded. + +```shell +export KUBECONFIG=~/Downloads/admin.vmware-cluster.kubeconfig +``` + +Execute the following command in your terminal to view the nodes of your cluster. + +```shell +kubectl get nodes +``` + +The output reveals two nodes, one for the worker pool and one for the control plane. Make a note of the name of your +worker node, which is the node that does not have the `control-plane` role. In the example below, +`vmware-cluster-worker-pool-7d6d76b55b-dhffq` is the name of the worker node. + +```shell +NAME STATUS ROLES AGE VERSION +vmware-cluster-cp-xcqlw Ready control-plane 28m v1.28.11 +vmware-cluster-worker-pool-7d6d76b55b-dhffq Ready 28m v1.28.11 +``` + +The Hello Universe pack deploys three pods in the `hello-universe` namespace. Execute the following command to verify +where these pods have been scheduled. + +```shell +kubectl get pods --namespace hello-universe --output wide +``` + +The output verifies that all of the pods have been scheduled on the worker node you made a note of previously. + +```shell +NAME READY STATUS AGE NODE +api-7db799cf85-5w5l6 1/1 Running 20m vmware-cluster-worker-pool-7d6d76b55b-dhffq +postgres-698d7ff8f4-vbktf 1/1 Running 20m vmware-cluster-worker-pool-7d6d76b55b-dhffq +ui-5f777c76df-pplcv 1/1 Running 20m vmware-cluster-worker-pool-7d6d76b55b-dhffq +``` + +Navigate back to the Palette UI in your browser. Select the **Nodes** tab. + +Click on **New Node Pool**. The **Add node pool** dialog appears. This workflow allows you to create a new worker pool +for your cluster. Fill in the following configuration. + +| Field | Value | Description | +| --------------------- | --------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Node pool name** | `worker-pool-2` | The name of your worker pool. | +| **Enable Autoscaler** | Enabled | Whether Palette should scale the pool horizontally based on its per-node workload counts. The **Minimum size** parameter specifies the lower bound of nodes in the pool and the **Maximum size** specifies the upper bound. By default, **Minimum size** is `1` and **Maximum size** is `3`. | +| **CPU** | 4 cores | Set the number of CPUs equal to the already provisioned nodes. | +| **Memory** | 8 GB | Set the memory allocation equal to the already provisioned nodes. | +| **Disk** | 60 GB | Set the disk space equal to the already provisioned nodes. 
| + +Next, populate the **Compute cluster**, **Resource Pool**, **Datastore**, and **Network** fields according to your +VMware vSphere environment. + +Click on **Confirm**. The dialog closes. Palette begins provisioning your node pool. Once the process completes, your +three node pools appear in a healthy state. + +![New worker pool provisioned](/getting-started/vmware/getting-started_scale-secure-cluster_third-node-pool.webp) + +Navigate back to your terminal and execute the following command in your terminal to view the nodes of your cluster. + +```shell +kubectl get nodes +``` + +The output reveals three nodes, two for worker pools and one for the control plane. Make a note of the names of your +worker nodes. In the example below, `vmware-cluster-worker-pool-7d6d76b55b-dhffq ` and +`vmware-cluster-worker-pool-2-5b4b559f6d-znbtm` are the worker nodes. + +```shell +NAME STATUS ROLES AGE VERSION +vmware-cluster-cp-xcqlw Ready control-plane 58m v1.28.11 +vmware-cluster-worker-pool-2-5b4b559f6d-znbtm Ready 30m v1.28.11 +vmware-cluster-worker-pool-7d6d76b55b-dhffq Ready 58m v1.28.11 +``` + +It is common to dedicate node pools to a particular type of workload. One way to specify this is through the use of +Kubernetes taints and tolerations. + +Taints provide nodes with the ability to repel a set of pods, allowing you to mark nodes as unavailable for certain +pods. Tolerations are applied to pods and allow the pods to schedule onto nodes with matching taints. Once configured, +nodes do not accept any pods that do not tolerate the taints. + +The animation below provides a visual representation of how taints and tolerations can be used to specify which +workloads execute on which nodes. + +![Taints repel pods to a new node](/getting-started/getting-started_scale-secure-cluster_taints-in-action.gif) + +Switch back to Palette in your web browser. Navigate to the left **Main Menu** and select **Profiles**. Select the +cluster profile deployed to your cluster, named `vmware-profile`. Ensure that the **1.1.0** version is selected. + +Click on the **hellouniverse 1.2.0** layer. The manifest editor appears. Set the +`manifests.hello-universe.ui.useTolerations` field on line 20 to `true`. Then, set the +`manifests.hello-universe.ui.effect` field on line 22 to `NoExecute`. This toleration describes that the UI pods of +Hello Universe will tolerate the taint with the key `app`, value `ui` and effect `NoExecute`. The tolerations of the UI +pods should be as below. + +```yaml +ui: + useTolerations: true + tolerations: + effect: NoExecute + key: app + value: ui +``` + +Click on **Confirm Updates**. The manifest editor closes. Then, click on **Save Changes** to persist your changes. + +Navigate to the left **Main Menu** and select **Clusters**. Select your deployed cluster, named **vmware-cluster**. + +Due to the changes you have made to the cluster profile, this cluster has a pending update. Click on **Updates**. The +**Changes Summary** dialog appears. + +Click on **Review Changes in Editor**. The **Review Update Changes** dialog appears. The toleration changes appear as +incoming configuration. + +Click on **Apply Changes** to apply the update to your cluster. + +Select the **Nodes** tab. Click on **Edit** on the first worker pool, named **worker-pool**. The **Edit node pool** +dialog appears. + +Click on **Add New Taint** in the **Taints** section. Fill in `app` for the **Key**, `ui` for the **Value** and select +`NoExecute` for the **Effect**. 
These values match the toleration you specified in your cluster profile earlier. + +![Add taint to worker pool](/getting-started/getting-started_scale-secure-cluster_add-taint.webp) + +Click on **Confirm** to save your changes. The nodes in the `worker-pool` can now only execute the UI pods that have a +toleration matching the configured taint. + +Switch back to your terminal. Execute the following command again to verify where the Hello Universe pods have been +scheduled. + +```shell +kubectl get pods --namespace hello-universe --output wide +``` + +The output verifies that the UI pods have remained scheduled on their original node named +`vmware-cluster-worker-pool-7d6d76b55b-dhffq`, while the other two pods have been moved to the node of the second worker +pool named `vmware-cluster-worker-pool-2-5b4b559f6d-znbtm`. + +```shell +NAME READY STATUS AGE NODE +api-7db799cf85-5w5l6 1/1 Running 20m vmware-cluster-worker-pool-2-5b4b559f6d-znbtm +postgres-698d7ff8f4-vbktf 1/1 Running 20m vmware-cluster-worker-pool-2-5b4b559f6d-znbtm +ui-5f777c76df-pplcv 1/1 Running 20m vmware-cluster-worker-pool-7d6d76b55b-dhffq +``` + +Taints and tolerations are a common way of creating nodes dedicated to certain workloads, once the cluster has scaled +accordingly through its provisioned node pools. Refer to the +[Taints and Tolerations](../../clusters/cluster-management/taints.md) guide to learn more. + +### Verify the Application + +Select the **Overview** tab. Click on the URL for port **:8080** to access the Hello Universe application and verify +that the application is functioning correctly. + +## Cleanup + +Use the following steps to remove all the resources you created for the tutorial. + +To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to +delete to access its details page. + +Click on **Settings** to expand the menu, and select **Delete Cluster**. + +![Delete cluster](/getting-started/vmware/getting-started_scale-secure-cluster_delete-cluster-button.webp) + +You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name `vmware-cluster` +to proceed with the delete step. The deletion process takes several minutes to complete. + +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force +delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette +automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you +created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the +selection to remove the cluster profile. + +Click on the **drop-down Menu** at the top of the page and switch to **Tenant Admin** scope. + +Navigate to the left **Main Menu** and click on **Projects**. + +Click on the **three-dot Menu** of the **Project-ScaleSecureTutorial** and select **Delete**. A pop-up box will ask you +to confirm the action. Confirm the deletion. + +Navigate to the left **Main Menu** and click on **Users & Teams**. Select the **Teams** tab. + +Click on **scale-secure-tutorial-team** list entry. The **Team Details** pane appears. Click on **Delete Team**. A +pop-up box will ask you to confirm the action. Confirm the deletion. 
+ +## Wrap-up + +In this tutorial, you learned how to perform very important operations relating to the scalability and availability of +your clusters. First, you created a project and team. Next, you imported a cluster profile and deployed a host VMware +vSphere cluster. Then, you upgraded the Kubernetes version of your cluster and scanned your clusters using Palette's +scanning capabilities. Finally, you scaled your cluster's nodes and used taints to select which Hello Universe pods +execute on them. + +We encourage you to check out the [Additional Capabilities](../additional-capabilities/additional-capabilities.md) to +explore other Palette functionalities. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/vmware/setup.md b/docs/docs-content/getting-started/vmware/setup.md new file mode 100644 index 0000000000..6e6e03d335 --- /dev/null +++ b/docs/docs-content/getting-started/vmware/setup.md @@ -0,0 +1,65 @@ +--- +sidebar_label: "Set up Palette" +title: "Set up Palette with VMware" +description: "Learn how to set up Palette with VMware." +icon: "" +hide_table_of_contents: false +sidebar_position: 10 +tags: ["getting-started", "vmware"] +--- + +In this guide, you will learn how to set up Palette for use with your VMware user account. These steps are required in +order to authenticate Palette and allow it to deploy host clusters. The concepts you learn about in the Getting Started +section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +- A Palette account with [tenant admin](../../tenant-settings/tenant-settings.md) access. + +- A [VMware vSphere](https://docs.vmware.com/en/VMware-vSphere/index.html) user account with the + [required permissions](../../clusters/data-center/vmware/permissions.md). + +## Enablement + +Palette needs access to your VMware user account in order to create and manage VMware resources. + +### Create a Palette API Key + + + +### Create and Upload an SSH Key + +Follow the steps below to create an SSH key using the terminal and upload it to Palette. This step is optional for the +[Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) tutorial. + + + +## Validate + +You can verify your Palette API key is added. + +1. Log in to [Palette](https://console.spectrocloud.com). + +2. Switch to the **Tenant Admin** scope. + +3. Navigate to the left **Main Menu** and select **Tenant Settings**. + +4. From the **Tenant Settings Menu**, select **API Keys**. + +5. Verify the API key is listed in the table with the correct user name and expiration date. + +## Next Steps + +Now that you set up Palette for use with VMware vSphere, you can start deploying a Private Cloud Gateway (PCG), which is +the bridge between Palette and your private infrastructure environment. + +To learn how to get started with deploying Kubernetes clusters to VMware virtual machines, we recommend that you +continue to the [Deploy a PCG with Palette CLI](./deploy-pcg.md) tutorial. 
+ +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/vmware/update-k8s-cluster.md b/docs/docs-content/getting-started/vmware/update-k8s-cluster.md new file mode 100644 index 0000000000..f3ad5d391d --- /dev/null +++ b/docs/docs-content/getting-started/vmware/update-k8s-cluster.md @@ -0,0 +1,305 @@ +--- +sidebar_label: "Deploy Cluster Profile Updates" +title: "Deploy Cluster Profile Updates" +description: "Learn how to update your deployed clusters using Palette Cluster Profiles." +icon: "" +hide_table_of_contents: false +sidebar_position: 40 +tags: ["getting-started", "vmware"] +--- + +Palette provides cluster profiles, which allow you to specify layers for your workloads using packs, Helm charts, Zarf +packages, or cluster manifests. Packs serve as blueprints to the provisioning and deployment process, as they contain +the versions of the container images that Palette will install for you. Cluster profiles provide consistency across +environments during the cluster creation process, as well as when maintaining your clusters. Check out +[Cluster Profiles](../introduction.md#cluster-profiles) to learn more. Once provisioned, there are three main ways to +update your Palette deployments. + +| Method | Description | Cluster application process | +| ------------------------ | ---------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Cluster profile versions | Create a new version of the cluster profile with your updates. | Select the new version of the cluster profile. Apply this new profile version to the clusters you want to update. | +| Cluster profile updates | Change the cluster profile in place. | Palette detects the difference between the provisioned resources and this profile. A pending update is available to clusters using this profile. Apply pending updates to the clusters you want to update. | +| Cluster overrides | Change the configuration of a single deployed cluster outside its cluster profile. | Save and apply the changes you've made to your cluster. | + +This tutorial will teach you how to update a cluster deployed with Palette to VMware vSphere. You will explore each +cluster update method and learn how to apply these changes using Palette. The concepts you learn about in the Getting +Started section are centered around a fictional case study company, Spacetastic Ltd. + +## 🧑‍🚀 Back at Spacetastic HQ + + + +## Prerequisites + +To complete this tutorial, follow the steps described in the [Set up Palette with VMware](./setup.md) guide to +authenticate Palette for use with your VMware user account. + +Additionally, you should install Kubectl locally. Use the Kubernetes +[Install Tools](https://kubernetes.io/docs/tasks/tools/) page for further guidance. + +Follow the instructions of the [Deploy a Cluster](./deploy-k8s-cluster.md) tutorial to deploy a cluster with the +[_hello-universe_](https://github.com/spectrocloud/hello-universe) application. Your cluster should be successfully +provisioned and in a healthy state. + +The cluster profile name is `vmware-profile` and the cluster name is `vmware-cluster`. 
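+
+You can quickly confirm that kubectl is available in your terminal before you begin. The command below is a minimal
+check, and the client version it reports will vary depending on your installation.
+
+```shell
+# Print the kubectl client version to confirm the tool is installed.
+kubectl version --client
+```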
+ +![Cluster details page with service URL highlighted](/getting-started/vmware/getting-started_deploy-k8s-cluster_service_url.webp) + +## Tag and Filter Clusters + +Palette provides the ability to add tags to your cluster profiles and clusters. This helps you organize and categorize +your clusters based on your custom criteria. You can add tags during the creation process or by editing the resource +after it has been created. + +Adding tags to your clusters helps you find and identify your clusters, without having to rely on cluster naming. This +is especially important when operating with many clusters or multiple cloud deployments. + +Navigate to the left **Main Menu** and select **Clusters** to view your deployed clusters. Find the `vmware-cluster` you +deployed with the _hello-universe_ application. Click on it to view its **Overview** tab. + +Click on the **Settings** drop-down Menu in the upper right corner and select **Cluster Settings**. + +Fill **service:hello-universe-frontend** in the **Tags (Optional)** input box. Click on **Save Changes**. Close the +panel. + +![Image that shows how to add a cluster tag](/getting-started/vmware/getting-started_update-k8s-cluster_add-service-tag.webp) + +Navigate to the left **Main Menu** and select **Clusters** to view your deployed clusters. Click on **Add Filter**, then +select the **Add custom filter** option. + +Use the drop-down boxes to fill in the values of the filter. Select **Tags** in the left-hand **drop-down Menu**. Select +**is** in the middle **drop-down Menu**. Fill in **service:hello-universe-frontend** in the right-hand input box. + +Click on **Apply Filter**. + +![Image that shows how to add a frontend service filter](/getting-started/vmware/getting-started_update-k8s-cluster_apply-frontend-filter.webp) + +Once you apply the filter, only the `vmware-cluster` with this tag is displayed. + +## Version Cluster Profiles + +Palette supports the creation of multiple cluster profile versions using the same profile name. This provides you with +better change visibility and control over the layers in your host clusters. Profile versions are commonly used for +adding or removing layers and pack configuration updates. + +The version number of a given profile must be unique and use the semantic versioning format `major.minor.patch`. If you +do not specify a version for your cluster profile, it defaults to **1.0.0**. + +Navigate to the left **Main Menu** and select **Profiles** to view the cluster profile page. Find the cluster profile +corresponding to your _hello-universe-frontend_ cluster. It should be named `vmware-profile`. Select it to view its +details. + +![Image that shows the frontend cluster profile with cluster linked to it](/getting-started/vmware/getting-started_update-k8s-cluster_profile-with-cluster.webp) + +The current version is displayed in the **drop-down Menu** next to the profile name. This profile has the default value +of **1.0.0**, as you did not specify another value when you created it. The cluster profile also shows the host clusters +that are currently deployed with this cluster profile version. + +Click on the version **drop-down Menu**. Select the **Create new version** option. + +A dialog box appears. Fill in the **Version** input with **1.1.0**. Click on **Confirm**. + +Palette creates a new cluster profile version and opens it. The version dropdown displays the newly created **1.1.0** +profile. This profile version is not deployed to any host clusters. 
+ +![Image that shows cluster profile version 1.1.0](/getting-started/vmware/getting-started_update-k8s-cluster_new-version-overview.webp) + +The version **1.1.0** has the same layers as the version **1.0.0** it was created from. + +Click on **Add New Pack**. Select the **Public Repo** registry and scroll down to the **Monitoring** section. Find the +**Kubecost** pack and select it. Alternatively, you can use the search function with the pack name **Kubecost**. + +![Image that shows how to select the Kubecost pack](/getting-started/vmware/getting-started_update-k8s-cluster_select-kubecost-pack.webp) + +Once selected, the pack manifest is displayed in the manifest editor. + +Click on **Confirm & Create**. The manifest editor closes. + +Click on **Save Changes** to finish the configuration of this cluster profile version. + +Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the +**service:hello-universe-frontend** tag. Select it to view its **Overview** tab. + +Select the **Profile** tab of this cluster. You can select a new version of your cluster profile by using the version +dropdown. + +Select the **1.1.0** version. + +![Image that shows how to select a new profile version for the cluster](/getting-started/vmware/getting-started_update-k8s-cluster_profile-version-selection.webp) + +Click on **Save** to confirm your profile version selection. + +:::warning + +Palette has backup and restore capabilities available for your mission critical workloads. Ensure that you have adequate +backups before you make any cluster profile version changes in your production environments. You can learn more in the +[Backup and Restore](../../clusters/cluster-management/backup-restore/backup-restore.md) section. + +::: + +Palette now makes the required changes to your cluster according to the specifications of the configured cluster profile +version. Once your changes have completed, Palette marks your layers with the green status indicator. The Kubecost pack +will be successfully deployed. + +![Image that shows completed cluster profile updates](/getting-started/vmware/getting-started_update-k8s-cluster_completed-cluster-updates.webp) + +Download the [kubeconfig](../../clusters/cluster-management/kubeconfig.md) file for your cluster from the Palette UI. +This file enables you and other users to issue kubectl commands against the host cluster. + +![Image that the kubeconfig file](/getting-started/vmware/getting-started_update-k8s-cluster_download-kubeconfig.webp) + +Open a terminal window and set the environment variable `KUBECONFIG` to point to the kubeconfig file you downloaded. + +```shell +export KUBECONFIG=~/Downloads/admin.vmware-cluster.kubeconfig +``` + +Forward the Kubecost UI to your local network. The Kubecost dashboard is not exposed externally by default, so the +command below will allow you to access it locally on port **9090**. If port 9090 is already taken, you can choose a +different one. + +```shell +kubectl port-forward --namespace kubecost deployment/cost-analyzer-cost-analyzer 9090 +``` + +Open your browser window and navigate to `http://localhost:9090`. The Kubecost UI provides you with a variety of cost +visualization tools. Read more about +[Navigating the Kubecost UI](https://docs.kubecost.com/using-kubecost/navigating-the-kubecost-ui) to make the most of +the cost analyzer. 
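+
+If the dashboard does not load, you can verify that the Kubecost workloads are running before retrying the
+port-forward. The command below is a minimal check; the exact pod names will differ in your environment, but the pods
+should report a **Running** status.
+
+```shell
+# List the pods deployed by the Kubecost pack in the kubecost namespace.
+kubectl get pods --namespace kubecost
+```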
+
+![Image that shows the Kubecost UI](/getting-started/getting-started_update-k8s-cluster_kubecost-ui.webp)
+
+Once you are done exploring locally, you can stop the `kubectl port-forward` command by closing the terminal window it
+is executing from.
+
+## Roll Back Cluster Profiles
+
+One of the key advantages of using cluster profile versions is that they make it possible to maintain a copy of
+previously known working states. The ability to roll back to a previously working cluster profile in one action shortens
+the time to recovery in the event of an incident.
+
+The process to roll back to a previous version is identical to the process for applying a new version.
+
+Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the
+**service:hello-universe-frontend** tag. Select it to view its **Overview** tab.
+
+Select the **Profile** tab. This cluster is currently deployed using cluster profile version **1.1.0**. Select the
+option **1.0.0** in the version dropdown. This process is the reverse of what you have done in the previous section,
+[Version Cluster Profiles](#version-cluster-profiles).
+
+Click on **Review & Save** to confirm your changes. The **Changes Summary** dialog appears again.
+
+Click on **Review changes in Editor**. The editor shows that the incoming version no longer contains the
+Kubecost pack.
+
+Click on **Apply Changes**. Select the **Overview** tab.
+
+Palette now makes the changes required for the cluster to return to the state specified in version **1.0.0** of your
+cluster profile. Once your changes have completed, Palette marks your layers with the green status indicator.
+
+![Cluster details page with service URL highlighted](/getting-started/vmware/getting-started_deploy-k8s-cluster_service_url.webp)
+
+## Pending Updates
+
+Cluster profiles can also be updated in place, without the need to create a new cluster profile version. Palette
+monitors the state of your clusters and notifies you when updates are available for your host clusters. You may then
+choose to apply your changes at a convenient time.
+
+The previous state of the cluster profile will not be saved once it is overwritten.
+
+Navigate to the left **Main Menu** and select **Clusters**. Filter for the cluster with the tag
+**service:hello-universe-frontend**. Select it to view its **Overview** tab.
+
+Select the **Profile** tab. Then, select the **hello-universe** pack. Change the `replicas` field to `2` on line `15`.
+Click on **Save**. The editor closes.
+
+This cluster now contains an override over its cluster profile. Palette uses the configuration you have just provided
+for this single cluster instead of the values in its cluster profile and begins making the appropriate changes.
+
+Once these changes are complete, select the **Workloads** tab. Then, select the **hello-universe** namespace.
+
+Two **ui** pods are available, instead of the one specified by your cluster profile. Your override has been successfully
+applied.
+
+Navigate to the left **Main Menu** and select **Profiles** to view the cluster profile page. Find the cluster profile
+corresponding to your _hello-universe-frontend_ cluster, named `vmware-profile`.
+
+Click on it to view its details. Select **1.0.0** in the version dropdown.
+
+Select the **hello-universe** pack. The editor appears. Change the `replicas` field to `3` on line `15`. Click on
+**Confirm Updates**. The editor closes.
+
+Click on **Save Changes** to confirm the changes you have made to your profile.
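+
+Before applying the pending update, you can optionally confirm the effect of your override from the terminal. The
+command below is a minimal check that reuses the `KUBECONFIG` environment variable you exported earlier in this
+tutorial. At this stage, it should list two **ui** pods, matching the override applied to the cluster.
+
+```shell
+# List the Hello Universe pods. The number of ui pods reflects the replica count set by your cluster override.
+kubectl get pods --namespace hello-universe
+```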
+ +Navigate to the left **Main Menu** and select **Clusters**. Filter for the clusters with the **service** tag. Both of +your clusters match this filter. Palette indicates that the cluster associated with the cluster profile you updated has +updates available. + +![Image that shows the pending updates ](/getting-started/vmware/getting-started_update-k8s-cluster_pending-update-clusters-view.webp) + +Select this cluster to open its **Overview** tab. Click on **Updates** to begin the cluster update. + +![Image that shows the Updates button](/getting-started/vmware/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp) + +A dialog appears which shows the changes made in this update. Review the changes and ensure the only change is the +`replicas` field value. The pending update removes your cluster override and sets the `replicas` field to `3`. At this +point, you can choose to apply the pending changes or keep it by modifying the right-hand side of the dialog. + +![Image that shows the available updates dialog ](/getting-started/vmware/getting-started_update-k8s-cluster_available-updates-dialog.webp) + +Click on **Confirm updates** once you have finished reviewing your changes. + +Palette updates your cluster according to cluster profile specifications. Once these changes are complete, select the +**Workloads** tab. Then, select the **hello-universe** namespace. + +Three **ui** pods are available. The cluster profile update is now reflected by your cluster. + +## Cluster Observability + + + +## Cleanup + +Use the following steps to remove all the resources you created for the tutorial. + +To remove the cluster, navigate to the left **Main Menu** and click on **Clusters**. Select the cluster you want to +delete to access its details page. + +Click on **Settings** to expand the menu, and select **Delete Cluster**. + +![Delete cluster](/getting-started/vmware/getting-started_deploy-k8s-cluster_delete-cluster-button.webp) + +You will be prompted to type in the cluster name to confirm the delete action. Type in the cluster name to proceed with +the delete step. The deletion process takes several minutes to complete. + +:::info + +If a cluster remains in the delete phase for over 15 minutes, it becomes eligible for a force delete. To trigger a force +delete, navigate to the cluster’s details page, click on **Settings**, then select **Force Delete Cluster**. Palette +automatically removes clusters stuck in the cluster deletion phase for over 24 hours. + +::: + +Once the cluster is deleted, navigate to the left **Main Menu** and click on **Profiles**. Find the cluster profile you +created and click on the **three-dot Menu** to display the **Delete** button. Select **Delete** and confirm the +selection to remove the cluster profile. + + + +## Wrap-Up + +In this tutorial, you created deployed cluster profile updates. After the cluster was deployed to VMware, you updated +the cluster profile through three different methods: create a new cluster profile version, update a cluster profile in +place, and cluster profile overrides. After you made your changes, the Hello Universe application functioned as a +three-tier application with a REST API backend server. + +Cluster profiles provide consistency during the cluster creation process, as well as when maintaining your clusters. +They can be versioned to keep a record of previously working cluster states, giving you visibility when updating or +rolling back workloads across your environments. 
+ +We recommend that you continue to the [Cluster Management with Terraform](./deploy-manage-k8s-cluster-tf.md) page to +learn about how you can use Palette with Terraform. + +## 🧑‍🚀 Catch up with Spacetastic + + diff --git a/docs/docs-content/getting-started/vmware/vmware.md b/docs/docs-content/getting-started/vmware/vmware.md new file mode 100644 index 0000000000..8a38d55860 --- /dev/null +++ b/docs/docs-content/getting-started/vmware/vmware.md @@ -0,0 +1,70 @@ +--- +sidebar_label: "Deploy a Cluster to VMware" +title: "Deploy a Cluster to VMware" +description: "Spectro Cloud Getting Started with VMware" +hide_table_of_contents: false +sidebar_custom_props: + icon: "" +tags: ["getting-started", "vmware"] +--- + +Palette supports integration with [VMware](https://www.vmware.com). You can deploy and manage +[Host Clusters](../../glossary-all.md#host-cluster) on VMware. The concepts you learn about in the Getting Started +section are centered around a fictional case study company. This approach gives you a solution focused approach, while +introducing you with Palette workflows and capabilities. + +## 🧑‍🚀 Welcome to Spacetastic! + + + +## Get Started + +In this section, you learn how to create a cluster profile. Then, you deploy a cluster to VMware vSphere using Palette. +Once your cluster is deployed, you can update it using cluster profile updates. + + diff --git a/docs/docs-content/getting-started/dashboard.md b/docs/docs-content/introduction/dashboard.md similarity index 99% rename from docs/docs-content/getting-started/dashboard.md rename to docs/docs-content/introduction/dashboard.md index 8736f6c7c9..2c9011ae8c 100644 --- a/docs/docs-content/getting-started/dashboard.md +++ b/docs/docs-content/introduction/dashboard.md @@ -4,7 +4,7 @@ title: "Palette Dashboard" description: "Explore the Spectro Cloud Palette Dashboard." icon: "" hide_table_of_contents: false -sidebar_position: 20 +sidebar_position: 10 tags: ["getting-started"] --- diff --git a/docs/docs-content/introduction/palette-modes.md b/docs/docs-content/introduction/palette-modes.md index 9c06d8c402..eef2f4a8a5 100644 --- a/docs/docs-content/introduction/palette-modes.md +++ b/docs/docs-content/introduction/palette-modes.md @@ -4,7 +4,7 @@ title: "App Mode and Cluster Mode" description: "Learn about the two modes available in Palette and how they benefit your Kubernetes experience." icon: "" hide_table_of_contents: false -sidebar_position: 0 +sidebar_position: 20 tags: ["app mode", "cluster mode"] --- diff --git a/docs/docs-content/introduction/resource-usage-estimation.md b/docs/docs-content/introduction/resource-usage-estimation.md index d90401fa45..895df64ca3 100644 --- a/docs/docs-content/introduction/resource-usage-estimation.md +++ b/docs/docs-content/introduction/resource-usage-estimation.md @@ -3,7 +3,7 @@ sidebar_label: "Resource Usage Calculation" title: "How Palette Calculates Your Resource Usage" description: "Learn what kCh is and how Palette measures your resource usage." 
hide_table_of_contents: false -sidebar_position: 20 +sidebar_position: 30 tags: ["usage", "kCh"] --- diff --git a/docs/docs-content/tenant-settings/projects/projects.md b/docs/docs-content/tenant-settings/projects/projects.md index ffe46b8820..3a4d156b0c 100644 --- a/docs/docs-content/tenant-settings/projects/projects.md +++ b/docs/docs-content/tenant-settings/projects/projects.md @@ -42,7 +42,7 @@ The following resources are scoped to a project by default: ## Project Dashboard -When a user logs in to Palette, the [project dashboard](../../getting-started/dashboard.md) is displayed by default. The +When a user logs in to Palette, the [project dashboard](../../introduction/dashboard.md) is displayed by default. The project dashboard displays a map containing all the clusters deployed in the project. A summary of the clusters deployed in the project, deleted, failed deployments, and the number of clusters pending an update is also displayed. diff --git a/docs/docs-content/tutorials/cluster-deployment/pcg/deploy-app-pcg.md b/docs/docs-content/tutorials/cluster-deployment/pcg/deploy-app-pcg.md index 30809471eb..4d155d27db 100644 --- a/docs/docs-content/tutorials/cluster-deployment/pcg/deploy-app-pcg.md +++ b/docs/docs-content/tutorials/cluster-deployment/pcg/deploy-app-pcg.md @@ -75,238 +75,11 @@ To complete this tutorial, you will need the following prerequisites in place. ## Authenticate with Palette -The initial step to deploy a PCG using Palette CLI involves authenticating with your Palette environment using the -[`palette login`](../../../automation/palette-cli/commands/login.md) command. - -In your terminal, execute the following command. - -```bash -palette login -``` - -Once issued, you will be prompted for several parameters to complete the authentication. The table below outlines the -required parameters along with the values that will be utilized in this tutorial. If a parameter is specific to your -environment and Palette account, such as your Palette API key, ensure to input the value according to your environment. -Check out the [Deploy a PCG to VMware vSphere](../../../clusters/pcg/deploy-pcg/vmware.md) guide for more information. -option. - -| **Parameter** | **Value** | **Environment-Specific** | -| ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ | -| **Spectro Cloud Console** | `https://console.spectrocloud.com`. If using a self-hosted instance of Palette, enter the URL for that instance. | No | -| **Allow Insecure Connection** | `Y`. Enabling this option bypasses x509 CA verification. In production environments, enter `Y` if you are using a self-hosted Palette or VerteX instance with self-signed TLS certificates and need to provide a file path to the instance CA. Otherwise, enter `N`. | No | -| **Spectro Cloud API Key** | Enter your Palette API Key. | Yes | -| **Spectro Cloud Organization** | Select your Palette Organization name. | Yes | -| **Spectro Cloud Project** | `None (TenantAdmin)` | No | -| **Acknowledge** | Accept the login banner message. [Login banner](../../../tenant-settings/login-banner.md) messages are only displayed if the tenant admin enabled a login banner. 
| Yes | - -After accepting the login banner message, you will receive the following output confirming you have successfully -authenticated with Palette. - -```text hideClipboard -Welcome to Spectro Cloud Palette -``` - -The video below demonstrates Palette's authentication process. Ensure you utilize values specific to your environment, -such as the correct Palette URL. Contact your Palette administrator for the correct URL if you use a self-hosted Palette -or VerteX instance. - - + ## Deploy a PCG with Palette CLI -After authenticating with Palette, you can proceed with the PCG creation process. Issue the command below to start the -PCG installation. - -```bash -palette pcg install -``` - -The `palette pcg install` command will prompt you for information regarding your PCG cluster, vSphere environment, and -resource configurations. The following tables display the required parameters along with the values that will be used in -this tutorial. Enter the provided values when prompted. If a parameter is specific to your environment, such as your -vSphere endpoint, enter the corresponding value according to your environment. For detailed information about each -parameter, refer to the [Deploy a PCG to VMware vSphere](../../../clusters/pcg/deploy-pcg/vmware.md) guide. - -:::info - -The PCG to be deployed in this tutorial is intended for educational purposes only and is not recommended for production -environments. - -::: - -1. **PCG General Information** - - Configure the PCG general information, including the **Cloud Type** and **Private Cloud Gateway Name**, as shown in - the table below. - - | **Parameter** | **Value** | **Environment-Specific** | - | :--------------------------------------------------- | ------------------ | ------------------------ | - | **Management Plane Type** | `Palette` | No | - | **Enable Ubuntu Pro (required for production)** | `N` | No | - | **Select an image registry type** | `Default` | No | - | **Cloud Type** | `VMware vSphere` | No | - | **Private Cloud Gateway Name** | `gateway-tutorial` | No | - | **Share PCG Cloud Account across platform Projects** | `Y` | No | - -2. **Environment Configuration** - - Enter the environment configuration information, such as the **Pod CIDR** and **Service IP Range** according to the - table below. - - | **Parameter** | **Value** | **Environment-Specific** | - | :------------------- | ------------------------------------------------------------------------------------------------------------------- | ------------------------ | - | **HTTPS Proxy** | Skip. | No | - | **HTTP Proxy** | Skip. | No | - | **Pod CIDR** | `172.16.0.0/20`. The pod IP addresses should be unique and not overlap with any machine IPs in the environment. | No | - | **Service IP Range** | `10.155.0.0/24`. The service IP addresses should be unique and not overlap with any machine IPs in the environment. | No | - -3. **vSphere Account Information** - - Enter the information specific to your vSphere account. - - | **Parameter** | **Value** | **Environment-Specific** | - | -------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------ | - | **vSphere Endpoint** | Your vSphere endpoint. You can specify a Full Qualified Domain Name (FQDN) or an IP address. 
Make sure you specify the endpoint without the HTTP scheme `https://` or `http://`. Example: `vcenter.mycompany.com`. | Yes | - | **vSphere Username** | Your vSphere account username. | Yes | - | **vSphere Password** | Your vSphere account password. | Yes | - | **Allow Insecure Connection (Bypass x509 Verification)** | `Y`. Enabling this option bypasses x509 CA verification. In production environments, enter `N` if using a custom registry with self-signed SSL certificates. Otherwise, enter `Y`. | No | - -4. **vSphere Cluster Configuration** - - Enter the PCG cluster configuration information. For example, specify the vSphere **Resource Pool** to be targeted by - the PCG cluster. - - | **Parameter** | **Value** | **Environment-Specific** | - | -------------------------------------------------------- | ---------------------------------------------------------------------- | ------------------------ | - | **Datacenter** | The vSphere data center to target when deploying the PCG cluster. | Yes | - | **Folder** | The vSphere folder to target when deploying the PCG cluster. | Yes | - | **Network** | The port group to which the PCG cluster will be connected. | Yes | - | **Resource Pool** | The vSphere resource pool to target when deploying the PCG cluster. | Yes | - | **Cluster** | The vSphere compute cluster to use for the PCG deployment. | Yes | - | **Select specific Datastore or use a VM Storage Policy** | `Datastore` | No | - | **Datastore** | The vSphere datastore to use for the PCG deployment. | Yes | - | **Add another Fault Domain** | `N` | No | - | **NTP Servers** | Skip. | No | - | **SSH Public Keys** | Provide a public OpenSSH key to be used to connect to the PCG cluster. | Yes | - -5. **PCG Cluster Size** - - This tutorial will deploy a one-node PCG with dynamic IP placement (DDNS). If needed, you can convert a single-node - PCG to a multi-node PCG to provide additional capacity. Refer to the - [Increase PCG Node Count](../../../clusters/pcg/manage-pcg/scale-pcg-nodes.md) guide for more information. - - | **Parameter** | **Value** | **Environment-Specific** | - | ------------------- | ---------------------------------------------------------------------------- | ------------------------ | - | **Number of Nodes** | `1` | No | - | **Placement Type** | `DDNS` | No | - | **Search domains** | Comma-separated list of DNS search domains. For example, `spectrocloud.dev`. | Yes | - -6. **Cluster Settings** - - Set the parameter **Patch OS on boot** to `N`, meaning the OS of the PCG hosts will not be patched on the first boot. - - | **Parameter** | **Value** | **Environment-Specific** | - | -------------------- | --------- | ------------------------ | - | **Patch OS on boot** | `N` | No | - -7. **vSphere Machine Configuration** - - Set the size of the PCG as small (**S**) as this PCG will not be used in production environments. - - | **Parameter** | **Value** | **Environment-Specific** | - | ------------- | --------------------------------------------- | ------------------------ | - | **S** | `4 CPU, 4 GB of Memory, and 60 GB of Storage` | No | - -8. **Node Affinity Configuration Information** - - Set **Node Affinity** to `N`, indicating no affinity between Palette pods and control plane nodes. 
- - | **Parameter** | **Value** | **Environment-Specific** | - | ----------------- | --------- | ------------------------ | - | **Node Affinity** | `N` | No | - -After answering the prompts of the `pcg install` command, a new PCG configuration file is generated, and its location is -displayed on the console. - -```text hideClipboard -==== PCG config saved ==== Location: /home/ubuntu/.palette/pcg/pcg-20240313152521/pcg.yaml -``` - -Next, Palette CLI will create a local [kind](https://kind.sigs.k8s.io/) cluster that will be used to bootstrap the PCG -cluster deployment in your VMware environment. Once installed, the PCG registers itself with Palette and creates a -VMware cloud account with the same name as the PCG. - -The following recording demonstrates the `pcg install` command with the `--config-only` flag. When using this flag, a -reusable configuration file named **pcg.yaml** is created under the path **.palette/pcg**. You can then utilize this -file to install a PCG with predefined values using the command `pcg install` with the `--config-file` flag. Refer to the -[Palette CLI PCG Command](../../../automation/palette-cli/commands/pcg.md) page for further information about the -command. - - - -
-
- -You can monitor the PCG cluster creation by logging into Palette and switching to the **Tenant Admin** scope. Next, -click on **Tenant Settings** from the left **Main Menu** and select **Private Cloud Gateways**. Then, click on the PCG -cluster you just created and check the deployment progress under the **Events** tab. - -![PCG Events page.](/clusters_pcg_deploy-app-pcg_pcg-events.webp) - -You can also track the PCG deployment progress from your terminal. Depending on the PCG size and infrastructure -environment, the deployment might take up to 30 minutes. Upon completion, the local kind cluster is automatically -deleted from your machine. - -:::tip - -To avoid potential vulnerabilities, once the installation is complete, remove the `kind` images that were installed in -the environment where you initiated the installation. - -
- -Remove `kind` Images - -Issue the following command to list all instances of `kind` that exist in the environment. - -```shell -docker images -``` - -```shell -REPOSITORY TAG IMAGE ID CREATED SIZE -kindest/node v1.26.13 131ad18222cc 5 months ago 910MB -``` - -Then, use the following command template to remove all instances of `kind`. - -```shell -docker image rm kindest/node: -``` - -Consider the following example for reference. - -```shell -docker image rm kindest/node:v1.26.13 -``` - -```shell -Untagged: kindest/node:v1.26.13 -Untagged: kindest/node@sha256:15ae92d507b7d4aec6e8920d358fc63d3b980493db191d7327541fbaaed1f789 -Deleted: sha256:131ad18222ccb05561b73e86bb09ac3cd6475bb6c36a7f14501067cba2eec785 -Deleted: sha256:85a1a4dfc468cfeca99e359b74231e47aedb007a206d0e2cae2f8290e7290cfd -``` - -
- -::: - -![Palette CLI PCG deployment](/clusters_pcg_deploy-app-pcg_pcg-cli.webp) - -Next, log in to Palette as a tenant admin. Navigate to the left **Main Menu** and select **Tenant Settings**. Click on -**Private Cloud Gateways** from the **Tenant Settings Menu** and select the PCG you just created. Ensure that the PCG -cluster status is **Running** and **Healthy** before proceeding. - -![PCG Overview page.](/clusters_pcg_deploy-app-pcg_pcg-health.webp) + ## Create a Cluster Profile and Deploy a Cluster @@ -653,22 +426,7 @@ Destroy complete! Resources: 5 destroyed. ### Delete the PCG -After deleting your VMware cluster and cluster profile, proceed with the PCG deletion. Log in to Palette as a tenant -admin, navigate to the left **Main Menu** and select **Tenant Settings**. Next, from the **Tenant Settings Menu**, click -on **Private Cloud Gateways**. Identify the PCG you want to delete, click on the **Three-Dot Menu** at the end of the -PCG row, and select **Delete**. Click **OK** to confirm the PCG deletion. - -![Delete PCG image](/clusters_pcg_deploy-app-pcg_pcg-delete.webp) - -Palette will delete the PCG and the Palette services deployed on the PCG node. However, the underlying infrastructure -resources, such as the virtual machine, must be removed manually from VMware vSphere. - -Log in to your VMware vSphere server and select the VM representing the PCG node named `gateway-tutorial-cp`. Click on -the **Three-Dot Actions** button, select **Power**, and **Power Off** to power off the machine. Once the machine is -powered off, click on the **Three-Dot Actions** button again and select **Delete from Disk** to remove the machine from -your VMware vSphere environment. - -![Delete VMware VM](/clusters_pcg_deploy-app-pcg_vmware-delete.webp) + ## Wrap-Up diff --git a/docs/docs-content/user-management/authentication/api-key/create-api-key.md b/docs/docs-content/user-management/authentication/api-key/create-api-key.md index 86390bec9a..1263496443 100644 --- a/docs/docs-content/user-management/authentication/api-key/create-api-key.md +++ b/docs/docs-content/user-management/authentication/api-key/create-api-key.md @@ -70,35 +70,7 @@ Ensure you save the API key in a secure location. You will not be able to view t -1. Log in to [Palette](https://console.spectrocloud.com) as a tenant admin. - -2. Switch to the **Tenant Admin** scope - -3. Navigate to the left **Main Menu** and select **Tenant Settings**. - -4. From the **Tenant Settings Menu**, select **API Keys**. - -5. Click on **Add New API key**. - -6. Fill out the following input fields: - -| **Input Field** | **Description** | -| ------------------- | ----------------------------------------------------------------------------------------------------------------- | -| **API Key Name** | Assign a name to the API key. | -| **Description** | Provide a description for the API key. | -| **User Name** | Select the user to assign the API key. | -| **Expiration Date** | Select an expiration date from the available options. You can also specify a custom date by selecting **Custom**. | - -5. Click the **Generate** button. - -6. Copy the API key and save it in a secure location, such as a password manager. Share the API key with the user you - created the API key for. - -:::warning - -Ensure you save the API key in a secure location. You will not be able to view the API key again. 
- -::: + diff --git a/docs/docs-content/vertex/system-management/ssl-certificate-management.md b/docs/docs-content/vertex/system-management/ssl-certificate-management.md index f275de2277..9f5d5f54e6 100644 --- a/docs/docs-content/vertex/system-management/ssl-certificate-management.md +++ b/docs/docs-content/vertex/system-management/ssl-certificate-management.md @@ -12,8 +12,8 @@ keywords: ["self-hosted", "vertex"] Palette VerteX uses Secure Sockets Layer (SSL) certificates to secure internal and external communication with Hypertext Transfer Protocol Secure (HTTPS). External VerteX endpoints, such as the [system console](../system-management/system-management.md#system-console), -[VerteX dashboard](../../getting-started/dashboard.md), the VerteX API, and the gRPC endpoint, are enabled by default -with HTTPS using an auto-generated self-signed certificate. +[VerteX dashboard](../../introduction/dashboard.md), the VerteX API, and the gRPC endpoint, are enabled by default with +HTTPS using an auto-generated self-signed certificate. ## Update System Address and Certificates diff --git a/docs/docs-content/vm-management/create-manage-vm/advanced-topics/deploy-import-ova.md b/docs/docs-content/vm-management/create-manage-vm/advanced-topics/deploy-import-ova.md index 4c1a5dd031..7ff5b96ab1 100644 --- a/docs/docs-content/vm-management/create-manage-vm/advanced-topics/deploy-import-ova.md +++ b/docs/docs-content/vm-management/create-manage-vm/advanced-topics/deploy-import-ova.md @@ -67,8 +67,9 @@ kubeadmconfig: SystemdCgroup = true ``` -If you are in a proxied environment, you must configure the CDI custom resource in order to deploy to a `DataVolume`. -Refer to the +If you are in a proxied environment, you must configure the +[Containerized Data Importer](https://kubevirt.io/user-guide/storage/containerized_data_importer/) (CDI) custom resource +in order to deploy to a `DataVolume`. Refer to the [CDI Configuration](https://github.com/kubevirt/containerized-data-importer/blob/main/doc/cdi-config.md#options) documentation. @@ -254,18 +255,18 @@ name, for example `cdi-uploadproxy.mycompany.io`, to the Nginx load balancer’s The Palette CLI prompts you for information regarding the OVA you want to import. - | **Parameter** | **Description** | **Values** | - | ------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | - | **OVA Path** | The path to the image your have uploaded to your VM. The path for the example provided is `/root/bitnami-wordpress-6.2.2-r1-debian-11-amd64.ova/`. | | - | **Container Disk Upload Method** | Indicate whether to upload the image directly to the target cluster as a `DataVolume` or build and push a Docker image. You will need to provide an existing image registry if you select Docker. | `DataVolume` / `Docker Image` | - | **Kubeconfig Path** | The path to the kubeconfig file you have uploaded to your VM. | | - | **DataVolume Namespace** | The namespace to create your `DataVolume`, if you selected this option previously. | | - | **DataVolume Name** | The name of your `DataVolume`. 
| | - | **Overhead Percentage for DataVolume Size** | Set an overhead percentage for your `DataVolume` compared to the OVA specification. This parameter is optional and can be skipped with the value `-1`. If skipped, the filesystem overhead percentage will be inferred from the CDI Custom Resource in your VMO cluster. Refer to the [CDI Configuration](https://github.com/kubevirt/containerized-data-importer/blob/main/doc/cdi-config.md#options) for further details. | | - | **Access Mode for the PVC** | Set the access mode for your `DataVolume`. Ensure that your configured CSI supports your selection. | `ReadWriteMany` / `ReadWriteOnce` | - | **Create a PVC with VolumeMode=Block** | Indicate whether to set `Block` volume mode on the `DataVolume`. | `y` / `N` | - | **StorageClass** | The storage class on the destination that will be used to create the VM volume. | | - | **CDI Upload Proxy URL** | Optionally, provide a URL to upload the CDI custom resource. If you have configured a CDI as part of your environment, specify `https://cdi-uploadproxy.mycompany.io`. Refer to the [Prerequisites](#prerequisites) section for configuration details. | | + | **Parameter** | **Description** | **Values** | + | ------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | + | **OVA Path** | The path to the OVA you uploaded to your VM. The path for the example provided is `/root/bitnami-wordpress-6.2.2-r1-debian-11-amd64.ova/`. | | + | **Container Disk Upload Method** | Indicate whether to upload the image directly to the target cluster as a `DataVolume` or build and push a Docker image. You will need to provide an existing image registry if you select Docker. | `DataVolume` / `Docker Image` | + | **Kubeconfig Path** | The path to the kubeconfig file you have uploaded to your VM. | | + | **DataVolume Namespace** | The namespace to create your `DataVolume`, if you selected this option previously. | | + | **DataVolume Name** | The name of your `DataVolume`. | | + | **Overhead Percentage for DataVolume Size** | Set an overhead percentage for your `DataVolume` compared to the OVA specification. This parameter is optional and can be skipped with the value `-1`. If skipped, the filesystem overhead percentage will be inferred from the CDI Custom Resource in your VMO cluster. Refer to the [CDI Configuration](https://github.com/kubevirt/containerized-data-importer/blob/main/doc/cdi-config.md#options) for further details. | | + | **Access Mode for the PVC** | Set the access mode for your `DataVolume`. Ensure that your configured CSI supports your selection. | `ReadWriteMany` / `ReadWriteOnce` | + | **Create a PVC with VolumeMode=Block** | Indicate whether to set `Block` volume mode on the `DataVolume`. | `y` / `N` | + | **StorageClass** | The storage class on the destination that will be used to create the VM volume. 
| | + | **CDI Upload Proxy URL** | Optionally provide a custom CDI upload proxy URL. If ingress is configured for the CDI upload proxy, the ingress hostname will be used by default and must be resolvable via DNS. If the CDI upload proxy is exposed via a NodePort, a node IP and ephemeral port will be used. Depending on how CDI and DNS are configured, you may need to edit `/etc/hosts` to ensure DNS resolution. You may also port-forward the CDI upload proxy via `kubectl --namespace cdi port-forward deployment/cdi-uploadproxy 8443` and provide `https://localhost:8443` as the CDI upload proxy URL. However, this approach will be less efficient. | | 12. The import may take a few minutes to complete. The Palette CLI outputs the path for your OVA configuration file. Make a note of it. diff --git a/redirects.js b/redirects.js index 902ebd4511..ae078511fc 100644 --- a/redirects.js +++ b/redirects.js @@ -76,8 +76,16 @@ const redirects = [ to: `/getting-started/`, }, { - from: `/clusters/public-cloud/eks/`, - to: `/clusters/public-cloud/aws/eks/`, + from: `/getting-started/dashboard`, + to: `/introduction/dashboard`, + }, + { + from: `/getting-started/cluster-profiles`, + to: `/getting-started/introduction`, + }, + { + from: `/clusters/public-cloud/eks`, + to: `/clusters/public-cloud/aws/eks`, }, { from: `/clusters/public-cloud/aks/`, diff --git a/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_add-pack.webp b/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_add-pack.webp new file mode 100644 index 0000000000..abb0a79fc8 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_add-pack.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_clusters_parameters.webp b/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_clusters_parameters.webp index 8925465b36..dcff1e100a 100644 Binary files a/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_clusters_parameters.webp and b/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_clusters_parameters.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_create-cluster-profile_manifest.webp b/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_manifest.webp similarity index 100% rename from static/assets/docs/images/getting-started/getting-started_create-cluster-profile_manifest.webp rename to static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_manifest.webp diff --git a/static/assets/docs/images/getting-started/getting-started_create-cluster-profile_manifest_blue_btn.webp b/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_manifest_blue_btn.webp similarity index 100% rename from static/assets/docs/images/getting-started/getting-started_create-cluster-profile_manifest_blue_btn.webp rename to static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_manifest_blue_btn.webp diff --git a/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_pack-presets.webp b/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_pack-presets.webp new file mode 100644 index 0000000000..d0c7a4a10c Binary files /dev/null and 
b/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_pack-presets.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_pack-readme.webp b/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_pack-readme.webp new file mode 100644 index 0000000000..be1fb42d08 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_create-cluster-profile_pack-readme.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster-tf_create_cluster.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster-tf_create_cluster.webp new file mode 100644 index 0000000000..cb3d75ab18 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster-tf_create_cluster.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster-tf_event_log.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster-tf_event_log.webp new file mode 100644 index 0000000000..7469124440 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster-tf_event_log.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster-tf_profile_review.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster-tf_profile_review.webp new file mode 100644 index 0000000000..5d865446d4 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster-tf_profile_review.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp index 0f9e681485..903ff3c581 100644 Binary files a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_basic_info.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_basic_info.webp index 2a67c52733..1a5205cac0 100644 Binary files a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_basic_info.webp and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_basic_info.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_creation_parameters.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_creation_parameters.webp index 97d810712e..4370f50166 100644 Binary files a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_creation_parameters.webp and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_clusters_creation_parameters.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_create_cluster.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_create_cluster.webp index 5d40c18fe2..29a5f4090f 100644 Binary 
files a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_create_cluster.webp and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_create_cluster.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_delete-cluster-button.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_delete-cluster-button.webp new file mode 100644 index 0000000000..5c87fda32e Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_delete-cluster-button.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_event_log.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_event_log.webp new file mode 100644 index 0000000000..d1cbba09d8 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_event_log.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_profile_cluster_profile_review.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_profile_cluster_profile_review.webp index 2db8348ec7..fd9ac472f6 100644 Binary files a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_profile_cluster_profile_review.webp and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_profile_cluster_profile_review.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_service_url.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_service_url.webp new file mode 100644 index 0000000000..5794cfdf8b Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-k8s-cluster_service_url.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp new file mode 100644 index 0000000000..5e8171dfd0 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp new file mode 100644 index 0000000000..ca7af5ac9d Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp new file mode 100644 index 0000000000..512cf3eeaa Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp new file mode 100644 index 
0000000000..db5c0e58cd Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp new file mode 100644 index 0000000000..6022dc8564 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_kubecost.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_kubecost.webp new file mode 100644 index 0000000000..717013585a Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp new file mode 100644 index 0000000000..06c1646959 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp new file mode 100644 index 0000000000..fbcc0c3aad Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp new file mode 100644 index 0000000000..e4a04abf28 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_reconciliation.webp b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_reconciliation.webp new file mode 100644 index 0000000000..1ea3a72e53 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_deploy-manage-k8s-cluster_reconciliation.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_cluster-profile-created.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_cluster-profile-created.webp new file mode 100644 index 0000000000..4c4ae7c386 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_cluster-profile-created.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_delete-cluster-button.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_delete-cluster-button.webp new file mode 100644 index 0000000000..fd60dc7366 Binary files /dev/null and 
b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_delete-cluster-button.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_download-kubeconfig.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_download-kubeconfig.webp new file mode 100644 index 0000000000..6ee8fe58c7 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_download-kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp new file mode 100644 index 0000000000..307992cbff Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_kubernetes-versions.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_kubernetes-versions.webp new file mode 100644 index 0000000000..33b8475c52 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_kubernetes-versions.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_never-scanned-cluster.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_never-scanned-cluster.webp new file mode 100644 index 0000000000..918bd55322 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_never-scanned-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_node-repaves.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_node-repaves.webp new file mode 100644 index 0000000000..2735e95425 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_node-repaves.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_pack-presets.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_pack-presets.webp new file mode 100644 index 0000000000..c032d81f9b Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_pack-presets.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_scans-completed.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_scans-completed.webp new file mode 100644 index 0000000000..42894b789c Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_scans-completed.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_scans-schedules.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_scans-schedules.webp new file mode 100644 index 0000000000..4c203c9228 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_scans-schedules.webp differ diff --git 
a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_service_url.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_service_url.webp new file mode 100644 index 0000000000..4de9ea6f39 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_service_url.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_third-node-pool.webp b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_third-node-pool.webp new file mode 100644 index 0000000000..d46b2ecd8f Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_scale-secure-cluster_third-node-pool.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_add-service-tag.webp b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_add-service-tag.webp new file mode 100644 index 0000000000..4a6b0f750c Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_add-service-tag.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_apply-frontend-filter.webp b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_apply-frontend-filter.webp new file mode 100644 index 0000000000..9bf322392a Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_apply-frontend-filter.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_available-updates-dialog.webp b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_available-updates-dialog.webp new file mode 100644 index 0000000000..467c1c1e21 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_available-updates-dialog.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_completed-cluster-updates.webp b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_completed-cluster-updates.webp new file mode 100644 index 0000000000..5d4f78f812 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_completed-cluster-updates.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_download-kubeconfig.webp b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_download-kubeconfig.webp new file mode 100644 index 0000000000..6d18407c79 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_download-kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_new-version-overview.webp b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_new-version-overview.webp new file mode 100644 index 0000000000..7333d8309e Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_new-version-overview.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_pending-update-clusters-view.webp 
b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_pending-update-clusters-view.webp new file mode 100644 index 0000000000..fe845bd0b8 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_pending-update-clusters-view.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_profile-version-selection.webp b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_profile-version-selection.webp new file mode 100644 index 0000000000..d2b38ec18b Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_profile-version-selection.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_profile-with-cluster.webp new file mode 100644 index 0000000000..7e1e124cb8 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_profile-with-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_select-kubecost-pack.webp b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_select-kubecost-pack.webp new file mode 100644 index 0000000000..3b5bca4a3f Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_select-kubecost-pack.webp differ diff --git a/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp new file mode 100644 index 0000000000..2a4336e301 Binary files /dev/null and b/static/assets/docs/images/getting-started/aws/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_add-pack.webp b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_add-pack.webp new file mode 100644 index 0000000000..0bec9ce216 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_add-pack.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_cluster_profile_stack.webp b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_cluster_profile_stack.webp index a75f011675..e1614a50f1 100644 Binary files a/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_cluster_profile_stack.webp and b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_cluster_profile_stack.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_manifest.webp b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_manifest.webp new file mode 100644 index 0000000000..49828e253b Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_manifest.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_manifest_blue_btn.webp 
b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_manifest_blue_btn.webp new file mode 100644 index 0000000000..a1680f9614 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_manifest_blue_btn.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_pack-presets.webp b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_pack-presets.webp new file mode 100644 index 0000000000..6d1f4f2200 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_pack-presets.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_pack-readme.webp b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_pack-readme.webp new file mode 100644 index 0000000000..90b68f0556 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_create-cluster-profile_pack-readme.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster-tf_profile_review.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster-tf_profile_review.webp new file mode 100644 index 0000000000..5e6f740302 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster-tf_profile_review.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp index 16cfe0d72e..41d3e493fd 100644 Binary files a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_clusters_basic_info.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_clusters_basic_info.webp index ca20544a25..00defa5a6d 100644 Binary files a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_clusters_basic_info.webp and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_clusters_basic_info.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_create_cluster.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_create_cluster.webp index 406bbbe6ee..8c745af678 100644 Binary files a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_create_cluster.webp and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_create_cluster.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_delete-cluster-button.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_delete-cluster-button.webp new file mode 100644 index 0000000000..b4b677c13a Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_delete-cluster-button.webp differ diff --git 
a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_event_log.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_event_log.webp new file mode 100644 index 0000000000..9ab9507fbd Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_event_log.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_hello-universe-with-api.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_hello-universe-with-api.webp new file mode 100644 index 0000000000..da7d5c9c2f Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_hello-universe-with-api.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_parameters.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_parameters.webp index e210a6654f..1583749424 100644 Binary files a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_parameters.webp and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_parameters.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_profile_review.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_profile_review.webp index 3a90c96868..6297b22b80 100644 Binary files a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_profile_review.webp and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_profile_review.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_service_url.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_service_url.webp new file mode 100644 index 0000000000..6aa6186093 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-k8s-cluster_service_url.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp new file mode 100644 index 0000000000..eec1bec82f Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp new file mode 100644 index 0000000000..c9988c912a Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp new file mode 100644 index 0000000000..a57951a53f Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp differ diff --git 
a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp new file mode 100644 index 0000000000..da7d5c9c2f Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp new file mode 100644 index 0000000000..919aa9400e Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubecost.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubecost.webp new file mode 100644 index 0000000000..7ba1da2f0a Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp new file mode 100644 index 0000000000..c89c110113 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp new file mode 100644 index 0000000000..a9de76dbf1 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp new file mode 100644 index 0000000000..7347f7da81 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_reconciliation.webp b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_reconciliation.webp new file mode 100644 index 0000000000..8558cc808a Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_deploy-manage-k8s-cluster_reconciliation.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_cluster-profile-created.webp b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_cluster-profile-created.webp new file mode 100644 index 0000000000..cc8cb3c0cd Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_cluster-profile-created.webp differ diff --git 
a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_delete-cluster-button.webp b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_delete-cluster-button.webp new file mode 100644 index 0000000000..ff84d2cdde Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_delete-cluster-button.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_download-kubeconfig.webp b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_download-kubeconfig.webp new file mode 100644 index 0000000000..84a78191f3 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_download-kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp new file mode 100644 index 0000000000..aa30720228 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_kubernetes-versions.webp b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_kubernetes-versions.webp new file mode 100644 index 0000000000..b79ea8a920 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_kubernetes-versions.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_never-scanned-cluster.webp b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_never-scanned-cluster.webp new file mode 100644 index 0000000000..eec2a8092e Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_never-scanned-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_node-repaves.webp b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_node-repaves.webp new file mode 100644 index 0000000000..2566114bf6 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_node-repaves.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_pack-presets.webp b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_pack-presets.webp new file mode 100644 index 0000000000..067027438d Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_pack-presets.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_scans-completed.webp b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_scans-completed.webp new file mode 100644 index 0000000000..68772ab92c Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_scans-completed.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_scans-schedules.webp 
b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_scans-schedules.webp new file mode 100644 index 0000000000..70da2f55b7 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_scans-schedules.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_service_url.webp b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_service_url.webp new file mode 100644 index 0000000000..f21da0545e Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_service_url.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_third-node-pool.webp b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_third-node-pool.webp new file mode 100644 index 0000000000..d596fff143 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_scale-secure-cluster_third-node-pool.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_add-service-tag.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_add-service-tag.webp new file mode 100644 index 0000000000..90a1244707 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_add-service-tag.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_apply-frontend-filter.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_apply-frontend-filter.webp new file mode 100644 index 0000000000..6f7c27afab Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_apply-frontend-filter.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_available-updates-dialog.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_available-updates-dialog.webp new file mode 100644 index 0000000000..1099f995dc Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_available-updates-dialog.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_cluster-healthy.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_cluster-healthy.webp new file mode 100644 index 0000000000..bfb55a1fb2 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_cluster-healthy.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_completed-cluster-updates.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_completed-cluster-updates.webp new file mode 100644 index 0000000000..477f52d107 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_completed-cluster-updates.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_deployed-clusters-start-setup.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_deployed-clusters-start-setup.webp similarity index 100% rename from 
static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_deployed-clusters-start-setup.webp rename to static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_deployed-clusters-start-setup.webp diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_download-kubeconfig.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_download-kubeconfig.webp new file mode 100644 index 0000000000..37e8adb34a Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_download-kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_kubecost-ui.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_kubecost-ui.webp new file mode 100644 index 0000000000..7ba1da2f0a Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_kubecost-ui.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_new-version-overview.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_new-version-overview.webp new file mode 100644 index 0000000000..29ce6153d2 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_new-version-overview.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_pending-update-clusters-view.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_pending-update-clusters-view.webp new file mode 100644 index 0000000000..bee498c20f Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_pending-update-clusters-view.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_profile-version-selection.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_profile-version-selection.webp new file mode 100644 index 0000000000..d09de7757b Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_profile-version-selection.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_profile-with-cluster.webp new file mode 100644 index 0000000000..12557c0fb2 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_profile-with-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_rollback.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_rollback.webp new file mode 100644 index 0000000000..1f1d6e853a Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_rollback.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_select-kubecost-pack.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_select-kubecost-pack.webp new file mode 100644 index 0000000000..2b86e7ba05 Binary files /dev/null and 
b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_select-kubecost-pack.webp differ diff --git a/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp new file mode 100644 index 0000000000..de17b2bff9 Binary files /dev/null and b/static/assets/docs/images/getting-started/azure/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_add-pack.webp b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_add-pack.webp new file mode 100644 index 0000000000..7c75e9679c Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_add-pack.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_cluster_profile_stack.webp b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_cluster_profile_stack.webp index b6fc07a261..83f5269f58 100644 Binary files a/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_cluster_profile_stack.webp and b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_cluster_profile_stack.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_manifest.webp b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_manifest.webp new file mode 100644 index 0000000000..63457c3086 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_manifest.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_manifest_blue_btn.webp b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_manifest_blue_btn.webp new file mode 100644 index 0000000000..e4bb9175cb Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_manifest_blue_btn.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_pack-presets.webp b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_pack-presets.webp new file mode 100644 index 0000000000..de607ef5e6 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_pack-presets.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_pack-readme.webp b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_pack-readme.webp new file mode 100644 index 0000000000..9504dbb493 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_create-cluster-profile_pack-readme.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster-tf_profile_review.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster-tf_profile_review.webp new file mode 100644 index 0000000000..09446cfdd1 Binary files /dev/null and 
b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster-tf_profile_review.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_basic_info.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_basic_info.webp index 0858719fb7..a3bfb2b94d 100644 Binary files a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_basic_info.webp and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_basic_info.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp index 91823489bc..2a57e47301 100644 Binary files a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_cluster_nodes_config.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_clusters_parameters.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_clusters_parameters.webp index 6c2945b177..aaaf28dec2 100644 Binary files a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_clusters_parameters.webp and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_clusters_parameters.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_delete-cluster-button.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_delete-cluster-button.webp new file mode 100644 index 0000000000..7f3f670f56 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_delete-cluster-button.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_event_log.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_event_log.webp new file mode 100644 index 0000000000..6e0db95b27 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_event_log.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_new_cluster.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_new_cluster.webp index afe3a541fc..4570dc3705 100644 Binary files a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_new_cluster.webp and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_new_cluster.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_profile_review.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_profile_review.webp index beefc295f8..da7eb7f70c 100644 Binary files a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_profile_review.webp and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_profile_review.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_service_url.webp 
b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_service_url.webp new file mode 100644 index 0000000000..8f43dde350 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-k8s-cluster_service_url.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp new file mode 100644 index 0000000000..ed094c1606 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp new file mode 100644 index 0000000000..8ee2af297e Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp new file mode 100644 index 0000000000..09f64996e6 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp new file mode 100644 index 0000000000..db5c0e58cd Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp new file mode 100644 index 0000000000..36a5672bce Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubecost.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubecost.webp new file mode 100644 index 0000000000..86d1378bdd Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp new file mode 100644 index 0000000000..80f8934db7 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp new file mode 100644 index 
0000000000..7210f528f5 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp new file mode 100644 index 0000000000..12c1480f59 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_reconciliation.webp b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_reconciliation.webp new file mode 100644 index 0000000000..41d674121d Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_deploy-manage-k8s-cluster_reconciliation.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_cluster-profile-created.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_cluster-profile-created.webp new file mode 100644 index 0000000000..bc8e75f3d2 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_cluster-profile-created.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_delete-cluster-button.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_delete-cluster-button.webp new file mode 100644 index 0000000000..3b1ba8eecd Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_delete-cluster-button.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_download-kubeconfig.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_download-kubeconfig.webp new file mode 100644 index 0000000000..28bc7e94b6 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_download-kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp new file mode 100644 index 0000000000..42510eac09 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_kubernetes-versions.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_kubernetes-versions.webp new file mode 100644 index 0000000000..c7eb5b3332 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_kubernetes-versions.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_never-scanned-cluster.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_never-scanned-cluster.webp new file mode 100644 index 0000000000..f8bdf078b8 Binary files /dev/null and 
b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_never-scanned-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_node-repaves.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_node-repaves.webp new file mode 100644 index 0000000000..74710e7aa3 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_node-repaves.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_pack-presets.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_pack-presets.webp new file mode 100644 index 0000000000..b38e413c94 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_pack-presets.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_scans-completed.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_scans-completed.webp new file mode 100644 index 0000000000..3921853914 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_scans-completed.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_scans-schedules.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_scans-schedules.webp new file mode 100644 index 0000000000..f2f44d09e8 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_scans-schedules.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_service_url.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_service_url.webp new file mode 100644 index 0000000000..2650ab8f4e Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_service_url.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_third-node-pool.webp b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_third-node-pool.webp new file mode 100644 index 0000000000..f50bf726ea Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_scale-secure-cluster_third-node-pool.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_add-service-tag.webp b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_add-service-tag.webp new file mode 100644 index 0000000000..99fe3319be Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_add-service-tag.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_apply-frontend-filter.webp b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_apply-frontend-filter.webp new file mode 100644 index 0000000000..35d4791160 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_apply-frontend-filter.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_available-updates-dialog.webp 
b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_available-updates-dialog.webp new file mode 100644 index 0000000000..1851c88517 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_available-updates-dialog.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_completed-cluster-updates.webp b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_completed-cluster-updates.webp new file mode 100644 index 0000000000..9492452843 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_completed-cluster-updates.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_deployed-clusters-start-setup.webp b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_deployed-clusters-start-setup.webp new file mode 100644 index 0000000000..84f77bbe47 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_deployed-clusters-start-setup.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_download-kubeconfig.webp b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_download-kubeconfig.webp new file mode 100644 index 0000000000..76b0b31ceb Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_download-kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_new-version-overview.webp b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_new-version-overview.webp new file mode 100644 index 0000000000..85c4f4c7ed Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_new-version-overview.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_pending-update-clusters-view.webp b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_pending-update-clusters-view.webp new file mode 100644 index 0000000000..ca87ca897e Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_pending-update-clusters-view.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_profile-version-selection.webp b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_profile-version-selection.webp new file mode 100644 index 0000000000..e1554c9dc0 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_profile-version-selection.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_profile-with-cluster.webp new file mode 100644 index 0000000000..4ed90a1d56 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_profile-with-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_select-kubecost-pack.webp 
b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_select-kubecost-pack.webp new file mode 100644 index 0000000000..586f6140a5 Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_select-kubecost-pack.webp differ diff --git a/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp new file mode 100644 index 0000000000..1a22e01e3b Binary files /dev/null and b/static/assets/docs/images/getting-started/gcp/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_delete-cluster-button.webp b/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_delete-cluster-button.webp deleted file mode 100644 index 9a8ea2babe..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_delete-cluster-button.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_event_log.webp b/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_event_log.webp deleted file mode 100644 index 8eb8a58367..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_event_log.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_hello-universe-with-api.webp b/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_hello-universe-with-api.webp new file mode 100644 index 0000000000..db5c0e58cd Binary files /dev/null and b/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_hello-universe-with-api.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_new_cluster.webp b/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_new_cluster.webp index 299fd2f09c..2f0b109f1a 100644 Binary files a/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_new_cluster.webp and b/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_new_cluster.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_service_url.webp b/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_service_url.webp deleted file mode 100644 index bce2c63ceb..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_deploy-k8s-cluster_service_url.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_introduction_product-overview.webp b/static/assets/docs/images/getting-started/getting-started_introduction_product-overview.webp index f4bdc2ded8..33322eb551 100644 Binary files a/static/assets/docs/images/getting-started/getting-started_introduction_product-overview.webp and b/static/assets/docs/images/getting-started/getting-started_introduction_product-overview.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_landing_kubernetes-challenges.webp b/static/assets/docs/images/getting-started/getting-started_landing_kubernetes-challenges.webp new file mode 100644 index 0000000000..fa6965ad66 Binary 
files /dev/null and b/static/assets/docs/images/getting-started/getting-started_landing_kubernetes-challenges.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_landing_meet-the-team.webp b/static/assets/docs/images/getting-started/getting-started_landing_meet-the-team.webp new file mode 100644 index 0000000000..fff8846d3c Binary files /dev/null and b/static/assets/docs/images/getting-started/getting-started_landing_meet-the-team.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_landing_spacetastic-systems.webp b/static/assets/docs/images/getting-started/getting-started_landing_spacetastic-systems.webp new file mode 100644 index 0000000000..79f10c97e6 Binary files /dev/null and b/static/assets/docs/images/getting-started/getting-started_landing_spacetastic-systems.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_add-taint.webp b/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_add-taint.webp new file mode 100644 index 0000000000..a158847423 Binary files /dev/null and b/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_add-taint.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_open-tutorial-project.webp b/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_open-tutorial-project.webp new file mode 100644 index 0000000000..09c28320a5 Binary files /dev/null and b/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_open-tutorial-project.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_select-team-roles.webp b/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_select-team-roles.webp new file mode 100644 index 0000000000..a9b14100cc Binary files /dev/null and b/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_select-team-roles.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_switch-tenant-admin-scope.webp b/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_switch-tenant-admin-scope.webp new file mode 100644 index 0000000000..37b4a5bc8f Binary files /dev/null and b/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_switch-tenant-admin-scope.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_taints-in-action.gif b/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_taints-in-action.gif new file mode 100644 index 0000000000..d96e97c91b Binary files /dev/null and b/static/assets/docs/images/getting-started/getting-started_scale-secure-cluster_taints-in-action.gif differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_add-service-tag.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_add-service-tag.webp deleted file mode 100644 index 01948c0049..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_add-service-tag.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_apply-frontend-filter.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_apply-frontend-filter.webp deleted file mode 100644 index 
1d0736f094..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_apply-frontend-filter.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_available-updates-dialog.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_available-updates-dialog.webp deleted file mode 100644 index af05988624..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_available-updates-dialog.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_completed-cluster-updates.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_completed-cluster-updates.webp deleted file mode 100644 index 3663c7d6bf..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_completed-cluster-updates.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_editor-changes.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_editor-changes.webp deleted file mode 100644 index b1aba03261..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_editor-changes.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_hello-universe-with-api.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_hello-universe-with-api.webp deleted file mode 100644 index d5cde3e418..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_hello-universe-with-api.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_kubecost-ui.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_kubecost-ui.webp new file mode 100644 index 0000000000..04bbaa721d Binary files /dev/null and b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_kubecost-ui.webp differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_new-version-overview.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_new-version-overview.webp deleted file mode 100644 index 77b683659a..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_new-version-overview.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_pending-update-clusters-view.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_pending-update-clusters-view.webp deleted file mode 100644 index ca438fc6d6..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_pending-update-clusters-view.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_profile-version-changes.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_profile-version-changes.webp new file mode 100644 index 0000000000..0291c2ec66 Binary files /dev/null and b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_profile-version-changes.webp differ diff --git 
a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_profile-version-selection.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_profile-version-selection.webp deleted file mode 100644 index dcb9f2cd07..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_profile-version-selection.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_profile-with-cluster.webp deleted file mode 100644 index 8698f1fdfd..0000000000 Binary files a/static/assets/docs/images/getting-started/getting-started_update-k8s-cluster_profile-with-cluster.webp and /dev/null differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_add-pack.webp b/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_add-pack.webp new file mode 100644 index 0000000000..d07503fd33 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_add-pack.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_cluster-profile-core-stack.webp b/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_cluster-profile-core-stack.webp new file mode 100644 index 0000000000..c10d989e19 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_cluster-profile-core-stack.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_metallb-pack.webp b/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_metallb-pack.webp new file mode 100644 index 0000000000..f5ca8fd1b3 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_metallb-pack.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_pack-presets.webp b/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_pack-presets.webp new file mode 100644 index 0000000000..4ff4d9d49c Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_pack-presets.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_pack-readme.webp b/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_pack-readme.webp new file mode 100644 index 0000000000..35349120ee Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_create-cluster-profile_pack-readme.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_basic_info.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_basic_info.webp new file mode 100644 index 0000000000..0d7dbac3a9 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_basic_info.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_clusters_parameters.webp 
b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_clusters_parameters.webp new file mode 100644 index 0000000000..407d4d6345 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_clusters_parameters.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_delete-cluster-button.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_delete-cluster-button.webp new file mode 100644 index 0000000000..e6351e1450 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_delete-cluster-button.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_event_log.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_event_log.webp new file mode 100644 index 0000000000..df5c2686cd Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_event_log.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_new_cluster.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_new_cluster.webp new file mode 100644 index 0000000000..de3b3d7d5e Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_new_cluster.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_profile_review.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_profile_review.webp new file mode 100644 index 0000000000..c272c6823f Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_profile_review.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_service_url.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_service_url.webp new file mode 100644 index 0000000000..95d0647ce1 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-k8s-cluster_service_url.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp new file mode 100644 index 0000000000..18e42a37a5 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_create_cluster.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp new file mode 100644 index 0000000000..1af5809d39 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_event_log.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp new file mode 100644 index 0000000000..d9d32fc8d1 Binary files /dev/null and 
b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster-tf_profile_review.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp new file mode 100644 index 0000000000..76381ca2f7 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_hello-universe-w-api.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp new file mode 100644 index 0000000000..0fba0ab22b Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubecost.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubecost.webp new file mode 100644 index 0000000000..d9226d7950 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp new file mode 100644 index 0000000000..6d1ec023b3 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp new file mode 100644 index 0000000000..4c6748c11c Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-with-kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp new file mode 100644 index 0000000000..253497fcc6 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_profile-without-kubecost.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_reconciliation.webp b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_reconciliation.webp new file mode 100644 index 0000000000..5be0326d61 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_deploy-manage-k8s-cluster_reconciliation.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_cluster-profile-created.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_cluster-profile-created.webp new file mode 100644 index 0000000000..3772d51d1f Binary files /dev/null and 
b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_cluster-profile-created.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_delete-cluster-button.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_delete-cluster-button.webp new file mode 100644 index 0000000000..a958bde553 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_delete-cluster-button.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_download-kubeconfig.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_download-kubeconfig.webp new file mode 100644 index 0000000000..cc8ef3c63e Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_download-kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp new file mode 100644 index 0000000000..5611519d1b Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_kubernetes-upgrade-applied.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_kubernetes-versions.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_kubernetes-versions.webp new file mode 100644 index 0000000000..4e8e8d191b Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_kubernetes-versions.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_metallb-pack.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_metallb-pack.webp new file mode 100644 index 0000000000..deb8b21479 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_metallb-pack.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_never-scanned-cluster.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_never-scanned-cluster.webp new file mode 100644 index 0000000000..4b9701f927 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_never-scanned-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_node-repaves.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_node-repaves.webp new file mode 100644 index 0000000000..5447f6b1d5 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_node-repaves.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_pack-presets.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_pack-presets.webp new file mode 100644 index 0000000000..229f99b9f0 Binary files /dev/null and 
b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_pack-presets.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_scans-completed.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_scans-completed.webp new file mode 100644 index 0000000000..92bab26f59 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_scans-completed.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_scans-schedules.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_scans-schedules.webp new file mode 100644 index 0000000000..cda5c5266b Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_scans-schedules.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_service_url.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_service_url.webp new file mode 100644 index 0000000000..d7e2771fbd Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_service_url.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_third-node-pool.webp b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_third-node-pool.webp new file mode 100644 index 0000000000..ae973ab12c Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_scale-secure-cluster_third-node-pool.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_add-service-tag.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_add-service-tag.webp new file mode 100644 index 0000000000..f66305fbe0 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_add-service-tag.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_apply-frontend-filter.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_apply-frontend-filter.webp new file mode 100644 index 0000000000..9a5190a49e Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_apply-frontend-filter.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_available-updates-dialog.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_available-updates-dialog.webp new file mode 100644 index 0000000000..d63501227d Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_available-updates-dialog.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_completed-cluster-updates.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_completed-cluster-updates.webp new file mode 100644 index 0000000000..43e0223403 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_completed-cluster-updates.webp differ diff --git 
a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_download-kubeconfig.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_download-kubeconfig.webp new file mode 100644 index 0000000000..4f56dd1fea Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_download-kubeconfig.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_new-version-overview.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_new-version-overview.webp new file mode 100644 index 0000000000..11ce7255fe Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_new-version-overview.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_pending-update-clusters-view.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_pending-update-clusters-view.webp new file mode 100644 index 0000000000..37f838c32d Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_pending-update-clusters-view.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_profile-version-changes.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_profile-version-changes.webp new file mode 100644 index 0000000000..e03416e22e Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_profile-version-changes.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_profile-version-selection.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_profile-version-selection.webp new file mode 100644 index 0000000000..d01595d080 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_profile-version-selection.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_profile-with-cluster.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_profile-with-cluster.webp new file mode 100644 index 0000000000..eae14de0b4 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_profile-with-cluster.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_select-kubecost-pack.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_select-kubecost-pack.webp new file mode 100644 index 0000000000..a0530e7e20 Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_select-kubecost-pack.webp differ diff --git a/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp new file mode 100644 index 0000000000..aed3c7046a Binary files /dev/null and b/static/assets/docs/images/getting-started/vmware/getting-started_update-k8s-cluster_updates-available-button-cluster-overview.webp differ