
Merge branch 'oracle-livelabs:main' into main
davidastart authored Sep 13, 2023
2 parents da0f8c0 + e5ad71d commit 5b5a641
Showing 15 changed files with 156 additions and 88 deletions.
22 changes: 20 additions & 2 deletions kubernetes-for-oracledbas/access-cluster/access-cluster.md
Original file line number Diff line number Diff line change
@@ -28,12 +28,30 @@ This lab assumes you have:
## Task 1: Create the Kubeconfig file

1. In OCI, navigate to Developer Services -> Kubernetes Clusters(OKE).

![OCI OKE Navigation](images/oci_oke_nav.png "OCI OKE Navigation")

2. Select your cluster and click the "Access Cluster" button. Follow the steps to "Manage the cluster via Cloud Shell".
2. Ensure the Compartment is set to [](var:oci_compartment).

![OCI OKE Compartment](images/oci_oke_compartment.png "OCI OKE Compartment")

3. Select your cluster and click the "Access Cluster" button. Follow the steps to "Manage the cluster via Cloud Shell".

![OCI Create Kubeconfig](images/oci_create_kubeconfig.png "OCI Create Kubeconfig")

3. Paste the copied command into Cloud Shell. This will create a configuration file, the *kubeconfig*, that *kubectl* uses to access the cluster in the default location of `$HOME/.kube/config`.
4. Paste the copied command into Cloud Shell. This will create a configuration file, the *kubeconfig*, that *kubectl* uses to access the cluster in the default location of `$HOME/.kube/config`.
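
   For reference, the copied command is typically an OCI CLI call of this general shape; the cluster OCID and region below are placeholders, not values from this workshop — use the command from your own "Access Cluster" dialog:

    ```shell
    # Sketch of the generated command (placeholder OCID and region).
    # It merges an entry for the cluster into the default kubeconfig
    # location that kubectl reads.
    oci ce cluster create-kubeconfig \
      --cluster-id ocid1.cluster.oc1..exampleuniqueID \
      --file $HOME/.kube/config \
      --region us-ashburn-1 \
      --token-version 2.0.0 \
      --kube-endpoint PUBLIC_ENDPOINT
    ```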

### Notes about the Kubeconfig File

The authentication token generated by the command in the kubeconfig file is short-lived, cluster-scoped, and specific to your account. As a result, you cannot share this kubeconfig file with other users to access the Kubernetes cluster.

> the authentication token could expire resulting in an error
If you perform this workshop over a number of hours or days, the authentication token could expire resulting in an error:

`error: You must be logged in to the server (Unauthorized)`

To resolve, re-run Task 1.

## Task 2: Test Kubernetes Access
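
The body of this task is collapsed in this view; a minimal sketch of such a test, assuming the kubeconfig from Task 1 is in place, is simply to ask the cluster about itself:

```shell
# If the kubeconfig and token are valid, these return cluster details
# instead of "You must be logged in to the server (Unauthorized)".
kubectl cluster-info
kubectl get nodes
```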

8 changes: 3 additions & 5 deletions kubernetes-for-oracledbas/bind-adb/bind-adb.md
@@ -2,16 +2,14 @@

## Introduction

In this lab, you will provision a new Oracle Autonomous Database (ADB) and bind to an existing one using the **OraOperator**.
In this lab, you will bind to an existing Oracle Autonomous Database (ADB) using the **OraOperator**.

*Estimated Time:* 10 minutes

[Lab 7](videohub:1_wdee00m6)


### Objectives

* Provision a new Oracle Autonomous Database (ADB) using the **OraOperator**
* Bind to an existing ADB using the **OraOperator**

### Prerequisites
@@ -223,7 +221,7 @@ Now that you've defined two *Secrets* in Kubernetes, redefine the `adb-existing`
Take a quick look at the syntax:
You are appending to the `adb_existing.yaml` manifest to **redefine** the `adb-existing` resource, setting the `spec.details.adminPassword` and `spec.details.wallet` keys. Under the wallet section, you are specifying the name of a *Secret*, `adb-tns-admin`, that the OraOperator will define to to store the wallet.
You are appending to the `adb_existing.yaml` manifest to **redefine** the `adb-existing` resource, setting the `spec.details.adminPassword` and `spec.details.wallet` keys. Under the wallet section, you are specifying the name of a *Secret*, `adb-tns-admin`, that the OraOperator will define to store the wallet.
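
A minimal sketch of that appended section (field names follow the OraOperator `AutonomousDatabase` CRD; the *Secret* names other than `adb-tns-admin` are illustrative):

```shell
# Append the adminPassword and wallet keys to the manifest. The indentation
# must line up under the existing spec.details block in adb_existing.yaml.
cat >> adb_existing.yaml <<EOF
    adminPassword:
      k8sSecret:
        name: adb-admin-password      # illustrative Secret name
    wallet:
      name: adb-tns-admin             # Secret the OraOperator will define
      password:
        k8sSecret:
          name: adb-wallet-password   # illustrative Secret name
EOF
```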
2. Apply the manifest:
@@ -233,7 +231,7 @@ Now that you've defined two *Secrets* in Kubernetes, redefine the `adb-existing`
</copy>
```
![ADB Modify](images/adb_secrets.png "ADB Modify")
![ADB Modify](images/adb_modify.png "ADB Modify")
## Task 8: Review ADB Wallet Secrets
@@ -155,7 +155,7 @@ The ORDS configuration does not store any sensitive data, so build a *manifest f
### Liquibase ChangeLog
In an ADB, the `ORDS_PUBLIC_USER` already exists for providing Rest Data Services out-of-the-box, you'll want to avoid messing with that database user. Instead, you'll want to create a new, similar user for your application. You can do this as part of the deployment using **SQLcl + Liquibase** inside what is known as an *initContainer*.
In an ADB, the `ORDS_PUBLIC_USER` already exists for providing Rest Data Services out-of-the-box. You'll want to avoid messing with that database user and instead create a new, similar user for your application. You can do this as part of the deployment using **SQLcl + Liquibase** inside what is known as an *initContainer*.
An *initContainer* is just like a regular application container, except it runs to completion and stops. *initContainers* are perfect for ensuring the database has the correct users, permissions, and objects present for the application container to use.
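
As a sketch (the image and changelog path are assumptions, not this workshop's actual values), an *initContainer* slots into a *Pod* specification like this:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: initcontainer-demo
spec:
  initContainers:
    # Runs to completion before the application container starts.
    - name: liquibase
      image: container-registry.oracle.com/database/sqlcl:latest  # assumed image
      args: ["-nolog", "@/opt/install/changelog.sql"]             # assumed entrypoint args
  containers:
    - name: app
      image: nginx   # stand-in for the application container
EOF
```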
@@ -229,7 +229,7 @@ The below *ConfigMap* will create two new users in the ADB: `ORDS_PUBLIC_USER_K8
</copy>
```
By using a variable for the passwords, you are not exposing any sensitive information in your code. The value for the variable will be set using environment variables in the Applications *Deployment* specification.
By using a variable for the passwords, you are not exposing any sensitive information in your code. The value for the variable will be set using environment variables in the Application's *Deployment* specification, which will pull the values from the *Secret* you created.
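
A sketch of how a container can source such an environment variable from a *Secret* (the Secret and key names here are illustrative):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  containers:
    - name: app
      image: nginx   # stand-in container
      env:
        # The variable referenced in the ConfigMap SQL is resolved at
        # runtime from a Secret, so no password appears in the code.
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sqldev-web-secret   # illustrative Secret name
              key: db-password          # illustrative key
EOF
```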
## Task 5: Create the Application
@@ -425,7 +425,7 @@ spec:
Only one *replica* was created, which translates to the single *Pod* `sqldev-web-0` in the *Namespace*. If you think of *replicas* as instances in a RAC database, when you only have one it is easy to route traffic to it. However, if you have multiple instances that can go up and down independently, ensuring High Availability, then you need something to keep track of those "Endpoints" for routing traffic. In a RAC, this is the SCAN Listener; in a K8s cluster, it is a *Service*.
1. Define the *Service* for your application, routing all traffic from port 443 to 8080 (the port the application is listening on).
1. Define the *Service* for your application, routing all traffic from port 80 to 8080 (the port the application is listening on).
The `selector` from your deployment will need to match the `selector` in the service; this is how it knows which *Pods* are valid endpoints:
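
A minimal sketch of such a *Service* (the `app: sqldev-web` label is an assumption standing in for the lab's actual selector):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: sqldev-web
spec:
  selector:
    app: sqldev-web     # must match the labels on the Deployment's Pod template
  ports:
    - port: 80          # port the Service exposes
      targetPort: 8080  # port the application container listens on
EOF
```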
@@ -543,6 +543,8 @@ While you could delete the individual resources manually, or by using the *manif
![Application Down](images/FourOhFour.png "Application Down")
**Note:** Instead of the 404, you may experience spinning with an eventual timeout. This is because the deleted *Ingress* resource will eventually cause the *cloud-controller-manager* to reconcile away the public IP exposing the application.
You may now **proceed to the next lab**
## Learn More
@@ -556,4 +558,4 @@ You may now **proceed to the next lab**
* **Authors** - [](var:authors)
* **Contributors** - [](var:contributors)
* **Last Updated By/Date** - John Lathouwers, July 2023
* **Last Updated By/Date** - John Lathouwers, September 2023
22 changes: 11 additions & 11 deletions kubernetes-for-oracledbas/deploy-stack/deploy-stack.md
@@ -2,9 +2,9 @@

## Introduction

In this lab, you will provision and configure the Oracle Cloud resources required for the Workshop using *Oracle Resource Manager (ORM)*. ORM will stand-up the Infrastructure using *Terraform* and perform some basic Configuration using *Ansible*.

Don't panic; how this works will be explored... while it is working.

*Estimated Time:* 10 minutes

@@ -92,25 +92,25 @@ The Infrastructure deployment and configuration will take approximately **15 min

### Troubleshooting

When using OCI Free Tier, it is possible that the Stack deployment will fail due to a lack of compute resources in your chosen region. Fear not and realise the power of Infrastructure as Code!

Should the Stack Deployment fail due to **"Out of host capacity."**, please follow the [Out of Capacity](?lab=troubleshooting#Task1:OutofCapacity) guide.
Should the Stack Deployment fail due to **"Out of host capacity."**, please follow the [Out of Capacity](?lab=troubleshooting#Task2:OutofCapacity) guide.

## Task 5: Learn about Infrastructure as Code (IaC) using Terraform

Terraform is a tool for managing infrastructure as code (IaC) that can be version-controlled. Take a look at the IaC that creates the Autonomous Oracle Database (ADB) for this Workshop.

The ADB is defined, in 16 lines, using the `oci_database_autonomous_database` resource from the [Oracle provided Terraform OCI provider](https://registry.terraform.io/providers/oracle/oci/latest/docs). Most of the arguments are set by variables, allowing you to customise the ADB without having to rewrite the code which describes it.

When you **Apply** the IaC, it calls underlying APIs to provision the resource as it is defined, and records it in a "State" file.
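
As a sketch of what such a resource looks like (attribute names come from the OCI Terraform provider; the variable names and `db_workload` value are illustrative, not this workshop's exact code):

```shell
# Write a minimal Terraform definition of an ADB to a .tf file.
cat > adb.tf <<'EOF'
resource "oci_database_autonomous_database" "workshop_adb" {
  compartment_id           = var.compartment_ocid
  db_name                  = var.adb_name
  admin_password           = var.adb_password
  cpu_core_count           = var.adb_cpu_core_count   # set to 1 in the ORM interview
  data_storage_size_in_tbs = var.adb_storage_size
  db_workload              = "OLTP"
}
EOF
```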

![ADB Terraform](images/adb_terraform.png "ADB Terraform")

For the DBA this is invaluable as it means you can define the ADB once, use variables where permitted and constants where mandated for your organisation's standards. Those 16 lines of IaC can then be used over and over to provision, and tear down, hundreds of ADBs.

> use variables where permitted and constants where mandated
As Terraform is declarative, that IaC can also be used to modify existing ADBs that were created by by it, by comparing the configuration in the "State" file with the real-world resources.
As Terraform is declarative, that IaC can also be used to modify existing ADBs that were created by it, by comparing the configuration in the "State" file with the real-world resources.

During the ORM interview phase, when you ticked the "Show Database Options?" the `Autonomous Database CPU Core Count` was set to `1`. That value was assigned to `var.adb_cpu_core_count` during provisioning.

@@ -141,7 +141,7 @@ All of what you do manually in this Workshop can be automated by Ansible as part

> All of what you do manually in this Workshop can be automated by Ansible
In the Workshop Stack, Terraform will write a number of variable and inventory files **(2)** describing the Infrastructure that it has provisioned. It will then call Ansible **(3)** to run Playbooks (a series of tasks that include conditionals, loops, and error handling) against the infrastructure definition **(4)** to configure it **(5)**.

![Terraform and Ansible](images/terraform_ansible.png "Terraform and Ansible")

@@ -153,15 +153,15 @@ Scrolling through the actions section of the log, you will see Ansible kick into

![Ansible Log](images/ansible_log.png "Ansible Log")

It is important to note that Ansible has a robust community and ecosystem, with many third-party modules available for common tasks like interacting with cloud providers, databases, and other services.

Oracle has released an [OCI Ansible Collection](https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/ansible.htm) to support the automation of cloud configurations and the orchestration of complex operational processes.

You may now **proceed to the next lab**

## Common Issues

* [Out of Capacity](?lab=troubleshooting#OutofCapacity)
* [Out of Capacity](?lab=troubleshooting#Task2:OutofCapacity)

## Learn More

24 changes: 12 additions & 12 deletions kubernetes-for-oracledbas/lifecycle-adb/lifecycle-adb.md
@@ -11,7 +11,7 @@ The actions that the **OraOperator** support for the AutonomousDatabase resource
* Create an Autonomous Database
* Manage ADMIN database user password
* Download instance credentials (wallets)
* Scale the OCPU core count or storage
* Scale the CPU or storage
* Rename an Autonomous Database
* Stop/Start/Terminate an Autonomous Database
* Delete the resource from the Kubernetes cluster
@@ -43,7 +43,7 @@ In the [Bind to an ADB](?lab=bind-adb) Lab, you redefined the `adb-existing` res
<copy>
export ORACLE_HOME=$(pwd)
export TNS_ADMIN=$ORACLE_HOME/network/admin
mkdir -p $ORACLE_HOME/network/adminrr
mkdir -p $ORACLE_HOME/network/admin

# Extract the tnsnames.ora secret
kubectl get secret/adb-tns-admin \
@@ -95,9 +95,9 @@ In the [Bind to an ADB](?lab=bind-adb) Lab, you redefined the `adb-existing` res

Everything you needed to make a connection to the ADB could be obtained from Kubernetes. Applications in Kubernetes using the ADB as a backend data store will be able to do the same.

## Task 2: Scale the OCPU and Storage - Up
## Task 2: Scale the CPU and Storage - Up

1. **Redefine** the ADB resource to adjust its OCPU and Storage.
1. **Redefine** the ADB resource to adjust its CPU and Storage.

While you could modify the *manifest file* used to bind to the ADB and apply it, try a different approach: use the `kubectl patch` functionality to update the **AutonomousDatabase** resource in place.
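
   A sketch of such a patch (the `adb` short name and the `spec.details` field names follow the OraOperator CRD and are stated here as assumptions):

    ```shell
    # Merge-patch the AutonomousDatabase resource in place; the OraOperator
    # reconciles the change against the real ADB in OCI.
    kubectl patch adb adb-existing --type=merge \
      -p '{"spec": {"details": {"cpuCoreCount": 2, "dataStorageSizeInTBs": 2}}}'
    ```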

@@ -113,11 +113,11 @@

![ADB Patched](images/adb_patched.png "ADB Patched")

2. In the OCI Console, Navigate to Oracle Databases -> Autonomous Database and you should see your ADB in a "Scaling In Progress" state, increasing the OCPU and Storage.
2. In the OCI Console, Navigate to Oracle Databases -> Autonomous Database and you should see your ADB in a "Scaling In Progress" state, increasing the CPU and Storage.

![ADB Scaling](images/adb_scaling.png "ADB Scaling")

3. You can also watch the ADB Resource scale from Kubernetes.

You'll already be familiar with the `kubectl get` command; by appending a `-w` you can put `kubectl` into a "Watch" loop:
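
   As a sketch (assuming `adb` is the resource's short name, as elsewhere in this lab):

    ```shell
    # Stream status updates for the resource until interrupted with Ctrl-C.
    kubectl get adb adb-existing -w
    ```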
@@ -129,9 +129,9 @@ Everything you needed to make a connection to the ADB could be obtained from Kub
Press `Ctrl-C` to break the loop
## Task 3: Scale the OCPU and Storage - Down
## Task 3: Scale the CPU and Storage - Down
You've now have seen how to apply a *manifest file* and use `kubectl patch` to redefine a Kubernetes resource, but you can also edit the resource directly:
You have now seen how to apply a *manifest file* and use `kubectl patch` to redefine a Kubernetes resource, but you can also edit the resource directly:
1. Edit the resource:
@@ -152,7 +152,7 @@ You've now have seen how to apply a *manifest file* and use `kubectl patch` to r
![Edit ADB](images/adb_edit.png "Edit ADB")
2. In the OCI Console, Navigate to Oracle Databases -> Autonomous Database and you should see your ADB back in a "Scaling In Progress" state, decreasing the OCPU and Storage.
2. In the OCI Console, Navigate to Oracle Databases -> Autonomous Database and you should see your ADB in the "Scaling In Progress" state, decreasing the CPU and Storage.
3. Of course you can also watch it from Kubernetes:
@@ -259,9 +259,9 @@ However, in the next Tasks, you will be using an in-built *Service Account* call
## Task 5: Scheduled Stop and Start
You can execute any of the methods you used to scale the ADB to also change the ADB's `lifecycleState` (AVAILABLE or STOPPED) manually. However, you can also take advantage of another built-in Kubernetes resource, the *CronJob*, to schedule a change to the `lifecycleState`.
This is especially useful for Autonomous Databases as when the database is STOPPED you are not charged for the OCPUs. With the *Role* and *RoleBindings* in-place for the `default Service Account`, create a *CronJob*:
This is especially useful for Autonomous Databases as when the database is STOPPED you are not charged for the CPUs. With the *Role* and *RoleBindings* in-place for the `default Service Account`, create a *CronJob*:
### Schedule a CronJob
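
A sketch of such a *CronJob* (the schedule, image, and names are illustrative; it assumes the `default` *Service Account* carries the *Role*/*RoleBinding* from the previous task):

```shell
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: CronJob
metadata:
  name: adb-stop
spec:
  schedule: "0 18 * * 1-5"   # 18:00 UTC, Monday to Friday
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: patch-adb
              image: bitnami/kubectl   # assumed kubectl image
              command:
                # Same merge-patch technique used for scaling, now
                # flipping the lifecycleState to STOPPED on a schedule.
                - kubectl
                - patch
                - adb
                - adb-existing
                - --type=merge
                - -p
                - '{"spec": {"details": {"lifecycleState": "STOPPED"}}}'
EOF
```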
@@ -328,7 +328,7 @@ This is especially useful for Autonomous Databases as when the database is STOPP
</copy>
```
The *CronJob* is using the same `patch` method you used to scale the OCPU and Storage up.
The *CronJob* is using the same `patch` method you used to scale the CPU and Storage up.
2. Apply the manifest:
