Web application - DevOps Project

Welcome to the repository of our web application DevOps project. The goal of this project is to apply tools covering the whole DevOps cycle to a simple API web application that stores data in a Redis database. These tools help us automate the building, testing, deployment and running of our project.

This repository explains how to set up:

  • The User API web application
  • A CI/CD pipeline with GitHub Actions and Heroku
  • A Vagrant-configured virtual machine provisioned with Ansible
  • A Docker image of our application
  • Container orchestration using Docker Compose
  • Docker orchestration using Kubernetes
  • A service mesh using Istio
  • Monitoring with Prometheus and Grafana of the application containerized in a K8s cluster

1. Web application

The app is a basic NodeJS web application exposing a REST API that lets you create and store user data in a Redis database.
You can create a user by sending a POST request to the application with the user data, and read that data back at http://localhost:3000/user/:username, where :username is the username of the user you want to access.

Installation

This application is written in NodeJS and uses a Redis database.

  1. Install NodeJS
  2. Install Redis
  3. Install the application dependencies

Go to the userapi/ directory of the cloned repository and run:

npm install 

Usage

  1. Start a web server

From the /userapi directory of the project run:

npm run start

It will start a web server available in your browser at http://localhost:3000.

  2. Create a user

Send a POST request from the terminal:

curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"username":"sergkudinov","firstname":"sergei","lastname":"kudinov"}' \
  http://localhost:3000/user

It will output:

{"status":"success","msg":"OK"}

Then, if you go to http://localhost:3000/user/sergkudinov, "sergkudinov" being the username from your POST data, the browser will display the following, matching the data you posted:

{"status":"success","msg":{"firstname":"sergei","lastname":"kudinov"}}

Another way to test your REST API is to use Postman.

Testing

From the root directory of the project, run:

npm run test

It should pass the 12 tests:

(screenshot)

2. CI/CD pipeline with GitHub Actions and Heroku

  • The continuous integration workflow has been set up with GitHub Actions.
    The workflow automates the building and testing of our NodeJS project. Before every deployment we check that the workflow tests have passed to make sure the code runs fine.

  • Continuous deployment has been done with Heroku.
    Heroku deploys our project and allows automatic deployment. We had to add a Heroku step to our GitHub Actions workflow.

To create the workflow, we went into the "Actions" tab of our project and created the workflow from the YAML template provided by GitHub.

(screenshot)

We then put the following code in main.yaml:

# This workflow will do a clean install of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions

name: Main CI/CD

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  # CI part
  test:
    runs-on: ubuntu-latest
    # Define `working-directory` if your application is in a subfolder
    defaults:
      run:
        working-directory: userapi
     # Service containers to run with `runner-job`
    services:
      # Label used to access the service container
      redis:
        # Docker Hub image
        image: redis
        ports:
          # Opens tcp port 6379 on the host and service container
          - 6379:6379
    strategy:
      matrix:
        node-version: [16.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v2
      with:
        node-version: ${{ matrix.node-version }}
        cache: 'npm'
        cache-dependency-path: '**/package-lock.json'
    - run: npm ci
    - run: npm test
  # CD part
  deploy:
    needs: test # Requires the CI part to be successfully completed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Read instructions how to configure this action: https://github.com/marketplace/actions/deploy-to-heroku#getting-started
      - uses: akhileshns/heroku-deploy@<version> # This is the action
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: "warm-woodland-30605" # Must be unique in Heroku
          heroku_email: "[email protected]" # Heroku account email
          appdir: userapi # Define appdir if your application is in a subfolder

The heroku_app_name is the name of the Heroku app that we created on Heroku's website. The API key was retrieved from the Heroku account settings and added to the GitHub repository secrets. This allows automatic deployment to Heroku after we push our project to GitHub.

We have also, in the Heroku app's deployment settings, enabled automatic deploys and ticked the "Wait for CI to pass before deploy" checkbox.

We can check the tests in the CI/CD workflow of our GitHub project:

(screenshot)

After deployment, we can access our project's deployment on the Heroku website:

(screenshot)

(screenshot)

However, we have deployed only our NodeJS app and not the Redis database, so the deployment on Heroku is only partially functional: the user API does not work.
While the Redis add-on is free on Heroku, it requires adding credit card information, which is why we did not add it.

3. Configuring and provisioning a virtual environment using the IaC approach

To follow the Infrastructure as Code (IaC) approach, we used Vagrant to configure and manage our virtual machine and Ansible to provision it.

Installation

For this, in addition to installing Vagrant, make sure you have installed VirtualBox (or another virtualization provider supported by Vagrant).

  1. Install VirtualBox (or other)
  2. Install Vagrant

Creating and provisioning the virtual machine (VM)

  • Go to the /IaC directory (where the Vagrantfile is) and run in the terminal:
vagrant up

It should start initializing and booting the VM.

The VM's OS is hashicorp/bionic64, a small, optimized Ubuntu 18.04 64-bit box made by HashiCorp for minimal use cases. You can use whatever OS you want in your VM by modifying the box property in the Vagrantfile. Resources are available online about the Vagrantfile and how to change boxes.

Vagrant will then automatically download Ansible and start the provisioning defined in the Ansible playbooks. The playbooks' tasks download and enable the packages and services needed to run the userapi project on the VM. However, the installation of the Redis database is incomplete: we only got Redis downloaded, not installed.
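To give an idea of what such provisioning looks like, tasks along these lines could appear in one of the roles (a sketch only, assuming apt-based installation; the actual roles, task files and package list in this repository may differ):

# Sketch of Ansible tasks installing what the userapi app needs (illustrative)
- name: Install Node.js and npm
  apt:
    name:
      - nodejs
      - npm
    state: present
    update_cache: yes

- name: Install Redis
  apt:
    name: redis-server
    state: present

- name: Enable and start the Redis service
  service:
    name: redis-server
    state: started
    enabled: yes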

  • After the downloads have ended, you can enter your VM via SSH with the following Vagrant command:
vagrant ssh nodejs_server

The userapi folder of the repository clone on your host is shared with the VM thanks to the synced_folder property in the Vagrantfile.

  • Once connected via SSH, you can find the folder by typing the following commands in the terminal:
cd /home/userapi
ls

You can see that the files listed in the terminal are the same as the ones in the host's folder.

(screenshot)

You can keep working in the host's folder and the guest machine's synced folder will automatically keep its files up to date with the host's.

(screenshots)

We have also kept, in the roles folder, a "main.yaml" file with tasks for installing and launching GitLab on the VM, which works fine. If you want to use it, add a role with the path of that tasks file and the "install" tag to the "run.yml" file in the playbooks folder, as sketched below. Once GitLab is installed and launched on the VM, you will be able to access the GitLab page at the 20.20.20.2 address from your host machine thanks to the server.vm.network and ip properties in the Vagrantfile.
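As a sketch, the corresponding entry in playbooks/run.yml could look roughly like this (the role path and name below are assumptions for illustration, not the exact content of the repository):

- hosts: all
  become: true
  roles:
    - { role: gitlab/install, tags: install }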

4. Docker image of the app

To be able to "containerize" our application we created a Docker image of it. Docker enables us to run our app in the exact environment(s) that we want.

Installation

Install Docker Desktop

  • In the root directory of the repository's clone (where the Dockerfile is), run the following command to build the image (don't forget the dot):
docker build -t userapi .
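For reference, a Dockerfile for a NodeJS app of this kind typically looks like the following (a minimal sketch; the actual Dockerfile in the repository may differ in base image and details):

# Sketch of a minimal Dockerfile for the userapi app (illustrative)
FROM node:16
WORKDIR /usr/src/app
# Install dependencies first to benefit from Docker layer caching
COPY package*.json ./
RUN npm install
# Copy the application source
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]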

We have also pushed our Docker image to DockerHub.
(screenshot)

  • So, instead, you can simply pull the image from DockerHub:
docker pull chemss/userapi
  • You can check if it appears in your local Docker images:
docker images
  • Then, run the container:
docker run -p 3000:3000 -d chemss/userapi
  • Check if the container is running:
docker ps

(screenshot)

  • Stop the container:
docker stop <CONTAINER_ID>

5. Container orchestration using Docker Compose

The image we built with the Dockerfile runs only a single container containing our app, but not the database.

Docker Compose allows us to run multi-container Docker applications. The services and images are set up in the docker-compose.yaml file.
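A minimal configuration for this setup could look like the following (a sketch assuming a web service built from the local Dockerfile and a redis service from the official image; the actual docker-compose.yaml may differ, for example in the environment variables the app reads):

version: "3"
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - redis
    environment:
      # Hypothetical variable names for illustration; the app's actual
      # Redis configuration keys may differ
      - REDIS_HOST=redis
      - REDIS_PORT=6379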

  • Run the docker-compose command to create and start the redis and web services from the configuration file:
docker-compose up

(screenshot)

  • You can delete the containers with:
docker-compose rm

6. Docker orchestration using Kubernetes

Kubernetes is an open-source system for automating the deployment, scaling and management of containerized applications. Compared to Kubernetes, Docker Compose has limited functionality.

Install Minikube

Minikube is a tool that makes it easy to run Kubernetes locally.

Install Minikube following the instructions depending on your OS.

  • Start Minikube:
minikube start
  • Check that everything is OK:
minikube status

Running the Kubernetes deployments

  • Go to the /k8s directory and run this command for every file:
kubectl apply -f <file_name.yaml>
  • The deployment.yaml file describes the desired states of the redis and userapi deployments (a simplified sketch of the userapi part follows this list).
  • The service.yaml file exposes the redis and userapi apps as network services and gives them the right ports.
  • The persistentvolume.yaml file creates a piece of storage in the cluster with a lifecycle independent of any individual Pod that uses the PersistentVolume.
  • The persistentvolumeclaim.yaml file creates a request for storage by a user.
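A simplified sketch of the userapi parts of deployment.yaml and service.yaml (the labels, image and deployment name are assumptions; the real manifests in /k8s may differ):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: userapi-deployment
spec:
  replicas: 3                  # the userapi deployment runs 3 pod replicas
  selector:
    matchLabels:
      app: userapi
  template:
    metadata:
      labels:
        app: userapi
    spec:
      containers:
      - name: userapi
        image: chemss/userapi  # image pushed to DockerHub earlier
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: userapi-deployment    # name used by the port-forward command below
spec:
  selector:
    app: userapi
  ports:
  - port: 3000
    targetPort: 3000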

Check that everything is running

  • Check that the deployments are running:
kubectl get deployments

The result should look like the following:

(screenshot)

  • Check that the services are running:
kubectl get services

Should output the following:

(screenshot)

  • Check that the PersistentVolume is running:
kubectl get pv

Outputs the following:

(screenshot)

  • Check that the PersistentVolumeClaim is running:
kubectl get pvc

Outputs the following:

(screenshot)

We can see in the outputs that the PersistentVolumeClaim is bound to the PersistentVolume. The claim requests at least 3Gi from our hostPath PersistentVolume.
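As an illustration, a hostPath PersistentVolume and a claim bound to it could look like this (a sketch; apart from the 3Gi request mentioned above, the names, capacity and path are assumptions):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi               # must be at least as large as the claim below
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi             # the claim requests at least 3Gi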

  • You can also check that everything is running through the minikube dashboard:
minikube dashboard

(screenshot)

Accessing the containerized app

  • Run the following command to forward the userapi service's port:
 kubectl port-forward service/userapi-deployment 3000:3000

The home page of our app should display when going to http://localhost:3000/ on your browser.

  • Run the following command:
kubectl get pods

Outputs the following:

(screenshot)

  • You can run a command in one of the 3 pod replicas created by the userapi deployment with the following commands:
 kubectl exec <POD_NAME> -- <COMMAND>
 #or
 kubectl exec -it <POD_NAME> -- <COMMAND>

7. Making a service mesh using Istio

Istio is a service mesh that can control the traffic flow between microservices.
For example, it can be used to redirect a portion of the users to different versions of a service.

Installation

  • Make sure you have Minikube installed and run:
minikube config set vm-driver virtualbox
# or vmware, or kvm2
minikube start --memory=16384 --cpus=4 --kubernetes-version=v1.18.0
# configure the RAM and CPU usage according to your system

Instructions: https://istio.io/docs/setup/getting-started/

Follow the installation instructions until the Deploy the sample application section.

Yaml Deployment files

With Istio, we are going to route requests between two different versions of our app. So, in the istio folder, we have changed the deployment.yaml file and duplicated the userapi and redis deployments, giving us 4 deployments. The first userapi and redis deployments are labeled with version "v1" and the two others with version "v2".
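The subsets v1 and v2 used by the virtual services below are mapped to these version labels through destination rules, which look roughly like this (a sketch for the userapi service; the actual file in the istio folder may differ, and a similar rule applies to redis):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: userapi-service
spec:
  host: userapi-service
  subsets:
  - name: v1
    labels:
      version: v1           # matches the version label of the v1 deployment
  - name: v2
    labels:
      version: v2           # matches the version label of the v2 deployment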

  • Run the following command in the /istio directory for each file in the folder:
kubectl apply -f <file_name.yaml>

Routing

  • Default routing

By applying virtual services, we can set the default version of the microservices that we want. In the virtual-service-v1-v2.yaml file, we set version v1 of redis and userapi as the default version:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-service
spec:
  hosts:
  - redis-service
  http:
  - route:
    - destination:
        host: redis-service
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: userapi-service
spec:
  hosts:
  - userapi-service
  http:
  - route:
    - destination:
        host: userapi-service
        subset: v1

  • User identity based routing

With the virtual-service-user-routing.yaml file, we applied a virtual service to get user-identity-based routing.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: userapi-service
spec:
  hosts:
    - userapi-service
  http:
  - match:
    - headers:
        username:
          exact: chems
    route:
    - destination:
        host: userapi-service
        subset: v1
  - route:
    - destination:
        host: userapi-service
        subset: v2

For our userapi-service, all requests whose HTTP header username is exactly "chems" are routed to the v1 subset of userapi-service, while all other requests are routed to v2.
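For example, assuming the app is reachable at http://localhost:3000 (an assumption for illustration, e.g. through port-forwarding or the Istio ingress gateway), a request routed to v1 could be sent like this:

curl --header "username: chems" http://localhost:3000/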

Traffic shifting

Traffic shifting is usually used to migrate traffic gradually from an older version of a microservice to a new one. You can send a chosen share of the overall traffic to the version of the microservice of your choice.

The virtual-service-traffic-shifting.yaml file applies a virtual service that redirects 50% of the traffic to v1 of the userapi deployment and the other 50% to v2:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: userapi-service
spec:
  hosts:
    - userapi-service
  http:
  - route:
    - destination:
        host: userapi-service
        subset: v2
      weight: 50
    - destination:
        host: userapi-service
        subset: v1
      weight: 50

8. Monitoring the containerized application with Prometheus and Grafana

Since Istio is a service mesh that tracks the amount of traffic coming into microservices, it also gives us the possibility to monitor our containerized application thanks to the many addons and packages that can be installed with it.

Installation

Follow the same installation guide as in the previous part, but stop at the "View the dashboard" part.

Prometheus

Follow the instructions for the installation of Prometheus: https://istio.io/latest/docs/ops/integrations/prometheus/

Prometheus is installed through Istio's addons. Prometheus works by scraping the data emitted by the Istio service mesh to generate its dashboard. To make it work, you must customize Prometheus' scraping configurations. Scraping configurations are provided in the above guide to scrape Istio's http-monitoring port and the Envoy stats. TLS settings are also provided to scrape using Istio certificates.
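As an illustration, a scrape job for the Envoy sidecar stats follows the pattern below (a sketch in the spirit of that guide; refer to the guide for the exact job definitions and the TLS settings):

scrape_configs:
  - job_name: 'envoy-stats'
    metrics_path: /stats/prometheus
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only the Envoy sidecars' Prometheus stats port
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        action: keep
        regex: '.*-envoy-prom'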

Grafana

Follow the instructions for the installation of Grafana: https://istio.io/latest/docs/ops/integrations/grafana/

Grafana is also installed through Istio's addons. To create its dashboards, Grafana can import Istio's dashboards through a script provided in the above guide. Grafana can also be installed and configured through other methods; the guide documents how to import the Istio dashboards with those other installation methods as well.

About

ECE 2021 Fall DevOps project
