Merge pull request #66 from RHEcosystemAppEng/v0.2
V0.2
r2dedios authored Dec 9, 2024
2 parents f72a7b3 + d4f0083 commit c40ab4b
Showing 36 changed files with 2,696 additions and 479 deletions.
6 changes: 5 additions & 1 deletion Makefile
@@ -53,7 +53,8 @@ define HELP_MSG
\033[1;36mpush:\033[0m \033[0;37m Pushes every container image into remote repo
\033[1;36mpush-api:\033[0m \033[0;37m Pushes API container image
\033[1;36mpush-scanner:\033[0m \033[0;37m Pushes cluster-iq-scanner container image
\033[1;36mstart-dev:\033[0m \033[0;37m Starts a local environment using 'docker/podman-compose'
\033[1;36mstart-dev:\033[0m \033[0;37m Starts a local environment using 'docker/podman-compose' and initializes the Database with some fake data
\033[1;36mdeploy-compose:\033[0m \033[0;37m Starts a local environment using 'docker/podman-compose'
\033[1;36mstop-dev:\033[0m \033[0;37m Stops the local environment using 'docker/podman-compose'
\033[1;36mswagger-editor:\033[0m \033[0;37m Starts Swagger Editor using a docker container
\033[1;36mswagger-doc:\033[0m \033[0;37m generates Swagger Documentation of the API
@@ -128,6 +129,9 @@ stop-dev:
@echo "### [Stopping dev environment] ###"
@$(CONTAINER_ENGINE)-compose -f $(DEPLOYMENTS_DIR)/docker-compose/docker-compose.yaml down

init-psql-test-data:
export PGPASSWORD=password ; psql -h localhost -p 5432 -U user -d clusteriq < db/sql/test_data.sql


# Tests
test:
69 changes: 20 additions & 49 deletions README.md
@@ -22,7 +22,7 @@ available for every cloud provider:

| Cloud Provider | Compute Resources | Billing | Managing |
|----------------|-------------------|---------|----------|
| AWS | Yes | No | No |
| AWS | Yes | Yes | No |
| Azure | No | No | No |
| GCP | No | No | No |

@@ -33,19 +33,25 @@ The following graph shows the architecture of this project:
![ClusterIQ architecture diagram](./doc/arch.png)


## Deployment
This section explains how to deploy ClusterIQ and ClusterIQ Console
1. Create `secrets` folder
## Installation
This section explains how to deploy ClusterIQ and ClusterIQ Console.


### Prerequisites:
#### Accounts Configuration
1. Create a folder called `secrets` for saving the cloud credentials. This folder is ignored by the repo to keep your
credentials safe.
```text
mkdir secrets
export CLUSTER_IQ_CREDENTIALS_FILE="./secrets/credentials"
```
:warning: Please take care and don't commit them to the repo.
2. Create your credentials file with the AWS credentials of the accounts you
want to scrape. The file must use the following format:
```text
echo "
[AWS_ACCOUNT_NAME]
[ACCOUNT_NAME]
provider = aws/gcp/azure
user = XXXXXXX
key = YYYYYYY
@@ -54,53 +60,12 @@ This section explains how to deploy ClusterIQ and ClusterIQ Console
:warning: The values for `provider` are `aws`, `gcp` and `azure`, but
scraping is only supported for `aws` at the moment. The credentials file
should be placed under `secrets/*` to work with
`docker/podman-compose`. This repo ignores the `secrets` folder to prevent
you from pushing your cloud credentials to the repo.
`docker/podman-compose`.
:exclamation: This file structure was designed to be generic, but it works
differently depending on the cloud provider. For AWS, `user` refers to the
`ACCESS_KEY`, and `key` refers to the `SECRET_ACCESS_KEY`.
3. Continue to "Local Deployment" for running ClusterIQ on your local using
Podman.
4. Continue to "Openshift Deployment" for deploying ClusterIQ on an Openshift
cluster.
### Local Deployment (for development)
1. Use the Makefile targets for building the components
```sh
make build
```
2. Deploy dev environment
```sh
make start-dev
```
3. Undeploy dev environment
```sh
make stop-dev
```
:warning: Make sure you have access to `registry.redhat.io` for downloading
required images.
:warning: If you're having issues mounting your local files (like init.psql or
the credentials file), check whether SELinux is enforcing. This could prevent
podman from binding these files into the containers.
```sh
# Run this carefully and under your own responsibility
sudo setenforce 0
```

ClusterIQ includes two exposed components, the API and the UI.

| Service | URL |
|----------------|-----------------------|
| UI | <http://localhost:8080> |
| API | <http://localhost:8081/api/v1> |


### Openshift Deployment
1. Prepare your cluster and CLI
```sh
@@ -157,6 +122,13 @@ ClusterIQ includes two exposed components, the API and the UI.
oc apply -n $NAMESPACE -f ./deployments/openshift/05_console.yaml
```
## Local Deployment (for development)
To deploy ClusterIQ locally for development purposes, check the following
[document](./doc/development-setup.md)
### Configuration
Available configuration via Env Vars:
| Key | Value | Description |
@@ -174,7 +146,7 @@ and on `./<PROJECT_FOLDER>/deploy/openshift/config.yaml` to deploy it on Openshi
### Scanners
[![Docker Repository on Quay](https://quay.io/repository/ecosystem-appeng/cluster-iq-aws-scanner/status "Docker Repository on Quay")](https://quay.io/repository/ecosystem-appeng/cluster-iq-aws-scanner)
[![Docker Repository on Quay](https://quay.io/repository/ecosystem-appeng/cluster-iq-scanner/status "Docker Repository on Quay")](https://quay.io/repository/ecosystem-appeng/cluster-iq-aws-scanner)
As each cloud provider has a different API, a specific
scanner adapted to each provider is required.
@@ -189,7 +161,6 @@ By default, every build rule will be performed using the Dockerfile for each
specific scanner

#### AWS Scanner
The scanner should run periodically to keep the inventory up to date.

```shell
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
0.1-alpha
v0.2
63 changes: 57 additions & 6 deletions cmd/api/api_server.go
@@ -1,8 +1,11 @@
// TODO: Read this article https://ferencfbin.medium.com/golang-own-structscan-method-for-sql-rows-978c5c80f9b5
package main

import (
"context"
"net/http"
"os"
"os/signal"
"syscall"
"time"

"github.com/RHEcosystemAppEng/cluster-iq/cmd/api/docs"
@@ -14,12 +17,14 @@ import (
_ "github.com/lib/pq"
"go.uber.org/zap"

// swagger embed files
// swagger embed files
swaggerFiles "github.com/swaggo/files" // swagger embed files
ginSwagger "github.com/swaggo/gin-swagger" // gin-swagger middleware
)

const (
APITimeoutSeconds = 10
)

var (
// version reflects the current version of the API
version string
@@ -29,6 +34,8 @@ var (
inven *inventory.Inventory
// Gin Router for serving API
router *gin.Engine
// Gin Server
server *http.Server
// API serving URL
apiURL string
// DB URL (example: "postgresql://user:password@pgsql:5432/clusteriq?sslmode=disable")
@@ -58,6 +65,21 @@ func init() {
router.Use(ginzap.Ginzap(logger, time.RFC3339, true))
}

// signalHandler for managing incoming OS signals
func signalHandler(signal os.Signal) {
if signal == syscall.SIGTERM {
ctx, cancel := context.WithTimeout(context.Background(), APITimeoutSeconds*time.Second)
defer cancel()
logger.Warn("SIGTERM signal received. Stopping ClusterIQ API server")
if err := server.Shutdown(ctx); err != nil {
logger.Fatal("API Shutdown error", zap.Error(err))
os.Exit(-1)
}
} else {
logger.Warn("Ignoring signal: ", zap.String("signal_id", signal.String()))
}
}

// addHeaders adds the required HTTP headers for the API to work
func addHeaders(c *gin.Context) {
// To deal with CORS
@@ -82,7 +104,6 @@ func addHeaders(c *gin.Context) {

// @externalDocs.description OpenAPI
// @externalDocs.url https://swagger.io/resources/open-api/

func main() {
// Ignore Logger sync error
defer func() { _ = logger.Sync() }()
@@ -94,16 +115,29 @@ func main() {
// Preparing API Endpoints
baseGroup := router.Group("/api/v1")
{
healthcheckGroup := baseGroup.Group("/healthcheck")
{
healthcheckGroup.GET("", HandlerHealthCheck)
}
expensesGroup := baseGroup.Group("/expenses")
{
expensesGroup.GET("", HandlerGetExpenses)
expensesGroup.GET("/:instance_id", HandlerGetExpensesByInstance)
expensesGroup.POST("", HandlerPostExpense)
}
instancesGroup := baseGroup.Group("/instances")
instancesGroup.Use(HandlerRefreshInventory)
{
instancesGroup.GET("", HandlerGetInstances)
instancesGroup.GET("/expense_update", HandlerGetInstancesForBillingUpdate)
instancesGroup.GET("/:instance_id", HandlerGetInstanceByID)
instancesGroup.POST("", HandlerPostInstance)
instancesGroup.DELETE("/:instance_id", HandlerDeleteInstance)
instancesGroup.PATCH("/:instance_id", HandlerPatchInstance)
}

clustersGroup := baseGroup.Group("/clusters")
clustersGroup.Use(HandlerRefreshInventory)
{
clustersGroup.GET("", HandlerGetClusters)
clustersGroup.GET("/:cluster_id", HandlerGetClustersByID)
@@ -130,7 +164,7 @@ func main() {
// programmatically set swagger info
docs.SwaggerInfo.Title = "Cluster IP API doc"
docs.SwaggerInfo.Description = "This is the API of the ClusterIQ project"
docs.SwaggerInfo.Version = "1.0"
docs.SwaggerInfo.Version = "0.2"
docs.SwaggerInfo.Host = "localhost"
docs.SwaggerInfo.BasePath = "/api/v1"
docs.SwaggerInfo.Schemes = []string{"http"}
@@ -143,7 +177,24 @@
return
}

server = &http.Server{
Addr: apiURL,
Handler: router.Handler(),
}

// Start API
go func() {
if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
logger.Fatal("Server listen and serve error", zap.Error(err))
os.Exit(-1)
}
}()

logger.Info("API Ready to serve")
router.Run(apiURL)

quitChan := make(chan os.Signal, 1)
signal.Notify(quitChan, syscall.SIGTERM)
s := <-quitChan
signalHandler(s)
logger.Info("API server stopped")
}