Merge pull request #11 from PLG-Works/input_changes
input field type changes
bala007 authored Oct 4, 2022
2 parents 0ba84b6 + f71acc0 commit ab9b407
Showing 6 changed files with 111 additions and 110 deletions.
85 changes: 57 additions & 28 deletions README.md
@@ -1,59 +1,88 @@
# Ghost Hosting CLI
Ghost Hosting CLI is an interactive command line tool to host [Ghost](https://ghost.org/) on the AWS cloud with the help of [Terraform-CDK](https://www.terraform.io/cdktf). It simplifies Ghost server deployment by utilizing AWS infrastructure, and it provides the flexibility to host a fresh stack or to plug it into existing infrastructure.

## Prerequisites
- Terraform >= 1.1.8
- NodeJS >= 14.17
- AWS account with admin access

## How does it work?

Ghost Hosting CLI uses the AWS cloud platform, so the following parameters are required by default:
* `AWS access key`
* `AWS secret access key`
* `AWS region`

It requires a `config.json` file, which gets generated while taking input from the user. If this file is already present at that location (from previous deployments), the CLI prompts the user to choose whether to use the existing configuration or to create a new one.
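As a rough sketch of this "reuse or regenerate" flow, here is how such a prompt could be implemented with the `readline-sync` package the CLI already depends on. This is illustrative only, not the CLI's actual code; the config path, prompt text, and fields are assumed:

```typescript
import * as fs from "fs";
import * as readlineSync from "readline-sync";

const CONFIG_PATH = "./config.json"; // assumed location; the CLI defines its own path

// If a config.json from a previous deployment exists, ask whether to reuse it.
let useExisting = false;
if (fs.existsSync(CONFIG_PATH)) {
  // keyInYN returns true for "y", false for "n", and "" for any other key.
  useExisting = readlineSync.keyInYN("Found an existing config.json. Use it?") === true;
}

if (!useExisting) {
  // Otherwise collect fresh input and write a new config.json (fields here are hypothetical).
  const region = readlineSync.question("AWS region: ");
  fs.writeFileSync(CONFIG_PATH, JSON.stringify({ region }, null, 2));
}
```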

Configuration file generation happens only in the deploy stage. Once the `config.json` file is ready, the CLI synthesizes the Terraform configuration for the application.

After this, the rest is handled by Terraform to deploy or destroy the stacks.

For the deployment, the CLI creates two stacks:
- **Backend stack**: An S3 backend is used to provide state locking and consistency checking. An S3 bucket stores the state files generated by Terraform, and a DynamoDB table is used for locking (a sketch of the corresponding remote state configuration is shown below).
- **Ghost stack**: Once the backend stack is deployed, the deployment of the Ghost stack begins. Changes in the infrastructure plan are shown to the user before deploying or destroying the stack.

Terraform CDK then utilizes the specified providers and modules to generate the Terraform configuration, which is later used to deploy or destroy the stacks.
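In CDKTF terms, the remote state arrangement provided by the backend stack corresponds to an `S3Backend` pointing at that bucket and DynamoDB table. A minimal sketch, with assumed bucket and table names rather than the tool's exact generated configuration:

```typescript
import { App, TerraformStack, S3Backend } from "cdktf";
import { Construct } from "constructs";

class GhostStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Keep Terraform state in S3 and use a DynamoDB table for state locking.
    new S3Backend(this, {
      bucket: "plg-ghost-terraform-state",       // assumed bucket name
      key: "ghost/terraform.tfstate",
      region: "us-east-1",
      dynamodbTable: "plg-ghost-terraform-lock", // assumed lock table name
      encrypt: true,
    });

    // ...providers and resources for the Ghost deployment would be declared here...
  }
}

const app = new App();
new GhostStack(app, "ghost");
app.synth();
```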

### Provides flexibility with:
1. **Existing VPC**: You can use an existing VPC by providing subnet ids as comma-separated values; otherwise, it will create a new VPC and subnets. To use an existing VPC, you need to provide the `subnet ids` to launch the ECS tasks (private subnets are recommended) and the `public subnet ids` to launch the load balancer (ALB). It also expects Route53 to be configured for the domain where you want to host Ghost.
2. **Existing Load Balancer**: You can use an existing load balancer (ALB) by providing the `load balancer listener ARN`.
3. **Hosting URL**: It requires a `Ghost hosting url` where Ghost can be accessed on the web.
4. **Static Website**: Refer to [this](https://github.com/PLG-Works/ghost-static-website-generator) to generate a static website from the Ghost content. You can also specify a `Static website url`, for which it will provision an AWS S3 bucket to host the static website.
5. **Existing MySQL Database**: The CLI requires a MySQL database to store the Ghost configuration along with the content. You can provide existing DB credentials (DB host, DB name, DB user, and DB password); otherwise, it will create a new RDS instance. A simplified sketch of how these choices branch is shown after this list.
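The following is the simplified sketch referred to above: a purely illustrative view of how these choices branch depending on which inputs the user supplies. The field names are hypothetical and are not the CLI's actual input schema:

```typescript
// Hypothetical, simplified view of the CLI's input branching (illustrative only).
interface UserInput {
  vpcSubnetIds?: string[];    // existing (private) subnets for the ECS tasks
  publicSubnetIds?: string[]; // existing public subnets for the ALB
  albListenerArn?: string;    // existing ALB listener, if any
  ghostHostingUrl: string;    // URL where Ghost will be served
  staticWebsiteUrl?: string;  // optional URL for the S3-hosted static site
  dbHost?: string;            // existing MySQL host, if any
  dbName?: string;
  dbUser?: string;
  dbPassword?: string;
}

function plan(input: UserInput) {
  return {
    createVpc: !input.vpcSubnetIds || input.vpcSubnetIds.length === 0, // no subnet ids -> new VPC and subnets
    createAlb: !input.albListenerArn,                                  // no listener ARN -> new ALB
    createRds: !input.dbHost,                                          // no DB credentials -> new RDS instance
    createStaticSiteBucket: Boolean(input.staticWebsiteUrl),           // static URL given -> provision S3 bucket
  };
}

// Example: reuse an existing database, create everything else.
console.log(
  plan({
    ghostHostingUrl: "https://blog.example.com",
    dbHost: "db.internal",
    dbName: "ghost",
    dbUser: "ghost",
    dbPassword: "secret",
  })
);
```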

## Why do I need to use this tool?
It comes with the following benefits:
- Easy setup. You don't have to worry about provisioning each of the AWS resources by yourself.
- Make use of the existing infrastructure by providing the existing VPC subnets, load balancer, and database.
- Use this setup to provision and host the static website for the generated content.
- It uses AWS ECS with auto-scaling enabled. So, you don't have to worry about scalability.
- It provides a cost-efficient setup by plugging in the existing load balancer and database. Also, it runs on AWS `FARGATE` and utilizes `FARGATE SPOT` resources.

## Example Usage:

- Install the package:
```bash
npm install -g plg-ghost
```
- Deploy Ghost Stack and Backend Stack:
```bash
plg-ghost deploy
```

- Destroy Ghost Stack and Backend Stack:
```bash
plg-ghost destroy
```

## Development:
- Clone the repository:
```bash
git clone git@github.com:PLG-Works/ghost-hosting-cli.git
```
- Install all dependencies:
```bash
cd ghost-hosting-cli
npm install
```
- Create a build:
```bash
npm run get # fetch required terraform providers and modules
npm run build # create a build
```
or
```bash
npm run watch
```
- Deploy stacks
```bash
npm run dev -- deploy
```
- Destroy stacks
```bash
npm run dev -- destroy
```

> While executing the **deploy**/**destroy** commands, you might get timeout exceptions because of network interruptions. If that is the case, re-run the command to complete the execution.
6 changes: 3 additions & 3 deletions package.json
@@ -2,7 +2,7 @@
"name": "plg-ghost",
"version": "1.0.0",
"description": "Host Ghost server in AWS ECS and provision to host static files on AWS S3",
"main": "./dist/run.js",
"main": "./dist/src/run.js",
"scripts": {
"dev": "ts-node ./src/run.ts",
"build": "tsc",
@@ -43,12 +43,12 @@
"@cdktf/provider-aws": "8.0.12",
"cdktf": "0.11.2",
"cdktf-cli": "0.11.2",
"chalk": "4.1.2",
"commander": "9.3.0",
"constructs": "10.1.37",
"psl": "1.9.0",
"readline-sync": "1.4.10",
"shelljs": "0.8.5",
"chalk": "4.1.2"
"shelljs": "0.8.5"
},
"devDependencies": {
"@types/jest": "28.1.1",
14 changes: 7 additions & 7 deletions src/driver.ts
@@ -89,9 +89,9 @@ async function _deployStack(): Promise<void> {
});

console.log(chalk.blue.bold('Please review the above output for DEPLOY action.'));
-const approve = readlineSync.question(chalk.blue.bold('Do you want to approve?(Y/n): '), { defaultInput: YES });
+const approve = readlineSync.keyInYN(chalk.blue.bold('Do you want to approve?'));

-if (approve === YES) {
+if (approve === true) {
// Deploy ghost stack
await exec(`cd ${GHOST_OUTPUT_DIR} && terraform apply -auto-approve`).catch(() => {
process.exit(1);
@@ -108,7 +108,7 @@ async function _deployStack(): Promise<void> {

const { input, formattedOutput } = _readAndShowOutput();
_nextActionMessage(input, formattedOutput);
-} else if (approve === NO) {
+} else if (approve === false) {
console.log('Declined!');
} else {
console.log(INVALID_INPUT);
@@ -195,7 +195,7 @@ function _nextActionMessage(input: any, formattedOutput: any): void {
[chalk.cyan.bold('Value')]: formattedOutput['alb_alb_dns_name'],
},
];
-if(input.hostStaticWebsite){
+if (input.hostStaticWebsite) {
r53Records.push({
[chalk.cyan.bold('Domain Name')]: rootDomain,
[chalk.cyan.bold('Record Name')]: getDomainFromUrl(input.staticWebsiteUrl),
@@ -229,9 +229,9 @@ function _nextActionMessage(input: any, formattedOutput: any): void {
*/
async function _destroyStack(): Promise<void> {
console.log(chalk.blue.bold('\nThis action will destroy the stack.'));
-const approve = readlineSync.question(chalk.blue.bold('Do you want to approve?(Y/n): '), { defaultInput: YES });
+const approve = readlineSync.keyInYN(chalk.blue.bold('Do you want to approve?'));

-if (approve === YES) {
+if (approve === true) {
// Destroy ghost stack
console.log('Destroying Ghost stack...');
await exec(`cd ${GHOST_OUTPUT_DIR} && terraform destroy -auto-approve`).catch(() => {
@@ -245,7 +245,7 @@ async function _destroyStack(): Promise<void> {
process.exit(1);
});
console.log('S3 backend stack destroyed successfully.');
-} else if (approve === NO) {
+} else if (approve === false) {
console.log('Declined!');
} else {
console.log(INVALID_INPUT);
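For context on the change above: `readline-sync`'s `question()` returns the typed string (previously compared against `YES`), whereas `keyInYN()` returns `true` for a 'y' keypress, `false` for 'n', and an empty string for any other key, which is why the checks now compare against `true`/`false`. A minimal standalone illustration, not code from this repository:

```typescript
import * as readlineSync from "readline-sync";

// keyInYN returns true ('y'), false ('n'), or '' (any other key).
const approve = readlineSync.keyInYN("Do you want to approve?");

if (approve === true) {
  console.log("Approved!");
} else if (approve === false) {
  console.log("Declined!");
} else {
  console.log("Invalid input");
}
```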