Developer Guide

asharonbaltazar edited this page Dec 10, 2024 · 2 revisions

AWS Login

Authenticating to an AWS account is a required first step for many workflows.

AWS Console Login

Summary

This procedure will take you to the AWS Console in a web browser for one of the AWS accounts used by this project.

Prerequisites

Procedure

To get to the AWS Console:

  • Log in to the cloud VPN, https://cloudvpn.cms.gov.
  • Go to the CMS Kion (Cloudtamer) site. This is a great link to bookmark. Note: if the Kion site fails to load in your browser, it is very likely an issue with your VPN; the Kion site is only accessible while actively on the VPN.
  • Log in with your CMS EUA credentials.
  • Select the drop-down menu next to the appropriate account.
  • Select Cloud Access Roles.
  • Select the role you wish to assume.
  • Select Web Access. The AWS Console for the account should open in a new browser tab.

Notes

  • Once connected to the AWS Console, you can close your VPN connection if you’d like. The VPN is only needed when authenticating to Kion and gaining AWS credentials.
  • Your browser session is valid for up to 4 hours. After 4 hours, you will need to redo this procedure.

AWS CLI credentials

Summary

This procedure will show you how to retrieve AWS CLI credentials for one of the AWS accounts used by this project, granting you programmatic access to AWS. This is required for any operations you may run directly against AWS.

Prerequisites

Procedure

  • Log in to the cloud VPN, https://cloudvpn.cms.gov.
  • Go to the CMS Kion (Cloudtamer) site. This is a great link to bookmark. Note: if the Kion site fails to load in your browser, it is very likely an issue with your VPN; the Kion site is only accessible while actively on the VPN.
  • Log in with your CMS EUA credentials.
  • Select the drop-down menu next to the appropriate account.
  • Select Cloud Access Roles.
  • Select the role you wish to assume.
  • Select ‘Short-term Access Keys’.
  • Click the code block under ‘Option 1’ to copy the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables to your clipboard.
  • Open a terminal on your Mac and paste the credentials. You should now be able to interact with AWS programmatically, based on the role you selected in Kion.
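What you paste is simply three environment variable exports. A minimal sketch, with placeholder values standing in for what Kion actually provides:

```shell
# Placeholder values for illustration only; paste the real
# exports copied from Kion's 'Option 1' code block instead.
export AWS_ACCESS_KEY_ID="ASIAEXAMPLEKEYID"
export AWS_SECRET_ACCESS_KEY="exampleSecretAccessKey"
export AWS_SESSION_TOKEN="exampleSessionToken"

# Confirm the variables are set in the current shell:
env | grep '^AWS_' | cut -d= -f1 | sort
```

With real credentials set, a quick sanity check is `aws sts get-caller-identity`, which prints the account and assumed role the CLI is operating as.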

Notes

  • There are three available options when getting access keys from Kion. The instructions above detail Option 1, which is essentially copying and pasting environment variables into a terminal. Feel free to use one of the other options if you prefer; for the sake of simplicity, Option 1 is the only one documented and supported here.
  • Once you have credentials from Kion, you can close your VPN connection if you’d like. The VPN is only required when talking to Kion to obtain credentials.
  • The credentials are valid for 4 hours, after which you’ll need to redo this procedure.

LaunchDarkly

LaunchDarkly is a feature management platform that enables software teams to effectively manage and control the lifecycle of features in their applications. It provides feature flagging, experimentation, and continuous delivery capabilities, allowing teams to release features with confidence and control.

LaunchDarkly Access

Summary

These instructions will help you get started with LaunchDarkly.

Prerequisites

Procedure

To get access to LaunchDarkly:

  • Log in to the cloud VPN, https://cloudvpn.cms.gov.
  • Follow the instructions in the LaunchDarkly Guide to get access to the LaunchDarkly client. This is a great link to bookmark. Your EUA account will need the correct job codes before you can access the LaunchDarkly client.
  • Once you have access to LaunchDarkly, you will need to check whether you have the correct user roles to access all three environments for MACPRO_MAKO. If not, Keith Littlejohn is our CMS LaunchDarkly point of contact.

Notes

  • Once you have access to LaunchDarkly, you can switch feature flags on and off, which should show/hide various features.
  • Access codes for UI, API, and mobile all live in AWS Systems Manager Parameter Store. They were distributed by Keith Littlejohn. We do not have access to the environments’ API keys; if we need to rotate them, please reach out to him.
  • There are three environments in LaunchDarkly for Mako: Dev, IMPL, and Production, which point to our three environments. Dev is used for our ephemeral environments.

Updating Architecture Diagram

Purpose

The goal of this document is to guide users on how to update the Mako architecture diagram using eraser.io.

What is eraser.io

Eraser is a tool that enables developers to create docs and diagrams at the speed of thought via a minimal UI, keyboard-driven flows, markdown, and diagram-as-code. To access the editable version of the Mako architecture diagram, use the Architecture Diagram link below.

Updating the Architectural Diagram

  1. Go to the link provided above
  2. Make desired changes to the diagram using the tools provided by eraser.io
  3. Copy diagram
    • Select the entire diagram (highlight all images)
    • Right click on any area of the diagram while highlighted
    • Select “Copy/Paste As” and select “Copy As SVG”
  4. Go to docs > assets > diagram-twomac.svg
    • Replace the existing content (lines 4 to 6) with the SVG copied from eraser.io

Deploy a Stage

How to deploy a new or existing stage to AWS.

Summary

This deploys the entire application (the entire stage) to AWS.

Prerequisites:

Procedure

Deploy an individual service

Description

This will deploy a single service for a given stage. All other services on which your target service is dependent must already be deployed for the stage. For example: if service B depends on service A, and you want to use this procedure to deploy only service B, then service A must have already been deployed.

Prerequisites:

Procedure

 cd macpro-mako
 nvm use
 run deploy --service bar --stage foo

Deploy using GitHub Actions

Summary

This project uses GitHub Actions as its CI/CD tool. For the most part, this project also adheres to GitOps. That said…

Each branch pushed to the macpro-mako git repository is automatically deployed to AWS. GitHub Actions sees the ‘push’ event of a new branch, and runs our Deploy.yml workflow. After a few minutes, the branch will be fully deployed. This 1:1 relationship between git branches and deployed stages is the reason why ‘stage’ and ‘branch’ are sometimes used interchangeably to refer to a deployed set of the application.

Prerequisites:

  • Git repo write access; complete the Git access request portion of onboarding

Procedure

  • Obtain and set AWS CLI credentials

  • Deploy using the run script:

    cd macpro-mako
    git checkout main
    git pull
    git checkout -b foo
    git push --set-upstream origin foo
  • Monitor the status of your branch’s deployment in the repo’s Actions area.

Destroy a Stage

How to destroy a stage in AWS.

Destroy using GitHub Actions - branch deletion

Summary

GitHub Actions is usually the best way to destroy a stage. A Destroy workflow exists for this project, which will neatly take down any and all infrastructure related to a branch/stage, as well as deactivate the GitHub Environment, if it exists.

In most cases, stages are deployed from a branch in the git repo. If this is the case, and if the branch can be safely deleted, destroying using GitHub Actions and branch deletion is the preferred approach.

Prerequisites

  • Git repo write access; complete the Git access request portion of onboarding

Procedure

  • Stop and think about what you are doing. Destroying is a lot easier to avoid than to undo.
  • Delete the branch for the stage you wish to delete.
      cd macpro-mako
      git push --delete origin foo
  • Monitor the status of your stage's destruction in the repo's Actions area.

Notes

  • None

Destroy using GitHub Actions - manual dispatch

Summary

The same GitHub Actions workflow referenced above can be triggered manually. This is primarily useful if there is AWS infrastructure that still exists for a branch that has been deleted, and you don't want to go to the trouble of running destroy from your Mac. Or, if you want to do a clean deploy of a stage, but you don't want to delete the branch, this can also be handy.

Prerequisites

  • Git repo write access; complete the Git access request portion of onboarding

Procedure

  • In a browser, go to the repo
  • Click the Actions tab
  • Click Destroy, located on the left hand side of the screen.
  • Click 'Run workflow'
    • Leave 'Use workflow from' set to main.
    • Enter the name of the stage you wish to destroy in the free text field.
    • Click 'Run workflow'
  • Monitor the status of your stage’s destruction in the repo’s Actions area.

Destroy using the run script

Summary

This destroys the entire application (the entire stage) in AWS.

Prerequisites:

  • Completed all onboarding

Procedure

  • Stop and think about what you are doing. Destroying is a lot easier to avoid than to undo.
  • Obtain and set AWS CLI credentials
  • Destroy using the run script:
      cd macpro-mako
      nvm use
      run destroy --stage foo

Notes

  • After running the above destroy command, the script will output any CloudFormation stacks that will be deleted, and ask you to verify the stage name to proceed with destruction. If you'd like to proceed, re-enter the stage name and hit enter.
  • The destroy script will hold your terminal process open until all stacks report as DELETE_COMPLETE in CloudFormation. If a stack fails to delete, or if there is a timeout, the script will fail. You may retry the script, but it may be worth investigating the failure.
  • Please be mindful of what you are doing.

Destroy an individual service

Summary

This will destroy a single service for a given stage.

Prerequisites:

Procedure

  • Stop and think about what you are doing. Destroying is a lot easier to avoid than to undo.
  • Obtain and set AWS CLI credentials
  • Destroy a single service using the run script:
      cd macpro-mako
      nvm use
      run destroy --service bar --stage foo

Notes

  • All notes from the Destroy a Stage section (above) hold true for destroying an individual service.

Subscribing to Alerts

How to subscribe to alerts from a stage.

Summary

This project uses SNS for near-real-time alerting of application health and performance. Subscriptions to these topics are not made automatically, for a few reasons (see the alerts service details). This will guide you in how to create a subscription.

Prerequisites:

Procedure

  • Go to the AWS Console.
  • Choose your region in the top-right drop-down.
  • Navigate to SNS.
  • Click Topics (see the left-hand hamburger menu) and select your stage's topic.
  • Click Add subscription and follow the prompts.
  • If you control the inbox for the subscription you just added, go to the inbox and click through the confirmation email from AWS.
  • Repeat these steps for the application's other region.
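If you prefer the CLI, the console steps above map onto a single `aws sns subscribe` call. A hedged sketch, using a hypothetical topic ARN and endpoint (substitute your stage's real values); the command is echoed rather than executed here, since running it requires live AWS credentials:

```shell
# Hypothetical values for illustration only.
TOPIC_ARN="arn:aws:sns:us-east-1:123456789012:alerts-mystage"
ENDPOINT="you@example.com"

# Echo the equivalent CLI call; drop the leading 'echo' to actually
# run it (requires valid AWS CLI credentials for the account).
echo aws sns subscribe \
  --topic-arn "$TOPIC_ARN" \
  --protocol email \
  --notification-endpoint "$ENDPOINT"
```

As with the console flow, you still need to confirm the subscription from the endpoint's inbox, and repeat the call in the application's other region.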

Pre-Commit Usage

How to use pre-commit on the project

Context

Pre-commit is a Python package that enables projects to specify a list of hooks to run before a commit is made (pre-commit hooks). This is really useful for enforcing standards or conventions, as it prevents non-conformant changes from being committed.

On this project, we use pre-commit to automate several checks, including:

  • running a code formatting check based on prettier
  • checking for large files typically not desired to keep in source control
  • scanning for secret material, such as AWS keys

Aside from these checks being run prior to any commit being pushed, they are also run by a GitHub Actions workflow when a pull request is made.

Installation

Good news! If you completed onboarding and ran the workspace setup script, pre-commit should already be installed on your machine.

You can test that it's installed by running pre-commit -V in a terminal window. If you get a nominal return including a version number, you're all set. If the pre-commit command is not found, please refer back to the Onboarding / Workspace Setup section of this site; pre-commit must be installed and set up on your machine, as it is part of the development workflow for this architecture. Luckily, setup is simple.

Configuration

Although pre-commit is installed on your workstation, you must configure pre-commit to run for a given repository before it will begin blocking bad commits.

This procedure only needs to be run once per repository, or once each time the .pre-commit-config.yaml file is changed in the repository (very infrequently).

  • open a terminal
  • install all hooks configured in .pre-commit-config.yaml
      cd macpro-mako
      pre-commit install -a

That's it -- after running the above commands inside the project repository, pre-commit will run the project's configured checks before any commit.
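For reference, a `.pre-commit-config.yaml` along these lines wires up the kinds of checks described above. The repos, revisions, and hook IDs below are common upstream examples, not necessarily the exact ones this project pins -- consult the repo's actual `.pre-commit-config.yaml` for the real configuration:

```yaml
# Illustrative only; see the project's real .pre-commit-config.yaml.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: check-added-large-files   # block large files from source control
  - repo: https://github.com/pre-commit/mirrors-prettier
    rev: v3.0.3
    hooks:
      - id: prettier                  # code formatting check
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0
    hooks:
      - id: detect-secrets            # scan for secret material, e.g. AWS keys
```

After changing this file, re-run `pre-commit install -a` so the updated hooks take effect.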

Update from Base

How to update your project with the latest macpro-base-template releases.

Summary

This command fetches the latest macpro-base-template changes and merges them into your branch. You may then resolve any merge conflicts, and open a PR.

Prerequisites:

Procedure

  • Use the run script:
      nvm use
      run base-update

List Running Stages

How to get a list of currently running stages for this project in the current AWS account.

Summary

This returns a list of currently running stages for this project in the current AWS account.

Prerequisites:

Procedure

Run Report using GitHub Actions

Summary

This project uses GitHub Actions as its CI/CD tool.

Each of our repositories has a GitHub Actions workflow that runs this list-running-stages command and reports the results to Slack on a schedule. This workflow may also be manually invoked.

Prerequisites:

  • Git repo access; complete the Git access request portion of onboarding
  • Access to CMS Slack to see the generated report.

Procedure

  • Browse to the Actions page of the repository in GitHub, select the "Running Stage Notifier" workflow, and press Run workflow.

Running Top Level Commands

Overview

The src/run.ts file defines many commands you can use to perform helpful tasks for your project. This file uses Yargs to provide command-line interfaces to several commands for managing a serverless project.

The code at src/run.ts implements a command-line interface (CLI) for the project. The CLI accepts various commands to manage and deploy the project, and uses the yargs library to parse the command-line arguments.

The CLI provides the following commands, invoked as run [command] [options]:

Commands

  • install: installs all service dependencies.
  • ui: configures and starts a local React UI against a remote backend.
  • deploy: deploys the project.
  • test: runs all available tests.
  • test-gui: opens the unit-testing GUI for vitest.
  • destroy: destroys a stage in AWS.
  • connect: prints a connection string that can be run to 'ssh' directly onto the ECS Fargate task.
  • deleteTopics: deletes topics from Bigmac which were created by development/ephemeral branches.
  • syncSecurityHubFindings: syncs Security Hub findings to GitHub Issues.
  • docs: starts the Jekyll documentation site in a Docker container, available at http://localhost:4000.

Options

Each command has its own set of options that can be passed in the command line.

For example, the deploy command requires the option stage (type string, required) and accepts the option service (type string, optional). The behavior of the command is defined by an async function, which installs all service dependencies and then executes the deployment through the runner.run_command_and_output function with the SLS deploy command and the options set on the command line.

The same approach is used for all other commands. They all start by installing the dependencies of the services, and then perform specific tasks based on the options passed in the command line.

The docs command starts a Jekyll documentation site in a Docker container. If the stop option is passed as true, it will stop any existing container. Otherwise, it will start a new container and run the documentation site at http://localhost:4000.

Unit Testing

Overview

Unit testing is done using the vitest framework.

Vitest

Vitest is a unit testing framework for testing JavaScript code. It allows you to write tests in a simple and concise manner, and provides tools for running and reporting on the results of those tests.

Running Tests

Tests can be run using the top level run commands:

  • run test --stage [stage] - runs all tests
  • run test-gui --stage [stage] - runs all tests, displaying results in a browser UI