Services

Details on each Serverless service

The macpro-mako project is a serverless monorepo. It is, for the most part, a collection of standalone Serverless Framework microservices bound together in a repository. Loose coupling of the microservices is facilitated by several tools, including CloudFormation outputs, AWS Systems Manager Parameter Store parameters, and AWS Secrets Manager secrets. This section describes each service in detail.

alerts

Summary

The alerts service deploys a Simple Notification Service (SNS) topic to REGION_A. This topic can be leveraged by any other service for sending alerts.

Detail

  • To subscribe an email address, phone number, or other endpoint to the topic, find the SNS topic in the AWS Console and add the subscription manually (see the sketch after this list).
  • No SNS subscriptions are made by the deployment process. The topic is created, and several other services are configured to publish notifications to the topic, but the topic itself is not automatically configured to fan out any notifications. Here’s why:
    • Since dev environments may receive many notifications due to failures related to development, and since those notifications can be noisy, we likely never want to automatically subscribe to dev environments’ SNS topics.
    • We likely only want to subscribe to notifications for higher/long-running environments like main, val, and production.
    • Manually adding the subscription to higher environments was judged to be low effort, as it’s a one-time operation.
    • After adding an email as a subscriber to SNS, the email must be confirmed by clicking a link in a confirmation email. This added to the decision to handle subscriptions manually, as a human would need to verify the email manually even if the subscription was made automatically.
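
If scripting a subscription is ever preferred over the console, a minimal sketch using the AWS SDK for JavaScript v3 might look like this (the topic ARN, region, and address are placeholders):

```ts
import { SNSClient, SubscribeCommand } from "@aws-sdk/client-sns";

// Placeholder values; find the real topic ARN in the AWS Console.
const client = new SNSClient({ region: "us-east-1" });

const response = await client.send(
  new SubscribeCommand({
    TopicArn: "arn:aws:sns:us-east-1:123456789012:example-alerts-topic",
    Protocol: "email",
    Endpoint: "alerts@example.com",
  })
);

// The subscription remains "PendingConfirmation" until the recipient
// clicks the link in the confirmation email that SNS sends.
console.log(response.SubscriptionArn);
```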

api

Summary

The api service deploys a lambda-backed API Gateway that is used by the frontend to interact with the data layer. Access to any of its endpoints is guarded at a high level by AWS Cognito, ensuring only authenticated users may reach it. The lambda functions that back each endpoint enforce further fine-grained access according to business rules.

Detail

The largest component of the api service is the API Gateway itself. This is a standard deployment of a regional, REST API Gateway. We do not apply custom certificates or DNS names to the API Gateway endpoint (yet); instead, our application uses the Amazon-generated SSL endpoint.

There are five endpoints on the api. Each is guarded by AWS IAM, meaning that while the API Gateway is publicly available, the API will not forward your request to the backing lambda unless you provide valid credentials obtained through AWS Cognito. This way, only users with an account that we can authenticate may successfully call endpoints. The five endpoints are:

  • /search (POST): This endpoint accepts search queries from clients in the form of OpenSearch Query DSL queries. Once the query is received, the lambda adds extra query filters to ensure fine-grained auth (see the sketch after this list). This works by looking up the user making the call in Cognito, determining what type of user (cms or state) is making the call, determining what states that user has access to (if appropriate), and modifying the query in a way that will only return results for those states. By design, the only thing the search endpoint adds is related to authentication; the rest of the query building is left to the frontend for faster and more flexible development.
  • /item (POST): The item endpoint is used to fetch details for exactly one record. While you could form a query to do this through the search endpoint, the item endpoint exists for convenience. Simply make a POST request containing the ID of the desired record to the item endpoint, and the record will be returned. Note that fine-grained auth is still enforced in a way identical to search: you will only obtain results for an ID you have access to.
  • /getAttachmentUrl (POST): This endpoint is used to generate a presigned URL for direct client download of S3 data, enforcing fine-grained auth along the way. This is how we securely allow download of submission attachment data. From the details page, a user may click a file to download it. Once clicked, their client makes a POST request to /getAttachmentUrl with the attachment metadata. The lambda function determines whether the caller should have access based on logic identical to the other endpoints (the UI would not display something they cannot download, but this guards against bad actors). If access is allowed, the lambda function generates a presigned URL good for 60 seconds and returns it to the client browser, at which point the file downloads automatically.
  • /allForms (GET): This endpoint serves GET requests and returns a list of all available webforms and their associated versions. The result will look like: { ABP1: ['1', '2'], ABP2: ['1'] }
  • /forms (GET): This endpoint serves as the backend for handling forms and their associated data, providing features such as retrieving form data, validating access, and serving the requested form content. The request must include a formId in the request body and may optionally include a formVersion parameter; if formVersion is omitted, the latest version is returned. Form schemas are versioned and stored in Git under the api/layers directory and deployed in a Lambda layer: each form is organized in its own directory, with each version stored within that directory. When deployed to AWS, the layer is mounted under the /opt directory, so a specific version of a form is accessed with the path /opt/${formId}/v${formVersion}.js.
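
To make the /search behavior concrete, here is a minimal sketch of the kind of query rewriting described above. The function and field names (applyStateFilter, state.keyword, the shape of the Cognito attribute) are assumptions for illustration, not the actual implementation:

```ts
// Sketch only: the real handler's names and attribute layout may differ.
type QueryDSL = Record<string, unknown>;

// Assume the caller's accessible states were already read from their
// Cognito user attributes (e.g. a custom attribute like "VA,MD").
function applyStateFilter(clientQuery: QueryDSL, userStates: string[]): QueryDSL {
  return {
    ...clientQuery,
    query: {
      bool: {
        // Keep whatever the frontend asked for...
        must: [clientQuery.query ?? { match_all: {} }],
        // ...but only return documents belonging to the user's states.
        filter: [{ terms: { "state.keyword": userStates } }],
      },
    },
  };
}

// Example: a state user with access to VA and MD.
const secured = applyStateFilter(
  { query: { match: { title: "appendix k" } } },
  ["VA", "MD"]
);
```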

All endpoints and backing functions interact with the OpenSearch data layer. As such, and because OpenSearch is deployed within a VPC, all lambda functions of the api service are VPC-based. The functions share a security group that allows outbound traffic.

All functions share an IAM role. This is for convenience; we could move to one role per function if we find that valuable. The permissions include:

  • OpenSearch permissions to allow access to the data layer
  • Cognito permissions to look up user attributes, which allows enforcement of fine-grained auth.
  • AssumeRole permissions for one very specific cross-account role, which is required to generate the presigned URLs for the legacy OneMac data (sketched below).
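
A hedged sketch of the presigned-URL flow that this cross-account role enables (the role ARN, bucket, and key are placeholders; the real handler's wiring may differ):

```ts
import { STSClient, AssumeRoleCommand } from "@aws-sdk/client-sts";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// Assume the specific cross-account role that can read legacy OneMac data.
const sts = new STSClient({});
const { Credentials } = await sts.send(
  new AssumeRoleCommand({
    RoleArn: "arn:aws:iam::111122223333:role/example-legacy-onemac-read",
    RoleSessionName: "getAttachmentUrl",
  })
);

// An S3 client scoped to the assumed credentials.
const s3 = new S3Client({
  credentials: {
    accessKeyId: Credentials!.AccessKeyId!,
    secretAccessKey: Credentials!.SecretAccessKey!,
    sessionToken: Credentials!.SessionToken,
  },
});

// Presigned URL good for 60 seconds, as described above.
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: "example-bucket", Key: "example-attachment.pdf" }),
  { expiresIn: 60 }
);
```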

auth

Summary

The auth service builds the infrastructure for our authentication and authorization solution: Amazon Cognito. A user pool and an identity pool are deployed, and they may conditionally be pointed to IDM (an external identity provider).

Detail

The core of the auth service is a Cognito user pool and identity pool, which work together to provide an auth solution:

  • user pool: this is the user directory, where all active users and their attributes are stored.
    • This is where we specify the user attribute schema, informed by but not beholden to IDM.
    • The attribute schema is difficult to update, and often requires deleting the user pool. This is acceptable for two reasons. One, updating the attribute schema would be a rare event. Two, since in higher environments all users are federated, the user pool itself holds no unique data; as such, it is safe to delete and simply rebuild without data loss.
  • identity pool: this is associated with the user pool, and allows us to grant certain AWS permissions to authenticated and/or unauthenticated entities.
    • authenticated users may assume a role that gives them permissions to invoke the API Gateway, as well as see information about their own Cognito user (see the sketch after this list).
    • unauthenticated users may assume a role that gives them no permissions.
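
As an illustration of the authenticated role in action, here is a sketch of a client exchanging its Cognito login for identity pool credentials and SigV4-signing an API Gateway request. All pool IDs, hostnames, and the token are placeholders, not the project's real values:

```ts
import { fromCognitoIdentityPool } from "@aws-sdk/credential-providers";
import { SignatureV4 } from "@aws-sdk/signature-v4";
import { HttpRequest } from "@aws-sdk/protocol-http";
import { Sha256 } from "@aws-crypto/sha256-js";

// Placeholder pool IDs; the ID token comes from the user's login flow.
const credentials = fromCognitoIdentityPool({
  clientConfig: { region: "us-east-1" },
  identityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000",
  logins: {
    "cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE": "<id-token-jwt>",
  },
});

// execute-api is the signing service name for API Gateway.
const signer = new SignatureV4({
  credentials,
  region: "us-east-1",
  service: "execute-api",
  sha256: Sha256,
});

const host = "abc123.execute-api.us-east-1.amazonaws.com"; // placeholder
const signed = await signer.sign(
  new HttpRequest({
    method: "POST",
    protocol: "https:",
    hostname: host,
    path: "/prod/search",
    headers: { host, "content-type": "application/json" },
    body: JSON.stringify({ query: { match_all: {} } }),
  })
);
// `signed` now carries the Authorization header that IAM-guarded endpoints expect.
```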

In the near future, higher environments will configure IDM as an external identity provider. Ephemeral/dev environments will continue to use only the cognito user pool.

dashboard

Why do I need this?

Part of any good project is a way to determine how well it is working. A CloudWatch Dashboard surfaces the performance, health, and other aspects that factor into the product being delivered. This service provides an easy-to-use solution that makes creating a dashboard simple and deploying it even simpler.

Quick Disclaimer

Before adding the dashboard to an existing project, note that it relies on consistent namespacing across AWS services: it must be able to distinguish identifiers such as the project and branch names.

Getting Started

In order to use the CloudWatch Dashboard you must bring over the dashboard folder, which is located in the src/services/ directory of this repo. Where this folder gets added is entirely dependent upon the structure of the project it's being added to, but the good news is that this service has no dependencies on other services (it is standalone).

Making edits to the Dashboard

Once the dashboard is deployed to the AWS account it can be found in the CloudWatch Dashboards section by the name of ${stage-name}-dashboard.

Edits can be made to this dashboard in the console; when they are complete, save the dashboard and click the generate template button. The generated contents are what the templateDashboard.txt file should consist of. A simple copy, paste, and commit later, and the changes are ready to be deployed to higher environments.

data

Summary

The data service deploys our OpenSearch data layer and supporting infrastructure.

Detail

OpenSearch, Amazon's managed Elasticsearch offering, was selected as the data layer technology.

email

Summary

The email service deploys the lambdas, SNS topics, and Configuration Sets needed to send email.

Detail

AWS SES is an account-wide service for basic sending and receiving of email. By creating lambdas to build the emails and sending the email with a branch-specific configuration set, we can track email-sending events and take action based on them.

Secrets Manager

The workflow will not successfully deploy unless the emailAddressLookup object is defined:

Named {project}/default/emailAddressLookup or {project}/{stage}/emailAddressLookup:

```json
{
  "sourceEmail": "\"CMS MACPro no-reply\" [email protected]",
  "osgEmail": "\"OSG\" [email protected]",
  "chipInbox": "\"CHIP Inbox\" [email protected]",
  "chipCcList": "\"CHIP CC 1\" [email protected];\"CHIP CC 2\" [email protected]",
  "dpoEmail": "\"DPO Action\" [email protected]",
  "dmcoEmail": "\"DMCO Action\" [email protected]",
  "dhcbsooEmail": "\"DHCBSOO Action\" [email protected]"
}
```

These values are set during deployment as environment variables on the lambda. You can edit these values in the AWS Console on the Lambda configuration tab.

LAUNCH NOTE!!! The defined email addresses have been stored as om/default/emailAddressLookup in the production account, with om/production/emailAddressLookup overwriting those email addresses with the test email addresses. Delete om/production/emailAddressLookup before the real launch deploy (you can also edit the environment variables after the lambda is built).

Test accounts

Gmail accounts have been created to facilitate email testing. Please contact a MACPro team member for access to these inboxes. At this time, only one email inbox is available.

Templates

The email service uses the serverless-ses-template plugin to manage the email templates used for each stage. To edit the templates, edit index.js in ./ses-email-templates. Each template configuration object requires:

  • name: the template name (note: the stage name is appended to this during deployment so branch templates remain unique to that stage). At this time, the naming standard for email templates is based on the event details, specifically the action and the authority values from the decoded event. If action is not included in the event data, "new-submission" is used.
  • subject: the subject line of the email; may contain replacement values using {{placeholder}} syntax.
  • html: the email body in HTML; may contain replacement values using {{placeholder}} syntax.
  • text: the email body in plain text; may contain replacement values using {{placeholder}} syntax.
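
For illustration, a minimal sketch of a template configuration object in ./ses-email-templates/index.js. The template name and replacement fields here are hypothetical, and the exact export shape should be checked against the serverless-ses-template docs for the version in use:

```js
// Hypothetical template entry; real names follow the action/authority convention above.
module.exports = async () => [
  {
    // The stage name is appended during deployment, e.g. "new-submission-medicaid-spa-main".
    name: "new-submission-medicaid-spa",
    subject: "Medicaid SPA {{id}} Submitted",
    html: "<p>Submission {{id}} was received from {{submitterName}}.</p>",
    text: "Submission {{id}} was received from {{submitterName}}.",
  },
];
```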

Email Sending Service with AWS CDK

This guide provides an overview and implementation of a robust email sending service using AWS Cloud Development Kit (CDK). The service includes features such as dedicated IP pools, configuration sets, verified email identities, and monitoring through SNS topics.
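
As a hedged sketch of those pieces, the following shows how they fit together in CDK. The construct names and props are standard aws-cdk-lib, but the stack name, addresses, and wiring here are assumptions, not the project's actual stack:

```ts
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as ses from "aws-cdk-lib/aws-ses";
import * as sns from "aws-cdk-lib/aws-sns";

export class EmailStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Branch-specific configuration set so sending events are tracked per stage.
    // A dedicated IP pool could be attached via ses.DedicatedIpPool and the
    // configuration set's dedicatedIpPool prop.
    const configurationSet = new ses.ConfigurationSet(this, "ConfigurationSet", {
      configurationSetName: "example-main-config-set", // placeholder
    });

    // Verified identity for the from-address (placeholder address).
    new ses.EmailIdentity(this, "Identity", {
      identity: ses.Identity.email("no-reply@example.com"),
    });

    // Topic that receives sending events (bounces, complaints, deliveries).
    const events = new sns.Topic(this, "EmailEvents");
    configurationSet.addEventDestination("ToSns", {
      destination: ses.EventDestination.snsTopic(events),
    });
  }
}
```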

UI

Overview

This service deploys a static web application to an S3 bucket with a CloudFront distribution in front of it for CDN caching and performance optimization. The template uses the Serverless Framework and includes several plugins to help with deployment and configuration.

Configuration

The custom section defines some custom variables, including the project name, stage, region, and CloudFormation termination protection for specific stages. The s3Sync section defines the S3 bucket to which the files will be synced, the local directory where the files will be found, and whether to delete removed files.

The cloudfrontInvalidate section invalidates the CloudFront distribution cache by specifying the distribution ID and the items to invalidate. The scripts section defines a script to set environment variables during deployment, which are used to specify the API region and URL.

The provider section configures the runtime environment for the Lambda functions, the AWS region, and stack tags. It does not include any IAM configuration since no Lambda functions are defined.

This template is mainly focused on deploying the static web application to S3 and configuring the CloudFront distribution to serve the content. The environment variables set in the scripts section are used by the application to connect to the backend API.

Scripts

Three npm scripts are defined in the project's package.json file. These scripts automate development tasks related to the project; a sketch of how they are typically wired up follows the list below.

  1. dev: This script runs the Vite development server. Vite is a build tool that enables fast development by providing a development server that reloads the browser quickly whenever changes are made to the code. When the dev script is run, Vite starts the development server and serves the project files on a local web server. The output of this script will typically be a URL that can be opened in a web browser to access the development server.

  2. build: This script builds the project for production. It first runs the TypeScript compiler (tsc) to compile the TypeScript code to JavaScript; after that, the Vite build tool bundles the code and assets for production. The output of this script will typically be a set of static files that can be deployed to a web server.

  3. preview: This script starts a Vite server that serves the production build of the project on a local web server. This is useful for testing the production build locally before deploying it to a web server. When this script is run, Vite starts the production server and serves the project files on a local web server. The output of this script will typically be a URL that can be opened in a web browser to access the production server.
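
For reference, a sketch of how these three scripts are typically wired up for a Vite + TypeScript project (matching the descriptions above; the project's actual commands and flags may differ):

```json
{
  "scripts": {
    "dev": "vite",
    "build": "tsc && vite build",
    "preview": "vite preview"
  }
}
```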

UI Infra

Summary

This service provides the appropriate infrastructure for the UI application running on AWS. It creates several resources including an S3 bucket, a bucket policy, a logging bucket, a logging bucket policy, and an IAM role with permissions.

Details

  • AWS IAM role with permissions for CloudWatch logs and an IAM boundary policy.
  • Serverless plugins to help with deploying and managing the infrastructure.
  • Configuration settings for different stages of the infrastructure, including DNS record, CloudFront domain name, and certificates.
  • A set of resources to be created, including S3 buckets for hosting the UI, logging, and their policies.

Resources

  • An S3 bucket with server-side encryption and the ability to serve static web content.
  • A bucket policy that allows access to the bucket from an AWS CloudFront distribution using an Origin Access Identity (OAI).
  • An S3 bucket for CloudFront access logs with server-side encryption and an access policy that allows the AWS root account to write logs.
  • A conditional statement for DNS record creation and a conditional statement for CloudFront distribution creation.
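
For orientation, here is a CDK-style sketch of the same relationships. This is illustrative only: the service itself defines these resources as CloudFormation via the Serverless Framework, and all names here are placeholders:

```ts
import { Stack } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";

export class UiInfraSketch extends Stack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Encrypted bucket that holds the static UI assets.
    const uiBucket = new s3.Bucket(this, "UiBucket", {
      encryption: s3.BucketEncryption.S3_MANAGED,
    });

    // OAI so only the CloudFront distribution can read from the bucket.
    const oai = new cloudfront.OriginAccessIdentity(this, "Oai");
    uiBucket.grantRead(oai);

    // Separate encrypted bucket for CloudFront access logs.
    new s3.Bucket(this, "LogBucket", {
      encryption: s3.BucketEncryption.S3_MANAGED,
      objectOwnership: s3.ObjectOwnership.BUCKET_OWNER_PREFERRED,
    });
  }
}
```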