This repository has been archived by the owner on Jan 29, 2024. It is now read-only.

v1.1.0: Optimisation
eminaws committed Sep 15, 2021
1 parent 1b6f04f commit 79cd6a7
Showing 79 changed files with 29,945 additions and 14,025 deletions.
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -17,9 +17,9 @@ Steps to reproduce the behavior.
A clear and concise description of what you expected to happen.

**Please complete the following information about the solution:**
-- [ ] Version: [e.g. v1.0.0]
+- [ ] Version: [e.g. v1.1.0]

-To get the version of the solution, you can look at the description of the created CloudFormation stack. For example, "_Amazon S3 Glacier Re:Freezer. Version **v1.0.0**_".
+To get the version of the solution, you can look at the description of the created CloudFormation stack. For example, "_Amazon S3 Glacier Re:Freezer. Version **v1.1.0**_".

- [ ] Region: [e.g. us-east-1]
- [ ] Was the solution modified from the version published on this repository?
20 changes: 20 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,26 @@

All notable changes to this project will be documented in this file.

## [1.1.0] - 2021-07-25
### Added
- Amazon S3 Glacier Re:Freezer detects service throttling and automatically reduces the requestArchive call rate, allowing extra time to process the vault in line with the observed throttling metrics
- New CloudWatch Metrics:
- BytesRequested
- BytesStaged
- BytesValidated
- BytesCompleted
- ThrottledBytes
- ThrottledErrorCount
- FailedArchivesBytes
- FailedArchivesErrorCount

### Changed
- copyToDestination has been split out of calculateTreehash into a separate SQS queue and Lambda function
- downloading archives from Glacier is now handled only by the copyChunk function
- CloudWatch Metrics dimension name changed to "CloudFormationStack"
- CloudWatch metric names have been renamed to "ArchiveCount<Metric>"
- Updated CDK Version to 1.119.0
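The throttling-aware rate adjustment noted above could follow the classic multiplicative-decrease/additive-increase pattern. A hypothetical sketch of that general technique (class and method names are illustrative assumptions, not the solution's actual implementation):

```javascript
// Hypothetical sketch of throttle-aware rate adjustment (not the
// solution's actual code): when the service reports throttling,
// cut the requestArchive call rate; recover slowly on success.
class AdaptiveRateLimiter {
    constructor(initialRatePerSecond) {
        this.rate = initialRatePerSecond;
        this.min = 1;
        this.max = initialRatePerSecond;
    }

    // Halve the rate on a throttling error (multiplicative decrease).
    onThrottle() {
        this.rate = Math.max(this.min, Math.floor(this.rate / 2));
    }

    // Recover additively after a successful batch.
    onSuccess() {
        this.rate = Math.min(this.max, this.rate + 1);
    }

    // Delay between calls implied by the current rate, in ms.
    delayMs() {
        return Math.ceil(1000 / this.rate);
    }
}

const limiter = new AdaptiveRateLimiter(100);
limiter.onThrottle(); // rate drops from 100 to 50
limiter.onSuccess();  // rate climbs back to 51
console.log(limiter.rate, limiter.delayMs());
```

The net effect is the one described in the changelog entry: throttled runs simply take longer rather than failing archives outright.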

## [1.0.1] - 2021-06-09
### Changed
- Retrieval requests are evenly distributed throughout the runtime period
27 changes: 15 additions & 12 deletions README.md
@@ -11,6 +11,7 @@ refer to [Deleting an Archive in Amazon S3 Glacier](https://docs.aws.amazon.com/
## Table of contents
- [Architecture](#architecture)
- [Project structure](#project-structure)
- [Anonymous metric collection](#anonymous-metric-collection)
- [Deployment](#deployment)
- [Runtime Monitoring](#monitoring)
- [Creating a custom build](#creating-a-custom-build)
@@ -55,6 +56,18 @@ The solution uses SHA256 Treehash to perform archive integrity checking on the c

During the copy operation, Amazon DynamoDB is used to keep track of the status of the archive copies, and the copy progress is visible through the provided Amazon CloudWatch dashboard.

## Anonymous metric collection

This solution collects anonymous operational metrics to help AWS improve the quality and features of the solution. For more information, including how to disable this capability, please see the [implementation guide](https://docs.aws.amazon.com/solutions/latest/amazon-s3-glacier-refreezer/collection-of-operational-metrics.html).

The following data points are collected:

- Region
- Target Storage Class
- Vault Archive Count
- Vault Size
- Solution version
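A minimal sketch of what such a payload might look like when assembled; all field names here are illustrative assumptions, not the solution's actual schema:

```javascript
// Hypothetical shape of the anonymous metrics payload (field names
// are illustrative assumptions, not the solution's actual schema).
function buildAnonymousMetrics(stack) {
    return {
        Region: stack.region,                   // e.g. "us-east-1"
        TargetStorageClass: stack.storageClass, // e.g. "DEEP_ARCHIVE"
        VaultArchiveCount: stack.archiveCount,
        VaultSize: stack.vaultSizeBytes,
        SolutionVersion: stack.version          // e.g. "v1.1.0"
    };
}

const payload = buildAnonymousMetrics({
    region: "us-east-1",
    storageClass: "DEEP_ARCHIVE",
    archiveCount: 1000,
    vaultSizeBytes: 1099511627776,
    version: "v1.1.0"
});
console.log(JSON.stringify(payload));
```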

## Deployment

> **Please ensure you test the solution prior to running it against any production vaults.**
@@ -95,15 +108,6 @@ Once deployed, the CloudFormation Output tab will have the link to Amazon CloudW

![Amazon S3 Glacier Re:Freezer Progress Metrics](source/images/dashboard.png)

### Anonymous Statistics Collection

The deployment will collect and send anonymously to the AWS Solution Builders team the following data points:

- Region
- Target Storage Class
- Vault Archive Count
- Vault Size

## Project structure

```
@@ -228,8 +232,8 @@ BUCKET_NAME=my-glacier-refreezer-ap-southeast-2 # full regional bucket name
SOLUTION_NAME=my-solution-name # custom solution name
VERSION=my-version # custom version number
-aws s3 cp ./global-s3-assets/ s3://${BUCKET_NAME}/${SOLUTION}/${VERSION} --recursive --acl public-read --acl bucket-owner-full-control
-aws s3 cp ./regional-s3-assets/ s3://${BUCKET_NAME}/${SOLUTION}/${VERSION} --recursive --acl public-read --acl bucket-owner-full-control
+aws s3 cp ./global-s3-assets/ s3://${BUCKET_NAME}/${SOLUTION_NAME}/${VERSION} --recursive --acl public-read --acl bucket-owner-full-control
+aws s3 cp ./regional-s3-assets/ s3://${BUCKET_NAME}/${SOLUTION_NAME}/${VERSION} --recursive --acl public-read --acl bucket-owner-full-control
echo "https://${BUCKET_NAME}.s3.amazonaws.com/${SOLUTION_NAME}/${VERSION}/${SOLUTION_NAME}.template"
```
@@ -252,7 +256,6 @@ echo "https://${BUCKET_NAME}.s3.amazonaws.com/${SOLUTION_NAME}/${VERSION}/${SOLUTION_NAME}.template"
- [Amazon S3](https://docs.aws.amazon.com/s3/) — creates an Amazon S3 bucket for the staging area to temporarily store the copied S3 Glacier vault archives.
- [AWS Lambda](https://docs.aws.amazon.com/lambda/) - 1) request and download the inventory file for the Amazon S3 Glacier vault, 2) request archives from Amazon S3 Glacier vault, 3) perform the archive copy function to the staging Amazon S3 bucket, 4) calculate SHA256 Treehash of copied objects, 5) move the validated objects to the destination Amazon S3 bucket, 6) collect and post metrics to Amazon CloudWatch, and 7) send anonymous statistics to the Solution Builder endpoint (if you elect to send anonymous statistics).


***

Copyright 2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 changes: 1 addition & 1 deletion deployment/build-s3-dist.sh
@@ -26,7 +26,7 @@
set -e

# Important: CDK global version number
-cdk_version=1.107.0
+cdk_version=1.119.0

# Check to see if input has been provided:
if [ -z "$1" ] || [ -z "$2" ] || [ -z "$3" ]; then
Binary file modified source/images/amazon-s3-glacier-refreezer-architecture.png
35 changes: 24 additions & 11 deletions source/lambda/calculateMetrics/index.js
@@ -21,23 +21,36 @@ const dynamo = require("./lib/dynamo.js");

async function handler(event) {

-    let requested = 0;
-    let staged = 0;
-    let validated = 0;
-    let copied = 0;
+    let requestedCount = 0;
+    let stagedCount = 0;
+    let validatedCount = 0;
+    let copiedCount = 0;
+    let requestedBytes = 0;
+    let stagedBytes = 0;
+    let validatedBytes = 0;
+    let copiedBytes = 0;

    for (const record of event.Records) {
        if (record.eventName === "REMOVE") continue;
-        requested += dynamo.checkField(record, "aid");
-        staged += dynamo.checkField(record, "sgt");
-        validated += dynamo.checkField(record, "vdt");
-        copied += dynamo.checkField(record, "cpt");
+        requestedCount += dynamo.checkField(record, "cdt");
+        stagedCount += dynamo.checkField(record, "sgt");
+        validatedCount += dynamo.checkField(record, "vdt");
+        copiedCount += dynamo.checkField(record, "cpt");
+
+        requestedBytes += dynamo.getIncrementBytes(record, "cdt");
+        stagedBytes += dynamo.getIncrementBytes(record, "sgt");
+        validatedBytes += dynamo.getIncrementBytes(record, "vdt");
+        copiedBytes += dynamo.getIncrementBytes(record, "cpt");
    }

-    if (requested > 0 || staged > 0 || validated > 0 || copied > 0 ) {
-        console.log(`r: ${requested} s: ${staged} v: ${validated} c: ${copied} `);
-        await dynamo.incrementCount(requested, staged, validated, copied);
+    if (requestedCount > 0 || stagedCount > 0 || validatedCount > 0 || copiedCount > 0) {
+        console.log(`r: ${requestedCount} s: ${stagedCount} v: ${validatedCount} c: ${copiedCount} `);
+        await dynamo.incrementCount(requestedCount, stagedCount, validatedCount, copiedCount);
+
+        console.log(`rb: ${requestedBytes} sb: ${stagedBytes} vb: ${validatedBytes} cb: ${copiedBytes} `);
+        await dynamo.incrementBytes(requestedBytes, stagedBytes, validatedBytes, copiedBytes);
    }

}

module.exports = {
31 changes: 30 additions & 1 deletion source/lambda/calculateMetrics/lib/dynamo.js
@@ -32,6 +32,15 @@ function checkField(record, field) {
    return 0;
}

+function getIncrementBytes(record, field) {
+    if ((!record.dynamodb.OldImage || !record.dynamodb.OldImage[field]) &&
+        record.dynamodb.NewImage[field]) {
+        return parseInt(record.dynamodb.NewImage['sz']['N']);
+    }
+
+    return 0;
+}
+
async function incrementCount(requested, staged, validated, copied) {
    await dynamodb.updateItem({
        TableName: METRICS_TABLE,
@@ -50,7 +59,27 @@
    }).promise();
}

+async function incrementBytes(requested, staged, validated, copied) {
+    await dynamodb.updateItem({
+        TableName: METRICS_TABLE,
+        Key: {
+            pk: {
+                S: "volume"
+            }
+        },
+        ExpressionAttributeValues: {
+            ":requested": { N: `${requested}` },
+            ":staged": { N: `${staged}` },
+            ":validated": { N: `${validated}` },
+            ":copied": { N: `${copied}` }
+        },
+        UpdateExpression: "ADD requested :requested, staged :staged, validated :validated, copied :copied"
+    }).promise();
+}
+
module.exports = {
+    checkField,
+    getIncrementBytes,
    incrementCount,
-    checkField
+    incrementBytes
}
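The new getIncrementBytes helper counts an archive's size (the `sz` attribute) only when the given status field first appears on the item, so each lifecycle transition contributes bytes exactly once. A standalone usage sketch against a hand-built DynamoDB Streams record (the record values are illustrative):

```javascript
// Standalone copy of getIncrementBytes with a hand-built DynamoDB
// Streams record: bytes are counted only when `field` first appears.
function getIncrementBytes(record, field) {
    if ((!record.dynamodb.OldImage || !record.dynamodb.OldImage[field]) &&
        record.dynamodb.NewImage[field]) {
        return parseInt(record.dynamodb.NewImage["sz"]["N"]);
    }
    return 0;
}

// "sgt" (staged timestamp) appears for the first time in NewImage,
// so this MODIFY event contributes the archive's size once.
const record = {
    dynamodb: {
        OldImage: { sz: { N: "1048576" } },
        NewImage: { sz: { N: "1048576" }, sgt: { S: "2021-07-25T00:00:00Z" } }
    }
};

console.log(getIncrementBytes(record, "sgt")); // 1048576
console.log(getIncrementBytes(record, "vdt")); // 0 - "vdt" is not set yet
```

A later MODIFY event whose OldImage already contains `sgt` would return 0 for that field, which is what keeps the BytesStaged metric from double-counting.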
