footer: © NodeProgram.com, Node.University and Azat Mardan
slidenumbers: true
theme: Simple, 1
build-lists: true
Azat Mardan @azatmardan
[.slidenumbers: false]
Refresher from the AWS Intro course. Yell out loud the cloud computing benefits! 🔊
[.autoscale: true]
- Elastic, scalable, flexible and operationally agile
- Disaster recovery
- Automatic software updates
- Capital-expenditure Free
- Increased collaboration
- Work from anywhere
- Standards and expertise
- Reduced time to market and competitiveness
- Environmentally friendly
- Easy to use
- Benefits of massive economies of scale
- Faster global delivery
- Azure
- AWS
- IaaS
- PaaS
- BaaS
- FaaS
- SaaS
- One of the first
- Massive scale
- Innovator with new features and services
- Lots of tools - good dev experience
- Almost a standard with lots of expertise, best practices, experts, books, etc.
[.autoscale: true]
- Horizontal and vertical scaling
- Redundancy
- Not just EC2: services instead of servers
- Loose coupling
- Stateless
- Automation
- Cost optimization
- Caching
^: User Data, CloudFormation; Spot, event driven, alerts and auto scaling groups; CloudFront, app caching
- Build and deploy faster
- Lower or mitigate risks
- Make informed decisions
- Learn AWS best practices
Whitepaper: pdfs/AWS_Well-Architected_Framework.pdf
- Security
- Reliability
- Performance Efficiency
- Cost Optimization
- Operational Excellence
- Speed of delivery - business value
- Reduce errors (automate)
- Save cost
- Bridge gap between IT Ops and devs - work as one team
- Automate everything
- Version: define infrastructure as code and store in version control system (like app code)
- Ability to deploy and roll back quickly and often
- Test app and infrastructure code
For apps:
Dev -> Code version control repository -> Build and Test on CI server -> Deploy to QA -> deploy to prod
For infra:
IT Ops -> Repo -> CI Server: Build images and validate templates (CloudFormation), test APIs -> Deploy
A CI/CD pipeline automates the CI steps. It could include stress testing and performance testing.
Continuous delivery is not the same as continuous deployment (delivery keeps a manual prod deploy step).
- Repeatability: Humans make mistakes, machines less so (almost 0 when hardware is robust)
- Agility: Deploy quickly and often and roll back quickly and predictably if needed
- Auditing: Permissions and ACL with a history
- AWS CLI
- SDKs
- CloudFormation
- Others: Ansible, Terraform
- Provision environment/infrastructure: AWS CLI, CloudFormation, OpsWorks, Beanstalk
- Configuring servers with AWS: User Data, Docker, Beanstalk, CodeDeploy
- Configuring servers with other tools: Chef, Puppet, SaltStack, Ansible
- Provision environment
- Deploy code
- Build
- Test
- Verify
Repo: https://github.com/azat-co/aws-intermediate
Git clone (you can fork first too):
git clone https://github.com/azat-co/aws-intermediate.git
Download with CURL and unzip (create a new folder):
curl https://codeload.github.com/azat-co/aws-intermediate/zip/master | tar -xv
- Sign up for free tier with an email you have access to
- Random verification (phone call or wait - ask me for access to my account)
- Debit/credit Card for verification and paid services
Free tier: https://aws.amazon.com/free, examples:
- EC2: 750 hours of t2.micro (~1 month of 1 EC2) - more than enough for this class and then some more
- S3: 5 GB
- RDS: 750 hours
- Lambda: 1,000,000 requests/mo
- More products!
- Host - your dev machine (recommended for Mac and Linux)
- Virtual machine - if you develop in VM (recommended for Windows)
- Remote machine - if you develop in the cloud or if you are setting up CD/CI environment
I develop natively on my dev machine, but you can use another EC2 instance
- AWS Account (requires email + credit/debit card)
- Python 2.7 or 3.x (latest is better)
- AWS CLI: Install with pip or brew or just use a bundle (see all options)
- Node and npm for HTTP server, tools and SDK code (installers)
- Git mostly for code deploys and Elastic Beanstalk
- Code editor Atom or VS code
- CURL and PuTTY (for Windows)
- Docker daemon/engine - advanced if we have time (instructions)
aws --version
v1.x - ok
node --version
npm --version
Node 6.x - ok; npm 3.x - ok
Optional
docker --version
1.x - ok
- Slides, labs and code https://github.com/azat-co/aws-intermediate
- AWS account
- AWS CLI (pip, brew or bundle)
- Node and npm
- Docker engine
Detailed instructions and links are in labs/0-installs.md
Time: 15 minutes to download and install, go! 🚀
- Infrastructure as code: can save in a file and version
- Repeatability: bash script can be run multiple times
- Error free: no need to remember all the steps and configuration in the web console
- Fast: no need to click around in the web console
- Can be run from any machine: Will work for CI/CD
A backslash (\) at the end of a CLI command line means the command continues on the next line - optional and purely for formatting and a larger font. \ works the same in bash/zsh.
Bad font:
aws ec2 describe-images --owners amazon --filters "Name=virtualization-type,Values=hvm" "Name=root-device-type,Values=ebs" "Name=name,Values=amzn-ami-hvm-2016.09.1.20170119-x86_64-gp2"
Good font:
aws ec2 describe-images --owners amazon \
--filters "Name=virtualization-type,Values=hvm" "Name=root-device-type,Values=ebs" \
"Name=name,Values=amzn-ami-hvm-2016.09.1.20170119-x86_64-gp2"
^Both commands will work the same
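The continuation behavior is easy to verify with any command; a tiny sketch using echo (no AWS needed):

```shell
# The backslash joins lines before the shell parses the command,
# so both invocations are identical.
one_line=$(echo "one two three")
multi_line=$(echo "one" \
  "two" \
  "three")
echo "$one_line"    # one two three
echo "$multi_line"  # one two three
```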
aws <command> <subcommand> [options and parameters]
- Access Key ID
- Secret Access Key
Copy the key and secret of your root account, or better, create a new user, give that user appropriate permissions, and copy that user's key and secret (best practice).
Note: You can use AWS CLI to create a user too.
aws configure
- Provide access key ID
- Provide secret access key
- Set region to us-west-1 and output to None or json
aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-1
Default output format [None]: json
aws ec2 describe-regions
{
"Regions": [
{
"Endpoint": "ec2.eu-north-1.amazonaws.com",
"RegionName": "eu-north-1"
},
{
"Endpoint": "ec2.ap-south-1.amazonaws.com",
"RegionName": "ap-south-1"
},
aws help
aws ec2 help
aws ec2 describe-regions help
Create user:
aws iam create-user --user-name MyUser
Attach policy from a file:
aws iam put-user-policy --user-name MyUser --policy-name MyPowerUserRole --policy-document file://C:\Temp\MyPolicyFile.json
Or a link:
aws iam put-user-policy --user-name MyUser --policy-name MyPowerUserRole --policy-document https://s3.amazonaws.com/checkr3/CC_IAM_FullPolicy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1491182980154",
"Action": "ec2:*",
"Effect": "Allow",
"Resource": "*"
}
]
}
Another IAM JSON file example
List policies for the user to verify:
aws iam list-user-policies --user-name MyUser
Create password to login to web console:
aws iam create-login-profile --user-name MyUser --password Welc0m3!
Create access key:
aws iam create-access-key --user-name MyUser
{
"AccessKey": {
"UserName": "Bob",
"Status": "Active",
"CreateDate": "2015-03-09T18:39:23.411Z",
"SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY",
"AccessKeyId": "AKIAIOSFODNN7EXAMPLE"
}
}
- Lock away your AWS account (root) access keys
- Create individual IAM users
- Use AWS-defined policies to assign permissions whenever possible
- Use groups to assign permissions to IAM users
- Grant least privilege
- Configure a strong password policy for your users
- Enable MFA for privileged users
- Use roles for applications that run on Amazon EC2 instances
- Delegate by using roles instead of by sharing credentials
- Rotate credentials regularly
- Remove unnecessary credentials
- Use policy conditions for extra security
- Monitor activity in your AWS account
aws ec2 describe-instances
aws ec2 run-instances
aws ec2 create-image
aws ec2 describe-images
Each command can use help
aws ec2 describe-instances help
aws ec2 run-instances help
aws ec2 create-image help
aws ec2 describe-images help
- Get the image ID
- Execute the run-instances command using the image ID

Amazon Linux AMI IDs differ from region to region:
- Web console
- describe-images command
- AWS list
aws ec2 describe-images --owners amazon \
--filters "Name=virtualization-type,Values=hvm" "Name=root-device-type,Values=ebs" \
"Name=name,Values=amzn-ami-hvm-2016.09.1.20170119-x86_64-gp2"
Result is "ami-165a0876"
Docs: http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-images.html
Example of Amazon Linux AMI 2016.09.1 (HVM), SSD Volume Type Output:
{
"Images": [
{
"VirtualizationType": "hvm",
"Name": "amzn-ami-hvm-2016.09.1.20170119-x86_64-gp2",
"Hypervisor": "xen",
"ImageOwnerAlias": "amazon",
"EnaSupport": true,
"SriovNetSupport": "simple",
"ImageId": "ami-165a0876",
"State": "available",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/xvda",
"Ebs": {
...
}
}
],
"Architecture": "x86_64",
"ImageLocation": "amazon/amzn-ami-hvm-2016.09.1.20170119-x86_64-gp2",
"RootDeviceType": "ebs",
"OwnerId": "137112412989",
"RootDeviceName": "/dev/xvda",
"CreationDate": "2017-01-20T23:39:56.000Z",
"Public": true,
"ImageType": "machine",
"Description": "Amazon Linux AMI 2016.09.1.20170119 x86_64 HVM GP2"
}
]
}
aws ec2 describe-images --owners amazon \
--filters "Name=virtualization-type,Values=hvm" "Name=root-device-type,Values=ebs" \
"Name=name,Values=amzn-ami-hvm-2016.09.1.20170119-x86_64-gp2" --query 'Images[*].ImageId'
[
"ami-165a0876"
]
Running instances really means launching (creating) them, i.e., Run in CLI = Launch in Console
aws ec2 run-instances --image-id ami-xxxxxxxx \
--count 1 --instance-type t2.micro \
--key-name MyKeyPair --security-groups my-sg
Note: You need to create a security group first (if you don't have one).
aws ec2 run-instances --image-id ami-{xxxxxxxx} \
--count 1 --instance-type t2.micro \
--key-name {MyKeyPair} \
--security-group-ids sg-{xxxxxxxx} --subnet-id subnet-{xxxxxxxx}
Note: You need to create a security group and subnet first (if you don't have them).
Create security group:
aws ec2 create-security-group \
--group-name MySecurityGroup \
--description "My security group"
Add RDP port 3389:
aws ec2 authorize-security-group-ingress \
--group-name my-sg --protocol tcp \
--port 3389 --cidr 203.0.113.0/24
Add SSH port 22:
aws ec2 authorize-security-group-ingress \
--group-name my-sg --protocol tcp \
--port 22 --cidr 203.0.113.0/24
Verify security group:
aws ec2 describe-security-groups --group-names my-sg
aws ec2 create-security-group --group-name \
open-sg --description "Open security group"
aws ec2 authorize-security-group-ingress \
--group-name open-sg --protocol all --port 0-65535 --cidr 0.0.0.0/0
aws ec2 describe-security-groups --group-names open-sg
aws ec2 create-tags --resources i-{xxxxxxxx} \
--tags Key={Name},Value={MyInstance}
Replace {xxx}, {Name} and {MyInstance}
aws ec2 describe-instances
aws ec2 stop-instances --instance-ids i-{xxxxxxxx}
aws ec2 start-instances --instance-ids i-{xxxxxxxx}
aws ec2 terminate-instances --instance-ids i-{xxxxxxxx}
Note: after stop you can start again; after terminate you cannot.
aws ec2 create-key-pair --key-name {MyKeyPair} \
--query 'KeyMaterial' --output text > {MyKeyPair}.pem
aws ec2 describe-key-pairs --key-name {MyKeyPair}
aws ec2 delete-key-pair --key-name {MyKeyPair}
{MyKeyPair} is a string name, e.g., azat-aws-dev
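One step the slides skip: ssh refuses a private key that other users can read, so restrict the .pem permissions right after creating it. A minimal sketch with an empty placeholder file standing in for the real key (no AWS call):

```shell
# MyKeyPair.pem here is a placeholder; in practice it's the file
# written by create-key-pair.
touch MyKeyPair.pem
# Owner read-only; ssh -i requires this (or stricter).
chmod 400 MyKeyPair.pem
# Show the permission bits.
ls -l MyKeyPair.pem | cut -c1-10   # -r--------
```

After that, ssh -i MyKeyPair.pem ec2-user@{public-dns} works without the "unprotected private key" error.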
- init.d, or CloudInit for Ubuntu+Debian and others like CentOS (with additional installation)
- User Data
- Command
Note: More on User Data is in the AWS Intro course
#!/bin/bash
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.32.0/install.sh | bash
. ~/.nvm/nvm.sh
nvm install 6
node -e "console.log('Running Node.js ' + process.version)"
echo "require('http').createServer((req, res) => {
res.end('hello world')
}).listen(3000, (error)=>{
console.log('server is running on 3000')
})
" >> index.js
node index.js
LAMP Stack (Apache httpd, MySQL and PHP) for Amazon Linux:
#!/bin/bash
yum update -y
yum install -y httpd24 php56 mysql55-server php56-mysqlnd
service httpd start
chkconfig httpd on
groupadd www
usermod -a -G www ec2-user
chown -R root:www /var/www
chmod 2775 /var/www
find /var/www -type d -exec chmod 2775 {} +
find /var/www -type f -exec chmod 0664 {} +
echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php
echo "<?php echo 'Hello World!' ?>" > /var/www/html/index.php
You can supply a base64-encoded string, a normal string, or a file. ami-9e247efe is the Amazon Linux AMI for us-west-1:
aws ec2 run-instances --image-id ami-9e247efe \
--count 1 --instance-type t2.micro \
--key-name MyKeyPair \
--security-groups MySecurityGroup \
--user-data file://my_script.txt
Note: User Data runs only once, on launch (run-instances). Updating User Data on an existing instance will NOT re-run the script.
More info on User Data:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
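Under the hood the CLI/SDK transmit User Data base64-encoded; you can see the round trip locally with a tiny stand-in script:

```shell
# Encode a minimal User Data script the way run-instances transmits it...
encoded=$(printf '#!/bin/bash\nyum update -y\n' | base64)
echo "$encoded"
# ...and decode it back to verify nothing was lost.
echo "$encoded" | base64 --decode
```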
Task: Install AWS CLI, configure, create an instance with apache httpd via AWS CLI and no SSH, and then make the HTML page (hello world) visible in the browser publicly
Detailed instructions and links are in labs/1-hello-world-aws-cli.md
(view on GitHub)
Time to finish: 15 min 👾
- Automate anything
- Build your own clients or interfaces for AWS
- No need to create HTTP requests and worry about payloads, formats, and headers
- Work in your favorite environment: Java, Python, Node and many more
- Amazon S3
- Amazon EC2
- DynamoDB
- Many more!
[.autoscale:true]
- Android
- Browser
- iOS
- Java
- .NET
- Node.js
- PHP
- Python
- Ruby
- Go
mkdir aws-node-sdk-test
cd aws-node-sdk-test
npm init -y
npm i -SE aws-sdk
- Home directory
- Environment variables
- JavaScript/Node or JSON file
Pick just one
~/.aws/credentials (or C:\Users\USER_NAME\.aws\credentials for Windows users)
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
(region goes into the Node file, in the new AWS.EC2() call)
AWS_ACCESS_KEY_ID=key
AWS_SECRET_ACCESS_KEY=secret
AWS_REGION=us-west-1
{
"accessKeyId": "your-access-key-here",
"secretAccessKey": "your-secret-key-here",
"region": "us-west-1"
}
describe.js
// Load the SDK for JavaScript
const AWS = require('aws-sdk')
// Load credentials and set region from JSON file
// AWS.config.loadFromPath('./config.json')
// Create EC2 service object
const ec2 = new AWS.EC2({apiVersion: '2016-11-15', region:'us-west-1'})
const params = {
// DryRun: true || false,
Filters: [
{
Name: 'endpoint',
Values: [
'ec2.us-west-1.amazonaws.com',
/* more items */
]
},
/* more items */
],
RegionNames: [
'us-west-1',
/* more items */
]
}
// Describe region
ec2.describeRegions(params, function(err, data) {
if (err) return console.log('Could not describe regions', err)
console.log(data)
const imageParams = {
Owners: ['amazon'],
Filters: [{
Name: 'virtualization-type',
Values: ['hvm']
}, {
Name: 'root-device-type',
Values: ['ebs']
}, {
Name: 'name',
Values: ['amzn-ami-hvm-2017.03.0.*-x86_64-gp2']
}]
}
ec2.describeImages(imageParams, (err, data)=>{
if (err) return console.log('Could not describe images', err)
console.log(data)
})
})
cd code/sdk
node describe.js
Create and open provision-infra.js:
// Load the SDK for JavaScript
const AWS = require('aws-sdk')
// Load credentials and set region from JSON file
AWS.config.loadFromPath('./config.json')
// Create EC2 service object
var ec2 = new AWS.EC2({apiVersion: '2016-11-15'})
//const ec2 = new AWS.EC2({apiVersion: '2016-11-15', region:'us-west-1'})
const fs = require('fs')
var params = {
ImageId: 'ami-7a85a01a', // us-west-1 Amazon Linux AMI 2017.03.0 (HVM), SSD Volume Type
InstanceType: 't2.micro',
MinCount: 1,
MaxCount: 1,
UserData: fs.readFileSync('./user-data.sh', 'base64'),
SecurityGroups: ['http-sg']
}
// Create the instance
ec2.runInstances(params, function(err, data) {
if (err) {
console.log('Could not create instance', err)
return
}
var instanceId = data.Instances[0].InstanceId
console.log('Created instance', instanceId)
// Add tags to the instance
params = {Resources: [instanceId], Tags: [
{
Key: 'Role',
Value: 'aws-course'
}
]}
ec2.createTags(params, function(err) {
console.log('Tagging instance', err ? 'failure' : 'success')
})
})
node provision-infra.js
Example: code/sdk
Run:
cd code/sdk
node provision-infra.js
Created instance i-0261a29f670faade4
Tagging instance success
Describe:
aws ec2 describe-instances --instance-ids i-0261a29f670faade4
AWS_DEFAULT_OUTPUT="table" aws ec2 describe-instances --instance-ids i-0261a29f670faade4
aws ec2 describe-instances --instance-ids i-0261a29f670faade4 \
--query 'Reservations[0].Instances[0].PublicDnsName'
node provision-infra.js ./user-data-qa.js
const userDataFile = process.argv[2]
//...
var params = {
ImageId: 'ami-7a85a01a', // us-west-1 Amazon Linux AMI 2017.03.0 (HVM), SSD Volume Type
InstanceType: 't2.micro',
MinCount: 1,
MaxCount: 1,
UserData: fs.readFileSync(userDataFile, 'base64')
}
Task: Write a Node script to create an instance with Node hello world (use User Data), and run it. Make sure you see the Hello World via public DNS.
Detailed instructions and link are in labs/02-sdk-runs-ec2.md
Time to finish: 10 min
TL;DR: Declarative - what I want and imperative - what to do.
Declarative requires that users specify the end state of the infrastructure they want, while imperative configures systems in a series of actions.
(AWS CLI and SDK are imperative.)
- Not simple, and not easy to understand the end result
- Race conditions
- Unpredictable results
^Imperative could be more flexible with dynamic if/else conditions while declarative would just break.
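To contrast with the imperative run-instances calls earlier, the same intent expressed declaratively describes only the end state; a minimal CloudFormation sketch (the AMI ID is a placeholder):

```yaml
# Declarative: state what should exist; CloudFormation figures out the actions.
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-xxxxxxxx   # placeholder AMI ID
      InstanceType: t2.micro
```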
- Special format in a JSON or YAML file
- Declarative service
- Visual web editor
- Declarative and Flexible
- Easy to Use
- Infrastructure as Code 0️⃣1️⃣0️⃣1️⃣
- Supports a Wide Range of AWS Resources
- Customized via Parameters
- Visualize and Edit with Drag-and-Drop Interface
- Integration Ready
{
"Resources" : {
"HelloBucket" : {
"Type" : "AWS::S3::Bucket"
}
}
}
Resources:
HelloBucket:
Type: AWS::S3::Bucket
- CLI
- Web console
- SDK
- REST API calls
aws cloudformation create-stack --stack-name myteststack --template-body file:///home/local/test/sampletemplate.json
It will give you a stack ID which you can use later to check on the status of creation.
- Version
- Description
- Resources
- Parameters
- Mappings
- Outputs
- A resource must have a type in the format AWS::ProductIdentifier::ResourceType (see all resource types)
- Some resources like S3 have defaults, but others like EC2 require more properties (image ID)
- You can get the real property value (ID, IP, etc.) with the Ref function
CloudFormation Example: S3 Bucket with Static Website
{
"Resources" : {
"HelloBucket" : {
"Type" : "AWS::S3::Bucket",
"Properties" : {
"AccessControl" : "PublicRead",
"WebsiteConfiguration" : {
"IndexDocument" : "index.html",
"ErrorDocument" : "error.html"
}
}
}
}
}
Let's use the Ref function. See the list of Ref functions.
{
"Resources" : {
"Ec2Instance" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
"KeyName" : "azat-aws-course",
"ImageId" : "ami-9e247efe"
}
},
"InstanceSecurityGroup" : {
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "Enable SSH access via port 22",
"SecurityGroupIngress" : [ {
"IpProtocol" : "tcp",
"FromPort" : "22",
"ToPort" : "22",
"CidrIp" : "0.0.0.0/0"
} ]
}
}
}
}
{ "Fn::GetAtt" : [ "logicalNameOfResource", "attributeName" ] }
See reference.
"Resources" : {
"Ec2Instance" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" } ],
"KeyName" : "azat-aws-course",
"ImageId" : "ami-9e247efe"
}
},
azat-aws-course must exist before running this template... Can we provide it later? Yes, it's a template!
{
"Parameters" : {
"KeyNameParam" : {
"Description" : "The EC2 Key Pair to allow SSH access to the instance",
"Type" : "AWS::EC2::KeyPair::KeyName"
}
},
"Resources" : {
"Ec2Instance" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"SecurityGroups" : [ { "Ref" : "InstanceSecurityGroup" }, "MyExistingSecurityGroup" ],
"KeyName" : { "Ref" : "KeyNameParam"},
"ImageId" : "ami-7a11e213"
}
},
"InstanceSecurityGroup" : {
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "Enable SSH access via port 22",
"SecurityGroupIngress" : [ {
"IpProtocol" : "tcp",
"FromPort" : "22",
"ToPort" : "22",
"CidrIp" : "0.0.0.0/0"
} ]
}
}
}
}
aws cloudformation create-stack --stack-name myteststack \
--template-body file:///home/local/test/sampletemplate.json \
--parameters ParameterKey=KeyNameParam,ParameterValue=azat-aws-course \
ParameterKey=SubnetIDs,ParameterValue=SubnetID1\\,SubnetID2
WordPress CloudFormation Parameters Example
"Parameters": {
"KeyNameParam": {
"Description" : "Name of an existing EC2 KeyPair to enable SSH access into the WordPress web server",
"Type": "AWS::EC2::KeyPair::KeyName"
},
"WordPressUser": {
"Default": "admin",
"NoEcho": "true",
"Description" : "The WordPress database admin account user name",
"Type": "String",
"MinLength": "1",
"MaxLength": "16",
"AllowedPattern" : "[a-zA-Z][a-zA-Z0-9]*"
},
"WebServerPort": {
"Default": "8888",
"Description" : "TCP/IP port for the WordPress web server",
"Type": "Number",
"MinValue": "1",
"MaxValue": "65535"
}
}
Resolved by CloudFormation, e.g., AWS::Region
- Fn::FindInMap
- Fn::Base64
- Conditional: Fn::And, Fn::Equals, Fn::If, Fn::Not, Fn::Or
See the full list in Intrinsic Function Reference.
Mappings are for specifying conditional values
Simple Parameters -> Mappings -> Complex Values
Example: Getting AMI ID (differs from region to region for the same image)
Mappings example: define mappings for AMI IDs based on regions:
"Mappings" : {
"RegionMap" : {
"us-east-1" : {
"AMI" : "ami-76f0061f"
},
"us-west-1" : {
"AMI" : "ami-655a0a20"
},
"eu-west-1" : {
"AMI" : "ami-7fd4e10b"
},
"ap-southeast-1" : {
"AMI" : "ami-72621c20"
},
"ap-northeast-1" : {
"AMI" : "ami-8e08a38f"
}
}
},
Find AMI ID based on region using mappings:
"Resources" : {
"Ec2Instance" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"KeyName" : { "Ref" : "KeyNameParam" },
"ImageId" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "AMI" ]},
"UserData" : { "Fn::Base64" : "80" }
}
}
}
}
- Create EC2 using CloudFormation and User Data
aws cloudformation create-stack --stack-name myteststack \
--template-body file:///home/local/test/sampletemplate.json \
--parameters ParameterKey=KeyNameParam,ParameterValue=azat-aws-course
Task: Create an ELB, security group and auto scaling environment from CloudFormation template/blueprint; load/stress test it to see auto increase
You can use blueprint from code/cloudformation/AutoScalingMultiAZWithNotifications.json or one from AWS
Detailed instructions and link are in labs/03-form-the-cloud.md
Time to finish: 20min
Any ideas? Just say them out loud.
- SSH, scp, sftp
- Git: git push origin master and then git pull origin master
- Rsync
rsync -avzhe ssh backup.tar [email protected]:/backups/
- S3, e.g., aws s3 cp s3://{mybucket}/latest/install . --region us-east-1, and then curl or wget
- Code committed to bucket, repository, folder, etc.
- Event is issued (e.g., a webhook)
- Code is deployed
Developers can implement their own solution or use one of the open-source tools... but AWS has a service... meet CodeDeploy!
- Automated Deployments
- Minimize Downtime
- Centralized Control
- Easy To Adopt
AWS CodePipeline is a CI/CD service. Its benefits:
- Rapid Delivery
- Improved Quality
- Configurable Workflow
- Get Started Fast
- Easy to Integrate
How CodePipeline, CodeDeploy and other CI/CD services can work together
![inline](images/codepipeline diagram.png)
- Create roles for the instance and service
- Create an instance with CodeDeploy agent
- Create custom CodeDeploy deployment and the app
- Create CodePipeline
- Test the app
- Verify the updates and CI
code/install-codedeploy-agent.sh:
#!/bin/bash
yum install -y aws-cli
cd /home/ec2-user/
aws s3 cp 's3://aws-codedeploy-us-east-1/latest/codedeploy-agent.noarch.rpm' . \
--region us-east-1
yum -y install codedeploy-agent.noarch.rpm
- Jenkins
- TravisCI
- Bamboo
- TeamCity
- CircleCI
- CruiseControl
OpsWorks: configuration management (stacks and layers) - narrower, app-oriented resources than CloudFormation
CloudFormation: building block service for almost everything
Elastic Beanstalk: only app management service
Task: Build CI with CodeDeploy and code from GitHub, update code, and see change in a browser
Detailed instructions and links are in labs/4-codedeploy.md
Time to finish: 20 min
- Aurora
- PostgreSQL
- MySQL
- MariaDB
- Oracle
- MS SQL Server
- NoSQL: a fully managed cloud database and supports both document and key-value store models
- Local version for development
- Has tables, not databases
- Amazon DynamoDB Accelerator (DAX) - fully managed, highly available, in-memory cache
- In-memory data store and cache in the cloud
- Redis or Memcached
- Extreme performance and security
- Fast, simple, cost-effective data warehousing
- Standard SQL
- Redshift Spectrum: run SQL on exabytes of unstructured data in Amazon S3 without ETL
- Easy and simple to get started
- Increased developer productivity
- Automatic scaling
- Allows for complete resource control
- No additional charge for AWS Elastic Beanstalk
- Python
- Ruby
- PHP
- Node.js
- Java
- .NET
- Go
- Docker
- GitHub
- zip
- Docker
- AWS CLI (EB CLI is deprecated)
- IDEs
- WAR files
Use Elastic Beanstalk to deploy a web app which uses RDS:
- http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs.html
- Getting Started with Elastic Beanstalk
- Developer Resources
AWS Web console Elastic Beanstalk Node app example "EBNodeSampleApp"
To start the EBNodeSampleApp wizard for the app in us-west-1 (N. California), click the link.
- Ship More Software Faster
- Improve Developer Productivity
- Seamlessly Move Applications
- Standardize Application Operations
^Docker users on average ship software 7X more frequently than non-Docker users. Docker enables developers to ship isolated services as often as needed by eliminating the headaches of software dependencies. Docker reduces the time spent setting up new environments or troubleshooting differences between environments. Dockerized applications can be seamlessly moved from local development machines to production deployments on AWS. Small containerized applications make it easy to deploy, identify issues, and roll back for remediation.
- EC2 (Docker image)
- ECS
- ECR
- Elastic Beanstalk Containers
- Docker EE for AWS
- Create registry
- Build and push image
- Create task
- Configure service
- Set up ELB (optional)
- Configure cluster
- Launch
^Task is a blueprint for an application (what images, how many etc.); service runs and maintains (auto-recovery) tasks in a cluster; cluster is EC2 container instances (instances with container agent)
aws ecr get-login --region us-west-1
docker build -t my-repo .
docker tag my-repo:latest 161599702702.dkr.ecr.us-west-1.amazonaws.com/my-repo:latest
docker push 161599702702.dkr.ecr.us-west-1.amazonaws.com/my-repo:latest
https://node.university/p/node-in-production
- Detailed video Node in Production with Docker and AWS on Node University
- Detailed text walk-through with Node, ECR and Docker on GitHub
- Detailed video Node in Prod webinar
- No Servers to Manage
- Continuous Scaling
- Subsecond Metering
- REST API and mobile backend
- IoT backend (Kinesis)
- Data ETL (Redshift, S3)
- Real-time file and stream processing
- Web applications (S3)
[.footer:hide]
- http://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html
- https://github.com/dwyl/learn-aws-lambda#hello-world-example-inline
GET https://h8uwddrasb.execute-api.us-west-1.amazonaws.com/prod/my-first-fn?TableName=my-first-table
POST https://h8uwddrasb.execute-api.us-west-1.amazonaws.com/prod/my-first-fn?TableName=my-first-table
{
"TableName": "my-first-table",
"Item": {
"main-part":"1",
"username":"joepineapple",
"password": "jp123"
}
}
Task: Create a microservice with AWS Lambda (public endpoint) to CRUD data to/from any DynamoDB table
Detailed instructions and links are in labs/5-serverless.md
Time to finish: 20 min