AWS Parameter Store is a hidden gem in the vast array of AWS services. Most engineers will never notice it unless someone tells them about it. After all, it is inconspicuously located within the Systems Manager Shared Resources section of the EC2 Console.
The AWS Auto Scaling Group (ASG) service lets you set up a logical grouping of similar EC2 instances that can be used to ensure a certain number of instances is running at all times. This serves many purposes, such as high availability, automatic scaling based on external criteria (your website getting hammered), and capacity management. Honestly, you should be using an ASG at all times, even if you have only one instance running: the ASG will recreate that instance if it is terminated.
The AWS Auto Scaling Groups service consists of two critical components: Auto Scaling Groups and Launch Configurations. An Auto Scaling Group requires a Launch Configuration to function. A Launch Configuration defines which AMI is instantiated into a running instance, and how.
Think of it as a template that a particular Auto Scaling Group uses to launch EC2 instances within the AWS environment. You can specify things such as the AMI, instance type, key pair, security groups, and block device mappings.
One thing to note is that you need to create a Launch Configuration before an Auto Scaling Group.
Auto Scaling Groups use Launch Configurations to ensure that at least a minimum number of instances with that configuration is always running. You can also set a maximum number of instances that share the same configuration. The super nice thing is that Auto Scaling Groups can span multiple Availability Zones within a region, so you can protect yourself from a zone failure.
To make them even more powerful, Auto Scaling Groups can be attached to Elastic Load Balancers in order to automatically scale when instances become unhealthy or when demand rises. Instances within the Auto Scaling Group will automatically register with the load balancer and will be tracked using the ELB health checks.
With this functionality, you can have a set of web application servers within an Auto Scaling Group fronted by an ELB. If the ELB notices an unhealthy instance, it will automatically tell the ASG to terminate the sick instance and create a new one. The ELB will also send health data to CloudWatch for monitoring purposes, so you can track information such as standby, healthy, pending, and terminating instances.
Note that an Elastic Load Balancer can be attached before or after the ASG is created.
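To make the two components concrete, here is a minimal sketch of the parameters you might pass to boto3's create_launch_configuration and create_auto_scaling_group calls. All names, IDs, and Availability Zones below are hypothetical placeholders.

```python
# Launch Configuration: the template describing which AMI to launch and how.
# Every name and ID here is a placeholder, not a real resource.
launch_configuration = {
    "LaunchConfigurationName": "web-lc",
    "ImageId": "ami-0123456789abcdef0",   # AMI to instantiate
    "InstanceType": "t2.micro",
    "SecurityGroups": ["sg-0123456789abcdef0"],
    "KeyName": "web-keypair",
}

# Auto Scaling Group: references the Launch Configuration by name and
# defines how many instances to keep running, and where.
auto_scaling_group = {
    "AutoScalingGroupName": "web-asg",
    "LaunchConfigurationName": launch_configuration["LaunchConfigurationName"],
    "MinSize": 1,            # keep at least one instance running at all times
    "MaxSize": 4,            # never run more than four instances
    "DesiredCapacity": 2,
    # Span multiple AZs within the region to survive a zone failure
    "AvailabilityZones": ["us-east-1a", "us-east-1b"],
    # Use ELB health checks instead of plain EC2 status checks
    "HealthCheckType": "ELB",
    "HealthCheckGracePeriod": 300,
    "LoadBalancerNames": ["web-elb"],
}
```

Note that the Launch Configuration must exist before the group references it, which mirrors the creation order described above.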
In addition to utilizing ELBs, you can set scaling policies within the Auto Scaling Group. One example policy checks average CPU utilization: if it spikes, you can increase the number of instances. Besides average CPU utilization, you can also use the following metrics:
Application Load Balancer Request Count Per Target
Average Network Bytes In
Average Network Bytes Out
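A scaling policy based on the metrics above can be sketched as the parameters you might pass to boto3's autoscaling put_scaling_policy call. The group name, policy name, and target value are assumptions for illustration.

```python
# Sketch of a target-tracking scaling policy for an ASG.
# "web-asg" and the 60% target are hypothetical placeholders.
scaling_policy = {
    "AutoScalingGroupName": "web-asg",
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        # Other predefined metrics: ASGAverageNetworkIn,
        # ASGAverageNetworkOut, ALBRequestCountPerTarget
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        # Add or remove instances to hold average CPU near 60%
        "TargetValue": 60.0,
    },
}
```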
Auto Scaling Groups also support notifications via Amazon's SNS service. This functionality is helpful for sending alerts when your instances are launched or terminated, or fail to launch or terminate.
With this, you can be notified whenever your ASG does something. With a little help from CloudWatch Events and Lambda functions, you can also trigger other mechanisms within your environment, such as automatic config provisioning and application restarts.
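The SNS wiring can be sketched as the parameters for boto3's autoscaling put_notification_configuration call. The group name and topic ARN are hypothetical placeholders.

```python
# Sketch: subscribe an SNS topic to an ASG's lifecycle events.
# The group name and topic ARN below are placeholders.
notification_config = {
    "AutoScalingGroupName": "web-asg",
    "TopicARN": "arn:aws:sns:us-east-1:123456789012:asg-events",
    "NotificationTypes": [
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_TERMINATE",
        "autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
        "autoscaling:EC2_INSTANCE_TERMINATE_ERROR",
    ],
}
```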
With the spike in recent major hacks and leaks, AWS S3 has been put in the spotlight due to organizations' failure to secure their object storage in the cloud. Just in June of this year, a big leak of US voter data was made public, right after a May leak of French political campaign data. In July, Verizon leaked data for 6 million users.
All these leaks came from public S3 buckets. This is not surprising, considering that S3 security can be confusing to novice users and seasoned InfoSec professionals alike. Too many admins confuse ACLs and what they can do, and disregard IAM policies because they're "too hard". And that's with Amazon warning you when you make buckets public…
Let's also not forget that human laziness knows no bounds. Too often, secure S3 policies are relaxed so "everyone" within AWS can get to the data, without much thought given to who "everyone" actually is.
In addition, more often than not, AWS API keys are leaked by being checked into GitHub, Bitbucket, and other public source control services. It does not help that many of those API keys belong to users and roles with far too many permissions enabled in their IAM policies.
This practice has become so widespread that there are now multiple public search engines dedicated to finding and parsing leaked API keys and secrets.
This all stems from poorly understood security practices revolving around S3 and IAM. This article will help explain the three basic security controls around S3, how they can be tied into IAM wherever possible, and how to keep your cloud data secure.
The following access controls are available in S3:
ACLs can be used to grant other AWS accounts access to buckets, but not individual IAM users within your own account.
ACLs grant basic read/write permissions and/or make buckets and objects public.
You can only use ACLs to provide access to other AWS accounts, yourself, everyone, and the log delivery group.
Both buckets and objects can have ACLs.
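The coarse grants ACLs offer are usually expressed as canned ACLs. A minimal sketch, as you might pass them to boto3's put_bucket_acl and put_object_acl; the bucket and key names are placeholders.

```python
# A few of S3's canned ACLs and what they grant.
CANNED_ACLS = {
    "private",              # owner only (the safe default)
    "public-read",          # everyone can read -- this is how data leaks
    "authenticated-read",   # any AWS account holder can read
    "log-delivery-write",   # lets S3 write access logs into the bucket
}

# Both buckets and objects can carry an ACL. Names are placeholders.
bucket_acl = {"Bucket": "app-logs", "ACL": "log-delivery-write"}
object_acl = {"Bucket": "app-logs", "Key": "2017/06/report.csv", "ACL": "private"}
```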
Bucket policies are attached to buckets and set policies on the bucket level. Only buckets can have policies.
Bucket policies specify who can do what to this particular bucket or set of objects.
Bucket policies are limited to 20KB in size.
If you want to set a policy on all the objects within a bucket, you must use “bucket/*” nomenclature.
Objects do not inherit permissions from the parent bucket, so you have to go through them and set permissions yourself, or use the "bucket/*" setting.
Bucket policies include a "Principal" element, which specifies who can access the bucket.
Bucket policies can use a "Condition" element to restrict access to specific IP addresses for added security.
These are good if you have a lot of objects and buckets with different permissions.
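The pieces above fit together like this. A minimal sketch of a bucket policy with a "Principal", a "bucket/*" resource, and an IP "Condition"; the bucket name, account ID, and CIDR range are hypothetical.

```python
import json

# Sketch of a bucket policy allowing one account to read objects,
# and only from one IP range. All identifiers are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "OfficeReadOnly",
        "Effect": "Allow",
        # Principal: who may access the bucket
        "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
        "Action": ["s3:GetObject"],
        # "bucket/*" covers every object, since objects do not
        # inherit permissions from the bucket itself
        "Resource": "arn:aws:s3:::example-bucket/*",
        # Condition: only honor requests from this IP range
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

policy_json = json.dumps(bucket_policy)
```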
IAM policies are attached to users, groups, or roles and specify what they can do on a particular bucket.
IAM policy size limits are 2KB for users, 5KB for groups, and 10KB for roles. Compare this to an S3 bucket policy, which is limited to 20KB.
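For contrast with the bucket-policy approach, here is a minimal sketch of an IAM policy granting a user read-only access to one bucket. The bucket name is a placeholder; note that listing the bucket and reading its objects need separate resource ARNs.

```python
import json

# Sketch of a read-only IAM policy for a single bucket.
# "example-bucket" is a placeholder name.
iam_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # allow listing the bucket itself
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-bucket",
        },
        {   # allow reading the objects inside it
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        },
    ],
}
```

Unlike a bucket policy, there is no "Principal" element: the principal is whatever user, group, or role the policy is attached to.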
Best Practices for Keeping Data in S3 Secure
Use MFA Delete so that two factors of authentication are required to delete an object from S3.
Remember the two factors of authentication: a password (something you know) and a token (something you have).
Enable versioning of objects. Users will still be able to remove objects, but an older version will be kept in S3, which can only be deleted by the owner of the bucket.
You can use Lifecycle Rules to help manage when object versions expire. You will pay a little extra for the storage you use, but this security is worthwhile.
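The versioning-plus-lifecycle setup can be sketched as parameters for boto3's put_bucket_versioning and put_bucket_lifecycle_configuration calls. The bucket name and the 90-day retention are assumptions for illustration.

```python
# Turn on versioning so deleted objects keep an older version around.
# "example-bucket" and the day count are placeholders.
versioning = {
    "Bucket": "example-bucket",
    "VersioningConfiguration": {"Status": "Enabled"},
}

# Lifecycle rule to cap the extra storage cost of versioning:
# keep non-current versions for 90 days, then expire them.
lifecycle = {
    "Bucket": "example-bucket",
    "LifecycleConfiguration": {
        "Rules": [{
            "ID": "expire-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "NoncurrentVersionExpiration": {"NoncurrentDays": 90},
        }],
    },
}
```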
Review your buckets' and objects' permissions regularly. Check for objects that should not be world-readable, and don't assume that all your old objects are still secure.
Amazon will send you an email if your objects have wide-open permissions.
Use secure pre-signed URLs to let third-party users upload data to private S3 buckets.
Scrub your code that calls the AWS API of any API keys and secrets. Check out git-secrets, which can help you do that right before you check in the code.
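A minimal scan in the spirit of git-secrets can be a few lines of Python: AWS long-term access key IDs follow a well-known AKIA-prefixed 20-character format. The sample key below is AWS's documented fake example, not a real credential.

```python
import re

# Long-term AWS access key IDs start with "AKIA" followed by
# 16 uppercase alphanumeric characters.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_leaked_keys(text):
    """Return any strings in text that look like AWS access key IDs."""
    return AWS_KEY_ID.findall(text)

source = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"  # oops'
print(find_leaked_keys(source))  # -> ['AKIAIOSFODNN7EXAMPLE']
```

Running a check like this in a pre-commit hook catches the key before it ever reaches GitHub, which is exactly where the public key-scraping search engines look.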
This one is not really security-related but more of a performance-tuning tip: use randomized prefixes for S3 object names.
This ensures that objects are properly sharded across multiple data partitions, so S3 object access will not slow down because one partition is being hammered for data.
Remember that in S3, objects are stored in indexes across multiple partitions – just like in DynamoDB.
Scrambled object/key names can help with obscurity and obfuscation of data.
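One common way to build such prefixes, sketched here under the assumption that a short hash of the key name is an acceptable prefix, is to derive a few hex characters from the name itself so the mapping stays deterministic. The function name and prefix length are illustrative choices.

```python
import hashlib

def randomized_key(name, prefix_len=4):
    """Prepend a short hex prefix derived from the key name, so keys
    spread across S3 index partitions instead of sharing one hot prefix."""
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{name}"

# Sequential log names would otherwise all share the "logs/" prefix:
print(randomized_key("logs/2017-08-01.gz"))
print(randomized_key("logs/2017-08-02.gz"))
```

Because the prefix is derived from the name, you can recompute the full key at read time instead of storing a lookup table.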
To make your workloads highly available on AWS, you can use the AWS Elastic Load Balancing service. This service allows you to balance incoming traffic between EC2 instances in different Availability Zones (AZs) within the same region. It scales automatically based on request demand, without you having to worry about provisioning extra capacity in your load balancers.
S3FS is a FUSE-based utility that lets you mount an AWS S3 bucket as if it were a filesystem. It handles all the abstractions of making API calls and putting/getting files to and from the object store. It supports a number of features, such as multipart upload, AWS S3 SSE (S3 server-side encryption using KMS and customer keys), setting object lifecycle to Reduced Redundancy Storage, and parallel uploads/downloads.
S3FS is comparable to the AWS File/Storage Gateway products, but it does not require running a local AWS-provided VM in order to share S3 files over NFS in your environment. This guide goes over how to install and configure S3FS on FreeBSD.
Note: Honestly speaking, the FreeBSD port should be updated. The port version is 1.78 (and that's from 2014), while the current S3FS release on GitHub is 1.82, released in May 2017. The newest GitHub version doesn't compile because of missing xattr dependencies. I haven't had much time to look into this, but once I figure it out, I'll post an update.