The --profile flag is used for assuming other roles if you are using API key-based authentication.
Local credentials profile file (~/.aws/credentials): you can set credential profiles within ~/.aws/credentials by using [profileA], [profileB], etc. These credential profiles can be other IAM users or assumed roles.
Note that in your ~/.aws/config, each named profile will have to start with the "profile" prefix, for example:
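A minimal sketch of the two files (the profile name, region, and key values are placeholders):

```ini
# ~/.aws/config: named profiles need the "profile" prefix in this file
[profile profileA]
region = us-east-1

# ~/.aws/credentials: no "profile" prefix here
[profileA]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = examplesecret
```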
For more information on this, please see the official AWS documentation on Named Profiles.
Amazon ECS container credentials
Instance profile credentials
In this case, an IAM instance profile (a container for a service role) is assigned to the instance and used.
Please note that IAM roles created in the AWS Console automatically have an instance profile created for the role.
Instance profiles do not need credential files when assuming roles because that information is picked up from EC2 metadata automatically.
You can run aws sts assume-role to grab temporary credentials if needed and then use those with the --profile flag (ensure that you have both ~/.aws/config and ~/.aws/credentials populated). Otherwise, the instance profile will pick those up automatically from EC2 metadata. For more information on this, please see the AWS documentation on Retrieving Security Credentials from Instance Metadata.
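As a sketch, the manual flow might look like this (the role ARN, session name, and profile name are placeholders):

```shell
# Fetch temporary credentials for a role
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/MyRole \
  --role-session-name temp-session

# Or reference a named profile that is configured to assume the role
aws s3 ls --profile profileA
```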
AWS Parameter Store is a hidden gem in the vast array of AWS services. Most engineers will never notice it unless someone tells them about it. After all, it is inconspicuously located within the Systems Manager Shared Resources section of the EC2 Console.
The AWS Auto Scaling Group service allows you to set up a logical grouping of similar EC2 instances that can be used to ensure that a certain number of instances is running at all times. This can be done for many different purposes, such as high availability, automatic scaling based on external criteria (a website getting hammered), and capacity management. Honestly, you should be using an ASG at all times, even if you have only one instance running: an ASG can help ensure that this instance is recreated in case it is terminated.
The AWS Auto Scaling service consists of two critical components: Auto Scaling Groups and Launch Configurations. An Auto Scaling Group requires a Launch Configuration to function. A Launch Configuration defines which AMI is instantiated into a running instance, and how.
Think of it as a template that a particular Auto Scaling Group uses to launch EC2 instances within the AWS environment. You can specify things such as:
One thing to note is that you need to create a Launch Configuration before an Auto Scaling Group.
Auto Scaling Groups use Launch Configurations to ensure that at least one instance (the minimum) of such a configuration is always running. You can also set a maximum number of instances that can run sharing the same configuration. The super nice thing is that Auto Scaling Groups can span multiple Availability Zones within a region, so you can protect yourself from a zone failure.
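As a rough sketch with the AWS CLI (the names, AMI ID, and Availability Zones are placeholders):

```shell
# The Launch Configuration has to exist before the Auto Scaling Group
aws autoscaling create-launch-configuration \
  --launch-configuration-name web-lc \
  --image-id ami-12345678 \
  --instance-type t2.micro

# Span two Availability Zones with a minimum of 1 and a maximum of 3 instances
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-configuration-name web-lc \
  --min-size 1 --max-size 3 \
  --availability-zones us-east-1a us-east-1b
```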
To make them even more powerful, Auto Scaling Groups can be attached to Elastic Load Balancers in order to automatically scale if instances become unhealthy or when demand rises. Instances within the Auto Scaling Group will automatically register with the load balancer and will be tracked using ELB Health Checks.
With this functionality, you can have a set of web application servers within an Auto Scaling Group fronted by an ELB. If the ELB notices an unhealthy instance, it will automatically tell ASG to terminate the sick instance and create a new one. ELB will also send health data to CloudWatch for monitoring purposes. You can track information such as standby instances, healthy instances, pending instances, and terminating instances.
Note that Elastic Load Balancer can be attached before or after the ASG is created.
In addition to utilizing ELBs, you can set scaling policies within the Auto Scaling Group. An example policy is a check on average CPU utilization: if that spikes, you can increase the number of instances. Besides average CPU utilization, you can also use the following metrics:
Application Load Balancer Request Count Per Target
Average Network Bytes In
Average Network Bytes Out
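As a sketch, a target-tracking policy on average CPU might look like this (the group and policy names are placeholders):

```shell
# Keep average CPU utilization of the group around 50%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-target-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 50.0
  }'
```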
Auto Scaling Groups also support notifications using Amazon's SNS service. This functionality is helpful for sending alerts when your instances are launching, terminating, or failing to do either.
With this you can be notified when your ASG does something. With a little help from CloudWatch Events and Lambda functions, you can also set off other mechanisms within your environment, such as automatic config provisioning and application restarts.
With a spike in recent major hacks and leaks, AWS S3 has been put in the spotlight due to organizations' failures to secure their object storage in the cloud. Just in June of this year, a big leak of US voter data was made public. This happened right after a May leak of French political campaign data. In July, Verizon leaked data for 6 million users.
All these leaks came from public S3 buckets. This is not surprising, considering that S3 security can be confusing to novice users as well as seasoned InfoSec professionals. Too many admins confuse ACLs and what they can do, and disregard IAM policies because they're "too hard". And that's with Amazon warning you when you make buckets public…
Let's also not forget that human laziness knows no bounds. Too often, secure S3 policies are relaxed so "everyone" within AWS can get to the data, without much thought given to figuring out who "everyone" is.
In addition, more often than not, AWS API keys are leaked by being checked into GitHub, Bitbucket, and other public source control services. It does not help that many of those API keys lead to users and roles with far too many powers enabled in their IAM policies. This practice has become so widespread that there are now multiple public search engines dedicated to finding and parsing leaked API keys and secrets.
This all stems from poorly understood security practices revolving around S3 and IAM. This article will help explain the three basic security controls around S3, how they can be tied into IAM wherever possible, and how to keep your cloud data secure.
The following access controls are available in S3:
ACLs can be used to grant access to buckets to other AWS accounts, but not to individual users within your own account.
ACLs grant basic read/write permissions and/or make them public.
You can only set ACLs to provide access to other AWS accounts, yourself, everyone, and for log delivery.
Both buckets and objects can have ACLs.
Bucket policies are attached to buckets and apply at the bucket level. Only buckets can have policies.
Bucket policies specify who can do what to this particular bucket or set of objects.
Bucket policies are limited to 20KB in size.
If you want to set a policy on all the objects within a bucket, you must use “bucket/*” nomenclature.
Objects do not inherit permissions from their parent bucket, so you have to go through them and set the permissions yourself, or use the "bucket/*" setting.
Bucket policies include “Principal” element which specifies who can access the bucket.
Bucket policies can use “Condition” to specify IP addresses that can access this bucket to add more security.
These are good if you have a lot of objects and buckets with different permissions.
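To illustrate, a hypothetical bucket policy that combines a Principal, the "bucket/*" nomenclature, and an IP Condition might look like this (the account ID, user, bucket name, and CIDR range are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetFromOfficeNetwork",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}}
    }
  ]
}
```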
IAM policies are attached to users, groups, or roles and specify what they can do on a particular bucket.
IAM policy limits include 2KB for users, 5KB for groups, and 10KB for roles. Compare this to S3 Bucket Policy which is limited to 20KB of data.
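For comparison, a hypothetical IAM policy attached to a user, group, or role might grant the same kind of access (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
```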
Best Practices for Keeping Data in S3 Secure
Enable MFA Delete so that two factors of authentication are required to delete an object from S3.
Remember the following parts of 2-factor authentication: Password: something you know. Token: something you have.
Enable versioning of objects. Users will be able to remove objects but an older version will be kept in S3 which can only be deleted by the owner of the bucket.
You can use Lifecycle Rules to help manage when objects get versioned. You will pay a little extra for the storage that you use but this security is worthwhile.
Remember to review your buckets and objects’ permissions regularly. Check for objects that should not be world-readable.
Make sure to go through your buckets and objects and verify their permissions. Don’t assume that all old objects are still secure.
Amazon will send you an email if your objects have wide-open permissions.
Utilize secure pre-signed URLs for letting 3rd party users to upload data to private S3 buckets.
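For downloads, the CLI can generate one directly; pre-signed upload URLs have to come from an SDK call such as boto3's generate_presigned_url (the bucket and key below are placeholders):

```shell
# Pre-signed GET URL, valid for one hour
aws s3 presign s3://my-bucket/reports/report.csv --expires-in 3600
```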
Scrub any AWS API keys and secrets from code that utilizes the AWS API. Check out git-secrets, which can help you do that right before checking in the code.
This one is not really security-related but more of a performance-tuning tip: use randomized prefixes for S3 object names. This ensures that objects are properly sharded across multiple data partitions. With this, S3 object access will not slow down, since no single partition will be hammered for data.
Remember that in S3, objects are stored in indexes across multiple partitions – just like in DynamoDB.
Scrambled object/key names can help with obscurity and obfuscation of data.
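A minimal sketch of generating such randomized prefixes, here derived from a hash of the key itself (the 4-hex-character prefix length is an arbitrary choice):

```python
import hashlib

def prefixed_key(key: str) -> str:
    """Prepend a short hash-derived prefix so keys spread across S3 partitions."""
    # First 4 hex characters of the key's MD5 digest act as the shard prefix
    prefix = hashlib.md5(key.encode()).hexdigest()[:4]
    return f"{prefix}/{key}"

print(prefixed_key("reports/2017-08-01.csv"))
```

Because the prefix is derived deterministically from the key, you can still compute an object's full name for lookups while writes spread across partitions.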
To make your workloads highly available on AWS, you can use the AWS Elastic Load Balancing service. This service allows you to balance incoming traffic between EC2 instances in different Availability Zones (AZs) within the same region. It will scale automatically based on request demand without you having to worry about provisioning extra capacity in your load balancers.
Salt grains are really just descriptions of various pieces of the operating system's information on the minion.
As SaltStack documentation states, they basically provide you with “grains of information”. This allows you to grab information about the minion such as CPU information, underlying hardware (provided to you by dmidecode), network interfaces, and operating system revisions.
But wait, there is more! They don’t just display OS information! Grains can also be set as custom grains and you can place any sort of information that you want. You can set environment variables, system owner info, host group information – basically anything that you feel that might be useful to use as part of your Salt formula logic that can be parsed in the pillars or state files.
Each minion comes with pre-defined grains that are generated upon minion setup. You can assign custom grains and keep them in /etc/salt/grains or your minion config file. There are also existing auto_sync reactors that you can use to sync custom grains from a pillar (key-value store) to minions upon authentication or minion startup.
Besides using minion ID for minion targeting (which in itself isn’t a bad idea in secure environments), you can use these grains to target hosts in your salt CLI commands, states, and pillars. This empowers the SaltStack engine to create configuration templates for various environments on your network and to only affect those that you want. It’s also great if you’re shoehorning SaltStack into an existing, highly segmented environment.
You can view existing grains for a minion with the following command:
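Presumably something like (the minion ID is a placeholder):

```shell
# All grains for minion web01
salt 'web01' grains.items

# Just a couple of specific grains
salt 'web01' grains.item os ipv4
```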
With that in mind, you can specify a grain and its value and execute an execution module against a set of minions with that matching grain.
Here we will install a package called nmap-ncat on all systems that are from the Red Hat family.
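As a sketch, the grain-targeted install looks like this:

```shell
# -G matches on a grain key:value pair
salt -G 'os_family:RedHat' pkg.install nmap-ncat
```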
With -P flag, you can use PCRE (perl-compatible regex) on grains as well. This is described further in SaltStack’s Compound Matchers documentation (a highly recommended read in itself):
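A sketch of a PCRE grain match (the pattern is an arbitrary example):

```shell
# -P (--grain-pcre) runs a PCRE match against grain values
salt -P 'os:(CentOS|RedHat)' pkg.install nmap-ncat
```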
You can utilize grains in pillars and states using the following Jinja templating syntax. Jinja is a templating engine that is default in Salt and various web frameworks (such as Flask). It uses python-like syntax but is a bit more limited in terms of conditional tests that it can perform (unless you write your own filters).
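A minimal sketch of a state that branches on a grain (the package names are just an example):

```sls
# Pick the Apache package name based on the os_family grain
{% if grains['os_family'] == 'RedHat' %}
  {% set web_pkg = 'httpd' %}
{% else %}
  {% set web_pkg = 'apache2' %}
{% endif %}

install_webserver:
  pkg.installed:
    - name: {{ web_pkg }}
```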
Now comes a question: what if you want to execute some state against a minion whose 'ipv4' grain contains a list of IP addresses? Checking equality with the '==' (equal) sign will not work, since the grain contains a list; the Jinja renderer will start throwing errors. You can't use the '.startswith()' or '.endswith()' string methods either.
Well, you could check if a particular string is in a grain:
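Something along these lines (the IP address is a placeholder, and test.nop is just a no-op stand-in for a real state):

```sls
{% if '10.0.0.5' in salt['grains.get']('ipv4') %}
minion_has_target_ip:
  test.nop: []
{% endif %}
```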
This checks through the entire list in the 'ipv4' grain. You could also match on just the first item in the 'ipv4' grain by appending "[0]" to the salt['grains.get']() call:
Or you could loop through each item in the 'ipv4' grain. Beware that this might cause problems, since state/pillar IDs must be unique; you might need to append a suffix to the key to keep it unique:
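A sketch of the loop with a uniqueness suffix (test.nop is just a no-op stand-in for a real state):

```sls
{% for ip in salt['grains.get']('ipv4') %}
# loop.index keeps each generated state ID unique
note_ip_{{ loop.index }}:
  test.nop: []
{% endfor %}
```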
Iterating through lists of items in singular Salt grains does pose a challenge and each use case may have different requirements. Due to limits in Jinja, you may have to come up with a hackish solution that is hopefully pythonic and doesn’t introduce race conditions when deploying states/pillars.
As you can see, Salt grains are fun! They’re not just boring static blobs of information. They can really bring your SaltStack environment alive and you should feel encouraged to use them wherever you can.
S3FS is a FUSE-based utility that lets you mount your AWS S3 bucket as if it were a filesystem. It handles all the abstractions of making API calls and doing puts/gets of files to the object store. It supports a number of features, such as multipart upload, AWS S3 SSE (S3 server-side encryption using KMS and customer keys), setting the object lifecycle to Reduced Redundancy Storage, and parallel uploads/downloads.
S3FS is comparable to AWS File/Storage Gateway, but it does not require running any local AWS-provided VMs for the sake of sharing S3 files over NFS in your environment. This guide goes over how to install and configure S3FS on FreeBSD.
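As a sketch of what the setup looks like on FreeBSD (the bucket name, mount point, and credential line are placeholders):

```shell
# Install the port, stash credentials, and mount the bucket
pkg install fusefs-s3fs
echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > /usr/local/etc/passwd-s3fs
chmod 600 /usr/local/etc/passwd-s3fs
s3fs my-bucket /mnt/s3 -o passwd_file=/usr/local/etc/passwd-s3fs
```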
Note: honestly speaking, the FreeBSD port should be updated. The port version is 1.78 (and that's from 2014), while the current S3FS release on GitHub is 1.82, released in May 2017. The newest GitHub version isn't compiling because of missing xattr dependencies. I haven't had much time to look into this, but once I figure it out, I'll post an update.
I noticed the below error one day when I was starting up my KVM virtual machines to play around with Docker Swarm.
error: internal error: process exited while connecting to monitor: qemu: could not load PC BIOS 'bios-256k.bin'
There wasn't much information on Google or in forums, and most of it talked about symlinks that weren't there. Most folks recommended reinstalling seabios and seabios-bin. Unfortunately, reinstalling these packages did not provide the necessary files.
Upon further inspection of the package versions, I noticed that seabios-1.8.2-1.el7 does not provide /usr/share/seabios/bios-256k.bin. Now seabios-bin-1.7.5-11.el7 does provide that file.
Make sure that you are installing seabios-1.7.5-11.el7 and seabios-bin-1.7.5-11.el7. Check that you do not have /etc/yum.repos.d/CentOS-Xen.repo enabled; these packages should be pulled from http://mirror.centos.org/centos/7/virt/x86_64.
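A sketch of the verification and downgrade steps (versions taken from above):

```shell
# Confirm which package actually ships the BIOS image
rpm -ql seabios-bin | grep bios-256k.bin

# Pin both packages to the version pair that provides the file
yum downgrade seabios-1.7.5-11.el7 seabios-bin-1.7.5-11.el7
```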