reverse engineering things that predecessors left without any documentation and throwing them out the window because devops

To make your workloads highly available on AWS, you can use the Elastic Load Balancing (ELB) service. ELB distributes incoming traffic across EC2 instances in different Availability Zones (AZs) within the same region. It scales automatically with request demand, so you don’t have to worry about provisioning extra capacity in your load balancers.


Salt grains are really just descriptions of various pieces of the operating system’s information on the minion.

As the SaltStack documentation states, they basically provide you with “grains of information”. Grains let you grab details about the minion such as its CPU, underlying hardware (provided to you by dmidecode), network interfaces, and operating system revision.

But wait, there’s more! Grains don’t just display OS information. You can also define custom grains holding any sort of information you want: environment variables, system owner info, host group information – basically anything you feel might be useful in your Salt formula logic and can be parsed in pillars or state files.

Each minion comes with pre-defined grains that are generated upon minion setup. You can assign custom grains and keep them in /etc/salt/grains or your minion config file. There are also existing auto_sync reactors that you can use to sync custom grains from a pillar (key-value store) to minions upon authentication or minion startup.

Besides using minion ID for minion targeting (which in itself isn’t a bad idea in secure environments), you can use these grains to target hosts in your salt CLI commands, states, and pillars. This empowers the SaltStack engine to create configuration templates for various environments on your network and to only affect those that you want. It’s also great if you’re shoehorning SaltStack into an existing, highly segmented environment.

You can view existing grains for a minion with the following command:
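For example (assuming a minion ID of ‘web01’ – substitute your own), run from the salt master:

```shell
# List every grain and its value on the minion 'web01'
# ('web01' is an example minion ID -- use your own)
salt 'web01' grains.items

# Or list just the grain names:
salt 'web01' grains.ls
```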

With that in mind, you can specify a grain and its value and execute an execution module against a set of minions with that matching grain.

Here we will install a package called nmap-ncat on all systems in the Red Hat OS family.
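A sketch of that command, run from the salt master:

```shell
# -G targets by grain: every minion whose 'os_family' grain is 'RedHat'
salt -G 'os_family:RedHat' pkg.install nmap-ncat
```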

With the -P flag, you can use PCRE (Perl-compatible regular expressions) on grains as well. This is described further in SaltStack’s Compound Matchers documentation (a highly recommended read in itself):
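A sketch, targeting an illustrative pair of distributions:

```shell
# -P matches grain values against a Perl-compatible regex
salt -P 'os:(CentOS|Fedora)' test.ping

# The same match written as a compound matcher (P@ = grain PCRE)
salt -C 'P@os:(CentOS|Fedora)' test.ping
```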

You can utilize grains in pillars and states using the following Jinja templating syntax. Jinja, the default templating engine in Salt, is also used by various web frameworks (such as Flask). It uses Python-like syntax but is a bit more limited in the conditional tests it can perform (unless you write your own filters).
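A minimal sketch of a state file using that syntax (the state ID and package names here are illustrative):

```jinja
# /srv/salt/webserver.sls -- pick a package name based on the 'os_family' grain
{% if grains['os_family'] == 'RedHat' %}
  {% set web_pkg = 'httpd' %}
{% else %}
  {% set web_pkg = 'apache2' %}
{% endif %}

install_webserver:
  pkg.installed:
    - name: {{ web_pkg }}
```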

Grains are parsed into salt as a dictionary of values. So in python-speak, a grain will look like this:
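For instance (the grain value here is illustrative), a single-valued grain is just one key/value pair:

```python
# Sketch: Salt hands grains to the renderer as a plain dictionary
# keyed by grain name.
grains = {'os': 'CentOS'}

print(grains['os'])  # -> CentOS
```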

Multiple grain values will be stored as a list inside of a dict. In this case, it’ll look like this:
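For example (the addresses are illustrative):

```python
# Sketch: a multi-valued grain such as 'ipv4' is a list inside the dict.
grains = {'ipv4': ['127.0.0.1', '192.168.1.10']}

print(grains['ipv4'][0])  # -> 127.0.0.1
```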

This raises a question: what if you want to execute a state against a minion whose ‘ipv4’ grain holds a list of IP addresses? Checking equality with the ‘==’ (equal) sign will not work, since the grain contains a list, and the Jinja renderer will start throwing errors. You can’t use the ‘.startswith()’ or ‘.endswith()’ string methods either.

Well, you could check if a particular string is in a grain:
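A sketch in Jinja (the address, state ID, and managed file are illustrative):

```jinja
{# True if any item of the 'ipv4' grain list equals this exact string #}
{% if '192.168.1.10' in grains['ipv4'] %}
mark_internal_host:
  file.managed:
    - name: /etc/internal-host
{% endif %}
```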

This will check the entire list in the ‘ipv4’ grain. You could also match only on the first item in the ‘ipv4’ grain by appending “[0]” to the salt[‘grains.get’]() call:
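A sketch (the compared address and state ID are illustrative):

```jinja
{# Compare only the first address in the 'ipv4' grain #}
{% if salt['grains.get']('ipv4')[0] == '127.0.0.1' %}
loopback_listed_first:
  test.succeed_without_changes
{% endif %}
```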

Or you could loop through each item in the ‘ipv4’ grain. Beware that this can cause problems, since state/pillar IDs must be unique; you may need to append a suffix to each key to keep it unique:
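A sketch of such a loop, using the loop index as the unique suffix (the state IDs and file path are illustrative):

```jinja
{# One state per address; loop.index keeps each state ID unique #}
{% for addr in grains['ipv4'] %}
record_address_{{ loop.index }}:
  file.append:
    - name: /tmp/minion-addresses
    - text: {{ addr }}
{% endfor %}
```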

Iterating through lists of items in singular Salt grains does pose a challenge and each use case may have different requirements. Due to limits in Jinja, you may have to come up with a hackish solution that is hopefully pythonic and doesn’t introduce race conditions when deploying states/pillars.

As you can see, Salt grains are fun! They’re not just boring static blobs of information. They can really bring your SaltStack environment alive and you should feel encouraged to use them wherever you can.

S3FS is a FUSE-based utility that lets you mount your AWS S3 bucket like it’s a filesystem. It handles all the abstractions of making API calls and puts/gets of files to the object store. It supports a number of features such as multipart upload, AWS S3 SSE (S3 server-side encryption using KMS and customer keys), setting the object storage class to Reduced Redundancy Storage, and parallel uploads/downloads.

S3FS is comparable to AWS Storage Gateway, but it does not require running a local AWS-provided VM for the sake of sharing S3 files over NFS in your environment. This guide goes over how to install and configure S3FS on FreeBSD.

Note: Honestly speaking, the FreeBSD port should be updated. The port is at version 1.78 (and that’s from 2014), while the current S3FS release on GitHub is 1.82, released in May 2017. The newest GitHub version isn’t compiling because of missing xattr dependencies. I haven’t had much time to look into this, but once I figure it out, I’ll post an update.

Start by installing S3FS package on FreeBSD:
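Assuming the package name in the ports tree (sysutils/fusefs-s3fs) at the time of writing:

```shell
# Install the S3FS package as root
pkg install fusefs-s3fs
```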

Create a credentials file:
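A sketch, with placeholder credentials you would replace with your own:

```shell
# s3fs reads credentials in the form ACCESS_KEY_ID:SECRET_ACCESS_KEY
# (the values below are placeholders -- substitute your real keys)
echo 'AKIAXXXXXXXXXXXXXXXX:YourSecretAccessKeyHere' > "${HOME}/.passwd-s3fs"

# s3fs refuses to use a credentials file readable by other users
chmod 600 "${HOME}/.passwd-s3fs"
```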

Load the Fuse subsystem libraries:
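On the FreeBSD releases current when this was written, the module is named fuse (newer releases renamed it to fusefs):

```shell
# Load the FUSE kernel module now...
kldload fuse

# ...and have it load at every boot
echo 'fuse_load="YES"' >> /boot/loader.conf
```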

Mount the S3FS filesystem:
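A sketch, using an example bucket name, mount point, and credentials file path:

```shell
# Create a mount point and mount the bucket 'mybucket' over FUSE
mkdir -p /mnt/s3
s3fs mybucket /mnt/s3 -o passwd_file="${HOME}/.passwd-s3fs"
```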

Unmount the S3FS filesystem by running umount:
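Assuming a mount point of /mnt/s3:

```shell
# Detach the FUSE mount
umount /mnt/s3
```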

References:
https://code.google.com/archive/p/s3fs/wikis/FuseOverAmazon.wiki
https://github.com/s3fs-fuse/s3fs-fuse