reverse engineering things that predecessors left without any documentation and throwing them out the window because devops

This one bugged me for a while. Whenever I would reboot my CentOS 7 server with Xen kernel, my console/framebuffer resolution would be terribly low. Since we live in 2016 and have huge monitors, there is no reason to use 800×600 for your screen resolution. Yes, it’s nice to keep your text editing to 79 columns, but these days we just do more than edit text in terminals (see tmux).

Anyway, to set a new console resolution in CentOS 7 with Xen kernel (on a dom0 of course), head over to /etc/default/grub.

If you are using the Xen kernel, your grub config will look like this:
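The config listing from the original post didn't survive, but a stock CentOS 7 Xen dom0 /etc/default/grub looks roughly like this (the serial-console parameters and kernel options shown are illustrative, not exact):

```
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_TERMINAL_OUTPUT="serial console"
GRUB_SERIAL_COMMAND="serial --speed=115200"
GRUB_CMDLINE_LINUX="console=ttyS0,115200 console=tty0 rd.lvm.lv=centos/root rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
```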

Edit the file so it looks like this:
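A sketch of the edited file, with the console directives dropped and the two GFX settings added (1280x1024 is just an example mode; pick one your monitor supports):

```
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
GRUB_GFXMODE=1280x1024
GRUB_GFXPAYLOAD_LINUX=keep
```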

In this case we added the following lines:
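Namely (the resolution value is an example):

```
GRUB_GFXMODE=1280x1024
GRUB_GFXPAYLOAD_LINUX=keep
```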

Generate the new grub config file in /boot:
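On a BIOS-booted CentOS 7 box, the command and target path are:

```shell
grub2-mkconfig -o /boot/grub2/grub.cfg
# On an EFI system the target would instead be:
# grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
```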

You should see something like this:
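The output looks something like this (kernel versions will differ on your system):

```
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-327.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-327.el7.x86_64.img
done
```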

What we’ve done is remove the serial-console handling (I don’t use a serial console on my home hypervisor, but in a datacenter? Yes please!) and add the GRUB_GFXMODE and GRUB_GFXPAYLOAD_LINUX settings. After rebooting the machine, you should see everything in a higher, crisper resolution on your monitor.

At first, Pillars were something I could not wrap my head around due to confusing Salt documentation. Eventually the docs improved and, after working with Salt extensively, pillars finally clicked. They are something any experienced Salt user should be using, as they open up more doors in your DevOps infrastructure.

Pillars are basically a secure key-value store in Salt that can be used by Salt states. You can store confidential information and options in pillars instead of keeping them in configs, and then pass this data to specifically assigned minions (which you define in the pillars/top.sls file).

There are two main ways of getting a value from a pillar in your config file:
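The two example listings were lost in migration; in a templated config file they would look something like this ('mykey' and the default value are placeholders):

```jinja
{# Form 1: pillar.get with a fallback value #}
{{ salt['pillar.get']('mykey', 'default value') }}

{# Form 2: direct dictionary access -- errors out if the key is missing #}
{{ pillar['mykey'] }}
```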

The ‘default value’ in the first example can be substituted with whatever default you want filled in when the pillar key is not found. In the second example, Salt will throw a Pillar SLS rendering error if the pillar is not found. When you can supply sensible defaults, using the first form is clearly better.

To assign pillars to specific minions, you can do the following:
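The top file itself was lost; one matching the description below would look roughly like this (grain names and values are placeholders):

```yaml
# /srv/pillar/top.sls
base:
  '*':
    - grains
  'P@os:(RedHat|CentOS)':
    - match: compound
    - nginx
    - java
  'somegrain:specialgrain_match':
    - match: grain
    - special_state
```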

In the above example, we are assigning the “grains” pillar to all hosts (via “*”). All hosts matching a grain PCRE (P@) compound matcher will get the nginx and java pillars. Finally, any host with a grain matching “specialgrain_match” will get the “special_state” pillar.

Recently I encountered a packaging issue with the Bacula server package on FreeBSD 10. The package from the pkg system was compiled against PostgreSQL while I still use mysqld (I know, I know, I should migrate to pgsql or at least MariaDB). Of course I could recompile it with the necessary flags using the ports system, but out of curiosity I decided to see if there is a way to keep pkg from updating the package for the time being. I don’t recommend skipping patches on your system, but in some cases it is necessary to stop a package from breaking your ‘pkg upgrade’ command or to freeze it at a certain version.

Turns out there is a ‘lock’ feature in pkg that lets you lock a package in a certain state to stop pkg from modifying or updating it. To lock a package, run:
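For example (the package name is illustrative; use whatever ‘pkg info’ reports on your system):

```shell
pkg lock bacula-server
# List currently locked packages to confirm
pkg lock -l
```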

To unlock a package, run the following:
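Using the same example package:

```shell
pkg unlock bacula-server
```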

That’s all there is to it. When a package is locked, you can safely upgrade your system except for that one package.

Official Salt docs on installing Salt on Solaris are quite out of date, so this article illustrates how to install the SaltStack minion package on Solaris 10 (x86 and SPARC) hosts. Just like the official docs, we will be using OpenCSW, but Salt itself will be installed using pip. We will need to install the following application stacks and their dependencies:

  • pkgutil
  • python2.7
  • py_pyzmq
  • py_m2crypto
  • py_crypto
  • py_pip
  • salt

Start off by installing the OpenCSW’s pkgutil application:
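OpenCSW’s documented bootstrap is a single pkgadd straight from their server (run as root):

```shell
pkgadd -d http://get.opencsw.org/now
```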

Continue reading ›

SaltStack has a pretty cool feature where you can manage your systems’ crontabs from a central location. You can keep them in state SLS files and look them up via the salt cron execution module. With a little bit of extra work, you can also keep the crons in pillars and feed them to your systems (that’s something I will certainly have to experiment with later on). Even with the base functionality, this is a pretty cool feature that should help you truly automate your environment to the point where you will rarely have to log into a server to perform management duties.

Let’s set up a basic crontab in Salt and distribute it to one of our minions just to get a basic idea how this all works.

Create a directory for your cron and an empty cron SLS file:
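Assuming the default file_roots of /srv/salt (adjust the paths to your layout):

```shell
mkdir -p /srv/salt/cron
touch /srv/salt/cron/testcronjob.sls
```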

Add the following to your testcronjob.sls file:
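The SLS listing was lost; a minimal cron.present state matching the description below would be (the script path and schedule are from the post; the state ID is arbitrary):

```yaml
run_randomscript:
  cron.present:
    - name: bash /opt/scripts/randomscript.sh
    - user: root
    - minute: '*/2'
```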

This will execute “bash /opt/scripts/randomscript.sh” command every 2 minutes. You can also specify standard UNIX timing values via:
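The cron.present state accepts the five standard fields as separate arguments, e.g.:

```yaml
    - minute: '*/2'
    - hour: '3'
    - daymonth: '1'
    - month: '*'
    - dayweek: '0'
```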

By default, if you don’t specify a timing value, Salt will use * for that field in the schedule.

You can implement the cron via:
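For example, targeting a single minion (the minion name is a placeholder):

```shell
salt 'myminion' state.sls cron.testcronjob
```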

And you can view it by:
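Either with the cron execution module or with plain crontab on the minion itself:

```shell
salt 'myminion' cron.raw_cron root
# or, directly on the minion:
crontab -u root -l
```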

You can find more information about managing crontabs in Salt below:

A multi-master SaltStack setup is quite easy to build out. There is no need for VIPs or DNS CNAMEs (though they can be implemented), as all of the functionality is handled by Salt. This greatly simplifies everything, and you don’t have to rely on external tools.

To have working masters, you need to keep a couple of directories in sync. You may use a clustered filesystem or rsync to do that; in this example we will use rsync, which is more than enough. With some extra ingenuity, you can even make this sync happen automatically.
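As a sketch of what that sync looks like (the critical piece is the master key pair in /etc/salt/pki/master, which must be identical on all masters; ‘master2’ is a placeholder hostname):

```shell
rsync -av /etc/salt/pki/master/ master2:/etc/salt/pki/master/
rsync -av /srv/salt/   master2:/srv/salt/
rsync -av /srv/pillar/ master2:/srv/pillar/
```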

This howto describes how to do this on CentOS, but the setup should be the same on any other OS (such as FreeBSD).

Continue reading ›

To set up remote logging to a central syslog server, you need to add the following line to your syslog daemon’s configuration (e.g. /etc/rsyslog.conf):
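For example (the hostname is a placeholder):

```
*.* @loghost.example.com:514
```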

This will set up remote logging using UDP.  Note the single @ sign.  To set up TCP, use double @@ signs:
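The same destination over TCP:

```
*.* @@loghost.example.com:514
```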

*.* stands for facility.severity. The asterisks match all facilities and severities, so every entry will be sent to the remote server. The :514 port portion is optional; syslog uses port 514 by default, but you may change that on the server. If you’re not seeing any messages delivered to the central log host, verify that the ports are open on your firewalls and check whether any packets are reaching the syslog host using tcpdump:
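For example:

```shell
tcpdump -n port 514
# narrow to one transport with 'udp port 514' or 'tcp port 514'
```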

Over TCP, the syslog service opens an initial connection and keeps the session up while it sends packets with log entries. Since TCP is connection-oriented, unlike UDP, every packet is acknowledged. UDP also lacks congestion control (useful when a syslog client spams a ton of messages), may deliver corrupted messages if there are issues on the line, and may deliver messages out of sequence. Some of these cases are rare, but they can still happen. It’s recommended to use TCP whenever possible. If you know that your network equipment is reliable (i.e. no broadcast storms, etc.) and you need every ounce of CPU processing power out of your systems, then UDP should be good enough.

This howto describes how to relay mail (such as system alerts) to email services such as Gmail. The first part covers sSMTP, which only supports relaying local system mail, and the second part shows how to do the same with Postfix, which is a fully featured MTA. Postfix might be overkill in most cases, but hey, it might have features that you find useful!

This howto is tailored to FreeBSD systems but the main configuration will work on other operating systems.

Continue reading ›

Finally restored the website on a new server.  It took me a while to retrieve and restore the hard drive but it’s finally done.  Whoopie!

I’m writing this as a set of notes for future Arch Linux installations. I decided to revisit Arch Linux after hearing that their ncurses menu-based installer was long gone and how they have started using install scripts. I came upon this while reading the systemd vs SysV initialization method debate that everyone is raging about. I haven’t installed Arch in years as my current machines still happily run on the old installs which are up to date since Arch is a rolling release distribution. I figured now would be a good time to check it out (along with a growing interest in tiling window managers such as i3 or ratpoison).

Anyhoo, I installed Arch Linux from the latest ISO using the new method, with the help of numerous Arch wiki pages. I mixed the install a little bit with my own way of setting up Linux computers and added LVM. One thing I must say: installing Arch Linux has become a bit harder than the old menu-based installation. I’m not saying it’s not doable, but it’s definitely more complex, or at least more daunting for a newbie, than just following some menus. It does help to already know how Linux works, but this installation method should teach you a bit if you don’t.
Continue reading ›