The Life and Times of Jeffrey Forman

Feb 27, 2016

I wrote my own network latency monitoring agent in Go

For a while I had used Smokeping to generate pretty graphs of network latency between various hosts on my network. The downside with Smokeping was always getting it working. Did I configure my webserver just right? Did I remember to save the webserver configs so that the next time I set this up, things just worked? Did I install all the right Perl modules (and the right versions of each) so that Smokeping's binary worked? Then there were the differences in operation depending on whether I ran it on Linux, OpenBSD, or FreeBSD. There had to be a simpler solution.

I've been dabbling in Go and Graphite as side projects at home for a while. Go was a language I'd been wanting to use more given its popularity where I work. Graphite was always this itch I scratched whenever I wanted to visualize machine and network statistics for the various machines on my network. I knew I could come up with a simple solution using these two pieces of tech.

I wanted to start small. Smokeping provides graphs of minimum, maximum, average, and standard deviation for round-trip times, as well as packet loss. These are all statistics provided by the ping command line tool. Why couldn't I just wrap ping in a Go binary and send those data points off to Carbon for graphing in Graphite?
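To make the idea concrete, here is a rough sketch of shelling out to ping and pulling the summary statistics out of its output. This is not the actual parallelping code, just an illustration; the host name and packet count are placeholders.

package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

// rttRE matches ping's summary line: "rtt min/avg/max/mdev = ..." on Linux,
// "round-trip min/avg/max/std-dev = ..." on the BSDs.
var rttRE = regexp.MustCompile(`(?:rtt|round-trip) min/avg/max/(?:mdev|stddev|std-dev) = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+)`)

// pingStats runs ping and returns min, avg, max, and deviation in milliseconds.
func pingStats(host string) ([]string, error) {
	out, err := exec.Command("ping", "-c", "5", host).CombinedOutput()
	if err != nil {
		return nil, err
	}
	m := rttRE.FindStringSubmatch(string(out))
	if m == nil {
		return nil, fmt.Errorf("no rtt summary found for %s", host)
	}
	return m[1:], nil
}

func main() {
	stats, err := pingStats("example.com")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("min/avg/max/dev (ms):", stats)
}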

I present the resultant Go binary and library.

parallelping is a Go binary used to ping remote hosts, in parallel. If provided with a Carbon host and port, the data is shipped off to Carbon/Graphite.

carbon-golang is a Go library used to take Carbon metrics and send them off to a Carbon Cache over TCP. I admit I borrowed a lot of the logic from marpaia/graphite-golang, partly because I couldn't quite get that library to integrate as documented, and partly because I wanted the learning experience of building my own Go-based TCP client.
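Under the hood there is not much to it: Carbon's plaintext protocol is a single line of "metric.path value unix-timestamp" written to a TCP socket, port 2003 by default. A minimal standalone sketch of that idea (not the actual carbon-golang API; the Graphite hostname and metric path are placeholders):

package main

import (
	"fmt"
	"net"
	"time"
)

// sendMetric writes one datapoint using Carbon's plaintext protocol:
// "<metric.path> <value> <unix-timestamp>\n".
func sendMetric(addr, path string, value float64) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return err
	}
	defer conn.Close()
	_, err = fmt.Fprintf(conn, "%s %f %d\n", path, value, time.Now().Unix())
	return err
}

func main() {
	// 2003 is carbon-cache's default plaintext listener port.
	if err := sendMetric("graphite.example.com:2003", "ping.example.rtt.avg", 12.3); err != nil {
		fmt.Println("send failed:", err)
	}
}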

Both of these are my first non-trivial pieces of Go code. The more time I spent with Go, the lower its barrier to entry felt compared to what I had anticipated (I've been mainly a Python person for many years). Further usage documentation for each bit of code can be found on their respective GitHub project pages, eventually.

A screenshot so far:

Screenshot of Graphite graph showing Ping data

Enjoy!


Feb 22, 2016

A brand new blog for 2016

A new year gave me an itch to scratch. For years I had been running a pretty standard setup when it came to blogging.

It was as vanilla a setup as one can get, running on a $10/month Linode instance out of their datacenter in Atlanta. I never used the VM much other than for keeping what was an almost-completely static blog. I never had any issues with it. I just wanted to try something new.

The new setup:

I save $5/month and run what I consider a more secure, simpler alternative. We'll see how this goes.


May 09, 2015

From 0 to an OpenBSD install, with no hands and a custom disk layout

No one likes doing repetitive OS installs. You know the kind, where you are just clicking through a bunch of prompts for username, password, and partitioning scheme as fast as you can, to quickly get to the point where you can get some work done. This scenario happens to me every time OpenBSD releases new errata. As my OS of choice for firewalls/routers, I use a fresh OS install as the baseline for building a -stable branch of install set files.

While OpenBSD had automated away most of those manual-installation tasks with autoinstall(8), as of a week ago you still could not customize your disk layout. But thanks to commits by OpenBSD developers henning@ and rpe@, you can now specify your own disk layout programmatically to be used during an automated install.

While building a new set of install files is not part of this post, continue reading to see how I got one step closer by completely automating the base OS install with my custom disk layout.

Using the below source presentation slide decks and Undeadly writeups, along with copious man page reading, my baseline infrastructure and configuration for a completely automated OpenBSD install is as follows:

A DHCP server on the local LAN configured to provide both a filename of 'auto_install' and a 'next-server' parameter. These two parameters tell the PXE-booting host where to grab the code for the next step of the install.

host openbsd-pxeboottest {
   hardware ethernet 52:54:aa:bb:cc:dd;
   filename "auto_install";
   next-server 10.10.0.1;
}

Next was a tftp server prepared to handle the request for auto_install:

$ ps auxww -U _tftpd
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
_tftpd 16244 0.0 0.1 780 1172 ?? Is Wed06AM 0:10.14 /usr/sbin/tftpd -4 -l 10.10.0.1 /tftp

$ ls -al /tftp/ 
total 14312
drwxrwxr-x 2 root wheel 512 May 8 19:46 .
drwxr-xr-x 17 root wheel 512 May 6 06:24 ..
lrwxr-xr-x 1 root wheel 7 May 6 06:25 auto_install -> pxeboot
-rw-r--r-- 1 root wheel 7612185 May 8 21:37 bsd
-rw-r--r-- 1 root wheel 80996 May 8 21:37 pxeboot

The pxeboot and bsd files are the same ones from the install set.

Last but not least is install.conf, the file which contains the answers to the various questions OpenBSD presents during an install. This file must be in the root directory of an httpd server running on the host configured above as 'next-server', in my case http://10.10.0.1/install.conf.

Aside from all the normal answers for installation, the new prompt for auto-configuring disk layout is:

URL to autopartitioning template = http://10.10.0.1/autodisklabel
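The rest of install.conf follows the same 'question = answer' pattern, where the question text only needs to be a unique prefix of the installer's prompt. A stripped-down, purely illustrative example (placeholder answers, not my actual file) looks something like:

System hostname = pxeboottest
Password for root = ********
Location of sets = http
HTTP Server = 10.10.0.1
URL to autopartitioning template = http://10.10.0.1/autodisklabel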

Autodisklabel config:

/    100M-* 75%
swap 10M-*  25%

This template states: with minimums of 100MB for / and 10MB for swap, configure the disk layout to give 75% of the available space to / and 25% to swap.
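The template is not limited to two lines; each entry is a mount point, a minimum (and optional maximum) size, and a percentage of the remaining space. A larger, purely illustrative layout could look like:

/     150M-2G  10%
swap  80M-256M 10%
/usr  1G-4G    30%
/home 1G-*     50%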

The install should continue as expected and reboot at the end. Upon logging in, I verified the disk layout was what I wanted.

# disklabel -pm wd0
# /dev/rwd0c:...
#  size    offset   fstype [fsize bsize cpg]
a: 6159.5M 64       4.2BSD 2048 16384 1 # /
b: 2029.8M 12614720 swap # none
c: 8192.0M 0        unused

Based on the 8GB virtual disk file used for testing, ~6000MB for / and ~2000MB for swap fits the bill.

Kudos to all the developers involved with this new functionality. I look forward to using it increasingly in the future.

Sources:


Apr 26, 2015

All the bits, from anywhere.

Problem Statement: While OpenVPN has served me well over the past few years both for site-to-site and road-warrior style VPN connections, it always bugged me that I had to hack a config file, juggle certificates, and use a custom client that isn't part of the base OS to bring up the links. My Android phone has a built-in L2TP/IPSec VPN client. My MacBook Pro (OS X 10.9) laptop has both an IPSec and an L2TP VPN client GUI wrapped around racoon. I run OpenBSD as my firewall/router gateway at home. There must be a solution here.

Goal: To allow all remote clients (both site-to-site and road-warrior) to connect and route all their traffic securely over the Internet through my OpenBSD machine at home.

Some of the hurdles I dealt with, and corners I knew I was cutting, to get the below solution working:

  • The devices I carry with me most of the time (Nexus 5 Android phone, OS X laptop) only support IKEv1, not IKEv2. Therefore I could not use iked on OpenBSD; I had to use isakmpd.
  • I know that using client certificates is the more secure way to go when authenticating IPSec traffic, but I used a pre-shared key in this example for expediency and simplicity. I plan to migrate to certificates once I get my head wrapped around easily managing them.

On to the configuration. First, the PPP server on the OpenBSD machine. A simple configuration using npppd, handing out IPs on 10.40.0.0/24, using statically configured usernames and passwords.

npppd.conf(5) configuration:

set user-max-session 2

authentication LOCAL type local {
    users-file "/etc/npppd/npppd-users"
}

tunnel L2TP protocol l2tp {
    listen on $external_ipv4_ip
    l2tp-hostname $external_dns_A_record
    idle-timeout 3600 # 1 hour
}

ipcp IPCP {
    pool-address "10.40.0.0/24"
    dns-servers 10.40.0.1
}

interface tun1 address $vpn_endpoint_IP ipcp IPCP
bind tunnel from L2TP authenticated by LOCAL to tun1
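The users-file referenced above ('/etc/npppd/npppd-users') holds the statically configured accounts. If I recall the npppd-users(5) format correctly, it is a termcap-style database along these lines (username and password are placeholders; check the man page for the exact syntax):

roadwarrior:\
	:password=changeme: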

Next was isakmpd, the daemon responsible for handling security associations (SA) and handling encrypted and authenticated network traffic.

ipsec.conf(5) configuration, which ipsecctl loads into isakmpd:

ike passive esp transport \
    proto udp from $(external_IPv4_IP) to any port 1701 \
    main auth hmac-sha1 enc aes group modp1024 \
    quick auth hmac-sha2-256 enc aes group modp1024 \
    psk dce930cbf010a35f336e640de0b7ff8e94b6b2a512d0ec41268e8e20a154fooo

For my PSK (pre-shared key), I used OpenSSL to generate this random string. Don't worry, the following is not my PSK, and you should not copy this verbatim.

$ openssl rand -hex 32
dce930cbf010a35f336e640de0b7ff8e94b6b2a512d0ec41268e8e20a1546044

The PF rules below allow both the authenticated and the encrypted communication. I tried to be as specific as I could with every rule; 'from any to any' and the like were avoided at all costs. The last line is not specifically IPSec related, but I will explain it afterwards.

ext_if = "YOUR_EXTERNAL_INTERFACE_TO_THE_INTERNET"
ipsec_if = "enc0"
ipsec_tun = "tun1"
table <ipsec_net> { 10.40.0.0/24 }

pass in proto { esp } from any to ($ext_if)
pass in on $ext_if proto udp from any to any port {500, 1701, 4500} keep state
pass in on $ipsec_if from any to ($ext_if) keep state (if-bound)
pass in on $ipsec_tun from <ipsec_net> to any keep state (if-bound)
pass out on $ext_if inet from <ipsec_net> to any nat-to ($ext_if)

Ports:

  • 500: isakmpd key management
  • 1701: L2TP (used by npppd)
  • 4500: IPSec NAT-Traversal (used by isakmpd)

The last rule allows remote road-warrior VPN clients to use NAT and route their traffic out through my OpenBSD machine. The reason '$ipsec_tun:network' is not a viable macro to use in the NAT rule is that the interface created by npppd is not configured with a subnet attached to it. Try as I might, even with configuring /etc/hostname.tun1, when npppd comes up the interface is configured as pasted below. The only solution I found here was specifying the network itself as either a table or a variable macro.

$ ifconfig tun1
tun1: flags=20043<UP,BROADCAST,RUNNING,NOINET6> mtu 1500
    priority: 0
    groups: tun
    status: active
    inet 10.40.0.1 netmask 0xffffffff

Relevant rc.conf.local snippets

# IPSec Rules
ipsec=yes
ipsec_rules=/etc/ipsec.conf
isakmpd_flags="-K"
npppd_flags="-f /etc/npppd/npppd.conf"
npppd_flags=""

At this point, start npppd and isakmpd via their rc.d scripts. Given the '-K' flag for isakmpd, it is absolutely critical that you load the IPSec rules manually with ipsecctl each time you restart isakmpd. This bit me many times when testing connections: isakmpd complains about encryption mismatches between what the client sent and what the server expected.

# ipsecctl -f /etc/ipsec.conf

The above ipsecctl command is the magic incantation you must run manually every time you restart isakmpd (the flags in rc.conf.local above ensure it is run at boot).
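In practice, the restart sequence looks roughly like this, assuming the stock rc.d scripts:

# /etc/rc.d/isakmpd restart
# ipsecctl -f /etc/ipsec.conf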

At this point, I configured my client to use the above username, password, and pre-shared key to connect. I now had a working road-warrior style L2TP/IPSec VPN connection that I could use to access both my internal infrastructure and route traffic out through my Internet connection at home as if I was a client sitting on the internal network.

And now, some log snippets to show what it looks like when an active PPP/L2TP IPSec connection is made:

# npppctl session all
Ppp Id = 3
Ppp Id : 3
Username : jforman
Realm Name : LOCAL
Concentrated Interface : tun1
Assigned IPv4 Address : 10.40.0.56
Tunnel Protocol : L2TP
Tunnel From : $REMOTE_IP
Start Time : 2015/04/25 15:05:40
Elapsed Time : 23 sec
Input Bytes : 9029 (8.8 KB)
Input Packets : 86
Input Errors : 2 (2.3%)
Output Bytes : 345
Output Packets : 14
Output Errors : 0 (0.0%)

Npppd logs:

Apr 25 15:05:38 VPNCONCENTRATOR npppd[9306]: l2tpd ctrl=6 logtype=Started RecvSCCRQ from=$(PUBLIC SOURCE IP OF VPN CLIENT):63771/udp tunnel_id=6/7 protocol=1.0 winsize=4 hostname=roadwarrior.theinter.net vendor=(no vendorname) firm=0000
Apr 25 15:05:38 VPNCONCENTRATOR npppd[9306]: l2tpd ctrl=6 SendSCCRP
Apr 25 15:05:38 VPNCONCENTRATOR npppd[9306]: l2tpd ctrl=6 logtype=Started RecvSCCRQ from=$(PUBLIC SOURCE IP OF VPN CLIENT):63771/udp tunnel_id=6/7 protocol=1.0 winsize=4 hostname=roadwarrior.theinter.net vendor=(no vendorname) firm=0000
Apr 25 15:05:38 VPNCONCENTRATOR npppd[9306]: l2tpd ctrl=6 RecvSCCN
Apr 25 15:05:38 VPNCONCENTRATOR npppd[9306]: l2tpd ctrl=6 SendZLB
Apr 25 15:05:39 VPNCONCENTRATOR npppd[9306]: l2tpd ctrl=6 call=18525 RecvICRQ session_id=16020
Apr 25 15:05:39 VPNCONCENTRATOR npppd[9306]: l2tpd ctrl=6 call=18525 SendICRP session_id=18525
Apr 25 15:05:39 VPNCONCENTRATOR npppd[9306]: l2tpd ctrl=6 call=18525 RecvICCN session_id=16020 calling_number= tx_conn_speed=1000000 framing=async
Apr 25 15:05:39 VPNCONCENTRATOR npppd[9306]: l2tpd ctrl=6 call=18525 logtype=PPPBind ppp=3
Apr 25 15:05:39 VPNCONCENTRATOR npppd[9306]: ppp id=3 layer=base logtype=Started tunnel=L2TP($(PUBLIC SOURCE IP OF VPN CLIENT):63771)
Apr 25 15:05:39 VPNCONCENTRATOR npppd[9306]: l2tpd ctrl=6 call=18525 SendZLB
Apr 25 15:05:39 VPNCONCENTRATOR npppd[9306]: l2tpd ctrl=6 call=18525 logtype=PPPBind ppp=3
Apr 25 15:05:42 VPNCONCENTRATOR npppd[9306]: ppp id=3 layer=lcp logtype=Opened mru=1360/1360 auth=MS-CHAP-V2 magic=145d130e/4efdd7cc
Apr 25 15:05:42 VPNCONCENTRATOR npppd[9306]: ppp id=3 layer=chap proto=mschap_v2 logtype=Success username="$USERNAME" realm=LOCAL
Apr 25 15:05:43 VPNCONCENTRATOR npppd[9306]: ppp id=3 layer=ccp CCP is stopped
Apr 25 15:05:45 VPNCONCENTRATOR npppd[9306]: ppp id=3 layer=ipcp logtype=Opened ip=10.40.0.56 assignType=dynamic
Apr 25 15:05:45 VPNCONCENTRATOR npppd[9306]: ppp id=3 layer=base logtype=TUNNELSTART user="$USERNAME" duration=6sec layer2=L2TP layer2from=$(PUBLIC SOURCE IP OF VPN CLIENT):63771 auth=MS-CHAP-V2 ip=10.40.0.56 iface=tun1
Apr 25 15:05:45 VPNCONCENTRATOR npppd[9306]: ppp id=3 layer=base Using pipex=yes
Apr 25 15:05:45 VPNCONCENTRATOR npppd[9306]: ppp id=3 layer=base logtype=TUNNELSTART user="$USERNAME" duration=6sec layer2=L2TP layer2from=$(PUBLIC SOURCE IP OF VPN CLIENT):63771 auth=MS-CHAP-V2 ip=10.40.0.56 iface=tun1
Apr 25 15:05:45 VPNCONCENTRATOR /bsd: pipex: ppp=3 iface=tun1 protocol=L2TP id=18525 PIPEX is ready.
Apr 25 15:05:45 VPNCONCENTRATOR npppd[9306]: ppp id=3 layer=base Using pipex=yes

ipsecctl output showing flows and security associations:

# ipsecctl -s all

Flows:

flow esp in proto udp from $(PUBLIC SOURCE IP OF VPN CLIENT) port 63771 to $(PUBLIC IPV4 VPN TERMINATOR IP) port l2tp peer $(PUBLIC SOURCE IP OF VPN CLIENT) srcid $(PUBLIC IPV4 VPN TERMINATOR IP)/32 dstid $(RFC1918 IP of VPN CLIENT)/32 type use
flow esp out proto udp from $(PUBLIC IPV4 VPN TERMINATOR IP) port l2tp to $(PUBLIC SOURCE IP OF VPN CLIENT) port 63771 peer $(PUBLIC SOURCE IP OF VPN CLIENT) srcid $(PUBLIC IPV4 VPN TERMINATOR IP)/32 dstid $(RFC1918 IP of VPN CLIENT)/32 type require
flow esp out from ::/0 to ::/0 type deny

SAD:
esp transport from $(PUBLIC IPV4 VPN TERMINATOR IP) to $(PUBLIC SOURCE IP OF VPN CLIENT) spi 0x083cd308 auth hmac-sha1 enc aes-256
esp transport from $(PUBLIC SOURCE IP OF VPN CLIENT) to $(PUBLIC IPV4 VPN TERMINATOR IP) spi 0x5f9cd5a0 auth hmac-sha1 enc aes-256

Sources:

http://www.slideshare.net/GiovanniBechis/npppd-easy-vpn-with-openbsd

Many many OpenBSD man pages: isakmpd(8), iked(8), ipsec.conf(5), npppd(8), npppd.conf(5)


Feb 19, 2015

Family Tech Support: Vacation Edition

This was an epic visit home, tech-wise. Just so I don't forget, and can hold it over my folks' heads for a while:

  • Upgraded two five-year-old Linksys E2000 APs to Netgear R6250s. The old ones were just not reaching the entire length of the house anymore.
  • Upgraded the firewall/router from OpenBSD 5.5-stable to OpenBSD 5.6-stable. It just so happens I'm home every six months to stay relatively close to the most-recent errata.
  • Converted my father's Gmail account from one-factor to two-factor authentication, thanks to some nasty spyware/adware and potential identity-theft issues he's had recently. I wasn't willing to do this conversion remotely given the horror of application-specific passwords and how many devices I would have to do it on (desktops, laptops, one iPhone, and one iPad).
  • Reinstalled one Late 2009 21.5" iMac via Internet Recovery to OS X 10.10 due to the aforementioned nasty adware infestation.
  • Upgraded that same iMac from 4GB to 16GB of RAM.

All I can say is that it's nice having all Macs in the house now, after finally kicking out the last Windows-based PC on my last visit.


Nov 16, 2014

Third time's a charm? Gitolite, Git, Nagios, and a bunch of hooks

I was hoping that with my past posts on this topic, I would have enough examples to just copy and paste my way through configuring my Gitolite+Nagios monitoring setup. Not so. It looked like there were semicolons missing in my past examples. After looking at the huge number of changes in Gitolite, I had to re-do everything. Not to mention I always wanted a better way to manage the hooks, as opposed to editing them directly on the host. In short, my goal is still simple: be able to manage and verify Nagios configuration remotely via Git. Below is how I did it. For the third time.

First, install Gitolite. I run Gitolite under the 'git' user on my Ubuntu-based VM, from now on called monitor1. I clone the Gitolite source under /home/git/gitolite.

In /home/git/.gitolite.rc, in the %RC block, uncomment:

LOCAL_CODE => "$rc{GL_ADMIN_BASE}/local",

This option tells Gitolite we have some local configuration in our gitolite-admin repository under the 'local' directory. More on this later.

In the ENABLE list, uncomment:

'repo-specific-hooks',

This option tells Gitolite we want to be able to use repo-specific hooks, as opposed to having one set of hooks for all repositories.

Since several of our yet-to-be-defined hooks need elevated permissions, I have configured a sudoers file to allow this.

%nagios-admin ALL=(ALL) NOPASSWD: /usr/sbin/nagios3 -v /tmp/nagiostest-*/nagios.cfg
%nagios-admin ALL=(ALL) NOPASSWD: /usr/sbin/service nagios3 restart

The 'nagios' user is added to the nagios-admin group, along with the 'git' user. This allows Gitolite's hooks to test, update, and restart the Nagios installation.
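For reference, creating that group membership on an Ubuntu host is something along these lines (assuming the nagios-admin group does not already exist):

sudo groupadd nagios-admin
sudo usermod -a -G nagios-admin git
sudo usermod -a -G nagios-admin nagios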

This concludes all the work on monitor1 as it relates to Gitolite.

On your local workstation, clone the gitolite-admin repo. I've chosen to name the repo containing my Nagios configuration 'nagios'. At this point, it is probably safe to copy a known-working set of your Nagios configuration files into the nagios repository itself. The steps following here, if done in one fell swoop, could completely blow away your /etc/nagios3 directory on monitor1 if you are not careful.

One modification necessary for nagios.cfg itself is to change the references to the path of the configuration files. By default, nagios.cfg lists an absolute path to the files, e.g. /etc/nagios3/conf.d/. In our case, we will be checking out the configuration files to a temporary directory while we run our pre-flight checks, and need to use a relative path instead to make this possible.

Therefore in your nagios.cfg file, perform the following changes:

cfg_file=commands.cfg
cfg_dir=conf.d

Now that I look at it, I'm not quite sure why these are specified separately, as your commands.cfg file could live under conf.d. But I'll leave that for readers who have their own structure preferences. The key here is that relative paths must be used, NOT absolute ones.

Next we move onto the gitolite-admin configuration:

repo nagios
  RW+ = jforman_rsa
  option hook.pre-receive = nagios-pre-receive
  option hook.post-receive = nagios-post-receive
  option hook.post-update = nagios-post-update

This tells Gitolite the name of my nagios config repository, who is ACL'd to read and write to it, and which hooks I wish to override with my custom hooks. Note that in Gitolite you can only override these three hooks: pre-receive, post-receive, and post-update. Other hooks such as post-merge and merge are special to Gitolite, and you will get an error if you attempt to override them. Each hook name (nagios-pre-receive, and so on) corresponds to a file that will live in my gitolite-admin repository under the 'local' directory.

Now we come to the point of defining our hooks. Under your gitolite-admin directory, create the directory structure 'hooks/repo-specific' under the directory we defined in the above LOCAL_CODE definition. In our case, that corresponds to 'local'.

In other words, in our local checkout of the gitolite-admin repository:

mkdir -p ${gitolite_admin_path}/local/hooks/repo-specific

Under this repo-specific directory, using whatever language you prefer (Python, Shell, Perl, etc), create the files for the repository's hooks.

gitolite-admin$ tree local/
local/
  └── hooks
    └── repo-specific
      ├── nagios-post-receive
      ├── nagios-post-update
      └── nagios-pre-receive

nagios-pre-receive:

#!/bin/bash

umask 022
while read OLD_SHA1 NEW_SHA1 REFNAME; do
    # Check out the pushed revision into a throwaway work tree.
    export GIT_WORK_TREE=/tmp/nagiostest-$NEW_SHA1
    mkdir -p $GIT_WORK_TREE
    /usr/bin/git checkout -f $NEW_SHA1
    # Run the Nagios pre-flight check against that work tree.
    sudo /usr/sbin/nagios3 -v $GIT_WORK_TREE/nagios.cfg
    if [ "$?" -ne "0" ]; then
        echo "Nagios Preflight Failed"
        echo "See the above error, fix your config, and re-push to attempt to update Nagios."
        exit 1
    else
        echo "Nagios Preflight Passed"
        echo "Clearing temporary work directory."
        rm -rf $GIT_WORK_TREE
        exit 0
    fi
done

nagios-pre-receive: Using the most recent commit, check out that body of work into a temp directory and run the Nagios pre-flight checks over it. If those checks pass, exit 0 (without error). If those pre-flight checks fail, error out. The latter case stops the Git push completely. Your running Nagios configs in /etc/nagios3 are untouched.

nagios-post-receive:

#!/bin/bash

echo "Updating repo /etc/nagios3"
/usr/bin/update-gitrepo /etc/nagios3

nagios-post-receive: This runs a companion script to update the cloned git repository at /etc/nagios3. Note that this is only run if the pre-receive succeeds, and it executes after the merge step of the git push has completed, which means our Gitolite nagios repository has now merged the commits we pushed.

nagios-post-update:

#!/bin/bash

sudo chown -R root:nagios-admin /etc/nagios3
sudo /usr/sbin/service nagios3 restart

nagios-post-update: The post-update step runs after the post-receive, ensuring our permissions are correct and then restarting Nagios.

At this point, the custom hooks and gitolite.conf should be committed and pushed to the remote Gitolite gitolite-admin repository. No more storing hooks in the bare repository itself! The hooks themselves are version controlled. This was what bugged me the most about my prior solutions. I always hated not having any history on how I fixed (broke) the scripts in the past.

update-gitrepo

#!/bin/bash
umask 022

REPO_DIR=$1
cd ${REPO_DIR}
unset GIT_DIR
/usr/bin/git pull origin master

update-gitrepo: Lives on monitor1 under /usr/bin and merely executes a 'git pull' in the passed directory.

I'm not too happy with this last bit of configuration, given how manual and hacky it feels. It's how you get /etc/nagios3 to be the Git repository checkout itself. On monitor1, I normally do this initial work in /tmp, as the 'git' user.

cd /tmp
git clone /home/git/repositories/nagios.git/
(as root) mv /etc/nagios3 /etc/nagios3.notgit
(as root) mv /tmp/nagios /etc/nagios3
(as root) chown -R root:nagios-admin /etc/nagios3
/usr/bin/update-gitrepo /etc/nagios3

This performs a checkout of the Nagios repository itself (note that we're skirting Gitolite's ACL control and accessing the repository directly on the file system). If you wish to further control who can check out the nagios repository, create an ssh key for your 'git' user locally, add it to the gitolite-admin repository ACL, and perform your checkout over SSH as opposed to using the absolute path of the directory.
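Sketching that SSH-based alternative (the key file name and clone URL are illustrative; the public key goes into keydir/ in gitolite-admin and is granted read access to the nagios repo):

sudo -u git ssh-keygen -t rsa -f /home/git/.ssh/nagios_checkout
# register nagios_checkout.pub in gitolite-admin/keydir/ and grant it R on the nagios repo;
# an ~/.ssh/config entry pointing at this key may also be needed
sudo -u git git clone git@localhost:nagios /tmp/nagios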

Wow. This post turned out to be much longer than I expected. If you've made it this far, you should have the groundwork laid to do remote clones of your Nagios configuration files, and the ability to run the pre-flight check to verify their correctness before ever getting near your production Nagios directory. Good luck.


Nov 14, 2014

Stack it up: KVM, VLANs, Network Bridges, Linux/OpenBSD

I've had some free time and a desire to break stuff on my network at home. I wanted to fix my home network's topology to more correctly split up my wired (DHCP), wireless (DHCP), and server (statically configured) subnets. At a high level, I had to create a server subnet, create VLANs on my layer-3 switch for each of those subnets, and then move the network interfaces on my VM host around so it only connects to the networks I want it on (wired and server).

First, I moved the secondary interface of my VM host at home from the wifi network to the new server network. The server network would have its own subnet (10.10.2.0/24) and its own VLAN, VLAN 12 (we all have layer-3 managed switches at home, right?).

The fw/router on my network runs OpenBSD. Interface 'em3' will be providing connectivity to VLAN 12.

$ cat /etc/hostname.em3
up description "Server Iface"

$ cat /etc/hostname.vlan12
inet 10.10.2.1 255.255.255.0 10.10.2.255 vlan 12 vlandev em3 description "Server Network vlan12"

If I want to throw more VLANs on em3, I can just create more hostname.vlan{number} files with the appropriate network configuration.

The switch managing my network's traffic is an HP 2910al-24G (J9145). The relevant configuration to handle tagged packets for VLAN 12 on two ports (4 and 12) is:

vlan 12
name "Servers-10-10-2-0-24"
tagged 4,12
ip address 10.10.2.15 255.255.255.0
exit

I've also added a management IP on this server-subnet VLAN. On my VM host, this took the most time to get right. The hardware network interface for the server network is eth1. I wanted to create a bridge on this interface so that multiple VMs could bind to it. This bridged interface on VLAN 12 is br-vlan12. The host also has an IP address on the network itself, so that I can reach the VM host over the server subnet.

auto eth1
iface eth1 inet manual

auto br-vlan12
iface br-vlan12 inet static
    address 10.10.2.21
    network 10.10.2.0
    netmask 255.255.255.0
    broadcast 10.10.2.255
    bridge_ports eth1.12
    bridge_stp off

One of the pieces that had me fumbling for so long was getting the bridge_ports and vlan-raw-device specification right on the bridge. It turns out that when the former is specified in a certain way, the latter is not needed.

From interfaces(5):

VLAN AND BRIDGE INTERFACES
To ease the configuration of VLAN interfaces, interfaces having . (full stop character) in the name are configured as 802.1q tagged virtual LAN interface. For example, interface eth0.1 is a virtual interface having eth0 as physical link, with VLAN ID 1.

For compatibility with bridge-utils package, if bridge_ports option is specified, VLAN interface configuration is not performed.
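Had I gone the explicit route instead, the equivalent would have been declaring a separate VLAN interface and bridging that; something like the following, assuming the vlan package's vlan-raw-device option:

auto vlan12
iface vlan12 inet manual
    vlan-raw-device eth1

auto br-vlan12
iface br-vlan12 inet static
    address 10.10.2.21
    netmask 255.255.255.0
    bridge_ports vlan12
    bridge_stp off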

I am still having some fun issues with routing on my network, where I can ping from my wifi network to the wired-network interface on my LAN, but not wifi -> server. I think this has to do with reverse path forwarding (RPF) checking on the server, given its default route is over the 'wired' network and not the server network interface. An invaluable tool for debugging these types of issues has been the sysctl setting below, which logs martians: packets that arrive on an interface the kernel does not expect them on, and which are therefore rejected by default.

net.ipv4.conf.all.log_martians = 1
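If reverse path filtering does turn out to be the culprit, the related knob to inspect would be rp_filter, though I have not confirmed that for this setup:

# 0 = off, 1 = strict reverse path filtering, 2 = loose
sysctl net.ipv4.conf.all.rp_filter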

The fun and breakage continues.

Helpful links found during my foray into this topic:


Sep 13, 2014

Unattended Ubuntu installs, part 2

In my initial post about unattended Ubuntu installs, I made the less-automated choice of hacking at the Ubuntu installation ISO and baking my preseed configuration right into it. This proved to be incredibly inefficient and prevented a lot of the customization and quick-spin-up potential I was interested in. In other words, if I wanted to spin up five identical VMs differing only by their hostname, was I really expected to bake five custom ISOs whose preseed files differed only in the hostname?

Solution: With a bit of Internet poking, I found that you can specify the URL of a preseed file, accessible via HTTP, as a kernel boot parameter for your VM to read during OS installation. Given all this, there really was no reason to bake my own ISO in the first place. I then tested with virt-install, specifying all these parameters on the command line and using a stock Ubuntu install ISO. Results? Success!

For those curious, the command line I used:

sudo /usr/bin/virt-install \
  --connect qemu:///system \
  --disk vol=<disk pool>/<disk volume>,cache=none \
  --extra-args "locale=en_US.UTF-8 console-keymaps-at/keymap=us console-setup/ask_detect=false console-setup/layoutcode=us keyboard-configuration/layout=USA keyboard-configuration/variant=US netcfg/get_hostname=<VM hostname> netcfg/get_domainname=<VM domain name> console=tty0 console=ttyS0,115200n8 preseed/url=<URL to preseed file>" \
  --location /mnt/raid1/dump/ubuntu-14.04.1-server-amd64.iso \
  --network bridge=<bridge if> \
  --name <VM name according to libvirt> \
  --os-type linux \
  --ram 512 \
  --vcpus 1 \
  --virt-type kvm

Preseed file: This file can live on any HTTP server accessible from your VM. During the install process, it is retrieved via wget. The URL itself is handed to the installer through the preseed/url kernel parameter shown in the --extra-args above.

The only modification I had to make to my preseed file had to do with selecting a mirror: I was constantly prompted to select a mirror hostname. After another couple of Google searches, I was left with what seems to work, defaulting to a US-based HTTP mirror for Ubuntu packages:

d-i mirror/http/countries select US
d-i mirror/http/hostname string archive.ubuntu.com
d-i mirror/http/directory string /debian
d-i mirror/http/mirror select us.archive.ubuntu.com
d-i mirror/protocol select http
d-i mirror/country string US
d-i mirror/http/proxy string

Enjoy!


Sep 01, 2014

Look ma', no hands with Ubuntu installs.

In my day job, it's all about automation. Automate what is repeatable, and move on to more interesting and not-yet-automated tasks. For a while, I've run a KVM/libvirt setup at home, running various iterations and distributions of Linux, OpenBSD and FreeBSD for various pet projects. Going through each distribution's install procedure was getting old, requiring me to input the same parameters, set up the same users and passwords, over and over again. Given I use Ubuntu mostly as a VM guest, I dug into their preseed infrastructure, to be able to automate the installation and get me past the drudgery of adding another VM. Below are the steps and a bit of sample configuration that got me through the process.

I did find some examples of automating this all the way from virt-install (libvirt's way of adding a VM instance to your cluster), but that is for another time.

[Update 2014-09-13: Even more unattended. Part 2]

Grab an Ubuntu Server ISO from their website. Mount the ISO locally and rsync its contents to a new directory for your own customization.

mount -o loop /path/to/iso /some/mountpoint
rsync -av /some/mountpoint/ /opt/cd-image

Now we're left with the customization of the install. I wanted the installation to be completely hands-free: I shouldn't have to enter any partition information, user names, or network information. Right now my parameters are, for the most part, configured in the preseed file. My eventual goal is to factor those out into my own personal install script, so that the command line arguments from my script are passed as kernel options to the install and read at run time as opposed to at CD-creation time. Doing it that way alleviates the need to re-create a new ISO, with values such as host name, domain name, and network information hard-coded in the preseed file, each time you want to build a new VM.

/opt/cd-image/isolinux/txt.cfg (additions):

LABEL forman-preseed
menu label ^Forman Preseed
kernel /install/vmlinuz
append preseed/file=/cdrom/preseed/ubuntu-server-custom.seed vga=788 initrd=/install/initrd.gz locale=en_US.UTF-8 console-keymaps-at/keymap=us console-setup/ask_detect=false console-setup/layoutcode=us keyboard-configuration/layout=USA keyboard-configuration/variant=USA --

/opt/cd-image/preseed/ubuntu-server-custom.seed:

d-i debian-installer/locale string en_US
d-i netcfg/choose_interface select eth0
d-i netcfg/get_hostname string preseedhost-1
d-i netcfg/get_domain string foobar.mylan
d-i netcfg/wireless_wep string

d-i time/zone string US/Eastern
d-i clock-setup/ntp boolean true

d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman/default_filesystem string ext4
d-i partman-auto/init_automatically_partition select biggest_free
d-i partman-auto/choose_recipe select atomic
d-i partman-auto/method string regular
d-i partman-auto/select_disk string /dev/vda
d-i partman-md/confirm boolean true
d-i partman-partitioning/confirm_write_new_label boolean true

d-i passwd/root-password-crypted password <echo "foo" | mkpasswd -m md5 ->
d-i passwd/user-fullname string FirstName LastName
d-i passwd/username string myfirstuser
d-i passwd/user-password-crypted password <echo "foo" | mkpasswd -m md5 ->
d-i user-setup/allow-password-weak boolean true
d-i user-setup/encrypt-home boolean false

d-i mirror/http/proxy string
d-i pkgsel/include string openssh-server irssi
d-i pkgsel/upgrade select full-upgrade
d-i pkgsel/update-policy select none

tasksel tasksel/first multiselect basic-ubuntu-server

d-i clock-setup/utc boolean true

d-i grub-installer/only_debian boolean true
d-i grub-installer/timeout string 2
d-i finish-install/keep-consoles boolean true
d-i finish-install/reboot_in_progress note

This creates a VM with the following properties (highlights only, for brevity):

  • hostname = preseedhost-1
  • domain name = foobar.mylan
  • network = DHCP via eth0
  • partitioning scheme = one big / partition, with space left over for swap
  • one user with password set, along with root's password.
  • install openssh-server and irssi to confirm package installation works.

Create the CD image:

IMAGE=custom.iso
BUILD=/opt/cd-image/

mkisofs -r -V "Custom Ubuntu Install CD" \
  -cache-inodes \
  -J -l -b isolinux/isolinux.bin \
  -c isolinux/boot.cat \
  -no-emul-boot \
  -boot-load-size 4 \
  -boot-info-table \
  -o $IMAGE \
  $BUILD

Voila. Boot that as your install CD and behold the magic! Booting from this ISO inside your VM instance should leave you with a fully functioning instance.

The only real hiccup I hit along the way, given the multitude of documentation out there on the Internet, was getting past the keyboard selection prompts. Specifying the keyboard model, layout, and ask_detect values for that set of questions inside the preseed file had no effect; I was still prompted. Those configuration values seem to be required in the isolinux config, which in my case I stuck in txt.cfg.

Sources (of inspiration):


May 03, 2014

i3wm, i3bar, and rhythmbox

I was interested in customizing my i3wm setup a bit more, and wanted to display the current song playing in Rhythmbox while running the i3wm window manager. It turned out to require just a few lines added to my i3bar config.

First, I grabbed a copy of the Python wrapper around i3bar, wrapper.py. This wrapper merely takes the output of a command, wraps it in compliant JSON, and returns it in a way that i3bar can use to generate its output. The wrapper lets the user add an arbitrary number of entries to their status bar, although you don't want to overwhelm what is supposed to be just a status bar.
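For reference, each status line the wrapper emits to i3bar is just a JSON array of blocks, each with at least a full_text field; a single line looks roughly like this (contents made up):

[{"name": "rhythmbox", "full_text": "Some Artist - Some Song (1:23)"}, {"name": "load", "full_text": "0.42"}]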

My own code I added to wrapper.py:

import subprocess

def get_rhythmbox_status():
    """Get the current song playing and elapsed time from rhythmbox, if it is running."""
    cmd = "rhythmbox-client --no-start --print-playing-format '%aa - %tt (%te)'"
    try:
        rb_output = subprocess.check_output(cmd, shell=True)
        rb_output = rb_output.strip()
    except subprocess.CalledProcessError:
        rb_output = "Rhythmbox Client: Error"

    if "(Unknown)" in rb_output:
        rb_output = "Rhythmbox: Not Playing"

    return rb_output

This function executes rhythmbox-client. The command line flags purposely avoid starting Rhythmbox if it is not already running, and print a custom output format for the current song: artist name, track title, and elapsed time. If Rhythmbox is not running, no output is present in the i3bar. If Rhythmbox is running but no song is playing, "Rhythmbox: Not Playing" is displayed.
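Run by hand, the command and its output look something like this (artist and title made up):

$ rhythmbox-client --no-start --print-playing-format '%aa - %tt (%te)'
Some Artist - Some Song (1:23)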

In the main function, I modified the line which inserts an entry into the JSON output:

j.insert(0, {'full_text' : get_rhythmbox_status(), 'name' : 'rhythmbox'})

Inside my i3wm config's bar specification, my status_command line looks like the following:

status_command i3status --config ~/.i3/i3status.conf | ~/.i3/wrapper.py

If your i3bar updates on an interval, you will see the current elapsed time of the song update as it plays.
