Remind Me: Initial Data in a Django class-based Form

I love Django's class-based way of handling forms. You name the class, articulate each field (data point of your form), and attach it to a view. Voila. But what happens when you want some initial data in the form?

Initial to the rescue!

What your class might look like:

    class PersonForm(forms.Form):
        first_name = forms.CharField(max_length=100)
        last_name = forms.CharField(max_length=100)
        gender = forms.CharField(max_length=1)
        hair_color = forms.CharField(max_length=256)

If you now wanted to initialize your form for males with blonde hair, include this snippet in your view:

    form = PersonForm(initial={'gender': 'M', 'hair_color': 'blonde'})

Then pass that form as part of your render return:

    return render_to_response('add_person.htm', {'form': form})

This post is brought to you by #neverwantingtosearchtheinternetforthisagain, and StackOverflow for inspiration.

Barcamp Boston 6, Day Two

Finally got this post out after having a bit of a busy week.  

Location based networking, Anurag Wakhlu (Coloci Inc)
* location based apps: where are you now? or where will you be?
* where are you now: Foursquare, Gowalla, Loopt, etc.
* where will you be: Coloci, Fyesa, TripIt, Plancast
* interest based networking: the reason to talk to someone who is near you. tie in an interest: sending someone a coupon when they are near Starbucks. if they aren't near Starbucks, what good is a coupon?
* proactive coupons: don't wait for a check-in. if someone is 2 blocks from Starbucks, send them a notification with a coupon. ex// Minority Report: walk by a billboard, it recognizes you and tailors the ad specifically to you. 52% of US consumers are willing to share location for retail perks.
* Foursquare background check-in? automatically check you in when you are in a close enough vicinity to a location.
* Do privacy concerns have a potential impact on services becoming more popular? ex// European privacy laws about broadcasting who you are, where you are, etc.
* You have to trust your device: when you disallow authority to know your location, does it actually stop broadcasting where you are?
* Trade-off of convenience versus privacy. A debit card is a lot more convenient than cash, so people are more than likely to give up privacy.
* If you really want to not be tracked, you need to disconnect yourself from the computer. Go cash only. Re-education might help: “You might already be sharing this info somewhere else, so what difference is it now that you do it via your phone?”
* Tracking someone’s history via the CSS :visited tag. Firefox has supposedly fixed this issue so websites cannot do this anymore.
* Using EZPass, who is responsible for giving a ticket if you did 60 miles in faster than 60 minutes? Using your location to know you broke the law.

At the start, Anurag gave a wonderfully succinct history of location-based networking, highlighting the current giants like Foursquare and Facebook Places. We talked about how the potential is there for your phone to alert you about consumer deals in your vicinity, giving networking more of a ‘push’ aspect, or to alert you when friends are near. Eventually, though, the attendees turned the talk into a big privacy discussion. Not as flame-worthy as it could have been, but still a debate over how much of our information we want to broadcast and allow to advertisers. Could the situation eventually reach the point, like Minority Report, where your phone is overtly or covertly broadcasting who you are to potential advertisers or other potentially nefarious people?

Economics of open source
* reputation is a kind of currency. ancillary benefits of ‘being known.’ ex// a popular GitHub repo can get you a book deal, get you flown to conferences, etc.
* are we cheapening what we do by giving it away? software produces so much cash for people. not everything is OSS; people are still needed to customize it and apply it.
* discussion: can donations kill a project? the comptroller decides who gets money, those who donate time but don’t get paid feel slighted, and the project can take a nosedive.
* war story: giving training away for free when a company charges for it. you are hurting the ecosystem by giving it away rather than having someone pay for it.

Content of the presentation was a bit bland/dry, but the discussion was involved. This was fairly interesting, delving past the common topic of software being ‘free as in beer.’

Interviewing well as a coder round table
* feel okay sitting there for a couple minutes thinking. don’t feel stressed to start writing code right away.
* some questions ask you to regurgitate syntax. what happens if you get confused between languages?
* design issues: “show us where you would add X feature.” stylistics versus code syntax.
* code portfolios: employers look at your GitHub profile to see the code you’ve written. if your code is ‘too good’, the employer wants you to find bugs in their code.
* how to practice your whiteboarding skills? code katas: short programming problems.
* interviewers asking questions that have no solution. can you be an asshole while interviewing?
* be prepared for personal questions, because employers will Google you and find your personal interests.
* spin negative questions as positive: what do you see improving in your work environment?
* questions back to the employer: what do you hope to improve for your company?
* if you list a skill in your skills list, be ready to whiteboard the code.

Can the internet make you healthier? Jason Jacobs (RunKeeper founder)
* convergence of health/athletic data and IT
* virtual coaching: ahead/behind pace, in-app reminders to go faster or slower in their iOS app. the more data you have about what you’re doing physically, the better you can react. how am I doing against my peers?

This was interesting, since Jason sees his company’s first product, RunKeeper, as the jumping-off point to more athletic body-sensing applications. The point was raised about when an app that suggests a certain pace while running dances the line of being medical advice. I think it is a good point that the app needs more information about your health before suggesting a certain distance or pace for exercise. I’ll be curious myself, as I use the app more, how I am improving athletically.

Overall, I found the signal-to-noise ratio of the unconference to be very high. For my first Barcamp, I would suggest it to all technically-inclined folks who just want to let their interests and imaginations plot the course of which talks they attend. I know I will be a repeat attendee.

Barcamp Boston 6, Day One

Having never been to a Barcamp before, I knew the overall structure of the conference, but was curious whether I would actually like it. Truth be told, I found it full of content without a lot of fluff, even in the talks I sat in on where I had no prior knowledge. My notes follow, thanks to the great OS X app Notational Velocity hooked up to Simplenote. My overall thoughts are in italics after each set of notes:

how to give a presentation people love and learn from
* break the presentation into 7-10 minute chunks
* transition to another topic about 7 minutes into the talk, to keep people’s attention
* insert emotion, a story, rather than just “X happened.”
(For a talk this meta, a presentation about giving presentations, I was not hooked. There weren’t any real nuggets of information here that made me sit up and say “Wow, I haven’t been doing this in the presentations I make.”)

how to run a startup like genghis khan, by @wufoo
* work like a nomad
* build an audience first. protect your audience. make the audience part of the show.
* make developers handle support requests. once devs get the same question two or three times, they go in and fix the code so they don’t get the question again.
(Presenter absolutely killed it. Engaging, fast talking (without mumbling), great slides that presented the information in clear and sometimes humorous ways. Made me think more about engaging the people I am trying to win over to my way of thinking.)

android developer: war stories and antipatterns
yoni, lead android dev at scavngr
* don’t code splash screens. they’re more of an iOS thing. if you have to preload data, show a progress bar in the already-open app
* don’t force orientation (landscape/portrait). support both
* don’t assume their screen size. use relative layouts
(I went into this talk curious and with no prior experience or knowledge of writing an Android app. I don’t even own an Android phone. This was much more a round table, with those devs in the room very willing to share their experiences and war stories. I found they really had good experiential tips, rather than “This is the best practice” and moving on.)

ask a plasma physics grad student anything
(I must say this was completely over my head. The student at the front of the room, from MIT’s Plasma Science and Fusion Center, seemed to know his stuff and was genuinely interested in challenging the audience. What blew me away was the knowledge of the audience, asking very pointed questions with what sounded like real science to back it up.)

building fast websites, making users happy (@jonathanklein)
* google injected 400ms delay into search page, dropped 0.76% searches/users over time.
* faster sites rank better in google. site speed is part of search ranking.
* what’s load time? server side generation time, client side render time. 80-90% of load time takes place on the client.
* best practices:
* reduce http requests: combine css/js, use image sprites (one download and cut up into multiple images).
* minify css/js: strip out comments and whitespace (YUI Compressor, a Java library). will rename variables to the shortest name possible
* gzip all text: html, css, js
* for graphics, use png8 (restricts you to 256 different colors in the image)
* jpegs can be saved at 75% quality
* image compressor from the Yahoo developer network: lossless compression.
* measuring performance
* google webmaster tools,
* yotta, firebug, yslow, page speed, dynatrace ajax edition
(For an ops guy, I was really interested in this talk. Jonathan blew through his material at break-neck speed, but covered the topics and answered questions without feeling like the talk was broken up. Some really good information through his experiences, and things I would like to dig into more myself.)

nosql round table
* some are relational, others are key value
* redis, redis + resque
* cassandra
* mongodb
* why nosql over mysql? no schema, lack of migrations from version to version. being able to store different things. replication (single threaded)
* keeping mysql in sync with nosql layer about: broadcast updates from mysql over rapidmq(?).  nosql service grabs update from mysql.
* solutions that they discarded:
* cassandra: v0.6, latency spikes between nodes. node would get flagged as awol. cascading failure because data gets rebalanced. use “hinted handoff” to prefer the direction of the failover. supposedly better in v0.7. documentation is messy.
* in the cloud or in a dc? mostly EC2. local storage with evs slave.
* search via solr
(Another one where I went in having nothing but curiosity, since NoSQL is one of the popular buzzwords these days. Very engaged audience who shared war stories, both good and bad, of implementing NoSQL solutions in their workplaces. Left me with a stack of websites to dig into.)

agile development war stories
* problems it tries to solve: waste. business approach.
* more collaboration between business and engineering. don’t just throw the ‘stories’ from biz over the wall.
* focus on testable behavior. how can we test each iteration? should be part of the original story.
* be smaller, quicker, more iterative. ex// don’t go off for 18 months planning your solution. the business might change underneath you
* people do “agile but..” and tend to modify the methodology.
* burn down?
* should tasks stay <1 day? sounds a bit unreasonable, since “speeding up the server by 20%” cannot be done in one day. task size should have a reason.
* average sprint time: 2 weeks
* do a code review before the planning meeting, so estimations on a piece of work can be completed in the meeting. ex// don’t trace the code for the 1st time in a meeting.
* software to track scrums/managing stories: soft2, scrum ninja, team foundations studio (windows), white boards, index cards on a wall, ibm rational, pivotal tracker (good for distributed teams), mingle from softworks.
(Another highly-concentrated buzzword round table where I was more curious than anything. Some real good information about what works and what doesn’t when it comes to managing time and projects. Lots to read up on here, and see if I can apply it to my daily work life.)

New toy, Nikon style.

It had only been ‘recently’ that I purchased myself a micro four-thirds digital camera for my honeymoon. It took pretty good pictures, and I loved its compactness when roaming around Portugal for 10 days. But I had always wanted a bit more control over the photos I took, whether it was exposure modification, lens type, or overall flexibility for shooting in different situations (low light at night).

Lo and behold, Nikon announced the D7000. It had all the features of my father’s D90, but with better HD video capture, an upgraded AF sensor, and a whole host of other functionality too long to list. Patiently I waited for my bonus to be direct deposited, reading up on the manual (a hefty 350 pages) and investigating some photography walks and classes in the area. Now in my hot little hands:

The weather finally reached an acceptable temperature where it was just above the level of being uncomfortable for an afternoon stroll. I wandered my way through Somerville and Cambridge on my way through Harvard Square. I captured the below uniquely-painted house. I was mainly playing around with Aperture-priority today, but look forward to digging into more of the image control options to bring out different detail.

Overall the camera is great, not too heavy for a couple hours slung over my shoulder, even with the included Nikon strap and an 18-105mm lens connected. The Op/Tech neoprene strap I have on order should make things a heck of a lot more comfortable when that arrives. I am really looking forward to the weather warming up when I can explore more of Boston with the camera and take it up into the mountains for some day-hikes.

Remind Me: Adding SNMP mibs for querying

I was having issues trying to get Nagios to more easily query my APC UPS with the APC-provided MIB. It took me a while to figure out the right bits both on the file system and in my query to have the MIB ‘processed.’ I still don’t know how to add that MIB to the “automatically process me too if snmpwalk is run” piece of the puzzle.

But for what I have running at home, some notes for myself and for others who have ripped out enough hair already.

    jforman@monitor:/usr/share/snmp/mibs$ ls
    powernet401.mib

    jforman@monitor:~$ cat /etc/snmp/snmp.conf
    mibs +PowerNet-MIB

    jforman@monitor:/usr/share/snmp/mibs$ snmpwalk -v1 -c snmpcommunity ups1 apc
    PowerNet-MIB::upsBasicIdentModel.0 = STRING: "SMART-UPS 700"
    PowerNet-MIB::upsBasicIdentName.0 = STRING: "ups1"
    PowerNet-MIB::upsAdvIdentFirmwareRevision.0 = STRING: "50.14.D"

Relevant Nagios configs:

    define service {
        use                 generic-service
        check_command       snmp_apcups_batterystatus!snmpcommunity
        service_description UPS Battery Status
        host_name           ups1
    }

    define command {
        # OID corresponds to: PowerNet-MIB::upsBasicBatteryStatus.0
        command_name snmp_apcups_batterystatus
        command_line /usr/lib/nagios/plugins/check_snmp -H '$HOSTADDRESS$' -C '$ARG1$' -o upsBasicBatteryStatus.0 -s "batteryNormal(2)"
    }

Help and inspiration courtesy of

You go here, you go there. Bending DHCP to your will.

TL;DR: How to hand out DNS servers in different orders to different clients based upon MAC address.

Background: I was connected to my office’s VPN a few months ago and noticed some very slow DNS resolution of host names back at the office. I would attempt to ssh into another host, and the connection would sit there for more than a few seconds before finally proceeding. This didn’t happen just for ssh, but also for making http requests. I dug into my resolv.conf locally and tried sending a few DNS queries via dig to the two DNS servers I had been provided. The first one failed; the second one returned immediately with the correct response. I swapped the two entries and DNS resolution locally was back to where I would expect it, very fast. I alerted our IT group and the issue was fixed (the first DNS server had become hung and needed a process restart).

Curiosity: This got me thinking, was everyone suffering my issue? Did the two DNS servers handed out always come in that same order? If so, DNS would have been slow for everyone. We’d all be timing out trying to query the first server, waiting for our local resolvers to query the second operational server. Could I get a DHCP server to randomize the list of DNS servers to its querying clients?

Assumptions: I am ignoring the fact that our VPN concentrator might not be running the ISC DHCPD, which my examples are based upon. I will split up each DHCP subnet into two groups, in a binary fashion.

How I did it: After doing some Google searches, I came across a post on the mailing list for ISC DHCPD users. It explained that you could do some logic on the incoming MAC address and, based upon that, hand out unique information: DNS servers, routers, domain names, etc. I was curious to see if I could actually get this working.

I figure the easiest way to do this is paste some config data and go from there.


    class "binary-group-0" {
        match if suffix(binary-to-ascii(2, 8, "", substring(hardware, 6, 1)), 1) = "0";
    }

    class "binary-group-1" {
        match if suffix(binary-to-ascii(2, 8, "", substring(hardware, 6, 1)), 1) = "1";
    }

    subnet netmask {
        option routers;
        option domain-name "";

        pool {
            allow members of "binary-group-0";
            range;
            option domain-name-servers,;
            on commit {
                execute("/bin/echo", "GROUP ZERO");
            }
        }
        pool {
            allow members of "binary-group-1";
            range;
            option domain-name-servers,;
            on commit {
                execute("/bin/echo", "GROUP ONE");
            }
        }
    }

Configuration explanation:

The two class declarations at the top of the config populate each named class with clients whose MAC address satisfies the corresponding ‘match’ line. These match statements do the following: starting from the 6th byte of the client’s hardware field, grab one byte of data. Once we have that data, convert it from binary to ASCII characters in base two, without a separator, each piece of data being eight bits long. From the resulting string, take the one character at the end. We have created two classes here: one where the last character is 0 (zero), and the other where it is 1 (one).
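The same last-bit grouping can be sanity-checked outside of dhcpd with a few lines of plain Python (a hypothetical helper of my own, not part of any dhcpd tooling):

```python
def binary_group(mac):
    """Return '0' or '1': the last bit of the MAC's final octet.

    Mirrors the dhcpd expression
    suffix(binary-to-ascii(2, 8, "", substring(hardware, 6, 1)), 1):
    take the 6th byte of the hardware field (the final octet of the
    MAC), render it in base 2, and keep the last character.
    """
    last_octet = int(mac.split(':')[-1], 16)
    return format(last_octet, 'b')[-1]

print(binary_group('52:54:00:4c:f3:d4'))  # 0xd4 = 0b11010100, so '0'
print(binary_group('52:54:00:4c:f3:d3'))  # 0xd3 = 0b11010011, so '1'
```

Those two MAC addresses match the test clients in the log output later in the post, so the groups line up with the GROUP ZERO/GROUP ONE echoes.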

Next we have a standard subnet declaration with two pools. Each pool uses ‘allow members of’ to control which class of clients it applies to. In this instance, we hand out two different ranges and sets of domain name servers depending on which class a client belongs to. For my own debugging, I stick an ‘on commit’ execution in each pool. This outputs a line in the log when a lease is acquired for a particular client, and gave me some indication of where I was in the config. These ‘on commit’ lines are purely for debugging and can be removed for production. Clients whose MAC address ends in a binary ‘0’ are placed in the first pool’s range and handed its DNS servers; those whose MAC address ends in a binary ‘1’ are given an address from the second pool, with its own set of DNS servers. You could easily put your own DNS servers in this section, modifying the order in any way you please.

Log Output: Using a couple of VMs on an isolated network at home (and playing with the MAC address of the client), I was able to test the above configuration. Notice that each is given an IP from the appropriate range, with the correct ‘echo’ statement being executed.

    DHCPDISCOVER from 52:54:00:4c:f3:d4 (testvm1) via re1
    DHCPOFFER on to 52:54:00:4c:f3:d4 (testvm1) via re1
    execute_statement argv[0] = /bin/echo
    execute_statement argv[1] = GROUP ZERO
    GROUP ZERO
    DHCPREQUEST for ( from 52:54:00:4c:f3:d4 (testvm1) via re1
    DHCPACK on to 52:54:00:4c:f3:d4 (testvm1) via re1

    DHCPDISCOVER from 52:54:00:4c:f3:d3 via re1
    DHCPOFFER on to 52:54:00:4c:f3:d3 (testvm1) via re1
    execute_statement argv[0] = /bin/echo
    execute_statement argv[1] = GROUP ONE
    GROUP ONE
    DHCPREQUEST for ( from 52:54:00:4c:f3:d3 (testvm1) via re1
    DHCPACK on to 52:54:00:4c:f3:d3 (testvm1) via re1


For those curious about how I came to break up the MAC address of the client: I became painfully familiar with the dhcp-eval man page. I honestly would not wish that man page on anyone; it is woefully confusing for someone who does not dabble in DHCPD configuration on a daily basis.

    MAC Address: 52:54:00:4c:f3:d3

    binary-to-ascii(2, 8, ":", hardware);
    1:1010010:1010100:0:1001100:11110011:11010011

    binary-to-ascii(2, 8, ".", substring(hardware, 6, 1));
    11010011

    suffix(binary-to-ascii(2, 8, ":", hardware), 1);
    1

There you have it. Now you can break up your clients into binary groups, handing them different network information depending on where they fall. Obviously if you want to split them up into more than two groups, the match statements become a bit more verbose for each condition. Ultimately this would not have solved my problem of being given a ‘bad’ DNS server first in line for my request (since my MAC address would always be the same), but it does spread the load among DNS servers over local clients. I am now curious, when I get the free time, to play around with creating an on-commit-like command that based upon its execution (generate a random number for example), changes the order of DNS servers handed out to clients.
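As a rough sketch of that last idea (everything here is hypothetical: the helper name, and the placeholder documentation-range IPs), a small script could regenerate the domain-name-servers option with the list shuffled, with the dhcpd config then rebuilt and reloaded from its output:

```python
import random

def dns_option_line(servers, rng=random):
    """Render a dhcpd 'option domain-name-servers' line with the
    server list in randomized order, so no single server is always
    first in line for client queries."""
    shuffled = list(servers)
    rng.shuffle(shuffled)
    return "option domain-name-servers %s;" % ", ".join(shuffled)

# Placeholder addresses from the RFC 5737 documentation range:
print(dns_option_line(["", ""]))
```

This only randomizes order at config-generation time rather than per lease, but it would spread the first-query load across servers without any dhcpd-eval gymnastics.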


Remind Me: Configuring the BIND9 plugin for Munin on FreeBSD (and Linux)

I was attempting to get Munin working on a new FreeBSD machine, monitoring the rate of queries to a Bind9 DNS server. Every time I attempted ‘munin-run bind9’ I was presented with the same error:

    2011/01/29-18:09:55 [3581] Error output from bind9:
    2011/01/29-18:09:55 [3581]     Died at /usr/local/etc/munin/plugins/bind9 line 41.
    2011/01/29-18:09:55 [3581] Service 'bind9' exited with status 2/0.

Digging around in the Bind9 Munin plugin, I found that line 41 complains about a state file that Munin uses. The plugin immediately tries to open the state file without checking whether the file is actually present. (TODO: Check what ramifications there are to just creating the file if it is not present.)

Line 41:

    open(Q, "< $STATEFILE") or die;

After digging around to figure out the plugin state directory (/var/munin/plugin-state, for those following along at home), I was back in business.

    [root@dns1 ~]# cd /var/munin/plugin-state
    [root@dns1 /var/munin/plugin-state]# touch bind9.state
    [root@dns1 /var/munin/plugin-state]# chgrp munin bind9.state
    [root@dns1 /var/munin/plugin-state]# chmod g+rw bind9.state
    [root@dns1 /var/munin/plugin-state]# ls -al
    total 4
    drwxrwxr-x  2 nobody  munin  512 Jan 29 18:13 .
    drwxr-xr-x  3 munin   munin  512 Jan 29 14:47 ..
    -rw-rw-r--  1 root    munin    0 Jan 29 18:13 bind9.state

Relevant bind named.conf stanza for query logging:

    logging {
        channel default_queries {
            file "/var/log/queries.log" versions 3 size 500k;
            severity info;
            print-severity yes;
            print-category yes;
            print-time yes;
        };
        category queries { default_queries; };
    };

With that, ‘munin-run bind9’ worked. I restarted the munin-node process and queries are now being graphed as expected.

[Update 2012-02-20: Getting this working on Ubuntu Server]

After banging my head against the wall trying to get this plugin working on Linux, dying on the same line (about the inability to find the state file), this is how I got it working.

Create the following file with the appropriate permissions.

    root@grenache:/var/lib/munin/plugin-state# ls -al bind9.state
    -rw-rw-r-- 1 nobody munin 22 2012-02-20 15:39 bind9.state

And now, voila:

    root@grenache:/var/lib/munin/plugin-state# munin-run bind9
    query_PTR.value 25
    query_A.value 109
    query_AAAA.value 199
    query_other.value 0

Munin monitoring your SB6120 Comcast Cable Modem

For those who have spent time debugging their Comcast Internet connection, we all know the frustration of trying to explain to Comcast that something on their end is the problem. In this case, more data is better: latency history, ping times, traceroutes, etc. You can run Smokeping to monitor latency between your home connection and a remote Internet IP address for example. You can also print out traceroute examples and email them if you have an astute support contact. But if you want to monitor the data your cable modem is seeing, you need to look at the signal to noise ratio of your connection. This ratio refers to how much of your signal has been disturbed by noise on the physical line (Thanks Wikipedia). Newer cable modems will use multiple channels along the same line to increase your download and upload speed, and each channel can be disturbed independently.

Enter Munin, an RRD-graph based tool that creates easy-to-understand (for the most part) graphs of data over time. Munin plugins can be written to scrape any data you are able to programmatically retrieve and whip it into a pretty graph. The plugin I wrote for Munin scrapes the data from a Motorola Surfboard SB6120 modem’s status page and presents it in an easy-to-digest format for Munin. Previous Munin plugins handled the older SB4120 and SB5120 modems, which are DOCSIS 2.0 products; the status page has since evolved and displays the connection data in a way the old plugins cannot parse on the SB6120.

The graph shows the various downstream and upstream channels, graphing the signal-to-noise ratio in dB (decibels) of each respective channel. Suggested values for Comcast in particular can be found in a FAQ entry from Broadband Reports. These are just suggestions, and different values will have different effects on your Internet connection. A severe dip in the graph could correspond to your modem being unable to stay synced with Comcast’s infrastructure, which translates to an unstable Internet connection.

Installation follows common Munin plugin practice: place the plugin (or a symlink to it) in the /etc/munin/plugins/ directory. Mine is written in Python, so a basic Python install is required on your machine. Source can be found in my munin-surfboard6120 repo on Github.
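For the curious, a Munin plugin is just a program that prints graph metadata when called with the argument ‘config’, and ‘field.value’ lines otherwise. A stripped-down sketch of that contract follows; the field name and sample value are made up, and the real plugin scrapes its numbers from the modem’s status page:

```python
#!/usr/bin/env python
import sys

def config():
    # Metadata Munin reads once per poll to build the graph.
    print("graph_title Downstream signal-to-noise ratio")
    print("graph_vlabel dB")
    print("graph_category network")
    print("down1.label Downstream channel 1")

def fetch():
    # The real plugin scrapes this from the SB6120 status page;
    # a fixed sample value stands in here.
    print("down1.value 36.6")

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "config":
        config()
    else:
        fetch()
```

Running the script via `munin-run` exercises exactly these two code paths, which makes plugins like this easy to debug by hand.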

Enjoy! (Comments/criticisms are welcomed to improve the usefulness of this plugin)

Howto: Git, hooks, Nagios, oh my.

At work we have a monitoring configuration workflow where our Nagios config files are parsed and generated before they are allowed to be ‘svn commit’ed. I know this verification has saved me many times when trying to add new hosts or services, since everything might not be ready for prime time. I wanted to see if I could recreate this scenario at home using Git hooks, if only for my own interest and curiosity.

My proposed workflow: Start with a local clone of the Nagios configs. Add a host, service, or contact to a local Nagios config file. Git commit locally. Git push to the remote repo on the monitoring box. The Nagios config parser runs remotely, verifying the syntactic correctness of the proposed check-in. If these checks fail, reject the push. If parsing succeeds, accept the push and restart Nagios (hence reloading the new configs). I wanted to alleviate the need for an admin to log into the monitoring machine at all. This would eliminate the need to give sudo permissions to a group of users to allow them to restart the Nagios service.

Without further delay, on to the code. The first piece was the pre-receive hook in Git. I used this hook specifically because it has the power to accept or reject a push.

Behold, $git_repo/.hooks/pre-receive:

    #!/bin/bash
    while read OLD_SHA1 NEW_SHA1 REFNAME; do
        export GIT_WORK_TREE=/tmp/nagiosworkdir
        /usr/bin/git checkout -f $NEW_SHA1
        sed -i "s|cfg_dir=CHANGEWITHGIT|cfg_dir=${GIT_WORK_TREE}/config|g" $GIT_WORK_TREE/nagios.cfg
        sudo -u root /bin/chgrp -R nagios $GIT_WORK_TREE
        sudo -u root /usr/sbin/nagios3 -v $GIT_WORK_TREE/nagios.cfg > $GIT_WORK_TREE/check.out
        NAGIOS_CHECK_STATUS=$?
        echo "Nagios Config Check Exit Status:" $NAGIOS_CHECK_STATUS
        if [ "$NAGIOS_CHECK_STATUS" -ne 0 ]; then
            echo "Your configs did not parse correctly, there was an error. Output follows."
            cat $GIT_WORK_TREE/check.out
            exit 1
        else
            echo "Your configs look good and parsed correctly."
            exit 0
        fi
    done

The operations here flow from checking out the pending push to a ‘temporary directory’ (explained below), then making a slight modification to nagios.cfg to handle our temp configs being somewhere other than the Nagios install directory. This is needed since we want to parse our new configs, not the ones currently in use on the machine. After this is done, we examine the exit status of the verification and, based upon it, either accept or reject the push. In the case of a rejection, the output of the verification is printed for the user to see. This output is pretty clear about where the error is.

And now the post-receive:

    #!/bin/bash
    cd /etc/nagios3/testing
    /usr/bin/env -i /usr/bin/git pull
    echo "Restarting Nagios now"
    sudo -u root /usr/sbin/service nagios3 restart

The post-receive hook runs after the entire process is done [1]. We change into the directory of the Git clone of the monitoring config repository, which itself is inside /etc/nagios3, and execute a ‘git pull.’ The need to wrap it in ‘env’ is because of an issue I also learned about in this process. [2] The GIT_DIR variable is set to the directory of your repository; in this case, we must change to another directory on the machine, outside of our initial Git repo. Therefore we must ‘unset’ this variable, which is what the ‘env -i’ execution does.

This is pretty straightforward for the most part. The part of the process new to me was GIT_WORK_TREE. I received a pretty simple explanation from the author of Gitolite. He explained that the work tree is a directory where the potential new commit can be checked out before it is actually allowed to be pushed into the remote repository. Essentially, the opposite of a bare repository. The reason I do this is that I want to run the parsing check before I actually allow the push. I can’t ‘git pull’ the new repository directly into the Nagios configuration directory before it is actually committed. GIT_WORK_TREE allows this intermediary functionality.

The reasoning for the ‘sed’ modification is that since I will be checking out the intermediary files into a new directory, I need a Nagios config file that references this intermediary directory. How did I get around this? I copied the nagios.cfg file I run my install with into the repo, and changed the cfg_dir directory to something I could easily modify during the check-in (cfg_dir=CHANGEWITHGIT). This file is not meant to be used on the actual server, so do not copy it over your actual nagios.cfg file.

The chgrp magic is because the user who runs the Nagios application must have read access to the configuration files.

Relevant sudoers lines needed:

    git ALL=(ALL) NOPASSWD: /bin/chgrp -R nagios /tmp/nagiosworkdir
    git ALL=(ALL) NOPASSWD: /usr/sbin/nagios3 -v /tmp/nagiosworkdir/nagios.cfg
    git ALL=(ALL) NOPASSWD: /usr/sbin/service nagios3 restart

The three lines above are as follows: (1) Allow the git user, without a password, to change the group of the temporary Nagios work dir. (2) Allow the git user, without a password, to run the verification of the to-be-committed files. (3) Allow the git user to actually restart the Nagios service.

A successful commit, including a restart of Nagios looks like the following:

    jforman@merlot:~/devel/testing/config$ git commit . && git push
    [master 2f796e3] testing for the blog post
     1 files changed, 1 insertions(+), 1 deletions(-)
    Counting objects: 7, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (4/4), done.
    Writing objects: 100% (4/4), 394 bytes, done.
    Total 4 (delta 2), reused 0 (delta 0)
    remote: Previous HEAD position was ca6dd74... fix now
    remote: HEAD is now at 2f796e3... testing for the blog post
    remote: Nagios Config Check Exit Status: 0
    remote: Your configs look good and parsed correctly.
    remote: From /home/git/repositories/testing
    remote:    ca6dd74..2f796e3  master -> origin/master
    remote: Updating ca6dd74..2f796e3
    remote: Fast-forward
    remote:  config/contacts_nagios2.cfg |    2 +-
    remote:  1 files changed, 1 insertions(+), 1 deletions(-)
    remote: Restarting Nagios now
    remote:  * Restarting nagios3 monitoring daemon nagios3
    remote: ...done.
    To git@monitor:testing.git
       ca6dd74..2f796e3  master -> master

This was definitely an eye-opening learning process for me into all the moving parts of Git. I hope this Howto helps those out there looking for a solution like this. Enjoy!


For those wondering about the various permissions of the git work tree directory and the repo checkout inside /etc/nagios3:

    monitor:/tmp/nagiosworkdir
    drwxrwxr-x 3 git nagios 4096 2010-11-16 16:13 nagiosworkdir

    monitor:/etc/nagios3/testing
    drwxr-xr-x 4 git nagios 4096 2010-11-15 08:36 testing