Wireless, now with more 802.11’s…

With nothing else to do around here tonight while the whole state is shut down thanks to a blizzard, I should catch up on some blog posts.

On my list of home network upgrades for the past several months was the wireless. As my wife and I add to our collection of smartphones, laptops, tablets, and wireless streaming devices (I am looking at you, EOL Logitech Revue with Google TV), latency and available bandwidth started to show signs of strain. I had been running the wireless for several years on an Asus WL-500G Premium v2 router/WAP, which only ran 802.11b/g over 2.4GHz. It was time for an upgrade.

Welcome our new Asus RT-N66U 802.11b/g/n router/WAP, which handles dual-band 2.4GHz and 5GHz wifi.

I did some very unscientific comparisons before and after I performed the hardware upgrade. I pushed and pulled a ~763MB Ubuntu ISO across the wireless, through a 10/100Mb switch, via rsync over SSH, from a server on the LAN. The following table shows rsync’s average speed and transfer duration from the point of view of a 15″ MacBook Pro connected via wifi.

          Old Wifi         New Wifi 2.4GHz   New Wifi 5GHz
Upload    1.87MB/s (6:46)  7.69MB/s (1:39)   5.57MB/s (2:17)
Download  2.61MB/s (4:51)  10.81MB/s (1:10)  10.39MB/s (1:13)
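Those numbers square with the rough practical ceilings of the standards involved: 802.11g advertises 54 Mbps but delivers roughly 20-25 Mbps of real throughput (those ceiling figures are rules of thumb, not something I measured here). A quick conversion of rsync’s MB/s figures to Mbps:

```python
# Convert measured rsync rates (MB/s) to Mbps for comparison against
# the rough practical ceilings of each wifi standard.
def mbytes_to_mbits(mb_per_s):
    return mb_per_s * 8

# Old 802.11g download: right at g's ~20-25 Mbps real-world ceiling.
print(f"old download: {mbytes_to_mbits(2.61):.1f} Mbps")   # 20.9 Mbps
# New 802.11n download: now bumping up against the 10/100Mb switch in the path.
print(f"new download: {mbytes_to_mbits(10.81):.1f} Mbps")  # 86.5 Mbps
```

Which suggests the old 802.11g radio, not the LAN, was the bottleneck all along.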

Needless to say, I am keeping the new router.

California 2012, thricely.

Condensed version of trip #3 to California.

San Diego

  • Sushi Ota: Sake and sushi with Mozilla folks. The quality of the sushi (incredible) relative to the price (reasonable) blew me away.
  • Tajima: Ramen! The spicy miso ramen here lives up to its name; be prepared.
  • Fish Market
  • Cucina Urbana: Serious Italian, and a wine list to match. Funky interior too.
  • Davanti Enoteca: Good tripe dish

San Francisco

  • State Bird Provisions: Dim sum delivery, California style.
  • Black Point Cafe: Coffee refuel near the Golden Gate Bridge. Killer latte.
  • Gary Danko
  • Spruce: The duck. Yes, get the duck.
  • Philz Coffee: No justification needed. Had to restock the East Coast supply.
  • Acquarello
  • Izakaya Roku: More ramen! Sake!
  • Humphry Slocombe: Blue Bottle Vietnamese Coffee Ice Cream. Good lord. I can now die a happy man.
  • Incanto: Charcuterie zen master.

Napa/St Helena

  • White Rock Winery: We happened upon this one as they were dumping out the lower quality wine. So sad.
  • Oakville Grocery Co: Mid-Napa refuel. Bread, cheese, meats.
  • Saddleback Cellars
  • Gott’s Roadside: The juxtaposition of this in Napa is pretty jarring. Their milkshakes are a nice divider between all the wine.
  • Stag’s Leap Wine Cellars: This is where I cemented the fact that I like the smaller wineries, rather than the commercial behemoths.
  • Cliff Lede Winery
  • Morimoto Napa

Central California Coast

  • La Bicyclette (Carmel): Pizza, charcuterie, cheese. Takeout. We’ll be back.
  • Sierra Mar at Post Ranch Inn (Big Sur): One of the most spectacular sunset views on the Pacific Coast. Food is damn tasty too.
  • Big Sur Bakery: Brunch doesn’t start until 10:30am, but the fruit strudel and lattes are killer.
  • Dover Canyon Winery: Our first foray into Paso Robles wines. Unexpectedly awesome, along with their 185 lb. St Bernard named Thunder.
  • Turley Winery
  • Whalebone: Free chili with every tasting. Clutch with all the damn rain.
  • Adelaide Winery
  • Olavino: Olive oil and salt, one made with ghost chili. Holy crap, and holy good.
  • L’Aventure Winery

Nagios and Git hooks, a redux

A while back I blogged about how I hooked up Nagios and Git to run the Nagios preflight checks before restarting with a new checkin’s worth of configs. But the more I looked at how it all fit together, the more I knew it could be improved. A sed hack expecting a certain pattern in the nagios.cfg? Bad bad bad. Most of the improvement revolves around Nagios’s ability to reference relative paths for its config files: given the path of the ‘main’ nagios.cfg file, you can reference directories that contain your services, hosts, and other custom commands relative to that main file. With this functionality I significantly improved the Git->Nagios pipeline.

First, the pre-receive hook:

#!/bin/bash
umask 022

while read OLD_SHA1 NEW_SHA1 REFNAME; do
    export GIT_WORK_TREE=/tmp/nagiostest-$NEW_SHA1
    mkdir -p $GIT_WORK_TREE
    /usr/bin/git checkout -f $NEW_SHA1
    sudo /usr/sbin/nagios3 -v $GIT_WORK_TREE/nagios.cfg
    if [ "$?" -ne "0" ]; then
        echo "Nagios Preflight Failed"
        echo "See the above error, fix your config, and re-push to attempt to update Nagios."
        exit 1
    else
        echo "Nagios Preflight Passed"
        echo "Clearing temporary work directory."
        rm -rf $GIT_WORK_TREE
        exit 0
    fi
done

Using the GIT_WORK_TREE environment variable, which specifies Git’s working directory, I check out the new set of potential configs to a temporary directory. This provides a temporary ‘waiting room’ where the proposed configuration can be tested before being put into production. Imagine never (intentionally) breaking Nagios again because of a broken host or service specification. The main thing to remember is that all references in the nagios.cfg to other config files (hosts, commands, etc) must be relative paths. For example, I have lines that look like “cfg_dir=configs” in the nagios.cfg. Note the lack of absolute paths. We now run the Nagios pre-flight check (nagios -v) on the nagios.cfg in the Git work tree. Depending on the exit value of ‘nagios -v’ (0 for success, nonzero for failure), we either proceed or die immediately. On success, we clean up the temporary work directory.
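For reference, the relevant lines of my nagios.cfg are just relative references like these (cfg_dir=configs and a top-level commands.cfg both show up in the push output later in this post, so this mirrors my layout):

```
# nagios.cfg: reference other config files and directories relative to
# wherever this nagios.cfg lives, so the same tree validates from /tmp
# and runs from /etc/nagios3.
cfg_file=commands.cfg
cfg_dir=configs
```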

Now the post-receive hook:

#!/bin/sh
echo "Updating repo /etc/nagios3"
sudo /usr/bin/update-gitrepo /etc/nagios3

The post-receive hook merely runs a script, noted below, on the Nagios configuration directory.


#!/bin/sh
umask 022
REPO_DIR=$1
cd ${REPO_DIR}
/usr/bin/git pull origin master

Given the Git checkout’s directory, we pull the most recent push into the checkout.

For the final step we fix some permissions (my setup runs the repository through Gitolite as the git user). This one lives in the actual checkout, /etc/nagios3, as the post-merge hook.

#!/bin/sh
sudo chown -R nagios:admin /etc/nagios3
sudo /etc/init.d/nagios3 restart
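All of those sudo calls assume passwordless rules for the user Gitolite runs the hooks as (git, in my case). A sudoers sketch along these lines would do it; the file location and exact command specs here are assumptions for illustration, not copied from my setup:

```
# /etc/sudoers.d/nagios-git -- a sketch; tighten paths to taste
git ALL=(root) NOPASSWD: /usr/sbin/nagios3 -v /tmp/nagiostest-*/nagios.cfg
git ALL=(root) NOPASSWD: /usr/bin/update-gitrepo /etc/nagios3
git ALL=(root) NOPASSWD: /usr/bin/chown -R nagios\:admin /etc/nagios3
git ALL=(root) NOPASSWD: /etc/init.d/nagios3 restart
```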

A full commit and restart looks like this:

jforman@merlot:/mnt/raid1/personal/git/monitor/nagios/configs$ git push
Counting objects: 7, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 414 bytes, done.
Total 4 (delta 3), reused 0 (delta 0)
remote: Previous HEAD position was c80fa03... turn off test notifications with notifications_enabled 0
remote: HEAD is now at f088dbc... Example: Add boilerplate header that file is managed by Git.
remote:
remote: Nagios Core 3.2.3
remote: Copyright (c) 2009-2010 Nagios Core Development Team and Community Contributors
remote: Copyright (c) 1999-2009 Ethan Galstad
remote: Last Modified: 10-03-2010
remote: License: GPL
remote:
remote: Website: http://www.nagios.org
remote: Reading configuration data...
remote: Read main config file okay...
remote: Processing object config file '/tmp/nagiostest-f088dbcebf194edbce78068b6004cbbfca703432/commands.cfg'...
remote: Processing object config directory '/etc/nagios-plugins/config'...
remote: Processing object config file '/etc/nagios-plugins/config/ftp.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/mail.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/snmp_int.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/nt.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/http.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/real.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/check_nrpe.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/snmp_storage.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/disk.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/mysql.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/snmp_load.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/fping.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/dhcp.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/ssh.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/rpc-nfs.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/mailq.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/breeze.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/dummy.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/netware.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/hppjd.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/load.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/mrtg.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/apt.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/snmp_cpfw.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/snmp.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/snmp_process.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/snmp_env.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/news.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/ntp.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/telnet.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/users.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/snmp_mem.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/procs.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/ifstatus.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/games.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/disk-smb.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/tcp_udp.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/snmp_win.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/ping.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/pgsql.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/ldap.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/flexlm.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/dns.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/radius.cfg'...
remote: Processing object config file '/etc/nagios-plugins/config/snmp_vrrp.cfg'...
remote: Processing object config directory '/tmp/nagiostest-f088dbcebf194edbce78068b6004cbbfca703432/configs'...
remote: Processing object config file '/tmp/nagiostest-f088dbcebf194edbce78068b6004cbbfca703432/configs/timeperiods.cfg'...
remote: Processing object config file '/tmp/nagiostest-f088dbcebf194edbce78068b6004cbbfca703432/configs/services.cfg'...
remote: Processing object config file '/tmp/nagiostest-f088dbcebf194edbce78068b6004cbbfca703432/configs/commands.cfg'...
remote: Processing object config file '/tmp/nagiostest-f088dbcebf194edbce78068b6004cbbfca703432/configs/hosts.cfg'...
remote: Processing object config file '/tmp/nagiostest-f088dbcebf194edbce78068b6004cbbfca703432/configs/abstracts.cfg'...
remote: Processing object config file '/tmp/nagiostest-f088dbcebf194edbce78068b6004cbbfca703432/configs/contacts.cfg'...
remote: Read object config files okay...
remote:
remote: Running pre-flight check on configuration data...
remote:
remote: Checking services...
remote: Checked 95 services.
remote: Checking hosts...
remote: Checked 10 hosts.
remote: Checking host groups...
remote: Checked 7 host groups.
remote: Checking service groups...
remote: Checked 0 service groups.
remote: Checking contacts...
remote: Checked 3 contacts.
remote: Checking contact groups...
remote: Checked 2 contact groups.
remote: Checking service escalations...
remote: Checked 0 service escalations.
remote: Checking service dependencies...
remote: Checked 56 service dependencies.
remote: Checking host escalations...
remote: Checked 0 host escalations.
remote: Checking host dependencies...
remote: Checked 0 host dependencies.
remote: Checking commands...
remote: Checked 181 commands.
remote: Checking time periods...
remote: Checked 4 time periods.
remote: Checking for circular paths between hosts...
remote: Checking for circular host and service dependencies...
remote: Checking global event handlers...
remote: Checking obsessive compulsive processor commands...
remote: Checking misc settings...
remote:
remote: Total Warnings: 0
remote: Total Errors: 0
remote:
remote: Things look okay - No serious problems were detected during the pre-flight check
remote: Nagios Preflight Passed
remote: Clearing temporary work directory.
remote: Updating repo /etc/nagios3
remote: From monitor:nagios
remote:  * branch            master     -> FETCH_HEAD
remote: Updating c80fa03..f088dbc
remote: Fast-forward
remote:  configs/commands.cfg | 2 ++
remote:  1 file changed, 2 insertions(+)
remote:  * Restarting nagios3 monitoring daemon nagios3
remote: Waiting for nagios3 daemon to die..
remote: ...done.
To git@monitor:nagios.git
   c80fa03..f088dbc  master -> master

Note that I keep the Nagios package’s bundled commands in the /etc/nagios-plugins directory and have purposely not put those in the Git tree. This allows updated Nagios packages from Ubuntu to refresh those commands without interfering with the Git repo.


Remind Me: Damage done in San Francisco in six days

I am lucky enough to have a sister living out in San Francisco, and to be able to work out of our offices there. Below is a hit list of the places I ate at and visited in the span of six days. My stomach has finally recovered.

Wineries (Sonoma County):

Both of us are lucky enough to have been through Napa several times, so we decided to venture into Sonoma County. The last item in that list is an unassuming general store off Dry Creek Road in Healdsburg with an incredible sandwich list. This area turns Napa on its head, with a much more family-run, low-key atmosphere. There is none of the pretense of visiting a large production winery such as Mondavi, or the herds of people who descend on Duckhorn.

To Eat:

I cannot recommend every one of these places enough. Izakaya Sozai serves killer ramen. Yank Sing serves dim sum on weekend mornings that melts in your mouth. Shuck your own oysters at Hog Island (we learned how in about 30 seconds) at the farm while sitting on benches along Tomales Bay. Mission Chinese blasts gangster rap while you gorge yourself on craveable Chinese food.

Go, eat, recover later.

The Internet is slow. Is the Internet down?

We have all heard the same questions at some point in our careers: “Is the Internet down?” or “Getting to X site is slow.” You scramble to a browser to see if Google, ESPN, or the NY Times is up. Then you fire up traceroute. In some cases the pages might load slowly, in other cases not at all. These two situations are often downstream fallout of two connectivity issues: latency and packet loss. Latency is the time it takes for a packet to get from source to destination; thanks in large part to the speed of light, latency across the USA between New York and San Francisco normally runs between 70-90ms [1]. Packet loss occurs when packets never make it from source to destination. Many factors can contribute to packet loss, including overloaded routers and switches, service interruptions, and human error.
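That 70-90ms figure passes a quick sanity check: light in fiber travels at roughly two-thirds of c, so a ~4,100km New York-San Francisco path has a hard round-trip floor in the low 40s of milliseconds, and real routes (longer fiber paths, router hops) land in that 70-90ms range. Back of the envelope (the distance and fiber-speed figures are approximations, not taken from the AT&T page):

```python
# Theoretical minimum round-trip time between NY and SF over fiber.
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3      # light in glass travels at roughly 2/3 c
DISTANCE_KM = 4_100       # approximate NY -> SF great-circle distance

one_way_ms = DISTANCE_KM / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000
print(f"one-way floor: {one_way_ms:.0f} ms")        # ~21 ms
print(f"round-trip floor: {2 * one_way_ms:.0f} ms") # ~41 ms
```

Anything wildly above that range, or any loss at all, is worth investigating.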

When diagnosing network issues between source and destination, it helps to have data to back up your suspicions of slow or inconsistent network performance. Enter Smokeping.

As part of a network and system monitoring arsenal, you might have Nagios configured for host and service monitoring, and Munin for graphing system metrics. But for monitoring network performance, I feel Smokeping fills that gap. Below are some notes I took getting Smokeping installed and running on an Ubuntu Linux VM at home.

I installed Smokeping from source, since the version in the Ubuntu repository (2.3.6 for Ubuntu Oneiric) is quite old compared to the latest release, 2.6.8, at the time of this post. After installing the various dependencies from the Ubuntu repo, I was able to build and install Smokeping under /opt/smokeping. One thing I do appreciate about Smokeping is that you can run it as any arbitrary user. No root needed!

First we need to configure Smokeping and verify it starts up.

Part of my Smokeping config:

imgcache = /opt/smokeping/htdocs/cache
imgurl   = http://yourserver/smokeping/cache

+ random
menu = random
title = Random Hosts

++ utexas
host = www.utexas.edu

++ stanford
host = www.stanford.edu

++ mit
host = media.mit.edu

++ multihost
title = edu comparison
host = /random/utexas /random/stanford /random/mit
menu = EDU Host Comparison

imgcache must be the absolute filesystem path where Smokeping’s CGI process writes out PNG files. imgurl is the absolute URL where your httpd serves the imgcache directory.

What follows is a sample stanza from the Targets section of the config. It contains three discrete Smokeping graphs for webservers at MIT’s Media Lab, the University of Texas, and Stanford University. I picked these three hosts because they represent a variety of near, far, and trans-continental servers relative to my home in the Northeastern US. The last entry, multihost, creates a single graph with the three data sets combined; its ‘host’ parameter contains three path-like references to the targets we want consolidated into one graph.

To test that Smokeping starts up, execute the following:

jforman@testserver1 /opt/smokeping % ./bin/smokeping --config /opt/smokeping/etc/config --nodaemon
Smokeping version 2.006008 successfully launched.
Not entering multiprocess mode for just a single probe.
FPing: probing 3 targets with step 300 s and offset 161 s.

When you are ready to take the training wheels off, remove the ‘--nodaemon’ argument and put this command in your distribution’s rc.local file so it starts at boot time.
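In rc.local that ends up as one line. The ‘smokeping’ username below is my assumption (any unprivileged user works, since Smokeping does not need root); without --nodaemon the process backgrounds itself:

```
# /etc/rc.local -- start Smokeping at boot as an unprivileged user
su -s /bin/sh smokeping -c '/opt/smokeping/bin/smokeping --config=/opt/smokeping/etc/config'
```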

To actually view the generated data in graphs, you will need CGI support configured in your httpd of choice. For the most part, I run Apache.

Snippets of required Apache configuration:

LoadModule cgi_module /usr/lib/apache2/modules/mod_cgi.so
AddHandler cgi-script .cgi

Alias /smokeping "/opt/smokeping/htdocs"
<Directory "/opt/smokeping/htdocs">
    Options Indexes MultiViews ExecCGI
    AllowOverride None
    Order deny,allow
    Allow from all
    DirectoryIndex smokeping.cgi
</Directory>

I am not presenting my Smokeping install as a virtual host, so I have left that part out. Also note that the httpd’s user needs write permissions on the imgcache directory from your Smokeping config file. In my case, /opt/smokeping/htdocs/cache is mode 775 with www-data as the group.

Hopefully this has been helpful for those who find this post, and a reminder for me on how I got things working for further installations (and re-installations) of Smokeping.

[1] AT&T Network Latency: http://ipnetwork.bgtmo.ip.att.net/pws/network_delay.html

A home network overengineered: dhcpd, tsig keys, ddns

I started to write this post explaining how I upgraded my home network with a dhcpd server and multiple DNS servers communicating securely via TSIG keys, along with dynamic DNS, but the post became unwieldy and would have run thousands of words. Instead, I’ll post some links, gotchas, and hints to make getting it working a lot easier.

Links scoured and re-read in the process:


Manage the key files distributed to each of your DNS servers with some sort of config management system (I use Puppet). That way, if you ever need to change a key or add a new one, it is a heck of a lot easier.

Don’t stick the TSIG keys inside your named.conf itself. This poses a security risk: anyone who can read your named.conf now has your TSIG keys and can potentially update your zones. Instead, put them in their own files inside your BIND etc directory, mark their perms as 640 (bind:bind, or the like), and use an include statement to pull them into your named.conf.

Following on that last point, use DNS’s allow-update statement inside zone definitions on the master. You can lock things down via IP (less secure) or via key (more secure) so that only authorized processes or people can update your zones.
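A minimal sketch of how those pieces fit together in BIND (the key name, zone, and base64 secret here are all placeholders; generate a real secret with dnssec-keygen or similar):

```
// /etc/bind/ddns.key -- mode 640, owned bind:bind
key "ddns-key" {
    algorithm hmac-md5;
    secret "ZmFrZS1rZXktbWF0ZXJpYWwtb25seQ==";
};

// named.conf on the master
include "/etc/bind/ddns.key";

zone "home.example.com" {
    type master;
    file "dynamic/home.example.com";
    allow-update { key "ddns-key"; };
};
```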


If you have FreeBSD clients, don’t forget the ‘hostname’ parameter in /etc/rc.conf. Otherwise you’ll request a lease from the DHCP server but never send your hostname, and therefore won’t get a record added to the DDNS zone.
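On the FreeBSD side that is just a couple of lines in /etc/rc.conf (the hostname and the em0 interface name are placeholders for your own):

```
hostname="freebsdvm.home.example.com"
ifconfig_em0="DHCP"
```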


Yes, this is a completely over-engineered solution for running a home network. It came about because I play around with lots of VMs at home and, to scratch my curiosity itch, wanted to get things working end to end. Being able to ssh into the various Linux/OpenBSD/FreeBSD VMs by name makes it all a lot easier.

Remind Me: Initial Data in a Django class-based Form

I love Django’s class-based way of handling forms. You name the class, articulate each field (a data point of your form), and attach it to a view. Voila. But what happens when you want some initial data in the form?

Initial to the rescue!

What your class might look like:

class PersonForm(forms.Form):
    first_name = forms.CharField(max_length=100)
    last_name = forms.CharField(max_length=100)
    gender = forms.CharField(max_length=1)
    hair_color = forms.CharField(max_length=256)

If you now wanted to initialize your form for males with blonde hair, include this snippet in your view:

form = PersonForm(initial={'gender': 'M', 'hair_color': 'blonde'})

Then pass that form as part of your render return:

return render_to_response('add_person.htm', {'form': form})

This post is brought to you by #neverwantingtosearchtheinternetforthisagain, and StackOverflow for inspiration.

Boston Barcamp 6, Day Two

Finally got this post out after a bit of a busy week.

Location based networking, anurag wakhlu (coloci inc) http://goo.gl/mxAtd
* location based apps: where are you now? or where will you be?
* where are you now: foursquare, gowalla, loopt, etc
* where will you be: coloci, fyesa, tripit, plancast
* interest based networking: the reason to talk to someone who is near you. tie in an interest: send someone a coupon when they are near starbucks. if they aren’t near starbucks, what good is a coupon?
* proactive coupons: don’t wait for a check-in. if someone is 2 blocks from starbucks, send them a notification with a coupon. ex// minority report: walk by a billboard, it recognizes you and tailors the ad specifically to you. 52% of US consumers are willing to share location for retail perks.
* foursquare background check-in? automatically check you in when you are in close enough vicinity to a location.
* do privacy concerns have a potential impact on services becoming more popular? ex// european privacy laws about broadcasting who you are, where you are, etc.
* you have to trust your device: when you disallow authority to know your location, does it actually stop broadcasting where you are?
* trade-off of convenience versus privacy. a debit card is a lot more convenient than cash, so people are more than likely to give up privacy.
* if you really want to not be tracked, you need to disconnect yourself from the computer. go cash only. re-education might help: “You might already be sharing this info somewhere else, so what difference is it now that you do it via your phone?”
* tracking someone’s history via the CSS :visited selector. firefox has supposedly fixed this so websites cannot do it anymore.
* using EZPass, who is responsible for giving a ticket if you did 60 miles in faster than 60 minutes? using your location to know you broke the law.
(At the start, Anurag gave a wonderfully succinct history of location based networking, highlighting the current giants like Foursquare and Facebook Places. We talked about how the potential is there for your phone to alert you to consumer deals in your vicinity, giving networking more of a ‘push’ aspect, or to alert you when friends are near. Eventually, though, the attendants turned the talk into a big privacy discussion. Not as flame-worthy as it could have been, but still a debate over how much of our information we want to broadcast and allow to advertisers. Could the situation eventually get to the point of Minority Report, where your phone is overtly or covertly broadcasting who you are to potential advertisers or other, potentially nefarious, people?)

Economics of open source
* reputation is a kind of currency. ancillary benefits of ‘being known.’ ex// a popular github repo can get you a book deal, get you flown to conferences, etc.
* are we cheapening what we do by giving it away? software produces so much cash for people. not everything is oss; you still need people to customize it and apply it.
* discussion: can donations kill a project? the comptroller decides who gets money, those who donate time but don’t get paid feel slighted, and the project can take a nose dive.
* war story: giving training away for free when a company charges for it. you are hurting the ecosystem by giving it away rather than having someone pay for it.
(Content of the presentation was a bit bland/dry, but the discussion was involved and fairly interesting, delving past the common topic of software being ‘free as in beer.’)

Interviewing well as a coder round table
* feel okay sitting there for a couple of minutes thinking. don’t feel stressed to start writing code right away.
* some questions ask you to regurgitate syntax. what happens if you get confused between languages?
* design issues: “show us where you would add X feature.” stylistics versus code syntax.
* code portfolios: employers look at your github profile to see the code you’ve written. if your code is ‘too good’, the employer wants you to find bugs in their code.
* how to practice your whiteboarding skills? code kata: short programming problems.
* asking questions that have no solution. can you be an asshole while interviewing?
* be prepared for personal questions, because employers will google you and find your personal interests.
* spin negative questions as positives: what do you see improving in your work environment?
* questions back to the employer: what do you hope to improve for our company?
* if you list a skill in your skills list, be ready to whiteboard the code.

Can the internet make you healthier? jason jacobs, runkeeper founder
* convergence of health/athletic data and IT
* virtual coaching: ahead/behind pace, in-app reminders to go faster or slower in their iOS app. the more data you have about what you’re doing physically, the better you can react. how am I doing against my peers?
(This was interesting, since Jason sees his company’s first product, RunKeeper, as the jumping-off point to more athletic/body-sensing applications. The point was raised about at what point an app that suggests a certain pace while running dances the line of being medical advice. It is a good point; the app needs more information about your health before suggesting a certain distance or pace for exercise. I’ll be curious myself, as I use the app more, to see how I am improving athletically.)

Overall, I found the signal-to-noise ratio of the unconference to be very high. For my first Barcamp, I would suggest it to all technically-inclined folks who just want to let their interests and imaginations plot the course of which talks they attend. I know I will be a repeat attendee.

Barcamp Boston 6, Day One

Having never been to a Barcamp before, I knew the overall structure of the conference but was curious whether I would actually like it. Truth be told, I found it full of content, without a lot of fluff, even for the talks I sat in on with no prior knowledge. My notes follow, thanks to the great OS X app Notational Velocity hooked up to Simplenote. My overall thoughts follow each set of notes in parentheses.

how to give a presentation people love and learn from
break presentation into 7-10 minute chunks
then transition 7 minutes into the talk to another topic, to keep people’s attention
insert emotion, a story. rather than just X happened.
(For a talk to be this meta, a presentation about giving presentations, I was not hooked. There weren’t any real nuggets of information here that made me sit up and say “Wow, I haven’t been doing this in the presentation I make.”)

how to run a startup like genghis khan, by @wufoo
* work like a nomad
* build an audience first. protect your audience. make the audience part of the show.
* make developers handle support requests. once devs get the same question two or three times, they go in and fix the code so they don’t get the question again.
(Presenter absolutely killed it. Engaging, fast talking (without mumbling), great slides that presented the information in clear and sometimes humorous ways. Made me think more about engaging the people I am trying to convince to my way of thinking)

android developer: war stories and antipatterns
yoni, lead android dev at scavngr
* don’t code splash screens. more of an iOS thing. if you have to preload data, show a progress bar with the app already open
* don’t force orientation (landscape/portrait). support both
* don’t assume their screen size. use relative layouts
(I went into this talk curious and with no prior experience or knowledge of writing an Android app. I don’t even own an Android phone. This was much more a round table, with those devs in the room very willing to share their experiences and war stories. I found they really had good experiential tips, rather than “This is the best practice” and moving on.)

ask a plasma physics grad student anything
(I must say this was completely over my head. The student at the front of the room, from MIT’s Plasma Science and Fusion Center, seemed to know his stuff and was genuinely interested in challenging the audience. What blew me away was the knowledge of the audience, asking very pointed questions with what sounded like real science to back them up.)

building fast websites, making users happy (@jonathanklein)
* google injected 400ms delay into search page, dropped 0.76% searches/users over time.
* phpied.com/the-performance-business-pitch
* faster sites rank better in google. site speed is part of search ranking.
* what’s load time? server side generation time, client side render time. 80-90% of load time takes place on the client.
* best practices:
* reduce http requests: combine css/js, use image sprites (one download and cut up into multiple images).
* minify css/js: strip out comments and white space (e.g. YUI Compressor, a java library). will rename variables to the shortest names possible
* gzip all text: html, css, js
* for graphics, use png8 (restricts you to 256 different colors in the image)
* jpegs can be saved at 75% quality
* image compressor: smush.it (from yahoo dev network), lossless compression.
* measuring performance
* google webmaster tools, http://www.webpagetest.org
* yotta, firebug, yslow, page speed, dynatrace ajax edition
(For an ops guy, I was really interested in this talk. Jonathan blew through his material at break-neck speed, but covered the topics and answered questions without feeling like the talk was broken up. Some really good information through his experiences, and things I would like to dig into more myself.)

nosql round table
* some are relational, others are key value
* redis, redis + resque
* cassandra
* mongodb
* why nosql over mysql? no schema, lack of migrations from version to version. being able to store different things. replication (single threaded)
* keeping mysql in sync with the nosql layer: broadcast updates from mysql over rabbitmq(?). the nosql service grabs updates from mysql.
* solutions that they discarded:
* cassandra: v0.6, latency spikes between nodes. node would get flagged as awol. cascading failure because data gets rebalanced. use “hinted handoff” to prefer the direction of the failover. supposedly better in v0.7. documentation is messy.
* in the cloud or in a dc? mostly EC2. local storage with an EBS slave.
* search via solr
(Another one where I went in having nothing but curiosity, since noSQL is one of the popular buzzwords these days. Very engaged audience who shared war stories, both good and bad, implementing noSQL solutions in their workplaces. Left me with a stack of websites to dig into.)

agile development war stories
* problems it tries to solve: waste. business approach.
* more collab between business and engineering. dont just throw the ‘stories’ from biz over the wall.
* focus on testable behavior. how can we test each iteration? should be part of the original story.
* be smaller, quicker, more iterative. ex// dont go off for 18 months planning your solution. business might change underneath you
* people do “agile but..” and tend to modify the methodology.
* burn down?
* should tasks stay <1 day? sounds a bit unreasonable, since “speeding up the server by 20%” cannot be done in one day. task size should have a reason.
* average sprint time: 2 weeks
* do a code review before the planning meeting. so estimations on a piece of work can be completed in the meeting. ex// dont trace the code for the 1st time in a meeting.
* software to track scrums/manage stories: soft2, scrum ninja, team foundation server (windows), white boards, index cards on a wall, ibm rational, pivotal tracker (good for distributed teams), mingle from thoughtworks.
(Another highly-concentrated buzzword round table where I was more curious than anything. Some real good information about what works and what doesn’t when it comes to managing time and projects. Lots to read up on here, and see if I can apply it to my daily work life.)

New toy, Nikon style.

It was only ‘recently’ that I purchased myself a Micro Four Thirds digital camera for my honeymoon. It took pretty good pictures, and I loved its compactness when roaming around Portugal for 10 days. But I had always wanted a bit more control over the photos I took, whether through exposure modification, lens choice, or overall flexibility for shooting in different situations (low light at night).

Lo and behold, Nikon announced the D7000. It had all the features of my father’s D90, but with better HD video capture, an upgraded AF sensor, and a whole host of other functionality too long to list. Patiently I waited for my bonus to be direct deposited, reading up on the manual (a hefty 350 pages) and investigating some photography walks and classes in the area. Now in my hot little hands:

The weather finally reached an acceptable temperature, just above the level of being uncomfortable for an afternoon stroll. I wandered through Somerville and Cambridge on my way to Harvard Square, and captured the below uniquely-painted house. I was mainly playing around with aperture priority today, but I look forward to digging into more of the image control options to bring out different detail.

Overall the camera is great, not too heavy for a couple hours slung over my shoulder, even with the included Nikon strap and an 18-105mm lens connected. The Op/Tech neoprene strap I have on order should make things a heck of a lot more comfortable when that arrives. I am really looking forward to the weather warming up when I can explore more of Boston with the camera and take it up into the mountains for some day-hikes.