
Netflow and IPv6

Now that I’m handing out IPv6 addresses to various VLANs on my network, I needed a way to see what percentage of my traffic was actually using IPv6. Enter pf, pflow, and ntop.

pf is used by my router/firewall to process packets and mark them for pflow processing. pflow is a pseudo-device on the router/firewall that exports flow accounting data to a remote collector. ntop (and nprobe) is a collector and visualization application for digging into packet statistics.

Below is the configuration to hook it all up.

pf.conf snippet:

set state-defaults pflow

Using state-defaults allows you to apply state options to every rule without an explicit keep state. Since I run a deny-by-default ruleset that allows only specific traffic, the state created by each ‘pass’ rule is automatically exported via the pflow interface. By default, that interface is pflow0.
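
To make that concrete, a rule like the one below picks up the pflow state option automatically; the interface, subnet, and port are placeholders rather than lines from my actual ruleset:

# With "set state-defaults pflow" in effect, this pass rule keeps state
# and exports its flows without an explicit "keep state (pflow)".
pass in on em0 proto tcp from 10.10.0.0/16 to any port 443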

# cat /etc/hostname.pflow0

flowsrc 10.10.2.1 flowdst 10.10.2.201:2055

The pflow0 interface is configured with a source address for the exported records and a destination to send them to. The flow data is sent via UDP to a collector at the configured flowdst host and port.
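
If you are following along on OpenBSD, the interface can be brought up without a reboot and then inspected; these are stock OpenBSD commands rather than anything specific to this setup:

# sh /etc/netstart pflow0
# ifconfig pflow0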

Enter nprobe, a NetFlow probe that receives flow data and forwards it to ntopng for visualization. The only nprobe command-line flag that took some research was the v9/IPFIX template: the default template is fairly conservative and does not handle IPv6.

nprobe \
  --zmq "tcp://*:5556" \
  --collector-port 3001 \
  --collector none \
  --interface none \
  --flow-version 10 \
  -T="%IP_PROTOCOL_VERSION %IPV4_SRC_ADDR %IPV6_SRC_ADDR %IPV4_DST_ADDR %IPV6_DST_ADDR %IPV4_NEXT_HOP %IPV6_NEXT_HOP %INPUT_SNMP %OUTPUT_SNMP %IN_PKTS %IN_BYTES %FIRST_SWITCHED %LAST_SWITCHED %L4_SRC_PORT %L4_DST_PORT %TCP_FLAGS %PROTOCOL %SRC_TOS %SRC_AS %DST_AS %IPV4_SRC_MASK %IPV6_SRC_MASK %IPV4_DST_MASK %IPV6_DST_MASK"

The final piece is ntopng, for the actual visualization. The config file can get quite long, but in my case I only needed to specify a few options. local-networks was the important one for me, because it lets me break the traffic graphs down into local and remote traffic. (The whole point of this exercise was to see how many of my local IPv6 requests were actually reaching remote IPv6 services.)

/etc/ntopng.conf

--community
--interface="tcp://127.0.0.1:5556"
--local-networks="10.10.0.0/16,2001:beef:b00c::/48"
--dns-mode=1
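
ntopng can then be pointed at that file on startup; the path below matches the one above, and how you daemonize it (rc script, systemd unit, etc.) depends on your platform:

# ntopng /etc/ntopng.conf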

As of September 2018, roughly 17% of my traffic is traversing the wire using IPv6.

[Screenshot, 2018-09-13: ntopng traffic breakdown]

Faster Ubuntu installs up in the clouds

It all started with an Ubuntu blog post about a slimmer Ubuntu Server image. I play around with virtual machines at home, many based on Ubuntu’s full-size server ISO. It would take 20-25 minutes to spin up a new VM using some prebuilt preseed files I had put together to automate user creation and SSH key copying. I knew there was a better way, and it turns out that by using the pre-built Ubuntu minimal image (hereafter the cloud image) combined with cloud-init, I was able to spin up Ubuntu cloud image VMs in under 2 minutes.
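
For reference, the preseed steps I was automating map onto a small cloud-init user-data file. The following is only a rough sketch, with a placeholder username, key, and package list rather than the file I actually feed my VMs:

#cloud-config
# Create a login user with sudo access and install an SSH public key.
users:
  - name: jeff
    groups: [sudo]
    shell: /bin/bash
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... jeff@laptop
# Refresh the package index and install a guest agent on first boot.
package_update: true
packages:
  - qemu-guest-agent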

What did it take?

Two git commits (one including some refactoring of my virtbuilder script):

What’s the big difference?

  • Full-size server ISO install: 21m10s.
  • Cloud Image install: 10m57s with cloud image download. 56s when the cloud image is already locally downloaded.


Where might you be able to find this handy automation script? https://github.com/jforman/virthelper

I remember IPv6 being difficult.

I remember using he.net years ago for their IPv6 tunnels, and have painful memories of configuring it, both on the router and to share it with the subnets on my home LAN. Not this time.


Load balanced Kubernetes Ingress. So metal.

Kubernetes has some incredible features, one of them being Ingress. Ingress can be described as a way to give external access to a Kubernetes-run service, typically over HTTP(S). This is useful when you run webapps (Grafana, Binder) in your Kubernetes cluster that need to be accessed by users across your network.

Typically, Ingress integrates with automation provided by public cloud providers like GCP/GKE, AWS, Azure, and DigitalOcean, where the external IP and routing are handled for you. I’ve found bare-metal Ingress configuration examples on the web to be hand-wavy at best. So what do you do when there are several approaches and it’s not clear which one to pick? You make your own. Below is how I configured bare-metal Ingress on my CoreOS-based Kubernetes cluster to access Grafana.
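
For orientation, the Ingress object itself is short. This is a minimal sketch, assuming an nginx ingress controller and a grafana Service listening on port 3000; the hostname and names are placeholders (and the API version reflects the Kubernetes releases of that era), not my exact manifest:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000

The interesting part on bare metal is everything around that object: getting external traffic to the nodes where the ingress controller actually listens.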


Kubernetes, CoreOS, and many lines of Python later.

Several months after my last post, and a lot of code hacking later, I can rebuild my CoreOS-based bare-metal Kubernetes cluster in roughly 20 minutes. It only took ~1300 lines of Python following Kelsey Hightower’s Kubernetes the Hard Way instructions.

Why? The challenge.

But really, why? I like to hack on code at home, and spinning up a new VM for another Django or Go app felt pretty heavyweight when all I needed was an easy way to push it out as a container. And with various open source projects on the web providing easy ways to run their code, running my own Kubernetes cluster seemed like a no-brainer.


Large refactors require large changes in code.

It had been several months since I had spent time on my machines at home, and in that window CoreOS changed their bare-metal installation procedure quite a bit, to the point where it almost seemed like an afterthought that folks would run CoreOS anywhere outside of GCE/AWS/Azure. Since I don’t want to spend my money on cloud-based infrastructure when I’ve got a perfectly adequate 8-core machine at home with 32 GB of RAM and a few TB of storage, I knew I needed to update my virthelper scripts to get with the program.

High level requirements

  • Automate converting the Container Linux Config (no longer a cloud-init config) into an Ignition config.
  • Modify the libvirt domain XML directly (with code) to pass the Ignition config path to the VM; a sketch of both steps follows below.
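
Here is roughly what those two steps look like in isolation; the file names and paths are placeholders, and the exact invocation in my script differs:

# Transpile a Container Linux Config into an Ignition config with ct,
# the CoreOS config transpiler.
ct --in-file node.clc.yaml --out-file node.ign

<!-- In the libvirt domain XML, hand the Ignition config to the guest via
     QEMU's fw_cfg. This requires the qemu XML namespace on <domain>. -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:commandline>
    <qemu:arg value='-fw_cfg'/>
    <qemu:arg value='name=opt/com.coreos/config,file=/var/lib/libvirt/ignition/node.ign'/>
  </qemu:commandline>
</domain>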

I found that some of the automation and trickery included with CoreOS to generate the etcd snippets did not support libvirt. The vagrant-virtualbox helpers were a close fit, but not quite: they expect eth1 instead of eth0 as the network interface. That causes the coreos-metadata service to fail completely, which is currently the major blocker keeping my new scripts from bearing all their fruit. I’ve filed some issues/pull requests below with the CoreOS team to get that fixed.

There were some cleanup commits in my repository to allow for flexibility in running virt-install, but the main commit is: https://github.com/jforman/virthelper/commit/0cc65134d3dfd1aaaf14392a9e947e428969b491.

Issues/Pull Requests:


No more powerline networking in this house.

I finally got around to wiring Cat6 to my desktop machines at home and ripped out those powerline network adapters. I ran an iperf test between my desktop and my router before and after the upgrade to see how things fared.

iperf results before:
desktop1:~$ iperf -f m -V -t 30 -c 10.10.0.1
------------------------------------------------------------
Client connecting to 10.10.0.1, TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[  3] local 10.10.0.241 port 35262 connected with 10.10.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec   510 MBytes   142 Mbits/sec

iperf results over Cat6:
$ iperf -f m -V -t 30 -c 10.10.0.1
------------------------------------------------------------
Client connecting to 10.10.0.1, TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[  3] local 10.10.0.241 port 55044 connected with 10.10.0.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-30.0 sec  2135 MBytes   597 Mbits/sec


142 Mbit/sec to 597 Mbit/sec. That’ll do.

What I read today:

I’d like to (try to) keep a running tab of all the technical, and non-technical, bits of information I pick up day to day. I’m hoping it might provide some insight into what I’m interested in at the time, or surface little tidbits of helpful information I find lying around the web.

Pain(less) NGINX Ingress

Once I get my Kubernetes cluster back up at home, I want to create separate environments for promotions. Right now the deployment I have running is much more pets than cattle, and I want to change that. I want to treat each piece as completely replaceable and interchangeable, and that only happens with a setup that is not one big snowflake you are afraid to touch.

How we grow Junior Developers at the BBC

All of this rang true for me as an SRE trying to write more code. Mentoring others while being mentored yourself is, I feel, crucial to being part of a productive team. You can’t just sit behind your monitoring with headphones on and expect to build relationships and have impact.

Kubernetes, the slow way.

It all started when I began hearing about this container thing outside of work. I’ve been a Google SRE for going on six years, but I know that the way we do containers internally on Borg is probably not how the rest of the world builds reliable, scalable infrastructure. I was curious: how hard could it be to spin up a few containers and play around like I do at work?

Little did I know it would take two months, at a few hours a few nights a week, to get to the point where I could access a web service inside my home-grown Kubernetes cluster. Below are the high-level steps, scripts, and notes I kept during the process.


A simplified way to securely move all the bits.

A while back, I wrote a post about setting up an L2TP/IPsec VPN on my home firewall/router. It required two daemons and a bunch of configuration with hard-coded IP addresses. While that solution used firmly established practices (L2TP/IPsec), it felt too brittle. What happens when my dynamic IP address changes? Then I need to update config files, restart daemons, and so on. There had to be a better way.

Enter IKEv2, the successor to IKE version 1, which was built on the Internet Security Association and Key Management Protocol (ISAKMP) and Oakley.
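
On an OpenBSD router like the one running pf earlier in these posts, IKEv2 is handled by iked and a single /etc/iked.conf. The snippet below is only a rough sketch of a road-warrior style policy, with placeholder networks, hostname, and authentication choices; it is not my actual configuration:

# /etc/iked.conf: accept IKEv2 connections from roaming clients and hand
# them an address out of a dedicated pool.
ikev2 "roadwarrior" passive esp \
        from 10.10.0.0/16 to 10.10.50.0/24 \
        local any peer any \
        srcid vpn.example.com \
        eap "mschap-v2" \
        config address 10.10.50.0/24 \
        config name-server 10.10.0.1 \
        tag "ROADWARRIOR"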
