A simple command to open all files with merge conflicts

When I get merge conflicts in a rebase I find it irritating to get the problem files open in my editor; I couldn’t find anything better than copying and pasting the file path or locating it in the source tree. So I wrote a simple hg alias to open all the unresolved files in my editor. Maybe this is useful to you too?

[alias]
unresolved = !$HG resolve -l "set:unresolved()" -T "{reporoot}/{path}\0" | xargs -0 $EDITOR
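
With that alias in your ~/.hgrc, a conflicted rebase looks something like this (a sketch; the rebase target is just an example):

hg rebase -d default        # stops, reporting unresolved conflicts
hg unresolved               # opens every conflicted file in $EDITOR
hg resolve --mark --all     # once the conflict markers are dealt with
hg rebase --continue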

Bridging an internal LAN to a server’s Docker containers over a VPN

I recently decided that the basic web hosting I was using wasn’t quite as configurable or powerful as I would like, so I have started paying for a VPS and am slowly moving all my sites over to it. One of the things I decided was that I wanted the majority of services it ran to be running under Docker. Docker has its pros and cons but the thing I like about it is that I can define what services run, how they run and where they store all their data in a single place, separate from the rest of the server. So now I have a /srv/docker directory which contains everything I need to back up to ensure I can reinstall all the services easily, mostly regardless of the rest of the server.

As I was adding services I quickly realised I had a problem to solve. Some of the services were obviously external facing, nginx for example. But a lot of them, web management interfaces and the like, should not be exposed to the public internet yet still need to be accessible. So I wanted to figure out how to easily access them remotely.

I considered just setting up port forwarding or a SOCKS proxy over ssh. But this would mean having to connect over ssh whenever I needed access, and either defining all the ports and docker IPs (which I would then have to make static) in the ssh config or switching proxies in my browser whenever I needed to reach a service. It would also only really support web protocols.

Exposing them publicly anyway but requiring passwords was another option, though I wasn’t a big fan of this either. It would require configuring an nginx reverse proxy or something every time I added a new service, and I thought I could come up with something better.

At first I figured a VPN was going to be overkill, but eventually I decided that once set up it would give me the best experience. I also realised I could then set up a persistent VPN from my home network to the VPS, so when at home, or when connected to my home network over VPN (already set up), I would have access to the containers without needing to do anything else.

Alright, so I have a home router that handles two networks, the LAN and its own VPN clients. Then I have a VPS with a few docker networks running on it. I want them all to be able to access each other, and as a bonus I want to be able to just use names to connect to the docker containers; I don’t want to have to remember static IP addresses. This is essentially just using a VPN to bridge the networks, which is covered in many other places, except I had to visit so many places to put all the pieces together that I thought I’d explain it in my own words, if only so I have a single place to read when I need to do this again.

In my case the networks behind my router are 10.10.* for the local LAN and 10.11.* for its VPN clients. On the VPS I configured my docker networks to be under 10.12.*.

0. Configure IP forwarding.

The zeroth step is to make sure that IP forwarding is enabled, and not firewalled any more than it needs to be, on both the router and the VPS. How you do that will vary and it’s likely that the router will already have it enabled. At the least you need to use sysctl to set net.ipv4.ip_forward=1 and probably tinker with your firewall rules.
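
On the VPS that amounts to something like the following (a sketch; the persistent config file name is an assumption and varies by distribution, and the firewall side is left to you):

sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ip-forward.conf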

1. Set up a basic VPN connection.

First you need to set up a simple VPN connection between the router and the VPS. I ended up making the VPS the server since I can then connect directly to it from another machine, either for testing or if my home network is down. I don’t think it really matters which is the “server” side of the VPN; either should work, you’ll just have to invert some of the description here if you choose the opposite.

There are many, many tutorials on doing this so I’m not going to talk about it much. The one thing to say is that you must be using certificate authentication (most tutorials cover setting this up) so the VPS can identify the router by its common name. Don’t add any “route” configuration yet. You could use redirect-gateway in the router config to make some of this easier, but that would then mean that all your internet traffic (from everything on the home LAN) goes through the VPN, which I didn’t want. I set the VPN addresses to be in 10.12.10.* (this subnet is not used by any of the docker networks).
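
For reference, the relevant parts of the VPS-side OpenVPN config end up looking roughly like this (a sketch only; the certificate file names are assumptions and whatever tutorial you follow will fill in the details):

port 1194
proto udp
dev tun
ca /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key /etc/openvpn/server.key
dh /etc/openvpn/dh.pem
server 10.12.10.0 255.255.255.0
keepalive 10 120
persist-key
persist-tun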

Once you’re done here the router and the VPS should be able to ping each other’s IP addresses on the VPN tunnel. The VPS’s IP is 10.12.10.1; the router’s gets assigned on connection. They won’t be able to reach beyond that yet though.

2. Make the docker containers visible to the router.

Right now the router isn’t able to send packets to the docker containers because it doesn’t know how to get them there. It knows that anything for 10.12.10.* goes through the tunnel, but has no idea that other subnets are beyond that. This is pretty trivial to fix. Add this to the VPS’s VPN configuration:

push "route 10.12.0.0 255.255.0.0"

When the router connects to the VPS the VPN server will tell it that this route can be accessed through this connection. You should now be able to ping anything in that network range from the router. But neither the VPS nor the docker containers will be able to reach the internal LANs. In fact if you try to ping a docker container’s IP from the local LAN the ping packet should reach it, but the container won’t know how to return it!
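
If you want to check that the route arrived, then on the router (assuming it runs a Linux-based firmware with iproute2) the pushed route should show up against the tunnel interface after it reconnects:

ip route | grep 10.12
# expect something like "10.12.0.0/16 via 10.12.10.5 dev tun0" (addresses will differ)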

3. Make the local LAN visible to the VPS.

This took me a while to figure out. I’m not quite sure why, but you can’t just add something similar to the VPN’s client configuration. Instead the server side has to know in advance what networks a client is going to give access to. So again you’re going to be modifying the VPS’s VPN configuration. First the simple part. Add this to the configuration file:

route 10.10.0.0 255.255.0.0
route 10.11.0.0 255.255.0.0

This makes OpenVPN modify the VPS’s routing table, telling it that it can direct all traffic for those networks to the VPN interface. This isn’t enough though. The VPN service will receive that traffic but not know where to send it on to. There could be many clients connected; which one has those networks? You have to add some client-specific configuration. Create a directory somewhere and add this to the configuration file:

client-config-dir /absolute/path/to/directory

Do NOT be tempted to use a relative path here. It took me more time than I’d like to admit to figure out that when running as a daemon the OpenVPN service won’t be able to find it if it is a relative path. Now create a file in the directory; the filename must be exactly the common name of the router’s VPN certificate. Inside it put this:

iroute 10.10.0.0 255.255.0.0
iroute 10.11.0.0 255.255.0.0

This tells the VPN server that this is the client that can handle traffic to those networks. So now everything should be able to ping everything else by IP address. That would be enough if I didn’t also want to be able to use hostnames instead of IP addresses.
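
At this point a quick sanity check from both ends is worthwhile (the addresses below are just examples from my subnets; substitute real ones from yours):

# on the VPS: the LAN and home-VPN routes should point at the tunnel interface
ip route | grep -E '10\.(10|11)\.'
ping -c 1 10.10.0.1
# on a machine on the home LAN: a docker container should answer directly
ping -c 1 10.12.2.2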

4. Setting up DNS lookups.

Getting this bit to work depends on what DNS server the router is running. In my case (and many cases) this was dnsmasq, which makes it fairly straightforward. The first step is setting up a DNS server that will return results for queries for the running docker containers. I found the useful dns-proxy-server. It runs as the default DNS server on the VPS; for lookups it looks for docker containers with a matching hostname, and if it doesn’t find one it forwards the request on to an upstream DNS server. The VPS can now find a docker container’s IP address by name.
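
Roughly how I run it (a sketch; the image name, network name and addresses here are assumptions based on my subnets, so check the dns-proxy-server documentation for the current invocation):

docker network create --subnet 10.12.2.0/24 dns
docker run -d --name dns-proxy-server \
  --network dns --ip 10.12.2.2 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  defreitas/dns-proxy-server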

For the router (and so anything on the local LAN) to be able to look them up it needs to be able to query the DNS server on the VPS. This meant giving the DNS container a static IP address (the only one this entire setup needs!) and making all the docker hostnames share a domain suffix. Then add this line to the router’s dnsmasq.conf:

server=/<domain>/<dns ip>

This tells dnsmasq that any time it receives a query for *.<domain> it passes the request on to the VPS’s DNS container.
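
With made-up values (a hypothetical domain suffix and the static address from the sketch above), that would read:

server=/docker.example.net/10.12.2.2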

5. Done!

Everything should be set up now. Enjoy your direct access to your docker containers. Sorry this got long but hopefully it will be useful to others in the future.

Pop-free sound from a Raspberry Pi running XBMC

UPDATE: I’ll leave this around for posterity but a large part of this problem has now been fixed in the latest Raspberry Pi firmware. See here for instructions for raspbmc until that gets updated.

I’ve been in the process of setting up a Raspberry Pi in my office so I can play my mp3 collection through my old stereo. It’s generally gone well and I have to take my hat off to the developers of Raspbmc, which makes setting up XBMC on the Pi ridiculously easy and fast. It didn’t take me long to have AirPlay set up and running, as well as being able to use my phone to remote control XBMC and play things directly from my music library sitting on my Synology NAS. Quite a nice setup really.

Just one problem. I play the music out through the Pi’s audio jack, which doesn’t have a fantastic DAC. The big noticeable issue is audible pops every time XBMC starts and stops playing. For AirPlay this isn’t too bad: you get a pop when it first starts but only another after you stop playing. Playing directly in XBMC though you get two pops between each track as it stops playing one and starts the next. Very annoying. It’s a pretty well known problem and the best solution so far is to simply never stop playing. If you have a player that uses pulseaudio then you can configure it to keep the audio stream going even when idle. Of course it isn’t that easy; XBMC doesn’t use pulseaudio on the Pi. There is some work going on that might change that, but for now it is very buggy to the point of being unusable. It seemed I was stuck … or was I?

It took some experimentation but I finally came across something of a hack that solves the problem for me. It probably works on other distributions but I did all this using Raspbmc.

First as root you want to do this:

echo "snd-bcm2835" >>/etc/modules

This makes the kernel module for the sound device load on startup. It allows ALSA, and by proxy pulseaudio, to talk to the audio hardware. Next edit /etc/pulse/system.pa and comment out this line:

load-module module-suspend-on-idle

This tells pulseaudio to keep the audio stream alive even when not playing anything. Now reboot your Pi. Once it has started up, copy a plain wav file (nothing mp3 encoded or anything) to the Pi, log in and play it through pulseaudio:

paplay test.wav

If it doesn’t come out of the speakers then ALSA might be trying to play to the HDMI port. Try running this and then the above command again:

sudo amixer cset numid=3 1

What just happened is that pulseaudio played the sound file, but it should now have kept the audio hardware active and will continue to do so until the Pi is next turned off. You might think that would mean XBMC can’t play anything now. You’d be wrong; it plays just fine and no longer suffers from the popping between tracks. Occasionally I hear a pop when I first start playing after startup, but after that I’ve not heard any issues.

There are probably easier ways to initialise pulseaudio but I don’t mind playing some sound on every startup to let me know my Pi is ready. I made it automatic by sticking this at the top of .bashrc for the pi user (which is used to run xbmc):

/usr/bin/paplay $HOME/test.wav

It means it also plays every time I log in with ssh, but I’m not expecting to do that much now it’s all working. I’m sure someone will point out the better way to do this that I’m too lazy to look for now that everything is working for me.

How Crashplan breaks xpcshell tests on Windows

I recently switched to a Windows laptop and have been going through the usual related teething pains. One thing that confused me though was that when I was running xpcshell tests on my new machine they would frequently fail with access denied errors. I’ve seen this sort of thing before, so I knew some service was monitoring files and opening them after they had changed. When this happens they can’t be deleted or edited until the service closes them again, and tests often open, close and delete files so fast that there isn’t time for that to happen.

It took me a little while to remember that I can just use Process Monitor to track down the offending service. Just fire it up, set a filter to only include results for a particular directory (the temp directory in this case), then go create a file there and see what shows up. I was quite surprised to see Crashplan, the backup software I (and probably many people in Mozilla) use. Surprised because Crashplan isn’t set to back up my temp directory, and really I shudder to think what the performance cost is of something continually accessing every file that changes in the temp directory.

It turns out you can turn it off though. Hidden in the depths of Crashplan’s advanced backup settings is an option to disable real-time filesystem watching. From what I can see online the downside to this is that files will only be backed up once a day, but that’s a pretty fair tradeoff for having functioning xpcshell tests as far as I’m concerned. There is also an option to put Crashplan to sleep for an hour or so; that seems to work too, but I don’t know exactly what it does.

It confuses me a little why Crashplan monitors files it never intends to back up (even when the backup server isn’t connected and backups aren’t in progress), and it makes quite a lot of file accesses too. It seems likely to be a bug to me, but at least I can work around it for now.

Another 7 things…

Thanks to robcee you get to learn a little more about me. Much like him I’ve done this once before (perhaps it’s a sign of our age?) but it was over two years ago so let’s see if I can manage to rustle up a whole other seven things. I believe that the original meme said the facts had to be surprising things that most people didn’t know but I think I’ve used up all the surprising stuff about me already so most of this is probably common knowledge to those that follow my twitter stream.

Here are the rules:

  1. Link to your original tagger(s) and list these rules in your post.
  2. Share seven facts about yourself in the post.
  3. Tag seven people at the end of your post by leaving their names and the links to their blogs.
  4. Let them know they’ve been tagged.

My new things:

  1. I like creating fractals. I guess this says that I have an artistic side, but fractals are kind of cheating: you can create beautiful images mostly by tinkering with numbers in equations. I haven’t had the chance to do it much lately, mostly because the better software is Windows only and rebooting my laptop into Windows is tedious.[flickr size=”small”]4715418159[/flickr]
  2. Another part of my artistic side which has grown in the past couple of years is photography. I’ve liked taking photographs for some time but for a lot of that I was really just playing with simple point and shoots. Since moving on to the excellent Canon PowerShot S90 and now the Nikon D7000 I like to think I’ve been able to get some really great shots. Of course I throw away more than I keep. During a recent weekend trip to a lake I think I took around 1000 shots, kept about 90 of those and uploaded just 35 to flickr.[flickr size=”small”]5766449103[/flickr]
  3. I am engaged to the gorgeous Tiffney. We’re going to be married at the start of September, just five months after I proposed. We took the easy route of paying for a mostly pre-packaged wedding. I think some of our friends who are also getting married at the same time are a little jealous of how easy it’s been for us.[flickr size=”small”]5777718954[/flickr]
  4. We own a cat, Loki. He is curled up next to me as I write this. We named him before we knew just how appropriate it was. We had to buy sheets of plastic to lay by the sides of the bed to stop him scratching the mattress in the middle of the night. We used to also have a second cat, Ripley, but sadly she passed away at the start of the year from cancer.[flickr size=”small”]5870075150[/flickr]
  5. I have just bought a condo with my fiancée, the first place I’ve owned. It feels nice to no longer be answerable to a landlord, though I guess we still have to answer to the homeowner’s association. Since we only just moved in we don’t know where any of our stuff is, including the TV remote, which is quite irritating.[flickr size=”small”]6015887571[/flickr]
  6. For a few years while I was at university I answered to the name Andy. This started because in my scuba diving class there was a large number of Andys and our instructor decided to just start calling us all Andy. I was one of the few from that year’s novices who got more involved in the club afterwards and so the name stuck. In fact I got so used to it that one evening in a noisy pub someone was trying to get my attention by calling “Dave” and when that didn’t work they called “Andy” which I heard right away.
  7. My IRC nickname (Mossop) originated in college. There was a children’s TV show on at the time featuring extremely badly modelled puppets. Someone in my physics class claimed that I looked like one of them; the nickname stuck and I’ve been using it online ever since. One bonus is the name is so unusual that I mostly get away with using it wherever I go.

Now for the tags, hopefully many of these are new to the game:

  1. Myk Melez who always has something interesting to say and hasn’t blogged enough lately.
  2. Dave Mason who needs to get his blog syndicated on planet.
  3. Jeff Griffiths who has only recently started at Mozilla and needs to get his new blog syndicated on planet before he follows through on this.
  4. Blair McBride, my chief partner in coding for the grand Add-ons Manager redesign for Firefox 4.
  5. Daniel Holbert, his blog title makes me laugh and the second coming of this meme needs to branch out into other teams.
  6. John Ford, so he can get the build and release teams in on this.
  7. Philipp von Weitershausen, I can’t pronounce his last name and I’m sure he’ll tag lots of the new services guys.


Beating Bootcamp

I had a plan to go back to doing some more traditional fractal work this weekend; unfortunately the best tools out there (UltraFractal is a fine example) tend to be Windows only and all my machines are Macs right now. So I figured it would be a simple task to use Bootcamp to install Windows onto my laptop, but like much else that I’ve tried to do these past few weeks it turned into a bit of a nightmare, so I figured I’d document how I managed it.

The way Bootcamp works is relatively simple: it shrinks your main OSX partition and then creates a new Windows partition at the end of the disk, then you just install Windows onto that. The problem is that Bootcamp isn’t very good at shrinking the main partition when there are files sitting around at the end of it. As every good OSX fanboy tends to blather about, OSX works hard to avoid fragmentation within files; unfortunately I suspect this tends to spread individual files all over the disk as the OS tries to find contiguous space to move them to. So when you try to use Bootcamp to shrink your partition you get this lovely error telling you that some files cannot be moved and suggesting you basically reinstall OSX to fix it:

Bootcamp failing to shrink your partition

There are lots of suggestions on the web for combating this. Most revolve around paying for defragmentation software. I didn’t want to pay though so here is a free alternative.

It should go without saying that this involves partition wrangling. Make sure you backup first, double check before doing every step, don’t blame me if you lose data.

Go get the GParted live CD, burn it to a disc and boot from it (hold down Option while restarting). GParted should show your disk with two partitions in it: one small one at the start that claims to be FAT, which you can ignore. The rest of the disk should be HFS+ and named after your Mac install. Select this partition, click the resize button and choose how much to shrink it by. The easiest way is to just put the amount of space you want for Windows in the “space to leave at the end” box. Click OK and then Apply, then wait patiently while it does a far more competent job of moving your files around than Bootcamp could.

Restart into OSX, open Disk Utility and go to the partition for your main disk. You should now see your OSX partition taking up less space and a gap at the end. Click the + sign to add a new FAT partition and name it BOOTCAMP. You should now find that the Bootcamp assistant recognises that the Windows partition exists and will allow you to start the Windows installation.

Make sure to read the Bootcamp instructions; there are some real gotchas in there during the Windows install. Also make sure to read the support info on how to allow Windows SP3 to install if you want to do that; without it I ended up with a broken Windows.

Why does no-one make a mouse that I want?

As far as mouse types go you tend to find two different styles (let’s ignore wired mice for now, which are the spawn of the devil).

There are notebook mice. These tend to be small, making them uncomfortable for me to use. The big benefit with notebook mice though is that they can come with ultra-small USB dongles or, even better, Bluetooth connectivity.

Then there are desktop mice. These are larger, ergonomically designed, with more buttons and all around nicer to use. However the USB dongles you get for them are generally a minimum of an inch long.

My problem is that I use a laptop all the time, I travel with it frequently, and I like keeping the same mouse with me, set up the way I like it. But the notebook mice are too small to use and the dongles for the desktop mice stick out too far, which makes them very inconvenient.

What I really want is a desktop-sized comfortable mouse that connects over Bluetooth (I’d accept a sub-1cm USB dongle, but really that just loses me a port). I’ve never found one; do they exist?

Hey Dreamhost, we use tabs now

I’ll be honest, this post isn’t about Mozilla or even really anything Mozilla related (beyond the fact that it is about a poor web application). However I know that lots of people in the Mozilla community use Dreamhost as their webhost, and I figure some of them might want to know to watch out for this and avoid getting into the same mess that I did, so I’ll include it in the planet feed anyway.

The problem is with Dreamhost’s Web Panel, the service that customers use to administer their web hosting packages. It turns out that certain parts of this panel can’t cope with the idea of you accessing it from multiple tabs or windows at the same time. If you try opening the web hosting settings for more than one domain in different tabs, only the last one you opened will actually work as you expect. Yep, they are remembering, server side, which domain you are editing the settings for.

This wouldn’t normally be the end of the world. I could accept it if there was some error message or it just failed to save the settings for one of the domains it wasn’t expecting. Sadly the panel is not that clever. Instead, no matter which domain’s settings you try to save it always overwrites the domain that was last opened. The panel does subsequently tell you the real domain that it saved the settings for (with a nice green tick to emphasise that everything is fine!) if you are careful enough to read the full message but of course by then the domain’s old settings are gone.

I discovered this when an attempt to change the settings for one of my domains blew away the settings for my secure server, including the SSL keys. Thankfully they were recovered, but this sort of thing really shouldn’t happen at all. It seems pretty absurd to me that a modern web application can exist that doesn’t seem to take tabs into account at all.

It’s a shame really because for the most part Dreamhost are the best webhosts I’ve ever used and their panel is very easy to use. Dreamhost’s support team have told me that this is a limitation of the web panel and it should only be used in one window, though there don’t seem to be any warnings about this anywhere. I hate to make sweeping statements like “Surely it would be easy to…” because it bugs me whenever anyone says the same about Firefox, but happily overwriting your settings (and calling it a success) seems like something that should be avoided at all costs.

I must be missing something in the clouds

For a long time now there have been web applications mirroring pretty much all the applications I use locally: email, calendar, spreadsheets, etc. I keep looking at these and feeling like I should jump on the bandwagon; after all, lots of the people I work with use them and rave about them, so they must be great, right? The problem is I can’t figure out what I am actually missing, and most of the time I can spot immediately things I would miss by moving to them.

Obviously one clear benefit is that they are available anywhere in the world; you just need access to any computer with a modern web browser. But you know what? Wherever I go in the world I take my laptop, or if I don’t it is because I really want to relax and be offline completely. About the only critical thing that I might need to get updates on is my mail, which I have webmail access to anyway.

The online services seem to fall down for me in a bunch of ways:

  • I like to be able to run applications separately from my browser. I’ll grant you that tools like Prism make this sort of thing possible so that failure is going away slowly.
  • No matter how good browsers become I don’t believe HTML will ever create as good a UI as a real application can. For the most part they are restricted to a single-window interface, with pseudo-windows hovering above that look nothing like platform native ones.
  • They need you to be online (let’s ignore Gears and the like for the moment; I haven’t found the technology to be quite there yet). As I said, I take my laptop everywhere. I can look at all my mail and calendar without needing to pay for an internet connection in some random hotspot.
  • They simply don’t have the features that my local apps do. I expect this to change over the years but many of the online offerings are basic at best.
  • How do I back up my data? Seriously, if I want to back up my gmail or google calendar what do I do? If I want to back up my local mail and calendar I just plug in a hard disk and let OSX deal with it. Obviously the opposite to this is that if my machine goes down then the online service will still be there and I’ll only have a potentially stale backup, but my backup is never more than a week old, and I can tell you I’ve lost more data over the years due to online systems going down than local machines breaking.

So here I am, wondering (again) what to do about task management. I keep feeling drawn to things like Remember The Milk because they are all online and Web2.0ish, but I’m not sure quite why. I’m sure I must be missing something critical about using the online apps, but I just can’t figure out what it is.

Daylight robbery

It wasn’t long ago that I was responsible for developing and maintaining a large number of websites. Like everyone in this role I needed a domain registrar I could trust to be cheap, efficient and most of all keep me updated about upcoming renewals. At the time I had a lot of love for Freeparking. They didn’t (and still don’t) look like much, but at the time I started using them they were all these things. No surprise I carried on using them after I left my last job when registering some personal domains.

Imagine my horror today, then, when someone else emailed me to notify me that the registration for oxymoronical.com had expired the day before. Freeparking hadn’t so much as whispered on the subject and, would you look at that, I now have to pay a late registration fee.

To say I feel let down is an understatement. Not only have they over the years failed to even update their website beyond its only-just-working state, but now they seem to be actively trying to rip people off. It’s a shame, but I guess it is the push I need to move all my registrations to Dreamhost, who have been nothing short of excellent when it comes to my hosting. Sadly of course I still have to renew the domain first, then wait 60 days before I can move it.

I particularly like how their support form has “I was not informed of an imminent renewal” as one of the options to choose from. Clearly this comes up quite a lot for them.