Six years revisited

Two years ago I blogged about how it had been six years since I wrote my first patch for Firefox. Today I get to say that it’s been six years since I started getting paid to do what I love, working for Mozilla. In that time I’ve moved to a new continent, found a wife (through Mozilla no less!), progressed from coder to module owner to manager, seen good friends leave for other opportunities and new ones join, and watched (dare I say helped?) Mozilla, and the Firefox team in particular, grow from the small group that I joined in 2007 into the large company that will soon surpass 1,000 employees.

One of the things I’ve always appreciated about Mozilla is how flexible they can be about where you work. Recently my wife and I decided to move away from the Bay Area to be closer to family. It was a hard choice to make, as it meant leaving a lot of friends behind, but one thing that made it easier was knowing that I wouldn’t have to factor work into the decision. Unlike at many other companies I know of, there is no strict requirement that everyone work from the office. True, it is encouraged, and it sometimes makes sense to require it for some employees, particularly when they’re starting out, but I knew that when it came time to talk to my manager about it he wouldn’t have a problem with me switching to working remotely. Of course, six years ago when I started I was living in the UK, and stayed there for my first two years at Mozilla, so he had a pretty good idea that I could handle it; at least this time I’m only separated from the main office by a short distance and no time-zones.

The web has changed a lot in the last six years. Back then we were working on Firefox 3, the first release to include the awesomebar and a built-in way to download extensions from AMO. Twitter and Facebook had only been generally available for about a year. The ideas for CSS3 and HTML5 were barely written down, let alone implemented. If you had told me back then that you’d be able to play a 3D game in your browser with no additional plugins, or watch videos without Flash, I’d have probably dismissed them as crazy pipe-dreams. We weren’t even JIT-compiling our JS code back then. Mozilla, along with the other browser makers, is continuing to prove that HTML, CSS and JS are a winning combination that we can build on to make the future of the web open, performant and powerful. I can’t wait to see what things will be like in another six years.

Firefox now ships with the add-on SDK

It’s been a long ride but we can finally say it. This week Firefox 21 shipped and it includes the add-on SDK modules.

We took all the Jetpack APIs and we shipped them in Firefox! What does this mean? Well, for users it means two important things:

  1. Smaller add-ons. Since they no longer need to ship the APIs themselves, add-ons only have to include the unique code that makes them special. That’s something like a 65% file-size saving for the most popular SDK-based add-ons, and probably more for simpler add-ons.
  2. Add-ons will stay compatible with Firefox for longer. We can evolve the modules in Firefox that add-ons use, so that most of the time when changes happen to Firefox the modules seamlessly shift to keep working. There are still some cases where that might be impossible (when a core feature is dropped from Firefox, for example) but hopefully those should be rare.

To take advantage of these benefits add-ons have to be repacked with a recent version of the SDK. We’re working on a plan to do that automatically for existing add-ons where possible, but developers who want the benefits right now can repack their add-ons themselves with SDK 1.14 by passing cfx xpi --strip-sdk, or with the next release, SDK 1.15, which will do that by default.
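
From the add-on developer’s point of view the code itself doesn’t change at all. As a minimal sketch (a typical SDK add-on’s main.js, using the real sdk/tabs module), the same require calls keep working; in a stripped XPI they simply resolve to the modules shipped inside Firefox rather than to copies bundled with the add-on:

// main.js of a typical SDK add-on: nothing here changes, only where
// the sdk/* modules are loaded from differs once the XPI is stripped.
var tabs = require("sdk/tabs");
tabs.open("https://www.mozilla.org/");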

Afterburner steak rub

This is a recipe for a steak rub that I made up last night. It came out pretty well and everyone enjoyed it. It slightly caramelises the surface of the steak, infusing it with herby goodness, and after you enjoy that flavour a nice kick of heat comes in the aftertaste.

  • 1/4 handful fresh sage, finely chopped
  • 1/2 handful fresh thyme, finely chopped
  • 1 handful fresh parsley, finely chopped
  • 2 tsp salt
  • 1 tbsp brown sugar
  • 2 tsp black pepper
  • 1 tsp cayenne pepper

Mix it all together in a bowl, rinse off your steaks and apply liberally to each side about half an hour before you plan to start cooking. These quantities should make enough for 4 small steaks.

Get notifications about changes to any directory in mercurial

Back in the old days when we used CVS of all things for our version control we had a wonderful tool called bonsai to help query the repository for changes. You could list changes on a per-directory basis if you needed to, which was great for keeping an eye on certain chunks of code. I recall there being a way of getting an RSS feed from it, and I used it when I was the module owner of the extension manager to see what changes had landed that I hadn’t noticed in bugs.

Fast forward to today and we use mercurial instead. When we switched there was much talk of how we’d get tool parity with CVS, but bonsai is something that has never been fully replaced. Oh, hgweb is decent at looking at individual files and browsing the tree, but you can’t get that list of changes per directory from it. You can get it from the command line (hg log accepts a directory path) but who wants to do that? Lately I’ve been finding need of those directory RSS feeds more and more. We’re now periodically uplifting the Add-on SDK repository to mozilla-central, so it’s really important to spot changes made to that directory in mozilla-central so we can also land them in our git repository and not clobber them the next time we uplift. I’m also the module owner of toolkit, which is a pretty big, sprawling set of files. It seems like every time I look I find something that landed without me noticing. I can’t be a good module owner if I’m not keeping an eye on things, so I’d really like to see when new files are added there.

So I introduce the Hg Change Feed, the result of mostly just a few days of work. Every 10 minutes it pulls new changes from mozilla-central and mozilla-inbound. A mercurial hook looks over the changes and adds information about them to a MySQL database. Then a simple django app displays that information. As you browse through the directories in the tree it shows only changesets that affected files beneath that directory. For any directory you can also get an RSS feed of the same. Plug that into IFTTT and you have an automated system to notify you in pretty much any way you’d like about new changes you’d be interested in.

Some simple examples. For tracking changes to the Add-on SDK I’m watching http://hgchanges.fractalbrew.com/mozilla-inbound/file/addon-sdk/source. For toolkit I’m looking at http://hgchanges.fractalbrew.com/mozilla-inbound/file/toolkit?types=added. The types parameter takes a comma-separated list of “added”, “removed” and “modified” to filter which changes you’re interested in. There’s no UI on the site for changing that right now; you’re welcome to add some!

One other neat trick is that it mostly ignores merge changesets. A merge will only show up in the list of changes if it actually makes a change not already present in either of its parents (which mostly happens when resolving merge conflicts), because really you don’t need to hear about changes twice.
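
To make that rule concrete, here’s a rough sketch (the changeset objects and their methods are invented for illustration; the real logic lives in the site’s mercurial hook, not in JavaScript):

// Hypothetical data model, for illustration only: a merge changeset is
// listed only when some file it touches differs from that file's
// version in *both* parents, i.e. the merge itself made a change.
function isInteresting(changeset) {
  // Non-merges (fewer than two parents) are always listed.
  if (changeset.parents.length < 2)
    return true;
  return changeset.files.some(function(file) {
    return changeset.parents.every(function(parent) {
      return parent.contents(file) != changeset.contents(file);
    });
  });
}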

So play with it, let me know if you find it useful or if you think things are missing. I can also add other mercurial repositories if people want. Some caveats:

  • It only retains the last 2000 changesets from any repository, in an effort to keep the DB small and fast, and it only shows the last 200 changesets on each page, or just the last 20 in the feeds. These can be tweaked easily enough; I’ve done basically no benchmarking to say those are the right values.
  • The site isn’t as fast as I’d like; in particular, listing changes for the top-level directory takes nearly 5 seconds. I’ve thrown some basic caching in place to help alleviate that for now. I bet someone with more MySQL and django experience than me could tell me what I’m doing wrong.
  • I’m off on vacation tomorrow, so I guess I’m announcing this and then running away; sorry if that means it takes me a while to respond to comments.

Want to help out and make it better? Go nuts with the source. There’s a readme that hopefully explains how to set up your own instance.

Hacking on Tilt

Tilt, or 3D view as it is known in Firefox, is an awesome visual tool that really lets you see the structure of a webpage. It shows you just how deep your tag hierarchy goes, which might be a sign that your page is too complex, or even help you spot errors in your markup that you wouldn’t otherwise notice. But what if it could do more? What if there were different ways to visualise the same page? What if web developers could even create their own visualisations?

I’ve had this idea knocking around in my head for a while now and the devtools work week was the perfect time to hack on it. Here are some results, click through for larger images:

Normal 3D view

Only give depth to links

Only give depth to links going off-site

Only give depth to elements that have a different style on hover

This is all achieved with some changes to Firefox itself to make Tilt handle more generic visualisations, along with an extension that then overrides Tilt’s default visualisation. Because the extension has access to everything Firefox knows about the webpage it can use some interesting sources of data about the page, including those not found in the DOM. This one I particularly like: it makes each element’s depth proportional to the number of attached DOM event listeners:

Give depth to elements based on the number of attached event listeners

Just look at that search box, and what’s up with the two buttons having different heights?

The code just calls a JS function to get the height for each element displayed in the 3D view. It’s really easy to use DOM functions to highlight different things about the elements, and while I think some of the examples I made are interesting, I think it will be more interesting to just let web-devs come up with and share their own visualisations. To that end I also demoed using Scratchpad to write whatever function you like to control the visualisation. You can see a screencast of it in action.
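
To give a flavour of what such a function might look like (the name and exact signature here are my invention rather than a final API), this sketch implements the off-site links visualisation from the screenshots above; Tilt would call it for each element and use the return value as the extrusion depth:

// Hypothetical visualisation function: the real hook may differ, this
// just shows the kind of logic involved.
function getElementDepth(element) {
  // Extrude only anchors that link away from the current site.
  if (element.localName == "a" && element.hostname &&
      element.hostname != element.ownerDocument.location.hostname)
    return 20;
  return 0;
}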

Something that struck me towards the end of the week is that it could be awesome to pair this up with external sources of data like analytics. What about being able to view your page with links given depth proportional to how often users click them? Seems like an awesome way to really understand where your users are going and maybe why.

I’m hoping to get the changes to Firefox landed soon, maybe with an additional patch to properly support extensibility in Tilt. Right now the extension works by replacing a function in a JSM, which is pretty hacky, but it wouldn’t be difficult to make it nicer than that. After that I’ll be interested to see what visualisation ideas others come up with.

The Add-on SDK is now in Firefox

We’re now a big step closer to shipping the SDK APIs with Firefox and other apps. We’ve uplifted the SDK code from our git repository to mozilla-inbound and, assuming it sticks, we will be on the trains for release. We’ll be doing weekly uplifts to keep the code in mozilla-central current.

What’s changed?

Not a lot yet. Existing add-ons, and add-ons built with the current version of the SDK, still use the versions of the APIs shipped in their XPIs. Add-ons built with the next version of the SDK may start to use the APIs in Firefox in preference to those shipped in the XPI, and then a future version will only use those in Firefox. We’re also talking about the possibility of making Firefox automatically override the APIs in any SDK-based add-on with the shipped ones, so the add-on author wouldn’t need to do anything.

We’re working on switching the Jetpack tests that run on tinderbox over to use the in-tree code; once we do we will unhide them so other developers can immediately see when their changes break the SDK code. You can now run the Jetpack tests with a mach command, or just with make jetpack-tests from the object directory. Those commands are a bit rudimentary right now; they don’t give you a way to choose individual tests to run. We’ll get to that.

Can I use it now?

If you’re brave, sure. Once a build including the SDK is out (might be a day or so for nightlies) fire up a chrome-context scratchpad and put this code in it:

// Import the loader module and create a loader that knows where the
// SDK modules live inside Firefox.
var { Loader } = Components.utils.import("resource://gre/modules/commonjs/toolkit/loader.js", {});
var loader = Loader.Loader({
  paths: {
    "sdk/": "resource://gre/modules/commonjs/sdk/",
    "": "globals:///"
  },
  resolve: function(id, base) {
    if (id == "chrome" || id.startsWith("@"))
      return id;
    return Loader.resolve(id, base);
  }
});
// Create a module to own the code and a require function scoped to it.
var module = Loader.Module("main", "scratchpad://");
var require = Loader.Require(loader, module);

// From here on this is just normal SDK code.
var { notify } = require("sdk/notifications");
notify({
  text: "Hello from the SDK!"
});

Everything before the notification code sets up the SDK loader so it knows where to get the APIs from and creates you a require function to call. The rest can just be code as you’d include in an SDK add-on. You probably shouldn’t use this for anything serious yet; in fact I haven’t included the code to tell the module loader to unload, so this example may leak things for the rest of the life of the application.

This is too long of course (longer than it should be right now because of a bug, too), so one thing we’ll probably do is create a simple JSM that can give you a require function in one line, as well as take care of unloading when the app goes away.
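
To give an idea of what that could look like (this JSM and its API are entirely made up; nothing like it ships yet):

// Entirely hypothetical: import one made-up JSM and get back a
// ready-made require function in a single line.
var { require } = Components.utils.import("resource://gre/modules/commonjs/sdk-require.jsm", {});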

Pop-free sound from a Raspberry Pi running XBMC

UPDATE: I’ll leave this around for posterity but a large part of this problem has now been fixed in the latest Raspberry Pi firmware. See here for instructions for raspbmc until that gets updated.

I’ve been in the process of setting up a Raspberry Pi in my office so I can play my mp3 collection through my old stereo. It’s generally gone well, and I have to take my hat off to the developers of Raspbmc, which makes setting up XBMC on the Pi ridiculously easy and fast. It didn’t take me long to have AirPlay set up and running, as well as being able to use my phone to remote-control XBMC to play things directly from my music library sitting on my Synology NAS. Quite a nice setup really.

Just one problem: I play the music out through the Pi’s audio jack, which doesn’t have a fantastic DAC. The big noticeable issue is an audible pop every time XBMC starts and stops playing. For AirPlay this isn’t too bad; you get a pop when it first starts but only another after you stop playing. Playing directly in XBMC, though, you get two pops between each track as it stops playing one and starts the next. Very annoying. It’s a pretty well-known problem, and the best solution so far is to simply never stop playing. If you have a player that uses pulseaudio then you can configure it to keep the audio stream going even when idle. Of course it isn’t that easy: XBMC doesn’t use pulseaudio on the Pi. There is some work going on that might change that, but for now it is very buggy to the point of being unusable. It seemed I was stuck … or was I?

It took some experimentation but I finally came across something of a hack that solves the problem for me. It probably works on other distributions but I did all this using Raspbmc.

First as root you want to do this:

echo "snd-bcm2835" >>/etc/modules

This makes the kernel module for the sound devices load on startup. It allows alsa and by proxy pulseaudio to talk to the audio hardware. Next edit /etc/pulse/system.pa and comment out this line:

load-module module-suspend-on-idle

This tells pulseaudio to keep the audio stream alive even when not playing anything. Now reboot your Pi. Once it has started up, copy a plain wav file (nothing mp3-encoded or anything) to the Pi, log in and play it through pulseaudio:

paplay test.wav

If it doesn’t come out of the speakers then alsa might be trying to play to the HDMI port. Try running this, then run the above command again:

sudo amixer cset numid=3 1

What just happened is that pulseaudio played the sound file, but it should now have kept the audio hardware active and will continue to do so until the Pi is next turned off. You might think that would mean XBMC can’t play anything now. You’d be wrong: it plays just fine and no longer suffers from the popping between tracks. Occasionally I hear a pop when I first start playing after startup, but after that I’ve not heard any issues.

There are probably easier ways to initialise pulseaudio but I don’t mind playing some sound on every startup to let me know my Pi is ready. I made it automatic by sticking this at the top of .bashrc for the pi user (which is used to run xbmc):

/usr/bin/paplay $HOME/test.wav

It means it also plays every time I log in with ssh, but I’m not expecting to do that much now it’s all working. I’m sure someone will point out the better way to do this that I’m too lazy to look for now that everything is working for me.

Making Git play nice on Windows

A while ago now I switched to Windows on my primary development machine. It’s pretty frustrating, particularly when it comes to command-line support, but MozillaBuild goes a long way towards giving me a more comfortable environment. One thing that took a long time to get working right, though, was full colour support from both mercurial and git. Git in particular is a problem because it uses a custom version of MSYS which seems to conflict with the stock MSYS in MozillaBuild, leaving things broken if you don’t set it up just right. Since I just switched to a new machine I thought it would be useful to record how I got it working, perhaps more for my benefit than anything else, but perhaps others will find it useful too.

You want to install MozillaBuild, Mercurial and Git. MozillaBuild comes with Mercurial, but it’s generally an older version. This post assumes you put them in their default locations (on a 64-bit machine), so adjust accordingly if you changed that. When the Git installer asks, choose the most conservative option: don’t put git in your PATH here.

You need some custom settings. First create a .hgrc file in your home directory or add the following to it if you already have one:

[extensions]
hgext.color=

[color]
mode=win32

Now create a .profile file in the same place:

export PATH="/c/Program Files/Mercurial":$PATH:"/c/Program Files (x86)/Git/bin"

The ordering is important there. $PATH will be set up to point to MozillaBuild (and other places) when you run the MozillaBuild startup scripts. You want the installed Mercurial to override the one in MozillaBuild and you want the MozillaBuild binaries to override the custom MSYS versions that Git comes with.

Finally you want a decent console to use. I’m using Console2 with general success. Install it, start it and go into its settings. Console2 supports creating custom tab types, so you can create a new MozillaBuild tab, or a new Windows console tab, etc. I like to do this so go to the “Tabs” section of the settings, click add and then give it a title of MozillaBuild. You then want to add the MozillaBuild startup script you want to use in the “Shell” option, e.g. “C:\mozilla-build\start-msvc10.bat” for me. If you’re feeling extravagant (and have a checkout of mozilla-central), point the icon to browser/branding/official/firefox.ico.

Some final useful settings: in “Hotkeys” assign Ctrl+T to “New Tab 1”. In “Mouse” set “Select Text” to the left button and “Paste Text” to the right button.

And that should be it. Open a new MozillaBuild tab in Console2. If you’ve done everything right you should see messages about the Visual Studio and SDK versions chosen. “which hg” and “which git” should show you the binaries in Program Files somewhere. They should both run and output in colour when useful.

Let’s just put it in Toolkit!

Toolkit is fast turning into the dumping ground of mozilla-central. Once upon a time the idea was simple: any code that could be usefully shared across multiple applications (and in particular code that wasn’t large enough to deserve a module of its own) would end up in Toolkit. The rules were pretty simple: any code in there should work for any application that wants to use it. This didn’t always work exactly according to plan, but we did our best to fix SeaMonkey and Thunderbird incompatibilities as they came along.

This worked great when there was only one Firefox. Shared code went into m-c/toolkit, Firefox specific code went into m-c/browser. There were always complaints that more of the code in browser should be moved to toolkit so Seamonkey and other projects could make use of it but otherwise no big issues.

Now we have more than one Firefox: Firefox for desktop, Firefox for Android, B2G, Metro and who knows what else to come. Suddenly we want to share code across different Firefoxen, often different sets of code depending on the project, and often code that depends on other pieces, like services, that aren’t available in all other applications. Keeping the rules as they stand means that Toolkit isn’t the correct place for this code. So what do we do about it?

There only seem to be two sensible choices: either we change the Toolkit rules to allow code that may not work in some applications, or we create a new catch-all module for this sort of code. And really those are the same thing, except that the second requires finding a module owner for the new module. I’m going to ignore the third option, creating a new module for each new piece of code like this, as hopelessly bureaucratic.

So, I’m proposing that we (by which I mean I as module owner) redefine the rules for Toolkit to be a little broader:

  • Any code in Toolkit should be potentially useful to multiple applications but it isn’t up to the author to make it work everywhere.
  • Patches to make code work in other applications will be accepted if not too invasive.
  • Any code in Toolkit that is called automatically by Gecko (like the add-ons manager) must work in all applications.

Any strong objections?

What is an API?

I recently posted in the newsgroups about a concern over super-review: in some cases patches that seem to meet the policy aren’t getting super-reviewed. Part of the problem is that the policy is a little ambiguous. It says that any API or pseudo-API requires super-review, but depending on how you read that section it could mean that any patch changing the signature of a JS function counts as an API. We need to be smarter than that. Here is a straw-man proposal for defining what an API is:

Any code that is intended to be used by other parts of the application, add-ons or web content is an API, and requires super-review when it is added or its interface is changed.

Some concrete examples to demonstrate what I think this statement covers:

  • JS modules and XPCOM components in toolkit are almost all intended to be used by applications or add-ons and so are APIs
  • UI code in toolkit (such as extensions.js) isn’t meant to be used elsewhere and so isn’t an API (though it may contain a few cases, such as observer notifications, that are; these should be clearly marked out in the file)
  • Any functions or objects exposed to web content are APIs
  • The majority of code in browser/ is only meant to be used within browser/ and so isn’t an API. There are some exceptions where extensions rely on certain functionality, such as tabbrowser.xml

Do you think my simple statement above matches those cases and others that you can think of? Is this a good way to define what needs super-review?