The Add-on SDK is now in Firefox

We’re now a big step closer to shipping the SDK APIs with Firefox and other apps. We’ve uplifted the SDK code from our git repository to mozilla-inbound, and assuming it sticks we will be on the trains for release. We’ll be doing weekly uplifts to keep the code in mozilla-central current.

What’s changed?

Not a lot yet. Existing add-ons, and add-ons built with the current version of the SDK, still use the copies of the APIs bundled in their XPIs. Add-ons built with the next version of the SDK may start to prefer the APIs in Firefox over those shipped in the XPI, and a future version will only use the ones in Firefox. We’re also talking about the possibility of making Firefox automatically override the APIs in any SDK-based add-on with the shipped ones, so the add-on author wouldn’t need to do anything.

We’re working on switching the Jetpack tests that run on tinderbox over to the in-tree code; once we do we will unhide them so other developers can immediately see when their changes break the SDK code. You can now run the Jetpack tests with a mach command, or just with make jetpack-tests from the object directory. Those commands are a bit rudimentary right now: they don’t give you a way to choose individual tests to run. We’ll get to that.

Can I use it now?

If you’re brave, sure. Once a build including the SDK is out (it might be a day or so for nightlies), fire up a chrome-context Scratchpad and put this code in it:

var { Loader } = Components.utils.import("resource://gre/modules/commonjs/toolkit/loader.js", {});
var loader = Loader.Loader({
  paths: {
    "sdk/": "resource://gre/modules/commonjs/sdk/",
    "": "globals:///"
  },
  resolve: function(id, base) {
    if (id == "chrome" || id.startsWith("@"))
      return id;
    return Loader.resolve(id, base);
  }
});
var module = Loader.Module("main", "scratchpad://");
var require = Loader.Require(loader, module);

var { notify } = require("sdk/notifications");
notify({
  text: "Hello from the SDK!"
});

Everything but the last four lines sets up the SDK loader so it knows where to get the APIs from and creates a require function for you to call. The rest can just be code like you’d include in an SDK add-on. You probably shouldn’t use this for anything serious yet; in fact I haven’t included the code to tell the module loader to unload, so this example may leak things for the rest of the life of the application.

This is too long of course (and longer than it should be right now because of a bug), so one thing we’ll probably do is create a simple JSM that can give you a require function in one line and also take care of unloading when the app goes away.
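To give an idea of what that might look like, here is a rough sketch of such a JSM (the module and function names are made up for illustration; it is just the loader setup from above wrapped in a reusable function):

var EXPORTED_SYMBOLS = ["createRequire"];

var { Loader } = Components.utils.import("resource://gre/modules/commonjs/toolkit/loader.js", {});

// Build a loader and hand back a require() rooted at the given base URI.
function createRequire(base) {
  var loader = Loader.Loader({
    paths: {
      "sdk/": "resource://gre/modules/commonjs/sdk/",
      "": "globals:///"
    },
    resolve: function(id, requirer) {
      if (id == "chrome" || id.startsWith("@"))
        return id;
      return Loader.resolve(id, requirer);
    }
  });
  var module = Loader.Module("main", base);
  // A real version would also hook application shutdown and tell the loader
  // to unload so modules don't leak.
  return Loader.Require(loader, module);
}

With something like that in place, getting a require function really would be a one-liner: var require = createRequire("scratchpad://");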

Pop-free sound from a Raspberry Pi running XBMC

UPDATE: I’ll leave this around for posterity, but a large part of this problem has now been fixed in the latest Raspberry Pi firmware. See here for instructions for Raspbmc until that gets updated.

I’ve been in the process of setting up a Raspberry Pi in my office so I can play my mp3 collection through my old stereo. It’s generally gone well and I have to take my hat off to the developers of Raspbmc which makes setting up XBMC on the Pi ridiculously easy and fast. It didn’t take me long to have Airplay set up and running as well as being able to use my phone to remote control XBMC to play things direct from my music library sitting on my Synology NAS. Quite a nice setup really.

Just one problem. I play the music out through the Pi’s audio jack, which doesn’t have a fantastic DAC. The big noticeable issue is an audible pop every time XBMC starts and stops playing. For Airplay this isn’t too bad: you get a pop when it first starts, but only one more after you stop playing. Playing directly in XBMC, though, you get two pops between each track as it stops playing one and starts the next. Very annoying. It’s a pretty well known problem, and the best solution so far is to simply never stop playing. If you have a player that uses pulseaudio then you can configure it to keep the audio stream going even when idle. Of course it isn’t that easy: XBMC doesn’t use pulseaudio on the Pi. There is some work going on that might change that, but for now it is very buggy to the point of being unusable. It seemed I was stuck … or was I?

It took some experimentation but I finally came across something of a hack that solves the problem for me. It probably works on other distributions but I did all this using Raspbmc.

First as root you want to do this:

echo "snd-bcm2835" >>/etc/modules

This makes the kernel module for the sound devices load on startup, allowing alsa (and by proxy pulseaudio) to talk to the audio hardware. Next, edit the pulseaudio configuration in /etc/pulse/ and comment out this line:

load-module module-suspend-on-idle

This tells pulseaudio to keep the audio stream alive even when not playing anything. Now reboot your Pi. Once it has started up, copy a plain wav file (nothing mp3-encoded or anything) to the Pi, log in and play it through pulseaudio:

paplay test.wav

If it doesn’t come out of the speakers then alsa might be trying to play to the HDMI port. Try running this, then run the above command again:

sudo amixer cset numid=3 1

What just happened is that pulseaudio played the sound file, and it should now keep the audio hardware active until the Pi is next turned off. You might think that means XBMC can’t play anything now. You’d be wrong: it plays just fine and no longer suffers from the popping between tracks. Occasionally I hear a pop when I first start playing after startup, but beyond that I’ve not heard any issues.

There are probably easier ways to initialise pulseaudio, but I don’t mind playing some sound on every startup to let me know my Pi is ready. I made it automatic by sticking this at the top of .bashrc for the pi user (which is used to run XBMC):

/usr/bin/paplay $HOME/test.wav

It means the sound also plays every time I log in with ssh, but I’m not expecting to do that much now it’s all working. I’m sure someone will point out the better way to do this that I’m too lazy to look for now that everything is working for me.

Making Git play nice on Windows

A while ago now I switched to Windows on my primary development machine. It’s pretty frustrating, particularly when it comes to command line support, but MozillaBuild goes a long way towards giving me a more comfortable environment. One thing that took a long time to get working right, though, was full colour support from both Mercurial and Git. Git in particular is a problem because it uses a custom version of MSYS which seems to conflict with the stock MSYS in MozillaBuild, leaving things broken if you don’t set it up just right. Since I just switched to a new machine I thought it would be useful to record how I got it working, perhaps more for my benefit than anything else, but perhaps others will find it useful too.

You want to install MozillaBuild, Mercurial and Git. MozillaBuild comes with Mercurial, but it’s generally an older version. This post assumes you put them in their default locations (on a 64-bit machine), so adjust accordingly if you changed that. When the Git installer asks, choose the most conservative option: don’t put git in your PATH here.

You need some custom settings. First, create a .hgrc file in your home directory, or add the following to it if you already have one:
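A minimal set of settings that enables colour output from Mercurial on Windows looks something like this (a sketch using the stock color extension; the exact contents of your .hgrc may differ):

[extensions]
color =

[color]
mode = win32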



Now create a .profile file in the same place:

export PATH="/c/Program Files/Mercurial":$PATH:"/c/Program Files (x86)/Git/bin"

The ordering is important there. $PATH will be set up to point to MozillaBuild (and other places) when you run the MozillaBuild startup scripts. You want the installed Mercurial to override the one in MozillaBuild and you want the MozillaBuild binaries to override the custom MSYS versions that Git comes with.

Finally you want a decent console to use. I’m using Console2 with general success. Install it, start it and go into its settings. Console2 supports creating custom tab types, so you can create a new MozillaBuild tab, a new Windows console tab, etc. I like to do this, so go to the “Tabs” section of the settings, click Add and give it a title of MozillaBuild. Then set the “Shell” option to the MozillaBuild startup script you want to use, e.g. “C:\mozilla-build\start-msvc10.bat” for me. If you’re feeling extravagant (and have a checkout of mozilla-central), point the icon to browser/branding/official/firefox.ico.

Some final useful settings: in “Hotkeys” assign Ctrl+T to “New Tab 1”. In “Mouse” set “Select Text” to the left button and “Paste Text” to the right button.

And that should be it. Open a new MozillaBuild tab in Console2. If you’ve done everything right you should see messages about the Visual Studio and SDK versions chosen. “which hg” and “which git” should show you the binaries in Program Files somewhere. They should both run and output in colour when useful.
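As a quick sanity check from that new tab (the exact paths depend on where you installed things):

which hg     # expect something under /c/Program Files/Mercurial
which git    # expect something under /c/Program Files (x86)/Git/bin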

Let’s just put it in Toolkit!

Toolkit is fast turning into the dumping ground of mozilla-central. Once upon a time the idea was simple. Any code that could be usefully shared across multiple applications (and in particular code that wasn’t large enough to deserve a module of its own) would end up in Toolkit. The rules were pretty simple: any code in there should work for any application that wants to use it. This didn’t always work exactly according to plan, but we did our best to fix Seamonkey and Thunderbird incompatibilities as they came along.

This worked great when there was only one Firefox. Shared code went into m-c/toolkit, Firefox-specific code went into m-c/browser. There were always complaints that more of the code in browser should be moved to toolkit so Seamonkey and other projects could make use of it, but otherwise there were no big issues.

Now we have more than one Firefox: Firefox for desktop, Firefox for Android, B2G, Metro and who knows what else to come. Suddenly we want to share code across different Firefoxen, often a different subset of applications depending on the feature, and often code that depends on other pieces, like services, that aren’t available in all other applications. Keeping the rules as they stand now means that Toolkit isn’t the correct place for this code. So what do we do about this?

There only seem to be two sensible choices. Either we change the Toolkit rules to allow code that may not work in some applications, or we create a new catch-all module for this sort of code. And really those are the same thing, except that the second needs a new module owner for the new module. I’m going to ignore the third option, creating a new module for each new piece of code like this, as hopelessly bureaucratic.

So, I’m proposing that we (by which I mean I as module owner) redefine the rules for Toolkit to be a little broader:

  • Any code in Toolkit should be potentially useful to multiple applications but it isn’t up to the author to make it work everywhere.
  • Patches to make code work in other applications will be accepted if not too invasive.
  • Any code in Toolkit that is called automatically by Gecko (like the add-ons manager) must work in all applications.

Any strong objections?

What is an API?

I recently posted in the newsgroups about a concern over super-review. In some cases patches that seem to meet the policy aren’t getting super-reviewed. Part of the problem here is that the policy is a little ambiguous. It says that any API or pseudo-API requires super-review but depending on how you read that section it could mean any patch that changes the signature of a JS function is classed as an API. We need to be smarter than that. Here is a straw-man proposal for defining what is an API:

Any code that is intended to be used by other parts of the application, add-ons or web content is an API and requires super-review when it is added or its interface is changed.

Some concrete examples to demonstrate what I think this statement covers:

  • JS modules and XPCOM components in toolkit are almost all intended to be used by applications or add-ons and so are APIs
  • UI code in toolkit (such as extensions.js) isn’t meant to be used elsewhere and so isn’t an API (though such files may contain a few exceptions, like observer notifications; these should be clearly marked out in the file)
  • Any functions or objects exposed to web content are APIs
  • The majority of code in browser/ is only meant to be used within browser/ and so isn’t an API. There are some exceptions to this where extensions rely on certain functionality such as tabbrowser.xml

Do you think my simple statement above matches those cases and others that you can think of? Is this a good way to define what needs super-review?

What is Jetpack here for?

Who are the Jetpack team? What are they here for? A lot of people in Mozilla don’t know that Jetpack still exists (or never knew it existed). Others still think of it as the original prototype which we dropped in early 2010. Our goals have changed a lot since then. Even people who think they know what we’re about might be surprised at what our current work involves.

Let’s start with this basic statement:

Jetpack makes it easier to develop features for Firefox

There are a couple of points wrapped up in this statement:

  1. The Jetpack team doesn’t develop features. We enable others to develop features. We should be an API and tools team. We’ve done a lot of API development, but we should work with the developer tools team more to make sure that Firefox developer tools work for Firefox development, too.
  2. I didn’t say “add-ons.” I said “features”. The APIs we develop shouldn’t be just for add-on developers, Firefox developers should be able to use them too. If the APIs we create are the best they can be, then everyone should want to use them. They should demand to use them, in fact. It follows that this should make it easier for add-ons to transition into full Firefox features, or vice versa.

We happen to think that a modular approach to developing features makes more sense than the old style of using overlays and XPCOM. All of the APIs we develop are CommonJS-style modules (as used in Node and elsewhere) and we’ve built the add-on SDK to simplify this style of add-on development. That work includes a loader for CommonJS modules, which has now landed in Firefox and is available to all feature developers. It also includes tools that take directories of CommonJS modules and convert them into a standalone XPI that can be installed into Firefox.
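As a rough illustration of that module style (the file names are just an example layout), an add-on might have a small local module and a main entry point like this:

// lib/greeting.js - a local CommonJS-style module
exports.makeGreeting = function(name) {
  return "Hello " + name + "!";
};

// lib/main.js - the add-on's entry point
var { makeGreeting } = require("./greeting");
var { notify } = require("sdk/notifications");

notify({
  text: makeGreeting("Firefox")
});

The SDK’s packaging tool (cfx, at the time of writing) turns a directory laid out like this into a standalone XPI.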

The next step in supporting Firefox and add-on developers is to land our APIs into Firefox itself so they are available to everyone. We hope to complete that by the end of this year and it will bring some great benefits, including smaller add-ons and the ability to fix certain problems in SDK based add-ons with Firefox updates.

Mythical goals

Jetpack has been around for a while and in that time our focus has changed. There are a few cases where I think people are mistaken about the goals we have. Here are the common things that were talked about as hard goals in the past but that I don’t think are goals any longer.

Displace classic add-ons

Most people think Jetpack’s main goal is to displace classic add-ons. It’s obvious that we’ve failed to do that. Were we ever in a position to do so? Expecting the developers of large add-ons to switch to a different style of coding (even a clearly better one) without some forcing factor doesn’t work. The electrolysis project might have done it, but even supporting e10s was easier than converting a large codebase to the add-on SDK. The extension ecosystem of today still includes a lot of classic add-ons, and the APIs we build should be usable by their developers, too.

Forwards compatibility

Users hate it when Firefox updates break their add-ons. Perfect forwards compatibility was another intended benefit of the SDK. Shipping our APIs with Firefox will help a lot, as the add-ons that use them will work even if the specific implementation of the APIs needs to change under the hood over time. It won’t be perfect, though. We’re going to maintain the APIs vigorously, but we aren’t fortune tellers. Sometimes parts of Firefox will change in ways that force us to break APIs that didn’t anticipate those changes.

What we can do, though, is get better at figuring out which add-ons will be broken by API changes and reach out to those developers to help them update their code. All add-on SDK based add-ons include a file that lists exactly which APIs they use, making it straightforward to identify those that might be affected by a change.

Cross-device compatibility

There’s a theory that says that as long as you’re only using SDK APIs to build your add-on, it will magically work on mobile devices as well as it does on desktop. Clearly we aren’t there yet. We are making great strides, but the goal isn’t entirely realistic either. Developers want to be able to use as many Firefox features as they can to make their new feature great, but many features on one device don’t exist or make no sense on other devices. Pinned tabs exist on desktop, and the SDK includes API support for pinning and unpinning tabs, but on mobile there is currently no support for pinned tabs. Honestly, I don’t think it’s something we should even have on phones. Adding APIs for add-ons to create their own toolbars makes perfect sense on desktop, but again makes no sense at all on phones.

So, do we make the APIs only support the lowest common denominator across all devices Firefox works on? Can we even know what that is? No. We will support APIs on the devices where it makes sense, but in some cases we will have to say that the API simply isn’t supported on phones, or tablets, or maybe desktops. What we can do, again by having a better handle on which APIs developers are using, is make device compatibility more obvious to developers, allowing them to make the final call on which APIs to employ.

Hopefully that has told you something you didn’t know before. Did it surprise you?

Simple image filters with getUserMedia

I forgot to blog about this last week, but Justin made me remember. The WebRTC getUserMedia API is available on the Nightly and Aurora channels of Firefox right now, and Tim has done a couple of great demos of using JavaScript to process the media stream. That got me interested, and after a little playing around I remembered learning the basics of convolution image filters, so I thought I’d give it a try. The result is a sorta ugly-looking UI that lets you build your own image filters to apply to the video coming off your webcam. There are a few pre-defined filter matrices there to get you started and it’s interesting to see what effects you can get. Remember that you need to enable media.navigator.enabled in about:config to make it work.
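For reference, getting the webcam stream into the page on those builds looks roughly like this (a sketch using the prefixed APIs of the time; the video element is assumed to already exist in the page):

// Ask for the webcam and pipe the resulting stream into a <video> element.
var video = document.getElementById("preview");

navigator.mozGetUserMedia({ video: true },
  function onSuccess(stream) {
    video.mozSrcObject = stream;   // prefixed version of srcObject in this era
    video.play();
  },
  function onError(error) {
    console.log("getUserMedia failed: " + error);
  }
);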

The downside is that, either because I’m not seeing an obvious optimisation or because JS is just too slow right now, it isn’t fast enough for real use. Even a simple 3×3 filter is too slow on my machine since it ends up having to do nine calculations per pixel, which is just too much. Can someone out there make it faster?

Update: The ever-awesome bz took a look and pointed out three quick fixes that made the code far faster; it now runs at almost realtime with a 3×3 filter for me. First, he pointed out that storing the pixel data in a variable outside the loops, rather than resolving it each time, speeds things up a lot. Secondly, let apparently isn’t fully supported by IonMonkey yet, so it switches to a slower path when it encounters it. Finally, I was manually clamping the result to 0–255, but pixel data is a Uint8ClampedArray so it clamps itself automatically.
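For anyone curious what the per-pixel work involves, here’s a minimal sketch of a 3×3 convolution over a frame grabbed from the video onto a canvas, with those three fixes applied (the function name and the example kernel are just for illustration; this isn’t the demo’s actual code):

// Draw the current video frame to the canvas and apply a 3x3 kernel to it.
// The one-pixel border is skipped for simplicity.
function applyFilter(video, canvas, kernel) {
  var ctx = canvas.getContext("2d");
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);

  var w = canvas.width, h = canvas.height;
  var src = ctx.getImageData(0, 0, w, h).data;   // keep the array in a local
  var output = ctx.createImageData(w, h);
  var dst = output.data;

  for (var y = 1; y < h - 1; y++) {
    for (var x = 1; x < w - 1; x++) {
      for (var c = 0; c < 3; c++) {              // red, green and blue channels
        var sum = 0;
        for (var ky = -1; ky <= 1; ky++) {
          for (var kx = -1; kx <= 1; kx++) {
            sum += src[((y + ky) * w + (x + kx)) * 4 + c] *
                   kernel[(ky + 1) * 3 + (kx + 1)];
          }
        }
        dst[(y * w + x) * 4 + c] = sum;          // Uint8ClampedArray clamps for us
      }
      dst[(y * w + x) * 4 + 3] = 255;            // opaque alpha
    }
  }
  ctx.putImageData(output, 0, 0);
}

// Example: a simple edge-detection kernel
// applyFilter(video, canvas, [0, 1, 0, 1, -4, 1, 0, 1, 0]);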

Six hour chilli

I’ve enjoyed making and eating chilli ever since I was in university. It has a lot of appeal for your average student:

  • You can make it in bulk and live off it for a few weeks
  • You can make it spicy enough that you start bleeding out of your nose (and thus totally impress all your friends)
  • It’s dead simple and really hard to screw up

The great part is that once you have the basic ingredients you can just go nuts: vary the ratios, add extra bits and pieces, whatever you fancy. Eventually though I discovered that I was getting a bit too wild and really wanted to nail down a basic “good” recipe to work from. It took a couple of tries but here it is. It’ll take around six hours to cook, plus an extra hour or so for prep depending on how lazy/drunk you are.


  • 2 cups chopped onions
  • 2.5 lb lean ground beef


  • 56 oz diced tomatoes (canned works just fine)
  • 2 tsp chilli powder
  • 2 tsp chilli pepper flakes
  • 1 tsp ground cumin
  • 1 tsp cayenne pepper
  • 1/8 cup cilantro leaves (I use dried but fresh is probably good too)
  • 1/2 tsp chipotle chilli pepper powder
  • 1/2 pint stout or other dark beer
  • 2 cups red wine (I tend to use merlot)
  • 2 cups beef broth
  • 1/8 cup tomato puree or 1/4 cup tomato ketchup
  • ~3-4 tsp coarsely ground black pepper


  • 3×12 oz cans of mixed beans
  • 2-3 large bell peppers (red and/or green), chopped


  • Fry the onions in some olive oil until they start to turn translucent, then throw in the beef and stir until it browns all over.
  • Add into a large pot with the tomatoes, spices, wine, beer and broth
  • Cover and simmer for 2 hours, stirring periodically (use the remains of the stout to help you get through this)
  • Drain and add the beans
  • Cover and simmer for 2 hours, stirring periodically (it’s possible you’ll need more stout)
  • Add the chopped peppers
  • Cover and simmer for 1-2 hours, stirring periodically
  • Serve with cornbread and the rest of the bottle of wine


  • This makes a lot of chilli; make sure you have a large enough frying pan and pot. You can try scaling it down, but I find it works best when made in large quantities.
  • After adding the peppers you’re basically simmering until it’s about the right consistency; uncovering the pot and/or adding some corn starch can help thicken it up at this stage.
  • This recipe comes out pretty spicy; you might want to drop the chipotle chilli pepper powder if that isn’t your thing.
  • Unless you are feeding an army, make sure you have tupperware to hold the leftovers. It reheats very well; if using a microwave, throwing in a little extra water helps.

After an awesome Jetpack work week

It’s the first day back at work after spending a great work week in London with the Jetpack team last week. I was going to write a summary of everything that went on, but it turns out that Jeff beat me to it. That’s probably a good thing as he’s a better writer than I am, so go there and read up on all the fun stuff that we got done.

All I’ll add is that it was fantastic getting the whole team into a room to talk about what we’re working on and get a load of stuff done. I ran a discussion on what the goals of the Jetpack project are (I’ll follow up on this in another blog post to come later this week) and was delighted that everyone on the team is on the same page. Employing people from all around the world is one of Mozilla’s great strengths but also a potential risk. It’s vital for us to keep doing work weeks and all hands like this to make sure everyone gets to know everyone and is working together towards the same goals.

Where are we all now?

Quite a while ago now I generated a map showing where all the Mozilla employees are in the world. A few of the new folks on my team reminded me of it, and when I showed it to them they wanted to know when I was going to get around to updating it. Challenge accepted! Back then the Mozilla phonebook was available as a static HTML page, so scraping it for the locations was easy. Now the phonebook is a prettier webapp, so the same technique doesn’t quite work. Instead I ended up writing a simple add-on to pull the main phonebook tree and then request the JSON data for each employee, dumping it all to a file on disk. Then some manipulation of the original scripts led to great success. Click through for a full zoomable map.
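The scraping itself was nothing fancy; the gist of it was something like this (the URLs and field names are placeholders rather than the real phonebook endpoints, and the real version ran inside an add-on and wrote to disk rather than the console):

// Pull the list of people, then fetch each person's JSON details and
// collect the location fields for the map scripts to consume.
var PHONEBOOK = "https://phonebook.example.org";   // placeholder URL

function getJSON(url) {
  return fetch(url).then(function(response) { return response.json(); });
}

getJSON(PHONEBOOK + "/tree.json").then(function(tree) {
  var requests = Object.keys(tree).map(function(mail) {
    return getJSON(PHONEBOOK + "/details.json?mail=" + encodeURIComponent(mail));
  });
  return Promise.all(requests);
}).then(function(people) {
  var locations = people.map(function(person) {
    return { name: person.name, city: person.city, country: person.country };
  });
  console.log(JSON.stringify(locations, null, 2));
});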

Mozilla staff around the world

One of the criticisms I heard the last time I did this was that it didn’t contain any information about the rest of the community. I’ll say basically the same thing that I said then. Get me the data and I’ll gladly put it on a map for all to see, but right now I don’t know where to find that information. I think this is something that Mozillians could do but I’m not sure what the chances are of getting everyone in the directory to add that info to their profile. I think this information is still useful even without the community as it demonstrates how global we are (and in some cases aren’t!) as a company.