Six years revisited

Two years ago I blogged about how it had been six years since I wrote my first patch for Firefox. Today I get to say that it’s been six years since I started getting paid to do what I love, working for Mozilla. In that time I’ve moved to a new continent, found a wife (through Mozilla, no less!), progressed from coder to module owner to manager, seen good friends leave for other opportunities and others join, and watched (dare I say helped?) Mozilla, and the Firefox team in particular, grow from the small group that I joined in 2007 to the large company that will soon surpass 1,000 employees.

One of the things I’ve always appreciated about Mozilla is how flexible they can be about where you work. Recently my wife and I decided to move away from the Bay Area to be closer to family. It was a hard choice to make, as it meant leaving a lot of friends behind, but one thing that made it easier was knowing that I wouldn’t have to factor work into the decision. Unlike at many other companies I know of, there is no strict requirement that everyone work from the office. True, it is encouraged, and it sometimes makes sense to require it for some employees, particularly when they’re starting out, but I knew that when it came time to talk to my manager about it he wouldn’t have a problem with me switching to working remotely. Of course, six years ago when I started I was living in the UK, and I remained there for my first two years at Mozilla, so he had a pretty good idea that I could handle it; at least this time I’m only separated from the main office by a short distance and no time zones.

The web has changed a lot in the last six years. Back then we were working on Firefox 3, the first release to contain the awesomebar and a built-in way to download extensions from AMO. Twitter and Facebook had only been generally available for about a year. The ideas for CSS3 and HTML5 were barely written down, let alone implemented. If you had told me back then that you’d be able to play a 3D game in your browser with no additional plugins, or watch videos without Flash, I’d probably have dismissed them as crazy pipe dreams. We weren’t even JIT-compiling our JS code back then. Mozilla, along with the other browser makers, is continuing to prove that HTML, CSS and JS are a winning combination that we can build on to make the future of the web open, performant and powerful. I can’t wait to see what things will be like in another six years.

Firefox now ships with the add-on SDK

It’s been a long ride but we can finally say it. This week Firefox 21 shipped and it includes the add-on SDK modules.

We took all the Jetpack APIs and we shipped them in Firefox!

What does this mean? Well, for users it means two important things:

  1. Smaller add-ons. Since they no longer need to ship the APIs themselves, add-ons only have to include the unique code that makes them special. That’s something like a 65% file-size saving for the most popular SDK-based add-ons, probably more for simpler add-ons.
  2. Add-ons will stay compatible with Firefox for longer. We can evolve the modules in Firefox that add-ons use, so that most of the time when changes happen to Firefox the modules seamlessly shift to keep working. There are still some cases where that might be impossible (when a core feature is dropped from Firefox, for example) but hopefully those will be rare.

To take advantage of these benefits, add-ons have to be repacked with a recent version of the SDK. We’re working on a plan to do that automatically for existing add-ons where possible, but developers who want the benefits right now can repack their add-ons themselves, either with SDK 1.14 using cfx xpi --strip-sdk, or with the next release, SDK 1.15, which will strip the bundled SDK by default.

Get notifications about changes to any directory in mercurial

Back in the old days, when we used CVS of all things for our version control, we had a wonderful tool called bonsai to help query the repository for changes. You could list changes on a per-directory basis if you needed to, which was great for keeping an eye on certain chunks of code. I recall there being a way of getting an RSS feed from it, and I used it when I was the module owner of the extension manager to see what changes had landed that I hadn’t noticed in bugs.

Fast forward to today and we use mercurial instead. When we switched there was much talk of how we’d get tool parity with CVS, but bonsai is something that has never been fully replaced. Oh, hgweb is decent at looking at individual files and browsing the tree, but you can’t get that list of changes per directory from it. I believe you can do it from the command line, but who wants to do that? Lately I’ve been finding need of those directory RSS feeds more and more. We’re now periodically uplifting the Add-on SDK repository to mozilla-central, so it’s really important to spot changes that have been made to that directory in mozilla-central so we can also land them in our git repository and not clobber them the next time we uplift. I’m also the module owner of Toolkit, which is a pretty big, sprawling set of files. It seems like every time I look I find something that landed without me noticing. I don’t make for a good module owner if I’m not keeping an eye on things, so I’d really like to see when new files are added there.

So I introduce the Hg Change Feed, the result of mostly just a few days of work. Every 10 minutes it pulls new changes from mozilla-central and mozilla-inbound. A mercurial hook looks over the changes and adds information about them to a MySQL database, and a simple django app then displays that information. As you browse through the directories in the tree it shows only the changesets that affected files beneath that directory. For any directory you can also get an RSS feed of the same. Plug that into IFTTT and you have an automated system to notify you, in pretty much any way you’d like, about new changes you’d be interested in.

Some simple examples: for tracking changes to the Add-on SDK I’m watching http://hgchanges.fractalbrew.com/mozilla-inbound/file/addon-sdk/source. For toolkit I’m looking at http://hgchanges.fractalbrew.com/mozilla-inbound/file/toolkit?types=added. The types parameter takes a comma-separated list of “added”, “removed” and “modified” to filter which changes you’re interested in. There’s no UI on the site for changing that right now; you’re welcome to add some!
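If you’d rather consume one of those feeds directly instead of going through IFTTT, something like this sketch works from any privileged context (a chrome Scratchpad, for example). The feed URL below is a placeholder, not a real address; use the RSS link from the directory page you’re watching:

var feedUrl = "http://hgchanges.fractalbrew.com/..."; // placeholder, substitute the real RSS link
var xhr = new XMLHttpRequest();
xhr.open("GET", feedUrl);
xhr.overrideMimeType("text/xml"); // make sure responseXML gets populated
xhr.onload = function() {
  var items = xhr.responseXML.querySelectorAll("item");
  for (var i = 0; i < items.length; i++) {
    var title = items[i].querySelector("title").textContent;
    var link = items[i].querySelector("link").textContent;
    console.log(title + " - " + link);
  }
};
xhr.send();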

One other neat trick is that it mostly ignores merge changesets. A merge will only show up in the list of changes if it actually makes a change not already present in either of its parents (which mostly happens when resolving merge conflicts), because really you don’t need to hear about changes twice.

So play with it, let me know if you find it useful or if you think things are missing. I can also add other mercurial repositories if people want. Some caveats:

  • It only retains the last 2000 changesets from any repository, in an effort to keep the DB small and fast. It also only shows the last 200 changesets on each page, or just the last 20 in the feeds. These can be tweaked easily enough; I’ve done basically no benchmarking to say those are the right values.
  • The site isn’t as fast as I’d like; in particular, listing changes for the top-level directory takes nearly 5 seconds. I’ve thrown some basic caching in place to help alleviate that for now. I bet someone with more MySQL and django experience than me could tell me what I’m doing wrong.
  • I’m off on vacation tomorrow, so I guess I’m announcing this then running away; sorry if that means it takes me a while to respond to comments.

Want to help out and make it better? Go nuts with the source. There’s a readme that hopefully explains how to set up your own instance.

Hacking on Tilt

Tilt, or 3D view as it is known in Firefox, is an awesome visual tool that really lets you see the structure of a webpage. It shows you just how deep your tag hierarchy goes, which might be a sign that your page is too complex, or even help you spot errors in your markup that you wouldn’t otherwise notice. But what if it could do more? What if there were different ways to visualise the same page? What if even web developers could create their own visualisations?

I’ve had this idea knocking around in my head for a while now, and the devtools work week was the perfect time to hack on it. Here are some results; click through for larger images:

Normal 3D view
Only give depth to links
Only give depth to links going off-site
Only give depth to elements that have a different style on hover

This is all achieved with some changes to Firefox itself to make Tilt handle more generic visualisations, along with an extension that then overrides Tilt’s default visualisation. Because the extension has access to everything Firefox knows about the webpage, it can use some interesting sources of data about the page, including data not found in the DOM. This one I particularly like: it makes the element’s depth proportional to the number of attached DOM event listeners:

Give depth to elements based on the number of attached event listeners

Just look at that search box, and what’s up with the two buttons having different heights?

The code just calls a JS function to get the height for each element displayed in 3D view. It’s really easy to use DOM functions to highlight different things about the elements, and while I think some of the examples I made are interesting, I think it will be more interesting to let web devs come up with and share their own visualisations. To that end I also demoed using Scratchpad to write whatever function you like to control the visualisation. You can see a screencast of it in action.
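To give a flavour of what that function can look like, here’s a sketch of one that reproduces the “links going off-site” visualisation above. The name depthForElement and the way Tilt calls it are inventions of mine for illustration; the real hook is whatever the patched Tilt ends up exposing:

// Hypothetical visualisation function: assume Tilt calls this once per
// element and uses the return value as that element's extrusion depth.
function depthForElement(element) {
  if (element.localName != "a" || !element.href)
    return 0;               // non-links stay flat
  if (element.hostname != document.location.hostname)
    return 30;              // links going off-site stand tallest
  return 15;                // same-site links get some depth
}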

Something that struck me towards the end of the week is that it could be awesome to pair this up with external sources of data like analytics. What about being able to view your page with links given depth proportional to how often users click them? Seems like an awesome way to really understand where your users are going and maybe why.

I’m hoping to get the changes to Firefox landed soon, maybe with an additional patch to properly support extensibility in Tilt. Right now the extension works by replacing a function in a JSM, which is pretty hacky, but it wouldn’t be difficult to make it nicer than that. After that I’ll be interested to see what visualisation ideas others come up with.

The Add-on SDK is now in Firefox

We’re now a big step closer to shipping the SDK APIs with Firefox and other apps: we’ve uplifted the SDK code from our git repository to mozilla-inbound and, assuming it sticks, we will be on the trains for release. We’ll be doing weekly uplifts to keep the code in mozilla-central current.

What’s changed?

Not a lot yet. Existing add-ons, and add-ons built with the current version of the SDK, still use their own versions of the APIs shipped in their XPIs. Add-ons built with the next version of the SDK may start to use the APIs in Firefox in preference to those shipped in the XPI, and a future version will use only those in Firefox. We’re also talking about the possibility of making Firefox override the APIs in any SDK-based add-on and use the shipped ones automatically, so the add-on author wouldn’t need to do anything.

We’re working on getting the Jetpack tests running on tinderbox switched over to use the in-tree code; once we do, we’ll unhide them so other developers can immediately see when their changes break the SDK code. You can now run the Jetpack tests with a mach command or just with make jetpack-tests from the object directory. Those commands are a bit rudimentary right now; they don’t give you a way to choose individual tests to run. We’ll get to that.

Can I use it now?

If you’re brave, sure. Once a build including the SDK is out (it might be a day or so for nightlies), fire up a chrome-context Scratchpad and put this code in it:

// Import the SDK's CommonJS-style module loader.
var { Loader } = Components.utils.import("resource://gre/modules/commonjs/toolkit/loader.js", {});
// Create a loader that knows where the SDK modules live.
var loader = Loader.Loader({
  paths: {
    "sdk/": "resource://gre/modules/commonjs/sdk/",
    "": "globals:///"
  },
  resolve: function(id, base) {
    // Pass the "chrome" pseudo-module and "@"-prefixed ids straight through.
    if (id == "chrome" || id.startsWith("@"))
      return id;
    return Loader.resolve(id, base);
  }
});
// Create a module to represent the scratchpad itself, and a require
// function scoped to it.
var module = Loader.Module("main", "scratchpad://");
var require = Loader.Require(loader, module);

// From here on this is just normal SDK code.
var { notify } = require("sdk/notifications");
notify({
  text: "Hello from the SDK!"
});

Everything up to creating the require function sets up the SDK loader so it knows where to get the APIs from; the rest is just code as you’d include in an SDK add-on. You probably shouldn’t use this for anything serious yet. In fact, I haven’t included the code to tell the module loader to unload, so this example may leak things for the rest of the life of the application.
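If you do want to clean up, the loader module also exports an unload function. I believe the call looks something like this, though treat the exact signature as an assumption on my part:

// Shut the loader down so modules can release their resources.
Loader.unload(loader, "shutdown");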

This is too long, of course (and longer than it should be right now because of a bug), so one thing we’ll probably do is create a simple JSM that gives you a require function in one line and takes care of unloading when the app goes away.
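To be concrete, I’m imagining something with roughly this shape. The module name and export below are entirely hypothetical; no such JSM exists yet:

// Hypothetical future one-liner; require.jsm is invented for illustration.
var { require } = Components.utils.import("resource://gre/modules/commonjs/require.jsm", {});
var { notify } = require("sdk/notifications");
notify({ text: "Hello again from the SDK!" });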

Making Git play nice on Windows

A while ago now I switched to Windows on my primary development machine. It’s pretty frustrating, particularly when it comes to command-line support, but MozillaBuild goes a long way towards giving me a more comfortable environment. One thing that took a long time to get working right, though, was full colour support from both mercurial and git. Git in particular is a problem because it uses a custom version of MSYS which seems to conflict with the stock MSYS in MozillaBuild, leaving things broken if you don’t set it up just right. Since I just switched to a new machine I thought it would be useful to record how I got it working, perhaps more for my benefit than anything else, but perhaps others will find it useful too.

You want to install MozillaBuild, Mercurial and Git. MozillaBuild comes with Mercurial, but it’s generally an older version. This post assumes you put them in their default locations (on a 64-bit machine), so adjust accordingly if you changed that. When the Git installer asks, choose the most conservative option: don’t put Git in your PATH here.

You need some custom settings. First, create a .hgrc file in your home directory, or add the following to it if you already have one:

[extensions]
hgext.color=

[color]
mode=win32

Now create a .profile file in the same place:

export PATH="/c/Program Files/Mercurial":$PATH:"/c/Program Files (x86)/Git/bin"

The ordering is important there. $PATH will be set up to point to MozillaBuild (and other places) when you run the MozillaBuild startup scripts. You want the installed Mercurial to override the one in MozillaBuild, and you want the MozillaBuild binaries to override the custom MSYS versions that Git comes with.

Finally, you want a decent console to use. I’m using Console2 with general success. Install it, start it and go into its settings. Console2 supports creating custom tab types, so you can create a new MozillaBuild tab, or a new Windows console tab, etc. To do this, go to the “Tabs” section of the settings, click Add and give it a title of MozillaBuild. Then set the “Shell” option to the MozillaBuild startup script you want to use, e.g. “C:\mozilla-build\start-msvc10.bat” in my case. If you’re feeling extravagant (and have a checkout of mozilla-central), point the icon to browser/branding/official/firefox.ico.

Some final useful settings: in “Hotkeys”, assign Ctrl+T to “New Tab 1”. In “Mouse”, set “Select Text” to the left button and “Paste Text” to the right button.

And that should be it. Open a new MozillaBuild tab in Console2. If you’ve done everything right you should see messages about the Visual Studio and SDK versions chosen. “which hg” and “which git” should show you the binaries in Program Files somewhere. They should both run and output in colour when useful.

Let’s just put it in Toolkit!

Toolkit is fast turning into the dumping ground of mozilla-central. Once upon a time the idea was simple: any code that could be usefully shared across multiple applications (and in particular code that wasn’t large enough to deserve a module of its own) would end up in Toolkit. The rules were pretty simple: any code in there should work for any application that wants to use it. This didn’t always work exactly according to plan, but we did our best to fix SeaMonkey and Thunderbird incompatibilities as they came along.

This worked great when there was only one Firefox. Shared code went into m-c/toolkit, Firefox-specific code went into m-c/browser. There were always complaints that more of the code in browser should be moved to toolkit so SeaMonkey and other projects could make use of it, but otherwise there were no big issues.

Now we have more than one Firefox: Firefox for desktop, Firefox for Android, B2G, Metro and who knows what else to come. Suddenly we want to share code across different Firefoxen, often different sets of code depending on the feature, and often code that depends on other pieces, like services, that aren’t available in all the other applications. Keeping the rules as they stand now means that Toolkit isn’t the correct place for this code. So what do we do about this?

There only seem to be two sensible choices. Either we change the Toolkit rules to allow code that may not work in some applications, or we create a new catch-all module for this sort of code. And really those are the same thing, except that the second requires finding a new module owner for the new module. I’m going to ignore the third option, creating a new module for each new piece of code like this, as hopelessly bureaucratic.

So, I’m proposing that we (by which I mean I as module owner) redefine the rules for Toolkit to be a little broader:

  • Any code in Toolkit should be potentially useful to multiple applications but it isn’t up to the author to make it work everywhere.
  • Patches to make code work in other applications will be accepted if not too invasive.
  • Any code in Toolkit that is called automatically by Gecko (like the add-ons manager) must work in all applications.

Any strong objections?

What is an API?

I recently posted in the newsgroups about a concern over super-review. In some cases patches that seem to meet the policy aren’t getting super-reviewed. Part of the problem here is that the policy is a little ambiguous. It says that any API or pseudo-API requires super-review, but depending on how you read that section it could mean that any patch changing the signature of a JS function is classed as an API. We need to be smarter than that. Here is a straw-man proposal for defining what an API is:

Any code that is intended to be used by other parts of the application, add-ons or web content is an API and requires super-review when added or its interface is changed

Some concrete examples to demonstrate what I think this statement covers:

  • JS modules and XPCOM components in toolkit are almost all intended to be used by applications or add-ons and so are APIs
  • UI code in toolkit (such as extensions.js) isn’t meant to be used elsewhere and so isn’t an API (though it may contain a few cases that are, such as observer notifications; these should be clearly marked out in the file)
  • Any functions or objects exposed to web content are APIs
  • The majority of code in browser/ is only meant to be used within browser/ and so isn’t an API. There are some exceptions to this where extensions rely on certain functionality, such as tabbrowser.xml

Do you think my simple statement above matches those cases and others that you can think of? Is this a good way to define what needs super-review?

What is Jetpack here for?

Who are the Jetpack team? What are they here for? A lot of people in Mozilla don’t know that Jetpack still exists (or never knew it existed). Others still think of it as the original prototype, which we dropped in early 2010. Our goals have changed a lot since then. Even people who think they know what we’re about might be surprised at what our current work involves.

Let’s start with this basic statement:

Jetpack makes it easier to develop features for Firefox

There are a couple of points wrapped up in this statement:

  1. The Jetpack team doesn’t develop features. We enable others to develop features. We should be an API and tools team. We’ve done a lot of API development, but we should work with the developer tools team more to make sure that Firefox developer tools work for Firefox development, too.
  2. I didn’t say “add-ons”, I said “features”. The APIs we develop shouldn’t be just for add-on developers; Firefox developers should be able to use them too. If the APIs we create are the best they can be, then everyone should want to use them. They should demand to use them, in fact. It follows that this should make it easier for add-ons to transition into full Firefox features, or vice versa.

We happen to think that a modular approach to developing features makes more sense than the old style of using overlays and XPCOM. All of the APIs we develop are CommonJS-style modules (as used in Node and elsewhere) and we’ve built the add-on SDK to simplify this style of add-on development. That work includes a loader for CommonJS modules, which has now landed in Firefox and is available to all feature developers. It also includes tools that take directories of CommonJS modules and convert them into a standalone XPI that can be installed into Firefox.
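For anyone unfamiliar with the style, here’s a minimal sketch of what a pair of CommonJS modules looks like; the file names and contents are made up for illustration:

// lib/greeting.js - a module exposes functionality by assigning to exports
exports.greet = function(name) {
  return "Hello, " + name + "!";
};

// lib/main.js - another module pulls it in with require()
var { greet } = require("./greeting");
console.log(greet("Jetpack"));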

The next step in supporting Firefox and add-on developers is to land our APIs into Firefox itself so they are available to everyone. We hope to complete that by the end of this year and it will bring some great benefits, including smaller add-ons and the ability to fix certain problems in SDK based add-ons with Firefox updates.

Mythical goals

Jetpack has been around for a while, and in that time our focus has changed. There are a few cases where I think people are mistaken about the goals we have. Here are the common things that were talked about as hard goals in the past but that I no longer believe are goals.

Displace classic add-ons

Most people think Jetpack’s main goal is to displace classic add-ons. It’s obvious that we’ve failed to do that. Were we ever in a position to do so? Expecting the developers of large add-ons to switch to a different style of coding (even a clearly better one) without some forcing factor doesn’t work. The electrolysis project might have done it, but even supporting e10s was easier than converting a large codebase to the add-on SDK. The extension ecosystem of today still includes a lot of classic add-ons, and the APIs we build should be usable by their developers, too.

Forwards compatibility

Users hate it when Firefox updates break their add-ons. Perfect forwards compatibility was another intended benefit of the SDK. Shipping our APIs with Firefox will help a lot, as the add-ons that use them will work even if the specific implementation of the APIs needs to change under the hood over time. It won’t be perfect, though. We’re going to maintain the APIs vigorously, but we aren’t fortune tellers. Sometimes parts of Firefox will change in ways that force us to break APIs that didn’t anticipate those changes.

What we can do, though, is get better at figuring out which add-ons will be broken by API changes and reach out to those developers to help them update their code. All add-on SDK based add-ons include a file that lists exactly which APIs they use, making it straightforward to identify those that might be affected by a change.

Cross-device compatibility

There’s a theory that says that as long as you’re only using SDK APIs to build your add-on, it will magically work on mobile devices as well as it does on desktop. Clearly we aren’t there yet either. We are making great strides, but the goal isn’t entirely realistic. Developers want to use as many Firefox features as they can to make their new feature great, but many features on one device don’t exist or make no sense on other devices. Pinned tabs exist on desktop, and the SDK includes API support for pinning and un-pinning tabs, but on mobile there is currently no support for pinned tabs; honestly, I don’t think it’s something we should even have for phones. Adding APIs for add-ons to create their own toolbars makes perfect sense on desktop, but again makes no sense at all on phones.

So, do we make the APIs only support the lowest common denominator across all devices Firefox works on? Can we even know what that is? No. We will support APIs on the devices where it makes sense, but in some cases we will have to say that the API simply isn’t supported on phones, or tablets, or maybe desktops. What we can do, again by having a better handle on which APIs developers are using, is make device compatibility more obvious to developers, allowing them to make the final call on which APIs to employ.

Hopefully that has told you something you didn’t know before. Did it surprise you?

Simple image filters with getUserMedia

I forgot to blog about this last week, but Justin made me remember. The WebRTC getUserMedia API is available on the Nightly and Aurora channels of Firefox right now, and Tim has done a couple of great demos of using JavaScript to process the media stream. That got me interested, and after a little playing around I remembered learning the basics of convolution image filters, so I thought I’d give them a try. The result is a sorta ugly-looking UI that lets you build your own image filters to apply to the video coming off your webcam. There are a few pre-defined filter matrices there to get you started, and it’s interesting to see what effects you can get. Remember that you need to enable media.navigator.enabled in about:config to make it work.

The downside is that, whether through me not seeing an obvious optimisation or JS just being too slow right now, it isn’t fast enough for real use. Even a simple 3×3 filter is too slow on my machine, since it ends up having to do 9 calculations per pixel, which is just too much. Can someone out there make it faster?

Update: The ever-awesome bz took a look and pointed out three quick fixes that made the code far faster; it now runs almost in real time with a 3×3 filter for me. First, he pointed out that storing frame.data in a variable outside the loops, rather than resolving it each time, speeds things up a lot. Secondly, apparently let isn’t fully supported by IonMonkey yet, so it switches to a slower path when it encounters it. Finally, I was manually clamping the result to 0-255, but pixel data is a Uint8ClampedArray so it clamps itself automatically.
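Putting those three fixes together, the hot path looks something like the sketch below. This isn’t the demo’s exact code: the element ids and the sharpen kernel are made up, and it assumes a video element already playing the getUserMedia stream with a same-sized canvas to draw into.

var video = document.getElementById("webcam");   // hypothetical ids
var canvas = document.getElementById("output");
var ctx = canvas.getContext("2d");
var kernel = [ 0, -1,  0,
              -1,  5, -1,
               0, -1,  0]; // a simple 3x3 sharpen matrix

function drawFrame() {
  var w = canvas.width, h = canvas.height;
  ctx.drawImage(video, 0, 0, w, h);
  var frame = ctx.getImageData(0, 0, w, h);
  var src = frame.data;                 // fix 1: hoist frame.data out of the loops
  var out = ctx.createImageData(w, h);
  var dst = out.data;                   // fix 3: Uint8ClampedArray clamps to 0-255 itself
  for (var y = 1; y < h - 1; y++) {     // fix 2: var rather than let keeps IonMonkey on the fast path
    for (var x = 1; x < w - 1; x++) {
      for (var c = 0; c < 3; c++) {     // red, green and blue channels
        var sum = 0, k = 0;
        for (var ky = -1; ky <= 1; ky++)
          for (var kx = -1; kx <= 1; kx++)
            sum += src[((y + ky) * w + x + kx) * 4 + c] * kernel[k++];
        dst[(y * w + x) * 4 + c] = sum; // no manual clamping needed
      }
      dst[(y * w + x) * 4 + 3] = 255;   // fully opaque alpha
    }
  }
  ctx.putImageData(out, 0, 0);
  setTimeout(drawFrame, 16); // or (moz)requestAnimationFrame where available
}
drawFrame();

The one-pixel border is skipped for simplicity, so the edges come out blank.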