What is an API?

I recently posted in the newsgroups about a concern over super-review. In some cases patches that seem to meet the policy aren’t getting super-reviewed. Part of the problem is that the policy is a little ambiguous: it says that any API or pseudo-API requires super-review, but depending on how you read that section it could mean that any patch changing the signature of a JS function counts as an API. We need to be smarter than that. Here is a straw-man proposal for defining what an API is:

Any code that is intended to be used by other parts of the application, add-ons or web content is an API, and requires super-review when it is added or its interface is changed.

Some concrete examples to demonstrate what I think this statement covers:

  • JS modules and XPCOM components in toolkit are almost all intended to be used by applications or add-ons and so are APIs
  • UI code in toolkit (such as extensions.js) isn’t meant to be used elsewhere and so isn’t an API (though such files may contain a few exceptions, such as observer notifications, and these should be clearly marked out in the file)
  • Any functions or objects exposed to web content are APIs
  • The majority of code in browser/ is only meant to be used within browser/ and so isn’t an API. There are some exceptions to this where extensions rely on certain functionality such as tabbrowser.xml

Do you think my simple statement above matches those cases and others that you can think of? Is this a good way to define what needs super-review?

Simple image filters with getUserMedia

I forgot to blog about this last week, but Justin made me remember. The WebRTC getUserMedia API is available on the Nightly and Aurora channels of Firefox right now, and Tim has done a couple of great demos of using JavaScript to process the media stream. That got me interested, and after a little playing around I remembered learning the basics of convolution image filters, so I thought I’d give it a try. The result is a sorta ugly-looking UI that lets you build your own image filters to apply to the video coming off your webcam. There are a few pre-defined filter matrices there to get you started and it’s interesting to see what effects you can get. Remember that you need to enable media.navigator.enabled in about:config to make it work.

The downside is that, either because I’m missing an obvious optimisation or because JS is just too slow right now, it isn’t fast enough for real use. Even a simple 3×3 filter is too slow on my machine, since it ends up doing 9 multiply-adds per channel for every pixel, which is just too much. Can someone out there make it faster?

Update: The ever-awesome bz took a look and pointed out three quick fixes that made the code far faster; it now runs at almost realtime with a 3×3 filter for me. First, he pointed out that storing frame.data in a variable outside the loops, rather than resolving it on every access, speeds things up a lot. Secondly, let apparently isn’t fully supported by IonMonkey yet, so it switches to a slower path whenever it encounters it. Finally, I was manually clamping the result to 0-255, but pixel data is a Uint8ClampedArray so it clamps itself automatically.
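
For anyone who wants to tinker, here is roughly what the per-frame code looks like with all three fixes applied. Treat it as a sketch rather than the exact demo code: the kernel, the video and canvas variables and the frame pump are all illustrative.

var kernel = [ 0, -1,  0,
              -1,  5, -1,
               0, -1,  0]; // a simple 3×3 sharpen matrix
var ctx = canvas.getContext("2d"); // canvas sized to match the video

function drawFrame() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  var output = ctx.createImageData(frame.width, frame.height);
  // Fix 1: pull the pixel arrays into locals rather than resolving
  // frame.data on every access inside the loops.
  var data = frame.data;
  var out = output.data;
  var width = frame.width, height = frame.height;
  // Fix 2: stick to var, since let currently knocks IonMonkey onto a
  // slower path.
  for (var y = 1; y < height - 1; y++) {
    for (var x = 1; x < width - 1; x++) {
      for (var c = 0; c < 3; c++) { // red, green, blue
        var sum = 0;
        for (var ky = -1; ky <= 1; ky++) {
          for (var kx = -1; kx <= 1; kx++) {
            sum += data[((y + ky) * width + (x + kx)) * 4 + c] *
                   kernel[(ky + 1) * 3 + (kx + 1)];
          }
        }
        // Fix 3: no manual clamping; out is a Uint8ClampedArray so
        // assignment clamps to 0-255 automatically.
        out[(y * width + x) * 4 + c] = sum;
      }
      out[(y * width + x) * 4 + 3] = 255; // fully opaque alpha
    }
  }
  ctx.putImageData(output, 0, 0); // border pixels skipped for brevity
  setTimeout(drawFrame, 16);
}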

Managing changes is the key to a project’s success

TomTom made an interesting claim recently. Their summary is “when it comes to automotive-grade mapping, open source has some quite serious limitations, falling short on the levels of accuracy and reliability required for safe navigation”.

This is a bold claim, and they talk about recent studies that back them up. Unfortunately none of those studies are referenced, but it’s pretty clear from the text of the article that all they are doing is comparing the accuracy of TomTom’s maps with existing open source maps. They’re generalising: this doesn’t prove a limitation of the open source process itself, just perhaps of one particular instance of it.

In fact, having read the article, I think TomTom are just misunderstanding how open source development works. Their basic complaint seems to be that open source maps are entirely community generated, with no proper review of the changes made. In such a situation I’m sure the data generated is always going to be liable to contain errors, sometimes malicious, sometimes honest. But that isn’t how open source development works in general (I make no claim to know how it works for open source mapping); I’d probably call such a situation crowd-sourcing.

Successful open source projects rely on levels of management controlling the changes that are made to the central repository of source code (or, in this case, mapping data). In Firefox, for example, every change is reviewed at least once by an expert in the area of code affected before being allowed into the repository, and most other open source projects I know of run similarly. It’s this management that is, in my opinion, key to a project’s success. Clamp down too hard on changes and development is slow, contributors get frustrated and walk away; be too lenient and too many bugs get introduced or the project veers out of control. You can adjust the level of control based on how critical the accuracy of the contribution is. Of course this isn’t some special circumstance of open source; closed source projects should operate in the same fashion.

The part of their post that amuses me is when they say “we harness the local knowledge of our 60 million satnav customers, who can make corrections through TomTom Map Share”. So basically they accept outside contributions to their maps too. As far as their development goes it sounds like they function much like an open source project to me! The only claim they make is that they have better experts reviewing the changes that are submitted. This might be true, but it has nothing to do with whether the project is open source or not; it’s just a question of who you find to control the changes submitted.

There is of course one place where open source is at an arguable disadvantage: the latest bleeding edge source is always available (or at least should be). If you look at changes as they come in, before QA processes and community testing have gone on, then of course you’re going to see issues. I’m sure TomTom have development versions of their maps that are internal only and probably have their fair share of errors waiting to be ironed out too. Open source perhaps makes it easier to end up using these development versions, so unless you know what you’re doing you should always stick to the more stable releases.

Just because a project accepts contributions from a community doesn’t mean it is doomed to fail, nor does it mean it is bound to succeed. What you have to ask yourself before using any project, open source or not, is how good those controlling the changes are and how many people are likely to have reviewed and tested the end result.


How Crashplan breaks xpcshell tests on Windows

I recently switched to a Windows laptop and have been going through the usual teething pains. One thing that confused me, though, was that xpcshell tests on the new machine would frequently fail with access denied errors. I’ve seen this sort of thing before, so I knew some service was monitoring files and opening them after they changed; while the service holds them open they can’t be deleted or edited, and tests often open, close and delete files so fast that there isn’t time for that to happen.

It took me a little while to remember that I can just use Process Monitor to track down the offending service: fire it up, set a filter to only include results for a particular directory (the temp directory in this case), create a file there and see what shows up. I was quite surprised to see Crashplan, the backup software I (and probably many people in Mozilla) use. Surprised because Crashplan isn’t set to back up my temp directory, and I shudder to think what the performance cost is of something continually accessing every file that changes in the temp directory.

Turns out you can turn it off though. Hidden in the depths of Crashplan’s advanced backup settings is an option to disable real-time filesystem watching. From what I can see online, the downside is that files will only be backed up once a day, but that’s a pretty fine tradeoff for having functioning xpcshell tests as far as I’m concerned. There is also an option to put Crashplan to sleep for an hour or so; that seems to work too, but I don’t know exactly what it does.

It confuses me a little why Crashplan monitors files it never intends to back up (even when the backup server isn’t connected and no backup is in progress), and it performs quite a lot of file accesses while doing so. It seems likely to be a bug to me, but at least I can work around it for now.

WebApp Tabs, version control and GitHub

The extension I’ve been working on in my spare time for the past couple of weeks is now available as a first (hopefully not too buggy) release. It lets you open WebApps in Thunderbird, properly handling loading new links into Firefox and making features like spellchecking work in Thunderbird (most other extensions I found didn’t do this). You can read more about the actual extension at its homepage.

Mostly I’ve been really encouraged during the development of this by just how far our platform has come for developing restartless add-ons. When we first made it possible in Firefox 4 there was a whole list of things that were quite difficult to do, but we’ve come a long way since then. While some things are still difficult, lots are now pretty straightforward. My add-on loads simple XUL overlays and style overlays, and installs JS XPCOM components with category manager registration, all similar to older add-ons. In fact I’m struggling to think of things that are still hard to do, though I’m sure other more prolific developers will have plenty of comments on that!

The other thing I’ve been doing with this extension is experimenting with git and GitHub, and it’s been an interesting experience. There are continual arguments over whether git or mercurial is better, with many pros and cons listed, but I think most of those comparisons were made some time ago, before mercurial and git really matured; from what I’ve seen there is really little difference between the two. They have slightly different default branching styles, but each can do the same kind of branching as the other if you want, and the remaining differences are minor enough that I wouldn’t be all that bothered deciding which to use. I think the only place where git has a real bonus is GitHub, and as far as I can see there isn’t a reason why someone couldn’t develop a similar site backed by mercurial repositories; it’s just that no-one really has.

GitHub is pretty nice, with built-in basic issue tracking and documentation, though it still has some frustrating issues. It seems odd, for example, that you can’t fork your own project, only someone else’s, but that’s only a minor niggle really. As project hosting goes, I can’t say I’ve come across anything better that I can remember.

Overlays without overlays in restartless add-ons

Perhaps the most common way of making changes to Firefox with an extension has always been using the overlay. For a window’s UI you can make changes to the underlying XUL document, add script elements to be executed in the context of the normal window’s code and add new stylesheets to the window to change how the UI looks.

Restartless add-ons change this completely: the normal overlay and style-overlay mechanisms just aren’t available to them, and this is likely to remain true for a while, since these mechanisms don’t clean up after themselves when the add-on is uninstalled.

This can make things hard, particularly when porting older add-ons to become restartless. I was in this situation earlier this weekend, working on porting David Ascher’s WebTabs for Thunderbird to be restartless. I could have just moved all the script code over to bootstrap.js, but in many ways it is nice to keep the code that works on the main UI separate from the code that runs for the preferences UI and so on. Plus I like to play around with new ways of doing things, so I developed a JS module I’m calling the OverlayManager.

The OverlayManager watches for new windows being opened, and for every new window it can run JS scripts and apply CSS stylesheets in a way that is easy to undo if the add-on is disabled at runtime. Although it can’t do any XUL modifications right now (I didn’t need any for this particular extension), it would be pretty easy to extend it to support a minimal amount of XUL overlaying.

Stylesheets are loaded by adding an HTML style tag to the XUL document, so they can be removed easily when the add-on is disabled. Scripts are handled in a way that may even be better than normal overlays. In the old system, extension scripts all run in the same context as the window they overlay, giving rise to the possibility of conflicts. Restartless add-ons shouldn’t do this, since it makes removing the script code again much more troublesome. The OverlayManager instead creates a sandbox to run each script in. The sandbox’s prototype is set to the window the script is being run for, meaning the script sees all the functions and objects of the window directly in its own scope, but as long as it doesn’t modify any of the objects in the main window’s code, all we have to do to get rid of its JS is throw away the sandbox.
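
To give a flavour of how this works, here is a minimal sketch of loading a script into such a sandbox. The function name is invented for illustration; Components.utils.Sandbox and the subscript loader are the real platform pieces.

Components.utils.import("resource://gre/modules/Services.jsm");

function runScriptInWindow(window, scriptURL) {
  // The sandbox's prototype is the window, so the script sees the
  // window's functions and objects directly in its own scope.
  var sandbox = Components.utils.Sandbox(window, {
    sandboxPrototype: window
  });
  Services.scriptloader.loadSubScript(scriptURL, sandbox);
  return sandbox;
}

// Disabling the add-on just means dropping all references to the
// sandbox so its JS can be garbage collected.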

There are a few differences from normal overlays, of course. The script shouldn’t use load and unload event handlers on the window, as it may be loaded well after the window loads or unloaded well before it closes. Instead the OverlayManager looks for an OverlayListener object in the script and calls load and unload methods on it; these are called either with the window’s real load and unload events, or immediately if the window is already open when the script is loaded or unloaded. You also can’t reference code in the script from JS string blocks, so setting onclick="myfunc()" on a XUL element wouldn’t work: the attribute runs in the main window scope, which can’t see the sandbox code. This tends to be pretty simple to get around by using addEventListener for all your events, as in the example below.
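
A script written for the OverlayManager might therefore look something like this (the element ID and handler here are made up for the example):

var OverlayListener = {
  load: function() {
    // Called with the window's real load event, or straight away if
    // the window was already open when the script was loaded.
    document.getElementById("my-button")
            .addEventListener("command", myFunc, false);
  },

  unload: function() {
    // Called with the window's unload event, or when the add-on is
    // disabled while the window is still open.
    document.getElementById("my-button")
            .removeEventListener("command", myFunc, false);
  }
};

function myFunc() {
  // Lives in the sandbox scope, where onclick="myFunc()" in the XUL
  // couldn't have reached it.
}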

You can see the existing state of the code on GitHub, and an example of the structure you’d pass to OverlayManager.addOverlays is in the bootstrap script for the same project. It is appropriately licensed so go nuts!

Update: I changed the stylesheets to use XML processing instructions, to be more like the way they work currently, and just for fun I implemented the very basics of document overlaying. It’s almost totally untested though, so YMMV.
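
The processing instruction version looks roughly like this (a sketch, with the function name invented):

function addStylesheet(window, styleURL) {
  var doc = window.document;
  var pi = doc.createProcessingInstruction("xml-stylesheet",
      'href="' + styleURL + '" type="text/css"');
  doc.insertBefore(pi, doc.documentElement);
  return pi; // keep hold of this so it can be removed on disable
}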

Adding add-on preferences to the Add-ons Manager

For some time now Firefox for mobile has had a nice feature where add-ons can embed their preferences right into the list of add-ons, with no need to open a whole new window like add-ons for desktop have to. During the development of Firefox 4 we were a little jealous of what the mobile team had done, so we drew up some ideas for how the same functionality would look on desktop. We didn’t get time to implement them then, but I’m excited that someone from the community stepped up and implemented it for us. Not just that, but he made the code shared between mobile and desktop, added some new option types, and made it work for restartless add-ons, which are unable to register their own chrome.

The basic idea is simple: create a XUL file containing a list of <setting> elements. Different types of settings are possible: checkboxes, input boxes, menulists, buttons, etc. Each one shows up as a row in the details view for an add-on in the add-ons manager. The XUL file can either just be added to your XPI (call it options.xul) or be referenced by the optionsURL option in your install.rdf.
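
As an example, an options.xul along these lines gives you a checkbox, a text box and a menulist (the pref names are made up; the detailed docs mentioned below cover the full list of setting types):

<?xml version="1.0"?>
<vbox xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <setting type="bool" pref="extensions.myaddon.enabled"
           title="Enable the feature"/>
  <setting type="string" pref="extensions.myaddon.name"
           title="Display name"/>
  <setting type="menulist" pref="extensions.myaddon.mode" title="Mode">
    <menulist>
      <menupopup>
        <menuitem value="fast" label="Fast"/>
        <menuitem value="careful" label="Careful"/>
      </menupopup>
    </menulist>
  </setting>
</vbox>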

Get it right and you’ll see something like this: [screenshot of inline settings in the add-ons manager]

I want to thank Geoff Lankow (darktrojan on IRC) for his awesome work getting this done. This feature is now in the Aurora builds and it’d be great to get add-on developers playing with it. Geoff even wrote up some detailed docs to help you out.

As a bonus, Geoff also implemented support for in-tab preferences, which makes Firefox load an add-on’s options UI in a new tab instead of a new window. Setting the optionsType property to 3 enables this.
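
In install.rdf that looks something like this (the chrome URL is a placeholder for your own options document):

<Description about="urn:mozilla:install-manifest">
  ...
  <em:optionsURL>chrome://myaddon/content/options.xul</em:optionsURL>
  <em:optionsType>3</em:optionsType>
</Description>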

Unloading JS modules

One of the problems with writing a restartless add-on is that you have to be careful to undo everything your add-on does when it is told to shut down. This means that right now some features of the platform can’t be used, as we have no way to undo them. Recently I made this list a little shorter by making it possible to unload JS modules loaded with Components.utils.import().

Just call Components.utils.unload(uri) and the module loaded from that URI will be unloaded. Take care when you do this, because anything that still has references into the module will stop working. Firefox also caches the module’s code for fast loading the next time around; the add-ons manager clears this cache when your add-on is updated or uninstalled, so you mostly don’t have to worry about it, but if you do something strange like unload a module, manually alter the file and then import it again, you won’t get the latest code.
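
Usage is symmetrical with import; the module URL here is just a placeholder:

const Cu = Components.utils;

// Load the module as usual.
Cu.import("resource://myaddon/modules/MyModule.jsm");

// ... use the module ...

// Later, for example from shutdown() in bootstrap.js:
Cu.unload("resource://myaddon/modules/MyModule.jsm");
// Anything still holding references into the module will now find
// them broken, so drop those references first.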

This support is in the upcoming Aurora build.

Why do Firefox updates break add-ons?

Our success in switching to the new rapid release cycle for Firefox has stirred up lots of excitement in the community, and I wouldn’t be surprised if that intensifies when we ship the next update to Firefox in 8 weeks’ time. People keep pointing out that every time we update Firefox we break add-ons, so surely faster releases mean add-ons get broken faster. Many people don’t really understand why Firefox updates should break add-ons at all, so here is my attempt at an explanation, and at why rapid releases maybe aren’t such a bad thing after all.

The crux of the matter is how deeply we allow add-ons to integrate with Firefox. Most browsers separate add-ons from their code using what can be called an add-on API: all the add-on can see and use is what the browser makes available through that API, and this is normally a restricted set of functionality. Firefox has no such separation. Add-on code runs in exactly the same setting as the browser code; add-ons can call any of the internal functionality that we write to make the browser work in the first place.

There are big advantages to the Firefox way:

  • Add-ons can do basically anything, allowing really complicated add-ons like Firebug and NoScript to exist.
  • Authors don’t have to wait for a feature to be exposed through some API before they can use it; as soon as the feature is in the application, add-ons can use it.

There are also some disadvantages:

  • Add-ons can do basically anything so you should be wary about installing add-ons from people you don’t trust!
  • Because we aren’t forced to provide an API specifically for add-ons, what is there can sometimes be cumbersome for add-ons to use.
  • Add-ons rely on internal functions; if we change one, the add-ons that depend on it break.

It is that last point which explains why Firefox updates break add-ons. Any time we add, remove or change some code, we run the risk that some add-on depends on it and so will break. Some bits of code are more important than others, but we have such a vast library of add-ons now that it’s probably getting to the point where almost everything is in use. When people say that Firefox updates shouldn’t break add-ons, what they don’t realise is that they’re asking us to stop making changes to Firefox: no new features, no bug fixes.

How does the Add-ons SDK fit in?

With the most recent update to Firefox we also made the first official release of the new Add-ons SDK available. These are tools designed to make developing add-ons for Firefox easier and faster, and they largely serve as an add-on API sitting between the add-on and Firefox, with one difference: the API is actually a part of the add-on, so you can write your own additional APIs to supplement those in the core SDK.

Add-ons written with the SDK (and only using the core APIs) gain a lot of the benefits of add-ons for other browsers. The APIs in the SDK can be far more stable than those in the browser itself: as the browser code changes, the internal SDK code can adapt to match, allowing add-ons to keep working just by rebuilding with the newer SDK. This makes the problem of Firefox updates much smaller for those who can move their add-ons to the SDK.

I don’t foresee a future where the only add-ons are SDK based though. We’d lose too many really important add-ons that way so the SDK doesn’t completely solve the problem.

How does rapid release affect the problem?

As I’ve said, when Firefox updates, it breaks add-ons. Ask any add-on author how much fun it was fixing their add-ons to work from Firefox 2 to Firefox 3, or 3.6 to 4, and you’ll probably get some pained expressions. Now consider that in the past we would do a Firefox update about once a year, whereas now we are talking about more than eight times a year. Having given up maintaining my own add-ons, I can hardly blame authors who are concerned that they won’t be able to keep up with the pace.

It actually isn’t all bad though. It’s not like we are making more changes than before; we’re just releasing them sooner, in bite-sized chunks. Instead of one update a year that probably changes every piece of functionality an add-on might use, we do eight smaller updates, each one touching smaller parts of the code. This makes it more likely that an add-on can survive an update than before.

The stability of updates is a key change too. It used to be that add-on authors might choose to wait for the RC stage of an update before updating their add-ons; we’d make so many changes even during the beta cycle that sometimes it wasn’t worth trying to keep up. In our new cycle though there is a full 6 weeks in the beta cycle before release, during which time almost nothing changes: no new features, extremely limited bugfixes, just things to solve any stability issues that have been found. Before that there are 6 weeks in the aurora cycle, which again should be largely change free. Some new features might get removed at this point and there is the possibility of larger bugfixes, but based on the two aurora cycles we’ve had so far there are far fewer changes going in than there were in the final beta stages of Firefox 4.

I actually think that rapid releases can make it easier for add-on authors to keep up. Sure, you have to check your add-on more often, but most of the time it will probably just work; if not, the scope of the change is likely to be smaller, and you have 12 weeks of pretty stable code to build against.

Why not just mark add-ons as working when they are?

A pretty common question is why we don’t just mark add-ons as working in a new update of Firefox when we know they are. The good news is that we are actually starting to do this. As we make changes to Firefox we plan to keep track of which changes have the potential to affect add-on compatibility, and automatically scan add-ons on AMO to find ones that might break.

In the run-up to the most recent update of Firefox we found nearly 4,000 add-ons that were unlikely to be affected by the update and just marked them as compatible, with no need for the add-on author to do anything. For the others we emailed the author to explain what problems we had found. You can read more about the compatibility checking process we’ll be using.

As well as the automated tools, the Add-on Compatibility Reporter provides add-on authors with valuable feedback on how their add-ons are performing in Firefox. Users can install this tool, use normally incompatible add-ons, and file reports on whether they are working or not.

Faster releases are good for add-ons

Rapid release is all about getting new features into users’ hands faster. It has a side effect though: it gets new features and APIs into add-on authors’ hands faster too. Back when I was still writing add-ons I would frequently do so in response to some fantastic new thing that had been added to the Firefox nightlies. It could then be up to a year before I’d be able to release an add-on that normal users could actually make use of, by which point I’d often have lost interest in it.

I think that on the whole faster releases will be a good thing for add-ons. Add-on developers can get exciting new features into users’ hands faster and should only need to make smaller updates with each new Firefox. The new automated tools we are building to help authors understand when and why their add-ons are no longer compatible are going to be a massive improvement, especially as we get better at identifying the changes we make in Firefox that will affect add-ons.

Time will tell of course, but I do believe that the overall work an author has to do to keep an add-on up to date with Firefox is going to drop with the new rapid release cycle even though they may have to do something more often.

Creating custom add-on types just got easier

One of the nice features that we added to the add-ons manager in Firefox 4 was support for custom add-on types that could be treated the same way as the built-in types, even showing up in the same UI if you did a little work. I blogged a basic example of how to do this and I know since then Greasemonkey and Stylish have been using the support.

A few days ago (and now in Aurora builds) I landed some improvements that simplify the hardest part of this, making the UI appear.

Registering a custom type

Previously, in order to add a new type of add-on and have it appear in the add-ons manager, you had to do two things: first, tell the add-ons manager module that you supported add-ons, and second, manually create some elements in the UI to allow the user to view them. These two things have now been combined:

AddonManagerPrivate.registerProvider(myProvider, [{
  id: "scripts",
  name: "User Scripts",
  uiPriority: 3500,
  viewType: AddonManager.VIEW_TYPE_LIST
}]);

This registers the provider as before but also tells the module that your provider supports an AddonType with the ID “scripts” and some other information. The documentation explains a little more about the properties. You’ll note it passes an array to allow for providers registering multiple types.

Currently you only need to do this if you want your add-on types to show up in the UI; however, we may start requiring providers to register their types to help speed up some of the API calls. If you want to register your type but not have the UI automatically show it, just don’t give it a viewType.
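
For reference, the myProvider object in the snippet above is an ordinary provider. A bare-bones one handling the new type might look something like this (the method names follow the provider API; the empty results are stand-ins for your own add-on objects):

var myProvider = {
  getAddonByID: function(aId, aCallback) {
    // Pass one of our "Addon"-like objects to the callback, or null
    // if the ID isn't one of ours.
    aCallback(null);
  },

  getAddonsByTypes: function(aTypes, aCallback) {
    if (aTypes && aTypes.indexOf("scripts") == -1) {
      aCallback([]);
      return;
    }
    // Pass an array of "Addon"-like objects for everything we manage.
    aCallback([]);
  }
};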

Breaking changes

Some of the changes we had to make may have broken add-ons that were doing this the hard way in the past. Here are the ones I’ve identified and some ways to work around the problems:

Types must be registered

The UI now requires that if your add-ons are displayed in the regular list view then their type must have been registered. If you only care about supporting Firefox 6 and higher, you can just remove your code for adding your type to the category list. If you want to support both, you should either only do that on versions before Firefox 6, or always add your type to the UI manually but also register it with no viewType.

Some IDs changed in the UI

We made some changes to the IDs of elements in the add-ons manager UI: we dropped the “s” on the end of the type selectors, so category-extensions became category-extension, for example.

Category items aren’t in the UI immediately

Because we now build the list of types to display in the UI (even the built-in types use the same type registration I’ve talked about here), the elements for those types aren’t in the UI until after the main initialize function for the window has completed. If you use overlays that refer to those elements, or run script that tries to find them before the initialize function completes (it is a load event listener), they won’t exist and you’ll probably find things broken.