Get notifications about changes to any directory in mercurial

Back in the old days, when we used CVS of all things for our version control, we had a wonderful tool called Bonsai to help query the repository for changes. You could list changes on a per-directory basis if you needed to, which was great for keeping an eye on certain chunks of code. I recall there being a way of getting an RSS feed from it, and I used it when I was the module owner of the extension manager to see what changes had landed that I hadn’t noticed in bugs.

Fast forward to today and we use mercurial instead. When we switched there was much talk of how we’d get tool parity with CVS, but Bonsai is something that has never been fully replaced. Oh, hgweb is decent at looking at individual files and browsing the tree, but you can’t get that list of changes per directory from it. I believe you can use the command line to do it, but who wants to do that? Lately I’ve been finding a need for those directory RSS feeds more and more. We’re now periodically uplifting the Add-on SDK repository to mozilla-central, so it’s really important to spot changes that have been made to that directory in mozilla-central so we can also land them in our git repository and not clobber them the next time we uplift. I’m also the module owner of toolkit, which is a pretty big, sprawling set of files. It seems like every time I look I find something that landed without me noticing. I don’t make for a good module owner if I’m not keeping an eye on things, so I’d really like to see when new files are added there.

So I introduce the Hg Change Feed, the result of mostly just a few days of work. Every 10 minutes it pulls new changes from mozilla-central and mozilla-inbound. A mercurial hook looks over the changes and adds information about them to a MySQL database. Then a simple django app displays that information. As you browse through the directories in the tree it shows only changesets that affected files beneath that directory. For any directory you can also get an RSS feed of the same. Plug that into IFTTT and you have an automated system to notify you in pretty much any way you’d like about new changes you’d be interested in.
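For anyone curious how the hook side gets wired up, mercurial runs hooks listed in the repository’s hgrc. The snippet below sketches the idea, though the python module and function names are placeholders rather than the actual ones from my source:

[hooks]
# Runs after every pull so the new changesets get recorded in the database.
changegroup.hgchanges = python:hgchangefeed.hook.onchangegroup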

Some simple examples: for tracking changes to the Add-on SDK I’m watching http://hgchanges.fractalbrew.com/mozilla-inbound/file/addon-sdk/source. For toolkit I’m looking at http://hgchanges.fractalbrew.com/mozilla-inbound/file/toolkit?types=added. The types parameter takes a comma-separated list of “added”, “removed” and “modified” to filter which changes you’re interested in. There’s no UI on the site for changing that right now; you’re welcome to add some!

One other neat trick is that it mostly ignores merge changesets. A merge will only show up in the list of changes if it actually makes a change not already present in either of its parents (which mostly happens when resolving merge conflicts), because really you don’t need to hear about changes twice.

So play with it, let me know if you find it useful or if you think things are missing. I can also add other mercurial repositories if people want. Some caveats:

  • It only retains the last 2000 changesets from any repository, in an effort to keep the DB small and fast; it also only shows the last 200 changesets on each page, or just the last 20 in the feeds. These values can be tweaked easily enough, and I’ve done basically no benchmarking to say they are the right ones.
  • The site isn’t as fast as I’d like; in particular, listing changes for the top-level directory takes nearly 5 seconds. I’ve thrown some basic caching in place to help alleviate that for now. I bet someone with more MySQL and django experience than me could tell me what I’m doing wrong.
  • I’m off on vacation tomorrow so I guess I’m announcing this then running away, sorry if that means it takes me a while to respond to comments.

Want to help out and make it better? Go nuts with the source. There’s a readme that hopefully explains how to set up your own instance.

Hacking on Tilt

Tilt, or 3D view as it is known in Firefox, is an awesome visual tool that really lets you see the structure of a webpage. It shows you just how deep your tag hierarchy goes, which might be a sign that your page is too complex, or even help you spot errors in your markup that you wouldn’t otherwise notice. But what if it could do more? What if there were different ways to visualise the same page? What if even web developers could create their own visualisations?

I’ve had this idea knocking around in my head for a while now and the devtools work week was the perfect time to hack on it. Here are some results, click through for larger images:

Normal 3D view
Only give depth to links
Only give depth to links going off-site
Only give depth to elements that have a different style on hover

This is all achieved with some changes to Firefox itself to make Tilt handle more generic visualisations, along with an extension that then overrides Tilt’s default visualisation. Because the extension has access to everything Firefox knows about the webpage, it can use some interesting sources of data about the page, including those not found in the DOM. This one I particularly like: it makes the element’s depth proportional to the number of attached DOM event listeners:

Give depth to elements based on the number of attached event listeners

Just look at that search box, and what’s up with the two buttons having different heights?

The code just calls a JS function to get the height for each element displayed in 3D view. It’s really easy to use DOM functions to highlight different things about the elements, and while I think some of the examples I made are interesting, I think it will be more interesting to just let web-devs come up with and share their own visualisations. To that end I also demoed using Scratchpad to write whatever function you like to control the visualisation. You can see a screencast of it in action.
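To give a flavour of what such a function looks like, here is a rough sketch of the kind of callback I mean. The exact signature Tilt will end up expecting isn’t settled yet, so treat the name, arguments and return values here as illustrative only:

// Hypothetical visualisation callback: called once per element, with the
// return value used as that element's depth in the 3D view.
function linkDepth(element) {
  // Flatten everything that isn't a link.
  if (element.localName != "a" || !element.href)
    return 0;
  // Give off-site links twice the depth of links within the same site.
  return element.host == element.ownerDocument.location.host ? 20 : 40;
}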

Something that struck me towards the end of the week is that it could be awesome to pair this up with external sources of data like analytics. What about being able to view your page with links given depth proportional to how often users click them? Seems like an awesome way to really understand where your users are going and maybe why.

I’m hoping to get the changes to Firefox landed soon, maybe with an additional patch to properly support extensibility of Tilt. Right now the extension works by replacing a function in a JSM, which is pretty hacky, but it wouldn’t be difficult to make it nicer than that. After that I’ll be interested to see what visualisation ideas others come up with.

The Add-on SDK is now in Firefox

We’re now a big step closer to shipping the SDK APIs with Firefox and other apps: we’ve uplifted the SDK code from our git repository to mozilla-inbound and, assuming it sticks, we will be on the trains for release. We’ll be doing weekly uplifts to keep the code in mozilla-central current.

What’s changed?

Not a lot yet. Existing add-ons, and add-ons built with the current version of the SDK, still use their own versions of the APIs from their XPIs. Add-ons built with the next version of the SDK may start to try to use the APIs in Firefox in preference to those shipped in the XPI, and then a future version will only use those in Firefox. We’re also talking about the possibility of making Firefox override the APIs in any SDK-based add-on and use the shipped ones automatically, so the add-on author wouldn’t need to do anything.

We’re working on getting the Jetpack tests running on tinderbox switched over to use the in-tree code; once we do we will be unhiding them so other developers can immediately see when their changes break the SDK code. You can now run the Jetpack tests with a mach command or just with make jetpack-tests from the object directory. Those commands are a bit rudimentary right now; they don’t give you a way to choose individual tests to run. We’ll get to that.

Can I use it now?

If you’re brave, sure. Once a build including the SDK is out (might be a day or so for nightlies) fire up a chrome-context Scratchpad and put this code in it:

// Load the SDK's module loader from the copy that now ships in Firefox.
var { Loader } = Components.utils.import("resource://gre/modules/commonjs/toolkit/loader.js", {});
// Create a loader that knows where to find the SDK modules.
var loader = Loader.Loader({
  paths: {
    "sdk/": "resource://gre/modules/commonjs/sdk/",
    "": "globals:///"
  },
  resolve: function(id, base) {
    if (id == "chrome" || id.startsWith("@"))
      return id;
    return Loader.resolve(id, base);
  }
});
// Create a module to represent this code and a require function bound to it.
var module = Loader.Module("main", "scratchpad://");
var require = Loader.Require(loader, module);

var { notify } = require("sdk/notifications");
notify({
  text: "Hello from the SDK!"
});

Everything but the last 4 lines sets up the SDK loader so it knows where to get the APIs from and creates a require function for you to call. The rest can just be code as you’d include in an SDK add-on. You probably shouldn’t use this for anything serious yet; in fact I haven’t included the code to tell the module loader to unload, so this code example may leak things for the rest of the life of the application.

This is too long of course (longer than it should be right now because of a bug, too), so one thing we’ll probably do is create a simple JSM that can give you a require function in one line, as well as take care of unloading when the app goes away.
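As a very rough sketch of what I mean (the module name and API here are made up, this isn’t something that exists yet), such a JSM could just wrap the boilerplate above:

// SdkRequire.jsm -- a hypothetical helper, not something that ships in Firefox.
var EXPORTED_SYMBOLS = ["createRequire"];

var { Loader } = Components.utils.import("resource://gre/modules/commonjs/toolkit/loader.js", {});

// Returns a require function tied to the given URI, plus an unload function
// the caller should run when it has finished with the loader.
function createRequire(uri) {
  var loader = Loader.Loader({
    paths: {
      "sdk/": "resource://gre/modules/commonjs/sdk/",
      "": "globals:///"
    },
    resolve: function(id, base) {
      if (id == "chrome" || id.startsWith("@"))
        return id;
      return Loader.resolve(id, base);
    }
  });
  var module = Loader.Module("main", uri);
  return {
    require: Loader.Require(loader, module),
    unload: function() { Loader.unload(loader, "shutdown"); }
  };
}

With something like that in place the Scratchpad example above would shrink to a createRequire call, a require for the module you want and then your own code.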

Making Git play nice on Windows

A while ago now I switched to Windows on my primary development machine. It’s pretty frustrating, particularly when it comes to command line support, but MozillaBuild goes a long way towards giving me a more comfortable environment. One thing that took a long time to get working right, though, was full colour support from both mercurial and git. Git in particular is a problem because it uses a custom version of MSYS which seems to conflict with the stock MSYS in MozillaBuild, leaving things broken if you don’t set it up just right. Since I just switched to a new machine I thought it would be useful to record how I got it working, perhaps more for my benefit than anything else, but perhaps others will find it useful too.

You want to install MozillaBuild, Mercurial and Git. MozillaBuild comes with Mercurial, but it’s generally an older version. This post assumes you put them in their default locations (on a 64-bit machine), so adjust accordingly if you changed that. When the Git installer asks, choose the most conservative option: don’t put git in your PATH here.

You need some custom settings. First create a .hgrc file in your home directory, or add the following to it if you already have one:

[extensions]
hgext.color=

[color]
mode=win32

Now create a .profile file in the same place:

export PATH="/c/Program Files/Mercurial":$PATH:"/c/Program Files (x86)/Git/bin"

The ordering is important there. $PATH will be set up to point to MozillaBuild (and other places) when you run the MozillaBuild startup scripts. You want the installed Mercurial to override the one in MozillaBuild and you want the MozillaBuild binaries to override the custom MSYS versions that Git comes with.

Finally you want a decent console to use. I’m using Console2 with general success. Install it, start it and go into its settings. Console2 supports creating custom tab types, so you can create a new MozillaBuild tab, or a new Windows console tab, etc. I like to do this, so go to the “Tabs” section of the settings, click Add and then give it a title of MozillaBuild. You then want to set the MozillaBuild startup script you want to use in the “Shell” option, e.g. “C:\mozilla-build\start-msvc10.bat” for me. If you’re feeling extravagant (and have a checkout of mozilla-central), point the icon to browser/branding/official/firefox.ico.

Some final useful settings: in “Hotkeys” assign Ctrl+T to “New Tab 1”. In “Mouse” set “Select Text” to the left button and “Paste Text” to the right button.

And that should be it. Open a new MozillaBuild tab in Console2. If you’ve done everything right you should see messages about the Visual Studio and SDK versions chosen. “which hg” and “which git” should show you the binaries in Program Files somewhere, and they should both run and output in colour when useful.

Let’s just put it in Toolkit!

Toolkit is fast turning into the dumping ground of mozilla-central. Once upon a time the idea was simple: any code that could be usefully shared across multiple applications (and in particular code that wasn’t large enough to deserve a module of its own) would end up in Toolkit. The rules were pretty simple: any code in there should work for any application that wants to use it. This didn’t always work exactly according to plan, but we did our best to fix SeaMonkey and Thunderbird incompatibilities as they came along.

This worked great when there was only one Firefox. Shared code went into m-c/toolkit, Firefox-specific code went into m-c/browser. There were always complaints that more of the code in browser should be moved to toolkit so SeaMonkey and other projects could make use of it, but otherwise there were no big issues.

Now we have more than one Firefox: Firefox for desktop, Firefox for Android, B2G, Metro and who knows what else to come. Suddenly we want to share code across different Firefoxen, often different sets of them depending on the code, and often code that depends on other pieces, like services, that aren’t available in all other applications. Keeping the rules as they stand now means that Toolkit isn’t the correct place for this code. So what do we do about this?

There only seem to be two sensible choices: either we change the Toolkit rules to allow code that may not work in some applications, or we create a new catch-all module for this sort of code. And really those are just the same thing, except that the second needs a new module owner to be found for the new module. I’m going to ignore the third option, which is to create a new module for each new piece of code like this, as hopelessly bureaucratic.

So, I’m proposing that we (by which I mean I as module owner) redefine the rules for Toolkit to be a little broader:

  • Any code in Toolkit should be potentially useful to multiple applications but it isn’t up to the author to make it work everywhere.
  • Patches to make code work in other applications will be accepted if not too invasive.
  • Any code in Toolkit that is called automatically by Gecko (like the add-ons manager) must work in all applications.

Any strong objections?

What is an API?

I recently posted in the newsgroups about a concern over super-review: in some cases patches that seem to meet the policy aren’t getting super-reviewed. Part of the problem here is that the policy is a little ambiguous. It says that any API or pseudo-API requires super-review, but depending on how you read that section it could mean any patch that changes the signature of a JS function is classed as an API. We need to be smarter than that. Here is a straw-man proposal for defining what an API is:

Any code that is intended to be used by other parts of the application, add-ons or web content is an API, and requires super-review when it is added or its interface is changed.

Some concrete examples to demonstrate what I think this statement covers:

  • JS modules and XPCOM components in toolkit are almost all intended to be used by applications or add-ons and so are APIs
  • UI code in toolkit (such as extensions.js) isn’t meant to be used elsewhere and so isn’t an API (though such files may contain a few cases, such as observer notifications, and these should be clearly marked out in the file)
  • Any functions or objects exposed to web content are APIs
  • The majority of code in browser/ is only meant to be used within browser/ and so isn’t an API. There are some exceptions to this where extensions rely on certain functionality, such as tabbrowser.xml

Do you think my simple statement above matches those cases and others that you can think of? Is this a good way to define what needs super-review?

Simple image filters with getUserMedia

I forgot to blog about this last week, but Justin made me remember. The WebRTC getUserMedia API is available on the Nightly and Aurora channels of Firefox right now, and Tim has done a couple of great demos of using JavaScript to process the media stream. That got me interested, and after a little playing around I remembered learning the basics of convolution image filters, so I thought I’d give it a try. The result is a sorta ugly-looking UI that lets you build your own image filters to apply to the video coming off your webcam. There are a few pre-defined filter matrices there to get you started and it’s interesting to see what effects you can get. Remember that you need to enable media.navigator.enabled in about:config to make it work.

The downside is that, either through me not seeing an obvious optimisation or through JS just being too slow right now, it isn’t fast enough for real use. Even a simple 3×3 filter is too slow on my machine, since it ends up having to do 9 calculations per pixel, which is just too much. Can someone out there make it faster?

Update: The ever-awesome bz took a look and pointed out three quick fixes that made the code far faster, and it now runs almost in realtime with a 3×3 filter for me. First, he pointed out that storing frame.data in a variable outside the loops, rather than resolving it each time, speeds things up a lot. Secondly, apparently let isn’t fully supported by IonMonkey yet, so it switches to a slower path when it encounters it. Finally, I was manually clamping the result to 0-255, but pixel data is a Uint8ClampedArray so it clamps itself automatically.
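For anyone who wants to see what that boils down to, here is a minimal sketch of a 3×3 convolution over a canvas frame with those fixes applied. It isn’t the demo’s actual code: the function and variable names are mine, and it assumes the current video frame has already been drawn into the canvas whose 2D context is passed in.

// Minimal 3x3 convolution sketch, not the demo's actual code.
// ctx is the 2D context of a canvas containing the current video frame.
function applyFilter(ctx, width, height, kernel) {
  var frame = ctx.getImageData(0, 0, width, height);
  var src = frame.data;                     // hoisted once, per the first fix
  var output = ctx.createImageData(width, height);
  var dst = output.data;                    // a Uint8ClampedArray, clamps for us

  for (var y = 1; y < height - 1; y++) {    // var rather than let, for IonMonkey
    for (var x = 1; x < width - 1; x++) {
      for (var c = 0; c < 3; c++) {         // red, green and blue channels
        var sum = 0;
        for (var ky = -1; ky <= 1; ky++) {
          for (var kx = -1; kx <= 1; kx++) {
            sum += src[((y + ky) * width + (x + kx)) * 4 + c] *
                   kernel[(ky + 1) * 3 + (kx + 1)];
          }
        }
        dst[(y * width + x) * 4 + c] = sum; // no manual clamping needed
      }
      dst[(y * width + x) * 4 + 3] = 255;   // keep the pixel opaque
    }
  }
  ctx.putImageData(output, 0, 0);
}

// Example: a simple edge-detection kernel.
// applyFilter(ctx, canvas.width, canvas.height, [0, -1, 0, -1, 4, -1, 0, -1, 0]);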

Managing changes is the key to a project’s success

TomTom made an interesting claim recently. Their summary is “when it comes to automotive-grade mapping, open source has some quite serious limitations, falling short on the levels of accuracy and reliability required for safe navigation”.

This is a bold claim, and they talk about recent studies that back them up. Unfortunately none of them are referenced, but it’s pretty clear from the text of the article that all they are doing is comparing the accuracy of TomTom maps with existing open source maps. So they’re just generalising: this doesn’t prove a limitation of the open source process itself, of course, just perhaps of a particular instance of it.

In fact, having read the article, I think TomTom are just misunderstanding how open source development works. Their basic complaint seems to be that open source maps are entirely community generated, with no proper review of the changes made. In such a situation I’m sure the data generated is always going to be liable to contain errors, sometimes malicious, sometimes honest. But that isn’t how open source development works in general (I make no claim to know how it works for open source mapping); I’d probably call such a situation crowd-sourcing.

Successful open source projects rely on levels of management controlling the changes that are made to the central repository of the source code (or in this case mapping data). In Firefox, for example, every change is reviewed at least once by an expert in the area of the code affected before being allowed into the repository. Most other open source projects I know of run similarly. It’s this management that is, in my opinion, key to the success of the project. Clamp down too hard on changes and development is slow and contributors get frustrated and walk away; be too lenient and too many bugs get introduced or the project veers out of control. You can adjust the level of control based on how critical the accuracy of the contribution is. Of course this isn’t some special circumstance for open source projects; closed source projects should operate in the same fashion.

The part of their post that amuses me is when they say “we harness the local knowledge of our 60 million satnav customers, who can make corrections through TomTom Map Share”. So basically they accept outside contributions to their maps too. As far as their development goes it sounds like they function much like an open source project to me! The only claim they make is that they have better experts reviewing the changes that are submitted. This might be true, but it has nothing to do with whether the project is open source or not; it’s just a question of who you find to control the changes submitted.

There is of course one place where open source is at an arguable disadvantage. The latest bleeding-edge source is always available (or at least should be). If you look at the changes as they come in, before QA processes and community testing have gone on, then of course you’re going to see issues. I’m sure TomTom have development versions of their maps that are internal only and probably have their fair share of errors waiting to be ironed out too. Open source perhaps makes it easier to end up using these development versions, so unless you know what you’re doing you should always stick to the more stable releases.

Just because a project accepts contributions from a community doesn’t mean it is doomed to fail, nor does it mean it is bound to succeed. What you have to ask yourself before using any project, open source or not, is how good those controlling the changes are and how many people are likely to have reviewed and tested the end result.


How Crashplan breaks xpcshell tests on Windows

I recently switched to a Windows laptop and have been going through the usual teething pains that come with that. One thing that confused me, though, was that when I was running xpcshell tests on my new machine they would frequently fail with access denied errors. I’ve seen this sort of thing before, so I knew some service was monitoring files and opening them after they had changed. When this happens the files can’t be deleted or edited until the service closes them again, and tests often open, close and delete files so fast that there isn’t time for that to happen.

It took me a little while to remember that I can just use Process Monitor to track down the offending service. Just fire it up, set a filter to only include results for a particular directory (the temp directory in this case), then go and create a file there and see what shows up. I was quite surprised to see Crashplan, the backup software I (and probably many people in Mozilla) use. Surprised because Crashplan isn’t set to back up my temp directory, and really I shudder to think what the performance cost is of something continually accessing every file that changes in the temp directory.

It turns out you can turn it off though. Hidden in the depths of Crashplan’s advanced backup settings is an option to disable real-time filesystem watching. From what I can see online the downside to this is that files will only be backed up once a day, but that’s a pretty fair trade-off for having functioning xpcshell tests, as far as I’m concerned. There is also an option to put Crashplan to sleep for an hour or so; that seems to work too, but I don’t know exactly what it does.

It confuses me a little why Crashplan monitors files it never intends to back up (even when the backup server isn’t connected and backups aren’t in progress), and it does quite a lot of file accesses too. It seems likely to be a bug to me, but at least I can work around it for now.

WebApp Tabs, version control and GitHub

The extension I’ve been working on in my spare time for the past couple of weeks is now available as a first (hopefully not too buggy) release. It lets you open WebApps in Thunderbird, properly handling loading new links into Firefox and making all features like spellchecking work in Thunderbird (most other extensions I found didn’t do this). You can read more about the actual extension at its homepage.

Mostly I’ve been really encouraged during the development of this by just how far our platform has come for developing restartless add-ons. When we first made it possible in Firefox 4 there was a whole list of things that were quite difficult to do, but we’ve come a long way since then. While there are still things that are difficult, there are lots of things that are now pretty straightforward. My add-on loads simple XUL overlays and style overlays, and installs JS XPCOM components with category manager registration, all similar to older add-ons. In fact I’m struggling to think of things that are still hard to do, though I’m sure other, more prolific developers will have plenty of comments on that!
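As an example of the kind of thing that is now straightforward, here is a sketch of the category manager part of a restartless add-on’s bootstrap.js. The category, entry name and contract ID are made up for illustration, and it assumes the component behind the contract ID is registered elsewhere in the file:

// Hypothetical fragment of a restartless add-on's bootstrap.js.
const { classes: Cc, interfaces: Ci } = Components;

const CATEGORY = "content-policy";                        // example category
const ENTRY = "webapptabs-example";                       // made-up entry name
const CONTRACT_ID = "@example.com/webapptabs/policy;1";   // made-up contract ID

function startup(data, reason) {
  // Register the category entry so Gecko starts using our component.
  Cc["@mozilla.org/categorymanager;1"].getService(Ci.nsICategoryManager)
    .addCategoryEntry(CATEGORY, ENTRY, CONTRACT_ID, false, true);
}

function shutdown(data, reason) {
  // Restartless add-ons have to undo everything themselves on shutdown.
  Cc["@mozilla.org/categorymanager;1"].getService(Ci.nsICategoryManager)
    .deleteCategoryEntry(CATEGORY, ENTRY, false);
}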

The other thing I’ve been doing with this extension is experimenting with git and GitHub, and I think it’s been an interesting experience. There are continual arguments over which of git and mercurial is better, with many pros and cons listed. I think most of these comparisons were made some time ago, before mercurial and git really matured, because from what I’ve seen there is really little difference between the two. They have slightly different default branching styles, but both can do the same kind of branching as the other if you want, and there are a few other minor differences, but nothing that would really make me all that bothered over deciding which to use. I think the only place where git has a bonus is with GitHub, and really, as far as I can see, there isn’t a reason why someone couldn’t develop a similar site backed by mercurial repositories; it’s just that no-one really has.

GitHub is pretty nice, with built-in basic issue tracking and documentation, though it still has some frustrating issues. It seems odd, for example, that you can’t fork your own project, only someone else’s, but that’s only a minor niggle really. As project hosting goes I can’t say I’ve come across anything better that I can remember.