So a lot of the work done so far has been removing all this non-standard stuff so that ESLint can pass with only a very small set of style rules defined. Soon we’ll start increasing the number of rules we check in browser and toolkit.
How do I lint?
From the command line this is simple. Make sure to run

./mach eslint --setup

to install ESLint and some related packages, then just run

./mach eslint <directory or file>

to lint a specific area. You can also lint the entire tree. For now you may need to re-run the setup step periodically as we add new dependencies; at some point we may make mach detect automatically that you need to re-run it.
You can also add ESLint support to many code editors, and as of today you can even hook it into hg!
- Linting can be used to enforce the sorts of style rules that keep our code consistent. Imagine no more nit comments in code review forcing you to update your patch: you can fix all those issues before submitting, and reviewers don’t have to waste time pointing them out.
- Linting can catch real bugs. When we turned on one of the basic rules we found a problem in shipping code.
- With only standard JS code to deal with, we open up the possibility of using advanced tools like AST transforms for refactoring (e.g. recast). This could be very useful for switching from Cu.import to ES6 modules.
- ESLint in particular allows us to write custom rules for doing clever things like handling head.js imports for tests; a rough sketch of what a custom rule looks like follows below.
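To give a flavour of that last point, here is a minimal sketch of a custom ESLint rule. The check and the message are made up for illustration and are not the actual rules in the tree:

// A custom ESLint rule is a module exporting a function that returns
// visitors for AST node types. This one flags Cu.import calls.
module.exports = function(context) {
  return {
    CallExpression(node) {
      let callee = node.callee;
      // Match calls of the form Cu.import(...).
      if (callee.type === "MemberExpression" &&
          callee.object.name === "Cu" &&
          callee.property.name === "import") {
        context.report(node, "Found a Cu.import call that could become an ES6 module import.");
      }
    }
  };
};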
Where are we?
There’s a full list of everything that has been done so far in the dependencies of the ESLint metabug, but here are some highlights:
- Removed #include preprocessing from browser.js, moving all included scripts to global-scripts.inc
- Added an ESLint plugin to allow linting the JS parts of XBL bindings
- Fixed basic JS parsing issues in lots of b2g, browser and toolkit code
- Created an hg extension that will warn you when committing code that fails ESLint
- Turned on some basic linting rules
- MozReview is close to being able to lint your code and give review comments where things fail
- Work is almost complete on a linting test that will turn orange on the tree when code fails to lint
I’m grateful to all those that have helped get things moving here, but there is still more work to do. If you’re interested there are really two ways you can help: we need to lint more files and we need to turn on more lint rules.
We also need to turn on more rules. We’ve got a rough list of the rules we want to turn on in browser and toolkit, but as you might guess they aren’t on yet because the code currently fails them. Fixing up our JS to work with them is simple work but much appreciated. In some cases ESLint can even do the work for you, via its --fix option!
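For illustration, turning a rule on is just a matter of adding it to the relevant .eslintrc file. The rule shown here is only an example, not necessarily one from our list:

{
  "rules": {
    "no-unused-vars": 2
  }
}

The 2 marks the rule as an error rather than just a warning, so any violation makes the lint run fail.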
ESLint becomes the most useful when you get warnings before even trying to land or get your code reviewed. You can add support to your code editor, but not all editors support this, so I’ve written a mercurial extension which gives you warnings any time you commit code that fails lint checks. It uses the same rules we run elsewhere. It doesn’t abort the commit, which would be annoying if you’re working on a feature branch, but it gives you a heads up about what needs to be fixed and where.
To install the extension add this to an hgrc file; I put it in the .hg/hgrc file of my mozilla-central clone rather than the global config:
[extensions]
mozeslint = <path to clone>/tools/mercurial/eslintvalidate.py
After that anything that creates a commit, including mq patches, will run any changed JS files through ESLint and show the results. If the file was already failing checks in a few places then you’ll still see those as well; maybe you should fix them up before sending your patch for review? 😉
Over time Mozilla has been trying to reduce the amount of time between developing a feature and getting it into a user’s hands. Some time ago we would do around one feature release of Firefox every year; more recently we’ve moved to doing one feature release every six weeks. But it still takes at least 12 weeks for a feature to get to users. In some cases we can speed that up by landing new things directly on the beta/aurora branches, but the more we do this the harder it is for release managers to track the risk of shipping a given release.
The Go Faster project is investigating ways that we can speed up getting changes to users. System add-ons are one piece of this that will let us deliver updates to core Firefox features more often than the regular six-week releases. Instead of being embedded in the rest of the code, certain features will be developed as standalone system add-ons.
Building features as add-ons gives us more flexibility in how we deliver them to users. System add-ons will ship in two different ways. First, every Firefox release will include a default set of system add-ons: the latest versions of the features at the time the Firefox build was produced. Later, during runtime, Firefox will contact Mozilla’s update servers to ask for the current list of system add-ons. If there are new or updated versions listed, Firefox will download and install them, giving users access to the newest features without needing to update the entire application.
Building a feature as an add-on gives developers a lot of benefits too. Developers will be able to work on and test new features without doing custom Firefox builds. Users can even try out new features by just installing the add-ons. Once the feature is ready to ship, it ships as an add-on, with no code changes necessary to integrate it into Firefox. This is something we’ve attempted before with things like Test Pilot and pdf.js, but system add-ons make the process much smoother and reduce the differences between how the feature runs as an add-on and how it runs when shipped in the application.
The basic support for system add-ons is already included in current nightly builds, and Firefox 44 should be the first release that we could use to deliver features like this if we choose. If you’re interested in the details you can read the client implementation plan or follow along with the tracking bug for the client side of the feature.
As Firefox increasingly switches to support running in multiple processes we’ve been finding common problems. Where we can, we are designing nice APIs to make solving them easy. One problem is that we often want to run in-content pages like about:newtab and about:home in the child process without privileges, making them safer and less likely to bring down Firefox in the event of a crash. These pages still need to get information from and pass information to the main process though, so we have had to come up with ways to handle that. Often we use custom code in a frame script acting as a middle-man, using things like DOM events to listen for requests from the in-content page and then messaging the main process.
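As a rough illustration of that middle-man pattern (the event and message names here are made up), the relay might look something like this:

// Frame script (child process): relay DOM events from the page to the
// main process. wantsUntrusted lets events from unprivileged pages through.
addEventListener("MyPageRequest", event => {
  sendAsyncMessage("MyPage:Request", event.detail);
}, false, /* wantsUntrusted */ true);

// Main process: listen on the <browser>'s message manager for the relay.
browser.messageManager.addMessageListener("MyPage:Request", message => {
  // Handle message.data and reply with sendAsyncMessage if needed.
});

Every page that needs this ends up with its own variation of this boilerplate, which is exactly the duplication the new API removes.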
We recently added a new API to make this problem easier to solve. Instead of needing code in a frame script, the RemotePageManager module allows special pages direct access to a message manager to communicate with the main process. This can be useful for any page running in the content area, regardless of whether it needs to run at low privileges or in the content process, since it takes care of listening for documents and hooking up the message listeners for you.
There is a low-level API available but the higher-level API is probably more useful in most cases. If your code wants to interact with a page like about:myaddon, just do this from the main process:
let manager = new RemotePages("about:myaddon");
The manager object is now something resembling a regular process message manager. It has sendAsyncMessage and addMessageListener methods, but unlike the regular e10s message managers it only communicates with about:myaddon pages, and there is no option to send synchronous messages or pass cross-process wrapped objects. When about:myaddon is loaded it has matching sendAsyncMessage and addMessageListener functions available in its global scope for talking to the main process.
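As a rough sketch of the two-way messaging (the message names here are made up), the main process side might use the manager created above like this, with the counterpart code running inside the about:myaddon page:

// Main process: listen for requests from any loaded about:myaddon page.
manager.addMessageListener("MyAddon:RequestData", message => {
  // message.target is the particular page that sent the request,
  // so the reply goes only to that page.
  message.target.sendAsyncMessage("MyAddon:Data", { items: ["a", "b"] });
});

// Inside the about:myaddon page itself:
addMessageListener("MyAddon:Data", message => {
  console.log("Received " + message.data.items.length + " items");
});
sendAsyncMessage("MyAddon:RequestData");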
The module documentation has more in-depth examples showing message passing between the page and the main process.
The RemotePageManager module is available in nightlies now and you can see it in action with the simple change I landed to switch about:plugins to run in the content process. For the moment the APIs only support exact URL matching, but it would be possible to add support for regular expressions in the future if that turns out to be useful.
The offending changeset that broke hgchanges yesterday turns out to be a merge from an ancient branch to current tip. That makes the diff insanely huge, which is why things like hgweb were tripping over it. Kwierso pointed out that just ignoring those changesets would solve the problem. It’s not ideal, but since in this case they aren’t useful changesets I’ve gone ahead and done that, and so hgchanges is now updating again.
My handy tool for tracking changes to directories in the mozilla mercurial repositories is going to be broken for a little while. Unfortunately a particular changeset seems to be breaking things in ways I don’t have the time to fix right now. Specifically, trying to download the raw patch for the changeset is causing hgweb to time out. Short of finding time to debug and fix the problem, my only solution is to wait until that patch is old enough that the tool no longer attempts to index it. That could take a week or so.
Obviously I’ll happily accept patches to fix this problem sooner.
I have been a little lax in my duty of keeping the list of peers for Toolkit up to date, and there have been a few notable omissions. Thankfully we’re good about disregarding rules when it makes sense to, and so many people who should be peers have already been doing reviews. Of course that’s no use to new contributors trying to figure out who should review what, so I am grateful to someone who prodded me into updating the list.
As I was doing so I came to the conclusion that there is a lot of overlap between Firefox code and Toolkit code. Lots of patches touch both at the same time and it often doesn’t make a lot of sense to require different reviewers there. I also couldn’t think of a reason why someone would be a trusted reviewer of Firefox code and yet not be trusted to review Toolkit code. Looking at the differences in the lists of peers confirmed that all those missing really should be Toolkit peers too.
So going forwards I have decided that Firefox peers will automatically be considered to be Toolkit peers. That means I can announce a whole bunch of new people who are now Toolkit peers, please congratulate them in the usual way, by flooding their review queue:
- Ehsan Akhgari
- Mike de Boer
- Mike Conley
- Georg Fritzsche
- Mark Hammond
- Felipe Gomes
- Gijs Kruitbosch
- Florian Quèze
- Tim Taubert
You might ask if the reverse should hold true: should all Toolkit peers be Firefox peers? In other words, should we just merge the lists? I leave that to the Firefox owner to decide, but I will say that there are a few pieces of Toolkit that are very much not front-end, so in some cases I could see a reviewer for such an area not needing to be on the Firefox list. Not because they wouldn’t be trusted to turn down the patches they couldn’t review, but just because there would be almost no patches in their area in Firefox. Maybe that case is so rare that it isn’t worth the hassle of two lists though.
I don’t often blog about non-work related stuff. Actually scratch that, I don’t often blog. But I can’t help but talk about how life becomes very strange when you’re expecting. None of this is new, but it is new to me, so you must read it again and in some cases relive it.
Random strangers are suddenly your closest friend
I don’t know how my wife copes with it; at least I don’t have to wear a stamp on my forehead proclaiming me to be an expectant father. Even so the fact does occasionally slip out while I’m talking to people I don’t know. All of a sudden I’m expected to offer up details on how far along we are, how my wife is feeling, if we know the sex, what names we’ve picked, and damnit I just want to order some new checks please, I don’t need to hear about how great the schools are around here.
They’re also all experts
So far I’ve been told in no uncertain terms that the city I live in is great for kids and terrible for kids. Natural birth is way better but epidurals are the way to go. Working from home is going to be great when the baby is here but also terrible. I don’t know what it is, but when it comes to raising a child people suddenly start talking like they’re an authority rather than just offering suggestions the way they would on other subjects.
They all think that you must be excited
Of course we must. I mean, we’re only a few weeks from a large change in the status quo that by all accounts nothing can prepare you for, so excitement is the natural reaction, right? Generally I tell people that I’m actually petrified, and everyone laughs like they think I’m joking.
Which is strange because they apparently also think that your life is about to become terrible
- What’s that? You’re feeling a little tired? Haha just wait until your baby is here!
- You went to the cinema? Better do as much of that while you still can!
- Laundry day? When your baby is here it will be laundry hour, amirite!
Need I go on?
But naturally we’ll want a second
Without a doubt the most bizarre thing to me is how common it is for everyone from friends and family to the supermarket trolley boy to ask when we’re planning to have our second. Hold your horses there, buddy, the first one isn’t even cooked yet; plenty of time to be thinking about dessert later.
Slightly belated in some cases, but I’d like to formally welcome four new Toolkit peers. Paolo Amadini, Matthew Noorenberghe, Jared Wein and Irving Reid have all shown themselves to be more than capable of reviewing patches anywhere in the Toolkit code. Paolo, Matt and Jared actually got added a few months ago but apparently I failed to make an announcement at the time. Irving was added just last week. Please congratulate them all and don’t go too hard on their review queues!
Also if you think there are others who should be peers of Toolkit (or current peers that are no longer relevant) then please let me know.
Nearly a year ago I showed off the first version of my webapp for displaying recent changes in mercurial repositories. I’ve heard directly from a number of people that use it, but it’s had a few problems that I’ve spent some of my spare time trying to solve. I’m now pleased to say that a new version is up and running. I’m not sure it’s accurate to call this a rewrite since it was entirely incremental, but when I look back over the code changes there really isn’t much that wasn’t touched in some way or other. So what’s new? For you, a person using the site, absolutely nothing! So what on earth have I been doing rewriting the code?
Hopefully I’ve been making the thing a whole lot more stable. Eagle-eyed users may have noticed that periodically the site would stop updating, sometimes for days at a time. The site is split into two pieces. One is a django website to display the changes. The other is the important bit that reads the change data out of mercurial and puts it into a database that the website can use. I do this caching because it would be too expensive to work it out on the fly. The problem is that the process of reading the data out of mercurial meant keeping a local version of every repository on the server (some 5GB for the four currently tracked), and reading the changes using mercurial’s python API was slow and used a lot of memory. The shared hosting environment that I run this out of kept killing the process when it used too many resources. Normally it could recover, but occasionally it would need a manual kick. Worse, there is some mercurial bug where sometimes pulling a repository will end up with an inconsistent database. It’s rare, so many people don’t see it, but when you’re pulling repositories every ten minutes it starts happening every couple of weeks, again requiring manual work to fix the problem.
I did dabble with getting this into shape so I could run it on a mozilla server. PaaS seemed like the obvious option, but after a lot of work it turned out that the database size restrictions there made this impossible. Since then I’ve seen so many horror stories about PaaS falling over that I think I’m glad I never got there. I also tried getting a dedicated server from Mozilla to run this on, but they were quite rightly wary when I mentioned that the process used a bunch of memory and wanted harder figures before committing; unfortunately I never found a way to come up with those figures.
The final solution has been ripping out the mercurial API dependency. Now, rather than needing local mercurial repositories, the import process instead talks to them over the web. The default mercurial web APIs aren’t great, but luckily all of the repositories I care about have pushlog installed, which has a good JSON API for retrieving recently pushed changesets. So now the process is to pull the recent pushlog, get the patches for every changeset and parse them to work out which files have changed. By not having to load the mercurial structures there is far less memory used, and since the process now has to delay as it downloads each patch file it keeps the CPU usage low.
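As a rough sketch of that approach (the URLs follow hg.mozilla.org conventions and the parsing here is simplified, not the actual import code):

// Pull recent pushes from the pushlog JSON API, then fetch each
// changeset's raw patch to see which files it touches.
const REPO = "https://hg.mozilla.org/mozilla-central";

async function importRecentChanges(lastPushId) {
  let response = await fetch(REPO + "/json-pushes?version=2&startID=" + lastPushId);
  let { pushes } = await response.json();
  for (let push of Object.values(pushes)) {
    for (let changeset of push.changesets) {
      let patch = await (await fetch(REPO + "/raw-rev/" + changeset)).text();
      // Lines of the form "diff --git a/<path> b/<path>" name the files.
      let files = [...patch.matchAll(/^diff --git a\/(\S+)/gm)].map(m => m[1]);
      // ...record `changeset` and `files` in the database here.
    }
  }
}

Because each patch is downloaded and parsed one at a time, memory stays flat and the network delays naturally throttle the CPU usage.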
I’ve also made the database itself a lot more efficient. Previously I recorded every changeset separately for every repository, but many repositories share the same changesets. Sharing the changeset records makes the database a little more complicated but uses much less space, and since each changeset only has to be parsed once the import process becomes faster. It took me a while to get the website performing as fast as before with these changes, but I think it’s there now (barring some strange slowness in Django that I can’t identify). I’ve also turned on some quite aggressive caching both on the client and server side; if someone has recently visited the same page you’re trying to access then it should load super fast.
So, the new version is up and running at the same location as the old. Chances are you’re using it already, so if you’re noticing any unexpected slowness or something not looking right, please let me know.