New Firefox reviewers

I’m delighted to announce that I’ve just made Andrew Swan (aswan on IRC) a reviewer for Firefox code. That also reminds me that I failed to announce when I did the same for Rob Helmer (rhelmer). Please inundate them with patches.

There are a few key things I look for when promoting folks to reviewers; perhaps surprisingly, none of them is an understanding of the full breadth of code in Firefox or Toolkit:

  • Get to reviews promptly. You don’t necessarily have to complete the review; do a first pass, or if you really have no time, hand it off to someone else quickly.
  • Be courteous. You are a personal connection to Mozilla and you should be as welcoming as Mozilla is supposed to be.
  • Know your limits. Don’t review anything if you don’t understand the code it deals with; instead, help the reviewee find an alternate reviewer.
  • You don’t get to demand that the reviewee rewrite the patch in your style. If their code meets the style guidelines, fixes the bug and is efficient enough, don’t waste people’s time by re-architecting it.

Pretty much all of this is demonstrated by seeing what code the potential reviewer is writing and letting them review smaller patches by already well-versed contributors.

Linting for Mozilla JavaScript code

One of the projects I’ve been really excited about recently is getting ESLint working for a lot of our JavaScript code. If you haven’t come across ESLint, or linters in general, before: they are automated tools that scan your code and warn you about syntax errors. They can usually also be set up with a set of rules to enforce code style and warn about potential bad practices. The devtools and Hello folks have been using ESLint for a while already, and Gijs asked why we weren’t doing this more generally. This struck a chord with me and a few others, so we’ve been spending some time over the past few weeks getting our in-tree support for ESLint to work more generally, and fixing issues with browser and toolkit JavaScript code in particular to make them lintable.

One of the hardest challenges with this is that we have a lot of non-standard JavaScript in our tree. This includes things like preprocessing, as well as JS features that either have since been standardized with a different syntax (generator functions, for example) or have been dropped from the standardization path (array comprehensions). This is bad for developers, as editors can’t make sense of our code or provide good syntax highlighting and refactoring support when it doesn’t match standard JavaScript. There are also plans to remove the non-standard JS features, so we have to remove that stuff anyway.
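
To make the difference concrete, here is an illustrative sketch (not code from the tree) of the kind of legacy syntax involved and its standard replacement:

```js
// Non-standard forms like these only ever worked in SpiderMonkey, so ESLint
// (and most editors) cannot even parse files that contain them:
//
//   function legacyRange(n) { for (let i = 0; i < n; i++) yield i; }  // old-style generator
//   let squares = [i * i for (i of legacyRange(5))];                  // array comprehension
//
// The standard equivalents parse everywhere:

// An ES6 generator function, marked explicitly with *.
function* standardRange(n) {
  for (let i = 0; i < n; i++) {
    yield i;
  }
}

// The comprehension becomes Array.from with a mapping function.
let squares = Array.from(standardRange(5), i => i * i);
console.log(squares); // [0, 1, 4, 9, 16]
```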

So a lot of the work done so far has been removing all this non-standard stuff so that ESLint can pass with only a very small set of style rules defined. Soon we’ll start increasing the number of rules we check in browser and toolkit.

How do I lint?

From the command line this is simple. Make sure to run ./mach eslint --setup to install ESLint and some related packages, then just run ./mach eslint <directory or file> to lint a specific area. You can also lint the entire tree. For now you may need to re-run the setup step periodically as we add new dependencies; at some point we may make mach detect automatically that you need to re-run it.

You can also add ESLint support to many code editors and, as of today, you can add ESLint support to hg!

Why lint?

Aside from just ensuring we have standard JavaScript in the tree, linting offers a whole bunch of benefits.

  • Linting spots JavaScript syntax errors before you have to run your code. You can configure editors to run ESLint to tell you when something is broken so you can fix it even before you save.
  • Linting can be used to enforce the sorts of style rules that keep our code consistent. Imagine no more nit comments in code review forcing you to update your patch. You can fix all those before submitting and reviewers don’t have to waste time pointing them out.
  • Linting can catch real bugs. When we turned on one of the basic rules we found a problem in shipping code.
  • With only standard JS code to deal with, we open up the possibility of using advanced tools like AST transforms for refactoring (e.g. recast). This could be very useful for switching from Cu.import to ES6 modules.
  • ESLint in particular allows us to write custom rules for doing clever things like handling head.js imports for tests; a rough sketch of what such a rule looks like follows this list.
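
As an illustration of what a custom rule looks like (a hedged sketch, not the actual head.js rule in the tree), an ESLint rule is just a module that returns visitors for AST node types and reports the problems it finds. The loadSubScript check below is purely hypothetical:

```js
// Sketch of a custom ESLint rule. A real rule for head.js imports would do
// more work (e.g. reading the imported file to learn which globals it
// defines), but the overall shape is the same.
module.exports = function(context) {
  return {
    // Called for every function call in the file being linted.
    CallExpression: function(node) {
      let callee = node.callee;
      if (callee.type === "MemberExpression" &&
          callee.property.type === "Identifier" &&
          callee.property.name === "loadSubScript") {
        // Report a problem at this node; it shows up like any other lint error.
        context.report(node, "This sketch rule flags loadSubScript calls.");
      }
    }
  };
};
```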

Where are we?

There’s a full list of everything that has been done so far in the dependencies of the ESLint metabug, but here are some highlights:

  • Removed #include preprocessing from browser.js, moving all included scripts to global-scripts.inc
  • Added an ESLint plugin to allow linting the JS parts of XBL bindings
  • Fixed basic JS parsing issues in lots of b2g, browser and toolkit code
  • Created a hg extension that will warn you when committing code that fails ESLint
  • Turned on some basic linting rules
  • MozReview is close to being able to lint your code and give review comments where things fail
  • Work is almost complete on a linting test that will turn orange on the tree when code fails to lint

What’s next?

I’m grateful to all those who have helped get things moving here, but there is still more work to do. If you’re interested, there are really two ways you can help: we need to lint more files and we need to turn on more lint rules.

The .eslintignore file shows which files are currently ignored in the lint checks. Removing files and directories from that generally involves fixing JavaScript standards issues, but every extra file we lint is a win for all the above reasons, so it is valuable work. It’s also mostly straightforward once you get the hang of it; there are just a lot of files.

We also need to turn on more rules. We’ve got a rough list of the rules we want to turn on in browser and toolkit, but as you might guess they aren’t enabled yet because the code currently fails them. Fixing up our JS to work with them is simple work and much appreciated; the sketch below gives a flavour of how small the changes usually are. In some cases ESLint can even do the work for you!
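
As a purely hypothetical illustration (the rule names here are just examples of basic ESLint rules, not necessarily the ones on our list), the fixes involved are typically one-liners:

```js
// With the "semi" rule (require semicolons) turned on, a line like this one
// is flagged:
//
//   let browserName = "Firefox"
//
// and the fix is simply to add the missing semicolon. For rules that support
// it, ESLint's --fix option can apply this kind of change automatically.
let browserName = "Firefox";
console.log(browserName);
```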

500px isn’t quite Flickr yet

Since the big changes to Flickr last week I’ve been mulling over the idea of switching to a different photo sharing site. 500px had caught my eye in the past as being a very similar concept to Flickr. It has social aspects like Flickr does, maybe even more so, as it supports the notion of “liking” a photo as well as making it a “favourite”. They seem to target the more professional photographer (yes Marissa Mayer, there really is still such a thing) and the curated photos that show up through their main photos section show that. Frankly it’s a little off-putting since my photos don’t even come close to that level, but the same can be said for Flickr’s similar sections, so I guess it’s not that big of a deal. So I took a day or two to upload some of my photos, put 500px through its paces and see how it measures up to Flickr. I’ve built up a fairly specific workflow for my photo uploading and I’m measuring against that, so what might be show-stoppers for me may not affect others.

Ultimately, if you can’t be bothered to read the details: I found that 500px is a nice enough photo site, and while visually it may look better than Flickr right now, it is missing much of the functionality that I find important.

Basic organisation

500px lets you organise your photos similarly to Flickr. You have your photo library and you can put your photos into sets. The main difference is that while in Flickr your full library is generally visible to everyone, on 500px it isn’t. Instead the main photos you see for a person are those put into the “public photos” area, which just appears to be a special set. This is a little odd. If I want to see someone’s photos I have to click through all their sets; Flickr just lets me browse their photostream. Stranger still, sometimes photos randomly end up in the public photos set without me putting them there. I don’t know whether this was something the website did or the plugin I used to upload, but after uploading two sets which overlapped, all the photos that appeared in both sets were suddenly in the public photos set too.

Browsing through photos is hard on 500px. On Flickr if I go to a photo I can see which sets it appears in and easily move back and forward through any of those sets or the photostream. 500px only shows you thumbnails for the set you got to the photo through, which makes finding similar things from the same photographer more difficult. 500px also supports tags of course, but there doesn’t seem to be a way to show all the photos from a photographer with a particular tag. There doesn’t even seem to be a way to see a full-size version of a photo, just the 900px wide version on the main photo page.

Uploading

I’m not the sort to trust online sites to be the place where I store and manage my photos. I keep everything managed locally in Lightroom and rely on Jeffrey Friedl’s excellent plugin to then mirror that to Flickr, so I wanted to do the same with 500px. They have a Lightroom plugin too. Excitingly for me it is open source, so while I found problems here it could be possible for me to improve it myself. 500px’s plugin is, in a word, basic. It can upload your photos, tag them and name them, but that is about it. It is remarkably slow to do that too. For some reason it does the upload in two passes: the first pass eats up my CPU and seemingly does nothing (maybe rendering the photos to disk somewhere?), then it goes ahead and does the upload. This is frustrating since you have no way to guess how long it might take, though it’s probably not a problem for small uploads. The other big problem for me is that it uploads the photos in a random order. I like my photos to be in the order I shot them, but there doesn’t seem to be any way to do this at upload time with 500px’s plugin. The lack of options in their plugin means I’d be spending a long time trying to make it do what I really want: things like tagging based on rating, stripping certain metadata from photos, etc.

Organisation

Once you have your photos in Flickr you can use their excellent organiser to put things into sets and arrange things how you like. As I mentioned, I prefer to do this in Lightroom and just mirror that, but that doesn’t work for things like the order that photos appear in sets. Flickr makes that pretty easy: you can reorder a set manually or by various attributes like capture time. After uploading to 500px put all my photos out of order, I figured I could just correct this online. Sadly the 500px equivalent is extremely basic. You can reorder manually … and that’s it. For a set of a few hundred photos that just doesn’t cut it.

Portfolios

One feature that 500px has that Flickr doesn’t is portfolios. They are effectively a custom website for showing off your photos: no 500px branding, just very clean layouts. They’re a little oddly implemented for my tastes: you have to create custom sets to appear in your portfolio, and those sets don’t show on the main 500px site. Want the same set in both? You have to duplicate it. I wasn’t a fan of any of the available layouts either, but that is just my taste. Apparently you can go in and edit the layout and styles directly, so you can probably do better things with this. Ultimately I don’t think it’s a very useful feature for me, and if I wanted it I could just use Flickr’s API to build something similar on my own site.

Stats

One of the things that scares me about Flickr dropping Pro membership is that they are probably going to be phasing out their stats. I like to be able to see how many people are looking at my photos and where they are coming from, and while Flickr’s stats offering was always simplistic to say the least, I could at least use it to find sites that were linking back to my photos. 500px boasts “Advanced Statistics” for the paid tiers, but I’m sad to say that this claim is pretty laughable. Flickr’s stats are poor; 500px’s stats are even worse. They track the social aspects (likes, faves, comments) over time but not the photo views, which is what actually interests me. You can see the total views for each photo, but not over time. And that’s about the total of all the stats you get. 500px’s highest tier also boasts Google Analytics integration. Don’t be fooled, though: this only extends to your portfolio views, not views of your photos through the main 500px site.

Summary

There is a recurring theme throughout this post. 500px has the basic functionality that you need for putting your photos online, but not much beyond that, and nowhere near the functionality of Flickr. There is another problem too that affects any site that isn’t Flickr. Flickr was the first big site in the game and has a great API. They are the Facebook of photo hosting. Almost any application or tool that does anything with photos boasts some level of integration with Flickr; support for other sites is hit-and-miss at best.

All of this is not terribly surprising. 500px launched just 4 years ago; Flickr has had more than twice that time to develop its feature set, user base and developer base. Maybe 500px will improve in the future, but for now it just doesn’t have the features and support that I need and Flickr provides. Maybe I’ll continue looking at other options, but if it comes down to Flickr or 500px, right now I’ll stick with Flickr.

What is an API?

I recently posted in the newsgroups about a concern over super-review. In some cases patches that seem to meet the policy aren’t getting super-reviewed. Part of the problem here is that the policy is a little ambiguous. It says that any API or pseudo-API requires super-review, but depending on how you read that section, it could mean that any patch changing the signature of a JS function is classed as an API. We need to be smarter than that. Here is a straw-man proposal for defining what an API is:

Any code that is intended to be used by other parts of the application, add-ons or web content is an API, and requires super-review when it is added or its interface is changed.

Some concrete examples to demonstrate what I think this statement covers:

  • JS modules and XPCOM components in toolkit are almost all intended to be used by applications or add-ons and so are APIs (see the sketch after this list)
  • UI code in toolkit (such as extensions.js) isn’t meant to be used elsewhere and so isn’t an API (though it may contain a few exceptions such as observer notifications; these should be clearly marked out in the file)
  • Any functions or objects exposed to web content are APIs
  • The majority of code in browser/ is only meant to be used within browser/ and so isn’t an API. There are some exceptions to this where extensions rely on certain functionality, such as tabbrowser.xml
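
To put the first two bullets in code terms, here is an illustrative sketch with entirely hypothetical module and function names:

```js
// ExampleService.jsm (hypothetical): anything listed in EXPORTED_SYMBOLS
// exists precisely so that applications and add-ons can import it, so adding
// it or changing its interface would need super-review under this definition.
this.EXPORTED_SYMBOLS = ["ExampleService"];

this.ExampleService = {
  getThing(id) {
    return { id: id };
  }
};

// By contrast, a local helper inside toolkit UI code such as extensions.js is
// only called from within that file, so reworking its signature is not an API
// change (unless it falls under one of the clearly marked exceptions, e.g. an
// observer notification that add-ons listen for).
function updateListItem(item) {
  item.refreshed = true;
  return item;
}
```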

Do you think my simple statement above matches those cases and others that you can think of? Is this a good way to define what needs super-review?

Managing changes is the key to a project’s success

TomTom made an interesting claim recently. Their summary is “when it comes to automotive-grade mapping, open source has some quite serious limitations, falling short on the levels of accuracy and reliability required for safe navigation”.

This is a bold claim, and they talk about recent studies that back them up. Unfortunately none of them are referenced, but it’s pretty clear from the text of the article that all they are doing is comparing the accuracy of TomTom maps with existing open source maps. So they’re just generalising; this doesn’t prove a limitation with the open source process itself, of course, just perhaps with a particular instance of it.

In fact, having read the article, I think TomTom are just misunderstanding how open source development works. Their basic complaint seems to be that open source maps are entirely community generated with no proper review of the changes made. In such a situation I’m sure the data generated is always going to be liable to contain errors, sometimes malicious, sometimes honest. But that isn’t how open source development works in general (I make no claim to know how it works for open source mapping); I’d probably call such a situation crowd-sourcing.

Successful open source projects rely on levels of management controlling the changes that are made to the central repository of the source code (or in this case mapping data). In Firefox, for example, every change is reviewed at least once by an expert in the area of the code affected before being allowed into the repository. Most other open source projects I know of run similarly. It’s this management that is, in my opinion, key to the success of the project. Clamp down too hard on changes and development is slow and contributors get frustrated and walk away; be too lenient and too many bugs get introduced or the project veers out of control. You can adjust the level of control based on how critical the accuracy of the contribution is. Of course this isn’t some special circumstance for open source projects; closed source projects should operate in the same fashion.

The part of their post that amuses me is when they say “we harness the local knowledge of our 60 million satnav customers, who can make corrections through TomTom Map Share”. So basically they accept outside contributions to their maps too. As far as their development goes, it sounds like they function much like an open source project to me! The only claim they make is that they have better experts reviewing the changes that are submitted. This might be true, but it has nothing to do with whether the project is open source or not; it’s just a question of who you find to control the changes submitted.

There is of course one place where open source is at an arguable disadvantage. The latest bleeding edge source is always available (or at least should be). If you look at the changes as they come in, before QA processes and community testing have gone on, then of course you’re going to see issues. I’m sure TomTom have development versions of their maps that are internal only and probably have their fair share of errors waiting to be ironed out too. Open source perhaps makes it easier to end up using these development versions, so unless you know what you’re doing you should always stick to the more stable releases.

Just because a project accepts contributions from a community doesn’t mean it is doomed to fail, nor does it mean it is bound to succeed. What you have to ask yourself before using any project, open source or not, is how good those controlling the changes are and how many people are likely to have reviewed and tested the end result.