What is Jetpack here for?

Who are the Jetpack team? What are they here for? A lot of people in Mozilla don’t know that Jetpack still exists (or never knew it existed). Others still think of it as the original prototype which we dropped in early 2010. Our goals have changed a lot since then. Even people who think they know what we’re about might be surprised at what our current work involves.

Let’s start with this basic statement:

Jetpack makes it easier to develop features for Firefox

There are a couple of points wrapped up in this statement:

  1. The Jetpack team doesn’t develop features. We enable others to develop features. We should be an API and tools team. We’ve done a lot of API development, but we should work with the developer tools team more to make sure that Firefox developer tools work for Firefox development, too.
  2. I didn’t say “add-ons”; I said “features”. The APIs we develop shouldn’t be just for add-on developers; Firefox developers should be able to use them too. If the APIs we create are the best they can be, then everyone should want to use them. They should demand to use them, in fact. It follows that this should make it easier for add-ons to transition into full Firefox features, or vice versa.

We happen to think that a modular approach to developing features makes more sense than the old style of using overlays and XPCOM. All of the APIs we develop are CommonJS-style modules (as used in Node and elsewhere) and we’ve built the add-on SDK to simplify this style of add-on development. That work includes a loader for CommonJS modules which has now landed in Firefox and is available to all feature developers. It also includes tools to take directories of CommonJS modules and convert them into a standalone XPI that can be installed into Firefox.
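To make that concrete, here is a minimal sketch of what a CommonJS-style SDK module looks like. The file names and the exported function are made up for illustration, and the exact require() paths have shifted between SDK versions, so treat the details as approximate.

```js
// hello.js — a hypothetical module; it only shares what it puts on `exports`.
// The "sdk/tabs" module ID is illustrative; older SDK releases used other paths.
const tabs = require("sdk/tabs");

exports.openHomepage = function () {
  tabs.open("https://www.mozilla.org/");
};

// main.js — the add-on's entry point requires the module by relative path, and
// the SDK's tooling packages both files (plus the loader) into a standalone XPI.
const { openHomepage } = require("./hello");
openHomepage();
```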

The next step in supporting Firefox and add-on developers is to land our APIs in Firefox itself so they are available to everyone. We hope to complete that by the end of this year, and it will bring some great benefits, including smaller add-ons and the ability to fix certain problems in SDK-based add-ons with Firefox updates.

Mythical goals

Jetpack has been around for a while and in that time our focus has changed. There are a few cases where I think people are mistaken about the goals we have. Here are the common things that were talked about as hard goals in the past but that I don’t think are goals any longer.

Displace classic add-ons

Most people think Jetpack’s main goal is to displace classic add-ons. It’s obvious that we’ve failed to do that. Were we ever in a position to do so? Expecting the developers of large add-ons to switch to a different style of coding (even a clearly better one) without some forcing factor doesn’t work. The Electrolysis project might have done it, but even supporting e10s was easier than converting a large codebase to the add-on SDK. The extension ecosystem of today still includes a lot of classic add-ons, and the APIs we build should be usable by their developers, too.

Forwards compatibility

Users hate it when Firefox updates break their add-ons. Perfect forwards compatibility was another intended benefit of the SDK. Shipping our APIs with Firefox will help a lot, as the add-ons that use them will work even if the specific implementation of the APIs needs to change under the hood over time. It won’t be perfect, though. We’re going to maintain the APIs vigorously, but we aren’t fortune tellers. Sometimes parts of Firefox will change in ways that force us to break APIs that didn’t anticipate those changes.

What we can do, though, is get better at figuring out which add-ons will be broken by API changes and reach out to those developers to help them update their code. All add-on SDK based add-ons include a file that lists exactly which APIs they use, making it straightforward to identify those that might be affected by a change.
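To illustrate the idea (the real metadata file’s name and layout have varied between SDK versions, so this is an approximation rather than the actual format), checking whether an add-on is affected by an API change boils down to something like:

```js
// Hypothetical sketch: an SDK-built XPI carries a generated manifest of the
// modules it requires. Given that data, flagging add-ons that use an API we
// are about to change is a simple lookup. The structure below is made up.
var manifest = {
  "main": ["sdk/tabs", "sdk/panel", "sdk/self"]   // modules required by main.js
};

function usesApi(manifest, moduleId) {
  return Object.keys(manifest).some(function (file) {
    return manifest[file].indexOf(moduleId) !== -1;
  });
}

console.log(usesApi(manifest, "sdk/panel")); // true — this add-on needs a heads-up
```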

Cross-device compatibility

There’s a theory that says that as long as you’re only using SDK APIs to build your add-on, it will magically work on mobile devices as well as it does on desktop. Clearly we aren’t there yet either. We are making great strides, but the goal isn’t entirely realistic. Developers want to be able to use as many features as they can in Firefox to make their new feature great, but many features on one device don’t exist or make no sense on other devices. Pinned tabs exist on desktop and the SDK includes API support for pinning and un-pinning tabs, but on mobile there is currently no support for pinned tabs. Honestly, I don’t think it’s something we should even have for phone devices. Adding APIs for add-ons to create their own toolbars makes perfect sense on desktop, but again makes no sense at all for phones.
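For reference, this is roughly what the desktop-only case looks like through the SDK’s high-level tabs API (a sketch; the exact module path has changed between SDK releases):

```js
// Sketch of pinning the active tab with the add-on SDK's tabs API.
// Treat the "sdk/tabs" module ID as illustrative of the API shape.
var tabs = require("sdk/tabs");

var tab = tabs.activeTab;
if (!tab.isPinned) {
  tab.pin();   // turns it into an app tab on desktop Firefox
}
// On a device with no pinned-tab concept (e.g. phones), an API like this
// either shouldn't exist at all or would have to silently do nothing.
```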

So, do we make the APIs only support the lowest common denominator across all devices Firefox works on? Can we even know what that is? No. We will support APIs on the devices where it makes sense, but in some cases we will have to say that the API simply isn’t supported on phones, or tablets, or maybe desktops. What we can do, again by having a better handle on which APIs developers are using, is make device compatibility more obvious to developers, allowing them to make the final call on which APIs to employ.

Hopefully that has told you something you didn’t know before. Did it surprise you?

Simple image filters with getUserMedia

I forgot to blog about this last week, but Justin made me remember. The WebRTC getUserMedia API is available on the Nightly and Aurora channels of Firefox right now, and Tim has done a couple of great demos of using JavaScript to process the media stream. That got me interested, and after a little playing around I remembered learning the basics of convolution image filters, so I thought I’d give it a try. The result is a sorta ugly-looking UI that lets you build your own image filters to apply to the video coming off your webcam. There are a few pre-defined filter matrices there to get you started and it’s interesting to see what effects you can get. Remember that you need to enable media.navigator.enabled in about:config to make it work.
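If you want to tinker with the idea yourself, the heart of it looks something like the sketch below. This is not the code from my demo: it uses the prefixed navigator.mozGetUserMedia that Nightly/Aurora ship at the moment, a naive 3×3 convolution loop, and otherwise mostly modern names for brevity.

```js
// Naive sketch: grab webcam frames, run a 3x3 convolution kernel over the
// pixels, and paint the result back to a canvas. Assumes the prefixed
// mozGetUserMedia from Nightly/Aurora with media.navigator.enabled turned on.
var video = document.querySelector("video");
var canvas = document.querySelector("canvas");
var ctx = canvas.getContext("2d");

// A simple edge-detection kernel; swap in your own 3x3 matrix to experiment.
var kernel = [ 0, -1,  0,
              -1,  4, -1,
               0, -1,  0];

navigator.mozGetUserMedia({ video: true }, function (stream) {
  video.mozSrcObject = stream;   // srcObject in modern Firefox
  video.play();
  requestAnimationFrame(drawFrame);   // mozRequestAnimationFrame in that era
}, function (err) {
  console.error("getUserMedia failed", err);
});

function drawFrame() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  convolve(frame, kernel);
  ctx.putImageData(frame, 0, 0);
  requestAnimationFrame(drawFrame);
}

// The slow, straightforward version: resolve frame.data on every write, use
// let, and clamp by hand — exactly the things the update below fixes.
function convolve(frame, kernel) {
  let width = frame.width, height = frame.height;
  let src = new Uint8ClampedArray(frame.data);   // copy so reads see the original
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      for (let c = 0; c < 3; c++) {              // skip the alpha channel
        let sum = 0, k = 0;
        for (let ky = -1; ky <= 1; ky++) {
          for (let kx = -1; kx <= 1; kx++) {
            sum += src[((y + ky) * width + (x + kx)) * 4 + c] * kernel[k++];
          }
        }
        frame.data[(y * width + x) * 4 + c] = Math.max(0, Math.min(255, sum));
      }
    }
  }
}
```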

The downside is that, either through me not seeing an obvious optimisation or JS just being too slow right now, it isn’t fast enough for real use. Even a simple 3×3 filter is too slow on my machine since it ends up having to do 9 calculations per pixel, which is just too much. Can someone out there make it faster?

Update: The ever-awesome bz took a look and pointed out three quick fixes that made the code far faster; it now runs almost in realtime with a 3×3 filter for me. First, he pointed out that storing frame.data in a variable outside the loops rather than resolving it each time speeds things up a lot. Secondly, apparently let isn’t fully supported by IonMonkey yet, so it switches to a slower path when it encounters it. Finally, I was manually clamping the result to 0-255, but pixel data is a Uint8ClampedArray so it clamps itself automatically.
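To make those three fixes concrete, the tightened-up loop looks roughly like this (again a sketch rather than the actual demo code):

```js
// The same convolution after bz's suggestions:
// 1. hoist frame.data into a local instead of resolving the property each time,
// 2. stick to var so IonMonkey (at the time) stays on its fast path,
// 3. skip the manual clamp — writes into a Uint8ClampedArray clamp for free.
function convolveFast(frame, kernel) {
  var width = frame.width, height = frame.height;
  var src = new Uint8ClampedArray(frame.data);   // copy to read the original pixels
  var dst = frame.data;                          // hoisted once (fix 1)
  for (var y = 1; y < height - 1; y++) {
    for (var x = 1; x < width - 1; x++) {
      for (var c = 0; c < 3; c++) {
        var sum = 0, k = 0;
        for (var ky = -1; ky <= 1; ky++) {
          for (var kx = -1; kx <= 1; kx++) {
            sum += src[((y + ky) * width + (x + kx)) * 4 + c] * kernel[k++];
          }
        }
        dst[(y * width + x) * 4 + c] = sum;      // no Math.min/Math.max (fix 3)
      }
    }
  }
}
```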

Six hour chilli

I’ve enjoyed making and eating chilli ever since I was in university. It has a lot of appeal for your average student:

  • You can make it in bulk and live off it for a few weeks
  • You can make it spicy enough that you start bleeding out of your nose (and thus totally impress all your friends)
  • It’s dead simple and really hard to screw up

The great part is that once you have the basic ingredients you can just go nuts: vary up the ratios, add extra bits and pieces, whatever you fancy. Eventually though I discovered that I was getting a bit too wild and really wanted to nail down a basic “good” recipe to work from. It took a couple of tries, but here it is. It’ll take around six hours to cook, plus an extra hour for prep depending on how lazy/drunk you are.

Ingredients

  • 2 cups chopped onions
  • 2.5 lb lean ground beef


  • 56 oz diced tomatoes (canned works just fine)
  • 2 tsp chilli powder
  • 2 tsp chilli pepper flakes
  • 1 tsp ground cumin
  • 1 tsp cayenne pepper
  • 1/8 cup cilantro leaves (I use dried but fresh is probably good too)
  • 1/2 tsp chipotle chilli pepper powder
  • 1/2 pint stout or other dark beer
  • 2 cups red wine (I tend to use merlot)
  • 2 cups beef broth
  • 1/8 cup tomato puree or 1/4 cup tomato ketchup
  • ~3-4 tsp coarsely ground black pepper


  • 3×12 oz cans of mixed beans
  • 2-3 large bell peppers (red and/or green), chopped

Instructions

  • Fry the onions in some olive oil until they start to turn translucent, then throw in the beef and stir until it browns all over.
  • Add into a large pot with the tomatoes, spices, wine, beer and broth
  • Cover and simmer for 2 hours, stirring periodically (use the remains of the stout to help you get through this)
  • Drain and add the beans
  • Cover and simmer for 2 hours, stirring periodically (it’s possible you’ll need more stout)
  • Add the chopped peppers
  • Cover and simmer for 1-2 hours, stirring periodically
  • Serve with cornbread and the rest of the bottle of wine

Tips

  • This makes a lot of chilli, so make sure you have a large enough frying pan and pot. You can try scaling it down, but I find it works best when made in large quantities.
  • After adding the peppers you’re basically simmering until it is about the right consistency; uncovering and/or adding some corn starch can help thicken it up at this stage.
  • This recipe comes out pretty spicy, so you might want to drop the chipotle chilli pepper powder if that isn’t your thing.
  • Unless you are feeding an army, make sure you have tupperware to hold the leftovers. It reheats very well; if using a microwave, throwing in a little extra water helps.

After an awesome Jetpack work week

It’s the first day back at work after spending a great work week in London with the Jetpack team last week. I was going to write a summary of everything that went on, but it turns out that Jeff beat me to it. That’s probably a good thing as he’s a better writer than I am, so go there and read up on all the fun stuff that we got done.

All I’ll add is that it was fantastic getting the whole team into a room to talk about what we’re working on and get a load of stuff done. I ran a discussion on what the goals of the Jetpack project are (I’ll follow up on this in another blog post to come later this week) and was delighted that everyone on the team is on the same page. Employing people from all around the world is one of Mozilla’s great strengths but also a potential risk. It’s vital for us to keep doing work weeks and all hands like this to make sure everyone gets to know everyone and is working together towards the same goals.

Where are we all now?

Quite a while ago now I generated a map showing where all the Mozilla employees are in the world. A few of the new folks on my team reminded me of it, and when I showed it to them they wanted to know when I was going to get around to updating it. Challenge accepted! Back then the Mozilla phonebook was available as a static HTML page, so scraping it for the locations was easy. Now the phonebook is a prettier webapp, so the same technique doesn’t quite work. Instead I ended up writing a simple add-on to pull the main phonebook tree and then request the JSON data for each employee, dumping it all to a file on disk. Then some manipulation of the original scripts led to great success. Click through for a full zoomable map.
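For the curious, the approach was roughly the sketch below. The URLs and field names are placeholders rather than the real phonebook endpoints, and the SDK module paths are from around that era, so treat it as an outline rather than the actual add-on.

```js
// Rough outline: fetch a directory listing, request each person's JSON record,
// and dump the collected locations to a file. Endpoints and fields are made up.
var { Request } = require("sdk/request");
var file = require("sdk/io/file");

var PHONEBOOK = "https://example.com/phonebook";    // placeholder base URL
var results = [];

Request({
  url: PHONEBOOK + "/tree.json",                    // hypothetical listing endpoint
  onComplete: function (response) {
    var people = response.json.people || [];        // hypothetical field name
    var pending = people.length;
    people.forEach(function (person) {
      Request({
        url: PHONEBOOK + "/" + person.id + ".json", // hypothetical detail endpoint
        onComplete: function (detail) {
          results.push({ name: person.name, location: detail.json.location });
          if (--pending === 0) {
            var out = file.open("/tmp/locations.json", "w");
            out.write(JSON.stringify(results, null, 2));
            out.close();
          }
        }
      }).get();
    });
  }
}).get();
```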

Mozilla staff around the world

One of the criticisms I heard the last time I did this was that it didn’t contain any information about the rest of the community. I’ll say basically the same thing that I said then. Get me the data and I’ll gladly put it on a map for all to see, but right now I don’t know where to find that information. I think this is something that Mozillians could do but I’m not sure what the chances are of getting everyone in the directory to add that info to their profile. I think this information is still useful even without the community as it demonstrates how global we are (and in some cases aren’t!) as a company.

Managing changes is the key to a project’s success

TomTom made an interesting claim recently. Their summary is “when it comes to automotive-grade mapping, open source has some quite serious limitations, falling short on the levels of accuracy and reliability required for safe navigation”.

This is a bold claim, and they talk about recent studies that back them up. Unfortunately none of them are referenced, but it’s pretty clear from the text of the article that all they are doing is comparing the accuracy of TomTom maps with existing open source maps. So they’re just generalising; this doesn’t prove a limitation with the open source process itself of course, just perhaps with a particular instance of it.

In fact, having read the article, I think TomTom are just misunderstanding how open source development works. Their basic complaint seems to be that open source maps are entirely community generated with no proper review of the changes made. In such a situation I’m sure the data generated is always going to be liable to contain errors, sometimes malicious, sometimes honest. But that isn’t how open source development works in general (I make no claim to know how it works for open source mapping). I’d probably call such a situation crowd-sourcing.

Successful open source projects rely on levels of management controlling the changes that are made to the central repository of the source code (or in this case mapping data). In Firefox, for example, every change is reviewed at least once by an expert in the area of the code affected before being allowed into the repository. Most other open source projects I know of run similarly. It’s this management that is, in my opinion, key to the success of the project. Clamp down too hard on changes and development is slow and contributors get frustrated and walk away; be too lenient and too many bugs get introduced or the project veers out of control. You can adjust the level of control based on how critical the accuracy of the contribution is. Of course this isn’t some special circumstance for open source projects; closed source projects should operate in the same fashion.

The part of their post that amuses me is when they say “we harness the local knowledge of our 60 million satnav customers, who can make corrections through TomTom Map Share”. So basically they accept outside contributions to their maps too. As far as their development goes it sounds like they function much like an open source project to me! The only claim they make is that they have better experts reviewing the changes that are submitted. This might be true, but it has nothing to do with whether the project is open source or not; it’s just about who you find to control the changes submitted.

There is of course one place where open source is at an arguable disadvantage. The latest bleeding edge source is always available (or at least should be). If you look at the changes as they come in, before QA processes and community testing have gone on, then of course you’re going to see issues. I’m sure TomTom have development versions of their maps that are internal only and probably have their fair share of errors waiting to be ironed out too. Open source perhaps makes it easier to end up using these development versions, so unless you know what you’re doing you should always stick to the more stable releases.

Just because a project accepts contributions from a community doesn’t mean it is doomed to fail, nor does it mean it is bound to succeed. What you have to ask yourself before using any project, open source or not, is how good those controlling the changes are and how many people are likely to have reviewed and tested the end result.


Mossop Status Update: 2012-05-11

Done:

  • Submitted pdf.js packaging work for review (bug 740795)
  • Patched a problem on OSX with FAT filesystem profiles (bug 733436)
  • Patched a problem with restartless add-ons when moving profiles between machines (bug 744833)
  • Added some quoting for the extensions crash report annotation (bug 753900)
  • Thoughts on shipping the SDK in Firefox and problems with supporting other apps: https://etherpad.mozilla.org/SDK-in-Firefox