Wildwood Regional Park, 1st June

Last weekend a friend of ours was visiting so we took him out to one of our favourite walks in the Wildwood Regional Park. We’d spotted earlier in the week that there was a closer entrance to the park, just a few minutes’ drive from our house, so we decided to try it from that side. From the maps it looked like we could drive quite close and be just a mile or so from Paradise Falls. Of course things never work out like that. One of the roads was closed to public traffic and so we ended up parking further out. We spotted the start of a trail leading up into the hills and decided to take that instead. It ended up being a very steep climb of some 600 feet in the first hour, but it was worth it for the views from up there. After we got down into the park and to the waterfall we found a flatter route back; it just involved going past a water reclamation facility, so we had to put up with some bad smells for five minutes.

This is a five and a half mile hike, quite steep at first but it flattens out once you get out of the hills and into the main park. It took us three hours to complete.

[gpx url="https://dl.dropboxusercontent.com/sh/qgj7quoaz66skqz/1a8h3I_dd1/20130601.gpx"]

 

How to Win Friends and Influence People in the Digital Age

A friend recently made me a deal where he would buy me this book if I’d read it and tell him something that I learnt from it. I think this is all a cunning plan of his to get all his friends to do the same and save him having to read it himself, but I’m never one to turn down a free book, especially when it is one that a former manager recommended I read. Well, OK, he recommended I read the original, which was written in the 1930s. Since then there have been a number of books based on the same ideas for different audiences, even one in pink for teen girls (I am not kidding). This version attempts to bring the concepts into the modern age by using more recent examples and explaining how you can apply them to the internet age.

The title of this book always put me off. It made it sound like a textbook on manipulative practices to make people like you. That isn’t really what it’s about though. It’s more about changing your own attitudes and behaviours than it is about trying to get others to change theirs. The claim is that others will react to your attitude towards them and often respond in kind, so if you can change yourself for the better then you’ll see others respond to you in better ways.

The principles in this book are well worth taking the time to read over and try to follow. This is particularly true for those in corporate environments, and they are all vital for anyone who manages people. It should come as no surprise then that all of them are also covered to one degree or another in other management books that I’ve read. This means that nothing in this book was completely new to me. Some of the chapters did put some of the ideas into different contexts, though, and had me thinking of ideas that might help me manage my teams.

What I liked about the style of the book was its simplicity. It’s short and doesn’t mess around. You can see what all the techniques are from the table of contents. For many you don’t even need to read the chapters to understand why they are important, but the chapters provide useful examples of how they can be applied in certain situations. I’m planning on making a list of the chapter headings to stick up by my desk somewhere for quick reference. A couple of the chapters seem to go off the rails a little, particularly towards the end, and some of the examples meant for the digital age felt a little contrived and jammed in for the sake of it. I do wonder if it might be just as good to read the original and figure out for yourself how to apply it to the modern world.

I think the thing I immediately drew from the book is that I am consistently too negative. I was at first going to make this post a scathing review of the book, because it certainly does have some problems. But that would be ignoring all of the benefits you can get from reading it. And what would be the point? Maybe it makes me feel big and clever but it doesn’t make me look big and clever. So hopefully this is a more positive review that should convince you to take a flick through; it is certainly worthwhile if you haven’t read much like it in the past.

And now I feel big and clever for seeing that being negative only makes me feel big and clever. Ah well, can’t win them all.

500px isn’t quite Flickr yet

Since the big changes to Flickr last week I’ve been mulling over the idea of switching to a different photo sharing site. 500px had caught my eye in the past as being a very similar concept to Flickr. It has social aspects like Flickr does, maybe even more so as it supports the notion of “liking” a photo as well as making it a “favourite”. They seem to target the more professional photographer (yes Marissa Mayer, there really is still such a thing) and the curated photos that show up through their main photos section show that. Frankly it’s a little off-putting since my photos don’t even come close to that level, but the same can be said for Flickr’s similar sections so I guess it’s not that big of a deal. So I took a day or two to upload some of my photos, put 500px through its paces and see how it measures up to Flickr. I’ve built up a fairly specific workflow for my photo uploading and I’m measuring against that, so what might be show-stoppers for me may not affect others.

Ultimately, if you can’t be bothered to read the details: 500px is a nice enough photo site, and while it may look better than Flickr right now, it is missing much of the functionality that I find important.

Basic organisation

500px lets you organise your photos similarly to Flickr. You have your photo library and you can put your photos into sets. The main difference is that while in Flickr your full library is generally visible to everyone, on 500px it isn’t. Instead the main photos you see for a person are those put into the “public photos” area, which just appears to be a special set. This is a little odd. If I want to see someone’s photos I have to click through all their sets, whereas Flickr just lets me browse their photostream. Stranger still, sometimes photos randomly end up in the public photos set without me putting them there. I don’t know if this was something the website did or the plugin I used to upload, but after uploading two sets which overlapped, all the photos that appeared in both sets were suddenly in the public photos set too.

Browsing through photos is hard on 500px. On Flickr if I go to a photo I can see which sets it appears in and easily move back and forward through any of those sets or the photostream. 500px only shows you thumbnails for the set you reached the photo through, which makes finding similar things from the same photographer more difficult. 500px also supports tags of course, but there doesn’t seem to be a way to show all the photos from a photographer with a particular tag. There doesn’t even seem to be a way to see a full-size version of a photo, just the 900px-wide version on the main photo page.

Uploading

I’m not the sort to trust online sites to be the place where I store and manage my photos. I keep everything managed locally in Lightroom and rely on Jeffrey Friedl’s excellent plugin to mirror that to Flickr, so I wanted to do the same with 500px. They have a Lightroom plugin too. Excitingly for me it is open source, so while I found problems here it could be possible for me to improve it myself. 500px’s plugin is, in a word, basic. It can upload your photos, tag them and name them, but that is about it. It is remarkably slow to do that too. For some reason it does the upload in two passes: the first pass eats up my CPU and seemingly does nothing (maybe rendering the photos to disk somewhere?), then it goes ahead and does the upload. This is frustrating since you have no way to guess how long it might take. Probably not a problem for small uploads though. The other big problem for me is that it uploads the photos in a random order. I like my photos to be in the order I shot them but there doesn’t seem to be any way to do this at upload time with 500px’s plugin. The general lack of options in the plugin means I’d be spending a long time trying to make it do what I really want: things like tagging based on rating, stripping certain metadata from photos and so on.

Organisation

Once you have your photos in Flickr you can use their excellent organiser to put things into sets and arrange things how you like. As I mentioned I prefer to do this in Lightroom and just mirror that, but that doesn’t work for things like the order that photos appear in sets. Flickr makes that pretty easy: you can reorder a set manually or by various attributes like capture time. After the upload to 500px left all my photos out of order I figured I could just correct this online. Sadly the 500px equivalent is extremely basic. You can reorder manually … and that’s it. For a set of a few hundred photos that just doesn’t cut it.

Portfolios

One feature that 500px has that Flickr doesn’t is portfolios. They are effectively a custom website for showing off your photos: no 500px branding, just very clean layouts. They’re a little oddly implemented for my tastes; you have to create custom sets to appear in your portfolio, and those sets don’t show on the main 500px site. Want the same set in both? You have to duplicate it. I wasn’t a fan of any of the available layouts either, but that is just my taste. Apparently you can go in and edit the layout and styles directly so you can probably do better things with this. Ultimately I don’t think it’s a very useful feature for me, and if I wanted it I could just use Flickr’s API to build something similar on my own site.
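
For the curious, here’s a rough sketch of the sort of thing I mean, not production code: the API key and set id are placeholders you’d fill in yourself. It asks the Flickr API for a set’s photos with flickr.photosets.getPhotos and builds image URLs from the standard Flickr pattern.

var API_KEY = "your-api-key";        // placeholder, get one from Flickr
var SET_ID = "72157600000000000";    // placeholder photoset id

var url = "https://api.flickr.com/services/rest/" +
          "?method=flickr.photosets.getPhotos" +
          "&api_key=" + API_KEY +
          "&photoset_id=" + SET_ID +
          "&format=json&nojsoncallback=1";

var xhr = new XMLHttpRequest();
xhr.open("GET", url);
xhr.onload = function() {
  var data = JSON.parse(xhr.responseText);
  data.photoset.photo.forEach(function(p) {
    var img = document.createElement("img");
    // Standard Flickr photo URL pattern; "_z" requests the 640px size.
    img.src = "https://farm" + p.farm + ".staticflickr.com/" +
              p.server + "/" + p.id + "_" + p.secret + "_z.jpg";
    img.alt = p.title;
    document.body.appendChild(img);
  });
};
xhr.send();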

Stats

One of the things that scares me about Flickr dropping Pro membership is that they are probably going to be phasing out their stats. I like to be able to see how many people are looking at my photos and where they are coming from, and while Flickr’s stats offering was always simplistic to say the least, I could at least use it to find sites that were linking back to my photos. 500px boasts “Advanced Statistics” for the paid tiers, but I’m sad to say that this claim is pretty laughable. Flickr’s stats are poor; 500px’s are even worse. They track the social aspects (likes, faves, comments) over time but not the photo views, which is what actually interests me. You can see the total views for each photo, but not over time. And that’s about the total of all the stats you get. 500px’s highest tier also boasts Google Analytics integration. Don’t be fooled though. This only extends to your portfolio views, not views of your photos through the main 500px site.

Summary

There is a recurring theme throughout this post. 500px has the basic functionality that you need for putting your photos online, but not much beyond that, and nowhere near the functionality that Flickr offers. There is another problem too that affects any site that isn’t Flickr. Flickr was the first big site in the game and has a great API. They are the Facebook of photo hosting. Almost any application or tool that does anything with photos boasts some level of integration with Flickr; support for other sites is patchy at best.

All of this is not terribly surprising. 500px launched just 4 years ago; Flickr has had more than twice that time to develop its feature set, user base and developer base. Maybe 500px will improve in the future, but for now it just doesn’t have the features and support that I need and that Flickr provides. Maybe I’ll continue looking at other options, but if it comes down to Flickr or 500px, right now I’ll stick with Flickr.

Hello new Flickr, goodbye social

To say that it was a shock to find Flickr had released a massive revamp of their site is something of an understatement. Ever since Yahoo took over the reins the site has received minimal attention, leading many to believe that it wouldn’t be long before they gave up altogether. Every time Yahoo announced another set of web properties that they were discontinuing, users breathed a sigh of relief to see that Flickr wasn’t on the list … yet. Now those days may be over. Love them or hate them, it is clear that Yahoo have invested a lot of time and thought into the changes that they released earlier this week.

The thing that strikes me as really odd about the changes though is what they seem to be doing their best to hide: the social aspect of the site. Flickr was possibly the first social photo sharing service. Plenty of other sites have always existed that let you put your photos online and show off your portfolio. Flickr though was always aimed at also making connections and discussing each other’s photos. You have contacts, comments and favourites, not to mention information about where and how each photo was taken, right alongside each shot. It seems remarkable that at a time when everyone seems to be trying to build social sites, Yahoo have decided that the social aspect is less important. When you look at a photo’s page now you see a giant version of the photo. All of the information about the photo and the comments people have left are hidden below the fold. The front page suffers equally. Where before I would see a list of recent comments on my photos and thumbnails of what my contacts had uploaded recently, my current page shows me one and a half pictures from my contacts.

Yahoo seem to have decided that the photo is all that is really important. I disagree: the photo is of course very important, but the information about the photo and how people react to it is very important too, far too important to hide.

Six years revisited

Two years ago I blogged about how it had been six years since I wrote my first patch for Firefox. Today I get to say that it’s been six years since I started getting paid to do what I love, working for Mozilla. In that time I’ve moved to a new continent, found a wife (through Mozilla no less!), progressed from coder to module owner to manager, seen good friends leave for other opportunities and others join, and watched (dare I say helped?) Mozilla, and the Firefox team in particular, grow from the small group that I joined in 2007 to the large company that will soon surpass 1000 employees.

One of the things I’ve always appreciated about Mozilla is how flexible they can be about where you work. Recently my wife and I decided to move away from the Bay Area to be closer to family. It was a hard choice to make as it meant leaving a lot of friends behind, but one thing that made it easier was knowing that I wouldn’t have to factor work into that choice. Unlike at many other companies I know of, there is no strict requirement that everyone work from the office. True, it is encouraged, and it sometimes makes sense to require it for some employees, particularly when starting out, but I knew that when it came time to talk to my manager about it he wouldn’t have a problem with me switching to working remotely. Of course six years ago when I started I was living in the UK, and remained there for my first two years at Mozilla, so he had a pretty good idea that I could handle it, and at least this time I’m only separated from the main office by a short distance and no time zones.

The web has changed a lot in the last six years. Back then we were working on Firefox 3, the first release to contain the awesomebar and a built-in way to download extensions from AMO. Twitter and Facebook had only been generally available for about a year. The ideas for CSS3 and HTML5 were barely written, let alone implemented. If you had told me back then that you’d be able to play a 3D game in your browser with no additional plugins, or watch videos without Flash, I’d have probably thought those were crazy pipe dreams. We weren’t even JITting our JS code back then. Mozilla, along with other browser makers, are continuing to prove that HTML, CSS and JS are a winning combination that we can build on to make the future of the web open, performant and powerful. I can’t wait to see what things will be like in another six years.

Firefox now ships with the add-on SDK

It’s been a long ride but we can finally say it. This week Firefox 21 shipped and it includes the add-on SDK modules.

We took all the Jetpack APIs and we shipped them in Firefox! What does this mean? Well, for users it means two important things:

  1. Smaller add-ons. Since they no longer need to ship the APIs themselves, add-ons only have to include the unique code that makes them special. That’s something like a 65% file-size saving for the most popular SDK-based add-ons, probably more for simpler add-ons.
  2. Add-ons will stay compatible with Firefox for longer. We can evolve the modules in Firefox that add-ons use so that most of the time when changes happen to Firefox the modules seamlessly shift to keep working. There are still some cases where that might be impossible (when a core feature is dropped from Firefox for example) but hopefully those should be rare.

To take advantage of these benefits add-ons have to be repacked with a recent version of the SDK. We’re working on a plan to do that automatically for existing add-ons where possible, but developers who want to get the benefits right now can just repack their add-ons themselves using SDK 1.14 with cfx xpi --strip-sdk, or with the next release of the SDK, 1.15, which will do that by default.

Afterburner steak rub

This is a recipe for a steak rub that I made up last night. It came out pretty well and everyone enjoyed it. It slightly caramelises the surface of the steak, infusing it with herby goodness, and after you enjoy that flavour a nice kick of heat comes through in the aftertaste.

  • 1/4 handful fresh sage, finely chopped
  • 1/2 handful fresh thyme, finely chopped
  • 1 handful fresh parsley, finely chopped
  • 2 tsp salt
  • 1 tbsp brown sugar
  • 2 tsp black pepper
  • 1 tsp cayenne pepper

Mix it all together in a bowl, rinse off your steaks and apply liberally to each side about half an hour before you plan to start cooking. These quantities should make enough for 4 small steaks.

Get notifications about changes to any directory in mercurial

Back in the old days, when we used CVS of all things for our version control, we had a wonderful tool called bonsai to help query the repository for changes. You could list changes on a per-directory basis if you needed to, which was great for keeping an eye on certain chunks of code. I recall there being a way of getting an RSS feed from it, and I used it when I was the module owner of the extension manager to see what changes had landed that I hadn’t noticed in bugs.

Fast forward to today and we use mercurial instead. When we switched there was much talk of how we’d get tool parity with CVS, but bonsai is something that has never been replaced fully. Oh, hgweb is decent at looking at individual files and browsing the tree, but you can’t get that list of changes per directory from it. I believe you can use the command line to do it but who wants to do that? Lately I’ve been finding need of those directory RSS feeds more and more. We’re now periodically uplifting the Add-on SDK repository to mozilla-central, so it’s really important to spot changes that have been made to that directory in mozilla-central so we can also land them in our git repository and not clobber them the next time we uplift. I’m also the module owner of toolkit, which is a pretty big, sprawling set of files. It seems like every time I look I find something that landed without me noticing. I don’t make for a good module owner if I’m not keeping an eye on things, so I’d really like to see when new files are added there.

So I introduce the Hg Change Feed, the result of mostly just a few days of work. Every 10 minutes it pulls new changes from mozilla-central and mozilla-inbound. A mercurial hook looks over the changes and adds information about them to a MySQL database. Then a simple Django app displays that information. As you browse through the directories in the tree it shows only changesets that affected files beneath that directory. For any directory you can also get an RSS feed of the same. Plug that into IFTTT and you have an automated system to notify you in pretty much any way you’d like about new changes you’d be interested in.

Some simple examples. For tracking changes to the Add-on SDK I’m watching http://hgchanges.fractalbrew.com/mozilla-inbound/file/addon-sdk/source. For toolkit I’m looking at http://hgchanges.fractalbrew.com/mozilla-inbound/file/toolkit?types=added. The types parameter takes a comma-separated list of “added”, “removed” and “modified” to filter which changes you’re interested in. There’s no UI on the site for changing that right now; you’re welcome to add some!

One other neat trick is that it mostly ignores merge changesets. A merge will only show up in the list of changes if it actually makes a change not already present in either of its parents (which mostly happens when resolving merge conflicts), because really you don’t need to hear about changes twice.

So play with it, let me know if you find it useful or if you think things are missing. I can also add other mercurial repositories if people want. Some caveats:

  • It only retains the last 2000 changesets from any repository in an effort to keep the DB small and fast; it also only shows the last 200 changesets for each page, or just the last 20 in the feeds. These can be tweaked easily enough, and I’ve done basically no benchmarking to say those are the right values.
  • The site isn’t as fast as I’d like; in particular, listing changes for the top-level directory takes nearly 5 seconds. I’ve thrown some basic caching in place to help alleviate that for now. I bet someone who has more MySQL and Django experience than me could tell me what I’m doing wrong.
  • I’m off on vacation tomorrow so I guess I’m announcing this then running away; sorry if that means it takes me a while to respond to comments.

Want to help out and make it better? Go nuts with the source. There’s a readme that hopefully explains how to set up your own instance.

Hacking on Tilt

Tilt, or 3D view as it is known in Firefox, is an awesome visual tool that really lets you see the structure of a webpage. It shows you just how deep your tag hierarchy goes, which might give signs of your page being too complex or even help you spot errors in your markup that you wouldn’t otherwise notice. But what if it could do more? What if there were different ways to visualise the same page? What if even web developers could create their own visualisations?

I’ve had this idea knocking around in my head for a while now and the devtools work week was the perfect time to hack on it. Here are some results, click through for larger images:

Normal 3D view
Only give depth to links
Only give depth to links going off-site
Only give depth to elements that have a different style on hover

This is all achieved with some changes to Firefox itself to make Tilt handle more generic visualisations, along with an extension that then overrides Tilt’s default visualisation. Because the extension has access to everything Firefox knows about the webpage, it can use some interesting sources of data about the page, including those not found in the DOM. This one I particularly like: it makes each element’s depth proportional to the number of attached DOM event listeners:

Give depth to elements based on the number of attached event listeners

Just look at that search box, and what’s up with the two buttons having different heights?

The code just calls a JS function to get the height for each element displayed in 3D view. It’s really easy to use DOM functions to highlight different things about the elements, and while I think some of the examples I made are interesting, it will be more interesting to just let web devs come up with and share their own visualisations. To that end I also demoed using Scratchpad to write whatever function you like to control the visualisation. You can see a screencast of it in action.
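
To give a flavour of what such a function looks like, here’s a rough sketch rather than the actual patch. The function name is invented; the event listener service is a real chrome-only API, which is how the listener visualisation above gets at data that isn’t in the DOM.

var els = Components.classes["@mozilla.org/eventlistenerservice;1"]
                    .getService(Components.interfaces.nsIEventListenerService);

// Called for each element in the 3D view; the return value becomes the
// element's extrusion height.
function depthForNode(node) {
  var listeners = els.getListenerInfoFor(node);
  return listeners.length * 20;  // arbitrary pixels per listener
}

Swap the body for something like return node.localName == "a" ? 50 : 0; and you get the links-only view from the screenshots above.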

Something that struck me towards the end of the week is that it could be awesome to pair this up with external sources of data like analytics. What about being able to view your page with links given depth proportional to how often users click them? Seems like an awesome way to really understand where your users are going and maybe why.

I’m hoping to get the changes to Firefox landed soon, maybe with an additional patch to properly support extensibility in Tilt. Right now the extension works by replacing a function in a JSM, which is pretty hacky, but it wouldn’t be difficult to make it nicer than that. After that I’ll be interested to see what visualisation ideas others come up with.

The Add-on SDK is now in Firefox

We’re now a big step closer to shipping the SDK APIs with Firefox and other apps: we’ve uplifted the SDK code from our git repository to mozilla-inbound and, assuming it sticks, we will be on the trains for release. We’ll be doing weekly uplifts to keep the code in mozilla-central current.

What’s changed?

Not a lot yet. Existing add-ons and add-ons built with the current version of the SDK still use their own versions of the APIs from their XPIs. Add-ons built with the next version of the SDK may start to try to use the APIs in Firefox in preference to those shipped with the XPI, and then a future version will only use those in Firefox. We’re also talking about the possibility of making Firefox override the APIs in any SDK-based add-on and use the shipped ones automatically, so the add-on author wouldn’t need to do anything.

We’re working on getting the Jetpack tests running on tinderbox switched over to use the in-tree code; once we do, we will unhide them so other developers can immediately see when their changes break the SDK code. You can now run the Jetpack tests with a mach command or just with make jetpack-tests from the object directory. Those commands are a bit rudimentary right now: they don’t give you a way to choose individual tests to run. We’ll get to that.

Can I use it now?

If you’re brave, sure. Once a build including the SDK is out (it might be a day or so for nightlies) fire up a chrome-context Scratchpad and put this code in it:

var { Loader } = Components.utils.import("resource://gre/modules/commonjs/toolkit/loader.js", {});
var loader = Loader.Loader({
  paths: {
    "sdk/": "resource://gre/modules/commonjs/sdk/",
    "": "globals:///"
  },
  resolve: function(id, base) {
    if (id == "chrome" || id.startsWith("@"))
      return id;
    return Loader.resolve(id, base);
  }
});
var module = Loader.Module("main", "scratchpad://");
var require = Loader.Require(loader, module);

var { notify } = require("sdk/notifications");
notify({
  text: "Hello from the SDK!"
});

Everything but the last 4 lines sets up the SDK loader so it knows where to get the APIs from and creates you a require function to call. The rest can just be code as you’d include in an SDK add-on. You probably shouldn’t use this for anything serious yet; in fact I haven’t included the code to tell the module loader to unload, so this example may leak things for the rest of the life of the application.
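
If you do want to clean up after yourself, the loader module also exports an unload function, so something like this should tear the loader down again (untested, so treat it as a sketch):

Loader.unload(loader, "shutdown");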

This is too long of course (longer than it should be right now because of a bug, too), so one thing we’ll probably do is create a simple JSM that can give you a require function in one line as well as take care of unloading when the app goes away.
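
To give a flavour of what I mean, here’s a purely hypothetical sketch of such a JSM, wrapping the setup code above and the unload call into one helper. Nothing like this exists yet; the file name and the exported createRequire symbol are invented.

// Hypothetical SDKLoader.jsm -- none of this ships anywhere yet.
var EXPORTED_SYMBOLS = ["createRequire"];

var { Loader } = Components.utils.import(
  "resource://gre/modules/commonjs/toolkit/loader.js", {});
var { Services } = Components.utils.import(
  "resource://gre/modules/Services.jsm", {});

function createRequire(uri) {
  var loader = Loader.Loader({
    paths: {
      "sdk/": "resource://gre/modules/commonjs/sdk/",
      "": "globals:///"
    },
    resolve: function(id, base) {
      if (id == "chrome" || id.startsWith("@"))
        return id;
      return Loader.resolve(id, base);
    }
  });

  // Unload the loader when the application shuts down so modules get a
  // chance to clean up after themselves.
  var observer = {
    observe: function() {
      Services.obs.removeObserver(observer, "quit-application");
      Loader.unload(loader, "shutdown");
    }
  };
  Services.obs.addObserver(observer, "quit-application", false);

  return Loader.Require(loader, Loader.Module("main", uri));
}

With that in place the scratchpad example above would shrink to importing the JSM, calling var require = createRequire("scratchpad://"); and then just requiring the modules you want.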