A review of the Canon EOS R7

My totally unprofessional review

I’ve always enjoyed photography. I’ve had a Nikon D7000 for nearly 12 years and apparently I’ve taken some 80,000 shots with it. But over the past few years I’ve been taking fewer and fewer photos and finding I wasn’t enjoying it as much. I’d rarely take the camera out with me and even when I did I found I was throwing away most of the photos I took. I finally decided it was time to try something new.

Cameras have come on a fair bit in 12 years, and modern mirrorless cameras are more functional yet smaller and lighter than my D7000, so I figured I was more likely to actually take one out and about with me. My dad recently bought the Canon EOS R10, which I played with for a bit and was impressed with, so I ended up buying the Canon EOS R7. I’m not going to talk about every aspect of the camera (there are plenty of reviews online for that), but here are the things that I’ve found particularly notable about it after using it for a month.

The Good

The autofocus performance of the R7 is incredible. I love ultra-sharp focus on the subject of my photos. If it’s a picture of someone and I can’t make out their individual eyelashes then I’ll often toss it. I don’t know if the focus on my Nikon was broken or in need of calibration, or if I am just not as able to hold the camera steady as I used to, but I was finding most of my photos were out of focus. So the R7’s AI-based autofocus was one of its main selling points for me. It’s quick. Ridiculously quick in comparison to the D7000. It recognises people and animals near instantly and it’s really good at tracking the thing I care about as I move. Sometimes it identifies the wrong thing automatically, but far more of my shots are coming out in perfect focus. It’s also super easy to tell it what to focus on with the touch screen.

A blackbird sitting on a fence

The viewfinder is surprisingly good. I dislike shooting from a screen, and I had tried an early mirrorless camera with an electronic viewfinder many years ago where the latency was so bad as to make it unusable. On the R7 it’s really good. Not perfect, but good enough that you forget you’re looking at a tiny screen. But of course the tiny screen means the display is much more functional than that of an optical viewfinder.


One of my reasons for going with the R7 over the R10 was the in-body stabilisation. Partly to help with potential focusing issues but I also like taking longer exposure shots to reveal some of the order behind the chaos of things like waterfalls. I wasn’t quite sure how good the IBIS was going to be, they throw around claims like 7 stops of improvement, but honestly I am pretty amazed by it. The photo below was taken handheld with a 1/10 shutter speed, something that I would have considered basically impossible on my previous camera.

A smooth river running over a small waterfall

The Nikon D7000 had customisable buttons. Two of them. On the R7 you can customise almost every button on the camera. There were always a number of buttons on the D7000 that I’m sure are important to some people but that I basically never used. With the R7 I can set it up how I like. This does have its problems: half of the buttons are labelled, so remapping them could get confusing, but the other half just have odd symbols on them which frankly mean nothing to me anyway!


The Bad

Probably the most annoying feature of the R7 is the eye-cup. My D7000’s was thick and squidgy and comfortable to press against for long periods of time. The R7’s on the other hand is thin and feels much stiffer. It doesn’t take too much use before my eye gets sore using it. There are third party eye-cups which look a lot better but replacing it involves unscrewing the old one which Canon say would void the camera’s warranty. It’s a bit disappointing that the camera is let down by such a cheap part.


I said that the customisable buttons were a good feature of the R7. They do have a minor annoyance though when combined with the custom shooting modes. When you save the settings for a shooting mode it includes the button customisations. Periodically as I’ve been setting up the camera I’ve found I wanted to change a button to do something different and then I have to go through all the custom modes and change the buttons for them too. I guess I can see the benefit of different button setups for different modes but I think for me it would just get confusing. Probably this annoyance will go away once I’m done tinkering with the setup.


Rather than a simple power switch the R7 instead has a rotating mode selector: “off”, “on” or “video”. It makes sense since you can customise the buttons and other features of the camera quite differently in video mode compared to photo mode. But it means that when turning the camera on you have to be careful not to accidentally rotate it too far and go into video mode. And when the camera is on it’s easy to switch it to video mode thinking that you’re turning it off. A pretty minor annoyance and one I will probably get used to as I use the camera more.


This is really more of a “why wouldn’t they do this?” than a complaint. The camera has three custom programmable shooting modes. Every time you change mode the screen displays your new mode: C1, C2 or C3. Why Canon didn’t think to let me give each mode a short label so I could remember what each is for is beyond me.


The Summary

I’m only a month into owning the camera and so far don’t have a lot of bad things to say about it. Maybe that will change, maybe it won’t. What I can say is that the numbers don’t lie. Over the whole of 2022 I kept around 160 photos from my old camera. Over the last month I’ve kept nearly 60 from the R7. It’s clearly got me out shooting and keeping more photos again, so it’s been worth it just from that perspective.

A grainy photo of a row of houses lit by the orange glow of a sunset

Using VS Code for merges in Mercurial

VS Code is now a great visual merge tool; here is how to set it up as the merge tool and visual diff tool for Mercurial

I’ve always struggled to find a graphical merge tool that I can actually understand and up until now I have just been using merge markers along with a handy Mercurial command to open all conflicted files in VS Code, my editor of preference.

Well it turns out that since version 1.69 VS Code has built-in support for acting as a merge tool, and after trying it out I actually found it to be useful! Given that they (and the rest of the world) tend to focus on Git I couldn’t find explicit instructions for setting it up for Mercurial, so here is how you do it. Add the following to your ~/.hgrc:

[extensions]
extdiff =

[ui]
merge = code

[merge-tools]
code.priority = 100
code.premerge = True
code.args = --wait --merge $other $local $base $output

[extdiff]
cmd.vsd = code
opts.vsd = --wait --diff

This does two things. It registers VS Code as the merge tool for conflicts and also adds a hg vsd command to open side-by-side diffs of individual files in VS Code.
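
For example, something like this should open a side-by-side diff of a single modified file in VS Code (like other extdiff commands, hg vsd should also accept -r to pick specific revisions):

~$ hg vsd path/to/file.txt
~$ hg vsd -r .^ path/to/file.txt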

And if you do still need to open any unresolved files in VS Code you can use this config:

[alias]
unresolved = !$HG files -T "{reporoot}/{path}\0" "set:unresolved()" | xargs -0 code --wait

After that running hg unresolved will open any unresolved files in VS Code.

Recording UK Gas and Electricity usage in InfluxDB

I’m not going to do the online recipe thing. Or at least I’m going to give you the thing you probably want before the context. If you want a free tool for accessing your UK smart energy meter readings you should install and register the Bright mobile app. Once you’ve done so my Rust crate can access their API to let you pull data from the command line for various uses including submitting the data into InfluxDB where you can build all kinds of interesting graphs and alerts.

Now, if you want to learn about my bike ride through Provence where I learned about the importance of monitoring my energy usage, read on…

When we still lived in the US, I tried to start recording our electricity and gas usage in InfluxDB, a tool for recording time-based measurements that then allows for graphing and triggering alerts. It turned out to be too difficult for my taste. Despite having “smart” meters that were measuring and sending our usage data to our energy provider there was no sane way for me to retrieve that data. I would have had to automate a web-based login process to grab the data, not impossible but annoying and subject to breakage whenever the website changed.

Now we’re in the UK and after having smart meters installed I tried the exercise again. So I started searching for details about my energy provider. Turns out they provided an API for downloading data! Which they turned off about a year ago. Damn.

But all is not lost. In the US wherever you live determines your provider and you’re stuck with them, no choice whatsoever (capitalism?). The way energy is supplied in the UK is a bit different. You can switch provider basically whenever you like. In fact it is encouraged to keep looking for better prices and switch often. It’s a bit weird and I don’t totally understand it because the electricity and gas still comes down the same lines and through the same meters into the house, but it does mean that I could potentially switch to a provider that offers a better API.

Now am I going to switch to a potentially more expensive provider just to get access to my data? Probably not. But there is something else about this setup that is useful to me. As I said, the energy still comes through the same meters. If you switch provider you don’t switch meters. Regardless of the provider I use they can still see the usage data from my meter (or at least this is true with modern smart meters). How does that work? It appears that all meters report their data to a central place and then providers can register to pull data from there. But it isn’t just your provider. Other companies can, with permission, gain access to your data. And then, if they choose to, pass it on to you. Knowing that, it didn’t take long to find a company that provides the Bright mobile app for displaying your energy usage (regardless of your energy provider!) and, crucially, once you’ve gone through the registration process for the mobile app, a sane REST API for accessing your usage data.

The API is pretty straightforward so I’ve built a Rust crate and CLI for downloading and displaying data from it. Most importantly the CLI includes an “influx” command that downloads usage data and outputs it in InfluxDB line protocol complete with various tags for the data. I used that to download all the data since my smart meters were installed (five months ago) and then also added it to the telegraf configuration on one of my servers so every hour it attempts to pull the most recent usage data. And now I can generate pretty graphs like the one at the top of this post.
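
For reference, the telegraf side is just an exec input that consumes line protocol; a sketch looks something like the below, though the binary name and arguments here are hypothetical stand-ins for whatever the CLI actually provides:

[[inputs.exec]]
  ## Hypothetical command, substitute the real CLI binary and its flags
  commands = ["/usr/local/bin/energy-cli influx"]
  timeout = "1m"
  interval = "1h"
  data_format = "influx"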

Announcing LocalNS

An auto-configuring nameserver for the services you run on your local network.

I mess around with a bunch of different projects in my spare time but it’s been a long time since I’ve thought one was worth tidying up into an actual release. Maybe it will be useful to you?

The problem I faced was that I had a whole bunch of services that I installed on my network, some in docker, some as standalone servers, some behind a Traefik proxy. In the docker case I was using macvlan networking so each service had its own IP address accessible to the entire network. Once I learned how to do all that it became trivial to spin up a new local service, say InfluxDB or Grafana with a couple of lines in a docker-compose file.

The challenge was remembering the IP address for each service. That is of course the job of a DNS server, and while I played with various standard DNS server options it always involved more manual work than I wanted, and I wanted features they weren’t really designed for, like being able to override publicly published DNS names with internal IP addresses when on the local network.

Traefik’s method of auto-discovering services really inspired me. It will connect to the docker daemon, listen for containers starting and stopping and, based on labels on the containers, route requests to them. So I built something similar that I’m calling LocalNS. It connects to docker and, for containers with a specific label, detects the IP address and answers DNS queries for that name.

So I can trivially bring up a container with something like:

~$ docker run -d \
  --network internal \
  --label 'localns.hostname=grafana.mydomain' \
  grafana/grafana

As soon as the container starts LocalNS detects it and starts responding to queries for the name grafana.mydomain with whatever IP address docker assigned it on the internal network.
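
You can check what it is serving with an ordinary DNS lookup pointed at wherever LocalNS is listening (the address here is just an example):

~$ dig +short grafana.mydomain @192.168.1.53
# prints whatever IP docker assigned the container on the internal network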

Once I did that I realised I wanted the same for the services that Traefik was proxying. And Traefik has an API, so LocalNS will also request the routing rules from Traefik and, for the simple cases, work out what hostnames Traefik will respond to and expose those names over DNS.
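
A “simple case” is basically a plain Host rule. For example, a container routed through Traefik with the standard v2 label syntax (the image and router names here are made up) also becomes resolvable:

~$ docker run -d \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.wiki.rule=Host(`wiki.mydomain`)' \
  some/wiki-image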

There are a couple of other sources supported too and it shouldn’t be difficult to add more in the future. When asked for DNS names it doesn’t know about, LocalNS will forward the request to another DNS server.

It takes a small amount of configuration to set up LocalNS but once done it mostly configures itself as you change the services that are running elsewhere. I wouldn’t say the code is perfect right now but I’ve been using it as the nameserver for everything on my home network for two months now with no obvious issues.

You can check out the code on GitHub, or there is documentation for how to actually use it.

Creating HTML content with a fixed aspect ratio without the padding trick

It seems to be a common problem: you want to display some content on the web with a certain aspect ratio but you don’t know the size you will be displaying it at. How do you do this? CSS doesn’t really have the tools to do the job well currently (there are proposals). In my case I want to display a video and associated controls as large as possible inside a space that I don’t know the size of. The size of the video also varies depending on the one being displayed.

Padding height

The answer to this, according to almost all the searching I’ve done, is the padding-top/bottom trick. For reasons that I don’t understand, when using relative lengths (percentages) with the CSS padding-top and padding-bottom properties the values are calculated based on the width of the element. So padding-top: 100% gives you padding equal to the width of the element. Weird. So you can fairly easily create a box with a height calculated from its width and from there display content at whatever aspect ratio you choose. But there’s an inherent problem here: you need to know the width of the box in the first place, or at least be able to constrain it based on something. In my case the aspect ratio of the video and the container are both unknown. In some cases I need to constrain the width and calculate the height, but in others I need to constrain the height and calculate the width, which is where this trick fails.
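
For reference, the classic form of the trick looks something like this (class names are arbitrary); the padding creates a 4:3 box and an absolutely positioned child fills it:

.aspect-box {
  position: relative;
  height: 0;
  /* 3 / 4 = 75%, resolved against the width, so the box ends up 4:3 */
  padding-top: 75%;
}

.aspect-box > .content {
  /* fill the box created by the padding */
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}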

object-fit

There is one straightforward solution. The CSS object-fit property allows you to scale up content to the largest size possible for the space allocated. This is perfect for my needs, except that it only works for replaced content like videos and images. In my case I also need to overlay some controls on top and I won’t know where to position them unless they are inside a box the size of the video.

The solution?

So what I need is something where I can create a box with set sizes and then scale both width and height to the largest that fit entirely in the container. What do we have on the web that can do that … oh yes, SVG. In SVG you can define the viewport for your content and any shapes you like inside with SVG coordinates and then scale the entire SVG viewport using CSS properties. I want HTML content to scale here and luckily SVG provides the foreignObject element which lets you define a rectangle in SVG coordinates that contains non-SVG content, such as HTML! So here is what I came up with:

<!DOCTYPE html>

<html>
<head>
<style type="text/css">
html,
body,
svg,
div {
  height: 100%;
  width: 100%;
  margin: 0;
  padding: 0;
}

div {
  background: red;
}
</style>
</head>
<body>
  <svg viewBox="0 0 4 3">
    <foreignObject x="0" y="0" width="100%" height="100%">
      <div></div>
    </foreignObject>
  </svg>
</body>
</html>

This is pretty straightforward. It creates an SVG document with a viewport with a 4:3 aspect ratio, a foreignObject container that fills the viewport and then a div that fills that. What you end up with is a div with a 4:3 aspect ratio. While this shows it working against the full page, it seems to work anywhere with constraints on either height, width or both, such as in a flex or grid layout. Obviously changing the viewBox allows you to get any aspect ratio you like; just setting it to the size of the video gives me exactly what I want.

You can see it working over on codepen.

A simple command to open all files with merge conflicts

When I get merge conflicts in a rebase I find it irritating to open up the problem files in my editor; I couldn’t find anything better than copying and pasting the file path or locating it in the source tree. So I wrote a simple hg command to open all the unresolved files in my editor. Maybe this is useful to you too?

[alias]
unresolved = !$HG resolve -l "set:unresolved()" -T "{reporoot}/{path}\0" | xargs -0 $EDITOR
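
With that in your ~/.hgrc, a conflicted rebase looks something like this (the destination here is just an example):

~$ hg rebase -d default
# ...stops with unresolved conflicts...
~$ hg unresolved
# every conflicted file opens in whatever $EDITOR points at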

Please watch your character encodings

I started writing this as a newsgroup post for one of Mozilla’s mailing lists, but it turned out to be too long and since this part was mainly aimed at folks who either didn’t know about or wanted a quick refresher on character encodings I decided to blog it instead. Please let me know if there are errors in here, I am by no means an expert on this stuff either and I do get caught out sometimes!

Text is tricky. Unicode supports the notion of 1,114,112 distinct characters, more than two bytes of memory can hold. So to store a character we have to use a way of encoding its value into bytes in memory. A straightforward encoding would just use three bytes per character. But (roughly) the larger the character value the less often it is used, and memory is precious, so often variable length encodings are used. These will use fewer bytes in memory for characters earlier in the range at the cost of using a little more memory for the rarer characters. Common encodings include UTF-8 (one byte for ASCII characters, up to four bytes for other characters) and UTF-16 (two bytes for most characters, four bytes for less used ones).

What does this mean?

It may not be possible to know the number of characters in a string purely by looking at the number of bytes of memory used.

When a string is encoded with a variable length encoding the number of bytes used by a character will vary. If the string is held in a byte buffer, just dividing its length by some number will not always return the number of characters in the string. Confusingly, many string implementations expose a length property that often only tells you the number of code units, not the number of characters in the string. I bet most JavaScript developers don’t know that JavaScript suffers from this:

let test = "\u{1F42E}"; // This is the Unicode cow 🐮 (https://emojipedia.org/cow-face/)
test.length; // This returns 2!
test.charAt(0); // This returns "\ud83d"
test.charAt(1); // This returns "\udc2e"
test.substring(0, 1); // This returns "\ud83d"

Fun!

More modern versions of JavaScript do give better options, though they are probably slower than the length property (because they must decode the characters to count them):

Array.from(test).length; // This returns 1
test.codePointAt(0).toString(16); // This returns "1f42e"

When you encode a character into memory and pass it to some other code, that code needs to know the encoding so it can decode it correctly. Using the wrong encoder/decoder will lead to incorrect data.

Using the wrong decoder to convert a buffer of memory into characters will often fail. Take the character “ñ”. In UTF-8 this is encoded as C3 B1. Decoding that as (big-endian) UTF-16 will result in “쎱”. In UTF-16 however “ñ” is encoded as 00 F1. Trying to decode that as UTF-8 will fail as that is an invalid UTF-8 sequence.
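
You can reproduce this in a browser console (or Node) with the standard TextEncoder and TextDecoder APIs:

let bytes = new TextEncoder().encode("ñ");  // Uint8Array [0xc3, 0xb1], the UTF-8 encoding
new TextDecoder("utf-16be").decode(bytes);  // "쎱"
new TextDecoder("utf-8", { fatal: true }).decode(new Uint8Array([0x00, 0xf1])); // throws a TypeError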

Many languages thankfully use string types that have fixed encodings; in Rust for example the str primitive is UTF-8 encoded. In these languages, as long as you stick to the normal string types, everything should just work. It isn’t uncommon though to do manipulations based on the byte representation of the characters, %-encoding a string for a URL for example, so knowing the character encoding is still important.
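
JavaScript’s encodeURIComponent is one example of this: whatever the engine’s internal representation, it always percent-encodes the UTF-8 bytes of the string:

encodeURIComponent("ñ"); // "%C3%B1", the UTF-8 bytes C3 B1, percent-encoded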

Some languages though have string types where the encoding may not be clear. In Gecko C++ code for example a very common string type in use is the nsCString. It is really just a set of bytes and has no defined encoding and no way of specifying one at the code level. The only way to know for sure what the string is encoded as is to track back to where it was created. If you’re unlucky it gets created in multiple places using different encodings!

Funny story. This blog post contains a couple of the larger Unicode characters. While working on the post I kept coming back to find that the character had been lost somewhere along the way and replaced with a “?”. Seems likely that there is a bug in WordPress that isn’t correctly handling character encodings. I’m not sure yet whether those characters will survive publishing this post!

These problems disproportionately affect non-English speakers.

Pretty much all of the characters that English speakers use (mostly the Latin alphabet) live in the ASCII character set which covers just 128 characters (some of these are control characters). The ASCII characters are very popular and though I can’t find references right now it is likely that the majority of strings used in digital communication are made up of only ASCII characters, particularly when you consider strings that humans don’t generally see. HTTP request and response headers generally only use ASCII characters for example.

Because of this popularity, when the Unicode character set was first defined, it mapped the 128 ASCII characters to the first 128 Unicode characters. Also UTF-8 will encode those 128 characters as a single byte; any other characters get encoded as two bytes or more.

The upshot is that if you only ever work with ASCII characters, encoding or decoding as UTF-8 or ASCII yields identical results. Each character will only ever take up one byte in memory so the length of a string will just be the number of bytes used. An English-speaking developer, and indeed many other developers, may only ever develop and test with ASCII characters and so potentially become blind to the problems above and not notice that they aren’t handling non-ASCII characters correctly.

At Mozilla where we try hard to make Firefox work in all locales we still routinely come across bugs where non-ASCII characters haven’t been handled correctly. Quite often issues stem from a user having non-ASCII characters in their username or filesystem causing breakage if we end up decoding the path incorrectly.

This issue may start getting rarer. With the rise in emoji popularity developers are starting to see and test with more and more characters that encode as more than one byte. Even in UTF-16 many emoji encode to four bytes.

Summary

If you don’t care about non-ASCII characters then you can ignore all this. But if you care about supporting the 80% of the world that use non-ASCII characters then take care when you are doing something with strings. Make sure you are checking string lengths correctly when needed. If you are working with data structures that don’t have an explicit character encoding then make sure you know what encoding your data is in before doing anything with it other than passing it around.

Bridging an internal LAN to a server’s Docker containers over a VPN

I recently decided that the basic web hosting I was using wasn’t quite as configurable or powerful as I would like, so I have started paying for a VPS and am slowly moving all my sites over to it. One of the things I decided was that I wanted the majority of services it ran to be running under Docker. Docker has its pros and cons but the thing I like about it is that I can define what services run, how they run and where they store all their data in a single place, separate from the rest of the server. So now I have a /srv/docker directory which contains everything I need to back up to ensure I can reinstall all the services easily, mostly regardless of the rest of the server.

As I was adding services I quickly realised I had a problem to solve. Some of the services were obviously external facing, nginx for example. But a lot should not be exposed to the public internet but needed to still be accessible, web management interfaces etc. So I wanted to figure out how to easily access them remotely.

I considered just setting up port forwarding or a SOCKS proxy over ssh. But this would mean having to connect over ssh whenever needed, and either defining all the ports and docker IPs (which I would then have to make static) in the ssh config or switching proxies in my browser whenever I needed to access a service, and it would also only really support web protocols.

Exposing them publicly anyway but requiring passwords was another option; I wasn’t a big fan of this either though. It would require configuring an nginx reverse proxy or something every time I added a new service and I thought I could come up with something better.

At first I figured a VPN was going to be overkill, but eventually I decided that once set up it would give me the best experience. I also realised I could then set up a persistent VPN from my home network to the VPS so when at home, or when connected to my home network over VPN (already set up) I would have access to the containers without needing to do anything else.

Alright, so I have a home router that handles two networks, the LAN and its own VPN clients. Then I have a VPS with a few docker networks running on it. I want them all to be able to access each other and as a bonus I want to be able to just use names to connect to the docker containers, I don’t want to have to remember static IP addresses. This is essentially just using a VPN to bridge the networks, which is covered in many other places, except I had to visit so many places to put all the pieces together that I thought I’d explain it in my own words, if only so I have a single place to read when I need to do this again.

In my case the networks behind my router are 10.10.* for the local LAN and 10.11.* for its VPN clients. On the VPS I configured my docker networks to be under 10.12.*.

0. Configure IP forwarding.

The zeroth step is to make sure that IP forwarding is enabled and not firewalled any more than it needs to be on both router and VPS. How you do that will vary and it’s likely that the router will already have it enabled. At the least you need to use sysctl to set net.ipv4.ip_forward=1 and probably tinker with your firewall rules.
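
On a typical Linux VPS that boils down to something like this (the persistent config path can vary by distribution):

~$ sudo sysctl -w net.ipv4.ip_forward=1
~$ echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf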

1. Set up a basic VPN connection.

First you need to set up a simple VPN connection between the router and the VPS. I ended up making the VPS the server since I can then connect directly to it from another machine either for testing or if my home network is down. I don’t think it really matters which is the “server” side of the VPN; either should work, you’ll just have to invert some of the description here if you choose the opposite.

There are many many tutorials on doing this so I’m not going to talk about it much. Just one thing to say is that you must be using certificate authentication (most tutorials cover setting this up), so the VPS can identify the router by its common name. Don’t add any “route” configuration yet. You could use redirect-gateway in the router config to make some of this easier, but that would then mean that all your internet traffic (from everything on the home LAN) goes through the VPN which I didn’t want. I set the VPN addresses to be in 10.12.10.* (this subnet is not used by any of the docker networks).

Once you’re done here the router and the VPS should be able to ping their IP addresses on the VPN tunnel. The VPS IP is 10.12.10.1, the router’s gets assigned on connection. They won’t be able to reach beyond that yet though.

2. Make the docker containers visible to the router.

Right now the router isn’t able to send packets to the docker containers because it doesn’t know how to get them there. It knows that anything for 10.12.10.* goes through the tunnel, but has no idea that other subnets are beyond that. This is pretty trivial to fix. Add this to the VPS’s VPN configuration:

push "route 10.12.0.0 255.255.0.0"

When the router connects to the VPS the VPN server will tell it that this route can be accessed through this connection. You should now be able to ping anything in that network range from the router. But neither the VPS nor the docker containers will be able to reach the internal LANs. In fact if you try to ping a docker container’s IP from the local LAN the ping packet should reach it, but the container won’t know how to return it!

3. Make the local LAN visible to the VPS.

Took me a while to figure this out. Not quite sure why, but you can’t just add something similar to a VPN’s client configuration. Instead the server side has to know in advance what networks a client is going to give access to. So again you’re going to be modifying the VPS’s VPN configuration. First the simple part. Add this to the configuration file:

route 10.10.0.0 255.255.0.0
route 10.11.0.0 255.255.0.0

This makes OpenVPN modify the VPS’s routing table, telling it that it can direct all traffic to those networks to the VPN interface. This isn’t enough though. The VPN service will receive that traffic but not know where to send it on to. There could be many clients connected, so which one has those networks? You have to add some client-specific configuration. Create a directory somewhere and add this to the configuration file:

client-config-dir /absolute/path/to/directory

Do NOT be tempted to use a relative path here. It took me more time than I’d like to admit to figure out that when running as a daemon the OpenVPN service won’t be able to find it if it is a relative path. Now, create a file in the directory; the filename must be exactly the common name of the router’s VPN certificate. Inside it put this:

iroute 10.10.0.0 255.255.0.0
iroute 10.11.0.0 255.255.0.0

This tells the VPN server that this is the client that can handle traffic to those networks. So now everything should be able to ping everything else by IP address. That would be enough if I didn’t also want to be able to use hostnames instead of IP addresses.

4. Setting up DNS lookups.

Getting this bit to work depends on what DNS server the router is running. In my case (and many cases) this was dnsmasq, which makes this fairly straightforward. The first step is setting up a DNS server that will return results for queries for the running docker containers. I found the useful dns-proxy-server. It runs as the default DNS server on the VPS; for lookups it looks for docker containers with a matching hostname and if none match it forwards the request on to an upstream DNS server. The VPS can now find a docker container’s IP address by name.

For the router (and so anything on the local LAN) to be able to look them up it needs to be able to query the DNS server on the VPS. This meant giving the DNS container a static IP address (the only one this entire setup needs!) and making all the docker hostnames share a domain suffix. Then add this line to the router’s dnsmasq.conf:

server=/<domain>/<dns ip>

This tells dnsmasq that anytime it receives a query for *.domain it passes on the request to the VPS’s DNS container.
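
So with a docker domain suffix of, say, docker.mydomain and the DNS container pinned at 10.12.0.53 (both just examples), the line would look like:

server=/docker.mydomain/10.12.0.53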

5. Done!

Everything should be set up now. Enjoy your direct access to your docker containers. Sorry this got long but hopefully it will be useful to others in the future.

Taming Phabricator

So Mozilla is going all-in on Phabricator and Differential as a code review tool. I have mixed feelings on this, not least because its support for patch series is more manual than I’d like. But since this is the choice Mozilla has made I might as well start to get used to it. One of the first things you see when you log into Phabricator is a default view full of information.

A screenshot of Phabricator's default view

It’s a little overwhelming for my tastes. The Recent Activity section in particular is more than I need; it seems to list anything anyone has done with Phabricator recently. Sorry Ted, but I don’t care about that review comment you posted. Likewise the Active Reviews section seems very full when it is barely listing any reviews.

But here’s the good news. Phabricator lets you create your own dashboards to use as your default view. It’s a bit tricky to figure out so here is a quick crash course.

Click on Dashboards on the left menu. Click on Create Dashboard in the top right, make your choices then hit Continue. I recommend starting with an empty Dashboard so you can just add what you want to it. Everything on the next screen can be modified later but you probably want to make your dashboard only visible to you. Once created click “Install Dashboard” at the top right and it will be added to the menu on the left and be the default screen when you load Phabricator.

Now you have to add searches to your dashboard. Go to Differential’s advanced search. Fill out the form to search for what you want. A quick example. Set “Reviewers” to “Current Viewer”, “Statuses” to “Needs Review”, then click Search. You should see any revisions waiting on you to review them. Tinker with the search settings and search all you like. Once you’re happy click “Use Results” and “Add to Dashboard”. Give your search a name and select your dashboard. Now your dashboard will display your search whenever loaded. Add as many searches as you like!

Here is my very simple dashboard that lists anything I have to review, revisions I am currently working on and an archive of closed work:

A Phabricator dashboard

Like it? I made it public and you can see it and install it to use yourself if you like!