Photo library work

It’s Alex and Fiona’s autumn half-term break (herfstvakantie) now, and I am off work as well. I’ve spent some of the time consolidating photos from the various places they’ve ended up into a single archive location. This is not a trivial task.

We’ve been taking digital photos since 2000, and we have owned a fair few digital cameras and phones with cameras in that time. After consolidation I’m looking at a folder structure containing 46,947 items, taking up 114GB of space. This is mostly photos, with a few videos as well.

Here are the main sources I’ve been consolidating the photos from:

  • Folders on my hard disk. In the era before phone cameras, I copied photos off of memory cards and into a single date-based folder structure. But inevitably there are stragglers all over the place, in directories like /pictures, /photos, and /pictures/photos.
  • Dropbox, from the time when I was using its camera upload functionality. Over time, this evolved from “attach phone or memory card and let Dropbox do its thing” to “upload directly from my phone using the Dropbox app”.
  • Google Photos, from the time I was using a variety of Android phones (mainly a Nexus 4), and Google would upload pictures directly to its cloud storage.
  • Microsoft OneDrive, from recent usage. It uploads photos from my Lumia 930 directly to cloud storage. I started using this because the Dropbox app on Windows Phone (which should do exactly the same thing) was crashy and unreliable.
  • iPhoto. For a while between about 2010 and 2014, I imported photos into iPhoto for easier viewing and manipulation of things like orientation and date.
  • Apple Photos. In 2015, Apple’s new Photos app took over from iPhoto, and “imported” everything that was already in iPhoto. It also became the repository for my “Photo Stream”, which I still haven’t figured out completely.

This setup is problematic in terms of consistency: where do I look for a photo? First of all, it’s nice to have all of your photos in a single place. But also, without consistency, it’s hard to do good backups. Dropbox, Google Photos, and OneDrive may be cloud storage solutions where you don’t have to take care of backups yourself…but that doesn’t mean I trust them. Cloud services get shut down or die all the time. I prefer the belt and braces approach, where I maintain local backups (copies on multiple hard drives) combined with an off-site cloud backup (currently Crashplan).

Also, just browsing through folders of photos using the Finder on macOS or Explorer on Windows is…ok, but primitive. It’s certainly not fun. Dropbox’s web interface for browsing photos online is…functional. OneDrive is prettier, but unusably slow. iPhoto was pretty, but mainly local. Apple Photos is less pretty, but more functional; mostly local, but with cloud-like sharing capabilities. Google Photos is simple, good for sharing, but less functional.

Then there’s the whole issue of duplication. Something like Apple Photos is designed to import photos into its own library format. With Google Photos you upload your photos to Google, and it handles them remotely. In both cases, you end up with your source photos, and a copy of them in a library. You can do things to the copy, like add descriptions, or group them into albums; the source files are unaffected. On the one hand: the library allows you to do useful things! On the other hand, your useful things are only available in the library. If you decide you want to use a different software tool a few years down the line, what happens to all the work you put into editing the copies? Do you abandon that, or is there an export tool that allows you to take your edits with you? Will the new library respect the metadata that came from the old one? And if you export your library, what do you do with the originals? Are they still relevant, or has the copy now become the new primary source?

Segal’s law says, “A person with a watch knows what time it is. A person with two watches is never sure.” Duplication is why it has taken me several days to consolidate my photos.

  • There were times in the past when I was copying photos off of my camera’s memory card, as well as letting Dropbox automatically upload the files.
  • When I was using the Nexus 4, I had Dropbox camera upload running as well as Google’s own Photos import.
  • For a while I was using Dropbox camera upload on my Lumia 930, and copying the files onto my hard disk manually, because the Dropbox app was so unreliable.
  • On my iOS devices (iPhone 4 in the past, iPad mini now), photos generally go into Apple’s “Photo Stream”. From there, they usually end up on my laptop, in iPhoto or Apple Photos. (Usually, but not always.) And I usually have Dropbox doing its thing on iOS devices as well.
  • When Apple Photos was released, I tried it, and imported a copy of my iPhoto library.

Ugh. All these import services use different naming standards, sometimes keeping the camera’s original file name, sometimes applying their own. When they use a date/time file name (e.g. 2016-10-20 11.21.59.jpg), the dates usually match, but not always. (OneDrive and Dropbox caused the most disagreements.)

But not only that: with the exception of pulling images off of a memory card or phone manually, all of these magical import/upload processes are less than 100% reliable. (They’re close, but definitely not 100%.) Sometimes they just fail to grab one or more pictures out of a batch. Sometimes the Apple Photo stream doesn’t copy pictures to the Photos app. The iPhoto to Apple Photos library import failed to copy a bunch of videos. This all adds up to many hours of work, manually checking folders against each other, deleting duplicates, and copying missing items. (Fortunately this was only for pictures from mid-2011 and onwards. I wasn’t using overlapping automated tools before then.)
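Most of that checking boils down to spotting files with identical content but different names across two folder trees. If I ever have to do it again, a small script along these lines would handle the bulk of it. This is only a sketch: the folder paths are placeholders, and content hashing won’t catch copies that a service has recompressed on upload.

```python
#!/usr/bin/env python3
"""Rough sketch: find photos with identical content across two folder trees.

The folder paths below are placeholders for illustration only.
"""
import hashlib
from collections import defaultdict
from pathlib import Path


def file_digest(path, chunk_size=1 << 20):
    """Hash file contents, so renamed copies of the same photo still match."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def index_tree(root):
    """Map content hash -> list of files in the tree with that content."""
    index = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            index[file_digest(path)].append(path)
    return index


if __name__ == "__main__":
    uploads = index_tree("Dropbox/Camera Uploads")  # placeholder path
    archive = index_tree("PhotoArchive")            # placeholder path

    # Already in the archive under some name: candidates for deletion.
    duplicates = [p for h, paths in uploads.items() if h in archive for p in paths]
    # Grabbed by the upload service but missing from the archive: copy these over.
    missing = [p for h, paths in uploads.items() if h not in archive for p in paths]

    print(f"{len(duplicates)} duplicates, {len(missing)} missing from the archive")
```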

As an aside, my camera usage has shifted substantially over the last five years. Although we still have our old Konica Minolta DiMAGE A200, I hardly use it at all now. On our trip to California in the summer, I didn’t even bring it with us. But reviewing thousands of photos from mixed sources in a short span of time has made it abundantly clear that the ten-year-old consumer-level A200 (which isn’t even a DSLR) still takes much better photos than the best of the smartphones we use right now.

There are features that the A200 lacks. Portability is the big one. I always have my phone with me; bringing the big camera makes any trip feel like an undertaking. It still uses old and slow CF memory cards; the auto-focus is relatively slow; the camera doesn’t have GPS for adding location to photos; it doesn’t even have an orientation sensor, which means lots of sideways photos to review. But the image quality makes me think that I shouldn’t be pining for a new phone, but looking for a new dedicated camera instead. The current crop of “superzoom” cameras looks amazing.

Having just trawled through thousands of photos, I’m keen not to repeat that experience. I want a process that will make it easier for me to gather, review, and share my photos in the future. The process has to separate four key stages: ingest, consolidate, tag, and share.

The first part, ingest, is all about getting photos off of the camera or device. On a smartphone/tablet that could be Dropbox, OneDrive, or Apple’s Photo Stream — I don’t really care, so long as it grabs all photos and videos at full resolution. For a standalone camera, I can manually grab the images from its memory card. (Or, if I get a new one, hook it up to wifi.)

Consolidate means taking the images from where the ingestion step put them, putting them into my central storage area, and organizing them into my preferred folder structure. The folder structure looks like YYYY/YYYYMMDD XXXXX/photo.jpg, where XXXXX is a short text summary of the event in that folder. This is a “date + event” model (a sketch of a script for this step follows the list below):

  • If an event covers more than one date, use multiple folders, e.g.
    • 20160719 Edinburgh trip
    • 20160720 Edinburgh trip
  • If a date covers more than one event, use multiple folders, e.g.
    • 20161019 Twiske molen damage
    • 20161019 Evening
  • In case of no obvious event description, use the naked date, e.g.
    • 20150406
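Here is that sketch: a minimal version of the consolidation step, assuming the third-party piexif library for reading the Exif date. The source and archive paths are placeholders, and the event description still gets added to the folder name by hand afterwards.

```python
#!/usr/bin/env python3
"""Sketch: file photos into a YYYY/YYYYMMDD folder structure by Exif date.

Assumes the third-party piexif library; source and archive paths are
placeholders. The event description is appended to the folder name by hand.
"""
import shutil
from datetime import datetime
from pathlib import Path

import piexif


def taken_on(path):
    """Return the DateTimeOriginal of a JPEG, or None if it has no Exif date."""
    try:
        exif = piexif.load(str(path))
        raw = exif["Exif"].get(piexif.ExifIFD.DateTimeOriginal)
        if raw:
            return datetime.strptime(raw.decode("ascii"), "%Y:%m:%d %H:%M:%S")
    except Exception:
        pass
    return None


def consolidate(source, archive):
    # Note: the glob is case-sensitive; .JPG files would need a second pass.
    for photo in sorted(Path(source).rglob("*.jpg")):
        when = taken_on(photo)
        if when is None:
            print(f"no Exif date, skipping: {photo}")
            continue
        # YYYY/YYYYMMDD -- rename the day folder to "YYYYMMDD Event" later.
        folder = Path(archive) / f"{when:%Y}" / f"{when:%Y%m%d}"
        folder.mkdir(parents=True, exist_ok=True)
        target = folder / photo.name
        if not target.exists():
            shutil.copy2(photo, target)


if __name__ == "__main__":
    consolidate("Dropbox/Camera Uploads", "PhotoArchive")  # placeholder paths
```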

The central storage area is an external hard drive that I mirror to a second drive nightly using SuperDuper. It also gets backed up to an external location as part of my Crashplan subscription.

The third stage is to tag the photos that have been organized. Photos coming off of a modern smartphone already contain Exif information showing date and time, camera orientation, and GPS coordinates. These are the main ones I care about. Sometimes the phone gets it wrong, though, for example if location services were off, or failed to capture my position correctly; or if I forgot to change the time zone after a flight. Photos from the A200 only have date and time information, no orientation or location data. In any case, it pays to do a pass over the photos to check and fix missing or inaccurate Exif data. (I can see myself writing a few scripts to help out with this; a first sketch follows.)
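The checking half, at least, is easy to script. This is a sketch of the kind of report I have in mind, again assuming piexif and a placeholder archive path; actually fixing the values is a more careful job, probably better done with a dedicated tool like exiftool.

```python
#!/usr/bin/env python3
"""Sketch: report photos with missing Exif date, orientation, or GPS data.

Assumes the third-party piexif library; the folder path is a placeholder.
"""
from pathlib import Path

import piexif


def check_folder(folder):
    for photo in sorted(Path(folder).rglob("*.jpg")):
        exif = piexif.load(str(photo))
        problems = []
        if piexif.ExifIFD.DateTimeOriginal not in exif["Exif"]:
            problems.append("no date/time")
        if piexif.ImageIFD.Orientation not in exif["0th"]:
            problems.append("no orientation")
        if piexif.GPSIFD.GPSLatitude not in exif["GPS"]:
            problems.append("no GPS position")
        if problems:
            print(f"{photo}: {', '.join(problems)}")


if __name__ == "__main__":
    check_folder("PhotoArchive/2016")  # placeholder path
```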

The final step is to share the photos: with friends and family, and also with my future self. I think I’ll use Google Photos for this, because it makes sharing with others easiest. Apple Photos has a nicer and more responsive interface for viewing photos locally (sharing with my future self), but if I want to share the whole library with Abi, and allow her to make new albums with its photos, I’d have to figure out Apple’s home sharing options and shared photo streams, and I just don’t trust Apple to have my very particular use case in mind. With Google Photos I can just set up a new Google account that both Abi and I have the password to. Google makes it easy to switch between multiple accounts these days.

It’s important that the tag and share steps are separate. I want as much metadata as possible to reside in the source image, not in the copy that the sharing service has. That makes it much easier to switch services in the future. I may trust Google more now than I did in the past (the Lindy effect comes into play here), but doing things like editing GPS coordinates only on the copy seems like a bad idea no matter what company runs the service.

(It looks like neither Google Photos nor Apple Photos uses the “Image Description” Exif field for display purposes, which is a pity. Seeing as I have the event description in the folder name anyway, I could update the photos with that text as well, and make the photos even more self-describing.)
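That idea is also scriptable. Another sketch, with the same assumptions as before (piexif, placeholder archive path): take the event description from each folder name and write it into the Image Description tag of the photos inside.

```python
#!/usr/bin/env python3
"""Sketch: copy the event description from the folder name into the photos' Exif.

Folders are named "YYYYMMDD Event description"; the date prefix is stripped and
the rest written to the ImageDescription tag. Assumes the piexif library; the
archive path is a placeholder.
"""
from pathlib import Path

import piexif


def describe(folder):
    event = folder.name[9:].strip()  # drop the "YYYYMMDD " prefix
    if not event:
        return  # bare-date folder, nothing to write
    for photo in folder.glob("*.jpg"):
        exif = piexif.load(str(photo))
        exif["0th"][piexif.ImageIFD.ImageDescription] = event.encode("ascii", "replace")
        piexif.insert(piexif.dump(exif), str(photo))


if __name__ == "__main__":
    for day_folder in sorted(Path("PhotoArchive/2016").iterdir()):  # placeholder path
        if day_folder.is_dir():
            describe(day_folder)
```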

Separating all of these steps might seem like a huge amount of bother. Google Photos and Apple Photos exist precisely to combine all of these steps, to make it easier and faster to get from taking the photo to sharing it. And that’s great, up to a point. But with 50,000 images, many of which are immeasurably precious to me, and a healthy mistrust of both hard disks and cloud services, I’m well beyond that point. I need more control.

Alex in 2002. One of my brushes with catastrophic data loss.

With this new process, I have only got as far as the consolidation step. I still need to go through the consolidated images and fix the tags. But I can do that slowly, over time, and add them to the sharing service whenever I’m done with a folder. I felt that the consolidation step needed much more concentrated effort to get me over the initial hump, though. Tagging and sharing photos feels like fun, something I can do for an hour in an evening; comparing folders to find the missing images is work that I wanted to get done in as short a time as possible. Having a few days off was useful for just blasting through it.

Finally: watching myself gain weight in fast-forward over the last three years was not fun. Maybe this will be the spur I need to get back on the Flickr Diet.

@Media Ajax

I was at the @Media Ajax conference last week. In hindsight, “@Media JavaScript” would have been a better title, though. It is less than three years since Jesse James Garrett coined the term “Ajax”, but we are already at the point where Ajax development is just the way we do things now, rather than something that needs to be explained, discussed, and evangelized.

During the wrap-up panel at the end of the second day, one of the questions was directed to the audience: who would have attended the conference if it had in fact been called “@Media JavaScript”? Most people put up their hand. I would not be surprised if Vivabit run a sequel to this conference next year; but the main reason for them to keep the term “Ajax” in the title would surely be to make it easier for developers to convince buzzword-hungry managers to let them attend.

Monday 19 November

Keynote presentation: “The State Of Ajax” by Dion Almaer and Ben Galbraith

This presentation set the scene for the rest of the conference, briefly covering subjects like JavaScript 2 and the heated politics surrounding it, the emergence of offline support for web apps (Google Gears) and runtimes with desktop integration for web apps (AIR, Silverlight), and the evolution and convergence of JavaScript frameworks. Their demonstration of Google Gears’ WorkerPools was an eye-opener for me; I hadn’t realized that Gears was about so much more than offline storage. They closed with a reflection on how Ajax has transformed our expectations of web applications, and how it is enabling a more attractive web.

(Note to self: get more familiar with Tamarin, ScreamingMonkey, Google Gears, AIR, HTML5, Dojo, Caja.)

“But I’m A Bloody Designer!” by Mike Stenhouse

Mike talked about how in modern web development, the traditional barriers between designers and developers are breaking down. Designers need to be aware of the consequences of their choices, and how things like latency and concurrency will influence a feature. Developers need to increase their awareness of interaction design. This led to a discussion of how he feels that behaviour-driven development has made him a better designer (and developer). He mentioned WebDriver for writing and executing BDD test cases, but the demo code he showed looked more like Ruby… I think I missed something there. Good tools and techniques to explore, though.

Update: RSpec?

“Real World Accessibility for Ajax-enhanced Web Apps” by Derek Featherstone

Providing good accessibility for web content is hard enough; once you start building dynamic web apps, you’re practically off the map. Derek took the zoom/move control in Google Maps as an example of bad practice, showing how difficult it is for someone with only a voice interface to use. He walked through some more examples, with useful advice on how to make improvements in each case.

One of the toughest problems for Ajax applications is how to inform screen readers that a part of the screen has been updated. Derek noted Gez Lemon and Steve Faulkner’s technique for using the Virtual Buffer as being one of the best options for tackling this right now. Another cool technique that I hadn’t seen before was updating an input field’s <label> element with error information when the form is validated (so that a screen reader is made aware of the change), but then using CSS positioning to display the error information where a sighted user would expect to see it – possibly on the opposite side of the field from the label itself. Very clever.

I’m also going to have to familiarise myself with the ARIA (Accessible Rich Internet Applications) work coming out of the WAI: ARIA proposes to extend (X)HTML with additional semantics that would allow web applications to tap into the accessibility APIs of the underlying Operating System.

“How To Destroy The Web” by Stuart Langridge

After lunch, Stuart Langridge put on his Master of EVIL hat, and tried to coax us to join him on the Dark Side by teaching us about all the things we can do to make a user’s experience on this hyperweb thingy as shitty and 1998-like as possible. Remember: if your app doesn’t use up all of a user’s bandwidth, they’ll only use it for downloading…well, something else. (“Horse porn” sounds so prejudicial.)

(Stuart’s slides lose a certain something when taken out of context.)

“Planning JavaScript And Ajax For Larger Teams” by Christian Heilmann

Christian works for Yahoo!, and has for a long time been a great evangelist of unobtrusive JavaScript and other modern JS techniques like the module pattern. In this presentation, he talked about working with JavaScript in larger teams. This is interesting, because until recently, there were no such things as “large JavaScript teams”. JS was something you copy-and-pasted into your web site, or got your resident front-end geek to bolt on as an afterthought. JavaScript has matured enormously over the last few years.

Many of Christian’s points are good software development practices in general: comment your code, follow a code standard, work as if you will never see your code again, perform code reviews, use good names, etc. Take five minutes to read through Christian’s presentation slides (they’re very readable and comprehensible, even out of context), and then take another five minutes to think about them. JavaScript is a first-class citizen of web development now: let’s treat it as such.

(Note to self: make more use of the BUILD PROCESS.)

“Ajax At Work: A Case Study” by Peter-Paul Koch

PPK wrapped up the day with a case study of a genealogy/family tree application he is building. He walked through the decision processes behind:

  • building the app as an Ajax app in the first place
  • choosing XML instead of JSON (or HTML or CSV) for its data format on the wire
  • deciding on an optimal loading strategy to ensure a highly responsive user experience

Interestingly, PPK was the only speaker who used the “strict” definition of Ajax (i.e. Asynchronous JavaScript and XML) as the basis for his presentation. I didn’t agree with all of the decisions he described, but it was an interesting view anyway. (And besides, it’s not my app. 🙂) His write-up of the conference, as well as his slides, can be found on his Quirksmode blog.

Tuesday 20 November

“The State Of Ajax” by Brendan Eich

Brendan Eich is the man who invented JavaScript. There are few mainstream languages that have both been adopted so widely, and dismissed out of hand by so many. In the keynote presentation, Dion and Ben characterised Brendan Eich as wanting to use the JavaScript 2 (ECMAScript 4) spec to “just let him fix his baby”. That’s a pretty crude caricature of Brendan’s position, though. He is very keenly aware of all the problems in JavaScript as it stands right now. (And there are some really big problems.) With JS2 he is trying to take the best bits of JS1, and build a language for the next 5-10 years (or more) of the web.

However: JS2 really is a different language. It adds new syntax, and it will not be compatible with existing interpreters. The other side of the “future of JavaScript” debate wants to see incremental improvements to the current implementation(s), so as to maintain compatibility and not “break the web”–because we’re still going to be stuck with IE6 for a long time to come.

I’m not going to run through the technical guts of all the things going into the JS2 spec–there are just too many of them. Take a look at Brendan’s roadmap blog to get pointers to what’s going on.

“Building Interactive Prototypes with jQuery” by John Resig

This presentation did exactly what it said on the tin: an introduction to coding with jQuery. It appears to be compact, simple, expressive, and ideal for a lot of everyday JavaScript work.

“Metaprogramming Javascript” by Dan Webb

Dan showed how to use some of JavaScript’s best features (prototypal inheritance, expando properties, using Functions as Objects, etc.) to produce some surprising results. Because of these techniques, JavaScript really is a language that can bootstrap itself into a better language. Very slick.

(See the slides for the presentation on Dan’s site.)

“Dojo 1.0: Great Experiences for Everyone” by Alex Russell

It appears that no @media conference is complete without a doppelgänger. I hope I’m not the only one who sees the obvious resemblance between Alex Russell and Ryan Reynolds. (Photo of Ryan Reynolds shamelessly lifted from Tharpo on Flickr.)

Hollywood star and sex symbol Ryan Reynolds; Dojo toolkit lead developer Alex Russell

Alex is the lead developer for the Dojo toolkit. He talks really fast on stage! He is full of energy and seemed eager to share his insights with the audience, even though some of those insights paint a rather depressing picture of the state of the web. Personally, I lapped it up. I think it was the best presentation of the conference. Rather than talking just about Dojo, he discussed among other things:

  • the complexity of web development, and why there is a need for JavaScript libraries/frameworks in the first place
  • the burden of bringing new semantics to the web
  • how the lack of progress and competition is putting the whole open web in jeopardy

You can get the slides for the presentation on Alex’s blog, but without his lively and passionate narrative, they lose a lot of their power. Although he also talked about the technical capabilities of Dojo itself (powerful internationalization features, accessibility already built in to all its widgets, all built on top of a tiny core), it’s the strategic positioning of the toolkit that is going to make me download it and try it out.

“JavaScript: The Good Parts” by Douglas Crockford

Douglas Crockford is one of the people most responsible for bringing JavaScript to its current level of maturity. He invented JSON, and wrote the JSLint checker and JSMin minifier. He reckons that JavaScript is the world’s most misunderstood programming language. His presentation covered some of the best bits, which you probably would not discover on a first glance at the language, such as lambda expressions, closures, and dynamic objects.

Douglas stands in the opposite camp to Brendan Eich when it comes to evolving JavaScript. He wants to see the language become more secure (very important, given how glaringly insecure it is right now), but he thinks that the radical changes proposed for JS2 are wrong. One of the best parts of JavaScript is its stability: there have been no new design errors in the language since 1999, because that’s how long JS1 has been frozen. (There have been minor iterations to it since then, but nothing on the scale of the fundamental architectural changes that JS2 will bring.) He is still keen on evolving the language, but in a much more gradual way.

One very interesting thing that Douglas briefly mentioned was ADSafe. This is a subset of JavaScript, designed for safety: a script built with the ADSafe subset can still perform useful work (it still has access to the DOM, and can make network calls), but it is not allowed to use any of the features that make JavaScript inherently unsafe (e.g. access to global variables, use of eval, etc.). ADSafe is a static checker: you run it to verify the code before you allow the code to appear on a page. If it isn’t safe, you don’t let it run. Google’s Caja works in a different way: it takes untrusted code and transforms it into safe code. To understand the use of these tools, consider Google’s iGoogle home page, where you can have widgets from a variety of sources all running on the same page. Without some kind of safety container, these scripts would have access to each other’s code and capabilities — very dangerous.

(The slides Douglas has on his blog are not quite those he used for this presentation, but they’re close enough.)

Wrap-up panel discussion with Brendan Eich, Stuart Langridge, Alex Russell, Douglas Crockford, and moderated by Jeremy Keith

Jeremy tried to keep this light-hearted, but there was clearly some tension between the panellists. I was pretty tired by this point, though, and the thing I remember most is Alex berating Yahoo! (Douglas) for not open-sourcing the YUI framework and coming together with other toolkit developers to present a unified front to browser vendors. Other subjects that came up included Google Gears (again), how badly CSS sucks (I see their point, but I still like it anyway), and capability-based security (see also The Confused Deputy).

(Jeremy has write-ups of day 1 and day 2 on the DOM Scripting blog.)

Overall

It was a very interesting conference. It didn’t feature as much technical content as I had expected: it was more strategic than tactical. I didn’t mind at all, though, that it wasn’t just about “Ajax”. I love JavaScript, and I came away feeling excited by the amount of activity in the field.

The most important things I took on board:

  • Make more use of the build process
  • Investigate Google Gears – there is a lot of interesting stuff going on there, and it will start making its way into browser implementations soon
  • If you’re doing any kind of JavaScript development beyond simple form validation, you really should be using a library…
  • …probably jQuery…
  • …but Dojo looks REALLY interesting

A9 search

Amazon has been pumping up its A9 search engine this week. It’s been getting stacks of press, and I even noticed this evening that an A9 search box has replaced the standard Google search box over at IMDB. (Probably not surprising, since Amazon owns IMDB, too.)

I remember taking a look at A9 when they soft-launched the beta earlier this year, and thinking, “meh.” Looking at it now, though, they’ve really thrown some coals on the fire. Multiple lists of search results on a single page make it a power searcher’s dream. It makes heavy use of personalisation, automatically keeping track of your search history. And if you install the A9 toolbar, it will even provide the “Personal Search” functionality I was so interested in having back in February:

“With the A9 Toolbar all your web browsing history will be stored, allowing you (and only you!) to retrieve it at any time and even search it”

The only problem is, now that it’s here, I feel somewhat reluctant to actually use it.

Amazon are quite up-front about what they’re going to do with people’s A9 browser history: they’re going to correlate it with their Amazon customer history to improve the customer experience they provide. Their privacy policy says pretty unambiguously:

“PLEASE NOTE THAT A9.COM IS A WHOLLY OWNED SUBSIDIARY OF AMAZON.COM, INC. IF YOU HAVE AN ACCOUNT ON AMAZON.COM AND AN AMAZON.COM COOKIE, INFORMATION GATHERED BY A9.COM, AS DESCRIBED IN THIS PRIVACY NOTICE, MAY BE CORRELATED WITH ANY PERSONALLY IDENTIFIABLE INFORMATION THAT AMAZON.COM HAS AND USED BY A9.COM AND AMAZON.COM TO IMPROVE THE SERVICES WE OFFER.”

I was a little bit freaked out when I visited A9 earlier in the week and found the “Hello Mr Martin Sutherland” welcome message at the top of the screen. I didn’t remember ever signing up with A9, and a quick look through my password safe showed that I didn’t have a separate user name for it. But because A9 is an Amazon subsidiary, they share their cookies, and so they can use my Amazon login to identify me.

Cross-domain cookie sharing is often considered a bad thing, because it indicates information leakage. How happy are you if Company X decides to suddenly share your private information with Company Y without notifying you–even if you had previously agreed to their privacy policy? (Though probably without reading it.)

A9 is a wholly owned Amazon subsidiary, so technically they are the same company. Also, I like, trust and respect Amazon as a company. (Heck, I applied to–and still want to–work for them.) Put together, these two statements should generate a nice bit of syllogistic synergy to give me warm fuzzies about A9. But they don’t. There’s something about the relationship, and the sharing of personal information that makes me feel…icky.

It’s hard to quantify exactly where the Ick Factor starts. I’m happy enough to leave Amazon in custody of all my book, music, and DVD browsing and shopping information. I have absolutely no problem with that. In fact, I want them to use it to improve my shopping experience.

But also giving them access to all my search history, and potentially all my browsing history? Um, no.

I think that A9 recognizes this. In addition to their fully personalised site, they also offer generic.a9.com, an anonymous version of the search engine. You still get the multiple search panels, but they don’t tie your searching back to a specific identity.

But is the non-personalised search really that much better than, say, raw Google? I think it’s a “damned if you do, damned if you don’t” case. Without personalisation, A9 is only an evolutionary step in terms of search; but with personalisation they go too far.

So why don’t I have the same icky feeling about Google, which I’ve been using almost exclusively for several years now, and which also has the ability to track its users’ search history? Well, I kind of do when it comes to Orkut, their social networking service. And this is, I think, the crux of the matter: I am happy enough entrusting specific chunks of my on-line life to specific companies. It’s when they start clubbing together to aggregate my personal information that it all becomes icky.

And then we’re back at national identity cards. Sigh.

We’re only a decade or so into the Internet Age, and there’s still a long way to go in terms of clarifying mores and defining a social contract between individuals and collective entities. This is all going to be really big and important over the next ten years, isn’t it?


Standard Life Bank…but only during office hours

Standard Life Bank’s internet banking service is useless. “Banking online gives you the flexibility to manage your account when it suits you,” they say. Well, that would often be on Saturday or Sunday evenings. Or sometimes very late on a weekday evening. Perhaps after 11pm.

Whoa there, boy! Did we say when it suits you? Whoops, we meant to say when it suits us. Sorry for the typo!

Standard Life Bank's internet site is only open during office hours

I mean, really, folks. What’s up with that? Are their servers unionized? Did they threaten walkouts if they had to be switched on 24/7? Do they all pull out their network cables at the end of the day and go home for the night?

Or maybe all of Standard Life Bank’s internet transactions get handled manually behind the scenes? Maybe there’s an army of call centre workers trained to take incoming web page requests, scribble them down on paper, and pass them along to their colleagues who work some abacus magic before approving a money transfer request. Maxwell’s Demon, eat your heart out.

Oh, and their phone banking is even worse. Not open on Sundays at all.

Welcome to the 21st century!

Dutch train station clocks

Dutch train station clocks have always baffled me. They are, I am sure, the strangest timepieces known to mankind.

They’re visually quite distinctive, in an Ur-clock kind of way. They have plain, backlit white faces inside a black box with rounded corners. The hands are thick, and the minute divisions are chunky. They make the time very readable even from large distances.

But their visual appearance isn’t the strange thing about them. It’s their behaviour. Click on the image below for a video clip (about 1.6MB) of one of these clocks in action. I haven’t doctored this clip in any way. Pay close attention to what happens when the second hand passes the minute mark.

A Dutch train station clock

Tick…tick…tick…PAUSE…PAUSE…PAUSE…tick…tick. The second hand pauses for about three seconds at the top of the minute. Why? It means that the second hand makes a full cycle in 57 seconds, rather than 60. Each beat of the second hand is only 0.95 seconds long. By design, these clocks can only ever show the right time once every minute. The rest of the time, they are GUARANTEED to be wrong.

Okay, so they’re only ever fractionally out, but…but… it’s just wrong. It’s the kind of thing that can drive a person just ever so slightly mad…in 0.05 second increments.

This behaviour must be by design; it’s too strange to be an accident, and the clocks would have been fixed long ago if it was. So there has to be a good explanation for it.

Could it be a subconscious nudge to make people hurry up for their trains, by making them think that it’s slightly later than it actually is? Is it a subtle technique to help people relax in a tense rush hour environment, by giving them a three-second breathing space at the top of every minute? Are there any readability benefits from having the second hand pause like this?

There must be a reason. Does anyone know what it is?

(Amusing speculations are also welcome.)

More keyboardy goodness

Some web sites have their drop-down list boxes set up in such a way that as soon as you select a different option from the list, you jump to a new page. The BBC’s weather page is a typical example. If you choose a different UK region from the list, you will be immediately transported to the local weather page for that region.

So what do you do if you don’t have a mouse for clicking on the drop-down and making the list options visible? Just pressing the down key on your keyboard changes the selected item in the list, and jumps you to a different page, so that doesn’t work. The answer is Alt+Down Arrow. This expands the list, and allows you to run up and down with the arrow keys without changing the active item. Press Enter once you’ve made your selection, and Bob’s your uncle.

Many, many more keyboard shortcuts are listed on Microsoft’s pages for keyboard assistance.