“atmedia” tags on Flickr

In their write-ups of @Media 2006, Eric Meyer and Peter-Paul Koch have both spoken out to discourage the use of the “atmedia” tag for photos on Flickr that have no (apparent) relevance to the event itself. Personally, I’m with Russ Weakley in the opposite camp.

The whole point of tags on Flickr (and elsewhere) is that they are not rigid categories decided by the site owners. Everyone uses them differently, and most people pay no heed whatsoever to the global namespace. For example, when I tag pictures of my family, I use “family”, and the first names of whoever appears in the photo, e.g. “family martin fiona”. This is because I’m thinking about the relevance of these tags in the context of my personal space on Flickr. I’m tagging these photos for my benefit, and for my friends and family–not to provide the entire Flickr user base with a convenient way of reaching these photos via a global search.

Tags are descriptive rather than prescriptive metadata. With tags, you can throw as much or as little description as you like at an item. This allows for enormous flexibility, which encourages people to actually attach metadata in the first place. This is a good thing. However, the metadata is also likely to be incomplete, imprecise, and highly subjective. But this subjectivity is actually a strength when it comes to “social” tagging schemes.

The reason tags are gaining ground on traditional fixed classification schemes is that people like being able to create their own labels, with their own personal relevance. People like not having to ponder whether they should file a photo of Westminster Abbey under “Places:UK:London” or “Architecture:Churches:Gothic”. Would Flickr contain even a tenth of the metadata if it provided a set of categories instead, and asked people to classify their photos accordingly? I don’t think so. Aside from the cognitive overhead involved in making those decisions, there’s the usability aspect to consider, too: repeatedly navigating a category hierarchy is going to be more difficult than just throwing a bunch of tags into a text box.
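To make the contrast concrete, here’s a minimal sketch (in Python, with made-up photo names and tags — none of this is real Flickr data or the Flickr API) of why flat tags are so cheap to use: a photo can carry any number of tags, and finding everything with a given tag is a simple inverted-index lookup, with no single “correct” filing location to agonize over.

```python
# Hypothetical photo data: each photo carries an arbitrary set of tags.
photos = {
    "westminster_abbey.jpg": {"london", "architecture", "churches", "gothic"},
    "big_ben.jpg": {"london", "atmedia"},  # a personal trip tag, not the event
    "conference_talk.jpg": {"atmedia", "presentations"},
}

# Build the inverted index: tag -> set of photos carrying that tag.
index = {}
for photo, tags in photos.items():
    for tag in tags:
        index.setdefault(tag, set()).add(photo)

# A global search for "atmedia" returns both uses of the tag -- the trip
# photo and the event photo -- which is exactly the collision debated above.
atmedia_photos = sorted(index["atmedia"])
print(atmedia_photos)
```

The collision in the search results isn’t a bug in this model; it’s the direct consequence of letting every tagger define their own meaning, which is the trade-off the post is arguing is worth making.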

So although it may be frustrating for one person to search for the tag “atmedia” and be confronted with photos of Big Ben instead of Big Veen, someone else is sitting in front of their computer perfectly delighted with Flickr for allowing him to group all the pictures from his trip with a single tag that is convenient and–for him–highly specific and descriptive.

It’s fine to suggest a canonical tag for use in classifying photos or other data (blog posts, links, etc.). But trying to specify exactly what that tag should and shouldn’t be used for goes against the grain of the system. It’s a futile effort at best.

In fact, Flickr already has a mechanism for grouping photos with a narrow set of common criteria: groups. It takes a few more steps to submit a photo to a group than it does to tag it, but that’s the price you have to pay for increased relevance in this case. There was a group for @Media 2005, but there doesn’t seem to be one for this year’s event yet. If anyone is interested, I’ll create one.

(As a final note, I have to say that I’m absolutely gagging for the new Tags feature in Movable Type 3.3. It’s about time…)

Hugo Awards 2003

Around this time each year, the World Science Fiction Convention (“Worldcon”) takes place. It touches down in a different city each year. The last one we attended was Bucconneer, in Baltimore in 1998. Before then, we went to Intersection in Glasgow in 1995, and I attended ConFiction in Den Haag in 1990. This year, the event was called Torcon 3, and it took place in Toronto. (We didn’t go. We were visiting friends and food in the South of The Netherlands instead.)

Worldcon is also where the Hugo Awards are announced. The Hugos are the “audience awards” of the science fiction world. Publishers like to tout their Hugo-winning authors. People who have not heard of an author might pick up a book that has “winner of the Hugo award” splattered over its cover. For people who didn’t attend the convention itself, the awards are one of the biggest pieces of news to emerge from it. You’d think that they’d maybe put the results up on the front page of their web site, wouldn’t you?

Okay, say they didn’t put up the results on the front page. Say the results are stuck on a page somewhere deeper in the site. Surely they’d have a link to it right on the home page! Surely?

Hello–2003 calling Torcon! Anyone home? Anyone heard of content management systems? Blogs? Personal publishing tools?

The SF community has embraced fanzines and mini-publishing totally. SF fans love getting together for cons. We love hanging out on the Internet in chat rooms, on Usenet, IRC and bulletin boards. Given the sheer volume of geeks and netheads involved in SF fandom, how is it possible for Worldcon web sites to be so uniformly rubbish?

I complained about this last year as well, and nothing has changed in the intervening period:

  • 2002: ConJosé. (Okay, so they did eventually put a link to the Hugos on their front page.)
  • 2003: Torcon 3. (Framesets…argh.)
  • 2004: Noreascon 4.
  • 2005: Interaction. (Ooh, pebble texture background…very 1997!)

At least Noreascon 4 has a blog. But do you notice any difference between the main site and the blog? Something to do with clarity of design, readability, timeliness of information? Is there some kind of WSFS rule that says you’re not allowed to use a graphic designer to put together a set of page templates? Some bizarre bylaw that makes information architecture and user testing a punishable offense?

Nor can the simple, old-fashioned HTML be excused on accessibility grounds: the frameset design for Torcon 3 does a great job of preventing useful navigation for anyone without a frames-capable browser.

Yet it’s perfectly possible for sites to be accessible, well-structured, and good-looking–all at the same time! Good visual design isn’t child’s play, but it’s not rocket science. Usability testing can be done simply and quickly. Simplicity of design can be combined with depth and breadth of information and interaction.

It’s not too much to ask, is it?

(Oh, and about the actual results for the 2003 Hugos: Robert J. Sawyer’s Hominids won the award for best novel. I haven’t read it yet, but some of the comments about it make me ambivalent about starting.)

John Sandford’s web site

I’ve been a fan of John Sandford’s for some time now. The Prey novels are excellent police thrillers, and Lucas Davenport is one of my favourite series characters–right up there with Elvis Cole, Spenser, Kinsey Millhone, and Miles Vorkosigan. But it’s only today that I stumbled across John Sandford’s web site–and it’s a cracker.

Looking at it in terms of my criteria for what makes a “good” web site, the Sandford site excels in a number of areas:

  • Content: lots of it. For most of his novels, there is a brief synopsis, the book’s first chapter, author comments (actually by the author’s son), and also pictures of the covers of all published editions. This is great stuff! Basic facts about the books, as well as insight from the writer himself.
  • Indexing/findability. The information architecture for the site is beautifully simple and perfectly effective. On the left-hand side of the page there is a sidebar with links to the main page for each book, and links to the other key sections of the site (FAQ, author bio, etc.). This sidebar is consistent, and always visible on each page. On the book sub-sections of the site, there are contextual navigation links at the top of the page. These allow you to switch between the pages that are available for that book: synopsis, chapter, covers, etc. There is no search facility, but the site is simple enough that it doesn’t need one.
  • Community. The site has a message board. Nothing complicated, but it allows fans to interact.
  • Connectedness. All of the book pages are internally hyperlinked to each other, so if you’re reading the comments for Chosen Prey, and see a reference to Easy Prey, it takes you there. Simple and effective. There is also a links page, which hooks you up to a number of rare book sites and other author sites.

Another very cool thing is that the site is run by John Sandford’s son, Roswell Camp. I can dig the whole father-and-son thing. 🙂

There are a few things that could be improved, for example allowing you to navigate directly to a book’s comments page, rather than having to go via its index, but overall the site is just damn good. It also mirrors exactly what I’m planning to do with my Bob Shaw project.

For some years now I have been on a quest to collect copies of all editions of Bob Shaw’s novels. I’m up to about a hundred or so now, and am probably about half to two-thirds of the way there–for the English-language editions. (I haven’t started on the foreign editions yet.) My intention is to create an “Encyclopedia of Shaw” on the web, containing detailed information about each book, reviews, comments, and all sorts of other things.

I made an abortive attempt at doing this back in 1998 (for some reason Compuserve is still maintaining the page, even though I left them long ago). It was just plain HTML, it was a pig to maintain, and I didn’t really have the time to put into it. Now, in 2003, I still don’t really have the time to spend on it, but Movable Type is going to make it so much more functional (Comments! Trackbacks!) and easier to maintain when I do get round to it.

I really ought to buckle down and get to work on it. It would be kinda cool. And it would be a lovely memorial to a fantastic writer.

What makes the web? Part 2

Last week, I wrote about what I think makes the Web what it is: where its true identity lies, and what its key qualities are. I identified four main points:

  • Content (primarily the sheer volume that is available)
  • Indexing (how easy it is to find information)
  • Community (how it brings people together)
  • Connectedness (how any web page can be linked to any other)

I’ve been thinking about this some more, and I’ve come to the conclusion that these four qualities are also the cornerstone of all good web sites. The properties that make the web as a whole such a successful medium are exactly the same as those that determine the strength of its building blocks.

Content
Content is what makes someone visit a web site in the first place. This can be content in its classical form, such as news stories, articles, or fiction. But it can just as easily be a product: a book to order and have delivered to your home address, or piece of software to download.

Note that the strength of a site’s content lies not just in its quality, but also in its volume, its freshness, and its speed of delivery.

Indexing
A web site is generally useless if you can’t find what you’re looking for when you go there. “Indexing”, in the context of a web site, means more than just a list of keywords, hyperlinked to the relevant pages. It means more than just a site map, or a search box, or an outline tree, although these are all useful elements. It means findability in general. It means that you must have some way of mapping your visitors’ content desires onto the structure of your web site. This is where Information Architecture comes into play.

Information Architecture draws on library science and cognitive science to bring people and information closer together. It helps make sure that when you arrive at a web site, you leave with what you were looking for.

Community
An on-line message board, where a web site’s patrons chat with each other, is a very simple example of community. More generally, though, community is about being and feeling in touch with a web site’s owners, as well as its other users. An email newsletter brings the web site to you, even when you haven’t visited it for a while. On an e-commerce site, showing feedback from customers can build a sense of community.

Connectedness
The great strength of the web in general is also a great strength of individual web sites. If you’re showing a visitor a particular product, you can instantly hook them into related or complementary products. If you’re presenting classical content, keywords can be hyperlinked to useful definitions, references, or more in-depth material. The web is determinedly non-linear, and people will jump around at the slightest mention of something interesting.

Successful web sites turn this to their advantage by making sure that these connections add value to the user’s experience, thereby ensuring that the user will come back for more.

Combining the principles

When I’m talking about the “success” of web sites, I’m talking about the success of the sites themselves, not of the businesses that may underlie them. It is quite possible for a successful web site to be a lousy value proposition for the business running it. Conversely, just because a web site is poor doesn’t mean that it can’t be making heaps of money for its owners.

A successful web site is one that is recognized within the ecosystem of the web as a whole as a strong entity. In a Darwinian sense, it is one that is capable of survival.

Numerous examples of strong, successful web sites spring to mind: Amazon, IMDB, Ebay, Slashdot. All of these score highly on all four of the principles I’ve outlined above:

  • Amazon’s product reviews build both content and community. It cross-links to an almost obsessive degree, and few would fault how easy it is to find stuff there.
  • IMDB is the place for information about movies. It has a strong reviewing community. You can search on almost anything, and once you arrive at a given movie or actor, you can hop around to your heart’s content (or at least until you’ve found the link to Kevin Bacon).
  • Ebay may seem like it has little community on board, but it concentrates this into its user ratings. On Ebay, as well as in real life, the community is what determines your reputation. Ebay scores relatively low on connectedness, but interestingly I have seen it taking steps to improve this aspect, by doing things like showing your recently viewed items under new searches. Its low connectedness also indicates where it could potentially reap large benefits.
  • Slashdot has good content, good community features, and excellent linking to external web sites. Its indexing is poor, because the only way to get at its old content is through a relatively primitive search box and search page. Slashdot could improve its holistic web “strength” according to my ratings by working on this aspect. On the other hand, it is primarily a news site, and being able to find things in their archives is of less importance to the site’s visitors.

I think that these four principles (dimensions?) are a decent way of classifying a web site’s strength. It doesn’t make for a perfect analysis, but it allows for quick identification of a site’s weaknesses, and where resources could be applied to improve it. Of course, using these success criteria is only of use if you want to increase a site’s Darwinian potential within the social ecosystem of the Web. The criteria say nothing about a site’s commercial prospects.

I suppose I need to work on the commercial aspects of this classification system…. It’s all fine and well to improve your web site in an abstract sense, but most businesses are probably more interested in how it can make them more money.