ia play

the good life in a digital age

Archive for the ‘information architecture’ Category

navigation isn’t a (good) promotional tool


One of the classic IA arguments is about why New Service X shouldn’t be added to the primary site navigation.

Users aren’t *generally* just wandering around the internet going “ooh, what’s that? I have no idea, why don’t I click it and see”. If you just add random new stuff to the primary navigation, the main thing that will happen is that it will get ignored as users carry on their journeys to wherever they were already going.

It can work OK for a new product category, e.g. groceries, but it really isn’t very effective for a new brand or unfamiliar feature, e.g. X-PIL. Combined with a big marketing push it might work, so you can get away with it if you have reason to believe this is going to be a really big new feature, e.g. iPlayer, *and* you are going to heavily market the brand.

Written by Karen

February 4th, 2011 at 2:23 pm

Posted in navigation

giving the user choice over “Did you mean?”


Be really, really wary of expanding the user’s queries without telling them. Don’t just give them results for aubergine and results for eggplant when they only searched for aubergine. You think you are being clever and helpful. If you’re wrong about the expansion then you are just being extremely irritating.

Either:
a) Suggest the expansion but don’t run it for them. Risks them missing it.
b) Run the expansion but tell them you’ve done it. Still risks them missing it.

Google has experimented with both approaches over the years, and currently has a bit of a mixed approach. Don’t assume their approach has “cracked” the problem.
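Option (b) above can be sketched in a few lines: run the expanded query, but make the expansion visible to the searcher. This is a minimal illustration, not any particular engine’s API; the index, synonym table and message wording are all made up.

```python
# A hypothetical synonym table; a real one would come from a thesaurus or logs.
SYNONYMS = {"aubergine": "eggplant", "eggplant": "aubergine"}

def search(index, query):
    """index maps terms to lists of document ids."""
    results = list(index.get(query, []))
    notice = None
    expansion = SYNONYMS.get(query.lower())
    if expansion:
        extra = [d for d in index.get(expansion, []) if d not in results]
        if extra:
            results += extra
            # The notice is the important part: never expand silently.
            notice = f"Also showing results for '{expansion}'. Search only for '{query}'?"
    return results, notice

index = {"aubergine": ["doc1"], "eggplant": ["doc2", "doc1"]}
results, notice = search(index, "aubergine")
```

The key design point is that the notice always accompanies the expanded results, so the user can undo the expansion if it was wrong.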

Written by Karen

February 3rd, 2011 at 2:21 pm

Posted in search

search: which features actually help?


1. Ranking

This is the least visible thing, the one you might not consider a feature at all and that mostly gets ignored, and it is absolutely the most important thing for you to dedicate time to getting right.

If the query isn’t particularly ambiguous then you need the top results to be right, without asking the searcher to do much else.

Ranking isn’t sexy and it takes care and attention. But it isn’t magic, it’s just rules. Ask what the rules are. Don’t be fobbed off. If no-one knows, work it out yourself.
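“Just rules” can be made concrete with a toy scorer. This is purely illustrative (the weights and field names are invented, not any real engine’s algorithm), but it shows the point: the rules are explicit, so anyone on the team can ask what they are.

```python
def score(doc, query):
    """Score a document against a query with plain, inspectable rules."""
    terms = query.lower().split()
    title = doc["title"].lower()
    body = doc["body"].lower()
    s = 0
    for t in terms:
        if t in title:
            s += 10          # rule: a title match beats a body match
        if t in body:
            s += 1           # rule: body matches still count
    s += doc.get("boost", 0) # rule: editors can boost key pages
    return s

docs = [
    {"title": "Contact us", "body": "phone numbers and addresses"},
    {"title": "History", "body": "how to contact the archive team"},
]
ranked = sorted(docs, key=lambda d: score(d, "contact"), reverse=True)
```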

2. Manual Suggestions (query expansion/narrowing)

This basically means Best Bets.

I’m very, very attached to Best Bets. This is mostly because I’ve been a search product manager as well as an IA on search re-design projects. Once the project team has packed up, the product manager (or web manager/editor) can still improve results and resolve problems using Best Bets. And they will need to. Promise.
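The mechanics of Best Bets are simple enough to sketch: an editorially maintained table checked before the organic results, with the picks pinned on top. The table contents and function names here are hypothetical.

```python
# Editorially maintained: the product manager can edit this after launch.
BEST_BETS = {
    "holidays": ["Annual leave policy"],
    "annual leave": ["Annual leave policy"],
}

def search_with_best_bets(query, run_engine):
    """Pin any Best Bets above the organic results from run_engine."""
    bets = BEST_BETS.get(query.strip().lower(), [])
    organic = run_engine(query)
    # Best Bets sit above organic results; duplicates are dropped below.
    return bets + [r for r in organic if r not in bets]

results = search_with_best_bets(
    "Holidays", lambda q: ["Holiday schedule", "Annual leave policy"]
)
```

This is exactly why Best Bets keep paying off after the project team has packed up: fixing a bad result is a table edit, not a development task.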

3. Automated Suggestions (query expansion/narrowing)

We can’t spell and we can’t type. And then we blame the poor old search engine when it doesn’t find what we were looking for.

Any decent search solution needs to have some solution to misspellings (where to put them is a problem for another day!). You can do some of this with Best Bets, but with a big and diverse enough set of users you’ll probably need something a bit more automatic like Google’s Did You Mean?

A related but broader concept is suggesting related searches. You might have spelt your query correctly but there’s a similar term that would get you better results. Ask.com used to do this.
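A minimal “Did you mean?” can be built by fuzzy-matching the query against terms actually in the index. This sketch uses the standard library’s difflib; a production engine would also draw on query logs and term frequency, and the term list here is invented.

```python
import difflib

# In practice this would be the index's real vocabulary.
INDEX_TERMS = ["aubergine", "magnifier", "braille", "stationery"]

def did_you_mean(query):
    """Suggest a close indexed term, or None if the query already matches."""
    if query.lower() in INDEX_TERMS:
        return None  # spelt correctly as far as we know
    matches = difflib.get_close_matches(query.lower(), INDEX_TERMS, n=1, cutoff=0.8)
    return matches[0] if matches else None

suggestion = did_you_mean("majnifier")
```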

It might seem perverse to prioritise the manual intervention over the automated one. I’d usually expect to have both but I have a few reasons for picking manual if it comes to a choice:

  • the manual option is probably cheaper to add on if neither comes as standard
  • automated suggestions often get better over time but might start a bit ropy
  • automated suggestions may be ‘black-box’: you might not be able to do anything with them if they are wrong/misleading. And every system I’ve worked with and/or used makes mistakes sometimes.

It’s worth asking whether there is any control over the automated suggestions. Is there a dictionary? Is it the right language (esp. UK v US English)? Can we edit it? How?

4. Filters and sort options (after you get search results)

These tend to get missed by users or interfere with their understanding of the page. Not all users will understand them, especially complex faceted filters. The positioning of filters/facets is very difficult to get right. Users home in on the top results, so above the first result is where filters are most likely to get noticed, and also most likely to be found annoying.

If you are doing product search then I’d probably still prioritise 1-3 but I’d strongly argue you need 4 as well.
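For product search the post-results filtering and sorting is conceptually simple, whatever the UI difficulties: the user narrows a result set they can already see. A sketch, with made-up fields and products:

```python
def apply_filters(results, category=None, max_price=None, sort_by=None):
    """Narrow and reorder an existing result set; None means 'no filter'."""
    out = [r for r in results
           if (category is None or r["category"] == category)
           and (max_price is None or r["price"] <= max_price)]
    if sort_by:
        out = sorted(out, key=lambda r: r[sort_by])
    return out

results = [
    {"name": "Big button phone", "category": "phones", "price": 30},
    {"name": "Talking watch", "category": "watches", "price": 45},
    {"name": "Amplified phone", "category": "phones", "price": 60},
]
phones = apply_filters(results, category="phones", max_price=50)
```

Note the filters never re-run the search; they only cut down what is already there, which is why they belong after the results rather than before.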

5. Clever query language

Quote marks seem to be reasonably widely understood, so I might argue these should be higher up your expectation list.

But unless you’ll have access to your users and be able to train them all… I wouldn’t prioritise operators like wildcards, NOT/AND/OR etc.

Find out what you get out of the box. Make that information available to interested users. But don’t invest lots of development effort and money here.
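Quote marks are the one operator worth expecting, and handling them is cheap: the standard library’s shlex will split a query into quoted phrases and loose terms. A sketch (the function name is mine):

```python
import shlex

def parse_query(query):
    """Split a query into quoted phrases and single terms."""
    parts = shlex.split(query)  # respects "quoted phrases"
    phrases = [p for p in parts if " " in p]
    terms = [p for p in parts if " " not in p]
    return phrases, terms

phrases, terms = parse_query('"large print" books')
```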

6. Filters and sort options (before you run the search)

a) Radio buttons and drop-downs. These get missed: people don’t think about using them, they tend to just stick words in and hit go. Other users won’t use them because they don’t know they need to until they see the search results aren’t focused enough. So then they have to go backwards. So you might as well go with (4).

If you can sensibly default them then they can be more useful, but establishing what the sensible default is can be problematic.

b) Advanced search pages.
These are basically a collection of filters for the user to set before you run the search. Search specialists inevitably find advanced search useful but your average end-user doesn’t. The exception here is power users, but be sure the users actually are “power” users. You are likely to find power users where there are time/cost pressures around searching, e.g. staff answering customer calls or researchers using databases where they pay for searches. In these situations even reasonably techno-phobic users are motivated to get to grips with advanced searches, including some of the more complex query-building ones.

Another reason advanced search might be worthwhile is if your power users are also your most mouthy. If the segment of your audience that blogs/tweets is also the segment that might demand power features then you might consider the feature as marketing.

(Don’t be worried by people being intimidated by the label “advanced”. If they are intimidated by the word then they’ll be intimidated by the features.)

Written by Karen

February 1st, 2011 at 6:47 am

Posted in search

Best Bets in SharePoint


SharePoint search allows you to create Best Bets. They can be created by the Site Collection administrator.

If you go to Site Settings, you should see ‘Search Keywords’ under the Site Collection Administration heading.  If you don’t see it you probably haven’t got the right permissions.

You create a keyword, associate some synonyms with it and then add one or more Best Bet links. You can set it to expire and/or be reviewed.

Keyword: The search term that will generate the Best Bets and is also displayed above the Best Bet, e.g. PenFriend

Synonym: Other search terms that will also generate the Best Bet. These aren’t displayed e.g. Pen Friend

Best Bets: The editorially picked search result e.g. Penfriend Audio Labeller
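The keyword/synonym/Best Bet relationship above can be sketched as a plain data structure. This is only an illustration of the concept, not SharePoint’s actual API or storage.

```python
# One keyword, its synonyms, and the Best Bets they all trigger.
keywords = {
    "PenFriend": {
        "synonyms": ["Pen Friend"],
        "best_bets": [{"title": "Penfriend Audio Labeller",
                       "url": "/shop/penfriend"}],
    },
}

def best_bets_for(query):
    """Return the Best Bets whose keyword or synonyms match the query."""
    q = query.strip().lower()
    for keyword, entry in keywords.items():
        triggers = [keyword] + entry["synonyms"]
        # The keyword and all its synonyms trigger the same Best Bets;
        # only the keyword itself is displayed above the result.
        if q in (t.lower() for t in triggers):
            return entry["best_bets"]
    return []

hits = best_bets_for("pen friend")
```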

I can’t for the life of me figure out how to delete a keyword (Best Bet, yes. Keyword, no). Maybe it’s a permission thing again.

SharePoint Best Bets screenshots

Written by Karen

January 31st, 2011 at 6:38 am

Posted in search,sharepoint

e-commerce: google keywords


This article is part of a series about our e-commerce redesign.

Analysing your search referrals only tells you about the traffic you were successful in attracting. Even if you are getting lots of traffic for a particular keyword, that might be a tiny fraction of the number of people searching for that keyword. And the referrals say nothing about what you missed out on completely.

So it helps to look at search engine traffic for keywords in the kind of space your website sits in. The free tools like Google AdWords keyword tool have generated lots of debate about how useful they are but I tend to see them as worth a look if you’re just looking for rough ideas about language and relative popularity.

With our shop research, I didn’t get much data for ‘easy to see’, ‘easy to read’, ‘giant print’, ‘big print’, canes, liquid level indicators, and (my favourite) bumpons. I couldn’t find information about Moon (the alphabet) because it was drowned out by references to the satellite and all the other things called moon.

What I’ve learnt:

Generally people refer to concrete properties of the product rather than their condition. So it is ‘big button phone’ rather than ‘easy to see phone’ or ‘low vision phone’.

Singular is much more important than plural for objects like clocks and watches, but the opposite is true for book formats, e.g. large print books. Which is kind of obvious… you only want one watch but you may want many books. This might have a bit of an effect on our labelling policy, but not much, as Google doesn’t seem to make a huge deal of singular versus plural.

There’s clearly a big opportunity around low vision products. The interest in products for blind people (like Braille) is less significant, which makes perfect sense when you compare the size of the audiences.

And loads of people are interested in magnifiers.

Written by Karen

January 28th, 2011 at 6:15 pm

SharePoint search administration via SSP


SharePoint search features are managed at three levels:

  1. Farm level (configure the search service, configure crawler timeout settings etc…)
  2. SSP (Shared Services Provider) level
  3. Site collection level

The SSP functions are accessed via the Shared Services Administration.

SSP search functions:

  • add sources to the crawl
  • block URLs and URL patterns from the crawl
  • define crawl schedules
  • inspect crawl logs and troubleshoot crawls
  • emergency removal of items
  • install IFilters to support non-default file types
  • add/remove file types from the crawl
  • specify authoritative pages
  • create scopes for all site collections (you can also create at a site collection level)

In theory you can also specify noise words and create a custom thesaurus. See Inside the Index and Search Engines, chapter 5, for more.

You can by default index these types of content source:

  • SharePoint sites
  • Non-SharePoint websites
  • Windows file shares
  • Microsoft Exchange Server public folders (you can index Exchange mailboxes with a 3rd-party add-on)

Crawl management:

  • Full crawl: indexes all content
  • Incremental crawl: only accesses content that has been updated since the last crawl. Faster, but slow if accessing an external website
  • Crawl schedules can be specified for each content source
  • Crawls should be scheduled for low usage times

Crawl rules

  • content can be excluded by defining a rule
  • rules are applied in the specified order so you usually need to move exclude rules in front of include rules.
  • a URL can be excluded by adding it as an exclude rule
  • URL patterns can also be excluded and help keep the management of rules neat e.g. http://www.bbc.co.uk/* or http://www.amazon.co.uk/*/dp/*
  • Exclude rules will remove any matched URLs during the next crawl
  • If you need to remove a URL in an emergency you do this via “Search Result Removal” instead
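The ordered-rules behaviour described above can be sketched with the standard library’s fnmatch, which handles the `*` patterns. This is an illustration of the concept, not SharePoint’s actual implementation.

```python
from fnmatch import fnmatch

# Ordered rule list: exclude rules placed before include rules, as above.
RULES = [
    ("exclude", "http://www.amazon.co.uk/*/dp/*"),
    ("include", "http://www.amazon.co.uk/*"),
]

def should_crawl(url):
    """First matching rule wins, so rule order matters."""
    for action, pattern in RULES:
        if fnmatch(url, pattern):
            return action == "include"
    return False  # no rule matched: don't crawl

allowed = should_crawl("http://www.amazon.co.uk/gp/bestsellers")
blocked = should_crawl("http://www.amazon.co.uk/Some-Book/dp/123")
```

If the include rule came first it would swallow every Amazon URL, and the `/dp/` exclusion would never fire, which is why exclude rules usually need to be moved to the front.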

SharePoint search admin screens

Resources elsewhere:
Introduction to SharePoint Search Indexes for DPM Administrators
Enterprise Search administration

Written by Karen

January 28th, 2011 at 1:20 pm

Posted in search,sharepoint

the recommendation trap: iPlayer

without comments

I am a bit obsessed by ‘when recommendations go wrong’ scenarios like the JustRabbitHutches incident.

iPlayer hasn’t done anything that silly, but it does seem to struggle with the recommendation concept. The recommendations sit particularly uneasily alongside the new Favourites functionality.

When the latest incarnation was in beta, I was quite excited by the prospect of favourite programmes and categories functionality. This had the potential to meet some of the needs left by the absence of a sophisticated browse function. If I could tailor the content more then I’d need to browse less.

But the new site makes surprisingly little use of the favourites functionality. After you’ve put the effort into setting your favourites, it pretty much ignores all the work you’ve put in.

The favourite programmes bar is always closed. The favourite categories are similarly always closed. The radio stations box doesn’t remember your selection.

The homepage is dominated by four sections: Featured; For You; Most Popular; and Friends. None of these areas seem to be influenced by your own preferences.

Featured is rarely of interest to me but I get the editorial need to have some promo space.

For You is where the recommendations kick in, but at least initially I had no idea what this section was supposed to be doing. A good design pattern is to explain recommendations à la Amazon and to let you know if there is anything you can do to make the suggestions better.

Most Popular is OK for me. Occasionally my interests overlap with the majority and then this spot is useful. Friends might be occasionally interesting, although “people like you like” might have been more valuable. It seems a bit odd for the area to persist if you don’t log in/specify any friends.

All these sections are potentially useful but the best predictor of my interests is my interests. It seems that in this design My Favourites and My Categories are given lower emphasis than *everything* else.

This is compounded by the presence of the For You section. As another commentator put it:

“why on earth would the site suggest I watch Eastenders? It’s been on TV for over 25 years and I’ve never once felt inclined to watch it, so what intuitive masterstroke has been developed to think that I may now wish to start?”.

Once you give recommendations personal labels like “For You” then people start to take your recommendations personally.

I’m annoyed that I told iPlayer what I like and it still insists on telling me that BBC 3 sitcoms are “for you!”. It’s started reminding me of my grandad and that’s not a flattering comparison.

Written by Karen

November 9th, 2010 at 6:30 am

Posted in bbc,recommendations

reasons to define a SharePoint content type


As a general principle it is best not to go overboard on defining SharePoint content types. They add power to information retrieval but also add content creation overheads. Keep the number of types reasonable, and also the number of metadata fields. (Obviously the art is in defining what ‘reasonable’ means.)

A list of reasons to define a specific content type:

  • you want to attach a document template for that content type
  • there’s a standard workflow for that content type
  • there’s a standard info policy for that content type
  • you want properties of the content type to be searchable through advanced search
  • you want to restrict a search to that content type
  • you want to be able to sort a list or library by a specific metadata field of the content type
  • you want to categorise a list or library by a specific metadata field of the content type

See also Microsoft’s Managing enterprise metadata with content types

Written by Karen

November 8th, 2010 at 5:32 am

e-commerce project: the browse structure


This article is part of a series about our e-commerce redesign.

The browse structure of any website is always controversial within the organisation. I’m always struck by the discrepancy between how interested the organisation is in browse (as opposed to search) and how interested the users are. I’m not saying users don’t want a sensible, intuitive navigation scheme, but they also want a really effective search engine. Most web design projects involve huge amounts of effort invested in agreeing the navigation and very few discussions of how search will work.

Partly this is because navigation is easy for stakeholders to visualise. We can show them a sitemap and they can instantly see where their content is going to sit. And they know the project team is perfectly capable of changing it if they can twist their arm. With search on the other hand, stakeholders often aren’t sure how they want it to work (until they use it) and they’re not sure if it is possible to change anyway (search being a mysterious technical thing).

Even forgetting search, the focus on navigation is almost always about primary navigation, with most stakeholders having very little interest in the cross-links or related journeys. The unspoken assumption is still that the important journey is arriving at the homepage and drilling down the hierarchy.

So I went into the e-commerce project assuming we’d need to spend a lot of time consulting around the navigation structure (but knowing that I’d need to make sure I put equal energy into site search, SEO and cross-linking, regardless of whether I was getting nagged about it).

A quick glance also showed that the navigation wasn’t going to be simple to put together. Some of my colleagues thought I wasn’t sufficiently worried but I’m used to the pain of categorising big diverse websites or herding cats as Martin puts it. I participated in at least three redesigns of the BBC’s category structure, which endeavours to provide a top-down view of the BBC’s several million pages on topics as diverse as Clifford the Big Red Dog, the War on Terror and Egg Fried Rice.

My new challenge was a simple, user friendly browse structure that would cover a huge book catalogue,  RNIB publications, subscriptions to various services, magazines, and a very diverse product catalogue of mobility aids, cookware, electronics and stationery. And those bumpons, of course.

Card-sorting is usually the IA’s weapon of choice in these circumstances. Now I’ve got my doubts about card-sorting anyway, particularly where you are asking users to sort a large, diverse set of content of which they are only interested in a little bit. Card-sorting for bbc.co.uk always came up with a very fair, balanced set of categories, but one that didn’t really seem to match what the site was all about. It was too generous to the obscurer, less-trafficked bits of the site and didn’t show due respect to the big guns. Users didn’t really use it, probably even the users who’d sorted it that way in the testing. My favourite card-sorting anecdote was the guy who sorted into two piles: “stuff I like” and “stuff I don’t like”. Which I think also alludes to why card-sorting isn’t always successful.

In any case, card-sorting isn’t going to be half as simple and cheap when your users can’t see.

We decided to put together our best stab at a structure and create a way for users to browse on screen. Again, not just any old prototyping method is going to work here – however the browse structure was created, it would need to be readable with a screenreader. So, coded properly.

I wrote some principles for categories and circulated them to the stakeholders. Nothing controversial but it is helpful to agree the ground rules so you can refer back to them when disagreements occur later.

I reviewed the existing structure, which has been shaped over the years by technical constraints and the usual org structure influence.  I also looked at lots of proposed re-categorisations that various teams had worked on. I looked at which items and categories currently performed well. I reviewed the categorisation structures as part of the competitive review.

I basically gathered lots of information. And then stopped. And looked at it for a bit. And wondered what to do next.  Which is also pretty normal for this sort of problem.

(actually one of the things I did at this point was write up the bulk of this blog post – I find it really, really helpful to reset my thinking by writing up what I’m doing)

Somewhat inevitably I got the post-it notes out. I wrote out a post-it for each type of product and laid them out in groups based on similarity (close together for very similar products and further away as the relationship gets weaker). This is inevitably my sense of similarity, but remember this is a first stab to test with users.

Where obvious groups developed I labelled them with a simple word, something like books or toys. If a group needed a more complex label then I broke it up or combined it until I felt I had very simple, easily understood labels (essentially a stab at “basic categories”).

There were too many groupings, and there was also a scattering of items that didn’t fit any group (the inevitable miscellaneous group). I dug out the analytics for the shop to see how my groupings compared in terms of traffic. I made sure the busiest groups were kept and the less popular sections got grouped up or subsumed.

This gave me a first draft to share with the business units. Which we argued about. A lot.

I referred everyone back to the principles we’d agreed and the analytics used to make the decisions. Everyone smiled sweetly at me and carried on with the debate.

After some advice from my eminently sensible project manager, I conceded one of the major sticking points. As I reported on Twitter at the time:

“Have given in and allowed the addition of a 13th category. Will the gates of hell open?”

Luckily at this stage we were finally able to do some usability testing with some real users. Only four mind, but they all managed to navigate the site fine and actually said some nice stuff about the categories. One tester even thought there must be more products on the new site, in spite of us cutting the categories by two-thirds.

So if someone attempts to re-open the browse debate, hopefully we can let usability tester #2 have the last word as in her opinion the new shop is…

“very, very clearly divided up”

Enough navigation, time to concentrate on search….

Related posts:
Re-branding miscellaneous

Written by Karen

May 12th, 2010 at 6:50 am

tripped up by “you might also like”


My rabbit hutch purchasing has been an interesting vein of UX experiences. In the end I bought a hutch from JustRabbitHutches, whose website was mostly pleasant to use and whose service was great.

That said, once I’d added my hutch to the basket I noticed they’d been tripped up by recommendations. Under my basket were suggestions that I might enjoy. Unfortunately one of them was a “delivery surcharge”.

Surcharges are always so much fun

Now this isn’t as damaging as Walmart’s dodgy DVD recommendations but it’s another example of how careful you have to be.

You could also ask why JustRabbitHutches thought they needed a recommendation engine here. After all, the clue is in the name. If I’m buying a rabbit hutch, how likely is it that they’ll be able to sell me another one?

Written by Karen

March 23rd, 2010 at 6:42 am