ia play

the good life in a digital age

Archive for the ‘ucd’ Category

reasons to define a SharePoint content type

without comments

As a general principle, it is best not to go overboard on defining SharePoint content types. They add power to information retrieval but they also add content creation overheads. Keep the number of types reasonable, and the number of metadata fields too. (Obviously the art is in deciding what ‘reasonable’ means.)

A list of reasons to define a specific content type (a toy sketch after the list illustrates the last two):

  • you want to attach a document template for that content type
  • there’s a standard workflow for that content type
  • there’s a standard info policy for that content type
  • you want properties of the content type to be searchable through advanced search
  • you want to restrict a search to that content type
  • you want to be able to sort a list or library by a specific metadata field of the content type
  • you want to categorise a list or library by a specific metadata field of the content type
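To make those last two bullets concrete, here is a toy sketch in plain Python. It has nothing to do with SharePoint’s real object model – the classes and field names are invented – but it shows what consistently typed metadata buys you:

```python
# Toy illustration (invented names, not SharePoint's API): once every
# document of a given content type carries the same named fields,
# restricting and sorting a library becomes trivial.
from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    content_type: str              # e.g. "Meeting Minutes"
    metadata: dict = field(default_factory=dict)

library = [
    Document("Q3 board minutes", "Meeting Minutes", {"meeting_date": "2010-09-14"}),
    Document("Q2 board minutes", "Meeting Minutes", {"meeting_date": "2010-06-08"}),
    Document("Travel policy", "Policy", {"review_date": "2011-01-01"}),
]

# restrict a view (or a search) to one content type...
minutes = [d for d in library if d.content_type == "Meeting Minutes"]

# ...then sort by a metadata field the content type guarantees is present
minutes.sort(key=lambda d: d.metadata["meeting_date"], reverse=True)
print([d.title for d in minutes])   # newest minutes first
```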

See also Microsoft’s Managing enterprise metadata with content types

Written by Karen

November 8th, 2010 at 5:32 am

avoiding user testing too late, some challenges

with one comment

The classic usability complaint is that projects just tack a usability test on at the end of development, when it is too late to make any changes. Which leaves the usability consultant in the unenviable position of having to tell the project team that their product doesn’t work, when they can’t do anything about it. It can feel like a waste of time and money.

In reality these sessions are rarely entirely useless and I’d prefer to run them rather than have nothing at all. A lot of the feedback is often about content, which can usually be changed at the last minute. You can also capture general customer research insights that can feed into the next project.

A couple of projects I’ve got involved with recently have included late-stage usability testing. We need to tackle this, but we’ve got some bigger challenges than usual in bringing in a better approach to usability testing.

1. The organisation can’t afford rounds of testing

This is hardly unique to us and I fully expected this when I took the job. The answer usually involves the word “guerrilla” at some point.

2. We have some challenges in doing guerrilla testing

Our target audience (blind and partially sighted people) is a small section of the population and can’t easily be found by popping into libraries and coffee shops. Everybody else really isn’t representative and would give completely different results. Admittedly our target audience can often be found in our own offices, or rather in the public resource centre downstairs. But you can’t just get them to test on your laptop, as they need the access tech that they are used to using. We might need to find folks who are both willing to test and who use the access tech we have available. Not insurmountable problems, but they will take a bit of planning.

3. Can’t easily do paper-based testing or flat onscreen mock-ups.

I’ve mentioned this particular challenge before. We can survey and interview quite easily. We can test existing or competitor systems. But when it comes to trying out how well new designs are working, our options narrow considerably. Whilst it would be interesting to experiment with tactile mock-ups, the admin overheads and learning curve probably aren’t justified. Really we should just concentrate on working prototypes, rather than getting carried away with how cool an IA presentation idea “tactile wireframes” is.

Written by Karen

August 6th, 2010 at 6:09 am

Posted in accessibility, rnib, ucd

e-commerce project: the browse structure

without comments

This article is part of a series about our e-commerce redesign.

The browse structure of any website is always controversial within the organisation. I’m always struck by the discrepancy between how interested the organisation is in browse (as opposed to search) and how interested the users are. I’m not saying users don’t want a sensible, intuitive navigation scheme, but they also want a really effective search engine. Most web design projects involve huge amounts of effort invested in agreeing the navigation and very few discussions of how search will work.

Partly this is because navigation is easy for stakeholders to visualise. We can show them a sitemap and they can instantly see where their content is going to sit. And they know the project team is perfectly capable of changing it if they can twist their arm. With search on the other hand, stakeholders often aren’t sure how they want it to work (until they use it) and they’re not sure if it is possible to change anyway (search being a mysterious technical thing).

Even forgetting search, the focus on navigation is almost always about primary navigation, with most stakeholders having very little interest in the cross-links or related journeys. The unspoken assumption is still that the important journey is arriving at the homepage and drilling down the hierarchy.

So I went into the e-commerce project assuming we’d need to spend a lot of time consulting around the navigation structure (but knowing that I’d need to put equal energy into site search, SEO and cross-linking, regardless of whether I was getting nagged about it).

A quick glance also showed that the navigation wasn’t going to be simple to put together. Some of my colleagues thought I wasn’t sufficiently worried but I’m used to the pain of categorising big diverse websites or herding cats as Martin puts it. I participated in at least three redesigns of the BBC’s category structure, which endeavours to provide a top-down view of the BBC’s several million pages on topics as diverse as Clifford the Big Red Dog, the War on Terror and Egg Fried Rice.

My new challenge was a simple, user-friendly browse structure that would cover a huge book catalogue, RNIB publications, subscriptions to various services, magazines, and a very diverse product catalogue of mobility aids, cookware, electronics and stationery. And those bumpons, of course.

Card-sorting is usually the IA’s weapon of choice in these circumstances. Now I’ve got my doubts about card-sorting anyway, particularly where you are asking users to sort a large, diverse set of content of which they are only interested in a small part. Card-sorting for bbc.co.uk always came up with a very fair, balanced set of categories, but one that didn’t really seem to match what the site was all about. It was too generous to the more obscure and less trafficked bits of the site and didn’t show due respect to the big guns. Users didn’t really use it, probably not even the users who’d sorted it that way in the testing. My favourite card-sorting anecdote was the guy who sorted into two piles: “stuff I like” and “stuff I don’t like”. Which I think also alludes to why card-sorting isn’t always successful.

In any case, card-sorting isn’t going to be half as simple and cheap when your users can’t see.

We decided to put together our best stab at a structure and create a way for users to browse it on screen. Again, not just any old prototyping method is going to work here – however the browse structure was created, it would need to be readable with a screenreader. So it needed to be coded properly.

I wrote some principles for categories and circulated them to the stakeholders. Nothing controversial but it is helpful to agree the ground rules so you can refer back to them when disagreements occur later.

I reviewed the existing structure, which has been shaped over the years by technical constraints and the usual org-structure influence. I also looked at lots of proposed re-categorisations that various teams had worked on. I looked at which items and categories currently performed well. And I reviewed competitors’ categorisation structures as part of the competitive review.

I basically gathered lots of information. And then stopped. And looked at it for a bit. And wondered what to do next. Which is also pretty normal for this sort of problem.

(Actually, one of the things I did at this point was write up the bulk of this blog post – I find it really, really helpful to reset my thinking by writing up what I’m doing.)

Somewhat inevitably, I got the post-it notes out. I wrote out a post-it for each type of product and laid them out in groups based on similarity (close together for very similar products, further away as the relationship gets weaker). This is inevitably my sense of similarity, but remember this is a first stab to test with users.

Where obvious groups developed I labelled them with a simple word, something like “books” or “toys”. If a group needed a more complex label then I broke it up or combined it until I felt I had very simple, easily understood labels (essentially a stab at “basic categories”).

There were too many groupings and there was also a scattering of items that didn’t fit any group (the inevitable miscellaneous group). I dug out the analytics for the shop to see how my groupings compared in terms of traffic. I made sure the busiest groups were kept and the less popular sections got grouped up or subsumed.

This gave me a first draft to share with the business units. Which we argued about. A lot.

I referred everyone back to the principles we’d agreed and the analytics used to make the decisions. Everyone smiled sweetly at me and carried on with the debate.

After some advice from my eminently sensible project manager, I conceded one of the major sticking points. As I reported on Twitter at the time:

“Have given in and allowed the addition of a 13th category. Will the gates of hell open?”

Luckily at this stage we were finally able to do some usability testing with some real users. Only four, mind, but they all managed to navigate the site fine and actually said some nice stuff about the categories. One tester even thought there must be more products on the new site, in spite of us cutting the categories by two-thirds.

So if someone attempts to re-open the browse debate, hopefully we can let usability tester #2 have the last word as in her opinion the new shop is…

“very, very clearly divided up”

Enough navigation, time to concentrate on search…

Related posts:
Re-branding miscellaneous

Written by Karen

May 12th, 2010 at 6:50 am

tripped up by “you might also like”

without comments

My rabbit hutch purchasing has been an interesting vein of UX experiences. In the end I bought a hutch from JustRabbitHutches, whose website was mostly pleasant to use and whose service was great.

That said, once I’d added my hutch to the basket I noticed they’d been tripped up by recommendations. Under my basket were suggestions that I might enjoy. Unfortunately one of them was a “delivery surcharge”.

Surcharges are always so much fun

Now this isn’t as damaging as Walmart’s dodgy DVD recommendations but it’s another example of how careful you have to be.

You could also ask why JustRabbitHutches thought they needed a recommendation engine here. After all, the clue is in the title. If I’m buying a rabbit hutch, how likely is it that they’ll be able to sell me another one?

Written by Karen

March 23rd, 2010 at 6:42 am

why your search engine (probably) isn’t rubbish

without comments

Now all search engines struggle, to varying degrees, with the knotty mess that is natural language. But they don’t generally get called rubbish for not succeeding with the meaty search challenges.

Rubbish search engines are the ones that can’t seem to answer the most basic requests in a sensible manner. These are the ones that get mocked as “random link generators”, the gibbering wrecks of their breed.

Go to Homebase and search for “rabbit hutch” (we need another one as two of our girls are about to produce heaps of bunnies at the same time).

The first result is “Small plastic pet carrier”. There are a number of other carriers and cages. Then there’s a “Beech Finish Small Corner Desk with Hutch”. Finally there’s a Pentland Rabbit Hutch at result #8. I asked for “rabbit hutch” and they’ve got a rabbit hutch to sell me, but they’re showing me pet carriers and beech-finish corner desks.

This is a rubbish set of results. But it doesn’t mean the search engine is rubbish.

Somebody made a rubbish decision. They’ve set it up shonky.

So before you reach for the million pound enterprise search project, try having a quick look under the bonnet with a spanner.

Is it AND or OR?

This is reasonably easy to test, if you can’t ask someone who knows.

Pick a word that will be rare on your site and another word that doesn’t appear with the rare one, e.g. “Topaz form” for my intranet. A rare word is one that should only appear once or twice in the entire dataset, so you can check that the other word doesn’t appear with it. You may need to be a bit imaginative, but unique things like product codes can be helpful here. If the query returns no results, you’ve probably got an AND search. More than a couple of results (and ones that don’t mention Topaz) and you’ve probably got OR.

(This can get messed up if there is query expansion going on, but hopefully the rare word isn’t one that whatever query-expansion rules exist will act on.)

AND is more likely to be problematic as a setting. You’ll get lots of “no results”. You’ll need your users to be super precise with their terminology and to spell every word right. If they are looking for “holiday form” and the form is called “annual leave form”, they’ll get no results.
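Here’s the difference in miniature – a toy sketch with three invented intranet documents, nothing like a real engine’s internals:

```python
# Toy index: strict AND matching returns nothing for near-miss queries,
# while OR matching at least returns candidates that can then be ranked.
docs = {
    "annual leave form": {"annual", "leave", "form"},
    "expenses form": {"expenses", "form"},
    "travel policy": {"travel", "policy"},
}

def search(query, mode="AND"):
    terms = set(query.lower().split())
    if mode == "AND":
        # every query term must appear in the document
        return [title for title, words in docs.items() if terms <= words]
    # OR: any single query term is enough
    return [title for title, words in docs.items() if terms & words]

print(search("holiday form", "AND"))  # [] – 'holiday' appears nowhere
print(search("holiday form", "OR"))   # both forms come back, ready to rank
```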

OR will generate lots of results. This is ok if the sort order is sensible. Very few people care that Google returned 2,009,990 results for their query. They just care that the first result is spot-on.

So most of the time you probably want an OR set-up.

(Preferably combined with support for phrase searching, so users who want to and know how to can put their searches in nice speech marks and run an AND search.)

Is there crazy stemming/query expansion going on?

Query expansion is search systems trying to be clever – often getting it wrong, and not telling you what they’ve done so you can unpick it. Basically the search system is taking the words you gave it and giving you results for those words, plus some others that it thinks are relevant or related.

Typical types of expansion are stemming (expanding a search for fish to include fishes and fishing), misspelling correction, and synonyms (expanding a search for cockerel to include rooster).

This is probably what is happening if you are getting results that don’t seem to include the words you searched for anywhere on the page (although metadata is another option).

Now this stuff can be really, really helpful. If it is any good.

Have you got smart, sophisticated query expansion like Google? Or does it do silly stemming – silly from a day-to-day, not a Latin, perspective – like equating animation with animals? If it is the silly version then definitely switch it off (or tweak it if you can).
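For a feel of how that happens, here’s a deliberately crude suffix-stripper – a sketch, not any real engine’s stemmer – that cheerfully collapses both words to the same root:

```python
# A crude stemmer that strips common suffixes. Fine Latin, bad retrieval:
# 'animation' and 'animals' both collapse to 'anim'.
SUFFIXES = sorted(["ation", "ing", "als", "al", "es", "s"], key=len, reverse=True)

def crude_stem(word):
    for suffix in SUFFIXES:
        # keep at least a three-letter stem so short words survive
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

for w in ["animation", "animals", "fish", "fishes", "fishing"]:
    print(w, "->", crude_stem(w))
# animation -> anim, animals -> anim,
# fish -> fish, fishes -> fish, fishing -> fish
```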

Even if you’ve got smart expansion options available, it’s generally best practice to either give the user the option of running the expanded (or alternative) query, or at the very least the option of undoing it if the engine has got it wrong. They won’t always spot the options (Google puts lots of effort into coming up with the right way of doing this) but it’s bad search-engine etiquette to force your query on a user.

Is the sort order sensible?

Back to that Homebase example. The main problem there is sorting by price, low to high. That’d be fine (actually very considerate of Homebase) if I’d navigated to a category full of rabbit hutches. But I didn’t. I searched for rabbit hutches and got a mixed bag of results that included plenty of things a small child could tell you aren’t rabbit hutches.

The solution? Sort by relevancy.
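A toy illustration of the difference – invented prices and a naive word-overlap score, but it makes the point:

```python
# Sorting search results: price low-to-high surfaces the pet carrier,
# relevance high-to-low surfaces the thing the user actually asked for.
products = [
    ("Small plastic pet carrier", 12.99),
    ("Beech Finish Small Corner Desk with Hutch", 89.99),
    ("Pentland Rabbit Hutch", 119.99),
]

def relevance(title, query):
    # naive score: how many query terms appear in the title
    title_words = set(title.lower().split())
    return sum(term in title_words for term in query.lower().split())

query = "rabbit hutch"
by_price = sorted(products, key=lambda p: p[1])
by_relevance = sorted(products, key=lambda p: relevance(p[0], query), reverse=True)

print(by_price[0][0])      # Small plastic pet carrier – the Homebase experience
print(by_relevance[0][0])  # Pentland Rabbit Hutch
```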

I’ve seen quite a lot of bad search set-ups recently where the sort order was set to alphabetical. Why? Unless, as Martin said when I bemoaned this on Twitter, your main use case is “to enable people to find stuff about aardvarks”.

News sites sometimes go with most recent as the sort order. That kinda makes sense, but you need to be sure the top results are still relevant, not just recent.

Interestingly, sort order doesn’t matter so much if you’ve gone for AND searches and you haven’t got any query expansion going on. If you’re pretty sure that everything in the result set is relevant, then you’ve got more freedom over sort order. If not, stick with relevancy.

(I don’t need to tell you that you want relevancy sorted high to low, do I?)

So, people, stop giving me grief over navigation. Let’s talk about that rubbish search engine you’ve got. I could probably fix that for you.

Written by Karen

March 5th, 2010 at 6:04 am

Posted in search


trying out the screen-reader experience

without comments

I’m not a screenreader expert, and if you are wondering how your site works in screenreaders it is worth getting it tested properly by experts. But if you just want to get a flavour of what it is like to use a screenreader, or how screenreaders cope with particular types of content, then these tools might be helpful.

Fangs Screen Reader Emulator :: Add-ons for Firefox. This Firefox add-on will produce a (text) version of your page to give you an idea of how a screenreader might read it. It’s just an idea, as it depends on the screenreader, and it doesn’t help you understand how the page might sound.

If you want the actual audio experience:

NVDA is a free and open-source screen reader for Windows. It apparently works best with Firefox. I find it useful for quickly pointing the cursor at a bit of the page and listening to how that is read out. If you want to get a real sense of how the page might be navigated then you’ll need to learn some of the commands. And you’ll probably want to slow it down to start with (go to preferences > voice controls).

JAWS is a widely used screenreader but definitely not free. You can however download a free trial. As with NVDA, you’ll need to learn some commands.

All the screenreaders are easier to use if you tend to use the keyboard more than the mouse. You’ll already be in the habit of memorising all those key combinations.

It is important to remember that a screenreader user’s experience of your page will vary depending on how many of the screenreader’s functions they know and how they have their preferences set. The setting that controls how much punctuation is read out makes a big difference, but there are legitimate reasons for having it set to read all punctuation (which probably makes the page sound worse and harder to process).

Written by Karen

March 1st, 2010 at 4:38 pm

Posted in accessibility

worst drop down so far this year

without comments

Drop-down menus aren’t inherently evil but they do seem to encourage all sorts of terrible behaviour.

HMCS CourtFinder includes a menu that is certainly the worst I’ve had to interact with this year, and probably for quite a long time before that.

Stupid menu

The list is incredibly long. But more damagingly, it isn’t in *any* order that I can see. Nor is this a list where you or I are likely to be sure exactly what the term we’re looking for is. After all, “types of court work” isn’t a classification that most of us know off by heart.

Written by Karen

February 9th, 2010 at 6:15 am

topical navigation on CHOW

without comments

CHOW has a nice example of topical navigation.

Timely nav

It’s cold, people are trying to eat healthily, and it is Superbowl time (for the Americans anyway). So the navigation includes nachos, snacks, braises and healthy recipes.

I’m very fond of this kind of navigation. For big sites it is rare that the navigation actually contains exactly what the user is looking for; instead it provides a starting point for a journey. But for any site where interest in content is influenced by outside events, you can use this knowledge to get users where they are going much, much faster and with greater confidence.

Written by Karen

February 8th, 2010 at 6:00 am

Posted in navigation

various commentators on the iPad and accessibility

without comments

After some of the frustrations with the accessibility of the iPhone when it first launched, I wondered what people were saying about the accessibility of the iPad. There’s not masses of commentary yet, and there doesn’t seem to be any from anyone with first-hand experience (unsurprisingly).

This didn’t stop abledbody being unimpressed with the accessibility of the announcement:

“In Apple’s rush to debut the new iPad tablet it forgot one little piece of marketing: Accessibility. Apple has an accessibility page but it didn’t bother to add the iPad before launching it yesterday at its headquarters. And even though Steve Jobs’ keynote was likely prepared, Apple didn’t bother to add captions for deaf or hard of hearing reporters, nor did it add captions to the 46-minute video broadcast of Jobs’ speech or the video “demo” of the new tablet.”

But they do go on to say that the iPad has the same accessibility features as the iPhone, including VoiceOver, screen zoom, mono audio and closed-caption support. They believe the size and weight are a good thing, as are the built-in speakers.

Not so good is the shortage of captioned content to actually watch, and the inability to plug in alternative input devices.

abledbody: news, insights and reviews on disability and assistive technology » Hey Apple, What About iPad’s Accessibility?.

AccessTech News is pleased with the external keyboard, white-on-black display and the cognitive simplicity, but mentions that fewer languages are supported for VoiceOver.

Accessibility and the iPad: First Impressions « AccessTech News.

Mac-cessibility Network comments that “iWork for the Mac is almost entirely accessible, and Apple has made it a point to have good access to its AppStore offerings. We expect iWork for the iPad to be accessible, but this is not confirmed.”

They also have content concerns:

“To date, electronic book stores, such as Amazon’s Kindle store, have not provided books in an accessible format, owing to DRM restrictions. We hope Apple may be able to pave the way for the visually impaired and their access to content with the iBooks application and store. If VoiceOver does indeed have access to the content in these publications, it would be a tremendous step forward for access to printed media.”

The Mac-cessibility Network – News [Lioncourt.com]

Written by Karen

January 29th, 2010 at 5:32 pm

Posted in accessibility

ways of adding metadata

with one comment

I was digging around in my files this weekend and found this table I made once of different approaches to applying metadata to content. At first glance the volunteers example looks like it is only relevant to charities, but in a lot of scenarios that get described as users tagging, it is actually volunteers tagging. The difference is between doing something for your own benefit (users) and contributing something to a greater cause (volunteers).

Users
What it is: users apply metadata to their own content, or to content they have gathered for their own use.
Strengths: cheap; real user language; subjective value judgements; highly reactive; latest trend vocabulary.
Weaknesses: no guarantee of contributions; the same tag can mean different things; different tags can mean the same thing; cryptic personal tags; smaller interpretations get drowned out; hardly anyone goes back and changes out-of-date tagging.
Recommended environment: a large user base with a *selfish* motivation for users – often gathering/collecting – and a reasonably shared vocabulary. Rarely works on a single site where the user could instead aggregate links or content on a generic site like delicious.

Volunteers
What it is: unpaid volunteers apply metadata to content produced by others, e.g. Freebase.
Strengths: depending on how it is handled, can be more predictable and reliable than users; may be close to user language; can be guided more like staff and asked to go back and make changes.
Weaknesses: can require more management and attention than users; smaller numbers may not add up to enough hours; probably not viable in most commercial enterprises, although it can still be done if the company offers a free-at-consumption service that may be perceived as a public good.
Recommended environment: where you can rely on lots of good will. Probably in combination with another approach, unless a large number of volunteers are likely.

Staff authors
What it is: the paid author applies metadata to their own content.
Strengths: small commitment required from each staff member; expert knowledge of the content.
Weaknesses: low motivation and interest; may be too close to the content to understand user needs; more likely to be formal/objective.
Recommended environment: you have good historical examples of imposing new activities on the authors and getting them to follow them – probably quite a process- and guideline-driven organisation. Bad where your authors think of themselves as creatives… they’ll think metadata is beneath them.

Staff specialists
What it is: paid metadata specialists apply metadata to content produced by others.
Strengths: highly motivated; objectives likely to be tied to the quality of this work.
Weaknesses: cost; needs to read the content first; may not necessarily be user-focused; more likely to be formal/objective.
Recommended environment: strong information management skills in the organisation. The project needs to be resourced on an ongoing basis. The business probably needs to see a very close correlation between the quality of the metadata and profit.

Automatic (rules)
What it is: software applies metadata to content based on rules defined by specialists.
Strengths: more efficient than the staff options.
Weaknesses: needs operational staffing.
Recommended environment: as for specialist staff – strong technical and information management skills in the organisation, plus an understanding from management of the ongoing need for operational staffing.

Automatic (training sets)
What it is: software applies metadata to content based on training sets chosen by specialists.
Strengths: more efficient than the staff options.
Weaknesses: hard to control; can be a ‘black box’; needs a mechanism for addressing errors.
Recommended environment: management do not believe the vendors’ promises.

Written by Karen

October 7th, 2009 at 6:51 am