Sunday, 1st October 2006
Ulla de Stricker
[This article reflects key points in a presentation to be given by the authors at the Internet Librarian International conference in London, 16-17 October 2006 <http://www.internet-librarian.com/>.]
When a website, an intranet, an information service or a marketing/communications campaign -- you name it -- does not see the traffic it 'should' see (given the size of targeted user groups, the number of association members, the number of individuals in a demographic group and other guideposts), those in charge of that service are understandably concerned. Did we misinterpret what users were telling us? Did we not get the full picture when we asked them? Have users' needs shifted and we missed it? Were we even tracking the evolution of those needs?
Over time, services can, and often do, fall out of alignment with user requirements. It is necessary to be vigilant and persistent in monitoring those requirements -- not just by listening to what users say publicly about their needs when asked, but also by observing what they do in practice. Further, it is important not to be tempted to go with unsolicited input alone (the squeaky wheels), but rather to launch a systematic scrutiny of what users and non-users alike think about the service in question.
User interaction with all things electronic -- websites, intranets, extranets, e-newsletters, web stores, e-marketing -- is a complex affair (unless, of course, that interaction consists of hitting the Delete or Close button). Some of the interaction we observe through traffic and click-path tracing tools; but such tools naturally can't help us understand what went through the mind of the individual who just clicked, much less help us predict what he or she might do in future. Moreover, they do not help us determine how representative the individual is of the total potential user group.
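To make that gap concrete, here is a minimal Python sketch of the kind of tallying a traffic-analysis tool performs (the file name 'access.log' and its combined-format layout are illustrative assumptions, not a reference to any particular product):

    import re
    from collections import Counter, defaultdict

    # Assumed 'combined' web server log layout -- an illustration only.
    LOG_LINE = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})'
    )

    page_hits = Counter()
    paths_by_visitor = defaultdict(list)  # crude proxy: one visitor per IP

    with open('access.log') as log:
        for line in log:
            m = LOG_LINE.match(line)
            if m and m.group('status').startswith('2'):
                page_hits[m.group('path')] += 1
                paths_by_visitor[m.group('ip')].append(m.group('path'))

    # What the tool can show: which pages draw traffic, and in what order.
    for page, hits in page_hits.most_common(10):
        print(f'{hits:6d}  {page}')

However exact such counts are, they record only what happened, never why -- which is exactly why the asking and observing methods discussed below remain essential.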
Those in charge of (re)building electronic content delivery and communication vehicles need a deep and crystal-clear understanding of user preferences. But 'preferences' is an oversimplification: we need to understand what users want to get done when they come to our site or service, and what they would like to have in hand when they leave. Once we understand that, we can work on making the process smooth and enjoyable.
To arrive at such an understanding, we need answers to questions about several facets of the user experience:
Context: What specific goals are users hoping to accomplish when they paste in that URL or click that bookmark? Do they have a concrete task to finish (what is the conversion rate for a currency; what is the latest news on an organisation), or are they conducting broader research (what are the NGOs saying about X issue; what is the medical consensus on Y drug; what is the market outlook for Z products)?
Mood and attitude: Are they under time pressure to solve a problem at work, or are they at home, browsing to see what's new or interesting in an area of personal interest?
Past experience: What are their expectations of the navigation and content options, based on past exposure to similar sites or services?
Subjective impressions: What features in appearance, navigation and content presentation do they find intuitive, easily followed, complicated, ambiguous, confusing, totally baffling or downright annoying? Can they articulate why?
Willingness to share reactions: Which of their many possible reactions ...
Likelihood of and reasons for returning voluntarily: Aside from situations where users are forced to visit a site because there is no alternative, how interested are users in coming back, based on their initial visit?
Building and maintaining compelling websites, information services and communications campaigns requires a thorough understanding of existing -- and potential -- users' context, goals, attitudes, experience, impressions and desires. This tall order cannot be filled by an annual survey or the odd poll! It requires a systematic approach to assessing and monitoring user needs, and then to implementing new tools and technologies geared to meeting those needs.
Below, we offer a set of practical tips on how to obtain credible evidence of user opinion. None are surprising, but that does not lessen the discipline needed in carrying them out regularly.
The practice of assembling input about user opinion goes by any name you care to give it. If 'audit' is too ominous a word in your culture, just choose some other designation (e.g. review, assessment) to indicate that some form of examination is involved. The key is to undertake a systematic investigation that will yield information upon which to act.
Below we address two scenarios: an audit of a communications programme and an audit of a website or intranet design.
In the case of a communications programme, the audit is an effort to understand what users seek from the organisation or service in question. You're looking to find out where users go if they can't find the information they need from you, and which other organisations or services (i.e. the competition) are succeeding in communicating to the target group so that target users seek them out first. Such an understanding will help those in charge of communicating with the target market as they plan their next steps.
In the case of a website or intranet audit, the goal is to understand how the users experience their interaction with the site. What parts are intuitive for them, and where do they scratch their heads? What makes them say, 'Cool!' and what makes them say, 'Huh?' What's missing for them? Is there too much irrelevant clutter?
Ask, or watch?
In addition to the 'please tell me' method of investigation, the at-the-elbow observation technique yields valuable insight. There are two variants: quietly watching users as they go about their own tasks, and challenging them to complete set tasks while we observe.
Asking methods range from the highly involved (for example, an in-depth personal interview) to the minimally involved (for example, a survey). Similarly, observation and challenge methods range from the up-close, at-the-elbow, click level to the remote analysis of aggregated click statistics.
The mix of methods chosen depends on several factors including corporate culture, level of detail needed, size of total user population, amount of time available, the skills of staff members engaged in carrying out the assessment, and many more.
These steps will help you decide what kind of investigation you need to launch and how to carry it out.
Step 1: What type of investigation is needed?
The audit pre-work consists of determining precisely what kind of insight is needed, and of weighing the key pros and cons of each type of method against that need.
Step 2: Clipboard -- getting the raw input
Once the audit process has been planned, the detailed work of 'sitting down with the users' and 'collecting the responses' ensues. The note-taking and documenting is quite time-consuming, and examiners who are emotionally attached to the site or activity in question must be willing to document the less flattering input as clearly as the more flattering input!
Step 3: Rollup -- what are the key themes? What might they mean?
Now comes the editorial phase. What are the key messages and takeaways? What should we do with input that seems to fall outside the key findings? How can the findings be translated into pointers toward concrete directions? Does what we observed in one instance (e.g. with respect to a website) carry over into other areas such as an e-newsletter or the intranet?
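As a small illustration of the rollup, the following Python sketch (all comments and theme tags here are invented examples) tallies manually coded feedback so that recurring themes stand apart from one-off remarks:

    from collections import Counter

    # Hypothetical responses, each tagged with themes during the editorial pass.
    coded_responses = [
        {'comment': "Couldn't find the search box", 'themes': ['navigation']},
        {'comment': 'Too many links on the home page', 'themes': ['clutter', 'navigation']},
        {'comment': 'Loved the new events calendar', 'themes': ['content']},
        {'comment': "What does 'resources' mean here?", 'themes': ['labelling']},
    ]

    theme_counts = Counter(t for r in coded_responses for t in r['themes'])

    # Themes raised by several participants become candidate key findings;
    # one-off themes are set aside for a second look, not discarded.
    key_findings = [t for t, n in theme_counts.items() if n > 1]
    outliers = [t for t, n in theme_counts.items() if n == 1]
    print('Key themes:', key_findings)
    print('Review separately:', outliers)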
Step 4: Translation into specific plans
This is the stage when the difficult analysis takes place. Ask yourself: What does it all mean? How can the key themes be translated into concrete design or process changes? What will it take to implement them?
Step 5: Implementation and measurement
These last two steps may seem daunting, but they are important. First, having acted on user input, it is crucial to demonstrate to the study participants (and all others) that the input caused specific change. In other words: 'Thanks. We heard you. We acted accordingly. It will always be worth your while to participate when we come looking for input'. Second, we must find out whether the changes had the desired effect: 'Did we change the content or design in the right way? Has your issue been resolved? Is the site easier to use now? Are you better able to understand what we are communicating now?'. Naturally, traffic and usage indicators will tell their own story about the degree to which the entire exercise was successful.
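As a minimal illustration of what such indicators can show, a before-and-after comparison might be tabulated along these lines (the metric names and figures are hypothetical):

    # Hypothetical indicators for a redesigned site section.
    before = {'visits': 1200, 'bounce_rate': 0.62, 'pages_per_visit': 2.1}
    after = {'visits': 1450, 'bounce_rate': 0.48, 'pages_per_visit': 3.4}

    for metric in before:
        b, a = before[metric], after[metric]
        print(f'{metric:>16}: {b} -> {a} ({(a - b) / b * 100:+.1f}%)')

A falling bounce rate and rising pages per visit would suggest the changes made the site easier to use; flat or worsening numbers would send the team back to the participants with follow-up questions.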
Words of advice from veterans
As seasoned consultants in the business of helping our clients get the most out of their investments in services and content, we offer in conclusion a handful of tried-and-true tips for anyone about to launch a user input project:
1. Distance the owners from the input gathering. The team members in
charge of building and maintaining a website or operating a service
programme know too much and will have difficulty maintaining
neutrality and objectivity as they receive user comments. They will
tend to unconsciously suppress vital details.
2. Inflict no pain. Users are busy; make the input process smooth -- fun is even better. Easter egg hunt contests, with prizes to be won for (1) the highest success rate and -- this is key -- (2) the most illuminating commentary pointing to navigation issues, work well.
3. Reward participants. First, show appreciation for the
participation through simple gestures -- coffee and cookies,
thank-you notes to the manager, public acknowledgement. Later on,
tell them what the result was (see the final tip below).
4. Prime the input pump by making it safe for participants to be
candid. Clearly signal they are in good company and that they will
not be seen as untrained or incompetent if they admit they are
confused: 'You are not the first to say that. We are intrigued. Can
you elaborate?' Or, 'Your colleagues have indicated they find X
ambiguous. Do you, too, find it confusing?'. Reaffirming that
responses will be kept anonymous may help individuals feel more
comfortable speaking frankly than might otherwise have been the case.
5. Secure future cooperation without narrowing user input to a pool of
regulars. 'May we count on you again as we test the revised
website? Could you suggest colleagues whose input may be useful in future rounds?'
6. At the end of the audit, it is vital not to just fade away. Having
asked for input obliges us to show participants that the input
resulted in change, or will do so. We recommend a summary to
participants indicating not only gratitude for the input, but
specifically a rundown: (a) we are able to do X immediately; (b) we
can achieve Y in such-and-such a time frame; and (c) options for
certain of the more ambitious changes are being investigated. Where
a desired change cannot be implemented just yet, or at all, explain the reason specifically. A message such as 'We recognise that your suggestion is a desirable feature and regret that at this time, resources limit our ability to implement it' not only clarifies that the user input was valid and has been heard, but could be a means of generating some grassroots pressure for more resources! The key is to signal that
you heard the input the participants offered up, and to keep
everyone in the loop as to what is coming up.