::: Welcome to Prashant Dubey's Blog :::

Friday, June 24, 2005

Usability Guidelines

Guideline: Ensure that the website format meets
user expectations, especially related to navigation,
content, and organization.


Comments: It is important for designers to develop
an understanding of their users’ expectations
through task analyses and other research. Users can have expectations based
on their prior knowledge and past experience. One study found that users
acted on their own expectations even when there were indications on the
screen to counter those expectations.
The use of familiar formatting and navigation schemes makes it easier for users
to learn and remember the layout of a site. It’s best to assume that a certain
percentage of users will not use a website frequently enough to learn to use it
efficiently. Therefore, using familiar conventions works best.


Sources: Carroll, 1990; Detweiler and Omanson, 1996; Lynch and Horton, 2002;
Spool, et al., 1997; Wilson, 2000.




Guideline: Use all available resources to better
understand users’ requirements.


Comments: The greater the number of exchanges of information with potential users, the better the developers’ understanding of the users’ requirements, and the higher the probability of a successful website. Sources of this information could include customer support lines, customer surveys and interviews, bulletin boards, sales people, user groups, trade show experiences, focus groups, etc. Successful projects draw on at least four (and on average five) different sources of information. Do not rely too heavily on user intermediaries.


Sources: Adkisson, 2002; Brinck, Gergle and Wood, 2002; Buller, et al., 2001; Coble, Karat and Kahn, 1997; Keil and Carmel, 1995; Norman, 1993; Osborn and Elliott, 2002; Ramey, 2000; Vora, 1998; Zimmerman, et al., 2002.




Guideline: Have several developers independently
propose designs and use the best elements from
each design.


Comments: Do not have individuals make design decisions by themselves or
rely on the ideas of a single designer. Most designers tend to adopt a
strategy that focuses on initial, satisfactory, but less than optimal, solutions.
Group discussions of design issues (brainstorming) do not lead to the best solutions. The best approach is parallel design, where designers independently evaluate the design issues and propose solutions. Attempt to “saturate the design space” before selecting the ideal solution. The more varied and independent the ideas that are considered, the better the final product will be.


Sources: Ball, Evans and Dennis, 1994; Buller, et al., 2001; Macbeth, Moroney
and Biers, 2000; McGrew, 2001; Ovaska and Raiha, 1995; Zimmerman, et al., 2002.




Guideline: Use an appropriate page layout to
eliminate the need for users to scroll horizontally.


Comments: Horizontal scrolling is a slow and tedious way to view an entire
screen. Common page layouts, including fluid and left-justified, may require
some users to scroll horizontally if their monitor resolution or size is smaller
than that used by the designers.


Sources: Bernard and Larsen, 2001; Lynch and Horton, 2002; Nielsen and
Tahir, 2002; Spyridakis, 2000; Williams, 2000.




Guideline: Use longer, scrolling pages when users
are reading for comprehension.


Comments: Make the trade-off between paging and scrolling by taking into consideration that retrieving new linked pages introduces a delay that can interrupt users’ thought processes. Scrolling allows readers to advance in the text without losing the context of the message, as may occur when they are required to follow links. However, with pages that have fast loading times, there is no reliable difference between scrolling and paging when people are reading for comprehension. For example, one study showed that paging participants construct better mental representations of the text as a whole, and are better at remembering the main ideas and later locating relevant information on a page. In one study, paging was preferred by inexperienced users.


Sources: Byrne, John, et al., 1999; Campbell and Maglio, 1999; Piolat, Roussey
and Thunin, 1998; Schwarz, Beldie and Pastoor, 1983; Spool, et al., 1997;
Spyridakis, 2000.






What is Information Architecture?:
One definition of Information Architecture is "the organization, labelling and structuring of data for content-based applications such as websites."

This emerging field has become more prominent in recent years as websites have grown increasingly complex and users demand friendlier navigation systems.

Information Architects organize content, such as text, labels, graphics, and shopping carts, so that users can understand the site’s content and do things faster on the site.

Have you ever asked yourself: “How do I find that page again?”

Well, that site could probably have used a spoonful of Information Architecture medicine.

The role of the ‘Information Architect’ is similar to that of a traditional architect. For example, before building a house an architect will create a blueprint, work with the builders, plasterers, and electricians, and oversee the construction.

Lack of ‘architectural planning’ in web development is very expensive as large portions of the site may need to be improved (i.e. totally re-written) to correct areas that were overlooked in the haste just to ‘get something out there’.

An Information Architect also gives the Financial Controllers a better grasp of costs and contingency figures; improvised site designs frequently run over budget.

Site Goals

Companies that wish to develop a commercial website have specific business goals in mind. The Information Architect captures these areas (not unlike the business requirements phase) and circulates them to all team leaders.

Large-scale websites often use Market Research and Focus Group testing, the results of which become incorporated into the site development plan.

Pre-Production

Software tools such as Visio, Word, and PowerPoint are used to prepare the site structure, label the content sections, and organize content into hierarchical groups.

To achieve all this, the Information Architects will:

• Interview the client and note their business priorities.

• Organize Focus Group tests.

• Conduct a competitive analysis.

• Benchmark competitor sites.

• Examine functional requirements.


A design document is then prepared which highlights critical risks and success factors. This also involves mapping the site structure, organizing the content on pages, and designing navigation systems.

Project Management

During the development process, the Information Architect establishes key deliverables and milestones — usually in conjunction with the Project Manager — to assist the client and team leaders in keeping the project on track.

At each major stage, the client is sent mock-ups of the work in progress. It’s also essential to brief the client as the site develops to help them understand what they are paying for and what areas are in development.

Any presentation should be pitched at the client's level of understanding. Most clients prefer to see diagrams both on paper and on screen to see how the site will function.

Site Layout

In defining the site layout, the following areas need to be covered:

1. Site Maps — flowchart the navigation and main content sections to illustrate how users navigate, e.g. from the Catalogue to the Shopping Cart.

2. Content Maps — identify the content that appears on each page and how it cross-references other groups.

3. Page Schematics — the Graphic Designer illustrates the page layout and categorizes the links, content, advertising space, and navigation on each page. Schematics also highlight priority and hierarchies.

4. Storyboarding and Prototyping — prepare mock-ups to demonstrate how the site will perform.


When clients fail to grasp the long-term value of planning, the Information Architect will explain the benefits of concentrating on this area before any coding begins—and the potential risks that may occur by avoiding such steps.

Website Evaluation

Before evaluating a website, you need to examine the following:

• Target audience — who will use the site

• Business goals — what are the site’s objectives and critical success factors

• Technical constraints — what technical requirements need to be examined

• Future plans — considerations for future expansion and scalability


The Information Architect is responsible for exploring the project’s goals and objectives — it's the client's responsibility to ask about costs, timeframes and contingency plans.

Information Gathering

Content needs to be gathered quickly. Since it may be stored in different file and media formats, it needs to be made ‘web compatible’ and also formatted for other web channels, such as WAP and DTV.

Information Organization

After designing the site structure and navigation system, you can map content to different sections. You also need to label content for cross-referencing in databases and file sharing. Well-organized content enables users to find things quickly and encourages them to stay on your site.
Use the 3 Click Test — if it takes more than 3 clicks to find something, redesign your navigation paths.
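The 3 Click Test is easy to automate once you have a site map. Here is a minimal sketch in Python, using a made-up site graph (not any real site), that runs a breadth-first search from the home page and flags any page more than three clicks away:

```python
from collections import deque

def click_depths(site, home):
    """Breadth-first search over a site's link graph, returning the
    minimum number of clicks needed to reach each page from home."""
    depths = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for linked in site.get(page, []):
            if linked not in depths:
                depths[linked] = depths[page] + 1
                queue.append(linked)
    return depths

# Hypothetical site map: each page lists the pages it links to.
site = {
    "home": ["products", "about"],
    "products": ["catalogue"],
    "catalogue": ["item-42"],
    "item-42": ["checkout"],
}

depths = click_depths(site, "home")
# Flag pages that fail the 3 Click Test.
too_deep = [page for page, d in depths.items() if d > 3]
print(too_deep)  # → ['checkout'] (4 clicks from home)
```

Any page in `too_deep` is a candidate for a shortcut in the navigation, for example a direct link from the home page or a global navigation bar.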

Communicate Goals

Once you have gathered the content—or at least sufficient content to start—begin refining the content groups.

Every section requires specific content. Each of these groups needs to have the correct content and cross-references to other relevant groups. Refine the groups to get an equal distribution of content across all sections, so that the site is not over-populated in some sections and ‘under construction’ in others.

Divide large content groups into sub-groups. In this way, users can retrieve data swiftly and will not get lost in a sea of links!

Combining Visual Design and Content

Remember, visitors want three things on your site:

• Fast downloads

• High-quality content

• Ease of use


Your content should drive the site. Graphics enhance the content, not replace it. The exception is probably entertainment sites where content and imagery are very closely tied together.

During the design phase, keep returning to the Big Three mentioned above.
Successful websites provide as much information as possible with the fewest clicks. Select the color schemes in accordance with the company’s branding guidelines and business goals.

Future of Information Architecture
Information Architecture will play an increasingly important role in the success of large-scale websites, intranets and e-libraries. As more content gets produced, it needs to be labelled correctly, and structured for rapid access by users with different levels of experience.




Thursday, June 23, 2005

Usability Resources


The ultimate all-under-one-roof resource for usability and standards.

http://usability.gov/

Topics covered: What is usability? Why is usability important? How much does it cost? Planning, data collection, prototype development, usability testing, web site promotion, lessons learned from web designs and redesigns, Usability University, search engine statistics…

And much more..... visit now!

Wednesday, June 22, 2005


This is my certificate for the test in Human Factors conducted by the Federal Aviation Administration.
Prashant Dubey

Monday, June 20, 2005


fusion
Prashant Dubey

UI Design

If the user can't find it, it's not there!

It is frequently said that navigation is 80% of good usability. I've often wondered what that means. What are the parameters that make a site navigable? What specifically do I need to get right to automatically have it be 80% good?

User-centered Web designers answer this question reflexively. Good navigation means good information architecture. Good information architecture means having a hierarchical structure and the right labels. Having the right structure means deriving the hierarchy that reflects users' mental organization of the information. Using the right labels means ignoring the organization's (branded) terms for things and adopting the users' vocabulary for tokens and categories.
These two parameters – structure and labels – are asserted to be independent and complementary. Neither is individually sufficient to reach that 80% usability threshold. You have to get both right.

Saturday, June 18, 2005


FUSION, the new CD-ROM for DESIGNERS. Order NOW!

Friday, June 17, 2005

Unique Product

Friends,

I am announcing something exciting; I have been waiting a whole year for this day.

A brand new CD-ROM packed with resources, tutorials, stock images, stock footage, PDF files created by professional designers, and lots more!!

Sounds mouth-watering? Yes, for all you designers out there, it is a mouth-watering offer!!!

FUSION, the new CD-ROM for DESIGNERS! Don't miss it, order NOW!



All these things in one single CD-ROM!! How's that possible?!

Well, we organized and sorted all the useful items, and compiled them with their usefulness in mind. And wherever required, we clubbed related items together into one.

Another good feature: we used Macromedia FlashPaper extensively to document the entire package. And I hardly need to describe FlashPaper's features! It's so compressed, you don't even need to zip it!!

So, WHAT ARE YOU WAITING FOR!!!

Just send me a request (click here) right now, and I will send you a detailed catalog so you can have a look at the product. Once you are done, let me know, and the packaged CD will be sent to you by courier the very next day!

Payments can be made directly by credit card, or by check mailed to my postal address!

KEEP WATCHING THIS SPACE.....COMING SOOOOON ..... MORE EXCITING DEALS!!!!

Thursday, June 16, 2005

Book

My True Colors ::: An Encounter with COLORS

Experimental Website : Futuristic

LOGOS Designed by ME!

Programs on My Computer

Wednesday, June 15, 2005

Abstract

My Portfolio

Guys,

Know more about me and my work.

Below are some sites designed by me, alone!!

www.astegic.com

www.intransitionpartners.com

www.nkproteins.com

www.chaseinfotech.com

Happy browsing....and do not forget to write, whatever you want!

Take care!

Portfolio

Friends,

Take some time out and relax, just visit, http://prashantdubey.deviantart.com/gallery/



Yes, your guess is right: it's my portfolio website. Browse through it and get INSPIRED!

Happy Browsing!!

Colors

Folks,

A small and well-compiled guide to colors: http://www.webwhirlers.com/colors/index.asp
Yes, it's a good site with everything you want to know about colors in one place. I liked this site and recommend people out there visit it!

True Colors!

My Favourite Usability Reference Site

Guys,

One place where I get everything about usability solutions is none other than Human Factors International (www.humanfactors.com). The site has a lot of resources, and the high-quality advice from its founder, Dr. Eric, is something you don't find everywhere.

Great site with great content!

Do visit once; I am sure you will visit daily!!

Happy Usability!

Tuesday, June 14, 2005

Docs

Docs in the Real World

In two recent consulting projects, we worked with online documentation developers who wanted to understand the problems users encountered and how their documentation helped solve those problems. To find out, we went and observed users in their own work environments. Although the clients and their software differ significantly, we found similar issues.

Both development teams initially had specific things they wanted to know about how people used the documentation, such as whether the structure and organization made sense to users, and how often users went to the wrong place first.

By watching users in their own workplace, we found issues that we would never have seen in a usability lab. These observations revealed many larger issues surrounding the use of the documentation.

We Are Not Alone

For example, neither team expected to see the variety and volume of other resources that "compete" with the official documentation. These included such diverse sources as co-workers, including cubicle-mates and the internal help-desk, users’ own notes, web sites, training materials, online news, and team meetings.

It surprised both teams to learn how much users relied on these competing forms of information. They found that the most successful users had the most resources available and knew when to use each one. No user relied exclusively on the online documentation.

The teams realized that this competition wasn’t always a problem. For example, users at one company benefited by having a printed list of field codes for the mainframe; this information never changed and users often needed it quickly. (It surprised this team to learn how important the mainframe procedures were: the company was phasing out the mainframe, so the team hadn’t focused on supporting it.) In contrast, frequently updated sources such as printed price schedules quickly became obsolete and many users still had old ones.

Therefore, this team abandoned its original assumption that it had to document everything in one online source, and decided to look for the most effective way to convey each type of information. They realized they needed to coordinate with the training department on how to present the information, including instruction on using each of the available resources.

Trust-Breakers

We repeatedly observed users avoiding documentation because they’d learned not to trust it. We saw many reasons for this loss of trust, some of them seemingly trivial.

Every Error Counts

Even small inaccuracies damaged the confidence of some users. This was more likely with users who didn’t know how to work around the problem. For example, users were stopped in their tracks when the online procedures told them to press Enter — but neglected to tell them they had to press it twice.

Many of these users were not proficient with computers, so even this small error was enough to make them abandon the documentation and call the help desk. In contrast, errors in documented policy didn’t affect trust much, because the support reps usually had more proficiency with the policies — at least a general idea of what the policy was supposed to be — and could figure out what to tell the customer.

The Buck Stops Here

Users don’t care where the problem lies: they don’t recognize that online help involves different pieces from different sources. Although some problems were beyond the doc team’s control, such as bookmarks that didn’t work or a bug in Microsoft’s help engine, they still affected users’ perception of the documentation — and their willingness to use it.

Fixing Broken Trust

It’s easy to break trust, and difficult to restore it. Once they’re burned by the docs, users typically won’t look there again. Unfortunately, this behavior can persist even after developers fix a problem. For example, although one team corrected an error that appeared in Version 2.0 of the docs, users still didn’t try to use it in Version 4.0. No one had told them that the error was gone.

When trust takes a beating, it’s hard to know the effect. After a bad experience, users may be willing to use other parts of the doc — and sometimes are not. Our observations showed us that the development team needs a concerted, proactive marketing effort to restore broken trust.
Trust isn’t an all-or-nothing thing. It’s built up slowly by successful experiences, and quickly eroded by disappointments. One team had translated the documentation into English from its original German — but unintentionally left pockets of German. Users who encountered German immediately stopped looking for the answer (getting German usually meant they were in the wrong place). Every time they encountered German, users were less likely to rely on the rest of the documentation.

Trust and Communication

One important element of building trust is the way the documentation team handles feedback. One team had an internal feedback line to let users report errors and request changes. But the team was understaffed, so users rarely got a response, not even an acknowledgment. As a result, users became reluctant to report problems.
The team realized it needed to put a response system in place — even if users just got a quick generic thank-you — before it could expect users to report problems.

Speed Is Relative

Users sometimes avoided documentation if they felt that it took too long to use. Actual speed, the measurable time it takes to get the answer, often wasn’t a problem. Instead, perceived lack of speed, how long users think the process will take, prevented users from using the docs. And what is "acceptable" varies from user to user, even from situation to situation: with an irate customer on the line, seconds can seem like hours.

In one project, we saw an interesting Catch-22: support reps didn’t look up anything in the online documentation unless they already knew where to find it. But they never learned where the information was unless they spent time using the docs. This perception of slow access speed wasn’t caused by poor organization of the documentation, but by the sheer volume of information, which made the online documentation daunting to the uninitiated.

On the other hand, this team was pleased that users generally could find things, and that users looked in the wrong place far less often than they’d feared. So, instead of launching a reorganization, the team proposed some other changes. These included getting the management to give reps off-phone time to do research (difficult during busy season), having managers and help-desk staff encourage reps to look things up first, and devoting more training time to practice sessions with the online documentation.

Sell, Sell, Sell

The documentation can still fail, even if it’s as usable as possible. Just because you’ve built it doesn’t mean they will come. We saw a clear pattern: users generally didn’t know about new features added after they’d been trained.

It amazed both teams when they discovered how many users didn’t know about some basic features. For example, one team learned that no one had trained users to press F1 to get help. (Users got very excited when we showed them this trick!)

The other team found that users never went to the tutorials or used the Back feature — which the team had spent lots of time developing — because the team had never told the users about them.

These findings initially discouraged the teams, until they saw it as a "marketing" problem: determining what messages they wanted to convey to users about the documentation, and then finding the appropriate channels.

After seeking feedback from successful reps, one team learned that the most appropriate and effective marketing message was something like: "The online documentation is huge and overwhelming at first. But if you invest some effort in learning it, it will pay off big, making you more proficient in the job skills you’re rewarded for."

To spread this message, the team decided that managers and the training department provided the best communication channels.

It Helped to Watch Users

Both teams learned that the documentation was only part of the picture of how people do their work. Perfect documents don’t help users succeed unless the team accounts for all these other factors. Because they made these site visits, the teams came up with changes that showed great promise for improving users’ success.

Testing the Three-Click Rule

In a recent client meeting, a high-ranking executive told us that every piece of content should take no more than three clicks to access. We knew exactly what he was talking about: we've heard the Three-Click Rule many times before. This unquestioned rule of web design has been around nearly as long as the web itself.

On the surface, the Three-Click Rule makes sense. If users can't find what they're looking for within three clicks, they're likely to get frustrated and leave the site.

Many of us have felt the frustration of endless searching ourselves. We go to a site, click through various pages of content, and end up ready to quit. In one of our studies, a perplexed user threw up her arms and pronounced, "I should be able to find everything on a site in just three clicks!"

The Three-Click Rule directly addresses this frustration, acknowledging a user's desire for fast gratification and the threat that a competitor's content is only a click away.

Many have written about the Three-Click Rule. For instance, Jeffrey Zeldman, the influential web designer, wrote about the Three-Click Rule in his popular book, Taking Your Talent to the Web. He writes that the Three-Click Rule is "based on the way people use the Web" and "the rule can help you create sites with intuitive, logical hierarchical structures".

In our own research, we've seen evidence that data about clicking helps us recognize problems on a site. For example, in one e-commerce study, we found that the more pages users visited (more clicks), the less they bought. (See the article, Strategies for Categorizing Categories). What we noticed in that study, however, was that users who were clicking the "Back" button multiple times were the ones who were failing. So we weren't sure whether the number of clicks mattered, or if it was something else that caused them to fail.

Applying the Three-Click Rule leads to a number of design suggestions, such as putting global navigation on every page and making a navigation hierarchy shallow and wide. While these suggestions seem a natural extension of the Three-Click Rule, they assume the rule is worth following. After hearing about the rule for many years and having it as a requirement in some client projects, we decided to find out if the rule was true.

Do Users Really Leave After Three Clicks?

If the origins of the Three-Click Rule come from actual user behavior, then we should see a relationship between a user's success at finding the content they're seeking and the number of pages they visit.

To see if we could find this kind of relationship, we looked at data from a recent study of 44 users attempting 620 tasks. We counted the clicks of every task, whether the user succeeded or failed at finding their desired content. We analyzed more than 8,000 clicks!

In trying to complete the tasks, some users visited as many as 25 pages before they ended their task and others only visited two or three pages before stopping. If the Three-Click Rule came from data, we would certainly see it with this wide variation in the number of pages they visited.
As we studied our data, the rule predicted that we would see users dropping off after hitting the third page, leaving before they had a chance to succeed. Tasks that took as many as 25 clicks would be unsuccessful, with the majority of successful tasks falling somewhere close to three clicks in length.

Users Kept Clicking

If there is a scientific basis to the Three-Click Rule, we couldn't find it in our data. Our analysis left us without any correlation between the number of times users clicked and their success in finding the content they sought.

Our analysis showed that users weren't any more likely to quit after three clicks than after 12 clicks. When we compared the successful tasks to the unsuccessful ones, we found no differences in the distributions of task lengths. Hardly anybody gave up after three clicks.
According to the Three-Click Rule, most people give up after three clicks. However, in our study, users often kept going, some for as many as 25 clicks. According to our data, the Three-Click Rule is just a myth.

Had the rule held, the majority of successful clickstreams would have ended around three clicks. However, for both successful and unsuccessful clickstreams, it isn't until 15 clicks that 80% of the tasks are completed. Successful clickstreams have the same distribution as unsuccessful clickstreams -- the number of clicks doesn't predict task success or failure.
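This kind of analysis is simple to reproduce on your own logs. A small sketch in Python, using invented clickstream lengths (illustrative only, not the raw data from the study above), that computes the share of tasks completed within a given number of clicks for successful and unsuccessful tasks:

```python
# Invented clickstream lengths (clicks per task); illustrative only,
# not the data from the study described above.
successful = [2, 3, 5, 8, 12, 14, 15, 15, 18, 25]
unsuccessful = [3, 4, 6, 9, 11, 13, 15, 15, 19, 24]

def share_completed_by(lengths, max_clicks):
    """Fraction of tasks whose clickstream ended within max_clicks."""
    return sum(1 for n in lengths if n <= max_clicks) / len(lengths)

for clicks in (3, 15):
    print(clicks,
          share_completed_by(successful, clicks),
          share_completed_by(unsuccessful, clicks))
```

When the cumulative shares are similar at every cutoff, the two distributions look alike — exactly the pattern that undermines the Three-Click Rule, since click count then carries no signal about success.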

The failure to find task data to support the Three-Click Rule made us rethink the problem. Could task success and failure be the wrong way to look at it? Maybe everyone believes in the rule because it's frustrating to keep clicking beyond the third page? We decided to look at the problem from a different angle.

What about Satisfied Users?

If we looked at the tasks that were dramatically longer than three clicks, do we see a drop in the satisfaction of the users? At the end of each of the 620 tasks, we had asked users to rate how satisfied they were with the site for that task. Again, there was a wide variety of answers -- sometimes users were very satisfied, other times they were completely unsatisfied. Did these ratings correlate with the number of clicks?

After analyzing the satisfaction data, we still found no evidence for the Three-Click Rule. When we looked at the percentage of users who were unsatisfied, our data showed there was little variation (between 46% and 61%) between different lengths of clickstreams. Fewer clicks do not make more satisfied users.

Users weren't any more satisfied with shorter clickstreams than they were with longer clickstreams. The satisfaction of users doesn't depend on the number of clicks.

Implications of User Frustration

In our studies, users constantly complain about how long it takes to find things. This is one way that users vocalize their frustration. They tell us that if they could only reduce the number of frustrating clicks, the site would be better.

However, these complaints aren't actually about the clicks. They are really complaints about failing to find something. When users find what they want, they don't complain about the number of clicks.

We see this phenomenon quite often: users complain about a symptom and not the real problem that caused it. They want to explain why they are failing, and in this particular case, one of their initial thoughts is that they are clicking too much.

A Misdirected but Well-Intentioned Rule

The Three-Click Rule isn't completely bad. People talk about it with users in mind, even executives who have never designed a web site. The rule may help designers focus on the information that users need and may help them create better web sites. These are admirable qualities.

We can't be overly critical of a rule that has the effect of helping designers keep their focus on users and their needs. However, the Three-Click Rule does not focus on the real problem. The number of clicks isn't what is important to users, but whether or not they're successful at finding what they're seeking.

Documentation

Six Slick Tests for Docs and Help

Usability testing isn’t just for software and web sites. Testing documentation can ensure that it includes — and accurately conveys — all the information users expect and need.

Testing gives you accurate information on how well your documentation and Help work. It can even uncover problems that are better solved by changing the interface.

As with most parts of the development, it’s easier and less expensive to find and correct doc problems early in the process, so we try to test as soon as possible.
Choosing a method depends on what you want to learn, how much time you have, and where your greatest risks are.

1. Incredibly Intelligent Help
This technique lets you discover what information users actually need, before you’ve written anything.
We bring in test participants as soon as we have a version of the product that will let users do some of the tasks they’d do in the real world. This may be well before we’ve written the documentation, but this isn’t a problem because a member of the development team simulates the documentation.
The incredibly intelligent part comes in when users get stuck or seem confused. At this point, the person simulating the docs asks, "What is your question right now?" When the user replies, the Help person provides a verbal clue, but gives as small a piece of information as possible. If this isn’t enough, the user can ask another question.
By giving information only a bit at a time, we can find out exactly which piece is most important to the user. Knowing this helps teams provide appropriate documentation — not too much, not too little — or, better yet, to fix the interface itself.

2. Index Tests
You can determine whether users can get to the right topic in your index by having them work with the draft index. You can even test the index before you’ve written the documentation.
To test the index, we show users screen shots from the product and ask them to write down the first three terms they’d look up to find more information. This test lets you look inside users’ heads and learn some of the terms they use — and expect to see. We write down all the terms they use; they could be words we’ve omitted or not referenced. The more of these we include, the better the index will work.
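Tallying the terms from an index test is straightforward. A sketch (with hypothetical participant responses, not data from any actual test) that counts how often each lookup term was suggested, so the most-expected index entries float to the top:

```python
from collections import Counter

# Hypothetical responses: the first three terms each participant said
# they would look up for the screen shot they were shown.
responses = [
    ["print", "printing", "page setup"],
    ["print", "printer setup", "margins"],
    ["printing", "print", "paper size"],
]

# Count every term across all participants.
term_counts = Counter(term for user in responses for term in user)
for term, count in term_counts.most_common(3):
    print(term, count)  # 'print' appears 3 times, 'printing' twice
```

Terms suggested by several participants clearly belong in the index; terms suggested only once are still worth adding as cross-references, since they reveal vocabulary users expect to see.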

3. Summary Tests
By testing how well users comprehend conceptual material, you’ll know how successfully you’re communicating the information.
We ask users to read a section or examine a picture in the docs, then ask them to describe the three main points this element is trying to convey. (You should already know what you expect them to be!) If the users’ answers match yours, you’re probably getting your information across; if not, you may have some work to do.

4. Procedural Tests
This is a good way to test how accurately you’ve written the procedural details of your docs.
We give users the documentation or Help, asking them to read a specific page or topic and then work with the software to complete the procedure this section describes.
We note where they have problems, where they get confused, or where steps are missing. Then we make the necessary changes in the next draft.

5. Lookup Tests
Lookup tests demonstrate if the docs or Help include the necessary information — and if users can get to the right topic and use it. They also let you find the users’ biggest unanswered question.
To do a lookup test, we give users questions to answer or tasks to complete and see if the draft documentation helps them succeed. For example, when testing the documentation for a video product, we told users, "The picture you’re getting isn’t as clear as you’d like. Use the documentation and online help to find out what’s wrong and how to fix it."
After conducting lookup tests, some teams have changed from printed to online help (or vice versa), added index entries, or reorganized the information.

6. Rose in the Thorns Tests
It can be enlightening — and fascinating — to discover what experienced users don’t know about a product. To do this, we’ve recently started experimenting with the Rose in the Thorns test.
We ask experienced users to look through the documentation or Help until they find information about features they didn’t know before.
Then we ask them if this new information is valuable to them. If there are important features users don’t know about, we work with the interface designers to communicate them better.

Tips for Success

Three things help ensure successful tests of docs and help:

• These tests work best with a couple of users at a time. This increases the likelihood of brainstorming and lets us collect lots of information or keywords at once.

• We try to avoid product-specific terms in the task instructions.

• After the tests, we discuss with the team what we’ve seen and decide on next steps — including changes to the docs or more testing.

Technical Writer

The difference between a technical writer and a designer has blurred over time.