Yes, Users Read on the Web

Posted by Colleen Jones

Jan 15, 2012

I first posted this essay on my personal blog and have moved it here, with a few small tweaks. This essay gives some insight into why Content Science is pursuing the Content + Credibility Study and why we offer content testing services.

I feel the need to say what should be obvious. Why? Because recently, while catching up on my Twitter feed, the following statement smacked me like a gauntlet:

IT’S A FACT THAT USERS DON’T READ, AND WE HAVE TO DESIGN FOR IT.

I was too late to join the conversation, but the statement has concerned me ever since. In the user experience and design communities, has an assumption locked our thinking about reading so tightly that we refer to it as a “fact”?

WHY THIS “FACT” STIFLES US

Based on my experience with designing for and observing users, I am convinced that users read on the web (among other places). Sure, they scan hurriedly through irrelevant or uninteresting content until they arrive at what they want. (For a nice explanation, see pages 2-4 of Letting Go of the Words.) THEN, users read.

Why do we forget the reading part?

Think about some of the confining implications. If users don’t ever read, then

  • It doesn’t matter what we say or how we say it because users won’t notice.
  • Text has little impact on how users perceive a brand or make a decision.
  • We can communicate only with visuals.

And, taking this assumption to its logical conclusion, if users don’t ever read, is there much point to having any words on the web?

These implications seem ridiculous when I shine the light of reason on them. But when they lurk unexposed behind the shrouded assumption that users don’t read, our design and content choices are at risk of suffocating. I think we need to revisit this “fact,” starting with an excavation of Jakob Nielsen’s influential study, How Users Read on the Web.

LET’S DELVE INTO THE SOURCE: THE NIELSEN STUDY

For background, read Jakob Nielsen’s explanation of the study. Also try to check out the longer version, which includes explanations of related studies. In my searching, I did not find much constructive criticism of this study. I feel it has five limitations:

1. The Topic: Irrelevant

A tourist trip to Nebraska? I know very few people for whom this topic would be relevant. (No offense to Nebraskans out there! I’m sure it’s a beautiful state. It’s just not on most people’s destination list, even if it should be.) In fact, the topic was chosen specifically because people would likely know little about travel in Nebraska. My concern is that if the topic is not pertinent, people won’t be motivated to read about it.

2. The Participant Sample: Unknown Interest

I could let limitation 1 go if the study had recruited a sample of people who expressed interest in a trip to Nebraska. It also would be interesting to test a sample of people with interest and a sample of people with no interest. However, the explanation does not state that the study used such sample criteria.

3. The Content Options: Too Extreme

The options include variations on a bombastic marketing version and a sparse objective version, as well as variations on paragraph form and bulleted list form. Some important nuances are missing from the content options. How about a concise, promotional version that doesn’t lie and uses a bulleted list or a simple table? I also wonder whether wording variations and format variations are too many variables for one study. Furthermore, because the success metric focuses largely on remembering the list of tourist attractions, the content option that performs best (a bulleted list of the attractions) is designed to be memorized.

4. The Context: Unclear But Probably Persuasive

The study explanation does not mention the purpose of the content and the overall website. Is the purpose to attract new tourists, to win back past tourists, to encourage tourism business, or something else? Did the study scenarios reflect the context realistically? Also, most of these possible contexts (which I inferred based on reading the original version of the content) seem persuasive, not educational or informational.

5. The Success Metric: Not Complete

The study uses a reading usability metric that includes comprehension, recall, and time. It also includes a subjective measurement, but that measurement covers mostly usability qualities (how easy it was to find information, and so on). The metric does not address content meaning, influence, likelihood to visit Nebraska, or related measurements. From a user perspective, is the goal to remember the exact names of Nebraska’s tourist attractions? Or is the goal to make a confident decision about whether Nebraska is worth visiting? From a business perspective, is the goal to teach people about Nebraska’s specific tourist attractions? Or is it to convince people that Nebraska deserves to be on their travel itineraries? I believe the study tries to stick strictly to usability. But is it useful to measure success in a persuasive context without touching on meaning, influence, and broader goals?

The description of the metric shows awareness of context, noting that one might add weight to certain elements of the metric for an intranet or a leisure site. However, because the metric elements do not address persuasion, adjusting their weight for a persuasive context would not help.
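As a thought experiment, consider a minimal sketch (in Python) of how such a weighted composite might work. Every element name, score, and weight below is a hypothetical illustration, not a value from the study; the point is that no reweighting of these elements can surface a persuasion outcome that none of them measures.

```python
# Hypothetical sketch of a weighted reading-usability score built only from
# the kinds of elements the metric includes. All names, scores, and weights
# here are illustrative assumptions, not values from the study.

elements = {                          # normalized scores, 0.0 to 1.0
    "comprehension": 0.70,
    "recall": 0.85,
    "task_time": 0.60,                # higher means faster completion
    "subjective_satisfaction": 0.75,
}

# A reweighting for a leisure-site context, in the spirit of the metric
# description's suggestion to weight elements differently by site type.
weights = {
    "comprehension": 0.2,
    "recall": 0.2,
    "task_time": 0.1,
    "subjective_satisfaction": 0.5,
}

composite = sum(weights[name] * score for name, score in elements.items())
print(f"Composite usability score: {composite:.2f}")

# However the weights shift, the composite stays a function of these four
# elements; a persuasion outcome (say, intent to visit) never enters it.
```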

In short, I believe these limitations stem from the following two mistakes:

  1. Attempting to analyze and measure a persuasive situation as an educational one.
  2. Trying to test reading without considering relevancy and context.

Because of the limitations, I don’t feel the study allows us to conclude much more than the following statement: People with unknown interest in visiting Nebraska who are asked to learn about Nebraska’s tourist attractions remember those attractions best when the attractions have little description beyond their names and are displayed in a bulleted list.

We certainly can’t conclude from this study that people don’t read on the web.

NOW, LET’S ELEVATE OUR UNDERSTANDING

Should we cut this study some slack because it happened 12 years ago? Yes and no. I truly appreciate how this study brought corporate attention to writing for the web. I respect the effort to test and measure reading usability at a time when the web was very new. I also am grateful that this and related studies inspired Redish’s useful description of users’ scanning behavior in Letting Go of the Words.

However, the approach exemplified in this study limits our thinking about how users read during an interactive experience. We learn only what users quickly find, read, and memorize on command. We do not learn

  • What content resonates, relates, or influences
  • What reading is like for users who find content about a topic that genuinely interests them
  • The ways content makes (or fails to make) an emotional connection with users

And that’s just for starters. So, I think this study is overdue to have its slack tightened. And we’re well overdue to elevate our understanding of interactive reading, which will breathe new life into our content and design choices.