Introduction to problems in writing for digital media.

Examining Our Most Used Tool: Usability

August 23, 2016

A few years ago I gave a presentation at the Society for Technical Communication Summit, pointing out that although usability studies are important for making sure websites are useful, they say nothing about the quality of the writing. “In fact,” I claimed, “there are no tests for evaluating content quality in digital media.” One well-meaning attendee said, “Of course usability tests study writing quality. There is no way a site can be usable if the writing is not well done.” His statement makes sense, but it is wrong. The problem is this: you cannot know the value of a segment of text until you know what it is supposed to do and for whom. You can only do that if you know the genres involved. There is nothing in usability that permits you to identify the genres. In fact, focusing on usability as a method of evaluation tends to obscure the genres.

If we look at definitions of usability, we find . . .

Usability is the ease of use and learnability of a human-made object. In software engineering, usability is the degree to which a software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use. (Wikipedia)

And . . .

Usability is about:

Effectiveness – can users complete tasks, achieve goals with the product, i.e. do what they want to do?

Efficiency – how much effort do users require to do this? (Often measured in time)

Satisfaction – what do users think about the product’s ease of use?

Usability is a quality attribute that assesses how easy user interfaces are to use. The word “usability” also refers to methods for improving ease-of-use during the design process. (UsabilityNet)

And . . .

Usability is defined by 5 quality components:

Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?

Efficiency: Once users have learned the design, how quickly can they perform tasks?

Memorability: When users return to the design after a period of not using it, how easily can they reestablish proficiency?

Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?

Satisfaction: How pleasant is it to use the design? (Jakob Nielsen)

 

None of the above definitions of usability says anything about writing quality, and none says anything about identifying genres. They are all about ease of use and navigation.

Implied Interest in Writing Quality?

Usability gurus do talk about writing quality, but they frame it within their own definition, so they describe writing quality in terms of structures and navigation. Steve Krug’s “Don’t Make Me Think,” for example, prescribes a writing style that makes navigation easier by engendering less confusion.

Technical communication professor Carol Barnum describes her approach to evaluating texts this way:

Identify unfamiliar words, or words that are used incorrectly; identify sentences/paragraphs that are unnecessarily complex; provide examples of te[x]t that is misunderstood on the first reading; identify where there are too many/few headings or overly complex organizational system; identify any information you couldn’t find easily in the table of contents, index, or other aids. (Barnum, In Usability testing and research, 2002)

This is all good advice, but it mostly involves examining structures, word choice, and simplification of the content – a variation of “don’t make me think.” There is nothing here about evaluating whether the text is doing what it is supposed to do. Making the text less complex could easily be the wrong move if you are writing for an audience of full professors like her.

A History of Usability

Here is why usability does not point out writing problems: historically, it never had anything to do with writing. It was all about how usable physical products were. The earliest usability study I have found was done in England in the late 1800s, after British newspapers reported that a cavalry lieutenant’s sword had bent in half against the armor of an enemy. The war department ran usability studies of British swords until they had identified all of the flaws. Then they designed a new sword that eliminated those flaws.

During WWII, the military did usability studies of all of its weapons after discovering it was sending defective torpedoes to its Pacific submarine fleet.

In 1959, human/computer interaction became a focus of usability study. Out of those early efforts a new research field evolved – HCI (Human-Computer Interaction). Its researchers came from a variety of disciplines, including communications theorists, engineers (computer, electrical, mechanical, and manufacturing), communications professionals (information technologists, designers, writers, and editors), cognition theorists, and more.

HCI researchers described their goal as to “develop or improve the safety, utility, effectiveness, efficiency and usability of systems that include computers.” A splinter group split off from this larger organization – or, more accurately, “took it over.”

In a 2006 paper, HCI spokespeople Draper and Sanger complained:

As an example of how the development of general theories in HCI have been ignored, take the term “usability” which since the 1990s has become almost synonymous with all of the disciplines of HCI’s activities. The IwC Journal suggests five goals for HCI, “develop or improve the safety, utility, effectiveness, efficiency and usability of systems that include computers.” . . . Usability was the least of these five, yet has since been promoted to cover nearly everything. . . . [T]he study of HCI became the study of usability.

In its entire history, usability has never been about writing. Trying to make it about writing now goes against its very nature.

Conclusion

Usability is a critical test in Web design and development, but it is only one test. There are others that have little to do with usability. For example:

  • *Search Engine Optimization (SEO)
  • *Landing Page Optimization (LPO)
  • *Metadata Analysis
  • *Code Analysis
  • *Site Analytics

All of these are done as a part of producing and maintaining an excellent site. But there is one test that is virtually never done in digital design and development – content quality analysis: identifying the genres in the site and making sure they are appropriate for their intended purposes and audiences.

*I will discuss these in depth over the next weeks and months, but for the next few weeks I am focusing on writing skills.

 
