Anyone who has ever built a product wants user feedback – and we in open source want it more than anyone else, and place higher demands on it than anyone else. However, this feedback can be hard to give, hard to receive, and hard to act upon.
My product is open source software documentation, and the same is true of it too, but, at least in the case of documentation, I believe there’s a way to make feedback easier and more effective – all thanks to linguistic theory.
The reason is that documentation is a product that relies on natural language. While we’re not all experts on documentation, and we’d be wrong to believe that we are, we are all experts in natural language, and we’d be wrong to believe that we are not. And – as I argue below – this makes all the difference.
- Humans may be rational
- How to collect better documentation feedback with linguistic theory
Humans may be rational
An epic poem about humans, logic, language, and reason.
We humans are logical. Presented with (1) and (2), none of us will fail to conclude (3):
1. All men are mortal.
2. Socrates is a man.
3. Therefore, Socrates is mortal.
However, we are diversely rational. Everyone who hears (1) and (2) below will grant that (3) follows – the inference is valid – but some might object to the premises:
1. If the sky is blue, pigs fly.
2. The sky is blue.
3. Therefore, pigs fly.
But we have language. As Steven Pinker eloquently put it:
“As you are reading these words, you are taking part in one of the wonders of the natural world. For you and I belong to a species with a remarkable ability: we can shape events in each other’s brains with exquisite precision. […] Simply by making noises with our mouths, we can reliably cause precise new combinations of ideas to arise in each other’s minds.” (Pinker, The Language Instinct, 1994)
Indeed, we all have language. As René Descartes bluntly (but perspicaciously) put it:
“It is a very remarkable fact that there are none so depraved and stupid, without even excepting idiots, that they cannot arrange different words together, forming of them a statement by which they make known their thoughts; while, on the other hand, there is no other animal, however perfect and fortunately circumstanced it may be, which can do the same.” (Descartes, Discourse on Method and Meditations on First Philosophy, 1637)
And human language is… alogical? illogical? superlogical?
- Human language might seem alogical, if not illogical. Indeed, on a daily basis we say one thing and people seem to hear another:
  - I say: Jo wrote a poem or Jo wrote a novel. You hear that Jo wrote one or the other – the speaker isn’t sure which – but certainly not both. Meanwhile, you wonder why I’m repeating “Jo wrote”…
  - I say: Jo made no more than 3 phone calls. You hear that she made exactly 3, and that she could have done better.
  - I say: Jo ate at most 3 ice creams. You hear that she ate some – exactly how many is unclear – but, whatever the count, it wasn’t really all that much.
  - I say: Jo is still asleep. You hear that she’s sleeping too much.
- However, nobody’s complaining, because the results turn out to be exactly what we want. For example:
  - (Context: A room with windows and no AC.) I say: It’s so stuffy in here! You go and open the window.
- Thus, there is clearly a serious method to the madness. In short, it has to do with the fact that…
Human language is also deeply rational. To quote the seminal work by Paul Grice, the main reason we usually manage to understand one another is that:
“Our talk exchanges do not normally consist of a succession of disconnected remarks, and would not be rational if they did. They are characteristically, to some degree at least, cooperative efforts; and each participant recognizes in them, to some extent, a common purpose or set of purposes, or at least a mutually accepted direction.” (Grice, Logic and Conversation, 1975)
That is – as he goes on to argue – because of a tacit universal Cooperative Principle that relies on the following four conversational standards:
- Quality (or truth),
- Relation (or connectedness, relevance),
- Quantity (or optimal informativity), and
- Manner (or clarity, non-obscurity, brevity, orderliness).
(We flout these standards all the time, of course, but that’s how you create sarcasm, irony, or humor.)
I believe we can use this to establish a common ground between us and the user, and so to collect better documentation feedback.
How to collect better documentation feedback with linguistic theory
A suggestion for how to improve our site feedback button.
Currently, one of the ways we collect user feedback on the Juju documentation is via a combination of a site feedback button and the Discourse discussion forum.
The site feedback button is designed to work as follows:

- When you click on it, you’re invited to select the portion of the screen that you want to give feedback on.
- Then a pop-up appears with the question “Did you find what you’re looking for?” and a row of 5 smileys (Hate, Dislike, Neutral, Like, Love).
- Finally, when you choose one, the pop-up expands to include an “Any comments” box where you can leave a comment and an “Email (optional)” box.

The Discourse forum works as an advanced feedback mechanism: users can not only comment on documentation but, if their forum privileges are high enough, also update any doc directly.
The intent overall is to have a two-pronged approach with the button intended for quick, casual, one-way feedback and the forum for more serious, more involved, two-way communication.
However, while the Discourse forum serves the purpose of documentation feedback as well as any forum would (or even better, since users with high privileges can also intervene in the docs directly), the site feedback button does not: people rarely click on it and, when they do, the feedback is hard to act on or better suited for the forum.
So, why does the feedback button fail, and how can we make it better?
One point of failure is the Email box. It’s marked as optional, so it’s rarely used. Even when people do use it, and I reply to them (even though nobody else can benefit from the exchange), I never hear back. Overall, it doesn’t work. This is, nonetheless, consistent with the idea that this button is intended for one-way communication, so we can put it aside. (It does not properly serve the two-pronged approach, though, so we should consider adding a note redirecting users who want deeper, two-way communication to Discourse.)
The main point of failure, however, is the five-point smiley scale. For one thing, it leads to responses based on emotion rather than reason. The bigger issue, though, is that the responses are not as understandable and actionable as they could be, which in turn means a lower chance that the docs will get better and the users happier.
I propose that we change the five-point smiley scale to a set of four prefilled options based on the four conversational standards:
- (Quality:) This is not true.
- (Relation:) This is not relevant here.
- (Quantity:) There is too much / too little information here.
- (Manner:) This could be said better.
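To make the proposal concrete, here is a minimal sketch of how a submission from the redesigned button might be modeled. All names here (MAXIM_OPTIONS, DocFeedback, summary) are hypothetical, and the highlighted selections are purely illustrative – this sketches the idea, not an actual implementation.

```python
# Hypothetical sketch: the four prefilled options, keyed by Grice's
# conversational standards, replacing the five-point smiley scale.
from dataclasses import dataclass

MAXIM_OPTIONS = {
    "quality": "This is not true.",
    "relation": "This is not relevant here.",
    "quantity": "There is too much / too little information here.",
    "manner": "This could be said better.",
}

@dataclass
class DocFeedback:
    """One submission from the site feedback button (hypothetical model)."""
    selection: str        # the portion of the page the user highlighted
    maxims: list[str]     # one or more keys of MAXIM_OPTIONS
    comment: str = ""     # optional free-text clarification

    def summary(self) -> str:
        """Render the chosen options (plus any comment) for triage."""
        chosen = "; ".join(MAXIM_OPTIONS[m] for m in self.maxims)
        return f"{chosen} -- {self.comment}" if self.comment else chosen

# A user highlights a command that no longer works and flags one standard:
fb = DocFeedback(
    selection="some deprecated command in a how-to guide",
    maxims=["quality"],
    comment="I tried it on my Ubuntu 20.04 and got an error.",
)
print(fb.summary())

# A hard-to-follow passage can be flagged on two standards at once:
fb2 = DocFeedback(selection="a hard-to-follow paragraph",
                  maxims=["manner", "quantity"])
print(fb2.summary())
```

One design point worth noting: because the options are keyed to the four standards rather than to a sentiment scale, a single submission can combine several of them, exactly as in the mixed example below.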
For example, suppose a how-to guide says that you can accomplish some goal by using a certain command. A user who tries out this command and finds out that it doesn’t work (e.g., perhaps it has been deprecated and the doc has not been updated yet) might use the feedback button to select that command and then choose “This is not true”, perhaps also clarifying “I tried it on my Ubuntu 20.04 and got this error: …”.
Or suppose you have a how-to guide where instructions are constantly interrupted by definitions. A user (especially one familiar with our goal of transforming documentation at Canonical according to Diátaxis) might comment “This is not relevant here”, perhaps adding “Maybe better to move this bit to this other doc: …”.
Or, when a document uses structures that are hard to process and leaves crucial information out, a user might select both “This could be said better” and “There is too little information here”, perhaps also clarifying “I still don’t know who is supposed to do this”.
I cannot guarantee that this change will lead to more feedback. But I am convinced it will lead to feedback that is easier to give, easier to receive, and easier to act upon; to documentation that better fits the needs of the users; and to a deeper and more responsive relationship between authors, as experts in documentation, and users, as co-experts in natural language.