
AC for AC – 7 Steps to Improve the Quality of Acceptance Criteria

Posted by Product Development Team on October 24, 2019 · 7 min read

Authors: John Roets, Solutions Architect; and Paul Gebel, Product Innovation Lead

While it may seem odd to propose “acceptance criteria for acceptance criteria” as a way to address the quality of AC, that’s precisely what we’re doing here. We believe in establishing a list of quantitative measures against which each requirement statement (acceptance criterion) can be judged, and then measuring the requirements’ quality over time. That goes well beyond the vanilla approach we so often hear – a “The team understands the AC” statement in the Definition of Ready (DoR) – to an explicit set of requirements for requirements.

But before we get too deep in the weeds, let’s take a step back.

On an Agile project, requirements are typically written as acceptance criteria on individual user stories. Developers are held accountable for an implementation that meets these requirements. They are under pressure to get it right. We monitor them, measuring their code quality through various tools and metrics: defect statistics, code analyzers, test results, code coverage, etc.

When defects are found, negativity can arise about the developers and their code. Proposed process fixes usually center around development in some way: more unit tests, additional code reviews, better estimation, more experienced developers, etc.

Unfortunately, in our experience, organizations rarely measure the quality of the assets that serve as inputs to development – e.g., UI mockups, solution designs, testcases, and the requirements themselves – in any concrete way. Typically, little more than lip service is paid to improving these inputs. Or worse, the development team is held more accountable for driving improvement of these assets than the asset creators themselves.

Code is the output. Requirements, designs, and testcases are inputs. Who is measuring and enforcing the quality of the inputs? Who’s testing the testcases? Who’s keeping track of the quality of documented UX designs? Where is the historical data regarding the clarity of requirements?

When quality issues are found with the inputs, what concrete actions are put in place for correcting them? For the outputs owned by developers and their assets, organizations will institute policies such as:

  • No failing unit tests allowed
  • Code coverage must be at least 80 percent
  • No violations of static code rules
  • All code must be reviewed
  • Testcases must pass
  • Acceptance criteria must be checked off
  • Definition of Done must be met, and so on

For the inputs, about all we might have is a general statement in the Definition of Ready – e.g., “All AC are understood” or “Mockups are complete.”

And that’s a problem.

John recalls one study in particular from his graduate studies in software development that stuck with him; it examined the fundamental causes of project failure:

  1. Poor requirements and requirements management
  2. Poor communication and collaboration
  3. Lack of domain expertise

Nothing in this list explicitly mentions poor testing, missing unit tests, or skipped code reviews. That’s not to say those things are unimportant; they certainly matter, and you could even argue they are sub-elements of one of the above, like domain expertise. The point is that there are more fundamental problems that, if addressed, would have a significant impact on success – and they need serious attention and ways to measure them.

Questions abound:

  • Who is holding other project participants accountable for the assets they create (i.e., the primary inputs on which developers rely to do their job)?
  • Who’s testing the things these people create?
  • Where are the metrics?
  • In other words, who’s testing the testers’ testcases to make sure they’re good?
  • What mechanism is forcing requirement writers to get better? Or UX designers or architects to provide the high-quality assets that developers need?

Let’s take requirements as an example.

Unless the development team is empowered to reject stories whose requirements lack clarity – rather than accepting them on the assumption that clarity will come during implementation – there seems to be no good way to enforce good requirements. Yet well-written requirements are critically important.

So let’s start there, by developing AC for AC. Acceptance criteria must be…

Comprehensible by All Stakeholders

There is often a bias toward overly technical acceptance criteria when writing stories. AC that embed technical requirements in the checklist state the solution while avoiding the intent. There is room for technical user stories in the backlog, often called Non-Functional Requirements, but the bulk of a product backlog consists of Functional User Stories. Within these stories, anyone reading the acceptance criteria should be able to understand the intent of the functionality.

First Person, Active Voice

Acceptance criteria should be in the voice of the person who is drawing value from the story, and the diction should be something they would actually say. It is widely accepted within Agile circles that user stories should be written in first person, using active voice. For example, “I can search for a name,” is an acceptance criterion that is clear, straightforward, and easily understood by anyone reading it. It communicates the value the user is gaining, in language the user would actually choose.

One technique you might consider is developing a vernacular within your AC that signals the general direction of the solution you intend (without actually stating the implementation). For example, the AC above, “I can search for a name,” indicates that there is a search field, that perhaps an active click is required, and that results will be fetched. In another instance, it might be better to color the intention with the kind of solution you’re looking for. “I can search for names that are registered,” might imply that search is validated. “I see names that I am likely to search for,” might steer the developer toward implicit search or some caching of previous searches. The art of writing in the voice of the user is including enough detail to influence the solution, while not specifying the solution in the AC itself.
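
To make this concrete, here is a minimal sketch of how such AC might translate into automated acceptance tests. The search_names function and the in-memory registry are hypothetical stand-ins for whatever your system actually provides:

    REGISTERED_NAMES = ["Ada Lovelace", "Alan Turing", "Grace Hopper"]

    def search_names(query: str) -> list[str]:
        """Return registered names containing the query (case-insensitive)."""
        return [n for n in REGISTERED_NAMES if query.lower() in n.lower()]

    def test_i_can_search_for_a_name():
        # AC: "I can search for a name" -- a query brings back results.
        assert "Ada Lovelace" in search_names("ada")

    def test_i_can_search_for_names_that_are_registered():
        # AC: "I can search for names that are registered" -- unregistered
        # names return nothing, implying validated search.
        assert search_names("Zaphod") == []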

Short Prosaic Sentences

Brevity is the key to writing well, especially when writing good AC. “Omit needless words... Make definite assertions.” (Strunk & White, The Elements of Style). When AC begin to include technical or business jargon, they become unnecessarily long and complex. They also become fragile from a testing perspective: when you’re designing for user needs, many solutions may be successful, but not all will pass long or technically complex acceptance criteria. The reason for driving toward short, prosaic sentences is so that readers can easily hold conversations about them.

Note, however, that brief prose is not necessarily unambiguous. “Definite assertions” in this context does not mean “mathematically provable”; judgment often resides in the eye of the user or product owner, and AC should be written in words the user would speak. This isn’t to say that we don’t need disambiguation. We do. It’s just that the AC are not the place for it – that belongs in the specification of tests (see Martin and Melnik, Tests and Requirements, Requirements and Tests: A Möbius Strip).
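
Following that idea, here is a hedged sketch of disambiguation living in the test spec rather than the AC. The AC stays one short sentence, “I can search for a name,” while the tests pin down what it leaves open; search_names is again a hypothetical stand-in:

    import pytest

    REGISTERED_NAMES = ["Ada Lovelace", "Alan Turing", "Grace Hopper"]

    def search_names(query: str) -> list[str]:
        return [n for n in REGISTERED_NAMES if query.lower() in n.lower()]

    # The ambiguities -- case sensitivity, partial matching -- are
    # resolved here, in the tests, not in the AC itself.

    @pytest.mark.parametrize("query", ["grace", "GRACE", "Grace"])
    def test_search_is_case_insensitive(query):
        assert "Grace Hopper" in search_names(query)

    def test_partial_matches_are_returned():
        assert "Alan Turing" in search_names("Tur")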

An Assertion

One of the most common errors in writing AC is annotating the technology rather than asserting an outcome. There is a place for annotation, but AC is not it; AC should be assertions. You can tell you are wandering from assertions into annotations when you begin to see AC that sound like, “The search bar is in the header,” or, “The button is light blue.” Such annotations are fine, and sometimes helpful, when receiving wireframes or updated UX. But they say nothing of user value, and as AC they should be avoided.

Another common mistake is to write acceptance criteria in terms of the actions that need to be completed in order to deliver the value. There seems to be a natural tendency for people to think in terms of tasks vs. outcomes – e.g., “What do we need to do?” instead of “What value do we need to produce?” This mistake happens at the story definition level as well. Rather than writing a story title or description in terms of value, many will write in terms of a specific action. Actions are important, but they are not AC.
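
As an illustration, here is a hedged sketch contrasting a task-style statement with an outcome-style assertion; create_user and get_user are invented stand-ins for your system:

    _USERS: dict[str, dict] = {}

    def create_user(name: str) -> None:
        _USERS[name] = {"name": name}

    def get_user(name: str) -> dict | None:
        return _USERS.get(name)

    # Task-style "AC" (avoid): "Add a row to the users table."
    # It names work to be done, not value delivered.

    # Outcome-style AC (prefer): "I have an account after signing up."
    def test_i_have_an_account_after_signing_up():
        create_user("ada")
        assert get_user("ada") is not None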

A System Output, a Resultant State, or a Response

Many good AC will start with phrases such as: I can...; I see...; or I have.... Sometimes negative statements produce the proper response as well: I cannot...; I do not.... By beginning AC with these phrases, we constrain ourselves to the intent of the story. Complex scenarios can exist, many of which are not easily distilled down to short, prosaic system outputs. In situations like these, you can supplement AC with a truth table or annotated wireframe, but these addenda should not be considered replacements for clear, testable outputs.
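
When a complex scenario does call for a truth table, one way to keep it testable is to encode the table directly as a parameterized test. The discount rules below are invented purely for illustration:

    import pytest

    # The AC stays short: "I receive the correct discount." The truth
    # table supplements it, and each row remains a testable output.
    def discount(is_member: bool, order_total: float) -> float:
        if is_member and order_total >= 100:
            return 0.15
        if is_member or order_total >= 100:
            return 0.05
        return 0.0

    @pytest.mark.parametrize(
        "is_member, order_total, expected",
        [
            (True,  150.0, 0.15),
            (True,   50.0, 0.05),
            (False, 150.0, 0.05),
            (False,  50.0, 0.00),
        ],
    )
    def test_discount_truth_table(is_member, order_total, expected):
        assert discount(is_member, order_total) == expected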

Independent of Implementation

As with the test of communicating value in the voice of the person deriving it, the test of implementation independence allows developers to approach the conversation creatively. When AC become prescriptive, creativity and innovation are stifled. Technology professionals with some years of experience under their belts can share war stories of 300-page PRDs that became instantly obsolete with the first version update. AC should be evergreen in the sense that the user will get the value they need, regardless of whether the implementation is changed or updated.
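
A hedged sketch of what evergreen looks like in test form: the acceptance test exercises behavior only, so a rewritten implementation passes unchanged. Both sort functions below are stand-ins for “the internals changed underneath”:

    import pytest

    def sort_names_v1(names: list[str]) -> list[str]:
        return sorted(names)  # original implementation: library sort

    def sort_names_v2(names: list[str]) -> list[str]:
        result: list[str] = []          # rewritten implementation:
        for n in names:                 # naive insertion sort
            i = 0
            while i < len(result) and result[i] < n:
                i += 1
            result.insert(i, n)
        return result

    # AC: "I see names in alphabetical order" -- true for any
    # implementation that delivers the value.
    @pytest.mark.parametrize("impl", [sort_names_v1, sort_names_v2])
    def test_i_see_names_in_alphabetical_order(impl):
        assert impl(["Grace", "Ada", "Alan"]) == ["Ada", "Alan", "Grace"]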

Problem Space, not Solution Space

Writing good acceptance criteria is about staying in the problem space and out of the solution space. Dan Olsen, in The Lean Product Playbook, defines problem space as, “a customer problem, need or benefit that the product should address.” In contrast, the solution space is “a specific implementation to address the customer need or requirement.” (emphasis added).

Olsen offers additional insight in a Medium post entitled A Playbook for Achieving Product-Market Fit:

“Problem space vs. solution space helps you to understand what is that true competition and substitutes for the need that you are addressing,” Dan writes. “The classic example being that in the 1960s, NASA contractors spent millions to develop a so-called space pen so that American astronauts could write in space, whereas the Soviet cosmonauts used a pencil.”

People tend to live in the solution space without ever asking what the real problem is. In the product world, many teams rush into the solution space by coding new features without getting a clear idea of the problem space in which they find themselves. A user story’s AC should stay in the problem space, focusing on why a customer wants or needs your product. The solution space can live in the story, just not in the AC.

Summary – It’s the Fundamentals!

To improve the success rate of software projects, focus on the fundamentals. One of those fundamentals is requirements definition, and on Agile projects, acceptance criteria are a form of requirements definition. Do your acceptance criteria follow the guidelines we’ve outlined here? If not, we encourage you to adopt them. Focus as much attention on improving the quality of your acceptance criteria as you do on other quality improvement initiatives.



