Acceptance Criteria Checklist: 7 Easy Steps To Better Quality

Posted by Product Development Team on October 24, 2019 in category Software Quality · 7 min read

Authored by John Roets and Paul Gebel

While it may seem odd to propose “acceptance criteria for acceptance criteria” as a way to specifically address the quality of AC requirements, that’s precisely what we’re doing here.

We believe that creating a checklist of quantitative measures against which each requirement statement (acceptance criteria) can be judged – and then measuring the requirement’s quality over time – goes well beyond the more vanilla approach we so often hear (“The team understands the AC” statement in the Definition of Ready (DoR)) to a more explicit set of requirements for requirements.
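
To preview what those quantitative measures might look like, here is a minimal sketch in Python that scores a single acceptance criterion against the seven checklist items developed later in this post. The field names and the simple pass/fail scoring are our illustration, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class ACQualityCheck:
    """One acceptance criterion scored against the AC-for-AC checklist."""
    statement: str
    understandable_by_all: bool   # 1. plain language any stakeholder can read
    first_person_active: bool     # 2. "I can...", active voice
    concise_prose: bool           # 3. short, prosaic sentences
    makes_assertion: bool         # 4. asserts value rather than listing actions
    output_focused: bool          # 5. names an output, resultant state, or response
    implementation_free: bool     # 6. no solution details baked in
    problem_space: bool           # 7. states the need, not the fix

    def score(self) -> float:
        """Fraction of checklist items this AC passes (0.0 to 1.0)."""
        checks = [v for v in vars(self).values() if isinstance(v, bool)]
        return sum(checks) / len(checks)

# Scoring each story's AC like this, sprint after sprint, builds the kind of
# historical quality data that output-side metrics already enjoy.
ac = ACQualityCheck("I can search for a name", True, True, True, True, True, True, True)
print(f"{ac.statement!r}: {ac.score():.0%}")  # 'I can search for a name': 100%
```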

But before we get too deep in the weeds, let’s take a step back.

Acceptance Criteria in an Agile Environment

On an Agile project, requirements are typically written as acceptance criteria on individual user stories. Developers are held accountable for an implementation that meets these requirements. They are under pressure to get the code right. We monitor them, measuring their code quality with various tools and metrics: defect statistics, code analyzers, test results, code coverage, and so on.

[Graphic: code inputs and outputs]

When defects are found, negativity can arise about the outputs – that is, the code itself and the developers who created it. Invariably, proposed fixes center on development in some way: more unit tests, additional code reviews, better estimation, more experienced developers, and so on.

Rarely is attention directed at the quality of the input assets. In fact, little more than lip service is paid to improving UI mockups, solution designs, test cases, and the requirements themselves. Even worse, the development team is frequently held more accountable for improving these assets than the people who created them.

Policies that monitor the outputs owned by developers are easily instituted – and, as the sketch after this list shows, easily automated:

  • No failing unit tests allowed
  • Code coverage must be at least 80 percent
  • No violations of static code rules
  • All code must be reviewed
  • Test cases must pass
  • Acceptance criteria must be checked off
  • Definition of Done must be met
  • Etc.
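
Many of these output-side gates are cheap to automate. As a minimal sketch (the threshold and report format are our assumptions, not prescriptions), a CI pipeline could run a small Python gate like this after coverage.py writes its Cobertura-style coverage.xml report:

```python
"""A minimal CI quality gate: fail the build when coverage drops too low."""
import sys
import xml.etree.ElementTree as ET

COVERAGE_THRESHOLD = 0.80  # mirrors the "at least 80 percent" policy above

def overall_coverage(report_path: str = "coverage.xml") -> float:
    # coverage.py's Cobertura-style XML stores the overall line rate
    # as a 0.0-1.0 attribute on the root <coverage> element.
    root = ET.parse(report_path).getroot()
    return float(root.get("line-rate", "0"))

if __name__ == "__main__":
    actual = overall_coverage()
    if actual < COVERAGE_THRESHOLD:
        print(f"FAIL: coverage {actual:.0%} is below {COVERAGE_THRESHOLD:.0%}")
        sys.exit(1)  # a nonzero exit fails the build
    print(f"PASS: coverage {actual:.0%}")
```

(In practice, coverage.py's own "coverage report --fail-under=80" flag does this directly.) The point is how little effort these output-side policies take to institute – which makes the absence of input-side equivalents all the more striking.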

But what policies have been put in place to measure and enforce the quality of the inputs? Who’s testing the test cases? Who’s keeping track of the quality of documented UX designs? Where is the historical data regarding the clarity of requirements? When quality issues are found with the inputs, what concrete actions are put in place for correcting them?

About all we might have is a general statement in a Definition of Ready (DoR) – e.g., “All AC are understood” or “Mockups are complete.”

And that’s a problem.

The Fundamental Causes of Software Project Failure

From his time as a grad student studying Software Development, John recalls one study in particular that stuck with him; it examined the fundamental causes of project failure:

  1. Poor requirements and requirements management
  2. Poor communication and collaboration
  3. Lack of domain expertise

There’s nothing in this list explicitly related to poor testing, a lack of unit tests, or missing code reviews. That’s not to say those things aren’t important; they certainly are. You could even argue that they are sub-elements of one of the items above, such as domain expertise.

The point is that there are other, more fundamental problems that, if addressed, would have a significant impact on success. And they need serious attention and ways to measure them.

Questions abound:

  • Who is holding other project participants accountable for the assets they create (i.e., the primary inputs on which developers rely to do their job)?
  • Who’s testing the things these people create?
  • Where are the metrics?
  • In other words, who’s testing the testers’ test cases to make sure they’re good?
  • What mechanism is forcing requirement writers to get better? Or UX designers or architects to provide the high-quality assets that developers need?

Let’s take requirements as an example.

Acceptance Criteria’s Role In Delivering Well-Written Requirements

Unless the development team is empowered to reject stories that lack clarity – rather than trusting that clarity will emerge during implementation – there seems to be no good way to enforce good requirements. And yet well-written requirements are essential.

So let’s start there, by developing a checklist for Acceptance Criteria.

Download a .pdf version of our AC for AC Checklist!

A Checklist for Acceptance Criteria – AC for AC

  1. Write AC That Is Understandable by All Stakeholders

A bias often exists toward writing overly technical acceptance criteria when writing user stories. AC that define technical requirements are inadvertently stating the desired solution instead of articulating the requirement’s intent.

While there is room for technical user stories in the backlog – often called Non-Functional Requirements – the bulk of the product backlog is composed of Functional User Stories. Within these stories, anyone reading the acceptance criteria should be able to understand the intent of the requirement.

  2. Write AC in First Person, Active Voice

Write acceptance criteria in the voice of the person who draws value from the story. Similarly, the writer’s language should reflect words the user would actually say. Widely accepted within Agile circles is the recommendation to write AC in first person, using active voice.

For example, “I can search for a name” is an acceptance criterion that is clear, straightforward, and easily understandable by anyone reading it. It communicates the value the user is gaining, from their unique perspective, and it uses language that the user would actually choose to communicate the value they derive.

Another technique to consider is developing a vernacular within your AC that signals the general direction of the solution you intend – without actually stating the implementation. For example, the AC above, “I can search for a name,” indicates the existence of a search field, that perhaps an active click is required, and that results will be generated.

In yet another instance, you might consider writing your AC by suggesting the intent that’s driving the type of solution you’re looking for. “I can search for names that are registered,” might imply that search is validated. “I see names that I am likely to search for,” might steer the developer toward implicit search or some caching of previous searches.

The art of writing in the voice of the user is including enough detail to influence the solution, while not specifying the solution in the AC itself.

  3. Write Using Concise, Prosaic Sentences

Brevity is the key to writing well, especially when writing good AC. “Omit needless words... Make definite assertions,” as Strunk & White said in The Elements of Style. When AC begin to include technical or business jargon, they become unnecessarily long and complex. They also become fragile from a testing perspective: when you’re designing for user needs, many solutions may be successful, but not all will pass long or technically complex acceptance criteria. The reason for driving toward short, prosaic sentences is that readers can easily hold conversations about them.

Note, too, that brevity and prose do not guarantee precision. “Definite assertions” in this context does not mean “mathematically provable”; judgment often resides in the eye of the user or product owner. AC should be written in words that the user would speak. This isn’t to say that we don’t need disambiguation. We do. It’s just that the AC are not the place for it; that belongs in the specification of tests.
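
To make that concrete, here is a hedged sketch in Python of where that disambiguation can live. The AC stays as short as “I can search for a name,” while the tests pin down the judgment calls. The search_names function and its behavior are hypothetical.

```python
# Hypothetical system under test; the AC is simply "I can search for a name."
NAMES = ["Ada Lovelace", "Alan Turing", "Grace Hopper"]

def search_names(query: str, names: list[str] = NAMES) -> list[str]:
    return [n for n in names if query.lower() in n.lower()]

# The disambiguation lives here, in the test specification, not in the AC:
def test_search_finds_a_full_name():
    assert search_names("Alan Turing") == ["Alan Turing"]

def test_search_is_case_insensitive():    # an ambiguity the AC never mentions
    assert search_names("grace") == ["Grace Hopper"]

def test_search_matches_partial_names():  # another judgment call, made in tests
    assert search_names("Lovel") == ["Ada Lovelace"]
```

Run under pytest, these tests double as the definite assertions the AC itself shouldn’t have to carry.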

  4. Make An Assertion

One of the most common errors in writing AC is to annotate rather than assert. There is a place for annotation, but AC is not it. AC should be an assertion. You can tell that you are wandering from assertions into annotations when you begin to see AC that sound like “The search bar is in the header” or “The button is light blue.” These annotations are fine and sometimes helpful when receiving wireframes or updated UX. However, they say nothing of user value, and they should be avoided.

Another common mistake is to write acceptance criteria in terms of the actions that need to be completed in order to deliver the value. There seems to be a natural tendency for people to think in terms of tasks vs. outcomes – e.g., “What do we need to do?” instead of “What value do we need to produce?” This mistake happens at the story definition level as well. Rather than writing a story title or description in terms of value, many will write in terms of a specific action. Actions are important, but they are not AC.

  5. Write AC with Focus on a System Output, a Resultant State, or a Response

Many good AC will start with phrases such as: I can...; I see...; or I have…. Sometimes making negative statements will produce the proper response as well: I cannot…; I do not…. By beginning AC with these phrases, we constrain ourselves to the intent of the story.

Complex scenarios can exist, many of which are not easily distilled down to short, prosaic system outputs. In situations like these, you can supplement AC with a truth table or annotated wireframe, but these addenda should not be considered replacements for clear, testable outputs.
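
As one illustration, such a truth-table supplement can even stay executable. Here is a minimal sketch assuming pytest and a hypothetical can_checkout business rule:

```python
import pytest

def can_checkout(logged_in: bool, cart_nonempty: bool) -> bool:
    """Hypothetical rule backing the AC 'I can check out my cart.'"""
    return logged_in and cart_nonempty

# The truth table, encoded row by row: (inputs...) -> expected output.
@pytest.mark.parametrize("logged_in, cart_nonempty, expected", [
    (True,  True,  True),   # signed-in user with items may check out
    (True,  False, False),  # an empty cart blocks checkout
    (False, True,  False),  # an anonymous user blocks checkout
    (False, False, False),
])
def test_checkout_truth_table(logged_in, cart_nonempty, expected):
    assert can_checkout(logged_in, cart_nonempty) == expected
```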

  6. Good AC Is Independent of Implementation

As with the test of communicating value in the voice of the one deriving it, the test of implementation-independence allows developers to approach the conversation creatively. When AC become prescriptive, creativity and innovation are stifled.

Technology professionals with some years of experience under their belts will be able to share war stories of 300-page PRDs that became instantly obsolete with the first version update.

AC should be evergreen in the sense that the user will get the value they need, regardless of whether the implementation is changed or updated.

  7. Stay in the Problem Space, not the Solution Space

Writing good acceptance criteria is about staying in the problem space and out of the solution space. Dan Olsen, in The Lean Product Playbook, defines the problem space as “a customer problem, need or benefit that the product should address.” In contrast, the solution space is “a specific implementation to address the customer need or requirement” (emphasis added).

Olsen offers additional insight in a Medium post entitled A Playbook for Achieving Product-Market Fit:

“Problem space vs. solution space helps you to understand what is that true competition and substitutes for the need that you are addressing,” Olsen writes. “The classic example being that in the 1960s, NASA contractors spent millions to develop a so-called space pen so that American astronauts could write in space, whereas the Soviet cosmonauts used a pencil.”

People tend to live in the solution space without ever asking what the real problem is. In the product world, many teams rush into the solution space by coding new features without getting a clear idea of the problem space in which they find themselves. A user story’s AC should stay in the problem space, focusing on why a customer wants or needs your product. The solution space can live in the story, just not in the AC.

Focus on the Fundamentals!

To improve the success rate of software projects, focus on the fundamentals. One of those fundamentals is requirements definition, and on Agile projects, acceptance criteria are a form of requirements definition. Do your team’s acceptance criteria follow the guidelines we’ve laid out here? If not, we encourage you to consider them. Focus as much attention on improving the quality of your acceptance criteria as you do on your other quality improvement initiatives.

Download a .pdf version of our AC for AC Checklist

Interested in learning more about writing acceptance criteria? Contact ITX today! We’re excited to work with you.


John Roets is Software Solutions Architect at ITX Corp. He and the teams he works with follow Agile development practices. John has an MS degree in Software Development and Management from Rochester Institute of Technology and a BS degree in Electrical and Computer Engineering from Clarkson University. His interests lie at the intersection of technology, business strategy, and software processes.

Paul Gebel is Director of Product Management at ITX Corp. He earned his BFA and MBA at Rochester Institute of Technology, where he currently serves as Adjunct Professor. A veteran of the United States Navy, Paul’s experience also includes extensive project and product management experience and consultancy. At ITX, he works closely with high-profile clients, leveraging technology to help solve business problems so they can move, touch, and inspire the world.
