Thoughts on “TDD Guided by Zombies”

In “TDD Guided by Zombies”, James Grenning shares a seasonally appropriate acronym for developing unit tests — just in time for our class to wrap up its unit testing section!  ZOMBIES stands for Zero, One, Many/More, Boundary behaviors, Interface definition, Exercise exceptional behavior, and Simple scenarios/Simple solutions.  The first three letters (Zero, One, Many) describe, in order, the complexity of the behavior under test — for example, a queue with zero, one, and then many entries.  The next three (Boundary, Interface, Exercise exceptional behavior) give the order in which to build test cases: start with boundaries, design the interface under test (or ensure that the code meets the interface requirements), and then test exceptional behavior.  The last letter tells the tester to create simple scenarios with simple solutions — add the minimal possible behavior to production code in order to pass the tests generated by the first six parts of the acronym.
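To make the ordering concrete, here is a minimal sketch in Java (not Grenning’s actual C++ example): a tiny bounded queue exercised in ZOMBIES order.  The BoundedQueue class and its method names are my own invention for illustration.

```java
import java.util.ArrayDeque;

// Hypothetical class for illustration -- not Grenning's CircularBuffer.
class BoundedQueue {
    private final ArrayDeque<Integer> items = new ArrayDeque<>();
    private final int capacity;

    BoundedQueue(int capacity) { this.capacity = capacity; }

    boolean isEmpty() { return items.isEmpty(); }
    int size() { return items.size(); }

    void put(int value) {
        if (items.size() == capacity)
            throw new IllegalStateException("queue is full");
        items.addLast(value);
    }

    int get() {
        if (items.isEmpty())
            throw new IllegalStateException("queue is empty");
        return items.removeFirst();
    }
}

public class ZombiesDemo {
    // Simple assertion helper so the demo runs without a test framework.
    static void check(boolean cond, String msg) {
        if (!cond) throw new AssertionError(msg);
    }

    public static void main(String[] args) {
        // Zero: a freshly created queue is empty.
        BoundedQueue q = new BoundedQueue(3);
        check(q.isEmpty(), "new queue should be empty");

        // One: a single element goes in and comes back out.
        q.put(42);
        check(q.size() == 1, "size after one put");
        check(q.get() == 42, "single element round-trip");

        // Many: FIFO order holds with several elements.
        q.put(1); q.put(2); q.put(3);
        check(q.get() == 1 && q.get() == 2, "FIFO order");

        // Boundary: fill to capacity, then Exercise exceptional behavior on overflow.
        q.put(4); q.put(5); // queue now holds 3, 4, 5 -- exactly at capacity
        boolean threw = false;
        try { q.put(6); } catch (IllegalStateException e) { threw = true; }
        check(threw, "overflow should throw");

        System.out.println("all ZOMBIES checks passed");
    }
}
```

Each check exists only because an earlier, simpler check passed first — which is the Simple scenarios/Simple solutions discipline in miniature.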

Grenning spends the bulk of the article on an example of a circular queue implementation in C++, showing how he progresses through each step.  He gives clear examples of both what to do and not to do.  However, I’m more concerned with the underlying principles and process, so in the interest of brevity I’ll skip a detailed review of his example and simply say that it’s thorough and worth a longer period of study.

I chose this article not just because of the season (or that it coincides neatly with our in-class work), but because it helps to answer the question of how.  We’ve learned many methods for generating unit test cases, but how do we pick the order?  How do we work backwards from tests to code in a way that makes sense and satisfies the specification that we’re handed?  This confirms what I learned in 343 last year: ZOMBIES are the key to TDD and to successful unit testing.

And what can I, as a learner, take away from this?  Firstly, and perhaps most trivially, it can help to wrap important concepts in cute acronyms; doing so makes knowledge easier to share and remember.  Secondly, it clarifies the most important sets of values to test, and the most important times at which to test newly instantiated objects (or other language-appropriate constructs): when they are fresh and empty, when they hold a single value, and when they contain many values.  Thirdly, the article drives home the importance of minimalism in testing and test-driven coding.  If a simple solution is all it takes to meet the specification, use it.  If a complex solution can be simplified, simplify it.

This semester and onward, when I need to develop unit test cases, I’ll be thinking ZOMBIES.

Article link.


Thoughts on “Deeper Testing (3): Testability”

In his article “Deeper Testing (3): Testability”, Michael Bolton defines the testability of a product not only in terms of how it can be manipulated (although that’s one of his categories), but as “a set of relationships between the product, the team, the tester, and the context in which the product is being developed and maintained”.  He breaks this into five somewhat overlapping regions:

  • Epistemic testability — how we, as testers and developers, find the “unknown unknowns” related to the product.
  • Value-related testability — understanding the goals of everyone with a stake in the product and what they intend to get out of it.
  • Intrinsic testability — designing a product to facilitate easier and more meaningful testing.  This includes the ability to manipulate and view the product and its environment.
  • Project-related testability — how the testing team is supported by and supports other teams or team members attached to the product.
  • Subjective testability — the skills of the tester or testers.

Bolton then details how he thinks a tester on an agile team should operate: by bringing up and advocating for the testability of the product, and trying to keep the growth rate of the product sustainable in relation to testing.

As I learn more in the course, I find it increasingly important to understand not just how we test, but why and in what context.  Bolton’s article is especially helpful in understanding the context, and that is why I chose to write about it.  The article highlights aspects of the environment that surrounds a product, and the ways those aspects contribute to or detract from the feasibility of testing.  It also speaks to a tester’s role on an agile team, which is useful to know about in practical terms, since many companies use some form of agile development.

With any resource I find in relation to my coursework, I look at what I can take away and what I can apply in my life (sometimes not limited to software development or computer science).  This piece gives me a better understanding of how to begin testing a product — not by writing test cases, or even determining the optimal testing tools to use, but by looking at the bigger picture.  I need to ask myself what I don’t know about the product, how it is going to be used, how it can be designed (or was designed) to facilitate testing, how I as a tester can and should engage other team members, and what skills I have that can make testing easier or what skills I need to hone that I don’t already possess.

Article link. 


Thoughts on “Rethinking Equivalence Class Partitioning, Part 1”

In his blog post, James Bach picks apart the Wikipedia article on equivalence class testing (ECT) as a way to explain the way in which ECT acts as a heuristic founded on logic rather than a rigid procedure that applies only to computer science.

His main points are:

A) That ECT is not purely based on input partitioning, nor is it purely based in computer science.  It comes from logical partitioning of sets.

B) That ECT is useful for dividing inputs (and outputs) into groups which are likely to expose the same bug in a system.

C) That ECT is about prioritization of bugs rather than exhaustive exposure.

In essence, he states that equivalence class testing is not an algorithm used to generate test cases, but a heuristic technique used to partition conditions such that they reveal the most important bugs first.
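As a rough illustration of that heuristic — one representative per class rather than exhaustive inputs — here is a small Java sketch.  The ageCategory function and its ranges are hypothetical, invented for this example rather than taken from Bach’s post.

```java
public class EctDemo {
    // Hypothetical function under test: classifies an age, rejecting
    // out-of-range values. The ranges are invented for illustration.
    static String ageCategory(int age) {
        if (age < 0 || age > 120)
            throw new IllegalArgumentException("age out of range");
        if (age < 18) return "minor";
        if (age < 65) return "adult";
        return "senior";
    }

    // Simple assertion helper so the demo runs without a test framework.
    static void check(boolean cond, String msg) {
        if (!cond) throw new AssertionError(msg);
    }

    public static void main(String[] args) {
        // One representative value per equivalence class: if the code
        // mishandles the class, any member should expose the same bug,
        // so one probe per class is enough to start.
        check(ageCategory(10).equals("minor"),  "class 0..17");
        check(ageCategory(30).equals("adult"),  "class 18..64");
        check(ageCategory(80).equals("senior"), "class 65..120");

        // Invalid class: negative ages should be rejected.
        boolean threw = false;
        try { ageCategory(-5); } catch (IllegalArgumentException e) { threw = true; }
        check(threw, "class: negative ages");

        System.out.println("one test per class passed");
    }
}
```

The partitioning here is a model, not a fact: if a bug turned up that split one of these classes (say, a special case at age 100), the classes would need to be redrawn — which is exactly Bach’s point about ECT being a heuristic rather than an algorithm.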

I chose this article because it relates to equivalence class testing (which we covered recently in class) and also refutes points made by a resource commonly used for basic knowledge.  I (and, I suspect, many other students) frequently use Wikipedia for summaries of subjects, and getting an expert opinion on what’s wrong with that kind of source helps me understand why it needs, at the very least, to be supplemented by more nuanced sources.  It moves my understanding beyond the high school teacher’s decree of “DO NOT USE WIKIPEDIA IT IS NOT A GOOD SOURCE”.

While I think that picking apart the phrasing and wording of specific passages from a Wikipedia article is needlessly pedantic, I also think it’s important to critically interrogate sources of “common knowledge”.  When it comes to testing, it is necessary to understand the ways in which techniques are grounded in needs and conditions outside of just the code.  A tester should not only know how to test but also why those tests are used and what they are best at uncovering.  Bach’s article helped me think about why we test things, why we use the techniques that we use, and what we hope to get out of them.  When it comes to ECT, we partition conditions into classes in order to build a model of the ways in which the product under test behaves.  The model can and should be modified throughout the testing process; conditions can be grouped differently in order to reveal different bugs.  Bach presents ECT, like other techniques, as a fallible but important tool, and I think that’s good to keep in mind as I learn more about testing.

Link to blog post.



class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
