Friday, December 01, 2006

Sanity Tests

Continuing my QA posts (Testers are from Venus, Programmers are from Mars, QA Day), I'd like to talk about Sanity Tests.
Regardless of your development methodology (Waterfall, Agile, Scrum, whatever...), at some point or another you have to hand releases over to the QA team for inspection. Much too often, the QA team receives a version of the software, starts testing it, and after a while (one day, three days, sometimes a week) realizes that some basic functionality is too broken for testing to continue. So the release goes back to the programmers to fix the problem, after which the testers must start everything all over again.
An excellent way to avoid this is to use a layered testing methodology. The idea is to create testing procedures that, instead of investigating a specific feature down to its smallest details, cover the whole system up to some level of detail. Each successive test procedure then "drills down" into more specific parts of the system. The idea behind this methodology is to provide as deep a test as time permits, without ever missing a single feature.
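To make the idea concrete, here is a minimal sketch of how such layers might be organized, in modern terms, using Python with pytest markers. The three-layer split, the marker names and the test bodies are all illustrative, not a prescription:

    # layered_tests.py - a sketch of layered testing with pytest markers.
    # Layer 1 sweeps every feature shallowly; layer 2 drills deeper.
    # (Custom markers should be registered in pytest.ini to avoid warnings.)
    import pytest

    @pytest.mark.layer1
    def test_support_case_opens():
        # Layer 1: a new support case can be created at all.
        ...

    @pytest.mark.layer2
    def test_support_case_field_validation():
        # Layer 2: drill down - required fields, illegal input, limits.
        ...

    @pytest.mark.layer1
    def test_report_opens():
        # Layer 1: the monthly report renders without errors.
        ...

Running "pytest -m layer1" gives a broad, shallow pass over everything; "pytest -m 'layer1 or layer2'" is the next, deeper iteration, and so on for as long as the schedule allows.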
Yet, many organizations and QA teams are not built for this kind of methodology, or simply don't believe in it. It also involves some inherent overlap of work, which can make the whole thing irrelevant in many cases. Imagine a feature which requires a whole day of preparations before it can be tested. With layered testing you would need to set up this test several times - once for each "layer".
So if we can't use layered testing, what can we do to avoid the problem stated above?

Well, here comes the Sanity Test...

The purpose of a Sanity Test is to exercise all the basic functionality of the system, without getting into the specifics of any of it. For example, if you're working on a customer support system, it is completely unacceptable to get thrown out of the application whenever you try to open a new support case. On the other hand, you might want to continue testing the system even if some of the reports don't open properly. So the idea is to test the system end-to-end, making sure that all features work at their most basic level, without focusing further into any of them. From my experience, for a team of approximately 4 developers, the Sanity Test should take around 1-2 working days tops (that's for development cycles of ~1 month - Agilists would get other figures). The most important rule here: a build that does not pass the Sanity Test is not fit to be transferred to QA!
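As a rough illustration, a Sanity Test for the customer support example above might look like the sketch below (Python, with pytest-style test functions). Every name here - app, login, create_case, open_report - is a hypothetical placeholder for whatever your own system exposes:

    # sanity_test.py - a sketch of a Sanity Test suite. One shallow,
    # end-to-end check per major feature; no feature gets deep coverage.
    # The 'app' client and all its methods are hypothetical placeholders.
    from support_system import app  # hypothetical test client

    def test_login():
        assert app.login("qa_user", "qa_pass")

    def test_open_new_support_case():
        # Crashing here fails sanity and sends the build back to
        # development - the "thrown out of the application" scenario
        # from the example above.
        case = app.create_case(customer="ACME", summary="sanity check")
        assert case.id is not None

    def test_open_report():
        # A report with wrong numbers is a regular bug; sanity only
        # demands that opening it does not bring the application down.
        app.open_report("monthly_summary")

Note how shallow each check deliberately is - the depth is left to the QA team's full test procedures.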

I've worked with Sanity Tests at 3 separate organizations; at two of them it was my own initiative. It's not easy to make everybody understand how important it is. Most of the time, the developers are the ones responsible for running the test (after all, although it's acceptable to have bugs, the developers are still responsible for providing a workable release), and they don't like it one bit. Project managers get p--d off when a release is not transferred to QA on time because it didn't pass the test. Some QA teams don't know how to enforce the rule that they should not accept non-sane builds. And, of course, creating the Sanity Test Procedure itself can be quite tricky - you want it to complete fast, yet cover all features well enough to know they are more or less OK.
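One way to take the enforcement question out of everybody's hands is to make the rule mechanical. Below is a hedged sketch of such a gate, assuming a pytest-based sanity suite and git; the file name and the tag name are illustrative:

    # promote_build.py - a sketch of enforcing "no sanity, no QA".
    # Runs the sanity suite; only a passing build gets marked as a
    # QA candidate. File and tag names are illustrative.
    import subprocess
    import sys

    result = subprocess.run(["pytest", "sanity_test.py", "-q"])
    if result.returncode != 0:
        print("Sanity Test FAILED - build is not transferred to QA.")
        sys.exit(1)

    # Tag the revision so QA knows exactly which build was approved.
    subprocess.run(["git", "tag", "-f", "qa-candidate"], check=True)
    print("Sanity Test passed - build promoted to QA.")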

Despite the problems they incur, Sanity Tests have proven themselves very useful in delivering more reliable software on time. The developers become more responsible about their code, testers are more productive, and the whole development process runs more smoothly.
