Test Until It Breaks
I’ve seen a lot of examples where the testing of a given product is no stronger than a basic sanity check on the use case it is designed for. The problem is that we rarely know how our customers will use our products, and inevitably they will do the most horrible thing we don’t expect. So the prescription is short and sweet: run tests until the product falls over, then record those conditions and share them with the people who need to know.
For example, suppose we have a special module a user can launch, and we get a request to support the use of two modules at a time. The common test tends to be “fire it up with one module and then two modules and check memory usage in each case,” or some derivative of this. Instead, why not fire up 1 through n modules and monitor memory usage at every inflection point? In the “test until it breaks” paradigm, when the tests are complete, we not only know whether two modules are safe to run; we also have an expansive profile of the conditions for success and failure, which helps us communicate the surface area of expected success. Further, if we discover that performance degrades substantially at n = 5, for example, we may want to build a safety check into the logic to prevent the use of more than five modules so that users have a good experience.
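The 1-through-n scaling test above can be sketched as a small harness. Everything here is a hypothetical stand-in: the module launcher, its memory costs, and the memory budget are placeholders for whatever your real product and failure condition look like.

```python
MEMORY_BUDGET_MB = 512  # hypothetical failure condition for this sketch


def launch_modules(n):
    """Stand-in for launching n modules; returns simulated memory use in MB."""
    base_overhead = 64   # hypothetical fixed cost of the host process
    per_module = 100     # hypothetical cost of each module
    return base_overhead + per_module * n


def test_until_it_breaks(max_n=20):
    """Scale n upward, recording memory at every step, until the budget fails.

    Returns a profile of (n, memory_used_mb, within_budget) tuples, so the
    result is not just pass/fail for n=2 but a map of where the product breaks.
    """
    profile = []
    for n in range(1, max_n + 1):
        used = launch_modules(n)
        within_budget = used <= MEMORY_BUDGET_MB
        profile.append((n, used, within_budget))
        if not within_budget:
            break  # we found the breaking point; record it rather than stop at n=2
    return profile


profile = test_until_it_breaks()
breaking_point = profile[-1][0] if not profile[-1][2] else None
```

With these placeholder numbers the harness walks up from one module and stops at the first failure, so the recorded profile doubles as the “surface area of expected success” to share with the team.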
Originally published at http://adamjudelson.com on June 22, 2016.