A while back, I blogged about having separate dev, test/QA, and production environments. Recently, I was discussing with one of my clients the fact that they really needed more than one test/QA environment: they needed specific scenarios in their data to exercise a broad spectrum of code paths, but they also needed a large volume of data to do performance testing. You could argue that these two different raisons d'être would be better served by separate instances.
So this got me thinking: just how many different test or QA instances do you need, or would you want, in an ideal world? Here are some thoughts about different types of testing:
- Unit Tests: In many shops this is done by the developer to make sure that their code is doing what's expected, at least enough to pass the 80/20 rule (i.e. it works on the common or "normal" data sets). A small data set is fine here.
- Functionality QA Tests: These verify that the Unit Tests passed, and expand to include the "edge cases" (the 20% of the scenarios that take up 80% of your time). Small data sets are fine, with the focus being on a broad spectrum of data scenarios.
- Integration Testing: Do all the parts (units) which are often written by different people fit together properly?
- Performance / Volume Tests: Ideally, an exact copy of your production environment, so you really know how long something will take. In practice, some smaller subset is probably enough to give a statistically significant forecast.
- Bug replication, testing, and analysis: Again, ideally an exact copy of your production environment, so you can ultimately test your theory on a copy of the real data that had the original problem. You probably want this on a delayed update, such as a nightly refresh, so you have time to catch the original data condition before the bug gets replicated into your test system. It could also be built from backups on an as-needed basis.
- Staging: Test your build and deployment process. Needs to match production in terms of code and schema versions. Also needs broad enough variety of data to touch all components at least once.
- Bug-fix / Minor Release Testing: May be the same as Staging, but the emphasis is on making sure no new major-version release code is included.
- Multiple Supported Versions Testing: If you support multiple versions of your product for clients, then you'll probably want to test reported bugs to find out which versions are affected. If your latest version is 5.0 and a client running version 3.7 reports a bug, you want to know whether it can be fixed by upgrading the client to the latest version, or whether the latest version has the same problem.
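One way to keep track of a taxonomy like the one above is as a small environment inventory. Here's a minimal Python sketch; the environment names, attributes, and the idea of flagging which environments must mirror production are all my own illustrative assumptions, not a prescription:

```python
# Hypothetical inventory of test/QA environments, loosely following the
# list above. "data" describes the kind of data set each environment
# needs; "matches_prod" flags environments that should mirror production
# code, schema, and (roughly) data volume.
ENVIRONMENTS = {
    "unit":        {"data": "small, common cases",   "matches_prod": False},
    "qa":          {"data": "small, broad scenarios", "matches_prod": False},
    "integration": {"data": "broad scenarios",        "matches_prod": False},
    "perf":        {"data": "production volume",      "matches_prod": True},
    "bug-repro":   {"data": "nightly production copy", "matches_prod": True},
    "staging":     {"data": "broad, touches all components", "matches_prod": True},
}

def environments_matching_production():
    """Names of environments that should track production code/schema."""
    return sorted(name for name, attrs in ENVIRONMENTS.items()
                  if attrs["matches_prod"])
```

Even a toy inventory like this makes the trade-off from the opening paragraph explicit: the broad-scenario environments and the production-volume environments have different data requirements, which is exactly why one shared test instance tends not to be enough.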
Any others that come to mind for you? Add them in the comments section.
posted @ Wednesday, June 02, 2004 11:56 PM