Release tests screaming for automation
I'm currently on my 4th day of (manual) release testing here in I45, and I feel there are a number of critical problems with our current practice of manual release testing at the end of iterations, problems which could be solved by converting these tests to automated tests. Here is a list of the deficiencies (IMHO) of the current release test approach:
- The tests are very time consuming: The tests I've run through so far (Tests 1, 2, and 10) took me around 2 days each. Even though the time isn't dedicated 100% to release testing, the constant switching back and forth to the release test makes it difficult to work efficiently on other tasks.
- The precision of the manual tests is low: The manual process of working through many pages of 'do this and verify that' is error prone and very difficult to reproduce in a consistent manner.
- The test specifications are not very precise: Like nearly all documentation written to be read by another human being, the test specifications are open to interpretation. This in turn means that the tests will be run in slightly different ways each time.
- Lots of redundancy/inconsistency: Much of the same test functionality can be found in different tests. Small variations have been introduced across these similar test steps, probably because of historical copy/pastes. This phenomenon often arises from attempts to 'code' extensive functionality through documentation.
- Tests are rarely run: Because of the time-consuming nature of the current acceptance tests/release tests, they are only run at the end of iterations, and often only a subset of the tests is run.
- Running extensive manual tests can be very boring and drain the motivation of the development team.
All these problems could be solved by writing automated system tests to replace the current manual tests. This would change the release testing in the following ways:
- The automated parts of the release test would provide the test status free of cost at the end of an iteration. E.g. in I43 I spent the better part of 3 days trying to get a picture of the state of the unit tests (TEST10), whereas in I44 the reference test result could be read directly from the /wiki/spaces/APP/pages/11541773 continuous integration server.
- Test specifications written directly as code are very precise and therefore reproducible (see the sketch after this list).
- It is much easier to reuse code, and thereby avoid redundancy and inconsistency, than with manual test specifications.
- Tests can be run on a continuous integration server, providing fast feedback when changes cause the acceptance tests to break. The current unit test suite is a good example of the value of such quick feedback: after the unit tests were added to the continuous integration server, it could be used to get a real-time reference status of the (unit tested) functionality of the NetarchiveSuite code, which in turn leads to much quicker detection and fixing of broken commits, which again enables faster code changes and bolder design maintenance (refactorings).
- The energy of the development team can be redirected from laboring through exhausting manual tests to improving the quality of the tests.
- Stress tests, performance tests, regression tests, multi-platform tests, etc. become feasible because running the tests is cheap.
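To make the 'test specifications as code' point concrete, here is a minimal sketch of how one step of a manual release test could look as a JUnit test. The FakeArchive class and the scenario are hypothetical stand-ins I made up for illustration, not part of the NetarchiveSuite codebase; a real system test would drive the actual application instead.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

/** Hypothetical in-memory stand-in for a real archive client, so the sketch is self-contained. */
class FakeArchive {
    private final List<String> files = new ArrayList<String>();

    void upload(String fileName) {
        files.add(fileName);
    }

    List<String> listFiles() {
        return files;
    }
}

public class ArchiveUploadReleaseTest {
    /** Replaces a prose step like "upload a file and verify that it appears in the archive". */
    @Test
    public void uploadedFileAppearsInArchive() {
        FakeArchive archive = new FakeArchive();
        archive.upload("testfile.arc");
        assertEquals(1, archive.listFiles().size());
        assertEquals("testfile.arc", archive.listFiles().get(0));
    }
}
```

Unlike the prose version, this step runs exactly the same way every time, and the fixture code (here FakeArchive) can be shared between tests instead of being copy/pasted as text.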
There are of course some shortcomings of automated system tests. Among these are:
- System tests can be quite time consuming to implement because of the rich fixtures/interfaces they have to work with: GUIs, databases, web services, OS-level functionality, etc. (see the first sketch after this list).
- System tests are also notoriously expensive to maintain: Because of the complicated environment a system test runs in, it is prone to stop working as soon as parts of the system change. On the other hand, this can be a good thing, as such breakages may warrant further attention.
- Care should be taken to maintain human-readable documentation of the automated tests (readable test specifications), because the test specifications are the primary means of QA'ing the system tests themselves, and they are essential for using the system tests as acceptance tests/release tests (see the second sketch after this list). See Introduce more powerful test frameworks for a couple of solutions to this challenge.
- Automated tests cannot validate things like documentation and look & feel, so QA of these aspects of the project still needs to be performed manually.
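As a small illustration of the fixture cost mentioned above, here is a sketch of the setup/teardown scaffolding even a trivial system test needs. The class and scenario are hypothetical; a real NetarchiveSuite system test would additionally have to start servers, databases, etc.

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

import java.io.File;

public class SystemTestFixtureSketch {
    private File workingDir;

    @Before
    public void setUp() {
        // Real system tests need real resources (working directories,
        // databases, running servers); this tiny fixture only hints at that cost.
        workingDir = new File(System.getProperty("java.io.tmpdir"),
                "systemtest-" + System.currentTimeMillis());
        assertTrue("Could not create working directory", workingDir.mkdirs());
    }

    @After
    public void tearDown() {
        // Cleanup is part of the maintenance burden: leftover state from
        // one run must not influence the next.
        for (File file : workingDir.listFiles()) {
            file.delete();
        }
        workingDir.delete();
    }

    @Test
    public void workingDirectoryIsAvailable() {
        assertTrue(workingDir.isDirectory());
    }
}
```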
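One simple way (an assumption on my part, not something we currently do) to keep the human-readable specification next to the automated test is to let descriptive test names and Javadoc carry the prose, so the documentation and the executable test cannot drift apart. The test class and steps below are hypothetical examples:

```java
import org.junit.Test;

/**
 * Hypothetical automated version of a release test: the class-level and
 * method-level Javadoc is the human-readable test specification, which a
 * reviewer can QA without reading the test bodies.
 */
public class HarvestReleaseTest {
    /** A newly defined harvest must appear on the harvest status page. */
    @Test
    public void newHarvestAppearsOnStatusPage() {
        // ... drive the GUI/web service and assert on the result ...
    }

    /** Aborting a running harvest must release the allocated resources. */
    @Test
    public void abortedHarvestReleasesItsResources() {
        // ... drive the GUI/web service and assert on the result ...
    }
}
```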
Nevertheless, I think an automated system test would greatly help increase the development efficiency of the NetarchiveSuite project and make it more fun to be a part of the development team. In my experience a highly automated QA foundation is a critical ingredient in creating motivated, high-performance teams. Without investing the work needed to implement an automated system test framework, it will become increasingly difficult to introduce new functionality and QA the application.