Test Driven


What to expect of your test managers

A checklist for programme managers

If you have responsibility for IT delivery in some form, whether as an IT Director or a Programme Manager, testing has probably led to 'creative tension' in your teams at some stage. Perhaps defects have been discovered late, or test managers have complained of poorly articulated requirements. Whatever the issue, having realistic expectations of what your test managers can achieve is critical to smooth relations.

Here are some basic guidelines for managing the testing function: what you need to know about testing at a strategic level, and how to ensure you get the best from your test managers.

What a programme manager can expect from a test manager

Strategically, you need to be sure your test manager has the people, documentation and processes in place. Within four weeks of a project starting, you should see:

  • Test strategy - so you know what they're doing, why, and what it buys you. The strategy document should be no longer than ten pages; keep it slim and easy to read by referencing other documents that define your company's quality management system.
  • Test plan - relating to the project/product plan and explaining who is doing what and when. It will change frequently, so it too should be short.

Tactically there are several things you should see:

  • Estimates:
  1. of test development and execution. These will change as much as the requirements change; synchronising the two streams is a key objective.
  2. of release readiness. This will provide an end date and the probability of hitting it. Getting your team to focus on this from the start will expose tensions early.
  • Reports of test progress (test creation, test execution, retest status) - so you get early warning of problems, you can see how fast testing is progressing, and you know how close to release you are (a minimal roll-up sketch follows this list).
  • Demonstration of risk management by matching test coverage against risk. You don't want awkward allegations that the system doesn't perform key functions when it's deployed. Equally, you want to be sure all your streams are managing risk properly; a simple risk-to-test sketch also follows this list. Tip: workshop this until you're happy.
  • Assurance that the only problems found in the field will be trivial. This is a by-product of release readiness and is directly related to the predicted and actual numbers of bugs.
  • Deployment plan that shows a process leading from the developed system through testing to deployment.
  • Deployed system - this is the responsibility of whoever is in charge of deploying the system. If that's the test manager and the scope of work is manageable, that may be acceptable. If not (and in mid- to large-size companies it probably isn't), your test manager is likely to be overloaded with tasks that are largely outside his or her control.
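
To make the estimate and progress-reporting items above concrete, here is a minimal sketch in Python of the kind of roll-up a test manager might produce. The test IDs, statuses and rates are invented for illustration; in practice the data would come from your test management tool, and a serious release-readiness estimate would also model retests and defect find/fix rates.

    from collections import Counter
    from datetime import date, timedelta

    # Hypothetical raw results - in practice these come from your test
    # management tool, not a hard-coded list.
    results = [
        ("TC-001", "passed"), ("TC-002", "failed"), ("TC-003", "passed"),
        ("TC-004", "blocked"), ("TC-005", "passed"), ("TC-006", "failed"),
    ]
    planned_tests = 40   # total tests planned (assumed figure)
    tests_per_day = 5    # current execution rate (assumed figure)

    counts = Counter(status for _, status in results)
    executed = counts["passed"] + counts["failed"]
    remaining = planned_tests - executed

    # Naive end-date projection at the current rate; a real estimate
    # would also model retests and the defect find/fix rates.
    projected_end = date.today() + timedelta(days=round(remaining / tests_per_day))

    print(f"Executed {executed}/{planned_tests}: "
          f"{counts['passed']} passed, {counts['failed']} failed, "
          f"{counts['blocked']} blocked")
    print(f"Remaining {remaining}; at {tests_per_day}/day, "
          f"projected completion {projected_end}")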
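
The risk-coverage matching can be demonstrated in the same spirit with a simple traceability matrix from risks to tests. The risk descriptions and test IDs below are again invented; the point is that uncovered risks should jump out immediately.

    # Hypothetical risk-to-test traceability - risk descriptions and
    # test IDs are invented for illustration.
    risk_coverage = {
        "R1: payment calculated wrongly": ["TC-010", "TC-011", "TC-012"],
        "R2: data lost on failover":      ["TC-020"],
        "R3: response time over 2s":      [],   # a gap that should jump out
    }

    for risk, tests in risk_coverage.items():
        status = f"covered by {len(tests)} test(s)" if tests else "NOT COVERED"
        print(f"{risk:<35} {status}")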

What a programme manager must NOT do to test managers

A programme manager must not expect:

  • estimates to remain set in concrete. The fact that you were beaten up when you failed to meet targets is a rotten reason for beating up your team. Never accept estimates without some view of the probability of those estimates being met - you also need an explanation of the reasoning behind them.
  • real users to test - they don't think like testers, and they have a day job they want to get back to
  • the testers to work with substandard requirements
  • that testing can be curtailed to meet a tighter deadline just because the developers are late
  • that tests need only be run once (and therefore need not be scripted)
  • a new release to be made before testers have finished testing the old one (some bits will never get tested)
  • to manage solely by the numbers in the reports. If your team isn't meeting its targets, back off and find out why.

Neither should a programme manager insist on:

  • running performance or stress tests before the system is 95% functional in the hope of surfacing underlying issues early
  • testers running tests in a haphazard or "exploratory" manner rather than writing scripts ("we'll find more bugs this way" - you'll find fewer)
  • releasing prematurely
  • testers showing that every bug (no matter how trivial) is fixed before adding any more features.

Finally, if the testing function is to run smoothly, programme managers should not refuse to:

  • correct bugs in the requirements ("we don't have time")
  • allow the test manager to add to the risk log (nor should they fail to monitor and act on it)
  • allow testers to run exploratory tests once all their scripted tests have been run
  • identify the scope and coverage of tests to the client ("oh, sure we'll test it")
  • insist that coders unit-test all their code ("they don't have time")
  • use enough testers (we recommend two developers to one tester; reverse this ratio if the system is safety-critical) ("we don't have the budget")
  • allow enough time to write tests for changed requirements
  • allow testers to finish testing one build before starting to test the next
  • test the performance of the system.

Posted by Peter Farrell-Vinay, managing consultant, SQS
