It’s easy to make mistakes when testing software or planning a testing effort. Some mistakes are made so often, by so many different people, that they deserve the label Classic Mistake.
Classic mistakes cluster usefully into five groups:

• The Role of Testing: who does the testing team serve, and how does it do that?
• Planning the Testing Effort: how should the whole team’s work be organized?
• Personnel Issues: who should test?
• The Tester at Work: designing, writing, and maintaining individual tests.
• Technology Rampant: quick technological fixes for hard problems.

The role of testing
• Thinking the testing team is responsible for assuring quality.
• Thinking that the purpose of testing is to find bugs.
• Not finding the important bugs.
• Not reporting usability problems.
• No focus on an estimate of quality (and on the quality of that estimate).
• Reporting bug data without putting it into context.
• Starting testing too late (bug detection, not bug reduction).

Planning the complete testing effort
• A testing effort biased toward functional testing.
• Underemphasizing configuration testing.
• Putting stress and load testing off to the last minute (a minimal sketch of starting earlier follows this list).
• Not testing the documentation.
• Not testing installation procedures.
• An overreliance on beta testing.
• Finishing one testing task before moving on to the next.
• Failing to correctly identify risky areas.
• Sticking stubbornly to the test plan.
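
Stress and load problems rarely surface in single-user functional testing, which is why deferring them to the last minute is so costly. As a hedged sketch of starting early, the standard-library script below hammers one endpoint with concurrent requests; the URL, worker count, and request volume are placeholder assumptions, and a real effort would use a dedicated tool such as JMeter or Locust with a realistic traffic mix.

```python
# A minimal load-test sketch using only the standard library.
# The URL, worker count, and request volume are placeholder assumptions.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/search?q=test"  # hypothetical endpoint

def timed_request(_):
    """Issue one request; report success and elapsed time."""
    start = time.monotonic()
    try:
        with urlopen(URL, timeout=10) as response:
            response.read()
        ok = True
    except OSError:  # URLError/HTTPError are subclasses of OSError
        ok = False
    return ok, time.monotonic() - start

with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(timed_request, range(500)))

failures = sum(1 for ok, _ in results if not ok)
latencies = sorted(elapsed for _, elapsed in results)
print(f"failures: {failures}/{len(results)}")
print(f"median latency: {latencies[len(latencies) // 2]:.3f}s")
print(f"worst latency:  {latencies[-1]:.3f}s")
```

Even a script this crude, run early, can surface lock contention, connection-pool exhaustion, and latency cliffs while there is still time to fix them.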

Personnel issues
• Using testing as a transitional job for new programmers.
• Recruiting testers from the ranks of failed programmers.
• Testers are not domain experts.
• Not seeking candidates from the customer service staff or technical writing staff.
• Insisting that testers be able to program.
• A testing team that lacks diversity.
• A physical separation between developers and testers.
• Believing that programmers can’t test their own code.
• Programmers are neither trained nor motivated to test.
The tester at work
• Paying more attention to running tests than to designing them.
• Unreviewed test designs.
• Being too specific about test inputs and procedures.
• Not noticing and exploring “irrelevant” oddities.
• Checking that the product does what it’s supposed to do, but not that it doesn’t do what it isn’t supposed to do (a sketch of the missing half follows this list).
• Test suites that are understandable only by their owners.
• Testing only through the user-visible interface.
• Poor bug reporting.
• Adding only regression tests when bugs are found.
• Failing to take notes for the next testing effort.
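
The missing negative checks are easiest to see in code. Below is a hedged pytest sketch in which `Account`, `withdraw`, and `InsufficientFunds` are hypothetical stand-ins for the product under test: the first test is the usual happy-path check, and the second asserts that forbidden behavior is rejected and leaves no side effects.

```python
# `Account`, `withdraw`, and `InsufficientFunds` are hypothetical
# stand-ins for the product under test.
import pytest

class InsufficientFunds(Exception):
    pass

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise InsufficientFunds
        self.balance -= amount

def test_withdraw_reduces_balance():
    # What the product is supposed to do.
    account = Account(balance=100)
    account.withdraw(40)
    assert account.balance == 60

def test_overdraw_is_rejected_and_harmless():
    # What the product must NOT do; the suite is incomplete without this.
    account = Account(balance=100)
    with pytest.raises(InsufficientFunds):
        account.withdraw(150)
    assert account.balance == 100  # the failed withdrawal changed nothing
```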

Test automation
• Attempting to automate all tests.
• Expecting to rerun manual tests.
• Using GUI capture/replay tools to reduce test creation cost (see the sketch after this list).
• Expecting regression tests to find a high proportion of new bugs.
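
Capture/replay scripts record widget positions and screen layouts, so every cosmetic GUI change breaks them. A more durable approach is to automate below the GUI against a programmatic layer; in the hedged sketch below, `create_order` and `order_total` are hypothetical stand-ins for application code that the GUI merely presents.

```python
# `create_order` and `order_total` are hypothetical stand-ins for the
# application layer behind the GUI; the test never touches the screen.
import pytest

def create_order(items):
    """Stand-in for the application-layer call the GUI makes."""
    return {"items": list(items)}

def order_total(order):
    return sum(qty * price for qty, price in order["items"])

def test_order_total_below_the_gui():
    order = create_order([(2, 9.99), (1, 5.00)])
    assert order_total(order) == pytest.approx(24.98)
```

A test like this survives a redesigned order screen untouched; only genuine changes to the business rules force it to change.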

Code coverage
• Embracing code coverage with the devotion that only simple numbers can inspire (a saner use is sketched after this list).
• Removing tests from a regression test suite just because they don’t add coverage.
• Using coverage as a performance goal for testers.
• Abandoning coverage entirely.
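
Used well, a coverage report is a map of what the suite never looked at, not a score to maximize. The toy sketch below uses coverage.py (`pip install coverage`) with a placeholder `classify` function; in practice you would run the whole suite under the `coverage run` command and read the missed lines as candidate test ideas.

```python
# A toy sketch with coverage.py; `classify` is a placeholder for real code.
# Module-level setup lines will also show as missed, which is fine here.
import coverage

def classify(n):
    if n < 0:
        return "negative"      # no test below reaches this branch
    return "non-negative"

cov = coverage.Coverage(branch=True)
cov.start()
classify(5)                    # a "suite" that exercises only one branch
cov.stop()
cov.report(show_missing=True)  # the Missing column points at the gap
```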
