Category: Testing Methodologies


A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute. Software metrics are measures used to quantify the software, the software development resources, and the software development process. A metric is defined as the name of a mathematical function used to measure some attribute of a product or process; the actual numerical value produced by a metric is a measure.

For example, cyclomatic complexity is a metric; when applied to program code, the number yielded by the formula is the cyclomatic complexity measure.

Metrics fall into two broad classes:

* Management metrics, which assist in the management of the software development process.
* Quality metrics, which are predictors or indicators of the product qualities.

Metrics related to software error detection (“testing” in the broad sense) can be grouped into the following categories:

* General metrics, which may be captured and analysed throughout the product life cycle.
* Software Requirements metrics, which may give early warning of quality problems in requirements specifications.
* Software Design metrics, which may be used to assess the status of software designs.
* Code metrics, which reveal properties of the program source code.
* Test metrics, which can be used to control the testing process, to assess its effectiveness, and to set improvement targets.
* Software Installation metrics, which are applicable during the installation process.
* Software Operation and Maintenance metrics, including those used in providing software product support.

Test Metrics

The following metrics are commonly collected during the testing process.

1. Defect age
Defect age is the time from when a defect is introduced to when it is detected (or fixed). Assign the numbers 1 through 6 to the software development activities, from software requirements (1) to software operation and maintenance (6). The average defect age is then computed as shown below.

Average Defect Age = Σ (Activity Detected – Activity Introduced) / Number of Defects
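A minimal Python sketch of this calculation (the function name and the (introduced, detected) data layout are my own illustration, not part of any standard):

# A minimal sketch of the average defect age calculation. The phase
# numbering (1-6) follows the convention described above; everything
# else here is an illustrative assumption.
def average_defect_age(defects):
    """defects: list of (activity_introduced, activity_detected) pairs,
    each a number from 1 (requirements) to 6 (operation and maintenance)."""
    total_age = sum(detected - introduced for introduced, detected in defects)
    return total_age / len(defects)

# Example: a requirements defect found in system test, a requirements
# defect found in operation, and a coding defect found in unit test.
defects = [(1, 5), (1, 6), (3, 4)]
print(average_defect_age(defects))  # (4 + 5 + 1) / 3 = 3.33...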

2. Defect response time
This measure is the time from when a defect is detected to when it is fixed or closed.
3. Defect cost ($d)
The cost of a defect may be computed as:

$d = (cost to analyse the defect) + (cost to fix it) + (cost of failures already incurred due to it)
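As a minimal Python sketch of this sum (the parameter names and figures are hypothetical):

# A minimal sketch of the defect cost sum; names are illustrative.
def defect_cost(analysis_cost, fix_cost, failure_cost):
    """$d: total cost attributable to one defect, all in the same currency."""
    return analysis_cost + fix_cost + failure_cost

# Example: $200 to analyse, $400 to fix, $1,000 of failures already incurred.
print(defect_cost(200, 400, 1000))  # 1600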

4. Defect removal efficiency (DRE)
The DRE is the percentage of defects that have been removed during an activity, computed with the equation below. The DRE can also be computed for each software development activity and plotted on a bar graph to show the relative defect removal efficiencies for each activity. Or, the DRE may be computed for a specific task or technique (e.g., design inspection, code walkthrough, unit test, 6-month operation, etc.). [SQE]
DRE = (Number of Defects Removed / Number of Defects at Start of Process) * 100
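A minimal Python sketch of this formula (the function and parameter names are my own, not standard):

# A minimal sketch of the DRE formula for one activity.
def defect_removal_efficiency(defects_removed, defects_at_start):
    """DRE for one activity, as a percentage."""
    return defects_removed / defects_at_start * 100

# Example: a design inspection removes 40 of the 50 defects present
# at the start of the activity.
print(defect_removal_efficiency(40, 50))  # 80.0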

5. Mean time to failure (MTTF)
This metric gives an estimate of the mean time to the next failure, obtained by accurately recording the failure times t_i (the elapsed time between the ith and the (i-1)st failures) and computing the average of all the failure times. It is the basic parameter required by most software reliability models; high values imply good reliability.

MTTF should be corrected by a weighted scheme similar to that used for computing fault density (see below).
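A minimal Python sketch of the basic, unweighted computation (the weighted correction mentioned above is not shown, and the names are my own illustration):

# A minimal sketch of the basic (unweighted) MTTF computation.
def mean_time_to_failure(interfailure_times):
    """interfailure_times: the elapsed times t_i between the (i-1)st
    and ith failures, e.g. in hours."""
    return sum(interfailure_times) / len(interfailure_times)

# Example: failures occurred 100, 150, and 200 hours apart.
print(mean_time_to_failure([100, 150, 200]))  # 150.0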

6. Fault density (FD)
This measure is computed by dividing the number of faults found by the size of the software (usually in KLOC, thousands of lines of code).
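A minimal Python sketch (the names and figures are illustrative assumptions):

# A minimal sketch of fault density in faults per KLOC.
def fault_density(num_faults, total_lines_of_code):
    """Faults per thousand lines of code (KLOC)."""
    return num_faults / (total_lines_of_code / 1000)

# Example: 30 faults in 15,000 lines of code.
print(fault_density(30, 15000))  # 2.0 faults per KLOC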

Hope these are useful.

~Himanshu~

Here is one security testing checklist that may help you:
1. Are all the Internet-facing servers within the system registered with the corporate web office?
2. Do the test plans for the system include tests to verify that security functionality has been properly implemented?
3. If the system is rated high on the business effect assessment, or if it is Internet facing, has the company security office been consulted to determine whether or not additional security testing is required?
4. Has the security test covered the following?
a. application testing
b. back doors in code
c. denial of service testing
d. directory permissions
e. document grinding (electronic waste research)
f. exploit research
g. firewall and application control list
h. intrusion detection systems
i. manual vulnerability testing and verification
j. network surveying
k. password cracking
l. PBX testing
m. port scanning
n. privacy review
o. redundant automated vulnerability scanning
p. review of IDS and server logs
q. security policy review
r. services probing
s. social engineering
t. system fingerprinting
u. trusted systems testing
v. user accounts
w. wireless leak tests

Regards,

Himanshu

This is a cry frequently heard as deadlines approach 😉. You can hear most managers screaming about this issue.
However, there could be a number of answers:

1. The testers were not able to complete testing due to a new release being loaded.
2. The bug was not in an earlier release (reload that earlier release and see).
3. The bug could not be tested for earlier because some part of the release did not work and inhibited the test’s ability to “see” the bug.
4. The bug was in some part of the system not originally planned for the release, for which a test has only just been written.
5. The bug was found while running some other test.
6. The bug was in a part of the system which was not the focus of testing.
7. The bug would have been found eventually, but the tester hadn’t run the test (which would have found it) yet.
8. And yes, maybe if we’d been more thorough we’d have found that bug earlier.

It’s always good to have a corrective action in place so that the impact of the issue can be minimized and the stakeholders and clients do not lose faith in you and your team.

Here is an article by Aashu Chandra, my manager at Infogain (my previous company), about what to do if a bug has leaked into production.

Enjoy and let me know your thoughts on this.

Regards,

Himanshu

Every now and then I hear people saying that we don’t have enough time for testing, or that our estimates have gone wrong due to resource issues. We can address these problems by doing risk analysis: we need to identify the areas where testing should be focused.
Since it’s rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects.

This requires judgement skills, common sense, and experience. (If warranted, formal methods are also available; a rough scoring sketch follows the list below.) Considerations can include:
– Which functionality is most important to the project’s intended purpose?
– Which functionality is most visible to the user?
– Which functionality has the largest safety impact?
– Which functionality has the largest financial impact on users?
– Which aspects of the application are most important to the customer?
– Which aspects of the application can be tested early in the development cycle?
– Which parts of the code are most complex, and thus most subject to errors?
– Which parts of the application were developed in rush or panic mode?
– Which aspects of similar/related previous projects caused problems?
– Which aspects of similar/related previous projects had large maintenance expenses?
– Which parts of the requirements and design are unclear or poorly thought out?
– What do the developers think are the highest-risk aspects of the application?
– What kinds of problems would cause the worst publicity?
– What kinds of problems would cause the most customer service complaints?
– What kinds of tests could easily cover multiple functionalities?
– Which tests will have the best high-risk-coverage to time-required ratio?
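One informal way to turn these questions into priorities is a simple likelihood-times-impact score per area. This is a minimal, hypothetical Python sketch (the areas and ratings are invented for illustration, not a formal method):

# A hypothetical risk-scoring sketch: rate each area's failure
# likelihood and business impact on a 1-5 scale, then focus testing
# on the highest products first.
areas = {
    "checkout": {"likelihood": 4, "impact": 5},
    "search": {"likelihood": 3, "impact": 3},
    "profile_page": {"likelihood": 2, "impact": 2},
}

ranked = sorted(areas.items(),
                key=lambda item: item[1]["likelihood"] * item[1]["impact"],
                reverse=True)

for name, scores in ranked:
    print(name, scores["likelihood"] * scores["impact"])
# checkout 20, search 9, profile_page 4 -> test checkout first.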

I hope this helps everyone.

Regards,

Himanshu

Most of us misunderstand or get confused by the concepts of SOA-based testing. This article of mine talks about web services and also describes the challenges and benefits of SOA testing.

Please post your inputs and comments.

Regards,

Himanshu

Care should be taken when communicating fault information to developers and managers. In ancient Greece, messengers who brought bad news were executed, so some things have improved! 🙂 However, we must still tread carefully.

Dashing up to a developer and saying “You fool, you did this wrong” is not likely to encourage him or her to investigate the problem. More likely, the developer will go on the defensive, perhaps arguing that it is not a fault and that it is the tester who does not understand it.

A more successful approach may be to approach the developer saying “I don’t understand this, would you mind explaining it to me please?” In demonstrating it to the tester the developer may then spot the fault and offer to fix it there and then.

Cem Kaner (co-author of “Testing Computer Software”) says that the best tester is not the one who finds the most faults but the one who manages to have the most faults fixed. This requires a good relationship with developers.

So, the next time you are raising a defect, please follow the approach given above and you’ll see the difference it makes.

The cost of faults escalates as we progress from one stage of the development life cycle to the next stage. A requirement fault found during a review of the requirement specification will cost very little to correct since the only thing that needs changing is the requirement specification document.

If a requirement fault is not found until system testing then the cost of fixing it is much higher as the requirement specification will need to be changed together with the functional and design specifications and the source code. After these changes some component and integration testing will need to be repeated and finally some of the system testing. If the requirement fault is not found until the system has been put into real use then the cost is even higher since after being fixed and re-tested the new version of the system will have to be shipped to all the end users affected by it.

Furthermore, faults that are found in the field (i.e. by end-users during real use of the system) will cost the end-users time and effort. It may be that the fault makes the users’ work more difficult or perhaps impossible to do. The fault could cause a failure that corrupts the users’ data and this in turn takes time and effort to repair.

The longer a specification fault remains undetected, the more likely it is to cause other faults, because it may encourage false assumptions. In this way faults can multiply, so the total cost of one particular fault can be considerably more than the direct cost of fixing it.

The cost of testing is generally lower than the cost associated with major faults (such as a poor quality product and/or fixing faults), although few organizations have figures to confirm this.