I received an e-mail from Snigdha, Gaurav, and Samy asking me about the types of testing one can perform.

Here is the compiled list:

Black box testing – not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

White box testing – based on knowledge of the internal logic of an application’s code. Tests are based on coverage of code statements, branches, paths, conditions.

Unit testing – a unit is the smallest compilable component. A unit is typically the work of one programmer. This unit is tested in isolation with the help of stubs or drivers. Typically done by the programmer and not by testers.
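
To make the "isolation with stubs" idea concrete, here is a minimal sketch using Python's built-in unittest and unittest.mock. The function, the pricing service, and all names are invented for the example:

```python
import unittest
from unittest import mock

# Hypothetical unit under test: applies a discount rate fetched from a
# pricing service (the service is the dependency we will stub out).
def discounted_price(price, pricing_service):
    rate = pricing_service.discount_rate()  # external call, stubbed in tests
    return round(price * (1 - rate), 2)

class DiscountedPriceTest(unittest.TestCase):
    def test_applies_discount_from_service(self):
        # Stub the dependency so the unit is tested in isolation.
        service = mock.Mock()
        service.discount_rate.return_value = 0.10
        self.assertEqual(discounted_price(100.0, service), 90.0)

if __name__ == "__main__":
    unittest.main(argv=["example"], exit=False)
```

The stub stands in for the real pricing service, so the test exercises only the programmer's own unit.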

Incremental integration testing – continuous testing of an application as new functionality is added; requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

Integration testing – testing of combined parts of an application to determine if they function together correctly. The ‘parts’ can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

Functional testing – black-box testing aimed at validating the functional requirements of an application; this type of testing should be done by testers.

System testing – black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

End-to-end testing – similar to system testing but involves testing the application in an environment that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. Even the transactions performed mimic the end user’s usage of the application.

Sanity testing – typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.

Smoke testing – the general (hardware-related) definition of smoke testing is:
Smoke testing is a safe, harmless procedure of blowing smoke into parts of the sewer and drain lines to detect sources of unwanted leaks and sewer odors.
In relation to software, smoke testing is non-exhaustive testing that ascertains that the most crucial functions of a program work, without bothering with finer details.

Static testing – test activities that are performed without running the software are called static testing. Static testing includes code inspections, walkthroughs, and desk checks.

Dynamic testing – test activities that involve running the software are called dynamic testing.

Regression testing – testing of a previously verified program or application following program modification for extension or correction, to ensure no new defects have been introduced. Automated testing tools can be especially useful for this type of testing.
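
A minimal sketch of the idea in Python: a suite of previously passing cases is re-run after a modification, and any mismatch is a regression. The function, the cases, and all names are invented for the example:

```python
# Hypothetical function that was recently modified (say, to also strip
# surrounding whitespace); the suite re-checks previously verified behaviour.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Regression suite: every case here passed before the change and must
# still pass after it.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("Testing Types", "testing-types"),
]

def run_regression_suite():
    # Returns the failing cases; an empty list means no regressions.
    return [(given, want, slugify(given))
            for given, want in REGRESSION_CASES
            if slugify(given) != want]
```

In practice the suite would be run automatically (e.g. by a test runner) after every change, which is where automated tools earn their keep.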

Acceptance testing – final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

Load testing – a test whose objective is to determine the maximum sustainable load the system can handle. Load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer excessive (application-specific) delay.

Stress testing – subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disk, MIPS, interrupts) needed to process that load. The idea is to stress the system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.
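
The load-ramping idea can be sketched in a few lines of Python. The "system under test" here is a toy model whose latency grows with load (everything is invented for the example); a real load test would drive an actual application the same way:

```python
import time

# Toy system under test: response time degrades as concurrent load grows.
def handle_request(concurrent_load):
    time.sleep(0.001 * concurrent_load)

# Load-test sketch: increase load from the minimum until latency exceeds
# the allowed (application-specific) delay; report the last load that
# stayed within bounds, i.e. the maximum sustainable load.
def find_max_sustainable_load(max_delay_seconds, step=1, limit=50):
    sustainable = 0
    for load in range(step, limit + 1, step):
        start = time.perf_counter()
        handle_request(load)
        elapsed = time.perf_counter() - start
        if elapsed > max_delay_seconds:
            break
        sustainable = load
    return sustainable
</```

A stress test would instead push past that breaking point (and constrain resources) to observe *how* the system fails, not whether it keeps up.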


Performance testing – validates that both the online response times and batch run times meet the defined performance requirements.

Usability testing – testing for ‘user-friendliness’. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Install/uninstall testing – testing of full, partial, or upgrade install/uninstall processes.

Recovery testing – testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Security testing – testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

Compatibility testing – testing how well software performs in a particular hardware/software/ operating system/network/etc. environment.

Exploratory testing – often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Ad-hoc testing – similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

Monkey testing – testing that runs with no specific test in mind. The monkey in this case is the producer of any input data (whether that be file data or input device data).
Keep pressing keys randomly and check whether the software fails or not.
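
A minimal sketch of the idea in Python: random "keyboard" input is thrown at a tiny command parser (the parser and all names are invented for the example), and the only check is that nothing fails unexpectedly:

```python
import random
import string

# Hypothetical unit under test: a tiny command parser.
def parse_command(text):
    verb, _, rest = text.partition(" ")
    if not verb:
        raise ValueError("empty command")
    return {"verb": verb, "args": rest.split()}

# Monkey test: feed random input with no specific test case in mind and
# check only that the software either answers or fails in an anticipated
# way (ValueError here), never with an unexpected crash.
def monkey_test(runs=1000, seed=0):
    rng = random.Random(seed)  # seeded so a failure can be reproduced
    for _ in range(runs):
        text = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 20)))
        try:
            parse_command(text)
        except ValueError:
            pass  # anticipated failure mode, not a crash
    return True  # survived every random input
```

Seeding the random generator is worth the extra line: when the monkey does find a crash, the same input sequence can be replayed.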

User acceptance testing – determining if software is satisfactory to an end-user or customer.

Comparison testing – comparing software weaknesses and strengths to competing products.

Alpha testing – testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by users within the development team.

Beta testing – testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

Mutation testing – a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (‘bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large computational resources.
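
A minimal sketch of the idea in Python, with all functions and names invented for the example: a deliberate ‘bug’ (a flipped comparison) produces a mutant, and a test set is useful only if some case tells the mutant apart from the original:

```python
# Original function and a mutant with a deliberately introduced bug.
def maximum(a, b):
    return a if a >= b else b

def maximum_mutant(a, b):
    return a if a < b else b   # mutation: comparison flipped

# A candidate test set whose usefulness we want to measure.
TEST_DATA = [(1, 2), (3, 3), (5, 4)]

def kills_mutant(original, mutant, cases):
    # The mutant is "killed" if at least one case exposes the change;
    # a test set that kills no mutants is not doing much work.
    return any(original(a, b) != mutant(a, b) for a, b in cases)
```

Note that a weak test set such as `[(3, 3)]` fails to kill this mutant (both functions return 3), which is exactly the kind of gap mutation testing is meant to reveal. Real tools repeat this over many mutants across a whole code base, which is where the large computational cost comes from.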

Cross-browser testing – the application is tested with different browsers for usability and compatibility.

Concurrent testing – multi-user testing geared towards determining the effects of accessing the same application code, module, or database records. Identifies and measures the level of locking, deadlocking, and use of single-threaded code, locking semaphores, etc.

Negative testing – testing the application for fail conditions; negative testing is testing the tool with improper inputs, for example entering special characters for a phone number.
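
Continuing the phone-number example, here is a minimal sketch in Python. The validator and every name are invented for the example; the point is that negative tests assert inputs are *rejected*:

```python
import re

# Hypothetical validator under test: accepts an optional leading "+"
# followed by 7-15 digits, nothing else.
def is_valid_phone(number):
    return re.fullmatch(r"\+?[0-9]{7,15}", number) is not None

# Negative tests: improper inputs that the application must reject.
BAD_INPUTS = ["123-ABC-#$%", "", "++123456789", "12 34 56"]

def negative_tests_pass():
    return all(not is_valid_phone(bad) for bad in BAD_INPUTS)
```

Pairing these with a few positive cases (e.g. a well-formed number that must be accepted) keeps the validator honest in both directions.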

I hope things are very clear now.

Keep Testing 🙂