1. Acceptance Testing Acceptance Testing is defined as testing that verifies the system is ready to be released to end users. Acceptance Testing may also be referred to as User Testing.
2. Alpha Testing Alpha Testing is a phase of testing of an application when development is nearing completion and minor design changes may still be made as a result of such testing.
3. Automated Testing Automated Testing is the management and performance of test activities that include the development and execution of test scripts so as to verify test requirements, using an automated test tool. Automated testing automates test cases that have traditionally been conducted by human testers. IBM Rational Robot and Mercury WinRunner are examples of automated testing packages.
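For illustration only, the following is a minimal sketch of an automated test script written with Python's unittest module; the login() function is a hypothetical stand-in for the application under test, and commercial tools such as IBM Rational Robot or Mercury WinRunner use their own scripting languages rather than Python.

    import unittest

    def login(username, password):
        # Hypothetical stand-in for the application code being verified.
        return username == "admin" and password == "secret"

    class LoginTests(unittest.TestCase):
        def test_valid_credentials_are_accepted(self):
            self.assertTrue(login("admin", "secret"))

        def test_invalid_credentials_are_rejected(self):
            self.assertFalse(login("admin", "wrong"))

    if __name__ == "__main__":
        unittest.main()   # the tool, not a human tester, executes and checks each case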
4. Beta Testing Beta Testing is a phase of testing when development is essentially completed and final bugs and problems need to be found before final release.
5. Benchmark Testing Benchmark Testing is defined as testing that measures specific performance results in terms of response times under a workload that is based on functional requirements. Benchmark results provide insight into how the system performs at the required load and can be compared against future test results. Benchmark Testing is a prerequisite for Stress Testing.
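As a rough sketch, a benchmark can be as simple as timing a fixed workload and recording the average response time as a baseline for comparison with future runs; process_order() below is a hypothetical transaction, not part of any real system.

    import time

    def process_order(order_id):
        time.sleep(0.01)               # hypothetical transaction under test

    def run_benchmark(transactions=100):
        start = time.perf_counter()
        for order_id in range(transactions):
            process_order(order_id)
        return (time.perf_counter() - start) / transactions

    baseline = run_benchmark()
    print(f"Average response time: {baseline:.4f} s")   # saved and compared against future runs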
6. Black Box Testing Black box testing is what most testers spend their time doing. Black box testing ignores the source code and focuses on the program from the outside, which is how the customer will use it. Black box thinking exposes errors that will elude glass box testers.
7. Boundary Testing Boundary Testing is testing the program’s response to extreme input values.
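A minimal sketch, assuming a hypothetical discount() function that accepts quantities from 1 to 100: the most revealing inputs sit exactly on and just outside those limits.

    def discount(quantity):
        if not 1 <= quantity <= 100:
            raise ValueError("quantity out of range")
        return 0.1 if quantity >= 50 else 0.0

    # Values at and just beyond each boundary of the valid range.
    for quantity, should_fail in [(0, True), (1, False), (100, False), (101, True)]:
        try:
            discount(quantity)
            assert not should_fail, f"{quantity} should have been rejected"
        except ValueError:
            assert should_fail, f"{quantity} should have been accepted"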
8. Compatibility Testing Compatibility Testing is testing that one product works well with another product.
9. Configuration Testing Configuration Testing is defined as testing that verifies that the system operates correctly in the supported hardware and software environments. Configuration testing is an ideal candidate for automation when the system must be tested in multiple environments.
10. Conversion Testing Conversion Testing is testing the upgrade from one version of the product to another.
11. Documentation Testing Documentation Testing is testing all associated documentation related to the development project. This may include online help, user manuals, etc.
12. Domain Testing Domain Testing utilizes specialized business knowledge relating to the program that is provided by subject matter experts.
13. Error Recovery Testing Error Recovery Testing involves testing the program's error messages by intentionally making as many errors as possible.
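A small sketch of the idea, using a hypothetical parse_age() function: the test deliberately feeds bad input and checks that each error produces a clear message rather than a crash or silence.

    def parse_age(text):
        try:
            age = int(text)
        except ValueError:
            raise ValueError(f"'{text}' is not a valid age") from None
        if age < 0:
            raise ValueError("age cannot be negative")
        return age

    for bad_input in ["abc", "", "-5"]:          # intentional errors
        try:
            parse_age(bad_input)
            print(f"MISSED: no error message for {bad_input!r}")
        except ValueError as err:
            print(f"OK: {bad_input!r} -> {err}")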
14. Functional Audit (FA) The FA compares the software system being delivered against the currently approved requirements for the system.
15. Functional Testing Functional testing is defined as testing that verifies that the system conforms to the specified functional requirements. Its goal is to ensure that user requirements have been met.
16. Glass Box Testing Glass box testing is part of the coding stage. The programmer uses his knowledge and understanding of the source code to develop test cases. Programmers can see internal boundaries in the code that are completely invisible to the outside tester. Glass box testing may also be referred to as White Box testing.
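A brief sketch of the difference: ship_cost() below is hypothetical, but its 2 kg carrier threshold is the kind of internal boundary a glass box tester can see in the source and target directly.

    def ship_cost(weight_kg):
        if weight_kg <= 2.0:                     # internal boundary visible only in the code
            return 5.00
        return 5.00 + (weight_kg - 2.0) * 1.50

    assert ship_cost(2.0) == 5.00                # exactly on the internal boundary
    assert ship_cost(2.1) > 5.00                 # just past it, exercising the second branch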
17. Integration Testing Integration testing may be considered to have officially begun when the modules begin to be tested together. This type of testing, sometimes called gray box testing, implies limited visibility into the software and its structure. As integration proceeds, gray box testing approaches black box testing, which is more nearly pure functional testing, with no reliance on knowledge of the software's structure or source code.
18. Installation Testing Installation Testing involves testing whether the installation program installs the program correctly.
19. Mainstream Usage Testing Mainstream Usage Testing involves testing the system by using it like customers would use it.
20. Manual Testing Manual testing is defined as testing that is conducted by human testers.
21. Module Testing Module testing is a combination of debugging and integration. It is sometimes called glass box testing (or white box testing), because the tester has good visibility into the structure of the software and frequently has access to the actual source code with which to develop the test strategies. As units are integrated into their respective modules, testing moves from unit testing to module testing.
22. Multi-user Testing Multi-user Testing involves testing the program while more than one user is using it at the same time.
23. Operational Testing Operational Testing involves functional testing of a system independent of specialized business knowledge.
24. Performance Testing Performance Testing is defined as testing that verifies that the system meets specific performance objectives in terms of response times under varying workloads. This may also be referred to as Load Testing. An example of a performance test requirement may be: Utilizing 400 virtual users, 90% of all transactions have an average response time of 10 seconds or less and no response time can exceed 30 seconds. Performance Testing encompasses Stress Testing and Benchmark Testing.
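As an illustration of how such a requirement might be evaluated, the sketch below (with made-up response times in seconds) checks the 90%-within-10-seconds and 30-second-maximum criteria; generating the 400-user load itself would be the job of a performance test tool.

    def meets_requirement(response_times):
        within_10s = sum(1 for t in response_times if t <= 10.0)
        return (within_10s / len(response_times) >= 0.90    # 90% at 10 seconds or less
                and max(response_times) <= 30.0)            # and none above 30 seconds

    sample = [2.1, 4.7, 9.8, 12.3, 6.0, 8.9, 3.3, 7.5, 9.9, 5.2]   # illustrative data only
    print(meets_requirement(sample))                               # True for this sample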
25. Physical Audit (PA) The PA is intended to assure that the full set of deliverables is an internally consistent set (i.e., the user manual is the correct one for this particular version of the software). It compares the final form of the code against the final documentation of that code.
26. Post Implementation Review (PIR) The PIR is held once the software system is in production. The PIR is usually conducted 6 to 9 months after implementation. Its purpose is to determine whether the software has, in fact, met the user’s expectations for it in actual operation.
27. Regression Testing Regression testing is defined as testing that verifies that the system functions as required and no new errors have been introduced into a new version as a result of code modifications. Regression testing is an iterative process conducted on successive builds and as a result is an ideal candidate for automation. Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality assurance measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity. Regression Testing is also a phase of testing that occurs near the end of a testing cycle.
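A toy sketch of the idea: the same suite of previously passing checks is re-run on every build of a hypothetical total_price() function, so any fix or addition that breaks existing behaviour is caught immediately.

    def total_price(unit_price, quantity, tax_rate=0.08):
        return round(unit_price * quantity * (1 + tax_rate), 2)

    def run_regression_suite():
        # Cases that passed on earlier builds; they guard against newly introduced errors.
        assert total_price(10.00, 1) == 10.80
        assert total_price(10.00, 3) == 32.40
        assert total_price(0.00, 5) == 0.00
        print("regression suite passed")

    run_regression_suite()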
28. Scenario Test A Scenario test simulates a real world situation where a user would perform a set of detailed steps to accomplish a specific task.
29. Smoke (Build Verification) Test A Smoke test validates that a fundamental operation or area of the program is ready to undergo more complex Functional or Scenario Testing.
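A minimal sketch, assuming a hypothetical Build object: the checks are deliberately shallow and fast, and any failure rejects the build before the longer functional or scenario suites are run.

    class Build:                                  # hypothetical stand-in for the new build
        def starts_up(self): return True
        def home_page_loads(self): return True
        def database_reachable(self): return True

    def smoke_test(build):
        assert build.starts_up(), "application failed to start"
        assert build.home_page_loads(), "home page did not load"
        assert build.database_reachable(), "database connection failed"
        print("smoke test passed: build is ready for deeper testing")

    smoke_test(Build())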
30. Software Quality Systems Plan (SQSP) The SQSP addresses the activities to be performed on the project in support of the quest for quality software. All activities to be accomplished in the software quality area should receive the same personnel, resource, and schedule discussion as in the overall SDP, and any special tools and methodologies should be discussed.
31. Software Test Plan (STP) The Software Test Plan documents the test program, timelines, resources, and tests to be performed for a test cycle leading to the release of a product or completion of a project.
32. Stress Testing Stress Testing is defined as testing that exercises the system to the point that the server experiences diminished responsiveness or breaks down completely with the objective of determining the limits of the system. This may also be referred to as Volume Testing. An example of stress testing may be to send thousands of queries to the database.
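A toy sketch of the approach, with run_queries() standing in for a real server: the load is doubled until responsiveness degrades past a chosen threshold, and the point of failure is recorded as the system's limit.

    import time

    def run_queries(count):
        time.sleep(count * 0.0005)                # simulated server work
        return count * 0.0005                     # simulated average response time

    load = 100
    while run_queries(load) < 0.5:                # 0.5 s taken as "diminished responsiveness"
        load *= 2
    print(f"Responsiveness degraded at roughly {load} concurrent queries")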
33. Structured Testing Structured testing involves the execution of predefined test cases.
34. Testing Testing involves operating an application under controlled conditions and evaluating the results in order to confirm that the application fulfills its stated requirements.
35. Test Case A Test Case is a specific set of steps to be executed in a program that are documented using a predefined format. Execution of the steps should result in a predefined expected result. If the expected result occurs, the test case passes; if it does not, the test case fails. Failure of a test case indicates a problem or defect with the application under test.
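As an illustration of such a predefined format (the field names here are arbitrary), a single test case might be recorded as:

    test_case = {
        "id": "TC-001",
        "title": "Valid login",
        "steps": [
            "1. Launch the application",
            "2. Enter a valid username and password",
            "3. Click the Login button",
        ],
        "expected_result": "The user's home page is displayed",
        "actual_result": None,        # filled in during execution
        "status": "Not Run",          # becomes Pass or Fail after execution
    }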
36. Test Cycle A Test Cycle encompasses all the testing (Initial Testing, Alpha Testing, Beta Testing, and Regression Testing) that is conducted leading to the release of a product or completion of a project.
37. Test Case Index A Test Case Index is a list of all Test Cases relating to a Test Plan.
38. Test Program A Test Program is the methodology utilized for testing a particular product or project. The details of the Test Program are documented in the Test Plan.
39. Test Readiness Review (TRR) The TRR is a formal phase-end review that occurs during the Coding Phase and prior to the onset of user (acceptance) testing. The TRR determines whether the system is ready to undergo user (acceptance) testing.
40. Test Traceability Matrix (TTM) A Test Traceability Matrix tracks the requirements to the test cases that demonstrate software compliance with the requirements.
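A small sketch of the idea, with illustrative requirement and test case IDs: each requirement maps to the test cases that demonstrate compliance, and an empty entry flags a coverage gap.

    traceability_matrix = {
        "REQ-001": ["TC-001", "TC-002"],          # login requirement
        "REQ-002": ["TC-003"],                    # password-reset requirement
        "REQ-003": [],                            # no coverage yet: a gap to close
    }

    uncovered = [req for req, cases in traceability_matrix.items() if not cases]
    print("Requirements without test cases:", uncovered)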
41. Unit Testing Unit Testing involves glass box testing of code conducted by the programmer who wrote the code. Unit testing is primarily a debugging activity that concentrates on the removal of coding mistakes. It is part and parcel of the coding activity itself.
42. User (Acceptance) Testing User testing is primarily intended to demonstrate that the software complies with its requirements. This type of testing is black box testing, which does not rely on knowledge of the source code. This testing is intended to challenge the software in relation to its satisfaction of the functional requirements. These tests have been planned based on the requirements as approved by the user or customer.
43. Unstructured Testing Unstructured testing involves exploratory testing without the use of predefined test cases.