Software Quality Testing

Software Quality Testing is a blog dedicated to extensive Software Quality Assurance and Software Testing information. It is intended to be a one-stop information center for all your software quality and testing needs.

 
Software Quality Assurance Interview Questions (Part-10)
Thursday, November 30, 2006

1. What is the difference between a test strategy and a test plan?
2. What is the relationship between test scripts and test cases?
3. What goes into a test package?
4. What test data would you need to test that a specific date occurs on a specific day of week?
5. How would you prioritize the tests?
6. It is the eleventh hour and we have no test scripts, cases, or data. What would you do first?
7. How would you communicate defects?
8. What would you do if management insists that testing is complete but you feel differently?
9. Is it important to gather actual metrics with this once-in-a-lifetime test?
10. What would you record about a defect?
11. What is data-driven automation?
12. What are the main attributes of test automation?
13. Does automation replace manual testing?
14. How will you choose a tool for test automation?
15. How will you evaluate the tool for test automation?
16. What are main benefits of test automation?
17. What could go wrong with test automation?
18. How will you describe testing activities?
19. What testing activities might you want to automate?
20. Describe common problems of test automation.
21. What types of scripting techniques for test automation do you know?
22. What are principles of good testing scripts for automation?
23. What tools are available for support of testing during software development life cycle?
24. Can the activities of test case design be automated?
25. Why did you ever become involved in QA/testing?
26. What is the testing lifecycle and explain each of its phases?
27. What is the difference between testing and Quality Assurance?
28. What are basic, core practices for a QA specialist?
29. What do you like about QA?
30. What has not worked well in your previous QA experience and what would you change?
31. How will you begin to improve the QA process?
32. What is the difference between QA and QC?
33. What is UML and how to use it for testing?
34. What is CMM and CMMI? What is the difference?
35. What do you like about computers?
36. Do you have a favourite QA book? More than one? Which ones, and why?
37. What is the responsibility of programmers vs QA?
38. How can software QA processes be implemented without stifling productivity?
39. How is testing affected by object-oriented designs?
40. What is extreme programming and what does it have to do with testing?
41. Write a test transaction for a scenario where a 6.2% tax deduction must be applied to the first $62,000 of income.
42. What would be the Test Objective for Unit Testing? What are the quality measurements to assure that unit testing is complete?
43. Prepare a checklist for the developers on Unit Testing before the application comes to testing department.
44. Draw a pictorial diagram of a report you would create for developers to determine project status.
45. Draw a pictorial diagram of a report you would create for users and management to determine project status.
46. What 3 tools would you purchase for your company for use in testing? Justify the need.
47. Take the following concepts, put them in order, and provide a brief description of each:
system testing
acceptance testing
unit testing
integration testing
benefits realization testing
48. What processes/methodologies are you familiar with?
49. What type of documents would you need for QA/QC/Testing?
50. How can you use technology to solve a problem?
51. What type of metrics would you use?
52. How do you determine whether tools work well with your existing system?
53. What automated tools are you familiar with?
54. How well do you work with a team?
55. How would you ensure 100% coverage of testing?
56. How would you build a test team?
57. What problems have you faced, now or in the past? How did you solve them?
58. What will you do during your first day on the job?
59. What would you like to do five years from now?
60. Tell me about the worst boss you have ever had.
61. What are your greatest weaknesses?
62. What are your strengths?
63. What is a successful product?
64. What do you like about Windows?
65. What is good code?
66. Who are Kent Beck, Dr. Grace Hopper, and Dennis Ritchie?
67. What are two primary goals of testing?
68. If your company is going to conduct a review meeting, who should be on the review committee and why?
69. Name any three attributes which will impact the testing process.
70. What activity is done in Acceptance Testing, which is not done in System testing?
71. You are a tester for testing a large system. The system data model is very large with many attributes and there are a lot of inter-dependencies within the fields. What steps would you use to test the system and also what are the effects of the steps you have taken on the test plan?
72. Explain and provide examples for the following black-box techniques:
Boundary Value testing
Equivalence testing
Error Guessing
73. Describe a past experience with implementing a test harness in the development of software.
74. Have you ever worked with QA in developing test tools? Explain the participation Development should have with QA in leveraging such test tools for QA use.
75. Give me some examples of how you have participated in Integration Testing.
76. How would you describe the involvement you have had with the bug-fix cycle between Development and QA?
77. What is unit testing?
78. Describe your personal software development process.
79. How do you know when your code has met specifications?
80. How do you know your code has met specifications when there are no specifications?
81. Describe your experiences with code analyzers.
82. How do you feel about cyclomatic complexity?
83. Who should test your code?
84. How do you survive chaos?
Posted @ Thursday, November 30, 2006   0 comments

Software Quality Assurance Interview Questions (Part-9)

1. When should testing start in a project? Why?
2. How do you go about testing a web application?
3. Difference between Black & White box testing
4. What is Configuration management? Tools used?
5. What do you plan to become after, say, 2-5 years (e.g., QA Manager)? Why?
6. Would you like to work in a team or alone, why?
7. Give me 5 strong & weak points of yours
8. Why do you want to join our company?
9. When should testing be stopped?
10. What sort of things would you put down in a bug report?
11. Who in the company is responsible for Quality?
12. Top management felt that whenever there are changes in the technology being used, development schedules, etc., it is a waste of time to update the test plan. Instead, they emphasized that you should put your time into testing rather than into working on the test plan. Your Project Manager asked for your opinion. You have argued that the test plan is very important and needs to be updated from time to time; it is not a waste of time, and testing activities are more effective when your plan is clear. Using some metrics, how would you support your argument that the test plan should be kept consistently up to date?
13. The QAI is starting a project to put the CSTE certification online. They will use an automated process for recording candidate information, scheduling candidates for exams, keeping track of results and sending out certificates. Write a brief test plan for this new project. The project had a very high cost of testing. After going into detail, someone found out that the testers are spending their time on software that does not have many defects. How will you verify that this is correct?
14. What are the disadvantages of over testing?
15. What happens to the test plan if the application has a functionality not mentioned in the requirements?
16. You are given two scenarios to test. Scenario 1 has only one terminal for entry and processing whereas scenario 2 has several terminals where the data input can be made. Assuming that the processing work is the same, what would be the specific tests that you would perform in Scenario 2, which you would not carry on Scenario 1?
17. Your customer does not have experience in writing Acceptance Test Plan. How will you do that in coordination with customer? What will be the contents of Acceptance Test Plan?
18. How do you know when to stop testing?
19. What can you do if the requirements are changing continuously?
20. What is the need for Test Planning?
21. What are the various status reports you will generate to Developers and Senior Management?
22. Define and explain any three aspects of code review?
23. Why do you need test planning?
24. Explain 5 risks in an e-commerce project. Identify the personnel that must be involved in the risk analysis of a project and describe their duties. How will you prioritize the risks?
25. Who defines quality?
26. What is an equivalence class?
27. Is 'a fast database retrieval rate' a testable requirement?
28. Should we test every possible combination/scenario for a program?
29. What criteria do you use when determining when to automate a test or leave it manual?
30. When do you start developing your automation tests?
31. Discuss what test metrics you feel are important to publish in an organization.
32. Describe the role that QA plays in the software lifecycle.
33. What should Development require of QA?
34. What should QA require of Development?
35. How would you define a bug?
36. Give me an example of the best and worst experiences you have had with QA.
37. How does unit testing play a role in the development/software lifecycle?
38. Explain some techniques for developing software components with respect to testability.
Posted @ Thursday, November 30, 2006   2 comments

Software Quality Assurance Interview Questions (Part-8)

1. How do you determine what to test?
2. How do you decide when you have tested enough?
3. How do you test if you have minimal or no documentation about the product?
4. Describe the basic elements you put in a defect report.
5. How do you perform regression testing?
6. At what stage of the life cycle does testing begin in your opinion?
7. How do you analyze your test results? What metrics do you try to provide?
8. Realising you will not be able to test everything - how do you decide what to test first?
9. Where do you get your expected results?
10. If automating - what is your process for determining what to automate and in what order?
11. In the past, I have been asked to verbally start mapping out a test plan for a common situation, such as an ATM. The interviewer might say, 'Just thinking out loud, if you were tasked to test an ATM, what items might your test plan include?' These types of questions are not meant to be answered conclusively, but they are a good way for the interviewer to see how you approach the task.
12. If you are given a program that will average student grades, what kinds of inputs would you use?
13. Tell me about the best bug you ever found.
14. What made you pick testing over another career?
15. What is the exact difference between Integration and System testing? Give me examples from your project.
16. How did you go about testing a project?
17. What are the limitations of automating software testing?
18. What skills are needed to be a good test automator?
19. How do you determine whether tools work well with your existing system?
20. Describe some problems that you had with a test automation tool.
21. What are the main attributes of test automation?
22. What testing activities might you want to automate in a project?
23. How do you determine whether tools work well with your existing system?
24. What criteria would you use to select Web transactions for load testing?
25. For what purpose are virtual users created?
26. Why is it recommended to add verification checks to all your scenarios?
27. In what situation would you want to parameterize a text verification check?
28. Why do you need to parameterize fields in your virtual user script?
29. What are the reasons why parameterization is necessary when load testing the Web server and the database server?
30. How can data caching have a negative effect on load testing results?
31. What usually indicates that your virtual user script has dynamic data that is dependent on your parameterized fields?
32. What are the various status reports that you need to generate for Developers and Senior Management?
33. You have been asked to design a Defect Tracking system. Think about the fields you would specify in the defect tracking system?
34. Write a sample Test Policy?
35. Explain the various types of testing after arranging them in chronological order.
36. Explain what test tools you will need for client-server testing and why?
37. Explain what test tools you will need for Web app testing and why?
38. Explain the pros and cons of testing done by the development team versus testing by an independent team.
39. Differentiate Validation and Verification?
40. Explain Stress, Load and Performance testing?
41. Describe automated capture/playback tools and list their benefits?
Posted @ Thursday, November 30, 2006   1 comments

Software Quality Assurance Interview Questions (Part-7)

1. Are regression tests required or do you feel there is a better use for resources?
2. Our software designers use UML for modeling applications. Based on their use cases, we would like to plan a test strategy. Do you agree with this approach, or would this mean more effort for the testers?
3. Tell me about a difficult time you had at work and how you worked through it.
4. Give me an example of something you tried at work but did not work out so you had to go at things another way.
5. How can one file-compare future-dated output files from a program which has changed against the baseline run, which used the current date for input? The client does not want to mask dates on the output files to allow compares. (Answer: Rerun the baseline with input files future-dated by the same number of days as the future-dated run of the changed program. Then run a file compare between the baseline's future-dated output and the changed program's future-dated output.)
6. What is the structure of the company?
7. Who is going to do the interview? (Possible background information on the interviewer.)
8. What is the employer's environment (platforms, tools, etc.)?
9. What are the methods and processes used in software arena?
10. What automating testing tools are you familiar with?
11. How did you use automating testing tools in your job?
12. Describe some problems that you had with a test automation tool.
13. How do you plan test automation?
14. Can test automation improve test effectiveness?
15. What is Negative testing?
16. What was a problem you had in your previous assignment (testing if possible)? How did you resolve it?
17. What are two of your strengths that you will bring to our QA/testing team?
18. How would you define Quality Assurance?
19. What do you like most about Quality Assurance/Testing?
20. What do you like least about Quality Assurance/Testing?
21. What is the Waterfall Development Method and do you agree with all the steps?
22. What is the V-Model Development Method and do you agree with this model?
23. What is the Capability Maturity Model (CMM)? At what CMM level were the last few companies you worked?
24. What is a Good Tester?
25. Could you tell me two things you did in your previous assignment (QA/Testing related hopefully) that you are proud of?
26. List 5 words that best describe your strengths.
27. What are two of your weaknesses?
28. What methodologies have you used to develop test cases?
29. In an application currently in production, one module of code is being modified. Is it necessary to re-test the whole application or is it enough to just test functionality associated with that module?
30. Define each of the following and explain how each relates to the other: Unit, System, and Integration testing.
31. Define Verification and Validation. Explain the differences between the two.
32. Explain the differences between White-box, Gray-box, and Black-box testing.
33. How do you go about going into a new organization? How do you assimilate?
34. Define the following and explain their usefulness: Change Management, Configuration Management, Version Control, and Defect Tracking.
35. What is ISO 9000? Have you ever been in an ISO shop?
36. When are you done testing?
37. What is the difference between a test strategy and a test plan?
38. What is ISO 9003? Why is it important?
39. What are ISO standards? Why are they important?
40. What is IEEE 829? (This standard is important for Software Test Documentation-Why?)
41. What is IEEE? Why is it important?
42. Do you support automated testing? Why?
43. We have a testing assignment that is time-driven. Do you think automated tests are the best solution?
44. What is your experience with change control? Our development team has only 10 members. Do you think managing change is such a big deal for us?
45. Are reusable test cases a big plus of automated testing and explain why.
46. Can you build a good audit trail using Compuware QACenter products? Explain why.
47. How important is Change Management in computing environments?
48. Do you think tools are required for managing change? Explain, and please list some tools/practices which can help you manage change.
49. We believe in ad-hoc software processes for projects. Do you agree with this? Please explain your answer.
50. When is a good time for system testing?
Posted @ Thursday, November 30, 2006   0 comments

Software Quality Assurance Interview Questions (Part-6)

1. What are the benefits of creating multiple actions within any virtual user script?
2. How did you use WinRunner in your project?
3. Explain WinRunner testing process?
4. What is contained in the GUI map in WinRunner?
5. How does WinRunner recognize objects on the application?
6. Have you created test scripts and what is contained in the test scripts in WinRunner?
7. How does WinRunner evaluate test results?
8. Have you performed debugging of the scripts in WinRunner?
9. How do you run your test scripts in WinRunner ?
10. How do you analyze results and report the defects in WinRunner?
11. What is the use of Test Director software in WinRunner?
12. Have you integrated your automated scripts from TestDirector in WinRunner?
13. What is the purpose of loading WinRunner Add-Ins?
14. What is meant by the logical name of the object in WinRunner?
15. If the object does not have a name then what will be the logical name in WinRunner?
16. What is the difference between the GUI map and GUI map files in WinRunner?
17. How do you view the contents of the GUI map in WinRunner?
18. When you create the GUI map, do you record all the objects or only specific objects in WinRunner?
19. What is load testing?
20. What is Performance testing?
21. Explain the load testing process (LoadRunner version 7.2).
22. When do you do load and performance testing?
23. What are the components of LoadRunner?
24. What component of LoadRunner would you use to record a script?
25. What component of LoadRunner would you use to play back the script in multi-user mode?
26. What is a rendezvous point in LoadRunner?
27. What is a scenario in LoadRunner?
28. Explain the recording mode for a web Vuser script in LoadRunner.
29. Why do you create parameters in LoadRunner?
30. What is correlation in LoadRunner? Explain the difference between automatic correlation and manual correlation.
31. How do you find out where correlation is required in LoadRunner? Give a few examples from your projects.
32. Where do you set automatic correlation options in LoadRunner?
33. When do you disable logging in the Virtual User Generator? When do you choose standard and extended logs in LoadRunner?
34. What function is used to capture dynamic values in a web Vuser script in LoadRunner?
35. How do you debug a LoadRunner script?
36. How do you write user-defined functions in LoadRunner?
37. What are the changes you can make in run-time settings in LoadRunner?
38. Where do you set iterations for Vuser testing in LoadRunner?
39. How do you perform functional testing under load in LoadRunner?
40. What is ramp-up? How do you set it in LoadRunner?
41. What is the advantage of running a Vuser as a thread in LoadRunner?
42. If you want to stop the execution of your script on error in LoadRunner, how do you do that?
43. What is the relation between response time and throughput in LoadRunner?
44. Explain the configuration of your systems for LoadRunner.
45. How do you identify performance bottlenecks in LoadRunner?
46. How did you find database-related issues in LoadRunner?
47. What is the difference between an Overlay graph and a Correlate graph in LoadRunner?
48. How did you plan the load? What are the criteria in LoadRunner?
49. What does the vuser_init action contain in LoadRunner?
50. What does the vuser_end action contain in LoadRunner?
Posted @ Thursday, November 30, 2006   4 comments

Software Quality Assurance Interview Questions (Part-5)

1. What are the properties of a good requirement?
2. How do you test if we have minimal or no documentation about the product?
3. What are all the basic elements in a defect report?
4. Is 'a fast database retrieval rate' a testable requirement?
5. What is software quality assurance?
6. What is the value of a testing group? How do you justify your work and budget?
7. What is the role of the test group vis-a-vis documentation, tech support, and so forth?
8. How much interaction with users should testers have, and why?
9. How should you learn about problems discovered in the field, and what should you learn from those problems?
10. What are the roles of glass-box and black-box testing tools?
11. What issues come up in test automation, and how do you manage them?
12. What development model should programmers and the test group use?
13. How do you get programmers to build testability support into their code?
14. What is the role of a bug tracking system?
15. What are the key challenges of testing?
16. Have you ever completely tested any part of a product? How?
17. Have you done exploratory or specification-driven testing?
18. Should every business test its software the same way?
19. Discuss the economics of automation and the role of metrics in testing.
20. Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.
21. When have you had to focus on data integrity?
22. What are some of the typical bugs you encountered in your last assignment?
23. How do you prioritize testing tasks within a project?
24. How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.
25. When should you begin test planning?
26. When should you begin testing?
27. Do you know of metrics that help you estimate the size of the testing effort?
28. How do you scope out the size of the testing effort?
29. How many hours a week should a tester work?
30. How should your staff be managed? How about your overtime?
31. How do you estimate staff requirements?
32. What do you do (with the project tasks) when the schedule fails?
33. How do you handle conflict with programmers?
34. How do you know when the product is tested well enough?
35. What characteristics would you seek in a candidate for test-group manager?
36. What do you think the role of test-group manager should be? Relative to senior management? Relative to other technical groups in the company? Relative to your staff?
37. How do your characteristics compare to the profile of the ideal manager that you just described?
38. How does your preferred work style work with the ideal test-manager role that you just described? What is different between the way you work and the role you described?
39. Who should you hire in a testing group and why?
40. What is the role of metrics in comparing staff performance in human resources management?
41. How do you estimate staff requirements?
42. What do you do (with the project staff) when the schedule fails?
43. Describe some staff conflicts you have handled.
44. Is automation (or testing) a label for other problems?
45. Are testers trying to use automation to prove their prowess?
46. Can testability features be added to the product code?
47. Do testers and developers work cooperatively and with mutual respect?
48. Is automation developed on an iterative basis?
49. Have you defined the requirements and success criteria for automation?
50. Are you open to different concepts of what test automation can mean?
51. Is test automation led by someone with an understanding of both programming and testing?
Posted @ Thursday, November 30, 2006   0 comments

Software Quality Assurance Interview Questions (Part-4)

1. How would you categorize the severity of defects?
2. Where do you imagine you will be spending the bulk of your time?
3. When do you know you have tested enough?
4. What types of documents would you need for QA, QC, and Testing?
5. What did you include in a test plan?
6. Describe any bug you remember.
7. What is the purpose of the testing?
8. What do you like (not like) in this job?
9. What is quality assurance?
10. What is the difference between QA and testing?
11. How do you scope, organize, and execute a test project?
12. What is the role of QA in a development project?
13. What is the role of QA in a company that produces software?
14. Define quality for me as you understand it.
15. Describe to me the difference between validation and verification.
16. Describe to me what you see as a process. Not a particular process, just the basics of having a process.
17. Describe to me when you would consider employing a failure mode and effect analysis.
18. Describe to me the Software Development Life Cycle as you would define it.
19. What are the properties of a good requirement?
20. How do you differentiate the roles of Quality Assurance Manager and Project Manager?
21. Tell me about any quality efforts you have overseen or implemented. Describe some of the challenges you faced and how you overcame them.
22. How do you deal with environments that are hostile to quality change efforts?
23. In general, how do you see automation fitting into the overall process of testing?
24. How do you promote the concept of phase containment and defect prevention?
25. If you come onboard, give me a general idea of what your first overall tasks will be as far as starting a quality effort.
26. What kinds of testing have you done?
27. Have you ever created a test plan?
28. Have you ever written test cases or did you just execute those written by others?
29. What did you base your test cases on?
30. You are the test manager starting on system testing. The development team says that due to a change in the requirements, they will be able to deliver the system for SQA 5 days past the deadline. You cannot change the resources (work hours, days, or test tools). What steps will you take to be able to finish the testing in time?
31. Your company is about to roll out an e-commerce application. It is not possible to test the application on all types of browsers on all platforms and operating systems. What steps would you take in the testing environment to reduce the business risks and commercial risks?
32. In your organization, developers are delivering code for system testing without performing unit testing. Give an example of test policy:
Policy statement
Methodology
Measurement
33. Testers in your organization are performing tests on the deliverables even after significant defects have been found. This has resulted in unnecessary testing of little value, because re-testing needs to be done after defects have been rectified. You are going to update the test plan with recommendations on when to halt testing. What recommendations are you going to make?
34. How do you measure test effectiveness and test efficiency?
35. You found out that senior testers are making more mistakes than junior testers; you need to communicate this to the senior tester. Also, you do not want to lose this tester. How should one go about constructive criticism?
36. You are assigned to be the test lead for a new program that will automate take-offs and landings at an airport. How would you write a test strategy for this new program?
Posted @ Thursday, November 30, 2006   0 comments

Software Quality Assurance Interview Questions (Part-3)

1. What do you like most about Quality Assurance/Testing?
2. What do you like least about Quality Assurance/Testing?
3. What is the Waterfall Development Method and do you agree with all the steps?
4. What is the V-Model Development Method and do you agree with this model?
5. What is a Good Tester?
6. Could you tell me two things you did in your previous assignment (QA/Testing related hopefully) that you are proud of?
7. List 5 words that best describe your strengths.
8. What are two of your weaknesses?
9. What methodologies have you used to develop test cases?
10. In an application currently in production, one module of code is being modified. Is it necessary to re-test the whole application or is it enough to just test functionality associated with that module?
11. How do you go about going into a new organization? How do you assimilate?
12. Define the following and explain their usefulness: Change Management, Configuration Management, Version Control, and Defect Tracking.
13. What is ISO 9000? Have you ever been in an ISO shop?
14. When are you done testing?
15. What is the difference between a test strategy and a test plan?
16. What is ISO 9003? Why is it important?
17. What are ISO standards? Why are they important?
18. What is IEEE 829? (This standard is important for Software Test Documentation-Why?)
19. What is IEEE? Why is it important?
20. Do you support automated testing? Why?
21. We have a testing assignment that is time-driven. Do you think automated tests are the best solution?
22. What is your experience with change control? Our development team has only 10 members. Do you think managing change is such a big deal for us?
23. Are reusable test cases a big plus of automated testing? Explain why.
24. Can you build a good audit trail using Compuware QACenter products? Explain why.
25. How important is Change Management in computing environments?
26. Do you think tools are required for managing change? Explain, and please list some tools/practices which can help you manage change.
27. We believe in ad-hoc software processes for projects. Do you agree with this? Please explain your answer.
28. When is a good time for system testing?
Posted @ Thursday, November 30, 2006   0 comments

Software Quality Assurance Interview Questions (Part-2)

1. What are all the basic elements in a defect report?
2. Is 'a fast database retrieval rate' a testable requirement?
3. What is software quality assurance?
4. What is the value of a testing group? How do you justify your work and budget?
5. What is the role of the test group vis-à-vis documentation, tech support, and so forth?
6. How much interaction with users should testers have, and why?
7. How should you learn about problems discovered in the field, and what should you learn from those problems?
8. What are the roles of glass-box and black-box testing tools?
9. What issues come up in test automation, and how do you manage them?
10. What development model should programmers and the test group use?
11. How do you get programmers to build testability support into their code?
12. What is the role of a bug tracking system?
13. What are the key challenges of testing?
14. Have you ever completely tested any part of a product? How?
15. Have you done exploratory or specification-driven testing?
16. Should every business test its software the same way?
17. Discuss the economics of automation and the role of metrics in testing.
18. Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.
19. When have you had to focus on data integrity?
20. What are some of the typical bugs you encountered in your last assignment?
21. How do you prioritize testing tasks within a project?
22. How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.
23. When should you begin test planning?
24. When should you begin testing?
25. Do you know of metrics that help you estimate the size of the testing effort?
26. How do you scope out the size of the testing effort?
27. How many hours a week should a tester work?
28. How should your staff be managed? How about your overtime?
29. How do you estimate staff requirements?
30. What do you do (with the project tasks) when the schedule fails?
31. How do you handle conflict with programmers?
32. How do you know when the product is tested well enough?
33. What characteristics would you seek in a candidate for test-group manager?
34. What do you think the role of test-group manager should be? Relative to senior management? Relative to other technical groups in the company? Relative to your staff?
35. How do your characteristics compare to the profile of the ideal manager that you just described?
36. How does your preferred work style work with the ideal test-manager role that you just described? What is different between the way you work and the role you described?
37. Who should you hire in a testing group and why?
38. What is the role of metrics in comparing staff performance in human resources management?
39. How do you estimate staff requirements?
40. What do you do (with the project staff) when the schedule fails?
41. Describe some staff conflicts you have handled.
42. Why did you ever become involved in QA/testing?
43. What is the difference between testing and Quality Assurance?
44. What was a problem you had in your previous assignment (testing if possible)? How did you resolve it?
45. What are two of your strengths that you will bring to our QA/testing team?
Posted @ Thursday, November 30, 2006   0 comments

Software Quality Assurance Interview Questions (Part-1)

1. What automating testing tools are you familiar with?
2. How did you use automating testing tools in your job?
3. Describe some problems that you had with a test automation tool.
4. How do you plan test automation?
5. Can test automation improve test effectiveness?
6. What is data-driven automation?
7. What are the main attributes of test automation?
8. Does automation replace manual testing?
9. How will you choose a tool for test automation?
10. How will you evaluate the tool for test automation?
11. What are main benefits of test automation?
12. What could go wrong with test automation?
13. How will you describe testing activities?
14. What testing activities might you want to automate?
15. Are regression tests required or do you feel there is a better use for resources?
16. Our software designers use UML for modeling applications. Based on their use cases, we would like to plan a test strategy. Do you agree with this approach, or would this mean more effort for the testers?
17. Tell me about a difficult time you had at work and how you worked through it.
18. Give me an example of something you tried at work but did not work out so you had to go at things another way.
19. How can one file-compare future-dated output files from a program which has changed against the baseline run, which used the current date for input? The client does not want to mask dates on the output files to allow compares.
20. Tell me about the worst boss you have ever had.
21. What are your greatest weaknesses?
22. What are your strengths?
23. What is a successful product?
24. What do you like about Windows?
25. What is good code?
26. What are basic, core, practices for a QA specialist?
27. What do you like about QA?
28. What has not worked well in your previous QA experience and what would you change?
29. How will you begin to improve the QA process?
30. What is the difference between QA and QC?
31. What is UML and how to use it for testing?
32. What is CMMI?
33. What do you like about computers?
34. Do you have a favourite QA book? More than one? Which ones, and why?
35. What is the responsibility of programmers vs QA?
36. What are the properties of a good requirement?
37. How do you test if we have minimal or no documentation about the product?
Posted @ Thursday, November 30, 2006   0 comments

43 Basic Quality Assurance/Software Testing Terminologies
Wednesday, November 29, 2006
1. Acceptance Testing
Acceptance Testing is defined as testing that verifies the system is ready to be released to end users. Acceptance Testing may also be referred to as User Testing.

2. Alpha Testing
Alpha Testing is a phase of testing of an application when development is nearing completion and minor design changes may still be made as a result of such testing.

3. Automated Testing
Automated Testing is the management and performance of test activities that include the development and execution of test scripts so as to verify test requirements, using an automated test tool. Automated testing automates test cases that have traditionally been conducted by human testers. IBM Rational Robot and Mercury WinRunner are examples of automated testing packages.

4. Beta Testing

Beta Testing is a phase of testing when development is essentially completed and final bugs and problems need to be found before final release.

5. Benchmark Testing

Benchmark Testing is defined as testing that measures specific performance results in terms of response times under a workload that is based on functional requirements. Benchmark results provide insight into how the system performs at the required load and can be compared against future test results. Benchmark Testing is a prerequisite for Stress Testing.

6. Black Box Testing
Black box testing is what most testers spend their time doing. Black box testing ignores the source code and focuses on the program from the outside, which is how the customer will use it. Black box thinking exposes errors that will elude glass box testers.

7. Boundary Testing
Boundary Testing is testing the program’s response to extreme input values.
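As an illustration, here is a minimal Python sketch of boundary testing against a made-up rule (an age field that accepts values from 18 to 65); the accepts_age function and its limits are assumptions invented for this example, not part of any real system.

def accepts_age(age):
    # Stand-in for the system under test: ages 18 to 65 inclusive are valid (assumed rule).
    return 18 <= age <= 65

# Boundary testing concentrates on the edges of the valid range and the values just outside it.
boundary_cases = [(17, False), (18, True), (19, True), (64, True), (65, True), (66, False)]

for value, expected in boundary_cases:
    actual = accepts_age(value)
    print("age=%d expected=%s actual=%s -> %s"
          % (value, expected, actual, "PASS" if actual == expected else "FAIL"))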

8. Compatibility Testing
Compatibility Testing is testing that one product works well with another product.

9. Configuration Testing
Configuration Testing is defined as testing that verifies that the system operates correctly in the supported hardware and software environments. Configuration testing is an ideal candidate for automation when the system must be tested in multiple environments.
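A minimal sketch of how such an environment matrix might be enumerated for automation; the operating systems and browsers listed are invented examples, not a recommendation.

from itertools import product

# Invented example environments for a configuration test matrix.
operating_systems = ["Windows 2000", "Windows XP", "Red Hat Linux"]
browsers = ["Internet Explorer 6", "Firefox 1.5"]

# Each (OS, browser) pair is one configuration the automated suite must run against.
for os_name, browser in product(operating_systems, browsers):
    print("Run the automated suite on %s with %s" % (os_name, browser))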

10. Conversion Testing
Conversion Testing is testing upgrading from one version of the product to another version.

11. Documentation Testing

Documentation Testing is testing all associated documentation related to the development project. This may include online help, user manuals, etc.

12. Domain Testing
Domain Testing utilizes specialized business knowledge relating to the program that is provided by subject matter experts.

13. Error Recovery Testing
Error Recovery Testing involves testing the program's error messages by intentionally making as many errors as possible.

14. Functional Audit (FA)
The FA compares the software system being delivered against the currently approved requirements for the system.

15. Functional Testing

Functional testing is defined as testing that verifies that the system conforms to the specified functional requirements. Its goal is to ensure that user requirements have been met.

16. Glass Box Testing
Glass box testing is part of the coding stage. The programmer uses his knowledge and understanding of the source code to develop test cases. Programmers can see internal boundaries in the code that are completely invisible to the outside tester. Glass box testing may also be referred to as White Box testing.

17. Integration Testing

Integration testing may be considered to have officially begun when the modules begin to be tested together. This type of testing, sometimes called gray box testing, implies limited visibility into the software and its structure. As integration proceeds, gray box testing approaches black box testing, which is more nearly pure functional testing, with no reliance on knowledge of the software structure or the software itself.

18. Installation Testing
Installation Testing involves testing whether the installation program installs the program correctly.

19. Mainstream Usage Testing
Mainstream Usage Testing involves testing the system by using it like customers would use it.

20. Manual Testing
Manual testing is defined as testing that is conducted by human testers.

21. Module Testing
Module testing is a combination of debugging and integration. It is sometimes called glass box testing (or white box testing), because the tester has good visibility into the structure of the software and frequently has access to the actual source code with which to develop the test strategies. As units are integrated into their respective modules, testing moves from unit testing to module testing.

22. Multi-user Testing
Multi-user Testing involves testing the program while more than one user is using it at the same time.

23. Operational Testing
Operational Testing involves functional testing of a system independent of specialized business knowledge.

24. Performance Testing

Performance Testing is defined as testing that verifies that the system meets specific performance objectives in terms of response times under varying workloads. This may also be referred to as Load Testing. An example of a performance test requirement may be: Utilizing 400 virtual users, 90% of all transactions have an average response time of 10 seconds or less and no response time can exceed 30 seconds. Performance Testing encompasses Stress Testing and Benchmark Testing.
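To make the example requirement concrete, here is a minimal Python sketch that checks a set of measured response times against a simplified reading of that criterion (at least 90% of responses within 10 seconds and none above 30 seconds); the sample data is invented.

# Invented sample of measured transaction response times, in seconds.
response_times = [2.1, 4.7, 8.9, 9.5, 3.2, 12.0, 6.8, 7.4, 5.5, 9.9]

within_10s = sum(1 for t in response_times if t <= 10.0)
pct_within_10s = 100.0 * within_10s / len(response_times)
worst = max(response_times)

# Simplified check of the stated requirement: 90% within 10 s, no response over 30 s.
meets_requirement = pct_within_10s >= 90.0 and worst <= 30.0
print("%.0f%% within 10 s, worst case %.1f s -> %s"
      % (pct_within_10s, worst, "PASS" if meets_requirement else "FAIL"))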

25. Physical Audit (PA)
The PA is intended to assure that the full set of deliverables is an internally consistent set (i.e., the user manual is the correct one for this particular version of the software). It compares the final form of the code against the final documentation of that code.

26. Post Implementation Review (PIR)
The PIR is held once the software system is in production. The PIR is usually conducted 6 to 9 months after implementation. Its purpose is to determine whether the software has, in fact, met the user’s expectations for it in actual operation.

27. Regression Testing
Regression testing is defined as testing that verifies that the system functions as required and no new errors have been introduced into a new version as a result of code modifications. Regression testing is an iterative process conducted on successive builds and as a result is an ideal candidate for automation. Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality assurance measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity. Regression Testing is also a phase of testing that occurs near the end of a testing cycle.
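Because regression tests are re-run on every successive build, they are usually captured as executable checks. Here is a minimal sketch using an invented calculate_discount function standing in for application code that a fix once touched; the function and its rule are assumptions for illustration only.

def calculate_discount(order_total):
    # Stand-in for application code: 10% discount (in whole currency units) on orders of 100 or more (assumed rule).
    return order_total // 10 if order_total >= 100 else 0

def test_discount_regression():
    # These cases encode behaviour established by earlier fixes; re-run them on every build.
    assert calculate_discount(99) == 0
    assert calculate_discount(100) == 10
    assert calculate_discount(250) == 25

test_discount_regression()
print("Regression checks passed")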

28. Scenario Test
A Scenario test simulates a real world situation where a user would perform a set of detailed steps to accomplish a specific task.

29. Smoke (Build Verification) Test

A Smoke test validates that a fundamental operation or area of the program is ready to undergo more complex Functional or Scenario Testing.

30. Software Quality Systems Plan (SQSP)
The SQSP addresses the activities to be performed on the project in support of the quest for quality software. All activities to be accomplished in the software quality area should receive the same personnel, resource, and schedule discussion as in the overall SDP, and any special tools and methodologies should be discussed.

31. Software Test Plan (STP)
The Software Test Plan documents the test program, timelines, resources, and tests to be performed for a test cycle leading to the release of a product or completion of a project.

32. Stress Testing
Stress Testing is defined as testing that exercises the system to the point that the server experiences diminished responsiveness or breaks down completely with the objective of determining the limits of the system. This may also be referred to as Volume Testing. An example of stress testing may be to send thousands of queries to the database.
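A minimal sketch of the "thousands of queries" idea: many concurrent requests are fired at a stand-in query function and failures are counted. The run_query function only simulates a back end; in a real stress test it would be replaced by calls to the actual server or database.

import random
import time
from concurrent.futures import ThreadPoolExecutor

def run_query(i):
    # Stand-in for a real database query: a short delay and a small simulated failure rate.
    time.sleep(random.uniform(0.001, 0.01))
    return random.random() > 0.001

# Fire a large number of queries concurrently and count how many fail under load.
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(run_query, range(5000)))

failures = results.count(False)
print("Sent %d queries, %d failed under load" % (len(results), failures))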

33. Structured Testing
Structured testing involves the execution of predefined test cases.

34. Testing
Testing involves operating an application under controlled conditions and evaluating the results in order to confirm that the application fulfills its stated requirements.

35. Test Case
A Test Case is a specific set of steps to be executed in a program that are documented using a predefined format. Execution of the steps should result in a predefined expected result. If the expected result occurs, the test case passes. If the expected result does not occur, the test case fails. Failure of a test case indicates a problem or defect with the application under test.
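A minimal sketch of how a documented test case (steps, expected result, and verdict) might be represented; the login scenario and identifiers are invented for illustration.

# An invented example of a documented test case and its pass/fail evaluation.
test_case = {
    "id": "TC-042",
    "steps": [
        "Open the login page",
        "Enter a valid user name and password",
        "Click the Login button",
    ],
    "expected_result": "The user is taken to the home page",
}

actual_result = "The user is taken to the home page"  # what the tester observed
test_case["verdict"] = "PASS" if actual_result == test_case["expected_result"] else "FAIL"
print(test_case["id"], test_case["verdict"])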

36. Test Cycle
A Test Cycle encompasses all the testing (Initial Testing, Alpha Testing, Beta Testing, and Regression Testing) that is conducted leading to the release of a product or completion of a project.

37. Test Case Index
A Test Case Index is a list of all Test Cases relating to a Test Plan.

38. Test Program
A Test Program is the methodology utilized for testing a particular product or project. The details of the Test Program are documented in the Test Plan.

39. Test Readiness Review (TRR)
The TRR is a formal phase-end review that occurs during the Coding Phase and prior to the onset of user (acceptance) testing. The TRR determines whether the system is ready to undergo user (acceptance) testing.

40. Test Traceability Matrix (TTM)
A Test Traceability Matrix tracks the requirements to the test cases that demonstrate software compliance with the requirements.
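A minimal sketch of a traceability matrix as a mapping from requirement IDs to the test cases that demonstrate compliance; all identifiers are invented. Requirements with no covering test case stand out immediately as coverage gaps.

# Invented requirement-to-test-case traceability matrix.
traceability = {
    "REQ-001": ["TC-010", "TC-011"],
    "REQ-002": ["TC-020"],
    "REQ-003": [],  # no covering test case yet: a coverage gap
}

for requirement, test_cases in traceability.items():
    coverage = ", ".join(test_cases) if test_cases else "NOT COVERED"
    print("%s: %s" % (requirement, coverage))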

41. Unit Testing
Unit Testing involves glass box testing of code conducted by the programmer who has written the code. Unit testing is primarily a debugging activity that concentrates on the removal of coding mistakes. It is part and parcel of the coding activity itself.

42. User (Acceptance) Testing
User testing is primarily intended to demonstrate that the software complies with its requirements. This type of testing is black box testing, which does not rely on knowledge of the source code. This testing is intended to challenge the software in relation to its satisfaction of the functional requirements. These tests have been planned based on the requirements as approved by the user or customer.

43. Unstructured Testing
Unstructured testing involves exploratory testing without the use of predefined test cases.


Please refer to Top 75 Basic Software Testing Terminologies for an excellent list of testing terminologies.
Posted @ Wednesday, November 29, 2006   1 comments

The Cost of Quality is the Cost of Bugs!
The first graphic projected on the overhead screen in a QA Class shows a rising curved line representing the cost of finding a bug as the project proceeds. That is, bugs caught during the requirements phase are much less costly than bugs caught during the design phase, the ones identified during the design phase cost less than those caught during coding, and bugs caught during coding cost less than bugs identified during test. The big no-no is to let bugs get caught after delivery. The cost of these bugs is huge, and many a creative graphic artist has depicted this fact by making the bugs larger in the later phases of development.
I have a problem with this graphic. I think its original intent was to dramatize the necessity of front-loading the development effort, i.e., not skimping when it comes to doing requirements analysis or program design. The problem is that this graphic is now used to downplay the importance of system testing and the subject matter expertise needed to do it right.
How many times have you heard a QA person say that you cannot test quality into a product; it has to be built in?
To better understand the bug-cost graphic I look at these bugs in the following way - the small bug depicted in the requirements phase represents the cost of finding a requirements bug in this phase, the larger bug shown in the design phase is the cost of this same requirements bug if found during the design phase, and the largest bug of them all looming above the delivery phase is the cost of this same requirements bug if found during this later phase.
If you look at the graph in this way, the bugs found in the final testing phase take on a different quality. Instead of seeing these bugs as monstrosities that should have been caught earlier in the development cycle, they become normal-sized bugs that are not so scary. This new way of looking at the chart might even make finding bugs in this phase seem as natural as finding those little bugs during the requirements phase.
I hope my point is coming across - testing during the final stages of development is just as important as creating a solid requirements or design document. As a matter of fact, common sense tells you that testing is one of the most useful tools to ensure a quality product. Why do you think they make doctors and lawyers pass tests before getting their license to practice? And why do you think professors test their students several times during the semester, and base their final grade on the outcome of these tests? I think it is because we all know that the test motivates people to do their best. The end result is a doctor we can trust and, when applied to the software engineering discipline, a program we can rely on.
Posted @ Wednesday, November 29, 2006   0 comments

How to become an effective Testing Interviewer
1. When you get a resume for an interview from your HR Manager, go through it and check whether you are comfortable interviewing such a profile; also let your manager know your comfort level in interviewing the candidate, based on your own technical skills and those of the candidate.

2. Every question you ask should work toward one objective: revealing how well the candidate's skills fit the opening you have.

3. Do not ask too much about testing theory; no two people know the same definition. And in the world of testing, there is no fixed definition for any technical term!

4. Have a discussion (not rapid-fire questions) and/or try to test something with the candidate to see what his/her approach and thought process toward testing is like.

5. Do not ask questions that do not have a standard answer in this world, like 'What is the difference between Sanity and Smoke testing?' Rather, it would be more challenging for the candidate if you ask: 'If you know what Sanity testing is, could you tell me its significance, or let me know what impact a project would have if Sanity testing is not done?'

6. Appreciate the candidate if he/she is better than you, and let him/her know that someone else will also interview them before a hiring decision is made. If you can say so, you are a non-egoistic, humble, and bold person.

7. If you could learn something from the candidate, make a note of the learnings during or after the interview.

8. Ask for interview feedback from the candidate and try to better yourself through that feedback. Not all candidates will give proper feedback, but there are many people like me!

9. Most important of all, read Dr. Cem Kaner's Interviewing Software Testing Candidates.
Posted @ Wednesday, November 29, 2006   0 comments

Your Ignorance May Help You in Testing!
Monday, November 27, 2006
From: Quality Vista

Some testers take it upon themselves to learn as much as possible about the inner workings of the system under test. This type of gray box testing is valuable, and most testers have the technical wherewithal to grasp much of what is going on behind the scenes. But it is important to recognize that sometimes ignorance is strength when it comes to finding problems that users will encounter.
When it comes to software testing, ignorance can actually be a source of strength. In particular, software testers who are not familiar with how the system has been put together can be in a better position to find certain bugs than testers who may be more familiar with the system internals.
Testers are often treated as second-class citizens, and this makes many of them eager to learn as much as they can about the software they are testing in order to prove their technical competence. This is not wrong, but we need to recognize the maximum benefit of the inherent unfamiliarity that testers have when they are new to a project.
It is well known that the value of independent testing partly comes from having a set of fresh eyes on the software product. They are more apt to try things that might fail or notice problems that others have overlooked. Testers who are new to a project, with the least knowledge about the mechanics, bring the freshest sets of eyes.
Our software products will not be successful if we expect our users to have to understand the inner workings of our software. Therefore, testers without this understanding can teach us a valuable lesson about how our software will be used. There are several areas of testing in which ignorant testers can be helpful.
Usability
Testers unfamiliar with the inner workings of a product can often be very helpful in identifying usability problems. They might notice that the program flow is confusing or that terms used are hard to understand. Smart companies regularly put new employees through usability tests. This gives the company the benefit of the new employees' fresh perspectives, gives the employees insight into how the products they are going to help develop will be used, and puts everyone on notice that design is important. Even if you do not have such a program, you can still make sure that new testers are given a chance to express their observations regarding product usability.
Installation and Setup
Software installation and configuration are areas that are often handled late in the development process. As a consequence, early testers often must learn how to manually install and configure the software. Workarounds are common, perhaps requiring the manual copying of certain files or the manual creation of particular accounts or data sets. Testers who come to a project later will not have been trained to avoid these problems and are thus more likely to stumble across installation problems that had been deferred.
Error Handling
New testers are less likely to know how the software is supposed to be used and therefore are more likely to stumble across errors. It is important for software to handle errors appropriately (give appropriate notice of invalid input, provide options for recovery, and ensure that no data is lost). Error handling code is always a good place to look for defects. Informed testers will want to plan to exercise all error conditions, but the most important ones to check are the ones that the programmers did not anticipate. Uninformed testing is one good strategy for finding them.
Program Flow
Early defects in software can train testers to avoid certain paths. I remember one product I tested where problems were likely if you closed dialogs with the x-button at the upper-right corner, rather than using the close button at the bottom. This happened often enough that the experienced testers never used the x-button. With time, many of these problems were fixed, but the testers still avoided the x-button as a matter of trained behavior. A new tester, untrained to avoid the x-button, found additional instances of these problems that had been missed.
Documentation
If you already understand the system, it is hard to read documentation as if you do not. This is why all new testers on a project should be expected to review documentation. As a new tester on a project, I once found a bad documentation bug in the product tutorial. The problem was with the instructions for setting up some data in an early chapter. They were incorrect, but the error was of no consequence until several chapters later, when further processing would not give you the documented results. Even though this version of the documentation was in use in the field, everyone internally thought that someone else knew about the problem and was taking care of it.
It is certainly true that ignorant testers are more likely to report bugs that will not actually be fixed, either because the error is theirs or because they are simply reporting a known design problem that is not going to be fixed. Nonetheless, the value of the fresh perspective they bring makes it worth having to sort through their bug reports. It also sets up a healthier dynamic than when testers try to anticipate which problems are actually valid before they report them.
Thus we get the most out of the staff we have, rather than making them feel they need to know more than they do in order to contribute. That said, we cannot argue against learning and increasing our knowledge. We can learn a lot about the internal workings of the systems we test, and this gray box information is incredibly valuable in developing sound testing strategies.
When we get new testers on staff, let us make sure to benefit from the fresh eyes they bring and recognize the value of what they do not know.
Posted @ Monday, November 27, 2006   0 comments

How to Rate a Software Tester
From: Quality Vista

Scene 1: You are picnicking by a river. You notice someone in distress in the water. You jump in and pull the person out. The mayor is nearby and pins a medal on you. You return to your picnic. A few minutes later, you spy a second person in the water. You perform a second rescue and receive a second medal. A few minutes later, a third person, a third rescue, and a third medal. This continues throughout the day. By sunset, you are weighed down with medals and honors. You are a hero. Of course, somewhere in the back of your mind there is a sneaking suspicion that you should have walked upriver to find out why people were falling in all day. But, then again, that would not have earned you as many awards.
Scene 2: You are sitting at your computer. You find a bug. Your manager is nearby and rewards you. A few minutes later you find a second bug. And so on. By the end of the day, you are weighed down with accolades and stock options. If the thought pops up in your mind that maybe you should help prevent those bugs from getting into the system, you squash it—bug prevention does not have as much personal payoff as bug hunting.
What You Measure Is What You Get
B.F. Skinner told us fifty years ago that rats and people tend to perform those actions for which they are rewarded. It is still true today. In our world, as soon as testers find out that a metric is being used to evaluate them, they strive mightily to improve their performance relative to that metric - even if the resulting actions do not actually help the project. If your testers find out that you value finding bugs, you will end up with a team of bug-finders. If prevention is not valued, prevention will not be practiced.
For instance, I once knew a team where testers were rewarded solely for the number of bugs they found and not for delivering good products to the customer. As a result, if testers saw a possible ambiguity in the spec, they would not point it out to the development team. They would quietly sit on that information until the code was delivered to test, and then they would pounce on the code and file bugs galore. The testers were rewarded for finding lots of bugs, but the project suffered deeply from all the late churn and bug-fixing.
That example sounds crazy, but it happened because the bug count metric supported it. On the flip side, I know of a similar project where testers worked collaboratively to deliver a high-quality product. They reviewed the spec and pointed out ambiguities, they helped prevent defects by performing code reviews, and they worked closely with development. As a result, very few bugs were found in the code that was officially delivered to test, and high-quality software was delivered to the customer.
Unfortunately, management was fixated on the bug count metrics found in the testing phase. Because the testers found few bugs during the official test phase, management decided that the developers must have done a great job, and they gave the developers a big bonus. The testing team did not get a bonus. How many of those testers do you think supported prevention on the next project?
It is Not About the Bugs
Software testing is not about finding bugs. It is about delivering great software. No customer ever said with a straight face, "Wow! You found and fixed 65,000 bugs; that must be really great software!" So why do so many of us still use bug counts as a measurement tool? The answer is simple: bugs are just so darn countable that they are practically irresistible.
They can be counted, tracked, and used for forecasting. And it is tempting to do numerical gymnastics with them, such as dividing them by KLOC (thousand lines of code), plotting their rate over time, or predicting their future rates. But all this ignores the complexities that underlie the bug count. Bugs are a useful barometer of your process, but they cannot tell the whole story. They merely help you ask useful questions.
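As an aside, here is a minimal sketch of that "numerical gymnastics"; every figure is invented purely for illustration, and only the arithmetic is the point: defect density per KLOC and a crude find-rate trend.

```python
# Hypothetical figures; only the arithmetic is the point.
bugs_found = 130                # bugs logged against a component (assumed)
lines_of_code = 42_000          # size of that component (assumed)

defect_density = bugs_found / (lines_of_code / 1000)   # bugs per KLOC
print(f"Defect density: {defect_density:.1f} bugs/KLOC")

weekly_finds = [18, 25, 31, 22, 14, 9, 6]               # bugs found per week (assumed)
for week, count in enumerate(weekly_finds, start=1):
    print(f"Week {week:2d}: {'#' * count} ({count})")    # crude find-rate trend
```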
So What Should We Measure?
Here are some thoughts:
1. How many staff hours are devoted to a project? This is a real cost that we care about. How effectively did your whole team (project managers, developers, and testers) go from concept to delivery? Instead of treating these groups as independent teams with clear-cut deliverables to each other, evaluate them as a unit that is moving from concept to code. Encourage the different specialties to work together. Have program management make the spec more crisp. Have development provide testability hooks. Have the test team supply early feedback and testing.
2. How many bugs did your customer find? What are customers saying about your product? Have you looked through the support calls on your product? What is customer feedback telling you about your software's behavior in the field?
3. How many bugs did you prevent? Are you using code analysis tools to clean up code before it ever gets past compilation? Are you tracking the results?
4. How effectively did your tests cover the requirements and the code? Coverage metrics can be a useful, though not comprehensive, indicator of how your testing is proceeding.
5. Finally, a squishy but revealing metric: How many of your own people feel confident about the quality of the product? In some aircraft companies, after the engineers sign off on the project, they all get on the plane for a quick test flight. Assuming that none of your fellow engineers have a death wish, that is a metric you have to respect! It not only says that you found lots of bugs along the way, but that you are satisfied with the resulting deliverable.
Posted @ Monday, November 27, 2006   0 comments

Difference between Alpha and Beta Testing
Alpha testing is the final testing before the software is released to the general public. First (and this is called the first phase of alpha testing), the software is tested by in-house developers, who use debugger software or hardware-assisted debuggers. The goal is to catch bugs quickly. Then (and this is called the second stage of alpha testing), the software is handed over to us, the software QA staff, for additional testing in an environment that is similar to the intended use.
Following alpha testing, beta versions of the software are released to a limited group of people, and limited public tests are performed, so that further testing can ensure the product has few bugs. Other times, beta versions are made available to the general public in order to receive as much feedback as possible. The goal is to benefit the maximum number of future users.
Difference between Alpha and Beta Testing
In-house developers and software QA personnel perform alpha testing. A few select prospective customers or the general public perform beta testing.
Posted @ Monday, November 27, 2006   0 comments

Difference between Verification and Validation
Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings. You CAN learn to do verification, with little or no outside help.
Validation ensures that the functionality, as defined in the requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verification is completed.
Difference between Verification and Validation:
Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself. The inputs to verification are checklists, issues lists, walkthroughs, inspection meetings, reviews, and meetings. The input to validation, on the other hand, is the actual testing of an actual product. The output of verification is a nearly perfect set of documents, plans, specifications, and requirements. The output of validation, on the other hand, is a nearly perfect actual product.
Posted @ Monday, November 27, 2006   1 comments

WinRunner Interview Questions
1. How do you recognize objects at runtime in a new build version (test suite) compared with the old GUI map?
2. Wait(20): What are the minimum and maximum times this synchronization statement will wait, given that the global default timeout is set to 15 seconds?
3. Where in the user-defined function library should a new error code be defined?
4. In a modular test tree, each test receives the values for the parameters passed from the main test. These parameters are defined in the Test Properties dialog box of each test. Referring to the above, in which one of the following files are changes made in the Test Properties dialog saved?
5. What is the scripting process in Winrunner?
6. How many scripts can we generate for one project?
7. What is the command in Winrunner to invoke IE Browser? And once I open the IE browser is there a unique way to identify that browser?
8. How do you load default comments into your new script, the way IDEs do?
9. What new features were added in QTP 8.0 compared to QTP 6.0?
10. When would you go for automation?
11. How to test the stored procedure?
12. What is the use of GUI files in WinRunner?
13. Without using data-driven tests, how can we test the application with different sets of inputs?
14. How do you load a compiled module inside another compiled module?
15. Can you describe the bug life cycle?
16. How do you find the length of an edit box through WinRunner?
17. What is the file type of WinRunner test files, and what is its extension?
18. What is a candidate release?
19. What types of variables can be used within a TSL function?

More Interview Questions and Answers
Posted @ Monday, November 27, 2006   0 comments

Software Quality Assurance and Software Testing

Quality Assurance is a comparatively new phenomenon compared to testing. I would like to be specific here in defining quality, its assurance, and its differences from traditional testing methodologies.

A Glance at the Software Industry and Quality Artifacts:

Unlike the industrial revolution in the West, the software revolution did not take centuries to sow its seeds and then boom with a vertical graph; it took the software industry barely half a century to evolve, flourish, and experience its crests and troughs. Quite rightly, it is the offspring of the baby-boomer generation, post-war scenarios, the superpowers' race, and the industrial revolution.

Although the software industry learned fast, the path it followed was the same as that taken by its predecessor industries. By that I mean: look at traditional industry, which took centuries to understand and digest the need for a quality product and a satisfied customer. Similarly, the software industry started without any knowledge of the customer's exact needs, without any framework, and without any diversified targets.

I think many readers would agree with me if I say that, in its infant days, the software industry had the sole aim of weapons superiority. The fact of the matter is that today's software industry owes a lot to the Department of Defense (DoD) and its subsidiaries. The buzzwords we utter today, such as CMM, Malcolm Baldrige (MBNQA), and SPICE, are largely products of DoD requirements.

To streamline its software acquisition and outsourcing processes, the DoD came up with these various parameters, standards, and tools to measure where an organization stands if it intends to do business with the DoD by providing solutions for the DoD's software needs.

But still, do you know what quality is? How and why do different people and organizations have different sets of processes and standards? If they do not all share the same framework, then who is performing the actual quality work, and how can we categorize them?

To deduce the actual trend and definition of quality, let us discuss a few examples. For NASA, a software product's reliability and performance matter most. For Mr. X, software is good enough if it looks good and the GUIs are aesthetic; performance and reliability are secondary issues for him in the initial stage. Hence, for two different customers, the primary and secondary issues swap, and the same is true of the quality of their software: if NASA's software is not very aesthetic in look and feel but performs accurately and is 100% reliable, NASA would call it high quality, whereas if Mr. X is made to use very reliable but hard-to-use software with crude interfaces, he is definitely going to resist. For Mr. X, the quality parameters are not the same as for NASA.

So we conclude that quality is a relative term, a variable that keeps changing. That is why I keep saying that quality lies in the eyes of the beholder.

Now, if we try to define a few major quality artifacts, we would have a checklist that looks something like this:

Quality software must

1. Meet the requirements of the user
2. Have sufficient and up-to-date documentation
3. Be cost effective
4. Be user friendly
5. Be easy to learn
6. Be reliable
7. Be manageable
8. Be maintainable
9. Be secure

But REMEMBER!
To every quality artifact there is an equal and opposite resisting force

Interestingly, the definition of quality changes even within the same organization, and it becomes a nightmare for software engineers and project managers to fulfill each user's needs. For example, the manager wants to control the security system and does not want anybody to breach it, whereas the data entry operator or the accountant says: "What the hell! I am human and have made a mistake; why can I not change my posted data without bringing it to the notice of my bosses? I always did that when I worked manually." There is a clash of requirements here which, in the longer run, can itself be treated as a quality artifact; to resolve it, software houses need not only good software developers and project managers but also the best analysts.

A Quality Assurance group contains some of the best analysts: people who can foresee and help remove not only potential bugs, but who can also put themselves in different users' shoes, work with the software from the user's perspective, and come up with the potential problems the software may cause within the organization; otherwise, such a tussle may not let the software run at all, which would be a complete loss for the software firm. This can only be achieved if you have a quality assurance group, not mere testers, in your organization, and if the QA group is involved in all phases of the SDLC.

That was the case on the client end; what about quality awareness on the software development end? If you visit or work in any software house, you will be told that QUALITY is our main IDEA, yet unfortunately most of these are mere lip-service slogans, and management has only one goal: to earn money. Yes, money is important in business, but it is not the only thing. What if you earn millions in a day and, due to the poor quality of your product or a lack of customer satisfaction, you end up losing the market for good, or in a courtroom where you may have to repay your unsatisfied client more than you have earned? Most importantly, any such disaster brings a bad reputation to the company and its employees too, and it may end with good human resources leaving the company, as no one wants to be part of a bad team.

It has also been observed quite often that, in times of crunch, the department that suffers first and most is the quality assurance department. This in itself shows top management's lack of commitment and vision towards quality. Interestingly, this is not the case if the organization has only a small testing group and no QA group. Do you know why? Simply because the difference between QA and testing is still jumbled in people's minds, and testers are given more credit than QA people, who are not only good testers but also good analysts.

Readers may also have experienced that when conflict arises between different teams or departments, management or project managers are often more inclined towards the developers than the quality assurance group. I have even had the experience of working with some top IT professionals who think that quality assurance or testing people are just there to prove a point, and that otherwise the code would be of very high quality, with only defects of no importance.

This attitude towards QA and testing is also reflected in our educational institutes, where during a three- or four-year degree program students hardly take any quality assurance courses. Even if an institute offers such a course, the faculty and the management fail to bring forth the importance of the subject and the potential of the SQA field.

So why is testing different from QA? To test something, one need not know, think, or care much about the different analytical aspects; anyone with domain knowledge and the software's functional documents can do it. He or she is not concerned with what the actual requirements of the software were, what went wrong, where, or why the result is not the actual client need. The only thing he or she verifies is that the software works as per the functional specification (FS); if the FS is wrong, it is not his or her headache. A tester can become a good QA resource if he or she has a gift for analysis, but this needs dedication, motivation, drive, and support from top management too. A tester needs to analyze only a specific area; figuring out how to crash the application remains within that scope.

Let us take the example of the automobile industry. Interestingly, the person driving the car on a test drive is just a tester and need not be an expert or an automobile engineer. What he or she can report or verify is that the automobile is smooth, performs well, is comfortable, and makes the driver happy. The automobile engineers who watch the whole development cycle of the car, including the test drive, are the ones who can gather feedback from the testers, mechanics, engineers, and designers and make recommendations for enhancing the features and quality of the machine.

Another basic difference between a tester and a QA person is that the tester is a black box for the client: the client may never need to know, or ever come to know, who the tester working on his or her software is. Similarly, for the tester, the client is never a directly approachable interface. From experience, I have learned that there should always be a QA person on the team who is in direct contact with the end client. This helps in numerous ways, which I will discuss some other time.

Epitome:

Quality is a relative term, a variable that changes its definition according to needs, scope, culture, environment, and geopolitical effects.

Quality Assurance people are not mere critics or testers; incidentally, you can find the best analysts in the QA department.

A good tester can become a brilliant QA person, but it is not possible to be an excellent QA person without testing experience.

Posted @ Monday, November 27, 2006   0 comments

Top 5 Stupid Excuses for Not Using Automated Testing Tools
1) I do not have time for formal testing, our schedule is too tight.

This is an oldie but goodie. Has there ever been a development project that went according to schedule? Does anyone know of a project where QA time was not cut at least in half in order to meet the deployment timeline? Automated testing can actually help a great deal in this scenario, enabling you to perform more tests (and more types of tests) in less time. Put another way, you can spend less time scripting, and more time testing using automated testing tools.

2) It is too expensive.

Do not let a few bad apples spoil the whole bunch. Not all automated testing tools are overpriced, and not all vendors are looking to gouge you. Testing tool licenses start at less than $10,000, and hosted load testing solutions can bring down the cost even more while eliminating the need for you to develop a load-testing infrastructure. And do not even get me started about the cost of unplanned application downtime. A leading technology research group estimates that an online retailer will lose nearly $60,000 an hour if a critical application goes down. An online brokerage will lose $537,000 for just five minutes of downtime. Given those figures, does it not make sense to fix potential problems before they lead to downtime?

3) They are too hard to use.

I know you have been hurt before. Legacy testing tools, most of which were originally developed for client/server environments, can be a bear to use. Some even require proprietary languages. But a new class of Web-based testing solutions enables you to create very complex scripts with no programming in just minutes. If it has been a while since you evaluated testing tools (i.e., more than two years), it would be worth your while to see what is out there now.

4) We load tested it manually.

I will try to break this to you gently: you cannot load test applications manually unless your expected load is smaller than your development team and you can duplicate the production environment in your office. Companies have actually said to me, "We are all set; we all stayed late one night and logged on to the application simultaneously, and it worked fine!" Chances are, your application will find its real-world load a little more painful. Automated load testing is the only way to see how your application will truly hold up under a variety of load scenarios.
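To show the difference in scale, here is a bare-bones sketch of automated load generation; it is my illustration, not a real load-testing tool, and the URL and user counts are placeholders. Real tools add ramp-up, think time, and richer scenarios, but the idea is the same: many concurrent simulated users hitting the application while timings are recorded.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8080/"   # placeholder URL for the application under test
VIRTUAL_USERS = 50
REQUESTS_PER_USER = 10

def one_user(user_id):
    """Simulate one user issuing a burst of requests; return elapsed seconds."""
    start = time.time()
    for _ in range(REQUESTS_PER_USER):
        with urllib.request.urlopen(TARGET, timeout=10) as response:
            response.read()
    return time.time() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        durations = list(pool.map(one_user, range(VIRTUAL_USERS)))
    print(f"{VIRTUAL_USERS} simulated users, "
          f"average {sum(durations) / len(durations):.2f}s per user")
```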

5) We were very careful - there are not any bugs.

This is my favorite. No developer likes to think there could be any problems with his or her application - but the fact is, I have NEVER been through a test that did not find at least one problem. More often than not, we find several major ones. A colleague has observed a particular psychological phenomenon within development teams that trot out No.5 as an excuse. It is modeled after the better known Four Phases of Grief, and he calls it the Four Phases of Software Testing.

The Four Phases of Software Testing are:
a) Arrogance - You are just here to verify that my application is perfect.
b) Denial - You are not testing it right - there is nothing wrong with that feature. Try it again.
c) Pleading - Oh no! Can you help me fix it?
d) Acceptance - OK, now we know how to avoid that situation next time. What are we testing next?

By the time they reach Acceptance, most people have converted.
Posted @ Monday, November 27, 2006   0 comments

Top 6 Software Testing hitches and Tips to handle them
This is an attempt to identify six common software testing hitches that most testing projects face, and a few tips to overcome them. Software testing is an integral part of the software development life cycle. Inadequately tested applications result in a loss of credibility with customers, be they existing customers or new ones. It is therefore essential that effective testing be performed, with the intention of eliminating the common problems that might cause havoc, before releasing any software.

1. Poor Planning or Estimation
Effective planning is one of the most critical and challenging steps in a test project. Test planning and estimating indicate the order and way in which the tasks are to be carried out and the resources required to execute the same. Proper planning is not possible without a sound and reliable estimate.
1. Effort: Delays can result from a lack of resources to perform the activities within a certain time frame, or from less efficient use of resources because too many are allocated.
2. Schedule: The schedule is estimated after the effort is estimated. Developers underestimate the effort and resources required for testing; as a consequence, deadlines are not met, or software is delivered to the end user only partially tested.
3. Cost: When budgets are not correctly estimated, the project becomes relatively expensive; some test activities might have to be cancelled, causing more uncertainty about the quality of the project.
How to tackle?
Take a percentage of the total effort, employ standard ratios in testing based on previous similar test processes, allow for overheads, estimate the hours for individual activities, and extrapolate the results (a rough calculation is sketched after this list). Inadequate testing caused by a lack of knowledgeable resources, such as using testers with little or no experience, also results in poor-quality testing. Do not forget to include:
1. The training time required to improve the knowledge level of the resources on the domain or technology
2. Buffer time required to resolve any risks that you foresee
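As mentioned above, a rough ratio-based estimate might be computed as below. All of the figures are assumptions chosen purely for illustration; substitute ratios from your own previous projects.

```python
# All figures are assumptions for illustration only.
development_effort_hours = 1200      # total development effort (assumed)
test_ratio = 0.35                    # testing as a fraction of dev effort, from past projects (assumed)
training_hours = 40                  # domain/technology ramp-up for new testers (assumed)
risk_buffer = 0.15                   # contingency for foreseen risks (assumed)

base_test_effort = development_effort_hours * test_ratio
total_test_effort = (base_test_effort + training_hours) * (1 + risk_buffer)

testers = 3
hours_per_week = 40 * testers
print(f"Estimated test effort: {total_test_effort:.0f} hours "
      f"(about {total_test_effort / hours_per_week:.1f} weeks with {testers} testers)")
```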
2. Ambiguous Requirements
Without adequate documentation, the testing effort usually takes longer and allows more errors to get through to the released version. Ambiguity in requirements makes the test design phase a tedious task. The cost of uncovering and correcting requirement deficiencies at this stage is significantly less than that incurred during the acceptance-testing phase. There may be numerous implied or underlying requirements that testers overlook when merely glancing through the requirements. It is therefore essential that the requirements be understood thoroughly at the initial phase of testing.
How to tackle?
Testers can review the requirements, prepare a list of queries to be addressed, and get them clarified even before preparing the test cases, enabling them to deliver a quality product. A report of the deficiencies in the requirements may also be prepared.
3. Insufficient Test Coverage
A good test suite will achieve high coverage. An inadequate number of cases cannot test the functionality in its entirety. Test coverage is only one measure of the quality of testing; if high test coverage is not achieved, the process needs to be strengthened. Another factor is inadequate test data that does not completely cover the possible ranges.
How to tackle?
1. The associated test case identification number (say, a unique number for every case) can be marked against the requirements in an Excel sheet to ensure that test cases are written for all requirements. Low coverage indicates a process problem, which might require the test generation technique to be improved or training to be imparted to the tester. Many tools are available in the market to measure test coverage.
2. It is not possible to test all conditions in an application system. Test data with valid and invalid values can be designed to cover normal processing operations adequately. Techniques such as boundary value analysis and equivalence partitioning can be applied while preparing test data (see the sketch after this list).
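As referenced above, here is a minimal sketch of deriving test data by boundary value analysis and equivalence partitioning. The valid range used (an age field accepted between 18 and 60) is a hypothetical example of mine, not taken from any particular system.

```python
def boundary_values(low, high):
    """Values just below, at, and just above each boundary of a valid range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def partition_representatives(low, high):
    """One representative value per equivalence class: below, inside, above."""
    return {"below range": low - 10, "valid": (low + high) // 2, "above range": high + 10}

if __name__ == "__main__":
    # Hypothetical field: an age that the application accepts between 18 and 60.
    print("Boundary values:          ", boundary_values(18, 60))
    print("Partition representatives:", partition_representatives(18, 60))
```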
4. Uncontrolled Test Environment
The more the test environment resembles the final production environment, the more reliable the testing. The lack of such an environment will result in unpredictable behavior in production.
How to tackle?
1. Testing should take place in a controlled environment, separated from the development and production environments. Ownership of the test environment should rest with the testing team, and no change should happen in the environment without their permission.
2. Measures can be taken to set up the test environment in time and ensure that it is well managed; i.e., the test environment should be sufficiently representative for the tests to be performed, that is, as close as possible to (or the same as) the production environment. The test manager or a coordinator should manage deliveries from the development team and be responsible for set-up, version management, authorizations, etc. If an independent test team is established, it is ideal to have an independent configuration team as well.
5. Testing as an Afterthought
Underestimating the effort and resources required for testing results in testing activities starting at the tail end of the development cycle, when it becomes difficult to fix the major bugs unearthed by the testers, and also results in compromising on detail in the test documents owing to time constraints.
How to tackle?
Test planning can be initiated as soon as the requirements are defined. Process of test execution in parallel with application development can be adopted.
6. Inadequate Test documentation
Inadequate or improper test documents (test plans, test specifications, defect reports, etc.) result in a loss of time when analyzing what is to be tested or re-tested and the related areas to be covered, which might in turn affect the delivery or the quality of the product.
How to tackle?
1. Adequate effort needs to be spent on documentation also, as test documentation is a very important task in all phases of testing.
2. Care can be taken that all documents related to testing are prepared right from the beginning of the SDLC and updated continuously.
Posted @ Monday, November 27, 2006   0 comments

Alternative Ways of Software Testing
If there is absolutely no time for you to document your system, do consider these other approaches to testing.

User/Acceptance Testing
User testing is used to identify issues with user acceptability of the system. Defects from the most trivial to the most complicated can be reported here. It is also possibly the best way to determine how well people interact with the system. The resources used are minimal, and it potentially requires no prior functional or system knowledge. In fact, the less a user knows about the system, the more preferred he or she is for user testing. A regressive form of user testing can help uncover some defects in the high-risk areas.

Random / Exploratory Testing
Random testing has no rules. The tester is free to do what he or she pleases, as he or she sees fit. The system will benefit, however, if the tester is functionally aware of the domain and has a basic understanding of the business requirements; this prevents the reporting of unnecessary defects. A variation of random testing, known as Monte Carlo simulation, described as generating values for uncertain variables over and over to simulate a model, can also be used. This suits systems that have a large variation in numeric or other input data.
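Here is a minimal sketch of that Monte Carlo idea; the input ranges and the order-total function are invented stand-ins of mine, not part of any real system. The point is the shape of the loop: generate values for the uncertain variables over and over, feed each draw to the system under test, and check a property that must always hold rather than a single expected value.

```python
import random

def generate_order():
    """One random draw of the uncertain input variables (assumed ranges)."""
    return {
        "quantity": random.randint(1, 10_000),
        "unit_price": round(random.uniform(0.01, 999.99), 2),
        "discount": random.choice([0.0, 0.05, 0.10, 0.25]),
    }

def order_total(order):
    """Stand-in for the system under test."""
    return round(order["quantity"] * order["unit_price"] * (1 - order["discount"]), 2)

if __name__ == "__main__":
    random.seed(42)                      # fixed seed so a failing input can be reproduced
    for _ in range(1_000):
        order = generate_order()
        assert order_total(order) >= 0, f"Negative total for {order}"
    print("1,000 randomly generated orders processed; no negative totals found.")
```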

Customer Stories
User/customer stories can be defined as simple, clear, brief descriptions of functionality that will be valuable to real users. Small stories are written about the desired function. The team writing the stories should include the customer, project manager, developer, and tester. Concise stories are handwritten on small cards so that the stories stay small and testable. Once such a starting point is available to the tester, existing functional and system knowledge can help speed up testing and further test case development.
Posted @ Monday, November 27, 2006   2 comments

© Copyright 2006 Debasis Pradhan . All Rights Reserved.