TESTING INTERVIEW Preparation

 

TESTING INTERVIEW QUESTIONS AND ANSWERS


1. What is the purpose of testing?

Ans:    The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time and quality.

2. Difference between QA and testing?

Ans:   Quality Assurance (QA) is a part of quality management focused on providing confidence that quality requirements will be fulfilled. Testing is a process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.


        QA and testing both aim to make software better, but QA enhances quality by improving the development process, while testing enhances it by finding bugs. Testing is also called Quality Control (QC).

3. Describe SDLC?

Ans:      A software development life cycle (SDLC) model is a conceptual framework describing all activities in a software development project from planning to maintenance. This process is associated with several models, each including a variety of tasks and activities.

4. What is a defect?

Ans:       While executing test cases, a tester may observe that the actual results do not match the expected results. This variation between the expected and actual results is known as a defect. Different organizations have different names for this variation; defects are also commonly known as bugs, problems, incidents or issues.

5. If you are given a program that will average student grades, what kind of inputs would you use?

Ans:

 

6. What will you do on your first day?

Ans:    I will attend the orientation program and after that I will interact with my colleagues. I will learn about my work and position, and then I will get on with my work.

 

 

7.   What is verification?

Ans:  Verification makes sure that the product is designed to deliver all functionality to the customer. It is done at the start of the development process and includes reviews and meetings, walkthroughs, inspections, etc. to evaluate documents, plans, code, requirements and specifications.

 

8. What is validation?

Ans:      Validation is done through dynamic testing and other forms of review. According to the Capability Maturity Model (CMMI-SW v1.1), software validation is the process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements.

 

9. Types of review?

Ans:   1. Walkthrough

          2. Technical review

          3. Inspection

10. What is quality?

Ans:      The standard of something as measured against other things of a similar kind; the degree of excellence of something. “an improvement in product quality”

 

11. What are the different levels of testing?

1.      Unit Testing is a level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed.

2.      Integration Testing is a level of the software testing process where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.

3.      System Testing is a level of the software testing process where a complete, integrated system/software is tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements.

4.      Acceptance Testing is a level of the software testing process where a system is tested for acceptability. The purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it is acceptable for delivery

 

12. What is black box testing?

Ans:    Black Box Testing, also known as Behavioural Testing, is a software testing method in which the internal structure/design/implementation of the item being tested is not known to the tester. These tests can be functional or non-functional, though usually functional.

13. What is white box testing?

Ans:    White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box Testing, Code-Based Testing or Structural Testing) is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. Programming know-how and the implementation knowledge is essential. White box testing is testing beyond the user interface and into the nitty-gritty of a system.
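As a small, hedged sketch of the idea in Python: the shipping_cost function below is a hypothetical unit, and the tests are chosen by looking at its code so that every branch of the if statement (and the boundary of its condition) is exercised.

import unittest

def shipping_cost(order_total):
    # Hypothetical unit under test with two code paths.
    if order_total >= 100:
        return 0.0        # free-shipping branch
    return 9.99           # paid-shipping branch

class ShippingCostWhiteBoxTest(unittest.TestCase):
    # Inputs are chosen from the code itself: one per branch,
    # plus the boundary value of the condition.
    def test_free_shipping_branch(self):
        self.assertEqual(shipping_cost(150), 0.0)

    def test_paid_shipping_branch(self):
        self.assertEqual(shipping_cost(40), 9.99)

    def test_boundary_of_condition(self):
        self.assertEqual(shipping_cost(100), 0.0)

if __name__ == "__main__":
    unittest.main()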

14. What is unit testing?

Ans:    Unit Testing is a level of the software testing process where individual      units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed.

 

15. What is good code?

Ans:     Good code makes an application work for its users. Getting tests passing or removing duplication is useful, but ultimately the code needs to be shipped so that users can use it.

16. What kinds of testing have you done?

Ans:   

 

17. How did you go about testing a project?

Ans:   

 

 

18. When should testing start in a project? Why?

Ans:

 

 

19. How does unit testing play a role in the development/software life cycle?

Ans:

20. What do you like (not like ) in this job?

Ans:

21. What made you pick testing over other careers?

Ans:        Testing is one aspect that is very important in the Software Development Life Cycle (SDLC). I like to be part of the team that is responsible for the quality of the application being delivered. Also, QA has broad opportunities and large scope for learning various technologies. And of course it has far more opportunities than development.

22. What are the key testing challenges?

Ans:    Following are some challenges faced while testing software:

              1. Requirements are not frozen.

              2. Application is not testable.

              3. Ego problems.

             4. Defects in the defect tracking system.

             5. Miscommunication or no communication.

             6. Bugs in software development tools.

             7. Time pressures.

23. What is the exact difference between integration and system testing? Give an example from your project.

Ans:  Integration testing: Testing a set of integrated modules together is called integration testing.

For example, testing the keyboard of a computer on its own is unit testing, but combining the keyboard and mouse of a computer to see whether they work together is integration testing. So it is a prerequisite for integration testing that the system has been unit tested first.

System testing: It means testing all the functionalities of the project as a whole.

24. Differences between white box testing and black box testing?

Ans:       Black-box testing  is a way of testing software without having much knowledge of the internal workings of the software itself. Black box testing is often referred to as behavioral testing, in the sense that you want to test how the software behaves as a whole. It is usually done with the actual users of the software in mind, who usually have no knowledge of the actual code itself.

        White box (aka clear box) is testing of the structural internals of the code – it gets down to the for loops, if statements, etc. It allows one to peek inside the ‘box’. Tasks that are typical of white box testing include boundary tests, use of assertions, and logging.

25. How do you go about testing a web application?

Ans:       For testing any application, one should be clear about the requirements and specification documents. For testing a web application, the tester should know what the web application deals with. The test cases written should be of two different types: 1) test cases related to the look and feel of the web pages and navigation, and 2) test cases related to the functionality of the web application. Check whether the web application is connected to a database for its inputs; if there is a database, write test cases based on the database and for backend testing as well. The web application should be tested for server response time when displaying the web pages, and also checked under load. For load testing, tools are very useful for simulating many users.
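As a small illustration of the response-time part, here is a hedged sketch in Python that times how long a single page takes to load; the URL and the 2-second budget are placeholder assumptions, not values from the original text.

import time
import urllib.request

URL = "https://example.com"     # placeholder URL for the application under test
RESPONSE_BUDGET_SECONDS = 2.0   # assumed acceptable response time

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    body = response.read()
elapsed = time.perf_counter() - start

print(f"Status: {response.status}, bytes: {len(body)}, time: {elapsed:.2f}s")
if elapsed > RESPONSE_BUDGET_SECONDS:
    print("FAIL: page exceeded the response-time budget")
else:
    print("PASS: page loaded within the budget")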

26. Describe the differences between validation and verification.

Ans:   Verification is a pre-defined, static activity: before the testing process starts, we check whether all relevant documents have been prepared and ensure that the standards we set actually meet the requirements for building the application.

        Validation: it is the process of testing the application and delivering an error-free application.

27. What is configuration management? Tools used?

Ans:     Configuration Management is a set of interrelated processes, management techniques, and supporting tools that assure:

1. Our configurations are as they should be, meeting necessary requirements and matching the latest documentation.

2. Changes to our configurations are properly evaluated, authorized, and implemented.

Tools include Rational ClearCase, Doors, PVCS, CVS and many others.

28. What is an equivalence class?

Ans:        Equivalence class (EC) testing is used when you have a number of test items (e.g. values) that you want to test but, because of cost (time/money), you do not have time to test them all. You therefore group the test items into classes where all items in each class are supposed to behave exactly the same. The theory is that you only need to test one item from each class to make sure the system works.
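A hedged sketch in Python, assuming a hypothetical validate_age function that accepts ages 18–60: the input space splits into three equivalence classes, and only one representative value per class is tested.

import unittest

def validate_age(age):
    # Hypothetical function under test: accepts applicants aged 18 to 60.
    return 18 <= age <= 60

class AgeEquivalenceClassTest(unittest.TestCase):
    # One representative value per equivalence class, instead of every possible age.
    def test_below_valid_range(self):      # class: age < 18
        self.assertFalse(validate_age(10))

    def test_within_valid_range(self):     # class: 18 <= age <= 60
        self.assertTrue(validate_age(35))

    def test_above_valid_range(self):      # class: age > 60
        self.assertFalse(validate_age(75))

if __name__ == "__main__":
    unittest.main()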

29. Describe a past experience with implementing a test harness in the development of software.

Ans: Harness: an arrangement of straps for attaching a horse to a cart. Test harness: this class of tool supports the processing of tests by making it almost painless to:

 1. Install a candidate program in a test environment

 2. Feed it input data

3. Simulate, by stubs, the behaviour of subsidiary modules.
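A hedged sketch of the idea in Python: a tiny harness that feeds input data to a hypothetical report_total function and uses a stub in place of its subsidiary price-lookup module (all names and values below are invented for illustration).

# Minimal test-harness sketch: drive the candidate code with input data
# and stub out its subsidiary module.

def report_total(item_codes, price_lookup):
    # Candidate unit: sums the prices returned by the subsidiary lookup module.
    return sum(price_lookup(code) for code in item_codes)

def stub_price_lookup(code):
    # Stub simulating the behaviour of the real pricing module.
    return {"A": 10.0, "B": 2.5}.get(code, 0.0)

# Input data sets fed by the harness, each with its expected result.
test_data = [
    (["A", "B"], 12.5),
    (["A", "A", "B"], 22.5),
    ([], 0.0),
]

for inputs, expected in test_data:
    actual = report_total(inputs, stub_price_lookup)
    verdict = "PASS" if actual == expected else "FAIL"
    print(f"{verdict}: report_total({inputs}) = {actual}, expected {expected}")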

30. How can you use technology to solve problems?

Ans:          Technology is one of many tools that organizations use to help solve problems. The entire process of problem solving involves gathering and analyzing data, and then putting forth solutions that remedy an issue in the business. Decision making involves the tools that help management and other personnel choose what to do during the problem-solving process. The two concepts seem independent to some people, but when you throw technology into the mix, you can see the close relationship problem solving and decision making have with one another.

31. How do you find out whether tools work well with your existing system?

Ans:      1. Discuss with the support officials.

              2. Download the trial version of the tool and evaluate it.

              3. We can also check the system for backward and forward compatibility.

              4. Verification can be done against the forthcoming changes too.

32. What is UML and how do you use it for testing?

Ans:       The UML is a visual modeling language that can be used to specify, visualize, construct, and document the artifacts of a software system.

In following Testing Phases we can use UML:

UNIT TESTING: we use class and state diagrams, which cover correctness, error handling, pre/post conditions and invariants.

FUNCTIONAL TESTING: we use interaction and class diagrams, which cover functional and API behavior and integration issues.

SYSTEM TESTING: we use use case, activity and interaction diagrams, which cover workload, contention, synchronization and recovery.

REGRESSION TESTING: we use interaction and class diagrams, which cover unexpected behavior from new or changed functions. For deployment, use case and deployment diagrams are also used.

33. How do you test if you have minimal or no documentation about the product?

Ans:    In these types of situations we do ad hoc testing (testing without any required documents). Basically, testing without documents is itself a problem; the only thing we can do is carry out the testing using our testing knowledge and experience.

34. Have you ever completely tested any part of a product? How?

Ans:    One cannot do 100 % testing and say that the product is bug free.

Below points should be noted:

1. Prepare a traceability matrix so that you will know if any test case or functionality has been missed out. By this you would know how far you have tested.

2. Also, make sure that you have covered critical, complex functionality and that there are no showstopper, critical or major bugs.

35. Have you done exploratory or specification-driven testing?

Ans:      Yes, I have done both exploratory and specification-driven testing in my career. I was assigned to a project using a new technology, ColdFusion; the testing methodology was the same but the technology was different, so I learned with different ideas and carried out the testing. Exploratory testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution and test result interpretation as mutually supportive activities that run in parallel throughout the project. While the software is being tested, the tester learns things that, together with experience and creativity, generate new good tests to run. Learning a new domain and technology while testing the software is good practice for exploratory testing. Specification-driven testing aims to test the functionality of software according to the applicable requirements; the tester inputs multiple data values into, and only sees the outputs from, the test object. This level of testing usually requires thorough test cases to be provided to the tester, who can then simply verify that for given inputs, the output values (or behaviour) either are or are not the same as the expected values specified in the test case. Specification-driven testing is often done by automating the test cases: sets of inputs are parameterized (e.g. in Excel sheets), driven through the application and the expected results verified.

36. How do you estimate staff requirements?

Ans:      Staffing is estimated based on the requirements of the client, the number of modules and the time given by the client; depending on all of these, we recruit the staff.

37. What do you do when the schedule fails?

38. Testing types?

Ans:    Black box testing – Internal system design is not considered in this type of testing. Tests are based on requirements and functionality.

         White box testing – This testing is based on knowledge of the internal logic of an application’s code. Also known as Glass box Testing. Internal software and code working should be known for this type of testing. Tests are based on coverage of code statements, branches, paths, conditions.

       Unit testing – Testing of individual software components or modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. May require developing test driver modules or test harnesses.

        Incremental integration testing – Bottom-up approach for testing, i.e. continuous testing of an application as new functionality is added. Application functionality and modules should be independent enough to test separately. Done by programmers or by testers.

       Integration testing – Testing of integrated modules to verify combined functionality after integration. Modules are typically code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

        Functional testing – This type of testing ignores the internal parts and focuses on whether the output is as per requirements or not. It is black-box type testing geared to the functional requirements of an application.

        System testing – Entire system is tested as per the requirements. Black-box type testing that is based on overall requirements specifications, covers all combined parts of a system.

       End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

        Sanity testing - Testing to determine if a new software version is performing well enough to accept it for a major testing effort. If the application crashes on initial use, the system is not stable enough for further testing, and the build or application is sent back to be fixed.

       Regression testing – Testing the application as a whole after the modification of any module or functionality. It is difficult to cover the whole system in regression testing, so automation tools are typically used for this type of testing.

        Acceptance testing - Normally this type of testing is done to verify that the system meets the customer-specified requirements. Users or customers do this testing to determine whether to accept the application.

         Load testing – A performance test to check system behavior under load. It involves testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.

         Stress testing – The system is stressed beyond its specifications to check how and when it fails. Performed under heavy load, such as putting in numbers beyond storage capacity, complex database queries, or continuous input to the system or database.

       Performance testing – Term often used interchangeably with ‘stress’ and ‘load’ testing. Checks whether the system meets performance requirements. Different performance and load tools are used to do this.

       Usability testing – User-friendliness check. Application flow is tested: can a new user understand the application easily, and is proper help documented wherever the user might get stuck? Basically system navigation is checked in this testing.

39. What is functional testing?

Ans:        Functional testing is a software testing process used within software development in which software is tested to ensure that it conforms with all requirements. Functional testing is a way of checking software to ensure that it has all the required functionality that's specified within its functional requirements.

40. What is quality?

Ans:   Quality is about meeting the minimum standard required to satisfy customer needs. High quality products meet the standards set by customers - for example, a high quality washing-up liquid can claim that one squirt is sufficient to clean a family's dirty plates after a meal. A poor quality washing-up liquid requires several squirts.

41. At what stage of the life cycle does testing begin in your opinion?

Ans:

 

42. What are your greatest weaknesses?             

Ans:

 

43. What are your strengths?

Ans:

 

44. What is software-testing methodology?

Ans:

 

45. Quality attributes?

Ans:

 

46. What is the general testing process?

Ans:

47. Would you like to work in a team or alone, why?

Ans:

 

48. Describe any bug you remember.

Ans:

 

49. What do you like (not like) in this job?

Ans:

 

50. How do you scope, organize, and execute a test project?

Ans:

 

51. What is the role of QA in a development project?

Ans:

 

52. What did you include in a test plan?

Ans:

 

53. What kinds of testing have you done?

Ans:

 

54. Have you ever written test cases or did you just execute those written by others?

Ans:                    

 

55. How do you determine what to test?

Ans:

 

56. How do you decide when you have 'tested enough?'

Ans:

 

57. How do you test if you have minimal or no documentation about the product?

Ans:

 

58. How do you perform regression testing?

Ans:

 

59. How did you go about testing a project?

Ans:

 

60. When should testing start in a project? Why?

Ans:

 

61. What is the value of a testing group? How do you justify your work and budget?

 

Ans.     Robert Dorfman's paper in 1943 introduced the field of (Combinatorial) Group Testing. In combinatorial mathematics, group testing refers to any procedure which breaks up the task of locating elements of a set which have certain properties into tests on subsets ("groups") rather than on individual elements. A familiar example of this type of technique is the false coin problem of recreational mathematics. In this problem there are n coins and one of them is false, weighing less than a real coin. The objective is to find the false coin, using a balance scale, in the fewest number of weighings. By repeatedly dividing the coins in half and comparing the two halves, the false coin can be found quickly as it is always in the lighter half.

 

62. How much interaction with users should testers have, and why?

Ans:     Generally, testers should think like users to produce quality output. Normally the SRS or client specs are referred to. However, when the project undergoes constant modifications from the client, testers can sit in on the conversations/chats so that they have a general idea of where the project is headed. Also, many a time something may not be testable, and the tester can pitch in with suggestions. Generally, as testers we do not have any interaction with the user, but sometimes (for example, when no documentation is available) we may have some interaction with the end users with the permission of our managers.

63. How should you learn about problems discovered in the field, and what should you learn from those problems?

Ans:

 

 

64. How do you get programmers to build testability support into their code?

Ans:     Once the project starts, there can be a general technical discussion between the development and testing teams to clear up the doubts raised in the BRD. This also gives a clear-cut idea of the functionalities to be tested, adding value to coding and unit testing.

For example: the tester can share ideas about testing to some extent, such as negative testing or testing of additional functionality, which can be valuable to the programmer for coding and unit testing.

 

65. How would you define a "bug?"

Ans:          A software bug is an error, flaw, failure, or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. Most bugs arise from mistakes and errors made by people in either a program's source code or its design, or in frameworks and operating systems used by such programs, and a few are caused by compilers producing incorrect code. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports detailing bugs in a program are commonly known as bug reports, defect reports, fault reports, problem reports, trouble reports, change requests, and so forth.

 

66. What are priority and severity?

Ans:   In bug tracking, the terms “Priority” and “Severity” are used to share the importance of a bug among the team and to fix it accordingly.

      Priority means how fast the bug has to be fixed. Normally, “High Severity” bugs are marked as “High Priority” and should be resolved as early as possible, but this is not always the case. There can be exceptions to this rule, and depending on the nature of the application it can change from company to company.

 Example: of all the issues present, priority determines which issues should be dealt with first, based on their urgency or importance to the application under test. Adding this field while reporting a bug helps in analyzing the bug report.

     Severity: It is related to the technical aspect of the product and reflects how badly the bug affects the system. Severity means how severely the bug affects the functionality, measured against the quality standard or deviation from it. The severity is assigned by the tester, based on the written test cases, the functionality and the seriousness of the bug, and product fixes are planned around it. It can be divided into four categories:

Show Stopper: 4 – Not able to test the application further.

Major Defect: 3 – Major functionality not working, but able to test the application.

Minor Defect: 2 – Bug in functionality, but in a sub-module or a module under another module.

Cosmetic: 1 – Issues with the location of an object or look-and-feel issues.
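As a small illustration, the four categories above could be encoded as shown in the sketch below; the names and numbers mirror the list, while the enum and the sample defect record are just an assumed representation.

from enum import IntEnum

class Severity(IntEnum):
    # Numeric values mirror the categories listed above.
    SHOW_STOPPER = 4   # cannot test the application further
    MAJOR = 3          # major functionality broken, testing can continue
    MINOR = 2          # bug in a sub-module's functionality
    COSMETIC = 1       # look-and-feel or placement issue

# Example: the tester assigns severity, the project manager assigns priority.
defect = {"id": 101, "severity": Severity.MAJOR, "priority": "High"}
print(defect["severity"].name, int(defect["severity"]))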

 

67. Give me an example of the best and worst experiences you've had with QA.

Ans:

 

 

68. Explain some techniques for developing software components with respect to testability.

Ans:

 

 

69. Describe a past experience with implementing a test harness in the development of software.

Ans:

 

 

70. Give me some examples of how you have participated in Integration Testing.

Ans:

 

 

71. What is good code?

Ans:      Code is said good when it is:

  • Simple: so it's easy to understand
  • Readable: with semantics and a style guide
  • Open Source: which makes it easy to develop further and extend
  • Documented: so that it's still understandable after a period of time
  • Correct: so it does what it's supposed to do
  • Efficient: so you don't waste time or resources
  • Unit tests: because you wouldn't use a scale without calibrating it first
  • Structured: because it makes code more readable
  • Preserved: so that a record of it remains
  • Portable: so it doesn't just run on my current machine
  • Well designed: so it's easier to adapt to different systems
  • Follows best practices of the community: because this helps promote good practice

 

 

72. What is the role of a bug tracking system?

Ans:     A bug tracking system or defect tracking system is a software application that keeps track of reported software bugs in software development projects. It may be regarded as a type of issue tracking system. A bug tracking system is usually a necessary component of a good software development infrastructure, and consistent use of a bug or issue tracking system is considered one of the "hallmarks of a good software team”.

73. What are the key challenges of testing?

Ans:    The following are the testing challenges:

          1) Testing the complete application

          2) Misunderstanding of company processes

          3) Relationship with developers

          4) Regression testing

          5) Lack of skilled testers

          6) Testing always under time constraint

          7) Which tests to execute first

          8) Understanding the requirements

          9) Automation testing

         10) Decision to stop the testing

         11) One test team under multiple projects

         12) Reuse of test scripts

        13) Testers focusing on finding easy bugs

 

74. Have you ever completely tested any part of a product? How?

Ans:

 

 

75. Have you done exploratory or specification-driven testing?

Ans:

 

 

76. Discuss the economics of automation and the role of metrics in testing.

Ans:

 

 

77. When have you had to focus on data integrity?

Ans:

 

 

78. What are some of the typical bugs you encountered in your last assignment?

Ans:

 

 

 

79. How do you prioritize testing tasks within a project?

Ans:     Let us discuss a few examples of priority and severity, from high to low:

High Priority & High Severity:

  1. All show-stopper bugs fall under this category (the tester logs the severity as High; setting the priority as High is the project manager's call). These are blocker bugs due to which the tester is not able to continue with testing.
  2. An example of High Priority & High Severity: upon login to the system, a "Run time error" is displayed on the page, because of which the tester is not able to proceed with testing.

High Priority & Low Severity:

  1. A spelling mistake in the name of the company on the home page of the company's web site is surely a high-priority issue. In terms of functionality it breaks nothing, so we can mark it as low severity, but it makes a bad impression of the company's site, so it is of the highest priority to fix.

Low Priority & High Severity:

  1. The quarterly statement download is not generated correctly from the website, and the user has only just entered the quarter last month. Such a bug is high severity because it occurs while generating the quarterly report, but since the report is only produced at the end of the quarter there is time to fix it, so the priority is low.
  2. The system crashes in a corner-case scenario. It impacts major functionality, so the severity of the defect is high, but as it is a corner case that few users will see, the project manager can mark it as low priority; other important bugs that are visible to the client or end user are likely to be fixed first.

Low Priority & Low Severity:

  1. A spelling mistake in the confirmation message, e.g. "You have registered success" instead of "successfully".
  2. The developer forgot to remove a cryptic debug shortcut used while developing the application, which only triggers if you press the key combination LEFT_ALT+LEFT_CTRL+RIGHT_CTRL+RIGHT_ALT+F5+F10 for one minute.

80. Do you know of metrics that help you estimate the size of the testing effort?

Ans:

 

81. How do you scope out the size of the testing effort?

a. Size of the system

It would take longer to test a larger system. In some projects, it is possible to know the size of the system in terms of Function Points, Use Case Points or Lines of Code. You should take the size of the system into account when estimating the test effort.

b. Types of testing required

Sometimes, it is important to perform multiple types of testing on the system. For example, other than functional testing, it may be necessary to perform load testing, installation testing, help-files testing and so on. You should create the effort estimates for each type of testing separately.

c. Scripted or exploratory testing

It may be feasible to only execute test cases, to do exploratory testing, or to do both. If you intend to do scripted testing and do not have test cases available, you should estimate the time it would take to create and maintain them. Scripted testing also requires test data to be created; if the test data is not available, you should estimate the effort it would take to create and maintain it.

d. "Non-testing" activities

Apart from creating and executing tests, there are other activities that a tester performs. Examples include creating test logs/ reports, logging defects and entering time in the project management tool.

e. Test cycles

By a test cycle, I mean a complete round of testing (build verification testing, followed by attempted execution of all test cases, followed by all defects being logged in the defect tracking system). In practice, one test cycle is not sufficient. You should estimate the number of test cycles it would take to promote the system to the client or to production.
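Pulling the factors above together, a simplified back-of-the-envelope estimate might look like the sketch below; every figure in it is an assumed example value for illustration, not a standard formula.

# Rough test-effort estimate combining the factors above.
# Every figure below is an assumed example value, not a standard.

test_case_count = 200          # scripted tests derived from system size
hours_per_case_design = 0.5    # writing/maintaining each test case
hours_per_case_run = 0.25      # executing each test case once
test_cycles = 3                # complete rounds of execution expected
non_testing_overhead = 0.20    # reports, defect logging, time tracking (20%)

design_hours = test_case_count * hours_per_case_design
execution_hours = test_case_count * hours_per_case_run * test_cycles
total_hours = (design_hours + execution_hours) * (1 + non_testing_overhead)

print(f"Design: {design_hours}h, execution: {execution_hours}h, "
      f"total with overhead: {total_hours:.0f}h")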

82. How many hours a week should a tester work?

Ans:     Testers should work between 40 and 45 hours a week.

83. How do you handle conflict with programmers?

Ans:     Maintain a good relationship with the developers. Before raising a bug, make sure that the bug is a real and valid one. Don't blame the developer for mistakes.

84. How do you know when the product is tested well enough?

Ans:         It is possible to do enough testing, but determining how much is enough is difficult. Simply doing what is planned is not sufficient, since it leaves the question of how much should be planned. What is enough testing can only be confirmed by evaluating the results of testing. If lots of defects are found with a set of planned tests, it is likely that more tests will be required to assure that the required level of software quality is achieved. On the other hand, if very few defects are found with the planned set of tests, then no more tests should be required.

Testing should provide information to the stakeholders of the system, so that they can make an informed decision about whether to release a system into production or to customers. Testers are not responsible for making that decision; they are responsible for providing the information so that the decision can be made in the light of good information.

So going back to my question, testing is done when its objectives have been achieved and more specifically, you are done testing when:

·         You are unlikely to find additional defects.

·         You have a sufficiently high level of confidence that the software is ready to be released.

85. What characteristics would you seek in a candidate for test-group manager?

Ans:   A good QA, test, or QA/Test(combined) manager should:

·         be familiar with the software development process.

·         be able to maintain enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)

·         be able to promote teamwork to increase productivity

·         be able to promote cooperation between software, test, and QA engineers

·         have the diplomatic skills needed to promote improvements in QA processes

·         have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to

·         have people-judgment skills for hiring and keeping skilled personnel

·         be able to communicate with technical and non-technical people, engineers, managers, and customers.

·         be able to run meetings and keep them focused

86. What do you think the role of test-group manager should be? Relative to senior management?

Ans:      The roles of a test-group manager include tracking metrics such as:

·         Defect find and close rates by week, normalized against level of effort (are we finding defects, and can developers keep up with the number found and the ones necessary to fix?)

·         Number of tests planned, run, passed by week (do we know what we have to test, and are we able to do so?)

·         Defects found per activity vs. total defects found (which activities find the most defects?)

·         Schedule estimates vs. actual (will we make the dates, and how well do we estimate?)

·         People on the project, planned vs. actual by week or month (do we have the people we need when we need them?)

·         Major and minor requirements changes (do we know what we have to do, and does it change?)

87. What is performance testing?

Ans:  Performance testing is a non-functional testing technique performed to determine system parameters in terms of responsiveness and stability under various workloads. Performance testing measures the quality attributes of the system, such as scalability, reliability and resource usage.

Performance Testing Techniques:

Load testing - It is the simplest form of testing, conducted to understand the behavior of the system under a specific load. Load testing measures important business-critical transactions, and the load on the database, application server, etc. is also monitored.

Stress testing - It is performed to find the upper limit capacity of the system and also to determine how the system performs if the current load goes well above the expected maximum.

Soak testing - Soak testing, also known as endurance testing, is performed to determine the system parameters under continuous expected load. During soak tests, parameters such as memory utilization are monitored to detect memory leaks or other performance issues. The main aim is to discover the system's performance under sustained use.

Spike testing - Spike testing is performed by increasing the number of users suddenly by a very large amount and measuring the performance of the system. The main aim is to determine whether the system will be able to sustain the workload.

Attributes of Performance Testing:

Speed

Scalability

Stability

Reliability

 

 

 

88. What is load testing?

Ans:     A load test is a type of software testing conducted to understand the behavior of the application under a specific expected load.

·         Load testing is performed to determine a system’s behavior under both normal and peak conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation, e.g. if the number of users is increased, how much CPU and memory will be consumed and what the network and bandwidth response times are.

·         Load testing can be done under controlled lab conditions to compare the capabilities of different systems or to accurately measure the capabilities of a single system.

·         Load testing involves simulating real-life user load for the target application. It helps you determine how your application behaves when multiple users hit it simultaneously.

·         Load testing differs from stress testing, which evaluates the extent to which a system keeps working when subjected to extreme work loads or when some of its hardware or software has been compromised.

·         The primary goal of load testing is to define the maximum amount of work a system can handle without significant performance degradation.

Examples of load testing include:

Downloading a series of large files from the internet.

Running multiple applications on a computer or server simultaneously.

Assigning many jobs to a printer in a queue.

Subjecting a server to a large amount of traffic.

Writing and reading data to and from a hard disk continuously.
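A hedged sketch of the "many simultaneous users" case in Python, using a thread pool to fire concurrent requests at a placeholder URL and report response times; the URL and user count are assumptions for illustration only.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com"   # placeholder for the application under test
SIMULATED_USERS = 20          # assumed concurrent-user load

def one_user_visit(_):
    # Each simulated user fetches the page once and returns the elapsed time.
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=SIMULATED_USERS) as pool:
    timings = list(pool.map(one_user_visit, range(SIMULATED_USERS)))

print(f"requests: {len(timings)}, "
      f"avg: {sum(timings)/len(timings):.2f}s, worst: {max(timings):.2f}s")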

89. What is installation testing?

Ans:  Installation Testing: It is performed to verify that the software has been installed with all the necessary components and that the application is working as expected. This is very important, as installation is the end user's first interaction with the product.

Companies launch Beta Version just to ensure smoother transition to the actual product.

Installation Types: The following are the installation types:

Silent Installation

Attended Installation

Unattended Installation

Network Installation

Clean Installation

Automated Installation

Uninstallation Testing:  Uninstallation testing is performed to verify whether all the components of the application are removed during the process. All the files related to the application, along with its folder structure, have to be removed upon successful uninstallation. Post-uninstallation, the system should be able to return to a stable state.

90. What is security/penetration testing?

Ans:       There is a considerable amount of confusion in the industry regarding the differences between vulnerability scanning and penetration testing, as the two phrases are commonly interchanged. However, their meanings and implications are very different. A vulnerability assessment simply identifies and reports noted vulnerabilities, whereas a penetration test attempts to exploit the vulnerabilities to determine whether unauthorized access or other malicious activity is possible. Penetration testing typically includes network penetration testing and application security testing, as well as controls and processes around the networks and applications, and should occur both from outside the network trying to come in (external testing) and from inside the network.

91. What is recovery/error testing?

Ans:  In software testing, recovery testing is the activity of testing how well an application is able to recover from crashes, hardware failures and other similar problems.

Recovery testing is the forced failure of the software in a variety of ways to verify that recovery is properly performed. Recovery testing should not be confused with reliability testing, which tries to discover the specific point at which failure occurs. Recovery testing is basically done in order to check how fast and how well the application can recover from any type of crash, hardware failure, etc. The type or extent of recovery is specified in the requirement specifications. It is basically testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Examples of recovery testing:

While an application is running, suddenly restart the computer, and afterwards check the application's data integrity.

While an application is receiving data from a network, unplug the connecting cable. After some time, plug the cable back in and analyze the application's ability to continue receiving data from the point at which the network connection disappeared.

Restart the system while a browser has a definite number of sessions. Afterwards, check that the browser is able to recover all of them.

92. What is compatibility testing?

Ans:     In computer world, compatibility is to check whether your software is capable of running on different hardware, operating systems, applications, network environments or mobile devices.

93. What is comparison testing?

Ans:    Comparison testing involves comparing the actual contents of files and databases against expected results; comparison tools are capable of highlighting the differences between expected and actual results. Comparison test tools often have functions that allow specified sections of the files to be ignored or masked out. This enables the tester to mask out the date or time stamp on a screen or field, since it always differs from the expected value when a comparison is performed.
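A small sketch of the masking idea in Python: two report outputs are compared after the timestamp field is masked out with a regular expression. The report strings and the timestamp format are invented for illustration.

import re

# Invented expected and actual outputs that differ only in their timestamps.
expected = "Report generated at 2021-03-01 10:15:00\nTotal records: 42"
actual   = "Report generated at 2024-08-19 23:05:17\nTotal records: 42"

def mask_timestamps(text):
    # Replace date-time stamps so they are ignored by the comparison.
    return re.sub(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", "<TIMESTAMP>", text)

if mask_timestamps(expected) == mask_timestamps(actual):
    print("PASS: outputs match once timestamps are masked")
else:
    print("FAIL: outputs differ beyond the masked fields")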

94. What is acceptance testing?

Ans:  Acceptance testing is a testing technique performed to determine whether or not the software system has met the requirement specifications. The main purpose of this test is to evaluate the system's compliance with the business requirements and verify whether it has met the required criteria for delivery to end users.

There are various forms of acceptance testing:

·         User acceptance Testing

·         Business acceptance Testing

·         Alpha Testing

·         Beta Testing

95. What is alpha testing?

Ans:   Alpha testing is a type of acceptance testing; it is performed to identify all possible issues/bugs before releasing the product to everyday users or the public. The focus of this testing is to simulate real users by using black box and white box techniques. The aim is to carry out the tasks that a typical user might perform. Alpha testing is carried out in a lab environment, and usually the testers are internal employees of the organization. To put it as simply as possible, this kind of testing is called alpha only because it is done early on, near the end of the development of the software, and before beta testing.

96. What is beta testing?

Ans:    Beta Testing of a product is performed by "real users" of the software application in a "real environment" and can be considered as a form of external user acceptance testing.

Beta version of the software is released to a limited number of end-users of the product to obtain feedback on the product quality. Beta testing reduces product failure risks and provides increased quality of the product through customer validation.

It is the final test before shipping a product to the customers. Direct feedback from customers is a major advantage of beta testing. This testing helps to test the product in a real-time environment.

97. What testing roles are standard on most testing projects?

Ans:    The roles differ across industry, company, and even the software development process in use. For example, tester roles differ between Waterfall and Agile approaches to software development. Nevertheless, some standard roles are: Quality Assurance Manager, Quality Assurance Lead, Quality Assurance Senior Engineer, Quality Assurance Engineer, Test Manager, Test Lead, Senior Tester, Tester, Software Test Manager, Senior Software Test Engineer, Software Developer in Test (SDET) and Software Test Engineer (STE).

98. What is a test schedule?

Ans:    A test schedule includes the testing steps or tasks, the target start and end dates, and responsibilities. It should also describe how the test will be reviewed, tracked, and approved.

99. How do you create a test strategy?

Ans:    The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:

·         A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.

·         A description of roles and responsibilities of the resources required for the test and schedule constraints. This information comes from man-hours and schedules.

·         Testing methodology. This is based on known standards.

·         Functional and technical requirements of the application. This information comes from requirements, change request, technical and functional design documents.

·         Requirements that the system cannot provide, e.g. system limitations.

Output for this process:

·         An approved and signed off test strategy document, test plan, including test cases.

·         Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

100. How do you introduce a new software QA process?

Ans:       Introducing a new software QA process depends on the organization size, the complexity of the project and the project model. For small organizations with small-scale projects, a more ad hoc approach is suitable and they may not be interested in introducing formal processes. However, if they want to introduce one, a formal testing process should be rolled out through the organization so that all developers are aware of the importance of QA and QC and understand the difference between them. It could be first tried out on a less complex project and rolled out later to other projects as well.

101. Give me five common problems that occur during software development.

Ans:    Poorly written requirements, unrealistic schedules, inadequate testing, adding new features after development is underway and poor communication.

1. Requirements are poorly written when requirements are unclear, incomplete, too general, or not testable; therefore there will be problems.

2. The schedule is unrealistic if too much work is crammed in too little time.

3. Software testing is inadequate if no one knows whether or not the software is any good until customers complain or the system crashes.

4. It's extremely common that new features are added after development is underway.

5. Miscommunication either means the developers don't know what is needed, or customers have unrealistic expectations and therefore problems are guaranteed.

102. Give me five solutions to problems that occur during software development.

Ans:    Solid requirements, realistic schedules, adequate testing, firm requirements and good communication.

1. Ensure the requirements are solid, clear, complete, detailed, cohesive, attainable and testable. All players should agree to requirements. Use prototypes to help nail down requirements.

2. Have schedules that are realistic. Allow adequate time for planning, design, testing, bug fixing, re-testing, changes and documentation. Personnel should be able to complete the project without burning out.

3. Do testing that is adequate. Start testing early on, re-test after fixes or changes, and plan for sufficient time for both testing and bug fixing.

4. Avoid new features. Stick to initial requirements as much as possible. Be prepared to defend design against changes and additions, once development has begun and be prepared to explain consequences. If changes are necessary, ensure they're adequately reflected in related schedule changes. Use prototypes early on so customers' expectations are clarified and customers can see what to expect; this will minimize changes later on.

5. Communicate. Require walkthroughs and inspections when appropriate; make extensive use of e-mail, networked bug-tracking tools, tools of change management. Ensure documentation is available and up-to-date. Do use documentation that is electronic, not paper. Promote teamwork and cooperation.

103. Do automated testing tools make testing easier?

Ans:   Yes and no.

For larger projects, or ongoing long-term projects, they can be valuable. But for small projects, the time needed to learn and implement them is usually not worthwhile.

A common type of automated tool is the record/playback type. For example, a test engineer clicks through all combinations of menu choices, dialog box choices, buttons, etc. in a GUI and has an automated testing tool record and log the results. The recording is typically in the form of text, based on a scripting language that the testing tool can interpret.

If a change is made (e.g. new buttons are added, or some underlying code in the application is changed), the application is then re-tested by just playing back the recorded actions and compared to the logged results in order to check effects of the change.
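Conceptually, the playback step is just "replay the recorded actions and diff the new results against the logged ones." A toy sketch of that comparison in Python is shown below; the recorded script and its logged results are invented placeholders, not output from any real record/playback tool.

# Toy record/playback sketch: a "recording" is a list of UI actions plus the
# results that were logged when it was first captured (all values invented).

recorded_script = [
    {"action": "click", "target": "File>Open", "logged_result": "dialog shown"},
    {"action": "type",  "target": "filename",  "logged_result": "text accepted"},
    {"action": "click", "target": "OK",        "logged_result": "file loaded"},
]

def play_back(step):
    # Stand-in for driving the real GUI; here it simply echoes the old result,
    # except for one simulated regression on the final step.
    if step["target"] == "OK":
        return "error: file not found"
    return step["logged_result"]

for step in recorded_script:
    new_result = play_back(step)
    if new_result != step["logged_result"]:
        print(f"MISMATCH at {step['target']}: "
              f"expected '{step['logged_result']}', got '{new_result}'")
    else:
        print(f"OK: {step['target']}")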

One problem with such tools is that if there are continual changes to the product being tested, the recordings have to be changed so often that it becomes a very time-consuming task to continuously update the scripts.

Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.

104. What makes a good test engineer?

Ans:   The following attributes make a good tester:

·         A ‘test to break’ attitude,

·         An ability to take the point of view of the customer,

·         A strong desire for quality, and an attention to detail.

·         Tact and diplomacy are useful in maintaining a cooperative relationship with developers,

·         An ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.

·         Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers’ point of view, and reduces the learning curve in automated test tool programming.

·         Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.

105. What makes a good QA engineer?

Ans:   It's easy to recognize a good QA engineer when you find one. The obvious job-description answers are:

·         strong attention to detail

·         good technical knowledge

·         has a few years of experience

·         has a good attitude

·         excellent verbal and written communication

Easy enough - we've all seen that list. So what makes a great QA engineer?

106. What should be done after a bug is found?

Ans:         Once a bug is found, it should be communicated to the developer. Before reporting the bug, make sure that it is well documented with the steps to reproduce, the conditions under which it occurs, how many times it occurs and the expected result. The bug report should be accurate and complete so that the developer can see the exact failure reason; based on this, the developer gets an exact idea of the problem faced by the user, which helps to resolve it accurately. To facilitate this, the tester should reproduce the bug, verify that it is indeed a bug, add the reproduction steps with an example and attach screenshots that help prove the bug was encountered. Also attach the related logs covering the time of the bug's occurrence.

While reporting the bug, it should be assigned to a category such as GUI, functional or business, navigational, validation error, etc., which again helps to categorize bugs in bug management.

The tester's approach helps a lot in getting the bug fixed quickly and correctly. The main rule of thumb is that you (the tester) should be confident while reporting the bug. Before adding a bug, make sure that you are not adding a duplicate (one that is already logged). Many bug tracking systems help to identify or prevent duplicate bugs, which restricts unnecessary bugs and reduces rework in bug management.

Along with the bug report, adding a little extra information, such as hardware and software types, environment configuration, setup and versions (like browser name and version), definitely helps the developer understand the exact scenario or steps of the problem.
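As a small illustration of the fields described above, a bug report might be captured as a structure like the sketch below; all values are invented examples, not taken from any real defect tracker.

# Example bug report carrying the fields discussed above (invented values).
bug_report = {
    "id": "BUG-1042",
    "title": "Login page shows runtime error for valid credentials",
    "category": "Functional",              # GUI / Functional / Navigational / Validation
    "severity": "High",
    "priority": "High",
    "steps_to_reproduce": [
        "Open the login page",
        "Enter a valid username and password",
        "Click Sign In",
    ],
    "expected_result": "User is taken to the dashboard",
    "actual_result": "Runtime error page is displayed",
    "environment": {"browser": "Chrome 119", "os": "Windows 10"},
    "attachments": ["login_error_screenshot.png", "server_logs.txt"],
}

for field, value in bug_report.items():
    print(f"{field}: {value}")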

107. What is configuration management?

Ans:    Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of a product's performance, functional and physical attributes with its requirements, design and operational information throughout its life.

Configuration management (CM) refers to a discipline for evaluating, coordinating, approving or disapproving, and implementing changes in artifacts that are used to construct and maintain software systems. An artifact may be a piece of hardware or software or documentation. CM enables the management of artifacts from the initial concept through design, implementation, testing, baselining, building, release, and maintenance.

 

 

108. What if the software is so buggy it can't be tested at all?

Ans:       The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.) managers should be notified, and provided with some documentation as evidence of the problem.

109. How do you know when to stop testing?

Ans:      This can be difficult to determine. Many modern software applications are so complex and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are...

·         Deadlines, e.g. release deadlines, testing deadlines;

·         Test cases completed with certain percentage passed;

·         Test budget has been depleted;

·         Coverage of code, functionality, or requirements reaches a specified point;

·         Bug rate falls below a certain level; or

·         Beta or alpha testing period ends.

110. What if there isn't enough time for thorough testing?

Ans:     Sometimes a tester needs common sense to test an application.

I say this because most of the time it is not possible to test the whole application within the specified time. In such situations, it is better to find out the risk factors in the project and concentrate on them.

Here are some points to be considered when you are in such a situation:

1) Which functionality is most important to the project?

2) Which are the high-risk modules of the project?

3) Which functionality is most visible to the user?

4) Which functionality has the largest safety impact?

5) Which functionality has the largest financial impact on users?

6) Which aspects of the application are most important to the customer?

7) Which parts of the code are most complex, and thus most subject to errors?

8) Which parts of the application were developed in rush or panic mode?

9) What do the developers think are the highest-risk aspects of the application?

10) What kinds of problems would cause the worst publicity?

11) What kinds of problems would cause the most customer service complaints?

12) What kinds of tests could easily cover multiple functionalities?

Considering these points, you can greatly reduce the risk of releasing the project under tight time constraints.

111.What if the project isn't  big enough to justify extensive testing?

Ans:         Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same considerations as described previously in 'What if there isn't enough time for thorough testing?' apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

112.What can be done if requirements are changing continuously?

Ans:   If client requirements are changing continuously, the following changes can be expected:

           1. Changes to the design documents (HLD & LLD) or new design documents

           2. Changes to the test case scenarios

           3. Changes to the test cases

           4. The developer needs to modify the code as per the change.

           5. The tester needs to write new test cases.

           6. The tester needs to test the change.

           7. In agile, work done in the current iteration can be affected.

           8. It involves more cost.

113. What if the application has functionality that wasn't in the requirements?

Ans.          It may take serious effort to determine whether an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer.

If it is not removed, design information will be needed to determine the added testing or regression-testing needs. Management should be made aware of any significant added risks resulting from the unexpected functionality. If the functionality only affects minor areas, such as small improvements in the user interface, it may not be a significant risk.

 

114.How can software QA processes be implemented  without stifling productivity?

Ans.         Implement QA processes slowly over time. Use consensus to reach agreement on processes and adjust and experiment as an organization grows and matures. Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease and there will be improved focus and less wasted effort.

At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings and promote training as part of the QA process.

However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development are needed, but less time is required for late-night bug fixing and calming of irate customers.

115. What if the organization is growing so fast that fixed QA processes are impossible?

Ans:     This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than...

·         Hire good people

·         Ruthlessly prioritize quality issues and maintain focus on the customer;

·         Everyone in the organization should be clear on what quality means to the customer.

116.Why do you recommend that we test during the design phase?

Ans:       I recommend that we test during the design phase because testing during the design phase can prevent defects later on. I recommend verifying three things.

·         verify the design is good, efficient, compact, testable and maintainable;

·         verify the design meets the requirements and is complete (i.e. it specifies all relationships between modules, how data is passed, what happens in exceptional circumstances, and the starting state of each module);

·         verify the design allows for enough memory and I/O devices, and a fast enough runtime, for the final product.

117. Processes and procedures - why follow them?

Ans.           It is necessary for organizations to build effective and standard processes in the workplace, for streamlining operations. An organization’s productivity is directly connected to the efficiency of its day-to-day operations. It is therefore, important for employees to undergo process training within the organization, as it helps them to understand and follow the in-built processes and procedures better.

Process training helps employees adhere to set of processes laid out by the organization and ensures timely delivery of the business outputs. It also ensures increase in productivity and reduces the process time, as employees are trained and they efficiently follow the steps involved to complete a task. The efficiency of employees, in following the pre-set guidelines pertaining to processes and procedures, depends on the effectiveness of the training they receive.


119.What is parallel/audit testing?

Ans.        Parallel/audit testing is a type of testing in which the tester reconciles the output of the new system with the output of the current system, in order to verify that the new system operates correctly.
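
As a rough illustration (a minimal sketch only; the system and function names are hypothetical), parallel testing can be automated by feeding the same inputs to both the legacy and the new implementation and reconciling the outputs:

# Minimal sketch of parallel/audit testing: run the same inputs through the
# legacy system and the new system and reconcile the outputs.
# legacy_calculate_interest and new_calculate_interest are hypothetical functions.

def legacy_calculate_interest(principal, rate, years):
    return principal * rate * years / 100.0

def new_calculate_interest(principal, rate, years):
    return principal * (rate / 100.0) * years

def reconcile(test_inputs, tolerance=0.01):
    mismatches = []
    for principal, rate, years in test_inputs:
        old = legacy_calculate_interest(principal, rate, years)
        new = new_calculate_interest(principal, rate, years)
        if abs(old - new) > tolerance:
            mismatches.append((principal, rate, years, old, new))
    return mismatches

inputs = [(1000, 5, 2), (2500, 3.5, 1), (0, 5, 10)]
print(reconcile(inputs))   # an empty list means the two systems agree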

120. What is functional testing?

Ans.       Functional testing is a software testing process used within software development in which software is tested to ensure that it conforms to all requirements. Functional testing is a way of checking software to ensure that it has all the required functionality specified in its functional requirements.

121. What is usability testing?

Ans: Usability testing is done from the user's point of view. This testing type attempts to assess the "look and feel" and usage aspects of a product. Many types of testing are objective in nature, but usability is largely subjective, and many people dispute that usability testing really belongs to the software-testing domain.

Following factors support this statement:

1. "Look-and-feel" and usability features cannot always be objectively measured because they are subjective in nature.

2. The definition of good usability is person-dependent and varies from user to user. For example, a developer or system administrator may feel that command-line flags make a good user interface, but an end user would definitely prefer to have everything as GUI objects such as dialog boxes, menus, etc.

3. The user interface is a design-time activity. If the correct requirements are not gathered, or if the requirements are not translated into a correct design, the resulting user interface will not meet user needs.

122. What is integration testing?

Ans:           In Integration Testing, individual software modules are integrated logically and tested as a group. A typical software project consists of multiple software modules, coded by different programmers.  Integration testing focuses on checking data communication amongst these modules.

Hence it is also termed as 'I & T' (Integration and Testing), 'String Testing' and sometimes 'Thread Testing'.

Approaches/Methodologies/Strategies of Integration Testing:

The software industry uses a variety of strategies to execute integration testing, viz.

·         Big Bang Approach :

·         Incremental Approach, which is further divided into the following:

·         Top Down Approach

·         Bottom Up Approach

·         Sandwich Approach - Combination of Top Down and Bottom Up

123. What is system testing?

Ans: System testing is the testing of the behavior of a complete and fully integrated software product based on the software requirements specification (SRS) document. The main focus of this testing is to evaluate the business / functional / end-user requirements.

 Types of system testing:      

  Usability

·         User interface testing

·         Manual support testing

Functional

·         Graphical User Interface  Coverage

·         Error handling

·         Input Domain Coverage

·         Manipulation Coverage

·         Backend /Database Coverage

Non-functional         

·         Reliability

·         Compatibility

·         Portability

·         System Integration testing /end to end

·         Localization / internationalization

·         Installation / Uninstallation

·         Performance

Ø  Load

Ø  Stress

Ø  Data volume

Ø  Soak

·         Security

·         Acceptability

124. What is end-to-end testing?                                        

Ans:          Unlike system testing, end-to-end testing not only validates the software system under test but also checks its integration with external interfaces, hence the name "end-to-end". The purpose of end-to-end testing is to exercise a complete production-like scenario. Along with the software system, it also validates batch/data processing from other upstream/downstream systems.

End-to-end testing is usually executed after functional and system testing. It uses production-like data and test environments to simulate real-time settings. End-to-end testing is also called chain testing.

125. What is regression testing?

Ans:    Regression testing is a type of software testing that intends to ensure that changes (enhancements or defect fixes) to the software have not adversely affected it. The purpose of regression testing is to confirm that a recent program or code change has not adversely affected existing features.

Regression testing is nothing but a full or partial selection of already executed test cases which are re-executed to ensure that existing functionality still works. This testing is done to make sure that new code changes do not have side effects on existing functionality; it ensures that the old code still works once the new code changes are in place.
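
A minimal sketch of what a re-executed regression check might look like in Python with pytest (the discount_price function and the 'regression' marker are hypothetical; a real project would register the marker in pytest.ini and select its own test cases):

# Minimal sketch of a regression suite (hypothetical discount_price function).
# These existing test cases are re-executed after every code change to confirm
# that previously working behaviour still works.
import pytest

def discount_price(price, percent):
    # production code under test (simplified)
    return round(price - price * percent / 100.0, 2)

@pytest.mark.regression   # custom marker; register it in pytest.ini to silence warnings
@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 10, 90.0),    # existing, previously passing cases
    (59.99, 0, 59.99),
    (200.0, 50, 100.0),
])
def test_discount_price_regression(price, percent, expected):
    assert discount_price(price, percent) == expected

# Run only the regression subset after a change, e.g.:
#   pytest -m regression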

Need of Regression Testing

Regression Testing is required when there is a

·         Change in requirements and code is modified according to the requirement

·         New feature is added to the software

·         Defect fixing

·         Performance issue fix

Challenges in Regression Testing:

Following are the major testing problems for doing regression testing:

·         With successive regression runs, test suites become fairly large.  Due to time and budget constraints, the entire regression test suite cannot be executed

·         Minimizing test suite while achieving maximum test coverage remains a challenge

·         Determination of frequency of Regression Tests , i.e., after every modification  or every build update or after a bunch of bug fixes, is a challenge

126. What is sanity testing?

Ans:      After receiving a software build with minor changes in code or functionality, sanity testing is performed to ascertain that the bugs have been fixed and that no further issues have been introduced by these changes. The goal is to determine that the proposed functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time and cost involved in more rigorous testing.

Sanity testing is a subset of regression testing and is performed when there is not enough time for full testing. Sanity testing is surface-level testing in which the QA engineer verifies that all the menus, functions and commands available in the product are working fine.

127. How do you estimate staff requirements?

Ans:  

128. What do you do (with the project staff) when the schedule fails?

Ans:

 

129. Describe some staff conflicts you’ve handled.

Ans:

 

130. What problem you have right now or in the past? How you solved it?

Ans:

 

131. What would you like to do five years from now?

Ans:

 

132. Tell me about the worst boss you've ever had.

Ans:

 

133. What do you like about QA?        

Ans:

 

134. Test Estimation

Ans:

 

135. Should every business test its software the same way?

Ans:

 

136. How is testing affected by object-oriented designs?

Ans:

 

137. How do you estimate staff requirements?

Ans:

 

138. What do you do (with the project tasks) when the schedule fails?

Ans:

 


142. Describe to me what you see as a process. Not a particular process, just the basics of having a process.

Ans:

 

143. Describe to me when you would consider employing a failure mode and effect analysis.

Ans:

 

144. What are the properties of a good requirement?

Ans:

 

 

145. How do you differentiate the roles of Quality Assurance Manager and Project Manager?

Ans:

 

146. Tell me about any quality efforts you have overseen or implemented. Describe some of the  challenges you faced and how you overcame them.

Ans:

 

147. How do you deal with environments that are hostile to quality change efforts?

Ans:

 

148. In general, how do you see automation fitting into the overall process of testing?

Ans:

 

 

149. How do you promote the concept of phase containment and defect prevention?

Ans:

 

150. If you come onboard, give me a general idea of what your first overall tasks will be as far as starting a quality effort.

Ans:

 

151. How do you analyze your test results? What metrics do you try to provide?

Ans:

 

152. Realizing you won't be able to test everything - how do you decide what to test first?

Ans:

 

153. If automating - what is your process for determining what to automate and in what order?

Ans:

 

154. In the past, I have been asked to verbally start mapping out a test plan for a common situation, such as an ATM. The interviewer might say, "Just thinking out loud, if you were tasked to test an ATM, what items might your test plan include?" These types of questions are not meant to be answered conclusively, but they are a good way for the interviewer to see how you approach the task.

Ans:

 

155. Tell me about the best bug you ever found.

Ans:

 

156. What made you pick testing over another career?

Ans:

 

157. What is the exact difference between Integration & System testing, give me examples with your project.

Ans:

 

158. How do you go about testing a web application?                 

Ans:

 

159. What do you plan to become after say 2-5yrs (Ex: QA Manager, Why?)                     

Ans:

 

160. When should testing be stopped?

Ans:

161.What sort of things would you put down in a bug report?

Ans: We put the following things in a bug report:

·         Category

·         Reproducibility

·         Severity

·         Priority

·         Summary

·         Description

·         Additional Information

·         Activity Type

·         Bug-type

·         Detection

·         Injection

·         Snapshots, if necessary

162. What is an equivalence class?

Ans:     Equivalence partitioning divides the set of test conditions into partitions that can be considered the same. The divided sets are called equivalence partitions or equivalence classes. We then pick only one value from each partition for testing. The assumption behind this technique is that if one condition/value in a partition passes, all the others will also pass; likewise, if one condition in a partition fails, all other conditions in that partition are assumed to fail.
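
A minimal sketch of equivalence partitioning in Python, assuming a hypothetical input field that accepts ages from 18 to 60:

# Minimal sketch of equivalence partitioning for a hypothetical "age" field
# that is valid between 18 and 60 (inclusive).
def is_valid_age(age):
    return 18 <= age <= 60

# Three equivalence classes; one representative value is picked from each.
representatives = {
    "below valid range (invalid)": 5,
    "within valid range (valid)": 35,
    "above valid range (invalid)": 75,
}

for partition, value in representatives.items():
    print(partition, value, "->", is_valid_age(value))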

163. Should we test every possible combination/scenario for a program?

Ans:     It is not mandatory. There are several points to consider here: available delivery time, personnel availability, complexity of the application under test, available resources, budget, etc. But we should test to the maximum extent possible, covering the primary functionalities that are a must from the customer's point of view.

164. What criteria do you use when determining when to automate a test or leave it manual?

Ans:       Once the initial bugs are fixed and the application has reached a reasonable level of maturity, we automate the regression tests after first completing them manually. Tests that are stable, repetitive and time-consuming to run by hand are good candidates for automation; tests for rapidly changing or one-off scenarios are usually left manual.

165. Discuss what test metrics you feel are important to publish in an organization.

Ans:    Important metrics are as follows (a worked example of defect density is shown after the list):

·         Defect Density

·         Defect Detection effectiveness

·         Test effectiveness

·         Tester Effectiveness

·         Defect Distribution and  Bugs Severity
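
As a worked example of one of these metrics (the numbers are purely illustrative), defect density is commonly computed as confirmed defects divided by the size of the code base in KLOC:

# Illustrative defect density calculation (numbers are hypothetical).
defects_found = 30          # confirmed defects in the release
size_kloc = 12.5            # size of the code base in thousands of lines of code

defect_density = defects_found / size_kloc
print(round(defect_density, 2), "defects per KLOC")   # 2.4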

166. How do you feel about cyclomatic complexity?

Ans:     Cyclomatic complexity is a measure of the complexity of a program's control flow. It counts the number of linearly independent paths through the code, which gives an indication of how many test cases are needed to exercise every decision outcome.
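
A small illustration: for a simple function, cyclomatic complexity can be estimated as the number of decision points plus one (the shipping_cost function below is hypothetical):

# Illustrative example: cyclomatic complexity = number of decision points + 1.
# This hypothetical function has two decision points (the two if statements),
# so its cyclomatic complexity is 3, and at least 3 test cases are needed to
# cover all independent paths.
def shipping_cost(weight, express):
    cost = 5.0
    if weight > 10:          # decision point 1
        cost += 2.0
    if express:              # decision point 2
        cost *= 2
    return cost

# The three independent paths can be exercised with, for example:
#   shipping_cost(5, False), shipping_cost(12, False), shipping_cost(12, True)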

167. What processes/methodologies are you familiar with?

Ans:

 

168. What type of metrics would you use?

Ans:   We use the requirements traceability matrix (RTM).

169. How do you find out whether a tool works well with your existing system?

Ans:

 

 

170. What automated tools are you familiar with?

Ans:

 

 

171. How well do you work with a team?

Ans:

 

172. How would you ensure 100% coverage of testing?

Ans:        If all the test cases written for the application's functionality have been executed, we can say we have achieved 100% test-case coverage; however, exhaustive (100%) testing is not possible.

173. How would you build a test team?

Ans:    An individual tester does not usually build the testing team; a test lead or project manager assembles it based on project size, availability of test engineers, test duration and test-environment resources.

 

 

174. How will you begin to improve the QA process?

Ans:     By preparing the test strategy, test plans, test documents, test cases and test scenarios.

175. What are CMM and CMMI? What is the difference?

Ans:      Capability Maturity Model (CMM): A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers practices for planning, engineering and managing software development and maintenance

           Capability Maturity Model Integration (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM.

CMM measures the maturity level of an organization by determining if an organization completes the specific activities listed in the Key Performance Areas (KPA), oblivious to whether the completion of such activity leads to the desired result. CMMI is also an activity based approach but the major difference is that CMMI takes a more result-oriented approach when defining and measuring Key Performance Areas.

CMM KPA concentrates on the completion of specific tasks or processes and does not motivate the organization to focus on process architecture. CMMI, on the other hand has an iterative lifecycle that integrates the latest best practices from the industry and attacks risks in process architecture at an early stage.

 

176. Traceability Matrix?

Ans: A traceability matrix is a document that maps the detailed requirements of the product to the matching parts of the high-level design, detailed design, test plan and test cases. The traceability matrix links a business requirement to its corresponding functional requirement, right up to the corresponding test cases (a minimal sketch follows the list below).

·         If a Test Case fails, traceability helps determine the corresponding functionality easily.

·         It also helps ensure that all requirements are tested.
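
A minimal sketch of such a matrix in Python (the requirement and test-case IDs are hypothetical):

# Minimal sketch of a requirement traceability matrix (IDs are hypothetical).
# Each business/functional requirement is linked to the test cases that cover it.
rtm = {
    "REQ-001 Login with valid credentials": ["TC-001", "TC-002"],
    "REQ-002 Lock account after 3 failed attempts": ["TC-003"],
    "REQ-003 Password reset via email": [],          # not yet covered
}

# Traceability makes gaps visible: list requirements with no test cases.
untested = [req for req, cases in rtm.items() if not cases]
print("Requirements without test coverage:", untested)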

177. Requirement Management?

Ans:    Requirements management is the process of documenting, analyzing, tracing, prioritizing and agreeing on requirements and then controlling change and communicating to relevant stakeholders. It is a continuous process throughout a project

178. DP Plan/Defect Prevention?

Ans:         Defect prevention is an important activity in any software project. The purpose of defect prevention is to identify the cause of defects and prevent them from recurring. Defect prevention involves analyzing defects that were encountered in the past and taking specific actions to prevent the occurrence of those types of defects in the future. Defect prevention can be applied to one or more phases of the software life cycle to improve software process quality.

179. Process Maturity Vs Process Capability?

Ans: Capability levels, which belong to a continuous representation, apply to an organization’s process-improvement achievement in individual process area.

         Maturity levels, which belong to a staged representation, apply to an organization’s overall process-improvement achievement using the model.

180. What types of documents would you need for QA, QC, and Testing?

Ans:    For QA/QC, the BRS, SRS and FS documents are needed. For testing:

·         Test scenarios,

·         Test strategy,

·         Test cases,

·         Traceability matrix  (RTM)

·         Acceptance test plan,

·         All types of testing phases and test plan if required for testing

 

 

181. Have you ever created a test plan?

Ans :       Definition: The software test plan is the primary means by which software testers communicate the planned testing activities to the product development team.

182. Describe the components of a test plan, such as tools for interactive products and for database products, as well as cause and effect graphs and data flow diagrams.

Ans :  Main components of test plan are :

·         Background(platform)

·         Test Items(Programs/modules)

·         Features to be tested

·         Features not to be tested

·         Approach

·         Item pass/Fail criteria

·         Suspension/resumption criteria

·         Test Deliverables

·         Testing Tasks

·         Environmental needs

·         Responsibilities

·         Staffing & Training

·         Schedule

·         Resources

·         Risks

·         Approvals

Data flow diagram :

A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system, modeling its process aspects. Often a DFD is a preliminary step used to create an overview of the system, which can later be elaborated. DFDs can also be used for the visualization of data processing (structured design).

Cause and effect graph :

In software testing, a cause–effect graph is a directed graph that maps a set of causes to a set of effects. The causes may be thought of as the input to the program, and the effects may be thought of as the output. Usually the graph shows the nodes representing the causes on the left side and the nodes representing the effects on the right side. There may be intermediate nodes in between that combine inputs using logical operators such as AND and OR.
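
A minimal sketch of turning such a cause-effect rule into test cases (the business rule below is hypothetical):

# Minimal sketch of a cause-effect relationship (hypothetical rule):
# Effect E1 "order accepted" occurs when C1 "item in stock" AND
# (C2 "payment by card" OR C3 "payment by voucher") are true.
from itertools import product

def order_accepted(in_stock, card, voucher):
    return in_stock and (card or voucher)

# Enumerate cause combinations to derive decision-table style test cases.
for c1, c2, c3 in product([False, True], repeat=3):
    print(f"C1={c1} C2={c2} C3={c3} -> E1={order_accepted(c1, c2, c3)}")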

183. How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.

Ans :             A test plan is a contract between the testers and the project team describing the role of testing in the project. The purpose of a test plan is to prescribe the scope, approach, resources and schedule of the testing activities; to identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with the plan. It is therefore imperative that the test plan is made by taking inputs from the product development team, keeping in consideration the project deadlines and the risks involved while testing the product or its components.

The steps in creating test plan are:

1. Identifying requirements for Test: This includes tests for Functionality, Performance, and Reliability etc.

2. Assess Risk and Establish Test Priorities: In this step risks are identified and risk magnitude indicators (high, medium, low) are assigned.

3. Develop Test Strategy: This includes following things:

       i. Type of test to be implemented and its objective

      ii. Stage in which test will be implemented

       iii. Test Completion Criteria

When all these 3 steps are completed thoroughly, a formal document is published stating above things which is known as “Test Plan”.

Bottom up Integration Testing:

The program is combined and tested from the bottom of the tree to the top. Each component at the lowest level of the system hierarchy is tested individually first, and then the next level of components is tested. Since testing starts at the very lowest level of the software, drivers are needed to test these lower layers. Drivers are simply programs designed specifically for testing that make calls to these lower layers. They are developed for temporary use and are replaced when the actual top-level module is ready.

Eg: Consider a Leave Management System. In order to approve leave, there has to be a module to apply leave. If this module for apply leave is not ready, we need to create a driver (which will apply for leave) in order to test the approve leave functionality.

Top down Integration Testing:

Modules are tested by moving downwards through the control hierarchy, beginning with the main control module. A module being tested may call another that is not yet tested. For substituting these lower modules, stubs are used. Stubs are dummy modules developed to test the control hierarchy; they are special-purpose programs that simulate the activity of the missing component.

E.g.: In the Leave Management System, once leave is approved, the leave status can be seen in the leave report. If the report module is not ready, we need to create a dummy implementation of the leave report (a stub), as sketched below.
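
A minimal sketch of that stub in Python (module and function names are hypothetical):

# Minimal sketch of a stub for the Leave Management example above
# (module and function names are hypothetical).

def approve_leave(employee_id, leave_id, report_module):
    # top-level module under test: approves leave and records it in the report
    report_module.record_leave_status(employee_id, leave_id, "APPROVED")
    return "APPROVED"

class LeaveReportStub:
    """Stands in for the real leave-report module, which is not ready yet."""
    def __init__(self):
        self.recorded = []

    def record_leave_status(self, employee_id, leave_id, status):
        # simulate the missing component; just remember what was passed in
        self.recorded.append((employee_id, leave_id, status))

# Top-down test of approve_leave using the stub instead of the real report module.
stub = LeaveReportStub()
assert approve_leave("E42", "L7", stub) == "APPROVED"
assert stub.recorded == [("E42", "L7", "APPROVED")]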

184. Have you ever written test cases, or did you just execute those written by others?

Ans :      A test case is a scenario made up of a sequence of steps and conditions or variables, where test inputs are provided and the program is run using those inputs, to see how it performs. An expected result is outlined and the actual result is compared to it. Certain working conditions are also present in the test case, to see how the program handles the conditions.
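
A minimal sketch of a written test case expressed as an automated check (the login function and its expected behaviour are hypothetical):

# Minimal sketch of a test case written as an automated check
# (the login function and its behaviour are hypothetical).
import unittest

def login(username, password):
    # simplified system under test
    return username == "admin" and password == "secret"

class TestLogin(unittest.TestCase):
    def test_valid_credentials_are_accepted(self):
        # Steps: provide a known-valid username and password.
        # Expected result: login succeeds.
        self.assertTrue(login("admin", "secret"))

    def test_wrong_password_is_rejected(self):
        # Expected result: login fails for an incorrect password.
        self.assertFalse(login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main()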

 

 

 

185. How do you test if you have minimal or no documentation about the product ?

Ans :          Testing is always performed against a given set of requirements and expectations. In the absence of these, one must first try and gather as much information and requirements on the product. This is achieved by:

Performing Exploratory Testing

(Perform a detailed study of the product / application under test, and make a list of features and functionality. The approach usually taken is a depth first breadth later approach.)

Construct a detailed functionality vs. requirement map. This will act as a mini functional-specification document, which can then serve as your reference document for performing detailed, structured testing activities.

186. How do you determine what to test?

Ans :            The duty of the software tester is to go through the requirements documents and functional specification and, based on those documents, focus on writing test cases that cover all the functionality to be tested. The tester should carry out these activities while the application is under development. Once the build is ready for testing, we know what to test and how to proceed. Make sure the main functionality is tested first, so that all the other testers can then test their modules/functionality.

 

 

187.How do you decide when you have tested enough?

Ans:    These are some aspects:

·         When the maximum number of test cases has been executed

·         All the requirements are mapped, i.e. the RTM is filled in completely

·         When Test coverage is > 80%

·         When the bug rate falls below a certain level

188.How do you perform regression testing?

Ans :     During comprehensive testing, testers report mismatches between the expected values and the build's actual values as defect reports. After receiving the defect reports, the developers conduct a review meeting; if a defect is accepted, they make the corresponding code changes and release a modified build with a release note describing the resolved defects. Based on that release note, the relevant test cases are re-executed on the modified build to confirm the fixes and to check that existing functionality has not been affected.

 

 

189. At what stage of life cycle does testing begin in your opinion?

Ans :         Testing is a continuous process and starts as soon as the requirements for the project/product begin to be framed. In the requirements phase, testing (in the form of reviews) checks whether the project/product details reflect the client's ideas and give a complete picture of the project from the client's perspective.

190. How do you analyse your test results? What metrics do you try to provide?

Ans :        After executing the test cases on the corresponding build or version we get the results. Before logging them in the bug-tracking tool we have to analyze them in order to assign a severity. When assigning the severity, try to analyse the problem from the perspective of the end user; this tells you how severe the problem really is.

191. Realizing you won't be able to test everything - how do you decide what to test first?

Ans :           First of all, if limited time is given by the customer for testing due to an imminent release, the customer usually also states which requirements are critical; in that case we test only those requirements. Otherwise, if nothing is specified by the customer, we can proceed as follows: 1. System testing (wherein we test installation, compatibility, etc.). 2. Functional testing (which covers the main functionality, i.e. whether the system produces accurate output). 3. Performance testing. 4. A little regression testing.

192. Where do you get your expected results?

Ans :        "Where do you get your expected results?" is another way of asking "How do you know if a bug is a bug?". There are several ways to determine this, depending on what information is available.

·         You can check the spec or requirements.

·           You can check against the previous version of the product.

·         You can check against competing products.

·          You can talk to the programmer to see how he/she got the idea something should work a certain way.

·         You can apply your own judgment about ease of use.

193. If you were given a program that will average student grades, what kinds of inputs would you use?

Ans:    Refer to question no. 5 for the answer.

194.When should testing be stopped?

Ans :        There is no absolute end to testing; it is a continuous process. But some factors influence when the testing process is stopped:

·         The Exit criteria are met

·         When project deadlines are reached.

·         When all the core functionality has been tested.

·         Test budget is depleted.

195.Explain some techniques for developing software components with respect to testability?

Ans :        Testability is anything that makes software easier to test by making it easier to design and execute tests.

software testability can be explained in terms of:

·         Control: The better we can control it, the more the testing can be automated and optimized.

·         Visibility: What we see is what we test.

·         Operability: The better it works, the more efficiently it can be tested.

·         Simplicity: The less there is to test, the more quickly we can test it.

·         Understandability: The more information we have, the smarter we test.

·         Suitability: The more we know about the intended use of the software, the better we can organize our testing to find important bugs.

·         Stability: The fewer the changes, the fewer the disruptions to testing.

For all of the above, the requirements must be clear and unambiguous. The analyst finalising the requirements must always keep in mind whether each client requirement is testable or not.

196.How would you ensure 100% coverage of testing?

Ans : We cannot perform 100% testing on any application, but the criteria used to declare test completion on a project are:

·         All the test cases have been executed with a certain percentage passing.

·         The bug rate falls below a certain level.

·         The test budget is depleted.

·         Deadlines are reached (project or test).

·         All the functionalities are covered by test cases.

·         All critical & high bugs have a status of CLOSED.

197.When have you had to focus on data integrity?

Ans :   Data Integrity refers to the validity of data.

Data integrity can be compromised in a number of ways:

·         Human errors when data is entered

·         Errors that occur when data is transmitted from one computer to another

·         Software bugs or viruses

·         Hardware malfunctions, such as disk crashes.

We usually focus on Data Integrity checks when porting an application from one Database to another. For example Oracle To MySQL.
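
A minimal sketch of such a post-migration integrity check (the row data is hypothetical; in practice the rows would be fetched from the source and target databases):

# Minimal sketch of a data-integrity check after migrating data between
# databases (the rows here are hard-coded for illustration).
import hashlib

def row_checksum(row):
    return hashlib.md5("|".join(str(field) for field in row).encode()).hexdigest()

source_rows = [(1, "Alice", "alice@example.com"), (2, "Bob", "bob@example.com")]
target_rows = [(1, "Alice", "alice@example.com"), (2, "Bob", "bob@example.com")]

# 1) Row counts must match.
assert len(source_rows) == len(target_rows), "row count mismatch"

# 2) Per-row checksums must match (detects truncated or corrupted fields).
for src, tgt in zip(source_rows, target_rows):
    assert row_checksum(src) == row_checksum(tgt), f"data mismatch in row {src[0]}"

print("Data integrity check passed")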

198. How do you prioritise testing tasks within a project?

Ans :          We can prioritize testing tasks within a project by determining the risk factor of each module on the basis of impact and likelihood, and then starting with the module that has the highest risk factor.

199.Do you know of metrics that help you estimate the size of testing effort?

Ans:       Metrics-based approach: a useful approach is to track the past experience of an organization's various projects and the associated test effort that worked well for those projects. Once there is a set of data covering the characteristics of a reasonable number of projects, this 'past experience' information can be used for future test-project planning. (Determining and collecting useful project metrics over time can be an extremely difficult task.)

200.How do you scope out the size of testing effort?

Ans:    Testing effort depends upon:

·         Requirement size

·         Time of delivery

·         The types of testing required - e.g. functional only, or also performance, security and other types of testing.

So effort estimation should be done on the basis of the above.

201. How do you promote the concept of phase containment and defect prevention?

Ans:   Phase containment is the act of containing faults in one phase of software development before they escape and are found in subsequent phases. An error is a fault that is introduced in the current phase of software development; a defect is a fault that was introduced in a prior phase and discovered in a subsequent phase. You promote the concept of phase containment by relating it to the organization's costs and profitability. In order to do that, you will need to identify the faults that escaped a phase and were found in later phases, and determine the average cost of defects that escape and are found in subsequent phases.

202. Describe any bug you remember.

Ans:          For example, in our project sales tax was supposed to be calculated with respect to the shipment address, but if the address contained an extra space the sales tax was not calculated.

203. Describe the basic elements you put in a defect report.

Ans:        A bug report contains a complete description of the bug, such as: identifier, description, status of the defect, severity, priority, platform, product name, product version, detected during (development, certification, etc.), detected by, detected on date, frequency, and whether the defect is reproducible or not.

204. What sort of things would you put down in a bug report?

Ans:    Bug report contents will depend on the organization.

In general Bug Report contains:

1. BUG ID

2. APPLICATION

3. MODULE

4. FEATURE

5. STATUS

6. PROBLEM TYPE

7. PRIORITY

8. SEVERITY

9. REPORTED BY(EX:TEST ENGINEER)

10. PLATFORM

11. TEST LEAD

12. DEVELOPER TO FIX

13. PROJECT MANAGER

14. SNAPSHOT(IF REQUIRED)

15. PROBLEM SUMMARY

205. How would you define a bug?

Ans:   A bug is an unexpected defect, fault, flaw, or imperfection (e.g. "the software was full of bugs").

206. What are all the basic elements in a defect report?

Ans: 

 

 

 

207. Can you explain a typical defect life cycle?

Ans:   Defect Life Cycle (Bug Life cycle) is the journey of a defect from its identification to its closure. The Life Cycle varies from organization to organization and is governed by the software testing process the organization or project follows and/or the Defect tracking tool being used.

Nevertheless, the life cycle in general resembles the following (a small sketch of the allowed status transitions follows the guidelines below):

Status (alternative status):

NEW

ASSIGNED (OPEN)

DEFERRED

DROPPED (REJECTED)

COMPLETED (FIXED, RESOLVED, TEST)

REASSIGNED (REOPENED)

CLOSED (VERIFIED)

Defect Status Explanation

·         NEW: Tester finds a defect and posts it with the status NEW. This defect is yet to be studied/approved. The fate of a NEW defect is one of ASSIGNED, DROPPED and DEFERRED.

·         ASSIGNED / OPEN: Test / Development / Project lead studies the NEW defect and if it is found to be valid it is assigned to a member of the Development Team. The assigned Developer’s responsibility is now to fix the defect and have it COMPLETED. Sometimes, ASSIGNED and OPEN can be different statuses. In that case, a defect can be open yet unassigned.

·         DEFERRED: If a valid NEW or ASSIGNED defect is decided to be fixed in upcoming releases instead of the current release it is DEFERRED. This defect is ASSIGNED when the time comes.

·         DROPPED / REJECTED: Test / Development/ Project lead studies the NEW defect and if it is found to be invalid, it is DROPPED / REJECTED. Note that the specific reason for this action needs to be given.

·         COMPLETED / FIXED / RESOLVED / TEST: Developer ‘fixes’ the defect that is ASSIGNED to him or her. Now, the ‘fixed’ defect needs to be verified by the Test Team and the Development Team ‘assigns’ the defect back to the Test Team. A COMPLETED defect is either CLOSED, if fine, or REASSIGNED, if still not fine.

 

 

If a Developer cannot fix a defect, some organizations may offer the following statuses:

·         Won’t Fix / Can’t Fix: The Developer will not or cannot fix the defect due to some reason.

·         Can’t Reproduce: The Developer is unable to reproduce the defect.

·         Need More Information: The Developer needs more information on the defect from the Tester.

·         REASSIGNED / REOPENED: If the Tester finds that the ‘fixed’ defect is in fact not fixed or only partially fixed, it is reassigned to the Developer who ‘fixed’ it. A REASSIGNED defect needs to be COMPLETED again.

·         CLOSED / VERIFIED: If the Tester / Test Lead finds that the defect is indeed fixed and is no more of any concern, it is CLOSED / VERIFIED. This is the happy ending.

Defect Life Cycle Implementation Guidelines

·         Make sure the entire team understands what each defect status exactly means. Also, make sure the defect life cycle is documented.

·         Ensure that each individual clearly understands his/her responsibility as regards each defect.

·         Ensure that enough detail is entered in each status change. For example, do not simply DROP a defect but provide a reason for doing so.

·         If a defect tracking tool is being used, avoid entertaining any ‘defect related requests’ without an appropriate change in the status of the defect in the tool. Do not let anybody take shortcuts. Or else, you will never be able to get up-to-date defect metrics for analysis.
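
A minimal sketch of enforcing the status transitions described above (the status names follow this answer; real defect-tracking tools define their own workflows):

# Minimal sketch of enforcing the defect life cycle described above.
# Only the listed transitions are allowed; anything else is rejected.
ALLOWED_TRANSITIONS = {
    "NEW":        {"ASSIGNED", "DROPPED", "DEFERRED"},
    "DEFERRED":   {"ASSIGNED"},
    "ASSIGNED":   {"COMPLETED", "DEFERRED"},
    "COMPLETED":  {"CLOSED", "REASSIGNED"},
    "REASSIGNED": {"COMPLETED"},
    "DROPPED":    set(),
    "CLOSED":     set(),
}

def change_status(current, new):
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Invalid transition: {current} -> {new}")
    return new

status = "NEW"
status = change_status(status, "ASSIGNED")
status = change_status(status, "COMPLETED")
status = change_status(status, "CLOSED")
print(status)   # CLOSED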

208. What is quality assurance?

Ans:     Quality Assurance (QA) is a way of preventing mistakes or defects in manufactured products and avoiding problems when delivering solutions or services to customers. QA is applied to physical products in pre-production to verify what will be made meets specifications and requirements, and during manufacturing production runs by validating lot samples meet specified quality controls. QA is also applied to software to verify that features and functionality meet business objectives, and that code is relatively bug free prior to shipping or releasing new software products and versions.

209. Who in the company is responsible for quality?

Ans:   Everyone involved in the development of the software is responsible for its quality.

A product is said to be a quality product if it satisfies the following:

·         it should be bug free

·         it should be delivered on time with in the budget

·         it should satisfy the requirements.

·         last but not least, it must satisfy the end user.

From the above you can see that quality is not a single person's duty. Everyone involved in the project should follow the defined standards; only then can we produce quality products. QA & QC people monitor the process to ensure that people follow the standards, by conducting walkthroughs, reviews, inspections, audits and training programs.

210.Who defines quality?

Ans:  The customer defines the word, “Quality.”

211. What is the difference between QA & testing?

Ans:     Quality Assurance (QA):

·         QA is planned and systematic way to evaluate quality of process used to produce a quality product.

·         The goal of a QA is to provide assurance that a product is meeting customer’s quality expectations.

·          QA deals with how to prevent bugs from occurring in a product being developed.

·         Software Quality Assurance Engineer’s main responsibility is to create and implement methods and standards to improve development process.

·         QA is associated with activities like measuring the quality of process used to develop a product, process improvement and defect prevention.

·         It consists of auditing and reporting procedures related to development and testing.

Software testing is a planned process that is used to identify the correctness, completeness, security and quality of software.

·         Testing is generally done to demonstrate that the software is doing what it is supposed to do as well as the software is not doing what it is not supposed to do.

·         The goal of testing or software tester is to locate defects and make sure that they get fixed.

 

 

212.What is the role of QA in a development project?

Ans:        To assure quality, the Quality Assurance group must monitor the whole development process. Its main focus is on the prevention of bugs: it sets standards, introduces review procedures, and educates people in better ways to design and develop products.

213. How do you scope, organise, and execute a test project?

Ans:   Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not it uncovered any defects. Checkpoint meetings are held throughout the execution phase, daily if required, to address and discuss testing issues, status and activities.

214.What should development require of QA?

Ans:            The development team wants QA to identify the bugs that they generally fail to catch during unit testing. The QA team is expected to perform a good amount of testing and report the bugs to the development team so that they can fix them; once a bug is fixed, QA is expected to re-test and verify the fix.

215. What should QA require of development?

Ans:      Both developers and QA people should have the same goal (and their performance measured against that): deliver a quality product in time and on budget. You get to define "quality product", but it has to be the same for both groups. Why? Because if it isn't the same, you will get two groups with different agendas and that can quickly deteriorate into a situation that is to the detriment of the product/company.

QA should work (very) closely together with the developers and vice versa, but both should be totally independent of the other in their decision making. They are, after all, responsible for totally different aspects of product development

The way we have set it up is that "Product Development" is a "virtual" department realized by two concrete departments: QA and Development. Both report to the same member of the management team: the CTO. This ensures that there is a single person responsible for the product (our CTO) and that both QA and Development are independent of each other.

216. Define quality for me as you understand it.

Ans:     It is software that is reasonably bug-free and delivered on time and within the budget, meets the requirements and expectations and is maintainable.

 

 

217.Describe to me SDLC as you define it?

Ans:     It's a process of developing a software system in an organized, controlled, and predictable way. The process runs from the conception of the project to its termination, sometimes called a cradle-to-grave process. The System Development Life Cycle model (SDLC model) is also known as the classic life cycle model, the linear sequential model, or the waterfall method.

This has the following activities. 

·         System/Information Engineering and Modeling 

·         Software Requirements Analysis 

·         Systems Analysis and Design  

·         Code Generation  

·         Testing

·         Maintenance

218. Describe to me what you see as a process. Not a particular process, just the basics of having a process.

Ans:       Process is defined as set of activities, methods, practices and transformations that an organization follows to develop and maintain its software projects and products.

219. How will you begin to improve the QA process?

Ans:   We can improve the QA process by following and maintaining simple steps: going through the test strategy and writing the test plans, test documents, test cases and test scenarios in an orderly way, without undue time pressure.

220. What is the difference between QA and QC?

Ans:    Quality assurance is process-oriented and focuses on defect prevention, while quality control is product-oriented and focuses on defect identification.

221.What is CMM and CMMI? What is the difference?

Ans:   CMM: Capability Maturity Model

·         Lack of Integration: CMM has separate models for each function. Such models often overlap, contradict, and display different levels of maturity. This lack of standardization leads to confusion and conflict during the implementation phase and increase training and appraisal costs.

·         Limitations of KPA: The “Key Performance Areas (KPA),” that define CMM levels focus on “policing” activities such as specifications, documentation, audits, and inspections, and do not reveal architecturally significant flaws.

·         Activity-based Approach: CMM is an activity-based approach that considers only the completion of a specific activity, and not whether the completed activity achieved the desired results.

·         Paperwork: CMM places great importance on paperwork and meetings that take management’s time and effort away from actual work processes. CMM traps the organization in recording and complying with processes, often at the cost of strategic goals.

CMMI: Capability Maturity Model Integration 

·         The Software Engineering Institute at Carnegie Mellon University developed Capability Maturity Model Integration (CMMI) in 2006 to integrate and standardize the separate models of CMM, and to eradicate other drawbacks of CMM.

·         CMMI documents industry best practices categorized on separate areas of interests rather than separate functions. Organizations choose from any of the 22 available models depending on the business objectives and each model covers all the functional areas.

CMMI vs CMM KPA

Both CMM and CMMI define five distinct levels of process maturity based on Key Performance Areas (KPA’s). The KPA's of CMMI levels overcome the inefficiency of CMM levels to unearth significant architectural flaws.

·         Level 1 (Initial): The first level of both CMM and CMMI describes an immature organization without any defined processes, run in an ad hoc, uncontrolled, and reactive manner.

·         Level 2 (Repeatable / Managed): Organizations that repeat some processes attain CMM Level 2 (Repeatable). Level 2 of CMMI (Managed), however, requires management of organizational requirements through planned, performed, measured, and controlled processes.

·         Level 3 (Defined): CMM Level 3 mandates a set of documented standard processes to establish consistency across the organization. CMMI Level 3 is an improvement of CMMI Level 2 and describes the organizational processes in standards, procedures, tools, and methods.

·         Level 4 (Managed / Quantitatively Managed): CMM Level 4 (Managed) requires organizations to attain control over processes by using quantitative statistical techniques. CMMI Level 4 (Quantitatively Managed) demands likewise, but also identifies sub-processes that significantly contribute to overall process efficiency.

·         Level 5 (Optimized): CMM Level 5 mandates use of quantitative tools and objectives to manage process improvement. CMMI Level 5 on the other hand focuses on continuously improving process performance through incremental and innovative technological improvements.

While CMM is a certification tool, CMMI is not. An organization is appraised and awarded a CMMI Rating from 1 to 5 depending on the extent to which the organization adopts the selected CMMI model.

222 Describe to me when you would consider employing a failure mode and effect analysis.

Ans:     Failure means "the fact of something expected not being done", i.e. the application performs actions that go against the requirements. Effect analysis: since the application does not perform actions according to the requirements, we have to analyse the effect, i.e. where the application deviates from its requirements and what the causes are. A failure mode and effect analysis (FMEA) is worth employing when potential failures would have a high impact, so that failure modes can be identified, prioritised and addressed before they occur.

223 How do you differentiate the roles of Quality Assurance Manager and Project Manager?

Ans:    Quality Assurance Manager (QA Manager) defines the process to be followed at each phase of SDLC. He defines the standards to be followed, the documents to be maintained and sets the standard for the product.

       It is the Project Manager's responsibility to ensure that the things defined by the QA Manager are implemented. He develops the product from start to finish with his team and ensures that the product to be rolled out is defect-free and meets the standards and views defined by the QA Manager.

QA managers can audit, for certain time periods, the processes being handled by the Project Manager.

224 Tell me about any quality efforts you have overseen or implemented. Describe some of the challenges you faced and how you overcame them.

Ans:

225 How do you deal with environments that are hostile to quality change efforts?

Ans:      It depends on the end user or client. If he is computer literate, then it is better to provide clear documentation on each and every feature and action. If he is not computer literate, then re-engineering (starting from scratch) is better.

 

 

 

    226 What is software testing?

        Ans:      Software testing is the process of evaluating a software item to detect differences between the given input and the expected output, and to assess the features of the software item. Testing assesses the quality of the product. Software testing should be carried out during the development process; in other words, software testing is a verification and validation process.

           Verification  is the process to make sure the product satisfies the conditions imposed at the start of the development phase. In other words, to make sure the product behaves the way we want it to.

           Validation is the process to make sure the product satisfies the specified requirements at the end of    the development phase. In other words, to make sure the product is built as per customer requirements.

       227  What is the purpose of testing?

       Ans:      Regardless of its limitations, testing is an integral part of software development. It is broadly deployed in every phase of the software development cycle. Typically, more than 50% of the development time is spent on testing. Testing is usually performed for the following purposes:

  • To improve quality
  • For Verification & Validation (V&V)

       System testing is carried out by the test engineer, who tests:
                  1) the user interface (colour, size, font, alignment, etc.),
                  2) usability (tab movement, cursor blinking, shortcut keys, etc.),
                  3) functional behaviour (front end, back end),
                  4) non-functional behaviour (performance, security),
                  5) validation.
          Stub: a temporary called program; it behaves like the sub-module when called by the main module.

Mapping SDLC with testing:

          In the requirements stage, a tester can start writing the test plan.

          In the design stage, a tester can write about 60% of the test cases.

          In the coding stage, a tester can write the remaining test cases after studying the functionality.

          In the testing stage, a tester carries out the different kinds of testing.

Integration testing objectives:
  1. To verify the functional, performance and reliability requirements placed on major design items or groups of units.
  2. Success and error cases are simulated via appropriate parameter and data inputs.
  3. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces.
  4. Test cases are constructed to check that all components within assemblages interact correctly.

User interface checks: alignment, font sizes, colours, logo resolution, etc.

       228   What types of testing do testers perform?

Ans:         Test types are introduced as a means of clearly defining the objective of a certain level of testing for a program or project. A test type is focused on a particular test objective, which could be: testing of a function to be performed by the component or system; a non-functional quality characteristic, such as reliability or usability; the structure or architecture of the component or system; or change-related testing, i.e. confirming that defects have been fixed (confirmation testing or retesting) and looking for unintended changes (regression testing). Depending on its objectives, testing is organized differently. Hence there are four software test types:

1.      Functional testing:

2.      Non-functional testing

§  Functionality testing

§  Reliability testing

§  Usability testing

§  Efficiency testing

§  Maintainability testing

§  Portability testing

§  Baseline testing

§  Compliance testing

§  Documentation testing

§  Endurance testing

§  Load testing

§  Performance testing

§  Compatibility testing

§  Security testing

§  Scalability testing

§  Volume testing

§  Stress testing

§  Recovery testing

§  Internationalization testing and Localization testing

3.      Structural testing

4.      Change-related testing (confirmation/retesting and regression testing)

 

229. What is the Outcome of Testing?

Ans:

 

 

 

230. What kind of testing have you done?

Ans:

 

 

231  What is the need for testing?

Ans:     Refer to question 227 for the answer.

232. What are the entry criteria for Functionality and Performance testing?

Ans:

 

233 What is test metrics?

Ans:      Test metrics

•       Metrics can be defined as “STANDARDS OF  MEASUREMENT”.

•       Metric is a unit used for describing  or measuring an attribute.

•       Test metrics are the means by which the software quality can be measured.

•       Test metrics provide visibility into the readiness of the product, and give a clear measurement of the quality and completeness of the product.

234 Why do we need metrics?

Ans:      "You cannot improve what you cannot measure." "You cannot control what you cannot measure."

   TEST METRICS HELPS IN:

·         Take decision for next phase of activities

·         Evidence of the claim or prediction

·         Understand the type of improvement required

·         Take decision on process or technology change

     TYPES OF METRICS:

   Base Metrics (Direct Measure):   Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead and Project Manager; they also feed into the formulas used to derive Calculated Metrics.

    Ex: # of Test Cases, # of Test Cases Executed

Calculated Metrics (Indirect Measure) :   Calculated Metrics convert the Base Metrics data into more useful information.  These types of metrics are generally the responsibility of the Test Lead and can be tracked at many different levels (by module, tester, or project).

Ex: % Complete, % Test Coverage
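
An illustrative calculation of these two calculated metrics from base metrics (the numbers are hypothetical and the exact formulas vary by organization):

# Illustrative calculation of two common calculated metrics from base metrics.
# The numbers are hypothetical.
total_test_cases = 200          # base metric: number of test cases written
executed_test_cases = 150       # base metric: number of test cases executed
requirements_total = 40         # base metric: total requirements
requirements_with_tests = 36    # base metric: requirements covered by at least one test

percent_complete = executed_test_cases / total_test_cases * 100              # 75.0
percent_test_coverage = requirements_with_tests / requirements_total * 100   # 90.0

print(f"% Complete: {percent_complete:.1f}")
print(f"% Test Coverage: {percent_test_coverage:.1f}")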

Related topics: the metrics life cycle, defect patterns, and defect metrics.

 

234 Why do you go for White box testing, when Black box testing is available?

Ans:     Because white box testing exercises the internal code of the unit/program, whereas black box testing has nothing to do with the internal code; it tests the system against expected outputs only. Neither can replace the other; the appropriate tests are performed based on the testing requirement.

235 What are the entry criteria for Automation testing?

Ans:

236. When to start and Stop Testing?

Ans:  The following are the common test start criteria:

·         Testing starts right from the requirements phase and continues till the end of SDLC

·         Objective of starting early: Requirements related defects caught later in the SDLC result in higher cost to fix the defect.

The following are few of the common Test Stop criteria:

·         All the high priority bugs are fixed.

·         The rate at which bugs are found is too small.

·         The testing budget is exhausted.

·         The project duration is completed.

·         The risk in the project is under acceptable limit.

Practically, the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X amount of testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resource project, risk can be gauged simply by:

·         Measuring Test Coverage.

·         Number of test cycles.

·         Number of high priority bugs.

237. Define quality?

Ans:     Software quality is the degree of conformance to explicit or implicit requirements and expectations.

Definition by IEEE

§  The degree to which a system, component, or process meets specified requirements.

§  The degree to which a system, component, or process meets customer or user needs or expectations.

Definition by ISTQB

§  quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.

§  software quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.

 

238. What is a baseline document? Can you name any two?

Ans:       A baseline document is a reviewed and approved document that serves as the agreed reference point for development and testing; test cases are derived from and traced back to it. Two examples are the Software Requirements Specification (SRS) and the Functional Requirements Specification (FRS).

 

 

 

239. What is verification?

 

Ans:      Refer to question 226 for the answer.

 

 

240. What is validation?

 

Ans :     Refer to question 226 for the answer.

 

 

241. What is quality assurance?

Ans:         Quality Assurance (QA) is a way of preventing mistakes or defects in manufactured products and avoiding problems when delivering solutions or services to customers. QA is applied to physical products in pre-production to verify what will be made meets specifications and requirements, and during manufacturing production runs by validating lot samples meet specified quality controls. QA is also applied to software to verify that features and functionality meet business objectives, and that code is relatively bug free prior to shipping or releasing new software products and versions.

 

242. What is quality control?

Ans:      Quality control (QC) is a procedure or set of procedures intended to ensure that a manufactured product or performed service adheres to a defined set of quality criteria or meets the requirements of the client or customer.

 

243. What is SDLC and TDLC?

Ans:      SDLC:    The software development life cycle (SDLC) is a framework defining tasks performed at each step in the software development process. SDLC is a structure followed by a development team within the software organization. It consists of a detailed plan describing how to develop, maintain and replace specific software. The life cycle defines a methodology for improving the quality of software and the overall development process.

SDLC is the process of developing information systems. The SDLC contains 7 phases:

        1. Requirements Capturing

        2. Analysis

        3. Design

        4. Coding

        5. Testing

        6. Implementation

        7. Support/Maintenance

          TDLC: TDLC stands for the testing development life cycle:

1.      test plan

2.      identify test scenarios

3.      preparing test cases

4.      executing test cases

5.      identify defects

6.      reporting defects

7.      tracking defects

8.      close

 

 

244. What are the Qualities of a Tester?

Ans:    Qualities of testers:

·         Attention to detail

·         Ability to communicate

·         Patience

·         Willingness to learn

·         Prioritization skills

·         Time Management

·         Organization skills

·         Adaptability

·         Ability to think outside the box

245. When to start and Stop Testing?

Ans:     An early start to testing reduces the cost and rework time and helps deliver error-free software to the client. In the SDLC, testing can start from the requirements phase and end with the deployment of the software; it also depends on the development model that is being used.

It is difficult to determine when to stop testing, as testing is a never-ending process and no one can say that any software is 100% tested (see question 236 for common stop criteria).

 

246. What are the various levels of testing?

Ans:    The different levels of testing are:

·         component testing

·         integration testing

·         system testing

·         acceptance testing

247. What are the types of testing you know and you experienced?

Ans:    Types of testing experienced:

·         functional testing

·         non-functional testing

·         usability

·         reliability

·         compatibility

·         installation/uninstallation

·         performance

·         security

·         accessibility

·         localization

·         system integration testing

·         portability

·         stress

·         volume

·         load

·         unit

·         manual support testing

·         regression

·         retesting

·         sanity

·         soak testing

·         ad-hoc testing

·         paired testing

·         agile testing

·         alpha, beta testing...etc....

 

248. What exactly is Heuristic checklist approach for unit testing?

Ans:       It is a method in which the most appropriate of several solutions, found by alternative methods, is selected at successive stages of testing. The checklist prepared to guide this selection is called a heuristic checklist.

 

249. After completing testing, what would you deliver to the client?

Ans:      After completing testing, we deliver the user manual and the product to the client; the exact deliverables also depend on the test plan document. The test deliverables are:

a) test plan document, b) master test case document, c) test summary report, d) defect reports.

 

250. What is a Test Bed?

Ans:      The elements that support the testing activity before the actual testing starts, such as test data and data guidelines, are collectively called the test bed.

 

251. What is a Data Guidelines?

Ans:         Data guidelines are used to specify the data required to populate the test bed and prepare test scripts. They include all data parameters that are required to test the conditions derived from the requirements or specifications.

 

252. Why do you go for Test Bed?

Ans:      Every piece of software needs certain hardware and software requirements to operate fully, and these should be fulfilled before testing. A test bed comprises 1) the required OS and 2) the hardware configuration (RAM, hard disk, etc.), so the test bed is always set up before testing. We go for a test bed:

a) to meet customer requirements;

b) to find as many defects as quickly as possible;

c) to grow in the market;

d) to keep testing independent of the development environment;

e) to adhere to quality factors.

 

253. What is Severity and Priority and who will decide what?

Ans:      Severity: it describes the seriousness of the defect with respect to functionality, and it is assigned by the test engineer.

        Priority: it describes the importance (urgency) of solving the defect with respect to the customer; it is typically decided by the developer or project lead.

 

254. Can Automation testing replace manual testing? If it so, how?

Ans:      No, automation testing cannot replace manual testing. Automation tools follow the GIGO (garbage in, garbage out) principle, so they are only as good as the tests fed into them. Performance-type testing is better done through automation than manually, whereas user-interface and look-and-feel testing is better done manually, so both are needed.

255. What is a test case?

Ans:      A set of input values, execution preconditions, expected results and execution postconditions developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
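
A test case can be captured as a structured record; a minimal sketch follows (the field names and values are illustrative, not from a real test repository):

# A hypothetical test case record showing the elements named in the definition above.
login_test_case = {
    "id": "TC_LOGIN_001",
    "objective": "Verify login with valid credentials",
    "preconditions": ["User account exists", "Login page is reachable"],
    "inputs": {"username": "testuser", "password": "Valid@123"},
    "steps": ["Open login page", "Enter username and password", "Click Login"],
    "expected_result": "User is redirected to the home page",
    "postconditions": ["User session is created"],
}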

256. What is a test condition?

Ans:     An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

 

257. What is the test script?

Ans:      It is commonly used to refer to a test procedure specification, especially an automated one.

 

 

 

258. What is the test data?

Ans:      Data that exists before a test is executed, and that affects or is affected by the component or system under test.

 

259. What is an Inconsistent bug?

Ans:    A bug that is not 100% reproducible when re-running the same steps in the same version of the application; it occurs sometimes and not at other times, i.e. it is unstable.

 

260. What is the difference between Re-testing and Regression testing?

Ans:     Retesting: it is done to ensure that the bug is fixed and that the previously failed functionality is now working fine.

It is done to verify the fixes of reported bugs.

·         planned testing

·         done on failed test cases

Regression testing: it is the re-execution of test cases for the unchanged parts of the application to see that the unchanged functionality is still working fine.

·         generic testing

·         done on passed test cases

 

 

261.What are the different types of testing techniques?

Ans:     There are 2 types of testing technique:

1. Black Box testing: It is a method of testing in which the black box test engineer will perform testing on the functional part of the application without having any knowledge on the structural part of an application.

2. White box testing: It is a method of testing in which the white box test engineer will perform testing on the structural part of an application. Usually the developers are the White box test engineers.

Apart from these two techniques, there is one more technique called gray box testing.

Gray box testing: It is a method of testing in which one performs testing on both the functional part and the structural part of an application. Usually someone who has knowledge of the structural part of the application performs this testing.

262.What are the different types of test case techniques?

Ans:      The test case design techniques used in software testing are listed below (a small sketch applying the first two follows the list):

1. Equivalent Class Partitioning

2. Boundary Value Analysis

3. Error Guessing or Probability Class Partitioning

4. Decision Table

5. Special Values

6. Error Based

7. I/O Domain

8. Flow Chart
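
A minimal sketch of the first two techniques applied to a hypothetical input field that accepts ages from 18 to 60 (the rule and values are illustrative only):

# Equivalence Class Partitioning: one representative value per partition.
valid_partition = [30]            # partition 18..60
invalid_partitions = [10, 75]     # partitions below 18 and above 60

# Boundary Value Analysis: values on and around each boundary.
boundary_values = [17, 18, 19, 59, 60, 61]

def is_valid_age(age):
    """Hypothetical rule under test."""
    return 18 <= age <= 60

for age in valid_partition + invalid_partitions + boundary_values:
    print(age, "->", "accepted" if is_valid_age(age) else "rejected")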

263.What are the risks involved in  testing?

Ans:  The following are the major risks involved in Software Testing.

·         Unclear or ambiguous Requirements

·         Lack of time

·         Lack of Resources (SW,HW and Human)

·          Lack of Documents(BRS,SRS or Use Cases)

·         Lack of knowledge of the project

264.Differentiate test bed and test environment?

Ans:     Test Bed: An execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, and other application or system software.

          Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.

265. What is the difference between defect, error, bug, failure and fault?

Ans:      Error: A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. This can be a misunderstanding of the internal state of the software, an oversight in terms of memory management, confusion about the proper way to calculate a value, etc.

      Failure: The inability of a system or component to perform its required functions within specified performance requirements. See: bug, crash, exception, and fault.

       Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner. See: anomaly, defect, error, exception, and fault. Bug is terminology of Tester.

      Fault: An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner. See: bug, defect, error, exception.

 Defect: Commonly refers to trouble with a software product, either in its external behavior or in its internal features.

 

266.What is the difference between the quality and testing?

Ans:  Quality is the extent to which the application has correctness, completeness and perfection, meets the required standards, and satisfies the customer specifications.

          Testing is the process of identifying the quality of an application.

267.What is the difference between white and black box testing?

Ans:    White box testing: White box testing is done by the developers. It requires knowledge of the internal coding of the software. White box testing is concerned with testing the implementation of the program. The intent of this testing is not to exercise all the different input or output conditions, but to exercise the different programming structures and data structures used in the program. It is commonly called structural testing. White box testing is mainly applicable to the lower levels of testing: unit testing and integration testing. Implementation knowledge is required for white box testing.

    Black box testing: Black box testing is done by the professional testing team. It does not require knowledge of the internal coding of the application; the application is tested against its functionality without knowledge of the internal coding of the software. In black box testing the structure of the program is not considered; test cases are decided solely on the basis of the requirements or specification of the program or module. Black box testing is mainly applicable to the higher levels of testing: acceptance testing and system testing. Implementation knowledge is not required for black box testing.

268.what is the difference between quality assurance and quality control?

Ans:     Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.

  Quality Control: A set of activities designed to evaluate a developed work product.

269.What is the difference between testing and debugging?

Ans:      Testing is meant to find defects in the code, or from a different angle, to prove to a suitable level (it can never be 100%) that the program does what it is supposed to do. It can be manual or automated, and it has many different kinds, like unit, integration, system / acceptance, stress, load, soak etc. testing.

 

     Debugging is the process of finding and removing a specific bug from the program. It is always a manual activity.

Put more simply:

Testing is finding and locating defects; it is carried out by the testers, whose intention is to find as many bugs/defects as possible.

Debugging is fixing the defects/bugs; it is carried out by the developers, whose intention is to fix the bugs/defects.

 

270.What is the difference between bug and defect?

Ans:      Bug: An informal word describing any of the above; a deviation from the expected result.

A software bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working as intended, or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program's source code or its design. It is said that there are bugs in all useful computer programs, but well-written programs contain relatively few bugs, and these bugs typically do not prevent the program from performing its task. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports about bugs in a program are referred to as bug reports, also called PRs (problem reports), trouble reports, CRs (change requests), and so forth.

           Defect: A problem in the algorithm that leads to failure. A defect describes something that normally works but has something out of spec.

271.What is the difference between  verification and validation?

Ans:    Validation:Determination of the correctness of the products with respect to the user needs and requirements.

           Verification:Determination of the correctness of the product with respect to the test conditions/requirement imposed at the start.

Verification  & Validation:

1. Verification is a static practice of verifying documents, design, code and program.

1. Validation is a dynamic mechanism of validating and testing the actual product.

2. It does not involve executing the code.

2. It always involves executing the code.

3. It is human based checking of documents and files.

3. It is computer based execution of program.

4. Verification uses methods like inspections, reviews, walkthroughs, and Desk-checking etc.

4. Validation uses methods like black box (functional)  testing, gray box testing, and white box (structural) testing etc.

5. Verification is to check whether the software conforms to specifications.

5. Validation is to check whether software meets the customer expectations and requirements.

6. It can catch errors that validation cannot catch. It is low level exercise.

6. It can catch errors that verification cannot catch. It is High Level Exercise.

7. Target is requirements specification, application and software architecture, high level, complete design, and database design etc.

7. Target is the actual product - a unit, a module, a set of integrated modules, and the effective final product.

8. Verification is done by QA team to ensure that the software is as per the specifications in the SRS document.

8. Validation is carried out with the involvement of testing team.

9. It generally comes first-done before validation.

9. It generally follows after verification.

272.What is the difference between functional spec. and business requirement specification?

Ans:          BRS: The BRS contains the basic requirements of the customer that are to be developed as software, along with the project cost, schedule and target dates. SRS: The SRS is the implemented form of the BRS. The SRS is often referred to as the parent document of project management documents such as design specifications, statements of work, software architecture specifications, testing and validation plans, and documentation plans. The basic issues an SRS addresses are: functionality (what is the software supposed to do), external interfaces (how the software interacts with the user, other hardware, and other system software), performance (the speed of the application, recovery time, response time, availability of various software functions), attributes (portability, security, correctness, etc.) and design constraints (OS environments, implementation languages, database integrity and resource limits). The SRS contains the functional and non-functional requirements only.

            FRS: The FRS document provides a more detailed and descriptive form of the SRS. It contains the technical information and data needed to design the application. The FRS defines what the software functionality will be and how to implement it.

273.What is the difference between unit testing and integration testing?

Ans:             A unit test is a test written by the programmer to verify that a relatively small piece of code is doing what it is intended to do. They are narrow in scope, they should be easy to write and execute, and their effectiveness depends on what the programmer considers to be useful. The tests are intended for the use of the programmer, they are not directly useful to anybody else, though, if they do their job, testers and users downstream should benefit from seeing fewer bugs.

Part of being a unit test is the implication that things outside the code under test are mocked or stubbed out. Unit tests shouldn't have dependencies on outside systems. They test internal consistency as opposed to proving that they play nicely with some outside system.

An integration test is done to demonstrate that different pieces of the system work together. Integration tests cover whole applications, and they require much more effort to put together. They usually require resources like database instances and hardware to be allocated for them. The integration tests do a more convincing job of demonstrating the system works (especially to non-programmers) than a set of unit tests can, at least to the extent the integration test environment resembles production.

Actually "integration test" gets used for a wide variety of things, from full-on system tests against an environment made to resemble production to any test that uses a resource (like a database or queue) that isn't mocked out.

274.What is the difference between  volume and load?

Ans:           Testing the application with a large amount of data in the database is volume testing, whereas load testing applies the anticipated load levels to identify problems with resources and response time. Load testing is performed under the customer's expected configuration with the expected load, whereas volume testing is performed with a huge volume of data.

275.What is the difference between volume and stress?

Ans    Volume testing: This is done to test how the system handles the need to process huge volumes of data.

       Stress testing: Here we apply more users or transactions than prescribed, with varying resources (RAM, bandwidth, etc.), and check where the system can no longer handle the load. The intention of this is to break the system.

276.What is the difference between  stress and load testing?

Ans:      Load Testing is testing the application for a given load requirements which may include any of the following criteria:

·         Total number of users.

·         Response Time

Throughput, plus some parameters to check the state of the servers/application. Stress testing, by contrast, is testing the application under unexpected load. It includes:

·         Virtual users

·         Think-Time

 

277.What is the difference between two tier and three tier architecuture?

Ans:     Two-Tier Architecture:   The two-tier is based on Client Server architecture. The two-tier architecture is like client server application. The direct communication takes place between client and server. There is no intermediate between client and server. Because of tight coupling a 2 tiered application will run faster.

Advantages:

·         Easy to maintain, and modification is relatively easy

·         Communication is faster

Disadvantages:

·         In two-tier architecture, application performance degrades as the number of users increases.

·         Cost-ineffective

Three-Tier Architecture:   Three-tier architecture typically comprise a presentation tier, a business or data access tier, and a data tier. Three layers in the three tier architecture are as follows:

1) Client layer: It is also called as Presentation layer which contains UI part of our application. This layer is used for the design purpose where data is presented to the user or input is taken from the user. For example designing registration form which contains text box, label, button etc.

2) Business layer: In this layer all the business logic is written, such as validation of data, calculations, data insertion, etc. It acts as an interface between the client layer and the data access layer. This layer, also called the intermediary layer, helps make communication between the client and data layers faster.

3) Data layer:   In this layer the actual database comes into the picture. The data access layer contains methods to connect to the database and to insert, update, delete and get data from the database based on our input data.

Advantages:

·         High performance, lightweight persistent objects

·         Scalability – Each tier can scale horizontally

·         Performance – Because the Presentation tier can cache requests, network utilization is minimized, and the load is reduced on the Application and Data tiers.

·         High degree of flexibility in deployment platform and configuration

·         Better Re-use

·         Improve Data Integrity

·         Improved Security – Client is not direct access to database.

·         Easy to maintain and modify; changes won’t affect other modules

·         In three tier architecture application performance is good.

Disadvantages

·         Increase Complexity/Effort

 

278.what is the difference between client sever and web based testing?

Ans:       Client - Server Testing :

·         It's a two tier application. Front end and Back end, where front end is client request and back end is mounted with application and database servers together.

·         Limited no of users are going to access this type of application.

·         We have to concentrate only on functionality testing.

      

 Web based Testing :

·         It's a 3tier/n tier architecture. Client request --> Web server --> Application Server --> Database Server.

·         Unlimited no of users, where we can not restrict the no of users who will access the application

·          We need to concentrate on functional testing as well as non-functional testing like security testing, performance testing and compatibility testing.

279.What is the difference between integration and system testing?

Ans:       Integration testing is a testing in which individual software modules are combined and tested as a group while System testing is a testing conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements.

          System testing is conducted at the final level, while integration testing is done each time modules are combined or a new module needs to be integrated with the system.

280.what is the difference between code walkthrough and code review?

Ans:   A walkthrough is an informal review. In a walkthrough, the author of the code leads the session and the other participants review the code.

       A code review can be formal or informal. If it is informal, it is known as a peer review or walkthrough; if it is formal, it is known as an inspection. Reviews are static testing processes, hence no code is actually executed during a review.

281.What is the diff between walk through and inspection?

Ans:       Walk through is an informal meeting for evaluation. No preparation is required

           Inspection is a formal method that deserves careful consideration by an organization concerned with the quality of the product. It is conducted by quality control members.

282.What is the Diff between SIT & IST?

Ans:      System Integration Testing: During this testing, testers check whether the application supports the interfaces of other applications needed to complete its business transactions.

Ex: an online e-commerce application depends on a banking application to complete its transactions.

 

Interconnect Stress Test (IST) is an accelerated stress test method used to evaluate the integrity of the Printed Circuit Board (PCB) interconnect structure. It's an objective test whose results are timely, repeatable, reproducible and unique

 

283.What is the Diff between static and dynamic?

Ans:      Static Testing:Under Static Testing code is not executed. Rather it manually checks the code, requirement documents, and design documents to find errors. Hence, the name "static".

Main objective of this testing is to improve the quality of software products by finding errors in early stages of the development cycle. This testing is also called as Non-execution technique or verification testing.

         Dynamic Testing: Under dynamic testing, code is executed. It checks the functional behavior of the software system, memory/CPU usage and overall performance of the system, hence the name "dynamic". The main objective of this testing is to confirm that the software product works in conformance with the business requirements. This testing is also called the execution technique or validation testing.

284.What is the diff between alpha testing and beta testing?

Ans:    Alpha Testing:   During this testing, the organization invites the customer to the company, gives training on the application and gets feedback about the product from the customer.

           Beta Testing: During this testing, the responsible team goes to the customer's site and, in the customer's environment, gives training and gets feedback about the application.

285.What are the Minimum requirements to start testing?

Ans:   1.baseline document

          2. test condition, test cases,test script

           3. stable application

          4. minimum hardware/software requirements.

286.What is Smoke Testing & when it will be done?

Ans:       Smoke Testing: Generally the testing process starts with smoke testing to estimate the stability of the build. In smoke testing, testers concentrate on factors such as whether the build is:

·         Understandable

·         Operable

·         Observable

·         Controllable

·         Consistent

·         Maintainable

·         Automatable

It should be done at the early stages of the test process.

287.What is Adhoc Testing? When it can be done?

Ans:       Due to lack of time, testers may not be able to conduct testing in a systematic way; in such cases testers resort to an ad-hoc style, i.e. informal testing without documented test cases.

288.What is cookie testing?

Ans:       We need to check that our web application writes cookies properly on the different browsers (as specified in the requirements) and that the application works properly when using them. We should always check our web application on the major browsers such as IE, Firefox, etc.
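
A minimal sketch of such cookie checks, assuming Selenium WebDriver and a local Chrome driver are installed; the URL is a placeholder, not a real application:

# Sketch only: assumes Selenium WebDriver and a chromedriver are available.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/login")      # placeholder URL

# 1. Check that the application writes its cookies.
cookies = driver.get_cookies()
print("Cookies written:", [c["name"] for c in cookies])

# 2. Check behaviour when cookies are removed (the application should still
#    behave sensibly, e.g. ask the user to log in again rather than crash).
driver.delete_all_cookies()
driver.refresh()

# 3. Repeat the same checks on other major browsers (e.g. webdriver.Firefox()).
driver.quit()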

289.What is security testing?

Ans:          Here testers check whether the application is secure in terms of access and authorization control. Security testing is used to find out all the loopholes and weaknesses of the system. It ensures that the systems or applications used by the organization are secure and not exposed to any type of attack.

290.What is database testing?

Ans: Database testing generally deals with the following (a small sketch of check (a) follows the list):

          a)Checking the integrity of UI data with Database Data

          b)Checking whether any junk data is displaying in UI other than that stored in Database

          c)Checking execution of stored procedures with the input values taken from the database tables

          d)Checking the Data Migration .

          e) Execution of jobs, if any
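
A minimal sketch of check (a), comparing a value shown in the UI with the value stored in the database; the table, column and UI value are hypothetical, and sqlite3 stands in for the real database:

import sqlite3

# Stand-in database; in practice this would be a connection to the real DB.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
db.execute("INSERT INTO customers VALUES (42, 'user@example.com')")

ui_displayed_email = "user@example.com"   # value captured from the UI screen

db_email = db.execute(
    "SELECT email FROM customers WHERE id = ?", (42,)
).fetchone()[0]

assert ui_displayed_email == db_email, "UI data does not match database data"
print("UI and database are consistent")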

291. What is the relationship between Quality & Testing?

Ans:

 

 

292. How do you determine, what to be tested?

Ans:

293. How do you go about testing a project?

Ans:

294. What is the Initial Stage of testing?

Ans:

295. What is Web Based Application Testing?

Ans:

 

296. What is Client Server Application Testing?

Ans:

 

297. What is Two Tier & Three tier Architecture?

Ans:

 

298. What is the use of Functional Specification?

Ans:

 

299. Why do we prepare test condition, test cases, test script (Before Starting Testing)?

Ans:

 

300. Is it not waste of time in preparing the test condition, test case & Test Script?

Ans:

 

301. How do you go about testing of Web Application?

Ans:        For testing any application, one should be clear about the requirements and specification documents. For testing a web application, the tester should know what the web application deals with. The test cases written should be of two different types: 1) test cases related to the look and feel of the web pages and navigation, and 2) test cases related to the functionality of the web application. Make sure you know whether the web application is connected to a database for its inputs; if there is a database, write test cases based on it and cover back-end testing as well. The web application should also be tested for the server response time for displaying the web pages, and the web pages should be checked under load as well. For load testing, tools are very useful for simulating many users.

 

 

302. How do you go about testing of Client Server Application?

Ans:

 

303. What is meant by Static Testing?

Ans:      Static testing includes inspections and structured peer reviews of requirements and design as well as code.

It is used to check whether the process and its work products are correct, without executing the code.

304. Can the static testing be done for both Web & Client Server Application?

Ans:     Static testing is done before executing test cases. They involve walkthrough, inspection and review. It’s done in any testing applications.

305. In the Static Testing, what all can be tested?

Ans:

 

306. Can test condition, test case & test script help you in performing the static testing?

Ans:

 

307. What is meant by dynamic testing?

Ans:      During this testing the application is executed by giving inputs and observing the outputs.

It verifies that the software satisfies the specified requirements.

308. Is the dynamic testing a functional testing?

Ans:     Dynamic testing includes both functional and non-functional testing (Load/Stress/Performance testing). Dynamic testing includes unit, Integration, system and acceptance testing.

309. Is the Static testing a functional testing?

Ans:       No. Static testing is also called verification; it involves inspections and walkthroughs. Dynamic testing deals with functional (i.e. execution-based) testing, whereas in static testing everything is verified without executing the code.

310. What is the functional testing you perform?

Ans:  

 

311. What is meant by Alpha Testing?

Ans:        During this testing, the organization invites the customer to the company, gives training on the project and gets feedback from the customer about the application.

312. What kind of Document you need for going for an Functional testing?

Ans:      A test team can utilize a variety of documents depending on the industry, company and/or project.  In general, the following documents can be used to facilitate functional testing:

1) Systems Requirements Specifications (SRS)

2) Business Requirements Document (BRD)

3) Functional Requirements Document (FRD)

4) Work-flow Diagrams

5) Wireframes

6) Mock-ups

7) Test Strategy

8) Test Plan

313. What is meant by Beta Testing?

Ans:   During this testing the responsible team goes to the customer's place; the customers are involved, receive training and give feedback about the application.

314. At what stage does unit testing have to be done?

Ans:  Unit testing is done during the coding/development phase, as soon as a unit or module is developed and before integration testing.

315. Who can perform the Unit Testing?

Ans:  Developers can perform the unit testing.

316. When will the Verification & Validation be done?

Ans: Verification is done in the earlier stages of the project and during unit and system testing. Validation is done during acceptance testing.

317. What is meant by Code Walkthrough?

Ans:     A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the    programmer's logic and assumptions.

 

 

318. What is meant Code Review?

Ans:       A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.

319. What is the testing that a tester performs at the end of Unit Testing?

Ans:

 

320. What are the things, you prefer & Prepare before starting Testing?

Ans:    Before starting testing we have to prepare the test environment, test data and test cases; the most important thing is good knowledge of the application in all relevant aspects.

321. What is Integration Testing?

Ans:  Integration testing (sometimes called integration and testing, abbreviated I&T) is the phase in software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before validation testing. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.

 

322. What is Incremental Integration Testing?

 

Ans:       The incremental approach has the advantage that the defects are found early in a smaller assembly when it is relatively easy to detect the cause. Within incremental integration testing  a range of possibilities exist, partly depending on the system architecture:

§  Top down: Testing takes place from top to bottom, following the control flow or architectural structure (e.g. starting from the GUI or main menu). Components or systems are substituted by stubs.

§  Bottom up: Testing takes place from the bottom of the control flow upwards. Components or systems are substituted by drivers.

§  Functional incremental: Integration and testing take place on the basis of the functions and functionalities, as documented in the functional specification.

 

323. What is meant by System Testing?

Ans       System testing: testing the complete, integrated application against all of its requirements is called system testing.

 

 

 

324. What is meant by SIT?

Ans:         System integration testing (SIT) is a high-level software testing process in which testers verify that all related systems maintain data integrity and can operate in coordination with other systems in the same environment. The testing process ensures that all subcomponents are integrated successfully to provide expected results

 

325. When do you go for Integration Testing?

Ans        We normally go for integration testing when the individual software modules are combined and need to be tested as a group.

326. Can the System testing be done at any stage?

Ans:

 

 

327. What are stubs & drivers?

Ans       Driver: It is a temporary calling program. It functions like a main module that calls the submodules under test. Stub: It is a temporary called program that stands in for a lower-level module which is not yet available.
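
A minimal sketch (hypothetical modules) showing both: a stub standing in for a not-yet-written callee, and a driver standing in for a not-yet-written caller.

# Stub: temporary stand-in for a lower-level module that is not ready yet.
def get_exchange_rate_stub(currency):
    return 1.1          # fixed, fake value

# Module under test: would normally call the real exchange-rate module.
def convert(amount, currency, rate_provider=get_exchange_rate_stub):
    return amount * rate_provider(currency)

# Driver: temporary calling program that exercises the module under test
# because the real top-level caller does not exist yet.
def driver():
    print("100 EUR ->", convert(100, "EUR"))

if __name__ == "__main__":
    driver()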

 

328. What is the Concept of Up-Down & Down-Up in Testing in integration testing?

Ans:       Up-Down (top-down):  An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower-level components being simulated by stubs. Tested components are then used to test lower-level components. The process is repeated until the lowest-level components have been tested.

               Down-Up (bottom-up): An approach to integration testing where the lowest-level components are tested first, then used to facilitate the testing of higher-level components. The process is repeated until the component at the top of the hierarchy is tested.

 

329. What is the final Stage of Integration Testing?

Ans:     System integration testing is the final stage of integration testing.

 

330. Where in the SDLC does testing start?

Ans:      Testing starts right from the requirements phase of the SDLC and continues till the end of the life cycle (see question 236).

331. What is the Outcome of Integration Testing?

 

Ans:       The outcome of integration testing is the integrated system, ready for system testing, together with the integration test report and the defects found in the interfaces between the modules.

 

332. What is meant by GUI Testing?

Ans:       GUI means Graphical User Interface. Testing the user interface design is called GUI testing; it covers checks such as alignment, font sizes, colors, logo resolution, navigation and other look-and-feel aspects.

 

333. What is meant by Back-End Testing?

Ans:       Checking the database by writing queries, based on knowledge of the database table structure, is known as back-end testing. In other words, validating the data in the database against the respective front-end screens is called back-end testing.

 

334. What are the features, you take care in Prototype testing?

Ans:      In prototype testing the tester should check that all the client's needs are met by the prototype application, and should also take care of the free flow of navigation.

 

335. What is Mutation testing & when can it be done?

Ans:       Mutation testing is done to evaluate the effectiveness of the test cases. To check the test cases, small changes are made to the application code; on execution, the test cases should point out those code changes. If the changes are pointed out, the test cases are effective; otherwise the test cases are not adequate.
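
A minimal sketch of the idea (the function is hypothetical): the "mutant" changes one operator, and an effective test suite should fail against it, i.e. "kill" the mutant.

def is_adult(age):
    return age >= 18        # original code

def is_adult_mutant(age):
    return age > 18         # mutant: '>=' changed to '>'

def test_suite(func):
    """Returns True if all test cases pass against the given implementation."""
    return func(18) is True and func(17) is False

print("Original passes:", test_suite(is_adult))             # True
print("Mutant killed:  ", not test_suite(is_adult_mutant))  # True -> tests are effective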

 

336. What is Compatibility Testing?

Ans:  Compatibility testing, part of software non-functional testing, is testing conducted on the application to evaluate its compatibility with the computing environment. The computing environment may contain some or all of the following elements: hardware platform, operating system, browsers, database, other system software and network configuration.

 

337. What is Usability Testing?

Ans:            Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.[1] This is in contrast with usability inspection methods where experts use different methods to evaluate a user interface without involving users.

 

338. What is the Importance of testing?

Ans:      Testing is one of the key phases of the SDLC. The Software Development Life Cycle is the process of building an application, product or project through different phases:

Requirement - hardware/software resources, plan, team size, budget, etc.
Analysis - requirements, functional specs, BRS, FRS, SRS, use cases, test cases, test plan
Design - HLDD, LLDD, detailed design documents
Coding - modules implemented per the LLDD, with individual logic for every module
Testing - smoke testing, functional testing, integration testing and system testing
Implementation and Maintenance - port testing, change requests, enhancements, etc.

 

339. What is meant by regression Testing?

Ans:      While fixing any bug, the developer makes some changes in the code; when this new code is merged with the existing code, it may introduce new bugs. To verify this, the testing team should execute the previous test cases and check whether they give the same results. This testing is called regression testing.
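
A minimal sketch (the function and tests are hypothetical): after a bug fix in one branch, the previously written test cases are re-executed to confirm both the fix and the unchanged behaviour.

import unittest

def shipping_cost(weight_kg):
    # A bug fix was applied to the >10 kg branch; the <=10 kg branch is unchanged.
    if weight_kg > 10:
        return 20.0
    return 5.0

class RegressionSuite(unittest.TestCase):
    # Previously passing test case, re-executed after the fix (regression).
    def test_light_parcel_unchanged(self):
        self.assertEqual(shipping_cost(2), 5.0)

    # Previously failing test case, re-executed to confirm the fix (retesting).
    def test_heavy_parcel_after_fix(self):
        self.assertEqual(shipping_cost(15), 20.0)

if __name__ == "__main__":
    unittest.main()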

 

340. When we prefer Regression & what are the stages where we go for Regression Testing?

Ans:        Regression testing is necessary to ensure that any bug fixes or code changes have not caused any side effects in the current version of the product. It is conducted at every stage (i.e. the integration and system testing stages) after smoke testing.

341. What is Performance Testing?

Ans:     Performance testing, a non-functional testing technique performed to determine the system parameters in terms of responsiveness and stability under various workload. Performance testing measures the quality attributes of the system, such as scalability, reliability and resource usage.

Performance Testing Techniques (a minimal load-test sketch follows this list):

Load testing - It is the simplest form of testing, conducted to understand the behaviour of the system under a specific load. Load testing measures important business-critical transactions; the load on the database, application server, etc. is also monitored.

Stress testing - It is performed to find the upper limit capacity of the system and also to determine how the system performs if the current load goes well above the expected maximum.

Soak testing - Soak Testing also known as endurance testing, is performed to determine the system parameters under continuous expected load. During soak tests the parameters such as memory utilization is monitored to detect memory leaks or other performance issues. The main aim is to discover the system's performance under sustained use.

Spike testing - Spike testing is performed by increasing the number of users suddenly by a very large amount and measuring the performance of the system. The main aim is to determine whether the system will be able to sustain the workload.
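
A minimal load-test sketch using only the Python standard library; the URL, user count and behaviour are placeholders, and real load tests are normally run with dedicated tools such as LoadRunner or JMeter:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "https://example.com/"      # placeholder system under test
VIRTUAL_USERS = 20                # hypothetical concurrent load

def one_request(_):
    start = time.time()
    urlopen(URL, timeout=10).read()
    return time.time() - start    # response time in seconds

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    times = list(pool.map(one_request, range(VIRTUAL_USERS)))

print(f"avg response: {sum(times) / len(times):.2f}s, worst: {max(times):.2f}s")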

 

342. What is the Performance testing; those can be done manually & automatically?

Ans:       Testing the performance of an application is called performance testing. Take the example of Gmail.com: at any time, many people are accessing this website, so we have to check at most how many people can access the site without any interruption. We can do it manually or automatically, but doing it manually for this type of large application is practically impossible, so we use a tool such as LoadRunner. We create virtual users in the thousands or lakhs, depending on the customer requirement, and check the performance of the site.

 

343. What is Volume, Stress & Load Testing?

Ans:     Performance testing, load testing and stress testing are three different things done for        different purposes. Certainly, in many cases they can be done by the same people with the same tools at virtually the same time as one another, but that does not make them synonymous. Performance testing is an empirical technical investigation conducted to provide stakeholders with information about the quality of the product or service under test with regard to speed, scalability and/or stability characteristics. It is also the superset of other classes of performance-related testing such as load and stress testing.

A load test is a performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations.

A stress test is a performance test focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes beyond those anticipated during production operations. Stress tests may also include tests focused on determining or validating performance characteristics of the product under test when subjected to workload models and load volumes while the product is subjected to other stressful conditions, such as limited memory, insufficient disk space or server failure.

344. What is a Bug?

Ans:     A software bug is a problem causing a program to crash or produce invalid output. The problem is caused by insufficient or erroneous logic. A bug can be an error, mistake, defect or fault, which may cause failure or deviation from expected results.

Most bugs are due to human errors in source code or its design. A program is said to be buggy when it contains a large number of bugs, which affect program functionality and cause incorrect results.

 

345. What is a Defect?

Ans:      While testing, when a tester executes the test cases he might observe that the actual test results do not match the expected results. The variation between the expected and actual results is known as a defect.

 

346. What is the defect Life Cycle?

Ans:         Defect life cycle is a cycle which a defect goes through during its lifetime. It starts when defect is found and ends when a defect is closed, after ensuring it’s not reproduced. Defect life cycle is related to the bug found during testing.

The bug has different states in the life cycle. The life cycle of the bug, usually shown as a diagram, consists of the following states:

The defect life cycle includes the following steps or statuses (a small state-machine sketch follows the list):

1.         New: When a defect is logged and posted for the first time, its state is given as new.

2.         Assigned: After the tester has posted the bug, the lead of the tester approves that the bug is genuine and assigns the bug to the corresponding developer and developer team. Its state is given as assigned.

3.         Open:  At this state the developer has started analyzing and working on the defect fix.

4.         Fixed:  When developer makes necessary code changes and verifies the changes then he/she can make bug status as ‘Fixed’ and the bug is passed to testing team.

5.         Pending retest:  After fixing the defect the developer has given that particular code for retesting to the tester. Here the testing is pending on the testers end. Hence its status is pending retest.

6.         Retest:  At this stage the tester retests the changed code which the developer has given to him to check whether the defect is fixed or not.

7.         Verified:  The tester tests the bug again after it got fixed by the developer. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “verified”.

8.         Reopen:  If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “reopened”. The bug goes through the life cycle once again.

9.         Closed:  Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “closed”. This state means that the bug is fixed, tested and approved.

10.       Duplicate: If the bug is repeated twice or the two bugs mention the same concept of the bug, then one bug status is changed to “duplicate“.

11.       Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “rejected”.

 

 

 

12. Deferred: The bug, changed to deferred state means the bug is expected to be fixed in next releases. The reasons for changing the bug to this state have many factors. Some of them are priority of the bug may be low, lack of time for the release or the bug may not have major effect on the software.

13. Not a bug:  The state given as “Not a bug” if there is no change in the functionality of the application. For an example: If customer asks for some change in the look and field of the application like change of colour of some text then it is not a bug but just some change in the looks of the  application.
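
The states above can be modelled as a small state machine; a minimal sketch follows (the transition list is simplified from the steps described above, and the bug ID is hypothetical):

# Simplified transitions derived from the life-cycle steps listed above.
ALLOWED_TRANSITIONS = {
    "New":            ["Assigned", "Rejected", "Duplicate", "Deferred", "Not a bug"],
    "Assigned":       ["Open"],
    "Open":           ["Fixed", "Rejected", "Deferred"],
    "Fixed":          ["Pending retest"],
    "Pending retest": ["Retest"],
    "Retest":         ["Verified", "Reopen"],
    "Verified":       ["Closed"],
    "Reopen":         ["Assigned"],
}

def move(defect, new_status):
    if new_status not in ALLOWED_TRANSITIONS.get(defect["status"], []):
        raise ValueError(f"Illegal transition {defect['status']} -> {new_status}")
    defect["status"] = new_status

bug = {"id": "BUG-101", "status": "New"}
for status in ["Assigned", "Open", "Fixed", "Pending retest", "Retest", "Verified", "Closed"]:
    move(bug, status)
print(bug)   # {'id': 'BUG-101', 'status': 'Closed'}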

 

347. What is the Priority in fixing the Bugs?

 

Ans:          Our prioritization system is always in the context of the current iteration of the          software and works as follows.

           Priority 1 - All work stops except for this item, we release the fix as soon as it is    tested.

           Priority 2 - The next release will not go out without this item resolved.

           Priority 3 - Really desired in this release, but if we run out of time we will push it.

           Priority 4 - We really don't expect to get to this in this release, but if you run out of tasks, work on it.

           Priority 5 - Don't work on it.

 

348. Explain the Severity you rate for the bugs     found?

 

Ans:        “severity” is associated with standards. “Severity” is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness; severe is marked by or requires strict adherence to rigorous standards or high principles, e.g. a severe code of behavior.

A variety of commercial, problem tracking/management software tools are available. These tools, with the detailed input of software test engineers, give the team complete information so developers can understand the bug, get an idea of its ‘severity’, reproduce it and fix it. The ‘severity’ of a problem is defined in accordance to the customer’s risk assessment and recorded in their selected tracking tool.

 

349. Diff between UAT & IST?

 

Ans:       System integration testing is testing performed when two systems, generally presumed stable themselves, are integrated with one another. For example, this could be when an inventory management system is integrated with a sales accounting system. Each system feeds into the other.

 

The goal of systems integration testing is to ensure that the data crossing the boundary between systems is received, stored and used appropriately by the receiving system. Until integration begins, testing of the isolated systems is done on mocked or replayed data and not on "live" data. Integration testing is the final step before customer acceptance testing.

 

User Acceptance Testing is often the final step before rolling out the application.Usually the end users who will be using the applications test the application before ‘accepting’ the application.This type of testing gives the end users the confidence that the application being delivered to them meets their requirements. This testing also helps nail bugs related to usability of the application.

 

 

350. What is meant by UAT?

 

Ans:           In software development, user acceptance testing (UAT) - also called beta testing, application testing, and end user testing - is a phase of software development in which the software is tested in the "real world" by the intended audience.

UAT can be done by in-house testing in which volunteers or paid test subjects use the software or, more typically for widely-distributed software, by making the test version available for downloading and free trial over the Web. The experiences of the early users are forwarded back to the developers who make final changes before releasing the software commercially.

 

351. What all are the requirements needed for UAT?

Ans:          UAT is the last and final stage, after which the system will go live, and therefore the crux of this activity is to make sure that the maximum number of scenarios is tested in the system and that any issues found are reported accordingly. Due to the criticality and importance of the UAT phase, the role of the UAT conductor requires multi-faceted skills. These qualities allow the person playing that role to perform this important activity; the business analyst must put himself in the shoes of the user to understand the user's problem. The absence of these skills may cause the overall UAT phase to fail.

Further, following skills and competencies are required to be possessed by the Business Analyst to conduct effective/successful UAT:

People Handling: A business analyst with good people-handling skills can develop a good relationship with users, which helps him explain his point of view and also helps him understand the users' point of view. In UAT, users sometimes resist change or push their own point, but with a good relationship with the business analyst the issue of ego does not arise.

352. What are the docs required for Performance Testing?

Ans:     The docs required for performance testing are as follows: BRS, SRS, Use Case Doc and  the Benchmark Doc.

 

 

353. What is risk analysis?

 

Ans:          Risk analysis is the second step of risk management. In risk analysis you study the risks identified is the identification phase and assign the level of risk to each item. You first need to categorize the risks and then need to determine the level of risk by specifying likelihood and impact of the risk.

Likelihood is the probability of the risk occurring and arises from different technical factors. Some of the technical factors which should be considered while assessing likelihood are:

1. How complex the technology is?

2. Technical skills of the test team

3. Team conflicts

4. Geographically distributed teams

5. Bad quality of the tools used in the project

6. Complex integration etc.

Impact is the effect of the risk in case it happens. Impact arises from business considerations. You should consider the following business factors while assessing impact.

1. Loss of customers

2. Loss of business

3. Loss or harm to society

4. Financial loss

5. Criminal proceedings against company

6. Loss of license to continue business

 

354. How to do risk management?

 

Ans:           Risk management is a critical activity in software test planning and tracking. It includes the identification, prioritization/analysis and treatment of risks faced by the business. Risk management is performed at various levels, project level, program level, organization level, industry level and even national or international level. In this article, risk management is understood to be done at a project level within the context of software testing. Risks arise from a variety of perspectives like project failure, safety, security, legal liabilities and non-compliances with regulations. An important thing to understand is that risks are potential problems, not yet occurred. A problem that has already occurred is an issue and is treated differently in software test planning. Risk management in software testing consists of the following activities:

Risk Identification

Risks are identified within the scope of the project.  Risks can be identified using a number of resources e.g. project objectives, risk lists of past projects, prior system knowledge, understanding of system usage, understanding of system architecture/ design, prior customer bug reports/ complaints, project stakeholders and industry practices. For example, if certain areas of the system are unstable and those areas are being developed further in the current project, it should be listed as a risk.

It is good to document the identified risks in detail so that it stays in project memory and can be clearly communicated to project stakeholders. Usually risk identification is an iterative process. It is important to re-visit the risk list whenever the project objectives change or new business scenarios are identified. As the project proceeds, some new risks appear and some old risks disappear.

 

Risk Prioritization

It is simpler to prioritize a risk if the risk is understood accurately. Two measures, Risk Impact and Risk Probability, are applied to each risk. Risk Impact is estimated in tangible terms (e.g. dollar value) or on a scale (e.g. 10 to 1 or High to Low). Risk Probability is estimated somewhere between 0 (no probability of occurrence) and 1 (certain to occur) or on a scale (10 to 1 or High to Low).  For each risk, the product of Risk Impact and Risk Probability gives the Risk Magnitude.  Sorting the Risk Magnitude in descending order gives a list in which the risks at the top are the more serious risks and need to be managed closely.

Adding all the Risk Magnitudes gives an overall Risk Index of the project. If the same Risk Prioritization scale is used across projects, it is possible to identify the riskier projects by comparing the Risk Magnitudes.
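
A minimal sketch of this computation; the risks, impact values and probabilities below are hypothetical:

# Hypothetical risk register: impact on a 1-10 scale, probability between 0 and 1.
risks = [
    {"risk": "Unclear requirements",      "impact": 9, "probability": 0.6},
    {"risk": "Unstable legacy module",    "impact": 7, "probability": 0.8},
    {"risk": "Test environment downtime", "impact": 4, "probability": 0.3},
]

for r in risks:
    r["magnitude"] = r["impact"] * r["probability"]   # Risk Magnitude

# Most serious risks first; the sum of the magnitudes gives the overall Risk Index.
for r in sorted(risks, key=lambda r: r["magnitude"], reverse=True):
    print(f'{r["risk"]:<28} magnitude={r["magnitude"]:.1f}')
print("Project Risk Index:", sum(r["magnitude"] for r in risks))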

 

355. What are test closure documents?

 

Ans:        A Test Closure Document contains a checklist of all of the items that must be met in order to close a test project as well as a list of activities that must be performed after the project is closed. This document may include (not an exhaustive list):

BEFORE CLOSING A TEST PROJECT

1) Exit criteria are met

2) Testing performed against test plan

3) All test cases are mapped to requirements

4) All test cases have been executed unless known and agreed upon

5) All defects addressed

 

AFTER CLOSING A TEST PROJECT

1) Highlights and lowlights

2) Lessons learned

3) Process improvements

4) Test estimation evaluation

5) Defect trend analysis

6) Properly archive all relevant test collateral

- test plan(s)

- test report(s)

- test data

- Relevant emails

 

 

 

356. What is traceability matrix?

 

Ans:      A traceability matrix is a type of document that helps correlate and trace  business, application, security or any other requirements to their implementation, testing or completion. It evaluates and relates between different system components and provides the status of project requirements in terms of their level of completion.
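
A minimal sketch of a requirements-to-test-cases traceability matrix; the requirement IDs and test case IDs are hypothetical:

# Hypothetical mapping of requirements to the test cases that cover them.
traceability = {
    "REQ-001 Login":          ["TC-101", "TC-102"],
    "REQ-002 Password reset": ["TC-110"],
    "REQ-003 Audit logging":  [],          # gap: no coverage yet
}

for requirement, test_cases in traceability.items():
    status = ", ".join(test_cases) if test_cases else "NOT COVERED"
    print(f"{requirement:<26} -> {status}")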

 

 

357. What ways you followed for defect management?

 

Ans:       There is no single named process for defect management; nevertheless, the tester needs to track every defect he or she raises through to closure. As a test lead it is a vital responsibility to ensure that each raised defect is validated in a triage, checking its genuineness and its presentation. The defect is then assigned to the right contact in the development team. If accepted by development, the defect will be fixed and released to QA in the next build. The tester should retest and close the defect if it no longer exists, or else re-open the same defect. Sometimes a fix for a defect gives rise to more defects or affects other functionality; a good tester always looks for these flaws as well. After each cycle, the lead needs to prepare metrics depicting the fall/rise in the (valid) defect count and the rate of defect fixes. At the end of the testing cycle, a final set of metrics of this sort helps development and business gauge the stability index of the application. Deriving the module-wise defect report indicating the root cause is the main input for the root cause analysis meeting, in which pivot tables and fishbone diagrams are used to understand the root causes of each defect or class of defects. This helps in executing the next project/release better by implementing the learning from the current project.