Wednesday, August 15, 2007

Interview

Testing is the process of evaluating a system against its specification and identifying the differences between actual and expected results.

Testing Life Cycle

Just as development has the SDLC (Software Development Life Cycle), testing has the STLC (Software Testing Life Cycle). Different organizations use different names for the phases of the life cycle, but broadly it can be summarized as follows.

A systematic approach to testing that normally includes these phases:

1. Risk Analysis

2. Test Planning

3. Test Design

4. Performing Tests

5. Defect Tracking and Correction

6. Acceptance Testing

7. Status of Testing

8. Test Reporting

1. Risk Analysis

A. Risk Identification

1. Software Risks - Knowledge of the most common risks associated with software development, and the platform you are working on.

2. Testing Risks - Knowledge of the most common risks associated with software testing for the platform you are working on, tools being used, and test methods being applied.

3. Premature Release Risk - Ability to determine the risk associated with releasing unsatisfactory or untested software products.

4. Business Risks - Most common risks associated with the business using the software.

5. Risk Methods - Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products, and processes; assessing their likelihood, and initiating strategies to test for those risks.

B. Managing Risks

1. Risk Magnitude - Ability to rank the severity of a risk categorically or quantitatively.

2. Risk Reduction Methods - The strategies and approaches that can be used to minimize the magnitude of a risk.

3. Contingency Planning - Plans to reduce the magnitude of a known risk should the risk event occur. (Source: CSTE Body of Knowledge, Skill Category 6)
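To rank risks quantitatively, a common approach is to score each risk as exposure = likelihood x impact and sort by that score. A minimal sketch, with illustrative risks and numbers:

```python
# Hypothetical sketch: rank risks by exposure (likelihood x impact).
# Risk names, likelihoods, and impact scores are illustrative.
risks = [
    {"name": "untested third-party library", "likelihood": 0.6, "impact": 8},
    {"name": "premature release",            "likelihood": 0.3, "impact": 10},
    {"name": "test environment unavailable", "likelihood": 0.8, "impact": 4},
]

for r in risks:
    r["exposure"] = r["likelihood"] * r["impact"]

# Highest exposure first: this ordering drives the testing priority.
ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for r in ranked:
    print(f'{r["name"]}: {r["exposure"]:.1f}')
```

The same numbers can feed risk reduction and contingency planning: the top-ranked risks get mitigation strategies first.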

2. Test Planning Process

A. Pre-Planning Activities

1. Success Criteria/Acceptance Criteria - The criteria that must be validated through testing to provide user management with the information needed to make an acceptance decision.

2. Test Objectives - Objectives to be accomplished through testing.

3. Assumptions - Establishing those conditions that must exist for testing to be comprehensive and on schedule; for example, software must be available for testing on a given date, hardware configurations available for testing must include XYZ, etc.

4. Entrance Criteria/Exit Criteria - The criteria that must be met prior to moving to the next level of testing, or into production.

B. Test Planning

1. Test Plan - The deliverables to meet the test’s objectives; the activities to produce the test deliverables; and the schedule and resources to complete the activities.

2. Requirements/Traceability - Defines the tests needed and relates those tests to the requirements to be validated.

3. Estimating - Determines the amount of resources required to accomplish the planned activities.

4. Scheduling - Establishes milestones for completing the testing effort.

5. Staffing - Selecting the size and competency of staff needed to achieve the test plan objectives.

6. Approach - Methods, tools, and techniques used to accomplish test objectives.

7. Test Check Procedures (i.e., test quality control) - Set of procedures based on the test plan and test design, incorporating test cases that ensure that tests are performed correctly and completely.
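Requirements traceability is often recorded as a matrix mapping each requirement to the tests that validate it. A minimal sketch, with hypothetical requirement and test case IDs:

```python
# Hypothetical sketch: a requirements-to-tests traceability matrix.
# Requirement IDs and test case names are illustrative.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],          # no test planned yet -> a planning gap
}

untraced = [req for req, tests in traceability.items() if not tests]
coverage = 1 - len(untraced) / len(traceability)

print(f"Requirements without tests: {untraced}")
print(f"Planned requirement coverage: {coverage:.0%}")
```

Scanning the matrix for empty entries during planning surfaces untested requirements before test execution begins.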

C. Post-Planning Activities

1. Change Management - Modifies and controls the plan in relationship to actual progress and scope of the system development.

2. Versioning(change control/change management/configuration management) - Methods to control, monitor, and achieve change.

3. Test Design

A. Design Preparation

1. Test Bed/Test Lab - Adaptation or development of the approach to be used for test design and test execution.

2. Test Coverage - Adaptation of the coverage objectives in the test plan to specific system components.

B. Design Execution

1. Specifications - Creation of test design requirements, including purpose, preparation and usage.

2. Cases - Development of test objectives, including techniques and approaches for validation of the product. Determination of the expected result for each test case.

3. Scripts - Documentation of the steps to be performed in testing, focusing on the purpose and preparation of procedures; emphasizing entrance and exit criteria.

4. Data - Development of test inputs, use of data generation tools. Determination of the data set or sub-set needed to ensure a comprehensive test of the system. The ability to determine data that suits boundary value analysis and stress testing requirements.
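For boundary value analysis, test data is typically drawn from the edges of each valid range: the boundaries themselves, their immediate neighbours, and a nominal value. A minimal sketch for an integer field:

```python
# Hypothetical sketch: classic boundary-value inputs for a field that
# accepts integers in [min_val, max_val].
def boundary_values(min_val, max_val):
    # Boundaries, their neighbours (one inside, one just outside),
    # plus a nominal mid-range value.
    return sorted({min_val - 1, min_val, min_val + 1,
                   (min_val + max_val) // 2,
                   max_val - 1, max_val, max_val + 1})

print(boundary_values(1, 100))  # -> [0, 1, 2, 50, 99, 100, 101]
```

The out-of-range values (0 and 101 here) test rejection handling; the in-range values test acceptance at the edges.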

4. Performing Tests

A. Execute Tests - Perform the activities necessary to execute tests in accordance with the test plan and test design (including setting up tests, preparing data base(s), obtaining technical support, and scheduling resources).

B. Compare Actual versus Expected Results - Determine if the actual results met expectations (note: comparisons may be automated).

C. Test Log - Logging tests in the desired form. This includes incidents that are unrelated to testing itself but still prevent testing from proceeding.

D. Record Discrepancies - Documenting defects as they happen including supporting evidence.
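Steps A through D above can be sketched as a small loop: execute each case, compare actual against expected, and log the outcome. The function under test and case IDs are illustrative:

```python
# Hypothetical sketch: execute test cases, compare actual vs expected
# results, and keep a test log. The function under test is illustrative.
def add(a, b):
    return a + b

test_cases = [
    {"id": "TC-01", "inputs": (2, 3),  "expected": 5},
    {"id": "TC-02", "inputs": (-1, 1), "expected": 0},
]

log = []
for tc in test_cases:
    actual = add(*tc["inputs"])
    status = "PASS" if actual == tc["expected"] else "FAIL"
    # Record the discrepancy evidence alongside the verdict.
    log.append((tc["id"], status, tc["expected"], actual))
    print(f'{tc["id"]}: {status}')
```

In practice the comparison is often automated exactly like this, with FAIL entries feeding directly into defect recording.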

5. Defect Tracking and Correction

A. Defect Tracking

1. Defect Recording - Defect recording is used to describe and quantify deviations from requirements.

2. Defect Reporting - Report the status of defects; including severity and location.

3. Defect Tracking - Monitoring defects from the time of recording until satisfactory resolution has been determined.

B. Testing Defect Correction

1. Validation - Evaluating changed code and associated documentation at the end of the change process to ensure compliance with software requirements.

2. Regression Testing - Testing the whole product to ensure that unchanged functionality performs as it did prior to implementing a change.

3. Verification - Reviewing requirements, design, and associated documentation to ensure they are updated correctly as a result of a defect correction.
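A defect record that supports recording, reporting, and tracking through to resolution can be sketched as a small data structure. Fields, severities, and statuses here are illustrative:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a minimal defect record tracked from recording
# until satisfactory resolution. Field names are illustrative.
@dataclass
class Defect:
    defect_id: str
    description: str
    severity: str            # e.g. "critical", "major", "minor"
    location: str
    status: str = "open"
    history: list = field(default_factory=list)

    def update(self, status):
        # Keep the full status history for tracking and reporting.
        self.history.append(status)
        self.status = status

d = Defect("DEF-042", "Total rounds incorrectly", "major", "billing module")
d.update("fixed")
d.update("verified")   # after validation and regression testing
d.update("closed")
print(d.status, d.history)
```

The history list is what makes tracking possible: reports on status, severity, and location can be generated from these records at any point.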

6. Acceptance Testing

A. Concepts of Acceptance Testing - Acceptance testing is a formal testing process conducted under the direction of the software users to determine if the operational software system meets their needs and is usable by their staff.

B. Roles and Responsibilities - Software testers need to work with users to develop an effective acceptance test plan, and to ensure the plan is properly integrated into the overall test plan.

C. Acceptance Test Process - The acceptance test process should incorporate these phases:

1. Define the acceptance test criteria

2. Develop an acceptance test plan

3. Execute the acceptance test plan
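Executing the acceptance test plan ultimately reduces to checking results against the criteria defined in step 1. A minimal sketch, where the criteria and measured results are entirely illustrative:

```python
# Hypothetical sketch: evaluate measured results against agreed acceptance
# criteria so user management can make an acceptance decision.
# All criteria names and thresholds are illustrative.
criteria = {
    "requirement_coverage": 0.95,   # min fraction of requirements tested
    "max_response_time_s": 2.0,     # max acceptable response time
}

results = {
    "all_critical_defects_closed": True,
    "requirement_coverage": 0.97,
    "max_response_time_s": 1.4,
}

accepted = (results["all_critical_defects_closed"]
            and results["requirement_coverage"] >= criteria["requirement_coverage"]
            and results["max_response_time_s"] <= criteria["max_response_time_s"])
print("Accept" if accepted else "Reject")
```

Making the decision rule explicit like this is what turns acceptance from a subjective sign-off into a verifiable check.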

7. Status of Testing

Metrics specific to testing include data collected regarding testing, defect tracking, and software performance. Use quantitative measures and metrics to manage the planning, execution, and reporting of software testing, with focus on whether goals are being reached.

A. Test Completion Criteria

1. Code Coverage - Purpose, methods, and test coverage tools used for monitoring the execution of software and reporting on the degree of coverage at the statement, branch, or path level.

2. Requirement Coverage - Monitoring and reporting on the number of requirements exercised, and/or tested to be correctly implemented.

B. Test Metrics

1. Metrics Unique to Test - Includes metrics such as Defect Removal Efficiency, Defect Density, and Mean Time to Last Failure.

2. Complexity Measurements - Quantitative values accumulated by a predetermined method, which measure the complexity of a software product.

3. Size Measurements - Methods primarily developed for measuring the software size of information systems, such as lines of code, function points, and tokens. These are also effective in measuring software testing productivity.

4. Defect Measurements - Values associated with numbers or types of defects, usually related to system size, such as 'defects/1000 lines of code' or 'defects/100 function points'.

5. Product Measures - Measures of a product’s attributes such as performance, reliability, failure rate, and usability.
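Two of the metrics named above, defect density and Defect Removal Efficiency (DRE), are simple ratios. A sketch with illustrative numbers:

```python
# Hypothetical sketch: defect density and Defect Removal Efficiency (DRE).
# All counts are illustrative.

# Defect density: defects found per 1000 lines of code (KLOC).
defects_found = 45
kloc = 12.5
defect_density = defects_found / kloc        # defects per KLOC

# DRE: defects removed before release as a fraction of all defects
# (those found pre-release plus those reported after release).
pre_release = 45
post_release = 5
dre = pre_release / (pre_release + post_release)

print(f"Defect density: {defect_density:.1f} defects/KLOC")
print(f"DRE: {dre:.0%}")
```

Tracked over successive releases, both values show whether testing effectiveness is improving relative to system size.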

8. Test Reporting

A. Reporting Tools - Use of word processing, database, defect tracking, and graphic tools to prepare test reports.

B. Test Report Standards - Defining the components that should be included in a test report.

C. Statistical Analysis - Ability to draw statistically valid conclusions from quantitative test results.
