Objectives
- Become familiar with a performance test management approach appropriate for CMMI, auditable, and highly regulated projects.
- Learn how to maximize effectiveness without sacrificing control or compliance.
- Learn how to provide managers and stakeholders with progress and value indicators.
- Learn how to provide a structure for capturing information within the schedule, not in addition to it.
- Learn how to apply an approach designed to adapt to change without generating excessive rework, management, or audit concerns.
Overview
In today’s software-engineering industry, the complexity and critical nature of some systems necessitate regulatory oversight. Balancing the pressure of that oversight with the flexibility needed to engineer a system effectively and efficiently is a perennial challenge. There is no reason why regulatory compliance and flexibility cannot work well together; you need only expand the task list and accept some tradeoffs in the schedule and engineering resources.
Capability Maturity Model® Integration (CMMI) is used here as a paradigmatic example of a process generally viewed as anything but flexible. CMMI is frequently seen as a heavyweight approach, generally more appropriate for safety-critical software and software that is subject to regulatory standards and/or process audits. CMMI was created by the Software Engineering Institute at Carnegie Mellon University and is defined as follows:
“Capability Maturity Model® Integration (CMMI) is a process improvement approach that provides organizations with the essential elements of effective processes. It can be used to guide process improvement across a project, a division, or an entire organization.”
The nature of performance testing makes it difficult to predict what type of test will add value, or even be possible; this makes planning all the more challenging. This chapter describes an industry-validated approach to planning and managing performance testing that is sensitive to the need for auditability, progress tracking, and plan changes that require approval, while not being oppressively procedural.
How to Use This Chapter
Use this chapter to understand the approach for performance testing in regulated (CMMI) development environments and its relationship with the core activities of performance testing. Also use this chapter to understand what is accomplished during these activities. To get the most from this chapter:
- Use the “CMMI Performance-Testing Activities” section to get an overview of the approach to performance testing in CMMI environments, and as a quick reference guide for you and your team.
- Use the various activity sections to understand the details of the most critical performance-testing tasks.
- Additionally, use “Chapter 4 – Core Activities” in this guide to understand the common core activities involved in successful performance-testing projects. This will help you to apply the concepts underlying those activities to a particular approach to performance testing.
Introduction to the Approach
The key to the approach is to plan at the performance test work item level and to fit those work items into the existing plan for accomplishing the project. This allows for compliance, auditability, and approval gates while leaving the execution details in the hands of those assigned to complete a particular work item.
When viewed from a linear perspective, the approach starts by examining the software-development project as a whole, the relevant processes and standards, and the performance acceptance criteria for the system. The results of this examination include the team’s view of the success criteria for the performance-testing effort.
Once the success and acceptance criteria are understood at a high level, planning and test design become the primary activities. The resulting plan and test design should guide the general approach to achieving those criteria by summarizing which performance-testing activities are anticipated to add the most value at various points during the development cycle. These points may include key project deliveries, checkpoints, iterations, or weekly builds. For the purposes of this chapter, these events are collectively referred to as “performance builds”. Frequently, while the plan and test design are evolving, the performance specialist and/or the team will begin setting up a performance test environment, including the system under test and a load-generation environment that includes monitoring and load-generation tools.
With a plan, test design, and the necessary environments in place, test designs are implemented for major tests, or work items are identified for imminent performance builds. When performance-testing for a particular performance build is complete, it is time to report, archive data, and update the performance test plan and test designs as appropriate, ensuring that the correct processes are followed and approvals obtained. Ultimately, the final performance build will be tested and it will be time to compile the final report.
CMMI Performance-Testing Activities
The approach described in this chapter can be represented by the following 12 activities.
Figure 7.1 CMMI Performance Testing Activities
- Activity 1. Understand the Process and Compliance Criteria. This activity involves building an understanding of the process and the compliance requirements.
- Activity 2. Understand the System and the Project Plan. Establish a fairly detailed understanding of the system you are to test and the project specifics for the development of that system.
- Activity 3. Identify Performance Acceptance Criteria. This activity includes identifying the performance goals and requirements, as well as the performance-testing objectives.
- Activity 4. Plan Performance-Testing Activities. This activity includes mapping work items to the project plan, determining durations, prioritizing the work, and adding detail to the plan.
- Activity 5. Design Tests. This activity involves identifying key usage scenarios, determining appropriate user variances, identifying and generating test data, and specifying the metrics to be collected.
- Activity 6. Configure the Test Environment. This activity involves setting up your actual test environment.
- Activity 7. Implement the Test Design. This activity involves creating your tests.
- Activity 8. Execute Work Items. This activity involves executing your performance test work items.
- Activity 9. Report Results and Archive Data. This activity involves consolidating results and sharing data among the team.
- Activity 10. Modify the Plan and Gain Approval for Modifications. This activity involves reviewing and adjusting the plan as needed.
- Activity 11. Return to Activity 5. This activity involves continuing to test with the next delivery, iteration, or checkpoint release.
- Activity 12. Prepare the Final Report. This activity involves the creation, submission, and acceptance of the final report.
Relationship to Core Performance-Testing Activities
The following graphic shows how the seven core activities from Chapter 4 map to these twelve activities:
Figure 7.2 Relationship to Core Performance Testing Activities
CMMI Performance Testing Activity Flow
The following graphic is more representative of an actual instance of this performance-testing approach. The graphic shows that there is a more or less well-defined, linear structure that has clear places for approval gates, re-planning, and checkpoints. The loop from activity 11 back to activity 5 illustrates how the same basic approach is followed iteration after iteration.
Figure 7.3 CMMI Performance Testing Activity Flow
Activity 1. Understand the Process and Compliance Criteria
This step has almost nothing to do with performance testing, yet it is absolutely critical to the overall success of the performance testing sub-project. Performance testing is complex enough without finding out in the middle of the effort that you need to reproduce test data or results from previously conducted tests because an audit is scheduled to take place in two weeks.
You must completely understand the process and compliance requirements even before you start planning your performance testing, because this is the only way to ensure that the testing effort does not get derailed or stuck in a bureaucratic process of change-request approvals and sign-offs. Fortunately, these rules and regulations are almost always thoroughly documented, which makes this step relatively straightforward; the challenges frequently lie in obtaining and interpreting those documents.
Determine the Process
Process documentation is typically easy to obtain ― the challenge lies in understanding and interpreting how that process applies to performance testing. Software development process documentation rarely addresses performance testing directly. If this is the case for your project, perhaps the best way to determine the appropriate process is to extrapolate the document to include performance testing to the extent possible, and then submit the revised process to the project manager and/or process engineer for approval. You may have to iterate before getting approval, but it is still better to submit the performance-testing process concept before the project launches than afterward.
Determine Compliance Criteria
Regulatory and compliance documents may be harder to obtain because they often are not made readily available for review by non-executives. Even so, it is important to review these standards. The specific language and context of any statement related to testing is critical to determining a compliant process. The nature of performance testing makes it virtually impossible to follow the same processes that have been developed for functional testing.
For example, when executing a performance test simulating hundreds of users, if three of those users record response times that do not achieve the documented requirement, does that requirement pass or fail? Which test case does that count against, the one that sets the response time, the volume, or the workload? Does the entire test fail? What happens to the thousands of other measurements collected during the test? Do those three failing measurements get one defect report, or three, or none, because the average response time was acceptable? These are the kinds of questions you will likely face and need to resolve based on whatever specific standards have been applied to your project.
Once you understand both the process and the compliance criteria, take the time to get your interpretations approved by an appropriate stakeholder. Compliance is not your specialty; performance testing is. Get help when you need it.
Activity 2. Understand the System and the Project Plan
Once you have a firm understanding of the process and compliance requirements, the next step is to establish a fairly detailed understanding of the system you are to test and the project specifics for the development of that system. Again, in a CMMI-type project, there are usually many documents to read and project plans to reference. These may include use case documents and models, state-transition diagrams, logical and physical architecture diagrams, storyboards, prototypes, contracts, and requirements. Although all of these documents are valuable, even when taken together, they frequently do not contain all of the information you will need in order to create an adequate performance test plan.
Understand the System
The information about the system contained in these documents is frequently abstracted from the end user in such a way that it is difficult to envision how individuals and groups of users will interact with the system. This is where you need to put your business analyst skills to use. Some of the things you will want to make sure you understand include:
- Who or what are the users of the system? What are their reasons for using the system, their expectations, and their motivations?
- What are the most frequently occurring usage scenarios for the system?
- What are the business-critical usage scenarios for the system?
- What are the different ways that a user can accomplish a task with the system?
- How frequently will a user access the system?
- What is the relative distribution of tasks that a group of users will conduct over time?
- How many users are likely to interact with the system at different points in time?
Review the Project Plan
With the system information in hand, it is time to turn to the project plan. It is important to remember that performance testing is a sub-project, not the main project. Therefore it is your responsibility to blend performance testing into the plan with as little overall impact to the project as possible. This is where milestones, checkpoints, builds, and iterations come in.
The specific items you are most likely to be interested in relate to hardware components, supporting software, and application functionality becoming available for performance testing. Coupling this information with the compliance criteria; the requirements, goals, and objectives; and the information you have collected about the system and its usage, you can put together a performance test plan that fits into the project without adding unnecessary overhead.
Activity 3. Identify Performance Acceptance Criteria
Regardless of the process your team is following, it is a good idea to at least start identifying desired performance characteristics of an application early in the development life cycle. Doing so is even more important when, prior to starting your testing, you must record, demonstrate, and possibly obtain approval for how you are going to validate each of these characteristics.
Performance Requirements
Remember that requirements are those characteristics required by contract, law, or a significant stakeholder. When facing roadblocks to reviewing contracts, it is important to explain that the specific language and context of any statement related to application performance is critical to determining compliance. For example, the difference between “transactions will” and “on average, transactions will” is tremendous. The first case implies that every transaction will comply every single time. The second case is completely ambiguous, as you will see below.
To determine requirements, focus on contracts and legally binding agreements, or standards related to the software under development. Also, get the executive stakeholders to commit to any performance conditions that might cause them to refuse to release the software into production. The resulting criteria may or may not be related to any specific business transaction or condition, but if they are, you must ensure that those transactions or conditions are included in your performance testing.
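To illustrate how much hinges on that wording, consider the following sketch. It evaluates one hypothetical set of response-time measurements against a hypothetical 6-second threshold under both readings of the requirement; the numbers are invented purely for illustration.

```python
# Hypothetical response times (in seconds) collected from one test run.
response_times = [2.1, 3.4, 5.9, 6.8, 2.7, 4.4, 7.2, 3.1, 2.9, 3.3]
threshold = 6.0  # hypothetical requirement threshold

# Reading 1: "transactions will complete in 6 seconds" -- every single
# measurement must meet the threshold.
strict_pass = all(t <= threshold for t in response_times)

# Reading 2: "on average, transactions will complete in 6 seconds" -- only
# the mean must meet the threshold, no matter how slow the outliers are.
average = sum(response_times) / len(response_times)
average_pass = average <= threshold

print(f"Strict reading:  {'PASS' if strict_pass else 'FAIL'}")
print(f"Average reading: {'PASS' if average_pass else 'FAIL'} (mean = {average:.2f}s)")
```

With these sample measurements, the strict reading fails (two transactions exceed 6 seconds) while the average reading passes, which is exactly the ambiguity you must resolve with stakeholders before testing begins.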
Performance Goals
Performance goals can be more challenging to determine. Performance goals are those characteristics that are desired by stakeholders, users, developers, or other interested individuals, but that will not automatically prevent shipment of the product if the goals are not exactly met. Good sources for soliciting performance goals include:
- Project documentation and contracts
- Interviews with stakeholders
- Competitive analysis
- Usability studies
Performance-Testing Objectives
The performance tester does not always have easy access to either explicit or implied objectives, and therefore frequently must conduct a systematic search for them. The easiest way to determine and record performance-testing objectives is simply to ask each member of the project team what value you can add for him or her while you are performance testing at a particular point in the project, or immediately following the accomplishment of a particular milestone.
While it is not always easy to find and schedule time with each member of the team — especially when you consider that the project team includes executive stakeholders, analysts, and possibly even representative users — team members are generally receptive to sharing information that will help you establish valuable performance-testing objectives.
Such objectives might include providing resource utilization data under load, generating specific loads to assist with tuning an application server, or providing a report of the number of objects requested by each Web page. Although it is most valuable to collect performance-testing objectives early in the project life cycle, it is also important to periodically revisit these objectives, ask team members if they would like to see any new objectives added, and gain approval for changes or additions as necessary.
Once you have determined the performance requirements, goals, and testing objectives, record them in a manner appropriate to your process. This often includes a formal document and entry into a requirements-management system.
Activity 4. Plan Performance-Testing Activities
All test plans are challenging to do well. To have any realistic hope of creating a plan that will more or less guide the performance-testing activities for the duration of the project without needing a major overhaul, you need to both forward- and reverse-engineer the plan to accommodate what testing “must” be done, what testing “should” be done, and when any particular test “can” be done.
Map Work Items to Project Plan
You can accomplish this by mapping performance requirements, goals, and objectives, as well as compliance criteria, against the key deliveries, milestones, iterations, and checkpoints. The following table provides an example of this mapping.
| Work item | Iteration 1 | Iteration 2 | Iteration 3 | Checkpoint 1 |
| --- | --- | --- | --- | --- |
| 500 users will be able to log in over a 5-minute period (interim and final requirement). | | X | | ✓ |
| All page response times will be under 6 seconds (goal). | X | X | X | X |
| Tune application server for improved performance and scalability (objective). | X | X | X | |
| Ensure that all procedures, scripts, data, and results from tests used to validate interim or final requirements are archived sufficiently to repeat the test and results later, if needed (compliance). | | | | ✓ |
In this table, an ‘X’ represents a compliance task or test case (generically referred to as a work item) that can be accomplished during a particular test phase according to the project plan. A ‘✓’ represents a work item that must be accomplished during a particular test phase because of performance or compliance requirements.
Add Durations
Next, add the duration of each phase and the estimated duration of each work item.
| Work item | Iteration 1 | Iteration 2 | Iteration 3 | Checkpoint 1 |
| --- | --- | --- | --- | --- |
| 500 users will be able to log in over a 5-minute period (interim and final requirement). | | X | | ✓ |
| All page response times will be under 6 seconds (goal). | X | X | X | X |
| Tune application server for improved performance and scalability (objective). | X | X | X | |
| Ensure that all procedures, scripts, data, and results from tests used to validate interim or final requirements are archived sufficiently to repeat the test and results later, if needed (compliance). | | | | ✓ |
Prioritize Work Items by Phase
The previous section covered the forward-engineering aspect of planning performance-testing activities. Now that you have added this information, you apply reverse-engineering to determine which work items will be accomplished during which phase to ensure that all work items are appropriately covered. The following table provides an example.
| Work item | Iteration 1 | Iteration 2 | Iteration 3 | Checkpoint 1 |
| --- | --- | --- | --- | --- |
| 500 users will be able to log in over a 5-minute period (interim and final requirement). | | X | X | ✓ |
| All page response times will be under 6 seconds (goal). | X | X | X | X |
| Tune application server for improved performance and scalability (objective). | X | X | X | |
| Ensure that all procedures, scripts, data, and results from tests used to validate interim or final requirements are archived sufficiently to repeat the test and results later, if needed (compliance). | | | | ✓ |
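To illustrate the kind of coverage check this reverse-engineering implies, the sketch below encodes a hypothetical version of the mapping and schedule (the item names, phases, and assignments are illustrative only) and verifies that every required work item is scheduled where it must be, and that nothing is scheduled in a phase the project plan does not allow.

```python
# Hypothetical work-item mapping, loosely mirroring the tables above:
# "must" holds the phases where an item is required (the check marks),
# "can" holds the phases where the project plan allows it (the X marks).
work_items = {
    "500-user login in 5 minutes": {
        "must": ["Checkpoint 1"], "can": ["Iteration 2", "Iteration 3"]},
    "Page response times under 6 seconds": {
        "must": [], "can": ["Iteration 1", "Iteration 2", "Iteration 3", "Checkpoint 1"]},
    "Tune application server": {
        "must": [], "can": ["Iteration 1", "Iteration 2", "Iteration 3"]},
    "Archive validation test assets": {
        "must": ["Checkpoint 1"], "can": ["Checkpoint 1"]},
}

# Hypothetical schedule produced by the prioritization pass.
schedule = {
    "Iteration 1": ["Page response times under 6 seconds", "Tune application server"],
    "Iteration 2": ["500-user login in 5 minutes", "Page response times under 6 seconds",
                    "Tune application server"],
    "Iteration 3": ["500-user login in 5 minutes", "Page response times under 6 seconds",
                    "Tune application server"],
    "Checkpoint 1": ["500-user login in 5 minutes", "Page response times under 6 seconds",
                     "Archive validation test assets"],
}

# Check 1: every required (check-marked) item is scheduled in its required phase.
for item, phases in work_items.items():
    for phase in phases["must"]:
        assert item in schedule[phase], f"{item} is missing from {phase}"

# Check 2: nothing is scheduled in a phase where the project plan does not allow it.
for phase, items in schedule.items():
    for item in items:
        allowed = work_items[item]["can"] + work_items[item]["must"]
        assert phase in allowed, f"{item} cannot be executed during {phase}"

print("Every required work item is covered and the schedule respects the project plan.")
```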
Add Detail to the Plan
Finally, with this information you can detail the plan for each work item to include the following (a sketch of one way to record these details appears after the list):
- The reason for this test at this time
- Priority for execution at this time
- Prerequisites for execution
- Tools and scripts required
- External resources required
- Risks to completing the work item
- Data of special interest
- Areas of concern
- Pass/fail criteria
- Completion criteria
- Planned variants on tests
- Load range
- Specifically what data will be collected
- Specifically how that data will be collected
- Who will assist, how, and when
- Additional information needed to repeat the work item later, if needed
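As a sketch of how these details might be captured in a structured, auditable form, the following Python record uses hypothetical field names and example values; in practice you would adapt it to whatever planning or requirements-management system your process mandates.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkItem:
    """One performance-test work item, capturing the planning details
    listed above so they can be reviewed, approved, and archived."""
    name: str
    reason: str                      # why this test, at this time
    priority: int                    # execution priority within the phase
    prerequisites: List[str] = field(default_factory=list)
    tools_and_scripts: List[str] = field(default_factory=list)
    external_resources: List[str] = field(default_factory=list)
    risks: List[str] = field(default_factory=list)
    data_of_interest: List[str] = field(default_factory=list)
    areas_of_concern: List[str] = field(default_factory=list)
    pass_fail_criteria: str = ""
    completion_criteria: str = ""
    planned_variants: List[str] = field(default_factory=list)
    load_range: str = ""             # e.g., "100-500 virtual users"
    metrics_to_collect: List[str] = field(default_factory=list)
    collection_method: str = ""
    assistance: str = ""             # who will assist, how, and when
    repeatability_notes: str = ""    # anything needed to repeat the item later

# Hypothetical example drawn from the mapping table above.
login_item = WorkItem(
    name="500-user login over 5 minutes",
    reason="Validate interim login requirement",
    priority=1,
    load_range="0-500 virtual users, 5-minute ramp-up",
    pass_fail_criteria="All 500 users log in within the 5-minute window",
    metrics_to_collect=["login response time", "server CPU", "errors"],
)
print(login_item.name, "- priority", login_item.priority)
```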
Completing this information constitutes a draft or initial performance test plan. In most cases, this draft should be reviewed, potentially enhanced, and approved by the appropriate managers or stakeholders prior to executing the plan.
Activity 5. Design Tests
Designing performance tests involves identifying key usage scenarios, determining appropriate user variances, identifying and generating test data, and specifying the metrics to be collected. Ultimately these items will provide the foundation for workloads and workload profiles.
When designing and planning tests, the intent is to simulate real-world tests that can provide reliable data to help facilitate making informed business decisions. Real-world test designs will significantly increase the reliability and usefulness of results data.
Key usage scenarios for the application under test typically surface during the process of identifying the application’s desired performance characteristics. If this is not the case for your test project, you will need to explicitly determine the usage scenarios that are the most valuable to script. Consider the following when identifying key usage scenarios, remembering to think about both human and system users, such as batch processes and external applications:
- Contractually obligated usage scenario(s)
- Usage scenarios implied or mandated by performance-testing goals and objectives.
- Most common usage scenario(s)
- Business-critical usage scenario(s)
- Performance-intensive usage scenario(s)
- Usage scenarios of technical concern
- Usage scenarios of stakeholder concern
- High-visibility usage scenarios
After the key usage scenarios have been identified, they will need to be elaborated into tests. This elaboration process typically involves the following activities:
- Determine navigation paths for key scenarios.
- Determine individual user data and variances.
- Determine relative distribution of scenarios.
- Identify target load levels.
- Identify metrics to be captured during test execution.
Determine Navigation Paths for Key Scenarios
Human beings are unpredictable, and Web sites commonly offer redundant functionality. Even with a relatively small number of users, it is almost certain that real users will not only use every path you think they will to complete a task, but they also will inevitably invent some that you had not planned. Each path a user takes to complete an activity will place a different load on the system. That difference may be trivial, or it may be enormous ― there is no way to be certain until you test it. There are many methods to determine navigation paths, including:
- Identifying the user paths within your Web application that are expected to have significant performance impact and that accomplish one or more of the identified key scenarios.
- Reading design and/or usage manuals.
- Trying to accomplish the activities yourself.
- Observing others trying to accomplish the activity without instruction (other than what would be given to a new user prior to his or her first use of the system).
- Analyzing empirical data from Web server logs captured during pre-production releases and usage studies.
Determine Individual User Data and Variances
During the early stages of development and testing, user data and variances are most often estimated based on expected usage and observation of users working with similar applications. These estimates are generally enhanced or revised when empirical data from Web server logs becomes available. Some of the more useful metrics that can be read or interpreted from Web server logs include the following (a brief extraction sketch appears after the list):
- Page views per period. A page view is a page request that includes all dependent file requests (.jpg files, CSS files, etc.). Page views can be tracked over hourly, daily, or weekly time periods to account for cyclical patterns or bursts of peak user activity on the Web site.
- User sessions per period. A user session is the sequence of related requests originating from a user visit to the Web site, as explained previously. As with page views, user sessions can span hourly, daily, and weekly time periods.
- Session duration. This metric represents the amount of time a user session lasts, measured from the first page request until the last page request is completed, including the time the user pauses when navigating from page to page.
- Page request distribution. This metric represents the distribution, in percentages, of page hits according to functional types (Home, login, Pay, etc.). The distribution percentages will establish a weighting ratio of page hits based on the actual user utilization of the Web site.
- Interaction speed. Also known as “user think time,” “page view time,” and “user delay,” this metric represents the time users take to transition between pages when navigating the Web site; this is their think-time behavior. It is important to remember that every user will interact with the Web site at a different rate.
- User abandonment. This metric represents the length of time that users will wait for a page to load before growing dissatisfied, exiting the site, and thus abandoning their user session. Abandoned sessions are quite normal on the Internet and consequently will have an impact on the load test results.
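As a minimal sketch of how some of these metrics might be derived, the following Python fragment works over a handful of hypothetical, pre-parsed log records; in practice the records would come from your Web server's log files, and a real log-analysis tool would be considerably more robust.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical, pre-parsed log records: (session_id, page, timestamp).
# In practice these would be extracted from the Web server's log files.
records = [
    ("s1", "/home",  datetime(2024, 5, 1, 9, 0, 5)),
    ("s1", "/login", datetime(2024, 5, 1, 9, 0, 42)),
    ("s1", "/pay",   datetime(2024, 5, 1, 9, 2, 10)),
    ("s2", "/home",  datetime(2024, 5, 1, 9, 1, 0)),
    ("s2", "/login", datetime(2024, 5, 1, 9, 5, 30)),
]

# Page views per period (here, per hour).
views_per_hour = defaultdict(int)
for _, _, ts in records:
    views_per_hour[ts.replace(minute=0, second=0)] += 1

# Session duration: first request to last completed request in each session.
session_times = defaultdict(list)
for session_id, _, ts in records:
    session_times[session_id].append(ts)
durations_s = {sid: (max(times) - min(times)).total_seconds()
               for sid, times in session_times.items()}

# Page request distribution by functional type, as a percentage of all hits.
page_hits = defaultdict(int)
for _, page, _ in records:
    page_hits[page] += 1
distribution = {page: 100.0 * hits / len(records) for page, hits in page_hits.items()}

print(dict(views_per_hour), durations_s, distribution, sep="\n")
```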
Determine the Relative Distribution of Scenarios
Having determined which scenarios to simulate and what the steps and associated data are for those scenarios, and having consolidated those scenarios into one or more workload models, you now need to determine how often users perform each activity represented in the model relative to the other activities needed to complete the workload model.
Sometimes one workload distribution is not enough. Research and experience have shown that user activities often vary greatly over time. To ensure test validity, make sure that activity distributions are evaluated according to time of day, day of week, day of month, and time of year. The most common methods for determining the relative distribution of activities include:
- Extract the actual usage, load values, common and uncommon usage scenarios (user paths), user delay time between clicks or pages, and input data variance (to name a few) directly from log files.
- Interview the individuals responsible for selling/marketing new features to find out what features/functions are expected and therefore most likely to be used. By interviewing existing users, you may also determine which of the new features/functions they believe they are most likely to use.
- Deploy a beta release to a group of representative users — roughly 10 to 20 percent of the size of the expected user base — and analyze the log files from their usage of the site.
- Run simple in-house experiments using employees, customers, clients, friends, or family members to determine, for example, natural user paths and the page-viewing time differences between new and returning users.
- As a last resort, you can use your intuition, or best guess, to make estimations based on your own familiarity with the site.
Once you are confident that the model is good enough for performance testing, supplement the model with the individual usage data you collected previously in such a way that the model contains all the data you need to create the actual test.
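As a simple illustration of how a finished workload model can drive a test, the sketch below uses hypothetical scenario names and percentages to select a scenario for each simulated user session according to the relative distribution.

```python
import random

# Hypothetical relative distribution of key scenarios (percentages total 100).
workload_model = {
    "browse_catalog": 50,
    "search": 30,
    "login_and_pay": 15,
    "update_profile": 5,
}

def pick_scenario(model, rng=random):
    """Choose one scenario for a simulated user session, weighted by the model."""
    scenarios = list(model.keys())
    weights = list(model.values())
    return rng.choices(scenarios, weights=weights, k=1)[0]

# Distribute 1,000 simulated user sessions across the scenarios.
counts = {name: 0 for name in workload_model}
for _ in range(1000):
    counts[pick_scenario(workload_model)] += 1
print(counts)  # roughly proportional to the model's percentages
```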
Identify Target Load Levels
A customer visit to a Web site comprises a series of related requests known as a user session: a sequence of actions in a navigational page flow undertaken by a single visitor to the site. Users with different behaviors who navigate the same Web site are unlikely to cause overlapping requests to the Web server during their sessions. Therefore, instead of modeling the user experience on the basis of concurrent users, it is more useful to base your model on user sessions.
Without some degree of empirical data, target load levels are exactly that — targets. These targets are most frequently set by the business, based on its goals related to the application and whether those goals are market penetration, revenue generation, or something else. These represent the numbers you want to work with at the outset.
As soon as Web server logs for a pre-production release or a current implementation of the application become available, you can use data from these logs to validate and/or enhance the data collected from the sources described above. By performing a quantitative analysis on Web server logs, you can determine:
- The total number of visits to the site over a period of time (month/week/day).
- The volume of usage, in terms of total averages and peak loads, on an hourly basis.
- The duration of sessions for total averages and peak loads, on an hourly basis.
- The total hourly averages and peak loads translated into overlapping user sessions to simulate real scalability volume for the load test.
By combining the volume information with objectives, key scenarios, user delays, navigation paths, and scenario distributions from the previous steps, you can determine the remaining details necessary to implement the workload model under a particular target load.
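One common back-of-the-envelope way to translate hourly session volume into an overlapping-session target is Little's Law: concurrent sessions are approximately the session arrival rate multiplied by the average session duration. The sketch below applies this to hypothetical peak-hour figures.

```python
# Hypothetical figures derived from Web server log analysis.
peak_sessions_per_hour = 4000      # user sessions started during the peak hour
average_session_duration_s = 540   # average session length, in seconds

# Little's Law: concurrency = arrival rate (sessions/sec) * duration (sec).
arrival_rate = peak_sessions_per_hour / 3600.0
concurrent_sessions = arrival_rate * average_session_duration_s

print(f"Target overlapping user sessions: {concurrent_sessions:.0f}")
# 4000 / 3600 * 540 is roughly 600 overlapping sessions to simulate at peak load.
```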
Identify Metrics to Be Captured During Test Execution
When identified, captured, and reported correctly, metrics provide information about how your application’s performance compares to your desired performance characteristics. In addition, metrics can help you identify problem areas and bottlenecks within your application.
It is useful to identify the metrics that relate to the performance acceptance criteria during test design so that the method of collecting those metrics can be integrated into the tests when implementing the test design. When identifying metrics, use either specific desired characteristics or indicators that are directly or indirectly related to those characteristics.
Considerations
Consider the following key points when designing a test:
- Real-world test designs are sensitive to dependencies outside the control of the system, such as humans and other systems interacting with the application.
- Realistic test designs are based on real operations and data, not mechanistic procedures.
- Realistic test designs produce more credible results and thus enhance the value of performance testing.
- Realistic simulation of user delays and think times is crucial to the accuracy of the test.
- If users are likely to abandon a task for any reason, this should be accounted for in your test design (see the sketch after this list).
- Remember to include common user errors in your scenarios.
- Component-level performance tests are an integral part of real-world testing.
- Real-world test designs can be more costly and time-consuming to implement, but they deliver far more accurate results to the business and stakeholders.
- Extrapolation of performance results from unrealistic tests can become increasingly inaccurate as the system scope increases, and frequently leads to poor decisions.
- Involve the developers and administrators in the process of determining which metrics are likely to add value and which method best integrates the capturing of those metrics into the test.
- Beware of allowing your tools to influence your test design. Better tests almost always result from designing tests on the assumption that they can be executed and then adapting the test or the tool when that assumption is proven false, rather than by not designing particular tests based on the assumption that you do not have access to a tool to execute the test.
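To make the think-time and abandonment considerations concrete, the following sketch walks hypothetical simulated users through a scenario with randomized think times and an assumed abandonment threshold; the parameters and response times are invented, and a real load-testing tool would normally handle this behavior for you.

```python
import random

# Hypothetical think-time and abandonment parameters for one scenario.
THINK_TIME_RANGE_S = (5.0, 30.0)   # users pause 5-30 seconds between pages
ABANDON_AFTER_S = 8.0              # users give up if a page takes longer than 8 seconds

def simulated_page_response() -> float:
    """Stand-in for issuing a real request; returns a response time in seconds."""
    return random.uniform(0.5, 12.0)

def run_virtual_user(pages: int = 5) -> bool:
    """Walk one simulated user through a scenario.

    Returns True if the user completed the scenario, or False if the user
    abandoned it because a page exceeded the abandonment threshold.
    """
    for _ in range(pages):
        response_time = simulated_page_response()
        if response_time > ABANDON_AFTER_S:
            return False   # session abandoned
        # A real virtual user would now pause for think time, for example:
        # time.sleep(random.uniform(*THINK_TIME_RANGE_S))
    return True

completed = sum(run_virtual_user() for _ in range(1000))
print(f"{completed} of 1000 simulated users completed the scenario")
```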
Activity 6. Configure the Test Environment
It may be the case that this step does not apply to your project due to regulatory stipulations. For example, it may be necessary to conduct performance testing in a particular lab, supervised by a particular agency. If that is the case for your project, feel free to skip the rest of this step; if not, consider the following.
Load-generation and application-monitoring tools are almost never as easy to get up and running as you might expect. Whether issues arise from setting up isolated network environments, procuring hardware, coordinating a dedicated bank of IP addresses for IP spoofing, or establishing version compatibility between monitoring software and server operating systems, there always seem to be issues.
To exacerbate the potential for problems, load-generation tools inevitably lag behind evolving technologies and practices, and that cannot be avoided: tool creators cannot build in support for every technology, and vendors generally will not even start developing support for a particular technology until it has become prominent from their perspective.
This often means that the biggest challenge involved in a performance-testing project is getting your first relatively realistic test implemented, with users simulated in such a way that the application under test cannot legitimately tell the difference between the simulated users and real ones. Plan for this, and do not be surprised when it takes significantly longer than expected to get it all working smoothly.
Activity 7. Implement the Test Design
The details of creating an executable performance test are extremely tool-specific. Regardless of the tool that you are using, creating a performance test typically involves taking a single instance of your test script and gradually adding more instances and/or more scripts over time, thereby increasing the load on the component or system. A single instance of a test script frequently equates to a single simulated or virtual user.
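Although the mechanics are tool-specific, the underlying pattern usually resembles the following thread-based sketch, in which each thread stands in for one instance of a test script and load is increased in steps; the user counts and intervals are hypothetical, and a real tool would replace test_script with your recorded script and collect timings.

```python
import threading
import time

def test_script(stop: threading.Event) -> None:
    """Stand-in for one scripted virtual user; loops until the test ends."""
    while not stop.is_set():
        # A real script would issue requests and record response times here.
        time.sleep(0.1)

def ramp_up(total_users: int = 10, step: int = 2, step_interval_s: float = 1.0) -> None:
    """Start virtual users in steps until the target load is reached."""
    stop = threading.Event()
    users = []
    while len(users) < total_users:
        for _ in range(min(step, total_users - len(users))):
            user = threading.Thread(target=test_script, args=(stop,))
            user.start()
            users.append(user)
        print(f"{len(users)} virtual users running")
        time.sleep(step_interval_s)   # hold this load level before the next step
    stop.set()                        # end of this (very short) demonstration test
    for user in users:
        user.join()

if __name__ == "__main__":
    ramp_up()
```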
Activity 8. Execute Work Items
When an iteration completes or a delivery is made, the performance testing begins with the highest-priority performance test work item related to that delivery that is reasonable to conduct. At the conclusion of each work item, make your findings available to the team, reprioritize the remaining work items to be conducted during the phase, and then move on to the next-highest-priority execution plan. Whenever possible, limit work item executions to one to two days each. By doing so, no time will be lost if the results from a particular work item turn out to be inconclusive, or if the initial test design needs modification in order to produce the intended results.
In general, the keys to performance test work item execution include:
- Analyzing results immediately so you can re-plan accordingly
- Communicating frequently and openly across the team
- Recording results and significant findings
- Recording other data needed to repeat the test later
- Revisiting performance testing priorities every few days
- Adapting the test plan and approach as necessary, gaining the appropriate approval for your changes as required.
Activity 9. Report Results and Archive Data
Even though you are sharing data and preliminary results at the conclusion of each work item, it is important to consolidate results, conduct trend analysis, create stakeholder reports, and do collaborative analysis with developers, architects, and administrators before starting the next performance-testing phase. You should allow at least one day between each phase, though you may need more time as the project nears completion. These short analysis and reporting periods are often where the “big breaks” occur. Reporting every few days keeps the team informed, but note that issuing only summary reports rarely tells the whole story.
Part of the job of the performance tester is to find trends and patterns in the data, which can be a very time-consuming endeavor. It also tends to inspire a re-execution of one or more tests to determine if a pattern really exists, or if a particular test was skewed in some way. Teams are often tempted to skip this step to save time. Do not succumb to that temptation; you might end up with more data more quickly, but if you do not stop to look at the data collectively on a regular basis, you are unlikely to know what that data means until it is too late.
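A very simple example of this kind of trend analysis is comparing a key statistic across performance builds and flagging suspicious changes, as in the sketch below; the builds, values, and threshold are hypothetical.

```python
# Hypothetical 90th-percentile response times (seconds) per performance build.
percentile_90_by_build = {
    "Iteration 1": 4.2,
    "Iteration 2": 4.4,
    "Iteration 3": 5.9,
}
REGRESSION_THRESHOLD = 0.10   # flag a build more than 10% slower than the previous one

builds = list(percentile_90_by_build.items())
for (prev_name, prev_value), (name, value) in zip(builds, builds[1:]):
    change = (value - prev_value) / prev_value
    if change > REGRESSION_THRESHOLD:
        print(f"Possible regression: {name} is {change:.0%} slower than {prev_name}")
```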
As a rule, when a performance test work item is completed, all of the test scripts, test data, test results, environment configurations, and application version information need to be archived for future reference. The method for archiving this information can vary greatly from team to team. If your team does not already have an archival standard, ensure that you include one in your performance test plan.
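If your team needs a starting point, the following sketch shows one possible archival convention: copy each completed work item's artifacts into a timestamped folder along with a manifest describing the run. The function name, paths, and metadata fields are hypothetical.

```python
import json
import shutil
import time
from pathlib import Path
from typing import Dict, List

def archive_work_item(work_item_name: str, artifact_paths: List[str],
                      metadata: Dict[str, str], archive_root: str = "perf-archive") -> Path:
    """Copy a completed work item's artifacts into a timestamped folder
    together with a manifest describing the run."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    destination = Path(archive_root) / f"{work_item_name}-{stamp}"
    destination.mkdir(parents=True, exist_ok=True)
    for path in artifact_paths:
        shutil.copy2(path, destination)   # scripts, data, results, configurations
    (destination / "manifest.json").write_text(json.dumps(metadata, indent=2))
    return destination

# Hypothetical usage after a work item completes (paths shown do not exist here):
# archive_work_item(
#     "login-500-users",
#     ["login_test.script", "results.csv", "env-config.txt"],
#     {"application_version": "2.3.1", "tester": "jdoe", "notes": "interim requirement"},
# )
```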
It is generally acceptable to forgo archiving data related to tests deemed invalid due to obvious mistakes in test setup or execution. Check the compliance criteria that apply to your project. When in doubt, archive the data anyway, but include a note describing the mistake or error.
Activity 10. Modify the Plan and Gain Approval for Modifications
At the completion of each testing phase, it is important to review the performance test plan. Mark the work items that have been completed and evaluate what, if any, cascading effects those completed items have on the plan. For example, consider whether a completed work item eliminates an alternate or exception case for a future phase, or if a planned work item needs to be rescheduled for some reason.
Once you have adjusted the plan, remember to gain approval for the adjustments as required by your process and compliance regulations.
Activity 11. Return to Activity 5
Once the plan has been updated and approved, return to Activity 5 to continue testing with the next delivery, iteration, or checkpoint release. However, that is easier said than done. Sometimes, no matter how hard you try, there are simply no valuable performance-testing tasks to conduct at this time. This could be due to environment upgrades, mass re-architecting/refactoring, or other work that someone else needs time to complete. If you find yourself in this situation, make wise use of your time by preparing as much of the final report as you can based on the available information.
Activity 12. Prepare the Final Report
Even after the performance testing is complete and the application has been released for production or for Independent Verification and Validation (IV&V), the job is not done until the final report is completed, submitted to the relevant stakeholders, and accepted. Frequently, these reports are very detailed and well-defined. If you did a good job of determining compliance criteria in Activity 1, this should be a relatively straightforward, if somewhat detailed and time-consuming, task.
Summary
Performance testing in CMMI, auditable, and highly regulated projects entails managing the testing in a highly planned, monitored environment. This type of performance testing is particularly challenging because it is frequently impossible to conduct the next planned activity until you have resolved any defects detected during the previous activity. The key to managing performance testing in such environments is to map work items to the project plan and add details to the plan.