
Chapter 8 – Evaluating Systems to Increase Performance Testing Effectiveness

Objectives

  • Learn techniques to effectively and efficiently capture the system’s functions.
  • Learn techniques to effectively and efficiently capture expected user activities.
  • Learn techniques to effectively and efficiently capture the system’s logical and physical architecture.

Overview

Although system evaluation is an ongoing process throughout the performance testing effort, it offers greater value when conducted early in the test project. The intent of system evaluation is to collect information about the project as a whole, the functions of the system, the expected user activities, the system architecture, and any other details that are helpful in guiding performance testing to achieve the specific needs of the project. This information provides a foundation for collecting the performance goals and requirements, characterizing the workload, creating performance-testing strategies and plans, and assessing project and system risks.

A thorough understanding of the system under test is critical to a successful performance-testing effort. The measurements gathered during later stages are only as accurate as the models that are developed and validated in this stage. The evaluation provides a foundation for determining acceptable performance; specifying performance requirements of the software, system, or component(s); and identifying any risks to the effort before testing even begins.

How to Use This Chapter

Use this chapter to learn how to evaluate systems for a performance-testing effort. The chapter walks you through the main activities involved in system evaluation. To get the most from this chapter:

  • Use the “Approach for Evaluating the System” section to get an overview of the activities included in system evaluation, and as a quick reference guide for you and your team.
  • Use the remaining sections of the chapter to understand the details of each system-evaluation activity and the reasoning behind it.

Approach for Evaluating the System

Evaluating the system includes, but is not limited to, the following activities:

  • Identify the user-facing functionality of the system.
  • Identify non–user-initiated (batch) processes and functions.
  • Determine expected user activity.
  • Develop a reasonable understanding of potential user activity beyond what is expected.
  • Develop an exact model of both the test and production architecture.
  • Develop a reasonable model of actual user environments.
  • Identify any other processes/systems using the architecture.

These activities can be accomplished by following these steps:

  • Capture system functions and/or business processes.
  • Capture user activities.
  • Capture the logical and physical architecture.

These steps are explained in detail in the following sections.

Capture System Functions and/or Business Processes

In this step, you identify the system’s core functions to help build the performance acceptance criteria. Subsequently, workload models can be assessed to validate both the acceptance criteria and the collection of system functions.

For performance testing, it is essential to identify the core functions of the system under test. This enables you to make an initial determination of performance acceptance criteria, as well as the user community models used to assess the application’s success in meeting these acceptance criteria.
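
For illustration, the captured functions can be recorded in a simple structure that later feeds the draft acceptance criteria and workload models. The following Python sketch is a minimal example only; the function names, initiators, and response-time goals are hypothetical placeholders rather than recommended values.

    # Minimal sketch: recording captured system functions so they can later be
    # mapped to draft performance acceptance criteria and workload models.
    # All function names and goal values below are hypothetical examples.

    captured_functions = [
        # (function or business process, initiated by, draft response-time goal in seconds)
        ("Log in",                 "user",  2.0),
        ("Search catalog",         "user",  3.0),
        ("Submit order",           "user",  5.0),
        ("Nightly inventory sync", "batch", None),  # batch jobs are judged by completion window
    ]

    def draft_acceptance_criteria(functions):
        """Turn the captured list into a first-cut set of draft acceptance criteria."""
        criteria = []
        for name, initiator, goal_seconds in functions:
            if goal_seconds is not None:
                criteria.append(f"{name} ({initiator}-initiated): "
                                f"95th-percentile response time <= {goal_seconds} seconds")
            else:
                criteria.append(f"{name} ({initiator}): completes within the agreed batch window")
        return criteria

    for line in draft_acceptance_criteria(captured_functions):
        print(line)

Whatever form the record takes, the point is to capture each core function once, note how it is initiated, and leave room for the acceptance criteria that later steps will refine.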

To ensure that all of the system functions are captured, start by meeting with stakeholders to determine the overall purpose of the system or application. Before you can determine how best to test a system, you must completely understand the intent of the system. It is often the case that the project documents do not explicitly express all of the functionality implied by the stakeholders’ vision. This is why it is a good idea to start with the stakeholders before moving on to evaluate documentation.

Valuable resources for determining system functionality include:

  • Interviews with stakeholders
  • Contracts
  • Information about how similar applications are used
  • Client expectations
  • Your own experiences with similar applications
  • Design documents
  • State transition diagrams
  • Requirements and use cases
  • Marketing material
  • Project plans
  • Business cycles
  • Key business processes

Considerations

Consider the following key points when capturing system functions and/or business processes:

  • Meet with stakeholders to determine the overall purpose of the system.
  • Keep in mind that contracts and documentation may deviate from the stakeholders’ views of the system.
  • System functions may be user-initiated operations, scheduled (batch) processes, or processes that are not directly related to the system but nevertheless influence it, such as virus scans and data backups.
  • Interviews, documents, and plans frequently contain high-level functions that include a lot of implied functionality. For example, “provide a secure log-in method” implies session tracking, lost-password retrieval, new user creation, user identification, user roles and permissions, and so on.

Capture User Activities

In this step, you identify the key user activities for the application under test. Because it is impractical and virtually impossible to simulate every possible user task or activity in a performance test, you need to decide which activities are most important to simulate. However, before you can do this, you must determine what the possible user activities are.

One place to start is to evaluate the competition’s Web site (or application, since competing applications may not be Web-based). Whether or not it is explicitly stated, at some point during the project it is likely to become very obvious that the goal is to allow your users to perform all of the activities available from the competitor. Knowing what these activities are in advance will prevent you from being surprised when they show up in the application — whether or not they appear in any of the documentation.

Valuable resources for determining user activities include:

  • Information about how similar applications are used
  • Client expectations
  • Your own experiences with similar applications
  • Requirements and use cases
  • Interviews with stakeholders
  • Marketing material
  • Help and user documentation
  • Client organizational chart
  • Network or application security matrix
  • Historical data (invoices, Web logs, etc.)
  • Major business cycles (monthly calculation, year-end process, five-year archiving, etc.)

Once you have collected a list of what you believe are all the activities a user can perform, circulate the list among the team along with the question, “What else can a user of any type possibly do with this application that isn’t on this list?”
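
Once the list is reasonably complete, the activities judged most important to simulate can be expressed as relative weights in a first-cut workload model. The Python sketch below uses hypothetical activity names and percentages purely for illustration; real weights should come from interviews, Web logs, or other historical data.

    # Minimal sketch: a first-cut workload model built from the collected list of
    # user activities. Activity names and percentages are hypothetical examples.

    workload_model = {
        "Browse products":      50,  # percent of simulated users performing this activity
        "Search":               25,
        "Add to cart":          15,
        "Checkout":              8,
        "Admin: update prices":  2,
    }

    total = sum(workload_model.values())
    assert total == 100, f"Activity weights should sum to 100 percent, got {total}"

    def users_per_activity(total_virtual_users):
        """Distribute a given number of virtual users across the activities."""
        return {activity: round(total_virtual_users * weight / 100)
                for activity, weight in workload_model.items()}

    print(users_per_activity(200))

A model like this is only a starting point; it should be validated against the same stakeholders and data sources listed above.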

Considerations

Consider the following key points when capturing user activities:

  • Evaluate the competitor’s Web site, since it is likely that keeping up with the competition will eventually become a project goal.
  • Remember to take all categories of users into account when soliciting possible user activities. Customers, administrators, vendors, and call-center representatives are likely to use and have access to very different aspects of the application that may not be easily found in the documentation.
  • Spend extra time soliciting exception- and error-case activities, which are often implied or buried in documentation.
  • If you find activities missing that seem important to you, or that appear in competing applications, consult with the relevant team members as soon as possible. These may indicate unintentional oversights.

Capture the Logical and Physical Architecture

In this step, you identify the relationship between the application and the structure of the hardware and software. This information is critical when you are designing performance tests to address specific areas of concern, and when you are trying to locate a performance bottleneck.

A poor understanding of system architecture can lead to adverse effects on performance testing later in the project and can add time to the tuning process. To capture the logical and physical architecture, the performance tester generally meets with technical stakeholders, architects, and administrators for both the production and test environments. This is critical because designing an effective test strategy requires the performance tester to be aware of which components or tiers of the system communicate with one another and how they do so. It is also valuable to understand the basic structure of the code and contributing external software.

Because the term “architecture” is used in so many different ways by different teams, the following sections have been included for clarity.

Logical Architecture

Logical architecture, as it is used in this chapter, refers to the structure, interaction, and abstraction of software and/or code. That code may include everything from objects, functions, and classes to entire applications. You will have to learn the code-level architecture from your team. When doing so, remember to additionally explore the concept of logical architectural tiers.

The most basic architecture for Web-based applications is known as the three-tier architecture, where those tiers often correspond to physical machines with roles defined as follows:

  • Client tier (the user’s machine) – presents requested data.
  • Presentation tier (the Web server) – handles all business logic and serves data to the client(s).
  • Data storage tier (the database server) – maintains data used by the system, typically in a relational database.

Figure 8.1 Three-tier Architecture

More complex architectures may include more tiers, clusters of machines that serve the same role, or even single machines serving as the host for multiple logical tiers.

Figure 8.2 Multi-tier Architecture

Specifically, this complexity implies the following:

  • It is reasonable to think of a logical tier as a grouping of related functions.
  • Any tier that is depicted in a logical diagram may span more than one physical machine, share one or more machines with one or more other tiers, or be exclusively tied to a dedicated machine.
  • Arrows connecting logical tiers represent a flow of data, not network cables or other physical connections.
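
To keep these distinctions straight, it can help to record the logical-to-physical mapping explicitly. The Python sketch below uses hypothetical tier and host names; it simply illustrates that the mapping is many-to-many, with one tier spanning two machines and one machine hosting two tiers.

    # Minimal sketch: mapping logical tiers to physical machines.
    # Tier and host names are hypothetical examples.

    tier_to_hosts = {
        "presentation (Web server)":      ["web01", "web02"],  # one tier, two machines
        "application / business logic":   ["app01"],
        "authentication":                 ["app01"],           # shares a machine with the application tier
        "data storage (database server)": ["db01"],
    }

    def hosts_for(tier):
        """Physical machines that a test of this tier will actually exercise."""
        return tier_to_hosts[tier]

    def tiers_on(host):
        """Logical tiers that compete for resources on a single machine."""
        return [tier for tier, hosts in tier_to_hosts.items() if host in hosts]

    print(hosts_for("presentation (Web server)"))  # ['web01', 'web02']
    print(tiers_on("app01"))                       # two logical tiers on one machine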

One source of confusion is that virtually no one uses terms such as “file storage tier.” The “file storage tier” is generally referred to as “the file server,” whether or not that tier resides on a dedicated server. The same is true of the presentation tier (Web server), application or business logic tier (application server, often abbreviated as app server), data storage tier (database server), and so on.

Put simply, the key to understanding a logical architecture is that each tier contains a unique set of functionality that is logically separated from that of the other tiers. However, even if a tier is commonly referred to as a “server,” it is not safe to assume that every tier resides on its own dedicated machine.

Physical Architecture

It should be clear that the physical architecture of the environment — that is, the actual hardware that runs the software — is at least as important as the logical architecture.

Many teams refer to the actual hardware as the “environment” or the “network architecture,” but neither term actually encompasses everything of interest to a performance tester. What concerns the tester is generally represented in diagrams where actual, physical computers are shown and labeled with the roles they play, along with the other actual, physical computers with which they communicate. The following diagram shows an example of one such physical architecture diagram.

Figure 8.3 Physical Architecture
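
One convenient way to capture what such a diagram conveys is as a simple list of hosts, their roles, and the hosts each one communicates with, which is exactly the information a performance tester needs when deciding where to generate load and what to monitor. The host names and roles in the Python sketch below are hypothetical examples, not a recommended topology.

    # Minimal sketch: a physical architecture recorded as hosts, roles, and the
    # hosts each one communicates with. All names are hypothetical examples.

    physical_architecture = {
        "lb01":  {"role": "load balancer",      "talks_to": ["web01", "web02"]},
        "web01": {"role": "Web server",         "talks_to": ["app01"]},
        "web02": {"role": "Web server",         "talks_to": ["app01"]},
        "app01": {"role": "application server", "talks_to": ["db01"]},
        "db01":  {"role": "database server",    "talks_to": []},
    }

    def communication_paths(architecture):
        """List every host-to-host connection a performance test might exercise."""
        return [(source, target)
                for source, info in architecture.items()
                for target in info["talks_to"]]

    for source, target in communication_paths(physical_architecture):
        print(f"{source} -> {target}")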

System Architecture

The system architecture is actually just a consolidation of the logical and physical architectures. The diagram below is an example depiction of system architecture. Obviously, it does not include every aspect of the architecture, but it does serve to highlight some points of interest for performance testing, in this case:

  • The authentication and application tiers can each be served by two servers.
  • This mapping of logical tiers onto physical machines provides information that helps you design better performance tests.
  • For example, performance tests can be targeted directly at the application tier.

Putting these two pieces of the puzzle together adds the most value to the performance-testing effort. Having this information at your fingertips, along with the more detailed code architecture of what functions or activities are handled on which tiers, allows you to design tests that can determine and isolate bottlenecks.

Figure 8.4 System Architecture
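
As a concrete illustration of why the consolidated view is useful, the sketch below combines a function-to-tier mapping with a tier-to-host mapping to decide which machines a targeted test should load or monitor. All function, tier, and host names are hypothetical examples.

    # Minimal sketch: combining the function-to-tier mapping with the
    # tier-to-host mapping to pick the machines a targeted test should hit.
    # All function, tier, and host names are hypothetical examples.

    function_to_tier = {
        "Log in":         "authentication",
        "Search catalog": "application",
        "Submit order":   "application",
    }

    tier_to_hosts = {
        "authentication": ["auth01", "auth02"],
        "application":    ["app01", "app02"],
    }

    def target_hosts(function):
        """Machines to load or monitor when testing a specific function."""
        return tier_to_hosts[function_to_tier[function]]

    # A test aimed directly at the application tier, bypassing the Web tier:
    print(target_hosts("Submit order"))  # ['app01', 'app02']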

Considerations

Consider the following key points when capturing the system’s logical and physical architecture:

  • Some teams view testing as a strictly “black-box” activity. Remind and educate the team that designing good performance tests requires knowledge of the entire system, from the load-balancing scheme to the thread-sharing model of code objects. This knowledge allows the performance tester to identify high-risk areas early in the project.
  • To test a Web farm, it is necessary to use Internet Protocol (IP) switching techniques to correctly simulate production because of server affinity for IP addresses.
  • Application servers and Web servers are frequently multi-homed (that is, having more than one network interface card), with one facing the clients/Web server and another facing the Web server/database back end. This is done for security reasons and also to avoid network utilization on one network interface card for both types of traffic. Characteristics such as this can have a significant impact on performance test design, execution, and analysis.
  • The performance tester will not be as effective if he or she is not accepted by the development team as a technical resource. By determining the system architecture, the performance tester can establish himself or herself as a technical resource with the developers and architects.

Summary

Although system evaluation is an ongoing process throughout the performance-testing effort, it provides the most value when conducted early in the performance-testing project.

During the system evaluation process, collect information about the project as a whole, the functions of the system and/or business processes, the expected user activities, the system architecture, and any other details that are helpful in guiding performance testing in order to achieve the project’s specific needs.

This information helps in defining the performance goals and requirements, characterizing the workload, creating performance test strategies and plans, and assessing project and system risks.
