Goal-Based Performance Testing



Goal-based performance testing is a strategy employed in performance testing to ensure that software systems meet specific performance objectives under various conditions. In goal-based performance testing, the testing process is driven by predefined goals or objectives rather than by ad hoc exploration.

Goal-based performance testing proceeds through the following steps:

  1. Defining Performance Goals: The first step involves defining what performance goals are important for the software system being tested. These goals could include response time thresholds, throughput requirements, resource utilization targets, and scalability benchmarks.
  2. Designing Test Scenarios: Test scenarios are designed based on your defined performance goals. These scenarios simulate different user behaviors, loads, and system configurations to evaluate how the software performs under various conditions. For example, scenarios might include peak usage periods, heavy transaction loads, or sudden spikes in user activity.
  3. Executing Tests: Test scenarios are executed against the software system under test. During this phase, performance metrics such as response times, throughput, resource utilization, and system scalability are measured and monitored.
  4. Analyzing Results and Improvement: The collected performance data is analyzed to assess whether the system meets the predefined goals. This analysis helps identify performance bottlenecks, scalability limitations, and areas where the system fails to meet performance requirements. Based on this analysis, make any necessary improvements and optimizations to your software system (see the sketch after this list).
  5. Repeating the Process: Goal-based performance testing is an iterative process. After making improvements, the testing cycle is repeated to validate whether the performance goals have been achieved and to identify any new performance issues that may have arisen.
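As a minimal illustration of steps 1 and 4, the sketch below (plain Python, not tied to any specific tool) defines a set of hypothetical performance goals and checks a run's measured metrics against them. The goal values, metric names, and thresholds are assumptions chosen for illustration only.

```python
from dataclasses import dataclass

@dataclass
class PerformanceGoals:
    # Hypothetical goal values -- replace with your own requirements.
    max_avg_response_time_s: float = 8.0    # response time threshold
    min_throughput_tph: int = 2000          # throughput requirement (transactions/hour)
    max_cpu_utilization_pct: float = 80.0   # resource utilization target

def evaluate(goals: PerformanceGoals, measured: dict) -> list:
    """Return the list of goal violations found in the measured metrics."""
    violations = []
    if measured["avg_response_time_s"] > goals.max_avg_response_time_s:
        violations.append("response time goal missed")
    if measured["throughput_tph"] < goals.min_throughput_tph:
        violations.append("throughput goal missed")
    if measured["cpu_utilization_pct"] > goals.max_cpu_utilization_pct:
        violations.append("resource utilization goal missed")
    return violations

# Made-up measurements from one test execution.
results = {"avg_response_time_s": 6.4, "throughput_tph": 2150, "cpu_utilization_pct": 71.0}
print(evaluate(PerformanceGoals(), results) or "all goals met")
```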

Performance tests primarily check the speed and reliability of an application and are commonly split into load tests (which are goal-oriented) and stress tests. With the rise of agile development methods, it has become crucial to ensure that load testing results can be easily reproduced from one run to the next.

 

The Importance of Defining Performance Goals

Defining performance goals is the cornerstone of goal-based performance testing. These goals serve as the yardstick against which the software’s performance is measured. They provide a tangible framework for assessing whether the software meets its performance requirements under various conditions, including normal usage, peak loads, and stress scenarios.

 

Key Reasons for Goal-Based Performance Testing

  • Aligning with Stakeholder Expectations: By defining specific performance goals, you ensure that your software’s performance aligns with the expectations of stakeholders, including end-users, clients, and project sponsors.
  • Validating Performance Requirements: Goal-based testing helps validate whether your software meets its performance requirements, providing concrete metrics to assess performance adequacy.
  • Optimizing Resource Utilization: Goal-based testing helps optimize resource utilization by identifying inefficiencies or overutilization of system resources, leading to more efficient resource allocation and cost savings.
  • Evaluating Scalability: By measuring performance under increasing loads or user concurrency, goal-based testing verifies that your software can handle growing user bases and workloads.
  • Mitigating Risks: Proactively testing against predefined performance goals helps identify and mitigate performance-related risks before deploying the software, reducing the likelihood of downtime, user dissatisfaction, or financial losses.

 

Goal-Based Use Case: Problem

Suppose 20 concurrent users generate 2,000 transactions hourly in your new CRM application. Your objective is to devise a performance test that verifies response times stay within eight seconds across the next four releases. A stress test may not precisely replicate the expected throughput in these upcoming releases, because response times could vary from the current release.
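A quick back-of-the-envelope check clarifies what this goal implies per user: 2,000 transactions per hour spread over 20 concurrent users is 100 transactions per user per hour, so each simulated user must start a new transaction roughly every 36 seconds. The short Python sketch below makes that pacing explicit; it simply restates the use case figures and assumes ThinkTime fills whatever time the response does not consume.

```python
# Pacing implied by the use case: 2,000 transactions/hour from 20 concurrent users.
transactions_per_hour = 2000
concurrent_users = 20
max_response_time_s = 8.0   # agreed response time boundary

tph_per_user = transactions_per_hour / concurrent_users           # 100 transactions per user per hour
seconds_per_transaction = 3600 / tph_per_user                      # one transaction every 36 s
min_think_time_s = seconds_per_transaction - max_response_time_s   # pause left over if responses take 8 s

print(f"Each user runs one transaction every {seconds_per_transaction:.0f} s; "
      f"at an {max_response_time_s:.0f} s response time that leaves about "
      f"{min_think_time_s:.0f} s of ThinkTime per iteration.")
```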

 

Goal-Based Use Case: Solution

  1. Integrate ThinkTimes into your scripts to introduce pauses between user actions.
  2. Determine the baseline and measure the runtime of single-user scripts to calculate the session time.
  3. Configure the workload parameters, including maximum users, goal-based transaction rate, and goal-based transaction time.
  4. Execute the goal-based performance test to simulate the expected load on the application.
  5. Review the test report to verify if the application managed to handle the load within the predefined response time boundaries.
  6. Repeat the test run in the subsequent four releases to assess whether the application maintains its throughput and response time thresholds over time (see the sketch below).
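To make step 6 concrete, here is a minimal, hypothetical sketch of how results from the four releases might be recorded and compared against the same goals; the release names, numbers, and field names are invented for illustration.

```python
# Hypothetical per-release results: average response time (s) and throughput (transactions/hour).
goal_response_time_s = 8.0
goal_throughput_tph = 2000

release_results = {
    "Release 1": {"avg_response_time_s": 6.1, "throughput_tph": 2040},
    "Release 2": {"avg_response_time_s": 6.8, "throughput_tph": 2010},
    "Release 3": {"avg_response_time_s": 7.9, "throughput_tph": 2005},
    "Release 4": {"avg_response_time_s": 8.6, "throughput_tph": 1900},
}

for release, r in release_results.items():
    ok = (r["avg_response_time_s"] <= goal_response_time_s
          and r["throughput_tph"] >= goal_throughput_tph)
    print(f"{release}: {'PASS' if ok else 'FAIL'} "
          f"({r['avg_response_time_s']} s avg, {r['throughput_tph']} tph)")
```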

 

Recommendations and Tips for Configuring LoadView’s EveryStep Tool

ThinkTime (required):

  • Create new ThinkTime keywords in the EveryStep Web Recorder or reuse existing keywords.
  • Ensure allowed values are floating-point numbers from 0.0 to 999.99.
  • Allow users to manually add ThinkTimes to scripts.
  • Remember that ThinkTimes represent waiting times and are automatically added by the EveryStep Web Recorder during user action recording.
  • Multiple ThinkTimes can exist in one script.
  • ThinkTimes are disregarded in single script test runs.
  • ThinkTimes will be utilized in Calibration/Get Baseline.
  • ThinkTimes do not contribute to response time measurements (a conceptual sketch follows this list).
  • ThinkTimes are ignored in stress tests.
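As a conceptual sketch of the ThinkTime behavior described above (pauses between user actions that are excluded from response time measurement), consider the following plain Python analogy. Real EveryStep scripts use their own keyword syntax; the action timings and pause values here are assumptions.

```python
import time

def perform_action() -> float:
    """Simulate a user action and return its response time (ThinkTime excluded)."""
    start = time.perf_counter()
    time.sleep(0.2)                      # stand-in for the real request/response
    return time.perf_counter() - start

def think_time(seconds: float) -> None:
    """Pause between user actions; the range mirrors the allowed 0.0-999.99 values above."""
    time.sleep(min(max(seconds, 0.0), 999.99))

# A script can contain multiple ThinkTimes between its actions.
login_rt = perform_action()              # "login" step
think_time(3.0)
search_rt = perform_action()             # "search" step
think_time(5.0)
logout_rt = perform_action()             # "logout" step

# Response times are reported without the ThinkTime pauses.
print(f"login {login_rt:.2f}s, search {search_rt:.2f}s, logout {logout_rt:.2f}s")
```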

User Concurrency (optional):

  • Introduce a new “WaitFor (Number of users)” keyword in the EveryStep Web Recorder.
  • This global waiting point blocks simulated users at a specific script section until the expected number of users reaches that part of the script (see the sketch below).
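Conceptually, a WaitFor point behaves like a rendezvous barrier: each simulated user blocks at that point in the script until the expected number of users has arrived. The Python sketch below uses threading.Barrier purely as an analogy; it is not LoadView's implementation.

```python
import random
import threading
import time

EXPECTED_USERS = 5
rendezvous = threading.Barrier(EXPECTED_USERS)    # conceptual stand-in for WaitFor (Number of users)

def simulated_user(user_id: int) -> None:
    time.sleep(random.uniform(0.1, 1.0))          # users reach the waiting point at different times
    print(f"user {user_id} waiting at the rendezvous point")
    rendezvous.wait()                             # blocks until all EXPECTED_USERS have arrived
    print(f"user {user_id} continues concurrently")

threads = [threading.Thread(target=simulated_user, args=(i,)) for i in range(EXPECTED_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```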

Response Time Thresholds (optional):

  • Implement the new SetBoundary keyword in the EveryStep Web Recorder.
  • Syntax: SetBoundary(TimerName, Bound1, Bound2); see the sketch below.
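The sketch below expresses the idea behind SetBoundary in plain Python: a named timer is compared against two response time boundaries and any violation is reported. Only the keyword syntax above comes from the source; treating Bound1 as a warning threshold and Bound2 as an error threshold is an assumption made for illustration.

```python
def set_boundary(timer_name: str, elapsed_s: float, bound1_s: float, bound2_s: float) -> str:
    """Check a measured timer against two SLA boundaries.

    Assumed interpretation: exceeding bound1 produces a warning, exceeding bound2
    produces an error that would also be logged in the test report.
    """
    if elapsed_s > bound2_s:
        return f"ERROR: {timer_name} took {elapsed_s:.2f}s (hard limit {bound2_s:.2f}s)"
    if elapsed_s > bound1_s:
        return f"WARNING: {timer_name} took {elapsed_s:.2f}s (soft limit {bound1_s:.2f}s)"
    return f"OK: {timer_name} took {elapsed_s:.2f}s"

print(set_boundary("SearchTimer", elapsed_s=9.3, bound1_s=6.0, bound2_s=8.0))
```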

Baseline/Calibration (required):

  • LoadView executes a single user test run.
  • ThinkTimes will be used as scripted.
  • LoadView calculates the session time: Session Time = script execution time + ThinkTime (see the sketch below).
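A minimal sketch of the calibration calculation, using the formula above (Session Time = script execution time + ThinkTime); the individual timings are invented for illustration.

```python
# Single-user calibration run: session time = script execution time + ThinkTime.
action_times_s = [1.8, 2.4, 1.1]     # measured response times of the scripted actions
think_times_s = [3.0, 5.0]           # ThinkTimes placed between those actions

script_execution_time_s = sum(action_times_s)
total_think_time_s = sum(think_times_s)
session_time_s = script_execution_time_s + total_think_time_s

print(f"execution {script_execution_time_s:.1f}s + think {total_think_time_s:.1f}s "
      f"= session time {session_time_s:.1f}s")
```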

Configure Workload/Execution Plan (required):

  • Customers specify ramp-up time.
  • Customers define their goal transaction rate.
  • Customers set goal session time.
  • System calculates the number of users (see the sketch after this list).
  • Customers decide whether to calculate response times during ramp-up or not.
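A minimal sketch of such a workload configuration is shown below. The field names are illustrative, and the user count is computed with the formula quoted in the sample goal-based workload at the end of this article, Users = (3600 / TPH) * GST.

```python
from dataclasses import dataclass

@dataclass
class GoalBasedWorkload:
    ramp_up_minutes: int                 # specified by the customer
    goal_transactions_per_hour: int      # goal transaction rate (TPH)
    goal_session_time_s: float           # goal session time (GST)
    measure_during_ramp_up: bool         # calculate response times during ramp-up?

    @property
    def users(self) -> int:
        # Formula taken from the sample goal-based workload shown later in this article.
        return round((3600 / self.goal_transactions_per_hour) * self.goal_session_time_s)

workload = GoalBasedWorkload(ramp_up_minutes=15, goal_transactions_per_hour=500,
                             goal_session_time_s=10, measure_during_ramp_up=False)
print(workload.users)   # 72, matching the Search_User row in the sample workload
```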

Run Test (required):

  • LoadView executes the test according to the configured workload/execution plan.
  • LoadView collects response times of simulated scripts or transactions.
  • LoadView dynamically adjusts ThinkTime to achieve the expected throughput; if the application under test slows down, ThinkTimes are reduced. If ThinkTimes are zero and session time exceeds the goal session time, an error message is raised indicating that the expected throughput could not be reached (sketched after this list).
  • LoadView calculates response times of actual transactions and timers without ThinkTimes.
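The ThinkTime adjustment described above can be sketched as a simple control rule, assuming only the behavior stated in this list: shrink ThinkTime when the application slows down, and raise an error once ThinkTime is exhausted and the script alone exceeds the goal session time.

```python
def adjust_think_time(goal_session_time_s: float, script_execution_time_s: float) -> float:
    """Recompute ThinkTime so that execution time + ThinkTime matches the goal session time.

    If the application under test slows down, execution time grows and the returned
    ThinkTime shrinks. Once no ThinkTime is left and the script alone exceeds the goal,
    the expected throughput cannot be reached and an error is raised.
    """
    remaining = goal_session_time_s - script_execution_time_s
    if remaining < 0:
        raise RuntimeError(
            f"expected throughput could not be reached: execution time "
            f"{script_execution_time_s:.1f}s exceeds goal session time {goal_session_time_s:.1f}s")
    return remaining

# Example with a 36 s goal session time.
print(adjust_think_time(36.0, 6.0))    # fast responses  -> 30.0 s of ThinkTime
print(adjust_think_time(36.0, 20.0))   # slower responses -> ThinkTime shrinks to 16.0 s
# adjust_think_time(36.0, 40.0) would raise: the throughput goal cannot be met.
```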

 

Recommendations and Tips for Integrating with Dotcom-Monitor

EveryStep Web Recorder

  • Introduce new ThinkTime keywords.
  • Ignore ThinkTime during single-user test runs.
  • Add ThinkTime during script recording.
  • Introduce the new WaitFor (Number of users) keyword.
  • Introduce the new SetBoundary(TimerName, Bound1, Bound2) keyword.
  • The WaitFor keyword must be manually added to created scripts.
  • Utilize SetBoundary keyword.

Calibration/Get Baseline

  • Calculate session time during calibration.

Execution Plan/Workload

Option 1:

  • Add a new workload configuration feature.
  • Replace Execution plan with Workload feature.
  • Create a Workload configuration dialog to support stress tests, transaction goals, and other types.
  • Specify the ramp-up time.
  • Check the box for calculating response times during ramp-up (yes/no).

Option 2:

  • Use the enhanced execution plan configuration feature.
  • Select the test type (stress, goal-based).
  • Set the transaction goal details.
  • Specify the ramp-up time.
  • Check the box for calculating response times during ramp-up (yes/no).

Run Test

  • Calculate actual script execution time/session time.
  • Dynamically adjust ThinkTimes based on actual session time.
  • Raise a warning if expected throughput could not be reached.

Report

  • Configure a section for response time, actual vs. thresholds per timer.
  • Configure a section for throughput, actual vs. expected.

 

Tips for Integrating with Dotcom-Monitor: FAQ

 

What are the User Inputs?

  • ThinkTimes (Floating point, >0)
  • Goal Transactions per hour (Integer)
  • Max number of users (Integer)
  • Ramp up time (minutes)
  • Calculate response time during ramp up (Yes / No)

 

What is Baseline?

A single user execution of the device or script, incorporating ThinkTimes. The script execution time is calculated and stored as session time, along with additional details like required execution resources.

 

How Can You Dynamically Adjust the Load Test If Transaction Speed Changes on the Target System?

  • Calculate session time during calibration
  • Use ThinkTimes to reach the requested goal session time
  • Recalculate actual session time during test execution
  • Dynamically adjust ThinkTimes depending on actual session time
  • Log error message if script execution time is > goal session time
  • Specify the number of max users in the workload calculation

 

What is the WaitFor Keyword?

The WaitFor keyword simulates complex user scenarios, particularly concurrency situations. It is useful for testing whether certain functionality works correctly when multiple users access a resource simultaneously.

 

What is the SetBoundary Keyword?

The SetBoundary keyword helps verify the actual speed of a certain action or timer against specified SLA response time boundaries. If the allowed boundary is violated, an error message appears and is logged in the test report, simplifying SLA verification.

 

What Should Your Goals Be for Your Load Test?

  • Ensure 100 percent comparable performance tests across different releases/executions.
  • Include features for simulating regular or peak load patterns.
  • Achieve confidence that the system under test can handle the expected load within agreed boundaries.
  • Focus performance optimization on user actions that violate agreed boundaries.

 

What Type of Reports Should You Configure?

  • Create reports similar to current reports.
  • Include Avg, Min, Max, Stddev, Percentile response times.
  • Track Transactions ok, Transactions failed, and Error rate.
  • All response times should be reported without ThinkTimes included (see the sketch below).
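As an illustration of these report metrics, the short Python sketch below aggregates a hypothetical set of per-transaction response times (already measured without ThinkTimes) into the suggested statistics; the sample values are invented.

```python
import statistics

# Hypothetical response times in seconds, measured without ThinkTimes.
response_times_s = [5.2, 6.1, 5.8, 7.4, 9.3, 6.0, 5.5, 8.1, 6.7, 7.0]
failed_transactions = 1
ok_transactions = len(response_times_s) - failed_transactions

print(f"Avg    {statistics.mean(response_times_s):.2f}s")
print(f"Min    {min(response_times_s):.2f}s")
print(f"Max    {max(response_times_s):.2f}s")
print(f"Stddev {statistics.stdev(response_times_s):.2f}s")
print(f"P95    {statistics.quantiles(response_times_s, n=20)[-1]:.2f}s")
print(f"Transactions ok: {ok_transactions}, failed: {failed_transactions}, "
      f"error rate: {failed_transactions / len(response_times_s):.0%}")
```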

 

Limitations

High goal session times may lead to session timeouts and false positives. It’s crucial to consider scenarios where web session timeouts are low, ensuring ThinkTimes are appropriately placed to prevent failures due to session timeouts.

 

What Will Happen If We Don’t Reach the Goal?

  • If an application slows down under load testing and the session time exceeds the goal session time, the expected transaction rate cannot be reached.
  • LoadView monitors actual session time during test execution and adjusts ThinkTimes to attempt reaching the expected goal-based transaction rate.
  • LoadView displays error messages on the monitoring screen if session time exceeds the goal session time.
  • LoadView continues test execution if the goal transaction rate cannot be achieved, marking the test run as failed and indicating affected devices in the report.

 

What Does a Sample Goal-Based Workload Look Like?

Script/Device   ST (sec, not editable)   GST (sec, user input)   TPH (user input)   Users (not editable)
Search_User     25                       10                      500                72
Inser_User      25                       60                      1000               216

  • Ramp up time: 15 minutes
  • Measure response times during ramp up: Yes / No
  • ST: Session Time
  • GST: Goal Session Time
  • TPH: Transactions per hour
  • Users: calculated by LoadView as (3600 / TPH) * GST; for Search_User, (3600 / 500) * 10 = 72
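For reference, the user counts in the table can be reproduced from the stated formula with a short Python check:

```python
# Users = (3600 / TPH) * GST, as stated above.
for script, gst, tph in [("Search_User", 10, 500), ("Inser_User", 60, 1000)]:
    print(script, round((3600 / tph) * gst))   # prints 72 and 216
```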