When it comes to load testing, there’s always the million-dollar question: “What does the client want to do with their application or system in terms of performance?” You will rarely get an easy answer to that question, so most of us make assumptions and carry out the performance testing, which can mean missing critical pieces and ending up with an unhappy client. An unhappy client means lost business for your organization and a declining career as a performance engineer.

But do not worry. In this article, we are going to talk about creating a checklist that will help you answer those questions. The checklist is a step-by-step process you can follow and adapt to your performance testing life cycle. We can’t always blame our clients in this scenario: initially, they might not know exactly what performance outcomes they want, but they do have clear knowledge of how the application behaves under different conditions. A good performance tester can manage this ambiguity with a well-defined questionnaire. And that is the very first item on the checklist: the requirement gathering questionnaire.

 

Requirement Gathering Questionnaire

Below is a questionnaire format you can follow in your project. Try to get answers to as many of these questions as possible. It is better to schedule a call to discuss them, and make sure an application architect or developer joins the call with you and the client.

Application
  1. Type of application to be considered for testing (for example, web application, mobile application, etc.).
  2. Application architecture and technology/platform.
  3. Load balancing technique used.
  4. Communication protocol between client and server (for example, HTTP/S, FTP).

Performance Testing Objective
  5. Performance metrics to be monitored (for example, response time for each of the actions, concurrent users, etc.).
  6. Critical scenarios to be considered for performance testing.
  7. Details of background scheduled jobs/processes, if any.
  8. Technique used for session management and number of parallel sessions supported.

Customer SLA/Requirements
  9. Expected number of concurrent users and total user base; user distribution for scenarios in scope.
  10. SLA for all transactions in scope for performance testing (expected throughput, response time, etc.).
  11. Types of performance testing to be performed (load testing, endurance testing, etc.).

System/Environment
  12. Test environment details (web/app/DB, etc., along with number of nodes); a production-like environment is recommended for performance test execution.
  13. Comparison of the production environment and the performance test environment.
  14. Whether the application is to be isolated during performance test execution to avoid interaction with other applications.
  15. Details of built-in logging or monitoring mechanisms.

Others
  16. Application performance baseline results.
  17. Current performance issues, if any.

 

Answers to these questions will help you create a test strategy/test plan. If the client isn’t able to answer all of them, don’t worry: we can always explore the application under test and find the answers ourselves. For example, if an APM or log tool is in place, we can derive concurrent users, throughput, and response time from the production system. Never assume that you will get what you need without asking.

 

Find a Suitable Performance Testing Tool

A performance tester should choose the performance testing tool carefully. There are many factors to consider before you finalize and propose a tool. Remember, the client’s budget is always a major factor when choosing a performance testing tool.

If you’re looking for a performance testing tool that is cost-effective, easy to use, and provides a complete performance solution, you should definitely try LoadView. All types of performance testing require time and effort. Load testing can save an organization from potential humiliation, but at the same time, tests on a free, open-source tool like JMeter may not justify the investment. Here is a comparison of LoadView against other performance/load testing solutions and why you should choose LoadView for your performance testing requirements.

When it comes to slow-loading sites and applications, customers quickly become impatient; they will leave and find a replacement if your site or application doesn’t meet their expectations. Knowing how much load your site or application can handle is important for various reasons, and there are several significant scenarios that the LoadView platform can assist with:

  • Adaptability and scalability. Determining why your site or application slows down when multiple users access it.
  • Infrastructure.  Understanding what hardware upgrades are needed, if any. The cost of implementing additional hardware, and maintaining it, could be a potential waste of resources.
  • Performance requirements.  Your site or application can properly handle a few users, but what happens in real-world situations?
  • Third-party services.  See how external services behave under normal, or even peak, load conditions.

 

Performance Testing Plan/Strategy

Once you’ve gathered the client requirements and selected a suitable performance testing tool, you need to document your test plan/test strategy. Make sure to get test plan sign-off from the client before you start any performance activities. This is very important and will help you avoid unnecessary glitches later on. Here are some points that need to be included in the test plan.

  • Performance Test Objectives.  What we are going to achieve.
  • Project Scope.  What is in scope for the project (for example, number of scripts, how long we need to test, etc.).
  • Application Architecture.  Application details such as the app server and DB server; include an architectural diagram if you have one.
  • Environment Details.  Details about the environment we are going to test. It’s always good to have an isolated environment for performance testing.
  • Infrastructure Setup.  Initial setup for the performance testing (for example, cloud environment, tool installation, etc.).
  • Performance Test Approach.  How we’re going to carry out the test. We should start with a baseline test using a small number of users, then gradually increase the user count and perform different types of tests, such as stress and endurance.
  • Entry and Exit Criteria.  This is very important. We should only start performance testing when there are zero functional defects. In the same way, we should document when we can stop performance testing.
  • Defect Management.  We should use the same tool and practices the client follows to log defects related to performance testing.
  • Roles and Responsibilities.  Details about the stakeholders involved in the different activities during performance testing.
  • Assumptions and Risks.  If there are items that pose a risk to performance testing, we should document them.
  • Test Data Strategy.  Details about the test data strategy and how we can extract the data.
  • Test Plan Timeline and Key Deliverables.  Timeline of scripting, test execution, analysis, and deliverables for client review.

 

The actual performance testing relies on a combination of techniques. To achieve the expected goals, we need a different strategy for each performance testing requirement. There are different metrics, such as user load, concurrent users, and workload models, that need to be considered before planning the load. If you have worked on workload modelling before, you will have heard of Little’s Law. Learn it and apply it before planning a test to get the desired results.
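As a quick illustration, Little’s Law relates concurrent users, throughput, and response time (plus think time). Here is a minimal Python sketch; the numbers used are hypothetical:

```python
# Little's Law for closed workload models: N = X * (R + Z)
# N = concurrent users, X = throughput (transactions/sec),
# R = response time (sec), Z = think time (sec)

def concurrent_users(throughput_tps, response_time_s, think_time_s):
    """Users needed to sustain a target throughput."""
    return throughput_tps * (response_time_s + think_time_s)

def achievable_throughput(users, response_time_s, think_time_s):
    """Inverse: throughput a given user count will generate."""
    return users / (response_time_s + think_time_s)

# Example: to drive 10 transactions/sec with a 2 s response time and
# 8 s think time, we need 10 * (2 + 8) = 100 virtual users.
print(concurrent_users(10, 2, 8))        # 100
print(achievable_throughput(100, 2, 8))  # 10.0
```

This is why simply adding virtual users without accounting for think time inflates the load far beyond what real users would generate.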

 

Real-time Performance Scripts/Scenarios

Once we’ve reached an agreement with the client on the performance plan/strategy, we should start preparing scripts using the agreed-upon performance testing tool.

Below are some points to remember in order to build quality scripts:

  • Always follow the documented test case steps while scripting.
  • Replay the script and fix correlation requirements for a single user.
  • Replay the script and fix parameterization requirements for a single user. Parameterization can apply anywhere: headers, cookies, and body parameters.
  • Once the script succeeds with single-user data, run multiple iterations with different users. Additional correlation/parameterization may be required for different user data.
  • In some cases, we may need to write a block of custom code to achieve certain use cases, for example, selecting particular response data for a user and feeding it into another request.
  • Add think time, pacing, and error handling to the script according to the workload model.
  • Text checks and image checks are also very important steps in scripting.
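To make the correlation and parameterization steps concrete, here is a minimal Python sketch. The HTML snippet, field names, and user data are all hypothetical; in a real script, the page body would come from the previous HTTP response:

```python
import re

# Hypothetical response body from a previous request; a recorded script
# would contain a hard-coded (and now expired) token here.
login_page = '<form><input name="csrf_token" value="a1b2c3d4"></form>'

# Correlation: capture the dynamic value from the server response
# instead of replaying the recorded one.
match = re.search(r'name="csrf_token" value="([^"]+)"', login_page)
csrf_token = match.group(1)

# Parameterization: substitute per-user test data into the request body
# (the same idea applies to headers and cookies).
user_data = {"username": "user01", "password": "secret"}
payload = {**user_data, "csrf_token": csrf_token}
print(payload["csrf_token"])  # a1b2c3d4
```

Most commercial and open-source tools (LoadView, JMeter, etc.) provide built-in extractors for exactly this pattern, so custom code is only needed for unusual cases.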

 

In addition to the above steps, there may be requirements to simulate cache/cookie behavior and network conditions. A good performance engineer should consider all these factors before starting the execution. The performance testing engineer should also develop the most realistic patterns of virtual user behavior. To do this, it’s crucial to get correct answers through the requirement gathering questionnaire.

  • What is the total number of users working with the application, and what is the average number per business day (8 hours)?
  • What actions do users perform, and how often?
  • How many users work with the application at the moment of peak load?

Answers to the above questions can be obtained through the requirement gathering questionnaire or from statistical data collected with the help of web analytics tools such as APM tools or Google Analytics. Transaction analysis helps identify the important business transactions and design the performance testing scenarios that will be used for test execution.

 

Workload Modelling

We can plan our workload model in different ways. Performance engineers should learn and practice Little’s Law before selecting a workload model. Below are some common workload models. Again, the performance engineer should choose a workload model based on the requirements at hand.

Workload Model 1.  A simple model where the number of users increases continuously as the test progresses. For example: one user per second until the test is completed.

Workload Model 2.  In this model, the number of users increases in steps over the entire duration of the test. For example, the first 15 minutes will be 100 users, the next 15 minutes will be 200, and so on. We can plan this type of test for endurance testing.

Workload Model 3.  This is the most common performance testing model. The number of users increases continuously for a certain time (we call this the ramp-up period), holds at a steady state for a certain duration, then ramps down until the test finishes. For example, if we’re planning 1.5 hours of testing, we can allow 15 minutes for ramping up the users and 15 minutes for ramping down, with a steady state of one hour. When we analyze the results, we consider only the steady state.

Workload Model 4.  In this model, the number of users increases and decreases suddenly throughout the test duration. There are different names for this type of testing, such as spike testing or monkey testing.
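The ramp-up/steady-state/ramp-down shape of Workload Model 3 can be sketched as a simple per-minute user profile; the numbers below are purely illustrative:

```python
def ramp_profile(total_users, ramp_up_min, steady_min, ramp_down_min):
    """Workload Model 3: target virtual-user count for each minute of the test."""
    profile = []
    for minute in range(ramp_up_min):          # ramp up linearly
        profile.append(round(total_users * (minute + 1) / ramp_up_min))
    profile.extend([total_users] * steady_min)  # hold steady state
    for minute in range(ramp_down_min):         # ramp down linearly
        profile.append(round(total_users * (ramp_down_min - minute - 1) / ramp_down_min))
    return profile

# A 90-minute test: 15 min ramp-up, 60 min steady state, 15 min ramp-down.
profile = ramp_profile(300, 15, 60, 15)
print(profile[0], profile[14], profile[45], profile[-1])  # 20 300 300 0
```

Load testing tools express the same shape through their own scheduler settings (ramp-up period, hold duration, ramp-down), so a sketch like this is mainly useful for sanity-checking the plan.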

We need to establish a comprehensive set of goals and requirements for performance testing. Define the performance testing parameters and what constitutes the acceptance criteria for each one. If you know neither the test nor the desired outcome, the performance testing is a waste of time.

 

Gathering Data

Below are some of the most important metrics on which we base performance workload modelling.

Response time: Response time is the time the server takes to respond to a client request. Many factors affect server response time, and a load test will help find and eliminate the factors that degrade it.

Average response time: The average of all response times over the entire steady-state duration of a load test.

90th percentile response time: The 90th percentile response time is the value below which 90 percent of response times fall. For example, if you had 500 requests, each with a different response time, the 90th percentile is the value that 90 percent of all the response times fall below; only 10 percent of response times will be higher than the 90th percentile value. This metric is useful when a few of your requests have very high response times that skew the average response time.
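The difference between the average and the 90th percentile is easy to see with a small Python sketch using simulated response times:

```python
def percentile(samples, pct):
    """Value below which `pct` percent of the samples fall (nearest-rank method)."""
    ordered = sorted(samples)
    index = max(0, int(len(ordered) * pct / 100) - 1)
    return ordered[index]

# 500 simulated response times (ms): 450 fast requests and 50 slow outliers.
times = [100] * 450 + [5000] * 50
average = sum(times) / len(times)
print(average)                # 590.0 -- skewed upward by the outliers
print(percentile(times, 90))  # 100   -- what 90% of users actually experienced
```

Here the average suggests users wait nearly 600 ms, while 90 percent of requests actually completed in 100 ms, which is exactly why percentiles belong in your SLAs.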

Requests per second: We need to find this value from APM tools or with the help of production logs. Based on the number of concurrent users, we can plan the requests per second sent to the server.

Memory utilization: This is an infrastructure-side metric that helps you uncover bottlenecks. You should also plan your workload to be as realistic as possible; make sure you are not bombarding the server with continuous requests.

CPU utilization: Like memory utilization, this is an infrastructure metric you need to keep an eye on while running the performance tests.

Error rate: We should keep track of the passed/failed transaction ratio. The error rate should not be more than 2 percent of passed transactions. If the error rate increases as concurrent users increase, there may be a potential bottleneck.
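A trivial sketch of the error-rate check; note that for simplicity this version computes failures as a percentage of all transactions, and the 2 percent threshold comes from the guideline above:

```python
def error_rate_pct(passed, failed):
    """Failed transactions as a percentage of all transactions."""
    total = passed + failed
    return 100.0 * failed / total if total else 0.0

# Hypothetical run: 4,900 passed and 100 failed transactions -> 2.0%.
rate = error_rate_pct(4900, 100)
status = "OK" if rate <= 2.0 else "possible bottleneck"
print(f"{rate:.1f}% {status}")  # 2.0% OK
```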

Concurrent users: We need to find this value from APM tools or with the help of production logs. Usually, we derive this value from a typical business day.

Throughput: Throughput shows the server’s capacity to handle concurrent users. There is a direct connection between concurrent users and throughput: throughput should increase as the number of users accessing the application increases. If throughput decreases as the number of users increases, it’s a possible indication of bottlenecks.

User load distribution: User load distribution is one of the important factors to consider while designing the workload. If we have five scripts that exercise five different functionalities of an application, we need to split the user load as realistically as possible based on our investigation of production data or APM tools.
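Splitting the total virtual-user count across scripts by their observed production share can be sketched like this; the script names and percentages are hypothetical:

```python
def distribute_users(total_users, shares):
    """Split the total virtual-user count across scripts by production share."""
    return {script: round(total_users * share) for script, share in shares.items()}

# Hypothetical transaction mix derived from APM/Google Analytics data.
mix = {"browse": 0.40, "search": 0.25, "login": 0.15, "checkout": 0.12, "profile": 0.08}
print(distribute_users(500, mix))
# {'browse': 200, 'search': 125, 'login': 75, 'checkout': 60, 'profile': 40}
```

In practice, a uniform split (100 users per script in this example) would over-test rare flows like checkout and under-test dominant ones like browsing.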

These are the key metrics to keep in mind while designing the workload model. There will be other metrics as well, depending on the client’s application architecture and requirements.

 

Checklist for Performance Testing Environment (Before Execution)

The checklist example below is usually followed for enterprise-level, end-to-end performance testing, where a huge number of systems are involved, but it is recommended to follow an environment checklist for small-scale performance testing as well. Activities will change based on the application environment; there is a big difference between an application in the cloud and one on-premises.

Performance Testing Checklist

 

Conclusion: Load Testing Preparation Checklist

Successful software performance testing doesn’t happen by accident. Architects, product owners, developers, and the performance testing team must work together and address performance testing requirements before assessing the software. For more information on best practices in performance testing, read an introduction to LoadView to see how you can leverage your performance testing to the maximum. Performance testing should be an ongoing process: when your website or application starts growing, you’ll need to make changes to accommodate a larger user base. There is always room for improvement in any application and its tests. We hope this gave you good insight into the best practices in performance testing. Happy performance testing!

Sign up for the LoadView free trial to start running load tests. We’ll give you $20 in load testing credits to get started!