Performance testing is sometimes misunderstood as simply hammering the server with a high throughput of requests, but concepts like think time, pacing, and delays help us reproduce the real user patterns seen in production. Designing a performance test scenario that stays as close as possible to realistic user patterns is crucial for producing results that reveal genuine issues and bottlenecks in an application. In that context, think time and pacing play a significant role when developing load test scenarios. In this article, we will cover think time, pacing, and delays, along with their meaning, best practices, and how to set up these metrics as part of a load test scenario with LoadView. Let's first understand what think time and pacing mean when it comes to load testing.

What is Think Time?

Think time in load testing is the time between each action of a single user. While browsing an application, a user spends some amount of time (think time) before performing the next action on the website. For example, on an e-commerce web application, a user clicks on a product tile, lands on its product display page, and then reads the content on that page before clicking the Add to Cart button. The time spent between clicking the product tile and clicking Add to Cart is the think time. Its value varies from user to user, but for our test scenario we can use the average think time.
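In scripted load tests, think time is usually modeled as a randomized pause between actions, since real users never wait for exactly the same duration. Here is a minimal Python sketch; the think helper and its jitter parameter are illustrative, not part of any particular tool:

```python
import random
import time

def think(base_seconds: float, jitter: float = 0.5) -> float:
    """Pause for a randomized think time around base_seconds.

    The actual delay is drawn uniformly from
    [base_seconds * (1 - jitter), base_seconds * (1 + jitter)]
    to vary the pause per virtual user. Returns the delay used.
    """
    delay = base_seconds * random.uniform(1 - jitter, 1 + jitter)
    time.sleep(delay)
    return delay
```

In a scripted journey, you would call something like think(5) between opening the product page and clicking Add to Cart, so each virtual user "reads" the page for roughly five seconds before acting.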

Typically, when you think about load and stress testing, you think about simply throwing large numbers of concurrent users at your web applications, websites, or APIs to see how they perform under stress. While stress testing has its place in performance testing, this type of test isn't suitable for understanding performance from the user's perspective, because it does not simulate real-world scenarios. This is where think time comes in: it helps better simulate user journey steps, such as paths to purchase, searching for a product, or logging into an account. Each of these steps has a different think time value, and it is important to take these into consideration when load testing.

What is Pacing?

Pacing is used during load tests to make sure we are running the test at the desired transactions per second. It is the time between each complete iteration of a business flow, and it lets us control the number of requests sent to the server per second. Pacing is slightly different from think time: as described above, think time is the delay between actions within an iteration, while pacing is the gap between iterations. Since load testing is not about hitting the server with as many requests as possible with no delay, a test plan with the desired throughput can be achieved by finding the correct pacing value. Pacing, along with think time, also helps better simulate the user's experience and provides a more realistic load test. There is typically a short period of time between iterations, so it is an important factor to consider when setting up your load tests.
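Pacing can be implemented by measuring how long an iteration of the business flow took and sleeping for the remainder of a fixed interval, so each iteration starts on a predictable cadence. A minimal Python sketch, assuming the business flow is represented by a plain callable:

```python
import time

def run_paced(iteration, pacing_seconds: float, iterations: int) -> None:
    """Run `iteration` repeatedly so each cycle takes pacing_seconds.

    If the business flow finishes early, sleep for the remainder of the
    pacing interval; if it overruns the interval, start the next
    iteration immediately rather than sleeping a negative amount.
    """
    for _ in range(iterations):
        start = time.monotonic()
        iteration()
        elapsed = time.monotonic() - start
        if elapsed < pacing_seconds:
            time.sleep(pacing_seconds - elapsed)
```

With a pacing of 8 seconds, each virtual user completes at most one iteration every 8 seconds regardless of how fast the server responds, which is how a target transactions-per-second rate is held steady.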

Why it’s Important to Introduce Delays in Load Testing Scenarios

Load testing an application before a full-scale rollout saves end users from a potentially bad experience with issues like timeouts, slow page responses, and downtime. To get realistic load test results and surface any issues, we need to bring our test scenario as close to reality as possible. Including think time and pacing in our scenario design lets us test how the server's request queuing, thread utilization, and memory management behave under heavy load. For example, if we add think time between each concurrent user's actions, the server tends to pick up other pending tasks from the queue during that delay, execute them, and then return to the original task. This is exactly what happens in production with real users. Adding think time also increases the time each user spends on the application, which exposes issues related to the server's concurrent user handling capacity.

How to Calculate Delays for Applications

The number of concurrent virtual users, delays, and transactions per second (TPS) varies for each application. To work out the delays and user counts for our application, we can use the formula below.

Total Transactions = (Test Duration in seconds / (Response Time + Delays)) * Concurrent Users Count

For example, say we would like to generate 100,000 transactions, each transaction has a response time of 5 seconds, we allow 3 seconds of think time, and we run the test for 10 minutes (600 seconds). Each user completes one transaction every 5 + 3 = 8 seconds, so a single user contributes 600 / 8 = 75 transactions over the test. Rearranging the formula, the concurrent users required is 100,000 / 75, which comes out to roughly 1,333 users. This way we can find the delays and numbers required for load tests.
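The arithmetic above can be wrapped in a small helper for experimenting with different delay values. This Python sketch is illustrative only; the function name and parameters are our own, not part of any load testing tool:

```python
def required_users(total_transactions: float,
                   response_time_s: float,
                   delay_s: float,
                   duration_s: float) -> float:
    """Concurrent users needed to hit total_transactions in duration_s.

    Each user completes one transaction every (response_time_s + delay_s)
    seconds, so one user contributes duration_s / iteration_time
    transactions over the whole test.
    """
    iteration_time = response_time_s + delay_s
    transactions_per_user = duration_s / iteration_time
    return total_transactions / transactions_per_user

# Worked example: 100,000 transactions, 5 s response time,
# 3 s think time, 10-minute (600 s) test.
users = required_users(100_000, 5, 3, 600)  # ~1,333 concurrent users
```

Note how sensitive the result is to delays: dropping the think time from 3 seconds to 0 would cut the iteration time from 8 to 5 seconds and reduce the required user count accordingly, which is why realistic delay values matter when sizing a test.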

Best Practices Before Running a Load Test

To get the best and most accurate results out of performance testing, we should consider answering the questions below, which cover best practices for load tests.

Number of Concurrent Users

We need to understand the expected number of concurrent users we want to benchmark our application against.

Simulating Real User Test Scenarios

Design the test scenario with the real user journey in mind, including the think time spent by the user and the delays between each step.

Geo-distributed Virtual Loads

Load injectors generating the load should be distributed across specific geo-locations if our application is expected to receive traffic from around the globe.

Setting Ramp Up Period

Setting a ramp-up period increases the load on the application gradually, which keeps the test scenario realistic to how real traffic builds up.
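One simple way to model a ramp-up is to spread virtual user start times evenly across the ramp window rather than starting everyone at once. A small illustrative Python sketch (the function and its behavior are our own example, not a specific tool's API):

```python
def ramp_up_schedule(target_users: int, ramp_seconds: float) -> list:
    """Start offsets (seconds from test start) for each virtual user.

    Users are spaced evenly so the final user starts just before the
    end of the ramp window and full load is reached at ramp_seconds.
    """
    interval = ramp_seconds / target_users
    return [round(i * interval, 2) for i in range(target_users)]

# e.g. 10 users over a 60-second ramp: a new user starts every 6 seconds
```

A linear ramp like this avoids the artificial "thundering herd" spike that starting all users simultaneously would create, so early failures can be attributed to a specific load level rather than to the start-up burst.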

Test Duration

The duration of a test is important for understanding how the server behaves when it is put under continuous, sustained load.

Adding Delays with LoadView

LoadView includes the EveryStep Web Recorder, which makes it easy to create test scenarios by recording the actions you perform in a browser. It mimics the exact steps and behavior of the user, collecting all the data points, like selectors, actions, and delays. While creating the test scenario, we should mimic the real user journey, including think time delays. Once we stop the recording, it creates a script which can be rerun with the desired number of concurrent users. As you can see from the image below, we can also modify the script and update delays for individual steps as needed for the test. Learn more about editing EveryStep Web Recorder scripts.

Add Delays to Script

A script built from real user interactions and journeys is considered the best approach for achieving accurate load test results.

User Behavior Profile

Additionally, you have the option to modify user behavior from the LoadView platform. As you can see in the image below, you can choose Normal Delay or set a Custom Delay to define specific user behavior and delays for your applications. Learn more about Adjusting User Behavior.

Adjust User Behavior

Parting Thoughts: Think Time, Pacing, and Delays in Load Testing

Performance testing an application is a critical step before sending it into production, but it can only surface genuine performance issues if best practices are followed and the test scenarios cover real user journeys through the application. In this article, we looked at how keeping think time and pacing delays in mind during test scenario design can help uncover the underlying issues in a system. It helps us find issues like page timeouts, slow page responses, and server errors well in advance of high production load.

These strategies can help us move towards responsive and reliable applications and websites. Try the EveryStep Web Recorder now and see how quickly you can create scripts for your applications.

Sign up for LoadView today and receive free load tests. Questions about the LoadView platform? Reach out to our support team to speak to one of our performance engineers.