
Reinforcement Learning Assisted Load Test Generation for E-Commerce Applications (2007.12094v1)

Published 23 Jul 2020 in cs.PF

Abstract: Background: End-user satisfaction depends not only on the correct functioning of a software system but also, to a large extent, on how well those functions are performed. Performance testing therefore plays a critical role in making sure that the system responsively performs the intended functionality. Load test generation is a crucial activity in performance testing. Existing approaches for load test generation require expertise in performance modeling, or they depend on the system model or the source code. Aim: This thesis aims to propose and evaluate a model-free, learning-based approach for load test generation that does not require access to system models or source code. Method: In this thesis, we treated the problem of optimal load test generation as a reinforcement learning (RL) problem. We proposed two RL-based approaches, using Q-learning and a deep Q-network, for load test generation. In addition, we demonstrated the applicability of our tester agents on a real-world software system. Finally, we conducted an experiment to compare the efficiency of our proposed approaches to a random load test generation approach and a baseline approach. Results: The experiment shows that the RL-based approaches learned to generate effective workloads with smaller sizes and in fewer steps. The proposed approaches led to higher efficiency than the random and baseline approaches. Conclusion: Based on our findings, we conclude that RL-based agents can be used for load test generation, and they act more efficiently than the random and baseline approaches.
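The abstract does not give implementation details, but the core idea of treating workload construction as a sequential decision process and learning it with tabular Q-learning can be sketched as follows. Everything here (the ToyWorkloadEnv, its saturation-based reward, and the action set of virtual-user increments) is an illustrative assumption, not the design evaluated in the thesis; the deep Q-network variant would replace the table with a neural approximator.

```python
import random
from collections import defaultdict

# Minimal toy environment standing in for the system under test. The state is
# the current number of virtual users; actions add load in different step
# sizes; the reward is higher as the (simulated) system approaches its
# saturation point. This is an illustrative assumption, not the setup
# reported in the thesis.
class ToyWorkloadEnv:
    def __init__(self, saturation=50, max_steps=20):
        self.saturation = saturation   # load level where the toy system degrades
        self.max_steps = max_steps

    def reset(self):
        self.users, self.steps = 0, 0
        return self.users

    def step(self, action):
        self.users += action           # action = number of virtual users to add
        self.steps += 1
        # Reward grows as the workload nears saturation and penalises
        # unnecessarily large workloads beyond it.
        reward = min(self.users, self.saturation) - 0.5 * max(0, self.users - self.saturation)
        done = self.users >= self.saturation or self.steps >= self.max_steps
        return self.users, reward, done


def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.2):
    """Tabular Q-learning loop that learns which workload increments to apply."""
    q = defaultdict(float)             # (state, action) -> estimated return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration over workload modifications.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # Standard Q-learning update toward the bootstrapped target.
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q


if __name__ == "__main__":
    q = q_learning(ToyWorkloadEnv(), actions=[1, 5, 10])
    print(f"Learned {len(q)} state-action values")
```

In this toy setup the agent learns which load increments reach the stress point of the simulated system in few steps, mirroring the thesis's stated goal of generating effective workloads with smaller sizes and in fewer steps.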
