Locust Load Testing
2025-01-20
In an interview I had claimed, somewhat carelessly, that moving from AWS to OCI was a win not only on cost but also on performance, without ever having actually tested it. After the interview I remembered this and ran a load test with Locust.
What is Locust? It is a Python-based, open-source load testing tool. Test scenarios are written as ordinary Python code, which makes it very easy to pick up.
Installation is simple; I prefer to use Poetry:
poetry add locust
Then write a locustfile.py at an appropriate path.
The script I used for testing is as follows
from locust import HttpUser, task, between, TaskSet
import random


class UserBehavior(TaskSet):
    def on_start(self):
        # Range of post IDs that actually exist on the test server
        self.min_post_id = 56
        self.max_post_id = 59
        self.categories = ["기술", "일상", "리뷰"]  # tech, daily life, reviews

    @task(5)
    def home(self):
        self.client.get('/')

    @task(3)
    def get_post_detail(self):
        post_id = random.randint(self.min_post_id, self.max_post_id)
        self.client.get(f'/post/{post_id}')

    @task(2)
    def view_category(self):
        category = random.choice(self.categories)
        self.client.get(f'/?category={category}')

    @task(2)
    def search_posts(self):
        search_terms = ["파이썬", "테스트", "개발"]  # python, test, development
        term = random.choice(search_terms)
        self.client.get(f'/?search={term}')

    @task(1)
    def view_about(self):
        self.client.get('/about')


class LocustUser(HttpUser):
    host = "https://locust.load-test.com"
    tasks = [UserBehavior]
    wait_time = between(3, 7)
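The @task weights control how often each action is picked: Locust chooses the next task with probability proportional to its weight, so with weights 5/3/2/2/1 the home page gets 5 out of every 13 requests on average. A quick sketch of the resulting traffic mix (the weights are taken from the script above):

```python
from collections import Counter
import random

# Task weights from the locustfile above
weights = {"home": 5, "post_detail": 3, "category": 2, "search": 2, "about": 1}
total = sum(weights.values())  # 13

# Expected share of requests per endpoint
for name, w in weights.items():
    print(f"{name}: {w}/{total} = {w / total:.1%}")

# Simulate 10,000 weighted task picks, the same way Locust selects tasks
random.seed(0)
picks = Counter(random.choices(list(weights), weights=list(weights.values()), k=10_000))
print(picks["home"] / 10_000)  # roughly 5/13 ≈ 0.385
```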

In the configuration window that follows, you set the peak number of concurrent users, the spawn rate (new users started per second), and the target host, in that order. I ran the test for about 15 minutes with a peak of 100 concurrent users and a spawn rate of 10 users per second.
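The same run can also be started from the command line without the web UI, which is handy for repeatable tests. A sketch using Locust's headless mode (the host URL is the one from the script above; the --csv prefix is an arbitrary choice):

```shell
# Run the same 15-minute test headlessly:
# -u: peak concurrent users, -r: users spawned per second
locust -f locustfile.py --headless -u 100 -r 10 --run-time 15m \
       --host https://locust.load-test.com --csv baseline
```

The --csv flag writes the request statistics to CSV files with the given prefix, so results can be compared between runs.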

Average response time: 81 ms for the main page, 62-72 ms for post details, 77 ms overall
Error rate: 0% (not a single failed request)
Throughput: 14.01 requests per second, 12,935 requests served successfully in total
Median response time: mostly around 64 ms
Not bad at all.
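As a quick sanity check, the reported throughput and request total are consistent with a roughly 15-minute run:

```python
total_requests = 12_935
rps = 14.01

# total = rate x duration, so duration = total / rate
duration_s = total_requests / rps
print(f"{duration_s:.0f} s ≈ {duration_s / 60:.1f} min")  # ≈ 923 s ≈ 15.4 min
```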
Next, to test peak load, I ran a second test with 500 concurrent users and a spawn rate of 50 users per second, with the following results:

Overall performance:
Total requests: 24,238
Average response time: 4.6 seconds (4,589 ms), with a sharp jump at the 99th percentile (18 seconds) and a maximum response time of 59 seconds
RPS (requests per second): 47.85
Failure rate: 1.04% (252 requests), all of them ReadTimeout errors
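The failure rate follows directly from the counts reported above:

```python
failed = 252
total = 24_238

# Share of requests that timed out
print(f"{failed / total:.2%}")  # → 1.04%
```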
Timeouts started appearing after about 10 minutes, and even before they did, the average response time had already climbed to a painful 4.6 seconds.
I considered adding a caching layer, serving static files from elsewhere, or optimizing my queries, but then asked myself: will a personal tech blog, especially a junior developer's blog, ever really have 500 simultaneous readers? I decided to deal with more urgent things first and improve this later.
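If the blog ever does need that headroom, the cheapest first step is usually caching rendered pages for a short time, since posts rarely change between requests. A minimal, framework-agnostic sketch (the decorator, function names, and TTL are hypothetical; a real app would more likely use its web framework's built-in per-view cache or a reverse proxy):

```python
import time
from functools import wraps


def ttl_cache(ttl_seconds=30):
    """Cache a function's return value per argument tuple for ttl_seconds."""
    def decorator(func):
        store = {}  # args -> (expires_at, value)

        @wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]  # still fresh: skip the expensive call
            value = func(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator


@ttl_cache(ttl_seconds=30)
def render_post(post_id):
    # Hypothetical stand-in for the expensive DB query + template render
    return f"<html>post {post_id}</html>"
```

Under load, repeated hits on the same post within the TTL would then cost a dictionary lookup instead of a database round trip.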
What I really regret, though, is that I never ran this test on AWS's t2.micro, so an objective comparison is impossible. Judging by raw specs alone, the A1.Flex shape I use now is roughly 6-8x more powerful, so on the t2.micro I suspect response times would have stayed under 1 second only with around 20-30 concurrent users.