Commit
Add example of a bottlenecked server and use that test to make a new graph for the docs.
Lars Holmberg committed Jul 17, 2024
1 parent e33b413 commit 7152dc4
Showing 6 changed files with 40 additions and 6 deletions.
Binary file added docs/images/bottlenecked_server.png
Binary file removed docs/images/number_of_users.png
Binary file removed docs/images/response_times.png
Binary file removed docs/images/total_requests_per_second.png
8 changes: 2 additions & 6 deletions docs/quickstart.rst
Expand Up @@ -42,17 +42,13 @@ The following screenshots show what it might look like when running this test us

| Under the *Charts* tab you'll find things like requests per second (RPS), response times and number of running users:
-.. image:: images/total_requests_per_second.png
-
-.. image:: images/response_times.png
-
-.. image:: images/number_of_users.png
+.. image:: images/bottlenecked_server.png

.. note::

Interpreting performance test results is quite complex (and mostly out of scope for this manual), but if your graphs start looking like this, the target service/system cannot handle the load and you have found a bottleneck.

-When we get to around 9 users, response times start increasing so fast that even though Locust is still spawning more users, the number of requests per second is no longer increasing. The target service is "overloaded" or "saturated".
+When we get to around 20 users, response times start increasing so fast that even though Locust is still spawning more users, the number of requests per second is no longer increasing. The target service is "overloaded" or "saturated".

If your response times are *not* increasing then add even more users until you find the service's breaking point, or celebrate that your service is already performant enough for your expected load.

38 changes: 38 additions & 0 deletions examples/bottlenecked_server.py
@@ -0,0 +1,38 @@
"""
This example uses extensions in Locust's own WebUI to simulate a bottlenecked server and runs a test against itself.
The purpose of this is mainly to generate nice graphs in the UI to teach new users how to interpret load test results.
See https://docs.locust.io/en/stable/quickstart.html#locust-s-web-interface
"""

from locust import HttpUser, events, run_single_user, task

import time
from threading import Semaphore

# Only allow up to 10 concurrent requests. Similar to how a server with 10 threads might behave.
sema = Semaphore(10)


class WebsiteUser(HttpUser):
    host = "http://127.0.0.1:8089"

    @task
    def index(l):
        l.client.get("/slow")


@events.init.add_listener
def locust_init(environment, **kwargs):
    assert environment.web_ui, "you can't run this headless"

    @environment.web_ui.app.route("/slow")
    def my_added_page():
        with sema:  # only 10 requests can hold this lock at the same time
            time.sleep(1)  # pretend each request takes 1 second to execute
        return "Another page"


if __name__ == "__main__":
    run_single_user(WebsiteUser)
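The saturation curve this example produces can be reasoned about with a back-of-the-envelope capacity model (not part of the commit). The sketch below assumes the closed-loop behavior of a Locust test against this server: 10 concurrent slots, 1 second per request, and each simulated user issuing its next request as soon as the previous one returns. The function names (`max_throughput`, `closed_loop`) are hypothetical, introduced here only for illustration.

```python
def max_throughput(concurrency: int, service_time_s: float) -> float:
    """Upper bound on requests/second for a server that can work on
    `concurrency` requests at once, each taking `service_time_s` seconds
    (Little's law: throughput = concurrency / time-in-system)."""
    return concurrency / service_time_s


def closed_loop(users: int, concurrency: int = 10, service_time_s: float = 1.0):
    """Steady-state RPS and per-request response time for a closed-loop test
    where every user sends a new request the moment its last one completes."""
    rps = min(users, max_throughput(concurrency, service_time_s))
    # Above saturation each user spends a full cycle (users / rps seconds)
    # per request: most of it queueing for one of the `concurrency` slots.
    response_time = users / rps if users > concurrency else service_time_s
    return rps, response_time


for users in (5, 10, 20, 40):
    rps, rt = closed_loop(users)
    print(f"{users:>2} users -> {rps:4.1f} req/s, ~{rt:.1f} s per request")
```

Under these assumptions RPS plateaus at 10 once there are more users than server slots, while response time keeps climbing linearly, which is exactly the "bottlenecked" shape the new docs graph is meant to show.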
