Huge DirectMemory usage (Non Heap) and Threads #77
Hi @Pill30

HTTP2 allocates memory on each JMeter thread and creates additional threads to manage the connection pools and the parallel workers running on top of each of those threads (this is what enables asynchronous, concurrent execution and multiplexed connections). Think of each JMeter thread as an instance of a browser: like a browser, it needs multiple threads to process the networking for each asset on a page concurrently. If you launch 400 browsers on that machine (400 JMeter threads), that takes a lot of memory.

To generate load with HTTP2, keep each JMeter thread alive for roughly as long as a browser session would stay open, and use parallel execution with the HTTP2 Async Controller. Launch fewer JMeter threads and "tune" the parallel controller and its concurrency to simulate the parallel execution of an HTTP2-capable browser. If your goal is to simulate 400 concurrent HTTP executions, use fewer threads and more HTTP2 samplers inside an HTTP2 Async Controller.

As you can see in Anton's video, he could handle 1200 rps (on an old, internal alpha version of the plugin). The problem with handling 1200 rps is that, in Anton's case, each request returned up to 20 MB of content, so multiplying that out gives 24 GB/s of memory allocation that Java must handle (yes, per second). That was one of the main causes of Anton's OOM (a large HTTP response on every request): the test allocated memory faster than the JVM could free it.

GC takes time, so you may need to add some "pauses", or at certain moments force a GC, to free memory; in an extreme load test the JVM does not have time to do it on its own and an OOM can happen. In the most recent versions we have incorporated changes that try to avoid OOM problems caused by allocating memory faster than the JVM can release it. You can mitigate this by adding pauses that let the GC act, but it can still happen if you don't press the brake. Ideally, always calculate how much the response of an HTTP request "weighs" in memory and how long it stays alive.

Also, an active listener can cause OOM problems, because JMeter keeps a lot of information in memory so you can consult it later, as in a View Results Tree. The default request buffer limit in the View Results Tree tries to handle that, but those requests also take memory (and the View Results Tree listener allocates and frees a lot of memory under heavy load).

The default heap, plus the fast-access native memory that Jetty uses, means the JVM must be tuned better if you want to run heavy load simulations. Take the advice in this response into consideration and build a test plan better suited to HTTP2 execution. Try my recommendations: use fewer threads and more async controllers; a sketch of that shape follows below. Anton's video shows you how, and also gives some tips on tuning the JVM and the BlazeMeter JVM to run better (the vast majority of the tuning recommendations Anton presents were provided by me).

Feel free to tell us about your experience setting up a test plan oriented towards fewer threads and more use of the HTTP2 Async Controller.
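A minimal sketch of the "fewer threads, more concurrent samplers" test-plan shape described above. The element labels, URLs, and numbers are illustrative, not taken from this thread:

```
Test Plan
└─ Thread Group: 40 threads             (instead of 400)
   └─ HTTP2 Async Controller            (runs its child samplers concurrently)
      ├─ HTTP2 Sampler: /index.html
      ├─ HTTP2 Sampler: /app.js
      ├─ ...
      └─ HTTP2 Sampler: /asset-10.png   (40 threads x 10 concurrent samplers ~ 400 in-flight requests)
```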
Thanks for the response. The issue (if I understand correctly) is that it's using direct memory, which is outside the heap and as such is not subject to GC. The memory never gets released and keeps growing until you get the error "Cannot reserve 8192 bytes of direct buffer memory".
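One way to confirm whether direct (off-heap) buffers are what is growing is standard JVM Native Memory Tracking; this is a suggestion using stock JVM tooling, not something from this thread. `plan.jmx` and `<jmeter-pid>` are placeholders:

```sh
# Start JMeter with Native Memory Tracking enabled (adds some overhead)
JVM_ARGS="-XX:NativeMemoryTracking=summary" jmeter -n -t plan.jmx

# While the test runs, inspect native memory by category;
# direct ByteBuffer allocations show up outside the Java heap
jcmd <jmeter-pid> VM.native_memory summary
```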
Hi @Pill30

By default JMeter ships with a memory configuration adequate for using its UI, but not one suitable for load testing, much less adapted to Jetty's requirements. Jetty's implementation makes extensive use of Java NIO (the Java New Input/Output API), so the JVM configuration must be adapted for better memory usage. For more information about the Jetty I/O architecture, visit the site linked here.

The important part to know is that since Java NIO is used, the maximum direct (off-heap) memory must be correctly configured using the `-XX:MaxDirectMemorySize=size` JVM argument. For more information about the argument, see the `java` command documentation here. It may also be necessary to raise the `-XX:MaxMetaspaceSize=size` argument to a higher value; make the necessary adjustments if your test requires it. More information about that argument here. I also share some fine-tuning tricks here.

By default the plugin is configured with a general-purpose behavior that should be adjusted to the requirements of your load test: https://github.com/Blazemeter/jmeter-http2-plugin?tab=readme-ov-file#properties

An example of the most common JMeter properties and some possible reference values (these are the properties I use as a starting point before fine-tuning):

HTTPSampler.response_timeout=240000

The reasons for each assignment are explained below.

- `HTTPSampler.response_timeout`: maximum waiting time of a request without a defined timeout, in milliseconds. By default there is no timeout, so if the server blocks without providing a response, the connections are never released and neither is the test.
- `httpJettyClient.idleTimeout`: the maximum time, in milliseconds, a connection can be idle. It is always advisable not to keep connections waiting too long.
- `httpJettyClient.maxBufferSize`: maximum size of the downloaded resources, in bytes. The buffer is filled dynamically, but it has a maximum limit that must be assigned.
- `httpJettyClient.byteBufferPoolFactor`: factor used when allocating memory in the HTTP client's buffer pool. The Jetty client uses this factor to determine how buffer allocations grow.
- `httpJettyClient.maxConnectionsPerDestination`: maximum number of connections to open to each destination. Each destination is a particular host, so this limit indicates, for the current thread and that destination, how many connections it will establish by default.
- `httpJettyClient.maxRequestsPerConnection`: maximum number of requests per connection. Many web servers close a connection once a certain number of HTTP requests has been reached. This property applies to both HTTP/1.1 and HTTP2: HTTP/1.1 uses pipelining, HTTP2 uses multiplexing.
- `httpJettyClient.maxConcurrentAsyncInController`: maximum number of concurrent HTTP2 samplers inside an HTTP2 Async Controller. The recommendation is to find a balance between simulation realism and the limits the server allows. Also take `httpJettyClient.maxRequestsPerConnection` into consideration: if the backend limits the number of requests per connection, align with it. A web page usually does not issue more than 100 simultaneous resource requests to the same host, which is why 100 is defined as the maximum limit; this also helps mitigate problems with server-side limits on requests per connection.
- `httpJettyClient.minThreads` and `httpJettyClient.maxThreads`: minimum and maximum number of threads per HTTP client, i.e. the default number of threads Jetty creates in its pool when our HTTP client is instantiated, and the ceiling for that pool.

Try the provided example properties, use JVM arguments like `-XX:MaxDirectMemorySize=size` (see the sketch below), and keep the other points mentioned above in mind. Tell us what you observe.
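For illustration, a `user.properties` block covering the properties above. Only `HTTPSampler.response_timeout=240000` comes from the comment itself; every other value is an assumed starting point to tune against your own test, not a recommendation:

```properties
# user.properties — illustrative starting values, tune per test
HTTPSampler.response_timeout=240000
httpJettyClient.idleTimeout=60000
httpJettyClient.maxBufferSize=2097152
httpJettyClient.byteBufferPoolFactor=4
httpJettyClient.maxConnectionsPerDestination=1
httpJettyClient.maxRequestsPerConnection=100
httpJettyClient.maxConcurrentAsyncInController=100
httpJettyClient.minThreads=6
httpJettyClient.maxThreads=20
```

The JVM arguments can be passed through the `JVM_ARGS` environment variable that JMeter's launcher honors; the sizes here are examples only and must be sized to your machine and test:

```sh
JVM_ARGS="-Xms8g -Xmx8g -XX:MaxDirectMemorySize=4g -XX:MaxMetaspaceSize=512m" \
  jmeter -n -t plan.jmx -l results.jtl
```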
Thanks for your response... Looks like I'll have to do a lot more investigating and testing if we are to persevere. Sample jmx for reference:
Test
Results:
Heap Size (Extended to): ~8.9GB (Note: keeps increasing even under stable load)
Total Java Memory Used: ~28.6GB (Note: keeps increasing even under stable load)
Threads: 5317!!
This occurs with/without Async Controller
Issue discussed by Anton Serputko (serputko) here: https://www.youtube.com/watch?v=SCrnKbeVXUg&t=7500s
Note: The box eventually runs out of memory and you receive the message "Cannot reserve 8192 bytes of direct buffer memory".
So it looks like there might be a memory leak as well.
########################################################################
Note: Repeating the same test with HTTP/1.1 samplers works as expected:
Heap Size (Extended to): ~9.7GB (Remains consistent with stable load)
Total Java Memory Used: ~9.7GB (Remains consistent with stable load)
Threads: 423 (aligns with JMeter threads)