
Nginx creates more connection than poolsize, backlog to redis #265

Open
nikhilrakuten opened this issue May 26, 2023 · 5 comments

@nikhilrakuten

nikhilrakuten commented May 26, 2023

I am using the https://github.com/openresty/lua-resty-redis#connect method together with set_keepalive().
I have set the pool size to 200 and the backlog to 10, yet it still creates 2000 connections to Redis during a load test.
is_connected, err = client:connect(REDIS_SERVER1, REDIS_PORT, { pool_size = 200, backlog = 10 })

ngx-lua : 5.1
workers : 2

What could be the reason for the extra connections from nginx to Redis? How can we control or restrict them?
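One thing worth checking (a sketch of a common cause, not a definitive diagnosis): in lua-resty-redis the connection pool lives per nginx worker, and a connection is only returned to the pool when set_keepalive() is actually called and succeeds; any code path that exits early (query error, timeout) without calling it closes the socket, so the next request opens a fresh one. A minimal pattern that pools on every path, reusing the REDIS_SERVER1/REDIS_PORT names from above (the key and timeout values are assumptions):

```lua
local redis = require "resty.redis"

local function query_redis(key)
    local client = redis:new()
    client:set_timeouts(1000, 1000, 1000)  -- connect/send/read timeouts, ms

    -- pool_size caps the connections (idle + in use) per worker for this
    -- pool; with backlog set, extra connect() calls queue instead of
    -- opening new sockets. Note the limit applies per nginx worker.
    local ok, err = client:connect(REDIS_SERVER1, REDIS_PORT,
                                   { pool_size = 200, backlog = 10 })
    if not ok then
        return nil, "connect: " .. err
    end

    local res, qerr = client:get(key)

    -- Return the connection to the pool even after a query error;
    -- otherwise the socket is closed and the next request opens a new one.
    local kok, kerr = client:set_keepalive(10000, 200)  -- 10 s idle, pool 200
    if not kok then
        ngx.log(ngx.WARN, "set_keepalive failed: ", kerr)
    end

    if qerr then
        return nil, "get: " .. qerr
    end
    return res
end
```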

Used Nginx conf

user              www-data;
worker_processes  2;
worker_rlimit_nofile  10240;

include /etc/nginx/modules-enabled/*.conf;

error_log         /var/log/nginx/error.log;
pid               /var/run/nginx.pid;

events {
  worker_connections  10240;
  multi_accept  on;
  use epoll;
}

http {
  
    server_tokens       off;

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    root        /srv/www/default;

    sendfile    on;
    tcp_nopush  on;
    tcp_nodelay on;

    keepalive_timeout   120;
    keepalive_requests  1000;
    reset_timedout_connection on;

    server_names_hash_bucket_size 128;
    types_hash_max_size 2048;
    types_hash_bucket_size  64;

    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    client_body_timeout   5;
    client_header_timeout 5;
    client_header_buffer_size 256;
    client_max_body_size  16m;

    ##
    # Gzip Settings
    ##

    gzip            on;
    gzip_http_version 1.0;
    gzip_buffers    4 16k;
    gzip_min_length 1024;
    gzip_comp_level 2;
    gzip_proxied    expired no-cache no-store private auth;
    gzip_types      text/plain text/css text/javascript application/javascript application/x-javascript text/xml application/xml application/xml+rss application/json;
    gzip_disable    "MSIE [1-6]\.(?!.*SV1)";

    set_real_ip_from    10.8.0.0/16;
    real_ip_header      X-Forwarded-For;
    real_ip_recursive   on;

    include /etc/nginx/conf.d/*.conf;

    

    # proxy settings
    proxy_buffer_size       16k;
    proxy_buffers           8   16k;
    proxy_connect_timeout   60s;
    proxy_send_timeout      60s;
    proxy_read_timeout      60s;

    # local log
    access_log  /var/log/nginx/$host.access.log ltsv;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/sites-enabled/*;
 }

Steps To Reproduce:

  1. Take a 2-core CPU machine.
  2. Use lua-resty-redis.
  3. Configure pool_size = 200 and backlog = 10.
  4. Run a stress test generating 1000 req/s against nginx.
  5. Check the connected clients in Redis to verify.
  6. Check netstat | grep '6379' to see the open connections.
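For step 6, grouping the sockets by TCP state is more informative than a raw netstat dump: a large pile of TIME-WAIT entries suggests connections are being closed rather than pooled. A hedged one-liner using ss, assuming the default Redis port 6379 as in the steps above:

```shell
# Count connections to Redis (port 6379 assumed) grouped by TCP state;
# ss prints the state in the first column of each line.
ss -tan | awk '$0 ~ /:6379/ {count[$1]++} END {for (s in count) print s, count[s]}' | sort
```

A healthy pooled setup should show a bounded number of ESTAB entries and few TIME-WAIT ones.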
@toredash

10 workers?

@nikhilrakuten
Author

@toredash no, it's 2 workers.
I verified this in nginx.conf and with the top command while the test was running.
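The worker count can also be double-checked from the process list; a hypothetical one-liner (with worker_processes 2 from the config above, it should print 2):

```shell
# Count running nginx worker processes by their process title.
# The [n] bracket trick keeps grep from matching its own command line.
ps -eo args | grep -c '[n]ginx: worker process'
```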

@weestack

weestack commented Jun 10, 2023

I've just created a new rewrite service for Nginx using Redis, which performs 200% faster than native Nginx rewrites, especially with 20k+ rewrite rules. The finishing touch is pooling my connections, which is why I stress tested it at 3,000 requests per second; after 1 minute there were 24,000 TCP connections on localhost...

local redis_host = "127.0.0.1" -- assumed value; redis_host was not defined in the original snippet
local redis_port = 6390
local redis_database = 1
-- connection timeout for redis in ms. don't set this too high!
local redis_connection_timeout = 1000
local redis = require "nginx.redis"
local redis_client = redis:new()

-- 300 ms connect/send/read timeouts
redis_client:set_timeouts(300, 300, 300)

local ok, err = redis_client:connect(redis_host, redis_port, { pool_size = 10, backlog = 10 })
if not ok then
    ngx.log(ngx.DEBUG, "Redis connection error while retrieving rewrite rules: " .. (err or "unknown"))
    return
end

-- select the specified database
redis_client:select(redis_database)

local function pool_redis_connection()
    -- put the connection into the pool of size 10,
    -- with a 5-second (5000 ms) max idle time
    local ok, err = redis_client:set_keepalive(5000, 10)
    if not ok then
        ngx.log(ngx.EMERG, "Redis failed to set keepalive: ", err)
    end
end

Using 10 workers; pool_redis_connection is called before a redirect or at the end of the rewrite_by_lua_file handler.
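One frequent source of surplus connections in a setup like this (an assumption about the surrounding handler, not a diagnosis of the code above): if select() runs unconditionally on every request, or set_keepalive() is skipped on an early exit, each request can end up on a fresh socket. A sketch of the usual per-request idiom, reusing the port, database, and pool values from the snippet above, where select() only runs on brand-new connections:

```lua
local redis = require "resty.redis"

local client = redis:new()
client:set_timeouts(300, 300, 300)

local ok, err = client:connect("127.0.0.1", 6390, { pool_size = 10, backlog = 10 })
if not ok then
    ngx.log(ngx.ERR, "redis connect failed: ", err)
    return
end

-- A pooled connection keeps its previously selected database across
-- requests, so only SELECT on a brand-new connection.
if client:get_reused_times() == 0 then
    local sok, serr = client:select(1)
    if not sok then
        ngx.log(ngx.ERR, "redis select failed: ", serr)
    end
end

-- ... perform the rewrite lookup here ...

-- Return the socket to the pool: 5000 ms max idle time, pool size 10.
local kok, kerr = client:set_keepalive(5000, 10)
if not kok then
    ngx.log(ngx.ERR, "set_keepalive failed: ", kerr)
end
```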

@zhuizhuhaomeng
Contributor

@nikhilrakuten we need an nginx.conf that can reproduce this issue.

@nikhilrakuten
Author

@zhuizhuhaomeng the nginx.conf file has been updated above.
