[Solved - 4 Solutions] How to increase the maximum number of TCP/IP connections in Linux - Linux Tutorial
Problem:
How to increase the maximum number of TCP/IP connections in Linux?
Solution 1:
The maximum number of connections is impacted by certain limits on both the client and server sides.
Client side: increase the ephemeral port range, and decrease the tcp_fin_timeout.
To find out the default values:
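For example (a minimal sketch; these are the standard sysctl keys, but the output will vary by distribution):
    sysctl net.ipv4.ip_local_port_range
    sysctl net.ipv4.tcp_fin_timeout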
- The ephemeral port range is defined as the maximum number of outbound sockets a host can create from a particular IP address.
- The fin_timeout defines the minimum time these sockets will stay in the TIME_WAIT state. Usual system defaults are:
- net.ipv4.ip_local_port_range = 32768 61000
- net.ipv4.tcp_fin_timeout = 60
- Basically, the system cannot consistently guarantee more than (61000 - 32768) / 60 = 470 sockets per second.
- You can increase availability by widening the port range and by decreasing the fin_timeout. Suppose we do both; we should see over 1500 outbound connections per second, more readily.
To change the values:
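A sketch with illustrative values (tune them to your workload; add the same keys to /etc/sysctl.conf to persist across reboots):
    sysctl -w net.ipv4.ip_local_port_range="15000 61000"
    sysctl -w net.ipv4.tcp_fin_timeout=30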
The above should not be interpreted as the factors impacting the system's capability for making outbound connections per second; rather, these factors affect the system's ability to handle concurrent connections in a sustainable manner over long periods of "activity."
Default sysctl values on a typical Linux box leave tcp_tw_recycle and tcp_tw_reuse disabled. These defaults don't allow a connection from a "used" socket (in wait state) and force the sockets to last the complete time_wait cycle.
- Setting them to 1 allows fast cycling of sockets in the time_wait state and re-using them; see the sketch after this list.
- Before making this change, make sure it does not conflict with the protocols used by the application that needs these sockets.
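A minimal sketch of that change (note that tcp_tw_recycle was removed in Linux 4.12, so on recent kernels only tcp_tw_reuse is available):
    # allow reuse/fast recycling of TIME_WAIT sockets (check protocol implications first)
    sysctl -w net.ipv4.tcp_tw_reuse=1
    sysctl -w net.ipv4.tcp_tw_recycle=1   # absent on kernels >= 4.12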
- On the Server Side: The net.core.somaxconn value has an important role. It limits the maximum number of requests queued to a listen socket.
- If you are sure of your server application's capability, bump it up from the default of 128 to something between 128 and 1024.
- Now you can take advantage of this increase by modifying the listen backlog variable in your application's listen call, to an equal or higher integer.
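For example (1024 is an illustrative value from the range suggested above):
    # cap on the accept queue for listening sockets; the application's listen() backlog should match or exceed it
    sysctl -w net.core.somaxconn=1024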
The txqueuelen parameter of your ethernet cards also has a role to play. The default value is 1000, so bump it up to 5000 or even more if the system can handle it.
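A sketch, assuming an interface named eth0 (substitute your own interface name):
    # raise the NIC transmit queue length
    ip link set dev eth0 txqueuelen 5000
    # or with the legacy tool:
    ifconfig eth0 txqueuelen 5000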
Similarly bump up the values for net.core.netdev_max_backlog and net.ipv4.tcp_max_syn_backlog. Their default values are 1000 and 1024 respectively.
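For example (illustrative values; tune to your traffic):
    sysctl -w net.core.netdev_max_backlog=2000
    sysctl -w net.ipv4.tcp_max_syn_backlog=2048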
Now remember to start both your client- and server-side applications from a shell with increased FD ulimits.
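For example (the number is illustrative and must stay within the hard limit configured in /etc/security/limits.conf):
    # check, then raise, the per-process file-descriptor limit before launching the apps
    ulimit -n
    ulimit -n 65535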
Solution 2:
There are a couple of variables that set the max number of connections. You are probably running out of file descriptors first; check ulimit -n. After that, there are settings in /proc, but those default to the tens of thousands. A single TCP connection ought to be able to use all of the bandwidth between two parties; if it isn't, work through the checks below (a command sketch follows the list):
- Check that your TCP window setting is large enough. Linux defaults are good for everything except a really fast inet link (hundreds of Mbps) or fast satellite links; also find your bandwidth*delay product.
- Check for packet loss using ping with large packets (ping -s 1472 ...).
- Check for rate limiting. On Linux, this is configured with tc
- Confirm that the bandwidth you think exists actually exists, using e.g. iperf.
- Confirm the protocol is sane.
- Check how many connections you are actually using (try netstat or lsof). If that number is substantial:
- You may have a lot of bandwidth, e.g., 100 Mbps+. In this case, you may actually want to raise ulimit -n. Still, ~1000 connections (the default on my system) is quite a few.
- You may have network problems slowing down your connections (e.g., packet loss).
- Something else may be slowing you down, e.g., IO bandwidth, especially seeking. Have you checked iostat -x?
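A rough command sketch for the checks above (hosts, interface names, and values are placeholders):
    # packet loss with near-MTU-sized packets
    ping -s 1472 example.com
    # rate limiting / traffic-control rules on the interface
    tc qdisc show dev eth0
    # measured bandwidth (run "iperf -s" on the remote end first)
    iperf -c example.com
    # connections actually in use
    ss -tan | grep -c ESTAB        # or: netstat -tan | grep -c ESTABLISHED
    # disk I/O as a bottleneck
    iostat -x 1 5
    # bandwidth*delay product example: 100 Mbit/s * 0.05 s RTT / 8 bits = 625 KB of TCP window needed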
Additionally, if you are using a consumer-grade NAT router (Linksys, Netgear, D-Link, etc.), beware that you may exceed its abilities with thousands of connections.
Solution 3:
One way to determine your OS connection-tracking limit is by catting nf_conntrack_max. For example: cat /proc/sys/net/netfilter/nf_conntrack_max
Use the following script to count the number of TCP connections to a given range of TCP ports (by default 1-65535).
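A minimal sketch of such a script, assuming ss from iproute2 is available and counting established connections by local port (swap column 4 for column 5 to count by peer port instead):
    #!/usr/bin/env bash
    # Count established TCP connections whose local port falls in a given range.
    # Usage: <script> [first_port] [last_port]   (defaults: 1 65535)
    first=${1:-1}
    last=${2:-65535}
    ss -tan | awk -v first="$first" -v last="$last" '
      $1 == "ESTAB" {
        n = split($4, a, ":")        # local address:port is column 4
        port = a[n] + 0
        if (port >= first && port <= last) count++
      }
      END { print count + 0 }
    '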
Solution 4:
From the server side:
- Check that the load balancer works correctly.
- Turn slow TCP timeouts into fast, immediate 503 responses; a correctly working load balancer will then pick a working resource to serve.
If you are using a Node server, you can use the toobusy module from npm to send those fast 503 responses when the server is overloaded.
From the client side:
- Group calls in batches to reduce the traffic and the total number of requests between client and server.
- Build a cache mid-layer to handle unnecessary duplicate requests.