Linux Tune Network Stack (Buffers Size) To Increase Networking Performance

Rumi, December 23, 2015

By default, the Linux network stack is not configured for high-speed, large file transfers across WAN links; the defaults are kept small to save memory. You can tune the network stack by increasing the network buffer sizes so that servers on high-speed networks can handle more packets in flight.

The default maximum Linux TCP buffer sizes are far too small for this kind of transfer. Overall TCP memory limits are calculated automatically based on system memory; you can find the actual values (expressed in memory pages, not bytes) by typing the following command:

$ cat /proc/sys/net/ipv4/tcp_mem

The default and maximum size of the receive socket buffer, in bytes:

$ cat /proc/sys/net/core/rmem_default
$ cat /proc/sys/net/core/rmem_max

The default and maximum size of the send socket buffer, in bytes:

$ cat /proc/sys/net/core/wmem_default
$ cat /proc/sys/net/core/wmem_max

The maximum amount of ancillary (option) buffer memory allowed per socket:

$ cat /proc/sys/net/core/optmem_max
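
All of these keys can also be read in one shot with sysctl instead of cat; a minimal sketch covering the same values listed above:

$ sysctl net.ipv4.tcp_mem net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max net.core.optmem_max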

Tune values

Set the maximum OS send buffer size (wmem) and receive buffer size (rmem) to 12 MB for all protocols. In other words, cap the amount of memory that can be allocated for each socket while it is transferring files:

WARNING! The default value of rmem_max and wmem_max is about 128 KB in most Linux distributions, which may be enough for a low-latency, general-purpose network environment or for applications such as DNS or web servers. However, if the latency is large, the default size is likely too small. Note that the following settings will increase memory usage on your server.

# echo 'net.core.wmem_max=12582912' >> /etc/sysctl.conf
# echo 'net.core.rmem_max=12582912' >> /etc/sysctl.conf
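
If you want to try the new limits before making them persistent, the same keys can be set at runtime with sysctl -w (runtime values set this way do not survive a reboot):

# sysctl -w net.core.wmem_max=12582912
# sysctl -w net.core.rmem_max=12582912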

You also need to set the minimum, default (initial), and maximum TCP buffer sizes, in bytes:

# echo 'net.ipv4.tcp_rmem = 10240 87380 12582912' >> /etc/sysctl.conf
# echo 'net.ipv4.tcp_wmem = 10240 87380 12582912' >> /etc/sysctl.conf
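
Appending to /etc/sysctl.conf does not change the running values until they are reloaded, so the current (pre-change) triplets can still be read back from /proc and noted down in case you need to revert:

$ cat /proc/sys/net/ipv4/tcp_rmem
$ cat /proc/sys/net/ipv4/tcp_wmem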

Turn on TCP window scaling, which allows the transfer window to grow beyond the 64 KB limit:

# echo 'net.ipv4.tcp_window_scaling = 1' >> /etc/sysctl.conf

Enable timestamps as defined in RFC1323:

# echo 'net.ipv4.tcp_timestamps = 1' >> /etc/sysctl.conf

Enable selective acknowledgments (SACK):

# echo 'net.ipv4.tcp_sack = 1' >> /etc/sysctl.conf

By default, TCP saves various connection metrics in the route cache when a connection closes, so that connections established in the near future can use them to set initial conditions. Usually this increases overall performance, but it may sometimes cause performance degradation. With tcp_no_metrics_save set, TCP will not cache metrics from closing connections:

# echo 'net.ipv4.tcp_no_metrics_save = 1' >> /etc/sysctl.conf

Set the maximum number of packets queued on the input side when the interface receives packets faster than the kernel can process them:

# echo 'net.core.netdev_max_backlog = 5000' >> /etc/sysctl.conf
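
Taken together, the block appended to /etc/sysctl.conf by the commands above should look roughly like this (on distributions that support it, the same lines can instead live in a drop-in file under /etc/sysctl.d/ and be loaded with sysctl --system):

net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 1
net.core.netdev_max_backlog = 5000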

Now reload the changes:

# sysctl -p
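
To verify that the reloaded values are actually in effect, query a few of them back:

# sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem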

Use tcpdump on eth0 to confirm that the new TCP options (window scaling, timestamps, SACK) are being negotiated:

# tcpdump -ni eth0
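
Window scaling, timestamps, and SACK are negotiated during the TCP handshake, so they appear as wscale, TS, and sackOK options on SYN packets. A narrower capture that shows only handshake packets (eth0 as above, and assuming new connections are being opened while you watch):

# tcpdump -ni eth0 'tcp[tcpflags] & (tcp-syn) != 0'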

Recommended reading:
Refer to the kernel documentation in Documentation/networking/ip-sysctl.txt for more information.
man page: sysctl(8)

Src: http://www.cyberciti.biz/faq/linux-tcp-tuning/
