TCP/IP performance tuning

I was getting about 70 MB/s of network I/O throughput from my TCP/IP-based RPC program on a 1 Gb/s network. When I ran the same program on a 10 Gb/s network, I expected at least a 7-8x performance gain, but to my surprise the gain was only about 10%.

I was applying the following settings to my TCP client and server sockets:

// Set send buffer size
setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &sendbuff, sizeof(sendbuff));
// Set receive buffer size
setsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &recvbuff, sizeof(recvbuff));
// Disable Nagle's algorithm
int flag = 1;
setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
// Enable TCP keepalive (reusing the same flag; redeclaring it would not compile)
setsockopt(sockfd, SOL_SOCKET, SO_KEEPALIVE, &flag, sizeof(flag));

I started experimenting by turning the socket settings on and off, and I achieved the 7-8x performance gain when I stopped setting the socket send and receive buffer sizes explicitly.

When I started exploring further, I came across the concept of TCP autotuning. More about this can be found here:

Important notes from the above page:

TCP Autotuning automatically adjusts socket buffer sizes as needed to optimally balance TCP performance and memory usage. Autotuning is based on an experimental implementation for NetBSD by Jeff Semke, and further developed by Wu Feng’s DRS and the Web100 Project. Autotuning is now enabled by default in current Linux releases (after 2.6.6 and 2.4.16). It has also been announced for Windows Vista and Longhorn. In the future, we hope to see all TCP implementations support autotuning with appropriate defaults for other options, making this website largely obsolete.

NB: Manually adjusting socket buffer sizes with setsockopt() disables autotuning. Applications that are optimized for other operating systems may implicitly defeat Linux autotuning.

Do not use setsockopt() to set send/receive buffer sizes unless you have measured buffer sizes for your application that outperform TCP autotuning. In general, it is better to rely on TCP autotuning.