How to find the socket buffer size on Linux
What's the default socket buffer size on Linux? Is there a command to see it?
Linux Solutions
Solution 1 — Linux
If you want to see your buffer sizes in a terminal, take a look at /proc/sys/net/ipv4/tcp_rmem (receive) and /proc/sys/net/ipv4/tcp_wmem (send).
They contain three numbers, which are the minimum, default and maximum memory size values (in bytes), respectively.
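If you'd rather read these values from a program, here is a minimal C sketch that parses the three columns. The helper name read_tcp_mem is mine, not from the original answer; it assumes the standard three-number layout documented in tcp(7):

```c
#include <stdio.h>

/* Parse /proc/sys/net/ipv4/tcp_rmem (or tcp_wmem): three numbers,
 * min / default / max, in bytes.
 * Returns 0 on success, -1 on failure. */
int read_tcp_mem(const char *path, long *min, long *def, long *max)
{
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    int n = fscanf(f, "%ld %ld %ld", min, def, max);
    fclose(f);
    return n == 3 ? 0 : -1;
}
```

Usage: `read_tcp_mem("/proc/sys/net/ipv4/tcp_rmem", &min, &def, &max)` should leave the minimum, default and maximum receive buffer sizes in the three variables.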
Solution 2 — Linux
To get the buffer size from a C/C++ program, the flow is the following:

int n;
socklen_t m = sizeof(n);
int fdsocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP); /* example */
getsockopt(fdsocket, SOL_SOCKET, SO_RCVBUF, (void *)&n, &m);
/* the variable n now holds the receive buffer size */
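A slightly fuller version of the same idea, querying both directions with error checking (a sketch; the helper name get_buffer_sizes is mine; note that per socket(7), Linux reports a value that includes its bookkeeping overhead):

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

/* Query the current receive and send buffer sizes of a socket
 * via SO_RCVBUF / SO_SNDBUF.  Returns 0 on success, -1 on error. */
int get_buffer_sizes(int fd, int *rcvbuf, int *sndbuf)
{
    socklen_t len = sizeof(int);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, rcvbuf, &len) < 0)
        return -1;
    len = sizeof(int);
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, sndbuf, &len) < 0)
        return -1;
    return 0;
}
```

Usage: open any socket, e.g. `int fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);`, then call `get_buffer_sizes(fd, &r, &s)`.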
Solution 3 — Linux
Whilst, as has been pointed out, it is possible to see the current default socket buffer sizes in /proc, it is also possible to check them using sysctl. (Note: whilst the name includes ipv4, these sizes also apply to ipv6 sockets; the ipv6 tcp_v6_init_sock() code just calls the ipv4 tcp_init_sock() function.)
sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_wmem
However, the default socket buffer sizes are just set when the sock is initialised; the kernel then sizes them dynamically (unless they have been fixed using setsockopt() with SO_SNDBUF/SO_RCVBUF). The actual size of the buffers for currently open sockets may be inspected using the ss command (part of the iproute/iproute2 package), which can also provide a bunch more information on sockets, like congestion control parameters etc. E.g., to list the currently open TCP (t option) sockets and associated memory (m) information:

ss -tm
Here’s some example output:
State   Recv-Q   Send-Q   Local Address:Port    Peer Address:Port
ESTAB   0        0        192.168.56.102:ssh    192.168.56.1:56328
        skmem:(r0,rb369280,t0,tb87040,f0,w0,o0,bl0,d0)
Here’s a brief explanation of skmem (socket memory) — for more info you’ll need to look at the kernel sources (i.e. sock.h):
r:sk_rmem_alloc
rb:sk_rcvbuf        # current receive buffer size
t:sk_wmem_alloc
tb:sk_sndbuf        # current transmit buffer size
f:sk_forward_alloc
w:sk_wmem_queued    # persistent transmit queue size
o:sk_omem_alloc
bl:sk_backlog
d:sk_drops
Solution 4 — Linux
The atomic size is 4096 bytes (PIPE_BUF) and the maximum size is 65536 bytes. sendfile uses 16 pipe buffers, each 4096 bytes in size. To query how many bytes are currently waiting to be read: ioctl(fd, FIONREAD, &buff_size).
How to tell how much memory TCP buffers are actually using?
I realize one possible answer is "trust the kernel to do this for you," but I want to rule out TCP buffers as a source of memory pressure.
Investigation: Question 1
This page writes, "the 'buffers' memory is memory used by Linux to buffer network and disk connections." This implies that they're not part of the RES metric in top.
To find the actual memory usage, /proc/net/sockstat is the most promising:
sockets: used 3640
TCP: inuse 48 orphan 49 tw 63 alloc 2620 mem 248
UDP: inuse 6 mem 10
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
This is the best explanation I could find, but mem isn't addressed there. It is addressed here, but 248 * 4 KiB ~= 1 MB, or about 1/1000 of the system-wide maximum, which seems like an absurdly low number for a server with hundreds of persistent connections and sustained 0.2-0.3 Mbit/sec network traffic.
Of course, the system memory limits themselves are:
$ grep . /proc/sys/net/ipv4/tcp*mem
/proc/sys/net/ipv4/tcp_mem:140631   187510   281262
/proc/sys/net/ipv4/tcp_rmem:4096    87380    6291456
/proc/sys/net/ipv4/tcp_wmem:4096    16384    4194304
tcp_mem's third parameter is the system-wide maximum number of 4 KiB pages dedicated to TCP buffers; if the total buffer size ever surpasses this value, the kernel will start dropping packets. For non-exotic workloads there's no need to tune this value.
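As a sanity check on the units, tcp_mem is counted in pages rather than bytes, so the example limit above converts like this (a sketch assuming the usual 4096-byte page; the helper name is mine):

```c
#include <stdio.h>

/* tcp_mem limits are counted in pages, not bytes.
 * Convert a page count to bytes, assuming a 4 KiB page size. */
long tcp_mem_pages_to_bytes(long pages)
{
    return pages * 4096;
}
```

With the values shown above, `tcp_mem_pages_to_bytes(281262)` is 1152049152 bytes, i.e. roughly 1.1 GB of system-wide TCP buffer memory before the kernel starts dropping packets.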
Next up is /proc/meminfo , and its mysterious Buffers and Cached items. I looked at several sources but couldn’t find any that claimed it accounted for TCP buffers.
...
MemAvailable:    8298852 kB
Buffers:          192440 kB
Cached:          2094680 kB
SwapCached:        34560 kB
...
Investigation: Questions 2-3
To inspect TCP buffer sizes at the process level we have quite a few options, but none of them seems to report the actual allocated memory, as opposed to the current queue size or the maximum.
State   Recv-Q   Send-Q
ESTAB   0        0
        ...
        skmem:(r0,rb1062000,t0,tb2626560,f0,w0,o0,bl0)
        ...
        rcv_space:43690
- Recv-Q and Send-Q, the current buffer usage
- r and t, which are explained in this excellent post, but it's unclear how they're different from Recv-Q and Send-Q
- Something called rb, which looks suspiciously like some sort of maximum buffer size, but for which I couldn't find any documentation
- rcv_space, which this page claims isn't the actual buffer size; for that you'd need to call getsockopt
This answer suggests lsof, but its SIZE/OFF column seems to be reporting the same buffer usage as ss:
COMMAND   PID   TID  USER     FD   TYPE  DEVICE   SIZE/OFF  NODE  NAME
sslocal   4032       michael  82u  IPv4  1733921  0t0       TCP   localhost:socks->localhost:59594 (ESTABLISHED)
And then these answers suggest that lsof can't return the actual buffer size. One of them provides a kernel module that should do the trick, but it only seems to work on sockets whose buffer sizes have been fixed with setsockopt(); if not, SO_SNDBUF and SO_RCVBUF aren't included.
What values may Linux use for the default unix socket buffer size?
Linux documents the default buffer size for TCP, but not for AF_UNIX ("local") sockets. The value can be read (or written) at runtime:
cat /proc/sys/net/core/[rw]mem_default
Is this value always set the same across different Linux kernels, or is there a range of possible values it could be?
1 Answer 1
The built-in default is not configurable, and it differs between 32-bit and 64-bit Linux. The value appears to be chosen so as to allow 256 packets of 256 bytes each, accounting for the different per-packet overhead (structs with 32-bit vs. 64-bit pointers and integers).
On 64-bit Linux 4.14.18: 212992 bytes
On 32-bit Linux 4.4.92: 163840 bytes
The default buffer sizes are the same for both the read and write buffers. The per-packet overhead is a combination of struct sk_buff and struct skb_shared_info , so it depends on the exact size of these structures (rounded up slightly for alignment). E.g. in the 64-bit kernel above, the overhead is 576 bytes per packet.
/* Take into consideration the size of the struct sk_buff overhead in the
 * determination of these values, since that is non-constant across
 * platforms.  This makes socket queueing behavior and performance
 * not depend upon such differences.
 */
#define _SK_MEM_PACKETS		256
#define _SK_MEM_OVERHEAD	SKB_TRUESIZE(256)
#define SK_WMEM_MAX		(_SK_MEM_OVERHEAD * _SK_MEM_PACKETS)
#define SK_RMEM_MAX		(_SK_MEM_OVERHEAD * _SK_MEM_PACKETS)

/* Run time adjustable parameters. */
__u32 sysctl_wmem_max __read_mostly = SK_WMEM_MAX;
EXPORT_SYMBOL(sysctl_wmem_max);
__u32 sysctl_rmem_max __read_mostly = SK_RMEM_MAX;
EXPORT_SYMBOL(sysctl_rmem_max);
__u32 sysctl_wmem_default __read_mostly = SK_WMEM_MAX;
__u32 sysctl_rmem_default __read_mostly = SK_RMEM_MAX;
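Putting the numbers together: with the 576-byte per-packet overhead quoted above for the 64-bit kernel, SKB_TRUESIZE(256) works out to 832 bytes, and 256 such packets yield exactly the 212992-byte default. A sketch of that arithmetic (the constant names mirror, but are not, the kernel macros):

```c
/* Reconstruct the 64-bit default from the macros above:
 * 256 packets of 256 bytes each, plus 576 bytes of per-packet
 * overhead (sk_buff + skb_shared_info on this 64-bit kernel). */
enum {
    SK_MEM_PACKETS  = 256,
    PKT_DATA        = 256,
    PKT_OVERHEAD    = 576,
    SK_MEM_OVERHEAD = PKT_DATA + PKT_OVERHEAD,        /* "truesize": 832 */
    DEFAULT_BUFSIZE = SK_MEM_OVERHEAD * SK_MEM_PACKETS /* 212992 */
};
```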
Interestingly, if you set a non-default socket buffer size, Linux doubles it to provide for the overheads. This means that if you send packets smaller than the per-packet overhead (e.g. smaller than the 576 bytes above), you won't be able to fit as many bytes of user data in the buffer as you had specified for its size.
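Both behaviours can be observed directly: a freshly created AF_UNIX socket reports the wmem_default value, and after setsockopt() the kernel reports back double the requested size (the doubling is documented in socket(7)). A sketch, with helper names of my own choosing:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Return the send buffer size Linux reports for a socket, or -1. */
int sndbuf_of(int fd)
{
    int val;
    socklen_t len = sizeof(val);
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &val, &len) < 0)
        return -1;
    return val;
}

/* Request a send buffer size and return what the kernel reports back;
 * per socket(7), Linux doubles the requested value to allow for
 * bookkeeping overhead. */
int set_and_read_sndbuf(int fd, int requested)
{
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                   &requested, sizeof(requested)) < 0)
        return -1;
    return sndbuf_of(fd);
}
```

On the 64-bit 4.14 kernel above, one would expect sndbuf_of() on a new AF_UNIX socket to report 212992, and set_and_read_sndbuf(fd, 8192) to report 16384.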
There is also /proc/sys/net/core/rmem_default for receive and /proc/sys/net/core/wmem_default for send, as referenced in man7.org/linux/man-pages/man7/socket.7.html.
Is it safe to call socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) instead of the SOCK_DGRAM example above? I'm using this C call in my Swift TCP framework to get the buffer size in order to reduce recv calls.
Note: SO_RCVBUF only gives the read buffer size; use SO_SNDBUF to get the size of the write buffer. On Linux, you can also use the ioctls SIOCINQ and SIOCOUTQ to get the currently used portion of the read and write buffers, respectively.
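A sketch of the read side of that (SIOCINQ is the Linux name for FIONREAD on sockets; the helper name bytes_readable is mine):

```c
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Return the number of bytes currently queued for reading on a
 * socket, using FIONREAD (SIOCINQ on Linux), or -1 on error. */
int bytes_readable(int fd)
{
    int n;
    if (ioctl(fd, FIONREAD, &n) < 0)
        return -1;
    return n;
}
```

For example, after writing 100 bytes into one end of a socketpair(), bytes_readable() on the other end should report 100, i.e. the currently used portion of the receive buffer rather than its capacity.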
What is the max size of AF_UNIX datagram message in Linux?
Currently I'm hitting a hard limit of 130688 bytes. If I try to send anything larger in one message, I get an ENOBUFS error.

I have checked the net.core.rmem_default, net.core.wmem_default, net.core.rmem_max, net.core.wmem_max, and net.unix.max_dgram_qlen sysctl options and increased them all, but they have no effect, because these deal with the total buffer size, not the message size. I have also set the SO_SNDBUF and SO_RCVBUF socket options, but this has the same issue as above. The default socket buffer sizes are set based on the default socket options anyway.

I've looked at the kernel source where ENOBUFS is returned in the socket stack, but it wasn't clear to me where it was coming from. The only places that seem to return this error have to do with not being able to allocate memory.

Is the max size actually 130688? If not, can this be changed without recompiling the kernel?
That is a huge datagram. In my opinion, by the time that you have a datagram that large, you may as well have used TCP.
Yeah, that doesn't help. As I stated in the post, it won't let you send a message over 130688 bytes regardless of your wmem settings. I have them over 32 MB and have tried many combinations below that.
Just to add to that: it's a misconception that the send and receive buffers are for single messages. The buffer is the total kernel buffer for all messages. The wmem and qlen sysctl options actually affect how and when send blocks: as the send buffer fills up (assuming no one receives), send will block when the total bytes in the buffer would go beyond the buffer size, or when the total message count would go beyond the qlen.
I get your point (and the question) better now. Redacted my confusing comment and upvoted; I'll explore as time permits, as I'm interested in the answer as well.
I agree that it's possible this is the hard limit. I'm just trying to find some proof, and maybe some reasoning behind it.