Setting buffer sizes on Linux

How do I increase the UDP receive buffer size?

I have a computer that communicates with a camera over UDP, connected by a direct physical cable (no router or switch). The camera acts as the UDP server and the computer as the client. Once in a while my Java app hangs while an image is being transferred line by line: the software waits for a line of the image that is never received, and I believe this could be caused by a receive buffer overflow. I've tried to increase the maximum receive buffer size (persisting the setting in /etc/sysctl.conf):

sysctl -w net.core.rmem_max=1000000 

My program requests a buffer of 7000000 bytes but at runtime reports that it was granted only 212992 bytes. When I ask the OS for the min, default, and maximum sizes:

$ sysctl -a | grep udp_mem
net.ipv4.udp_mem = 185535 247780 371670 

Please edit your question to include the UDP error counters from netstat -su; that output will show how many UDP datagrams were dropped due to receive buffer overflow.

2 Answers

sysctl -w net.core.rmem_max=8388608 

This sets the max OS receive buffer size for all types of connections.

Resolving Slow UDP Traffic

If your server does not seem to be able to receive UDP traffic as fast as it can receive TCP traffic, it could be because Linux, by default, does not set the network stack buffers as large as they need to be to support high UDP transfer rates. One way to alleviate this problem is to allow more memory to be used by the IP stack to store incoming data. For instance, use the commands:

sysctl -w net.core.rmem_max=262143 
sysctl -w net.core.rmem_default=262143 

to increase the maximum and default read buffer sizes to 262143 bytes (one byte short of 256 KB) from the defaults of max=131071 (one short of 128 KB) and default=65535 (one short of 64 KB). These variables increase the amount of memory used by the network stack for receives, and can be raised significantly further if your application needs it.
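Settings made with sysctl -w are lost at reboot. To make them persist, add the same keys to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and reload with sysctl -p; the values below are the ones from this answer, not general recommendations:

```ini
# /etc/sysctl.conf -- applied at boot, or on demand via: sysctl -p
net.core.rmem_max = 262143
net.core.rmem_default = 262143
```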


Default buffer size for a file on Linux

The documentation states that the default value for buffering is: "If omitted, the system default is used." I am currently on Red Hat Linux 6, but I am not able to figure out what buffering the system default actually is. Can anyone guide me on how to determine it?

3 Answers

Since you linked to the 2.7 docs, I’m assuming you’re using 2.7. (In Python 3.x, this all gets a lot simpler, because a lot more of the buffering is exposed at the Python level.)

All open actually does (on POSIX systems) is call fopen, and then, if you've passed anything for buffering, setvbuf. Since you're not passing anything, you just end up with the default buffer from fopen, which is up to your C standard library. (See the source for details. With no buffering, it passes -1 to PyFile_SetBufSize, which does nothing unless bufsize >= 0.)


If you read the glibc setvbuf manpage, it explains that if you never call any of the buffering functions:

Normally all files are block buffered. When the first I/O operation occurs on a file, malloc(3) is called, and a buffer is obtained.

Note that it doesn't say what size buffer is obtained. This is intentional; it means the implementation can be smart and choose different buffer sizes for different cases. (There is a BUFSIZ constant, but that's only used when you call legacy functions like setbuf; it's not guaranteed to be used in any other case.)

So, what does happen? Well, if you look at the glibc source, ultimately it calls the macro _IO_DOALLOCATE, which can be hooked (or overridden, because glibc unifies C++ streambuf and C stdio buffering), but ultimately it allocates a buffer of _IO_BUFSIZE, an alias for the platform-specific macro _G_BUFSIZE, which is 8192.

Of course you probably want to trace down the macros on your own system rather than trust the generic source.

You may wonder why there is no good documented way to get this information. Presumably it's because you're not supposed to care. If you need a specific buffer size, you set one manually; if you trust that the system knows best, just trust it. Unless you're actually working on the kernel or libc, who cares? In theory, this also leaves open the possibility that the system could do something smart here, like picking a bufsize based on the block size of the file's filesystem, or even based on running stats data, although it doesn't look like Linux/glibc, FreeBSD, or OS X do anything other than use a constant. Most likely that's because it really doesn't matter for most applications. (You might want to test that yourself: use explicit buffer sizes ranging from 1 KB to 2 MB on some buffered-I/O-bound script and see what the performance differences are.)


Set pipe buffer size

I have a C++ multithreaded application which uses POSIX pipes for efficient inter-thread communication (so I don't have to get crazy with deadlocks). I've made the write end non-blocking, so the writer gets an error if there is not enough space in the buffer to complete the write.

if ((pipe(pipe_des)) == -1)
    throw PipeException();

// set the write operation non-blocking
int flags = fcntl(pipe_des[1], F_GETFL, 0);
assert(flags != -1);
fcntl(pipe_des[1], F_SETFL, flags | O_NONBLOCK);

Now I'd like to set the pipe buffer size to a custom value (one word, in this specific case). I've googled for it but was not able to find anything useful. Is there a way (possibly POSIX-compliant) to do it? Thanks, Lorenzo. PS: I'm on Linux, if that's useful.

This is a totally inappropriate use of assert(), unless your program only runs on platforms for which fcntl() never has an error.

I think you should just learn how to use synchronization primitives. Using a pipe will increase the overhead by about 100 times, and it seems like it can’t achieve what you want anyway.


I know how to use sync primitives 🙂 Actually I also have a version using sync primitives. Looking at the test results, the version with pipes is at least as fast as the sync one (in some cases pipes are faster).

Instead of pipes, you can use a Unix socketpair. setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size)) is the call you want to set the send buffer size.

3 Answers

Since you mentioned you are on Linux and may not mind non-portability, you may be interested in the file descriptor manipulator F_SETPIPE_SZ, available since Linux 2.6.35.

int pipe_sz = fcntl(pipe_des[1], F_SETPIPE_SZ, sizeof(size_t)); 

You'll find that pipe_sz == getpagesize() after that call, since the buffer cannot be made smaller than the system page size. See fcntl(2).

I googled «linux pipe buffer size» and got this as the top link. Basically, the limit is 64Kb and is hard coded.

Edit The link is dead and it was probably wrong anyway. The Linux pipe(7) man page says this:

A pipe has a limited capacity. If the pipe is full, then a write(2) will block or fail, depending on whether the O_NONBLOCK flag is set (see below). Different implementations have different limits for the pipe capacity. Applications should not rely on a particular capacity: an application should be designed so that a reading process consumes data as soon as it is available, so that a writing process does not remain blocked.

In Linux versions before 2.6.11, the capacity of a pipe was the same as the system page size (e.g., 4096 bytes on i386). Since Linux 2.6.11, the pipe capacity is 16 pages (i.e., 65,536 bytes in a system with a page size of 4096 bytes). Since Linux 2.6.35, the default pipe capacity is 16 pages, but the capacity can be queried and set using the fcntl(2) F_GETPIPE_SZ and F_SETPIPE_SZ operations. See fcntl(2) for more information.

Anyway, the following still applies IMO:

I’m not sure why you are trying to set the limit lower, it seems like a strange idea to me. If you want the writer to wait until the reader has processed what it has written, you should use a pipe in the other direction for the reader to send back an ack.


How to tune Linux network buffer sizes

The first adjustment is to change the default and maximum amount of memory allocated for the send and receive buffers of each socket. This can significantly increase performance for large transfers. The relevant parameters for the per-socket default send and receive buffer sizes are net.core.wmem_default and net.core.rmem_default. In addition to those socket settings, the send and receive buffer sizes for TCP sockets must be set separately via the net.ipv4.tcp_wmem and net.ipv4.tcp_rmem parameters.

1 Answer

Short answer: r/wmem_default set static socket buffer sizes, while tcp_r/wmem control TCP send/receive window sizes and buffers dynamically.


More details: By tracking the usages of r/wmem_default and tcp_r/wmem (kernel 4.14) we can see that r/wmem_default are only used in sock_init_data():

void sock_init_data(struct socket *sock, struct sock *sk)
{
    sk_init_common(sk);
    ...
    sk->sk_rcvbuf = sysctl_rmem_default;
    sk->sk_sndbuf = sysctl_wmem_default;
    ...
}

This initializes the socket’s buffers for sending and receiving packets and might be later overridden in set_sockopt:

int sock_setsockopt(struct socket *sock, int level, int optname,
                    char __user *optval, unsigned int optlen)
{
    struct sock *sk = sock->sk;
    ...
    sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF);
    ...
    sk->sk_rcvbuf = max_t(int, val * 2, SOCK_MIN_RCVBUF);
    ...
}

Usages of tcp_rmem are found in these functions: tcp_select_initial_window() in tcp_output.c, and __tcp_grow_window(), tcp_fixup_rcvbuf(), tcp_clamp_window() and tcp_rcv_space_adjust() in tcp_input.c. In all of them this value is used to control the receive window and/or the socket's receive buffer dynamically, meaning it takes the current traffic and the system parameters into consideration. A similar search for tcp_wmem shows that it is only used for dynamic changes in the socket's send buffer, in tcp_init_sock() (tcp.c) and tcp_sndbuf_expand() (tcp_input.c).

So when you want the kernel to tune your traffic better, the most important values are tcp_r/wmem. The socket's buffer size is usually overridden by the user, so the default values don't really matter. For exact tuning, try reading the comments in tcp_input.c marked as "tuning"; there's a lot of valuable information there.


Increase buffer size while running screen

I use screen as my window manager through PuTTY. Screen has been great, but I need a way to increase my scrollback buffer when I run commands. I have no buffer when I scroll up; no stdout is saved beyond my window size on any terminal. How can I increase this? I can't seem to find an option in the commands; Ctrl + a ? doesn't seem to have what I am looking for.

4 Answers

I actually figured this out after looking through the man page. Setting the scrollback buffer inside .screenrc does work, but you can also change it inside your running screen session. Press Ctrl + a : and enter:

scrollback 1000

This gives you a 1000 line buffer.

You can also set the default number of scrollback lines in .screenrc by adding:

defscrollback 1000

Then entering copy mode will let you scroll around.

It would be good to clarify even more explicitly that scrollback does not work in .screenrc; only defscrollback does.

Press Ctrl + a : and then enter scrollback 1234 to set your buffer to 1234 lines. You enter scrollback mode ("copy mode") with Ctrl + a Esc, move around vi-style, and leave copy mode with another Esc.

You actually do have something of a buffer, but it's invisible to most terminal emulators (which is why, e.g., scroll bars don't appear to work). One way to get at it is to enter copy mode (Ctrl + a [ followed by arrow keys, PgUp, et cetera). The size of this buffer can be configured in .screenrc. You can also change its allocation inside your screen session; typing Ctrl + a : and then

scrollback 1000

gives you a 1000 line buffer.

That definitely sounds like a solution; I will try this. I use Ctrl + a Esc to get into copy mode personally, but they both work.

