Is there a way to flush a POSIX socket?
Is there a standard call for flushing the transmit side of a POSIX socket all the way through to the remote end or does this need to be implemented as part of the user level protocol? I looked around the usual headers but couldn’t find anything.
9 Answers
What about setting TCP_NODELAY and then resetting it back? It could probably be done just before sending important data, or when we are done sending a message.
send(sock, "notimportant", ...);
send(sock, "notimportant", ...);
send(sock, "notimportant", ...);
int flag = 1;
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));
send(sock, "important data or end of the current message", ...);
flag = 0;
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));
From the documentation of TCP_NODELAY: setting this option forces an explicit flush of pending output.
So it would probably be better to set it after the message, but I am not sure how it works on other systems.
This should work with Linux at least; when you set TCP_NODELAY it flushes the current buffer, which should, according to the source comments, flush even when TCP_CORK is set.
I confirm that it works. I prefer to set and reset TCP_NODELAY after sending the «important data or end of the current message», because that allows putting this logic inside a flush-socket method.
You describe a situation where «notimportant» messages are mixed with messages that are to be sent ASAP. For this situation, I guess your solution is a good way to circumvent the delay that might be introduced by Nagle. However, if you do not send multiple messages but one compound message, you should send it with a single send() call! That's because even with Nagle, the first packet is sent immediately. Only subsequent ones are delayed, and they are delayed only until the server acknowledges or replies. So just pack «everything» into the first send() .
That is true, but in this case we know neither how large the unimportant data is, nor whether there were any delays between the individual sends. The problem arises when data is sent in one direction. For example, I work as a proxy: I forward everything I get and rely on the network to send it in an optimal way, so I do not care. But then comes an important packet, which I want to send as soon as possible. Which is what I understand as flushing.
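The «flush-socket method» mentioned in the comments above could be sketched roughly like this (a minimal sketch, assuming a connected TCP socket on Linux; the function name flush_socket is made up for illustration):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Toggle TCP_NODELAY on and then off again right after the important
 * write; on Linux, setting the option pushes any pending partial
 * segment out of the kernel's send buffer. */
static int flush_socket(int sock)
{
    int flag = 1;
    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) < 0)
        return -1;
    flag = 0;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
}
```

You would call it right after send()ing the urgent message; as noted above, this relies on Linux behaviour and may differ on other systems.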
For Unix-domain sockets, you can use fflush() , but I’m thinking you probably mean network sockets. There isn’t really a concept of flushing those. The closest things are:
- At the end of your session, calling shutdown(sock, SHUT_WR) to close out writes on the socket.
- On TCP sockets, disabling the Nagle algorithm with sockopt TCP_NODELAY , which is generally a terrible idea that will not reliably do what you want, even if it seems to take care of it on initial investigation.
It’s very likely that handling whatever issue is calling for a ‘flush’ at the user protocol level is going to be the right thing.
I don’t see any reason why disabling the Nagle algorithm «is generally a terrible idea». If you know what it does, there are many application protocol situations where disabling Nagle is exactly what you want to do. I suspect you haven’t had a situation where you really needed to do that or you don’t understand what it really does. In other words, this feature is there for a reason and it can also be disabled for a very good reason.
You’ll need to #include <netinet/tcp.h> for TCP_NODELAY .
Nagle could have been a wonderful thing if there were a TCP flush, since it can optimize its buffer size to the current segment size. If you disable Nagle, you have a buffer of your own which is not synchronized to the segment size, and so you will send half-filled packets over the network.
fflush has nothing to do with the issue even for unix domain sockets. It’s only relevant if you’ve wrapped your socket with a FILE via fdopen .
What I do is enable Nagle, write as many bytes (using non-blocking I/O) as I can to the socket (i.e. until I run out of bytes to send, or the send() call returns EWOULDBLOCK, whichever comes first), and then disable Nagle again. This seems to work well (i.e. I get low latency AND full-size packets where possible)
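The pattern this comment describes might look like the following (a sketch, assuming a non-blocking TCP socket; send_all_then_push is a hypothetical name):

```c
#include <errno.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Write as much as the kernel will accept, then briefly toggle
 * TCP_NODELAY to push out any buffered partial segment.
 * Returns the number of bytes queued, or -1 on a real error. */
static ssize_t send_all_then_push(int sock, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = send(sock, buf + off, len - off, 0);
        if (n < 0) {
            if (errno == EWOULDBLOCK || errno == EAGAIN)
                break;              /* kernel send buffer is full */
            return -1;
        }
        off += (size_t)n;
    }
    int flag = 1;                   /* disable Nagle: acts as a flush */
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
    flag = 0;                       /* re-enable Nagle */
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
    return (ssize_t)off;
}
```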
In RFC 1122 the name of the thing that you are looking for is «PUSH». However, there does not seem to be a relevant TCP API implementation that implements «PUSH». Alas, no luck.
Some answers and comments deal with the Nagle algorithm. Most of them seem to assume that the Nagle algorithm delays each and every send. This assumption is not correct. Nagle delays sending only when at least one of the previous packets has not yet been acknowledged (http://www.unixguide.net/network/socketfaq/2.11.shtml).
To put it differently: TCP will send the first packet (of a row of packets) immediately. Only if the connection is slow or your computer does not get a timely acknowledgement will Nagle delay sending subsequent data, until (whichever occurs first)
- a time-out is reached or
- the last unacknowledged packet is acknowledged or
- your send buffer is full or
- you disable Nagle or
- you shutdown the sending direction of your connection
A good mitigation is to avoid sending subsequent small chunks as far as possible. This means: if your application calls send() more than once to transmit a single compound request, try to rewrite your application. Assemble the compound request in user space, then call send() . Once. This also saves on context switches (much more expensive than most user-space operations).
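Assembling the compound request in user space first might look like this (a sketch; the function name send_compound_request and the header/body layout are made up for illustration):

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Build header and body into one buffer, then hand the whole request
 * to the kernel in a single send() call, so Nagle never sees two
 * separate small writes. */
static ssize_t send_compound_request(int sock, const char *header,
                                     const char *body)
{
    char request[1024];
    int len = snprintf(request, sizeof(request), "%s\r\n%s\r\n", header, body);
    if (len < 0 || (size_t)len >= sizeof(request))
        return -1;                  /* request too large for this sketch */
    return send(sock, request, (size_t)len, 0);
}
```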
Besides, when the send buffer contains enough data to fill the maximum size of a network packet, Nagle does not delay either. This means: If the last packet that you send is big enough to fill your send buffer, TCP will send your data as soon as possible, no matter what.
To sum it up: Nagle is not the brute-force approach to reducing packet fragmentation some might consider it to be. On the contrary: To me it seems to be a useful, dynamic and effective approach to keep both a good response time and a good ratio between user data and header data. That being said, you should know how to handle it efficiently.
How to clear previous data in socket buffer?
I am sending data over to my server, which writes the received data back to the client; the data also gets stored in a text file on the server. I have this problem: if the length of the sent message is smaller than previously sent messages, the extra characters get added to the current message. For example, if I send «hello» first and then I send «all», the messages stored in the file are «hello» followed by «alllo».
#include <stdio.h>
#include <string.h>    //strlen
#include <stdlib.h>    //exit
#include <sys/socket.h>
#include <arpa/inet.h> //inet_addr
#include <unistd.h>    //write

int main(int argc, char *argv[])
{
    int socket_desc, client_sock, c, read_size;
    struct sockaddr_in server, client;
    char client_message[2000];

    //Create socket
    socket_desc = socket(AF_INET, SOCK_STREAM, 0);
    if (socket_desc == -1)
    {
        printf("Could not create socket");
    }
    puts("Socket created");

    //Prepare the sockaddr_in structure
    server.sin_family = AF_INET;
    server.sin_addr.s_addr = INADDR_ANY;
    server.sin_port = htons(8888);

    //Bind
    if (bind(socket_desc, (struct sockaddr *)&server, sizeof(server)) < 0)
    {
        //print the error message
        perror("bind failed. Error");
        return 1;
    }
    puts("bind done");

    //Listen
    listen(socket_desc, 3);

    //Accept an incoming connection
    puts("Waiting for incoming connections...");
    c = sizeof(struct sockaddr_in);

    //accept connection from an incoming client
    client_sock = accept(socket_desc, (struct sockaddr *)&client, (socklen_t *)&c);
    if (client_sock < 0)
    {
        perror("accept failed");
        return 1;
    }
    puts("Connection accepted");

    //Receive a message from client
    while ((read_size = recv(client_sock, client_message, 2000, 0)) > 0)
    {
        //Send the message back to client
        write(client_sock, client_message, strlen(client_message));

        //append the received data at the end of the file
        FILE *f = fopen("data.txt", "a");
        if (f == NULL)
        {
            printf("Error opening file!\n");
            exit(1);
        }
        char *text = client_message;
        fprintf(f, "%s\n", text);
        fclose(f);
        //clear socket
    }

    if (read_size == 0)
    {
        puts("Client disconnected");
        fflush(stdout);
    }
    else if (read_size == -1)
    {
        perror("recv failed");
    }

    return 0;
}
So I think I need to clear the buffer after writing to the file, but I’m not sure how to do this. I would greatly appreciate your help!
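One way to fix this (an assumed fix, not from the original post): use the byte count recv() returns instead of strlen() , and terminate the buffer each iteration. recv() does not null-terminate, so leftovers from earlier, longer messages remain in the array. A helper like the following (the name handle_chunk is made up) shows the idea:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Echo and log exactly the bytes received in this chunk. */
static void handle_chunk(int client_sock, char *buf, ssize_t read_size, FILE *f)
{
    buf[read_size] = '\0';                       /* cut off stale leftovers */
    write(client_sock, buf, (size_t)read_size);  /* echo what arrived, no more */
    fprintf(f, "%s\n", buf);
    memset(buf, 0, (size_t)read_size + 1);       /* optional: wipe the buffer */
}
```

In the receive loop you would then call recv() with sizeof(client_message) - 1 so there is always room for the terminator.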
Linux – How to clear socket buffer in linux
I have a question about socket programming. I want to clear the socket buffer.
I tried the following code:
int ret_read = read(return_events[index].data.fd, recv_buffer, sizeof(recv_buffer));
if (-1 == ret_read)
{
    if (EAGAIN != errno)
    {
        printf("read data from %d error occurred, errno=%d, %s.\n",
               return_events[index].data.fd, errno, strerror(errno));
        /* Tag-position. I know the buffer is not empty; I want to clear
         * it at this code position. The buffer in question is the socket
         * recv buffer, not recv_buffer. */
    }
    continue;
}
I don’t want to use read() again at the Tag-position, because I want to set the buffer to empty. Even if I used read() at the Tag-position, I think it might fail.
Can anyone tell me another way, other than read() , at the Tag-position?
Best Solution
It’s no different from any other buffer:
bzero(recv_buffer, sizeof(recv_buffer));
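Note that bzero() is marked obsolescent by POSIX; the portable, standard-C equivalent uses memset() (the helper name clear_buffer is illustrative):

```c
#include <string.h>

/* Equivalent to bzero(recv_buffer, len), but standard C. */
static void clear_buffer(char *recv_buffer, size_t len)
{
    memset(recv_buffer, 0, len);
}
```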
Related Solutions
C++ – How to set, clear, and toggle a single bit
Setting a bit
Use the bitwise OR operator ( | ) to set a bit:

number |= 1UL << n;

That will set the n th bit of number . n should be zero if you want to set the 1st bit, and so on up to n-1 if you want to set the n th bit.
Clearing a bit
Use the bitwise AND operator ( & ) to clear a bit:

number &= ~(1UL << n);

That will clear the n th bit of number . You must invert the bit string with the bitwise NOT operator ( ~ ), then AND it.
Toggling a bit
The XOR operator ( ^ ) can be used to toggle a bit:

number ^= 1UL << n;

That will toggle the n th bit of number .
Checking a bit
You didn’t ask for this, but I might as well add it.
To check a bit, shift the number n to the right, then bitwise AND it:

bit = (number >> n) & 1U;

That will put the value of the n th bit of number into the variable bit .
Changing the nth bit to x
Setting the n th bit to either 1 or 0 can be achieved with the following on a 2’s complement C++ implementation:

number ^= (-x ^ number) & (1UL << n);

Bit n will be set if x is 1 , and cleared if x is 0 . If x has some other value, you get garbage. x = !!x will booleanize it to 0 or 1.
To make this independent of 2’s complement negation behaviour (where -1 has all bits set, unlike on a 1’s complement or sign/magnitude C++ implementation), use unsigned negation.
number ^= (-(unsigned long)x ^ number) & (1UL << n);

or:

unsigned long newbit = !!x; // Also booleanize to force 0 or 1
number ^= (-newbit ^ number) & (1UL << n);
It's generally a good idea to use unsigned types for portable bit manipulation.
It's also generally a good idea not to copy/paste code, so many people use preprocessor macros (like the community wiki answer further down) or some sort of encapsulation.
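The macro-based encapsulation mentioned above often looks something like this (the macro names are illustrative, not standard):

```c
/* Illustrative bit-manipulation macros; `pos` counts from bit 0. */
#define BIT_SET(n, pos)    ((n) |=  (1UL << (pos)))
#define BIT_CLEAR(n, pos)  ((n) &= ~(1UL << (pos)))
#define BIT_TOGGLE(n, pos) ((n) ^=  (1UL << (pos)))
#define BIT_CHECK(n, pos)  (((n) >> (pos)) & 1UL)
```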
Linux – How to prompt for Yes/No/Cancel input in a Linux shell script
The simplest and most widely available method to get user input at a shell prompt is the read command. The best way to illustrate its use is a simple demonstration:
while true; do
    read -p "Do you wish to install this program? " yn
    case $yn in
        [Yy]* ) make install; break;;
        [Nn]* ) exit;;
        * ) echo "Please answer yes or no.";;
    esac
done
Another method, pointed out by Steven Huwig, is Bash's select command. Here is the same example using select :
echo "Do you wish to install this program?" select yn in "Yes" "No"; do case $yn in Yes ) make install; break;; No ) exit;; esac done
With select you don't need to sanitize the input – it displays the available choices, and you type a number corresponding to your choice. It also loops automatically, so there's no need for a while true loop to retry if they give invalid input.
Also, Léa Gris demonstrated a way to make the request language agnostic in her answer. Adapting my first example to better serve multiple languages might look like this:
set -- $(locale LC_MESSAGES)
yesexpr="$1"; noexpr="$2"; yesword="$3"; noword="$4"

while true; do
    read -p "Install (${yesword} / ${noword})? " yn
    if [[ "$yn" =~ $yesexpr ]]; then make install; exit; fi
    if [[ "$yn" =~ $noexpr ]]; then exit; fi
    echo "Answer ${yesword} / ${noword}."
done
Obviously other communication strings remain untranslated here (Install, Answer) which would need to be addressed in a more fully completed translation, but even a partial translation would be helpful in many cases.
Finally, please check out the excellent answer by F. Hauri.
C: how to clear out buffer from a read() on a socket
I am trying to receive data from a server, and it works fine the first time, but as read() keeps looping, the buffer also retains old values from previous reads. Here is what I have so far.
char receive[50];
if ((he = gethostbyname(servername)) == NULL)
{
    perror(strcat("Cannot find server named:", servername));
    exit(0);
}
he = gethostbyname("localhost");
localIP = inet_ntoa(*(struct in_addr *)*he->h_addr_list);
client_sock_desc = socket(AF_INET, SOCK_STREAM, 0);
server_addr.sin_family = AF_INET;
server_addr.sin_addr.s_addr = inet_addr(localIP);
server_addr.sin_port = htons(serverport);
len = sizeof(server_addr);
if (connect(client_sock_desc, (struct sockaddr *)&server_addr, len) == -1)
{
    perror("Client failed to connect");
    exit(0);
}
strcpy(buf, "CLIENT/REQUEST\n");
send(client_sock_desc, buf, strlen(buf), 0);

//send actual function request
//put a space before \n char to make it easier for the server
for (i = 0; i < sizeof(wholeRequest); i++)
{
    if (wholeRequest[i] == '\n')
    {
        wholeRequest[i] = ' ';
        wholeRequest[i+1] = '\n';
        break;
    }
}
while (read(client_sock_desc, receive, sizeof(receive)) > 0)
{
    strcpy(receive, ""); //attempt to erase all old values
    printf(receive);
    fflush(stdout);
}
close(client_sock_desc);
When the server sends data once and closes the socket, it works perfectly. But then I have the client open the socket again and send data to the server; the server will once again send data to the client and close the socket. The client will again try to read the data the server sent, but this time receive is filled with both the new information and part of the old information.
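A sketch of a corrected receive loop: read() returns how many bytes are valid this time, and everything past that index in receive is leftover from earlier reads, so terminate the buffer there instead of trying to clear it afterwards. Also note that printf(receive) treats received data as a format string, which is dangerous. (The helper name echo_exact is made up for illustration.)

```c
#include <stdio.h>
#include <unistd.h>

/* Read from fd until EOF, printing only the bytes actually received
 * on each iteration; returns the total number of bytes read. */
static size_t echo_exact(int fd, FILE *out)
{
    char receive[50];
    ssize_t n;
    size_t total = 0;
    while ((n = read(fd, receive, sizeof(receive) - 1)) > 0) {
        receive[n] = '\0';    /* bytes past n are stale leftovers */
        fputs(receive, out);  /* not printf(receive): data is not a format */
        total += (size_t)n;
    }
    fflush(out);
    return total;
}
```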