Linux socket connection refused

Sockets: connection failed, connection refused

I have a socket app written in C and I am running it on Linux, but when I execute the server (./server), I get a "connection refused" error. Here is the server code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define DEFAULT_PORT 8080

void error(char* message)
{
    perror(message);
    exit(EXIT_FAILURE);
}

void handler_client(void)
{
}

int main(int argc, char **argv[])
{
    int server_fd;
    int new_socket;
    int valread;
    struct sockaddr_in address;
    int opt = 1;
    int addrlen = sizeof(address);
    char buffer[1024] = {0};
    char *hello = "Hello, I'm server";

    if ((server_fd = socket(AF_INET, SOCK_STREAM, 0)) == 0) {
        error("socket failed");
    }
    if (setsockopt(server_fd, SOL_SOCKET, SO_REUSEADDR | SO_REUSEPORT, &opt, sizeof(opt))) {
        error("setsockertopt");
    }

    address.sin_family = AF_INET;
    address.sin_addr.s_addr = INADDR_ANY;
    address.sin_port = htons(DEFAULT_PORT);

    if (bind(server_fd, (struct sockaddr *)&address, sizeof(address)) < 0) {
        error("bind failure");
    }
    if (listen(server_fd, 3) < 0) {
        error("listen");
    }
    if ((new_socket = accept(server_fd, (struct sockaddr *)&address, (socklen_t *)&addrlen)) < 0) {
        error("accept");
    }

    valread = read(new_socket, buffer, 1024);
    printf(" message from client : %s \n", buffer);
    send(new_socket, hello, strlen(hello), 0);
    printf("Hello message sent \n");
    return 0;
}
And here is the client code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define DEFAULT_PORT 8080
#define BUFFER_SIZE 1024

void error(char* message)
{
    perror(message);
    exit(EXIT_FAILURE);
}

int main(int argc, char** argv)
{
    struct sockaddr_in address;
    int sock = 0;
    int valread;
    struct sockaddr_in serv_addr;
    char message[BUFFER_SIZE] = {0};
    char buffer[BUFFER_SIZE] = {0};

    if ((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
        error("Sockert Creation fails");
    }

    memset(&serv_addr, '0', sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_port = htons(DEFAULT_PORT);

    if (inet_pton(AF_INET, "127.0.0.1", &serv_addr.sin_addr) <= 0) {
        error("invalid address");
    }
    if (connect(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) {
        error("connection failed");
    }

    printf("enter your message : ");
    fgets(message, BUFFER_SIZE - 1, stdin);
    send(sock, message, strlen(message), 0);
    printf("Hello message sent\n");
    valread = read(sock, buffer, BUFFER_SIZE);
    printf("response from buffer : %s\n", buffer);
    return 0;
}

Your code works without any changes for me. Are you running both client and server inside the same system (and did you start the server before starting the client)? Running on different systems will not work since you’ve explicitly specified 127.0.0.1 (i.e. the local host) as target.

My main system is macOS, but I have installed Ubuntu in a virtual machine, so can you help me: which target should I specify? Thanks.

You need to run both server and client inside the same system, i.e. either both inside MacOS or both inside the Ubuntu VM.

Understood, I am running both inside the Ubuntu VM, but even then I am getting connection refused; I can't even start the server. Does it have something to do with the port I am using, 8080?

1 Answer

A "connection refused" error means you are trying to connect to a server IP:port that is:

  1. not open for listening,
  2. backed up with too many pending client connections in its backlog, or
  3. blocked by a firewall/router/antivirus.

There is no way on the client side to differentiate which condition is causing the error. All it can do is try again later, and give up after a while.

Since your client is trying to connect to 127.0.0.1 , the client and server MUST be run on the same machine. You will not be able to connect across machine boundaries, and that includes VM boundaries between host and guest systems.


That being said, I see a number of mistakes in your code, but none that are causing the connection error.

    socket() returns -1 on error, not 0.

//if ((server_fd = socket(AF_INET, SOCK_STREAM, 0)) == 0) {
if ((server_fd = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
    SO_REUSEADDR and SO_REUSEPORT are separate options; optname is not a bitmask, so set them with separate setsockopt() calls instead of OR'ing them together.

//setsockopt(server_fd, SOL_SOCKET, SO_REUSEADDR | SO_REUSEPORT, &opt, sizeof(opt));
setsockopt(server_fd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));
setsockopt(server_fd, SOL_SOCKET, SO_REUSEPORT, &opt, sizeof(opt));

    you are filling serv_addr with '0' characters instead of 0 bytes.
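A minimal fix (assuming the intent is simply to zero the whole structure before filling it in):

memset(&serv_addr, 0, sizeof(serv_addr));   /* 0 bytes, not '0' characters */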

On both sides, you are not doing any error handling on send() . But more importantly, you are not including the message's null terminator in the send, so the peer has no way of knowing when the end of the message is reached. TCP is a streaming transport, so you need to frame your messages in your data protocol. Send the message's length before sending the message itself. Or send a unique terminator after the message. The peer can then read the length before reading the message, or read until the terminator is reached.
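For illustration, here is a minimal length-prefix framing sketch. The 4-byte network-order header and the helper name send_framed() are assumptions for this example, not part of the original code; partial sends are treated as errors here for brevity and are handled properly by the loops in the next paragraph.

#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

/* Send a 4-byte big-endian length header, then the message bytes. */
int send_framed(int fd, const char *msg)
{
    uint32_t len = (uint32_t)strlen(msg);
    uint32_t hdr = htonl(len);                     /* network byte order */
    if (send(fd, &hdr, sizeof(hdr), 0) != (ssize_t)sizeof(hdr))
        return -1;                                 /* short send treated as error */
    if (send(fd, msg, len, 0) != (ssize_t)len)
        return -1;
    return 0;
}

The receiver first reads the 4-byte header, converts it with ntohl(), and then reads exactly that many payload bytes.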

Both send() and read() return how many bytes were actually sent/received, which can be (and frequently is) fewer bytes than requested. You need to call them in a loop to make sure that you send/receive everything you are expecting.
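A minimal sketch of such loops (the helper names send_all()/recv_all() are assumptions for illustration):

#include <unistd.h>      /* read */
#include <sys/socket.h>  /* send */

/* Keep calling send() until the whole buffer has gone out, or an error occurs. */
int send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0)
            return -1;                /* error */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Keep calling read() until exactly len bytes have arrived, or error/EOF. */
int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0)
            return -1;                /* error, or EOF before len bytes */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}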

Also, read() does not return a null-terminated string, but your use of printf() expects one. Check valread for an error, and only if read() was successful pass valread to printf() so it knows how much data is actually in the buffer :

printf("message from client : %.*s\n", valread, buffer); 
printf("response from buffer : %.*s\n", valread, buffer); 


Cannot connect to Linux "abstract" unix socket

I'm trying to use UNIX sockets for inter-thread communication. The program is only intended to run on Linux. To avoid creating socket files, I wanted to use "abstract" sockets, as documented in unix(7). However, I don't seem to be able to connect to these sockets. Everything works if I use "pathname" sockets, though. Here is the code (I've left out the error handling, but it is done). Thread #1:

int log_socket = socket(AF_LOCAL, SOCK_STREAM, 0);
struct sockaddr_un logaddr;
socklen_t sun_len = sizeof(struct sockaddr_un);

logaddr.sun_family = AF_UNIX;
logaddr.sun_path[0] = 0;
strcpy(logaddr.sun_path + 1, "futurama");

bind(log_socket, (struct sockaddr *)&logaddr, sun_len);
listen(log_socket, 5);
accept(log_socket, (struct sockaddr *)&logaddr, &sun_len);
... // send - receive
Thread #2:

struct sockaddr_un tolog;
int sock = socket(AF_LOCAL, SOCK_STREAM, 0);

tolog.sun_family = AF_UNIX;
tolog.sun_path[0] = 0;
strcpy(tolog.sun_path + 1, "futurama");

connect(sock, (struct sockaddr *)&tolog, sizeof(struct sockaddr_un));

If all I do in the above code is change sun_path to not have the leading \0, things work perfectly. strace output:

t1: socket(PF_FILE, SOCK_STREAM, 0) = 0
t1: bind(0, , 110)
t1: listen(0, 5)
t2: socket(PF_FILE, SOCK_STREAM, 0) = 1
t2: connect(1, , 110
t2: ) = -1 ECONNREFUSED (Connection refused)
t1: accept(0,

I know that the connect comes before the accept; that's not the issue (I tried making sure that accept() is called before connect(), with the same result; also, things are fine if the socket is a "pathname" socket anyway).

For communication between threads of the same process, an ordinary pipe(2) should be enough! And you could also use pipes if all the communicating processes and/or threads have the same parent process!

@BasileStarynkevitch pipe will not work in my case. I need multiple threads to send info, and receive a synchronous response before moving on.

@BasileStarynkevitch for that, I would have to know the maximum number of pipes to open in advance, or limit access to a single pipe using locks. The socket approach has less overhead in such a case.

4 Answers

While I was posting this question and re-reading the unix(7) man page, this wording caught my attention:

an abstract socket address is distinguished by the fact that sun_path[0] is a null byte (’\0’). All of the remaining bytes in sun_path define the "name" of the socket

So, if I bzero'ed sun_path before filling my name into it, things started to work; I figured that's not necessarily straight-forward. Additionally, as rightly pointed out by @davmac and @StoneThrow, the number of those "remaining bytes" can be reduced by passing a socket address length that only covers the bytes you want to count as your address. One way to do that is the SUN_LEN macro; however, the first byte of sun_path would then have to be non-zero, since SUN_LEN uses strlen().

If sun_path[0] is \0, the kernel uses the entirety of the remainder of sun_path as the name of the socket, whether it's \0-terminated or not, so all of that remainder counts. In my original code I would zero the first byte and then strcpy() the socket name into sun_path at position 1. Whatever gibberish was left in the rest of sun_path when the structure was allocated (especially likely since it was allocated on the stack) was included in the length passed to the syscalls, counted as part of the socket name, and was different in bind() and connect().
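Putting both fixes together, here is a minimal Linux-only sketch of a correct abstract-socket bind (the name "futurama" is taken from the question; everything else is illustrative):

#include <stdio.h>
#include <stddef.h>     /* offsetof */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));              /* no stack garbage in sun_path */
    addr.sun_family = AF_UNIX;

    const char *name = "futurama";               /* abstract name: no filesystem entry */
    size_t name_len = strlen(name);
    memcpy(addr.sun_path + 1, name, name_len);   /* sun_path[0] stays '\0' */

    /* Pass the exact length: offset of sun_path + leading NUL + name bytes.
       SUN_LEN() cannot be used here, because it relies on strlen(sun_path). */
    socklen_t len = offsetof(struct sockaddr_un, sun_path) + 1 + name_len;

    if (bind(fd, (struct sockaddr *)&addr, len) < 0) { perror("bind"); return 1; }
    if (listen(fd, 5) < 0) { perror("listen"); return 1; }

    printf("listening on abstract socket @%s\n", name);
    close(fd);
    return 0;
}

With the same exact length passed to both bind() and connect(), the two addresses compare equal and the connection succeeds.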

IMHO, strace should fix the way it displays abstract socket names, and display all of the sun_path bytes from 1 up to whatever structure length was supplied, whenever sun_path[0] is 0.



Socket.Connection refused on linux and osx #923


Labels: area-System.Net.Sockets, bug, os-linux (Linux OS, any supported distro), tenet-compatibility (incompatibility with previous versions or .NET Framework)

Comments

Not sure if it is a bug, but this code works perfectly fine on Windows:

[Fact]
public static void UDP_MultipleSends()
{
    var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    socket.Connect(new IPEndPoint(IPAddress.Loopback, 12345));
    for (int i = 0; i < 100; i++)
    {
        socket.Send(Encoding.ASCII.GetBytes("hello world"));
    }
}
Operating System Details
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.5 LTS
Release:        14.04
Codename:       trusty
instance: 6963e4a4-daa1-404d-b285-a2db097e200f travis-ci-macos10.12-xcode8.3-1507738863 (via amqp) 
Error Message: System.Net.Sockets.SocketException : Connection refused 


Do you have anything listening on that port, @dv00d00?
You may run tcpdump -ni lo port 12345 (or strace).

It seems like the error is coming from the OS:

strace -f -e network ~/dotnet-2.1.302/dotnet run
[pid 28081] connect(26, , 16) = 0    (connect OK)
[pid 28081] sendmsg(26, , 0) = 11
[pid 28081] sendmsg(26, , 0) = -1 ECONNREFUSED (Connection refused)

When sending data locally, the kernel can detect that the port is closed and the sendmsg() call fails. That does not look like a problem with the runtime, more a difference in how the OS handles I/O.

[Fact]
public static void UDP_MultipleSends()
{
    var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    var ep = new IPEndPoint(IPAddress.Loopback, 12345);
    for (int i = 0; i < 100; i++)
    {
        socket.SendTo(Encoding.ASCII.GetBytes("hello world"), ep);
    }
}

works without throwing exceptions. I don't believe anything is on that specific port, this was happening on a travis CI machine, but the same happened on my mac.

I did more testing with C and C#, and I also looked at the Linux kernel code. This seems to be the way Unix works: when sendto() is used, each chunk of data is submitted independently, and the call succeeds as long as there is space in the socket buffer. Since UDP is unreliable, this has nothing to do with actual delivery.

I also did a packet capture for both calls. In both cases I see:

furt@Ubuntu:~/git/wfurt-corefx-serial/src/System.Diagnostics.Process/src$ sudo tcpdump -eni lo
[sudo] password for furt:
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 262144 bytes
14:48:23.512034 00:00:00:00:00:00 > 00:00:00:00:00:00, ethertype IPv4 (0x0800), length 53: 127.0.0.1.46992 > 127.0.0.1.12345: UDP, length 11
14:48:23.512043 00:00:00:00:00:00 > 00:00:00:00:00:00, ethertype IPv4 (0x0800), length 81: 127.0.0.1 > 127.0.0.1: ICMP 127.0.0.1 udp port 12345 unreachable, length 47

When sendmsg() is used on a connected socket, the following happens: the first message goes out without error. When the ICMP error comes back, it is remembered on the "connection" (the internal socket structure), and subsequent sendmsg() calls fail.
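The same behavior can be reproduced with a small C program. This is a sketch under the assumption that nothing is listening on 127.0.0.1:12345; the first send() typically succeeds, and a later one fails once the ICMP error has been delivered:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(12345);                 /* assumed to be a closed port */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    /* connect() on UDP only fixes the peer address; from then on the kernel
       reports asynchronous errors (ICMP port unreachable) on later calls. */
    if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
        perror("connect");
        return 1;
    }

    for (int i = 0; i < 3; i++) {
        ssize_t n = send(fd, "hello world", 11, 0);
        if (n < 0)
            printf("send %d: %s\n", i, strerror(errno));   /* ECONNREFUSED */
        else
            printf("send %d: ok (%zd bytes)\n", i, n);
        usleep(1000);    /* give the ICMP error time to be delivered */
    }

    close(fd);
    return 0;
}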

Since this is OS behavior, I don't think it makes sense to hide the underlying error. Raising the exception and letting the caller deal with it seems like the better approach.

I'm proposing to close this unless somebody objects. cc: @karelz
(note that the linked PR does not change this behavior)

