Error loading shared library ld-linux-x86-64.so.2: No such file or directory #274
My deployment has google-cloud as a dependency and is failing with this message:
Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /home/nowuser/src/node_modules/40daee308a4cb349df8d6cf5a749dfb86d2a7524/src/node/extension_binary/grpc_node.node)
Searching the google-cloud repo, I found this possible explanation:
The line that’s failing, process.dlopen, is the mechanism that Node uses to load native extensions. It looks like your system doesn’t support that, and without it, gRPC on Node simply will not work.
Does this apply here, and does that mean I can’t use google-cloud with now?
Node version 6.9.4
now version 4.0.1
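For context, ld-linux-x86-64.so.2 is the glibc dynamic loader, so this error generally means a native addon compiled against glibc is being loaded on a musl-based system such as Alpine Linux. A quick way to check which libc an environment uses (a generic sketch, not specific to now's build environment; the addon path is a placeholder for the one in the error message):

# on a glibc system the loader exists at this path
ls -l /lib64/ld-linux-x86-64.so.2
# on a musl system (e.g. Alpine) you will find this one instead
ls -l /lib/ld-musl-x86_64.so.1
# inspect what the native addon itself links against
# (replace the path with the grpc_node.node location from the error)
ldd path/to/grpc_node.node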
UnsatisfiedLinkError: /tmp/snappy-1.1.4-libsnappyjava.so Error loading shared library ld-linux-x86-64.so.2: No such file or directory
I am trying to run a Kafka Streams application in Kubernetes. When I launch the pod, I get the following exception:
Exception in thread "streams-pipe-e19c2d9a-d403-4944-8d26-0ef27ed5c057-StreamThread-1" java.lang.UnsatisfiedLinkError: /tmp/snappy-1.1.4-5cec5405-2ce7-4046-a8bd-922ce96534a0-libsnappyjava.so: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /tmp/snappy-1.1.4-5cec5405-2ce7-4046-a8bd-922ce96534a0-libsnappyjava.so)
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
    at java.lang.Runtime.load0(Runtime.java:809)
    at java.lang.System.load(System.java:1086)
    at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:179)
    at org.xerial.snappy.SnappyLoader.loadSnappyApi(SnappyLoader.java:154)
    at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
    at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:435)
    at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:466)
    at java.io.DataInputStream.readByte(DataInputStream.java:265)
    at org.apache.kafka.common.utils.ByteUtils.readVarint(ByteUtils.java:168)
    at org.apache.kafka.common.record.DefaultRecord.readFrom(DefaultRecord.java:292)
    at org.apache.kafka.common.record.DefaultRecordBatch$1.readNext(DefaultRecordBatch.java:264)
    at org.apache.kafka.common.record.DefaultRecordBatch$RecordIterator.next(DefaultRecordBatch.java:563)
    at org.apache.kafka.common.record.DefaultRecordBatch$RecordIterator.next(DefaultRecordBatch.java:532)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.nextFetchedRecord(Fetcher.java:1060)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1095)
    at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1200(Fetcher.java:949)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:570)
    at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:531)
    at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1146)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1103)
    at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:851)
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:808)
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
Previously I tried launching Kafka and the Kafka Streams application in Docker containers, and they worked perfectly fine. This is the first time I am trying Kubernetes. This is my Dockerfile for the streams app:
FROM openjdk:8u151-jdk-alpine3.7
COPY /target/streams-examples-0.1.jar /streamsApp/
COPY /target/libs /streamsApp/libs
CMD ["java", "-jar", "/streamsApp/streams-examples-0.1.jar"]
Running ldd against the java binary inside the container gives:

/ # ldd /usr/bin/java
    /lib/ld-musl-x86_64.so.1 (0x7f03f279a000)
Error loading shared library libjli.so: No such file or directory (needed by /usr/bin/java)
    libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f03f279a000)
Error relocating /usr/bin/java: JLI_Launch: symbol not found
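The ldd output above shows the image is musl-based (Alpine), while the extracted snappy native library expects the glibc loader. A possible workaround, assuming you control the Dockerfile, is to install Alpine's glibc compatibility shim, or alternatively to switch to a glibc-based (e.g. Debian) openjdk base image:

# inside the Alpine image, install the glibc compatibility package
# (in the Dockerfile this would be: RUN apk add --no-cache libc6-compat)
apk add --no-cache libc6-compat
# verify the glibc loader path now resolves
ls -l /lib64/ld-linux-x86-64.so.2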
Error loading shared library ld-linux-x86-64.so.2 #435
I am trying to join two streams using Kafka Streams. When using multiple brokers with wurstmeister/kafka, the Kafka Streams application shuts down by itself and does not throw any exception unless you set an UncaughtExceptionHandler. After setting the handler, I got the error below:
java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni7105115125614966504.so: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /tmp/librocksdbjni7105115125614966504.so)
I did some research on this, and I think it is related to the libc6-compat version in the wurstmeister/kafka base image.
Note: I did not face this problem with a single broker.
kafka-docker version: 2.12_2.1.0
OS version: Ubuntu 16.04.3 LTS (Xenial Xerus), x86_64 GNU/Linux
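Since the error names the glibc loader, it may be worth checking whether the container that actually runs the Streams application has a working libc6-compat shim. A sketch of the check (the container name kafka1 is an assumption):

# list the installed libc6-compat version, if any
docker exec kafka1 apk info -vv | grep libc6-compat
# check whether the glibc loader path is present at all
docker exec kafka1 ls -l /lib64/ld-linux-x86-64.so.2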
It would be great if someone could provide a minimal example, configuration and steps to reproduce so this can be investigated further.
As a starting point, here is a simple join of two streams (it just concatenates their values into one output stream):
import java.time.Duration;
import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;

String letters = "letters";
String numbers = "numbers";

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9094,localhost:9095");
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-join-test");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> left = builder.stream(letters);
KStream<String, String> right = builder.stream(numbers);

// join records that share a key and arrive within 5 seconds of each other
KStream<String, String> joined = left.join(
        right,
        (lval, rval) -> lval + "+" + rval,
        JoinWindows.of(Duration.ofSeconds(5)),
        Joined.with(Serdes.String(), Serdes.String(), Serdes.String()));
joined.to("output");

new KafkaStreams(builder.build(), props).start();
I have a docker-compose, such as:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka1:
    image: wurstmeister/kafka:2.12-2.1.0
    ports:
      - "9094:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.3
      KAFKA_ADVERTISED_PORT: 9094
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka2:
    image: wurstmeister/kafka:2.12-2.1.0
    ports:
      - "9095:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: 192.168.0.3
      KAFKA_ADVERTISED_PORT: 9095
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
I created the two Kafka input topics (letters + numbers) with --partitions 10 --replication-factor 1 so that data is distributed between the two brokers:
bash-4.4# kafka-topics.sh --describe --zookeeper zookeeper
Topic:letters   PartitionCount:10   ReplicationFactor:1   Configs:
    Topic: letters   Partition: 0   Leader: 1001   Replicas: 1001   Isr: 1001
    Topic: letters   Partition: 1   Leader: 1002   Replicas: 1002   Isr: 1002
    Topic: letters   Partition: 2   Leader: 1001   Replicas: 1001   Isr: 1001
    Topic: letters   Partition: 3   Leader: 1002   Replicas: 1002   Isr: 1002
    Topic: letters   Partition: 4   Leader: 1001   Replicas: 1001   Isr: 1001
    Topic: letters   Partition: 5   Leader: 1002   Replicas: 1002   Isr: 1002
    Topic: letters   Partition: 6   Leader: 1001   Replicas: 1001   Isr: 1001
    Topic: letters   Partition: 7   Leader: 1002   Replicas: 1002   Isr: 1002
    Topic: letters   Partition: 8   Leader: 1001   Replicas: 1001   Isr: 1001
    Topic: letters   Partition: 9   Leader: 1002   Replicas: 1002   Isr: 1002
Topic:numbers   PartitionCount:10   ReplicationFactor:1   Configs:
    Topic: numbers   Partition: 0   Leader: 1002   Replicas: 1002   Isr: 1002
    Topic: numbers   Partition: 1   Leader: 1001   Replicas: 1001   Isr: 1001
    Topic: numbers   Partition: 2   Leader: 1002   Replicas: 1002   Isr: 1002
    Topic: numbers   Partition: 3   Leader: 1001   Replicas: 1001   Isr: 1001
    Topic: numbers   Partition: 4   Leader: 1002   Replicas: 1002   Isr: 1002
    Topic: numbers   Partition: 5   Leader: 1001   Replicas: 1001   Isr: 1001
    Topic: numbers   Partition: 6   Leader: 1002   Replicas: 1002   Isr: 1002
    Topic: numbers   Partition: 7   Leader: 1001   Replicas: 1001   Isr: 1001
    Topic: numbers   Partition: 8   Leader: 1002   Replicas: 1002   Isr: 1002
    Topic: numbers   Partition: 9   Leader: 1001   Replicas: 1001   Isr: 1001
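For anyone reproducing this, the stack can be brought up and the topics created roughly like this (a sketch; the service name kafka1 and the zookeeper address are assumptions based on the compose file above):

docker-compose up -d
docker-compose exec kafka1 kafka-topics.sh --create --zookeeper zookeeper:2181 --topic letters --partitions 10 --replication-factor 1
docker-compose exec kafka1 kafka-topics.sh --create --zookeeper zookeeper:2181 --topic numbers --partitions 10 --replication-factor 1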
I populated input data using kafkacat:
echo -e "a:a\nb:b" | kafkacat -b localhost:9094 -P -t letters -K:
echo -e "a:2\nb:3" | kafkacat -b localhost:9094 -P -t numbers -K:
The output stream is generated successfully:

$ kafkacat -b localhost:9094 -C -t output -f '%k: %s\n' -o beginning
b: b+3
a: a+2
This works fine for me. How different is your application? Can you get your issue down to a minimal set of steps to reproduce it?