Linux: log IP addresses

Grepping logs for IP addresses

I am quite bad at using "basic" unix commands and this question puts my knowledge even more to the test. What I would like to do is grep all IP addresses from a log (e.g. access.log from Apache) and count how often they occur. Can I do that with one command or do I need to write a script for that?

8 Answers

You’ll need a short pipeline at least.

sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d access.log | sort | uniq -c

This will print each IP (it only works with IPv4, though), sorted and prefixed with its count.

I tested it with apache2’s access.log (it’s configurable though, so you’ll need to check), and it worked for me. It assumes the IP-address is the first thing on each line.

The sed collects the IP-addresses (actually it looks for 4 sets of digits, with periods in between), and replaces the entire line with it. -e t continues to the next line if it managed to do a substitution, -e d deletes the line (if there was no IP address on it). sort sorts.. 🙂 And uniq -c counts instances of consecutive identical lines (which, since we’ve sorted them, corresponds to the total count).
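
For comparison, a shorter pipeline that simply takes the first whitespace-separated field (a sketch; it only works if the IP really is the first thing on each line, as in Apache's common/combined formats):

awk '{print $1}' access.log | sort | uniq -c | sort -rn

The trailing sort -rn puts the busiest addresses at the top.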

None of the answers presented here worked for me, so here is a working one:

cat yourlogs.txt | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" | sort | uniq -c | sort

It uses grep to isolate all the IPs, then sorts them, counts them, and sorts that result again.

You can skip the cat and just give grep the filename: grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" yourlogs.txt | sort | uniq -c | sort
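
If you mainly care about the heaviest hitters, the final sort can be made numeric and reversed and the list trimmed with head (a small variation, assuming GNU sort and coreutils head):

grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" yourlogs.txt | sort | uniq -c | sort -rn | head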

You can do the following (where datafile is the name of the log file):

egrep '[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}' datafile | sort | uniq -c

Edit: missed the part about counting addresses, now added.

This fails, as egrep will print the whole line including timestamps, so each line will be unique. You need to single out the IP address and remove the rest of the line (or in some other way consider only the IP when checking uniqueness).


This might actually fail, as Dave Tarsi points out: it will catch things like browser version strings that look like valid IP addresses. You need to know where the IP address is on the line (at the beginning) and select only that part.
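
One way to avoid matching browser-version strings, as suggested above, is to anchor the pattern to the start of the line (a sketch that assumes the IP really is the first token, as in Apache's common/combined formats):

grep -oE '^([0-9]{1,3}\.){3}[0-9]{1,3}' datafile | sort | uniq -c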

The following is a script I wrote several years ago. It greps out addresses from Apache access logs. I just tried it on Ubuntu 11.10 (oneiric), 3.0.0-32-generic #51-Ubuntu SMP Thu Mar 21 15:51:26 UTC 2013 i686 GNU/Linux, and it works fine. Use Gvim or Vim to read the resulting file, which will be called unique_visits and lists the unique IPs in a column. The key to this is in the grep expressions, which extract the IP address numbers. IPv4 only. You may need to go through and update the browser version numbers. Another similar script that I wrote for a Slackware system is here: http://www.perpetualpc.net/srtd_bkmrk.html

#!/bin/sh
#eliminate search engine referals and zombie hunters. combined_log is the original file
egrep '(google)|(yahoo)|(mamma)|(query)|(msn)|(ask.com)|(search)|(altavista)|(images.google)|(xb1)|(cmd.exe)|(trexmod)|(robots.txt)|(copernic.com)|(POST)' combined_log > search

#now sort them to eliminate duplicates and put them in order
sort -un search > search_sort
#do the same with original file
sort -un combined_log > combined_log_sort

#now get all the ip addresses. only the numbers
grep -o '[0-9][0-9]*[.][0-9][0-9]*[.][0-9][0-9]*[.][0-9][0-9]*' search_sort > search_sort_ip
grep -o '[0-9][0-9]*[.][0-9][0-9]*[.][0-9][0-9]*[.][0-9][0-9]*' combined_log_sort > combined_log_sort_ip

sdiff -s combined_log_sort_ip search_sort_ip > final_result_ip

#get rid of the extra column
grep -o '^\|[0-9][0-9]*[.][0-9][0-9]*[.][0-9][0-9]*[.][0-9][0-9]*' final_result_ip > bookmarked_ip

#remove stuff like browser versions and system versions
egrep -v '(4.4.2.0)|(1.6.3.1)|(0.9.2.1)|(4.0.0.42)|(4.1.8.0)|(1.305.2.109)|(1.305.2.12)|(0.0.43.45)|(5.0.0.0)|(1.6.2.0)|(4.4.5.0)|(1.305.2.137)|(4.3.5.0)|(1.2.0.7)|(4.1.5.0)|(5.0.2.6)|(4.4.9.0)|(6.1.0.1)|(4.4.9.0)|(5.0.8.6)|(5.0.2.4)|(4.4.8.0)|(4.4.6.0)' bookmarked_ip > unique_visits

exit 0
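
A minimal usage sketch, assuming the script above is saved as count_visits.sh next to a copy of the Apache log named combined_log (both names are only examples):

chmod +x count_visits.sh
./count_visits.sh
vim unique_visits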


How to grep all visitor IP addresses and their request counts from an access log

We need to get every IP address that appears in access.log and print a list of the IP addresses along with the number of requests from each one.

Why might this be needed? For example, when an API server moves to a new address, some HTTP requests still arrive at the old server, and we need to find and single out all the IPs that are still hitting it.

Solution, variant 1

Fire up a Linux console and get all the unique IP addresses from the log:

less /var/log/nginx/access.log | cut -d' ' -f1 | sort | uniq


Let's break down what is happening here:

  1. less — outputs the contents of the file /var/log/nginx/access.log. Put the path to the access log you need here.
  2. cut -d' ' -f1 — splits each line into fields using a space as the delimiter. The delimiter is set with the -d flag; the -f flag selects which field to print. Here "1" means the first field, which is the IP address. If the IP is the second field in your log, use -f2.
  3. sort — sorts the lines, which groups identical lines next to each other. sort is required for the next command, uniq, to work correctly.
  4. uniq — prints only unique lines, so the result contains only unique IP addresses.

Let's improve the command by also printing how many times each IP address occurs. To get the count, add the -c flag (for count) to uniq:

less /var/log/nginx/access.log | cut -d' ' -f1 | sort | uniq -c

The output will look like this: first the number of requests from the IP, then the IP itself:

less /var/log/nginx/access.log | cut -d' ' -f1 | sort | uniq -c
      7 155.55.55.55
1005000 155.55.55.56
    520 155.44.44.44
    955 155.33.33.33
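
To put the heaviest clients first, the count column can be sorted numerically in reverse (assuming GNU sort):

less /var/log/nginx/access.log | cut -d' ' -f1 | sort | uniq -c | sort -rn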

Get all IP addresses from the log

Solution, variant 2:

Get all the IP addresses using regular expressions:

sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d access.log | sort | uniq -c

The output will be in the same format as above.

However, this variant, which matches IPs against an IPv4 regular expression, is more expensive in terms of CPU and time.
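
A rough way to compare the cost of the two variants is bash's time keyword (just a sketch; the absolute numbers depend on the log size and hardware):

time sed -e 's/\([0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+\).*$/\1/' -e t -e d access.log | sort | uniq -c > /dev/null
time cut -d' ' -f1 access.log | sort | uniq -c > /dev/null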


How to track my public IP address in a log file?

I’d like to save my public IP address to a log file so I can use it to exclude my own visits to my websites in the stats collection. At the moment I can see my actual public IP address (e.g. on whatsmyip.org), but I believe it changes every time I turn off the modem. I don’t have a static public IP address, and I don’t think there is a fixed range of IPs that my ISP gives me. I’m running Linux Mint 17.3; is there any chance I already have a similar log file? If not, how can I track my future IPs?

4 Answers

curl https://ipinfo.io/ip will give you your public IP; remove the /ip part to see more info.

Of course, it’s not a full solution for you, but with it you can write your own script, put that script in cron, and have it fetch the IP and save it into your log file.
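
A minimal sketch of such a script, assuming a log path of /var/log/my-public-ip.log and a five-minute schedule (both are arbitrary choices):

#!/bin/sh
# append the current public IP with a timestamp, but only when it has changed
LOG=/var/log/my-public-ip.log
ip=$(curl -s https://ipinfo.io/ip)
last=$(awk 'END {print $NF}' "$LOG" 2>/dev/null)
if [ -n "$ip" ] && [ "$ip" != "$last" ]; then
    echo "$(date '+%Y-%m-%d %H:%M:%S') $ip" >> "$LOG"
fi

and a crontab entry to run it every five minutes:

*/5 * * * * /usr/local/bin/log-public-ip.sh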

You can try using a dynamic DNS service like noip.com. Then you can access your resources by DNS name, which will change along with your IP.


Generally, your provider may be doing NAT with a pool of addresses, and every curl https://ipinfo.io/ip request may return a random address from this pool, depending on the NAT settings.

It’s better to use a different method to track visits to the website, e.g. cookies.

Here is a small Python script to put in cron to collect the addresses:

#!/usr/bin/env python
from datetime import datetime
import os

import requests

LOG = '/tmp/ip.log'
URL = 'https://ipinfo.io/ip'

r = requests.get(URL)
if r.status_code == 200:
    ip = r.content.decode('ascii').rstrip('\n')
    last_ip = None
    if os.path.exists(LOG):
        f = open(LOG, 'r')
        last_ip = f.readlines()[-1].split()[-1]
        f.close()
    if ip != last_ip:
        f = open(LOG, 'a')
        f.write("{} {}\n".format(datetime.now(), ip))
        f.close()
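
To collect addresses over time, the script can be scheduled with cron, for example every ten minutes (the path and interval here are assumptions):

*/10 * * * * /usr/bin/python /home/user/track_ip.py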


How to log the IP addresses trying to connect to a port?

Is it possible to log all IP addresses that are trying to connect, or have connected, to port 5901 on Debian Linux? How can I do that?

I didn’t downvote it, but one of the reasons for a downvote on SF is that the question «does not show any research effort» and I’m sorry, but yours doesn’t.

3 Answers

You could do it using iptables:

iptables -I INPUT -p tcp -m tcp --dport 5901 -m state --state NEW -j LOG --log-level 1 --log-prefix "New Connection " 

This will log new TCP connections on port 5901 to /var/log/syslog and /var/log/kernel.log, like this:

Dec 12 07:52:48 u-10-04 kernel: [591690.935432] New Connection IN=eth0 OUT= MAC=00:0c:29:2e:78:f1:00:0c:29:eb:43:22:08:00 SRC=192.168.254.181 DST=192.168.254.196 LEN=60 TOS=0x10 PREC=0x00 TTL=64 DF PROTO=TCP SPT=36972 DPT=5901 WINDOW=14600 RES=0x00 SYN URGP=0
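
Once such entries accumulate, the source addresses can be summarised with the same sort | uniq -c idiom used earlier (a sketch assuming the default syslog location and the "New Connection" prefix from the rule above):

grep 'New Connection' /var/log/syslog | grep -o 'SRC=[0-9.]*' | cut -d= -f2 | sort | uniq -c | sort -rn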

If it’s short term, this should do:

tcpdump -n -i eth0 -w file.cap "port 5901" 
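
To list the captured source addresses afterwards, the file can be read back with tcpdump -r; in its default output the third field is the source address plus port, so stripping the final dot-separated component leaves just the IP (a sketch; the exact output format can vary between tcpdump versions):

tcpdump -n -r file.cap | awk '{print $3}' | sed 's/\.[0-9]*$//' | sort | uniq -c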

Alternatively you can use the LOG target of iptables:

iptables -A INPUT -p tcp --dport 5901 -j LOG --log-prefix '** guests **' --log-level 4 

This might flood your logs, though.

You can use netstat with the options -v, -n, -t, -a,

e.g. netstat -anp | grep :8080 | grep ESTABLISHED | wc -l, or:

root@user:/home# netstat -vatn
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 192.168.1.174:8080      192.168.1.126:53021     ESTABLISHED
tcp        0      0 192.168.1.174:8080      192.168.1.126:32950     ESTABLISHED
tcp        0      0 192.168.1.174:8080      192.168.1.126:39634     ESTABLISHED
tcp        0      0 192.168.1.174:8080      192.168.1.126:59300     ESTABLISHED
tcp        0      0 192.168.1.174:8080      192.168.1.188:49551     ESTABLISHED
tcp        0      0 192.168.1.174:9090      192.168.1.126:37865     ESTABLISHED
tcp        0      0 192.168.1.174:9090      192.168.1.188:51411     ESTABLISHED
tcp        0      0 192.168.1.174:8080      192.168.1.126:50824     ESTABLISHED
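
To reduce that output to just the remote addresses currently connected to port 5901, the local-address column can be filtered and the foreign address stripped of its port (a sketch that assumes the classic net-tools netstat columns shown above):

netstat -tn | awk '$4 ~ /:5901$/ {split($5, a, ":"); print a[1]}' | sort | uniq -c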

