Log user actions in Linux

How can I fully log all bash script actions?

From my script's output I want to capture ALL the log data, including error messages, and redirect it all to a log file. I have a script like the one below:

#!/bin/bash
(
echo " `date` : part 1 - start "
ssh -f admin@server.com 'bash /www/htdocs/server.com/scripts/part1.sh logout exit'
echo " `date` : sleep 120"
sleep 120
echo " `date` : part 2 - start"
ssh admin@server.com 'bash /www/htdocs/server.com/scripts/part2.sh logout exit'
echo " `date` : part 3 - start"
ssh admin@server.com 'bash /www/htdocs/server.com/scripts/part3.sh logout exit'
echo " `date` : END"
) | tee -a /home/scripts/cron/logs

I want to see all actions in the file /home/scripts/cron/logs, but I only see what I put after the echo commands. How can I check in the logs whether the SSH commands were successful? I need to gather all the logs, to monitor the result of every command in my script and better analyse what's going on when the script fails.

You can also use the script utility to do this. See this post: stackoverflow.com/questions/5985060/…

You must be careful with errexit; I noticed cases where it is ignored and an error does not exit the script. For example, if you have command || dosmthg in the script and command fails, the script does not exit immediately with errexit set: it just runs dosmthg and continues.
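For instance, a toy script demonstrating that behavior:

set -e
false || echo "this branch runs and the script continues"
echo "still running; set -e did not trigger"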

If you’re expecting a prolonged session with user input, GNU screen can also do this with the -L option: stackoverflow.com/questions/14208001/…
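A sketch (screen writes its transcript to screenlog.0 in the current directory by default):

screen -L -S logged-session   # start a named session with logging enabled
# ... interactive work happens here ...
# after the session ends, the transcript is in ./screenlog.0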

9 Answers

I generally put something similar to the following at the beginning of every script (especially if it’ll run as a daemon):

#!/bin/bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>log.out 2>&1

# Everything below will go to the file 'log.out':
  1. exec 3>&1 4>&2 saves the file descriptors so they can be restored to whatever they pointed at before the redirection, or used directly to write to their original destinations.
  2. trap 'exec 2>&4 1>&3' 0 1 2 3 restores the file descriptors on particular signals. It is not generally necessary, since they should be restored when the sub-shell exits.
  3. exec 1>log.out 2>&1 redirects stdout to the file log.out, then redirects stderr to stdout. Note that the order is important when you want them going to the same file: stdout must be redirected before stderr is redirected to stdout.

From then on, to see output on the console (maybe), you can simply redirect to &3. For example,

echo "$(date) : part 1 - start" >&3 

will go to wherever stdout was directed, presumably the console, prior to executing line 3 above.

Great kung fu here. Even better is to use trap 'exec 2>&4 1>&3' 0 1 2 3 RETURN. The RETURN pseudo-sigspec restores the file descriptors each time a shell function, or a script executed with the . or source builtins, finishes executing. For this (and other reasons) I add return and exit 0 as the last two lines in my scripts. Just my 2 cents, kudos to you @nicerobot!
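A minimal sketch of that variant; the file names here are hypothetical:

# lib.sh: redirects its own output, restores descriptors when sourcing returns
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3 RETURN
exec 1>lib.log 2>&1
echo "this line is logged to lib.log"

# main.sh
. ./lib.sh        # RETURN fires when the sourced file finishes
echo "back on the console"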

As I read your question, you don’t want to log the output, but the entire sequence of commands, in which case, the other answers won’t help you.

Invoke shell scripts with -x to output everything:

Log to the file you want with:

sh -x foo.sh >> /home/scripts/cron/logs

Dunno why, but with bash I could only get the set -x output into the logging file with: bash -x foo.sh &> loggingfile.log


@JesseAdelman it’s because xtrace outputs everything to stderr, so you need to redirect stderr as well as stdout, which is what &> does.
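In other words, these two forms are equivalent ways to capture the trace (foo.sh stands in for your script):

bash -x foo.sh &> loggingfile.log        # bash shorthand for both streams
bash -x foo.sh > loggingfile.log 2>&1    # portable equivalent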

To get the ssh output into your logfile, you have to redirect stderr to stdout. You can do this by appending 2>&1 after your bash script.

When this does not display the messages in the right order, try adding another subshell:
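For example, a sketch of both fixes applied to one of the OP's ssh calls:

# 2>&1 sends ssh's stderr into the pipe; the extra subshell keeps
# the messages in order before they reach tee
(
ssh admin@server.com 'bash /www/htdocs/server.com/scripts/part2.sh' 2>&1
) | tee -a /home/scripts/cron/logs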

In bash, you can put set -x and it will print off every command that it executes (and the bash variables) after that. You can turn it off with set +x .

If you want to be paranoid, you can put set -o errexit in your script. This means the script will fail and stop if one command returns a non-zero exit code, which is the standard Unix way to signal that something went wrong.

If you want nicer logs, look at ts in the moreutils Debian/Ubuntu package. It will prefix each line with a timestamp and print it out, so you can see when things were happening.
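Put together, a sketch combining the three suggestions (some_command is a placeholder):

#!/bin/bash
set -x            # print every command before executing it
set -o errexit    # stop on the first non-zero exit code
# pipe a command's combined output through ts (moreutils) to timestamp each line
some_command 2>&1 | ts '%Y-%m-%d %H:%M:%S'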

Following on from what others said, the set manual is a good resource. I put:

#!/usr/bin/env bash
exec 1> command.log 2>&1
set -x

At the top of scripts I wish to keep going, or set -ex if it should exit upon error.

Testing with the OP's dummy script, it records each command and whatever results: ssh timeout, incorrect domain, etc. For a valid ssh address, it records the commands run in the remote shell. If more info about the ssh logins is required, perhaps try ssh -vv?

I’ve found that the @nicerobot answer (How can I fully log all bash script actions?) might not be enough to completely redirect all output to the console. Some output can still be lost.

The complete redirection looks like this:

#!/bin/bash

# some basic initialization steps, including the `NEST_LVL` and
# `SCRIPTS_LOGS_ROOT` variables
source "./__myinit__.sh"

# no local logging if nested call
(( ! IMPL_MODE && ! NEST_LVL )) && {
  export IMPL_MODE=1

  exec 3>&1 4>&2
  trap 'exec 2>&4 1>&3' EXIT HUP INT QUIT RETURN

  [[ ! -e "${SCRIPTS_LOGS_ROOT}/.log" ]] && mkdir "${SCRIPTS_LOGS_ROOT}/.log"

  # RANDOM instead of milliseconds
  case $BASH_VERSION in
    # < 4.2
    [123].* | 4.[01] | 4.0* | 4.1[^0-9]*)
      LOG_FILE_NAME_SUFFIX=$(date "+%Y'%m'%d_%H'%M'%S''")$(( RANDOM % 1000 ))
      ;;
    # >= 4.2
    *)
      printf -v LOG_FILE_NAME_SUFFIX "%(%Y'%m'%d_%H'%M'%S'')T$(( RANDOM % 1000 ))" -1
      ;;
  esac

  (
    (
      myfoo1
      # ...
      myfooN

      # self script reentrance
      exec $0 "$@"
    ) | tee -a "${SCRIPTS_LOGS_ROOT}/.log/${LOG_FILE_NAME_SUFFIX}.myscript.log" 2>&1
  ) 1>&3 2>&4

  exit $?
}

(( NEST_LVL++ ))

# usual script body goes here

Explanation:

  1. It is not enough to just redirect the output, because you still have to send the inner echo calls both to the console and to the file, so a pipe plus tee is the only way to split the output stream (see the sketch after this list).
  2. We have to use the ( ... ) operator to either redirect each stream into a standalone file or restore all streams back to the originals in separate steps. The second reason is that the myfoo calls before the self-reentrance have to work as-is, without any additional redirection.
  3. Because the script can be called from a nested script, we have to redirect the output into a file only once, in the top-level call, so we use the NEST_LVL variable to track the nesting level.
  4. Because the redirection can be applied externally, irrespective of the script, we have to explicitly enable the inner redirection via the IMPL_MODE variable.
  5. We have to use date-time values to automatically create a new unique log file on each script run.
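Stripped of the nesting and version handling, the core descriptor dance reduces to this sketch:

exec 3>&1 4>&2                      # save the original stdout/stderr
(
  (
    echo "script body output"       # reaches both the console and the log
  ) | tee -a myscript.log 2>&1
) 1>&3 2>&4                         # tee writes to the saved descriptors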

TL;DR — Yesterday I wrote a set of tools for logging program runs and sessions.

As an admin, I'm always wanting to log the output of some command, often not a script. To solve that problem, I've written a few things. The easiest is to use the program script, as xX0v0Xx mentioned. I found that calling script (without any arguments) would often result in me overwriting the output of a previous script, so I created this alias. All it does is prevent overwriting. You need a ~/tmp directory.

$ alias scr='script ~/tmp/typescript-$(date +%FT%T)'
$ scr
Script started, file is /home/wwalker/tmp/typescript-2019-12-05T18:56:31
$

That is great when I want to catch an interactive session.

When I want to log the output of a command (script or binary), I either want the exact output, or I want the output with timestamps in front of each line. So I wrote these two bash functions:

alias iso8601="date +%Y-%m-%dT%H:%M:%S"

justlog() {
  name=$(basename "$1")
  log=~/logs/$name-$(iso8601)
  "$@" > "$log" 2>&1
}

timelog() {
  name=$(basename "$1")
  log=~/logs/$name-$(iso8601)
  # https://github.com/wwalker/ilts
  # You could replace ilts with ts:
  # /usr/bin/ts %FT%H:%M:%.S
  "$@" |& ilts -S -E > "$log" 2>&1
}

Just run your command like you normally would:

justlog run rcn rcn-work\* 'ps -ef -o lstart,cmd | grep [s]upervisor'
timelog run rcn rcn-work\* 'ps -ef -o lstart,cmd | grep [s]upervisor'

This creates log files named like:

wwalker@polonium:~ ✓ $ ls -l ~/logs/run*
-rw-r--r-- 1 wwalker wwalker 10495 2019-12-05 18:21:14.985 /home/wwalker/logs/run-2019-12-05T18:21:13
-rw-r--r-- 1 wwalker wwalker  1694 2019-12-05 18:24:02.878 /home/wwalker/logs/run-2019-12-05T18:24:01
-rw-r--r-- 1 wwalker wwalker  7623 2019-12-05 18:25:07.873 /home/wwalker/logs/run-2019-12-05T18:25:06
-rw-r--r-- 1 wwalker wwalker 10296 2019-12-05 18:34:59.546 /home/wwalker/logs/run-2019-12-05T18:34:57

I didn’t want to have to do an ls to find the name of the log file that justlog or timelog just created for me. So, I added 3 more functions:

newestlog() {
  name=$(basename "$1")
  ls ~/logs/"$name"* | tail -1
}

viewlog() {
  name=$(basename "$1")
  view $( newestlog "$name" )
}

lesslog() {
  name=$(basename "$1")
  less $( newestlog "$name" )
}

So, you run your command with justlog (or timelog), and then you just use lesslog or viewlog (I’ll probably create an emacslog for those people):

justlog run rcn rcn-work\* 'ps -ef -o lstart,cmd | grep [s]upervisor'
lesslog run

That’s it: no ls ~/tmp, no tab-completion games to find the file name. Just run lesslog (or viewlog if you like using vim to look at logs).

"I use grep all the time on my log files." And the answer is, you guessed it, greplog.
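greplog itself isn't shown here; a hypothetical sketch, by analogy with the functions above:

greplog() {
  name=$(basename "$1")
  shift
  # remaining arguments are passed straight to grep
  grep "$@" "$( newestlog "$name" )"
}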

First, get the text from all 800 servers’ /etc/cron.d/atop files that are broken:

justlog fordcrun 'cat /etc/cron.d/atop; md5sum /etc/cron.d/atop'

Then get the hostnames (on the line above the output in the file 🙂 ) with greplog:

wwalker@polonium:~ ✓ $ greplog fordcrun -B1 -F "0 0 * * root"
int-salt-01:
0 0 * * root systemctl restart atop
--
rcn-pg-01:
0 0 * * root systemctl restart atop
rcn-pg-02:
0 0 * * root systemctl restart atop
rcn-pg-03:
0 0 * * root systemctl restart atop

You can simply use script:

script -a -e -c "set -x; echo this is logging ..; " mylogfile.log 

How would one go about using script inside a bash script? It seems to terminate the script as soon as it’s called.


So, here is an option I used and consider to be a little more friendly. Essentially, define a task function at the top of your script, and define all of your sections as their own functions to be passed to calls of the task function. It could be extended to add specific INFO, WARN, DEBUG, and ERROR flags for your logs as well; they would simply be functions that use an echo to wrap your message in a consistent way, since all of the output is going to your log anyway.

It would look something like this:

# Append a timestamp and always write a new file
#LOGFILE=/var/log/my_setup_log_file.$(date +%Y%h%d_%H.%M).log
# Same log, growing and appending - make sure it's a one-time deal or that
# you have a log rotating utility running.
LOGFILE=/var/log/my_setup_log_file.log
touch "$LOGFILE"

# [Un]comment the following line to control display of this script on the tty
# in real time. All other commands should be passed to task.
tail -f "$LOGFILE" &
TAIL_PID=$!

# logging function - called task to make the main section more readable.
# Prepend it to commands, or group commands in a function and prepend the call.
task() {
  echo "===================================================================================================="
  echo "$(date):$(printf ' %q' "$@")"
  echo "===================================================================================================="
  start=$(date +%s)
  "$@" 2>&1
  end=$(date +%s)
  runtime=$((end-start))
  echo "Elapsed time for command was $runtime seconds."
  echo ""
} >> "$LOGFILE"

### FUNCTION DEFS

### Main execution
task my_operational_step_1
task my_operational_step_2

kill $TAIL_PID
exit 0

LIMITATIONS and CAUTION:

  1. In my first iteration I chose to use the tee command in append mode for all of the log writes, which requires the use of a pipe. This obliterates the ability to use globally scoped variables across functions, because every function effectively runs in its own subshell due to the pipe. Argh.
  2. Addressing issue 1, I moved to the approach of using tail -f as a spawned task at the start (~line 7). This allows the task function to send all of the output to the log while still letting you see what’s going on in real time.
  3. Note that you need to capture the PID of tail in order to kill it later, as you can see around line 8, and then the kill at the end of the script.
  4. A corollary to item 3 is that if your script exits early, tail stays running in the background. It’s probably worthwhile to write some traps to handle that (see the sketch after this list); I’ve not bothered, given the scope of what I’m doing.
  5. I’ve not used this extensively, so you may run into issues I’ve not thought of.
  6. It is possible to have stuff that isn’t logged using this approach, so not a perfect answer for OP’s question.
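A sketch of the trap suggested in item 4, assuming the TAIL_PID variable from the script above:

# kill the background tail on any exit path, not just the normal one
trap '[[ -n "$TAIL_PID" ]] && kill "$TAIL_PID" 2>/dev/null' EXIT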

