Linux console file size

How can I check the size of a file using Bash?

I’ve got a script that checks for zero-size files, but I thought there must be an easier way to check file sizes. For example, file.txt is normally 100 kB; how can I make a script check whether it is less than 90 kB (including 0), and in that case wget a new copy, because the file is corrupt? This is what I’m currently using:

```shell
if [ -n file.txt ]; then
    echo "everything is good"
else
    mail -s "file.txt size is zero, please fix." myemail@gmail.com < /dev/null
    # Grab wget as a fallback
    wget -c https://www.server.org/file.txt -P /root/tmp --output-document=/root/tmp/file.txt
    mv -f /root/tmp/file.txt /var/www/file.txt
fi
```
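For reference, a minimal corrected sketch of that check: test the byte count from `wc -c` against a threshold instead of `[ -n ]`. The 90000-byte threshold and the demo messages are placeholders; in the real script the else branch would run the mail/wget/mv commands shown above.

```shell
#!/bin/bash
# is_big_enough FILE MIN_BYTES: succeed if FILE is at least MIN_BYTES bytes
is_big_enough() {
    [ "$(wc -c < "$1")" -ge "$2" ]
}

# Demo on a throwaway file (5 bytes, threshold 90000):
tmp=$(mktemp)
printf '12345' > "$tmp"
if is_big_enough "$tmp" 90000; then
    echo "everything is good"
else
    echo "file is too small, re-download it here (mail/wget/mv as above)"
fi
rm -f "$tmp"
```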

Title of question should be changed. This is about detecting zero size files, not about checking the size of a file.

12 Answers

[ -n file.txt ] doesn't check its size. It checks that the string file.txt is non-zero length, so it will always succeed.

If you want to say "size is non-zero", you need [ -s file.txt ].

To get a file's size, you can use wc -c to get the size (file length) in bytes:

```shell
file=file.txt
minimumsize=90000
actualsize=$(wc -c < "$file")
if [ "$actualsize" -ge "$minimumsize" ]; then
    echo "size is over $minimumsize bytes"
else
    echo "size is under $minimumsize bytes"
fi
```

In this case, it sounds like that's what you want.

But FYI, if you want to know how much disk space the file is using, you could use du -k to get the size (disk space used) in kilobytes:

```shell
file=file.txt
minimumsize=90
actualsize=$(du -k "$file" | cut -f 1)
if [ "$actualsize" -ge "$minimumsize" ]; then
    echo "size is over $minimumsize kilobytes"
else
    echo "size is under $minimumsize kilobytes"
fi
```

If you need more control over the output format, you can also look at stat. On Linux, you'd start with something like stat -c '%s' file.txt, and on BSD and Mac OS X, something like stat -f '%z' file.txt.

Use wc -c < "$file" (note the <), in which case you don't need the | cut -f 1 part (which, as posted, doesn't work on OSX). The minimum BLOCKSIZE value for du on OSX is 512.

Is it not inefficient to read the file to determine its size? I think stat will not read the file to see its size.

@PetriSirkkala On my Linux system, wc -c

stat can also check the file size. Some methods are definitely better: using -s to find out whether the file is empty or not is easier than anything else if that's all you want. And if you want to find files of a size, then find is certainly the way to go.

I also like du a lot to get file size in kb, but, for bytes, I'd use stat :

```shell
size=$(stat -f%z "$filename")   # BSD stat
size=$(stat -c%s "$filename")   # GNU stat?
```

The difference between GNU and BSD is what, unfortunately, makes this alternative a bit less attractive. 🙁


stat can be misleading if the file is sparse. You could use the blocks reported by stat to calculate space used.
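To illustrate the sparse-file point: with GNU stat, %s gives the apparent byte length while %b gives the number of allocated 512-byte blocks, so allocated bytes can be computed as blocks × 512. A sketch (GNU stat assumed; BSD stat uses different flags):

```shell
#!/bin/bash
# Apparent length vs. allocated blocks for a sparse file (GNU stat assumed).
f=$(mktemp)
truncate -s 1M "$f"            # 1 MiB apparent size, but no data written

apparent=$(stat -c '%s' "$f")  # byte length, what wc -c would report
blocks=$(stat -c '%b' "$f")    # allocated 512-byte blocks
allocated=$((blocks * 512))    # disk space actually used, in bytes

echo "apparent:  $apparent"    # 1048576
echo "allocated: $allocated"   # typically 0 for a fresh sparse file
rm -f "$f"
```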

@AjithAntony That's an interesting point which did not occur to me. I can see stat being the right thing in some situations, and sparse files are not relevant in most situations, though certainly not all.

An alternative solution with AWK and double parenthesis:

```shell
FILENAME=file.txt
SIZE=$(du -sb "$FILENAME" | awk '{ print $1 }')
if ((SIZE < 90000)); then
    echo "less"
else
    echo "not less"
fi
```

Nice, but won't work on OSX, where du doesn't support -b. (It may be a conscious style choice, but just to mention the alternative: you can omit the $ prefix inside (( ... )) when referencing variables: ((SIZE<90000)).)

@fstab, you may omit awk by using read (a bash built-in command): read SIZE _ <<< $(du -sb "$FILENAME")

If your find handles this syntax, you can use it:

```shell
find -maxdepth 1 -name "file.txt" -size -90k
```

This will output file.txt to stdout if and only if the size of file.txt is less than 90k. To execute a script script if file.txt has a size less than 90k:

```shell
find -maxdepth 1 -name "file.txt" -size -90k -exec script \;
```

+1, but to also make it work on OSX, you need an explicit target directory argument, e.g.: find . -maxdepth 1 -name "file.txt" -size -90k

If you are looking for just the size of a file:

wc -c "$file" was given as an answer in 2011 (three years ago). Yes, wc -c "$file" has the problem that it outputs the file name as well as the character count, so the early answers added a command to separate out the count. But wc -c < "$file" , which fixes that problem, was added as a comment in May 2014. Your answer is equivalent to that, except it adds a “useless use of cat ”. Also, you should quote all shell variable references unless you have a good reason not to.

You can make this more efficient by using head -c instead of cat: if [ $(head -c 90000 "$file" | wc -c) -lt 90000 ]; then echo "File is smaller than 90k"; fi. Tested on CentOS, so it may or may not work on BSD or OSX.

This works in both Linux and macOS:

```shell
function filesize {
    local file=$1

    size=$(stat -c%s "$file" 2>/dev/null)   # Linux (GNU stat)
    if [ $? -eq 0 ]; then
        echo "$size"
        return 0
    fi

    eval "$(stat -s "$file")"               # macOS (BSD stat); sets st_size
    if [ $? -eq 0 ]; then
        echo "$st_size"
        return 0
    fi

    return 1                                # note: 'return -1' is not valid in bash
}
```
```shell
python -c 'import os; print(os.path.getsize("filename"))'
```

It is portable, for all flavours of Python, and it avoids variation in stat dialects.

But the question was about checking for a file size threshold, e.g. 100 KB, not just getting the file size.

For getting the file size in both Linux and Mac OS X (and presumably other BSD systems), there are not many options, and most of the ones suggested here will only work on one system.

what does work in both Linux and Mac's Bash:

size=$( perl -e 'print -s shift' "$f" ) 

The other answers work fine in Linux, but not in Mac:

  • du doesn't have a -b option in Mac, and the BLOCKSIZE=1 trick doesn't work ("minimum blocksize is 512", which leads to a wrong result)
  • cut -d' ' -f1 doesn't work because on Mac, the number may be right-aligned, padded with spaces in front.

So if you need something flexible, it's either perl's -s operator, or wc -c piped to awk '{ print $1 }' (awk will ignore the leading white space).

And of course, regarding the rest of your original question, use the -lt (or -gt ) operator:

if [ $size -lt $your_wanted_size ]; then , etc.

+1; if you know you'll only be using the size in an arithmetic context (where leading whitespace is ignored), you can simplify to size=$(wc -c < "$f") (note the < , which causes wc to only report a number). Re comparison: don't forget the more "bash-ful" if (( size < your_wanted_size )); then . (and also [[ $size -lt $your_wanted_size ]] ).

Based on gniourf_gniourf’s answer,

```shell
find "file.txt" -size -90k
```

will write file.txt to stdout if and only if the size of file.txt is less than 90K, and

```shell
find "file.txt" -size -90k -exec command \;
```

will execute the command command if file.txt has a size less than 90K. I have tested this on Linux. From find(1) ,

… Command-line arguments following (the -H , -L and -P options) are taken to be names of files or directories to be examined, up to the first argument that begins with ‘-’, …

Assuming that the ls command reports the file size at column #6:


I would use du 's --threshold for this. Not sure if this option is available in all versions of du but it is implemented in GNU's version.

```
-t, --threshold=SIZE
        exclude entries smaller than SIZE if positive,
        or entries greater than SIZE if negative
```

Here's my solution, using du --threshold= for OP's use case:

```shell
THRESHOLD=90k
if [[ -z "$(du --threshold=$THRESHOLD file.txt)" ]]; then
    mail -s "file.txt size is below $THRESHOLD, please fix." myemail@gmail.com < /dev/null
    mv -f /root/tmp/file.txt /var/www/file.txt
fi
```

The advantage of this is that du accepts the size argument in a familiar format (human-readable, as in 10K or 10MiB, or whatever you feel comfortable with), so you don't need to convert between formats/units manually; du handles that.
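A quick way to see the behavior (GNU du assumed; the demo file names and sizes are made up):

```shell
#!/bin/bash
# du --threshold prints an entry only if it passes the size filter (GNU du).
dir=$(mktemp -d)
head -c 200000 /dev/zero > "$dir/big.txt"    # ~200 kB
head -c 1000   /dev/zero > "$dir/small.txt"  # 1 kB

du --threshold=90k "$dir/big.txt"            # printed: at or above 90k
du --threshold=90k "$dir/small.txt"          # prints nothing: below 90k

# So an empty output means "smaller than the threshold":
if [ -z "$(du --threshold=90k "$dir/small.txt")" ]; then
    echo "small.txt is under 90k"
fi
rm -rf "$dir"
```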

For reference, here's the explanation on this SIZE argument from the man page:

```
The SIZE argument is an integer and optional unit (example: 10K is 10*1024).
Units are K,M,G,T,P,E,Z,Y (powers of 1024) or KB,MB,... (powers of 1000).
Binary prefixes can be used, too: KiB=K, MiB=M, and so on.
```


Calculate size of files in shell

I'm trying to calculate the total size in bytes of all files (in a directory tree) matching a filename pattern just using the shell. This is what I have so far:

```shell
find . -name '*.undo' -exec stat -c%s {} \; | awk '{ total += $1 } END { print total }'
```

Is there an easier way to do this? I feel like there should be a simple du or find switch that does this for me, but I can't find one. To be clear, I want to total files matching a pattern anywhere under a directory tree.


15 Answers

```shell
find . -name "*.undo" -ls | awk '{ total += $7 } END { print total }'
```

On my system the size of the file is the seventh field in the find -ls output. If your find … -ls output is different, adjust.

In this version, using the existing directory information (file size) and the built-in ls feature of find should be efficient, avoiding process creations or file i/o.

I would add -type f to the find command to prevent an incorrect total if there are directories matching the "*.undo" glob.

Note that if you need several patterns to match, you will have to use escaped parentheses around the whole expression, otherwise the -ls will apply only to the last pattern. For instance, to match all jpeg and png files (trusting file names), you would use find . \( -iname "*.jpg" -o -iname "*.jpeg" -o -iname "*.png" \) -ls | awk '{ total += $7 } END { print total }' (-iname is for case-insensitive search; also note the space between the expression and the escaped parentheses).
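If GNU find is available, you can also skip parsing ls output entirely with -printf '%s\n' (GNU-only; BSD/macOS find lacks -printf). A sketch with throwaway demo files:

```shell
#!/bin/bash
# Sum the byte sizes of all *.undo files under a tree (GNU find assumed).
dir=$(mktemp -d)
printf 'aaaa'   > "$dir/a.undo"      # 4 bytes
printf 'bbbbbb' > "$dir/b.undo"      # 6 bytes
printf 'cc'     > "$dir/ignore.txt"  # should not be counted

find "$dir" -type f -name '*.undo' -printf '%s\n' \
    | awk '{ total += $1 } END { print total + 0 }'   # prints 10
rm -rf "$dir"
```

The `+ 0` in the awk END block ensures 0 is printed (rather than an empty line) when no files match.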


How do I determine the total size of a directory (folder) from the command line?

The -h flag on sort makes it compare "human readable" size values, such as those printed by du -h.

If you want to avoid recursively listing all files and directories, you can supply the --max-depth parameter to limit how many items are displayed. Most commonly, --max-depth=1:

```shell
du -h --max-depth=1 /path/to/directory
```
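Combining this with the sort -h mentioned above gives a quick "what is taking the space" listing; a sketch using throwaway demo directories (GNU du assumed):

```shell
#!/bin/bash
# Immediate subdirectories sorted by size, largest last (GNU du + sort -h).
dir=$(mktemp -d)
mkdir -p "$dir/small" "$dir/large"
head -c 1000   /dev/zero > "$dir/small/f"
head -c 500000 /dev/zero > "$dir/large/f"

du -h --max-depth=1 "$dir" | sort -h
# The last line is the largest entry (here, the total for $dir itself).
rm -rf "$dir"
```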

I use du -sh, or "DOOSH", as a way to remember it (note: the command is the same; the mnemonic just reorders the flags).

There is a useful option to du called --apparent-size. It can be used to find the actual size of a file or directory (as opposed to its footprint on the disk). For example, a text file with just 4 characters will occupy about 6 bytes, but will still show up as taking ~4K in regular du -sh output. With the --apparent-size option, however, the output will be 6. From man du:

```
--apparent-size
       print apparent sizes, rather than disk usage; although the apparent
       size is usually smaller, it may be larger due to holes in ('sparse')
       files, internal fragmentation, indirect blocks
```
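A minimal demonstration of the difference on a tiny file (GNU du assumed):

```shell
#!/bin/bash
# Apparent size (byte length) vs. disk usage for a tiny file (GNU du assumed).
f=$(mktemp)
printf 'abcd' > "$f"   # 4 bytes of content

du -b  "$f"   # apparent size in bytes: 4 (-b implies --apparent-size)
du -B1 "$f"   # disk usage in bytes: typically 4096 (one filesystem block)
rm -f "$f"
```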

This works for OS X too! Thanks, I was really looking for a way to clear up files, both on my local machine, and my server, but automated methods seemed not to work. So, I ran du -hs * and went into the largest directory and found out which files were so large. This is such a good method, and the best part is you don't have to install anything! Definitely deserved my upvote

@BandaMuhammadAlHelal I think there are two reasons: rounding ( du has somewhat peculiar rounding, showing no decimals if the value has more than one digit in the chosen unit), and the classical 1024 vs. 1000 prefix issue. du has an option -B (or --block-size ) to change the units in which it displays values, or you could use -b instead of -h to get the "raw" value in bytes.

