Linux limit directory size

Limit total size of a directory in linux

I have a service daemon that creates a lot of temp files. Recently my server died, because a malicious user managed to flood /tmp and fill up the disk. I have taken some measures to actively clean up the temp dir, but additionally I would like to constrain the max size of this applications temp dir. Is there any way I can create dir, say, /apptmp that will never be larger than e.g. 10G? I know I can set disk limits by-user, but I just want to limit this tmpdir; the application should always be able to write elsewhere. I am running Ubuntu Linux 12.04. edit: All of this should eventually be wrapped up in an installable Ubuntu package though. So I don’t think I want to rely on modifying the partitions, unless I can somehow simulate it.

2 Answers 2

You can give /tmp its own partition. Then you can be sure it will never exceed that size. I suggest using LVM so you can grow or shrink the partition should you ever need to.

This was the reason that, in the "old days", you had a separate volume/partition for /var; it kept runaway logs from causing crashes.
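A minimal sketch of the LVM approach, assuming an existing volume group named vg0 (check with vgs) and /apptmp as the directory to cap at 10G; all names and sizes here are examples, not part of the original answer:

```shell
# Create a 10G logical volume, format it, and mount it as the temp dir.
sudo lvcreate -L 10G -n apptmp vg0
sudo mkfs.ext4 /dev/vg0/apptmp
sudo mkdir -p /apptmp
sudo mount /dev/vg0/apptmp /apptmp
sudo chmod 1777 /apptmp            # tmp-style sticky permissions

# Grow it later if needed (ext4 can be grown while mounted):
sudo lvextend -L +5G /dev/vg0/apptmp
sudo resize2fs /dev/vg0/apptmp
```

Anything written under /apptmp then competes only for that 10G; writes elsewhere are unaffected.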

Thanks. My experience with LVM is very limited. Could you illustrate your answer a bit with some example code on how to create such a partition? Could it be a virtual partition (in a file) as suggested below?

+1 You should have separate partitions for all the usual locations, such as /, /usr, /var, /tmp, /opt, /home. See also debian.org/doc/manuals/securing-debian-howto/ch3.en.html#s3.2


How to set limit on directory size in Linux? [closed]

I have read about limiting the size of a directory by creating big files, formatting, mounting, etc. But this is all very complicated. Is there a utility or something else to set a limit on an already existing directory?


The problem is that I need limits on specific directories. There are many users that have access to e.g. directory1, directory2, directory3. I need to set a limit for the log dir, for the data dir, and for the applications dir.

Based on the accepted answer and the linked tutorial, I've put together a script to automate the process, which was actually made for a related answer: askubuntu.com/a/1043139/295286

2 Answers 2

Quotas are per-filesystem, but you can always create a virtual filesystem and mount it on a specific (empty) directory with the usrquota and/or grpquota flags.

  1. Create the mount point.
  2. Create a file filled from /dev/zero, as large as the maximum size you want to reserve for the virtual filesystem.
  3. Format this file with an ext3 filesystem (you can format plain disk space even if it is not a block device, but double-check the syntax of every dangerous formatting command).
  4. Mount the newly formatted disk space on the directory you created as the mount point, e.g.: mount -o loop,rw,usrquota,grpquota /path/to/the/formatted/disk/space /path/of/mount/point
  5. Set proper permissions.
  6. Set quotas, and the trick is done.
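The six steps can be sketched concretely; /apptmp, the 10G size, and the /var/apptmp.img path are examples, and mounting and quota setup require root:

```shell
sudo mkdir -p /apptmp                                # 1. mount point
sudo truncate -s 10G /var/apptmp.img                 # 2. sparse backing file
                                                     #    (or: dd if=/dev/zero of=/var/apptmp.img bs=1M count=10240)
sudo mke2fs -t ext3 -F /var/apptmp.img               # 3. format the file (-F: it is not a block device)
sudo mount -o loop,rw,usrquota,grpquota \
    /var/apptmp.img /apptmp                          # 4. mount it
sudo chmod 1777 /apptmp                              # 5. tmp-style permissions
sudo quotacheck -cug /apptmp && sudo quotaon /apptmp # 6. enable quotas
```

A sparse file created with truncate only consumes disk space as it fills, which is why dd from /dev/zero is the alternative when you want the space reserved up front.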

Tutorial here. Original answer here


How to set a file size limit for a directory?

I have a directory on my system which is used for a specific reason by applications and users, but I don’t want its size to be allowed to exceed 2GB, is there a way of setting up some sort of limit which just doesn’t allow the file size to exceed that or any other amount I decide to set for it in the future? When the size limit is exceeded it should undo the last change (though there should be an option to have it so that it just stops the operation and doesn’t care if half a file was copied and left there) and then display a warning to the user. I am running Ubuntu GNOME 16.10 with GNOME 3.22.

1 Answer 1

Usual filesystem quota on ext4 is per-user/group, not per-directory. ZFS can sort-of set a directory quota, by creating a filesystem of a fixed size off a ZFS volume. A simple trick, though, is to create a 2GB file, create a filesystem on it, and mount it at the desired folder:

$ touch 2gbarea
$ truncate -s 2G 2gbarea
$ mke2fs -t ext4 -F 2gbarea
mke2fs 1.43.3 (04-Sep-2016)
Discarding device blocks: done
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: bf1b2ee8-a7df-4a57-9d05-a8b60323e2bf
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
$ sudo mount 2gbarea up
$ df -h up
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0      2.0G  6.0M  1.8G   1% /home/muru/up

In any case, filesystem quotas (or methods like this) aren't as user-friendly as you want. This method is flexible in one direction only: you can increase the size online, but decreasing it would be hard.

  • touch: touch 2gbarea creates an empty file named 2gbarea.
  • truncate: truncate is used to resize files (in this case, resizing the empty 2gbarea file to 2 GB with -s 2G).
  • mke2fs: mke2fs creates ext2/3/4 filesystems (in this case, ext4).
  • mount mounts the filesystem on the given directory.
  • df is used to list filesystem usage.
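Growing such a loop-backed filesystem later means enlarging the backing file and then the filesystem on it. A sketch reusing the 2gbarea file and the up mount point from the transcript above (offline resize shown for simplicity):

```shell
sudo umount up            # unmount before the offline resize
truncate -s 3G 2gbarea    # enlarge the backing file to 3G
e2fsck -f 2gbarea         # resize2fs requires a clean fsck first
resize2fs 2gbarea         # grow the filesystem to fill the file
sudo mount 2gbarea up
```

Shrinking would require the reverse order (resize2fs to a smaller size first, then truncate), which is the error-prone direction the answer warns about.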



How can I limit the maximum number of folders a user can create in Linux?

This is what quotas are designed for. You can use filesystem quotas to enforce limits, per user and/or per group, on:

  • the amount of disk space that can be used
  • the number of blocks that can be used
  • the number of inodes that can be created.

The number of inodes will essentially limit the number of files and directories a user can create.
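A sketch of the quota setup itself, assuming the filesystem mounted at /home carries the usrquota mount option and the quota tools are installed; the user name and the numbers are examples (block limits are in KiB):

```shell
sudo quotacheck -cum /home      # create/refresh the user quota file
sudo quotaon /home              # enable quota enforcement
# Limit user "alice" to ~5 GiB of disk and 10000 inodes
# (soft and hard limits set equal here):
sudo setquota -u alice 5242880 5242880 10000 10000 /home
sudo repquota /home             # report usage and limits per user
```

The inode limit of 10000 is what caps the number of files and folders; once alice hits it, further mkdir or file creation fails with "Disk quota exceeded".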

There is extensive, very good documentation about how to configure filesystem quotas in many sources, which I suggest you read further:

  • https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-disk-quotas.html
  • https://wiki.archlinux.org/index.php/disk_quota
  • http://www.ibm.com/developerworks/library/l-lpic1-v3-104-4/
  • http://www.firewall.cx/linux-knowledgebase-tutorials/linux-administration/838-linux-file-system-quotas.html

Bash script to limit a directory size by deleting the least recently accessed files

Here’s a simple, easy to read and understand method I came up with to do this:

#!/usr/bin/env bash
# Prune the oldest-accessed .tar files until the directory fits the limit.
# All sizes are in KB (the unit du -s reports). Paths and limits are placeholders.
DIR=/PATH/TO/FILES
SOMELIMIT=10000000   # start cleaning above this size
LIMITSIZE=8000000    # stop cleaning once below this size

DIRSIZE=$(du -s "$DIR" | awk '{print $1}')
if [ "$DIRSIZE" -gt "$SOMELIMIT" ]; then
    # least recently accessed first; assumes filenames without spaces
    for f in $(ls -rt --time=atime "$DIR"/*.tar); do
        FILESIZE=$(stat -c "%s" "$f")
        FILESIZE=$((FILESIZE / 1024))
        rm -f "$f"
        DIRSIZE=$((DIRSIZE - FILESIZE))
        if [ "$DIRSIZE" -lt "$LIMITSIZE" ]; then
            break
        fi
    done
fi

Folder size linux

If you need to enforce limits, then use quotas.


Maximum number of files/directories on Linux?

ext[234] filesystems have a fixed maximum number of inodes; every file or directory requires one inode. You can see the current count and the limits with df -i. For example, on a 15GB ext3 filesystem, created with the default settings:

Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/xvda 1933312 134815 1798497 7% /

There’s no limit on directories in particular beyond this; keep in mind that every file or directory requires at least one filesystem block (typically 4KB), though, even if it’s a directory with only a single item in it.

As you can see, though, 80,000 inodes is unlikely to be a problem. And with the dir_index option (enabled with tune2fs), lookups in large directories aren't a big deal. However, note that many administrative tools (such as ls or rm) can have a hard time dealing with directories that contain too many files. As such, it's recommended to split your files up so that you don't have more than a few hundred to a thousand items in any given directory. An easy way to do this is to hash whatever ID you're using and use the first few hex digits as intermediate directories.

For example, say you have item ID 12345, and it hashes to 'DEADBEEF02842. ' . You might store your files under /storage/root/d/e/12345 . You've now cut the number of files in each directory to 1/256th.
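The sharding scheme can be sketched in a few lines of shell; the storage/root base path is hypothetical, and SHA-1 stands in for whatever hash the answer leaves unspecified:

```shell
# Place each item under intermediate directories taken from the
# first two hex digits of a hash of its ID.
id=12345
hash=$(printf '%s' "$id" | sha1sum | awk '{print $1}')
dir="storage/root/${hash:0:1}/${hash:1:1}"
mkdir -p "$dir"
echo "$dir/$id"
```

With two one-hex-digit levels there are 16*16 = 256 leaf directories, which is where the 1/256th figure comes from.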

Folder size as variable

Here is an example of the output of du --bytes --summarize --total /home/user1/testfolder :

4954626 /home/user1/testfolder
4954626 total
A script that reads that output into a variable and compares it against a limit:

#!/usr/bin/env bash

declare -i limit=200000
declare -- test_folder=/home/user1/testfolder

declare -i folder_size
IFS=$' \t\n' read -r -d '' _ _ folder_size _ < <(
du --bytes --summarize --total "$test_folder"
)

if [[ "$folder_size" -le "$limit" ]];
then
printf 'Folder size is small\n'
else
printf 'Folder size is big\n'
fi

