- What is file block size in Linux?
- How do I determine the block size for ext4 and btrfs filesystems?
- 3 Answers
- How To Find the Default Block Size in Unix
- Default block size in Linux
- Default block size in Solaris
- Using df -g to confirm the filesystem block size
- Using fstyp -v to confirm the filesystem block size
- How do I determine the block size of an ext3 partition on Linux?
- How to find the cluster size of any filesystem, whether NTFS, Apple APFS, ext4, ext3, FAT, exFAT, etc.
- See also
What is file block size in Linux?
I want to get the file block size of two files using the stat struct. I've read that a file's block size depends on the device it is stored on. If that's right, is the block size equal for all files in the same partition? I'm using the blockcmp function below, and it always reports the same size. I wonder whether that's correct or whether I'm missing something. Here is my code.
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <time.h>

struct stat stat1, stat2;
struct tm *time1, *time2;

void filestat1(void);
void filestat2(void);
void filetime1(void);
void filetime2(void);
void sizecmp(void);
void blockcmp(void);
void datecmp(void);
void timecmp(void);

int main(void)
{
    filestat1();
    filestat2();
    filetime1();
    filetime2();
    sizecmp();
    blockcmp();
    datecmp();
    timecmp();
    return 0;
}

void filestat1(void)
{
    /* check if there is no text1 */
    if (stat("text1", &stat1) != 0)
        printf("Error : there is no text1\n");
}

void filestat2(void)
{
    /* check if there is no text2 */
    if (stat("text2", &stat2) != 0)
        printf("Error : there is no text2\n");
}

/* localtime() returns a pointer to static storage that the next call
   overwrites, so use localtime_r() with separate buffers */
static struct tm tmbuf1, tmbuf2;

void filetime1(void)
{
    time1 = localtime_r(&stat1.st_mtime, &tmbuf1);
}

void filetime2(void)
{
    time2 = localtime_r(&stat2.st_mtime, &tmbuf2);
}

void sizecmp(void)
{
    printf("size compare\n");
    long long int text1_size = stat1.st_size;
    long long int text2_size = stat2.st_size;

    if (text1_size > text2_size)
        printf("text1 is bigger\n");
    else if (text1_size < text2_size)
        printf("text2 is bigger\n");
    else
        printf("same size\n");
    printf("\n");
}

void blockcmp(void)
{
    printf("block compare\n");
    /* st_blocks is the number of 512-byte units allocated to the file,
       not the filesystem block size (that would be st_blksize) */
    long long int text1_block_size = stat1.st_blocks;
    long long int text2_block_size = stat2.st_blocks;

    if (text1_block_size > text2_block_size)
        printf("text1 is bigger\n");
    else if (text1_block_size < text2_block_size)
        printf("text2 is bigger\n");
    else
        printf("same size\n");
    printf("\n");
}

void datecmp(void)
{
    printf("date compare\n");
    /* a larger tm_mon means a later date (note: tm_year is not compared) */
    if (time1->tm_mon > time2->tm_mon)
        printf("time2 is early\n");
    else if (time1->tm_mon < time2->tm_mon)
        printf("time1 is early\n");
    else {
        /* same month: compare tm_mday */
        if (time1->tm_mday > time2->tm_mday)
            printf("time2 is early\n");
        else if (time1->tm_mday < time2->tm_mday)
            printf("time1 is early\n");
        else /* same date */
            printf("same time\n");
    }
    printf("\n");
}

void timecmp(void)
{
    printf("time compare\n");
    /* compare hours, then minutes; a larger value means a later time */
    if (time1->tm_hour > time2->tm_hour)
        printf("time2 is early\n");
    else if (time1->tm_hour < time2->tm_hour)
        printf("time1 is early\n");
    else {
        if (time1->tm_min > time2->tm_min)
            printf("time2 is early\n");
        else if (time1->tm_min < time2->tm_min)
            printf("time1 is early\n");
        else /* same time */
            printf("same time\n");
    }
}
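For what it's worth, the code above compares st_blocks, which is not the filesystem block size: st_blocks counts 512-byte units allocated to each file and varies with file size, while st_blksize is the filesystem's preferred I/O size and is the same for every file on a partition. A quick sketch with GNU stat (the file names here are arbitrary):

```shell
# st_blksize (stat's %o) is a property of the filesystem: it is the same
# for every file on a given partition.
# st_blocks (stat's %b) is per-file: the number of 512-byte units actually
# allocated, so it grows with file size.
printf 'a' > text1                 # 1-byte file
head -c 100000 /dev/zero > text2   # ~100 KB file
stat -c '%n: blksize=%o blocks=%b' text1 text2
blk1=$(stat -c %o text1); blk2=$(stat -c %o text2)
alloc1=$(stat -c %b text1); alloc2=$(stat -c %b text2)
rm text1 text2
```

On a typical ext4 filesystem the blksize values match while the blocks values differ, which is why blockcmp only differs when the files occupy different numbers of blocks.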
How do I determine the block size for ext4 and btrfs filesystems?
I’m looking for the commands that will tell me the allocation quantum on drives formatted with ext4 vs btrfs. Background: I am using a backup system that allows users to restore individual files. This system just uses rsync and has no server-side software; backups are not compressed. The result is that I have some 3.6 TB of files, most of them small. It appears that for my data set, storage is much less efficient on a btrfs volume under LVM than on a plain old ext4 volume, and I suspect this has to do with the minimum file size, and thus the block size, but I have been unable to figure out how to get those sizes for comparison. The btrfs wiki says that it uses the "page size", but I’ve found nothing on how to obtain that number.
Am I the only one who finds it strange that btrfs is under LVM? I’m more used to ZFS, but I’m pretty sure btrfs, like ZFS, also optimizes when it’s configured full-stack (both volume and filesystem). Anyway, I seem to remember btrfs can provide better volume management than LVM.
3 Answers
You’ll want to look at the data block allocation size, which is the minimum amount of space any file can allocate. Large files consist of multiple blocks, and there’s always some "waste" at the end of a large file (or of any small file) where the final block isn’t filled entirely and the remainder goes unused.
As far as I know, every popular Linux filesystem uses 4K blocks by default because that’s the default page size of modern CPUs, which gives an easy mapping between memory-mapped files and disk blocks. I know for a fact that btrfs and ext4 default to the page size (which is 4K on most systems).
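If you want to confirm the page size mentioned above, a quick check with getconf (assuming a Linux system with a glibc-style userland):

```shell
# Print the kernel page size in bytes; btrfs and ext4 use this as their
# default filesystem block size.
getconf PAGESIZE
```

On most x86 machines this prints 4096.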
On ext4, just use tune2fs to check your block size, as follows (change /dev/sda1 to your own device path):
[root@centos8 ~]# tune2fs -l /dev/sda1 | grep "^Block size:"
Block size:               4096
[root@centos8 ~]#
On btrfs, use the following command to check your block size (change /dev/mapper/cr_root to your own device path, this example simply uses a typical encrypted BTRFS-on-LUKS path):
sudo btrfs inspect-internal dump-super -f /dev/mapper/cr_root | grep "^sectorsize"
How To Find the Default Block Size in Unix
Questions about the default block sizes used on Unix systems come up often. Today I’d like to show you a few ways to answer them.
Default block size in Linux
If you ever want to confirm the block size of any filesystem on Ubuntu or any other Linux OS, the tune2fs command is here to help:
ubuntu# tune2fs -l /dev/sda1 | grep Block
Block count:              4980736
Block size:               4096
Blocks per group:         32768
From this example, you can see that the block size for the filesystem on the /dev/sda1 partition is 4096 bytes, or 4k. That’s the default block size for the ext3 filesystem.
Default block size in Solaris
The default block size in Solaris is 8192 bytes, or 8k. However, some architectures allow you to use a 4k size as well, by specifying it as a command-line option to the newfs command. To be absolutely sure, you can use one of these commands: df -g (takes a filesystem mount point as its parameter, / or /usr for example) or fstyp -v (needs the character device of the filesystem you’re interested in).
Using df -g to confirm the filesystem block size
This command can be run by any user, so you don’t have to be root to confirm a block size. However, it works only on mounted filesystems.
bash-3.00$ df -g /
/                  (/dev/dsk/c1t0d0s0 ):         8192 block size          1024 frag size
12405898 total blocks    4399080 free blocks    4275022 available       751296 total files
  603544 free files     30932992 filesys id
     ufs fstype      0x00000004 flag              255 filename length
Using fstyp -v to confirm the filesystem block size
Because this command accesses the character device of a particular filesystem, you have to be root to run it. But as a bonus compared to df -g, you can use fstyp -v on an unmounted filesystem:
bash-3.00# fstyp -v /dev/dsk/c1t0d0s0 | grep ^bsize
bsize   8192    shift   13      mask    0xffffe000
How do I determine the block size of an ext3 partition on Linux?
Replace /dev/sda1 with the partition you want to check.
Without root, without writing, and for any filesystem type, you can do:

stat -f .

This will give the block size of the filesystem mounted in the current directory (or any other directory specified instead of the dot).
Don’t forget the dot at the end of that command, as stat -f expects a folder to give you stats about.
@JaniUusitalo, @c4f4t0r: thanks for the hint, corrected the answer to use -c, which is simpler than --printf='. \n'
The output will include lines like:
Block size: 4096 Fragment size: 4096
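For scripting, GNU stat can also print just the number; a sketch using the %s and %S format specifiers documented in stat(1):

```shell
# Block size of the filesystem holding the current directory, no root needed.
stat -f -c '%S' .   # fundamental block size (used for block counts)
stat -f -c '%s' .   # preferred I/O transfer block size (often identical)
```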
In the case where you don’t have the rights to run tune2fs on a device (e.g. in a corporate environment), you can try writing a single byte to a file on the partition in question and checking its disk usage:
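A minimal sketch of that trick, assuming GNU coreutils (the file name is arbitrary):

```shell
# Write a single byte, then ask du how much space the file occupies on
# disk; the usage is rounded up to a whole filesystem block.
printf 'x' > blocktest.tmp
du -k blocktest.tmp          # e.g. 4 KiB on a filesystem with 4K blocks
kb=$(du -k blocktest.tmp | awk '{print $1}')
rm blocktest.tmp
```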
On x86, a filesystem block is just about always 4 KiB (the default size) and never larger than a memory page, which is also 4 KiB.
Thanks Dave! I learned something today 😉 I originally thought the ext3 blocksize could be 8k on platforms that supported 8k memory pages.
@dfrankow: if you have 8k memory pages, such as on Alpha hardware, yes. But you do not have those on x86 hardware and that is what I was talking about.
To detect the block size of a given partition (note that blockdev reports the kernel’s block size for the block device, which usually, but not necessarily, matches the filesystem’s block size):
$ sudo blockdev --getbsz /dev/sda1
will also give file size in blocks
sudo dumpe2fs /dev/sda1 | grep "Block size"
where /dev/sda1 is the device partition; you can find the right device with lsblk.
@narthi mentions using du -h on a tiny file too, but I’ll add some more context and explanation:
How to find the cluster size of any filesystem, whether NTFS, Apple APFS, ext4, ext3, FAT, exFAT, etc.
Create a file with a single character in it, and run du -h on it to see how much disk space it takes up. That is the cluster size of your disk:
# Check the cluster size by making and checking a 2-byte (1 char + newline)
# file.
echo "1" > test.txt
# This is how many bytes this file actually *takes up* on this disk!
du -h test.txt
# Check file size. This is the number of bytes in the file itself.
ls -alh test.txt | awk '{print $5}'
Example run and output, tested on Linux Ubuntu 20.04 on an ext4 filesystem. You can see here that test.txt takes up 4 KiB (4096 bytes) on the disk, since that is this disk’s minimum cluster size, but its actual file size is only 2 bytes!
$ echo "1" > test.txt
$ du -h test.txt
4.0K    test.txt
$ ls -alh test.txt | awk '{print $5}'
2
As @Mayur mentions here, you can also use stat to glean this information from our test.txt file, as shown here. The "Size" is 2 and the "IO Block" is 4096:
$ stat test.txt
  File: test.txt
  Size: 2           Blocks: 8          IO Block: 4096   regular file
Device: fd01h/64769d    Inode: 27032142    Links: 1
Access: (0664/-rw-rw-r--)  Uid: ( 1000/ gabriel)   Gid: ( 1000/ gabriel)
Access: 2023-05-21 15:37:31.300562109 -0700
Modify: 2023-05-21 15:48:49.136721796 -0700
Change: 2023-05-21 15:48:49.136721796 -0700
 Birth: -
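Note that the "Blocks: 8" in that output counts 512-byte units regardless of the filesystem block size, so the file's real on-disk footprint is Blocks times 512. A sketch, assuming GNU stat and its %b/%B format specifiers:

```shell
# On-disk usage in bytes = allocated blocks (%b) times the unit size that
# %b is counted in (%B, normally 512).
echo "1" > test.txt
blocks=$(stat -c %b test.txt)
unit=$(stat -c %B test.txt)
usage=$((blocks * unit))
echo "$usage bytes on disk"   # e.g. 8 * 512 = 4096 on an ext4 filesystem
rm test.txt
```

This matches what du -h reported above: the 2-byte file occupies one full 4 KiB cluster.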
See also
- If formatting a filesystem such as exFAT, and you have a choice of cluster size, I recommend 4 KiB, even for exFAT (which might otherwise default to something larger, like 128 KiB), to keep disk usage low when you have a ton of small files. See my answer here: Is it best to reformat the hard drive to exFAT using 512kb chunk, or smaller or bigger chunks?