NTFS defrag on Linux

Defragging NTFS Partitions from Linux

Yes, you can use shake. You’ll first need to add a custom repository to your system:

sudo add-apt-repository ppa:un-brice/ppa
sudo apt-get update
sudo apt-get install shake-fs

Shake isn’t quite a defragmenter: it simply copies each file in the hope that the copy will be less fragmented. That is of course far from how real defragmenters work.
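The copy trick that shake relies on can be sketched in Python. This is a hypothetical illustration of the idea (rewrite a file in one sequential pass so the filesystem gets a chance to allocate the new copy contiguously), not shake’s actual implementation:

```python
# Sketch of the idea behind shake: rewrite a file in one sequential
# pass so the filesystem may allocate the copy contiguously.
# Hypothetical illustration, not shake's real code.
import os
import shutil
import tempfile

def rewrite_file(path):
    """Copy path to a temp file on the same filesystem, then atomically
    replace the original with the (hopefully less fragmented) copy."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)  # same fs, so rename works
    try:
        with os.fdopen(fd, "wb") as out, open(path, "rb") as src:
            shutil.copyfileobj(src, out)
            out.flush()
            os.fsync(out.fileno())           # data on disk before rename
        shutil.copystat(path, tmp)           # keep timestamps/permissions
        os.replace(tmp, path)                # atomic swap
    except BaseException:
        os.unlink(tmp)
        raise
```

Whether the copy actually lands in fewer fragments is entirely up to the filesystem’s allocator, which is exactly why this approach is hit-or-miss.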

There is no such tool around, as far as I know.

Some sites report the following command:

# WARNING - does not work
fsck -t ntfs --kerneldefrag /dev/hdX

but this does not work, and it is not clear where they got it.

Update: UltraDefrag for Linux:

UltraDefrag is a powerful open-source defragmentation tool for the Windows platform. It can defragment any system files, including registry hives and the paging file. One of the main goals of UltraDefrag is to do the job as fast and reliably as possible. It is being ported to Linux and NTFS-3G for defragmenting NTFS partitions. Currently only a test version in console mode is available. Please read the included file README.linux for compiling and testing.

[I’ve not yet used this myself. Found it via a thread on an Arch forum. Further following the thread through to the next page leads to more on the topic. Try at your own risk.—kevjonesin—]

NTFS-3G moved to GitHub: github.com/tuxera/ntfs-3g/wiki. One thing to note: you will probably have to build it yourself.

Install the udefrag zipped static executable (tested with Ubuntu 18.04.4 LTS amd64):

sudo dpkg --add-architecture i386
sudo apt update
sudo apt install libc6:i386 libncurses5:i386 libstdc++6:i386
wget -r https://easy2boot.com/_files/200002026-43f1844ea0/udefrag.zip
cd easy2boot.com/_files/200002026-43f1844ea0/
unzip udefrag.zip
sudo chmod 755 *
sudo cp udefrag /sbin/

To run it (replace sdX1 with the appropriate partition device):

udefrag /dev/sdX1

This is a BIG warning for all of you who think NTFS can be defragmented on Linux just by copying files (cloning only the files), etc.:

From what I know, any time Linux (cp, fsarchiver, etc.) writes a file or folder to an NTFS partition, it always writes it without NTFS compression, no matter whether the file or folder has compression turned on or off.

So you can get into a situation (I met it the hard way) where restoring with fsarchiver (or cp, etc.) fills the partition completely and the space is not enough.

Some kinds of data can reach an NTFS compression ratio of more than 3, so you can have an X GiB partition holding a lot of files whose total uncompressed size is near 3*X.

I give this warning because it is not well known and sometimes creates really big headaches, like when restoring a clone needs more space than the whole partition that was cloned, because the NTFS compression got lost on Linux.

Also, with very special data (NTFS ratio greater than 5) I reached this situation:

  • NTFS partition size of X GiB
  • The file that holds the clone (with the best compression the tool allows, GZip I think) took 2*X GiB

Oh yes, the clone was compressed, and it still took double the partition size.

That happens because the clone tool reads the files in plain form (in the clear, not compressed), then compresses the data (with a much worse ratio than NTFS achieved).

Of course, restoring that data will not fit on the partition, since the restored data will be written without NTFS compression.

I hope that makes clear another reason not to use NTFS compression. Well, not entirely: I used NTFS compression a lot (in the past). VDI (VirtualBox) files get a really good ratio.

Now I have discovered Pismo File Mount (and it also works on Linux). It can create a file that acts as a container (like a folder) that can be compressed (with a better ratio than NTFS) and at the same time encrypted.

Why do I mention it? Because any clone tool will see such a container as a single file (when not mounted as a folder) and read/dump/back up the compressed stream of data, not the plain uncompressed data (as happens with NTFS compression). So restoring works just like with any other file.

Instead of compressing an NTFS folder with the NTFS compression attribute, I put a Pismo File Mount virtual folder there and get better compression, etc.

I must also warn anyone interested in this free tool: it has no shrink (at least not yet), so if the folder content changes a lot it is not such a good idea.

But for immutable virtual disks, ISOs, and things that will not change, the ratio it gets is very close to LZMA2 (7-Zip), and it can be read and written on the fly.

Now for the bad side of NTFS compression when it comes to fragmentation. When you write a file to NTFS with NTFS compression on, it works this way (yes, horribly designed; I think it was done like that as if to guarantee maximum fragmentation on purpose, it could hardly be worse):

  1. The start write position is pre-calculated as 64K*N, where N is the index of the 64K chunk that will be compressed
  2. A 64K buffer is reserved
  3. That buffer is filled with 64K of data and then compressed
  4. Only the 4K blocks actually needed are written; the rest is left as free space

So it creates a lot of gaps in the middle of the file, and those gaps only disappear after a file defragmentation, which does not happen until the user orders it (contig.exe, defrag.exe, etc.).

Yes, it writes the N’th 64K chunk at a position that is a multiple of 64K, no matter whether the previous data could be compressed or not, so it leaves a gap between each pair of 64K chunks (if everything could be compressed).
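The chunk layout described above can be sketched numerically. This is a simplified model of the behaviour the text describes (64 KiB compression units on 4 KiB clusters), not real NTFS code; the chunk sizes in the example are made up:

```python
# Sketch (simplified model of the NTFS compressed-write layout
# described above, not real NTFS internals). Each 64 KiB chunk N
# starts at cluster N * 16 (with 4 KiB clusters); only the clusters
# needed for the compressed data are written, leaving a gap before
# the next chunk.
CHUNK = 64 * 1024                       # compression unit
CLUSTER = 4 * 1024                      # cluster size
CLUSTERS_PER_CHUNK = CHUNK // CLUSTER   # 16

def layout(compressed_chunk_sizes):
    """Return (start_cluster, written_clusters, gap_clusters) per chunk."""
    result = []
    for n, size in enumerate(compressed_chunk_sizes):
        used = -(-size // CLUSTER)          # clusters actually written
        gap = CLUSTERS_PER_CHUNK - used     # free clusters before chunk n+1
        result.append((n * CLUSTERS_PER_CHUNK, used, gap))
    return result

# Three chunks that compressed to 20 KiB, 64 KiB (incompressible), 8 KiB:
for start, used, gap in layout([20 * 1024, 64 * 1024, 8 * 1024]):
    print(f"chunk at cluster {start}: {used} clusters written, {gap} free")
```

Every chunk that compresses at all leaves a run of free clusters behind it, which is exactly the in-file fragmentation the author is complaining about.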

Pismo File Mount virtual-folder compression behaves the way normal compression is supposed to: piped mode, so no gaps (at least until you delete something).

Another warning: do not put VHD/VHDX files inside it, because Windows will not be able to attach them. Windows uses a kernel trick to mount such things; it does not go through the filesystem level, it works at a low level.


I would also like to get my hands on a Linux NTFS defragmenter; it would surely be faster than everything that runs on Windows. It is total madness to defragment free space, or rather, to create a hole big enough for a new big file.

It would also be great if my memory worked better. In the past I used a command-line tool (sorry) on Windows that could copy or move a file in a non-fragmented way, moving the necessary files out of the way to create the needed hole without fragmenting them. It only gave a message if it could not find a way to place the file (impossible to get a hole), or a different warning if it needed to fragment another file (asking the user for authorisation), etc. It was really great. I don’t remember the name (and maybe it does not work with modern Windows; it was for Win2000).


How can I defrag an NTFS hard drive from Ubuntu? [duplicate]

Let me state that I am asking whether it is possible and HOW. I note that many forum answers are ‘you can’t’ or ‘you don’t need to on Linux because it is perfect and wonderful’; none of those answers will help.

Firstly, the HDD in question is a 163 GB SATA drive and contains ONLY backup data like music and video, and NO Windows programs or installations. I have had a failure in a second HDD, which is completely FUBAR, so I have lost my Windows installation; that HDD held a Linux and Windows dual boot and is now undetectable in the BIOS. The working HDD is 130 GB full out of a 163 GB capacity (it is badly fragmented due to excessive use).

I intend to defrag this drive using a USB Linux OS (bootable USB with installation files and ‘trial’) and any other programs that enable this task. Once this is done I intend to locate which data areas are free (the end of the HDD data storage area), create a new partition on that free space, install the full Linux version, and get things working.

I understand fully the following: I could buy a new HDD for the install. I could get an external hard drive and back up the data. I also understand that copying the data off the backup HDD and then copying it back would do the same as a defrag. I ask this question to find out how to complete the action I have described, NOT to complete it using methods unavailable to me at this time. Thanks

Well, to add to “you can’t” and “you don’t need to”: I think you don’t even want to. Defragmentation won’t make a large area of free space; it will only defragment individual files, so they can still be scattered over the whole area. Secondly, if you succeed in moving files to the start or end of the current partition, it will take you the same amount of time as shrinking the partition and letting the resize process take care of that for you (both methods require the same data to be moved).


However, I have one partition on this HDD. Creating a new partition now would surely not allocate space properly and keep all the data; surely I am right in saying that partitioning this drive will involve data loss? This is my issue. I wish to defrag only so I can safely ‘move’ the data to a smaller area, so that the disk reads ‘the HDD area from 130 GB to 160 GB is free’ and a partition therefore becomes possible.

Alvar, it is not an ext filesystem, it is NTFS. Besides that, Linux filesystems can become fragmented too, and at times a defrag could be useful (except on new drives, etc.), despite what Linux users say about how Linux allocates HDD space. This question asks: is this action possible and how would I do it, NOT should I do it.

1 Answer

Personally I wouldn’t bother to defrag the NTFS partition of the HDD, since if you install Ubuntu on the spare part of the disk it will only use that amount as its hard drive and the rest will not be affected. Then just mount the NTFS partition on Ubuntu and access the files there.

An ext4 filesystem doesn’t have the problems with empty slots that NTFS or FAT32 have. The problem with NTFS and FAT32 is that storage is based on data being put into slots, and if a file only fills up 15 slots but is assigned 16 slots, then one slot is left empty.

This is where defrag comes in: it moves the data around so that each slot in use is fully used, and the empty slots are declared empty instead of being held by this or that file. This saves space and makes access times shorter, since you don’t have to search the whole HDD for a file.
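The slot arithmetic behind this can be sketched as follows. This is a simplified model of the slot/slack idea described above, not real NTFS or FAT32 accounting, and the 4 KiB cluster size is an assumption:

```python
# Sketch: slack ("empty slot") space for files on a cluster-based
# filesystem. Assumes 4 KiB clusters; a simplified model of the
# slot idea described above, not real NTFS/FAT32 accounting.
CLUSTER = 4 * 1024

def clusters_needed(file_size):
    """Number of whole clusters a file occupies (rounded up)."""
    return -(-file_size // CLUSTER)

def slack_bytes(file_sizes):
    """Total bytes allocated but not filled, across all files."""
    return sum(clusters_needed(s) * CLUSTER - s for s in file_sizes)

# A 61 KiB file occupies 16 clusters (64 KiB), wasting 3 KiB:
print(clusters_needed(61 * 1024))         # 16
print(slack_bytes([61 * 1024, 4096, 1]))  # 3072 + 0 + 4095 = 7167
```

Note that this slack lives inside each file’s last cluster, which is why defragmentation (which moves whole clusters) cannot actually reclaim it; only a smaller cluster size or compression can.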

In ext4, files are spread throughout the disk, and the fields where the data is stored are linked back to the original file, so a file can be stored in row 1 field 2, row 3 field 12, etc.

So moving files around to save space doesn’t work in ext4: you will not make any more room on the hard drive, and it will be just as easy to access files as it was before. This is just an illustration of the principle; explaining it in detail seemed too complex right now.

Is it possible to defrag an NTFS partition from Ubuntu?
What I’ve found by searching on this topic is that there is no program to defrag an NTFS HDD from Ubuntu. The best solution is to:

  1. Mount the HDD under Ubuntu
  2. Copy the files to another HDD
  3. Re-format the HDD (preferably with ext4)
  4. Move back the files

If you don’t have another drive, I would:

  1. Create an ext4 partition in the empty space and move some files there
  2. Remove those files from the NTFS partition
  3. Resize the NTFS partition to make it smaller
  4. Make the ext4 partition larger

Repeat this procedure until all the files are moved and the NTFS partition is gone.
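The arithmetic behind the iterative procedure above can be sketched as follows. The model is an assumption, not part of the answer: each round you can move at most the current free space worth of files to the ext4 side, then shrink NTFS and grow ext4 by that same amount, so the free space available per round stays constant:

```python
# Sketch of the arithmetic behind the iterative move/shrink procedure.
# Model (an assumption): each round moves at most the current free
# space worth of files, then the partitions are resized by the same
# amount, so per-round capacity stays constant. Real runs would use
# gparted/ntfsresize on the actual device.
def rounds_needed(data_gib, free_gib):
    """Ceiling of data/free: rounds of move+resize required."""
    return -(-data_gib // free_gib)

# The asker's drive: 130 GiB of data on a 163 GiB disk -> 33 GiB free.
print(rounds_needed(130, 33))  # 4
```

In practice each round means a resize pass over the partition table, so the fewer files you keep on the NTFS side to begin with, the fewer risky resize operations you need.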
