Unzip All Files In A Directory
I have a directory of ZIP files (created on a Windows machine). I can manually unzip them using unzip filename , but how can I unzip all the ZIP files in the current folder via the shell? Using Ubuntu Linux Server.
For Windows, in PowerShell: Get-ChildItem 'path to folder' -Filter *.zip | Expand-Archive -DestinationPath 'path to extract' -Force
17 Answers
This works in bash: just put the wildcard in quotes so the shell does not expand it:
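The command itself (a minimal sketch, assuming Info-ZIP's unzip as shipped with Ubuntu):

unzip "*.zip"

With the quotes, unzip receives the pattern and matches the archives itself; without them, the shell expands *.zip and unzip treats every name after the first as a member to extract from that first archive.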
+1 This one worked for me. I had to unzip filenames matching a particular format while excluding the rest; I just kept the matching pattern within double quotes and it worked like a charm. The output tells me the number of archives successfully processed.
The shell script below extracts each zip file in the current directory into a new directory named after the zip file, i.e.:
./myfile1/files, ./myfile2/files, and so on.
Shell script:
#!/bin/sh
for zip in *.zip
do
    dirname=`echo "$zip" | sed 's/\.zip$//'`
    if mkdir "$dirname"
    then
        if cd "$dirname"
        then
            unzip ../"$zip"
            cd ..
            # rm -f "$zip"   # Uncomment to delete the original zip file
        else
            echo "Could not unpack $zip - cd failed"
        fi
    else
        echo "Could not unpack $zip - mkdir failed"
    fi
done
cd /dir/with/zips
wget -O - https://www.toptal.com/developers/hastebin/suvefuxuxo.bash | bash
The backtick saved my day, thanks! I am doing a loop: unzip, perform an action, copy, grep something, remove. The missing piece was how to go from file.gz to file as a variable in the bash script.
This should be the ultimate answer to all unzipping anywhere and anytime; why is this not the accepted answer? 🙂
unzip *.zip, or if they are in subfolders, then something like
find . -name "*.zip" -exec unzip {} \;
Actually this will do exactly what is expected; the result of the find operation is passed to unzip.
This will extract all the zip files into the current directory; what if I want the zip files (present in subfolders) to be extracted into their respective subfolders?
Unzip all .zip files and store the content in a new folder with the same name and in the same folder as the .zip file:
find . -name '*.zip' -exec sh -c 'unzip -d "${1%.*}" "$1"' _ {} \;
for i in *.zip; do
    newdir="${i%.zip}" && mkdir "$newdir"
    unzip "$i" -d "$newdir"
done
This will unzip all the zip archives into new folders named with the filenames of the zip archives.
a.zip, b.zip, and c.zip will be unzipped into the folders a, b, and c respectively.
This one worked for my use case and needs more upvotes. The other approaches do not place the extracted files in a folder of the same name, as expected, but there are some cases where using this approach to separate the folders will be needed.
In any POSIX shell, this will unzip into a different directory for each zip file:
for file in *.zip
do
    directory="${file%.zip}"
    unzip "$file" -d "$directory"
done
aunpack -e *.zip, with atool installed. It has the advantage that it deals intelligently with errors, and always unpacks into subdirectories unless the zip contains only one file. Thus there is no danger of polluting the current directory with masses of files, as there is with unzip on a zip with no directory structure.
Use aunpack -e -D *.zip if you want each zip to get its own output directory regardless of the number of files in it (similar to the default behaviour of "Extract All" in Windows).
for file in `ls *.zip`; do unzip "$file" -d "${file%.zip}"; done
If by 'current directory' you mean the directory in which the zip file is, then I would use this command:
find . -name '*.zip' -execdir unzip {} \;
-execdir command ;
-execdir command {} +
Like -exec, but the specified command is run from the subdirectory containing the matched file, which is not normally the directory in which you started find. This is a much more secure method for invoking commands, as it avoids race conditions during resolution of the paths to the matched files. As with the -exec option, the '+' form of -execdir will build a command line to process more than one matched file, but any given invocation of command will only list files that exist in the same subdirectory. If you use this option, you must ensure that your $PATH environment variable does not reference the current directory; otherwise, an attacker can run any commands they like by leaving an appropriately-named file in a directory in which you will run -execdir.
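A quick illustration of the difference, using a hypothetical layout with ./a/x.zip and ./b/y.zip:

find . -name '*.zip' -exec unzip {} \;      # unzip runs from the directory you started find in
find . -name '*.zip' -execdir unzip {} \;   # unzip runs inside ./a and ./b, so each archive extracts next to itself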
Extract and delete all .gz in a directory - Linux [closed]
I have a directory. It has about 500K .gz files. How can I extract all .gz in that directory and delete the .gz files?
A .gz file is not necessarily an archive, and in that case you wouldn't have anything left to delete after running gzip -d file.gz, since gzip removes the compressed file once it has been decompressed.
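For a single compressed file, the round trip looks like this (gzip and gunzip replace their input by default, so no separate delete is needed):

gzip file        # produces file.gz and removes file
gzip -d file.gz  # produces file and removes file.gz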
8 Answers
gunzip *.gz works, unless that gives you an 'argument list too long' error, in which case you'll want to use something like find "$dir" -maxdepth 1 -name '*.gz' -print0 | xjobs -0 -l50 -v2 gunzip to restrict each invocation to 50 arguments (and to run them in parallel).
Or just find "$dir" -maxdepth 1 -name '*.gz' -exec gunzip {} + to run them sequentially. Remove the -maxdepth 1 to also traverse subdirectories, and replace "$dir" with the directory you want to examine (just . for the current directory, or on Linux simply omit the directory).
@techedemic is correct but is missing the '.' that names the current directory, and this command goes through all subdirectories.
find . -name '*.gz' -exec gunzip '{}' \;
There’s more than one way to do this obviously.
# This will find files recursively (you can limit it with some 'find' parameters;
# see the man pages).
# The final backslash is required for the exec example to work.
find . -name '*.gz' -exec gunzip '{}' \;

# This will do it only in the current directory
for a in *.gz; do gunzip "$a"; done
I’m sure there’s other ways as well, but this is probably the simplest.
And to remove them, just do an rm -rf *.gz in the applicable directory.
How to extract files recursively but keep them in their own folders?
This will run unzip from each directory where files are found, ensuring that the files are extracted in the appropriate subdirectory.
If you specify the start directory this will also work on at least some BSDs (OpenBSD in particular):
find /path/to/start -iname \*.epub -execdir unzip -o -- {} \;
Note that -execdir is from BSD; it's not GNU-specific (though it is not standard and not supported by every implementation). -iname and omitting the file/dir you want to search are also non-standard extensions. With BSD finds, you'll want unzip -o -- {}. You'd also want -nw to stop zip itself from interpreting wildcards.
Without -nw, unzip '*.zip' would unzip all the zip files in the current directory instead of just the *.zip file; unzip behaves more in an MS-DOS way by default.
@StéphaneChazelas yes, I get that, but as you wrote yourself, -nw is an option for zip . It’s not supported by unzip .
Stephen Kitt’s -execdir answer is good, but if it’s possible for multiple .epub files to be in the same directory (so -execdir doesn’t help), you’d need to do something like this:
find ./ -iname '*.epub' -exec sh -c '
    for f; do
        rp="$(realpath -e "$f")"
        bn="$(basename "$f" .epub)"
        dn="$(dirname "$rp")/$bn"
        [ -e "$dn" ] && continue
        echo mkdir -p "$dn"
        echo unzip -d "$dn" -o "$rp"
    done' find-sh {} +
For each argument ("$f") passed to the sh script, this uses realpath to get its full absolute pathname into variable rp, and basename is then used to get the base filename without the .epub extension into variable bn. dirname is used to get the absolute directory name, and "/$bn" is appended to give us variable dn. BTW, all three of these programs are in GNU coreutils.
If "$dn" already exists, it is skipped with the continue statement (this would be a good place to add more error-checking and error-handling code, if required). Otherwise, it is created with mkdir -p and used as the argument to unzip's -d option. "$rp" is used as the filename argument to unzip.
NOTE: This is written as a dry-run, so it only prints what it would do, without actually doing it. Remove the echo statements from the commands to make it actually create the directories and extract the .epub files into them.
You might want to add an echo "$rp" or echo "$f" statement and use unzip's -q option to reduce the amount of noise being output while still showing progress as it iterates through the filenames.
Zip all files in directory?
Is there a way to zip all files in a given directory with the zip command? I’ve heard of using *.* , but I want it to work for extensionless files, too.
Have you tried navigating one level up from your desired directory and doing zip myarch.zip mydir/*?
*.* means any file with a dot. In CP/M and DOS, all files had a dot, and the system made you type it (you could not use just *). Therefore people came to see *.* as meaning all files. Eventually Microsoft added long filenames that could have zero or more dots. To find a file that has a dot in its name on Windows, you have to type *.*.*.
5 Answers
You can just use *; there is no need for *.*. File extensions are not special on Unix. * matches zero or more characters, including a dot. So it matches foo.png, because that's zero or more characters (seven, to be exact).
Note that * by default doesn't match files beginning with a dot (and neither does *.*). This is often what you want. If not, in bash, shopt -s dotglob will make it match them (but will still exclude . and ..). Other shells have different ways (or none at all) of including dotfiles.
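A small bash sketch of that (the archive name is just a placeholder):

shopt -s dotglob          # make * match dotfiles too
zip myarch.zip *          # add every file in the current directory
shopt -u dotglob          # restore the default globbing behaviour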
Alternatively, zip also has a -r (recursive) option to do entire directory trees at once (and not have to worry about the dotfile problem):
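Something along these lines (the archive name is again just an example):

zip -r myarch.zip mydir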
where mydir is the directory containing your files. Note that the produced zip will contain the directory structure as well as the files. As peterph points out in his comment, this is usually seen as a good thing: extracting the zip will neatly store all the extracted files in one subdirectory.
You can also tell zip to not store the paths with the -j / --junk-paths option.
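For example, something like this (note that junking the paths can cause name collisions if two files in different subdirectories share a basename):

zip -rj myarch.zip mydir   # same recursion, but entries are stored without their directory paths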
The zip command comes with documentation telling you about all of its (many) options; type man zip to see that documentation. This isn’t unique to zip; you can get documentation for most commands this way.