Convert all files in a folder (Linux)

Recursively convert files in nested folders

I wanted to just comment on an answer to a question very similar to this but I don’t have enough rep. I’m looking for a way to change this line of code:

for i in *.mkv; do ffmpeg -i "$i" "${i%.*}.mp4"; done 

So that it includes .avi files and so that it can search through nested folders. I want to be able to target /Videos and have the script automatically traverse the folder tree into /Videos/2016/January, convert all of the video files contained within that folder, then do the same for /Videos/2016/February, and so on. Thanks in advance. (Credit for the above line of code goes to LordNeckBeard, from this post.)

1 Answer 1

Using LordNeckBeard’s reference to find, I came up with the following solution:

find ./ \( -iname '*.avi' -o -iname '*.mkv' \) -exec bash -c 'ffmpeg -i "{}" -c:v libx265 -preset medium -crf 28 -c:a aac "{}".mp4' \; 

Tested and worked exactly how I expected, so it is currently running through the entire library.

If you want to give your converted files a different name from the original, see Parameter Expansion.
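For example, you can pass the file name to bash as a positional argument instead of substituting {} into the command string; parameter expansion can then swap the extension rather than tack .mp4 onto it. This is a sketch along those lines, not part of the original answer:

find ./ \( -iname '*.avi' -o -iname '*.mkv' \) -exec bash -c 'ffmpeg -i "$1" -c:v libx265 -preset medium -crf 28 -c:a aac "${1%.*}.mp4"' _ {} \;

Here ${1%.*} strips the old extension, so video.avi becomes video.mp4, and the file name is never parsed as shell code.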

If you wish to destructively convert all files, be extremely careful with this command:

find ./ \( -iname '*.avi' -o -iname '*.mkv' \) -exec bash -c 'ffmpeg -i "{}" -c:v libx265 -preset medium -crf 28 -c:a aac "{}".mp4 && rm "{}"' \; 

NOTE: The command above isn’t bulletproof and was removing some files BEFORE the conversion process had finished, meaning I have now lost those files (thank God for backups). I tested with disposable files and have made sure I have a fully functional backup of my data before starting this procedure.


How to replace all the contents of one folder with those of another one

I have folders old and new. I want to replace all the contents in old/* with those in new/*. There can be many subdirectories in those folders. But there can be a few files which are in old/* but not in new/*, and I want them to stay as they are. How can I do that from a Linux shell?

3 Answers 3

rsync would probably be a better option here. It’s as simple as rsync -a subdir/ ./. Check this unix.stackexchange answer for better solutions.

Use -f with the cp command to suppress cp’s «overwrite?» prompt.

To override cp’s alias you can simply enclose it in quotes:
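For example (a sketch using the folder names from the question):

'cp' -rf new/* old/

Quoting the command name bypasses any cp='cp -i' alias, -r recurses into subdirectories, and -f forces overwrites; files that exist only in old are left untouched.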


Don’t use rsync for a one-time copy; the overhead of hashing all directory contents is substantial and only pays off when keeping directories synchronized. For a gigantic one-time copy, use a tar pipe.
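Such a tar pipe could look like this (a sketch with the question’s folder names):

(cd new && tar cf - .) | (cd old && tar xf -)

The first tar streams the whole tree to standard output and the second unpacks it at the destination, overwriting duplicates while leaving files unique to old in place.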

Use rsync . It will synchronize the directories in one direction. So, if you want to update your old folder with everything from new , but keep what’s in there, just use:

rsync -avh --dry-run /path/to/new/ /path/to/old/ 

This will, in a first instance, just output the list of files that would be transferred. In that case: Everything found in new will be copied to old , unless it’s already there. Everything in old stays as it is.

Читайте также:  Scp linux permission denied publickey

If it looks fine to you, remove the --dry-run argument to transmit them for real.

The -avh flags just enable archive mode (which will preserve timestamps, etc.), verbosity and human-readable file-sizes. Nothing will be deleted from the destination unless you specify the --delete flag. Consult man rsync for more information.
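For example, to make old an exact mirror of new, deleting anything that exists only in old (the opposite of what the asker wants here), you would run:

rsync -avh --delete --dry-run /path/to/new/ /path/to/old/

As before, drop --dry-run once the listed changes look right.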


How to bulk convert all the files in a file system branch between Unix and Windows line break format?

Everybody knows 🙂 that in Windows plain text files lines are terminated with CR+LF, and in Unix & Linux with LF only. How can I quickly convert all my source code files from one format to another and back?

4 Answers 4

That depends: if the files are under version control, this could be a rather unpopular history-polluting decision. Git has the option to automagically convert line endings on check-out.
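As an illustration (not part of the original answer): committing a .gitattributes file containing the single line below makes Git normalize line endings in text files itself, which usually removes the need to bulk-rewrite the working tree:

* text=auto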

If you do not care and want to quickly convert, there are programs like fromdos / todos and dos2unix / unix2dos that do this for you. You can use find : find . -type f -name '*.php' -exec dos2unix '{}' + .

This will run dos2unix on all files: the ones that already have Unix line-breaks (LF) as well as the ones that need their Windows line-breaks (CRLF) converted to LF. This should be fine, I think. But it’s worth a comment. 🙂

Also, I prefer to start with find . -type f -name '*.php' -exec echo dos2unix '{}' \; in order to see what files will be affected before going through with the actual conversion. Of course, you might not want to follow my suggestion if the number of files affected is huge.

There are several dedicated programs, including dos2unix / unix2dos and fromdos / todos.

Simply pick the tool for the appropriate direction and pass the names of the files to convert on the command line.

If you don’t have either, but have Linux or Cygwin:

sed -i -e 's/\r\+$//' filename    # dos|unix -> unix 
sed -i -e 's/\r*$/\r/' filename   # dos|unix -> dos 

perl -i -pe 's/\r+$//' filename   # dos|unix -> unix 
perl -i -pe 's/\r*$/\r/' filename # dos|unix -> dos 

With only POSIX tools (including BusyBox), to go from unix to dos, you’ll need to pass the CR character literally in the sed command.

cr=$(echo | tr '\n' '\r')
sed -e "s/$cr*\$/$cr/" <filename >filename.dos
mv filename.dos filename 

In the other direction, you can simply delete all CRs:

tr -d '\r' <filename >filename.dos
mv filename.dos filename 

You can use wildcards to convert many files in the same directory at once, e.g.
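For instance, with dos2unix (an illustrative choice, since the original example was not preserved):

dos2unix *.txt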

To convert all files in the current directory and its subdirectories, if your shell is zsh, you can use **/ , e.g.
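For instance, again with dos2unix:

dos2unix **/*.txt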

You can use **/ in bash ≥4, but you need to run shopt -s globstar first (you can put this line in your ~/.bashrc ). You can use **/ in ksh93, but you need to run set -o globstar first (you can put this line in your ~/.kshrc ).

If you can only use the tools that require a redirection, use a for loop.

for x in *.txt; do
  tr -d '\r' <"$x" >"$x.dos"
  mv -- "$x.dos" "$x"
done 

If you don’t have **/ or need more complex matching to select which files to convert, use the find command. Here’s a Linux/Cygwin example which converts all files under the current directory and its subdirectories recursively, except for files matching *.o and files under subdirectories called bin .

find -name 'bin' -type d -prune -o \
  \! -name '*.o' \
  -exec sed -i -e 's/\r\+$//' {} + 

Here’s a POSIX example. We tell find to start a shell that can perform the necessary redirection.

find -name 'bin' -type d -prune -o \
  \! -name '*.o' \
  -exec sh -c '
    tr -d "\r" <"$0" >"$0.dos"
    mv -- "$0.dos" "$0"
  ' {} \; 

You can make the find method slightly faster, at the expense of more complex code, by using a loop in the shell command.

find -name 'bin' -type d -prune -o \
  \! -name '*.o' \
  -exec sh -c '
    for x; do
      tr -d "\r" <"$x" >"$x.dos"
      mv -- "$x.dos" "$x"
    done
  ' _ {} + 



How to convert all files (in different formats) in a given folder to a different file type

I have a folder with many images of different types (png, jpg, jpeg, bmp, pdf), and I would like to convert them all into png (for instance) using ImageMagick. Is there a single command which can perform this? If not, what should I do instead? Thanks.

@MartijnvanWezel the post you linked is not helpful. It details how to rename a file, not how to reformat it entirely.

@speedstyle oeww nasty, back in the day I didn’t really know stuff. I will let my command stand, so other people will learn from the mistake.

3 Answers 3

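The command in this answer did not survive extraction; judging from the comments below, it was along the lines of:

mogrify -format png *.*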
But be careful. Without the -format option, mogrify overwrites the original images. Make sure to read the documentation.

Thanks. I get the following error message: C:\Users\Tal\s>mogrify -format png *.* mogrify.exe: unable to open module file 'C:\Program Files (x86)\ImageMagick-6.8.3-Q16\modules\coders\IM_MOD_RL_INI_.dll': No such file or directory @ warning/module.c/GetMagickModulePath/683. mogrify.exe: no decode delegate for this image format 'desktop.ini' @ error/constitute.c/ReadImage/550. (Is this simple to resolve?)

That’s because mogrify tries to convert the file desktop.ini which isn’t an image file. You should only pass image files to mogrify . Either delete desktop.ini or change *.* to a wildcard that only matches the image files in your directory. You can also run the command several times for every file format: mogrify -format png *.jpg , mogrify -format png *.bmp , etc.

I’d like to merge different image file types into a single *.pdf file. How to adapt your code to make that work?

I guess I could revert to other tools like pdftk (e.g. by using pdftk *.pdf cat output combined.pdf ), but would rather have a one-liner to get the whole job done.
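One candidate one-liner (a sketch: the shell’s glob order determines the page order, and reading any PDF inputs requires ImageMagick’s Ghostscript delegate):

convert *.png *.jpg *.bmp combined.pdf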

If mogrify is used with the -format option, the file is not replaced, but a new one is saved with the new extension.


How can I automatically convert all source code files in a folder (recursively) to a single PDF with syntax highlighting?

I would like to convert the source code of a few projects to one printable file, to save on a USB drive and print out easily later. How can I do that?

Edit

First off I want to clarify that I only want to print the non-hidden files and directories (so no contents of .git, e.g.). To get a list of all non-hidden files in non-hidden directories in the current directory you can run the find . -type f ! -regex ".*/\..*" ! -name ".*" command as seen in the answer in this thread. As suggested in that same thread I tried making a PDF file of the files by using the command find . -type f ! -regex ".*/\..*" ! -name ".*" ! -empty -print0 | xargs -0 a2ps -1 --delegate no -P pdf but unfortunately the resulting PDF file is a complete mess.


Don’t know if it fits your need, but with a2ps -P file *.src you can produce postscript files out of your source code. But the PS files need to be converted and combined afterwards.

Using convert (linux.about.com/od/commands/l/blcmdl1_convert.htm, ImageMagick) you should then be able to make one PDF from the PS files.
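As an alternative sketch (not from the original comment), Ghostscript can convert and merge the PostScript files in one step:

gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=combined.pdf *.ps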

Can you comment on what you mean by «complete mess»? This (i.stack.imgur.com/LoRhv.png) looks not too bad to me, using a2ps -1 --delegate=0 -l 100 --line-numbers=5 -P pdf . I added -l for 100 chars per row to prevent some word wraps and line numbers, but that’s only personal preference.

For converting this project (4 non-empty, non-hidden files, each about a page long, in non-hidden directories) to PDF I had about 5 pages of source code and 39 pages of gibberish.

3 Answers 3

I was intrigued by your question and got kinda carried away. This solution will generate a nice PDF file with a clickable index and color highlighted code. It will find all files in the current directory and subdirectories and create a section in the PDF file for each of them (see the notes below for how to make your find command more specific).

It requires that you have the following installed (the install instructions are for Debian-based systems but these should be available in your distribution’s repositories):

sudo apt-get install texlive-latex-extra latex-xcolor texlive-latex-recommended 

Once these are installed, use this script to create a LaTeX document with your source code. The trick is using the listings (part of texlive-latex-recommended ) and color (installed by latex-xcolor ) LaTeX packages. The \usepackage[colorlinks=true,linkcolor=blue]{hyperref} line is what makes the listings in the table of contents clickable links.

#!/usr/bin/env bash
tex_file=$(mktemp)            ## Random temp file name

cat <<EOF >$tex_file          ## Print the tex file header
\documentclass{article}
\usepackage{listings}
\usepackage[usenames,dvipsnames]{color}  %% Allow color names
\lstdefinestyle{customasm}{
  belowcaptionskip=1\baselineskip,
  xleftmargin=\parindent,
  language=C++,               %% Change this to whatever you write in
  breaklines=true,            %% Wrap long lines
  basicstyle=\footnotesize\ttfamily,
  commentstyle=\itshape\color{Gray},
  stringstyle=\color{Black},
  keywordstyle=\bfseries\color{OliveGreen},
  identifierstyle=\color{blue},
  xleftmargin=-8em,
}
\usepackage[colorlinks=true,linkcolor=blue]{hyperref}
\begin{document}
\tableofcontents
EOF

find . -type f ! -regex ".*/\..*" ! -name ".*" ! -name "*~" ! -name 'src2pdf' |
sed 's/^\..//' |                        ## Change ./foo/bar.src to foo/bar.src
while read i; do                        ## Loop through each file
    name=${i//_/\\_}                    ## escape underscores
    echo "\newpage" >> $tex_file        ## start each section on a new page
    echo "\section{$name}" >> $tex_file ## Create a section for each filename
    ## This command will include the file in the PDF
    echo "\lstinputlisting[style=customasm]{$i}" >> $tex_file
done &&
echo "\end{document}" >> $tex_file &&
pdflatex $tex_file -output-directory . &&
pdflatex $tex_file -output-directory .  ## This needs to be run twice
                                        ## for the TOC to be generated

Run the script in the directory that contains the source files
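Assuming you saved it as src2pdf (the name the script’s find command already excludes), that would be:

bash src2pdf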

That will create a file called all.pdf in the current directory. I tried this with a couple of random source files I found on my system (specifically, two files from the source of vlc-2.0.0 ) and this is a screenshot of the first two pages of the resulting PDF:

[Screenshot: the first two pages of the generated PDF]

  • The script will not work if your source code file names contain spaces. Since we are talking about source code, I will assume they don’t.
  • I added ! -name "*~" to avoid backup files.
  • I recommend you use a more specific find command to find your files though, otherwise any random file will be included in the PDF. If your files all have specific extensions (.c and .h for example), you should replace the find in the script with something like this
find . -name "*\.c" -o -name "*\.h" | sed 's/^\..//' | 

