Generate a random file in Linux

Generate a random file using a shell script

How can I generate a random file filled with random numbers or characters in a shell script? I also want to specify the size of the file.

6 Answers

Use the dd command to read data from /dev/random.

dd if=/dev/random of=random.dat bs=1000000 count=5000 

That would read 5000 blocks of 1 MB each, which is a whole 5 gigabytes of random data!

Experiment with the block size argument to get optimal performance.

After a second read of the question, I think he also wanted to save only characters (presumably alphabetic ones) and numbers to the file.

That dd command is unlikely to complete, as there will not be 5 gigabytes of entropy available. Use /dev/urandom if you need this much "randomness".
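
For example, the same amount of data from the non-blocking device (a sketch; bs=1M is GNU dd shorthand for 1 MiB, so this writes roughly 5 GB):

dd if=/dev/urandom of=random.dat bs=1M count=5000 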

head -c 10 /dev/random > rand.txt 

Change 10 to whatever size you need. Read "man random" for the differences between /dev/random and /dev/urandom.

Or, for only base64 characters

head -c 10 /dev/random | base64 | head -c 10 > rand.txt 

The base64 output might include some characters you're not interested in, but I didn't have time to come up with a better single-line character converter. (Also, we're taking too many bytes from /dev/random. Sorry, entropy pool!)

Oops, I missed the characters and numbers part; I'm guessing you mean alphanumeric characters. Need to revise.

#!/bin/bash
# Created by Ben Okopnik on Wed Jul 16 18:04:33 EDT 2008

######## User settings ############
MAXDIRS=5
MAXDEPTH=2
MAXFILES=10
MAXSIZE=1000
######## End of user settings ############

# How deep in the file system are we now?
TOP=`pwd|tr -cd '/'|wc -c`

populate() {
    cd $1
    curdir=$PWD

    files=$(($RANDOM*$MAXFILES/32767))
    for n in `seq $files`
    do
        f=`mktemp XXXXXX`
        size=$(($RANDOM*$MAXSIZE/32767))
        head -c $size /dev/urandom > $f
    done

    depth=`pwd|tr -cd '/'|wc -c`
    if [ $(($depth-$TOP)) -ge $MAXDEPTH ]
    then
        return
    fi

    unset dirlist
    dirs=$(($RANDOM*$MAXDIRS/32767))
    for n in `seq $dirs`
    do
        d=`mktemp -d XXXXXX`
        dirlist="$dirlist $PWD/$d"
    done

    for dir in $dirlist
    do
        populate "$dir"
    done
}

populate $PWD

Create 100 randomly named files of 50 MB each:

for i in `seq 1 100`; do echo $i; dd if=/dev/urandom bs=1024 count=50000 > `echo $RANDOM`; done 

It's better to use mktemp to create random files:

for i in `seq 1 100`; do myfile=`mktemp --tmpdir=.`; dd if=/dev/urandom bs=1024 count=50000 > $myfile; done 

The RANDOM variable will give you a different number each time:

Save it as "script.sh" and run it as ./script.sh SIZE. The printf code was lifted from http://mywiki.wooledge.org/BashFAQ/071. Of course, you could initialize the mychars array by brute force, mychars=("0" "1" ... "A" ... "Z" "a" ... "z"), but that wouldn't be any fun, would it?

#!/bin/bash
declare -a mychars

# build the alphanumeric alphabet (0-9, A-Z, a-z) via printf, per BashFAQ/071
for (( I=48; I<=57;  I++ )); do mychars+=( $(printf \\$(printf '%03o' $I)) ); done  # 0-9
for (( I=65; I<=90;  I++ )); do mychars+=( $(printf \\$(printf '%03o' $I)) ); done  # A-Z
for (( I=97; I<=122; I++ )); do mychars+=( $(printf \\$(printf '%03o' $I)) ); done  # a-z

# emit $1 random characters from that alphabet
for (( I=$1; I>0; I-- )); do echo -n ${mychars[$((RANDOM%62))]}; done
echo

The /dev/random & base64 approach is also good. Instead of piping through base64, pipe through tr -d -c '[:alnum:]'; then you just need to count the good characters that come out until you're done.
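
A sketch of that approach (the byte counts are illustrative; most input bytes are discarded, so read more than you need and cut the output at the desired length):

head -c 1000 /dev/urandom | tr -d -c '[:alnum:]' | head -c 100 > rand.txt 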

Random file generator code?

Does anyone have a simple shell script or C program to generate random files of a set size with random content under Linux?

6 Answers

head -c SIZE /dev/random > file 

If you can sacrifice some entropy, you can also use /dev/urandom instead. It will be faster, since it will not block waiting for more environmental noise to accumulate.

If you somehow find yourself needing to read from /dev/random and it's blocking due to a lack of sufficient randomness, one reasonable way to get some more is to run du /. This gets the disk moving and generates some extra entropy (not guaranteed for flash drives).

When I run head -c 1024 /dev/random > file , it seems to wait for something to complete and never finishes writing to my file. What may be the problem?

openssl rand can be used to generate random bytes. The command is below:

openssl rand [bytes] -out [filename]

For example, openssl rand 2048 -out aaa will generate a file named aaa containing 2048 random bytes.

RandomData = file("/dev/urandom", "rb").read(1024)
file("random.txt", "wb").write(RandomData) 
dd if=/dev/urandom of=myrandom.txt bs=1024 count=1 

Python. Call it make_random.py

#!/usr/bin/env python
import random
import sys
import string

size = int(sys.argv[1])
for i in xrange(size):
    sys.stdout.write(random.choice(string.printable))
./make_random.py 1024 > some_file 

That will write 1024 bytes to stdout, which you can capture into a file. Depending on your system’s encoding this will probably not be readable as Unicode.

Here's a quick and dirty script I wrote in Perl. It allows you to control the range of characters that will be in the generated file.

#!/usr/bin/perl

if ($#ARGV < 1) {
    die("usage: <filename> <size>\n");
}

open(FILE, ">" . $ARGV[0]) or die "Can't open file for writing\n";

# you can control the range of characters here
my $minimum = 32;
my $range = 96;

for ($i = 0; $i < $ARGV[1]; $i++) {
    print FILE chr(int(rand($range)) + $minimum);
}

close(FILE);
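
Assuming the script is saved as random_file.pl (the name is arbitrary), usage would look like this:

perl random_file.pl out.dat 1048576    # 1 MiB of characters from the ASCII 32..127 range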

Here’s a shorter version, based on S. Lott’s idea of outputting to STDOUT:

#!/usr/bin/perl

# you can control the range of characters here
my $minimum = 32;
my $range = 96;

for ($i = 0; $i < $ARGV[0]; $i++) {
    print chr(int(rand($range)) + $minimum);
}

Warning: This is the first script I wrote in Perl. Ever. But it seems to work fine.

Creating a random file

Basically, I need to create a large file (~100 MB) filled with random characters. Preferably using standard system tools and as simply as possible. Any ideas?

2 Answers

You can write ~10^8 random bytes from /dev/urandom:

head -c 100000000 /dev/urandom > file 
dd if=/dev/urandom of=file bs=100M count=1 iflag=fullblock 

You can also write only printable characters, something like this:
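
# one possible variant: [:print:] keeps roughly 96 of the 256 byte values,
# so read about three times more input than the output size you want
head -c 300000000 /dev/urandom | tr -dc '[:print:]' | head -c 100000000 > file 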

Following the advice below, you can also use base64:
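
# base64 turns every 3 input bytes into 4 output characters, so ~75 MB of
# random input yields ~100 MB of text; head trims the result to the exact size
head -c 75000000 /dev/urandom | base64 | head -c 100000000 > file 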

It seems that with this approach it is just as safe (there is no tail that needs to be trimmed), but it gives a smaller set of printable characters.

You can use base64 for a more "economical" way of getting printable ASCII characters out of a stream of bytes. Just cut off the last few bytes of the program's output, since they are rather "non-random".

If the task is simply to create a placeholder file, there is already a ready-made solution in util-linux:

~$ fallocate --help

Usage:
 fallocate [options] <filename>

Preallocate space to, or deallocate space from a file.

Options:
 -c, --collapse-range remove a range from the file
 -d, --dig-holes      detect zeroes and replace with holes
 -l, --length <num>   length for range operations, in bytes
 -n, --keep-size      maintain the apparent size of the file
 -o, --offset <num>   offset for range operations, in bytes
 -p, --punch-hole     replace a range with a hole (implies -n)
 -z, --zero-range     zero and ensure allocation of a range
 -v, --verbose        verbose mode
 -h, --help           display this help and exit
 -V, --version        output version information and exit

For more details see fallocate(1).
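
For example, a 100 MB placeholder allocated in one call (the file contents are not random, just reserved space):

fallocate -l 100M file 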

Generating a random binary file

Why did it take 5 minutes to generate a 1 KiB file on my (low-end laptop) system with little load? And how could I generate a random binary file faster?

$ time dd if=/dev/random of=random-file bs=1 count=1024
1024+0 records in
1024+0 records out
1024 bytes (1.0 kB) copied, 303.266 s, 0.0 kB/s

real    5m3.282s
user    0m0.000s
sys     0m0.004s
$ 

Notice that dd if=/dev/random of=random-file bs=1024 count=1 doesn't work: it generates a random binary file of random length, on most runs under 50 B. Does anyone have an explanation for this too?

5 Answers

That’s because on most systems /dev/random uses random data from the environment, such as static from peripheral devices. The pool of truly random data (entropy) which it uses is very limited. Until more data is available, output blocks.

Retry your test with /dev/urandom (notice the u ), and you’ll see a significant speedup.

See Wikipedia for more info. /dev/random does not always output truly random data, but clearly on your system it does.

$ time dd if=/dev/urandom of=/dev/null bs=1 count=1024
1024+0 records in
1024+0 records out
1024 bytes (1.0 kB) copied, 0.00675739 s, 152 kB/s

real    0m0.011s
user    0m0.000s
sys     0m0.012s 
$ time dd if=/dev/urandom of=random-file bs=1 count=1024 

The main difference between random and urandom is how they pull random data from the kernel. random always takes data from the entropy pool. If the pool is empty, random blocks the operation until the pool is filled enough. urandom generates data using SHA (or another algorithm, sometimes MD5) when the kernel entropy pool is empty. urandom never blocks the operation.
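
On Linux you can check how much entropy the kernel pool currently holds (the value is in bits):

cat /proc/sys/kernel/random/entropy_avail 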

I wrote a script to test the speed of various hashing functions. For this I wanted files of "random" data, and I didn't want to use the same file twice so that none of the functions had a kernel-cache advantage over the others. I found that both /dev/random and /dev/urandom were painfully slow. I chose to use dd to copy data from my hard disk starting at random offsets. I would NEVER suggest using this if you are doing anything security related, but if all you need is noise it doesn't matter where you get it. On a Mac use something like /dev/disk0; on Linux use /dev/sda.

Here is the complete test script:

tests=3
kilobytes=102400
commands=(md5 shasum)
count=0
test_num=0
time_file=/tmp/time.out
file_base=/tmp/rand

while [[ test_num -lt tests ]]; do
    ((test_num++))
    for cmd in "${commands[@]}"; do
        ((count++))
        file=$file_base$count
        touch $file

        # slowest
        #/usr/bin/time dd if=/dev/random of=$file bs=1024 count=$kilobytes >/dev/null 2>$time_file

        # slow
        #/usr/bin/time dd if=/dev/urandom of=$file bs=1024 count=$kilobytes >/dev/null 2>$time_file

        # less slow
        /usr/bin/time sudo dd if=/dev/disk0 skip=$(($RANDOM*4096)) of=$file bs=1024 count=$kilobytes >/dev/null 2>$time_file

        echo "dd took $(tail -n1 $time_file | awk '{print $1}') seconds"
        echo -n "$(printf "%7s" $cmd)ing $file: "
        /usr/bin/time $cmd $file >/dev/null

        rm $file
    done
done

Here are the "less slow" /dev/disk0 results:

dd took 6.49 seconds
   md5ing /tmp/rand1: 0.45 real 0.29 user 0.15 sys
dd took 7.42 seconds
shasuming /tmp/rand2: 0.93 real 0.48 user 0.10 sys
dd took 6.82 seconds
   md5ing /tmp/rand3: 0.45 real 0.29 user 0.15 sys
dd took 7.05 seconds
shasuming /tmp/rand4: 0.93 real 0.48 user 0.10 sys
dd took 6.53 seconds
   md5ing /tmp/rand5: 0.45 real 0.29 user 0.15 sys
dd took 7.70 seconds
shasuming /tmp/rand6: 0.92 real 0.49 user 0.10 sys

Here are the "slow" /dev/urandom results:

dd took 12.80 seconds
   md5ing /tmp/rand1: 0.45 real 0.29 user 0.15 sys
dd took 13.00 seconds
shasuming /tmp/rand2: 0.58 real 0.48 user 0.09 sys
dd took 12.86 seconds
   md5ing /tmp/rand3: 0.45 real 0.29 user 0.15 sys
dd took 13.18 seconds
shasuming /tmp/rand4: 0.59 real 0.48 user 0.10 sys
dd took 12.87 seconds
   md5ing /tmp/rand5: 0.45 real 0.29 user 0.15 sys
dd took 13.47 seconds
shasuming /tmp/rand6: 0.58 real 0.48 user 0.09 sys

Here are the "slowest" /dev/random results:

dd took 13.07 seconds
   md5ing /tmp/rand1: 0.47 real 0.29 user 0.15 sys
dd took 13.03 seconds
shasuming /tmp/rand2: 0.70 real 0.49 user 0.10 sys
dd took 13.12 seconds
   md5ing /tmp/rand3: 0.47 real 0.29 user 0.15 sys
dd took 13.19 seconds
shasuming /tmp/rand4: 0.59 real 0.48 user 0.10 sys
dd took 12.96 seconds
   md5ing /tmp/rand5: 0.45 real 0.29 user 0.15 sys
dd took 12.84 seconds
shasuming /tmp/rand6: 0.59 real 0.48 user 0.09 sys

You’ll notice that /dev/random and /dev/urandom were not much different in speed. However, /dev/disk0 took 1/2 the time.

PS. I lessened the number of tests and removed all but two commands for the sake of "brevity" (not that I succeeded in being brief).
