
Re: [Bacula-devel] MaximumBlockSize Problem and Question

Arno Lehmann <al@xxxxxxxxxxxxxx> writes:

> And I strongly recommend to not measure throughput using
> /dev/zero...  if you use compression, you're almost only measuring
> bus throughput as all the zeros will be compressed away... it's
> better to prepare a big file with random data for it. Unfortunately,
> that brings the disk system into the environment again. A big
> RAM-disk can help, but you'd need lots of RAM for this to be useful.

you can make a suitably large file from /dev/urandom and cat it
repeatedly into dd:

   dd if=/dev/urandom of=/tmp/random-data bs=1M count=16

   cat /tmp/random-data{,,,,,,,,,,,,,,,} |
       dd of=/dev/nst0 bs=256k

(the {,,,} brace expansion requires csh, bash or zsh; the 15 commas
above expand to 16 copies of the file, i.e. 256 MiB.  use a for loop
or similar in POSIX sh.)
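a POSIX-sh version of the same pipeline could look like the sketch
below.  the TAPE variable is my own addition (it defaults to /dev/null
so the script can be dry-run anywhere; set TAPE=/dev/nst0 for a real
drive):

```shell
# Sketch of a POSIX-sh replacement for the brace expansion above.
# TAPE is illustrative: /dev/null by default, your tape device in real use.
TAPE=${TAPE:-/dev/null}

# 16 MiB of random data, as in the original example.
dd if=/dev/urandom of=/tmp/random-data bs=1M count=16

# Repeat the file 16 times (256 MiB total) and feed it to dd.
i=0
while [ "$i" -lt 16 ]; do
    cat /tmp/random-data
    i=$((i + 1))
done | dd of="$TAPE" bs=256k
```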

gzip uses a 32 KiB sliding window, in other words it will not find
repeated patterns more than 32 KiB apart, so 16 MiB is certainly more
than enough to make life hard for the tape drive's compression
algorithm.  (bzip2 -9 uses 900 kB blocks, still way less.)
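you can see the effect with gzip itself; a quick sketch (the file
names under /tmp are just illustrative):

```shell
# Compare gzip output size for 1 MiB of random data vs. 1 MiB of zeros.
dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=1 2>/dev/null
dd if=/dev/zero    of=/tmp/zero.bin bs=1M count=1 2>/dev/null

# Random data finds no matches within the 32 KiB window and stays ~1 MiB;
# the zeros collapse to roughly a kilobyte.
gzip -c /tmp/rand.bin | wc -c
gzip -c /tmp/zero.bin | wc -c
```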

regards,          | Redpill  _
Kjetil T. Homme   | Linpro  (_)

