
Re: [Bacula-devel] MaximumBlockSize Problem and Question


On Wed, 2008-11-05 at 19:22 +0000, Ulrich Leodolter wrote:
> On Wed, 2008-11-05 at 18:11 +0100, Kjetil Torgrim Homme wrote:
> > Arno Lehmann <al@xxxxxxxxxxxxxx> writes:
> > 
> > > And I strongly recommend to not measure throughput using
> > > /dev/zero...  if you use compression, you're almost only measuring
> > > bus throughput as all the zeros will be compressed away... it's
> > > better to prepare a big file with random data for it. Unfortunately,
> > > that brings the disk system into the environment again. A big
> > > RAM-disk can help, but you'd need lots of RAM for this to be useful.
> > 
> > you can make a suitably large file from /dev/urandom and cat it
> > endlessly into dd
> > 
> >    dd if=/dev/urandom of=/tmp/random-data bs=1M count=16
> > 
> >    cat /tmp/random-data{,,,,,,,,,,,,,,,} |
> >        dd of=/dev/nst0 bs=256k
> > 
> > (the {,,,} syntax requires csh, bash or zsh.  use a for loop or
> > similar in POSIX sh.)
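For reference, a POSIX sh version of the repeat trick, demonstrated on a small scratch file so it runs anywhere (for the real tape test, feed it /tmp/random-data and pipe the result into `dd of=/dev/nst0 bs=256k`):

```shell
#!/bin/sh
# POSIX sh replacement for the csh/bash {,,,,} brace trick:
# stream a file's contents COUNT times to stdout.
repeat_cat() {        # usage: repeat_cat FILE COUNT
    n=0
    while [ "$n" -lt "$2" ]; do
        cat "$1"
        n=$((n + 1))
    done
}

# Small demo: 3 bytes repeated 4 times = 12 bytes.
printf 'abc' > /tmp/repeat-demo.src
repeat_cat /tmp/repeat-demo.src 4 > /tmp/repeat-demo.out
wc -c < /tmp/repeat-demo.out
```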
> > 
> 
> Hi,
> 
> here are my results
> 
> [root@troll tmp]# cat /disk0/tmp/random-data{,,,,,,,,,,,,,,,} | dd
> of=/dev/nst1 bs=256k
> 1006+28 records in
> 1006+28 records out

forget this
> 
> [root@troll tmp]# cat /disk0/tmp/random-data{,,,,,,,,,,,,,,,} | dd
> of=/dev/nst1 bs=256k
> 64854+1048 records in
> 64854+1048 records out
> 17179869184 bytes (17 GB) copied, 162.675 seconds, 106 MB/s
> 
> As you can see, raw tape performance is good; it would be fine if
> Bacula wrote at 50 MB/s.
> 
> But Bacula writes at only 5-15 MB/s for Copy/Migrate disk-to-tape
> jobs. A job of this size (17 GB) is split into 282949 63K buffers.
> There must be a performance bottleneck in Bacula's buffer handling,
> maybe because everything runs inside one storage daemon process.
> Does Bacula use multithreaded parallel read/write for jobs like this?
> Using top I can see only one active process (bacula-sd).
> 
> EMC Legato NetWorker (on the same hardware) is able to clone a set
> of full backup jobs (800 GB) from disk to tape in 5 hours.
> That works out to an average speed of roughly 45 MB/s.
> 
> 
> I just need a simple performance tip from the Bacula experts :-)
> 

Now I am thinking about setting up a second storage daemon:

The first SD controls the File storage.
The second SD controls the Tape storage (autochanger).

Operations like reading jobs from disk, spooling, and writing
to tape will hopefully overlap and run more in parallel.

Is this a good idea?
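For what it's worth, here is a rough sketch of what the tape-side daemon's bacula-sd.conf might look like (the resource names, paths, password, and media type are made up, and the port is moved off the default 9103 so a second instance can run on the same host):

```
# bacula-sd.conf for the second (tape-only) storage daemon
# -- hypothetical names and paths, port chosen to avoid the first SD.
Storage {
  Name = troll-tape-sd
  SDPort = 9104
  WorkingDirectory = /var/bacula/working-tape
  Pid Directory = /var/run
}

Director {
  Name = troll-dir
  Password = "secret"
}

Device {
  Name = LTO-Drive-1
  Media Type = LTO-4
  Archive Device = /dev/nst1
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
  Spool Directory = /var/bacula/spool
  Maximum Spool Size = 2G
}
```

The Director would then need a matching Storage resource pointing at the new port, so copy jobs can read from the first SD and write through this one.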


Currently only one storage daemon is running, and the operations
needed to copy disk to tape run more or less sequentially
(simplified pseudo code, maybe wrong):

foreach job in PoolUncopiedJobs
{
	foreach block in list_blocks(job)
	{
		read block from disk
		if (spooling)
		{
			write block to spooling file
			if (spooling max size reached)
			{
				write spooling file to tape
				delete spooling file
			}
		}
		else
		{
			write block to tape
		}
	}
	if (spooling and exists spooling file)
	{
		write spooling file to tape
		delete spooling file
	}
}
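As an aside, the difference between the two-pass spooling path above and an overlapped pipeline can be sketched in shell. Scratch files stand in for /disk0 and /dev/nst1 (all names here are made up), so this runs anywhere:

```shell
#!/bin/sh
# Sketch of the two copy strategies from the pseudo code above,
# using scratch files in place of the disk pool and the tape drive.
src=/tmp/copy-demo.src
spool=/tmp/copy-demo.spool
tape=/tmp/copy-demo.tape

# Fake 64 KB "job" on disk.
dd if=/dev/zero of="$src" bs=1k count=64 2>/dev/null

# Spooling path: read -> spool file -> "tape", two sequential passes.
dd if="$src" of="$spool" bs=4k 2>/dev/null
dd if="$spool" of="$tape" bs=4k 2>/dev/null

# Overlapped path: reader and writer run concurrently through a pipe,
# which is roughly the kind of parallelism a second daemon would buy.
dd if="$src" bs=4k 2>/dev/null | dd of="$tape" bs=4k 2>/dev/null
```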

Correct me if I am wrong about how the storage daemon works.


After yesterday's tests (without spooling)
I switched back to spooling:

Maximum Spooling Size = 2G

The machine has 4 GB of RAM installed; maybe I'll install more RAM
and spool to a ramdisk.
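If spooling to a ramdisk, an fstab entry like this would carve out a 3 GB RAM-backed spool area (the mount point is hypothetical, and the size should stay comfortably below installed RAM):

```
# /etc/fstab -- 3 GB tmpfs for the SD spool directory (hypothetical path)
tmpfs   /var/bacula/spool   tmpfs   size=3g   0   0
```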


Results for a 12 GB copy disk-to-tape job:

troll-sd JobId 5486: Despooling elapsed time = 00:00:31, Transfer rate =
69.27 M bytes/second
....
  Elapsed time:           9 mins 30 secs
  Priority:               12
  SD Files Written:       165
  SD Bytes Written:       12,136,067,878 (12.13 GB)
  Rate:                   21291.3 KB/s


Thx
Ulrich

-- 
Ulrich Leodolter <ulrich.leodolter@xxxxxxxx>
OBVSG


_______________________________________________
Bacula-devel mailing list
Bacula-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/bacula-devel

