
Re: [Bacula-devel] Accurate file project hash tables

On Tuesday 25 March 2008 20:21:58 Dan Langille wrote:
> On Mar 25, 2008, at 6:29 PM, Martin Simmons wrote:
> >>>>>> On Tue, 25 Mar 2008 14:23:17 -0400, Dan Langille said:
> >>
> >> On Mar 25, 2008, at 12:55 PM, Kern Sibbald wrote:
> >>> What do you think?  Any other ideas?
> >>
> >> Keep the data on disk.  Utilize the OS file-system caching.
> >>
> >> I don't know what structure to put it in.  But if the file is small,
> >> it will fly.
> >> If the data is large, the OS will do the caching for us and use as
> >> much RAM as possible.
> >
> > There is a possible flaw in that logic: the disk cache might be
> > overloaded
> > already due to the FD reading lots of files/dirs during the backup.
> Noted.
> Regardless, the OS is better at handling caching than we are.  :)

My experience is that if the programmer is good, knows his data, and does a 
good job, he can beat the OS hands down.  An example was Autodesk's need to 
handle large volumes of data.  On flat-address Solaris, it was a total pig 
once paging kicked in.  I wrote LRU caching: paged extensions to the fread 
and fwrite functions, where paging was enabled when a "p" option was passed 
to fopen(), and it ran thousands of times faster than the flat-memory OS 
version.

Bacula-devel mailing list

This mailing list archive is a service of Copilotco.