
Re: [Bacula-devel] [Bacula-users] Vbackup feature


On Thursday 10 July 2008 20:45:24 Blake Dunlap wrote:
> > -----Original Message-----
> > From: bacula-users-bounces@xxxxxxxxxxxxxxxxxxxxx [mailto:bacula-users-
> > bounces@xxxxxxxxxxxxxxxxxxxxx] On Behalf Of Kern Sibbald
> > Sent: Thursday, July 10, 2008 6:45 AM
> > To: bacula-devel
> > Cc: bacula-users
> > Subject: [Bacula-users] Vbackup feature
> >
> > Hello,
> >
> > I'm a bit burned out from intensive bug fixing over the last couple of
> > months,
> > so decided to do something totally new yesterday.  I started implementing
> > what I call Virtual Backup or Vbackup, which is essentially project #3
> > "Merge
> > multiple backups (Synthetic Backup or Consolidation)".
> >
> > In attempting to implement it, I've realized a few things:
> >
> > 1. It is probably better to implement it as a new "level" under the
> > normal Backup code, for example "level=vbackup".  The resulting output
> > will be recorded in the catalog as a "Full".
> >
> > 2. In almost all respects it must behave much like a Migration job: it
> > does not use an FD, it reads an existing set of backups, and writes them
> > to a new Volume.
> >
> > 3. One difference from a Migration job is that all the old jobs remain
> > unchanged (i.e. like a Copy).
> >
> > 4. Another difference is that it has far fewer features: it simply
> > finds all the current backup records and copies them.  There are no
> > complicated selection criteria.
> >
> > 5. Like the Migration and Copy jobs, the input Pool (from where it reads
> > the
> > currently backed up data) and the output Pool (where it writes the merged
> > data) must be different.  This ensures that the job does not attempt to
> > read
> > and write to the same device, which just will not work.
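The selection rule in item 4 can be sketched in Python. Given a job's backup history, a vbackup would pick the most recent Full, the most recent Diff after it, and every Inc after that Diff. The record shape and function name here are illustrative only, not Bacula's actual catalog schema:

```python
# Sketch of the "current backup set" selection described above.
# Records are (job_id, level, end_time); this is NOT Bacula's real
# catalog schema, just an illustration of the selection rule.

def current_backup_set(jobs):
    """Return the last Full, the last Diff after it, and all later Incs."""
    jobs = sorted(jobs, key=lambda j: j[2])          # order by end time
    fulls = [j for j in jobs if j[1] == "F"]
    if not fulls:
        return []                                    # nothing to consolidate
    last_full = fulls[-1]
    after_full = [j for j in jobs if j[2] > last_full[2]]
    diffs = [j for j in after_full if j[1] == "D"]
    base = diffs[-1] if diffs else last_full
    incs = [j for j in after_full if j[1] == "I" and j[2] > base[2]]
    result = [last_full]
    if diffs:
        result.append(diffs[-1])
    return result + incs

# Full, Inc, Diff, Inc, Inc: the first Inc is superseded by the Diff.
history = [
    (1, "F", 1), (2, "I", 2), (3, "D", 3), (4, "I", 4), (5, "I", 5),
]
print([j[0] for j in current_backup_set(history)])   # -> [1, 3, 4, 5]
```

Note that Inc job 2 is dropped: it predates the last Diff, so its data is already covered.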
> >
> > Well, the problem with the above -- principally item #5 -- is the
> > following:
> >
> > You have a job J1, which does a Full, one or more Diff backups, then any
> > number of Inc backups all going to Pool P1.  At some point in time
> > (possibly
> > via the Schedule), you run a vbackup level, so it finds all the current
> > backup files (Full, last Diff, and all later Inc) and copies the data
> > from the input Pool (P1) to the output Pool (P2).
> >
> > Now, if you then redo a normal Full backup and restart with Diff and Inc
> > jobs
> > again, all will work.
> >
> > However, it is much more likely that you will simply continue doing
> > incremental backups (no more Fulls or Diffs).  At some point later, you
> > want to do another vbackup to "consolidate" all the Inc backups, and now
> > the process fails, because you would need to read from Pools P2 (the
> > Full produced by the vbackup) and P1 (the new Incs) while attempting to
> > write to P2, which will not work.
> >
> > Thus, without some other mechanism to move Volumes from Pool to Pool, a
> > setup like the one described above won't work, and I suspect this is
> > what will be done most frequently (i.e. do only one Full and thereafter
> > vbackups whenever there are enough Incs to warrant a consolidation).
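The conflict can be made concrete with a small sketch (pool names as in the scenario above): after one vbackup, the consolidated Full lives in P2, so a second vbackup must read from both P1 and P2 while also writing to P2, violating the read-pool/write-pool separation in item #5. The check function is illustrative, not Bacula code:

```python
# Sketch of the Pool conflict described above.  A vbackup must not
# read from and write to the same Pool, since the same device cannot
# be used for both at once.

def vbackup_ok(read_pools, write_pool):
    """A vbackup is only valid if no read Pool is also the write Pool."""
    return write_pool not in read_pools

# First vbackup: Full + Incs all sit in P1, output goes to P2 -- fine.
print(vbackup_ok({"P1"}, "P2"))        # -> True

# Afterwards the consolidated Full is in P2 and new Incs land in P1.
# The second vbackup must read {P1, P2} and write to P2 -- conflict.
print(vbackup_ok({"P1", "P2"}, "P2"))  # -> False
```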
> >
> > Any comments?
>
> I've thought about this a good bit, and my first thought would be to
> consider using a restore-tree-style lookup on jobs to build the list of
> previous jobs to consolidate (not restricted to one pool), instead of
> using a source pool for the consolidation. My setup, for instance, has
> each level going to a different pool due to differing retention times,
> etc.

What I am implementing is an automatic "consolidation" feature.  Perhaps later 
we can add an interactive feature, but I'm not very convinced of its utility.

>
> Though the downside of using the current restore logic is that it
> doesn't restrict itself to Jobs, but instead walks the records based on
> the FileSet, which is not necessarily what is wanted here.

Only one subitem of the current restore logic is based on FileSets. Restore in 
general is *far* more comprehensive than that.

>
> It would also be nice (though I plan to set this up via scripting and
> the use of some disk pools) if, along these lines, the backup staged to
> the spool area and then wrote to the destination pool listed. That way
> you don't have to worry about source-pool and destination-pool
> collision. (I plan on doing this using a two-stage consolidation to a
> disk pool with the current design, then migrating the jobs to the
> correct tape pools based on the desired level, assuming you implement
> both differential and full consolidation backups; otherwise just the
> one destination Full pool.)

As I wrote above, this is an automatic feature like backup jobs, so the main 
thrust is to allow it to be scheduled.  As with backup jobs, they can be 
initiated from the command line (and thus scripted), so what you are 
requesting will probably fall out of the implementation.
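For what it's worth, a scripted invocation would presumably look like any other command-line job start. The "vbackup" level keyword follows the naming in this thread and may change before release, and the job name and config path below are placeholders:

```shell
# Hypothetical: start a consolidation from a script via bconsole.
# "NightlySave" and the config path are placeholders; the level
# keyword "vbackup" is the proposed name from this discussion.
echo "run job=NightlySave level=vbackup yes" | bconsole -c /etc/bacula/bconsole.conf
```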

Regards,

Kern

>
> -Blake
>
> -------------------------------------------------------------------------
> Sponsored by: SourceForge.net Community Choice Awards: VOTE NOW!
> Studies have shown that voting for your favorite open source project,
> along with a healthy diet, reduces your potential for chronic lameness
> and boredom. Vote Now at http://www.sourceforge.net/community/cca08
> _______________________________________________
> Bacula-devel mailing list
> Bacula-devel@xxxxxxxxxxxxxxxxxxxxx
> https://lists.sourceforge.net/lists/listinfo/bacula-devel





This mailing list archive is a service of Copilotco.