
Job is waiting on Storage

Jim Richardson
Job is waiting on Storage
July 13, 2017 07:59PM
I can't seem to get Bacula to run simultaneous jobs when using the same storage device. Can anyone offer advice?

Running Jobs:
Console connected at 13-Jul-17 19:37
JobId Type Level Files Bytes Name Status
======================================================================
934 Back Diff 106,028 537.4 G C2T-Data is running
935 Back Diff 0 0 C2T-Archive is waiting for higher priority jobs to finish
936 Back Full 19,943 13.58 G D2D-DC02-Application is running
937 Back Full 0 0 D2D-HRMS-Application is waiting on Storage "Storage_Daily2Disk"
938 Back Full 0 0 D2D-Fish-Application is waiting on Storage "Storage_Daily2Disk"
939 Back Full 0 0 D2D-SPR01-Application is waiting on Storage "Storage_Daily2Disk"

Relevant configuration settings:

bacula-dir.conf
Director {
Maximum Concurrent Jobs = 20
}

Storage {
Name = Storage_Daily2Disk
Maximum Concurrent Jobs = 10 # run up to 10 jobs at the same time
}

Client { /* All clients */
Maximum Concurrent Jobs = 2
}

bacula-sd.conf
Storage {
Name = bacula-sd
Maximum Concurrent Jobs = 20
}

Autochanger {
Name = FileChgr
Device = DailyDevice, WeeklyDevice, MonthlyDevice
Changer Command = ""
Changer Device = /dev/null
}

Device {
Name = DailyDevice
Media Type = DailyDisk
Archive Device = /backup/bacula/daily
Autochanger = yes;
LabelMedia = yes;
Random Access = Yes;
AutomaticMount = yes;
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 10
}

bacula-fd.conf
FileDaemon { # this is me
Name = bacula-fd
Maximum Concurrent Jobs = 20
}


Jim Richardson

------------------------------------------------------------------------------
Check out the vibrant tech community on one of the world's most
engaging tech sites, Slashdot.org! http://sdm.link/slashdot
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
This message was imported via the External PhorumMail Module
Bill Arlofski
Re: Job is waiting on Storage
July 13, 2017 11:01PM
On 07/13/2017 06:50 PM, Jim Richardson wrote:
> I can’t seem to get Bacula to run simultaneous jobs when using the same
> storage device. Can anyone offer advice?
>
>
>
> Running Jobs:
>
> Console connected at 13-Jul-17 19:37
>
> JobId Type Level Files Bytes Name Status
>
> ======================================================================
>
> 934 Back Diff 106,028 537.4 G C2T-Data is running
>
> 935 Back Diff 0 0 C2T-Archive is waiting for higher
> priority jobs to finish
>
> 936 Back Full 19,943 13.58 G D2D-DC02-Application is running
>
> 937 Back Full 0 0 D2D-HRMS-Application is waiting on
> Storage "Storage_Daily2Disk"
>
> 938 Back Full 0 0 D2D-Fish-Application is waiting on
> Storage "Storage_Daily2Disk"
>
> 939 Back Full 0 0 D2D-SPR01-Application is waiting on
> Storage "Storage_Daily2Disk"

Hi Jim,

To me, it looks like your settings are correct regarding Maximum Concurrent Jobs
(MCJ)...

What I think is going on here is that jobid 935 is holding everything else up
due to it having a different priority.

Notice that its status is: "waiting for higher priority jobs to finish"

Unless you have set "AllowMixedPriority" in your Job resources, the other jobs
will wait until this one finishes. Personally, I do not recommend setting it,
as in my opinion it causes more confusion than clarity.

Just an FYI: the status "is waiting for higher priority jobs to finish" is, in
my humble opinion, not really 100% correct. It could be that the job "is
waiting on LOWER priority jobs to finish", but the same message is printed in
both cases. I think this message could either be made more specific to the
actual case, or made more generic, e.g. "waiting on jobs of different
priorities to finish, and 'AllowMixedPriority' not enabled..." (something like
that)

I do wonder, though, why jobid 936 (queued after 935) is listed as running...
Perhaps check its priority to see whether it matches that of jobid 934 "C2T-Data"

If you set the "C2T-Archive" job's priority to the same priority as the other
backup jobs, then it will not be held up, and it will not hold up any other
queued jobs.
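
A minimal sketch of that change, using the job name from the status output above (the surrounding directives and file layout are assumptions, not your actual resource):

```
# bacula-dir.conf -- hypothetical; merge into the existing C2T-Archive Job resource
Job {
  Name = "C2T-Archive"
  # ... existing directives unchanged ...
  Priority = 10   # match C2T-Data so neither job stalls the other's queue
}
```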

You can investigate the "AllowMixedPriority" option, but I think it may not do
what you want (exactly).

Another option is to set up schedules to ensure this "Archive" job runs when no
other normal backup jobs are running.
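
As a hypothetical example, a dedicated schedule for the Archive job might look like this (the schedule name and time are placeholders; pick a window when nothing else is scheduled):

```
# bacula-dir.conf -- sketch only
Schedule {
  Name = "ArchiveCycle"
  Run = Full sun at 04:00   # assumed quiet window
}

Job {
  Name = "C2T-Archive"
  Schedule = "ArchiveCycle"
  # ... existing directives unchanged ...
}
```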


Best regards,

Bill

--
Bill Arlofski
http://www.revpol.com/bacula
-- Not responsible for anything below this line --

Bill Arlofski
Re: Job is waiting on Storage
July 14, 2017 12:06AM
On 07/14/2017 12:17 AM, Darold Lucus wrote:
> I have the same issue. I have four different storage devices, and only one
> job per storage device can run concurrently. Jobs on different storage
> devices can run at the same time (4 jobs on 4 separate storage devices will
> run at once), but never two jobs on the same storage device. I am not sure
> if this is just typical behavior for the Bacula storage daemon, or if there
> is a setting that can be adjusted to make multiple jobs on the same storage
> device run concurrently. If someone has any extra insight on this I would
> greatly appreciate it as well.
>
>
>
> Sincerely,
>
> Darold Lucus

Hi Darold, (I am posting this reply to the list)

To help with this, I would ask you to post all of your resource configs like
Jim did, and also the bconsole output of:

* s dir

captured when you see only one job running while expecting multiple.

The 's dir' (status director) output will tell us exactly what is preventing a
job from running...
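
For reference, on the Director host that could be captured like this (assuming bconsole is installed and pointed at your Director):

```
echo "status director" | bconsole

# or interactively in bconsole:
#   *status dir
#   *status storage=Storage_Daily2Disk
```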

Best regards,

Bill



--
Bill Arlofski
http://www.revpol.com/bacula
-- Not responsible for anything below this line --

Jim Richardson
Re: Job is waiting on Storage
July 14, 2017 10:00AM
Bill, thank you for your response. The C2T "Cycle to Tape" jobs are actually functioning properly: the first job takes longer, and I have one tape drive, so I am using Priority to ensure that the C2T-Data job completes before the C2T-Archive job. The D2D "Daily to Disk" jobs use a different set of devices, but if this could be the root of my problem I will investigate. To complete the picture: the priority of the C2T-Data job is 10, C2T-Archive is 11, and the D2D jobs are 9, except for the D2D-Bacula post-backup job, which is 99 because I want a clean catalog backup after all other jobs complete.



This is the behavior I am looking for: from the 7.4.6 manual: "Note that only higher priority jobs will start early. Suppose the director will allow two concurrent jobs, and that two jobs with priority 10 are running, with two more in the queue. If a job with priority 5 is added to the queue, it will be run as soon as one of the running jobs finishes. However, new priority 10 jobs will not be run until the priority 5 job has finished."



It seems I am limited to only 2 connections to my Storage, but I can’t see where that is configured improperly.



As a quick rundown:

Concurrency:

My DIR allows for up to 20 concurrent

My SD allows for up to 20 concurrent

My FD allows for up to 20 concurrent

My Clients allow for up to 2 concurrent (by schedule, this will only happen on Sundays)

My Bacula Client allows for up to 10 concurrent (just in case)

My Storage allows for up to 10 concurrent for each of two types Daily2Disk & Weekly2Disk and 1 concurrent for Cycle2Tape
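
As I understand it, the effective concurrency for any one job is bounded by the smallest "Maximum Concurrent Jobs" along its whole path (Director, the Director's Storage resource, the SD, the Device, plus the Client and FD). Summarizing my numbers as a comment block:

```
# illustrative summary only, not a real directive
# Director                 MCJ = 20
# Storage_Daily2Disk (dir) MCJ = 10
# bacula-sd (global)       MCJ = 20
# DailyDevice              MCJ = 10
# Client (each)            MCJ = 2   <- per client, so different clients should still overlap
# bacula-fd (server)       MCJ = 20
```

So by my reading, D2D jobs from different clients should be able to run up to 10 at once on DailyDevice.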



Devices:

TapeChanger (Dell TL1000)

- ULT3580 - /dev/nst0 (IBM LTO-7)



FileChanger

- Daily2Disk - Media-Type: Daily

- Weekly2Disk - Media-Type: Weekly

- Monthly2Disk - Media-Type: Monthly



Schedule:

Cycle2Tape begin daily at 6PM #-- Jobs will start first

Daily2Disk begin daily at 7PM #-- Jobs will start second except for Sundays

Daily2Disk-After Backup begin daily at 11:10 PM #-- Jobs will start last

Weekly2Disk begin Sunday at 12PM #-- Jobs will start first



Run-down:

934 Back Diff 106,028 537.4 G C2T-Data is running <- Job starts at 6PM with a priority of 10; no other jobs running

935 Back Diff 0 0 C2T-Archive is waiting for higher priority jobs to finish <- Job starts at 6PM with a priority of 11; job 934 is running, so job 935 waits

936 Back Full 19,943 13.58 G D2D-DC02-Application is running <- Job starts at 7PM with a priority of 9 and starts immediately, just what we want

937 Back Full 0 0 D2D-HRMS-Application is waiting on Storage "Storage_Daily2Disk" <- Job starts at 7PM with a priority of 9 but hangs, when it should start per the concurrency settings, having the same priority as 936

938 Back Full 0 0 D2D-Fish-Application is waiting on Storage "Storage_Daily2Disk" <- Job starts at 7PM with a priority of 9 but hangs, when it should start per the concurrency settings, having the same priority as 936 & 937

939 Back Full 0 0 D2D-SPR01-Application is waiting on Storage "Storage_Daily2Disk" <- Job starts at 7PM with a priority of 9 but hangs, when it should start per the concurrency settings, having the same priority as 936, 937, & 938



Full Details:

/etc/bacula/bacula-dir.conf



Director {

Name = bacula-dir

DIRport = 9101

QueryFile = "/etc/bacula/query.sql"

WorkingDirectory = "/backup/bacula/spool"

PidDirectory = "/var/run"

Maximum Concurrent Jobs = 20

Password = "*"

Messages = Daemon

}



###############################################################################

#--- SCHEDULES

Schedule {

Name = "Daily2DiskCycle"

Run = Pool=Pool_Monthly2Disk 1st sun at 19:00

Run = Pool=Pool_Daily2Disk mon-sat at 19:00

Run = Pool=Pool_Weekly2Disk 2nd-5th sun at 19:00

}



Schedule {

Name = "Weekly2DiskCycle"

Run = Pool=Pool_Monthly2Disk 1st sun at 12:00

Run = Pool=Pool_Weekly2Disk sun at 12:00

}



Schedule {

Name = "Days-Diff-MTWHFSU"

Run = Full 1st sat at 19:00

Run = Differential mon-sun at 19:00

}



Schedule {

Name = "LogIT360_Cycle"

Run = Level=Full sat at 6:00

Run = Level=Differential sun at 18:00

Run = Level=Incremental mon at 18:00

Run = Level=Differential tue at 18:00

Run = Level=Incremental wed at 18:00

Run = Level=Differential thu at 18:00

Run = Level=Incremental fri at 18:00

}



# This schedule does the catalog. It starts after the WeeklyCycle

Schedule {

Name = "Daily2DiskCycle-AfterBackup"

Run = Pool=Pool_Monthly2Disk 1st sun at 23:10

Run = Pool=Pool_Daily2Disk mon-sat at 23:10

Run = Pool=Pool_Weekly2Disk 2nd-5th sun at 23:10

}



###############################################################################

#--- DISK STORAGE OPTIONS

Storage {

Name = Storage_Daily2Disk

Address = backup.us.domain.com

SDPort = 9103

Password = "*"

Device = FileChgr

Media Type = DailyDisk

Maximum Concurrent Jobs = 10

}



Storage {

Name = Storage_Weekly2Disk

Address = backup.us.domain.com

SDPort = 9103

Password = "*"

Device = FileChgr

Media Type = WeeklyDisk

Maximum Concurrent Jobs = 10

}



Storage {

Name = Storage_Monthly2Disk

Address = backup.us.domain.com

SDPort = 9103

Password = "*"

Device = FileChgr

Media Type = MonthlyDisk

Maximum Concurrent Jobs = 10



}



###############################################################################

#--- TAPE STORAGE OPTIONS

Storage {

Name = Tape

Address = backup.us.domain.com

SDPort = 9103

Password = "*"

Device = "ULT3580"

Media Type = LTO-7

Maximum Concurrent Jobs = 10

Autochanger = yes

}



###############################################################################

#--- DEFAULT JOB DEFINITIONS

JobDefs {

Name = "Daily2Disk Jobs"

Type = Backup

Level = Full

Schedule = "Daily2DiskCycle"

Messages = Standard

SpoolAttributes = yes

Write Bootstrap = "/backup/bacula/spool/%c-%n-%j.bsr"

Priority = 9

Allow Mixed Priority = yes

}



JobDefs {

Name = "Weekly2Disk Jobs"

Type = Backup

Level = Full

Schedule = "Weekly2DiskCycle"

Messages = Standard

SpoolAttributes = yes

Write Bootstrap = "/backup/bacula/spool/%c-%n-%j.bsr"

Priority = 9

Allow Mixed Priority = yes

}



JobDefs {

Name = "Daily2Disk Bacula Catalog"

Type = Backup

Level = Full

Schedule = "Daily2DiskCycle-AfterBackup"

Messages = Standard

SpoolAttributes = no

Write Bootstrap = "/backup/bacula/spool/%c-%n-%j.bsr"

Priority = 99

}



JobDefs {

Name = "TapeJobs"

Type = Backup

Level = Full

Client = bacula-fd

Storage = Tape

Messages = Standard

SpoolAttributes = yes

Write Bootstrap = "/backup/bacula/spool/%c-%n-%j.bsr"

Priority = 10

Allow Mixed Priority = yes

}



###############################################################################

##--- SAMPLE CLIENT

Client {

Name = sample-fd

Address = 10.1.X.X

FDPort = 9102

Catalog = MyCatalog

Password = "*"

File Retention = 60 days

Job Retention = 6 months

AutoPrune = yes

Maximum Concurrent Jobs = 2

}



#-- SERVER JOBS

Job {

Name = "W2D-Sample-System"

Client = sample-fd

JobDefs = "Weekly2Disk Jobs"

FileSet = "Windows-Sample-System"

Pool = Pool_Weekly2Disk

RunScript {

Command = "WBADMIN START SYSTEMSTATEBACKUP -backupTarget:E: -quiet"

RunsWhen = Before

RunsOnClient = yes

}

}



Job {

Name = "D2D-Sample-Application"

Client = sample-fd

JobDefs = "Daily2Disk Jobs"

FileSet = "Windows-Sample-Application"

Pool = Pool_Daily2Disk

}



FileSet {

Name = "Windows-Sample-Application"

Include {

Options {

signature = MD5

compression = GZIP

}

File = "E:/ShareFiles"

File = "E:/Shares/Share"

File = "C:/Shares/Scans"

}

}



FileSet {

Name = "Windows-Sample-System"

Include {

Options {

signature = MD5

compression = GZIP

}

File = "E:/WindowsImageBackup"

}

}



/etc/bacula/bacula-sd.conf



Storage {

Name = bacula-sd

SDPort = 9103

WorkingDirectory = "/backup/bacula/spool"

Pid Directory = "/var/run"

Maximum Concurrent Jobs = 20

}



Autochanger {

Name = FileChgr

Device = DailyDevice, WeeklyDevice, MonthlyDevice

Changer Command = ""

Changer Device = /dev/null

}



Device {

Name = DailyDevice

Media Type = DailyDisk

Archive Device = /backup/bacula/daily

Autochanger = yes;

LabelMedia = yes;

Random Access = Yes;

AutomaticMount = yes;

RemovableMedia = no;

AlwaysOpen = no;

Maximum Concurrent Jobs = 10

}



Device {

Name = WeeklyDevice

Media Type = WeeklyDisk

Archive Device = /backup/bacula/weekly

Autochanger = yes;

LabelMedia = yes;

Random Access = Yes;

AutomaticMount = yes;

RemovableMedia = no;

AlwaysOpen = no;

Maximum Concurrent Jobs = 10

}



Device {

Name = MonthlyDevice

Media Type = MonthlyDisk

Archive Device = /backup/bacula/monthly

Autochanger = yes;

LabelMedia = yes;

Random Access = Yes;

AutomaticMount = yes;

RemovableMedia = no;

AlwaysOpen = no;

Maximum Concurrent Jobs = 10

}



Autochanger {

Name = "Dell-TL1000"

Device = ULT3580

Description = "Dell TL1000 (model IBM 3572-TL)"

Changer Device = /dev/sg5

Changer Command = "/usr/local/sbin/mtx-changer %c %o %S %a %d"

}



Device {

Name = ULT3580

Description = "IBM ULT3580-HH7"

Media Type = LTO-7

Archive Device = /dev/nst0

Label Media = yes

# Label Type = IBM;

AutomaticMount = yes;

AlwaysOpen = yes;

RemovableMedia = yes;

RandomAccess = no;

AutoChanger = yes;

Changer Device = /dev/sg5

Drive Index = 0

Spool Directory = /backup/bacula/spool

Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"

# Enable the Alert command only if you have the mtx package loaded

Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"

Maximum Concurrent Jobs = 1

}



/etc/bacula/bacula-fd.conf



FileDaemon {

Name = bacula-fd

FDport = 9102

WorkingDirectory = /var/spool/bacula

Pid Directory = /var/run

Maximum Concurrent Jobs = 20

Plugin Directory = /usr/lib64/bacula

}



Thank you again, and I hope we can find a resolution.