User lost access to remote dir - Rebuilding local repository

Posted by Anonymous
October 20, 2014 06:37PM
We had rsnapshot running to back up a remote server.
On the remote server, the SSH user lost privileges over the directory that it was supposed to snapshot.

Hence, for the last few days, daily.0, daily.1, daily.2 and daily.4 are EMPTY.

1-We'll give back the access to the /backup directory on the remote server, [u]but how do you recommend that we proceed?[/u]
[list]
[*]delete the daily.0, daily.1, daily.2 and daily.4 directories?!
[*]Just give back the access and rsnapshot will move on?
[/list]
2-ALSO, for the future, is there a way to ensure that an error is returned if the remote dir is not there? We would have seen this.
Thanks for your support!
--
Thierry
User lost access to remote dir - Rebuilding local repository
October 21, 2014 12:21AM
On Mon, Oct 20, 2014 at 2:15 PM, Thierry Lavallee <thierry < at > 8p-design.com> wrote:
[quote]We had a running rsnapshot to backup a remote server.
On the remote server, the SSH user lost privileges over the directory that
it was supposed to snap.

Hence, for the last few days, the daily.0, daily.1, daily.2 and daily.4 are
EMPTY.

1-We'll give back the access to the /backup directory on the remote server,
but how do you recommend that we proceed?
[/quote]
Backups are like source control: they are only absolute and
untouchable in people's heads, not in real life.

You will not have pristine copies of the last few days' changes
anywhere. But you can move aside the most recent daily.0, label
and archive it just in case, then make a fresh 'daily.0' and copy in
the user's old files at the relevant location with 'cp -al' to
preserve hard links to older backups. If you need to, you can insert it
into other backups as well, but I'd notify the user.
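A minimal sketch of that recovery, assuming the daily.N layout from this thread (the throwaway demo root and file name here are stand-ins for a real snapshot_root):

```shell
#!/bin/sh
# Sketch: archive the broken daily.0, then rebuild it from the last
# good snapshot with cp -al so unchanged files stay hard-linked.
set -e
root=$(mktemp -d)
mkdir -p "$root/daily.0" "$root/daily.5"
echo "payload" > "$root/daily.5/file.txt"   # stand-in for the good backup

# 1. Move the empty daily.0 aside, labelled, just in case.
mv "$root/daily.0" "$root/daily.0.broken-20141020"

# 2. Recreate daily.0 as a hard-link copy of the good snapshot.
cp -al "$root/daily.5" "$root/daily.0"

# Both names now point at the same inode, so the link count is 2.
stat -c '%h' "$root/daily.0/file.txt"
```

Because `cp -al` creates hard links rather than copying data, the rebuilt daily.0 costs almost no extra disk space.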

[quote]delete the daily.0, daily.1, daily.2 and daily.4 directories?!
Just give back the access and Rsnapshot will move on?
[/quote]
That part, yes.

[quote]2-ALSO, for the future, is there a way to ensure that if the remote dir is
not there to return an error? We would have seen this.

Thanks for your support!
--
Thierry
[/quote]
Put it in a cron job that emails you the output. Expect nightly
emails, including reports of access issues.
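As a sketch, a system crontab entry along these lines would do it; the path, schedule, and address below are placeholders, not taken from the thread:

```shell
# /etc/cron.d/rsnapshot -- hypothetical example entry.
# cron mails the stdout/stderr of each run to MAILTO, so rsync's
# "Permission denied" errors from a lost remote dir land in your inbox.
MAILTO=backups@example.com
30 3 * * * root /usr/bin/rsnapshot daily
```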

------------------------------------------------------------------------------
Comprehensive Server Monitoring with Site24x7.
Monitor 10 servers for $9/Month.
Get alerted through email, SMS, voice calls or mobile push notifications.
Take corrective actions from your mobile device.
http://p.sf.net/sfu/Zoho
_______________________________________________
rsnapshot-discuss mailing list
rsnapshot-discuss < at > lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/rsnapshot-discuss
User lost access to remote dir - Rebuilding local repository
October 21, 2014 01:22AM
On Oct 21 2014, Thierry Lavallee wrote:

[quote]We had a running rsnapshot to backup a remote server.
On the remote server, the SSH user lost privileges over the directory
that it was supposed to snap.

Hence, for the last few days, the daily.0, daily.1, daily.2 and daily.4
are EMPTY.
[/quote]
I take it there are no files AT ALL in daily.{0,1,2,4} (because your ssh
couldn't access any files on those days, and IIUC your setup is presumably
using link_dest, which would explain why you ended up with an empty backup
directory rather than an exact copy of the previous backup when rsync
failed), not just that a sub-directory in the backup is empty.

I also assume some other backups have files in them.

[quote]1-We'll give back the access to the /backup directory on the remote
server, _but how do you recommend that we proceed?_

* delete the daily.0, daily.1, daily.2 and daily.4 directories?!
[/quote]
If you delete those directories (after double checking they are really
empty), then the other backup directories (eg daily.{3,5,6}) will be kept
for longer, because they won't be cycled out in favour of those empty
backup directories.

This could be a plus or a minus depending on how you look at it (the minus
I was thinking of is some backups are kept on a different schedule than
usual, and so some of your weekly backups will end up being more than 7
days apart).
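That double check can be made a bit stronger than eyeballing `du`; a sketch, with a throwaway demo directory standing in for the real snapshot root:

```shell
#!/bin/sh
# Sketch: count regular files in a snapshot dir before deleting it.
# A "really empty" rsnapshot daily.N typically still contains the
# directory skeleton, but no regular files.
set -e
root=$(mktemp -d)
mkdir -p "$root/daily.0/home"            # stand-in for an empty snapshot
count=$(find "$root/daily.0" -type f | wc -l)
echo "daily.0 holds $count regular files"
```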

[quote]* Just give back the access and Rsnapshot will move on?
[/quote]
Assuming you are using link_dest, it would probably be a good idea (at
least temporarily) to set up daily.0 (or .sync or whatever is expected to
have your most recent backup) with a fairly complete backup.

This should save network bandwidth (rsync can link with the previous backup
rather than fetching the whole file across the network) and maximise
chances of unchanged files being hard linked together between daily.X and
daily.Y (saving disk space in your backup area).

This is only a consideration for the first backup after the underlying
issue is fixed.

[quote]2-ALSO, for the future, is there a way to ensure that if the remote dir
is not there to return an error? We would have seen this.
[/quote]
I would expect something on stderr from rsync (eg Permission denied) which
should have flowed through to stderr of rsnapshot. If you run from cron,
stdout and stderr of rsnapshot are generally emailed to the user running
the cron job, unless you have made other arrangements. Assuming email works.

[quote]Thanks for your support!

[/quote]

User lost access to remote dir - Rebuilding local repository
October 21, 2014 02:22AM
Thanks to both of you for your replies...

FYI, yes, daily.{0,1,2,3,4} are [b]really[/b] empty.

# du -hs daily.0
12K    daily.0
# du -hs daily.1
12K    daily.1
# du -hs daily.2
12K    daily.2
# du -hs daily.3
12K    daily.3
# du -hs daily.4
24K    daily.4
# du -hs daily.5
132G    daily.5

Ideally I would like to duplicate daily.5 into daily.{0,1,2,3,4}
and run the next rsnapshot from there, just as if nothing had happened on the remote machine for those last 5 days.
Save bandwidth too! 132G!

If this is viable, would I simply run the following commands, then give access back and run my rsnapshot?
[quote]cp -al daily.5 daily.4
cp -al daily.5 daily.3
cp -al daily.5 daily.2
cp -al daily.5 daily.1
cp -al daily.5 daily.0
[/quote]
Normally this should not take more space, as hard links are kept and everything stays the same...
Then the next run should just pick up from there?
What do you all think?

thanks!
--
Thierry
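For what it's worth, the "should not take more space" claim can be checked on a throwaway tree; a sketch, where a 1 MiB file stands in for the real backup data:

```shell
#!/bin/sh
# Sketch: cp -al copies share inodes, so duplicating daily.5 into
# daily.0..4 adds directory entries but almost no data blocks.
set -e
root=$(mktemp -d)
mkdir -p "$root/daily.5"
dd if=/dev/zero of="$root/daily.5/big.bin" bs=1024 count=1024 2>/dev/null
for n in 0 1 2 3 4; do
  cp -al "$root/daily.5" "$root/daily.$n"
done
# du counts each inode once: six directories total ~1 MiB, not 6 MiB.
du -sk "$root" | cut -f1
stat -c '%h' "$root/daily.0/big.bin"    # one inode, six names
```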

User lost access to remote dir - Rebuilding local repository
October 21, 2014 12:46PM
Well, 132GB over the internet can take quite a while. And doing this has the same value in terms of data retention as the one you are suggesting.

Would my solution work?
Thanks

On 2014-10-21, at 0:06, Ken Woods <kenwoods < at > gmail.com ([email]kenwoods < at > gmail.com[/email])> wrote:

[quote]Ya know, you can run it manually. Why not just do that once to get the changes, then run it 4 more times to get everything back in line?

It's 132GB. It's not a lot of data.

[/quote]
User lost access to remote dir - Rebuilding local repository
October 21, 2014 02:01PM
Yes, your plan to:

* cp -al everything to each missed dir first
* then restore perms to remote user
* then simply allow rsnapshot of remote again

is the right way, in my opinion.

-C


--
Regards,
Christopher Barry

Random geeky fortune:
Life is a whim of several billion cells to be you for a while.

User lost access to remote dir - Rebuilding local repository
October 21, 2014 02:46PM
Thanks for your feedback. Will do.

User lost access to remote dir - Rebuilding local repository
October 22, 2014 06:39PM
At the risk of confusing the issue, you say:
[quote]presumably your setup is using link_dest to explain why you ended up with an empty backup directory rather than an exact copy of the previous backup when rsync failed[/quote]
I thought rsnapshot/rsync were supposed to exhibit robust behavior in the face of connection issues. Why should link_dest have any effect on this?

I use link_dest because I am backing up from Windows machines, and the rsnapshot man page says:
[quote]it's the best way to support special files on non-Linux systems[/quote]
As a noob to rsnapshot/rsync, any tips will be gratefully accepted!
