
power down hard drives

Jon LaBadie
power down hard drives
November 10, 2017 10:59PM
Just a thought. My amanda server has seven hard drives
dedicated to saving amanda data. Only 2 are typically
used (holding and one vtape drive) during an amdump run.
Even then, the usage is only for about 3 hours.

So there is a lot of electricity and disk drive wear for
inactive drives.

Can today's drives be unmounted and powered down, then
powered up and mounted again when needed?

I'm not talking about system hibernation, the system
and its other drives still need to be active.

Back when 300GB was a big drive I had 2 of them in
external USB housings. They shut themselves down
on inactivity. When later accessed, there would
be about 5-10 seconds delay while the drive spun
up and things proceeded normally.

That would be a fine arrangement now if it could
be mimicked.

Jon
--
Jon H. LaBadie jon@jgcomp.com
11226 South Shore Rd. (703) 787-0688 (H)
Reston, VA 20190 (703) 935-6720 (C)
This message was imported via the External PhorumMail Module
Stefan G. Weichinger
Re: power down hard drives
November 11, 2017 01:59AM
On 2017-11-11 at 07:49, Jon LaBadie wrote:
> Just a thought. My amanda server has seven hard drives
> dedicated to saving amanda data. Only 2 are typically
> used (holding and one vtape drive) during an amdump run.
> Even then, the usage is only for about 3 hours.
>
> So there is a lot of electricity and disk drive wear for
> inactive drives.
>
> Can todays drives be unmounted and powered down then
> when needed, powered up and mounted again?
>
> I'm not talking about system hibernation, the system
> and its other drives still need to be active.
>
> Back when 300GB was a big drive I had 2 of them in
> external USB housings. They shut themselves down
> on inactivity. When later accessed, there would
> be about 5-10 seconds delay while the drive spun
> up and things proceeded normally.
>
> That would be a fine arrangement now if it could
> be mimiced

example: hdparm -S 120 /dev/sdb
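Stefan's one-liner sets the drive's standby (spin-down) timeout. The encoding of the -S value is non-obvious, so here is a small sketch that decodes it per hdparm(8); /dev/sdb is just the example device from his post:

```shell
#!/bin/sh
# Decode an hdparm -S value into seconds, following hdparm(8):
# 0 disables the timer, 1-240 count in units of 5 seconds,
# 241-251 count in units of 30 minutes.
spindown_seconds() {
    v=$1
    if [ "$v" -eq 0 ]; then
        echo 0                           # standby timer disabled
    elif [ "$v" -le 240 ]; then
        echo $((v * 5))                  # 5-second units
    elif [ "$v" -le 251 ]; then
        echo $(( (v - 240) * 30 * 60 ))  # 30-minute units
    fi
}

spindown_seconds 120   # Stefan's example: 600 seconds (10 minutes)
spindown_seconds 242   # 3600 seconds (1 hour)

# The actual setting (needs root; device name is an example):
#   hdparm -S 120 /dev/sdb
```

Note that -S only arms the drive's own inactivity timer; anything that touches the filesystem (even periodic metadata updates) will keep resetting it.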
Gene Heskett
Re: power down hard drives
November 11, 2017 01:59AM
On Saturday 11 November 2017 01:49:25 Jon LaBadie wrote:

> Just a thought. My amanda server has seven hard drives
> dedicated to saving amanda data. Only 2 are typically
> used (holding and one vtape drive) during an amdump run.
> Even then, the usage is only for about 3 hours.
>
> So there is a lot of electricity and disk drive wear for
> inactive drives.
>
> Can todays drives be unmounted and powered down then
> when needed, powered up and mounted again?
>
> I'm not talking about system hibernation, the system
> and its other drives still need to be active.
>
> Back when 300GB was a big drive I had 2 of them in
> external USB housings. They shut themselves down
> on inactivity. When later accessed, there would
> be about 5-10 seconds delay while the drive spun
> up and things proceeded normally.
>
> That would be a fine arrangement now if it could
> be mimiced.
>
> Jon

Hi Jon and Gundy;

This might be nice, but is it balanced against the life of the drive when
it is otherwise only spun down by a power failure that outlasts the UPS,
or when powered-down maintenance is being done?

The drives in this box are all 5+ years old, and the drive I use for
vtapes is the oldest.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f 119   099   006    Pre-fail Always  -           210441924
  3 Spin_Up_Time            0x0003 100   100   000    Pre-fail Always  -           0
  4 Start_Stop_Count        0x0032 100   100   020    Old_age  Always  -           378
  5 Reallocated_Sector_Ct   0x0033 100   100   036    Pre-fail Always  -           25
  7 Seek_Error_Rate         0x000f 082   060   030    Pre-fail Always  -           179617636
  9 Power_On_Hours          0x0032 023   023   000    Old_age  Always  -           68094
 10 Spin_Retry_Count        0x0013 100   100   097    Pre-fail Always  -           0
 12 Power_Cycle_Count       0x0032 100   100   020    Old_age  Always  -           383
184 End-to-End_Error        0x0032 100   100   099    Old_age  Always  -           0
187 Reported_Uncorrect      0x0032 100   100   000    Old_age  Always  -           0
188 Command_Timeout         0x0032 100   099   000    Old_age  Always  -           4
189 High_Fly_Writes         0x003a 001   001   000    Old_age  Always  -           395
190 Airflow_Temperature_Cel 0x0022 061   057   045    Old_age  Always  -           39 (Min/Max 18/42)
197 Current_Pending_Sector  0x0012 100   100   000    Old_age  Always  -           0
198 Offline_Uncorrectable   0x0010 100   100   000    Old_age  Offline -           0
199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  -           0
240 Head_Flying_Hours       0x0000 100   253   000    Old_age  Offline -           68089 (30 52 0)
241 Total_LBAs_Written      0x0000 100   253   000    Old_age  Offline -           4258864840
242 Total_LBAs_Read         0x0000 100   253   000    Old_age  Offline -           3825606395

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline  Completed without error  00%        59921            -
# 2  Extended offline  Completed without error  00%        54103            -
# 3  Short offline     Completed without error  00%        45529            -
# 4  Extended offline  Completed without error  00%        43537            -
# 5  Extended offline  Completed without error  00%        27211            -
# 6  Extended offline  Completed without error  00%        27043            -
# 7  Extended offline  Completed without error  00%        26875            -
# 8  Extended offline  Completed without error  00%        26372            -
# 9  Extended offline  Completed without error  00%        26205            -
#10  Extended offline  Completed without error  00%        26037            -
#11  Extended offline  Completed without error  00%        25869            -
#12  Extended offline  Completed without error  00%        25701            -
#13  Extended offline  Completed without error  00%        25534            -
#14  Extended offline  Completed without error  00%        25368            -
#15  Extended offline  Completed without error  00%        25201            -
#16  Extended offline  Completed without error  00%        25033            -
#17  Extended offline  Completed without error  00%        24866            -
#18  Extended offline  Completed without error  00%        24697            -
#19  Extended offline  Completed without error  00%        24530            -
#20  Extended offline  Completed without error  00%        24362            -
#21  Extended offline  Completed without error  00%        24194            -

Those 25 re-allocated sectors were already there the first time I used
smartctl to inspect it, at about 5,000 spinning hours; it is now at
68,089 hours. That's about 7.77 years of service. I am about to replace
it with a 2TB drive, not because it's failing, but because it's
essentially full and occasionally rejects a big dump. That, however,
seems to have reduced amanda's reluctance to do a level 2; before,
despite some very loose settings, it refused to do a level 2. And it
still insists on promoting most level 2s to level 0s, so the level 0
comes 5 or 6 days early in a 7-day cycle, which makes the imbalance
between dump sizes worse, not better.

What I also consider very important when buying "commodity"
drives is going to the drive maker's site, getting that drive's
latest firmware, and installing it. When Newegg et al. buy those
drives very early in the production cycle, they often ship with
buggy firmware that can slow the data rates considerably.
In one case, a drive that was doing 35 megabytes/second reads, and even
slower writes, is now doing 135 both ways after having its firmware
updated. It's not uncommon to gain 20 or 30 megabytes/second in
read/write rates for any drive updated.

So I leave mine spinning. I am convinced the head drag when landing and
starting up is the first major cause of drive failures. Remember that a
drive's heads do not touch the disk but fly a few microns above it on
the film of air carried by the spinning platter.

The other major contributor to drive failures is the bright red SATA
cables; use any color but that bright red. Its vapors over time convert
the copper into dark brown dust that doesn't make a very good conductor
of electricity. Watch your logs: when you see a drive reset, and a
flurry of them if you touch the cable, it's history. And its data errors
can trash a perfectly good drive as it tries to cope with them.

Stop them if you must, but I think you will pay for it in seriously
shortened drive life. How long, at 5 watts saved per drive, does it
take to gain back the $100 the drive cost? I haven't bothered to work
that out.
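Gene's 5-watt question can be roughly worked out. The electricity price below ($0.12/kWh) is an assumed figure, not from the thread:

```shell
#!/bin/sh
# Rough payback arithmetic: at 5 W saved per idle drive, how long to
# recover a $100 drive price? Electricity rate of $0.12/kWh is assumed.
awk 'BEGIN {
    watts = 5
    kwh_per_year = watts * 24 * 365 / 1000      # energy saved per year
    dollars_per_year = kwh_per_year * 0.12      # cost saved per year
    years_to_recover = 100 / dollars_per_year   # payback time, one drive
    printf "%.1f kWh/yr, $%.2f/yr, %.1f years to recover $100\n",
           kwh_per_year, dollars_per_year, years_to_recover
}'
```

Under those assumptions one drive saves about $5/year, so the payback on a $100 drive is on the order of two decades; with several idle drives it shrinks proportionally, but it is still long compared to typical drive lifetimes.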

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page http://geneslinuxbox.net:6309/gene
Austin S. Hemmelgarn
Re: power down hard drives
November 13, 2017 04:59AM
On 2017-11-11 01:49, Jon LaBadie wrote:
> Just a thought. My amanda server has seven hard drives
> dedicated to saving amanda data. Only 2 are typically
> used (holding and one vtape drive) during an amdump run.
> Even then, the usage is only for about 3 hours.
>
> So there is a lot of electricity and disk drive wear for
> inactive drives.
>
> Can todays drives be unmounted and powered down then
> when needed, powered up and mounted again?
>
> I'm not talking about system hibernation, the system
> and its other drives still need to be active.
>
> Back when 300GB was a big drive I had 2 of them in
> external USB housings. They shut themselves down
> on inactivity. When later accessed, there would
> be about 5-10 seconds delay while the drive spun
> up and things proceeded normally.
>
> That would be a fine arrangement now if it could
> be mimiced.
Aside from what Stefan mentioned (using hdparm to set the standby
timeout; check the hdparm man page, as the numbers are not exactly
sensible), you may consider looking into auto-mounting each of the
drives, as that can help eliminate things that would keep the drives
on-line (or make it more obvious that something is still using them).
Gene Heskett
Re: power down hard drives
November 13, 2017 08:01AM
On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:

> On 2017-11-11 01:49, Jon LaBadie wrote:
> > Just a thought. My amanda server has seven hard drives
> > dedicated to saving amanda data. Only 2 are typically
> > used (holding and one vtape drive) during an amdump run.
> > Even then, the usage is only for about 3 hours.
> >
> > So there is a lot of electricity and disk drive wear for
> > inactive drives.
> >
> > Can todays drives be unmounted and powered down then
> > when needed, powered up and mounted again?
> >
> > I'm not talking about system hibernation, the system
> > and its other drives still need to be active.
> >
> > Back when 300GB was a big drive I had 2 of them in
> > external USB housings. They shut themselves down
> > on inactivity. When later accessed, there would
> > be about 5-10 seconds delay while the drive spun
> > up and things proceeded normally.
> >
> > That would be a fine arrangement now if it could
> > be mimiced.
>
> Aside from what Stefan mentioned (using hdparam to set the standby
> timeout, check the man page for hdparam as the numbers are not exactly
> sensible), you may consider looking into auto-mounting each of the
> drives, as that can help eliminate things that would keep the drives
> on-line (or make it more obvious that something is still using them).

I've investigated that, and I have amanda wrapped up in a script that
could do that, but I ran into a showstopper I've long since forgotten.
All this was back when I was writing that wrapper, years ago now. One
of the showstoppers, as I recall, was the fact that only root can
mount and unmount a drive, and my script runs as amanda.

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page http://geneslinuxbox.net:6309/gene
Austin S. Hemmelgarn
Re: power down hard drives
November 13, 2017 08:05AM
On 2017-11-13 09:56, Gene Heskett wrote:
> On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:
>
>> On 2017-11-11 01:49, Jon LaBadie wrote:
>>> Just a thought. My amanda server has seven hard drives
>>> dedicated to saving amanda data. Only 2 are typically
>>> used (holding and one vtape drive) during an amdump run.
>>> Even then, the usage is only for about 3 hours.
>>>
>>> So there is a lot of electricity and disk drive wear for
>>> inactive drives.
>>>
>>> Can todays drives be unmounted and powered down then
>>> when needed, powered up and mounted again?
>>>
>>> I'm not talking about system hibernation, the system
>>> and its other drives still need to be active.
>>>
>>> Back when 300GB was a big drive I had 2 of them in
>>> external USB housings. They shut themselves down
>>> on inactivity. When later accessed, there would
>>> be about 5-10 seconds delay while the drive spun
>>> up and things proceeded normally.
>>>
>>> That would be a fine arrangement now if it could
>>> be mimiced.
>>
>> Aside from what Stefan mentioned (using hdparam to set the standby
>> timeout, check the man page for hdparam as the numbers are not exactly
>> sensible), you may consider looking into auto-mounting each of the
>> drives, as that can help eliminate things that would keep the drives
>> on-line (or make it more obvious that something is still using them).
>
> I've investigated that, and I have amanda wrapped up in a script that
> could do that, but ran into a showstopper I've long since forgotten
> about. Al this was back in the time I was writing that wrapper, years
> ago now. One of the show stoppers AIR was the fact that only root can
> mount and unmount a drive, and my script runs as amanda.
>
While such a wrapper might work if you use sudo inside it (you can
configure sudo to allow root to run things as the amanda user without
needing a password, then run the wrapper as root), what I was trying to
refer to, in a system-agnostic manner (since the exact mechanism
differs between UNIX derivatives), was on-demand auto-mounting, as
provided by autofs on Linux or the auto-mount daemon (amd) on BSD.
With on-demand auto-mounting you don't need a wrapper at all: the
access attempt triggers the mount, and the mount times out after some
period of inactivity and is unmounted again. It's mostly used for
network resources (possibly with special auto-lookup mechanisms),
since certain protocols (NFS in particular) tend to have issues if the
server goes down while a share is mounted remotely, even if nothing is
happening on that share, but it works just as well for auto-mounting
local fixed or removable volumes that aren't needed all the time (I
use it for a handful of things on my personal systems to minimize idle
resource usage).
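For the Linux autofs case, a minimal sketch might look like the following; the mount point, map file name, filesystem labels, and 600-second timeout are illustrative assumptions, not from the thread:

```
# /etc/auto.master -- mount points under /amdrives, unmount after
# 600 seconds of inactivity
/amdrives  /etc/auto.amanda  --timeout=600

# /etc/auto.amanda -- one entry per vtape drive (devices are examples)
vtape1  -fstype=ext4  :/dev/disk/by-label/vtape1
vtape2  -fstype=ext4  :/dev/disk/by-label/vtape2
```

With this in place, any access to /amdrives/vtape1 mounts the volume transparently, and after the timeout it is unmounted again, at which point an hdparm standby timer can let the drive spin down.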
Gene Heskett
Re: power down hard drives
November 13, 2017 08:59AM
On Monday 13 November 2017 10:12:47 Austin S. Hemmelgarn wrote:

> On 2017-11-13 09:56, Gene Heskett wrote:
> > On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:
> >> On 2017-11-11 01:49, Jon LaBadie wrote:
> >>> Just a thought. My amanda server has seven hard drives
> >>> dedicated to saving amanda data. Only 2 are typically
> >>> used (holding and one vtape drive) during an amdump run.
> >>> Even then, the usage is only for about 3 hours.
> >>>
> >>> So there is a lot of electricity and disk drive wear for
> >>> inactive drives.
> >>>
> >>> Can todays drives be unmounted and powered down then
> >>> when needed, powered up and mounted again?
> >>>
> >>> I'm not talking about system hibernation, the system
> >>> and its other drives still need to be active.
> >>>
> >>> Back when 300GB was a big drive I had 2 of them in
> >>> external USB housings. They shut themselves down
> >>> on inactivity. When later accessed, there would
> >>> be about 5-10 seconds delay while the drive spun
> >>> up and things proceeded normally.
> >>>
> >>> That would be a fine arrangement now if it could
> >>> be mimiced.
> >>
> >> Aside from what Stefan mentioned (using hdparam to set the standby
> >> timeout, check the man page for hdparam as the numbers are not
> >> exactly sensible), you may consider looking into auto-mounting each
> >> of the drives, as that can help eliminate things that would keep
> >> the drives on-line (or make it more obvious that something is still
> >> using them).
> >
> > I've investigated that, and I have amanda wrapped up in a script
> > that could do that, but ran into a showstopper I've long since
> > forgotten about. Al this was back in the time I was writing that
> > wrapper, years ago now. One of the show stoppers AIR was the fact
> > that only root can mount and unmount a drive, and my script runs as
> > amanda.
>
> While such a wrapper might work if you use sudo inside it (you can
> configure sudo to allow root to run things as the amanda user without
> needing a password, then run the wrapper as root), what I was trying
> to refer to in a system-agnostic manner (since the exact mechanism is
> different between different UNIX derivatives) was on-demand
> auto-mounting, as provided by autofs on Linux or the auto-mount daemon
> (amd) on BSD. When doing on-demand auto-mounting, you don't need a
> wrapper at all, as the access attempt will trigger the mount, and then
> the mount will time out after some period of inactivity and be
> unmounted again. It's mostly used for network resources (possibly
> with special auto-lookup mechanisms), as certain protocols (NFS in
> particular) tend to have issues if the server goes down while a share
> is mounted remotely, even if nothing is happening on that share, but
> it works just as well for auto-mounting of local fixed or removable
> volumes that aren't needed all the time (I use it for a handful of
> things on my personal systems to minimize idle resource usage).

Sounds good, perhaps. I am currently up to my eyeballs in an unrelated
problem, and I won't get to this again until that project is completed
and I have brought the 2TB drive in and configured it for amanda's
use. That will tend to enforce my one-thing-at-a-time-but-do-it-right
bent. :) What I have is working, for a loose definition of working...

But if I allow the 2TB to be unmounted and powered down once
daily, how much of its life would I give up? In other
words, how many start-stop cycles can it survive?

Interesting: I had started a long self-test yesterday, and the reported
hours have wrapped in the report, apparently at 65536 hours. Somebody
apparently didn't expect a drive to last that long? ;-) The drive?
Healthy as can be.

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page http://geneslinuxbox.net:6309/gene
Austin S. Hemmelgarn
Re: power down hard drives
November 13, 2017 09:00AM
On 2017-11-13 11:11, Gene Heskett wrote:
> On Monday 13 November 2017 10:12:47 Austin S. Hemmelgarn wrote:
>
>> On 2017-11-13 09:56, Gene Heskett wrote:
>>> On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:
>>>> On 2017-11-11 01:49, Jon LaBadie wrote:
>>>>> Just a thought. My amanda server has seven hard drives
>>>>> dedicated to saving amanda data. Only 2 are typically
>>>>> used (holding and one vtape drive) during an amdump run.
>>>>> Even then, the usage is only for about 3 hours.
>>>>>
>>>>> So there is a lot of electricity and disk drive wear for
>>>>> inactive drives.
>>>>>
>>>>> Can todays drives be unmounted and powered down then
>>>>> when needed, powered up and mounted again?
>>>>>
>>>>> I'm not talking about system hibernation, the system
>>>>> and its other drives still need to be active.
>>>>>
>>>>> Back when 300GB was a big drive I had 2 of them in
>>>>> external USB housings. They shut themselves down
>>>>> on inactivity. When later accessed, there would
>>>>> be about 5-10 seconds delay while the drive spun
>>>>> up and things proceeded normally.
>>>>>
>>>>> That would be a fine arrangement now if it could
>>>>> be mimiced.
>>>>
>>>> Aside from what Stefan mentioned (using hdparam to set the standby
>>>> timeout, check the man page for hdparam as the numbers are not
>>>> exactly sensible), you may consider looking into auto-mounting each
>>>> of the drives, as that can help eliminate things that would keep
>>>> the drives on-line (or make it more obvious that something is still
>>>> using them).
>>>
>>> I've investigated that, and I have amanda wrapped up in a script
>>> that could do that, but ran into a showstopper I've long since
>>> forgotten about. Al this was back in the time I was writing that
>>> wrapper, years ago now. One of the show stoppers AIR was the fact
>>> that only root can mount and unmount a drive, and my script runs as
>>> amanda.
>>
>> While such a wrapper might work if you use sudo inside it (you can
>> configure sudo to allow root to run things as the amanda user without
>> needing a password, then run the wrapper as root), what I was trying
>> to refer to in a system-agnostic manner (since the exact mechanism is
>> different between different UNIX derivatives) was on-demand
>> auto-mounting, as provided by autofs on Linux or the auto-mount daemon
>> (amd) on BSD. When doing on-demand auto-mounting, you don't need a
>> wrapper at all, as the access attempt will trigger the mount, and then
>> the mount will time out after some period of inactivity and be
>> unmounted again. It's mostly used for network resources (possibly
>> with special auto-lookup mechanisms), as certain protocols (NFS in
>> particular) tend to have issues if the server goes down while a share
>> is mounted remotely, even if nothing is happening on that share, but
>> it works just as well for auto-mounting of local fixed or removable
>> volumes that aren't needed all the time (I use it for a handful of
>> things on my personal systems to minimize idle resource usage).
>
> Sounds good perhaps. I am currently up to my eyeballs in an unrelated
> problem, and I won't get to this again until that project is completed
> and I have brought the 2TB drive in and configured it for amanda's
> usage. That will tend to enforce my one thing at a time but do it right
> bent. :) What I have is working for a loose definition of working...
Yeah, I know what that's like. Prior to switching to amanda where I
worked, we had a home-grown backup system that had all kinds of odd edge
cases I had to make sure never happened. I'm extremely glad we decided
to stop using that, since it means I can now focus on more interesting
problems (in theory at least, we're having an issue with our Amanda
config right now too, but thankfully it's not a huge one).
>
> But if I allow the 2TB to be unmounted and self-powered down, once
> daily, what shortening of its life would I be subjected to? In other
> words, how many start-stop cycles can it survive?
It's hard to be certain. For what it's worth, though, you might want to
test this to be certain it's actually going to save you energy. It
takes a lot of power to get the platters up to speed, but it doesn't
take much to keep them running at that speed. It might be more
advantageous to configure the device to idle (that is, park the heads)
after some timeout and leave the platters spinning instead of spinning
down completely (and it should result in less wear on the spindle
motor).
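The idle-but-still-spinning behavior Austin describes roughly corresponds to hdparm's APM setting. A sketch of the relevant commands, plus a small helper for reading back the power state (/dev/sdb is an example device, and whether a given drive honors APM levels is model-specific):

```shell
#!/bin/sh
# Relevant hdparm commands (need root; device name is an example):
#   hdparm -B 127 /dev/sdb   # APM level <= 127 permits head parking
#                            # without a full spin-down
#   hdparm -y   /dev/sdb     # force the drive into standby (spin down) now
#   hdparm -C   /dev/sdb     # report the current power state

# Helper that extracts the state word from 'hdparm -C' style output,
# e.g. " drive state is:  standby" -> "standby"
drive_state() {
    echo "$1" | awk -F': *' '/drive state is/ {print $2}'
}

drive_state " drive state is:  standby"      # prints "standby"
drive_state " drive state is:  active/idle"  # prints "active/idle"
```

Checking `hdparm -C` before and after the standby timeout elapses is a simple way to confirm the drive is really reaching the state you configured.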
>
> Interesting, I had started a long time test yesterday, and the reported
> hours has wrapped in the report, apparently at 65636 hours. Somebody
> apparently didn't expect a drive to last that long? ;-) The drive?
> Healthy as can be.
That's about 7.48 years, so I can actually somewhat understand not going
past 16 bits for that, since most people don't use a given disk for more
than about 5 years of power-on time before replacing it. However, what
really matters is not how long the device has been powered on but how
much abuse the drive has taken. Running 24/7 for 5 years with no
movement of the system (including nothing like earthquakes), in a
temperature-, humidity-, and pressure-controlled room, will get you near
zero wear on anything in the drive but the bearings and possibly the
heads. In contrast, the same five years of runtime in a laptop that's
being taken all over the place will usually result in a drive with
numerous errors in addition to noticeable mechanical wear.
Gene Heskett
Re: power down hard drives
November 13, 2017 11:00AM
On Monday 13 November 2017 11:40:17 Austin S. Hemmelgarn wrote:

> On 2017-11-13 11:11, Gene Heskett wrote:
> > On Monday 13 November 2017 10:12:47 Austin S. Hemmelgarn wrote:
> >> On 2017-11-13 09:56, Gene Heskett wrote:
> >>> On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:
> >>>> On 2017-11-11 01:49, Jon LaBadie wrote:
> >>>>> Just a thought. My amanda server has seven hard drives
> >>>>> dedicated to saving amanda data. Only 2 are typically
> >>>>> used (holding and one vtape drive) during an amdump run.
> >>>>> Even then, the usage is only for about 3 hours.
> >>>>>
> >>>>> So there is a lot of electricity and disk drive wear for
> >>>>> inactive drives.
> >>>>>
> >>>>> Can todays drives be unmounted and powered down then
> >>>>> when needed, powered up and mounted again?
> >>>>>
> >>>>> I'm not talking about system hibernation, the system
> >>>>> and its other drives still need to be active.
> >>>>>
> >>>>> Back when 300GB was a big drive I had 2 of them in
> >>>>> external USB housings. They shut themselves down
> >>>>> on inactivity. When later accessed, there would
> >>>>> be about 5-10 seconds delay while the drive spun
> >>>>> up and things proceeded normally.
> >>>>>
> >>>>> That would be a fine arrangement now if it could
> >>>>> be mimiced.
> >>>>
> >>>> Aside from what Stefan mentioned (using hdparam to set the
> >>>> standby timeout, check the man page for hdparam as the numbers
> >>>> are not exactly sensible), you may consider looking into
> >>>> auto-mounting each of the drives, as that can help eliminate
> >>>> things that would keep the drives on-line (or make it more
> >>>> obvious that something is still using them).
> >>>
> >>> I've investigated that, and I have amanda wrapped up in a script
> >>> that could do that, but ran into a showstopper I've long since
> >>> forgotten about. Al this was back in the time I was writing that
> >>> wrapper, years ago now. One of the show stoppers AIR was the fact
> >>> that only root can mount and unmount a drive, and my script runs
> >>> as amanda.
> >>
> >> While such a wrapper might work if you use sudo inside it (you can
> >> configure sudo to allow root to run things as the amanda user
> >> without needing a password, then run the wrapper as root), what I
> >> was trying to refer to in a system-agnostic manner (since the exact
> >> mechanism is different between different UNIX derivatives) was
> >> on-demand auto-mounting, as provided by autofs on Linux or the
> >> auto-mount daemon (amd) on BSD. When doing on-demand
> >> auto-mounting, you don't need a wrapper at all, as the access
> >> attempt will trigger the mount, and then the mount will time out
> >> after some period of inactivity and be unmounted again. It's
> >> mostly used for network resources (possibly with special
> >> auto-lookup mechanisms), as certain protocols (NFS in particular)
> >> tend to have issues if the server goes down while a share is
> >> mounted remotely, even if nothing is happening on that share, but
> >> it works just as well for auto-mounting of local fixed or removable
> >> volumes that aren't needed all the time (I use it for a handful of
> >> things on my personal systems to minimize idle resource usage).
> >
> > Sounds good perhaps. I am currently up to my eyeballs in an
> > unrelated problem, and I won't get to this again until that project
> > is completed and I have brought the 2TB drive in and configured it
> > for amanda's usage. That will tend to enforce my one thing at a time
> > but do it right bent. :) What I have is working for a loose
> > definition of working...
>
> Yeah, I know what that's like. Prior to switching to amanda where I
> worked, we had a home-grown backup system that had all kinds of odd
> edge cases I had to make sure never happened. I'm extremely glad we
> decided to stop using that, since it means I can now focus on more
> interesting problems (in theory at least, we're having an issue with
> our Amanda config right now too, but thankfully it's not a huge one).
>
> > But if I allow the 2TB to be unmounted and self-powered down, once
> > daily, what shortening of its life would I be subjected to? In
> > other words, how many start-stop cycles can it survive?
>
> It's hard to be certain. For what it's worth though, you might want
> to test this to be certain that it's actually going to save you
> energy. It takes a lot of power to get the platters up to speed, but
> it doesn't take much to keep them running at that speed. It might be
> more advantageous to just configure the device to idle (that is, park
> the heads) after some time out and leave the platters spinning instead
> of spinning down completely (and it should result in less wear on the
> spindle motor).
>
> > Interesting, I had started a long time test yesterday, and the
> > reported hours has wrapped in the report, apparently at 65636 hours.
> > Somebody apparently didn't expect a drive to last that long? ;-)
> > The drive? Healthy as can be.
>
> That's about 7.48 years, so I can actually somewhat understand not
> going past 16-bits for that since most people don't use a given disk
> for more than about 5 years worth of power-on time before replacing
> it. However, what matters is really not how long the device has been
> powered on, but how much abuse the drive has taken. Running 24/7 for
> 5 years with no movement of the system (including nothing like
> earthquakes), in a temperature, humidity, and pressure controlled room
> will get you near zero wear on anything in the drive but the bearings
> and possibly the heads. In contrast, that same five years of runtime
> in a laptop that's being taken all over the place will usually result
> in a drive that has numerous errors in addition to noticeable
> mechanical wear.

Which makes perfect sense. Although I have a lappy with at least
5,000 miles' worth of hammering in its history, from a twin-piston
pounder hauling me around putting out technical fires at TV stations,
and that 100 GB drive was still doing OK the last time it was powered
up. But not recently; I've only used it a couple of times in the last
year, as a test bed to check out a new distro. It has Mint 14 on it
ATM. Getting old.

Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page http://geneslinuxbox.net:6309/gene
Jon LaBadie
Re: power down hard drives
November 13, 2017 11:07AM
On Mon, Nov 13, 2017 at 11:40:17AM -0500, Austin S. Hemmelgarn wrote:
> On 2017-11-13 11:11, Gene Heskett wrote:
> > On Monday 13 November 2017 10:12:47 Austin S. Hemmelgarn wrote:
> >
> > > On 2017-11-13 09:56, Gene Heskett wrote:
> > > > On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:
> > > > > On 2017-11-11 01:49, Jon LaBadie wrote:
> > > > > > Just a thought. My amanda server has seven hard drives
> > > > > > dedicated to saving amanda data. Only 2 are typically
> > > > > > used (holding and one vtape drive) during an amdump run.
> > > > > > Even then, the usage is only for about 3 hours.
> > > > > >
> > > > > > So there is a lot of electricity and disk drive wear for
> > > > > > inactive drives.
> > > > > >
> > > > > > Can todays drives be unmounted and powered down then
> > > > > > when needed, powered up and mounted again?
> > > > > >
> > > > > > I'm not talking about system hibernation, the system
> > > > > > and its other drives still need to be active.
> > > > > >
> > > > > > Back when 300GB was a big drive I had 2 of them in
> > > > > > external USB housings. They shut themselves down
> > > > > > on inactivity. When later accessed, there would
> > > > > > be about 5-10 seconds delay while the drive spun
> > > > > > up and things proceeded normally.
> > > > > >
> > > > > > That would be a fine arrangement now if it could
> > > > > > be mimicked.
> > > > >
> > > > > Aside from what Stefan mentioned (using hdparm to set the standby
> > > > > timeout, check the man page for hdparm as the numbers are not
> > > > > exactly sensible), you may consider looking into auto-mounting each
> > > > > of the drives, as that can help eliminate things that would keep
> > > > > the drives on-line (or make it more obvious that something is still
> > > > > using them).
> > > >
....
> >
> > But if I allow the 2TB to be unmounted and self-powered down, once
> > daily, what shortening of its life would I be subjected to? In other
> > words, how many start-stop cycles can it survive?
> >
> It's hard to say for certain. For what it's worth, though, you might want
> to test this to be sure that it's actually going to save you energy. It
> takes a lot of power to get the platters up to speed, but it doesn't take
> much to keep them running at that speed. It might be more advantageous to
> just configure the device to idle (that is, park the heads) after a
> timeout and leave the platters spinning instead of spinning down
> completely (and it should result in less wear on the spindle motor).
> >
In my situation, each of the six data drives is only
needed for a 2-week period out of every 12 weeks. Once
shut down, it could stay down for 10 weeks.

Jon
--
Jon H. LaBadie jon@jgcomp.com
11226 South Shore Rd. (703) 787-0688 (H)
Reston, VA 20190 (703) 935-6720 (C)
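Austin's hdparm suggestion above can be sketched in shell; the -S timeout
encoding is the non-obvious part. The helper function and /dev/sdX are
illustrative, not from the thread:

```shell
# hdparm's -S (standby timeout) encoding, per hdparm(8):
#   0       -> timeouts disabled
#   1-240   -> value * 5 seconds
#   241-251 -> (value - 240) * 30 minutes
# Illustrative helper: minutes -> -S value, rounding up to 30-minute units.
s_value_for_minutes() {
  echo $(( 240 + ($1 + 29) / 30 ))
}

S=$(s_value_for_minutes 60)     # 242, i.e. spin down after one hour idle
echo "hdparm -S $S /dev/sdX"    # run as root against the real device
```

For reference, `hdparm -y` forces standby immediately, and `hdparm -C`
reports the current power state without spinning the drive back up.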
Gene Heskett
Re: power down hard drives
November 13, 2017 11:59AM
On Monday 13 November 2017 13:42:13 Jon LaBadie wrote:

> On Mon, Nov 13, 2017 at 11:40:17AM -0500, Austin S. Hemmelgarn wrote:
> > On 2017-11-13 11:11, Gene Heskett wrote:
> > > On Monday 13 November 2017 10:12:47 Austin S. Hemmelgarn wrote:
> > > > On 2017-11-13 09:56, Gene Heskett wrote:
> > > > > On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:
> > > > > > On 2017-11-11 01:49, Jon LaBadie wrote:
>
> ...
>
> > > But if I allow the 2TB to be unmounted and self-powered down,
> > > once daily, what shortening of its life would I be subjected to?
> > > In other words, how many start-stop cycles can it survive?
> >
> > It's hard to be certain. For what it's worth though, you might want
> > to test this to be certain that it's actually going to save you
> > energy. It takes a lot of power to get the platters up to speed,
> > but it doesn't take much to keep them running at that speed. It
> > might be more advantageous to just configure the device to idle
> > (that is, park the heads) after some time out and leave the platters
> > spinning instead of spinning down completely (and it should result
> > in less wear on the spindle motor).
>
> In my situation, each of the six data drives is only
> needed for a 2 week period out of each 12 weeks. Once
> shutdown, it could be down for 10 weeks.
>
> Jon

Which is more than enough time for stiction to appear if the heads are
not parked off disk.


Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page http://geneslinuxbox.net:6309/gene
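Austin's spin-up-versus-idle trade-off (quoted above) can be put in rough
numbers. A back-of-envelope sketch with made-up but plausible figures;
the ~100 J spin-up surcharge and the 5.0 W idle / 0.8 W standby draws are
assumptions, not measurements:

```shell
# Break-even: how long must the drive stay in standby before the energy
# saved (idle draw minus standby draw) repays the extra spin-up energy?
# Integer math in millijoules/milliwatts; all figures are assumptions.
SPINUP_EXTRA_MJ=100000   # ~100 J extra to get the platters up to speed
IDLE_MW=5000             # ~5.0 W spinning idle
STANDBY_MW=800           # ~0.8 W in standby
BREAKEVEN_S=$(( SPINUP_EXTRA_MJ / (IDLE_MW - STANDBY_MW) ))
echo "break-even after ${BREAKEVEN_S}s of standby"
```

With figures anywhere near these, ten weeks of standby is a clear energy
win; the real question Austin raises is mechanical wear, and measuring
your actual drives.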
Jon LaBadie
Re: power down hard drives
November 13, 2017 11:59AM
On Mon, Nov 13, 2017 at 02:04:42PM -0500, Gene Heskett wrote:
> On Monday 13 November 2017 13:42:13 Jon LaBadie wrote:
>
> > On Mon, Nov 13, 2017 at 11:40:17AM -0500, Austin S. Hemmelgarn wrote:
> > > On 2017-11-13 11:11, Gene Heskett wrote:
> > > > On Monday 13 November 2017 10:12:47 Austin S. Hemmelgarn wrote:
> > > > > On 2017-11-13 09:56, Gene Heskett wrote:
> > > > > > On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:
> > > > > > > On 2017-11-11 01:49, Jon LaBadie wrote:
> >
> > ...
> >
> > > > But if I allow the 2TB to be unmounted and self-powered down,
> > > > once daily, what shortening of its life would I be subjected to?
> > > > In other words, how many start-stop cycles can it survive?
> > >
> > > It's hard to be certain. For what it's worth though, you might want
> > > to test this to be certain that it's actually going to save you
> > > energy. It takes a lot of power to get the platters up to speed,
> > > but it doesn't take much to keep them running at that speed. It
> > > might be more advantageous to just configure the device to idle
> > > (that is, park the heads) after some time out and leave the platters
> > > spinning instead of spinning down completely (and it should result
> > > in less wear on the spindle motor).
> >
> > In my situation, each of the six data drives is only
> > needed for a 2 week period out of each 12 weeks. Once
> > shutdown, it could be down for 10 weeks.
> >
> > Jon
>
> Which is more than enough time for stiction to appear if the heads are
> not parked off disk.
>
Don't today's drives automatically park heads?

jl
--
Jon H. LaBadie jon@jgcomp.com
11226 South Shore Rd. (703) 787-0688 (H)
Reston, VA 20190 (703) 935-6720 (C)
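On Gene's start-stop question above: SMART keeps a running tally that can
be compared against the drive's rating. A sketch; the attribute names vary
by vendor, and /dev/sdb and the 50,000-cycle rating are illustrative:

```shell
# Show the cycle counters SMART has logged so far (needs smartmontools).
command -v smartctl >/dev/null &&
  smartctl -A /dev/sdb | grep -E 'Start_Stop_Count|Load_Cycle_Count' || true

# Desktop drives are often rated on the order of 50,000 start/stop
# cycles; one extra spin-down per daily amdump run is ~365 cycles/year.
RATED=50000
PER_YEAR=365
echo "years to exhaust the rating: $(( RATED / PER_YEAR ))"
```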
Austin S. Hemmelgarn
Re: power down hard drives
November 13, 2017 01:00PM
On 2017-11-13 14:51, Jon LaBadie wrote:
> On Mon, Nov 13, 2017 at 02:04:42PM -0500, Gene Heskett wrote:
>> On Monday 13 November 2017 13:42:13 Jon LaBadie wrote:
>>
>>> On Mon, Nov 13, 2017 at 11:40:17AM -0500, Austin S. Hemmelgarn wrote:
>>>> On 2017-11-13 11:11, Gene Heskett wrote:
>>>>> On Monday 13 November 2017 10:12:47 Austin S. Hemmelgarn wrote:
>>>>>> On 2017-11-13 09:56, Gene Heskett wrote:
>>>>>>> On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:
>>>>>>>> On 2017-11-11 01:49, Jon LaBadie wrote:
>>>
>>> ...
>>>
>>>>> But if I allow the 2TB to be unmounted and self-powered down,
>>>>> once daily, what shortening of its life would I be subjected to?
>>>>> In other words, how many start-stop cycles can it survive?
>>>>
>>>> It's hard to be certain. For what it's worth though, you might want
>>>> to test this to be certain that it's actually going to save you
>>>> energy. It takes a lot of power to get the platters up to speed,
>>>> but it doesn't take much to keep them running at that speed. It
>>>> might be more advantageous to just configure the device to idle
>>>> (that is, park the heads) after some time out and leave the platters
>>>> spinning instead of spinning down completely (and it should result
>>>> in less wear on the spindle motor).
>>>
>>> In my situation, each of the six data drives is only
>>> needed for a 2 week period out of each 12 weeks. Once
>>> shutdown, it could be down for 10 weeks.
>>>
>>> Jon
>>
>> Which is more than enough time for stiction to appear if the heads are
>> not parked off disk.
>>
> Don't today's drives automatically park heads?
I don't think there were ever any (at least, not ATA or SAS) that didn't
park when they went into standby. In fact, I've never seen a modern-style
hard disk with 'voice coil' actuators that didn't automatically park the
heads (and part of my job is tearing apart old hard drives prior to
physical media destruction, so I've seen my fair share of them).
Gene Heskett
Re: power down hard drives
November 13, 2017 01:02PM
On Monday 13 November 2017 14:51:59 Jon LaBadie wrote:

> On Mon, Nov 13, 2017 at 02:04:42PM -0500, Gene Heskett wrote:
> > On Monday 13 November 2017 13:42:13 Jon LaBadie wrote:
> > > On Mon, Nov 13, 2017 at 11:40:17AM -0500, Austin S. Hemmelgarn wrote:
> > > > On 2017-11-13 11:11, Gene Heskett wrote:
> > > > > On Monday 13 November 2017 10:12:47 Austin S. Hemmelgarn wrote:
> > > > > > On 2017-11-13 09:56, Gene Heskett wrote:
> > > > > > > On Monday 13 November 2017 07:19:45 Austin S. Hemmelgarn wrote:
> > > > > > > > On 2017-11-11 01:49, Jon LaBadie wrote:
> > >
> > > ...
> > >
> > > > > But if I allow the 2TB to be unmounted and self-powered down,
> > > > > once daily, what shortening of its life would I be subjected
> > > > > to? In other words, how many start-stop cycles can it survive?
> > > >
> > > > It's hard to be certain. For what it's worth though, you might
> > > > want to test this to be certain that it's actually going to save
> > > > you energy. It takes a lot of power to get the platters up to
> > > > speed, but it doesn't take much to keep them running at that
> > > > speed. It might be more advantageous to just configure the
> > > > device to idle (that is, park the heads) after some time out and
> > > > leave the platters spinning instead of spinning down completely
> > > > (and it should result in less wear on the spindle motor).
> > >
> > > In my situation, each of the six data drives is only
> > > needed for a 2 week period out of each 12 weeks. Once
> > > shutdown, it could be down for 10 weeks.
> > >
> > > Jon
> >
> > Which is more than enough time for stiction to appear if the heads
> > are not parked off disk.
>
> Don't today's drives automatically park heads?
>
> jl
Some may, but with improved control over surface smoothness, I suspect
few do. In those I've taken apart, I have yet to find a parking ramp
that lifted the heads clear of the platter when they were brought to
the edge of the disk.


Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page http://geneslinuxbox.net:6309/gene
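Pulling the thread's suggestions together, Jon's two-weeks-on,
ten-weeks-off rotation could be scripted roughly as below. The labels
and mount points are invented for illustration:

```shell
# Take a vtape drive offline for its idle weeks, and bring it back.
# Function definitions only; nothing touches hardware until called (as root).
offline_vtape() {
  umount /amanda/vtape2 &&
  hdparm -y /dev/disk/by-label/vtape2   # force standby now
}

online_vtape() {
  mount /dev/disk/by-label/vtape2 /amanda/vtape2
  # the first access spins the drive up; expect a few seconds' delay
}
```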