
Zoltan Forray
Bottomless pit
February 25, 2019 11:59AM
Here is a new one.......

We turned off backing up SystemState last week. Now I am going through and
deleting the SystemState filespaces.

Since I wanted to see how many objects would be deleted, I did a "Q
OCCUPANCY" and preserved the file count numbers for all Windows nodes on
this server.
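One way to preserve those counts in a comparable form is to keep a comma-delimited extract of the OCCUPANCY table (dsmadmc's -dataonly=yes and -commadelimited options can produce one). A minimal sketch — the rows, node names, and numbers below are hypothetical, just to show the tallying:

```python
# Tally NUM_FILES per node from a saved comma-delimited occupancy extract,
# so the counts can be compared against what DELETE FILESPACE later reports.
import csv
import io

# Hypothetical rows: node name, filespace name, object count.
sample = """\
ORION-POLL-WEST,\\SystemState\\NULL\\System State\\SystemState,5012344
ORION-POLL-W2,\\SystemState\\NULL\\System State\\SystemState,5890123
"""

counts = {}
for node, filespace, num_files in csv.reader(io.StringIO(sample)):
    counts[node] = counts.get(node, 0) + int(num_files)

print(counts["ORION-POLL-WEST"])  # 5012344
```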

For 4 nodes, the deletes of their SystemState filespaces have been running
for 5 hours. A "Q PROC" shows:

2019-02-25 08:52:05 Deleting file space
ORION-POLL-WEST\SystemState\NULL\System State\SystemState (fsId=1) (backup
data) for node ORION-POLL-WEST: *105,511,859 objects deleted*.

Considering the occupancy for this node was *~5 million objects*, how has
it deleted *105 million* objects (and counting)? The other 3 nodes in
question are also up to *>100 million objects deleted*, and none of them had
more than *6M objects* in occupancy.

At this rate, the deleted-object count for these 4 nodes' SystemState
filespaces will exceed 50% of the total occupancy object count on this
server, which houses the backups for *263 nodes*.

I vaguely remember some bug/APAR about SystemState backups being
large/slow/causing performance problems with expiration, but these nodes'
client levels are fairly current (8.1.0.2, staying below the 8.1.2 SSL/TLS
enforcement levels) and the ISP server is 7.1.7.400. All of these are
Windows 2016, if that matters.

--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zforray@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://phishing.vcu.edu/
This message was imported via the External PhorumMail Module
Vandeventer, Harold [OITS]
Re: Bottomless pit
February 25, 2019 12:59PM
You're seeing the same issue I've seen.

Q OCCUP values for SystemState seem to have no relationship to the number that are eventually deleted.

The value on Q OCCUP is always much lower than what is reported in Q PROC.

And, deleting system state takes a LONG time.

Sasa Drnjevic
Re: Bottomless pit
February 25, 2019 01:59PM
FYI,
same here...but my range/ratio was:

~2 mil occ to 25 mil deleted objects...

Never solved the mystery... gave up :->


--
Sasa Drnjevic
www.srce.unizg.hr/en/
Zoltan Forray
Re: Bottomless pit
February 26, 2019 05:59AM
Thanks for the confirmation that I am not the only one seeing it and
wondering what is going on. FWIW, the expirations all failed/crashed with a
strange "Unexpected error 4522 fetching row in table" message (for table
"Backup.Objects" or "Filespaces"). The last "q proc" I recorded:

2,325 DELETE FILESPACE Deleting file space
ORION-POLL-W2\SystemState\NULL\System State\SystemState (fsId=1) (backup
data) for node ORION-POLL-W2: 119,442,593 objects deleted.

2,326 DELETE FILESPACE Deleting file space
ORION-POLL-E2\SystemState\NULL\System State\SystemState (fsId=1) (backup
data) for node ORION-POLL-E2: 116,621,727 objects deleted.


Then I see this in the logs:

2/25/2019 3:07:29 PM ANR1893E Process 2324 for DELETE FILESPACE completed
with a completion state of FAILURE.
2/25/2019 3:32:53 PM ANR0106E imfs.c(8340): Unexpected error 4522 fetching
row in table "Filespaces".
2/25/2019 3:32:53 PM ANR0106E imfsdel.c(2723): Unexpected error 4522
fetching row in table "Backup.Objects".
2/25/2019 3:32:53 PM ANR1893E Process 2325 for DELETE FILESPACE completed
with a completion state of FAILURE.
2/25/2019 4:29:26 PM ANR0106E imfsdel.c(2723): Unexpected error 4522
fetching row in table "Backup.Objects".
2/25/2019 4:29:26 PM ANR1893E Process 2326 for DELETE FILESPACE completed
with a completion state of FAILURE.
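Matching the failed process numbers to the ANR0106E fetch errors is easier with a small filter over a saved activity-log extract. A sketch, using lines abbreviated from the log above:

```python
# Pull the failed DELETE FILESPACE process numbers (ANR1893E) out of saved
# activity-log text, so each failure can be matched to its ANR0106E error.
import re

log = """\
2/25/2019 3:07:29 PM ANR1893E Process 2324 for DELETE FILESPACE completed with a completion state of FAILURE.
2/25/2019 3:32:53 PM ANR0106E imfs.c(8340): Unexpected error 4522 fetching row in table "Filespaces".
2/25/2019 3:32:53 PM ANR1893E Process 2325 for DELETE FILESPACE completed with a completion state of FAILURE.
"""

failed_procs = [int(m.group(1)) for m in re.finditer(r"ANR1893E Process (\d+)", log)]
print(failed_procs)  # [2324, 2325]
```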


Zoltan Forray
Re: Bottomless pit
February 26, 2019 05:59AM
Oops - meant to say "the deletions all failed" - not expirations. Now I
get to try them again.....

Zoltan Forray
Re: Bottomless pit
February 26, 2019 06:59AM
Since all of these SystemState deletes crashed/failed, I restarted them,
and 2 of the 3 are already up to 5M objects after running for 30 minutes.
Will this ever end successfully?

Sasa Drnjevic
Re: Bottomless pit
February 26, 2019 06:59AM
On 26.2.2019. 15:01, Zoltan Forray wrote:
> Since all of these systemstate deletes crashed/failed, I restarted them and
> 2-of the 3 are already up to 5M objects after running for 30-minutes. Will
> this ever end successfully?

All of mine did finish successfully...

But, none of them had more than 25 mil files deleted.

Wish you luck ;-)

Rgds,

--
Sasa Drnjevic
www.srce.unizg.hr/en/
Zoltan Forray
Re: Bottomless pit
February 26, 2019 07:59AM
Just found another node with a similar issue on a different ISP server with
different software levels (client 7.1.4.4, OS Windows 2012R2). The node
name is similar, so I think the application is, as well.

2019-02-26 08:57:56 Deleting file space ORIONADDWEB\SystemState\NULL\System
State\SystemState (fsId=1) (backup data) for node ORIONADDWEB: *129,785,134
objects deleted*.


Andrew Raibeck
Re: Bottomless pit
February 26, 2019 08:59AM
Hi Zoltan,

The large number of objects is normal for system state file spaces. System
state backup uses grouping, with each backed up object being a member of
the group. If the same object is included in multiple groups, then it will
be counted more than once. Each system state backup creates a new group, so
as the number of retained backup versions grows, so does the number of
groups, and thus the total object count can grow very large.
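That multiplication can be sketched numerically. All numbers below are hypothetical (including the assumption that occupancy counts each stored object once while deletion counts each group membership), chosen only to show how a ~5 million object occupancy could yield a ~100 million deletion count:

```python
# Hypothetical model of group-member counting for system state backups:
# Q OCCUPANCY is assumed to report each stored object once, while
# DELETE FILESPACE reports each object once per backup group it belongs to.
unique_objects = 5_000_000   # roughly what Q OCCUPANCY showed (assumption)
retained_groups = 21         # hypothetical number of retained system state backups
shared_fraction = 0.95       # assumed fraction of objects unchanged across backups

shared = int(unique_objects * shared_fraction)   # members of every retained group
unique_to_one = unique_objects - shared          # members of a single group only
memberships = shared * retained_groups + unique_to_one
print(memberships)  # 100000000 group memberships for the delete to report
```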

Best regards,

Andy

____________________________________________________________________________

Andrew Raibeck | IBM Spectrum Protect Level 3 | storman@us.ibm.com

"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2019-02-26
10:30:04:

> From: Zoltan Forray <zforray@VCU.EDU>
> To: ADSM-L@VM.MARIST.EDU
> Date: 2019-02-26 10:30
> Subject: Re: Bottomless pit
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
> Just found another node with a similar issue on a different ISP server
with
> different software levels (client=7.1.4.4 and OS=Windows 2012R2). The
node
> name is the same so I think the application is, as well.
>
> 2019-02-26 08:57:56 Deleting file space ORIONADDWEB\SystemState\NULL
\System
> State\SystemState (fsId=1) (backup data) for node ORIONADDWEB:
*129,785,134
> objects deleted*.
>
>
> On Tue, Feb 26, 2019 at 9:15 AM Sasa Drnjevic <Sasa.Drnjevic@srce.hr>
wrote:
>
> > On 26.2.2019. 15:01, Zoltan Forray wrote:
> > > Since all of these systemstate deletes crashed/failed, I restarted
them
> > and
> > > 2-of the 3 are already up to 5M objects after running for 30-minutes.
> > Will
> > > this ever end successfully?
> >
> > All of mine did finish successfully...
> >
> > But, none of them had more than 25 mil files deleted.
> >
> > Wish you luck ;-)
> >
> > Rgds,
> >
> > --
> > Sasa Drnjevic
> > www.srce.unizg.hr/en/
> >
> >
> >
> >
> >
> >
> > > On Mon, Feb 25, 2019 at 4:25 PM Sasa Drnjevic <Sasa.Drnjevic@srce.hr>
> > wrote:
> > >
> > >> FYI,
> > >> same here...but my range/ratio was:
> > >>
> > >> ~2 mil occ to 25 mil deleted objects...
> > >>
> > >> Never solved the mystery... gave up :->
> > >>
> > >>
> > >> --
> > >> Sasa Drnjevic
> > >> www.srce.unizg.hr/en/
> > >>
> > >>
> > >>
> > >>
> > >> On 2019-02-25 20:05, Zoltan Forray wrote:
> > >>> Here is a new one.......
> > >>>
> > >>> We turned off backing up SystemState last week. Now I am going
through
> > >> and
> > >>> deleted the Systemstate filesystems.
> > >>>
> > >>> Since I wanted to see how many objects would be deleted, I did a "Q
Zoltan Forray
Re: Bottomless pit
February 26, 2019 09:59AM
Hi Andy,

Thank you for clarifying things, a bit. However, why would certain
nodes have enormously large numbers versus the average, when all things are
equal as far as policies are concerned?

I do see average systemstate object delete counts in the *1-2M range*, but
these 4 nodes are exceeding *200M* each. On this server, I deleted the
systemstate for 60 nodes. Only 9 exceeded 1M, and of those, only 1 exceeded 2M.

The last node deletion is still running after 3 hours:

2019-02-26 08:57:56 Deleting file space ORIONADDWEB\SystemState\NULL\System
State\SystemState (fsId=1) (backup data) for node ORIONADDWEB: 243,029,858
objects deleted.

Even the 3 nodes whose deletions failed (after deleting >110M objects each)
due to some kind of bitfile error are, after being restarted, back up to
50M+ objects each and still running.
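As a cross-check on those counts, the per-filespace numbers can be pulled
straight from the server's SQL interface rather than scraped from Q OCCUPANCY
output. This is only a sketch, assuming the standard OCCUPANCY table columns
and an administrative ID with query authority:

```
dsmadmc -id=admin -password=xxxxx -dataonly=yes \
  "SELECT node_name, filespace_name, SUM(num_files) \
     FROM occupancy \
    WHERE filespace_name LIKE '%SystemState%' \
    GROUP BY node_name, filespace_name"
```

Keep in mind that occupancy counts each stored object once, while the DELETE
FILESPACE counter ticks once per group-member reference across every retained
system-state backup group, so the delete counter can legitimately run far past
the occupancy figure.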


On Tue, Feb 26, 2019 at 11:30 AM Andrew Raibeck <storman@us.ibm.com> wrote:

> Hi Zoltan,
>
> The large number of objects is normal for system state file spaces. System
> state backup uses grouping, with each backed up object being a member of
> the group. If the same object is included in multiple groups, then it will
> be counted more than once. Each system state backup creates a new group, so
> as the number of retained backup versions grows, so does the number of
> groups, and thus the total object count can grow very large.
>
> Best regards,
>
> Andy
>
>
> ____________________________________________________________________________
>
> Andrew Raibeck | IBM Spectrum Protect Level 3 | storman@us.ibm.com
>
> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2019-02-26
> 10:30:04:
>
> > From: Zoltan Forray <zforray@VCU.EDU>
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 2019-02-26 10:30
> > Subject: Re: Bottomless pit
> > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> >
> > Just found another node with a similar issue on a different ISP server
> with
> > different software levels (client=7.1.4.4 and OS=Windows 2012R2). The
> node
> > name is the same so I think the application is, as well.
> >
> > 2019-02-26 08:57:56 Deleting file space ORIONADDWEB\SystemState\NULL
> \System
> > State\SystemState (fsId=1) (backup data) for node ORIONADDWEB:
> *129,785,134
> > objects deleted*.
> >
> >
> > On Tue, Feb 26, 2019 at 9:15 AM Sasa Drnjevic <Sasa.Drnjevic@srce.hr>
> wrote:
> >
> > > On 26.2.2019. 15:01, Zoltan Forray wrote:
> > > > Since all of these systemstate deletes crashed/failed, I restarted
> them
> > > and
> > > > 2-of the 3 are already up to 5M objects after running for 30-minutes.
> > > Will
> > > > this ever end successfully?
> > >
> > > All of mine did finish successfully...
> > >
> > > But, none of them had more than 25 mil files deleted.
> > >
> > > Wish you luck ;-)
> > >
> > > Rgds,
> > >
> > > --
> > > Sasa Drnjevic
> > > www.srce.unizg.hr/en/
> > >
> > >
> > >
> > >
> > >
> > >
> > > > On Mon, Feb 25, 2019 at 4:25 PM Sasa Drnjevic <Sasa.Drnjevic@srce.hr
> >
> > > wrote:
> > > >
> > > >> FYI,
> > > >> same here...but my range/ratio was:
> > > >>
> > > >> ~2 mil occ to 25 mil deleted objects...
> > > >>
> > > >> Never solved the mystery... gave up :->
> > > >>
> > > >>
> > > >> --
> > > >> Sasa Drnjevic
> > > >> www.srce.unizg.hr/en/


Andrew Raibeck
Re: Bottomless pit
February 26, 2019 09:59AM
Hi Zoltan,

What policy was system state bound to for the nodes that exceed 200 million
objects? Is it possible that the number of versions retained was very
large?

Another thought is whether the OS of each of these nodes might be affected
by this issue:

https://social.technet.microsoft.com/Forums/en-US/e2632b6e-76b9-4640-85b9-698fb55199d8/cprogramdatamicrosoftcryptorsamachinekeys-is-filling-my-disk-space?forum=winservergen

We have seen huge system state backups occur due to that issue, so if it
occurred on those nodes, that is another possible explanation.
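A quick way to check for that condition on a suspect node is to count the
files in the MachineKeys directory. A hypothetical PowerShell one-liner (the
path is the one from the forum link above; an affected machine can show file
counts in the hundreds of thousands or more):

```
(Get-ChildItem 'C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys' -Force |
    Measure-Object).Count
```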

Regards,

Andy

____________________________________________________________________________

Andrew Raibeck | IBM Spectrum Protect Level 3 | storman@us.ibm.com

Zoltan Forray
Re: Bottomless pit
February 26, 2019 09:59AM
Hi Andy,

We do not have any policy/management class with more than 5 versions
across our complex, and retain-extra versions is 30 days. I do not know what
these servers/applications do, so I will ask about the Crypto Keys issue you
referred to in the link.

For the first 3 nodes with this issue, the client is 8.1.0.2 and the OS is
Windows 2016. The one that just finished successfully is Windows 2012R2
with client 7.1.4.4:

2/26/2019 12:40:13 PM ANR0806I The ORIONADDWEB\SystemState\NULL\System
State\SystemState (fsId=1) file space was deleted for node ORIONADDWEB:
300,346,169 objects were deleted.
2/26/2019 12:40:13 PM ANR0987I Process 1382 for DELETE FILESPACE running in
the BACKGROUND processed 300,346,169 items with a completion state of
SUCCESS at 12:40:09 PM.
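For the record, the retention settings for a backup copy group can be
confirmed from the administrative command line. A sketch; substitute your
own domain, policy set, and management class names:

```
query copygroup STANDARD ACTIVE STANDARD type=backup format=detailed
```

The "Versions Data Exists" and "Retain Extra Versions" fields in the detailed
output are what bound how many system-state backup groups the server keeps
per node.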




Zoltan Forray
Re: Bottomless pit
February 26, 2019 10:59AM
I did some research, and the application on these servers is called
SolarWinds.

On Tue, Feb 26, 2019 at 12:19 PM Andrew Raibeck <storman@us.ibm.com> wrote:

> Hi Zoltan,
>
> What policy was system state bound to for the nodes that exceed 200 million
> objects? Is it possible that the number of versions retained was very
> large?
>
> Another thought is whether the OS of each of these nodes might be affected
> by this issue:
>
>
> https://social.technet.microsoft.com/Forums/en-US/e2632b6e-76b9-4640-85b9-698fb55199d8/cprogramdatamicrosoftcryptorsamachinekeys-is-filling-my-disk-space?forum=winservergen
>
> We have seen huge system state backups that can occur due to that issue, so
> if it occurred for those nodes, that is another explanation.
>
> Regards,
>
> Andy
>
>
> ____________________________________________________________________________
>
> Andrew Raibeck | IBM Spectrum Protect Level 3 | storman@us.ibm.com
>
> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2019-02-26
> 11:59:43:
>
> > From: Zoltan Forray <zforray@VCU.EDU>
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 2019-02-26 12:01
> > Subject: Re: Bottomless pit
> > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> >
> > Hi Andy,
> >
> > Thank you for clarifying things - a bit. However, why would certain
> > nodes have enormously large numbers vs the average when all things are
> > equal as far as policies are concerned?
> >
> > I do see average systemstate object delete counts in the *1-2M range* but
> > these 4-nodes are exceeding *200M* each. On this server, I deleted the
> > systemstate for 60-nodes. Only 9-exceeded 1M and of those 1-exceeded 2M.
> >
> > This last node deletion is still running after 3-hours of deletion.
> >
> > 2019-02-26 08:57:56 Deleting file space ORIONADDWEB\SystemState\NULL\System
> > State\SystemState (fsId=1) (backup data) for node ORIONADDWEB:
> > 243,029,858 objects deleted.
> >
> > Even the 3-nodes whose deletion failed (after deleting >110M objects each)
> > due to some kind of bitfile error - after restarting the deletions are up
> > to 50M+ each and still running?
> >
> >
> > On Tue, Feb 26, 2019 at 11:30 AM Andrew Raibeck <storman@us.ibm.com>
> wrote:
> >
> > > Hi Zoltan,
> > >
> > > The large number of objects is normal for system state file spaces.
> > > System state backup uses grouping, with each backed up object being a
> > > member of the group. If the same object is included in multiple groups,
> > > then it will be counted more than once. Each system state backup creates
> > > a new group, so as the number of retained backup versions grows, so does
> > > the number of groups, and thus the total object count can grow very large.
> > >
> > > Best regards,
> > >
> > > Andy
> > >
> > >
> > >
>
> ____________________________________________________________________________
>
> > >
> > > Andrew Raibeck | IBM Spectrum Protect Level 3 | storman@us.ibm.com
> > >
> > > "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2019-02-26
> > > 10:30:04:
> > >
> > > > From: Zoltan Forray <zforray@VCU.EDU>
> > > > To: ADSM-L@VM.MARIST.EDU
> > > > Date: 2019-02-26 10:30
> > > > Subject: Re: Bottomless pit
> > > > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> > > >
> > > > Just found another node with a similar issue on a different ISP server
> > > > with different software levels (client=7.1.4.4 and OS=Windows 2012R2).
> > > > The node name is the same so I think the application is, as well.
> > > >
> > > > 2019-02-26 08:57:56 Deleting file space ORIONADDWEB\SystemState\NULL\System
> > > > State\SystemState (fsId=1) (backup data) for node ORIONADDWEB:
> > > > *129,785,134 objects deleted*.
> > > >
> > > >
> > > > On Tue, Feb 26, 2019 at 9:15 AM Sasa Drnjevic <Sasa.Drnjevic@srce.hr>
> > > > wrote:
> > > >
> > > > > On 26.2.2019. 15:01, Zoltan Forray wrote:
> > > > > > Since all of these systemstate deletes crashed/failed, I restarted
> > > > > > them and 2-of the 3 are already up to 5M objects after running for
> > > > > > 30-minutes. Will this ever end successfully?
> > > > >
> > > > > All of mine did finish successfully...
> > > > >
> > > > > But, none of them had more than 25 mil files deleted.
> > > > >
> > > > > Wish you luck ;-)
> > > > >
> > > > > Rgds,
> > > > >
> > > > > --
> > > > > Sasa Drnjevic
> > > > > www.srce.unizg.hr/en/
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > > On Mon, Feb 25, 2019 at 4:25 PM Sasa Drnjevic <Sasa.Drnjevic@srce.hr>
> > > > > > wrote:
> > > > > >
> > > > > >> FYI,
> > > > > >> same here...but my range/ratio was:
> > > > > >>
> > > > > >> ~2 mil occ to 25 mil deleted objects...
> > > > > >>
> > > > > >> Never solved the mystery... gave up :->
> > > > > >>
> > > > > >>
> > > > > >> --
> > > > > >> Sasa Drnjevic
> > > > > >> www.srce.unizg.hr/en/
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >>
This message was imported via the External PhorumMail Module
Andrew Raibeck
Re: Bottomless pit
February 26, 2019 10:59AM
Hi Zoltan,

There is nothing in the client code you are using that would cause
excessive backups. If you happen to have backup logs going back far enough,
those might show an excessive number of system state objects. Normally I
would expect the number of backup objects N to be 90000 < N < 150000. That
is a rough estimate that depends on the individual system, and N could be a
little higher... but my radar would certainly be alerted if it were more
than 200,000 and growing daily.

Regards,

Andy

____________________________________________________________________________

Andrew Raibeck | IBM Spectrum Protect Level 3 | storman@us.ibm.com

"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2019-02-26
12:52:27:

> From: Zoltan Forray <zforray@VCU.EDU>
> To: ADSM-L@VM.MARIST.EDU
> Date: 2019-02-26 12:52
> Subject: Re: Bottomless pit
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
> Hi Andy,
>
> We do not have any policy/managementclass with versions higher than 5
> across our complex and retain-extra copies is 30-days. I do not know what
> these servers/applications do so I will ask about the Crypto Keys you
> referred to in the link.
>
> > For the first 3-with this issue, the client is 8.1.0.2 and the OS is
> > Windows 2016. The current one that just successfully finished, is Windows
> > 2012R2 and the client is 7.1.4.4.
>
> 2/26/2019 12:40:13 PM ANR0806I The ORIONADDWEB\SystemState\NULL\System
> State\SystemState (fsId=1) file space was deleted for node ORIONADDWEB:
> 300,346,169 objects were deleted.
> > 2/26/2019 12:40:13 PM ANR0987I Process 1382 for DELETE FILESPACE running
> > in the BACKGROUND processed 300,346,169 items with a completion state of
> > SUCCESS at 12:40:09 PM.
This message was imported via the External PhorumMail Module
Zoltan Forray
Re: Bottomless pit
February 26, 2019 10:59AM
Hi Andy,

Here are some end-of-session statistics for the node whose 300M systemstate
objects I just deleted. Note: 02/20/2019 was when we pushed down "DOMAIN
ALL-LOCAL -SYSTEMSTATE" via CLOPT, and the nightly object counts dropped
from 400K+ to ~120.

02/12/2019 06:08:46 ANE4954I (Session: 77016, Node: ORIONADDWEB) Total
number of objects backed up: 440,704 (SESSION: 77016)
02/13/2019 06:14:55 ANE4954I (Session: 82515, Node: ORIONADDWEB) Total
number of objects backed up: 441,334 (SESSION: 82515)
02/14/2019 06:23:42 ANE4954I (Session: 86972, Node: ORIONADDWEB) Total
number of objects backed up: 447,772 (SESSION: 86972)
02/15/2019 06:08:11 ANE4954I (Session: 91270, Node: ORIONADDWEB) Total
number of objects backed up: 444,531 (SESSION: 91270)
02/16/2019 06:19:17 ANE4954I (Session: 95112, Node: ORIONADDWEB) Total
number of objects backed up: 443,944 (SESSION: 95112)
02/17/2019 06:05:30 ANE4954I (Session: 99758, Node: ORIONADDWEB) Total
number of objects backed up: 443,590 (SESSION: 99758)
02/18/2019 06:18:36 ANE4954I (Session: 103929, Node: ORIONADDWEB) Total
number of objects backed up: 444,178 (SESSION: 103929)
02/19/2019 06:23:14 ANE4954I (Session: 108011, Node: ORIONADDWEB) Total
number of objects backed up: 444,760 (SESSION: 108011)
02/20/2019 06:09:34 ANE4954I (Session: 112209, Node: ORIONADDWEB) Total
number of objects backed up: 445,331 (SESSION: 112209)
02/21/2019 05:03:00 ANE4954I (Session: 116471, Node: ORIONADDWEB) Total
number of objects backed up: 124 (SESSION: 116471)
02/22/2019 05:02:21 ANE4954I (Session: 120825, Node: ORIONADDWEB) Total
number of objects backed up: 129 (SESSION: 120825)
02/23/2019 05:18:47 ANE4954I (Session: 124689, Node: ORIONADDWEB) Total
number of objects backed up: 125 (SESSION: 124689)
02/24/2019 05:11:25 ANE4954I (Session: 127775, Node: ORIONADDWEB) Total
number of objects backed up: 135 (SESSION: 127775)
02/25/2019 05:21:58 ANE4954I (Session: 131511, Node: ORIONADDWEB) Total
number of objects backed up: 168 (SESSION: 131511)
02/26/2019 05:02:19 ANE4954I (Session: 135496, Node: ORIONADDWEB) Total
number of objects backed up: 123 (SESSION: 135496)

Here are all of the stats for 02/12/2019
02/12/2019 06:07:49 ANE4940I (Session: 77025, Node: ORIONADDWEB) Backing
up object 'SystemState' component 'System State' using shadow copy.
(SESSION: 77025)
02/12/2019 06:07:49 ANE4182I (Session: 77025, Node: ORIONADDWEB)
AS_ENTITY='ORIONADDWEB' (SESSION: 77025)
02/12/2019 06:07:49 ANE4183I (Session: 77025, Node: ORIONADDWEB)
SUB_ENTITY='System State' (SESSION: 77025)
02/12/2019 06:07:49 ANE4184I (Session: 77025, Node: ORIONADDWEB)
ACTIVITY_TYPE='Full' (SESSION: 77025)
02/12/2019 06:07:49 ANE4181I (Session: 77025, Node: ORIONADDWEB)
ACTIVITY_DETAILS='SystemState' (SESSION: 77025)
02/12/2019 06:07:49 ANE4186I (Session: 77025, Node: ORIONADDWEB)
ENTITY='ORIONADDWEB' (SESSION: 77025)
02/12/2019 06:07:49 ANE4941I (Session: 77025, Node: ORIONADDWEB) Backup of
object 'SystemState' component 'System State' finished successfully.
(SESSION: 77025)
02/12/2019 06:08:46 ANE4952I (Session: 77016, Node: ORIONADDWEB) Total
number of objects inspected: 761,570 (SESSION: 77016)
02/12/2019 06:08:46 ANE4951I (Session: 77016, Node: ORIONADDWEB) Total
number of objects assigned: 203,773 (SESSION: 77016)
02/12/2019 06:08:46 ANE4954I (Session: 77016, Node: ORIONADDWEB) Total
number of objects backed up: 440,704 (SESSION: 77016)
02/12/2019 06:08:46 ANE4958I (Session: 77016, Node: ORIONADDWEB) Total
number of objects updated: 0 (SESSION: 77016)
02/12/2019 06:08:46 ANE4960I (Session: 77016, Node: ORIONADDWEB) Total
number of objects rebound: 0 (SESSION: 77016)
02/12/2019 06:08:46 ANE4957I (Session: 77016, Node: ORIONADDWEB) Total
number of objects deleted: 0 (SESSION: 77016)
02/12/2019 06:08:46 ANE4970I (Session: 77016, Node: ORIONADDWEB) Total
number of objects expired: 3 (SESSION: 77016)
02/12/2019 06:08:46 ANE4959I (Session: 77016, Node: ORIONADDWEB) Total
number of objects failed: 0 (SESSION: 77016)
02/12/2019 06:08:46 ANE4197I (Session: 77016, Node: ORIONADDWEB) Total
number of objects encrypted: 0 (SESSION: 77016)
02/12/2019 06:08:46 ANE4965I (Session: 77016, Node: ORIONADDWEB) Total
number of subfile objects: 0 (SESSION: 77016)
02/12/2019 06:08:46 ANE4914I (Session: 77016, Node: ORIONADDWEB) Total
number of objects grew: 0 (SESSION: 77016)
02/12/2019 06:08:46 ANE4916I (Session: 77016, Node: ORIONADDWEB) Total
number of retries: 0 (SESSION: 77016)
02/12/2019 06:08:46 ANE4977I (Session: 77016, Node: ORIONADDWEB) Total
number of bytes inspected: 41.60 GB (SESSION: 77016)
02/12/2019 06:08:46 ANE4961I (Session: 77016, Node: ORIONADDWEB) Total
number of bytes transferred: 1.26 GB (SESSION: 77016)
02/12/2019 06:08:46 ANE4963I (Session: 77016, Node: ORIONADDWEB) Data
transfer time: 3.74 sec (SESSION: 77016)
02/12/2019 06:08:46 ANE4966I (Session: 77016, Node: ORIONADDWEB) Network
data transfer rate: 353,277.35 KB/sec (SESSION: 77016)
02/12/2019 06:08:46 ANE4967I (Session: 77016, Node: ORIONADDWEB) Aggregate
data transfer rate: 329.82 KB/sec (SESSION: 77016)
02/12/2019 06:08:46 ANE4968I (Session: 77016, Node: ORIONADDWEB) Objects
compressed by: 0% (SESSION: 77016)
02/12/2019 06:08:46 ANE4976I (Session: 77016, Node: ORIONADDWEB) Total
data reduction ratio: 96.97% (SESSION: 77016)
02/12/2019 06:08:46 ANE4969I (Session: 77016, Node: ORIONADDWEB) Subfile
objects reduced by: 0% (SESSION: 77016)
02/12/2019 06:08:46 ANE4964I (Session: 77016, Node: ORIONADDWEB) Elapsed
processing time: 01:06:52 (SESSION: 77016)

and the latest backup session:
02/26/2019 05:02:19 ANE4952I (Session: 135496, Node: ORIONADDWEB) Total
number of objects inspected: 117,152 (SESSION: 135496)
02/26/2019 05:02:19 ANE4954I (Session: 135496, Node: ORIONADDWEB) Total
number of objects backed up: 123 (SESSION: 135496)
02/26/2019 05:02:19 ANE4958I (Session: 135496, Node: ORIONADDWEB) Total
number of objects updated: 0 (SESSION: 135496)
02/26/2019 05:02:19 ANE4960I (Session: 135496, Node: ORIONADDWEB) Total
number of objects rebound: 0 (SESSION: 135496)
02/26/2019 05:02:19 ANE4957I (Session: 135496, Node: ORIONADDWEB) Total
number of objects deleted: 0 (SESSION: 135496)
02/26/2019 05:02:19 ANE4970I (Session: 135496, Node: ORIONADDWEB) Total
number of objects expired: 4 (SESSION: 135496)
02/26/2019 05:02:19 ANE4959I (Session: 135496, Node: ORIONADDWEB) Total
number of objects failed: 0 (SESSION: 135496)
02/26/2019 05:02:19 ANE4197I (Session: 135496, Node: ORIONADDWEB) Total
number of objects encrypted: 0 (SESSION: 135496)
02/26/2019 05:02:19 ANE4965I (Session: 135496, Node: ORIONADDWEB) Total
number of subfile objects: 0 (SESSION: 135496)
02/26/2019 05:02:19 ANE4914I (Session: 135496, Node: ORIONADDWEB) Total
number of objects grew: 0 (SESSION: 135496)
02/26/2019 05:02:19 ANE4916I (Session: 135496, Node: ORIONADDWEB) Total
number of retries: 0 (SESSION: 135496)
02/26/2019 05:02:19 ANE4977I (Session: 135496, Node: ORIONADDWEB) Total
number of bytes inspected: 26.59 GB (SESSION: 135496)
02/26/2019 05:02:19 ANE4961I (Session: 135496, Node: ORIONADDWEB) Total
number of bytes transferred: 64.41 MB (SESSION: 135496)
02/26/2019 05:02:19 ANE4963I (Session: 135496, Node: ORIONADDWEB) Data
transfer time: 0.21 sec (SESSION: 135496)
02/26/2019 05:02:19 ANE4966I (Session: 135496, Node: ORIONADDWEB) Network
data transfer rate: 299,827.21 KB/sec (SESSION: 135496)
02/26/2019 05:02:19 ANE4967I (Session: 135496, Node: ORIONADDWEB)
Aggregate data transfer rate: 852.00 KB/sec (SESSION: 135496)
02/26/2019 05:02:19 ANE4968I (Session: 135496, Node: ORIONADDWEB) Objects
compressed by: 0% (SESSION: 135496)
02/26/2019 05:02:19 ANE4976I (Session: 135496, Node: ORIONADDWEB) Total
data reduction ratio: 99.77% (SESSION: 135496)
02/26/2019 05:02:19 ANE4969I (Session: 135496, Node: ORIONADDWEB) Subfile
objects reduced by: 0% (SESSION: 135496)
02/26/2019 05:02:19 ANE4964I (Session: 135496, Node: ORIONADDWEB) Elapsed
processing time: 00:01:17 (SESSION: 135496)
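For anyone wanting to compare these summary numbers across many nodes, the ANE totals can be scraped out of saved console output. A rough Python sketch (the regex and the line-joining step are my own assumptions about the wrapped format shown above, not anything shipped with the BA client):

```python
import re

# Matches the "<label>: <value> (SESSION: n)" tail of ANE49xx/ANE41xx
# client-summary messages, after wrapped line pairs have been rejoined.
FIELD_RE = re.compile(r"AN[ER]\d{4}I[^)]*\)\s*(.+?):\s*([\d,.]+\s*\w*)\s*\(SESSION")

def parse_summary(text: str) -> dict:
    # Each message is wrapped across two physical lines; join any line
    # that does not start with a MM/DD/YYYY timestamp onto the previous one.
    joined = re.sub(r"\n(?!\d{2}/\d{2}/\d{4})", " ", text)
    stats = {}
    for label, value in FIELD_RE.findall(joined):
        stats[label.strip()] = value.strip()
    return stats

sample = (
    "02/26/2019 05:02:19 ANE4952I (Session: 135496, Node: ORIONADDWEB) Total\n"
    "number of objects inspected: 117,152 (SESSION: 135496)\n"
    "02/26/2019 05:02:19 ANE4954I (Session: 135496, Node: ORIONADDWEB) Total\n"
    "number of objects backed up: 123 (SESSION: 135496)\n"
)
print(parse_summary(sample))
```

The same approach works for trending "objects inspected" vs. "objects backed up" per node over time.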



On Tue, Feb 26, 2019 at 1:32 PM Andrew Raibeck <storman@us.ibm.com> wrote:

> Hi Zoltan,
>
> There is nothing in the client code you are using that would cause
> excessive backups. If you happen to have backup logs going back far enough,
> those might show an excessive number of system state objects. Normally I
> would expect the number of backup objects N to be 90,000 < N < 150,000. That
> is a roughly estimated range that depends on the individual system, and
> maybe N could be a little higher... but my radar would certainly be alerted
> if it was more than 200,000 and growing daily.
>
> Regards,
>
> Andy
>
>
> ____________________________________________________________________________
>
> Andrew Raibeck | IBM Spectrum Protect Level 3 | storman@us.ibm.com
>
> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2019-02-26
> 12:52:27:
>
> > From: Zoltan Forray <zforray@VCU.EDU>
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 2019-02-26 12:52
> > Subject: Re: Bottomless pit
> > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> >
> > Hi Andy,
> >
> > We do not have any policy/management class with versions higher than 5
> > across our complex, and retain-extra-versions is 30 days. I do not know
> > what these servers/applications do, so I will ask about the Crypto Keys
> > you referred to in the link.
> >
> > For the first 3 with this issue, the client is 8.1.0.2 and the OS is
> > Windows 2016. The current one that just successfully finished is Windows
> > 2012R2 and the client is 7.1.4.4.
> >
> > 2/26/2019 12:40:13 PM ANR0806I The ORIONADDWEB\SystemState\NULL\System
> > State\SystemState (fsId=1) file space was deleted for node ORIONADDWEB:
> > 300,346,169 objects were deleted.
> > 2/26/2019 12:40:13 PM ANR0987I Process 1382 for DELETE FILESPACE running
> > in the BACKGROUND processed 300,346,169 items with a completion state of
> > SUCCESS at 12:40:09 PM.
> >
> >
> > On Tue, Feb 26, 2019 at 12:19 PM Andrew Raibeck <storman@us.ibm.com>
> > wrote:
> >
> > > Hi Zoltan,
> > >
> > > What policy was system state bound to for the nodes that exceed 200
> > > million objects? Is it possible that the number of versions retained
> > > was very large?
> > >
> > > Another thought is whether the OS of each of these nodes might be
> > > affected by this issue:
> > >
> > >
> > > https://social.technet.microsoft.com/Forums/en-US/e2632b6e-76b9-4640-85b9-698fb55199d8/cprogramdatamicrosoftcryptorsamachinekeys-is-filling-my-disk-space?forum=winservergen
> > >
> > > We have seen huge system state backups that can occur due to that
> > > issue, so if it occurred for those nodes, that is another explanation.
> > >
> > > Regards,
> > >
> > > Andy
> > >
> > >
> > >
>
> ____________________________________________________________________________
>
> > >
> > > Andrew Raibeck | IBM Spectrum Protect Level 3 | storman@us.ibm.com
> > >
> > > "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2019-02-26
> > > 11:59:43:
> > >
> > > > From: Zoltan Forray <zforray@VCU.EDU>
> > > > To: ADSM-L@VM.MARIST.EDU
> > > > Date: 2019-02-26 12:01
> > > > Subject: Re: Bottomless pit
> > > > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> > > >
> > > > Hi Andy,
> > > >
> > > > Thank you for clarifying things - a bit. However, why would certain
> > > > nodes have enormously large numbers vs. the average when all things
> > > > are equal as far as policies are concerned?
> > > >
> > > > I do see average systemstate object delete counts in the *1-2M range*
> > > > but these 4-nodes are exceeding *200M* each. On this server, I deleted
> > > > the systemstate for 60-nodes. Only 9 exceeded 1M and of those, 1
> > > > exceeded 2M.
> > > >
> > > > This last node deletion is still running after 3-hours of deletion.
> > > >
> > > > 2019-02-26 08:57:56 Deleting file space ORIONADDWEB\SystemState\NULL\System
> > > > State\SystemState (fsId=1) (backup data) for node ORIONADDWEB:
> > > > 243,029,858 objects deleted.
> > > >
> > > > Even the 3-nodes whose deletion failed (after deleting >110M objects
> > > > each) due to some kind of bitfile error - after restarting, the
> > > > deletions are up to 50M+ each and still running?
> > > >
> > > >
> > > > On Tue, Feb 26, 2019 at 11:30 AM Andrew Raibeck <storman@us.ibm.com>
> > > > wrote:
> > > >
> > > > > Hi Zoltan,
> > > > >
> > > > > The large number of objects is normal for system state file spaces.
> > > > > System state backup uses grouping, with each backed up object being
> > > > > a member of the group. If the same object is included in multiple
> > > > > groups, then it will be counted more than once. Each system state
> > > > > backup creates a new group, so as the number of retained backup
> > > > > versions grows, so does the number of groups, and thus the total
> > > > > object count can grow very large.
> > > > >
> > > > > Best regards,
> > > > >
> > > > > Andy
> > > > >
> > > > >
> > > > >
> > >
> > >
>
> ____________________________________________________________________________
>
> > >
> > > > >
> > > > > Andrew Raibeck | IBM Spectrum Protect Level 3 | storman@us.ibm.com
> > > > >
> > > > > "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on
> > > > > 2019-02-26 10:30:04:
> > > > >
> > > > > > From: Zoltan Forray <zforray@VCU.EDU>
> > > > > > To: ADSM-L@VM.MARIST.EDU
> > > > > > Date: 2019-02-26 10:30
> > > > > > Subject: Re: Bottomless pit
> > > > > > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> > > > > >
> > > > > > Just found another node with a similar issue on a different ISP
> > > > > > server with different software levels (client=7.1.4.4 and
> > > > > > OS=Windows 2012R2). The node name is the same so I think the
> > > > > > application is, as well.
> > > > > >
> > > > > > 2019-02-26 08:57:56 Deleting file space ORIONADDWEB\SystemState\NULL\System
> > > > > > State\SystemState (fsId=1) (backup data) for node ORIONADDWEB:
> > > > > > *129,785,134 objects deleted*.
> > > > > >
> > > > > >
> > > > > > On Tue, Feb 26, 2019 at 9:15 AM Sasa Drnjevic <Sasa.Drnjevic@srce.hr>
> > > > > > wrote:
> > > > > >
> > > > > > > On 26.2.2019. 15:01, Zoltan Forray wrote:
> > > > > > > > Since all of these systemstate deletes crashed/failed, I
> > > > > > > > restarted them and 2-of the 3 are already up to 5M objects
> > > > > > > > after running for 30-minutes. Will this ever end successfully?
> > > > > > >
> > > > > > > All of mine did finish successfully...
> > > > > > >
> > > > > > > But, none of them had more than 25 mil files deleted.
> > > > > > >
> > > > > > > Wish you luck ;-)
> > > > > > >
> > > > > > > Rgds,
> > > > > > >
> > > > > > > --
> > > > > > > Sasa Drnjevic
> > > > > > > www.srce.unizg.hr/en/
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > On Mon, Feb 25, 2019 at 4:25 PM Sasa Drnjevic <Sasa.Drnjevic@srce.hr>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > >> FYI,
> > > > > > > >> same here...but my range/ratio was:
> > > > > > > >>
> > > > > > > >> ~2 mil occ to 25 mil deleted objects...
> > > > > > > >>
> > > > > > > >> Never solved the mystery... gave up :->
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> --
> > > > > > > >> Sasa Drnjevic
> > > > > > > >> www.srce.unizg.hr/en/
> > > > > > > >>
> > > > > > > >>
> > > > > > > >>
> > > > > > > >>


--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zforray@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://phishing.vcu.edu/
This message was imported via the External PhorumMail Module
Deschner, Roger Douglas
Re: Bottomless pit
February 26, 2019 11:59AM
I set up a cron job that does filespace and node deletions in a batch on weekends. Then if it takes a long time, I don't care. I set this up back on server version 5, when deletions took a REALLY long time, and I kept it in V6 to deal with exactly this issue with System State. We've been using Client Option Sets to prevent System State backup for several years now, but sometimes one slips through. Also we now have actual data filespaces with many millions of stored objects, and sometimes one of those needs to be deleted, another reason to batch deletions on weekends.
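For what it's worth, Roger's weekend-batch idea is easy to script around the administrative client. A minimal sketch, assuming a hypothetical queue file of node/filespace pairs; the file layout, path, and credential handling are illustrative only, not his actual job:

```python
import subprocess
from pathlib import Path

# Hypothetical queue file: one "NODE  FILESPACE" pair per line.
QUEUE = Path("/opt/tsm/pending-deletions.txt")

def build_commands(queue_text: str) -> list[list[str]]:
    """Turn queued node/filespace pairs into dsmadmc DELETE FILESPACE calls."""
    cmds = []
    for line in queue_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        node, fs = line.split(None, 1)
        cmds.append([
            "dsmadmc", "-id=admin", "-password=secret", "-noconfirm",
            f"delete filespace {node} {fs} type=backup",
        ])
    return cmds

def run(dry_run: bool = True) -> list[list[str]]:
    cmds = build_commands(QUEUE.read_text())
    if not dry_run:
        for cmd in cmds:
            # check=False so later queue entries still run if one delete fails
            subprocess.run(cmd, check=False)
    return cmds
```

Scheduled from cron in a weekend slot (e.g. `0 2 * * 6`), with `dry_run=False` once the queue has been reviewed.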

Roger Deschner
University of Illinois at Chicago
"I have not lost my mind -- it is backed up on tape somewhere."
Harris, Steven
Re: Bottomless pit
February 26, 2019 01:59PM
Zoltan

I had something similar. Prod node of application had a reasonable number of systemstate objects. Two nonprod nodes of same application had huge numbers of systemstate. This first came to my attention when daily expiration was taking 3 or more days instead of the usual 30 minutes.

As far as I can tell the nonprod nodes were set up in a lazy manner and the database and application were dumped on the C: drive in a place that is part of systemstate. I excluded the systemstate for these two nodes and deleted the filespaces - which took a week or more. The local ticket is still open with the application people, almost a year on.
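For reference, keeping system state out of the backup domain can be done in the client's dsm.opt, or centrally through a client option set as Roger described earlier in the thread. A sketch (the option set name NOSYSSTATE is made up; verify the syntax against your server level):

```
* dsm.opt on the client: back up all local drives but skip system state
DOMAIN ALL-LOCAL -SYSTEMSTATE

* or centrally, from an administrative client:
* DEFINE CLOPTSET NOSYSSTATE
* DEFINE CLIENTOPT NOSYSSTATE DOMAIN "-SYSTEMSTATE" FORCE=YES
* UPDATE NODE <nodename> CLOPTSET=NOSYSSTATE
```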

Hope that helps

Steve.

Steven Harris
TSM Admin/Consultant
Canberra Australia.


This message and any attachment is confidential and may be privileged or otherwise protected from disclosure. You should immediately delete the message if you are not the intended recipient. If you have received this email by mistake please delete it from your system; you should not copy the message or disclose its content to anyone.

This electronic communication may contain general financial product advice but should not be relied upon or construed as a recommendation of any financial product. The information has been prepared without taking into account your objectives, financial situation or needs. You should consider the Product Disclosure Statement relating to the financial product and consult your financial adviser before making a decision about whether to acquire, hold or dispose of a financial product.

For further details on the financial product please go to http://www.bt.com.au

Past performance is not a reliable indicator of future performance.
Zoltan Forray
Re: Bottomless pit
February 27, 2019 04:59AM
Thanks for the confirmations. It took multiple attempts over 3-days to
finally delete all of the systemstate objects for these 4-nodes, totalling
over 1.2B objects deleted. The deletes kept failing with errors like this:

2/26/2019 2:24:27 PM ANR0106E imfsdel.c(2723): Unexpected error 4522
fetching row in table "Backup.Objects".
2/26/2019 2:24:27 PM ANR1880W Server transaction was canceled because of a
conflicting lock on table BACKUP_OBJECTS.
2/26/2019 2:24:27 PM ANR1893E Process 2399 for DELETE FILESPACE completed
with a completion state of FAILURE.
2/26/2019 2:46:04 PM ANR0106E imfsdel.c(2723): Unexpected error 4522
fetching row in table "Backup.Objects".
2/26/2019 2:46:04 PM ANR1880W Server transaction was canceled because of a
conflicting lock on table BACKUP_OBJECTS.
2/26/2019 2:46:04 PM ANR1893E Process 2400 for DELETE FILESPACE completed
with a completion state of FAILURE.
2/26/2019 5:03:16 PM ANR0106E imfsdel.c(2723): Unexpected error 4522
fetching row in table "Backup.Objects".
2/26/2019 5:03:16 PM ANR1880W Server transaction was canceled because of a
conflicting lock on table BACKUP_OBJECTS.
2/26/2019 5:03:16 PM ANR1893E Process 2404 for DELETE FILESPACE completed
with a completion state of FAILURE.



--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zforray@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://phishing.vcu.edu/