[Glass] GemTools-1.0-beta.8.7.1-32x can't log in to GS 3.2.4


Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Hello Dale, 

  Well, I went through the process as described below, but have not seen my extent shrink appreciably, so I am puzzled. 
Here is the screenshot after the mark for collection. Do I have to do something to reclaim the dead objects? Does the maintenance gem need to be run?

<Mail Attachment.png>

After the ObjectLog init, and mark, I did a restore into a fresh extent.

Here is the size of the new extent vs the old, saved extent:

<Mail Attachment.png>


Thoughts?

Larry



On Mar 25, 2015, at 2:15 PM, Dale Henrichs <[hidden email]> wrote:

Okay here's the sequence of steps that I think you should take:

  1. expire all of your sessions:

  | expired |
  Transcript cr; show: 'Unregistering...' , DateAndTime now printString.
  expired := WABasicDevelopment reapSeasideCache.
  expired > 0
    ifTrue: [ (ObjectLogEntry trace: 'MTCE: expired sessions' object: expired) addToLog ].
  Transcript cr; show: '...Expired: ' , expired printString , ' sessions.'.
  System commitTransaction

  2. initialize your object log

  3. run MFC

  [
  System abortTransaction.
  SystemRepository markForCollection ]
    on: Warning
    do: [ :ex |
      Transcript
        cr;
        show: ex description.
      ex resume ]

  4. Then do a backup and restore ... you can use GemTools to do the restore,
      but you should read the SysAdmin docs[1] for the restore instructions
      (I've enclosed a link to the 3.2 docs; the procedure and commands should be
      pretty much the same, but it's best to look up the docs for your GemStone
      version[2] and follow those instructions)
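
A minimal sketch for step 4, from a GemTools workspace or topaz; it assumes the standard Repository>>fullBackupTo: API that Larry mentions below, and the target path is just an example:

  "Step 4 sketch: write a full backup of the repository. The target
   path is illustrative only; do the restore against a fresh extent
   following the SysAdmin docs[1]."
  System abortTransaction.
  SystemRepository fullBackupTo: '/opt/gemstone/backups/seaside.full.bak'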

As I mentioned earlier, it will probably take a while for each of these operations to complete (the object log will be fast, and the backup will be fast if the mfc tosses out the majority of your data), and it is likely that the repository will grow some more during the process (hard to predict this one, tho).

Step 1 will touch every session and every continuation, so it is hard to say what percent of the objects are going to be touched (the expensive part); still, there are likely to be a lot of those puppies and they will have to be read from disk into the SPC ...

Step 3 is going to scan all of the live objects, and again it is hard to predict exactly how expensive it will be ...

Dale

[1] http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/GS64-SysAdmin-3.2.htm
[2] http://gemtalksystems.com/techsupport/resources/

On 3/25/15 10:18 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Thanks for the help. I’m a terrible system admin when it comes to maintaining a system with one user, LOL. 

  I’m not running the maintenance VM and I haven’t been doing regular mark for collects. 

  I’m trying to do a fullBackupTo: at the moment; we’ll see if I get through that. Should I have done a markForCollection before the full backup? 

  I’ll also try the ObjectLog trick. 

  I guess I need to start from a fresh extent, as you said, since the extent file will not shrink on its own. I’m at 48% of my available disk space, but it does seem slower than usual. 

  
Best, 

Larry


  
On Mar 25, 2015, at 12:58 PM, Dale Henrichs via Glass <[hidden email]> wrote:

Lawrence,

Are you doing regular Mark for collects? Are you running the maintenance vm along with your seaside servers?

Seaside produces persistent garbage (persistent session state that eventually times out) when it processes requests, so if you do not run the maintenance vm the sessions are not expired, and if you do not run mfc regularly the expired sessions are not cleaned up ...

Another source of growth could be the Object Log ... (use `ObjectLogEntry initalize` to efficiently reset the Object Log ... pay attention to the misspelling ... that's another story). If you are getting continuations saved to the object log, the stacks that are saved can hang onto a lot of session state that, even though expired, will not be garbage collected, because references from the continuations in the object log keep it alive ...
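
A minimal sketch of that reset, runnable from a GemTools workspace or topaz; the selector really is spelled #initalize in GLASS, and wrapping it in an abort/commit is only an assumption about running it interactively:

  "Reset the Object Log; note the class-side selector really is
   spelled #initalize in GLASS."
  System abortTransaction.
  ObjectLogEntry initalize.
  System commitTransaction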

The best way to shrink your extent (once we understand why it is growing) is to do a backup and then restore into a virgin extent ($GEMSTONE/bin/extent0.seaside.dbf)...
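
A sketch of the fresh-extent part only (the stone name and extent path are taken from elsewhere in this thread; the backup and restore commands themselves are covered in the SysAdmin docs):

  # stop the stone, set the old extent aside, and copy in the virgin Seaside extent
  stopstone seaside DataCurator swordfish
  mv /opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf \
     /opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf.old
  cp $GEMSTONE/bin/extent0.seaside.dbf \
     /opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
  chmod +w /opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf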

Dale

On 3/25/15 8:34 AM, Lawrence Kellogg via Glass wrote:
Well, Amazon sent me a note that they are having hardware trouble on my instance, so they shut it down. It looks like they’re threatening to take the thing offline permanently so I’m trying to save my work with an AMI and move it somewhere else, if I have to.

I finally got Gemstone/Seaside back up and running and noticed these lines in the Seaside log file. These kinds of messages go on once a day for weeks. Is this normal? 

--- 03/07/2015 02:44:14 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22528 megabytes.
    Repository has grown to 22528 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22544 megabytes.
    Repository has grown to 22544 megabytes.

--- 03/08/2015 03:31:45 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22560 megabytes.
    Repository has grown to 22560 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22576 megabytes.
    Repository has grown to 22576 megabytes.

--- 03/10/2015 03:19:34 AM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22592 megabytes.
    Repository has grown to 22592 megabytes.

--- 03/10/2015 03:46:39 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22608 megabytes.
    Repository has grown to 22608 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22624 megabytes.
    Repository has grown to 22624 megabytes.


My extent has now grown to 

-rw------- 1 seasideuser seasideuser 23735566336 Mar 25 15:31 extent0.dbf


I don’t get a lot of traffic so I’m a little surprised at the growth. Should I try to shrink the extent?

I suppose I should also do a SystemRepository backup, if I can remember the commands. 

Best, 

Larry





Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Okay,

I guess you made it through the session expirations okay, and according to the MFC results it does look like you got rid of a big chunk of objects ... Presumably the backup was made before the vote on the possible dead was finished, so the backup would not have been able to skip all of the dead objects (until the vote was completed) ... there's also an outside chance that the vm used to expire the sessions would have voted down some of the possible dead if it was still logged in when the backup was made ...

So we need to find out what's going on in the new extent ... so do another mfc and send me the results

 In the new extent, run the MFC again, and provide me with the results ... include an `Admin>>DoIt>>File Size Report`. Then log out of GemTools and stop/start any other seaside servers or maintenance vms that might be running ...
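
For reference, the same report is available programmatically; a minimal sketch, assuming the standard Repository>>fileSizeReport API behind that GemTools menu item:

  "Print the extent file-size report, i.e. what the GemTools
   Admin>>DoIt>>File Size Report menu item shows."
  Transcript cr; show: SystemRepository fileSizeReport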

By the time we exchange emails, the vote should have a chance to complete this time... but I want to see the results of the MFC and File Size Report before deciding what to do next ...

Dale


Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Thanks Dale, I’ll try again, but, unfortunately, I can no longer log into GemTools. I get this error: 

'Unable to create a GemStone session.
Netldi ''gs64ldi'' on host ''ip-10-32-103-83'' reports the request ''gemnetobject'' failed:
Password validation failed for user seasideuser because getspnam() returned an error: errno=13,EACCES, Authorization failure (permission denied)’

Did the restore somehow change the password on the glass login in the launcher? 

I can’t log into Topaz either, even though I did earlier in order to do the restore:

|_____________________________________________________________________________|
topaz> set username DataCurator 
topaz> set gemstone seaside
topaz> set password swordfish
topaz> login
-----------------------------------------------------
GemStone: Error         Fatal
Unable to create a GemStone session.
Netldi 'gs64ldi' on host 'ip-10-32-103-83' reports the request 'gemnetobject'
failed:
Password validation failed for user seasideuser because getspnam()
returned an error: errno=13,EACCES, Authorization failure (permission
denied)
Error Category: [GemStone] Number: 4042 Arg Count: 0


Larry


Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
A quick answer (I have a meeting) ... the old extent should work as well as the new extent for our purposes ... the usernames should not have been changed by a simple backup and restore

Dale


Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
A couple of things:

  1. You appear to be doing rpc logins when using topaz? You
      should be able to log in using a linked login by using the -l
      option for topaz (see the sketch below) ... you will need to do
      this on the machine that is running ... linked logins bypass the netldi

  2. I suspect that you have started the netldi without using the -g
      option ... so double check how you started the netldi ...
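
A sketch of both suggestions from the shell; the netldi name gs64ldi is taken from the error above, and the -a captive-account flag is an assumption about how the seaside gems are run, so check the SysAdmin docs for the exact recipe on your version:

  # restart the netldi in guest mode
  stopnetldi gs64ldi
  startnetldi -g -a seasideuser gs64ldi

  # a linked topaz session bypasses the netldi entirely;
  # run it on the host where the stone is running
  topaz -l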

Dale

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Dale, 

  Well, I got it to work by killing everything and restarting. I have to resort to System>>shutDown in topaz because stopSession: or terminateSession:timeout: never seems to kill the GCiUsers. They keep popping back up. 
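
For reference, the calls involved look roughly like this; a sketch assuming the standard System class session protocol (the session id 42 is purely illustrative, and, as noted, gems managed by a runner script may simply come back):

  "List the ids of the currently logged-in sessions, then ask the
   stone to stop one of them. The id 42 is purely illustrative;
   pick it from the printed list."
  Transcript cr; show: System currentSessions printString.
  System stopSession: 42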

On Mar 30, 2015, at 2:06 PM, Dale Henrichs <[hidden email]> wrote:

A couple of things:

  1. You appear to be doing rpc logins when using topaz? You
      should be able to log in using a linked login by using the -l
      option for topaz ... you will need to do this on the machine that
      is running ... linked logins bypass the netldi


Yeah, I do use the -l flag.


  2. I suspect that you have started the netldi, without using the -g
      option ... so double check how you started the netldi ...


I didn’t know I had to run the netldi in guest mode. 

Larry




Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list

On 03/30/2015 11:10 AM, Lawrence Kellogg wrote:

  2. I suspect that you have started the netldi, without using the -g
      option ... so double check how you started the netldi ...


I didn’t know I had to run the netldi in guest mode.

Well `had` is a strong word ... If you don't run with the `-g` option[1], then you need to provide the host username and password as part of your gci login[2] (the os username and password parameters) ... which is a bit more secure (it keeps other folks from logging into your server via gci and GemTools just by guessing your gemstone credentials) ...

So with the `-g` option, your GemStone username and password alone provide the security ...
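
If you keep the netldi in authenticated mode instead, the extra OS credentials go on the GCI side; a hedged topaz sketch, assuming the usual set hostusername / set hostpassword commands (the OS password is a placeholder):

  topaz> set gemstone seaside
  topaz> set username DataCurator
  topaz> set password swordfish
  topaz> set hostusername seasideuser
  topaz> set hostpassword <os password for seasideuser>
  topaz> login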

Dale



[1] http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/3-Distributed.htm#pgfId-977868
[2] http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/3-Distributed.htm#pgfId-188409


Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list

On Mar 30, 2015, at 12:28 PM, Dale Henrichs <[hidden email]> wrote:

Okay,

I guess you made it through the session expirations okay, and according to the MFC results it does look like you got rid of a big chunk of objects ... Presumably the backup was made before the vote on the possible dead was finished, so the backup would not have been able to skip all of the dead objects (until the vote was completed) ... there's also an outside chance that the vm used to expire the sessions would have voted down some of the possible dead if it was still logged in when the backup was made ...

So we need to find out what's going on in the new extent ... so do another mfc and send me the results


Ok, I made it through another mark for collection and here is the result:





Am I wrong in thinking that the file size of the extent will not shrink? It certainly has not shrunk much. 





 In the new extent, run the MFC again, and provide me with the results ... include an `Admin>>DoIt>>File Size Report`. Then log out of GemTools and stop/start any other seaside servers or maintenance vms that might be running ...


Here is the file size report before the mark for collection 

Extent #1
-----------
   Filename = !TCP@localhost#dir:/opt/gemstone/product/seaside/data#log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf

   File size =       23732.00 Megabytes
   Space available = 3478.58 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3478.58 Megabytes

and after 

Extent #1
-----------
   Filename = !TCP@localhost#dir:/opt/gemstone/product/seaside/data#log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf

   File size =       23732.00 Megabytes
   Space available = 3476.47 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3476.47 Megabytes



I await further instructions. 

Best,

Larry





By the time we exchange emails, the vote should have a chance to complete this time... but I want to see the results of the MFC and File SIze Report before deciding what to do next ...

Dale

On 03/30/2015 07:30 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Well, I went through the process as described below, but have not seen my extent shrink appreciably, so I am puzzled. 
Here is the screenshot after the mark for collection. Do I have to do something to reclaim the dead objects? Does the maintenance gem need to be run?


<Mail Attachment.png>

After the ObjectLog init, and mark, I did a restore into a fresh extent.

Here is the size of the new extent vs the old, saved extent:

<Mail Attachment.png>



Thoughts?

Larry



On Mar 25, 2015, at 2:15 PM, Dale Henrichs <[hidden email]> wrote:

Okay here's the sequence of steps that I think you should take:

  1. expire all of your sessions:

  | expired |
  Transcript cr; show: 'Unregistering...' , DateAndTime now printString.
  expired := WABasicDevelopment reapSeasideCache.
  expired > 0
    ifTrue: [ (ObjectLogEntry trace: 'MTCE: expired sessions' object: expired) addToLog ].
  Transcript cr; show: '...Expired: ' , expired printString , ' sessions.'.
  System commitTransactions

  2. initialize your object log

  3. run MFC

  [
  System abortTransaction.
  SystemRepository markForCollection ]
    on: Warning
    do: [ :ex |
      Transcript
        cr;
        show: ex description.
      ex resume ]

  4. Then do a backup and restore ... you can use GemTools to do the restore,
      but then you should read the SysAdmin docs[1] for instructions to do the restore
      (I've enclosed link to 3.2 docs, but the procedure and commands should pretty
      much be the same, but it's best to look up the docs for your GemStone version[2]
      and follow those instructions)
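
Roughly, the Smalltalk side of step 4 looks like this ... just a sketch: the path is only
an example, and the extent swap in the middle is the part the SysAdmin docs walk you through:

  "make the full backup from a logged-in session"
  SystemRepository fullBackupTo: '/opt/gemstone/backups/seaside-full.bak'.

  "... then stop the stone, copy a virgin extent into place, restart, log back in,
   and restore from the backup ..."
  SystemRepository restoreFromBackup: '/opt/gemstone/backups/seaside-full.bak'.
  SystemRepository commitRestore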

As I mentioned earlier, it will probably take a while for each of these operations to complete (the object log will be fast, and the backup will be fast if the mfc tosses out the majority of your data), and it is likely that the repository will grow some more during the process (hard to predict this one, though).

Step 1 will touch every session and every continuation, so it is hard to say what percent of the objects are going to be touched (the expensive part); still, there are likely to be a lot of those puppies, and they will have to be read from disk into the SPC ...

Step 3 is going to scan all of the live objects, and again it is hard to predict exactly how expensive it will be ...

Dale

[1] http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/GS64-SysAdmin-3.2.htm
[2] http://gemtalksystems.com/techsupport/resources/

On 3/25/15 10:18 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Thanks for the help. I’m a terrible system admin when it comes to maintaining a system with one user, LOL. 

  I’m not running the maintenance VM and I haven’t been doing regular mark for collects. 

  I’m trying to do a fullBackupTo: at the moment; we’ll see if I get through that. Should I have done a markForCollection before the full backup? 

  I’ll also try the ObjectLog trick. 

  I guess I need to start from a fresh extent, as you said, and the extent file will not shrink. I’m at 48% of my available disk space but it does seem slower than usual. 

  
Best, 

Larry


  
On Mar 25, 2015, at 12:58 PM, Dale Henrichs via Glass <[hidden email]> wrote:

Lawrence,

Are you doing regular Mark for collects? Are you running the maintenance vm along with your seaside servers?

Seaside produces persistent garbage (persistent session state that eventually times out) when it processes requests, so if you do not run the maintenance vm the sessions are not expired, and if you do not run mfc regularly the expired sessions are not cleaned up ...

Another source of growth could be the Object Log ... (use `ObjectLogEntry initalize` to efficiently reset the Object Log ... pay attention to the misspelling ... that's another story). If you are getting continuations saved to the object log, the stacks that are saved can hang onto a lot of session state that, even though expired, will not be garbage collected, because references from the continuation in the object log keep it alive ...

The best way to shrink your extent (once we understand why it is growing) is to do a backup and then restore into a virgin extent ($GEMSTONE/bin/extent0.seaside.dbf)...

Dale

On 3/25/15 8:34 AM, Lawrence Kellogg via Glass wrote:
Well, Amazon sent me a note that they are having hardware trouble on my instance, so they shut it down. It looks like they’re threatening to take the thing offline permanently so I’m trying to save my work with an AMI and move it somewhere else, if I have to.

I finally got Gemstone/Seaside back up and running and noticed these lines in the Seaside log file. These kinds of messages go on once a day for weeks. Is this normal? 

--- 03/07/2015 02:44:14 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22528 megabytes.
    Repository has grown to 22528 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22544 megabytes.
    Repository has grown to 22544 megabytes.

--- 03/08/2015 03:31:45 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22560 megabytes.
    Repository has grown to 22560 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22576 megabytes.
    Repository has grown to 22576 megabytes.

--- 03/10/2015 03:19:34 AM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22592 megabytes.
    Repository has grown to 22592 megabytes.

--- 03/10/2015 03:46:39 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22608 megabytes.
    Repository has grown to 22608 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22624 megabytes.
    Repository has grown to 22624 megabytes.


My extent has now grown to 

-rw------- 1 seasideuser seasideuser 23735566336 Mar 25 15:31 extent0.dbf


I don’t get a lot of traffic so I’m a little surprised at the growth. Should I try to shrink the extent?

I suppose I should also do a SystemRepository backup, if I can remember the commands. 

Best, 

Larry










_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
The initial MFC gave you (pre-backup):

  390,801,691 live objects with 23,382,898 dead

The second MFC gave you (post-backup):

  391,007,811 live objects with 107 dead

Which means that we did not gain nearly as much as anticipated by cleaning up the seaside session state and object log ... so something else is hanging onto a big chunk of objects ...

So yes at this point there is no need to consider a backup and restore to shrink extents until we can free up some more objects ...

I've got to head out on an errand right now, so I can't give you any detailed pointers to the techniques to use for finding the nasty boy that is hanging onto the "presumably dead objects" ...

I am a bit suspicious that the Object log might still be alive and kicking, so I think you should verify by inspecting the ObjectLog collections ... poke around on the class side ... if you find a big collection (and it blows up your TOC, the temporary object cache, if you try to look at it), then look again at the class-side methods and make sure that you nuke the RCQueue and the OrderedCollection ... close down/log out your vms, and then run another mfc to see if you gained any ground ...
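
Something along these lines should do for the check ... a rough sketch, and I'm assuming the class-side accessor is named objectLog, so adjust to whatever your ObjectLogEntry version actually has:

  "see how many entries the persistent object log is really holding, then reset it"
  | log |
  log := ObjectLogEntry objectLog.
  Transcript cr; show: 'Object log entries: ' , log size printString.
  log size > 0
    ifTrue: [
      ObjectLogEntry initalize.   "yes, the selector really is misspelled"
      System commitTransaction ]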

Dale

_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list

On Mar 30, 2015, at 6:24 PM, Dale Henrichs <[hidden email]> wrote:

The initial MFC gave you (pre-backup):

  390,801,691 live objects with 23,382,898 dead

The second MFC gave you (post-backup):

  391,007,811 live objects with 107 dead

Which means that we did not gain nearly as much as anticipated by cleaning up the seaside session state and object log ... so something else is hanging onto a big chunk of objects ...

So yes at this point there is no need to consider a backup and restore to shrink extents until we can free up some more objects ...

I've got to head out on an errand right now, so I can't give you any detailed pointers, to the techniques to use for finding the nasty boy that is hanging onto the "presumably dead objects" ...

I am a bit suspicious that the Object log might still be alive an kicking, so I think you should verify by inspecting the ObjectLog collections ... poke around on the class side ... if you find a big collection (and it blows up your TOC if you try to look at it), then look again at the class-side methods and make sure that you nuke the RCQueue and the OrderedCollection .... close down/logout your vms, and then run another mfc to see if you gained any ground …


Well, the ObjectLog collection on the class side of ObjectLogEntry is empty, and the ObjectQueue class variable has: 



Is it necessary to reinitialize the ObjectQueue?

Is there some report I can run that will tell me what is holding onto so much space?

Best,

Larry



Dale

_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Larry,

I'm just going over the old ground again, in case we missed something obvious ... I would hate to spend several more days digging into this only to find that an initial step hadn't completed as expected ...

So it looks like the object log is clear. Next I'd like to double check and make sure that the session state has been expired ...

So let's verify that the `#'ExpiryCleanup'` entry in UserGlobals is no longer present, and I'd like to run `WABasicDevelopment reapSeasideCache` one more time for good luck.
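
i.e. something like this from a workspace (just a sketch ... includesKey: is the ordinary dictionary test):

  "check that the expiry-cleanup entry really is gone, then reap the seaside cache once more"
  (UserGlobals includesKey: #'ExpiryCleanup')
    ifTrue: [ Transcript cr; show: 'ExpiryCleanup is still registered in UserGlobals' ]
    ifFalse: [ Transcript cr; show: 'ExpiryCleanup is gone' ].
  WABasicDevelopment reapSeasideCache.
  System commitTransaction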

Assuming that neither of those turns up anything of use, the next step is to find out what's hanging onto the unwanted objects ...

Since I think we've covered the known "object hogs" in the Seaside framework, there are a number of other persistent caches in GLASS that might as well be cleared out. You can use the workspace here[1] to clean them up ... I don't think that these caches should be holding onto 23G of objects, but run an MFC afterwards to be safe ...


At this point there are basically two directions that we can take:

  1. Top down. Start inspecting the data structures in your application and look
      for suspicious collections/objects that could be hanging onto objects above and
      beyond those absolutely needed.

  2. Bottom up. Scan your recent backup and get an instance count report[2] that
      will tell you what class of object is clogging up your database ... Perhaps you'll
      recognize a big runner or two and know where to look to drop the references.
      If not, we'll have to pick a suspicious class, list the instances of that class, and then
      use Repository>>listReferences: to work our way back to a known root and then
      NUKE THE SUCKER :) (there's a rough sketch of that last step below)
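
Here's that sketch, using WASnapshot purely as a stand-in for whatever class the report actually fingers (as I remember it, listInstances: answers one Array of instances per class; both scans are read-only, but they crawl the whole repository, so expect them to take a while):

  | suspects sample |
  "all reachable instances of the suspect class"
  suspects := (SystemRepository listInstances: (Array with: WASnapshot)) first.
  "only chase references for a handful of them"
  sample := suspects copyFrom: 1 to: (10 min: suspects size).
  "answers, for each sampled object, the objects that reference it ... print or inspect the result"
  SystemRepository listReferences: sample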

Dale

[1] https://code.google.com/p/glassdb/wiki/ClearPersistentCaches
[2] https://programminggems.wordpress.com/2009/12/15/scanbackup-2/
_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list

On Mar 31, 2015, at 12:41 PM, Dale Henrichs <[hidden email]> wrote:

Larry,

I'm just going over the old ground again, in case we missed something obvious ... I would hate to spend several more days digging into this only to find that an initial step hadn't completed as expected ...

So it looks like the object log is clear. Next I'd like to double check and make sure that the session state has been expired ...

So let's verify that `UserGlobals at: #'ExpiryCleanup` is no longer present and I'd like to run the WABasicDevelopment reapSeasideCache one more time for good luck.


Yes, the collection at UserGlobals at: #ExpiryCleanup is empty. 

I ran the reapSeasideCache code again.



Assuming that neither of those turn up anything of use, the next step is to find out what's hanging onto the unwanted objects ...

Since I think we've covered the known "object hogs" in the Seaside framework, there are a number of other persistent caches in GLASS, that might as well be cleared out. You can use the workspace here[1] to clean them up ... I don't think that these caches should be holding onto 23G of objects, but run an MFC aftwards to be safe ...


I cleared the caches. 

I ran another MFC




At this point there's basically two directions that we can take:

  1. Top down. Start inspecting the data structures in your application and look
      for suspicious collections/objects that could be hanging onto objects above and
      beyond those absolutely needed.

  2. Bottom up. Scan your recent backup and get an instance count report[2] that
      will tell you what class of object is clogging up your data base .... Perhaps you'll
      recognize a big runner or two and know where to look to drop the references.
     If no, we'll have to pick a suspicious class, list the instances of that class, and then
     Repository>>listReferences:  to work our way back to a known root and then
     NUKE THE SUCKER:)


Ok, here is my instance count report. RcCounterElement is a huge winner here; I have no idea why. #63 PracticeJournalLoginTask and #65 PracticeJournalSession are coming up a lot, so perhaps these are being held onto somewhere.

1 338955617 RcCounterElement
2 17607121 RcCollisionBucket
3 7683895 Association
4 2142624 String
5 2126557 WAValueHolder
6 1959784 VariableContext
7 1629389 CollisionBucket
8 1464171 Dictionary
9 1339617 KeyValueDictionary
10 1339616 Set
11 1243135 OrderedCollection
12 1116296 Array
13 951872 ComplexBlock
14 943639 ComplexVCBlock
15 781212 IdentityDictionary
16 673104 IdentityCollisionBucket
17 666407 WAUserConfiguration
18 664701 WAAttributeSearchContext
19 338617 RcCounter
20 338617 WARcLastAccessEntry
21 332017 RcKeyValueDictionary
22 230240 WAValueCallback
23 226002 WARequestFields
24 226002 WAUrl
25 223641 GRSmallDictionary
26 221821 GRDelayedSend
27 221821 GRUnboundMessage
28 220824 GsStackBuffer
29 219296 WAImageCallback
30 187813 Date
31 176258 MCMethodDefinition
32 146263 WAActionCallback
33 113114 WARenderCanvas
34 113039 WAMimeType
35 113003 WADocumentHandler
36 113003 WAMimeDocument
37 113001 WARenderVisitor
38 113001 WAActionPhaseContinuation
39 113001 WACallbackRegistry
40 113001 WARenderingGuide
41 113001 WARenderContext
42 112684 WASnapshot
43 110804 IdentityBag
44 110720 TransientValue
45 110710 WAToolDecoration
46 110672 TransientMutex
47 110670 WAGemStoneMutex
48 110670 WARcLastAccessExpiryPolicy
49 110670 WACache
50 110670 WANoReapingStrategy
51 110670 WACacheMissStrategy
52 110670 WANotifyRemovalAction
53 110640 WATimingToolFilter
54 110640 WADeprecatedToolFilter
55 110489 WAAnswerHandler
56 110422 WADelegation
57 110412 WAPartialContinuation
58 110412 GsProcess
59 109773 UserPersonalInformation
60 109712 Student
61 109295 WATaskVisitor
62 109285 UserLoginView
63 109285 PracticeJournalLoginTask
64 109259 WAValueExpression
65 109215 PracticeJournalSession
66 56942 Time
67 54394 GsMethod
68 53207 MCVersionInfo
69 53207 UUID
70 45927 MethodVersionRecord
71 41955 MethodBookExercise
72 37223 Symbol
73 29941 MCInstanceVariableDefinition
74 21828 MCClassDefinition
75 19291 SymbolAssociation
76 18065 PracticeDay
77 17218 GsMethodDictionary
78 16617 MusicalPiece
79 16609 SymbolSet
80 11160 FreeformExercise
81 8600 SymbolDictionary
82 7537 DateAndTime
83 6812 Duration
84 6288 Month
85 6288 PracticeMonth
86 4527 WAHtmlAttributes
87 4390 DateTime
88 4247 Metaclass
89 4190 WAGenericTag
90 4142 SimpleBlock
91 4136 WATableColumnTag
92 4136 WACheckboxTag
93 4029 Composer
94 3682 RcIdentityBag
95 3428 ClassHistory
96 3010 PracticeSession
97 2185 MCClassVariableDefinition
98 2017 CanonStringBucket
99 1986 MethodBook
100 1974 WARenderPhaseContinuation
101 1965 PurchaseOptionInformation
102 1843 AmazonPurchase
103 1796 GsDocText
104 1513 GsClassDocumentation
105 1508 209409
106 1425 WASession
107 1218 UserInformationInterface
108 1134 WAValuesCallback
109 1125 WACancelActionCallback
110 751 DepListBucket
111 738 Pragma
112 716 LessonTaskRecording
113 693 UserForgotPasswordView
114 629 MusicalPieceRepertoireItem
115 524 PracticeYear
116 524 Year
117 483 MCOrganizationDefinition
118 480 Repertoire
119 467 MCPackage
120 440 MultiplePageDisplayView
121 403 MethodBookExerciseRepertoireItem
122 352 UserCalendar
123 334 MetacelloValueHolderSpec
124 333 MCVersion
125 333 MCSnapshot
126 313 TimeZoneTransition
127 307 Color
128 269 NumberGenerator
129 216 UserCommunityInformation
130 206 IdentitySet
131 200 RcQueueSessionComponent
132 199 FreeformExerciseRepertoireItem
133 191 WAHtmlCanvas
134 187 PackageInfo
135 182 InvariantArray
136 176 MCRepositoryGroup
137 175 MCWorkingCopy
138 175 MCWorkingAncestry
139 157 PracticeSessionInputView
140 149 MetacelloPackageSpec
141 139 MetacelloRepositoriesSpec
142 132 WAMetaElement
143 131 MCClassInstanceVariableDefinition
144 117 MetacelloMergeMemberSpec
145 106 YouTubeVideoResource
146 101 MusicalPieceRepertoireItemInputView
147 99 UserCommentsView
148 96 LessonTasksView
149 96 LessonTaskView
150 94 WATableTag
151 91 MetacelloMCVersion
152 87 PracticeSessionView
153 81 SortedCollection
154 78 MetacelloMCVersionSpec
155 78 MetacelloVersionNumber
156 78 MetacelloPackagesSpec
157 77 DateRange
158 77 PracticeSessionsView
159 70 MethodBooksView
160 67 UserCalendarView
161 67 PracticeJournalMiniCalendar
162 66 PracticeDayView
163 65 MetacelloAddMemberSpec
164 64 WATextInputTag
165 61 Teacher
166 61 MCPoolImportDefinition
167 61 MCHttpRepository
168 60 CheckScreenNameAvailability
169 58 UserRepertoireItemsView
170 58 UserRepertoireView
171 58 PrivateLesson
172 57 UserRepertoireItemsSummaryView
173 53 MetacelloMCProjectSpec
174 53 MetacelloProjectReferenceSpec
175 52 WAListAttribute
176 48 TimedActivitiesInformationServer
177 46 PracticeSessionTemplate
178 46 WriteStream
179 44 WAFormTag
180 39 UserInstrumentsInputView
181 39 MetacelloRepositorySpec
182 35 UserInstrumentsInputViewGenerator
183 35 CreateLessonTaskRecordingInterface
184 32 WASelectTag
185 32 WADateInput
186 30 WAApplication
187 30 UserComment
188 30 WAExceptionFilter
189 29 WADispatchCallback
190 29 WARadioGroup
191 28 DecimalFloat
192 27 JSStream
193 26 MethodBookExerciseRepertoireItemInputView
194 26 WAStringAttribute
195 24 WAOpeningConditionalComment
196 24 WAScriptElement
197 24 WAClosingConditionalComment
198 24 PracticeSessionTemplateInputView
199 24 WALinkElement
200 23 UserInformationView


Larry



Dale

[1] https://code.google.com/p/glassdb/wiki/ClearPersistentCaches
[2] https://programminggems.wordpress.com/2009/12/15/scanbackup-2/
On 03/31/2015 05:35 AM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 6:24 PM, Dale Henrichs <[hidden email]> wrote:

The initial MFC gave you (pre-backup):

  390,801,691 live objects with 23,382,898 dead

The second MFC gave you (post-backup):

  391,007,811 live objects with 107 dead

Which means that we did not gain nearly as much as anticipated by cleaning up the seaside session state and object log ... so something else is hanging onto a big chunk of objects ...

So yes at this point there is no need to consider a backup and restore to shrink extents until we can free up some more objects ...

I've got to head out on an errand right now, so I can't give you any detailed pointers, to the techniques to use for finding the nasty boy that is hanging onto the "presumably dead objects" ...

I am a bit suspicious that the Object log might still be alive an kicking, so I think you should verify by inspecting the ObjectLog collections ... poke around on the class side ... if you find a big collection (and it blows up your TOC if you try to look at it), then look again at the class-side methods and make sure that you nuke the RCQueue and the OrderedCollection .... close down/logout your vms, and then run another mfc to see if you gained any ground …


Well, the ObjectLog collection on the class side of ObjectLogEntry is empty, and the ObjectQueue class variable has: 

<Mail Attachment.png>


Is it necessary to reinitialize the ObjectQueue?

Is there some report I can run that will tell me what is holding onto so much space?

Best,

Larry



Dale

On 03/30/2015 02:57 PM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 12:28 PM, Dale Henrichs <[hidden email]> wrote:

Okay,

I guess you made it through the session expirations okay and according to the MFC results it does look like you did get rid of a big chunk of objects... Presumably the backup was made before the vote on the possible dead was finished so the backup would not have been able to skip all of the dead objects (until the vote was completed) .... there 's also an outside chance that the vm used to expire the sessions would have voted down some of the possible dead if it was still logged in when the backup was made ...

So we need to find out what's going on in the new extent ... so do another mfc and send me the results


Ok, I made it through another mark for collection and here is the result:

<Mail Attachment.png>




Am I wrong in thinking that the file size of the extent will not shrink? It certainly has not shrunk much. 





 In the new extent, run the MFC again, and provide me with the results ... include an `Admin>>DoIt>>File Size Report`. Then logout of GemTools and stop/start any other seaside servers or maintenance vms that might be running ...


Here is the file size report before the mark for collection 

Extent #1
-----------
   Filename = !TCP@localhost#dir:/opt/gemstone/product/seaside/data#<a moz-do-not-send="true" href="log://opt/gemstone/log/%N%P.log#dbf%21/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf" class="">log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf

   File size =       23732.00 Megabytes
   Space available = 3478.58 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3478.58 Megabytes

and after 

Extent #1
-----------
   Filename = !TCP@localhost#dir:/opt/gemstone/product/seaside/data#<a moz-do-not-send="true" href="log://opt/gemstone/log/%N%P.log#dbf%21/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf" class="">log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf

   File size =       23732.00 Megabytes
   Space available = 3476.47 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3476.47 Megabytes



I await further instructions. 

Best,

Larry





By the time we exchange emails, the vote should have a chance to complete this time... but I want to see the results of the MFC and File SIze Report before deciding what to do next ...

Dale

On 03/30/2015 07:30 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Well, I went though the process as described below, but have not see my extent shrink appreciably, so I am puzzled. 
Here is the screenshot after the mark for collection. Do I have to do something to reclaim the dead objects? Does the maintenance gem need to be run?


<Mail Attachment.png>

After the ObjectLog init, and mark, I did a restore into a fresh extent.

Here is the size of the new extent vs the old, saved extent:

<Mail Attachment.png>



Thoughts?

Larry



On Mar 25, 2015, at 2:15 PM, Dale Henrichs <[hidden email]> wrote:

Okay here's the sequence of steps that I think you should take:

  1. expire all of your sessions:

  | expired |
  Transcript cr; show: 'Unregistering...' , DateAndTime now printString.
  expired := WABasicDevelopment reapSeasideCache.
  expired > 0
    ifTrue: [ (ObjectLogEntry trace: 'MTCE: expired sessions' object: expired) addToLog ].
  Transcript cr; show: '...Expired: ' , expired printString , ' sessions.'.
  System commitTransactions

  2. initalize your object log

  3. run MFC

  [
  System abortTransaction.
  SystemRepository markForCollection ]
    on: Warning
    do: [ :ex |
      Transcript
        cr;
        show: ex description.
      ex resume ]

  4. Then do a backup and restore ... you can use GemTools to do the restore,
      but then you should read the SysAdmin docs[1] for instructions to do the restore
      (I've enclosed link to 3.2 docs, but the procedure and commands should pretty
      much be the same, but it's best to look up the docs for your GemStone version[2]
      and follow those instructions)

As I mentioned earlier, it will probably take a while for each of these operations to complete (object log will be fast and the backup will be fast, if the mfc tosses out the majority of your data) and it is likely that the repository will grow some more during the process (hard to predict this one, tho).

Step 1 will touch every session and every continuation so it is hard to say what percent of the objects are going to be touched (the expensive part), still there are likely to be a lot of those puppies and they will have to be read from disk into the SPC ...

Step 3. is going scan all of the live objects and again it hard to predict exactly how expensive it will be ...

Dale

[1] http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/GS64-SysAdmin-3.2.htm
[2] http://gemtalksystems.com/techsupport/resources/

On 3/25/15 10:18 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Thanks for the help. I’m a terrible system admin when it comes to maintaining a system with one user, LOL. 

  I’m not running the maintenance VM and I haven’t been doing regular mark for collects. 

  I’m trying to do a fullBackupTo: at the moment, well see if I get through that. Should I have done a markForCollection before the full backup? 

  I’ll also try the ObjectLog trick. 

  I guess I need to start from a fresh extent, as you said, and the extent file will not shrink. I’m at 48% of my available disk space but it does seem slower than usual. 

  
Best, 

Larry


  
On Mar 25, 2015, at 12:58 PM, Dale Henrichs via Glass <[hidden email]> wrote:

Lawrence,

Are you doing regular Mark for collects? Are you running the maintenance vm along with you seaside servers?

Seaside produces persistent garbage (persistent session state that eventually times out) when it processes requests so if you do not run the maintenance vm the sessions are not expired and if you do not run mfc regularly the expired sessions are not cleaned up ...

Another source of growth could be the Object Log ... (use `ObjectLogEntry initalize` to efficiently reset the Object Log ... pay attention to the mispelling ... thats another story). If you are getting continuations saved to the object log, the stacks that are saved, can hang onto a lot of session state, that even though expired will not be garbage collected because of references from the continuation in the object log keep it alive ...

The best way to shrink your extent (once we understand why it is growing) is to do a backup and then restore into a virgin extent ($GEMSTONE/bin/extent0.seaside.dbf)...

Dale

On 3/25/15 8:34 AM, Lawrence Kellogg via Glass wrote:
Well, Amazon sent me a note that they are having hardware trouble on my instance, so they shut it down. It looks like they’re threatening to take the thing offline permanently so I’m trying to save my work with an AMI and move it somewhere else, if I have to.

I finally got GemStone/Seaside back up and running and noticed these lines in the Seaside log file. This kind of message has been appearing about once a day for weeks. Is this normal? 

--- 03/07/2015 02:44:14 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22528 megabytes.
    Repository has grown to 22528 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22544 megabytes.
    Repository has grown to 22544 megabytes.

--- 03/08/2015 03:31:45 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22560 megabytes.
    Repository has grown to 22560 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22576 megabytes.
    Repository has grown to 22576 megabytes.

--- 03/10/2015 03:19:34 AM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22592 megabytes.
    Repository has grown to 22592 megabytes.

--- 03/10/2015 03:46:39 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22608 megabytes.
    Repository has grown to 22608 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22624 megabytes.
    Repository has grown to 22624 megabytes.


My extent has now grown to 

-rw------- 1 seasideuser seasideuser 23735566336 Mar 25 15:31 extent0.dbf


I don’t get a lot of traffic so I’m a little surprised at the growth. Should I try to shrink the extent?

I suppose I should also do a SystemRepository backup, if I can remember the commands. 

Best, 

Larry




_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass
Reply | Threaded
Open this post in threaded view
|

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Okay,

This is good information because it does give us some clues ... I'll look into the RcCounterElement ... some of the Rc Collections can hold onto data with the goal of eliminating/reducing conflicts and that appears to be the case here ...

Dale

On 04/01/2015 07:05 PM, Lawrence Kellogg wrote:

On Mar 31, 2015, at 12:41 PM, Dale Henrichs <[hidden email]> wrote:

Larry,

I'm just going over the old ground again, in case we missed something obvious ... I would hate to spend several more days digging into this only to find that an initial step hadn't completed as expected ...

So it looks like the object log is clear. Next I'd like to double check and make sure that the session state has been expired ...

So let's verify that `UserGlobals at: #'ExpiryCleanup'` is no longer present, and I'd like to run the WABasicDevelopment reapSeasideCache one more time for good luck.


Yes, the collection at UserGlobals at: #ExpiryCleanup is empty. 

I ran the reapSeasideCache code again.
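For completeness, both checks can be done from a GemTools workspace with something like this (assuming #ExpiryCleanup is the UserGlobals key referred to above):

  "Answers nil if the #ExpiryCleanup entry is gone; otherwise answers whatever is still registered"
  UserGlobals at: #ExpiryCleanup ifAbsent: [ nil ].

  "Re-run the reap; the result is the number of sessions expired this time around"
  WABasicDevelopment reapSeasideCache.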



Assuming that neither of those turns up anything of use, the next step is to find out what's hanging onto the unwanted objects ...

Since I think we've covered the known "object hogs" in the Seaside framework, there are a number of other persistent caches in GLASS that might as well be cleared out. You can use the workspace here[1] to clean them up ... I don't think that these caches should be holding onto 23G of objects, but run an MFC afterwards to be safe ...


I cleared the caches. 

I ran another MFC




At this point there's basically two directions that we can take:

  1. Top down. Start inspecting the data structures in your application and look
      for suspicious collections/objects that could be hanging onto objects above and
      beyond those absolutely needed.

  2. Bottom up. Scan your recent backup and get an instance count report[2] that
      will tell you what class of object is clogging up your database .... Perhaps you'll
      recognize a big runner or two and know where to look to drop the references.
      If not, we'll have to pick a suspicious class, list the instances of that class, and then
      use Repository>>listReferences: to work our way back to a known root and then
      NUKE THE SUCKER :)


Ok, here is my instance count report. RcCounterElement is a huge winner here, I have no idea why. #63 PracticeJournalLoginTask and #65 PracticeJournalSession are coming up a lot, so perhaps these are being held onto somewhere.

1 338955617 RcCounterElement
2 17607121 RcCollisionBucket
3 7683895 Association
4 2142624 String
5 2126557 WAValueHolder
6 1959784 VariableContext
7 1629389 CollisionBucket
8 1464171 Dictionary
9 1339617 KeyValueDictionary
10 1339616 Set
11 1243135 OrderedCollection
12 1116296 Array
13 951872 ComplexBlock
14 943639 ComplexVCBlock
15 781212 IdentityDictionary
16 673104 IdentityCollisionBucket
17 666407 WAUserConfiguration
18 664701 WAAttributeSearchContext
19 338617 RcCounter
20 338617 WARcLastAccessEntry
21 332017 RcKeyValueDictionary
22 230240 WAValueCallback
23 226002 WARequestFields
24 226002 WAUrl
25 223641 GRSmallDictionary
26 221821 GRDelayedSend
27 221821 GRUnboundMessage
28 220824 GsStackBuffer
29 219296 WAImageCallback
30 187813 Date
31 176258 MCMethodDefinition
32 146263 WAActionCallback
33 113114 WARenderCanvas
34 113039 WAMimeType
35 113003 WADocumentHandler
36 113003 WAMimeDocument
37 113001 WARenderVisitor
38 113001 WAActionPhaseContinuation
39 113001 WACallbackRegistry
40 113001 WARenderingGuide
41 113001 WARenderContext
42 112684 WASnapshot
43 110804 IdentityBag
44 110720 TransientValue
45 110710 WAToolDecoration
46 110672 TransientMutex
47 110670 WAGemStoneMutex
48 110670 WARcLastAccessExpiryPolicy
49 110670 WACache
50 110670 WANoReapingStrategy
51 110670 WACacheMissStrategy
52 110670 WANotifyRemovalAction
53 110640 WATimingToolFilter
54 110640 WADeprecatedToolFilter
55 110489 WAAnswerHandler
56 110422 WADelegation
57 110412 WAPartialContinuation
58 110412 GsProcess
59 109773 UserPersonalInformation
60 109712 Student
61 109295 WATaskVisitor
62 109285 UserLoginView
63 109285 PracticeJournalLoginTask
64 109259 WAValueExpression
65 109215 PracticeJournalSession
66 56942 Time
67 54394 GsMethod
68 53207 MCVersionInfo
69 53207 UUID
70 45927 MethodVersionRecord
71 41955 MethodBookExercise
72 37223 Symbol
73 29941 MCInstanceVariableDefinition
74 21828 MCClassDefinition
75 19291 SymbolAssociation
76 18065 PracticeDay
77 17218 GsMethodDictionary
78 16617 MusicalPiece
79 16609 SymbolSet
80 11160 FreeformExercise
81 8600 SymbolDictionary
82 7537 DateAndTime
83 6812 Duration
84 6288 Month
85 6288 PracticeMonth
86 4527 WAHtmlAttributes
87 4390 DateTime
88 4247 Metaclass
89 4190 WAGenericTag
90 4142 SimpleBlock
91 4136 WATableColumnTag
92 4136 WACheckboxTag
93 4029 Composer
94 3682 RcIdentityBag
95 3428 ClassHistory
96 3010 PracticeSession
97 2185 MCClassVariableDefinition
98 2017 CanonStringBucket
99 1986 MethodBook
100 1974 WARenderPhaseContinuation
101 1965 PurchaseOptionInformation
102 1843 AmazonPurchase
103 1796 GsDocText
104 1513 GsClassDocumentation
105 1508 209409
106 1425 WASession
107 1218 UserInformationInterface
108 1134 WAValuesCallback
109 1125 WACancelActionCallback
110 751 DepListBucket
111 738 Pragma
112 716 LessonTaskRecording
113 693 UserForgotPasswordView
114 629 MusicalPieceRepertoireItem
115 524 PracticeYear
116 524 Year
117 483 MCOrganizationDefinition
118 480 Repertoire
119 467 MCPackage
120 440 MultiplePageDisplayView
121 403 MethodBookExerciseRepertoireItem
122 352 UserCalendar
123 334 MetacelloValueHolderSpec
124 333 MCVersion
125 333 MCSnapshot
126 313 TimeZoneTransition
127 307 Color
128 269 NumberGenerator
129 216 UserCommunityInformation
130 206 IdentitySet
131 200 RcQueueSessionComponent
132 199 FreeformExerciseRepertoireItem
133 191 WAHtmlCanvas
134 187 PackageInfo
135 182 InvariantArray
136 176 MCRepositoryGroup
137 175 MCWorkingCopy
138 175 MCWorkingAncestry
139 157 PracticeSessionInputView
140 149 MetacelloPackageSpec
141 139 MetacelloRepositoriesSpec
142 132 WAMetaElement
143 131 MCClassInstanceVariableDefinition
144 117 MetacelloMergeMemberSpec
145 106 YouTubeVideoResource
146 101 MusicalPieceRepertoireItemInputView
147 99 UserCommentsView
148 96 LessonTasksView
149 96 LessonTaskView
150 94 WATableTag
151 91 MetacelloMCVersion
152 87 PracticeSessionView
153 81 SortedCollection
154 78 MetacelloMCVersionSpec
155 78 MetacelloVersionNumber
156 78 MetacelloPackagesSpec
157 77 DateRange
158 77 PracticeSessionsView
159 70 MethodBooksView
160 67 UserCalendarView
161 67 PracticeJournalMiniCalendar
162 66 PracticeDayView
163 65 MetacelloAddMemberSpec
164 64 WATextInputTag
165 61 Teacher
166 61 MCPoolImportDefinition
167 61 MCHttpRepository
168 60 CheckScreenNameAvailability
169 58 UserRepertoireItemsView
170 58 UserRepertoireView
171 58 PrivateLesson
172 57 UserRepertoireItemsSummaryView
173 53 MetacelloMCProjectSpec
174 53 MetacelloProjectReferenceSpec
175 52 WAListAttribute
176 48 TimedActivitiesInformationServer
177 46 PracticeSessionTemplate
178 46 WriteStream
179 44 WAFormTag
180 39 UserInstrumentsInputView
181 39 MetacelloRepositorySpec
182 35 UserInstrumentsInputViewGenerator
183 35 CreateLessonTaskRecordingInterface
184 32 WASelectTag
185 32 WADateInput
186 30 WAApplication
187 30 UserComment
188 30 WAExceptionFilter
189 29 WADispatchCallback
190 29 WARadioGroup
191 28 DecimalFloat
192 27 JSStream
193 26 MethodBookExerciseRepertoireItemInputView
194 26 WAStringAttribute
195 24 WAOpeningConditionalComment
196 24 WAScriptElement
197 24 WAClosingConditionalComment
198 24 PracticeSessionTemplateInputView
199 24 WALinkElement
200 23 UserInformationView


Larry



Dale

[1] https://code.google.com/p/glassdb/wiki/ClearPersistentCaches
[2] https://programminggems.wordpress.com/2009/12/15/scanbackup-2/
On 03/31/2015 05:35 AM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 6:24 PM, Dale Henrichs <[hidden email]> wrote:

The initial MFC gave you (pre-backup):

  390,801,691 live objects with 23,382,898 dead

The second MFC gave you (post-backup):

  391,007,811 live objects with 107 dead

Which means that we did not gain nearly as much as anticipated by cleaning up the seaside session state and object log ... so something else is hanging onto a big chunk of objects ...

So yes at this point there is no need to consider a backup and restore to shrink extents until we can free up some more objects ...

I've got to head out on an errand right now, so I can't give you any detailed pointers to the techniques to use for finding the nasty boy that is hanging onto the "presumably dead objects" ...

I am a bit suspicious that the Object log might still be alive and kicking, so I think you should verify by inspecting the ObjectLog collections ... poke around on the class side ... if you find a big collection (and it blows up your TOC if you try to look at it), then look again at the class-side methods and make sure that you nuke the RCQueue and the OrderedCollection .... close down/logout your vms, and then run another mfc to see if you gained any ground …


Well, the ObjectLog collection on the class side of ObjectLogEntry is empty, and the ObjectQueue class variable has: 

<Mail Attachment.png>


Is it necessary to reinitialize the ObjectQueue?

Is there some report I can run that will tell me what is holding onto so much space?

Best,

Larry



Dale

On 03/30/2015 02:57 PM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 12:28 PM, Dale Henrichs <[hidden email]> wrote:

Okay,

I guess you made it through the session expirations okay, and according to the MFC results it does look like you did get rid of a big chunk of objects... Presumably the backup was made before the vote on the possible dead was finished, so the backup would not have been able to skip all of the dead objects (until the vote was completed) .... there's also an outside chance that the vm used to expire the sessions would have voted down some of the possible dead if it was still logged in when the backup was made ...

So we need to find out what's going on in the new extent ... so do another mfc and send me the results


Ok, I made it through another mark for collection and here is the result:

<Mail Attachment.png>




Am I wrong in thinking that the file size of the extent will not shrink? It certainly has not shrunk much. 





 In the new extent, run the MFC again, and provide me with the results ... include an `Admin>>DoIt>>File Size Report`. Then logout of GemTools and stop/start any other seaside servers or maintenance vms that might be running ...


Here is the file size report before the mark for collection 

Extent #1
-----------
   Filename = !TCP@localhost#dir:/opt/gemstone/product/seaside/data#log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf

   File size =       23732.00 Megabytes
   Space available = 3478.58 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3478.58 Megabytes

and after 

Extent #1
-----------
   Filename = !TCP@localhost#dir:/opt/gemstone/product/seaside/data#log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf

   File size =       23732.00 Megabytes
   Space available = 3476.47 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3476.47 Megabytes



I await further instructions. 

Best,

Larry





By the time we exchange emails, the vote should have a chance to complete this time... but I want to see the results of the MFC and File Size Report before deciding what to do next ...

Dale

_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass
Reply | Threaded
Open this post in threaded view
|

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
In reply to this post by GLASS mailing list
Larry,

Here's a story ... and I don't quite know where it will go, but

RcCounterElements are used in RcCounter.
RcCounter is used in WARcLastAccessExpiryPolicy.
WARcLastAccessExpiryPolicy is used in WASession.

We have 1425 WASession instances, but we have 110670 WARcLastAccessExpiryPolicy instances ...

Hmmm, we have 110412 GsProcess instances ... that plus 110412 WAPartialContinuation instances indicates that we've got some continuations stuck somewhere ... the object log would be the most obvious place, but we've cleared out the object log ...

There are no object log artifacts, so I don't think that is the culprit; besides, the debug continuations are different animals ... this really looks like we've been accumulating a bunch of session instances somewhere ...
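A back-of-the-envelope check on the counts in the report (plain arithmetic on the numbers above, nothing measured separately):

  338955617 // 338617.   "exactly 1001 RcCounterElement instances per RcCounter"
  110670 - 1425.         "109245 -- and the report shows 109215 PracticeJournalSession
                          instances, so the extra expiry policies line up almost exactly
                          with retained application sessions"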

More digging ...

Dale

_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass
Reply | Threaded
Open this post in threaded view
|

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
In reply to this post by GLASS mailing list
The number of counter elements suggests you should perform some maintenance on your RcCounters.

There is an instance method #cleanupCounter which has the following comment:
"For sessions that are not logged in, centralize the individual session
 element's values to the global session element (at index 1).  This may cause
 concurrency conflict if another session performs this operation."

I believe the counter manages the potential conflict by holding a value per session. You may have a counter element for every session there ever was!
(Although, I find it hard to believe you have a thousand sessions per counter and also have 338,617 counters!)

You should track down the reference paths for a sampling of those counters and find out what's holding on to them.
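One way to start that tracing is with the repository scans Dale mentioned earlier (listInstances: and listReferences:). This is only a sketch: the sample size and temporaries are arbitrary, both scans walk the whole repository (so they will be slow on a 23G extent), and the exact return format should be checked in the Programming Guide for your GemStone version.

  | counters sample |
  "Gather the RcCounter instances, then ask the repository what references a small sample of them"
  counters := (SystemRepository listInstances: (Array with: RcCounter)) first.
  sample := counters copyFrom: 1 to: (10 min: counters size).
  SystemRepository listReferences: sample
  "inspect the result in GemTools; each entry lists the objects referencing the
   corresponding counter in the sample, which can be walked back toward a persistent root"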


On Thu, Apr 2, 2015 at 8:32 AM, Dale Henrichs via Glass <[hidden email]> wrote:
Okay,

This is good information because it does give us some clues ... I'll look into the RcCounterElement ... some of the Rc Collections can hold onto data with the goal of eliminating/reducing conflicts and that appears to be the case here ...

Dale

On 04/01/2015 07:05 PM, Lawrence Kellogg wrote:

On Mar 31, 2015, at 12:41 PM, Dale Henrichs <[hidden email]> wrote:

Larry,

I'm just going over the old ground again, in case we missed something obvious ... I would hate to spend several more days digging into this only to find that an initial step hadn't completed as expected ...

So it looks like the object log is clear. Next I'd like to double check and make sure that the session state has been expired ...

So let's verify that `UserGlobals at: #'ExpiryCleanup` is no longer present and I'd like to run the WABasicDevelopment reapSeasideCache one more time for good luck.


Yes, the collection at UserGlobals at: #ExpiryCleanup is empty. 

I ran the reapSeasideCache code again.



Assuming that neither of those turn up anything of use, the next step is to find out what's hanging onto the unwanted objects ...

Since I think we've covered the known "object hogs" in the Seaside framework, there are a number of other persistent caches in GLASS, that might as well be cleared out. You can use the workspace here[1] to clean them up ... I don't think that these caches should be holding onto 23G of objects, but run an MFC aftwards to be safe ...


I cleared the caches. 

I ran another MFC




At this point there's basically two directions that we can take:

  1. Top down. Start inspecting the data structures in your application and look
      for suspicious collections/objects that could be hanging onto objects above and
      beyond those absolutely needed.

  2. Bottom up. Scan your recent backup and get an instance count report[2] that
      will tell you what class of object is clogging up your data base .... Perhaps you'll
      recognize a big runner or two and know where to look to drop the references.
     If no, we'll have to pick a suspicious class, list the instances of that class, and then
     Repository>>listReferences:  to work our way back to a known root and then
     NUKE THE SUCKER:)


Ok, here is my instance count report. RCCounterElement is a huge winner here, I have no idea why. #63 PracticeJournalLoginTask  and #65 PracticeJournalSession is coming up a lot, so perhaps these are being held onto somewhere.

1 338955617 RcCounterElement
2 17607121 RcCollisionBucket
3 7683895 Association
4 2142624 String
5 2126557 WAValueHolder
6 1959784 VariableContext
7 1629389 CollisionBucket
8 1464171 Dictionary
9 1339617 KeyValueDictionary
10 1339616 Set
11 1243135 OrderedCollection
12 1116296 Array
13 951872 ComplexBlock
14 943639 ComplexVCBlock
15 781212 IdentityDictionary
16 673104 IdentityCollisionBucket
17 666407 WAUserConfiguration
18 664701 WAAttributeSearchContext
19 338617 RcCounter
20 338617 WARcLastAccessEntry
21 332017 RcKeyValueDictionary
22 230240 WAValueCallback
23 226002 WARequestFields
24 226002 WAUrl
25 223641 GRSmallDictionary
26 221821 GRDelayedSend
27 221821 GRUnboundMessage
28 220824 GsStackBuffer
29 219296 WAImageCallback
30 187813 Date
31 176258 MCMethodDefinition
32 146263 WAActionCallback
33 113114 WARenderCanvas
34 113039 WAMimeType
35 113003 WADocumentHandler
36 113003 WAMimeDocument
37 113001 WARenderVisitor
38 113001 WAActionPhaseContinuation
39 113001 WACallbackRegistry
40 113001 WARenderingGuide
41 113001 WARenderContext
42 112684 WASnapshot
43 110804 IdentityBag
44 110720 TransientValue
45 110710 WAToolDecoration
46 110672 TransientMutex
47 110670 WAGemStoneMutex
48 110670 WARcLastAccessExpiryPolicy
49 110670 WACache
50 110670 WANoReapingStrategy
51 110670 WACacheMissStrategy
52 110670 WANotifyRemovalAction
53 110640 WATimingToolFilter
54 110640 WADeprecatedToolFilter
55 110489 WAAnswerHandler
56 110422 WADelegation
57 110412 WAPartialContinuation
58 110412 GsProcess
59 109773 UserPersonalInformation
60 109712 Student
61 109295 WATaskVisitor
62 109285 UserLoginView
63 109285 PracticeJournalLoginTask
64 109259 WAValueExpression
65 109215 PracticeJournalSession
66 56942 Time
67 54394 GsMethod
68 53207 MCVersionInfo
69 53207 UUID
70 45927 MethodVersionRecord
71 41955 MethodBookExercise
72 37223 Symbol
73 29941 MCInstanceVariableDefinition
74 21828 MCClassDefinition
75 19291 SymbolAssociation
76 18065 PracticeDay
77 17218 GsMethodDictionary
78 16617 MusicalPiece
79 16609 SymbolSet
80 11160 FreeformExercise
81 8600 SymbolDictionary
82 7537 DateAndTime
83 6812 Duration
84 6288 Month
85 6288 PracticeMonth
86 4527 WAHtmlAttributes
87 4390 DateTime
88 4247 Metaclass
89 4190 WAGenericTag
90 4142 SimpleBlock
91 4136 WATableColumnTag
92 4136 WACheckboxTag
93 4029 Composer
94 3682 RcIdentityBag
95 3428 ClassHistory
96 3010 PracticeSession
97 2185 MCClassVariableDefinition
98 2017 CanonStringBucket
99 1986 MethodBook
100 1974 WARenderPhaseContinuation
101 1965 PurchaseOptionInformation
102 1843 AmazonPurchase
103 1796 GsDocText
104 1513 GsClassDocumentation
105 1508 209409
106 1425 WASession
107 1218 UserInformationInterface
108 1134 WAValuesCallback
109 1125 WACancelActionCallback
110 751 DepListBucket
111 738 Pragma
112 716 LessonTaskRecording
113 693 UserForgotPasswordView
114 629 MusicalPieceRepertoireItem
115 524 PracticeYear
116 524 Year
117 483 MCOrganizationDefinition
118 480 Repertoire
119 467 MCPackage
120 440 MultiplePageDisplayView
121 403 MethodBookExerciseRepertoireItem
122 352 UserCalendar
123 334 MetacelloValueHolderSpec
124 333 MCVersion
125 333 MCSnapshot
126 313 TimeZoneTransition
127 307 Color
128 269 NumberGenerator
129 216 UserCommunityInformation
130 206 IdentitySet
131 200 RcQueueSessionComponent
132 199 FreeformExerciseRepertoireItem
133 191 WAHtmlCanvas
134 187 PackageInfo
135 182 InvariantArray
136 176 MCRepositoryGroup
137 175 MCWorkingCopy
138 175 MCWorkingAncestry
139 157 PracticeSessionInputView
140 149 MetacelloPackageSpec
141 139 MetacelloRepositoriesSpec
142 132 WAMetaElement
143 131 MCClassInstanceVariableDefinition
144 117 MetacelloMergeMemberSpec
145 106 YouTubeVideoResource
146 101 MusicalPieceRepertoireItemInputView
147 99 UserCommentsView
148 96 LessonTasksView
149 96 LessonTaskView
150 94 WATableTag
151 91 MetacelloMCVersion
152 87 PracticeSessionView
153 81 SortedCollection
154 78 MetacelloMCVersionSpec
155 78 MetacelloVersionNumber
156 78 MetacelloPackagesSpec
157 77 DateRange
158 77 PracticeSessionsView
159 70 MethodBooksView
160 67 UserCalendarView
161 67 PracticeJournalMiniCalendar
162 66 PracticeDayView
163 65 MetacelloAddMemberSpec
164 64 WATextInputTag
165 61 Teacher
166 61 MCPoolImportDefinition
167 61 MCHttpRepository
168 60 CheckScreenNameAvailability
169 58 UserRepertoireItemsView
170 58 UserRepertoireView
171 58 PrivateLesson
172 57 UserRepertoireItemsSummaryView
173 53 MetacelloMCProjectSpec
174 53 MetacelloProjectReferenceSpec
175 52 WAListAttribute
176 48 TimedActivitiesInformationServer
177 46 PracticeSessionTemplate
178 46 WriteStream
179 44 WAFormTag
180 39 UserInstrumentsInputView
181 39 MetacelloRepositorySpec
182 35 UserInstrumentsInputViewGenerator
183 35 CreateLessonTaskRecordingInterface
184 32 WASelectTag
185 32 WADateInput
186 30 WAApplication
187 30 UserComment
188 30 WAExceptionFilter
189 29 WADispatchCallback
190 29 WARadioGroup
191 28 DecimalFloat
192 27 JSStream
193 26 MethodBookExerciseRepertoireItemInputView
194 26 WAStringAttribute
195 24 WAOpeningConditionalComment
196 24 WAScriptElement
197 24 WAClosingConditionalComment
198 24 PracticeSessionTemplateInputView
199 24 WALinkElement
200 23 UserInformationView


Larry



Dale

[1] https://code.google.com/p/glassdb/wiki/ClearPersistentCaches
[2] https://programminggems.wordpress.com/2009/12/15/scanbackup-2/
On 03/31/2015 05:35 AM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 6:24 PM, Dale Henrichs <[hidden email]> wrote:

The initial MFC gave you (pre-backup):

  390,801,691 live objects with 23,382,898 dead

The second MFC gave you (post-backup):

  391,007,811 live objects with 107 dead

Which means that we did not gain nearly as much as anticipated by cleaning up the seaside session state and object log ... so something else is hanging onto a big chunk of objects ...

So yes at this point there is no need to consider a backup and restore to shrink extents until we can free up some more objects ...

I've got to head out on an errand right now, so I can't give you any detailed pointers, to the techniques to use for finding the nasty boy that is hanging onto the "presumably dead objects" ...

I am a bit suspicious that the Object log might still be alive an kicking, so I think you should verify by inspecting the ObjectLog collections ... poke around on the class side ... if you find a big collection (and it blows up your TOC if you try to look at it), then look again at the class-side methods and make sure that you nuke the RCQueue and the OrderedCollection .... close down/logout your vms, and then run another mfc to see if you gained any ground …


Well, the ObjectLog collection on the class side of ObjectLogEntry is empty, and the ObjectQueue class variable has: 

<Mail Attachment.png>


Is it necessary to reinitialize the ObjectQueue?

Is there some report I can run that will tell me what is holding onto so much space?

Best,

Larry



Dale

On 03/30/2015 02:57 PM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 12:28 PM, Dale Henrichs <[hidden email]> wrote:

Okay,

I guess you made it through the session expirations okay and according to the MFC results it does look like you did get rid of a big chunk of objects... Presumably the backup was made before the vote on the possible dead was finished so the backup would not have been able to skip all of the dead objects (until the vote was completed) .... there 's also an outside chance that the vm used to expire the sessions would have voted down some of the possible dead if it was still logged in when the backup was made ...

So we need to find out what's going on in the new extent ... so do another mfc and send me the results


Ok, I made it through another mark for collection and here is the result:

<Mail Attachment.png>




Am I wrong in thinking that the file size of the extent will not shrink? It certainly has not shrunk much. 





 In the new extent, run the MFC again, and provide me with the results ... include an `Admin>>DoIt>>File Size Report`. Then logout of GemTools and stop/start any other seaside servers or maintenance vms that might be running ...


Here is the file size report before the mark for collection 

Extent #1
-----------

   File size =       23732.00 Megabytes
   Space available = 3478.58 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3478.58 Megabytes

and after 

Extent #1
-----------

   File size =       23732.00 Megabytes
   Space available = 3476.47 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3476.47 Megabytes



I await further instructions. 

Best,

Larry





By the time we exchange emails, the vote should have a chance to complete this time... but I want to see the results of the MFC and File SIze Report before deciding what to do next ...

Dale

On 03/30/2015 07:30 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Well, I went though the process as described below, but have not see my extent shrink appreciably, so I am puzzled. 
Here is the screenshot after the mark for collection. Do I have to do something to reclaim the dead objects? Does the maintenance gem need to be run?


<Mail Attachment.png>

After the ObjectLog init, and mark, I did a restore into a fresh extent.

Here is the size of the new extent vs the old, saved extent:

<Mail Attachment.png>



Thoughts?

Larry



On Mar 25, 2015, at 2:15 PM, Dale Henrichs <[hidden email]> wrote:

Okay here's the sequence of steps that I think you should take:

  1. expire all of your sessions:

  | expired |
  Transcript cr; show: 'Unregistering...' , DateAndTime now printString.
  expired := WABasicDevelopment reapSeasideCache.
  expired > 0
    ifTrue: [ (ObjectLogEntry trace: 'MTCE: expired sessions' object: expired) addToLog ].
  Transcript cr; show: '...Expired: ' , expired printString , ' sessions.'.
  System commitTransactions

  2. initalize your object log

  3. run MFC

  [
  System abortTransaction.
  SystemRepository markForCollection ]
    on: Warning
    do: [ :ex |
      Transcript
        cr;
        show: ex description.
      ex resume ]

  4. Then do a backup and restore ... you can use GemTools to do the restore,
      but then you should read the SysAdmin docs[1] for instructions to do the restore
      (I've enclosed link to 3.2 docs, but the procedure and commands should pretty
      much be the same, but it's best to look up the docs for your GemStone version[2]
      and follow those instructions)

As I mentioned earlier, it will probably take a while for each of these operations to complete (object log will be fast and the backup will be fast, if the mfc tosses out the majority of your data) and it is likely that the repository will grow some more during the process (hard to predict this one, tho).

Step 1 will touch every session and every continuation so it is hard to say what percent of the objects are going to be touched (the expensive part), still there are likely to be a lot of those puppies and they will have to be read from disk into the SPC ...

Step 3. is going scan all of the live objects and again it hard to predict exactly how expensive it will be ...

Dale

[1] http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/GS64-SysAdmin-3.2.htm
[2] http://gemtalksystems.com/techsupport/resources/

On 3/25/15 10:18 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Thanks for the help. I’m a terrible system admin when it comes to maintaining a system with one user, LOL. 

  I’m not running the maintenance VM and I haven’t been doing regular mark for collects. 

  I’m trying to do a fullBackupTo: at the moment, well see if I get through that. Should I have done a markForCollection before the full backup? 

  I’ll also try the ObjectLog trick. 

  I guess I need to start from a fresh extent, as you said, and the extent file will not shrink. I’m at 48% of my available disk space but it does seem slower than usual. 

  
Best, 

Larry


  
On Mar 25, 2015, at 12:58 PM, Dale Henrichs via Glass <[hidden email]> wrote:

Lawrence,

Are you doing regular Mark for collects? Are you running the maintenance vm along with you seaside servers?

Seaside produces persistent garbage (persistent session state that eventually times out) when it processes requests so if you do not run the maintenance vm the sessions are not expired and if you do not run mfc regularly the expired sessions are not cleaned up ...

Another source of growth could be the Object Log ... (use `ObjectLogEntry initalize` to efficiently reset the Object Log ... pay attention to the misspelling ... that's another story). If you are getting continuations saved to the object log, the stacks that are saved can hang onto a lot of session state; even though that state has expired, it will not be garbage collected, because the references from the continuations in the object log keep it alive ...

The best way to shrink your extent (once we understand why it is growing) is to do a backup and then restore into a virgin extent ($GEMSTONE/bin/extent0.seaside.dbf)...

Dale

On 3/25/15 8:34 AM, Lawrence Kellogg via Glass wrote:
Well, Amazon sent me a note that they are having hardware trouble on my instance, so they shut it down. It looks like they’re threatening to take the thing offline permanently so I’m trying to save my work with an AMI and move it somewhere else, if I have to.

I finally got Gemstone/Seaside back up and running and noticed these lines in the Seaside log file. These kind of messages go on once a day for weeks. Is this normal? 

--- 03/07/2015 02:44:14 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22528 megabytes.
    Repository has grown to 22528 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22544 megabytes.
    Repository has grown to 22544 megabytes.

--- 03/08/2015 03:31:45 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22560 megabytes.
    Repository has grown to 22560 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22576 megabytes.
    Repository has grown to 22576 megabytes.

--- 03/10/2015 03:19:34 AM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22592 megabytes.
    Repository has grown to 22592 megabytes.

--- 03/10/2015 03:46:39 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22608 megabytes.
    Repository has grown to 22608 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22624 megabytes.
    Repository has grown to 22624 megabytes.


My extent has now grown to 

-rw------- 1 seasideuser seasideuser 23735566336 Mar 25 15:31 extent0.dbf


I don’t get a lot of traffic so I’m a little surprised at the growth. Should I try to shrink the extent?

I suppose I should also do a SystemRepository backup, if I can remember the commands. 

Best, 

Larry




_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
On the one hand, Richard is correct, but on the other hand ... there is no good reason to have that many counters hanging around ... the RcCounters are only supposed to survive as long as a WASession, and that should normally be 10 minutes or so ... the counters are a side effect of the deeper "object leak"

Dale

On 04/02/2015 09:09 AM, Richard Sargent via Glass wrote:
The number of counter elements suggests you should perform some maintenance on your RcCounters.

There is an instance method #cleanupCounter which has the following comment:
"For sessions that are not logged in, centralize the individual session
 element's values to the global session element (at index 1).  This may cause
 concurrency conflict if another session performs this operation."

I believe the counter manages the potential conflict by holding a value per session. You may have a counter element for every session there ever was!
(Although, I find it hard to believe you have a thousand sessions per counter and also have 338,617 counters!)

You should track down the reference paths for a sampling of those counters and find out what's holding on to them.
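A minimal sketch of what such a cleanup pass could look like (assuming #cleanupCounter behaves as the comment above describes, and that an allInstances scan over the ~338k counters in the report below is tolerable; run it from a single gem, and batch the commits if one transaction gets too large):

  | counters |
  System abortTransaction.
  counters := RcCounter allInstances.
  counters do: [ :each | each cleanupCounter ].   "fold the per-session elements back into the global element"
  System commitTransaction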


On Thu, Apr 2, 2015 at 8:32 AM, Dale Henrichs via Glass <[hidden email]> wrote:
Okay,

This is good information because it does give us some clues ... I'll look into the RcCounterElement ... some of the Rc Collections can hold onto data with the goal of eliminating/reducing conflicts and that appears to be the case here ...

Dale

On 04/01/2015 07:05 PM, Lawrence Kellogg wrote:

On Mar 31, 2015, at 12:41 PM, Dale Henrichs <[hidden email]> wrote:

Larry,

I'm just going over the old ground again, in case we missed something obvious ... I would hate to spend several more days digging into this only to find that an initial step hadn't completed as expected ...

So it looks like the object log is clear. Next I'd like to double check and make sure that the session state has been expired ...

So let's verify that `UserGlobals at: #'ExpiryCleanup'` is no longer present, and I'd like to run the WABasicDevelopment reapSeasideCache one more time for good luck.
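A quick doIt for both checks (just a sketch; it assumes reapSeasideCache answers the number of sessions it expired, as in the earlier expiry script):

  System abortTransaction.
  Transcript cr; show: 'ExpiryCleanup still present: ' ,
    (UserGlobals includesKey: #'ExpiryCleanup') printString.
  Transcript cr; show: 'Expired this pass: ' ,
    WABasicDevelopment reapSeasideCache printString , ' sessions'.
  System commitTransaction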


Yes, the collection at UserGlobals at: #ExpiryCleanup is empty. 

I ran the reapSeasideCache code again.



Assuming that neither of those turns up anything of use, the next step is to find out what's hanging onto the unwanted objects ...

Since I think we've covered the known "object hogs" in the Seaside framework, there are a number of other persistent caches in GLASS that might as well be cleared out. You can use the workspace here[1] to clean them up ... I don't think that these caches should be holding onto 23G of objects, but run an MFC afterwards to be safe ...


I cleared the caches. 

I ran another MFC




At this point there's basically two directions that we can take:

  1. Top down. Start inspecting the data structures in your application and look
      for suspicious collections/objects that could be hanging onto objects above and
      beyond those absolutely needed.

  2. Bottom up. Scan your recent backup and get an instance count report[2] that
      will tell you what class of object is clogging up your database .... Perhaps you'll
      recognize a big runner or two and know where to look to drop the references.
     If not, we'll have to pick a suspicious class, list the instances of that class, and then
     use Repository>>listReferences: to work our way back to a known root and then
     NUKE THE SUCKER :) (see the sketch just below)
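
A rough sketch of that bottom-up walk, taking RcCounter from the report just below as the example suspect, and assuming Repository>>listInstances: and Repository>>listReferences: each answer one result array per element of their argument (check the SysAdmin docs for the exact protocol in your version):

  | instances sample referers |
  System abortTransaction.
  instances := (SystemRepository listInstances: (Array with: RcCounter)) first.
  sample := instances copyFrom: 1 to: (instances size min: 10).
  referers := SystemRepository listReferences: sample.
  1 to: sample size do: [ :i |
    Transcript cr; show: 'referenced from: ' ,
      ((referers at: i) collect: [ :r | r class name ]) printString ]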


Ok, here is my instance count report. RcCounterElement is a huge winner here, I have no idea why. #63 PracticeJournalLoginTask and #65 PracticeJournalSession are coming up a lot, so perhaps these are being held onto somewhere.

1 338955617 RcCounterElement
2 17607121 RcCollisionBucket
3 7683895 Association
4 2142624 String
5 2126557 WAValueHolder
6 1959784 VariableContext
7 1629389 CollisionBucket
8 1464171 Dictionary
9 1339617 KeyValueDictionary
10 1339616 Set
11 1243135 OrderedCollection
12 1116296 Array
13 951872 ComplexBlock
14 943639 ComplexVCBlock
15 781212 IdentityDictionary
16 673104 IdentityCollisionBucket
17 666407 WAUserConfiguration
18 664701 WAAttributeSearchContext
19 338617 RcCounter
20 338617 WARcLastAccessEntry
21 332017 RcKeyValueDictionary
22 230240 WAValueCallback
23 226002 WARequestFields
24 226002 WAUrl
25 223641 GRSmallDictionary
26 221821 GRDelayedSend
27 221821 GRUnboundMessage
28 220824 GsStackBuffer
29 219296 WAImageCallback
30 187813 Date
31 176258 MCMethodDefinition
32 146263 WAActionCallback
33 113114 WARenderCanvas
34 113039 WAMimeType
35 113003 WADocumentHandler
36 113003 WAMimeDocument
37 113001 WARenderVisitor
38 113001 WAActionPhaseContinuation
39 113001 WACallbackRegistry
40 113001 WARenderingGuide
41 113001 WARenderContext
42 112684 WASnapshot
43 110804 IdentityBag
44 110720 TransientValue
45 110710 WAToolDecoration
46 110672 TransientMutex
47 110670 WAGemStoneMutex
48 110670 WARcLastAccessExpiryPolicy
49 110670 WACache
50 110670 WANoReapingStrategy
51 110670 WACacheMissStrategy
52 110670 WANotifyRemovalAction
53 110640 WATimingToolFilter
54 110640 WADeprecatedToolFilter
55 110489 WAAnswerHandler
56 110422 WADelegation
57 110412 WAPartialContinuation
58 110412 GsProcess
59 109773 UserPersonalInformation
60 109712 Student
61 109295 WATaskVisitor
62 109285 UserLoginView
63 109285 PracticeJournalLoginTask
64 109259 WAValueExpression
65 109215 PracticeJournalSession
66 56942 Time
67 54394 GsMethod
68 53207 MCVersionInfo
69 53207 UUID
70 45927 MethodVersionRecord
71 41955 MethodBookExercise
72 37223 Symbol
73 29941 MCInstanceVariableDefinition
74 21828 MCClassDefinition
75 19291 SymbolAssociation
76 18065 PracticeDay
77 17218 GsMethodDictionary
78 16617 MusicalPiece
79 16609 SymbolSet
80 11160 FreeformExercise
81 8600 SymbolDictionary
82 7537 DateAndTime
83 6812 Duration
84 6288 Month
85 6288 PracticeMonth
86 4527 WAHtmlAttributes
87 4390 DateTime
88 4247 Metaclass
89 4190 WAGenericTag
90 4142 SimpleBlock
91 4136 WATableColumnTag
92 4136 WACheckboxTag
93 4029 Composer
94 3682 RcIdentityBag
95 3428 ClassHistory
96 3010 PracticeSession
97 2185 MCClassVariableDefinition
98 2017 CanonStringBucket
99 1986 MethodBook
100 1974 WARenderPhaseContinuation
101 1965 PurchaseOptionInformation
102 1843 AmazonPurchase
103 1796 GsDocText
104 1513 GsClassDocumentation
105 1508 209409
106 1425 WASession
107 1218 UserInformationInterface
108 1134 WAValuesCallback
109 1125 WACancelActionCallback
110 751 DepListBucket
111 738 Pragma
112 716 LessonTaskRecording
113 693 UserForgotPasswordView
114 629 MusicalPieceRepertoireItem
115 524 PracticeYear
116 524 Year
117 483 MCOrganizationDefinition
118 480 Repertoire
119 467 MCPackage
120 440 MultiplePageDisplayView
121 403 MethodBookExerciseRepertoireItem
122 352 UserCalendar
123 334 MetacelloValueHolderSpec
124 333 MCVersion
125 333 MCSnapshot
126 313 TimeZoneTransition
127 307 Color
128 269 NumberGenerator
129 216 UserCommunityInformation
130 206 IdentitySet
131 200 RcQueueSessionComponent
132 199 FreeformExerciseRepertoireItem
133 191 WAHtmlCanvas
134 187 PackageInfo
135 182 InvariantArray
136 176 MCRepositoryGroup
137 175 MCWorkingCopy
138 175 MCWorkingAncestry
139 157 PracticeSessionInputView
140 149 MetacelloPackageSpec
141 139 MetacelloRepositoriesSpec
142 132 WAMetaElement
143 131 MCClassInstanceVariableDefinition
144 117 MetacelloMergeMemberSpec
145 106 YouTubeVideoResource
146 101 MusicalPieceRepertoireItemInputView
147 99 UserCommentsView
148 96 LessonTasksView
149 96 LessonTaskView
150 94 WATableTag
151 91 MetacelloMCVersion
152 87 PracticeSessionView
153 81 SortedCollection
154 78 MetacelloMCVersionSpec
155 78 MetacelloVersionNumber
156 78 MetacelloPackagesSpec
157 77 DateRange
158 77 PracticeSessionsView
159 70 MethodBooksView
160 67 UserCalendarView
161 67 PracticeJournalMiniCalendar
162 66 PracticeDayView
163 65 MetacelloAddMemberSpec
164 64 WATextInputTag
165 61 Teacher
166 61 MCPoolImportDefinition
167 61 MCHttpRepository
168 60 CheckScreenNameAvailability
169 58 UserRepertoireItemsView
170 58 UserRepertoireView
171 58 PrivateLesson
172 57 UserRepertoireItemsSummaryView
173 53 MetacelloMCProjectSpec
174 53 MetacelloProjectReferenceSpec
175 52 WAListAttribute
176 48 TimedActivitiesInformationServer
177 46 PracticeSessionTemplate
178 46 WriteStream
179 44 WAFormTag
180 39 UserInstrumentsInputView
181 39 MetacelloRepositorySpec
182 35 UserInstrumentsInputViewGenerator
183 35 CreateLessonTaskRecordingInterface
184 32 WASelectTag
185 32 WADateInput
186 30 WAApplication
187 30 UserComment
188 30 WAExceptionFilter
189 29 WADispatchCallback
190 29 WARadioGroup
191 28 DecimalFloat
192 27 JSStream
193 26 MethodBookExerciseRepertoireItemInputView
194 26 WAStringAttribute
195 24 WAOpeningConditionalComment
196 24 WAScriptElement
197 24 WAClosingConditionalComment
198 24 PracticeSessionTemplateInputView
199 24 WALinkElement
200 23 UserInformationView


Larry



Dale

[1] https://code.google.com/p/glassdb/wiki/ClearPersistentCaches
[2] https://programminggems.wordpress.com/2009/12/15/scanbackup-2/
On 03/31/2015 05:35 AM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 6:24 PM, Dale Henrichs <[hidden email]> wrote:

The initial MFC gave you (pre-backup):

  390,801,691 live objects with 23,382,898 dead

The second MFC gave you (post-backup):

  391,007,811 live objects with 107 dead

Which means that we did not gain nearly as much as anticipated by cleaning up the seaside session state and object log ... so something else is hanging onto a big chunk of objects ...

So yes at this point there is no need to consider a backup and restore to shrink extents until we can free up some more objects ...

I've got to head out on an errand right now, so I can't give you any detailed pointers to the techniques to use for finding the nasty boy that is hanging onto the "presumably dead objects" ...

I am a bit suspicious that the Object log might still be alive and kicking, so I think you should verify by inspecting the ObjectLog collections ... poke around on the class side ... if you find a big collection (and it blows up your TOC if you try to look at it), then look again at the class-side methods and make sure that you nuke the RcQueue and the OrderedCollection .... close down/log out your vms, and then run another mfc to see if you gained any ground …
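
A sketch of that check; it assumes the class-side #objectLog accessor that GLASS builds of this era provide, so adjust the selector if your image differs:

  System abortTransaction.
  Transcript cr; show: 'object log size: ' , ObjectLogEntry objectLog size printString.
  ObjectLogEntry objectLog isEmpty
    ifFalse: [
      "reset the log and its queue; the selector really is spelled #initalize in GLASS"
      ObjectLogEntry initalize.
      System commitTransaction ]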


Well, the ObjectLog collection on the class side of ObjectLogEntry is empty, and the ObjectQueue class variable has: 

<Mail Attachment.png>


Is it necessary to reinitialize the ObjectQueue?

Is there some report I can run that will tell me what is holding onto so much space?

Best,

Larry



Dale

On 03/30/2015 02:57 PM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 12:28 PM, Dale Henrichs <[hidden email]> wrote:

Okay,

I guess you made it through the session expirations okay, and according to the MFC results it does look like you did get rid of a big chunk of objects... Presumably the backup was made before the vote on the possible dead was finished, so the backup would not have been able to skip all of the dead objects (until the vote was completed) .... there's also an outside chance that the vm used to expire the sessions would have voted down some of the possible dead if it was still logged in when the backup was made ...

So we need to find out what's going on in the new extent ... so do another mfc and send me the results


Ok, I made it through another mark for collection and here is the result:

<Mail Attachment.png>




Am I wrong in thinking that the file size of the extent will not shrink? It certainly has not shrunk much. 





 In the new extent, run the MFC again, and provide me with the results ... include an `Admin>>DoIt>>File Size Report`. Then logout of GemTools and stop/start any other seaside servers or maintenance vms that might be running ...


Here is the file size report before the mark for collection 

Extent #1
-----------

   File size =       23732.00 Megabytes
   Space available = 3478.58 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3478.58 Megabytes

and after 

Extent #1
-----------

   File size =       23732.00 Megabytes
   Space available = 3476.47 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3476.47 Megabytes



I await further instructions. 

Best,

Larry





By the time we exchange emails, the vote should have a chance to complete this time... but I want to see the results of the MFC and File Size Report before deciding what to do next ...

Dale

On 03/30/2015 07:30 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Well, I went though the process as described below, but have not see my extent shrink appreciably, so I am puzzled. 
Here is the screenshot after the mark for collection. Do I have to do something to reclaim the dead objects? Does the maintenance gem need to be run?


<Mail Attachment.png>

After the ObjectLog init, and mark, I did a restore into a fresh extent.

Here is the size of the new extent vs the old, saved extent:

<Mail Attachment.png>



Thoughts?

Larry



On Mar 25, 2015, at 2:15 PM, Dale Henrichs <[hidden email]> wrote:

Okay here's the sequence of steps that I think you should take:

  1. expire all of your sessions:

  | expired |
  Transcript cr; show: 'Unregistering...' , DateAndTime now printString.
  expired := WABasicDevelopment reapSeasideCache.
  expired > 0
    ifTrue: [ (ObjectLogEntry trace: 'MTCE: expired sessions' object: expired) addToLog ].
  Transcript cr; show: '...Expired: ' , expired printString , ' sessions.'.
  System commitTransactions

  2. initalize your object log

  3. run MFC

  [
  System abortTransaction.
  SystemRepository markForCollection ]
    on: Warning
    do: [ :ex |
      Transcript
        cr;
        show: ex description.
      ex resume ]

  4. Then do a backup and restore ... you can use GemTools to do the restore,
      but then you should read the SysAdmin docs[1] for instructions to do the restore
      (I've enclosed link to 3.2 docs, but the procedure and commands should pretty
      much be the same, but it's best to look up the docs for your GemStone version[2]
      and follow those instructions)

As I mentioned earlier, it will probably take a while for each of these operations to complete (object log will be fast and the backup will be fast, if the mfc tosses out the majority of your data) and it is likely that the repository will grow some more during the process (hard to predict this one, tho).

Step 1 will touch every session and every continuation so it is hard to say what percent of the objects are going to be touched (the expensive part), still there are likely to be a lot of those puppies and they will have to be read from disk into the SPC ...

Step 3. is going scan all of the live objects and again it hard to predict exactly how expensive it will be ...

Dale

[1] http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/GS64-SysAdmin-3.2.htm
[2] http://gemtalksystems.com/techsupport/resources/

On 3/25/15 10:18 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Thanks for the help. I’m a terrible system admin when it comes to maintaining a system with one user, LOL. 

  I’m not running the maintenance VM and I haven’t been doing regular mark for collects. 

  I’m trying to do a fullBackupTo: at the moment, well see if I get through that. Should I have done a markForCollection before the full backup? 

  I’ll also try the ObjectLog trick. 

  I guess I need to start from a fresh extent, as you said, and the extent file will not shrink. I’m at 48% of my available disk space but it does seem slower than usual. 

  
Best, 

Larry


  
On Mar 25, 2015, at 12:58 PM, Dale Henrichs via Glass <[hidden email]> wrote:

Lawrence,

Are you doing regular Mark for collects? Are you running the maintenance vm along with you seaside servers?

Seaside produces persistent garbage (persistent session state that eventually times out) when it processes requests so if you do not run the maintenance vm the sessions are not expired and if you do not run mfc regularly the expired sessions are not cleaned up ...

Another source of growth could be the Object Log ... (use `ObjectLogEntry initalize` to efficiently reset the Object Log ... pay attention to the mispelling ... thats another story). If you are getting continuations saved to the object log, the stacks that are saved, can hang onto a lot of session state, that even though expired will not be garbage collected because of references from the continuation in the object log keep it alive ...

The best way to shrink your extent (once we understand why it is growing) is to do a backup and then restore into a virgin extent ($GEMSTONE/bin/extent0.seaside.dbf)...

Dale

On 3/25/15 8:34 AM, Lawrence Kellogg via Glass wrote:
Well, Amazon sent me a note that they are having hardware trouble on my instance, so they shut it down. It looks like they’re threatening to take the thing offline permanently so I’m trying to save my work with an AMI and move it somewhere else, if I have to.

I finally got Gemstone/Seaside back up and running and noticed these lines in the Seaside log file. These kind of messages go on once a day for weeks. Is this normal? 

--- 03/07/2015 02:44:14 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22528 megabytes.
    Repository has grown to 22528 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22544 megabytes.
    Repository has grown to 22544 megabytes.

--- 03/08/2015 03:31:45 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22560 megabytes.
    Repository has grown to 22560 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22576 megabytes.
    Repository has grown to 22576 megabytes.

--- 03/10/2015 03:19:34 AM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22592 megabytes.
    Repository has grown to 22592 megabytes.

--- 03/10/2015 03:46:39 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22608 megabytes.
    Repository has grown to 22608 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22624 megabytes.
    Repository has grown to 22624 megabytes.


My extent has now grown to 

-rw------- 1 seasideuser seasideuser 23735566336 Mar 25 15:31 extent0.dbf


I don’t get a lot of traffic so I’m a little surprised at the growth. Should I try to shrink the extent?

I suppose I should also do a SystemRepository backup, if I can remember the commands. 

Best, 

Larry




_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
In reply to this post by GLASS mailing list
Hmmm ...

Well, I've done a bit of poking around, starting with doing a list instances of WASession (in a small Seaside stone or my own) and then seeing if I can trace back the reference path to the instance, but the WASession data structures are pretty incestuous ... we've snipped the obvious places and now we're hunting for a leak from a non-obvious source ...

In 3.1.x you have to build a reference path manually by using the following process:

  System commitTransaction.
  WASession allInstances anyOne findAllReferences

And then pick a likely object from the list and `findAllReferences` for that ... rinse repeat until you hit a persistent root that you recognize ...
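
To make the "rinse repeat" a little less blind, you can tally the referencers by class at each hop (a sketch; it assumes findAllReferences answers a collection of the referencing objects):

  | refs tally |
  System commitTransaction.
  refs := WASession allInstances anyOne findAllReferences.
  tally := Dictionary new.
  refs do: [ :each |
    tally at: each class name put: (tally at: each class name ifAbsent: [ 0 ]) + 1 ].
  tally   "class name -> number of referencing instances"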

In 3.2 we have a new feature that lets you find reference paths to an instance from persistent roots ... you could attempt to upgrade your repository to run the reference path analysis there ..

The other alternative is to start from some of your own persistent roots for your collections and dive into the objects to see if you recognize some rogue Seaside session state objects ...

Dale

On 04/02/2015 08:48 AM, Dale Henrichs wrote:
Larry,

Here's a story ... and I don't quite know where it will go, but

RcCounterElements are used in RcCounter.
RcCounter is used in WARcLastAccessExpiryPolicy.
WARcLastAccessExpiryPolicy is used in WASession.

We have 1425 WASession instances, but we have 110670 WARcLastAccessExpiryPolicy instances ...
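
(For scale, the counts in the report divide out suspiciously evenly:)

  338955617 / 338617.     "exactly 1001 RcCounterElements per RcCounter"
  110670 / 1425.0         "roughly 78 expiry policies per surviving WASession"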

Hmmm we have 110412 GsProcess instances ... that plus 110412 WAPartialContinuation instances indicates that we've got some continuations stuck somewhere ... the object log would be the most obvious place, but we've cleared out the object log ...

There are no object log artifacts, so I don't think that is the culprit; besides, the debug continuations are different animals ... this really looks like we've been accumulating a bunch of session instances somewhere ...

More digging ...

Dale

On 04/01/2015 07:05 PM, Lawrence Kellogg wrote:

On Mar 31, 2015, at 12:41 PM, Dale Henrichs <[hidden email]> wrote:

Larry,

I'm just going over the old ground again, in case we missed something obvious ... I would hate to spend several more days digging into this only to find that an initial step hadn't completed as expected ...

So it looks like the object log is clear. Next I'd like to double check and make sure that the session state has been expired ...

So let's verify that `UserGlobals at: #'ExpiryCleanup` is no longer present and I'd like to run the WABasicDevelopment reapSeasideCache one more time for good luck.


Yes, the collection at UserGlobals at: #ExpiryCleanup is empty. 

I ran the reapSeasideCache code again.



Assuming that neither of those turn up anything of use, the next step is to find out what's hanging onto the unwanted objects ...

Since I think we've covered the known "object hogs" in the Seaside framework, there are a number of other persistent caches in GLASS, that might as well be cleared out. You can use the workspace here[1] to clean them up ... I don't think that these caches should be holding onto 23G of objects, but run an MFC aftwards to be safe ...


I cleared the caches. 

I ran another MFC




At this point there's basically two directions that we can take:

  1. Top down. Start inspecting the data structures in your application and look
      for suspicious collections/objects that could be hanging onto objects above and
      beyond those absolutely needed.

  2. Bottom up. Scan your recent backup and get an instance count report[2] that
      will tell you what class of object is clogging up your data base .... Perhaps you'll
      recognize a big runner or two and know where to look to drop the references.
     If no, we'll have to pick a suspicious class, list the instances of that class, and then
     Repository>>listReferences:  to work our way back to a known root and then
     NUKE THE SUCKER:)


Ok, here is my instance count report. RCCounterElement is a huge winner here, I have no idea why. #63 PracticeJournalLoginTask  and #65 PracticeJournalSession is coming up a lot, so perhaps these are being held onto somewhere.

1 338955617 RcCounterElement
2 17607121 RcCollisionBucket
3 7683895 Association
4 2142624 String
5 2126557 WAValueHolder
6 1959784 VariableContext
7 1629389 CollisionBucket
8 1464171 Dictionary
9 1339617 KeyValueDictionary
10 1339616 Set
11 1243135 OrderedCollection
12 1116296 Array
13 951872 ComplexBlock
14 943639 ComplexVCBlock
15 781212 IdentityDictionary
16 673104 IdentityCollisionBucket
17 666407 WAUserConfiguration
18 664701 WAAttributeSearchContext
19 338617 RcCounter
20 338617 WARcLastAccessEntry
21 332017 RcKeyValueDictionary
22 230240 WAValueCallback
23 226002 WARequestFields
24 226002 WAUrl
25 223641 GRSmallDictionary
26 221821 GRDelayedSend
27 221821 GRUnboundMessage
28 220824 GsStackBuffer
29 219296 WAImageCallback
30 187813 Date
31 176258 MCMethodDefinition
32 146263 WAActionCallback
33 113114 WARenderCanvas
34 113039 WAMimeType
35 113003 WADocumentHandler
36 113003 WAMimeDocument
37 113001 WARenderVisitor
38 113001 WAActionPhaseContinuation
39 113001 WACallbackRegistry
40 113001 WARenderingGuide
41 113001 WARenderContext
42 112684 WASnapshot
43 110804 IdentityBag
44 110720 TransientValue
45 110710 WAToolDecoration
46 110672 TransientMutex
47 110670 WAGemStoneMutex
48 110670 WARcLastAccessExpiryPolicy
49 110670 WACache
50 110670 WANoReapingStrategy
51 110670 WACacheMissStrategy
52 110670 WANotifyRemovalAction
53 110640 WATimingToolFilter
54 110640 WADeprecatedToolFilter
55 110489 WAAnswerHandler
56 110422 WADelegation
57 110412 WAPartialContinuation
58 110412 GsProcess
59 109773 UserPersonalInformation
60 109712 Student
61 109295 WATaskVisitor
62 109285 UserLoginView
63 109285 PracticeJournalLoginTask
64 109259 WAValueExpression
65 109215 PracticeJournalSession
66 56942 Time
67 54394 GsMethod
68 53207 MCVersionInfo
69 53207 UUID
70 45927 MethodVersionRecord
71 41955 MethodBookExercise
72 37223 Symbol
73 29941 MCInstanceVariableDefinition
74 21828 MCClassDefinition
75 19291 SymbolAssociation
76 18065 PracticeDay
77 17218 GsMethodDictionary
78 16617 MusicalPiece
79 16609 SymbolSet
80 11160 FreeformExercise
81 8600 SymbolDictionary
82 7537 DateAndTime
83 6812 Duration
84 6288 Month
85 6288 PracticeMonth
86 4527 WAHtmlAttributes
87 4390 DateTime
88 4247 Metaclass
89 4190 WAGenericTag
90 4142 SimpleBlock
91 4136 WATableColumnTag
92 4136 WACheckboxTag
93 4029 Composer
94 3682 RcIdentityBag
95 3428 ClassHistory
96 3010 PracticeSession
97 2185 MCClassVariableDefinition
98 2017 CanonStringBucket
99 1986 MethodBook
100 1974 WARenderPhaseContinuation
101 1965 PurchaseOptionInformation
102 1843 AmazonPurchase
103 1796 GsDocText
104 1513 GsClassDocumentation
105 1508 209409
106 1425 WASession
107 1218 UserInformationInterface
108 1134 WAValuesCallback
109 1125 WACancelActionCallback
110 751 DepListBucket
111 738 Pragma
112 716 LessonTaskRecording
113 693 UserForgotPasswordView
114 629 MusicalPieceRepertoireItem
115 524 PracticeYear
116 524 Year
117 483 MCOrganizationDefinition
118 480 Repertoire
119 467 MCPackage
120 440 MultiplePageDisplayView
121 403 MethodBookExerciseRepertoireItem
122 352 UserCalendar
123 334 MetacelloValueHolderSpec
124 333 MCVersion
125 333 MCSnapshot
126 313 TimeZoneTransition
127 307 Color
128 269 NumberGenerator
129 216 UserCommunityInformation
130 206 IdentitySet
131 200 RcQueueSessionComponent
132 199 FreeformExerciseRepertoireItem
133 191 WAHtmlCanvas
134 187 PackageInfo
135 182 InvariantArray
136 176 MCRepositoryGroup
137 175 MCWorkingCopy
138 175 MCWorkingAncestry
139 157 PracticeSessionInputView
140 149 MetacelloPackageSpec
141 139 MetacelloRepositoriesSpec
142 132 WAMetaElement
143 131 MCClassInstanceVariableDefinition
144 117 MetacelloMergeMemberSpec
145 106 YouTubeVideoResource
146 101 MusicalPieceRepertoireItemInputView
147 99 UserCommentsView
148 96 LessonTasksView
149 96 LessonTaskView
150 94 WATableTag
151 91 MetacelloMCVersion
152 87 PracticeSessionView
153 81 SortedCollection
154 78 MetacelloMCVersionSpec
155 78 MetacelloVersionNumber
156 78 MetacelloPackagesSpec
157 77 DateRange
158 77 PracticeSessionsView
159 70 MethodBooksView
160 67 UserCalendarView
161 67 PracticeJournalMiniCalendar
162 66 PracticeDayView
163 65 MetacelloAddMemberSpec
164 64 WATextInputTag
165 61 Teacher
166 61 MCPoolImportDefinition
167 61 MCHttpRepository
168 60 CheckScreenNameAvailability
169 58 UserRepertoireItemsView
170 58 UserRepertoireView
171 58 PrivateLesson
172 57 UserRepertoireItemsSummaryView
173 53 MetacelloMCProjectSpec
174 53 MetacelloProjectReferenceSpec
175 52 WAListAttribute
176 48 TimedActivitiesInformationServer
177 46 PracticeSessionTemplate
178 46 WriteStream
179 44 WAFormTag
180 39 UserInstrumentsInputView
181 39 MetacelloRepositorySpec
182 35 UserInstrumentsInputViewGenerator
183 35 CreateLessonTaskRecordingInterface
184 32 WASelectTag
185 32 WADateInput
186 30 WAApplication
187 30 UserComment
188 30 WAExceptionFilter
189 29 WADispatchCallback
190 29 WARadioGroup
191 28 DecimalFloat
192 27 JSStream
193 26 MethodBookExerciseRepertoireItemInputView
194 26 WAStringAttribute
195 24 WAOpeningConditionalComment
196 24 WAScriptElement
197 24 WAClosingConditionalComment
198 24 PracticeSessionTemplateInputView
199 24 WALinkElement
200 23 UserInformationView


Larry



Dale

[1] https://code.google.com/p/glassdb/wiki/ClearPersistentCaches
[2] https://programminggems.wordpress.com/2009/12/15/scanbackup-2/
On 03/31/2015 05:35 AM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 6:24 PM, Dale Henrichs <[hidden email]> wrote:

The initial MFC gave you (pre-backup):

  390,801,691 live objects with 23,382,898 dead

The second MFC gave you (post-backup):

  391,007,811 live objects with 107 dead

Which means that we did not gain nearly as much as anticipated by cleaning up the seaside session state and object log ... so something else is hanging onto a big chunk of objects ...

So yes at this point there is no need to consider a backup and restore to shrink extents until we can free up some more objects ...

I've got to head out on an errand right now, so I can't give you any detailed pointers, to the techniques to use for finding the nasty boy that is hanging onto the "presumably dead objects" ...

I am a bit suspicious that the Object log might still be alive an kicking, so I think you should verify by inspecting the ObjectLog collections ... poke around on the class side ... if you find a big collection (and it blows up your TOC if you try to look at it), then look again at the class-side methods and make sure that you nuke the RCQueue and the OrderedCollection .... close down/logout your vms, and then run another mfc to see if you gained any ground …


Well, the ObjectLog collection on the class side of ObjectLogEntry is empty, and the ObjectQueue class variable has: 

<Mail Attachment.png>


Is it necessary to reinitialize the ObjectQueue?

Is there some report I can run that will tell me what is holding onto so much space?

Best,

Larry



Dale

On 03/30/2015 02:57 PM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 12:28 PM, Dale Henrichs <[hidden email]> wrote:

Okay,

I guess you made it through the session expirations okay and according to the MFC results it does look like you did get rid of a big chunk of objects... Presumably the backup was made before the vote on the possible dead was finished so the backup would not have been able to skip all of the dead objects (until the vote was completed) .... there 's also an outside chance that the vm used to expire the sessions would have voted down some of the possible dead if it was still logged in when the backup was made ...

So we need to find out what's going on in the new extent ... so do another mfc and send me the results


Ok, I made it through another mark for collection and here is the result:

<Mail Attachment.png>




Am I wrong in thinking that the file size of the extent will not shrink? It certainly has not shrunk much. 





 In the new extent, run the MFC again, and provide me with the results ... include an `Admin>>DoIt>>File Size Report`. Then logout of GemTools and stop/start any other seaside servers or maintenance vms that might be running ...


Here is the file size report before the mark for collection 

Extent #1
-----------
   Filename = !TCP@localhost#dir:/opt/gemstone/product/seaside/data#log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf

   File size =       23732.00 Megabytes
   Space available = 3478.58 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3478.58 Megabytes

and after 

Extent #1
-----------
   Filename = !TCP@localhost#dir:/opt/gemstone/product/seaside/data#log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf

   File size =       23732.00 Megabytes
   Space available = 3476.47 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3476.47 Megabytes



I await further instructions. 

Best,

Larry





By the time we exchange emails, the vote should have a chance to complete this time... but I want to see the results of the MFC and File SIze Report before deciding what to do next ...

Dale

On 03/30/2015 07:30 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Well, I went though the process as described below, but have not see my extent shrink appreciably, so I am puzzled. 
Here is the screenshot after the mark for collection. Do I have to do something to reclaim the dead objects? Does the maintenance gem need to be run?


<Mail Attachment.png>

After the ObjectLog init, and mark, I did a restore into a fresh extent.

Here is the size of the new extent vs the old, saved extent:

<Mail Attachment.png>



Thoughts?

Larry



On Mar 25, 2015, at 2:15 PM, Dale Henrichs <[hidden email]> wrote:

Okay here's the sequence of steps that I think you should take:

  1. expire all of your sessions:

  | expired |
  Transcript cr; show: 'Unregistering...' , DateAndTime now printString.
  expired := WABasicDevelopment reapSeasideCache.
  expired > 0
    ifTrue: [ (ObjectLogEntry trace: 'MTCE: expired sessions' object: expired) addToLog ].
  Transcript cr; show: '...Expired: ' , expired printString , ' sessions.'.
  System commitTransactions

  2. initalize your object log

  3. run MFC

  [
  System abortTransaction.
  SystemRepository markForCollection ]
    on: Warning
    do: [ :ex |
      Transcript
        cr;
        show: ex description.
      ex resume ]

  4. Then do a backup and restore ... you can use GemTools to do the restore,
      but then you should read the SysAdmin docs[1] for instructions to do the restore
      (I've enclosed link to 3.2 docs, but the procedure and commands should pretty
      much be the same, but it's best to look up the docs for your GemStone version[2]
      and follow those instructions)

As I mentioned earlier, it will probably take a while for each of these operations to complete (object log will be fast and the backup will be fast, if the mfc tosses out the majority of your data) and it is likely that the repository will grow some more during the process (hard to predict this one, tho).

Step 1 will touch every session and every continuation so it is hard to say what percent of the objects are going to be touched (the expensive part), still there are likely to be a lot of those puppies and they will have to be read from disk into the SPC ...

Step 3. is going scan all of the live objects and again it hard to predict exactly how expensive it will be ...

Dale

[1] http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/GS64-SysAdmin-3.2.htm
[2] http://gemtalksystems.com/techsupport/resources/

On 3/25/15 10:18 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Thanks for the help. I’m a terrible system admin when it comes to maintaining a system with one user, LOL. 

  I’m not running the maintenance VM and I haven’t been doing regular mark for collects. 

  I’m trying to do a fullBackupTo: at the moment, well see if I get through that. Should I have done a markForCollection before the full backup? 

  I’ll also try the ObjectLog trick. 

  I guess I need to start from a fresh extent, as you said, and the extent file will not shrink. I’m at 48% of my available disk space but it does seem slower than usual. 

  
Best, 

Larry


  
On Mar 25, 2015, at 12:58 PM, Dale Henrichs via Glass <[hidden email]> wrote:

Lawrence,

Are you doing regular Mark for collects? Are you running the maintenance vm along with you seaside servers?

Seaside produces persistent garbage (persistent session state that eventually times out) when it processes requests so if you do not run the maintenance vm the sessions are not expired and if you do not run mfc regularly the expired sessions are not cleaned up ...

Another source of growth could be the Object Log ... (use `ObjectLogEntry initalize` to efficiently reset the Object Log ... pay attention to the mispelling ... thats another story). If you are getting continuations saved to the object log, the stacks that are saved, can hang onto a lot of session state, that even though expired will not be garbage collected because of references from the continuation in the object log keep it alive ...

The best way to shrink your extent (once we understand why it is growing) is to do a backup and then restore into a virgin extent ($GEMSTONE/bin/extent0.seaside.dbf)...

Dale

On 3/25/15 8:34 AM, Lawrence Kellogg via Glass wrote:
Well, Amazon sent me a note that they are having hardware trouble on my instance, so they shut it down. It looks like they’re threatening to take the thing offline permanently so I’m trying to save my work with an AMI and move it somewhere else, if I have to.

I finally got Gemstone/Seaside back up and running and noticed these lines in the Seaside log file. These kind of messages go on once a day for weeks. Is this normal? 

--- 03/07/2015 02:44:14 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22528 megabytes.
    Repository has grown to 22528 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22544 megabytes.
    Repository has grown to 22544 megabytes.

--- 03/08/2015 03:31:45 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22560 megabytes.
    Repository has grown to 22560 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22576 megabytes.
    Repository has grown to 22576 megabytes.

--- 03/10/2015 03:19:34 AM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22592 megabytes.
    Repository has grown to 22592 megabytes.

--- 03/10/2015 03:46:39 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22608 megabytes.
    Repository has grown to 22608 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22624 megabytes.
    Repository has grown to 22624 megabytes.


My extent has now grown to 

-rw------- 1 seasideuser seasideuser 23735566336 Mar 25 15:31 extent0.dbf


I don’t get a lot of traffic so I’m a little surprised at the growth. Should I try to shrink the extent?

I suppose I should also do a SystemRepository backup, if I can remember the commands. 

Best, 

Larry




_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Taking James’s advice, I did a "become: Object new" on all the WAPartialContinuation instances and then did a markForCollection.
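
Roughly, the doIt was along these lines (a reconstruction, not the verbatim code; become: rewrites object identity, so only try it after a good backup):

  | doomed |
  System abortTransaction.
  doomed := WAPartialContinuation allInstances.
  doomed do: [ :each | each become: Object new ].   "turn each stuck continuation into an empty placeholder"
  System commitTransaction.
  System abortTransaction.
  SystemRepository markForCollection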

I got this: 



I guess this is some progress but there is still a lot of garbage out there. I tried some findReferences but they take an hour to run on the repository. 

I’m running a 2.4.4.1 repository. 

Do I keep trying to find out what is holding onto Seaside objects? What about my own PracticeJournalLoginTask and PracticeJournalSession? There are a lot of those instances in the repository and they’re just transient for as long as a user is logged into the repository, as far as I know. 


Best, 

Larry





On Apr 2, 2015, at 12:39 PM, Dale Henrichs <[hidden email]> wrote:

Hmmm ...

Well, I've done a bit of poking around starting with doing a list instances of WaSession (in a small Seaside stone or my own) and then seeing if I can trace back the reference path to the instance but the WaSession data structures are pretty incestuous ... we've snipped the obvious places and now we're hunting for a leak from a non-obvious source ...

in 3.1.x you have to build a reference path manually by using the following process:

  System commitTransaction.
  WASession allInstances anyOne findAllReferences

And then pick a likely object from the list and `findAllReferences` for that ... rinse repeat until you hit a persistent root that you recognize ...

In 3.2 we have a new feature that lets you find reference paths to an instance from persistent roots ... you could attempt to upgrade your repository to run the reference path analysis there ..

The other alternative is to start from some of you own persistent roots for your collections and dive into the objects to see if you recognize some rogue Seaside session state objects ...

Dale

On 04/02/2015 08:48 AM, Dale Henrichs wrote:
Larry,

Here's a story ... and I don't quite know where it will go, but

RcCounterElements are used in RcCounter.
RcCounter is used in WARcLastAccessExpiryPolicy.
WARcLastAccessExpiryPolicy is used in WaSession.

We have 1425 WaSession instances, but we have 110670 WARcLastAccessExpiryPolicy instances ...

Hmmm we have 110412 GsProcess instances ... that plus 110412 WAPartialContinuation instances indicates that we've got some continuations stuck somewhere ... the object log would be the most obvious place, but we've cleared out the object log ...

There are no object log artifacts so I don't think that is the culprit, besides the debug continuations are different animals ... this really looks like we've been accumulatating a bunch of session instances somewhere ...

More digging ...

Dale

On 04/01/2015 07:05 PM, Lawrence Kellogg wrote:

On Mar 31, 2015, at 12:41 PM, Dale Henrichs <[hidden email]> wrote:

Larry,

I'm just going over the old ground again, in case we missed something obvious ... I would hate to spend several more days digging into this only to find that an initial step hadn't completed as expected ...

So it looks like the object log is clear. Next I'd like to double check and make sure that the session state has been expired ...

So let's verify that `UserGlobals at: #'ExpiryCleanup` is no longer present and I'd like to run the WABasicDevelopment reapSeasideCache one more time for good luck.


Yes, the collection at UserGlobals at: #ExpiryCleanup is empty. 

I ran the reapSeasideCache code again.



Assuming that neither of those turn up anything of use, the next step is to find out what's hanging onto the unwanted objects ...

Since I think we've covered the known "object hogs" in the Seaside framework, there are a number of other persistent caches in GLASS, that might as well be cleared out. You can use the workspace here[1] to clean them up ... I don't think that these caches should be holding onto 23G of objects, but run an MFC aftwards to be safe ...


I cleared the caches. 

I ran another MFC

<Mail Attachment.png>



At this point there's basically two directions that we can take:

  1. Top down. Start inspecting the data structures in your application and look
      for suspicious collections/objects that could be hanging onto objects above and
      beyond those absolutely needed.

  2. Bottom up. Scan your recent backup and get an instance count report[2] that
      will tell you what class of object is clogging up your data base .... Perhaps you'll
      recognize a big runner or two and know where to look to drop the references.
     If no, we'll have to pick a suspicious class, list the instances of that class, and then
     Repository>>listReferences:  to work our way back to a known root and then
     NUKE THE SUCKER:)


Ok, here is my instance count report. RCCounterElement is a huge winner here, I have no idea why. #63 PracticeJournalLoginTask  and #65 PracticeJournalSession is coming up a lot, so perhaps these are being held onto somewhere.

1 338955617 RcCounterElement
2 17607121 RcCollisionBucket
3 7683895 Association
4 2142624 String
5 2126557 WAValueHolder
6 1959784 VariableContext
7 1629389 CollisionBucket
8 1464171 Dictionary
9 1339617 KeyValueDictionary
10 1339616 Set
11 1243135 OrderedCollection
12 1116296 Array
13 951872 ComplexBlock
14 943639 ComplexVCBlock
15 781212 IdentityDictionary
16 673104 IdentityCollisionBucket
17 666407 WAUserConfiguration
18 664701 WAAttributeSearchContext
19 338617 RcCounter
20 338617 WARcLastAccessEntry
21 332017 RcKeyValueDictionary
22 230240 WAValueCallback
23 226002 WARequestFields
24 226002 WAUrl
25 223641 GRSmallDictionary
26 221821 GRDelayedSend
27 221821 GRUnboundMessage
28 220824 GsStackBuffer
29 219296 WAImageCallback
30 187813 Date
31 176258 MCMethodDefinition
32 146263 WAActionCallback
33 113114 WARenderCanvas
34 113039 WAMimeType
35 113003 WADocumentHandler
36 113003 WAMimeDocument
37 113001 WARenderVisitor
38 113001 WAActionPhaseContinuation
39 113001 WACallbackRegistry
40 113001 WARenderingGuide
41 113001 WARenderContext
42 112684 WASnapshot
43 110804 IdentityBag
44 110720 TransientValue
45 110710 WAToolDecoration
46 110672 TransientMutex
47 110670 WAGemStoneMutex
48 110670 WARcLastAccessExpiryPolicy
49 110670 WACache
50 110670 WANoReapingStrategy
51 110670 WACacheMissStrategy
52 110670 WANotifyRemovalAction
53 110640 WATimingToolFilter
54 110640 WADeprecatedToolFilter
55 110489 WAAnswerHandler
56 110422 WADelegation
57 110412 WAPartialContinuation
58 110412 GsProcess
59 109773 UserPersonalInformation
60 109712 Student
61 109295 WATaskVisitor
62 109285 UserLoginView
63 109285 PracticeJournalLoginTask
64 109259 WAValueExpression
65 109215 PracticeJournalSession
66 56942 Time
67 54394 GsMethod
68 53207 MCVersionInfo
69 53207 UUID
70 45927 MethodVersionRecord
71 41955 MethodBookExercise
72 37223 Symbol
73 29941 MCInstanceVariableDefinition
74 21828 MCClassDefinition
75 19291 SymbolAssociation
76 18065 PracticeDay
77 17218 GsMethodDictionary
78 16617 MusicalPiece
79 16609 SymbolSet
80 11160 FreeformExercise
81 8600 SymbolDictionary
82 7537 DateAndTime
83 6812 Duration
84 6288 Month
85 6288 PracticeMonth
86 4527 WAHtmlAttributes
87 4390 DateTime
88 4247 Metaclass
89 4190 WAGenericTag
90 4142 SimpleBlock
91 4136 WATableColumnTag
92 4136 WACheckboxTag
93 4029 Composer
94 3682 RcIdentityBag
95 3428 ClassHistory
96 3010 PracticeSession
97 2185 MCClassVariableDefinition
98 2017 CanonStringBucket
99 1986 MethodBook
100 1974 WARenderPhaseContinuation
101 1965 PurchaseOptionInformation
102 1843 AmazonPurchase
103 1796 GsDocText
104 1513 GsClassDocumentation
105 1508 209409
106 1425 WASession
107 1218 UserInformationInterface
108 1134 WAValuesCallback
109 1125 WACancelActionCallback
110 751 DepListBucket
111 738 Pragma
112 716 LessonTaskRecording
113 693 UserForgotPasswordView
114 629 MusicalPieceRepertoireItem
115 524 PracticeYear
116 524 Year
117 483 MCOrganizationDefinition
118 480 Repertoire
119 467 MCPackage
120 440 MultiplePageDisplayView
121 403 MethodBookExerciseRepertoireItem
122 352 UserCalendar
123 334 MetacelloValueHolderSpec
124 333 MCVersion
125 333 MCSnapshot
126 313 TimeZoneTransition
127 307 Color
128 269 NumberGenerator
129 216 UserCommunityInformation
130 206 IdentitySet
131 200 RcQueueSessionComponent
132 199 FreeformExerciseRepertoireItem
133 191 WAHtmlCanvas
134 187 PackageInfo
135 182 InvariantArray
136 176 MCRepositoryGroup
137 175 MCWorkingCopy
138 175 MCWorkingAncestry
139 157 PracticeSessionInputView
140 149 MetacelloPackageSpec
141 139 MetacelloRepositoriesSpec
142 132 WAMetaElement
143 131 MCClassInstanceVariableDefinition
144 117 MetacelloMergeMemberSpec
145 106 YouTubeVideoResource
146 101 MusicalPieceRepertoireItemInputView
147 99 UserCommentsView
148 96 LessonTasksView
149 96 LessonTaskView
150 94 WATableTag
151 91 MetacelloMCVersion
152 87 PracticeSessionView
153 81 SortedCollection
154 78 MetacelloMCVersionSpec
155 78 MetacelloVersionNumber
156 78 MetacelloPackagesSpec
157 77 DateRange
158 77 PracticeSessionsView
159 70 MethodBooksView
160 67 UserCalendarView
161 67 PracticeJournalMiniCalendar
162 66 PracticeDayView
163 65 MetacelloAddMemberSpec
164 64 WATextInputTag
165 61 Teacher
166 61 MCPoolImportDefinition
167 61 MCHttpRepository
168 60 CheckScreenNameAvailability
169 58 UserRepertoireItemsView
170 58 UserRepertoireView
171 58 PrivateLesson
172 57 UserRepertoireItemsSummaryView
173 53 MetacelloMCProjectSpec
174 53 MetacelloProjectReferenceSpec
175 52 WAListAttribute
176 48 TimedActivitiesInformationServer
177 46 PracticeSessionTemplate
178 46 WriteStream
179 44 WAFormTag
180 39 UserInstrumentsInputView
181 39 MetacelloRepositorySpec
182 35 UserInstrumentsInputViewGenerator
183 35 CreateLessonTaskRecordingInterface
184 32 WASelectTag
185 32 WADateInput
186 30 WAApplication
187 30 UserComment
188 30 WAExceptionFilter
189 29 WADispatchCallback
190 29 WARadioGroup
191 28 DecimalFloat
192 27 JSStream
193 26 MethodBookExerciseRepertoireItemInputView
194 26 WAStringAttribute
195 24 WAOpeningConditionalComment
196 24 WAScriptElement
197 24 WAClosingConditionalComment
198 24 PracticeSessionTemplateInputView
199 24 WALinkElement
200 23 UserInformationView


Larry



Dale

[1] https://code.google.com/p/glassdb/wiki/ClearPersistentCaches
[2] https://programminggems.wordpress.com/2009/12/15/scanbackup-2/
On 03/31/2015 05:35 AM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 6:24 PM, Dale Henrichs <[hidden email]> wrote:

The initial MFC gave you (pre-backup):

  390,801,691 live objects with 23,382,898 dead

The second MFC gave you (post-backup):

  391,007,811 live objects with 107 dead

Which means that we did not gain nearly as much as anticipated by cleaning up the seaside session state and object log ... so something else is hanging onto a big chunk of objects ...

So yes at this point there is no need to consider a backup and restore to shrink extents until we can free up some more objects ...

I've got to head out on an errand right now, so I can't give you any detailed pointers, to the techniques to use for finding the nasty boy that is hanging onto the "presumably dead objects" ...

I am a bit suspicious that the Object log might still be alive an kicking, so I think you should verify by inspecting the ObjectLog collections ... poke around on the class side ... if you find a big collection (and it blows up your TOC if you try to look at it), then look again at the class-side methods and make sure that you nuke the RCQueue and the OrderedCollection .... close down/logout your vms, and then run another mfc to see if you gained any ground …


Well, the ObjectLog collection on the class side of ObjectLogEntry is empty, and the ObjectQueue class variable has: 

<Mail Attachment.png>


Is it necessary to reinitialize the ObjectQueue?

Is there some report I can run that will tell me what is holding onto so much space?

Best,

Larry



Dale

On 03/30/2015 02:57 PM, Lawrence Kellogg wrote:

On Mar 30, 2015, at 12:28 PM, Dale Henrichs <[hidden email]> wrote:

Okay,

I guess you made it through the session expirations okay and according to the MFC results it does look like you did get rid of a big chunk of objects... Presumably the backup was made before the vote on the possible dead was finished so the backup would not have been able to skip all of the dead objects (until the vote was completed) .... there 's also an outside chance that the vm used to expire the sessions would have voted down some of the possible dead if it was still logged in when the backup was made ...

So we need to find out what's going on in the new extent ... so do another mfc and send me the results


Ok, I made it through another mark for collection and here is the result:

<Mail Attachment.png>




Am I wrong in thinking that the file size of the extent will not shrink? It certainly has not shrunk much. 





 In the new extent, run the MFC again, and provide me with the results ... include an `Admin>>DoIt>>File Size Report`. Then logout of GemTools and stop/start any other seaside servers or maintenance vms that might be running ...


Here is the file size report before the mark for collection 

Extent #1
-----------
   Filename = !TCP@localhost#dir:/opt/gemstone/product/seaside/data#log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf

   File size =       23732.00 Megabytes
   Space available = 3478.58 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3478.58 Megabytes

and after 

Extent #1
-----------
   Filename = !TCP@localhost#dir:/opt/gemstone/product/seaside/data#<a moz-do-not-send="true" href="log://opt/gemstone/log/%N%P.log#dbf%21/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf" class="">log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf

   File size =       23732.00 Megabytes
   Space available = 3476.47 Megabytes

Totals
------
   Repository size = 23732.00 Megabytes
   Free Space =      3476.47 Megabytes



I await further instructions. 

Best,

Larry





By the time we exchange emails, the vote should have a chance to complete this time... but I want to see the results of the MFC and File SIze Report before deciding what to do next ...

Dale

On 03/30/2015 07:30 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Well, I went though the process as described below, but have not see my extent shrink appreciably, so I am puzzled. 
Here is the screenshot after the mark for collection. Do I have to do something to reclaim the dead objects? Does the maintenance gem need to be run?


<Mail Attachment.png>

After the ObjectLog init, and mark, I did a restore into a fresh extent.

Here is the size of the new extent vs the old, saved extent:

<Mail Attachment.png>



Thoughts?

Larry



On Mar 25, 2015, at 2:15 PM, Dale Henrichs <[hidden email]> wrote:

Okay here's the sequence of steps that I think you should take:

  1. expire all of your sessions:

  | expired |
  Transcript cr; show: 'Unregistering...' , DateAndTime now printString.
  expired := WABasicDevelopment reapSeasideCache.
  expired > 0
    ifTrue: [ (ObjectLogEntry trace: 'MTCE: expired sessions' object: expired) addToLog ].
  Transcript cr; show: '...Expired: ' , expired printString , ' sessions.'.
  System commitTransactions

  2. initalize your object log

  3. run MFC

  [
  System abortTransaction.
  SystemRepository markForCollection ]
    on: Warning
    do: [ :ex |
      Transcript
        cr;
        show: ex description.
      ex resume ]

  4. Then do a backup and restore ... you can use GemTools to do the restore,
      but then you should read the SysAdmin docs[1] for instructions to do the restore
      (I've enclosed link to 3.2 docs, but the procedure and commands should pretty
      much be the same, but it's best to look up the docs for your GemStone version[2]
      and follow those instructions)

As I mentioned earlier, it will probably take a while for each of these operations to complete (object log will be fast and the backup will be fast, if the mfc tosses out the majority of your data) and it is likely that the repository will grow some more during the process (hard to predict this one, tho).

Step 1 will touch every session and every continuation so it is hard to say what percent of the objects are going to be touched (the expensive part), still there are likely to be a lot of those puppies and they will have to be read from disk into the SPC ...

Step 3. is going scan all of the live objects and again it hard to predict exactly how expensive it will be ...

Dale

[1] http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/GS64-SysAdmin-3.2.htm
[2] http://gemtalksystems.com/techsupport/resources/

On 3/25/15 10:18 AM, Lawrence Kellogg wrote:
Hello Dale, 

  Thanks for the help. I’m a terrible system admin when it comes to maintaining a system with one user, LOL. 

  I’m not running the maintenance VM and I haven’t been doing regular mark for collects. 

  I’m trying to do a fullBackupTo: at the moment, well see if I get through that. Should I have done a markForCollection before the full backup? 

  I’ll also try the ObjectLog trick. 

  I guess I need to start from a fresh extent, as you said, and the extent file will not shrink. I’m at 48% of my available disk space but it does seem slower than usual. 

  
Best, 

Larry


  
On Mar 25, 2015, at 12:58 PM, Dale Henrichs via Glass <[hidden email]> wrote:

Lawrence,

Are you doing regular Mark for collects? Are you running the maintenance vm along with you seaside servers?

Seaside produces persistent garbage (persistent session state that eventually times out) when it processes requests so if you do not run the maintenance vm the sessions are not expired and if you do not run mfc regularly the expired sessions are not cleaned up ...

Another source of growth could be the Object Log ... (use `ObjectLogEntry initalize` to efficiently reset the Object Log ... pay attention to the mispelling ... thats another story). If you are getting continuations saved to the object log, the stacks that are saved, can hang onto a lot of session state, that even though expired will not be garbage collected because of references from the continuation in the object log keep it alive ...

The best way to shrink your extent (once we understand why it is growing) is to do a backup and then restore into a virgin extent ($GEMSTONE/bin/extent0.seaside.dbf)...

Dale

On 3/25/15 8:34 AM, Lawrence Kellogg via Glass wrote:
Well, Amazon sent me a note that they are having hardware trouble on my instance, so they shut it down. It looks like they’re threatening to take the thing offline permanently so I’m trying to save my work with an AMI and move it somewhere else, if I have to.

I finally got Gemstone/Seaside back up and running and noticed these lines in the Seaside log file. These kind of messages go on once a day for weeks. Is this normal? 

--- 03/07/2015 02:44:14 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22528 megabytes.
    Repository has grown to 22528 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22544 megabytes.
    Repository has grown to 22544 megabytes.

--- 03/08/2015 03:31:45 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22560 megabytes.
    Repository has grown to 22560 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22576 megabytes.
    Repository has grown to 22576 megabytes.

--- 03/10/2015 03:19:34 AM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22592 megabytes.
    Repository has grown to 22592 megabytes.

--- 03/10/2015 03:46:39 PM UTC ---
    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22608 megabytes.
    Repository has grown to 22608 megabytes.

    Extent = !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
       has grown to 22624 megabytes.
    Repository has grown to 22624 megabytes.


My extent has now grown to 

-rw------- 1 seasideuser seasideuser 23735566336 Mar 25 15:31 extent0.dbf


I don’t get a lot of traffic so I’m a little surprised at the growth. Should I try to shrink the extent?

I suppose I should also do a SystemRepository backup, if I can remember the commands. 

Best, 

Larry




_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass

_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass













_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Larry,

If we don't identify the leak, then we run the risk of ballooning in size again ....

I didn't see James' email; perhaps I need to look in my junk folder again ... Anyway, it looks like the WAPartialContinuation objects were not on the direct reference path that is keeping everything alive (otherwise you would have made a big dent in the live objects ...)

I think that probably the best answer is to upgrade your repo to 3.2 so that you can use the find reference path method to pinpoint the root object that is keeping things alive ...

Other than that we will have to splash around and try random things until they work ...

We've snipped the obvious places, and we are faced with finding a non-obvious place ... which presents us with an obvious problem:)

Again, before we go too far ... I want to verify that you have done a logout between the "become script" and the mfc. If not, it is worth trying another mfc ...

Dale



On 04/03/2015 11:56 AM, Lawrence Kellogg wrote:
Taking James’s advice, I did a "become: Object new” on all the WAPartialContinuation instances and then did a markForCollection

I got this: 



I guess this is some progress but there is still a lot of garbage out there. I tried some findReferences but they take an hour to run on the repository. 

I’m running a 2.4.4.1 repository. 

Do I keep trying to find out what is holding onto Seaside objects? What about my own PracticeJournalLoginTask and PracticeJournalSession? There are a lot of those instances in the repository and they're just transient for as long as a user is logged into the repository, as far as I know. 


Best, 

Larry





On Apr 2, 2015, at 12:39 PM, Dale Henrichs <[hidden email]> wrote:

Hmmm ...

Well, I've done a bit of poking around, starting by listing the instances of WASession (in a small Seaside stone of my own) and then seeing if I can trace back the reference path to the instance, but the WASession data structures are pretty incestuous ... we've snipped the obvious places and now we're hunting for a leak from a non-obvious source ...

In 3.1.x you have to build a reference path manually by using the following process:

  System commitTransaction.
  WASession allInstances anyOne findAllReferences

And then pick a likely object from the list and send `findAllReferences` to that ... rinse and repeat until you hit a persistent root that you recognize ...
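
For reference, a minimal sketch of one round of that manual process (it assumes findAllReferences answers a collection of the objects that reference the receiver; adjust for however your version actually reports the result):

  | suspect referrers |
  System commitTransaction.
  suspect := WASession allInstances anyOne.
  "who points at this session?"
  referrers := suspect findAllReferences.
  referrers do: [ :each | Transcript cr; show: each class name asString ].
  "pick a likely referrer from the output, send it #findAllReferences as well,
   and keep going until you hit a persistent root you recognize"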

In 3.2 we have a new feature that lets you find reference paths to an instance from persistent roots ... you could attempt to upgrade your repository to run the reference path analysis there ..

The other alternative is to start from some of your own persistent roots for your collections and dive into the objects to see if you recognize some rogue Seaside session state objects ...

Dale

On 04/02/2015 08:48 AM, Dale Henrichs wrote:
Larry,

Here's a story ... and I don't quite know where it will go, but

RcCounterElements are used in RcCounter.
RcCounter is used in WARcLastAccessExpiryPolicy.
WARcLastAccessExpiryPolicy is used in WASession.

We have 1425 WASession instances, but we have 110670 WARcLastAccessExpiryPolicy instances ...

Hmmm we have 110412 GsProcess instances ... that plus 110412 WAPartialContinuation instances indicates that we've got some continuations stuck somewhere ... the object log would be the most obvious place, but we've cleared out the object log ...
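
(To double-check counts like these against the live repository, a quick sketch along the same allInstances lines used earlier in this thread; it can be slow on an extent this size:)

  (Array with: WASession with: WARcLastAccessExpiryPolicy with: WAPartialContinuation with: GsProcess)
    collect: [ :cls | cls name asString , ' -> ' , cls allInstances size printString ]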

There are no object log artifacts, so I don't think that is the culprit; besides, the debug continuations are different animals ... this really looks like we've been accumulating a bunch of session instances somewhere ...

More digging ...

Dale

On 04/01/2015 07:05 PM, Lawrence Kellogg wrote:

On Mar 31, 2015, at 12:41 PM, Dale Henrichs <[hidden email]> wrote:

Larry,

I'm just going over the old ground again, in case we missed something obvious ... I would hate to spend several more days digging into this only to find that an initial step hadn't completed as expected ...

So it looks like the object log is clear. Next I'd like to double check and make sure that the session state has been expired ...

So let's verify that `UserGlobals at: #'ExpiryCleanup'` is no longer present, and I'd like to run the WABasicDevelopment reapSeasideCache one more time for good luck.
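
(A minimal check along those lines, assuming the cleanup state is a collection registered under that key in UserGlobals, as described in this thread:)

  (UserGlobals includesKey: #'ExpiryCleanup')
    ifTrue: [ (UserGlobals at: #'ExpiryCleanup') size ]
    ifFalse: [ 'ExpiryCleanup not present' ]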


Yes, the collection at UserGlobals at: #ExpiryCleanup is empty. 

I ran the reapSeasideCache code again.



Assuming that neither of those turn up anything of use, the next step is to find out what's hanging onto the unwanted objects ...

Since I think we've covered the known "object hogs" in the Seaside framework, there are a number of other persistent caches in GLASS that might as well be cleared out. You can use the workspace here[1] to clean them up ... I don't think that these caches should be holding onto 23G of objects, but run an MFC afterwards to be safe ...


I cleared the caches. 

I ran another MFC

<Mail Attachment.png>



At this point there are basically two directions that we can take:

  1. Top down. Start inspecting the data structures in your application and look
      for suspicious collections/objects that could be hanging onto objects above and
      beyond those absolutely needed.

  2. Bottom up. Scan your recent backup and get an instance count report[2] that
      will tell you what class of object is clogging up your database .... Perhaps you'll
      recognize a big runner or two and know where to look to drop the references.
     If not, we'll have to pick a suspicious class, list the instances of that class, and then
     Repository>>listReferences: to work our way back to a known root and then
     NUKE THE SUCKER:) (see the sketch just below)
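
A rough sketch of the bottom-up step in (2), using PracticeJournalSession as an example suspect; listReferences: scans the whole repository, so expect it to take a while, and inspect its result rather than assuming a particular shape:

  | suspects |
  suspects := Array with: PracticeJournalSession allInstances anyOne.
  "inspect the result to see what references each suspect"
  SystemRepository listReferences: suspects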


Ok, here is my instance count report. RcCounterElement is a huge winner here; I have no idea why. #63 PracticeJournalLoginTask and #65 PracticeJournalSession are coming up a lot, so perhaps these are being held onto somewhere.

1 338955617 RcCounterElement
2 17607121 RcCollisionBucket
3 7683895 Association
4 2142624 String
5 2126557 WAValueHolder
6 1959784 VariableContext
7 1629389 CollisionBucket
8 1464171 Dictionary
9 1339617 KeyValueDictionary
10 1339616 Set
11 1243135 OrderedCollection
12 1116296 Array
13 951872 ComplexBlock
14 943639 ComplexVCBlock
15 781212 IdentityDictionary
16 673104 IdentityCollisionBucket
17 666407 WAUserConfiguration
18 664701 WAAttributeSearchContext
19 338617 RcCounter
20 338617 WARcLastAccessEntry
21 332017 RcKeyValueDictionary
22 230240 WAValueCallback
23 226002 WARequestFields
24 226002 WAUrl
25 223641 GRSmallDictionary
26 221821 GRDelayedSend
27 221821 GRUnboundMessage
28 220824 GsStackBuffer
29 219296 WAImageCallback
30 187813 Date
31 176258 MCMethodDefinition
32 146263 WAActionCallback
33 113114 WARenderCanvas
34 113039 WAMimeType
35 113003 WADocumentHandler
36 113003 WAMimeDocument
37 113001 WARenderVisitor
38 113001 WAActionPhaseContinuation
39 113001 WACallbackRegistry
40 113001 WARenderingGuide
41 113001 WARenderContext
42 112684 WASnapshot
43 110804 IdentityBag
44 110720 TransientValue
45 110710 WAToolDecoration
46 110672 TransientMutex
47 110670 WAGemStoneMutex
48 110670 WARcLastAccessExpiryPolicy
49 110670 WACache
50 110670 WANoReapingStrategy
51 110670 WACacheMissStrategy
52 110670 WANotifyRemovalAction
53 110640 WATimingToolFilter
54 110640 WADeprecatedToolFilter
55 110489 WAAnswerHandler
56 110422 WADelegation
57 110412 WAPartialContinuation
58 110412 GsProcess
59 109773 UserPersonalInformation
60 109712 Student
61 109295 WATaskVisitor
62 109285 UserLoginView
63 109285 PracticeJournalLoginTask
64 109259 WAValueExpression
65 109215 PracticeJournalSession
66 56942 Time
67 54394 GsMethod
68 53207 MCVersionInfo
69 53207 UUID
70 45927 MethodVersionRecord
71 41955 MethodBookExercise
72 37223 Symbol
73 29941 MCInstanceVariableDefinition
74 21828 MCClassDefinition
75 19291 SymbolAssociation
76 18065 PracticeDay
77 17218 GsMethodDictionary
78 16617 MusicalPiece
79 16609 SymbolSet
80 11160 FreeformExercise
81 8600 SymbolDictionary
82 7537 DateAndTime
83 6812 Duration
84 6288 Month
85 6288 PracticeMonth
86 4527 WAHtmlAttributes
87 4390 DateTime
88 4247 Metaclass
89 4190 WAGenericTag
90 4142 SimpleBlock
91 4136 WATableColumnTag
92 4136 WACheckboxTag
93 4029 Composer
94 3682 RcIdentityBag
95 3428 ClassHistory
96 3010 PracticeSession
97 2185 MCClassVariableDefinition
98 2017 CanonStringBucket
99 1986 MethodBook
100 1974 WARenderPhaseContinuation
101 1965 PurchaseOptionInformation
102 1843 AmazonPurchase
103 1796 GsDocText
104 1513 GsClassDocumentation
105 1508 209409
106 1425 WASession
107 1218 UserInformationInterface
108 1134 WAValuesCallback
109 1125 WACancelActionCallback
110 751 DepListBucket
111 738 Pragma
112 716 LessonTaskRecording
113 693 UserForgotPasswordView
114 629 MusicalPieceRepertoireItem
115 524 PracticeYear
116 524 Year
117 483 MCOrganizationDefinition
118 480 Repertoire
119 467 MCPackage
120 440 MultiplePageDisplayView
121 403 MethodBookExerciseRepertoireItem
122 352 UserCalendar
123 334 MetacelloValueHolderSpec
124 333 MCVersion
125 333 MCSnapshot
126 313 TimeZoneTransition
127 307 Color
128 269 NumberGenerator
129 216 UserCommunityInformation
130 206 IdentitySet
131 200 RcQueueSessionComponent
132 199 FreeformExerciseRepertoireItem
133 191 WAHtmlCanvas
134 187 PackageInfo
135 182 InvariantArray
136 176 MCRepositoryGroup
137 175 MCWorkingCopy
138 175 MCWorkingAncestry
139 157 PracticeSessionInputView
140 149 MetacelloPackageSpec
141 139 MetacelloRepositoriesSpec
142 132 WAMetaElement
143 131 MCClassInstanceVariableDefinition
144 117 MetacelloMergeMemberSpec
145 106 YouTubeVideoResource
146 101 MusicalPieceRepertoireItemInputView
147 99 UserCommentsView
148 96 LessonTasksView
149 96 LessonTaskView
150 94 WATableTag
151 91 MetacelloMCVersion
152 87 PracticeSessionView
153 81 SortedCollection
154 78 MetacelloMCVersionSpec
155 78 MetacelloVersionNumber
156 78 MetacelloPackagesSpec
157 77 DateRange
158 77 PracticeSessionsView
159 70 MethodBooksView
160 67 UserCalendarView
161 67 PracticeJournalMiniCalendar
162 66 PracticeDayView
163 65 MetacelloAddMemberSpec
164 64 WATextInputTag
165 61 Teacher
166 61 MCPoolImportDefinition
167 61 MCHttpRepository
168 60 CheckScreenNameAvailability
169 58 UserRepertoireItemsView
170 58 UserRepertoireView
171 58 PrivateLesson
172 57 UserRepertoireItemsSummaryView
173 53 MetacelloMCProjectSpec
174 53 MetacelloProjectReferenceSpec
175 52 WAListAttribute
176 48 TimedActivitiesInformationServer
177 46 PracticeSessionTemplate
178 46 WriteStream
179 44 WAFormTag
180 39 UserInstrumentsInputView
181 39 MetacelloRepositorySpec
182 35 UserInstrumentsInputViewGenerator
183 35 CreateLessonTaskRecordingInterface
184 32 WASelectTag
185 32 WADateInput
186 30 WAApplication
187 30 UserComment
188 30 WAExceptionFilter
189 29 WADispatchCallback
190 29 WARadioGroup
191 28 DecimalFloat
192 27 JSStream
193 26 MethodBookExerciseRepertoireItemInputView
194 26 WAStringAttribute
195 24 WAOpeningConditionalComment
196 24 WAScriptElement
197 24 WAClosingConditionalComment
198 24 PracticeSessionTemplateInputView
199 24 WALinkElement
200 23 UserInformationView


Larry



Dale

[1] https://code.google.com/p/glassdb/wiki/ClearPersistentCaches
[2] https://programminggems.wordpress.com/2009/12/15/scanbackup-2/

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
In reply to this post by GLASS mailing list
It appears that I mistakenly sent this as a private email rather than to the group as I intended. I apologize for the confusion.

On Apr 1, 2015, at 8:09 PM, James Foster <[hidden email]> wrote:

Larry,

I’m glad that the backup scan worked for you (it doesn’t get tested with every release). The object count list is quite interesting. Three hundred million instances of RcCounterElement is certainly a surprise.

What jumps out at me is that there are over a hundred thousand instances of WAPartialContinuation. This suggests that some Seaside cache is still holding things. Dale will likely have some good advice on how to find them but a couple ideas occur to me. 

First, grab one of the instances (Class>>#’allInstances’) and then look at one (or more) reference paths (Repository>>#’findReferencePathToObject:*’ or Repository>>#’findAllReferencePathsToObject:’ if you are in 2.4.6 or later). 
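
(A sketch of that first idea; the asterisk above suggests there are several keyword variants of findReferencePathToObject:, so browse Repository in your image for the exact selector you have before running this:)

  | aContinuation |
  aContinuation := WAPartialContinuation allInstances anyOne.
  SystemRepository findReferencePathToObject: aContinuation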

Another thing that would be interesting is to replace all the continuations with a new object (each become: Object new). After another MFC/reclaim cycle there should not be any continuation instances and it would be interesting to see what sort of space that frees (Repository>>#’fileSizeReport’). 
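
(And a sketch of the become: experiment; it is destructive, so take a backup first. The become: swaps every partial continuation for a throwaway object so that a later MFC/reclaim cycle can collect whatever only the continuations were keeping alive:)

  WAPartialContinuation allInstances do: [ :each | each become: Object new ].
  System commitTransaction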

Of course, knowing who is holding on to them is the primary question.

James
