[Glass] Fwd: [GLASS] Seaside - growing extent - normal?


[Glass] Fwd: [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Answers below...


On Apr 8, 2015, at 10:34 AM, Lawrence Kellogg <[hidden email]> wrote:





Begin forwarded message:

From: Dale Henrichs <[hidden email]>
Date: April 7, 2015 at 6:50:32 PM EDT
To: Lawrence Kellogg <[hidden email]>
Cc: "[hidden email]" <[hidden email]>
Subject: Re: [Glass] [GLASS] Seaside - growing extent - normal?

Larry,

Here's the workspace to use if the size of the cache seems to be reasonable:


From the previous code, the cache size is 2568.

I thought that was reasonable, so I ran this code:


  | app cache objectsByKey |
  app := WADispatcher default handlers at: 'UserInformationInterface'.
  cache := app cache.
  objectsByKey := cache instVarAt: 3.
  {(objectsByKey size).
  (cache gemstoneReap)}



and got:

<Mail Attachment.png>

If 'UserInformationInterface' appears to be too big, poke around in the WAApplication instances that show up in `WADispatcher default handlers`, find the one with the smallest cache, and run the above workspace against that app ... I'd also like to see the result array, which gives us the number of entries in the cache and the number of entries expired ...

If they turn out to be significantly different, then try a second run  ...
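
A minimal sketch for that poking around, assuming `handlers` answers a Dictionary and that, as in the workspaces here, each application's objectsByKey dictionary is still its third instance variable (non-application handlers such as file libraries are skipped):

  | sizes |
  sizes := Dictionary new.
  WADispatcher default handlers
    keysAndValuesDo: [ :name :handler |
      "only WAApplication instances carry a Seaside session cache"
      (handler isKindOf: WAApplication)
        ifTrue: [ sizes at: name put: (handler cache instVarAt: 3) size ] ].
  sizes

Printing that should make it obvious which application has the smallest cache to scan first.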



Well, this does not seem to be good, having 2568 sessions and only one expired. 

Thoughts?

Run it again? 

Larry


I'm also tempted to ask you to edit the gemstoneReap method to reduce the amount of logging (but this would be optional):

WACache>>gemstoneReap

gemstoneReap
  "Iterate through the cache and remove objects that have expired."

  "In GemStone, this method is performed by a separate maintenance VM,
   so we are already in transaction (assumed to be running in #autoBegin
   transactionMode) and do not have to worry about acquiring the TransactionMutex.
   Since we are using reducedConflict dictionaries in the first place, we will remove
   the keys and values from the existing dictionaries without using the mutex."

  | expired count platform |
  expired := UserGlobals at: #'ExpiryCleanup' put: OrderedCollection new.
  platform := GRPlatform current.
  platform doCommitTransaction.
  count := 0.
  objectsByKey
    associationsDo: [ :assoc | 
      (self expiryPolicy isExpiredUpdating: assoc value key: assoc key)
        ifTrue: [ 
          self notifyRemoved: assoc value key: assoc key.
          count := count + 1.
          expired add: assoc.
          count \\ 100 == 0
            ifTrue: [ platform doCommitTransaction ].
          count \\ 1000 == 0
            ifTrue: [ Transcript cr; show: 'Scan progress: ' , count printString ] ] ].
  Transcript cr; show: 'finished scan: ' , count printString.
  platform doCommitTransaction.
  count := 0.
  (UserGlobals at: #'ExpiryCleanup')
    do: [ :assoc | 
      count := count + 1.
      objectsByKey removeKey: assoc key ifAbsent: [  ].
      keysByObject removeKey: assoc value ifAbsent: [  ].
      count \\ 100 == 0
        ifTrue: [ platform doCommitTransaction ].
      count \\ 1000 == 0
        ifTrue: [ Transcript cr; show: 'Expire progress: ' , count printString ] ].
  platform doCommitTransaction.
  UserGlobals removeKey: #'ExpiryCleanup'.
  platform doCommitTransaction.
  Transcript show: 'Leaving gemstoneReap: ' , expired size printString.
  ^ expired size

If you do make these edits then I'd like you to capture the transcript after each run completes ...

Dale

On 04/07/2015 03:20 PM, Dale Henrichs wrote:
Larry,

Okay, I cannot think of a reason why the earlier scans should not have worked, so I'd like to pick a smaller subset of the whole system that runs faster and then try to understand why the scan isn't working ...

I'll start by finding out whether UserInformationInterface makes a likely candidate, so let's see how big its cache is:

  | app cache objectsByKey |
  app := WADispatcher default handlers at: 'UserInformationInterface'.
  cache := app cache.
  objectsByKey := cache instVarAt: 3.
  objectsByKey size
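
As an aside, `instVarAt: 3` relies on objectsByKey being the third instance variable of WACache; a sketch that looks the index up by name instead (assuming Behavior>>allInstVarNames is available, as it is in GemStone):

  | cache names index |
  cache := (WADispatcher default handlers at: 'UserInformationInterface') cache.
  names := cache class allInstVarNames collect: [ :each | each asString ].
  index := names indexOf: 'objectsByKey'.
  (cache instVarAt: index) size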

I'll put together another workspace for doing a scan on a per-application basis in the meantime ...

Dale


Re: [Glass] Fwd: [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Yes, the one session that expired was the one that we had already manually expired, so running again should expire the whole lot ... and if not, we'll need to dig deeper.

For some reason it appears that a certain collection of applications (like this one) did not have the expiration scan run against them at all ... a case that was supposed to show up "missing" is the most likely explanation for such a phenomenon, but that was not the case with the WASession that we looked at in detail ...

If the second run yields good results, I'd like to try a few more experiments to try to characterize why we had these troubles, but it may not be worth spending too much more time on this (your call) if we have a formula for slashing the size of your repo...

Dale


Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list


So you mean for me to run this again?


  | expired |
  Transcript cr; show: 'Unregistering...' , DateAndTime now printString.
  expired := WABasicDevelopment reapSeasideCache.
  expired > 0
    ifTrue: [ (ObjectLogEntry trace: 'MTCE: expired sessions' object: expired) addToLog ].
  Transcript cr; show: '...Expired: ' , expired printString , ' sessions.'.
  System commitTransaction


log out, and then log in, and run a markForCollection?
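
For reference, a minimal sketch of that last step, run from a freshly logged-in session (and assuming no other sessions are holding old commit records open, since those keep an MFC from reclaiming anything):

  "reclaim the space freed by the expired sessions"
  System abortTransaction.
  SystemRepository markForCollection.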

I ran this code, but the number of sessions printed from the “Leaving gemstoneReap” Transcript call is only 10 or 14, and most of the time 0, so I'm not sure this is going to make a huge difference.

The code did not run for long and it looks like it is committing changes.

Larry

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Yes.


Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list

On Apr 8, 2015, at 12:53 PM, Dale Henrichs <[hidden email]> wrote:

Yes.


I’ve run it a number of times today but keep running out of temporary object memory


Will it eventually work because it commits partial results?

Re: [Glass] [GLASS] Seaside - growing extent - normal?

GLASS mailing list


On 4/8/15 6:49 PM, Lawrence Kellogg wrote:

I’ve run it a number of times today but keep running out of temporary object memory
Hmmmmmm,

Running out of TOC is not a good thing ... running out of TOC and not recording a stack (like I needed the last time this happened) is not a good thing ... Without a stack I don't know where you blew out ... without a complete stack I don't know whether or not you are making any progress at all....

I am also suspicious that running out of memory is the source of some of the object leaks ...

I was under the impression that, before making and restoring from backup, you had made a "complete" run without running out of TOC. Am I remembering wrong?

Have you had any runs today without running out of TOC? I thought you had made it through one run today without running out of TOC. Am I remembering wrong?

I just need a full stack from your last run and we'll see what's happening ... remember the instructions from last time? The gem log should have some stacks that I can look at (pick the last stack in the file) and see if I can figure out what the heck is going on ...
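
If the gem log doesn't end up containing a usable stack, one way to force one is to trap GemStone's almost-out-of-memory notification around the run; this is only a sketch, assuming System signalAlmostOutOfMemoryThreshold:, the AlmostOutOfMemory exception, and GsProcess stackReportToLevel: behave as in recent GemStone/S 64 releases:

  | expired |
  System signalAlmostOutOfMemoryThreshold: 85.  "signal at 85% of temporary object memory"
  expired := [ WABasicDevelopment reapSeasideCache ]
    on: AlmostOutOfMemory
    do: [ :ex |
      "write the current stack where we can find it, then carry on"
      Transcript cr; show: (GsProcess stackReportToLevel: 300).
      ex resume ].
  expired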

Maybe, just maybe we are blowing out of memory because we've gotten to a spot that's never been executed before?

Dale




Re: [GLASS] Seaside - growing extent - normal?

GLASS mailing list
Hi guys, 

I think I might have a similar issue to Lawrence's, but it's hard to wade through 58 messages ;)
Should I start by trying this and see if there is a difference? 
If not, maybe one of those involved could write a little executive summary :) hahahahah Now, for real: this is just a heads up saying I may have the same problem, and hence more hands to try to find this bug (if it is one)!

Cheers,


Re: [GLASS] Seaside - growing extent - normal?

GLASS mailing list
LOL. We fought a long and hard battle, and I'm not sure of the outcome, except that my extent is still big and I just got another paying gig. 

Some day, I'll take another shot at it.

Best,

Larry




Re: [GLASS] Seaside - growing extent - normal?

GLASS mailing list


On Sat, Jul 4, 2015 at 11:45 PM, Mariano Martinez Peck <[hidden email]> wrote:
Should I start by trying this and see if there is a difference? 

Ok, I tried that and no difference :(

 

_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass