profiling GLASS

profiling GLASS

ccrraaiigg

Hi--

     Apologies if this has been discussed recently or documented
prominently; I didn't see anything in a quick search.

     How does one profile GLASS? I've written a GLASS app[1] which does
queries on a collection of a million objects. Some of the query
responses are so slow that I had to raise a FastCGI timeout beyond 30
seconds to keep from getting an HTTP 5xx response. When I remove what I
suspect is the long-running code, the response is much faster, so I know
roughly where to optimize. What tools are there for finding where the
time goes?

     Even answering the Seaside welcome page takes longer than I
expected (a few seconds), so I'd like to profile that, too.


     thanks again!

-C

[1]

     GLASS running on Ubuntu 12.04.2 LTS on an EC2 m1.small instance,
through Apache 2.2.22 and FastCGI 2.4.6, with a Pharo GemTools
development UI.

--
Craig Latta
www.netjam.org/resume
+31   6 2757 7177 (SMS ok)
+ 1 415 287 3547 (no SMS)

Re: profiling GLASS

Dale Henrichs
Craig,

There are two approaches to profiling:

  1. analyze statmonitor output
  2. vm-level profiling with ProfMonitor

statmonitor helps you track down performance issues related to disk I/O and the SPC (shared page cache); it also helps you get an overall view of your system:

  statmonitor seaside -i 1 -u 0 -z -A

will launch a statmonitor against a stone named seaside and drop the statmon file in the current directory. A program called vsd (an X11-based viewer) is used to view the output:

  vsd statmon26790.out.gz &

will launch vsd on the named statmon file ... You can poke around and look at all kinds of stats ... If you send a statmon file to me, I'll try to take a look and tell you what I think ... poke me if I don't get back to you right away ... I'm often distracted ...

For ProfMonitor you can analyze a chunk of code from Topaz or GemTools by inspecting the following:

  ProfMonitor monitorBlock: [ "whatever" ].
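
For example, to profile one of the slow queries from a workspace, you could inspect something like the following (a sketch only: #MyMillionObjects and the selection block are made-up placeholders for your own collection and query):

  "placeholders: #MyMillionObjects stands for your collection, #price for a real accessor"
  ProfMonitor monitorBlock: [
    (UserGlobals at: #MyMillionObjects)
      select: [ :each | each price > 100 ] ].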

For analyzing Seaside and headless topaz gems, I had a web API written in Seaside 2.8 that allowed you to turn on profiling and then analyze the results ... I haven't gotten around to porting the tool to Seaside 3.0 ... but the basic technique can be used to produce profiles (without the fancy Seaside interface) ...

ProfMonitor writes its sample information to disk while running, so you can arrange to

  1. start profiling to disk
  2. stop profiling
  3. collect the file
  4. read profile data from file and display results

It's too late for me to go into more detail tonight, but I'll grunge around and get you more details about ProfMonitor. If you look at ProfMonitor class>>runBlock:intervalNs: you'll get some clues as to what you need to do ... basically ProfMonitor samples the whole vm, so it isn't strictly necessary to pass in a block: just start monitoring, wait a bit, then stop monitoring and analyze the results.
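
In other words, something along these lines in a single vm (a rough sketch; the 30-second wait is arbitrary):

  | mon |
  mon := ProfMonitor new.
  mon startMonitoring.
  (Delay forSeconds: 30) wait.  "let the slow requests/queries run"
  mon stopMonitoring; gatherResults.
  mon reportDownTo: 100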

If you figure things out overnight, let me know; otherwise I'll start digging tomorrow morning...

With a large collection like that, I'd be inclined to focus on statmonitor in the first place... Are you using indexing with your large collection, or are you just doing a brute-force scan?

Dale


Re: profiling GLASS

Dale Henrichs
In reply to this post by ccrraaiigg
Craig,

Here's the skinny on using ProfMonitor. First the basic commands broken down:

"Create ProfMonitor instance ... you can commit the ProfMonitor instance
 after you've started monitoring, but keep in mind that the instance is
 can only be used/referenced in the vm in which it was created ... UNTIL
 #gatherResults"

  | profMon |
  profMon := ProfMonitor new.
  "stash the instance somewhere committable (here, a class-side accessor on
   one of your own classes) so another session can find it later"
  self class profMonitor: profMon.
  profMon startMonitoring.

"After #startMonitoring, samples will be taken and written to a tmp file
 without regard to which Smalltalk thread is running... When you've waited
 a sufficient length of time, call #stopMonitoring and #gatherResults."

  profMon
        stopMonitoring;
        gatherResults.

"After #gatherResults, the commited instance of ProfMonitor can be accessed
 and used to create a report in a separate vm (say an interactive topaz)"

  profMon reportDownTo: 100

"The results of reportDownTo: is a string that contains the results formatted
 similarly to gprof..."

If the report is too big, increase the tally limit (the argument to #reportDownTo:) ...

To profile the operation of a headless Seaside vm, you might fork a thread that sits in a 5-second delay loop and monitors a global containing a symbol that tells it to start/stop/gather results.
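
A rough sketch of that watcher follows; #ProfMonControl and #ProfMonInstance are made-up names (use per-gem keys if you run more than one Seaside gem), and keep in mind that the abort/commit calls below affect the whole gem's transaction state:

  "poll a control global every 5 seconds: #start creates and starts a
   ProfMonitor, #stop stops it, gathers the results and commits them"
  [ | mon |
    mon := nil.
    [ true ] whileTrue: [
      | cmd |
      (Delay forSeconds: 5) wait.
      System abortTransaction. "refresh our view of UserGlobals"
      cmd := UserGlobals at: #ProfMonControl ifAbsent: [ nil ].
      cmd == #start ifTrue: [
        mon := ProfMonitor new.
        UserGlobals at: #ProfMonInstance put: mon.
        UserGlobals at: #ProfMonControl put: nil.
        System commitTransaction.
        mon startMonitoring ].
      cmd == #stop ifTrue: [
        mon == nil ifFalse: [ mon stopMonitoring; gatherResults ].
        UserGlobals at: #ProfMonControl put: nil.
        System commitTransaction ] ] ] fork.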

Then, from another topaz, you can remotely start/stop/gather a ProfMonitor instance, view results, etc. If you are running multiple Seaside gems, be careful not to use the same globals to store your ProfMonitor instances ...
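
From that second topaz session the remote control might look roughly like this (same made-up global names as in the sketch above):

  "ask the watcher gem to start profiling"
  UserGlobals at: #ProfMonControl put: #start.
  System commitTransaction.

  "... exercise the slow requests for a while, then ask it to stop ..."
  UserGlobals at: #ProfMonControl put: #stop.
  System commitTransaction.

  "once the watcher has committed the gathered results, view the report"
  System abortTransaction.
  (UserGlobals at: #ProfMonInstance) reportDownTo: 100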

For extra credit you can use gem-to-gem signalling to start/stop the ProfMonitor (this mechanism was used for the Seaside 2.8 ProfMonitor component)...

This should at least get you started.

Dale

Re: profiling GLASS

Dale Henrichs
Craig,

To fork the profiling thread, create a new topaz start script that does the fork before the expression that listens on the port...
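
Roughly, in the topaz script (the listener expression is just a placeholder for whatever your existing Seaside start script already runs; the watcher block is the sketch from my earlier message):

run
  "fork the profiling watcher before the gem blocks waiting for requests"
  [ "watcher loop from the earlier sketch goes here" ] fork.
  "then evaluate whatever expression your existing start script uses to
   listen on the FastCGI port"
%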

Dale


Re: profiling GLASS

ccrraaiigg

Hi Dale--

     Thanks! And aha/oops: lots of groovy advice in chapter 14 of The
Fine Manual. :)  Apologies.

     Indeed, I was doing a naïve brute-force scan, to find where the
performance of naïveté breaks down (I found it ;) ). This is a rudimentary
demo for a client (a CTO) who is considering getting his team to use
GLASS. My first goals are to show that a commodity-cloud GLASS server is
up to the task, and to explain the development process to him through a
real example (including cloud provisioning and performance tuning,
quantifying the gains).


     thanks again!

-C

--
Craig Latta
www.netjam.org/resume
+31   6 2757 7177 (SMS ok)
+ 1 415 287 3547 (no SMS)