GLASS configuration


GLASS configuration

dario trussardi
Ciao,

        I would like to create a solution based on GLASS,

        whose use will be limited to a few hours per day (5-6 hours) for a few days (2-10 days).

        The constraint to consider is that, while operating,

        the solution must handle 100-500 simultaneous users on PCs, tablets, and smartphones.

        What GLASS server configuration (hardware and software) should I prepare?

        Does anyone have experience or advice about that?

        Thanks,

                Dario

Re: GLASS configuration

Dale Henrichs
Dario,

It depends upon how hard those 100-500 users will be hitting the system, as your sizing depends upon how many concurrent gems you need to service the requests ...

If the requests are short and infrequent you might be able to get away with 3 gems ... the frequency and intensity of the traffic will drive up the number of gems needed, which will in turn drive up your overall system memory requirements ...

You should use something like siege[1] to get a feel for what the load will be like ... with siege you can control how many concurrent users and the frequency ... you can also supply a set of urls to use when banging on the system ... start testing against your sandbox environment with 50 concurrent users and turn up the heat until you've saturated the system (use statmonitor to determine what's saturated). Also read my post on scaling seaside[2]; most of the things I talk about in that article are still applicable today.
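As a concrete sketch, a minimal siege run against a sandbox might look like the following (the port and URL paths are hypothetical placeholders for your own application; check `man siege` for your version's flags):

```shell
# Build a URL list for siege to cycle through (hypothetical app paths).
cat > urls.txt <<'EOF'
http://localhost:8383/app/reports
http://localhost:8383/app/orders
EOF

# 50 concurrent users, up to 3s random delay between hits, for 10 minutes.
# (Commented out here because it needs siege installed and a running server.)
# siege -c 50 -d 3 -t 10M -f urls.txt
```

While that runs, capture a statmonitor trace on the server so you can see which resource saturates first.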

There are three basic variables that you'll end up monkeying with:

  disk i/o
  RAM
  CPU

disk i/o will probably be your first hurdle and the standard solution is to put your tranlogs on a separate "spindle" ... even if you are using a SAN you probably want to put the tranlogs on a separate logical drive, as linux has some internal rules it follows about prioritizing reads and writes ... if you can use an SSD drive, that should help minimize the impact of disk i/o. The second disk i/o related hurdle happens when your SPC fills up with "dirty pages" ... you probably won't hit this problem right away, but if you do, the problem is caused by the fact that your aio page servers are not able to write pages to disk fast enough ... the solution is first to make the aio page servers run hotter by tweaking their parameters, and second to add additional "spindles", distribute the extents across multiple spindles, and then add additional aio page servers ...
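For reference, that layout is expressed in the stone's config file with settings along these lines (the paths are hypothetical and the parameter names should be double-checked against the configuration reference for your GemStone/S version):

```
# Tranlogs on their own spindle, away from the extents
STN_TRAN_LOG_DIRECTORIES = /tranlogs/;
STN_TRAN_LOG_SIZES = 1000;

# Extents distributed across two spindles
DBF_EXTENT_NAMES = /extents1/extent0.dbf, /extents2/extent1.dbf;

# Additional aio page servers to keep dirty pages flowing to disk
STN_NUM_LOCAL_AIO_SERVERS = 2;
```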

While we're on the subject, Seaside on GemStone can produce quite a bit of session state, which puts pressure on the SPC and disk ... for high volume applications you will want to minimize your use of session state as much as possible by using the RESTful facilities of Seaside to service your read-mostly requests ... this can have a significant impact, and presumably in most web applications the vast majority of requests are read only ...


If you are running over several days, you will want to turn on epoch gc and only run the mfc in off-hours (take the hourly mfc out of the maintenance vm script)... an epoch of an hour or so will get rid of the bulk of the session state garbage that accumulates and keep the size of your extents manageable ... Ideally for this test run, your repository size can be kept smaller than the SPC, but you may need the help of epoch gc ...
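If it helps, epoch gc is switched on via a stone config setting along these lines (worth verifying the name and the epoch tuning options against your version's System Administration Guide):

```
# Enable epoch garbage collection on the stone
STN_EPOCH_GC_ENABLED = TRUE;
```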


Your RAM requirements will start with a baseline of 2GB for the SPC, plus whatever the stone and other supporting processes take, and then add somewhere around 100MB per gem (rule of thumb is twice the temp obj cache size) ...
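To make that arithmetic concrete, here is a back-of-the-envelope estimate (the gem count of 10 is just an example):

```shell
# Rule of thumb from above: 2GB SPC baseline + ~100MB per gem
# (roughly twice the temp obj cache size), before stone/OS overhead.
gems=10
spc_mb=2048
per_gem_mb=100
total_mb=$((spc_mb + gems * per_gem_mb))
echo "${total_mb} MB"
```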

If you find in your scaling work that you need more than 2 CPUs or a larger SPC for a satisfactory demo, we can issue you a time limited license that gives you more CPU or SPC ...

This should be enough to get you started, but let me know if you need any other help,

Dale

[1] http://www.joedog.org/siege-home/
[2] http://gemstonesoup.wordpress.com/2007/10/19/scaling-seaside-with-gemstones/

Re: GLASS configuration

dario trussardi
Dale,

        I have some questions:

A) Some years ago I bought an Intel processor for my Ubuntu-based server, where I run some GLASS environments.

        I remember running into problems with the Intel processor specifications (Intel 64? Intel Virtualization Technology? something else? I don't remember in detail).

        Now, before buying a processor, I ask: which technologies does the processor need to support for a GLASS server?

        I'm leaning toward an Intel Core i5-3330 processor.

        It supports Intel 64 and Intel Virtualization Technology; is that enough?

        Any considerations about which processor to use for a GLASS server?


> Dario,
>
> It depends upon how hard those 100-500 users are hitting the system as your system sizing will depend upon how many concurrent gems you need to service the requests ...
>
> If the requests are short and infrequent you might be able to get away with 3 gems ... the frequency and intensity of the traffic will drive up the number of gems needed which will then drive up your overall system memory requirements ...

B) How can I configure the number of gems?

> You should use something like siege[1] to get a feel for what the load will be like ... with siege you can control how many concurrent users and the frequency ... you can alos supply a set of urls to use when banging on the system ... start testing against your sandbox environment with 50 concurrent users and turn up the heat until you've saturated the system (use statmonitor to determine what's saturated). Also read my post on scaling seaside[2] most of the things I talk about in that article are still applicable today.
>
> There are three basic variables that you'll end up monkeying with:
>
>  disk i/o
>  RAM
>  CPU
>
> disk i/o will probably be your first hurdle and the standard solution is to put you tranlogs on a separate "spindle" ... even if you are using a SAN you probably want to put the tranlogs on a separate logical drive as linux has some internal rules it follows about prioritizing reads and writes  ... if you can use an SSD drive, that should help minimize the impact of disk i/o. The second disk i/o related hurdle happens when your SPC fills up with "dirty pages" ... you probably won't hit this problem right away, but if you do, the problem is caused by the fact that you aio page servers are not able to write pages to disk fast enough ... the solution is to first make the aio page servers run hotter by tweaking their parameters and the second is to add additional "spindles" and  distribute the extents across multiple spindles and then add additional aio page servers...

C) How can I distribute the extents across multiple spindles?

What do I need to configure for that?

If I understand correctly, it's better to have two 64GB SSDs (with the extents spread across them) in place of one 128GB SSD.

        Is that right?

Thanks,

                Dario



Re: GLASS configuration

Dale Henrichs-3
Dario,

The virtualization technology is needed if you are running gemstone within a vmware (or kvm) virtual machine (i.e., the appliance). If you install gemstone directly on your server, then the virtualization support isn't needed, although I think that all modern 64-bit processors have virtualization support...

Dale


Re: GLASS configuration

Richard Sargent
Hi Dario,

The question of SSD versus rotating drives is an interesting one. The reason for multiple drives is directly related to IO throughput. If you are writing (or reading) enough data, a single IO channel might be saturated, but multiple IO channels could avoid that. SSD is substantially faster than any rotating drive, so the "rule of thumb" of one drive per extent doesn't necessarily apply.


Have fun, and let us know how it works.
Richard Sargent




Re: GLASS configuration

Paul DeBruicker
In reply to this post by dario trussardi
The Core i5 does not support ECC RAM.  You need to get a Xeon processor
for ECC RAM.  For reasons why you'd want ECC RAM in a server, read this:

http://perspectives.mvdirona.com/2009/10/07/YouReallyDONeedECCMemory.aspx








Re: GLASS configuration

Richard Sargent
Great link, Paul! Thanks for sharing it.

