Sizing Glass Server on Amazon EC2

Sizing Glass Server on Amazon EC2

johnmci
Ok, I"m slowing working towards deploying an iOS app that will have a back end hosted on GLASS. However I'm unsure how to size and configure the server for my needs.

(a) I have used RC classes in various places, per some articles on using those classes to avoid the conflicts that cause Seaside to abort and restart the web request, e.g. http://gemstonesoup.wordpress.com/2008/03/17/glass-101-fire/

(b) I do occasionally have to go off to a third-party web service for some data to satisfy a query, which might take a few seconds...

(c) I likely won't have lots of live data, so things like http://blog.9minutesnooze.com/raid-10-ebs-data won't be required. Also, I see Amazon now has offerings aimed at I/O-bound workloads.

So I'm curious, for example, about how many FastCGI servers I should run; although three is the basic recommendation, I don't know what a more production-oriented number would be.

Load? Goodness, I'm sure the answer should be "lots", but frankly I have no idea. Perhaps a better question is: how do I tell if a backlog situation is pending? I would of course like the server to email me when it knows service thresholds have been exceeded.

Re: Sizing Glass Server on Amazon EC2

Paul DeBruicker
Hi John,

I didn't get this message through my email client but instead saw it on
the nabble forum. So maybe it didn't get through to everyone else.  The
original is here:

http://forum.world.st/Sizing-Glass-Server-on-Amazon-EC2-td4644507.html


For item (b), the only (maybe?) helpful thing I can add is that it is
possible to adapt the Seaside gem start scripts to start a background
gem that maintains and processes a queue of background tasks, which
the Seaside gems can add tasks to as necessary.

It's my understanding that SSDs are a great help with GemStone, so using
a VPS that has them seems to be the way to go. Amazon's High I/O
instances have them, but they seem huge for starting out.

Re: Sizing Glass Server on Amazon EC2

Dale Henrichs
In reply to this post by johnmci
Hello John ...

embedded comments

----- Original Message -----
| From: "johnmci" <[hidden email]>
| To: [hidden email]
| Sent: Friday, August 17, 2012 4:13:58 PM
| Subject: [GS/SS Beta] Sizing Glass Server on Amazon EC2
|
| Ok, I"m slowing working towards deploying an iOS app that will have a
| back
| end hosted on GLASS. However I'm unsure how to size and configure the
| server
| for my needs.
|
| (a) I have used RC classes in various places per some articles on
| usage of
| the classes to avoid conflict that causes Seaside to abort and
| restart the
| web request. IE
| http://gemstonesoup.wordpress.com/2008/03/17/glass-101-fire/

That's exactly the use case for using RC classes ...

|
| (b) I do have to occasionally go off to a third party web service for
| some
| data to satisfy a query, that might take a few seconds....

For these "long requests" you will not want to handle them in the Seaside vm. You can arrange to have these long requests handled in a separate service vm ... while your browser polls/waits for results.

I wrote up some examples for an implementation of what I called a ServiceVM[1] and then Nick Ager expanded on the original example to add a future-like implementation[2]. I think that Paul DeBruicker has done some work in this area as well ...

The basic idea is that you set up the service vm so that it will pull tasks (basically blocks) off of a queue and then arrange to fork a thread to manage the execution of the task ... The tasks are expected to be operations (like third party web apis) that will spend most of the elapsed time waiting for a response with no need for the task to be in transaction while waiting ...

[1] http://code.google.com/p/glassdb/wiki/ServiceVMExample
[2] http://seaside.gemstone.com/ss/Seaside30/Seaside-GemStone-ServiceTask-NickAger.20.mcz
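The ServiceVM itself is GemStone Smalltalk, but the shape of the idea translates. Here is a minimal Python sketch of the same pattern, with hypothetical `submit`/`poll` names (these are not the ServiceVM API): the web process enqueues a task and gets a ticket, a background worker forks a thread per task, and the client polls the ticket until a result appears.

```python
import threading
import queue
import uuid

tasks = queue.Queue()
results = {}            # ticket -> result, filled in by the worker threads

def submit(block):
    """Enqueue a zero-argument callable; return a ticket to poll."""
    ticket = str(uuid.uuid4())
    tasks.put((ticket, block))
    return ticket

def poll(ticket):
    """Return the result if ready, else None (client keeps polling)."""
    return results.get(ticket)

def worker():
    # Each task runs in its own thread so one slow third-party call
    # does not hold up the rest of the queue.
    def run(ticket, block):
        results[ticket] = block()
    while True:
        ticket, block = tasks.get()
        threading.Thread(target=run, args=(ticket, block), daemon=True).start()
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()
```

The key property mirrors Dale's point: the request-handling process never blocks on the slow call; it only enqueues and polls.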
|
| (c) I likely won't have lots of live data so doing things like
| http://blog.9minutesnooze.com/raid-10-ebs-data won't be required.
| Also I see
| Amazon has offerings now for lots of i/o bound instances.

The free version of GemStone has a 2GB shared page cache limit, so if you can fit your working set in the 2GB SPC, you should be able to avoid i/o delays ... although the standard caveat of keeping your tranlogs and extents on separate spindles still applies, since the MFC does tend to hammer the disk when it runs.
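As a concrete illustration of that caveat: the relevant settings live in the GemStone configuration file. The parameter names below are real GemStone/S configuration options, but the sizes and paths are illustrative guesses for a small EC2 deployment, not a recommendation:

```
SHR_PAGE_CACHE_SIZE_KB = 2000000;     # shared page cache, ~2GB (free-license ceiling)
DBF_EXTENT_NAMES = "/gemstone/extents/extent0.dbf";   # extents on one volume
STN_TRAN_LOG_DIRECTORIES = "/gemstone/tranlogs1/", "/gemstone/tranlogs2/";  # tranlogs on another
STN_TRAN_LOG_SIZES = 100, 100;        # tranlog sizes in MB
```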

|
| So I'm curious about how many FastCGI servers I should run for
| example,
| although three is a basic recommendation I don't know what a more
| production
| related number should be.

A Seaside server vm (whether it be Swazoo, FastCGI or Zinc) can only handle one concurrent transaction. You will want to have enough Seaside vms to handle the expected number of concurrent requests ...
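Since each Seaside vm handles one request at a time, the "how many FastCGI servers" question reduces to a back-of-the-envelope application of Little's law. The Python below is just that arithmetic; the rates and times are made-up examples, not measurements:

```python
import math

def vms_needed(requests_per_second, seconds_per_request, headroom=2.0):
    """One-request-at-a-time vms needed: arrival rate x service time,
    padded with a headroom factor for bursts (Little's law)."""
    return max(1, math.ceil(requests_per_second * seconds_per_request * headroom))

# e.g. 10 req/s at 50 ms each keeps about half a vm busy -> 1 vm suffices
print(vms_needed(10, 0.050))   # 1
# but a 3-second third-party call per request changes the picture entirely
print(vms_needed(10, 3.0))     # 60
```

The second number is why the slow third-party calls belong in a service vm rather than in the Seaside vms.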

|
| Load? Goodness I'm sure the answer should be lots, but frankly I have
| no
| idea, and perhaps a better question is how do I tell if I have a
| backlog
| situation pending, would of course like to have the server email when
| it
| knows service thresholds have been exceeded.

There is no stat that currently records the number of queued requests, but that could be added to the framework[3]. A task for monitoring performance could be added to the maintenance vm[4] and that task could arrange to send email ...

[3] http://code.google.com/p/glassdb/issues/detail?id=349
[4] http://code.google.com/p/glassdb/wiki/MaintenanceVMTasks
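Until such a stat exists, the monitoring task would boil down to something like the following Python sketch. The backlog number and the e-mail addresses are placeholders; the actual queued-requests stat is exactly what issue 349 asks for:

```python
from email.message import EmailMessage

BACKLOG_THRESHOLD = 10   # placeholder service threshold

def check_backlog(queued_requests, send=None):
    """Build (and optionally send) an alert when the backlog exceeds
    the threshold; return None when everything is within limits."""
    if queued_requests <= BACKLOG_THRESHOLD:
        return None
    msg = EmailMessage()
    msg["Subject"] = "GLASS backlog: %d queued requests" % queued_requests
    msg["From"] = "glass@example.com"   # placeholder addresses
    msg["To"] = "ops@example.com"
    msg.set_content("Service threshold exceeded.")
    if send is not None:               # inject an SMTP sender in production
        send(msg)
    return msg
```

Run periodically from a scheduler (in GLASS terms, as a maintenance-vm task), this gives the "email me when thresholds are exceeded" behavior John asked about.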

Are you planning on attending ESUG? If so, that would be a good time to talk in more detail about your application with myself and a number of the other GemStone folks ...

Dale

Re: Sizing Glass Server on Amazon EC2

Paul DeBruicker
On 08/20/2012 10:07 AM, Dale Henrichs wrote:

> |
> | So I'm curious about how many FastCGI servers I should run for
> | example,
> | although three is a basic recommendation I don't know what a more
> | production
> | related number should be.
>
> A Seaside server vm (whether it be Swazoo, FastCGI or Zinc) can only handle one concurrent transaction. You will want to have enough Seaside vms to handle the expected number of concurrent requests ...
>
> |
> | Load? Goodness I'm sure the answer should be lots, but frankly I have
> | no
> | idea, and perhaps a better question is how do I tell if I have a
> | backlog
> | situation pending, would of course like to have the server email when
> | it
> | knows service thresholds have been exceeded.
>
> There is no stat that currently records the number of queued requests, but that could be added to the framework[3]. A task for monitoring performance could be added to the maintenance vm[4] and that task could arrange to send email ...


Another thing to look into is mod_cluster. Philippe Marschall has built
some support for it in Seaside+Pharo, but I have no idea what would be
necessary to run it on GemStone. See:


http://article.gmane.org/gmane.comp.lang.smalltalk.squeak.seaside/22629

Re: Sizing Glass Server on Amazon EC2

NorbertHartl
In reply to this post by johnmci

On 18.08.2012 at 01:13, johnmci <[hidden email]> wrote:


John,

as the clever people have already responded, I think it is my turn :) Regardless of technology: why think about scaling before you face an actual performance problem? Most of the time it isn't necessary, and if it is, your premature choice was probably the wrong one :)
Using RC classes is a good way to avoid conflicts, and as a side effect it also brings additional speed. When you say you won't have lots of live data, what size are we talking about? The magic number: if your whole data set is smaller than the size of the shared page cache, you are in the fast category already.
On the other hand, you are talking about EC2. There you can migrate an instance to the next bigger size within minutes. That is so flexible that you can always scale the hardware until you have solved a performance problem.

If you are still not convinced, then just do some math. Estimate the number of app installations and imagine how often each will be used per day. If you then think about how many user actions will actually trigger a request, you will arrive at a very small number, I guess. Unless you have very, very heavy operations on the server, I would expect you won't experience a performance problem even with an EC2 tiny instance.

Our operations on the server are REST only and not very heavy. At the moment we serve approx. 120,000 app installations, which adds up to millions of requests. The server monitoring tells me that I get 338 millirequests per second, that is one request every 3 seconds on average. I have two fastcgi/topaz processes. As one request needs approx. 30-40 ms inside GLASS, I could easily serve all of the clients with a single fastcgi instance. The server load is 0.09. Most of the load comes not from the web requests but from the GemStone garbage collector :)
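Norbert's numbers check out; as a quick sanity check in Python:

```python
# 338 millirequests/s is about one request every 3 seconds, and at
# 30-40 ms per request a single fastcgi/topaz gem is almost idle.
rate = 0.338                   # requests per second
interval = 1 / rate            # seconds between requests, ~2.96
print("%.1f s between requests" % interval)

utilization = rate * 0.040     # taking the slow end, 40 ms per request
print("%.1f%% of one gem busy" % (utilization * 100))   # ~1.4%
```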

To summarize: you will need a large user base with heavy operations to get performance problems at all, regardless of the EC2 instance. The tiny instance is just a bad choice because it has high latency. If you do run into performance issues, I guess GLASS will not be the first thing to look at. Caching behavior in your front-end web server might be more important, and if you transfer bigger data to the client you need something like [1]. Mobile clients often have bad network connections, which means requests take a long time, so you need an environment where a slow client request does not block a fastcgi process.

hope this helps!

Norbert

[1] https://www.varnish-cache.org