[Glass] How to deal with timeouts, fastcgi and what is the expected behavior?

Mariano Martinez Peck
Hi guys, 

I have a strange situation with timeouts under nginx/FastCGI and I am not sure what the expected behavior is. I am executing a Seaside WATask where one of its methods takes (for sure) more than the nginx/FastCGI timeout. The relevant piece of my nginx setup looks like this:

location @seasidemariano {
  include fastcgi_params;
  fastcgi_param REQUEST_URI $uri?$args;
  fastcgi_pass seasidemariano;
  fastcgi_connect_timeout 180;
  fastcgi_send_timeout    180;
  fastcgi_read_timeout    180;

  fastcgi_next_upstream error invalid_header timeout http_500;
}


So, as you can see, I have a timeout of 180 seconds, and I tell nginx to go to the next upstream (gem) on any error, including a timeout. Now, say this method is being executed and it takes more than 180 seconds. What happens is that the user gets an nginx 504 Gateway Time-out in the browser. OK. But I have some questions:

1) What happens to the gem that was executing the task (the one that took more than 180 seconds)? Does the execution finish even though nginx gives a timeout and passes the request to the next gem, or is the gem's execution aborted? Why do I ask? Because I put a log statement to a file inside my method, and it looks as if the method was called 3 times rather than 1. And from a domain point of view, it is not good that such a method is executed 3 times.

2) If I put a larger timeout, say 1500, it works correctly: the method is executed only once, with no timeout. The same happens if I use Swazoo. So it definitely has something to do with the timeouts and FastCGI.

3) Why 3 times? It seems to be because I have 3 gems. I did an experiment and configured only 2 gems in the nginx FastCGI upstream, and indeed, the method was executed only 2 times rather than 3.

So, how do people normally deal with this? Of course, the immediate workaround seems to be to increase the timeout, but that seems risky to me: if for some reason (like a GC run or whatever) one particular request takes longer than the timeout, then my "backend code" could be run more than once.

Thanks in advance, 

--
Mariano
http://marianopeck.wordpress.com

_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass

Re: [Glass] How to deal with timeouts, fastcgi and what is the expected behavior?

Dale Henrichs-3



On Fri, Jul 18, 2014 at 7:36 AM, Mariano Martinez Peck <[hidden email]> wrote:

1) What happens to the gem that was executing the task (the one that took more than 180 seconds)? Does the execution finish even though nginx gives a timeout and passes the request to the next gem, or is the gem's execution aborted? Why do I ask? Because I put a log statement to a file inside my method, and it looks as if the method was called 3 times rather than 1. And from a domain point of view, it is not good that such a method is executed 3 times.

It does sound like nginx is redispatching the http request on timeout... 

2) If I put a larger timeout, say 1500, it works correctly: the method is executed only once, with no timeout. The same happens if I use Swazoo. So it definitely has something to do with the timeouts and FastCGI.

In general I try to avoid timeouts ... it seems that timeouts fire more often because the system is slow than for any other reason, and the standard answer is: increase the timeout ...

So I guess I would wonder why the operation is taking so long ... if the operation is slow because the system is overloaded, then a longer timeout _is_ called for, but then what is a good value for the timeout ..

I guess the real question to ask is what is the purpose of the timeout? 

You might want the gem itself to decide to terminate a request if it is "taking too long"; then you don't need a timeout at the nginx level.
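A minimal sketch of that idea (in Python rather than Smalltalk, and with invented names; this is not GemStone API): the worker itself enforces a deadline on the request and answers "busy" in time, so the front end never has to time out and redispatch.

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

REQUEST_DEADLINE = 1.0  # seconds; in practice, safely below the proxy timeout

_pool = ThreadPoolExecutor(max_workers=1)

def handle_with_deadline(work, *args):
    """Run `work`, but answer 'busy' if it exceeds REQUEST_DEADLINE."""
    future = _pool.submit(work, *args)
    try:
        return ("ok", future.result(timeout=REQUEST_DEADLINE))
    except TimeoutError:
        # Tell the client we are still busy instead of letting the
        # proxy time out and redispatch the request to another gem.
        return ("busy", None)
```

Note that the slow job keeps running in the background after the "busy" answer; a real handler would also need a way to hand its eventual result back.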


3) Why 3 times? It seems to be because I have 3 gems. I did an experiment and configured only 2 gems in the nginx FastCGI upstream, and indeed, the method was executed only 2 times rather than 3.

It does sound like nginx is sending the request again upon a timeout ... could that be? 


Re: [Glass] How to deal with timeouts, fastcgi and what is the expected behavior?

Paul DeBruicker
In reply to this post by Mariano Martinez Peck
Within a WATask, can you offload the long-duration job to the service VM and then poll for the result, avoiding timeouts altogether?

Change the UI to show 'working...' or something while the long job is processing, then update it with the results when they've been calculated. Or something like that.
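The pattern Paul describes can be sketched roughly like this (in Python rather than Seaside/Smalltalk; all names are invented for illustration): start the long job in a background worker playing the "service VM" role, and let the web request merely poll for the result.

```python
import threading
import uuid

_results = {}           # job id -> result, filled in by the worker
_lock = threading.Lock()

def start_job(work, *args):
    """Kick `work` off in the background; return a job id to poll with."""
    job_id = str(uuid.uuid4())
    def run():
        value = work(*args)
        with _lock:
            _results[job_id] = value
    threading.Thread(target=run, daemon=True).start()
    return job_id

def poll(job_id):
    """Return (done, result); the UI shows 'working...' while not done."""
    with _lock:
        if job_id in _results:
            return True, _results.pop(job_id)
    return False, None
```

Each poll request is then fast, so no single HTTP request ever comes near the proxy timeout, however long the job itself runs.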






Re: [Glass] How to deal with timeouts, fastcgi and what is the expected behavior?

Mariano Martinez Peck
In reply to this post by Dale Henrichs-3



On Fri, Jul 18, 2014 at 12:08 PM, Dale Henrichs <[hidden email]> wrote:




It does sound like nginx is redispatching the http request on timeout... 


Exactly. And it should, as my configuration is: 

fastcgi_next_upstream error invalid_header timeout http_500;

So yes...upon a gem timeout, nginx forwards the request to the next gem. 
 
2) If I put a larger timeout, say 1500, it works correctly: the method is executed only once, with no timeout. The same happens if I use Swazoo. So it definitely has something to do with the timeouts and FastCGI.

In general I try to avoid timeouts ... it seems that timeouts fire more often because the system is slow than for any other reason and the standard answer: increase the timeout ...

So I guess I would wonder why the operation is taking so long ... if the operation is slow because the system is overloaded, then a longer timeout _is_ called for, but then what is a good value for the timeout ..


The operation takes long because I need to call an HTTPS API (using Zinc through a local nginx tunnel) many times, posting an XML document and receiving a large XML response each time. How long it takes depends on how many "items" have been selected, so it is hard to estimate.
 
I guess the real question to ask is what is the purpose of the timeout? 


If a gem went down, I would like nginx to forward the request to the other (available) gems.
 
You might want the gem itself to decide to terminate a request if it is "taking too long"; then you don't need a timeout at the nginx level.


3) Why 3 times? It seems to be because I have 3 gems. I did an experiment and configured only 2 gems in the nginx FastCGI upstream, and indeed, the method was executed only 2 times rather than 3.

It does sound like nginx is sending the request again upon a timeout ... could that be?

Yes, it is that. But I don't know how to properly solve both things: being able to have a large timeout (as in this scenario) while still handling the scenario of gems going down. Imagine I don't care and I put a timeout of 1 hour (to say something). Then imagine one gem is down and a web user connects to the site. nginx might assign the request to the gem that went down. The user would then be left waiting in the browser for 1 hour until nginx answers with a timeout. Is this correct?

The biggest issue is that nginx thinks the gem timed out and then forwards the request to the next gem. However, the gem was not dead; it was simply too busy with a time-consuming request ;) Is there no way I can make the gem answer nginx "I am fine, don't worry, just busy, continue to the next gem", hahaha?

Probably the real solution is the service VM, as you and Paul have pointed out several times. But I didn't have time to take a look at it yet :(





--
Mariano
http://marianopeck.wordpress.com


Re: [Glass] How to deal with timeouts, fastcgi and what is the expected behavior?

Mariano Martinez Peck



On Fri, Jul 18, 2014 at 12:54 PM, Mariano Martinez Peck <[hidden email]> wrote:



The biggest issue is that nginx thinks the gem timed out and then forwards the request to the next gem. However, the gem was not dead; it was simply too busy with a time-consuming request ;) Is there no way I can make the gem answer nginx "I am fine, don't worry, just busy, continue to the next gem", hahaha?


If we assume that when a gem goes down it is normally because the process aborted (rather than an HTTP server that stopped responding), then it would be nice if nginx could check whether the process is alive (using the PID or whatever) in order to forward the request to another upstream, rather than detecting failure via an HTTP timeout ...

 
--
Mariano
http://marianopeck.wordpress.com


Re: [Glass] How to deal with timeouts, fastcgi and what is the expected behavior?

Mariano Martinez Peck



On Fri, Jul 18, 2014 at 12:58 PM, Mariano Martinez Peck <[hidden email]> wrote:





If we assume that when a gem goes down it is normally because the process aborted (rather than an HTTP server that stopped responding), then it would be nice if nginx could check whether the process is alive (using the PID or whatever) in order to forward the request to another upstream, rather than detecting failure via an HTTP timeout ...


Maybe I can keep the "error" option but remove "timeout". That way, maybe, nginx will only forward the request upon a gem crash and not on a timeout. I will try this and let you know.
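A minimal sketch of that change, assuming the same location block as in the original post (the larger read timeout value here is illustrative):

```nginx
location @seasidemariano {
  include fastcgi_params;
  fastcgi_param REQUEST_URI $uri?$args;
  fastcgi_pass seasidemariano;

  # Generous read timeout for the long-running task ...
  fastcgi_read_timeout 1800;

  # ... but only retry on connection errors and bad headers.
  # With "timeout" removed from the list, a slow-but-alive gem is no
  # longer redispatched, so the task cannot run on several gems at once.
  fastcgi_next_upstream error invalid_header;
}
```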

In the meantime, do you know how nginx assigns a request to a particular upstream? Is it plain round-robin or something smarter? Imagine I have some gems that are idle and some that are busy (already processing a request). Would nginx simply forward to the next one (when a request arrives), or is it smart enough to look for those that are idle?

Thanks in advance! 

 
 
--
Mariano
http://marianopeck.wordpress.com


Re: [Glass] How to deal with timeouts, fastcgi and what is the expected behavior?

Dale Henrichs-3
In reply to this post by Mariano Martinez Peck



On Fri, Jul 18, 2014 at 8:54 AM, Mariano Martinez Peck <[hidden email]> wrote:



If a gem went down, I would like nginx to forward the request to the other (available) gems.

Are you using something like daemontools? 

If a gem is no longer listening on a socket, nginx should recognize that and send the request to the next available gem. With daemontools, the gem is automatically restarted when the process is no longer present ... it is quite fast at responding ...

There are cases where a gem stays up but is not responsive ... for those I think you can use monit to send periodic HTTP requests to each of the gems to see if it is responding (and kill/restart the gem if it is absent or not responding). Johan has a formula that he uses ...
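A hedged sketch of such a monit check; the port, PID file path, and start/stop commands below are invented for illustration, and Johan's actual recipe may differ:

```
check process seaside_gem1 with pidfile /opt/gemstone/gem1.pid
  start program = "/opt/gemstone/bin/startGem gem1"
  stop program  = "/opt/gemstone/bin/stopGem gem1"
  # Probe the gem over HTTP; restart it if it stops answering.
  if failed port 9001 protocol http
     request "/status" with timeout 10 seconds
  then restart
```

With this in place, a hung-but-listening gem gets restarted by monit, so the nginx timeout no longer has to double as a health check.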

 
You might want the gem itself to decide to terminate a request if it is "taking too long"; then you don't need a timeout at the nginx level.


3) Why 3 times? It seems to be because I have 3 gems. I did an experiment and configured only 2 gems in the nginx FastCGI upstream, and indeed, the method was executed only 2 times rather than 3.

It does sound like nginx is sending the request again upon a timeout ... could that be?

Yes, it is that. But I don't know how to properly solve both things: being able to have a large timeout (as in this scenario) while still handling the scenario of gems going down. Imagine I don't care and I put a timeout of 1 hour (to say something). Then imagine one gem is down and a web user connects to the site. nginx might assign the request to the gem that went down. The user would then be left waiting in the browser for 1 hour until nginx answers with a timeout. Is this correct?

I think that monit/daemontools combo will handle the unresponsive/dead gem issue ... then the timeout becomes less important ... 

The biggest issue is that nginx thinks the gem timed out and then forwards the request to the next gem. However, the gem was not dead; it was simply too busy with a time-consuming request ;) Is there no way I can make the gem answer nginx "I am fine, don't worry, just busy, continue to the next gem", hahaha?

I have done some things in the past where I've had the gem bounce fastCGI requests because it is too busy ... so it can be done, but if all of the gems are too busy you end up in trouble with infinite bouncing ... 

Probably the real solution is the service VM, as you and Paul have pointed out several times. But I didn't have time to take a look at it yet :(

I think that this is the right answer at the end of the day ... you want your Seaside gems to only take a short period of time per request ... I've recently made sure that the serviceVM is in good shape for GemStone 3.x (can't remember if I sent mail to the list or not :)

Until you can get things moved to a serviceVM, you can add additional Seaside VMs ...
 
Dale


Re: [Glass] How to deal with timeouts, fastcgi and what is the expected behavior?

Dale Henrichs-3
In reply to this post by Mariano Martinez Peck
I do think that nginx will just skip over gems that are no longer present.

daemontools/monit are the tools you would use to restart dead/unresponsive gems.


On Fri, Jul 18, 2014 at 8:58 AM, Mariano Martinez Peck <[hidden email]> wrote:

If we assume that when a gem goes down it is normally because the process aborted (rather than an HTTP server that stopped responding), then it would be nice if nginx could check whether the process is alive (using the PID or whatever) in order to forward the request to another upstream, rather than detecting failure via an HTTP timeout ...



Re: [Glass] How to deal with timeouts, fastcgi and what is the expected behavior?

Dale Henrichs-3
In reply to this post by Mariano Martinez Peck



On Fri, Jul 18, 2014 at 9:06 AM, Mariano Martinez Peck <[hidden email]> wrote:

Maybe I can keep the "error" option but remove "timeout". That way, maybe, nginx will only forward the request upon a gem crash and not on a timeout. I will try this and let you know.

In the meantime, do you know how nginx assigns a request to a particular upstream? Is it plain round-robin or something smarter? Imagine I have some gems that are idle and some that are busy (already processing a request). Would nginx simply forward to the next one (when a request arrives), or is it smart enough to look for those that are idle?

I'm not familiar with nginx, but back when I was testing Apache and lighttpd, I observed that lighttpd was pretty good at round-robining requests to idle gems ... I assume that nginx is similarly well-behaved ...
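For what it's worth, nginx's default upstream balancing is round-robin regardless of which gems are busy. A hedged sketch of the upstream block (the gem ports are invented) showing the `least_conn` alternative, which sends each request to the upstream with the fewest active connections and so tends to prefer idle gems:

```nginx
upstream seasidemariano {
  # Default is round-robin; least_conn instead picks the gem with the
  # fewest in-flight requests, i.e. it prefers idle gems.
  least_conn;
  server 127.0.0.1:9001;
  server 127.0.0.1:9002;
  server 127.0.0.1:9003;
}
```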

Dale
