Concurrent requests from multiple sessions

Concurrent requests from multiple sessions

wilwarin
Hi all,

We are currently developing a web application in Seaside, and it is necessary for us to work in VAST. The application runs as a Windows service, and for a few days we have been facing the following issue:

- At one moment two users (let's call them 'A' and 'B') are logged in to the application, so we have two different sessions.
- 'A' requests a page with a long list of objects fetched from DB2, so it takes several seconds to get the results.
- Less than one second after A's request, 'B' requests another page, for instance a simple static page.
- While A's request is being handled, B's browser window freezes and waits for the several seconds mentioned above.

We have searched a lot, but the results are still not what we would expect. This issue confuses us, because in the future the application should serve hundreds of users with very similar combinations of requests.

We didn't know where our problem lay, so we tried a similar test with a single page containing a heavy calculation. Then we tried the same in Pharo to rule out a problem in VAST. Both gave the same results.

Is there anything we are missing? What should we do to achieve parallel (or at least better) processing of requests?

Thank you very much for your responses.

Ondrej
Re: Concurrent requests from multiple sessions

StormByte
Since Pharo is green-threaded (that is, the whole image runs on a single OS thread), a very expensive operation can make it unresponsive until it finishes.
Try it with 9999999999 factorial and you will see. Because of this, even if you run [ 9999999999 factorial ] fork, you will still experience some lag.
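
A minimal sketch of that point (not from the original post): a forked block still shares the single OS thread, so the most you can do inside one image is run the heavy work at a lower priority; whether that helps depends on the priority your server processes run at.

  "Sketch only: the forked computation still competes for the one OS thread,
  but at a lower priority other runnable processes are scheduled first."
  [ 9999999999 factorial ] forkAt: Processor userBackgroundPriority.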

That being said, and as also discussed in another thread I started, I would
suggest running several images behind a load balancer (nginx, for example);
my suggestion is to use one image per virtual core to get the maximum
possible performance.

That way, while one user's expensive operation is taking place, other users
can be directed to another, more idle image.

Re: Concurrent requests from multiple sessions

StormByte
In reply to this post by wilwarin
wilwarin wrote:

> because in the future the
> application should serve hundreds of users with very similar combinations
> of requests.
>
That sentence also makes me think that you could enable a database data cache
(for example via memcached), so the query result is stored in RAM and
identical further requests cost almost nothing because they are already
cached.

Re: Concurrent requests from multiple sessions

wilwarin
David, thank you for your answers.

Several images & load balancing is the solution we were a little worried about, as we assume we would then need more database connections. And one of our goals was to keep that number as low as possible.

Regarding the similar combinations of requests I mentioned: those long lists of objects are overviews of big tables, and each user creates a specific filter to display a subset. Right now I cannot imagine how the database cache could help in this case, but I will look at it.

Thank you once more.

Ondrej.
Re: Concurrent requests from multiple sessions

Jan van de Sandt
In reply to this post by wilwarin
Hi,

By default a call to a database like DB2 will block the image in VAST. You can change this by setting the #AllCallsThreaded preference in AbtDbmSystem to true. The calls will then use a different OS thread, and other processes in the image will continue to run.

Jan.
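
For reference, a hedged sketch of that setting; the selector below is an assumption and may not match your VAST release, so treat it as pseudocode for "set the #AllCallsThreaded preference on AbtDbmSystem to true":

  "Assumed selector - check your VAST documentation. Once the preference is
  true, database calls run on a separate OS thread and the rest of the image
  keeps serving requests."
  AbtDbmSystem allCallsThreaded: true.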

Re: Concurrent requests from multiple sessions

wilwarin
Thank you, Jan.

We will try it. Unfortunately, we found out that communication with the database is not the worst part. When we have many results, they need to be processed before rendering, and that processing takes most of the time and blocks the image.

We assumed that almost every project faces a similar issue - many sessions requesting server-side processing simultaneously - and that it wouldn't be so difficult to make it all 'parallel'.

Does that mean most Seaside projects run several images behind the scenes, so one user's request is not blocked by another's? Is there any other way to achieve that (instead of the 'several images & load balancing' combination)?

Again, thank you all for your time.


Re: Concurrent requests from multiple sessions

Bob Arning-2
In reply to this post by wilwarin
You've had some good answers so far. Here are some other ideas that might also help:

- Can you split the single big DB2 request into several smaller ones and yield the processor between them, to give other Smalltalk processes a chance? (A sketch of this idea follows below.)
- Can you move the DB2 requests to a separate Smalltalk image? Image one connects to image two via a socket and sends the request. While image two is building the answer, image one is simply waiting for data on a socket, and that can happen concurrently with other processes in image one.
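
A minimal sketch of the first idea (not from the original post; #fetchPage:size: is a hypothetical stand-in for your own paginated DB2 query):

  "Process the big result set in chunks and explicitly yield between chunks,
  so other runnable processes at the same priority get a turn on the CPU."
  | allRows |
  allRows := OrderedCollection new.
  1 to: 20 do: [ :pageIndex |
      allRows addAll: (self fetchPage: pageIndex size: 500).
      Processor yield ].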

Cheers,
Bob

Re: Concurrent requests from multiple sessions

Nowak, Helge
In reply to this post by wilwarin
Dear Ondrej,

since processing the loaded data is your bottleneck: is there a way to parallelize that processing? If so, you could try parallelizing it with Smalltalk processes. Depending on the nature of the processing, that alone can already yield good results. In Cincom Smalltalk you could also use MatriX to spawn worker images; as you are on VAST you don't have that option, but maybe VAST offers other possibilities for a similar architecture.

HTH
Helge
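
A minimal sketch of the in-image variant (not from the original post; the row data and the per-row work are placeholders): split the loaded rows into batches, process each batch in a forked Smalltalk process at background priority, and join with a semaphore. Inside one image this still interleaves on a single OS thread, so it helps responsiveness rather than raw throughput.

  | rows batchSize done |
  rows := (1 to: 10000) asOrderedCollection.   "stand-in for the loaded DB2 rows"
  batchSize := 2500.
  done := Semaphore new.
  1 to: rows size by: batchSize do: [ :start |
      [ (rows copyFrom: start to: (start + batchSize - 1 min: rows size))
            do: [ :each | each * 2 ].          "stand-in for the real per-row processing"
        done signal ] forkAt: Processor userBackgroundPriority ].
  (rows size / batchSize) ceiling timesRepeat: [ done wait ].   "wait for all batches"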

Re: Concurrent requests from multiple sessions

Sven Van Caekenberghe-2
In reply to this post by wilwarin

> On 06 Mar 2015, at 11:17, wilwarin <[hidden email]> wrote:
>
> We didn't know where our problem lay, so we tried a similar test with a
> single page containing a heavy calculation. Then we tried the same in
> Pharo to rule out a problem in VAST. Both gave the same results.

I just tried the following on Pharo 4 with Seaside 3.1, in a WATask

go
  self inform: 'WATest1 ready.'.
  (self confirm: 'Wait 30s ?')
    ifTrue: [
      (self confirm: 'Do some benchmarking ?')
        ifTrue: [
          self inform: ([ 100 factorial ] benchFor: 30 seconds) asString ]
        ifFalse: [
          30 seconds wait.
          self inform: 'Back from waiting 30s' ] ]
    ifFalse: [ self inform: 'OK then, I did not wait' ]

In both cases, you can do other work during the 30s. Of course, things get quite slow during the benchmarking, but that is logical since you are pushing the machine to 100%, consuming all CPU power.

Like Jan suggests, some DB interfaces are blocking, effectively killing (serialising) multiprocessing. I know that PostgresV2 is not like that, since it uses a TCP networking interface.

But that problem (not being able to process concurrent requests) is certainly not inherent to Seaside or Pharo. If that were the case we should all stop using it.

Sven

Re: Concurrent requests from multiple sessions

Philippe Marschall
In reply to this post by wilwarin

The only Seaside limitation I'm aware of is a lock around every session,
so only one request from 'A' can be processed at any given time. However,
this should not affect any other users. The lock is there because sessions
and components are mutable. Use the code from Sven to verify that Seaside
and your web server are not the issue.

In theory it is possible for Seaside to stream the response to the client
while rendering it on the server. This should consume fewer resources on the
server (because the whole response does not have to be built up in memory
first) and should improve the feel of responsiveness on the client, because
the browser can start rendering before the response is fully received.
However, this requires server support, and I don't know the state of this in
VA. Also, you can't display an error page in case of an exception. You have
various other options as well, like pagination or loading via JavaScript.

Using multiple images with Seaside is possible but has pros and cons. On the
positive side, you can make use of multiple CPUs and have better
availability. On the negative side, you'll have to implement sticky sessions
(we support faking jvmRoute) and you'll have to juggle multiple images.

Cheers
Philippe
Re: Concurrent requests from multiple sessions

Sean Glazier
In reply to this post by Sven Van Caekenberghe-2
I would make a specialized DB2 interface in C that makes the request in an OS thread and then signals the VAST VM when the result is in. Your VAST VM will then not block while the DB2 request is being processed. In VisualWorks we have THAPI precisely because of these DB and other blocking issues.

You can build a similar system for VAST C calls; you will just need to generalize a solution that passes along the needed information for the call and the data. You can spike a solution easily enough for the DB2 calls by creating a DLL with the same name as the DB2 DLL, so that VAST loads yours instead. Your DLL starts up a thread pool and forwards each call in a new thread. You will need to coordinate the response so that you are not blocking the VM thread, allowing it to respond to other Seaside sessions.

Sean

Re: Concurrent requests from multiple sessions

StormByte
In reply to this post by wilwarin
The number of connections should not worry you if you use a pool with a
sensible maximum number of connections.

As for the cache, it is only useful when the same data is requested
repeatedly; the flow looks like this (a small sketch follows the list):

User A asks for X
 * query the database
 * store the result in the cache

User B asks for X
 * retrieve it from the cache

User C adds items
 * query the database
 * invalidate the cache

About the filters, depending on the amount of data, you can a) apply the
filters after the data has been fetched from the database, or b) apply them
directly in the database query.

With a) you get more cache hits, but you may lose some performance if the
data/filter combination is expensive to evaluate.

With b) you will likely get fewer cache hits (because you would have to cache
each data+filter combination), but you may gain a bit of performance.

Those are the two cases to study here.

Re: Concurrent requests from multiple sessions

StormByte
In reply to this post by wilwarin
wilwarin wrote:

> Does that mean most Seaside projects run several images behind the scenes,
> so one user's request is not blocked by another's? Is there any other way
> to achieve that (instead of the 'several images & load balancing'
> combination)?
>

For that I have an idea.

In an experiment, when I was programming my own database connection pool, I
decided to keep many open connections in the pool and use one of those
connections per query.

That could potentially help you with the blocking problem.

I mean, while user A is executing operations on connection A, user B can just
grab connection B to execute their own operations without affecting user A
(unless you use transactions and have locked tables).

That also works for small operations, where you grab a connection, execute
the actions, and return the connection to the pool.
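
A minimal sketch of such a pool (not from the original post; DB2Connection, #open and #runQuery: are hypothetical placeholders for whatever driver calls you actually use):

  Object subclass: #SimpleConnectionPool
      instanceVariableNames: 'connections'
      classVariableNames: ''
      category: 'Example'.

  SimpleConnectionPool >> initializeWithSize: anInteger
      "Open all connections up front and park them in a thread-safe queue."
      connections := SharedQueue new.
      anInteger timesRepeat: [ connections nextPut: DB2Connection new open ]

  SimpleConnectionPool >> withConnectionDo: aBlock
      "Borrow a connection, run the block, and always return the connection."
      | conn |
      conn := connections next.   "blocks only when the pool is exhausted"
      ^ [ aBlock value: conn ] ensure: [ connections nextPut: conn ]

Usage would then look like: pool withConnectionDo: [ :conn | conn runQuery: aQueryString ].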

Depending on your use case, this scheme may help you without needing more
images, but I insist: even if you fork the expensive processes, Pharo is
still green-threaded, so you can't expect it to stay fully responsive under
such a heavy load.

Re: Concurrent requests from multiple sessions

wilwarin
In reply to this post by wilwarin
Hi all,

please excuse my inactivity; now I would like to summarize everything above and ask you for a few last answers and pieces of advice.

Regarding the DB2 I mentioned: as I assumed, the database is definitely not the issue, so I am sorry for the inaccuracy in my first post. At the moment we are sure that there is no blocking from the database side (we can see the blocking in the data processing). Thank you guys for your solution proposals.


> The only Seaside limitation I'm aware of is a lock around every session,
> so only one request from 'A' can be processed at any given time. However,
> this should not affect any other users. The lock is there because sessions
> and components are mutable. Use the code from Sven to verify that Seaside
> and your web server are not the issue.

This is the behavior we assumed; this (at most one request per session at any time) has to be, and actually is, enough. However, we are really surprised that other users are affected. According to our investigation they are always affected by every concurrent request, no matter how demanding it is.

(I tried Sven's code and it is possible to do other work during the waiting; that is OK.)


> But that problem (not being able to process concurrent requests) is
> certainly not inherent to Seaside or Pharo. If that were the case we
> should all stop using it.

Hopefully I understand correctly that this is not the case. If I am right, it means that in Seaside such situations have to be solved with a different approach: in the case of, for example, a heavy server-side calculation (or processing of a big amount of data), we have to expect a possible problem with concurrent requests from multiple sessions. Am I right? Or, if not, where could the problem lie?

As I wrote, we are working in VAST - I tried to simulate the same problem in Pharo with the same result, so I don't think the dialect matters much here. Regarding Apache (or running the server from code), everything is in the default configuration. The database is not an issue. To me it looks as if everything I wrote here is standard behavior for an application built on Seaside => an application that has to serve non-trivial concurrent requests has to use some workaround (multiple images & load balancing, parallelization of processes, ...). Please correct me if I am wrong.

Guys, could you please tell me what I am missing?

Maybe the application we are developing is not a 'standard' Seaside app (in terms of data volume and heavy server-side processing). Then I believe it would require a non-standard architecture (given all that, is Seaside still the path we can/should follow?). Our problem is that we do not know a reliable way to do that - and this is the reason why I am asking here.

What would you recommend?

Thank you again for your time, and thank you in advance for your responses.

Cheers,
Ondrej.
Re: Concurrent requests from multiple sessions

Sven Van Caekenberghe-2

> On 10 Mar 2015, at 11:15, wilwarin <[hidden email]> wrote:
>
> (I tried Sven's code and it is possible to do other work during the
> waiting; that is OK.)

Does it also work (slowly) during the benchmark ?
That simulates your 'long running work'.

>> But that problem (not being able to process concurrent requests) is
>> certainly not inherent to Seaside or Pharo. If that were the case we
>> should all stop using it.
>
> Hopefully I understand correctly that this is not the case. If I am right,
> it means that in Seaside such situations have to be solved with a different
> approach: in the case of, for example, a heavy server-side calculation (or
> processing of a big amount of data), we have to expect a possible problem
> with concurrent requests from multiple sessions. Am I right? Or, if not,
> where could the problem lie?

What everybody here tries to explain to you is that the problem is most probably in your DB driver (in Smalltalk or further down): it probably cannot do more than one concurrent request (as it is currently implemented). That is not a Seaside nor a fundamental Smalltalk problem.

Try investigating that aspect separately from Seaside.
Re: Concurrent requests from multiple sessions

Nowak, Helge
Hmm,

I am sure Ondrej understood what was proposed. He mentioned at least twice that the DB connections are NOT the problem. He said that the processing AFTER loading the data from the database into the image is the problem. Why shouldn't I believe him? OTOH I believe you Seaside experts that Seaside's architecture is inherently non-blocking between sessions.

Thus there are two questions:
- how could any processing block the whole Seaside image?
- how to find possibly problematic spots in Ondrej's application code?

Once that is resolved one could think about further improvements.

Cheers
Helge

Re: Concurrent requests from multiple sessions

Sven Van Caekenberghe-2
Well, we can't help him unless he gives us a reproducible case.

My code shows there is no such problem.

There might be some miscommunication going on, but from my cursory reading, I got the impression that he suggests there is a fundamental problem with something other than his own code.

Re: Concurrent requests from multiple sessions

Bob Arning-2
One question that occurs to me is whether different Smalltalks behave differently when switching between processes at the same priority. I recall thinking this was a bit odd when I first became aware of it, and I seem to remember a discussion once about changing Squeak's behavior in this case. I don't know whether other Smalltalks changed, or were even different from the very beginning. If you run the following, do the three processes proceed at roughly the same pace, or does process 1 finish before process 2 starts?

    | nums procs |
   
    nums := OrderedCollection new.
    procs := (1 to: 3) collect: [ : i |
        nums add: 0.
        [
            10 timesRepeat: [
                [10000 factorial] timeToRun.
                nums at: i put: (nums at: i) + 1.
            ].
        ].
    ].
    procs do: [ : e | e forkAt: Processor userBackgroundPriority].
    nums inspect


Re: Concurrent requests from multiple sessions

Sven Van Caekenberghe-2
Nice test, Bob.

In my Pharo 4.0 image the processes seem to be progressing about evenly.

Re: Concurrent requests from multiple sessions

wilwarin
In reply to this post by Sven Van Caekenberghe-2
> I got the impression that he suggests there is a fundamental
> problem with something other than his own code.

Then I have to apologize; it really was not meant to be offensive. I just wanted to know whether the issue we are facing is standard or not. That's why I was asking for corrections to my words. Sorry.

> Well, we can't help him unless he gives us a reproducible case.

In the end I tested a really simple component:

1) getResult

	| result |
	
	startTime := Time now asString.
	result := "<DO SOME MATH>".
	finishTime := Time now asString.
	
	^ result.

2) renderContentOn: html

	html div: [
		html anchor
			callback: [ self getResult. ];
			with: 'Click me!'.
	].
	
	html div: 'Start: ', startTime.
	html div: 'Finish: ', finishTime.

Then the link was clicked from two different sessions simultaneously. And now it is time to admit I was quite wrong (the problem was in my code). During the first comparisons, the 'getResult' part was called during rendering (between those two DIVs), and the results (in both VAST and Pharo) looked like the following:

Session 1:    Start: TimeA    Finish: TimeB
Session 2:    Start: TimeB    Finish: TimeC

--- BTW: ---
> Your rendering method is just for painting the current state of your
> component, it shouldn’t be concerned with changing that state.

So, if somebody does some data processing during rendering, will it cause this problem?
-------------
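
For reference, a minimal sketch of the pattern the quoted advice describes (not from the original post; #computeResult is a hypothetical placeholder for the heavy work, and result is assumed to be an instance variable of the component): state changes belong in callbacks, and the render method only paints the current state.

  renderContentOn: html
      "The expensive work runs in the callback phase, not while painting."
      html anchor
          callback: [ result := self computeResult ];
          with: 'Click me!'.
      html div: (result
          ifNil: [ 'No result yet' ]
          ifNotNil: [ :r | 'Result: ', r printString ])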

=> Now, after several changes, it looks like we still have this problem with the component above only in VAST.