Microservices using Pharo

Microservices using Pharo

Andrei Stebakov
Does anyone use Pharo for microservices? I've heard about Seaside and Teapot; I was just wondering whether Pharo can handle multiple simultaneous requests and, if it can, where it reaches its limit.

Re: Microservices using Pharo

NorbertHartl


> On 26.06.2018 at 14:52, Andrei Stebakov wrote:
>
> Does anyone use Pharo for microservices? I've heard about Seaside and Teapot; I was just wondering whether Pharo can handle multiple simultaneous requests and, if it can, where it reaches its limit.

I use it extensively. I use the Zinc-REST package to offer services. How much it can handle in parallel is hard to say; for that, you need to specify what you are about to do. A rule of thumb is not to exceed 5 parallel tasks that are actually working at the same time. But a lot of tasks have wait times while accessing another HTTP service, a database, a filesystem, etc.; with those you can easily go up to 10, I guess.

But these numbers are more of a gut feeling than something scientific.

Norbert
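
For readers new to Zinc: a minimal sketch of serving HTTP from a Pharo image with a plain ZnServer and a handler block (the Zinc-REST package mentioned above layers resource dispatching on top of this; the port and response text are arbitrary):

    "Start a server on port 8080 and answer every request from a block.
    Zinc serves each connection in its own lightweight Pharo process."
    | server |
    server := ZnServer on: 8080.
    server onRequestRespond: [ :request |
        ZnResponse ok: (ZnEntity text: 'pong') ].
    server start.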

Re: Microservices using Pharo

Sven Van Caekenberghe-2


A single ZnServer instance in a single image can handle thousands of requests per second (local network, very small payload, low concurrency). On a modern multi-core/multi-processor machine with lots of memory you can run tens if not hundreds of Pharo images under a load balancer, provided you either do not share state or use a high-performance state-sharing technology - this is the whole point of REST.

Of course, larger payloads, more complex operations, real-world networking, etc. will slow you down. And it is very easy to make an architectural or implementation error somewhere that makes everything slow. As they say, YMMV.

Sven

Re: Microservices using Pharo

NorbertHartl



I meant it regarding what a single image can do. And it can do thousands of requests only if there is no I/O involved, and I doubt a service that does no additional I/O would be very useful to build. Still, I would try not to have more than 5 req/s on a single image before scaling up. The only number I can report is that 2 images serving 30 requests/s while using MongoDB are not noticeable in system stats.

Norbert

Re: Microservices using Pharo

Sven Van Caekenberghe-2



That is what I meant: it is an upper limit for an empty REST call; the rest depends on the application and the situation. If your operation takes seconds to complete, the request rate will go way down.

But with in-memory operations and/or caching, responses can be quite fast (sub 100 ms).
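
As an illustration of the caching point, a minimal sketch of an in-image cache in front of a slow lookup (the delay is a hypothetical stand-in for a database call; a real service would also need invalidation and thread safety):

    | cache server |
    cache := Dictionary new.
    server := ZnServer on: 8081.
    server onRequestRespond: [ :request |
        | key value |
        key := request uri queryAt: 'q' ifAbsent: [ '' ].
        "at:ifAbsentPut: computes the slow value once per key, then reuses it"
        value := cache at: key ifAbsentPut: [
            (Delay forSeconds: 2) wait. "stand-in for a slow database query"
            'result for ', key ].
        ZnResponse ok: (ZnEntity text: value) ].
    server start.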



Re: Microservices using Pharo

Andrei Stebakov
Thanks, guys! I really appreciate your input!


Re: Microservices using Pharo

Andrei Stebakov
What would be an example of a load balancer for Pharo images? And can we run multiple images on the same server, or for the sake of the balancing configuration can we only run one image per server?


Re: Microservices using Pharo

Andrei Stebakov
I guess for multiple images on the same server we need to spawn off images listening on different ports.
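
One common way to do that is to parameterize the port per image, for example via an environment variable (a sketch; the variable name PORT is arbitrary):

    "Read the port from the environment so the same image and startup
    script can be launched several times on one host, each on its own port."
    | port |
    port := (Smalltalk os environment at: 'PORT' ifAbsent: [ '8080' ]) asNumber.
    ZnServer startDefaultOn: port.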


Re: Microservices using Pharo

NorbertHartl


> On 26.06.2018 at 20:44, Andrei Stebakov wrote:
>
> What would be an example of a load balancer for Pharo images? And can we run multiple images on the same server, or can we only run one image per server?

There are a lot of possibilities. You can start multiple images on different ports and use nginx with an upstream rule to load-balance. I would recommend using Docker for spawning multiple images on a host, again with nginx as the frontend load balancer. The point is that you can have at least twice as many images running as you have CPU cores. And of course a lot more.

Norbert
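
For reference, the nginx side of such a setup is a plain upstream block (a sketch with hypothetical local ports; each backend is one Pharo image):

    # nginx.conf fragment: round-robin across three Pharo images
    upstream pharo_backend {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://pharo_backend;
        }
    }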

Re: Microservices using Pharo

jtuchel
> On 26.06.18 at 20:47, Andrei Stebakov wrote:
>
> I guess for multiple images on the same server we need to spawn off images listening on different ports.
Exactly. We use Apache mod_proxy_balancer with sticky sessions for our configuration. The sticky-session part is important if you use Seaside (a share-everything approach, with state living on the server).

We do have problems with mod_proxy_balancer when an image gets stuck, because it does not actively monitor its backends, so there are better options out there that we need to investigate once we have time. Luckily, our Smalltalk images turn out to be very stable, the bottleneck being DB2 (surprisingly).

Joachim


--
-----------------------------------------------------------------------
Objektfabrik Joachim Tuchel          mailto:[hidden email]
Fliederweg 1                         http://www.objektfabrik.de
D-71640 Ludwigsburg                  http://joachimtuchel.wordpress.com
Telefon: +49 7141 56 10 86 0         Fax: +49 7141 56 10 86 1



Re: Microservices using Pharo

jtuchel
Norbert,

> On 26.06.18 at 21:41, Norbert Hartl wrote:
>
> There are a lot of possibilities. You can start multiple images on different ports and use nginx with an upstream rule to load-balance. I would recommend using Docker for spawning multiple images on a host, again with nginx as the frontend load balancer. The point is that you can have at least twice as many images running as you have CPU cores. And of course a lot more.


The last time I checked nginx, the load-balancing and sticky-session stuff was not available in the free edition. So I guess you either pay for nginx (which I think is good) or you know some free third-party addons...

I wonder what exactly the benefit of Docker is in that game? On our servers we run 10 images on 4 cores with HT (8 virtual cores) and very rarely have real performance problems. We use Glorp, so there is a lot of SQL querying going on for quite basic things already. So my guess would be that your "2 images per core" is conservative and leaves air for even a third one, depending on all the factors already discussed here.

What's not to be underestimated is all the stuff around monitoring and restarting images when things go wrong, but that's another story...

Joachim
-- 
-----------------------------------------------------------------------
Objektfabrik Joachim Tuchel          [hidden email]
Fliederweg 1                         http://www.objektfabrik.de
D-71640 Ludwigsburg                  http://joachimtuchel.wordpress.com
Telefon: +49 7141 56 10 86 0         Fax: +49 7141 56 10 86 1


Re: Microservices using Pharo

Julián Maestri-2
At work we're using some microservices in Pharo, implemented with Zinc and Teapot. I can't tell you much about performance, because they are not being stress-tested currently.

I'm using Traefik as a load balancer with sticky sessions on Docker, mainly because it scales according to the Docker configuration (Traefik can detect new Docker instances and add them to the load-balancing group).

To keep images running we use a Docker swarm, which is in charge of restarting them if they fail. Why? Ease of use: it's easier to deploy and keep track of what is where.
https://docs.docker.com/engine/swarm/

For the Pharo Docker image I'm running one based on Debian slim, with the VM for Pharo 6.1 (32-bit). The code is at https://github.com/ba-st/docker-pharo, and it is available in the Docker registry as basmalltalk/pharo:6.1.
Since I have no need for the sources file and removing it reduces the final image size, it is removed, but you can add it back (or prevent its removal if you need it).

If you need help with any of these, please ask (if it's urgent, send me a direct mail; I don't read the Pharo list very often).


Re: Microservices using Pharo

Torsten Bergmann
Hi Andrei,

I guess there is no definitive answer - it highly depends on your microservice itself and on what it does with other backends (databases, other services).

- If you run on Seaside with state, then this might be interesting:
  http://onsmalltalk.com/scaling-seaside

  If you want to scale, then stateful might not be the best way to go - this is no different from other web technologies.
- I prefer the REST and stateless approach: several images with a fast balancer (nginx) at the front.

  This is a pattern you see more often in web development, as usually
         - the client side is written in JavaScript as a single-page app (SPA) or progressive web app (PWA)
         - and the server side provides a stateless REST API (which opens your application not only to human users using the SPA/PWA but also to other programs and services using the API)

  You can easily provide a REST API with the Seaside-REST package, with Teapot, or with Tealight (an addition I wrote to Teapot); see the small Teapot sketch after this message.
  Follow the guide for Tealight, especially the last part here: https://github.com/astares/Tealight

- Allstocker.com is running with just 2 images and nginx; see the details here:
  https://pharoweekly.wordpress.com/2016/10/17/allstocker-internals/

- The enterprise book has some guidance too
  https://ci.inria.fr/pharo-contribution/job/EnterprisePharoBook/lastSuccessfulBuild/artifact/book-result/DeploymentWeb/DeployForProduction.html

I recommend making a technical prototype, then testing and measuring. Balancing and scaling are no different from other technologies (Java, PHP, NodeJS, ...), and there are many tutorials out there for this part.

Think about container technologies and virtualization (for the cloud or otherwise). I have played with Pharo on Docker - very easy to set up.
I summarized my findings in a new "Pharo and Docker" tutorial at http://wiki.astares.com/pharo/613

Hope some of this helps!

Bye
Torsten
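
As referenced above, a minimal Teapot service looks like this (essentially the example from the Teapot documentation; it starts on Teapot's default port):

    "One GET route answering a plain string."
    Teapot on
        GET: '/welcome' -> 'Hello World!';
        start.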


Re: Microservices using Pharo

NorbertHartl
Joachim,

> On 27.06.2018 at 07:42, [hidden email] wrote:
>
> The last time I checked nginx, the load-balancing and sticky-session stuff was not available in the free edition. So I guess you either pay for nginx (which I think is good) or you know some free third-party addons...

There is the upstream module, which provides load balancing. But you are right, I think sticky sessions are not part of it. The closest you get, IIRC, is IP-based hashing.

> I wonder what exactly the benefit of Docker is in that game? On our servers we run 10 images on 4 cores with HT (8 virtual cores) and very rarely have real performance problems. We use Glorp, so there is a lot of SQL querying going on for quite basic things already. So my guess would be that your "2 images per core" is conservative and leaves air for even a third one, depending on all the factors already discussed here.
Docker is pretty nice. You can have the exact same deployment artefact started multiple times. I used tools like daemontools, monit, etc. before, but starting the image, assigning ports, etc. you have to do yourself, which is cumbersome, and I don't like any of those tools anymore. Once you have created your Docker image you can start it multiple times, because networking is virtualized, so all images can serve on the same port, for example.

I think talking about performance these days is not easy. Modern machines are so fast that you need a lot of users before you experience any problems. The mention of „2 images per core“ I need to explain. A CPU core can execute only one thing at a time, therefore 1 image per core would be enough. The second one is for the time slices where there are gaps in processing, meaning a process is suspended, switched, etc. It is just a rule of thumb that it is good to have one process waiting in the scheduling queue, so it can step in as soon as there are free cycles. The „2 images per core“ rule assumes that you can put an arbitrary load on one image. Under that assumption a third image won't give you anything, because it cannot do anything the other two images cannot do.
So according to the „hard“ facts it does not help to have more than two images. On the other hand, each image is single-threaded, and using more images lowers the probability that processes get blocked because they are executed within one image. On yet another hand, if you use a database, a lot of a process's time is spent waiting for the response of the database, so other processes can be executed. And and and... So in the end you have to try it.

> What's not to be underestimated is all the stuff around monitoring and restarting images when things go wrong, but that's another story...

Docker has a restart policy, so restarting shouldn't be an issue with it. Monitoring is always hard. I use Prometheus with Grafana, but that is quite a bit to set up. In the end you get graphs, and you can define alerts for system-value thresholds.

If the topic gets accepted, Marcus and I will talk about these things at ESUG.

Norbert



Re: Microservices using Pharo

jtuchel
Norbert,


thanks for your insights, explanations and thoughts. It is good to read and learn from people who are a step or two ahead...

> On 27.06.18 at 09:31, Norbert Hartl wrote:
>
> There is the upstream module, which provides load balancing. But you are right, I think sticky sessions are not part of it. The closest you get, IIRC, is IP-based hashing.
I see.


> Docker is pretty nice. You can have the exact same deployment artefact started multiple times. I used tools like daemontools, monit, etc. before, but starting the image, assigning ports, etc. you have to do yourself, which is cumbersome, and I don't like any of those tools anymore. Once you have created your Docker image you can start it multiple times, because networking is virtualized, so all images can serve on the same port, for example.

Oh, I see. This is a plus. We're not using any containers and have to provide individual configurations for each image we start up. It works well, with not too many moving parts (our resources are very limited), and we try to keep things as simple as possible. As long as we can live with providing a statically sized pool of machines and images and the load doesn't vary too much, this is not too bad. But once you need to dynamically add and remove images to cope with load peaks and lows, our approach will probably become cumbersome and complicated.
OTOH, I guess using Docker just means solving the same problems on another level - but I guess there are lots of tools in the container area that can help here (like the Traefik thing mentioned in another post).


> I think talking about performance these days is not easy. Modern machines are so fast that you need a lot of users before you experience any problems.
... depending on your usage of resources. As I said, we're using SQL heavily because of the way Glorp works. So it is easy to introduce bottlenecks even for smaller jobs.
> The mention of „2 images per core“ I need to explain. [...] So in the end you have to try it.

You are correct. The third image can only jump in if both the others are in a wait state. It "feels" as if there was enough air for a third one to operate, but we'd have to try whether that holds true.


> Docker has a restart policy, so restarting shouldn't be an issue with it. Monitoring is always hard. I use Prometheus with Grafana, but that is quite a bit to set up. In the end you get graphs, and you can define alerts for system-value thresholds.
Well, that is also true for monit (which we use); the question always is: what do you make of those numbers? We have situations in which an image responds to HTTP requests as if all were good. But for some reason DB2 sometimes takes forever to answer queries, and will probably answer with a "cannot handle requests at this time" after literally a minute or so. Other DB connections work well in parallel. We're still looking for ways to recognize such situations externally (and are thinking about moving from DB2 to PostgreSQL).

> If the topic gets accepted, Marcus and I will talk about these things at ESUG.

So if anybody from the program committee is reading this: please accept and schedule Norbert's and Marcus' talk; I'll be hanging on their every word, and I guess I won't be alone ;-)


Joachim



-- 
-----------------------------------------------------------------------
Objektfabrik Joachim Tuchel          [hidden email]
Fliederweg 1                         http://www.objektfabrik.de
D-71640 Ludwigsburg                  http://joachimtuchel.wordpress.com
Telefon: +49 7141 56 10 86 0         Fax: +49 7141 56 10 86 1


Re: Microservices using Pharo

Andrei Stebakov
Thank you guys for your insightful answers. I wish we had some kind of article summarizing these approaches, so that the next devs wouldn't have to reinvent the wheel but could start with a tried approach and maybe improve on it.
As I have only scratched the surface learning Pharo, I may have some naive questions.
Does the fact (fact?) that Pharo uses green threads (not native OS threads) impact performance?
With two Pharo images running in parallel on a two-core system, how does it handle multiple requests at a time? There must always be some unblocked thread waiting for connections and delegating requests to request handlers in different green threads (using a fork operation). Is my understanding correct?
So even if one of those threads has to wait on a long I/O operation (say, from DB2), that shouldn't impact the performance of the other handlers?
I think that in most cases the CPU time for request processing is minimal, as the bottleneck is in lengthy I/O operations, DB waits, and calls to external RESTful services. So two images on two cores should be enough to handle hundreds of simultaneous requests, since most of the time the threads will wait on external operations, not using the local CPU.
Please let me know if this summary of the thread makes sense.
Yes, I fully agree that using Pharo Docker containers under some load balancing is the way to go.


Re: Microservices using Pharo

NorbertHartl


> On 27.06.2018 at 10:09, [hidden email] wrote:
>
> Oh, I see. This is a plus. We're not using any containers and have to provide individual configurations for each image we start up. [...] But once you need to dynamically add and remove images to cope with load peaks and lows, our approach will probably become cumbersome and complicated.

Sure. Your situation is exactly the one I had before. We now have a project that really needs to scale, and managing resources is cumbersome. Docker helps a lot here. But I cannot say how hard the learning curve is, because I'm used to this kind of system stuff.

> OTOH, I guess using Docker just means solving the same problems on another level - but I guess there are lots of tools in the container area that can help here (like the Traefik thing mentioned in another post).

It is called Traefik. You need it if you want load balancing to be dynamic: Traefik listens to the Docker daemon, figures out which containers need load balancing, and adds them at runtime. And it supports sticky sessions. Or you use the commercial nginx, which can do the same.

Norbert


Re: Microservices using Pharo

NorbertHartl


> On 27.06.2018 at 15:08, Andrei Stebakov wrote:
>
> Does the fact (fact?) that Pharo uses green threads (not native OS threads) impact performance?

Yes and no. There is nothing wrong with green threads. They are super lightweight and enable some sort of parallelism. If you look at Erlang/OTP, it handles tens of thousands of green threads easily. The performance bottleneck is that you cannot utilize multiple cores of a CPU. So it is usual to spread several images across separate cores and have the images handle things concurrently.

> With two Pharo images running in parallel on a two-core system, how does it handle multiple requests at a time? There must always be some unblocked thread waiting for connections and delegating requests to request handlers in different green threads (using a fork operation). Is my understanding correct?

Not completely. The thread accepting connections is also a green thread. It gets priority because its socket waits on a system resource that is signalled when a connection comes in.

> So even if one of those threads has to wait on a long I/O operation (say, from DB2), that shouldn't impact the performance of the other handlers?

Exactly. That is the way, through orchestration, to get maximum throughput.

> I think that in most cases the CPU time for request processing is minimal, as the bottleneck is in lengthy I/O operations, DB waits, and calls to external RESTful services. So two images on two cores should be enough to handle hundreds of simultaneous requests, since most of the time the threads will wait on external operations, not using the local CPU.

Yes, it depends on the use case, of course.

> Please let me know if this summary of the thread makes sense.
> Yes, I fully agree that using Pharo Docker containers under some load balancing is the way to go.

I think your summary is pretty accurate. Docker also has the advantage that it uses a lot of shared memory, so when starting 100 Pharo images most resources, including the VM, are in memory only once.

Hope it helps,

Norbert
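
To make the green-thread point concrete, a small sketch of forking background work in Pharo (the URL is a placeholder): while the forked process waits on the socket, the rest of the image keeps running.

    "Run a slow HTTP call in its own green thread (a Pharo Process)."
    [ | response |
      response := ZnEasy get: 'http://example.com/api'.
      Transcript show: response contents; cr ]
          forkAt: Processor userBackgroundPriority
          named: 'background fetch'.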