[ANN] Success story Mobility Map

[ANN] Success story Mobility Map

NorbertHartl
As presented at ESUG, here is a brief description of one of our current projects.

Mobility Map
——————

Mobility Map is a broker for mobility services. It offers a multi-modal routing search that enables users to find the best travel options between locations. Travel options include car sharing, bikes, trains, buses, etc. Rented cars can be offered for ride sharing at booking time, letting other people find them and join the ride. Individual travel options are combined into travel plans that can be booked and managed very easily.

The main requirements for this project were scalability, to serve a large user base, and flexibility, to add additional providers to the broker. The application has been realized using web technologies for the frontend and Pharo for the backend. Using a microservice architecture combined with a broker makes it easy to extend the platform with additional providers. Deployment is done with Docker swarm, distributing dozens of Pharo images among multiple server machines connected by a message queue for communication. Pharo supported that scenario very well, enabling us to meet the requirements with less effort.

Pharo turned out to be a perfect fit for developing the application in an agile way. Small development cycles with continuous integration and continuous delivery enable fast turnarounds for customers to validate progress.

This is a screenshot of the search page for multi-modal results:

<Screen Shot 2018-09-21 at 16.54.30.png>
Re: [ANN] Success story Mobility Map

NorbertHartl


> On 25.09.2018, at 12:52, Sven Van Caekenberghe <[hidden email]> wrote:
>
> Wow. Very nice, well done.
>
> Any chance on some more technical details, as in what 'connected by a message queue for the communication' exactly means ? How did you approach micro services exactly ?
>
Sure :)

The installation spans multiple physical machines. All the machines are joined into a Docker swarm. From the swarm's point of view, the installation is reified as tasks and services: you instantiate an arbitrary number of services, and Docker swarm distributes them among the physical machines. Usually you don't control which one runs where, but you can. At this point you have spread dozens of Pharo images over multiple machines, and each of them has an IP address. Furthermore, Docker swarm reifies networks, meaning that every instance in a network can see all other instances on that network. Each service can be reached by its service name in that network. Docker swarm does all the iptables/firewall and DNS setup for you.

To have communication between those runtimes we use RabbitMQ, because you were so nice to write a driver for it ;) RabbitMQ supports a cluster setup, meaning each of the physical machines has a RabbitMQ installation and they know about each other. So it does not matter to which instance you send messages or on which one you register to receive messages. Every Pharo image connects to the rabbitmq service and opens a queue for interaction.

Each service, such as car sharing, opens a queue, e.g. /queue/carSharing, and listens on it. The broker images are stateful, so they open queues like /queue/mobility-map-afdeg32, where afdeg32 is the container id of the instance (the hostname in Docker). In each request, the queue name to reply to is sent as a header, so we can make sure that the right image gets the message back. This way we can have sticky sessions, keeping volatile data in memory for the lifecycle of a session. There is one worker image which opens a queue /queue/mobility-map where session-independent requests can be processed.
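The reply-to-queue mechanism can be sketched in a few lines of Pharo. Everything around MQClient below is a hypothetical API for illustration; only NeoJSONWriter and NetNameResolver are real Pharo classes:

```smalltalk
"Sketch of the reply-to pattern described above. MQClient, #subscribe:do:,
#publish:to:headers:, #handleReply: and MMSearchRequest are invented names,
not the real driver API."
| client myQueue request |
client := MQClient connectTo: 'rabbitmq'.

"A stateful broker image listens on a queue derived from its own hostname
(the container id under Docker), giving it a sticky session."
myQueue := '/queue/mobility-map-' , NetNameResolver localHostName.
client subscribe: myQueue do: [ :message | self handleReply: message ].

"Every outgoing request carries the reply queue name and the payload class
name as headers, so the answering service knows where and what to send back."
request := MMSearchRequest new.
client
	publish: (NeoJSONWriter toString: request)
	to: '/queue/carSharing'
	headers: {
		'replyTo' -> myQueue.
		'class' -> request class name } asDictionary
```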

To ease development we share code between the broker and the micro services. Each micro service has a -Common package containing the classes that build the interface. The classes in there are a kind of data entity facade. They use NeoJSON to map to and from a stream. The class name is sent with the message as a header, so the remote side knows what to materialize. The handling is unified for four cases:

- Request: an inquiry to another micro service
- Response: returns values for a Request
- Error: transferred like a Response, but signalled on the receiving side
- Notification: connects the announcers on the broker side and the micro service side
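A rough sketch of how that unified handling might look in Pharo; class names and selectors are made up, and the NeoJSON calls are simplified:

```smalltalk
"Materialize the incoming payload using the class name carried in the headers,
then double-dispatch so each of the four message kinds handles itself.
All names here are illustrative, not the project's actual code."
MMService >> handleIncoming: aString headers: headers
	| messageClass message |
	messageClass := self class environment at: (headers at: 'class') asSymbol.
	message := (NeoJSONReader on: aString readStream) nextAs: messageClass.
	message dispatchOn: self

MMRequest >> dispatchOn: aService
	"An inquiry: process it and answer with a Response on the reply queue."
	aService replyWith: (MMResponse for: (aService process: self))

MMError >> dispatchOn: aService
	"Travels like a Response, but is signalled on the receiving side."
	self asException signal

MMNotification >> dispatchOn: aService
	"Bridge the remote announcement into the local announcer."
	aService announcer announce: self
```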

We solved asynchronous calls using promises and futures. Each async call to the queue becomes a promise (which blocks on #value), and promises are combined into a future value containing all of them, with support for generating a delta of the promises resolved so far. We need this because a search takes a while, and you want to display results as soon as they are resolved, not after all of them have been resolved.
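As an illustration of the blocking-#value idea (not the project's actual framework), a minimal promise needs little more than a forked process and a semaphore:

```smalltalk
"Minimal promise sketch: the block runs in a forked process and #value
blocks the caller until the result is available."
Object subclass: #SimplePromise
	instanceVariableNames: 'result semaphore'
	classVariableNames: ''
	package: 'Promises-Sketch'

SimplePromise class >> on: aBlock
	^ self new startOn: aBlock

SimplePromise >> startOn: aBlock
	semaphore := Semaphore new.
	[ result := aBlock value. semaphore signal ] fork

SimplePromise >> value
	"Block until resolved; re-signal so later senders of #value pass too."
	semaphore wait.
	semaphore signal.
	^ result
```

A future can then hold a collection of such promises and hand out the subset that is already resolved, which is how partial search results can reach the UI early.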

And a lot more. This is a coarse-grained overview of the architecture. I'm happy to answer further questions about it.

Norbert



Re: [ANN] Success story Mobility Map

Esteban A. Maringolo
Thanks for sharing this Norbert.

I'm happy for you as a company, and also for Pharo; this should help promote it to bystanders. The whole set of tools and libraries helps demonstrate the convenience and applicability of Pharo in "modern" designs (Docker, swarms, MQs, microservices, and whatnot).

Regards




Esteban A. Maringolo




Re: [Pharo-users] [ANN] Success story Mobility Map

Sven Van Caekenberghe-2
In reply to this post by NorbertHartl



Thanks, this is very interesting !

> The installation spawns multiple physical machines. All the machines are joined to a docker swarm. The installation is reified as either task or service from the view on the docker swarm. Meaning you instantiate an arbitrary amount of services and docker swarm distributes them among the physical machines. Usually you don’t take control which is running where but you can. At this point you have spread dozens of pharo images among multiple machines and each of them has an IP address. Furthermore in docker swarm you have a reification of a network meaning that every instance in a network can see all other instances on this network. Each service can be reached by its service name in that network. Docker swarm does all the iptables/firewall and DNS setup for you.

Are you happy with docker swarm's availability/fail-over behaviour ? In other words: does it work when one image/instance goes bad, does it detect and restore the missing functionality ?

> In order to have communication between those runtimes we use rabbitmq because you were so nice writing a driver for it ;) The rabbitmq does have a support for cluster setup, meaning each of the physical machines has a rabbitmq installation and they know each other. So it does not matter to which instance you send messages to and on which you register for receiving messages. So every pharo image connects to the service rabbitmq and opens a queue for interaction.

Same question: does RabbitMQ's clustering work well under stress/problems ? Syncing all queues between all machines sounds quite heavy (I never tried it, but maybe it just works).

> Each service like the car sharing opens a queue e.g. /queue/carSharing and listens on it. The broker images are stateful so they open queues like /queue/mobility-map-afdeg32 where afdeg32 is the container id of the instance (hostname in docker). In each request the queue name to reply is sent as a header. So we can make sure that the right image gets the message back. This way we can have sticky sessions keeping volatile data in memory for the lifecycle of a session. There is one worker image which opens a queue /queue/mobility-map where session independent requests can be processed.

I think I understand ;-)

> In order to ease development we are sharing code between the broker and the micro service. Each micro service has a -Common package where the classes are in that build the interface. The classes in here are a kind of data entity facades. They use NeoJSON to map to and from a stream. The class name is send with the message as a header so the remote side knows what to materialize. The handling is unified for the four cases
>
> - Request as inquiry to another micro service
> - Response returns values to a Request
> - Error is transferred like a Response but is then signalled on the receiving side
> - Notification connects the announcers on the broker and the micro service side.

Yes, makes total sense.

> Asynchronous calls we solved using Promises and Futures. Each async call to the Q becomes a promise (that blocks on #value) and is combined to a future value containing all promises with support to generate a delta of all resolved promises. This we need because you issue a search that takes longer and you want to display results as soon as they are resolved not after all haven been resolved.

Which Promise/Future framework/library are you using in Pharo ?

You did not go for single threaded worker images ?




Re: [Pharo-users] [ANN] Success story Mobility Map

Nicolas Cellier
Very nice proof that we can leverage efficient and up-to-date technologies!
Thank you so much for sharing; that's the right way to keep Pharo (and Smalltalk) alive and kicking. How did the Pharo IDE help in such a context (did you use the debugging facilities extensively)?
What tool is missing?




Re: [Pharo-users] [ANN] Success story Mobility Map

NorbertHartl
In reply to this post by Sven Van Caekenberghe-2


> Are you happy with docker swarm's availability/fail-over behaviour ? In other words: does it work when one image/instance goes bad, does it detect and restore the missing functionality ?
>
Yes, I'm very satisfied. Instances are automatically restarted if they crash, and not necessarily on the same machine, which is exactly what I expect. Docker has something called a healthcheck: you can have a command executed every 20 seconds. I hooked this up to a curl command and to your SUnit REST handler; the rest is writing unit tests for server health. If tests fail in sequence, the instance is taken out of operation and replaced with a new one. The same mechanism is used for updating: one instance of the new image is started, and if it survives a couple of health checks it is taken operational and an old one is taken out. Then the next new one is started, and so on. For simple software updates you get zero-downtime deployment.
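The health tests themselves are then plain SUnit; something along these lines, where the class, the selectors, MQConnection, and MMBroker are invented for the example:

```smalltalk
"Illustrative server-health tests, run via the SUnit REST handler that the
Docker HEALTHCHECK curls. All names in here are made up for the example."
TestCase subclass: #ServerHealthTest
	instanceVariableNames: ''
	classVariableNames: ''
	package: 'MobilityMap-Health'

ServerHealthTest >> testMessageQueueIsReachable
	"If the image lost its queue connection, failing health checks
	make Docker swarm replace the instance."
	self assert: MQConnection default isConnected

ServerHealthTest >> testSearchRespondsQuickly
	"Guard against an instance that is up but has become unusably slow."
	| millis |
	millis := Time millisecondsToRun: [ MMBroker default search: 'sample query' ].
	self assert: millis < 500
```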

> Same question: does RabbitMQ's clustering work well under stress/problems ? Syncing all queues between all machines sounds quite heavy (I never tried it, but maybe it just works).

I did not yet have time to really stress-test the queue. You are right that copying between nodes might be a lot, but I still have the feeling it is better than a single instance, though I have no hard numbers. We also use huge payloads, which might have to change if we encounter problems.

> Which Promise/Future framework/library are you using in Pharo ?
>
We rolled our own, and I don't know about other frameworks.

> You did not go for single threaded worker images ?

For the micro services, yes. They read from the queue one message at a time and process it; here I scale out with multiple Pharo instances. The API image runs Zinc for HTTP, which is a multi-threaded server, and the clients poll.
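That single-threaded worker pattern is conceptually just the following loop (the queue API and surrounding names are again hypothetical):

```smalltalk
"One message at a time; throughput is scaled by starting more images,
not more threads. #next is assumed to block until a message arrives."
MMWorker >> run
	[ true ] whileTrue: [
		| message |
		message := queue next.
		[ self handle: message ]
			on: Error
			do: [ :e | self reportError: e for: message ] ]
```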

Norbert




Re: [Pharo-users] [ANN] Success story Mobility Map

Pierce Ng-3
On Wed, Sep 26, 2018 at 07:49:10PM +0200, Norbert Hartl wrote:
>>> And a lot more. This is a coarse grained overview over the
>>> architecture. I’m happy to answer further questions about this.
>>> [very nice writeup]

Hi Norbert,

Very nice write-up, thanks.

What persistence mechanism are you using: GemStone/S, Glorp, Voyage, ...?

Pierce



Re: [Pharo-users] [ANN] Success story Mobility Map

NorbertHartl
In reply to this post by Nicolas Cellier


> On 26.09.2018, at 19:22, Nicolas Cellier <[hidden email]> wrote:
>
> Very nice proof that we can leverage efficient and up to date technologies !

That was part of the plan ;)

> Thank you so much for sharing, that's the right way to make Pharo (and Smalltalk) alive and kicking. How did the Pharo IDE help in such context? (did you use debugging facility extensively?).
> What tool is missing?

First of all, Pharo is always a bliss to work with. In such a project I try to make the whole application manageable on different levels. The most important one is to have the whole application in one image. We have tests for the broker and for each micro service. But we also have tests that operate on the whole stack. For these, the queue is short-circuited and handling is more synchronous than in the queued case. This makes it easier to get the stack of the whole round trip in the debugger.
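The "short-circuit the queue in tests" idea could be sketched as a transport abstraction with an in-process implementation. This is a hypothetical illustration (names like `DirectTransport` and `carSharing` are made up, and the real code is Pharo, not Python), not the actual Mobility Map code:

```python
# Hypothetical sketch: the broker talks to services through a Transport,
# so tests can swap the message queue for a direct in-process call and
# get the whole round trip on one debugger stack.

class Transport:
    def request(self, service, payload):
        raise NotImplementedError

class QueueTransport(Transport):
    """Production: publish to /queue/<service> and wait for the reply."""
    def __init__(self, publish):
        self.publish = publish

    def request(self, service, payload):
        return self.publish(f"/queue/{service}", payload)

class DirectTransport(Transport):
    """Tests: call the service handler synchronously, no queue involved."""
    def __init__(self, handlers):
        self.handlers = handlers  # service name -> handler function

    def request(self, service, payload):
        return self.handlers[service](payload)

def car_sharing_handler(payload):
    # Stand-in for a real micro service handler.
    return {"offers": [f"car near {payload['location']}"]}

# A whole-stack test wires the broker to the service directly.
transport = DirectTransport({"carSharing": car_sharing_handler})
result = transport.request("carSharing", {"location": "Cologne"})
```

The production code path and the test code path share everything except the transport, which is what makes a synchronous whole-stack debugging session possible.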

Using docker we can have the whole application started on a local laptop. This way all components are much easier to investigate. We can swap the frontend server for a local development instance. I started to do the same for the Pharo images but have not finished yet. The idea is to start the same stack as in the swarm but replace one image with an actual development image where you can use the debugger.

On the swarm itself we configure each microservice to upload a Fuel context dump to a central server on exception. From my development image I have a simple client to look for exceptions by project and version number. Clicking on one downloads the dump and opens a debugger locally. I can fix the bug and commit with Iceberg. This goes well with our continuous deployment: when a commit is done, Jenkins builds the whole product and deploys it automatically on the alpha swarm. This way, from seeing an error, the only thing to do is click on it, solve the problem in the debugger, and commit. Well, in most cases ;)
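The exception-reporting workflow could be sketched roughly like this. This is a hypothetical Python illustration (in the real system the dump is a Fuel serialization of the Pharo execution context, uploaded over the network; here a plain list stands in for the central server):

```python
# Hypothetical sketch of the "upload a context dump on exception" workflow.
import traceback

DUMP_SERVER = []  # stand-in for the central dump server

def report_exception(project, version, exc):
    """Service side: serialize the failure context and ship it off."""
    DUMP_SERVER.append({
        "project": project,
        "version": version,
        "dump": "".join(traceback.format_exception(
            type(exc), exc, exc.__traceback__)),
    })

def dumps_for(project, version):
    """Client side: list dumps for a project/version, ready to inspect."""
    return [d for d in DUMP_SERVER
            if d["project"] == project and d["version"] == version]

# A service hits an error and reports it instead of dying silently.
try:
    1 / 0
except ZeroDivisionError as e:
    report_exception("mobility-map", "1.2.3", e)

# The developer's client fetches the dumps for the deployed version.
found = dumps_for("mobility-map", "1.2.3")
```

The key property is that the dump carries enough context to reconstruct the failure in a local debugger, which is what Fuel context dumps provide in the Pharo case.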

What I’m working on, and gave a quick preview of at ESUG, is a client for docker swarm. It is another way to close the cycle: using Pharo to manage things in a swarm that is built from Pharo images. I did a first prototype of how to connect to a particular image in the swarm and start TelePharo on it, so you can set breakpoints for certain things and have a debugger in your image from the live swarm.

The last thing we did is to add proper monitoring of metrics within the services so you can spot problems caused by any kind of resource shortage. In this case it would be especially useful to connect to such an image to do live investigations. Yes, and object-centric debugging/logging will help here, Steven/guys ;)

Or to say it in fewer words :) The two problems we had to solve were to remove complexity where possible and to have all the mentioned approaches in place, enabling our team to tackle an occurring problem from different angles. If no one is blocked in their work, the project does not stagnate. That is not a guarantee of success, but a requirement.

Hope this is the information you were asking for.

Norbert


Le mar. 25 sept. 2018 à 17:45, Sven Van Caekenberghe <[hidden email]> a écrit :


> On 25 Sep 2018, at 14:39, Norbert Hartl <[hidden email]> wrote:
>
>
>
>> Am 25.09.2018 um 12:52 schrieb Sven Van Caekenberghe <[hidden email]>:
>>
>> Wow. Very nice, well done.
>>
>> Any chance on some more technical details, as in what 'connected by a message queue for the communication' exactly means ? How did you approach micro services exactly ?
>>
> Sure :)

Thanks, this is very interesting !

> The installation spans multiple physical machines. All the machines are joined into a docker swarm. From the docker swarm's point of view, the installation is reified as tasks and services: you instantiate an arbitrary number of services and docker swarm distributes them among the physical machines. Usually you don't take control of which service runs where, but you can. At this point you have spread dozens of Pharo images among multiple machines and each of them has an IP address. Furthermore, docker swarm has a reification of a network, meaning that every instance in a network can see all other instances on this network. Each service can be reached by its service name in that network. Docker swarm does all the iptables/firewall and DNS setup for you.

Are you happy with docker swarm's availability/fail-over behaviour ? In other words: does it work when one image/instance goes bad, does it detect and restore the missing functionality ?

> In order to have communication between those runtimes we use RabbitMQ, because you were so nice writing a driver for it ;) RabbitMQ has support for a cluster setup, meaning each of the physical machines has a RabbitMQ installation and they know each other. So it does not matter which instance you send messages to and on which you register for receiving messages. So every Pharo image connects to the service named rabbitmq and opens a queue for interaction.

Same question: does RabbitMQ's clustering work well under stress/problems ? Syncing all queues between all machines sounds quite heavy (I never tried it, but maybe it just works).

> Each service, like the car sharing one, opens a queue, e.g. /queue/carSharing, and listens on it. The broker images are stateful, so they open queues like /queue/mobility-map-afdeg32 where afdeg32 is the container id of the instance (the hostname in docker). In each request the queue name to reply to is sent as a header, so we can make sure that the right image gets the message back. This way we can have sticky sessions, keeping volatile data in memory for the lifecycle of a session. There is one worker image which opens a queue /queue/mobility-map where session-independent requests are processed.

I think I understand ;-)
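The reply-queue routing described above can be simulated in a few lines. This is a hedged, in-memory sketch (a dict stands in for RabbitMQ, and the container id `afdeg32` is taken from the example above), not the real driver code:

```python
# Hypothetical in-memory sketch of the reply-queue pattern: each stateful
# broker image listens on /queue/mobility-map-<containerId>, and every
# request carries the reply queue name as a header so the answer comes
# back to the same image (sticky sessions).

QUEUES = {}  # queue name -> list of pending messages (stand-in for RabbitMQ)

def publish(queue, message):
    QUEUES.setdefault(queue, []).append(message)

def consume(queue):
    return QUEUES.get(queue, []).pop(0)

container_id = "afdeg32"  # the hostname inside docker
reply_queue = f"/queue/mobility-map-{container_id}"

# Broker image sends a request to the car sharing service.
publish("/queue/carSharing",
        {"headers": {"reply_to": reply_queue},
         "body": {"from": "Bonn", "to": "Cologne"}})

# Car sharing service processes the request and replies to the given queue.
request = consume("/queue/carSharing")
publish(request["headers"]["reply_to"], {"body": {"offers": ["car-42"]}})

# The originating broker image picks up its reply from its own queue.
reply = consume(reply_queue)
```

Because the reply queue name encodes the container id, the response always lands at the image holding the session state, which is what makes the sessions sticky.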

> In order to ease development we share code between the broker and the micro services. Each micro service has a -Common package containing the classes that build the interface. The classes in there are a kind of data entity facade. They use NeoJSON to map to and from a stream. The class name is sent with the message as a header so the remote side knows what to materialize. The handling is unified for the four cases:
>
> - Request as inquiry to another micro service
> - Response returns values to a Request
> - Error is transferred like a Response but is then signalled on the receiving side
> - Notification connects the announcers on the broker and the micro service side.

Yes, makes total sense.
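The class-name-as-header scheme for the four message kinds could be sketched like this. A hypothetical illustration (in the real system NeoJSON plays the serializer role and the classes are Pharo classes; the names here are stand-ins):

```python
# Hypothetical sketch of the unified message handling: the class name
# travels as a header so the receiving side knows what to materialize,
# and Request/Response/Error/Notification share one envelope format.
import json

class Request:      pass  # inquiry to another micro service
class Response:     pass  # returns values to a Request
class Error:        pass  # transferred like a Response, signalled on receipt
class Notification: pass  # connects announcers on broker and service side

KINDS = {c.__name__: c for c in (Request, Response, Error, Notification)}

def serialize(message_class, payload):
    """Sender side: class name in the header, payload in the body."""
    return json.dumps({"headers": {"class": message_class.__name__},
                       "body": payload})

def materialize(wire):
    """Receiver side: look up the class from the header and rebuild it."""
    data = json.loads(wire)
    instance = KINDS[data["headers"]["class"]]()
    instance.body = data["body"]
    return instance

wire = serialize(Error, {"message": "no offers found"})
incoming = materialize(wire)
# An Error is transferred like a Response but signalled on the receiving side:
is_error = isinstance(incoming, Error)
```

Dispatching on the materialized class is what lets one code path handle all four cases uniformly while errors can still be re-signalled on the receiving side.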

> Asynchronous calls we solved using Promises and Futures. Each async call to the queue becomes a promise (that blocks on #value) and is combined into a future value containing all promises, with support for generating a delta of all resolved promises. We need this because a search can take a while, and you want to display results as soon as they are resolved, not after all have been resolved.

Which Promise/Future framework/library are you using in Pharo ?

You did not go for single threaded worker images ?
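The "delta of resolved promises" idea could be sketched with standard futures. This is a hedged illustration (the `FutureValue` class and its `resolved_delta` method are hypothetical names, not the framework actually used in the project):

```python
# Hypothetical sketch: each async search call becomes a future, and the
# combined value can hand out a "delta" of everything that resolved since
# the last look, so results are displayed as they arrive rather than only
# after all of them have resolved.
from concurrent.futures import ThreadPoolExecutor

class FutureValue:
    def __init__(self, futures):
        self.futures = list(futures)
        self.delivered = set()

    def resolved_delta(self):
        """Results that resolved since the previous call."""
        done = [f for f in self.futures
                if f.done() and f not in self.delivered]
        self.delivered.update(done)
        return [f.result() for f in done]

def search(provider):
    # Stand-in for a slow search against one mobility provider.
    return f"results from {provider}"

with ThreadPoolExecutor() as pool:
    combined = FutureValue(pool.submit(search, p)
                           for p in ("carSharing", "bikes", "trains"))
# Leaving the pool's context waits for completion, so everything resolved.
deltas = combined.resolved_delta()
second = combined.resolved_delta()  # nothing new resolved in between
```

In the live system a UI would poll `resolved_delta` (or be notified) while the search runs, pushing each batch of newly resolved provider results to the user immediately.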

> And a lot more. This is a coarse-grained overview of the architecture. I’m happy to answer further questions about this.
>
> Norbert
>




Re: [Pharo-users] [ANN] Success story Mobility Map

NorbertHartl
In reply to this post by Pierce Ng-3


> Am 30.09.2018 um 13:01 schrieb Pierce Ng <[hidden email]>:
>
> On Wed, Sep 26, 2018 at 07:49:10PM +0200, Norbert Hartl wrote:
>>>> And a lot more. This is a coarse grained overview over the
>>>> architecture. I’m happy to answer further questions about this.
>>>> [very nice writeup]
>
> Hi Norbert,
>
> Very nice write-up, thanks.
>
thanks

> What persistence mechanism are you using - Gemstone/S, Glorp, Voyage, …?

We use Voyage with a MongoDB replica set.

Norbert





Re: [Pharo-users] [ANN] Success story Mobility Map

Nicolas Cellier
In reply to this post by NorbertHartl
Hi Norbert,


Yes, thanks for these details, that's exactly the kind of testimony I'm after.
I wish I were able to demonstrate the advantages myself to some teams not working with Pharo, but I belong to the dinosaur era of desktop apps: I can preach, but not easily practice.
The team has chosen to work in heterogeneous languages/environments, so the idea of having services integrated in a single image, while very interesting, won't directly translate.
I don't know if Pharo will shine in this context for developing a single service as a POC, and I guess that remote debugging will indeed be a must in such a more adverse (less integrated) environment.
I'll take time to reread your detailed answer and also transmit the pointers to the team.
Thanks again.




Re: [ANN] Success story Mobility Map

Marcus Denker-4
In reply to this post by NorbertHartl

The Slides from the ESUG presentation are now inline here:

        https://www.slideshare.net/zweidenker/docker-and-pharo-zweidenker

