[GS/SS Beta] Scale to multiple nodes sharing a DB?

Mariano Martinez Peck
Hi, 

Right now I am prototyping an app using GemStone (GLASS). If we succeed and really move to GemStone, we will probably need more than one machine (even if we can run several cores on one machine), yet of course we need to share the "DB".

So... I know we will have to pay for that, but from a technical point of view, can GemStone run on several nodes/machines in parallel yet present a single "unique DB" from the client's point of view? Is this supported? Do you have customers with such a layout?

Re: [GS/SS Beta] Scale to multiple nodes sharing a DB?

James Foster-9
Mariano,

GemStone/S certainly supports a multi-machine configuration. In the early days, before machines had many CPUs or cores, this was the primary way to scale. (Now, a massive system can have hundreds of cores and terabytes of RAM on one machine.) We test the multi-machine setup extensively, even between different hardware architectures and operating systems, and we have a customer with hundreds of machines sharing the same repository. From the point of view of the gem, the isolated view of the database is coordinated at each commit or abort, whether the gem is on the stone machine or on a remote machine. There is, of course, some overhead in managing concurrency across multiple machines, but it is a very powerful way to scale.

James


Re: [GS/SS Beta] Scale to multiple nodes sharing a DB?

Dale Henrichs-3
In reply to this post by Mariano Martinez Peck



As James has pointed out, we do have customers who run a single stone with the DB shared across multiple nodes/machines in parallel, but I am interested in more detail about what you mean by "unique DB", since there are several interpretations :)

Basically, a single stone presents a single unified object graph; however, that object graph can be partitioned in a number of ways depending upon what you need ... The simplest partition is at the application level, where you arrange via application code to allow a given user access to only a subset of the graph ... There are other approaches as well, but this is where I need to understand more :)

Dale
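
For readers new to GLASS, here is a rough sketch of the application-level partitioning Dale describes, in GemStone Smalltalk. The dictionary name #AppUserRoots, the user id 'user-42', and the #orders key are placeholders invented for this example; only UserGlobals, Dictionary, OrderedCollection, and System commitTransaction are standard GemStone.

  "Give each application user a private root object; a session working on
   behalf of that user only ever traverses its own subgraph."
  | appRoots userRoot |
  appRoots := UserGlobals at: #AppUserRoots ifAbsent: [ nil ].
  appRoots == nil ifTrue: [
      appRoots := Dictionary new.
      UserGlobals at: #AppUserRoots put: appRoots ].
  userRoot := appRoots at: 'user-42' ifAbsent: [ nil ].
  userRoot == nil ifTrue: [
      userRoot := Dictionary new.
      appRoots at: 'user-42' put: userRoot ].
  userRoot at: #orders put: OrderedCollection new.
  System commitTransaction.

Which users a given gem serves is then purely an application decision; the stone still sees a single repository.
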

Re: [GS/SS Beta] Scale to multiple nodes sharing a DB?

BrunoBB
Hi,

As I understand it, each machine (with gems or the stone) has a unique SPC (shared page cache).
Is this correct?
Gems on the same node (machine) access the same SPC, but gems on another node have their own SPC. Or can a single node run more than one SPC? (Maybe that does not make sense.)

Can the extents and the Stone run on different nodes?
Can the tranlogs and the Stone run on different nodes?

Regards,
Bruno

Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

Dale Henrichs-3
Bruno,

The short answer is yes:)

Multiple stones can be run on a single host. There is one SPC per stone per host.

So in a system with two stones running on host A, there will be two SPCs on host A.

You can connect to either of the stones running on host A from host B.

In the first form, you connect to a stone on host A and arrange for the RPC gems servicing your GemTools client (for example) to run on host A, in which case those gems will connect to the SPC on host A.

In the second form, you connect to a stone on host A and arrange for the gems to run on host B (not permitted with the Free license), in which case there will be an SPC on host B as well ... If you connect to both stones from host B in this fashion, you'll have one SPC per stone on host B ...

Tranlogs and extents must be on the "same filesystem" as the stone ... NFS filesystems are not reliable enough to be used for extents and tranlogs ... Non-NFS shared filesystems can be used for tranlogs and extents ...

Dale
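
To make that layout a little more concrete, here is a minimal, hypothetical fragment of a stone configuration file. The parameter names are standard GemStone/S 64 configuration options; the paths and sizes are invented for this example and should be adapted per the System Administration Guide.

  # Stone host (host A) configuration fragment -- all paths and sizes
  # below are placeholders for this example.
  # Extents and tranlogs live on filesystems local to the stone host.
  DBF_EXTENT_NAMES = /gemstone/data/extent0.dbf;
  STN_TRAN_LOG_DIRECTORIES = /gemstone/tranlog1/, /gemstone/tranlog2/;
  # Tranlog sizes are in MB.
  STN_TRAN_LOG_SIZES = 1000, 1000;
  # Shared page cache on this host, in KB (about 4 GB here).
  SHR_PAGE_CACHE_SIZE_KB = 4000000;

A gem host such as host B gets its own shared page cache when remote gems log in; its size is governed by the configuration in effect on that host, not by the stone's file.
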

Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

Mariano Martinez Peck
Hi guys,

Thanks for the answers. That's what I wanted to hear!!! It is a good feeling to know that from the technical point of view we can scale a lot :) 


Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

NorbertHartl

On 15.11.2013, at 15:25, Mariano Martinez Peck <[hidden email]> wrote:

Hi guys,

Thanks for the answers. That's what I wanted to hear!!! It is a good feeling to know that from the technical point of view we can scale a lot :) 

Depends on what you call a lot :) Without knowing GemStone in depth, I can easily say that distributing an unpartitioned memory space over multiple hosts and modifying state only within transactions has severe limits. So keep that in mind.

Norbert


Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

BrunoBB
Hi Norbert,

"distributing an unpartitioned memory space over multiple hosts and modifying state only within transactions has severe limits."

Can you elaborate on this sentence? What do you have in mind?

"Unpartitioned" can have multiple interpretations: unpartitioned at the hardware level, or at the logical level?

I think you are talking about the Shared Page Cache: an SPC on one host holds different objects than another SPC on another host (both attached to the same stone). In that case, by your definition, is it partitioned or not?

You will have a problem if your transactions are long-running (lots of changes without commits or aborts). You can solve this at the application level.

Regards,
Bruno
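
The usual application-level remedy is to commit in small batches and retry when a commit fails due to a conflict. A minimal GemStone Smalltalk sketch follows; it assumes the default automatic transaction mode, and the per-batch work block, batch count, and retry limit are placeholders for application logic.

  "Do the work in small batches, committing each one; on a commit
   conflict, abort to discard the changes and retry the batch."
  | workBlock maxRetries |
  workBlock := [:batch | "application-specific changes for this batch" ].
  maxRetries := 3.
  1 to: 100 do: [:batch |
      | attempts committed |
      attempts := 0.
      committed := false.
      [committed or: [attempts >= maxRetries]] whileFalse: [
          attempts := attempts + 1.
          workBlock value: batch.
          System commitTransaction
              ifTrue: [committed := true]
              ifFalse: [System abortTransaction]].
      committed ifFalse: [Error signal: 'batch could not be committed']].
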

Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

Dale Henrichs-3


----- Original Message -----
| From: "BrunoBB" <[hidden email]>
|
| Hi Norbert,
|
| "distributing an unpartitioned memory space over multiple hosts and
| modifying state only within transactions has severe limits."
|
| Can you elaborate on this sentence? What do you have in mind?
|
| "Unpartitioned" can have multiple interpretations: unpartitioned at the
| hardware level, or at the logical level?
|
| I think you are talking about the Shared Page Cache: an SPC on one host
| holds different objects than another SPC on another host (both attached
| to the same stone). In that case, by your definition, is it partitioned
| or not?

You are correct in surmising that the SPC does provide an additional level of partitioning that tends to organize references. GemStone also provides a clustering protocol for compacting object subgraphs on the pages ...
|
| You will have a problem if your transactions are long-running (lots of
| changes without commits or aborts). You can solve this at the
| application level.

Our customers tend toward larger and larger SPCs and more and more CPUs on a single machine for the absolute fastest performance ... As is always the case, the fastest access paths are memory to memory; if you have to go over a wire or hit disk, the performance impact is not transparent ...

So, in the end, Norbert is correct that there are limitations to what can be achieved by adding multiple machines as opposed to increasing the size of a given machine ... but it does depend upon the characteristics of the application and the size of the working set required ...

Dale
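
For reference, the clustering protocol Dale mentions is invoked directly on objects. A tiny sketch follows; the #OrdersByCustomer root is a placeholder, and the details of cluster buckets and when clustering takes effect are described in the Programming Guide.

  "Ask GemStone to place this collection and its subgraph on neighboring
   pages, so traversals fault in fewer pages; the new placement is
   written out when the session commits."
  | root |
  root := UserGlobals at: #OrdersByCustomer ifAbsent: [ nil ].
  root == nil ifFalse: [
      root clusterDepthFirst.
      System commitTransaction ].
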

Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

NorbertHartl

On 15.11.2013, at 18:59, Dale K. Henrichs <[hidden email]> wrote:

> Our customers tend toward larger and larger SPCs and more and more CPUs on a single machine for the absolute fastest performance ... As is always the case, the fastest access paths are memory to memory; if you have to go over a wire or hit disk, the performance impact is not transparent ...
>
> So, in the end, Norbert is correct that there are limitations to what can be achieved by adding multiple machines as opposed to increasing the size of a given machine ... but it does depend upon the characteristics of the application and the size of the working set required ...

Agreed. My statement was GemStone-independent. There are things that always need to be done: you need to distribute your objects in a way that finding the location of an object (and retrieving it) is fast, but there is still overhead in distributing and locating. For transactions, some kind of lock or negotiation is needed to keep them fault-free, and there is overhead for that, too. These overheads are added with every added SPC. I would even assume that the overhead is non-linear, so there is a not-too-big maximum number of instances you can have before the management overhead is a multiple of your real data work. But you can organize your data in a way that mitigates the problem.
So what I really wanted to say is: there are technical problems that exist regardless of which technology you use. I think GemStone is really good at managing and distributing objects and ensuring data consistency. But it is not alien technology, so it is constrained by those general technical problems as well.

Norbert

Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

Jon Paynter-2
Another consideration for scaling -- what do you need to scale for?
Do you want to handle lots of client sessions?
Do you need to work with large / huge domain objects?
Do you have lots of complex processing to do?

Which of these you are facing determines how you scale. Where I am, we frequently deal with the last -- complex processing -- so it makes sense to scale GemStone across several separate physical hosts and use all 100+ cores to do the processing.

If you have large (memory intensive) domain objects to work on, then sending them over the wire to other hosts is not a good idea.




Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

Mariano Martinez Peck



On Fri, Nov 15, 2013 at 3:45 PM, Jon Paynter <[hidden email]> wrote:
Another consideration for scaling -- what do you need to scale for?
Do you want to handle lots of client sessions?
Do you need to work with large / huge domain objects?
Do you have lots of complex processing to do?

These are excellent questions!
 

Which of these you are facing determines how you scale. Where I am, we frequently deal with the last -- complex processing -- so it makes sense to scale GemStone across several separate physical hosts and use all 100+ cores to do the processing.


In my case, the last question was also the most important one, and hence why I asked it :)
 
If you have large (memory intensive) domain objects to work on, then sending them over the wire to other hosts is not a good idea.


Indeed. There are always tradeoffs. But at least it is good to know what the possibilities of GemStone are.
 

Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

Martin McClure-5
In reply to this post by Dale Henrichs-3
On 11/15/2013 09:59 AM, Dale K. Henrichs wrote:
> So, in the end, Norbert is correct that there are limitations to what can be achieved by adding multiple machines as opposed to increasing the size of a given machine ... but it does depend upon the characteristics of the application and the size of the working set required ...

As a counter-example, there are people running successfully with
hundreds of nodes per stone. Eventually your network becomes the
bottleneck -- exactly when that happens is very application-specific.

Regards,

-Martin

Re: [GS/SS Beta] Scale to multiple nodes sharing a DB?

johnmci
In reply to this post by Mariano Martinez Peck
Mariano, another thing you should consider is the reduced-conflict classes. For example, we use one keyed by a UUID generated by our client app, so the entity and its graph of data are unique to that user.

and other fine articles to understand locking and transaction management. 
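
For illustration, here is a minimal sketch of that approach using one of GemStone's reduced-conflict collection classes. The #EntitiesByUUID name and the UUID string are placeholders; see the Programming Guide chapter on reduced-conflict classes, locking, and transaction management for the details.

  "Entities are keyed by a client-generated UUID; because the dictionary
   is a reduced-conflict class, concurrent sessions adding entries under
   different keys should not produce commit conflicts."
  | entities uuid |
  entities := UserGlobals at: #EntitiesByUUID ifAbsent: [ nil ].
  entities == nil ifTrue: [
      entities := RcKeyValueDictionary new.
      UserGlobals at: #EntitiesByUUID put: entities ].
  uuid := 'c6f2d1e0-1111-2222-3333-444455556666'.   "supplied by the client app"
  entities at: uuid put: Dictionary new.
  System commitTransaction.
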



--
===========================================================================
John M. McIntosh <[hidden email]>
Corporate Smalltalk Consulting Ltd. Twitter: squeaker68882
===========================================================================

Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

Mariano Martinez Peck
In reply to this post by Dale Henrichs-3



On Thu, Nov 14, 2013 at 9:05 PM, Dale K. Henrichs <[hidden email]> wrote:
Bruno,

The short answer is yes:)

Multiple stones can be run on a single host.

The following questions are outside GLASS, I know. Also, note that I already understand that I can have many gems running on other nodes.

OK, but are they "isolated" stones, each with its own repository? Or can I have many stones (I refer to the process running the stone) running/sharing the same repository?
My question, in other words, is: if the stone finally becomes the bottleneck, could we have many stones running on many nodes sharing the same repository (object graph)?
If not on different nodes, can I at least run many stone processes on the same node?
Not only for scalability, but also for fault tolerance. What if somehow the stone goes down?


 
Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

James Foster-9
On Nov 28, 2013, at 7:57 PM, Mariano Martinez Peck <[hidden email]> wrote:


The following questions are outside GLASS, I know. Also, note that I already understand that I can have many gems running on other nodes.

OK, but are they "isolated" stones, each with its own repository?

Yes, each stone is associated with only one repository.

Or can I have many stones (I refer to the process running the stone) running/sharing the same repository?
My question, in other words, is: if the stone finally becomes the bottleneck, could we have many stones running on many nodes sharing the same repository (object graph)?

No. The stone is a possible bottleneck because it handles things that are necessarily single-threaded, such as allocating object IDs, page IDs, object locks, the commit token, etc. In actuality, the stone is rarely a bottleneck; you are much more likely to have problems writing to the transaction logs before you have problems with the stone. For more discussion about the stone, see http://www.youtube.com/watch?v=NAW7OkjXZ0M

If not on different nodes, can I at least run many stone processes on the same node?

As Dale says above, multiple stones (with separate repositories) can run on a single host.

Not only for scalability, but also for fault tolerance. What if somehow the stone goes down?

This is where you want a good standby. See chapter 10 of the System Administration Guide.

James



Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

Mariano Martinez Peck
In reply to this post by Martin McClure-5



On Fri, Nov 15, 2013 at 4:34 PM, Martin McClure <[hidden email]> wrote:

As a counter-example, there are people running successfully with
hundreds of nodes per stone. Eventually your network becomes the
bottleneck -- exactly when that happens is very application-specific.


OK, just to see if I understand... What they run on each node are gems, right? So each node will have 1 SPC and many gems, and all those gems/SPCs on all the nodes will be talking (wire in the middle) to the one stone on some node?


 
Re: [Glass] [GS/SS Beta] Scale to multiple nodes sharing a DB?

James Foster-9
On Nov 28, 2013, at 8:31 PM, Mariano Martinez Peck <[hidden email]> wrote:



OK, just to see if I understand... What they run on each node are gems, right? So each node will have 1 SPC and many gems, and all those gems/SPCs on all the nodes will be talking (wire in the middle) to the one stone on some node?

Yes, each host/machine/node can run Gems that communicate with a central Stone on another host/machine/node. See Figures 3.2 and 3.3 in the System Administration Guide (“Connecting Distributed Systems”).

James
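
As a purely illustrative sketch of that layout, a topaz RPC login from a gem host to a central stone might look roughly like the following. The host names, stone name, account, and NRS strings are placeholders, and the comment style and exact NRS syntax here are from memory; check them against the System Administration Guide before use.

  ! topaz commands run on gem host "hostB" (placeholders throughout)
  ! the stone "gs64stone" runs on "hostA"; the gem is started on "hostB"
  set gemstone !@hostA!gs64stone
  set gemnetid !@hostB!gemnetobject
  set username DataCurator
  set password swordfish
  login
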

