Problem with #fork and #performOnServer?


GLASS mailing list
Hi guys,

I am trying to accomplish something simple: I have a main gem that iterates over some "reports" and exports each report to a PDF by invoking a unix tool. Each PDF export takes a few seconds. So what I wanted to do is something like this pseudo-code:

self reports do: [:aReport |
    [ System performOnServer: (self pdfExportStringFor: aReport) ] fork ].

The idea was that each unix process for the PDF tool would run on a CPU core other than the gem's (my current GemStone license does allow using all cores). However, I am not sure I am getting that behavior. It looks like I am still using a single core and running sequentially.
 
In fact, I couldn't even reproduce a simple test case of what I had in mind:

1 to: 6 do: [:index |
    [ System performOnServer: 'tar -zcvf test', index asString,
        '.tar.gz /home/quuve/GsDevKit_home/server/stones/xxx_333/extents' ] fork ].

I would have expected those lines to burn my server and keep 6 CPU cores at 100%. But no, nothing happens. What is funny is that if I run the very same line without the #fork, I do get a process at 100% CPU:

System performOnServer: 'tar -zcvf test', 1 asString, '.tar.gz /home/quuve/GsDevKit_home/server/stones/debrisDemo_333/extents'

So... is there something I am not seeing with #fork and #performOnServer:?

Thanks a lot in advance, 

--
Mariano
http://marianopeck.wordpress.com


Re: Problem with #fork and #performOnServer?

GLASS mailing list
> [...]
>
> 1 to: 6 do: [:index |
>     [ System performOnServer: 'tar -zcvf test', index asString,
>         '.tar.gz /home/quuve/GsDevKit_home/server/stones/xxx_333/extents' ] fork ].
>
> I would have expected those lines to burn my server and keep 6 CPU cores at
> 100%. But no, nothing happens. What is funny is that if I run the very same
> line without the #fork, I do get a process at 100% CPU:

Just a note:
tar/gzip is not written with multicore support as far as I know, so you will only ever max out a single core at 100%. But there is "parallel gzip" (pigz), which will definitely spin up the CPU cooler.
Usage: tar --use-compress-program=pigz ...
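
For example, something along these lines from a gem should spread the compression across cores (a sketch only: pigz must be installed, and the output path is illustrative):

    "Same kind of tar call as above, but with pigz doing the compression."
    System performOnServer:
        'tar --use-compress-program=pigz -cvf /tmp/test.tar.gz ',
        '/home/quuve/GsDevKit_home/server/stones/xxx_333/extents'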

Is performOnServer: really non-blocking for the whole image/gem, or does the whole VM simply wait for the command to complete?


Re: Problem with #fork and #performOnServer?

GLASS mailing list


On Mon, Jul 3, 2017 at 4:52 PM, Petr Fischer via Glass <[hidden email]> wrote:

> Just a note:
> tar/gzip is not written with multicore support as far as I know, so you will
> only ever max out a single core at 100%. But there is "parallel gzip" (pigz),
> which will definitely spin up the CPU cooler.
> Usage: tar --use-compress-program=pigz ...

Sure, that was just a dummy example to watch the CPU usage and test my assumption (it is not the real unix command I call).
 

> Is performOnServer: really non-blocking for the whole image/gem, or does the
> whole VM simply wait for the command to complete?


Yeah, that's the thing. I think you are right: #performOnServer: may be blocking at the gem level.
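
In the meantime, a possible workaround (just a sketch, and it assumes I don't need the command's output): have /bin/sh put each export in the background so #performOnServer: returns immediately:

    "Sketch: background the command at the shell level so performOnServer:
     returns right away; stdout/stderr are redirected so nothing is left
     writing back to the gem, and the tool's output and exit status are lost."
    self reports do: [:aReport |
        System performOnServer:
            (self pdfExportStringFor: aReport), ' > /dev/null 2>&1 &' ].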

For OSSubprocess I was able to allow specifying/managing non-blocking pipes for the standard streams.

And now, reading GsHostProcess, I see that it also supports non-blocking streams!

I see that #_waitChild does indeed call waitpid(), so I should be able to do a busy wait around #childHasExited.
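
Something like this rough, untested sketch is what I have in mind (apart from #childHasExited, the selectors, e.g. #fork:, are my assumption of the GsHostProcess API, so check the class comments before relying on them):

    "Untested sketch: start one child per report, then poll #childHasExited so
     only this Smalltalk process waits instead of blocking the whole gem."
    | children |
    children := self reports collect: [:aReport |
        GsHostProcess fork: (self pdfExportStringFor: aReport) ].
    [ (children detect: [:child | child childHasExited not] ifNone: [nil]) notNil ]
        whileTrue: [ (Delay forSeconds: 1) wait ].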

BTW, for OSSubprocess I added a SIGCHLD-based wait [2] to avoid polling, but of course you must be careful because you may need to force reading from the streams (depending on how much the process writes).

[2] https://github.com/marianopeck/OSSubprocess#semaphore-based-sigchld-waiting

 

Thanks!

--
Mariano
http://marianopeck.wordpress.com


Re: Problem with #fork and #performOnServer?

GLASS mailing list
I am also interested in this: in the free GemStone version, all GemStone processes have CPU affinity to the first 2 CPU cores (licensing).
Does this also apply to your own sub-processes (performOnServer: or OSSubprocess)?

In the Free version it is possible to run 10-20 gems, but only on 2 cores, which is quite a bottleneck...

pf



Re: Problem with #fork and #performOnServer?

GLASS mailing list


On Tue, Jul 4, 2017 at 5:31 AM, Petr Fischer via Glass <[hidden email]> wrote:
> I am also interested in this: in the free GemStone version, all GemStone
> processes have CPU affinity to the first 2 CPU cores (licensing).
> Does this also apply to your own sub-processes (performOnServer: or
> OSSubprocess)?


As far as I understand, yes to both questions. 
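
If you want to verify it on your own machine, something like this should show the affinity that the forked shell inherits (a sketch, assuming Linux with taskset from util-linux installed):

    "$$ expands to the PID of the /bin/sh that performOnServer: forks, so the
     output reflects the affinity mask your sub-processes inherit from the gem."
    System performOnServer: 'taskset -cp $$'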
 
> In the Free version it is possible to run 10-20 gems, but only on 2 cores,
> which is quite a bottleneck...


Well... a bottleneck on CPU, yes, but you may still be able to take advantage of other resources (I/O, etc.). What I mean is: imagine you are serving a website; I would rather have 10/20 gems, even if squeezed onto 2 cores, than only 2 gems.


 



--
Mariano
http://marianopeck.wordpress.com


Re: Problem with #fork and #performOnServer?

GLASS mailing list

The start edition allows only 10 gems, the Limited version 20 gems...


Marten

Petr Fischer via Glass <[hidden email]> wrote on 4 July 2017 at 10:31:

> I am also interested in this: in the free GemStone version, all GemStone
> processes have CPU affinity to the first 2 CPU cores (licensing).
> Does this also apply to your own sub-processes (performOnServer: or
> OSSubprocess)?
>
> In the Free version it is possible to run 10-20 gems, but only on 2 cores,
> which is quite a bottleneck...



_______________________________________________
Glass mailing list
[hidden email]
http://lists.gemtalksystems.com/mailman/listinfo/glass