Hi David, all,

I just want to know whether we can use FFI calls to implement OSProcessPlugin functionality (or at least most of it), and if there are parts that are hard to implement, I would like to know which ones and why.

If you want to ask why one would want to do this, the answer is simple: we stated previously that we want to reduce VM complexity by implementing things at the language side when possible, leaving only the key parts to the VM. This is the main reason I am considering implementing OSProcess purely with FFI, not because of the quality of the OSProcessPlugin implementation etc.

--
Best regards,
Igor Stasenko.
On Fri, May 4, 2012 at 6:23 PM, Igor Stasenko <[hidden email]> wrote:
> Hi David, all,
> I just want to know whether we can use FFI calls to implement
> OSProcessPlugin functionality (or at least most of it) [...]

Once we have the threaded FFI and the threaded VM then yes, because one can block one thread in a read without blocking the entire VM. But without the threaded VM one needs asynchronous i/o.

best,
Eliot
On Sat, May 5, 2012 at 9:00 AM, Eliot Miranda <[hidden email]> wrote:
But it can still be convenient to have a forkAndExec combined primitive, to avoid the complications of having the system run while forked (two processes accessing the same screen and changes file, so that if there's an error in the forked child it will collide with the parent when reporting its error).

best,
Eliot
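For concreteness, the combined primitive Eliot describes amounts to roughly the following POSIX sequence. This is an illustrative sketch only; fork_and_exec and its parameters are invented names, not the actual OSProcessPlugin code:

/* Combined fork+exec: the child never runs image code, so the two
   processes never share the screen or the changes file. */
#include <unistd.h>
#include <sys/types.h>

pid_t fork_and_exec(const char *path, char *const argv[],
                    int in_fd, int out_fd, int err_fd)
{
    pid_t pid = fork();
    if (pid != 0)
        return pid;            /* parent (or -1 on failure) returns at once */

    /* Child: wire the pipe ends onto stdin/stdout/stderr... */
    dup2(in_fd,  STDIN_FILENO);
    dup2(out_fd, STDOUT_FILENO);
    dup2(err_fd, STDERR_FILENO);

    /* ...and close everything else so the parent's files stay private. */
    for (int fd = STDERR_FILENO + 1; fd < (int)sysconf(_SC_OPEN_MAX); fd++)
        close(fd);

    execv(path, argv);         /* only returns on error */
    _exit(127);                /* report failure via exit code, never via the image */
}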
On 5 May 2012 17:00, Eliot Miranda <[hidden email]> wrote:
> On Fri, May 4, 2012 at 6:23 PM, Igor Stasenko <[hidden email]> wrote:
>> I just want to know whether we can use FFI calls to implement
>> OSProcessPlugin functionality (or at least most of it) [...]
>
> Once we have the threaded FFI and the threaded VM then yes, because one can
> block one thread in a read without blocking the entire VM. But without the
> threaded VM one needs asynchronous i/o.

It's mildly off-topic, but I was under the impression that asynchronous i/o beats the trousers off blocking i/o, specifically when one has hundreds/thousands of sockets to process. So while it might be _nice_ to have blocking i/o as an option (one can often write clearer code), it's not something that (at least AFAIK) scales very well.

frank
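For reference, the classic portable shape of such multiplexed i/o is a select(2) loop; select itself degrades with very large descriptor sets (poll/epoll/kqueue are the scalable variants), which is exactly Frank's scaling point. An illustrative sketch with invented names:

/* One thread watches many sockets; no read ever blocks the loop. */
#include <sys/select.h>
#include <unistd.h>

void serve(int *socks, int nsocks)
{
    for (;;) {
        fd_set readable;
        int i, maxfd = -1;

        FD_ZERO(&readable);
        for (i = 0; i < nsocks; i++) {
            FD_SET(socks[i], &readable);
            if (socks[i] > maxfd) maxfd = socks[i];
        }

        /* Sleeps until at least one socket has data. */
        if (select(maxfd + 1, &readable, NULL, NULL, NULL) < 0)
            return;

        for (i = 0; i < nsocks; i++)
            if (FD_ISSET(socks[i], &readable)) {
                char buf[4096];
                read(socks[i], buf, sizeof buf);   /* handle the data here */
            }
    }
}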
On 5 May 2012 18:00, Eliot Miranda <[hidden email]> wrote:
> On Fri, May 4, 2012 at 6:23 PM, Igor Stasenko <[hidden email]> wrote:
>> I just want to know whether we can use FFI calls to implement
>> OSProcessPlugin functionality (or at least most of it) [...]
>
> Once we have the threaded FFI and the threaded VM then yes, because one can
> block one thread in a read without blocking the entire VM. But without the
> threaded VM one needs asynchronous i/o.

Yes, but nobody said that I can't use non-blocking i/o.

The main advantage I see in using FFI is being able to deal with everything at the language level: bugs can be fixed quickly, things can be improved more easily, etc.
And the main disadvantage is, of course, speed.

--
Best regards,
Igor Stasenko.
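The non-blocking mode Igor refers to is a per-descriptor flag set through fcntl(2), and that call is exactly the kind of thin C entry point one could reach through FFI. An illustrative sketch (set_nonblocking and try_read are invented names):

#include <fcntl.h>
#include <errno.h>
#include <unistd.h>

int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);          /* read the current flags */
    if (flags < 0) return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

/* A read now returns immediately; EAGAIN means "no data yet", so the
   image can poll without the whole VM blocking.  (A real binding must
   distinguish this from end-of-file, where read() returns 0.) */
ssize_t try_read(int fd, void *buf, size_t len)
{
    ssize_t n = read(fd, buf, len);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;
    return n;
}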
2012/5/6 Igor Stasenko <[hidden email]>:
> Yes, but nobody said that I can't use non-blocking i/o.
>
> The main advantage I see in using FFI is being able to deal with everything
> at the language level: bugs can be fixed quickly, things can be improved
> more easily, etc.
> And the main disadvantage is, of course, speed.

No.
The main disadvantage is exposing the complexity of the outside world at the user's face, when it was once hidden under the VM carpet.
In the VM, complexity is handled with the "right tools" (cough...): #include machine_specific_stuff.h driven by #ifdef why_the_hell_is_it_so_complex ;), plus include paths, library paths, ldconfig, etc.
It's of course cough-able because this complexity then spreads into m4 macros for configure scripts and other such niceties.

Sure,
- when the VM is not able to handle the outside-world zoo because of a lack of manpower, and because the outside world changed too fast while we were sleeping happily in our Mickey world,
- or if we want more than the least common denominator between several underlying platforms,
then yes, it might be better to adopt your strategy, because we can get more manpower at the image level than at the VM level...

However, it's a kind of pact with the devil: we get the power at the price of introducing the #ifdef hell into the image.
Sure, with a good object model, we might think that it will be easier to handle such complexity.
But then we're just ignoring the efforts of the hordes of programmers maintaining this shit, and redoing it all in the image.
I'm not sure that it will be sustainable... unless of course we focus on a single platform.

Your work is very important, because at least it will enable powerful single-platform extensions.
But as you see, I just wonder how it would help us to continue providing ubiquity...

Nicolas
Hi,
On May 6, 2012, at 9:45 AM, Nicolas Cellier wrote:

> No.
> The main disadvantage is exposing the complexity of the outside world at
> the user's face, when it was once hidden under the VM carpet.

And why is that a disadvantage? This is Smalltalk. "Users" are developers, not Bin Laden... it's not like we are going to do terrible things with that power. Even if we are newbies, the worst thing that can happen is making mistakes while learning... same as today :)

> In the VM, complexity is handled with the "right tools" (cough...)

Your "cough" is right :). The right tools are the ones that make our life easier, not the ones closest to the hardware.

> Sure,
> - when the VM is not able to handle the outside-world zoo because of a lack
> of manpower [...]
> - or if we want more than the least common denominator between several
> underlying platforms,
> then yes, it might be better to adopt your strategy, because we can get
> more manpower at the image level than at the VM level...

This is an important issue: it is easier to maintain at the Smalltalk level than at the C level. And there are more willing hands to code :)

> However, it's a kind of pact with the devil: we get the power at the price
> of introducing the #ifdef hell into the image.

That's simply not true... look at the DBXTalk code, if you want. We could have done a plugin to deal with that, but we did it with FFI. Plain Smalltalk, and straightforward (the real problem is compiling the OpenDBX libraries, in fact... and I would love to replace them with Smalltalk code, but those are just dreams).

> Sure, with a good object model, we might think that it will be easier
> to handle such complexity.

+1

> But then we're just ignoring the efforts of the hordes of programmers
> maintaining this shit, and redoing it all in the image.

This horde of programmers (me, for instance) will be more than pleased to be ignored in this case :)

> I'm not sure that it will be sustainable... unless of course we focus on a
> single platform.

Right now OSProcess is also a hell... if Dave chooses to do something else, we are in trouble. I prefer a hell in Smalltalk to a paradise in C (and of course, the VM is not a paradise :)

> Your work is very important, because at least it will enable powerful
> single-platform extensions.
> But as you see, I just wonder how it would help us to continue providing
> ubiquity...

Same as today, but in a better way :)

Lots of plugins don't need to be on the VM side if we have a powerful FFI with callbacks and threads.
I think (as a Smalltalk programmer and as a VM programmer) that if we can move those plugins to the image side, we can focus on the real complexity of the VM and, after all, do better things.
In reply to this post by Nicolas Cellier
Hi Nicolas,

> Sure,
> - when the VM is not able to handle the outside-world zoo because of a lack
> of manpower, and because the outside world changed too fast while we were
> sleeping happily in our Mickey world,

I like the image :)

> However, it's a kind of pact with the devil: we get the power at the price
> of introducing the #ifdef hell into the image.
> Sure, with a good object model, we might think that it will be easier to
> handle such complexity.
> But then we're just ignoring the efforts of the hordes of programmers
> maintaining this shit, and redoing it all in the image.
> I'm not sure that it will be sustainable... unless of course we focus on a
> single platform.

Not necessarily: we have objects and encapsulation, so we should be able to manage the #ifdef plague with modularity. After all, this is the whole idea behind "Transform type checks into polymorphism" :)
2012/5/6 Esteban Lorenzano <[hidden email]>:
> And why is that a disadvantage? This is Smalltalk. "Users" are developers,
> not Bin Laden... it's not like we are going to do terrible things with that
> power. Even if we are newbies, the worst thing that can happen is making
> mistakes while learning... same as today :)

Well, simplicity is also a goal. A system that a single person can understand... think Cuis.
Especially in Smalltalk there is nothing private, APIs are not that easy to identify, and it's so easy to use a package at the wrong level...

Also, another drawback is having to mirror the whole foreign system in your image (like being on an Intel Mac and having plenty of code for handling Linux libraries, Windows, Solaris, FreeBSD, smartphones, etc.). Unless you automatically load those packages at startup (and, more difficult, unload them at snapshot)...

> That's simply not true... look at the DBXTalk code, if you want. We could
> have done a plugin to deal with that, but we did it with FFI. Plain
> Smalltalk, and straightforward (the real problem is compiling the OpenDBX
> libraries, in fact... and I would love to replace them with Smalltalk code,
> but those are just dreams).

Exactly, it may remain a dream...
It's not true in this specific case because OpenDBX took all the burden of ubiquity... that's why it is so complex to compile ;)

> This horde of programmers (me, for instance) will be more than pleased to
> be ignored in this case :)

It's not a problem of ignoring them, it's more a problem of underestimating the required work...

> Lots of plugins don't need to be on the VM side if we have a powerful FFI
> with callbacks and threads.
> I think (as a Smalltalk programmer and as a VM programmer) that if we can
> move those plugins to the image side, we can focus on the real complexity
> of the VM and, after all, do better things.

There's something that has always amazed me: plugins are supposed to be pluggable, and while I understand why they depend on some Kernel-VM services, I don't really see why the contrary has to be true. Is it a packaging problem? A distribution problem?

I'm all for a better and more powerful FFI (without threading we can't have efficient asynchronous i/o implemented at the image level, just a poor man's polling one indeed...), but I continue to think there is a price. Is the Unix-style select() ubiquitous, or should I use WaitForMultipleObjects() on Windows? Are the specifications of read/write stream implementations machine independent (BSD/SysV/others...)? As far as I remember the selection is implemented with macros... Is the fcntl call to set non-blocking mode portable?

Nicolas
On May 6, 2012, at 1:16 PM, Nicolas Cellier wrote:

> Well, simplicity is also a goal. A system that a single person can
> understand... think Cuis.
> Especially in Smalltalk there is nothing private, APIs are not that easy to
> identify, and it's so easy to use a package at the wrong level...

Yeah, it is a goal, but another goal is "everyone being capable of modifying their system", and that is much easier if the whole system lives inside the image and is not hidden in a plugin.

> Also, another drawback is having to mirror the whole foreign system in your
> image (like being on an Intel Mac and having plenty of code for handling
> Linux libraries, Windows, Solaris, FreeBSD, smartphones, etc.).
> Unless you automatically load those packages at startup (and, more
> difficult, unload them at snapshot)...

Well, nobody (except Marcus and myself) complains about one-click packaging, and it's exactly that :)
Nah, seriously: yes, this is an issue and we need to think about it to find a good solution (btw, OSProcess and many other plugins are also not available for all platforms, just Mac, Windows and Linux).

> Exactly, it may remain a dream...
> It's not true in this specific case because OpenDBX took all the burden of
> ubiquity... that's why it is so complex to compile ;)

In fact, OpenDBX just provides a common API over different backends... and that's perfectly doable in Smalltalk (it should even be much easier than doing it in C).

> It's not a problem of ignoring them, it's more a problem of underestimating
> the required work...

The thing is that if you have the "right tool" (and I think Pharo is one), complexity is automatically reduced :)

> There's something that has always amazed me: plugins are supposed to be
> pluggable, and while I understand why they depend on some Kernel-VM
> services, I don't really see why the contrary has to be true. Is it a
> packaging problem? A distribution problem?

It's because they are not plugins at all: without them the image doesn't start (think of FilePlugin, for instance; since the image needs it to load changes and sources, you can't drop it, unless we put the sources inside the image), but well... there are more. I think it is a design problem (and, by the way, a design problem we can solve by having the plugins inside the image :P)

> I'm all for a better and more powerful FFI (without threading we can't have
> efficient asynchronous i/o implemented at the image level, just a poor
> man's polling one indeed...), but I continue to think there is a price.

Every benefit has its drawbacks... and we need to take steps to mitigate them. But I just don't think that in this case we can list "overcomplexity" as a drawback; I think it is quite the opposite: it will reduce accidental complexity (the essential complexity will always be there, whether in a plugin or in Pharo). I'm more worried about having all the platform-specific stuff inside the image... but we can mitigate that with Fuel, and by making packages loadable when running the image... I don't know, I'm just thinking while writing, so this is probably stupid :)

Esteban
On Sun, May 6, 2012 at 2:16 PM, Esteban Lorenzano <[hidden email]> wrote:
> <snip>
> I'm more worried about having all the platform-specific stuff inside the
> image... but we can mitigate that with Fuel, and by making packages
> loadable when running the image...

Just think how many times you have taken a development image and used it on several platforms. At least I don't. The same happened when I used Eclipse: I didn't share my Eclipses between systems. I even had several Eclipse installations with their own plugins (just like images, hehe).

Probably with Jenkins, Metacello, and the kernel/bootstrap we can generate distributions per platform (with the possibility of an all-in-one distribution for those who like that).

Guille
Since most of Nicolas' concerns were answered by others, there's only one thing I'd like to say something about: the platform-specific zoo.

At first look it can be scary, but look at the OpenGL binding using NativeBoost: we were able to deal with it at the image side, and in our case the platform-specific code is maybe less than 1% of the total NBOpenGL codebase.

Remember the endless "B3DAccelerator doesn't work, XYZPlugin doesn't work" issues, because of all this ldconfig etc. mess. Every time we have such an issue, we need to look at the VM code, patch it, and rebuild it again. That means more time and more people involved every time there is an issue on the VM side. In contrast, if problems like this arise in the FFI world, users can easily patch the code on their own and have stuff working in minutes instead of days (or even months) of waiting until a new VM build is released.

Especially for OSProcess: why am I targeting it? Because it exposes the most basic kernel functionality, which can be found on all systems. That means problems with linking against the right libraries etc. are much less likely, since the OS kernel is there and cannot disappear.

And last but not least, of course: avoiding the problem of the minimal common denominator. For any software expert, I guess, it is important to be able to access the full potential of a system, not a small portion of it. It might not be that important for people with a "living in a sandbox" mindset, but if you really approach it with goals like "I want to implement a modern application on this system using this language", then you will probably start worrying about limitations sooner or later, as well as about how easily you can change things to adapt them to your needs.

--
Best regards,
Igor Stasenko.
2012/5/6 Guillermo Polito <[hidden email]>:
> Just think how many times you have taken a development image and used it on
> several platforms. At least I don't. The same happened when I used Eclipse:
> I didn't share my Eclipses between systems. I even had several Eclipse
> installations with their own plugins (just like images, hehe).
>
> Probably with Jenkins, Metacello, and the kernel/bootstrap we can generate
> distributions per platform (with the possibility of an all-in-one
> distribution for those who like that).

Yes, I understand that we can live without this feature...
- if we can reconstruct images easily (one of the goals of Pharo) - I mean not only code, but any object (eventually with Fuel);
- if we solve the bootstrap problem (or if we can still prepare an image for cross-platform startup);
- if we don't forget to always talk (send messages) through an abstract layer, and never directly name the target library.

Since I didn't have all these tools in the past, I was forced to use development images across different platforms a lot, and yes, it did not follow the mainstream rules (a la "we can reconstruct everything from scratch"), but it was damn powerful. For deploying applications, it is also very powerful and cheap. Personally, I would feel sore to lose it.

Nicolas
On 6 May 2012 17:08, Nicolas Cellier <[hidden email]> wrote:
> Yes, I understand that we can live without this feature...
> - if we can reconstruct images easily (one of the goals of Pharo) - I mean
> not only code, but any object (eventually with Fuel);
> - if we solve the bootstrap problem (or if we can still prepare an image
> for cross-platform startup);
> - if we don't forget to always talk (send messages) through an abstract
> layer, and never directly name the target library.
>
> Since I didn't have all these tools in the past, I was forced to use
> development images across different platforms a lot, and yes, it did not
> follow the mainstream rules (a la "we can reconstruct everything from
> scratch"), but it was damn powerful.
> For deploying applications, it is also very powerful and cheap.
> Personally, I would feel sore to lose it.

But look at the root of what we are talking about: N bytes in the VM versus M bytes in the image to support certain functionality. I think that if you need it, you will make sure those bytes are there and properly packaged with your application.

You can ship your product and use it on multiple platforms with ease, granted that the appropriate platform-specific code is loaded into your image. With distribution via the VM it's a bit of a different story: it is a barrier with a high entry cost, especially if you think about all those RPMs, which are controlled by third-party maintainers. They are not that easy to control directly, and it is much, much slower when you need to deal with some problem.

--
Best regards,
Igor Stasenko.
2012/5/6 Igor Stasenko <[hidden email]>:
> But look at the root of what we are talking about: N bytes in the VM versus
> M bytes in the image to support certain functionality. I think that if you
> need it, you will make sure those bytes are there and properly packaged
> with your application.

Unfortunately, it's more than moving code...
What I mean is that when I need to pass an O_NONBLOCK flag to an FFI call, it's going to be a problem, because I have to know how this information is encoded on each and every platform I want to support. At least in C code I just care about the symbolic name and have a relatively portable statement. To me that's one of the highest hurdles with FFI, because this is the kind of complexity I wish I never had to care about. That's just a flavour of the #define/#ifdef hell. It can be worse if you want to interface with IPC (which has lots of different flavours). The same goes for functions defined by macros that just use machine-specific structure layouts... we can no longer use those structures as opaque handles.

> You can ship your product and use it on multiple platforms with ease,
> granted that the appropriate platform-specific code is loaded into your
> image. With distribution via the VM it's a bit of a different story: it is
> a barrier with a high entry cost.

I agree on the principle; I always prefer developing with FFI to hacking the VM, and I'm far, far more efficient at the former. Nonetheless, I don't think FFI can magically solve all our problems. In certain ways it can make them worse.

Nicolas
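Nicolas' O_NONBLOCK example is easy to check: the macro hides a platform-specific value that plain C never needs to see, but that an FFI caller must reproduce. A tiny illustrative probe (the two values quoted in the comment are typical of glibc and BSD-derived systems respectively, not a specification):

#include <stdio.h>
#include <fcntl.h>

int main(void)
{
    /* e.g. 0x800 on Linux/glibc, 0x4 on macOS/BSD */
    printf("O_NONBLOCK = 0x%x\n", (unsigned)O_NONBLOCK);
    printf("O_APPEND   = 0x%x\n", (unsigned)O_APPEND);
    printf("O_CREAT    = 0x%x\n", (unsigned)O_CREAT);
    return 0;
}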
2012/5/6 Igor Stasenko <[hidden email]>:
> At first look it can be scary, but look at the OpenGL binding using
> NativeBoost: we were able to deal with it at the image side, and in our
> case the platform-specific code is maybe less than 1% of the total
> NBOpenGL codebase.

Well, like OpenDBX: maybe because OpenGL has quite a standard interface...

> Remember the endless "B3DAccelerator doesn't work, XYZPlugin doesn't work"
> issues, because of all this ldconfig etc. mess.

By the way, a wrong decision in the VM totally messed up the way FFI searches for and finds libraries too. That decision was related to plugins: the goal was to limit the possibility of loading a wrong plugin, so the looked-up paths were restricted for plugins. Unfortunately, the same primitive is shared by FFI, so we also restricted the FFI search path...
But it would be a mistake to try to mimic the underlying platform's rules with in-image code. That's a waste of time and energy, and it cannot accommodate settings that are customizable! What we need instead is to correct the VM and just use the existing platform mechanism. We also need to add the possibility of bypassing it and using our own specific library paths when we want to.

> Every time we have such an issue, we need to look at the VM code, patch it,
> and rebuild it again. That means more time and more people involved every
> time there is an issue on the VM side. In contrast, if problems like this
> arise in the FFI world, users can easily patch the code on their own and
> have stuff working in minutes instead of days (or even months) of waiting
> until a new VM build is released.

I agree. For our own use it's not a problem, because we can compile our own VM. But when it comes to distributing, yes, this is a heavy burden we drag...
As for the user: yes, DIY can make a workaround possible, and that's better than nothing (see the library path example above). But imagine you want to distribute software made with Pharo. What are you going to tell your users if you don't have a generic solution that works, only workarounds? DIY? Maybe I'm old, but I remember Smalltalk distributions that just work...

> Especially for OSProcess: why am I targeting it? Because it exposes the
> most basic kernel functionality, which can be found on all systems.

As Eliot said, it's hard to solve entirely with FFI, and it's better to have a single primitive that forks, closes all file descriptors except a list to keep open (the pipes), and execs...

> And last but not least, of course: avoiding the problem of the minimal
> common denominator.

I can only agree with that. Nonetheless, it's also Pharo's responsibility to provide the common denominator.

Nicolas
On Sat, May 05, 2012 at 03:23:15AM +0200, Igor Stasenko wrote:
> Hi David, all,
>
> I just want to know whether we can use FFI calls to implement
> OSProcessPlugin functionality (or at least most of it), and if there are
> parts that are hard to implement, I would like to know which ones and why.

Most, and probably all, of OSProcess could be implemented with FFI. It would take some work, and others have already pointed out the difficult parts, but with enough time and effort it could be done.

> If you want to ask why one would want to do this, the answer is simple: we
> stated previously that we want to reduce VM complexity by implementing
> things at the language side when possible, leaving only the key parts to
> the VM. This is the main reason I am considering implementing OSProcess
> purely with FFI, not because of the quality of the OSProcessPlugin
> implementation etc.

I also prefer to do things in the image as much as possible. Aside from fork/exec and a few other things, almost everything in OSProcessPlugin is a thin layer over the corresponding C runtime call or system call. On the image side, a platform-specific OSProcessAccessor is responsible for the rest of the interface.

If you want to experiment with this, I would suggest starting with a copy of UnixOSProcessAccessor (maybe call it UnixOSProcessFFIAccessor) and replacing the calls to the primitives with the corresponding FFI calls. There is a fairly good set of unit tests, so as long as you can keep the tests green you will know you are making progress.

Note that I am answering the question "could this be done", not "should this be done" ;)

HTH,
Dave
On Sun, May 6, 2012 at 9:14 AM, Nicolas Cellier <[hidden email]> wrote:

> What I mean is that when I need to pass an O_NONBLOCK flag to an FFI call,
> it's going to be a problem, because I have to know how this information is
> encoded on each and every platform I want to support.

But there are solutions to this which mean you *don't* have to know. I wrote a prototype for VisualWorks that maps a SharedPool to these externally defined variables. Here's how it works.
For each group of C constants, e.g. i/o constants, one populates a subclass of SharedPoolForC with the variables one wants to define, and in a class-side method one defines the set of include files per platform that one should pull in to evaluate the constants. SharedPoolForC has code in it to automatically generate a C program and compile it, e.g. to provide a shared library/dll for the current platform. The C program is essentially a name-value dictionary that maps from the abstract name #O_NONBLOCK to the size and value for a particular platform. SharedPoolForC also contains code to load the shared library/dll, extract the values and update the pool variables automatically.
The deployment scheme is as follows: at start-up the system asks each SharedPoolForC subclass to check the platform and see if the platform has changed. If it hasn't changed, nothing needs to happen. If it has changed, the system attempts to locate the shared library/dll for the current platform (the platform name is embedded in the dll's name) and update the pool variables from that dll, raising an exception if unavailable (and the exception could be mapped into a warning or an error to suit). So to deploy e.g. a one-click one needs to generate the set of dlls for the platforms one wants to deploy on.
The development scheme is simply to run a method on the SharedPoolForC when one adds some class variables and/or changes the set of include files; this turns the crank, generating, compiling and loading the C file to get the value(s) for the new variable(s).
An alternative scheme would generate a program that would print e.g. STON, which could be parsed or evaluated to compute the values. This would have the advantage that the definitions of the values are readable and editable by mere humans. So I think I'd discard the shared library/dll approach and keep it simple.
best,
Eliot
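The generated program in the printing variant might look roughly like this. A sketch only: the output format below is a plausible STON-flavoured layout, not the actual prototype's, and the EMIT macro is an invented name:

#include <stdio.h>
#include <fcntl.h>
#include <errno.h>

/* Emit one name/size/value entry per constant. */
#define EMIT(sym) printf("#%s : { #size : %d. #value : %ld }\n", \
                         #sym, (int)sizeof(sym), (long)(sym))

int main(void)
{
    EMIT(O_NONBLOCK);
    EMIT(O_CREAT);
    EMIT(EAGAIN);
    EMIT(EINTR);
    return 0;
}

The image would compile and run this once per platform, parse the output, and fill in the pool variables from it.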
On 7 May 2012 20:15, Eliot Miranda <[hidden email]> wrote:
> But there are solutions to this which mean you *don't* have to know. I
> wrote a prototype for VisualWorks that maps a SharedPool to these
> externally defined variables.
> <snip>
> The development scheme is simply to run a method on the SharedPoolForC
> when one adds some class variables and/or changes the set of include
> files; this turns the crank, generating, compiling and loading the C file
> to get the value(s) for the new variable(s).

Yes, I remember you talked about this some years ago ;)

But this requires a platform with a compiler installed; otherwise your system won't be able to adapt to a new environment. Of course there is a solution to that as well: keep a database for every platform in the image, detect the platform on startup, and hope that everything is in sync :) And if not, it usually means the guy is using either a non-standard distribution or a custom-built kernel, etc. At that point I would stop worrying about it, since if he is good enough to customize his own system, he can deal with FFI troubles in no time... and usually those guys have a compiler installed :)

Also, if the system supports binary distributions, then we're in the same boat as any other binary compiled and distributed for that platform (need me to elaborate on that?).

> An alternative scheme would generate a program that would print e.g. STON,
> which could be parsed or evaluated to compute the values. This would have
> the advantage that the definitions of the values are readable and editable
> by mere humans. So I think I'd discard the shared library/dll approach and
> keep it simple.

Indeed. Again, generate C code... a bit more code doesn't hurt.

--
Best regards,
Igor Stasenko.
2012/5/8 Igor Stasenko <[hidden email]>:
> On 7 May 2012 20:15, Eliot Miranda <[hidden email]> wrote:
>> For each group of C constants, e.g. i/o constants, one populates a
>> subclass of SharedPoolForC with the variables one wants to define, and in
>> a class-side method one defines the set of include files per platform
>> that one should pull in to evaluate the constants.
> <snip>
> But this requires a platform with a compiler installed; otherwise your
> system won't be able to adapt to a new environment.

Having an in-image C parser is a pain (the VW/DLLCC parser is not maintained and cannot parse modern headers). So I agree, the best option is to use an external C preprocessor/compiler; it's up to date. We thus replace C with C, so it seems more or less equivalent... Is it?

We mirror these defines in the image with a bunch of OSAPI subclasses. We also have to handle different structure layouts if some of our API functions are field-picking macros... Also, across several Linux distros, one has glibc 4.5 and libm 4.8, another glibc 5.1 but libm 4.3, plus OpenGL, SSL, etc. A lot of combinatorics... So we cleverly decide to reify each library and subclass it for each major version, so as to program an OSAPI by composition. We then just have to prepare a configuration file. And if we are very clever, maybe we can query the OS and create a sort of autoconf...

To ease our task, each OS provides more or less the same functions, but they are not packaged the same way. Two functions are in the same library on this OS, and in two different libraries on another. Or there are several concurrent implementations (like winsock and winsock2)... Some of these nice features of the real world also pollute the C code of our plugins with #ifdef, #include and platform-dependent files, but some of the concerns are separated out into makefiles at compile time, and mainstream cmake and other such shit help us a bit.

With FFI, I still have the feeling that it is worse, simply because we have to resolve the macros and care about differences that would not appear in our C source (only in machine-dependent includes), and also care about library packaging if we want to use composition... It's not that it's not doable; it's that we are going to reinvent a gas plant, and it's going to be so boring... I'd like to see a proof of concept, even if we restrict ourselves to libc, libm, kernel.dll, msvcrt.dll...

Bonus: imagine I install a brand new Linux v3.x.x. Because we have a simpler VM, and they care about compatibility, I can recompile the VM and the plugins and restart my favourite image with reduced cmake activity. Unfortunately, files are not accessed via a plugin but via FFI; the FILE structure layout changed; I used a field-picking macro (like getc, feof, FD_SET or ...); and my image crashes at startup... I have to extract the new structures/defines on v3.x.x, restart an old kernel v2.y.y, prepare the in-image structures for the new OS, save the image, and restart kernel v3.x.x. Phew... I don't get the feeling we simplified the toolchain in this case either.

Nicolas
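One conventional escape hatch for the macro problem Nicolas describes is a tiny shim library that wraps macro-only APIs in real functions, so the image binds stable symbols and never needs the FILE or fd_set layout. An illustrative sketch; the shim_* names are invented:

#include <stdio.h>
#include <sys/select.h>

/* Real, linkable functions standing in for possibly-macro APIs. */
int  shim_getc(FILE *f)               { return getc(f); }
int  shim_feof(FILE *f)               { return feof(f); }
void shim_fd_zero(fd_set *s)          { FD_ZERO(s); }
void shim_fd_set(int fd, fd_set *s)   { FD_SET(fd, s); }
int  shim_fd_isset(int fd, fd_set *s) { return FD_ISSET(fd, s); }

The cost is, of course, exactly Nicolas' point: one more platform-specific artifact to compile and ship alongside the image.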