Hi,
some of you might know SqueakJS (http://bertfreudenberg.github.io/SqueakJS/demo/simple.html), a bytecode interpreter written in JavaScript that can run Squeak/... images. I think it was mentioned in this group before.

I played around with SqueakJS for a while and was surprised that SqueakJS is much faster than Amber Smalltalk. I expected the opposite.

SqueakJS: 0 tinyBenchmarks '93910491 bytecodes/sec; 1098401 sends/sec'
Amber: 0 tinyBenchmarks '3072196.6205837172 bytecodes/sec; 393573.3746130031 sends/sec'

I am wondering whether the reason is that Amber is not as optimized for JavaScript engines as SqueakJS is, or whether bytecode interpreters like SqueakJS offer more optimization potential for JavaScript engines (JIT, ...) in general. If the latter is true, then SqueakJS combined with a minimal image, a JavaScript bridge, and HTML helper classes would be superior to Amber Smalltalk from a technical point of view.

SqueakJS essentially has one big loop that reads bytecodes and executes them. In Amber Smalltalk, we have a lot of JavaScript functions (e.g., one for every block closure), which might defeat the JavaScript engine's optimizations. There is also a research paper about SqueakJS (http://www.freudenbergs.de/bert/publications/Freudenberg-2014-SqueakJS.pdf) which explains some of the concepts.

Any ideas/comments? Maybe we can make Amber Smalltalk as fast as SqueakJS somehow...

--
You received this message because you are subscribed to the Google Groups "amber-lang" group. To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email]. For more options, visit https://groups.google.com/d/optout.
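[Editor's note: the "one big loop reading bytecodes and executing them" mentioned above can be sketched in a few lines of JavaScript. This is a minimal illustration of the technique, not SqueakJS's actual code; the opcode names are made up.]

```javascript
// Minimal bytecode dispatch loop: fetch the next bytecode,
// switch on its opcode, execute, repeat.
function interpret(bytecodes, stack) {
    let pc = 0;                          // program counter
    while (pc < bytecodes.length) {
        const b = bytecodes[pc++];
        switch (b.op) {
            case 'push':                 // push a literal
                stack.push(b.value);
                break;
            case 'add': {                // pop two operands, push their sum
                const right = stack.pop();
                stack.push(stack.pop() + right);
                break;
            }
            case 'jumpIfFalse':          // conditional branch
                if (!stack.pop()) pc = b.target;
                break;
            default:
                throw new Error('unknown bytecode: ' + b.op);
        }
    }
    return stack.pop();                  // result is left on the stack
}
```

For example, `interpret([{op:'push',value:3},{op:'push',value:4},{op:'add'}], [])` evaluates 3 + 4. A single hot loop like this gives the JS engine one large, monomorphic function to optimize, which is part of what the thread goes on to discuss.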
Wish PharoJS existed :)
In reply to this post by Matthias Springer-2
whoa!

I wasn't aware of such a difference. SqueakJS is certainly interesting! Thanks for bringing this up.
Is it against latest Amber in deploy mode? Now in practice it will depend on how easy it is to proxy JSObjects, like in Amber.

Phil
In reply to this post by Matthias Springer-2
They seem to have very different objectives. SqueakJS runs a real Smalltalk image in an HTML5 canvas, whereas AmberJS manipulates the whole DOM. I don't see them as mutually exclusive, just as living in different (and somewhat overlapping) "worlds."
--
Bob Calco
I ran the benchmarks in the development mode. The version is from December 2014 or so.

SqueakJS could theoretically run a minimalistic image without a user interface and without an HTML5 canvas, and interact with the browser only via a JavaScript bridge (and then modify the DOM). Running a full Squeak/Pharo image inside an HTML canvas is nice, but I don't see a practical use case for that.

--
Best regards,
Matthias Springer

Student M.Sc. IT Systems Engineering (Hasso Plattner Institute)
[hidden email] * Stahnsdorfer Straße 144b, 14482 Potsdam, Germany
Agree.

What would be practical is a minimal load that can manipulate the DOM and interact with JavaScript with as little friction as possible.

Did you use that JavaScript bridge you mention? I'm curious how much mismatch it adds, if any.

I'm also curious what `console log: someSqueakJSObject` looks like, and what happens with nil, for example.
No, in fact I just ran the benchmarks so far and did nothing else.
Also, I don't know if the JavaScript bridge exists yet. I talked with Bert Freudenberg (the creator of SqueakJS) a while ago and he mentioned that he is working on that. Maybe he can give us some more insights and tell us about his vision for that project...

--
Best regards,
Matthias Springer

Student M.Sc. IT Systems Engineering (Hasso Plattner Institute)
[hidden email] * Stahnsdorfer Straße 144b, 14482 Potsdam, Germany
I think you'll find this is pretty amazing, and useful too:

Bernat Romagosa.
In reply to this post by matthias.springer
Hi folks,
yes, SqueakJS has a JSBridge now. I actually used Amber's JS interface as inspiration :) The implementation is very different of course, but I tried to avoid gratuitous incompatibilities. You can try it here:

http://bertfreudenberg.github.io/SqueakJS/demo/simple.html#document=JSBridge.st

Regarding performance, I'm sure it would be relatively easy to make Amber faster than SqueakJS, since Amber does not have to provide the full Smalltalk semantics (thisContext etc.). In fact, the original SqueakJS bytecode interpreter was much slower than Amber. Only after I added a JIT compiler (which translates Squeak methods to JS functions) did I get better performance.

- Bert -
null <-> nil
wonderful! Impressive work Bert! Hats off to SqueakJS
In reply to this post by Matthias Springer-2
Matthias Springer wrote:
> SqueakJS: 0 tinyBenchmarks '93910491 bytecodes/sec; 1098401 sends/sec'
> Amber: 0 tinyBenchmarks '3072196.6205837172 bytecodes/sec; 393573.3746130031 sends/sec'

They are using typeof comparisons very cleverly to be as fast as possible. Integer >> +, Integer >> - and Integer >> < are all inlined (none of them in Amber, yet), so a typeof check plus inlined JS + / - / < makes them fast.

The big perf hog in Amber is building contexts. When I tried in deploy mode, it was twice as fast (still not better). Profiler FTW: I found two very fundamental functions in the core that took much of the CPU (asReceiver and assert), and I managed to get them to take nearly none in the last commit, by using typeof checks as well. After they were eliminated, Number >> +, Number >> - and Number >> < took the most, so inlining must surely be done.

When I returned to devel mode and ran the profiler, it of course showed that withContext and inContext take their toll; but, surprisingly, the code in the compiled functions that calls them and prepares their parameters also takes a considerable amount of CPU. We should probably change from

  $core.withContext(function ($ctx1) { // the real code }, function ($ctx1) { /* lazy setup */ })

to something else, like

  var $ctx1 = $core.addContext(function ($ctx) { /* lazy setup */ });
  // the real code
  $core.popContext($ctx1);

The problem probably is with creating two functions, each with its outer lexical context set, every time. Or maybe even without lazy setup but with eager setup (the lazy setup was actually there to speed things up: not setting up the local vars etc. in the context object when it very likely won't be used at all; only if an error happens is the setup function called to fill in the contents).

The problem is, $core.withContext does the much-needed try/catch wrapping for error handling. So something like this is needed:

  if (!$core.errorHandlingWrapped) {
      return $core.wrapErrorHandling(thisFunction, this, arguments);
  }
  var $ctx1 = $core.addContext(function ($ctx) { /* lazy setup */ });
  // the real code
  $core.popContext($ctx1);

which is a bit less dense than the original. But all this would only speed up the devel version. Maybe it is more important to make the deploy version run fast.

Herby
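[Editor's note: the typeof trick Herby describes can be illustrated as below. The names `fullSend` and `__plus` are hypothetical, for illustration only; this is neither Amber's nor SqueakJS's actual API. The compiled code tries the cheap JavaScript primitive first and falls back to a generic message send only when an operand is not a plain JS number.]

```javascript
// Hypothetical stand-in for the generic (slow) message-send machinery.
function fullSend(receiver, selector, args) {
    return receiver[selector].apply(receiver, args);
}

// Inlined Integer >> + with a typeof guard: fast path for plain
// JS numbers, generic send for everything else (LargeIntegers,
// Fractions, ...).
function inlinedPlus(a, b) {
    if (typeof a === 'number' && typeof b === 'number') {
        return a + b;                     // inlined JS +
    }
    return fullSend(a, '__plus', [b]);    // full message send
}
```

The same pattern applies to `-` and `<`. The fast path costs only two typeof checks, which JS engines compile down to cheap tag tests, so in the common all-SmallInteger case the arithmetic never enters the send machinery at all.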
In reply to this post by Bert Freudenberg
Hello,
Bert, do you have an estimate of how much performance you gained with your JavaScript JIT?

Looking at the code, it looks like your JIT generates vm.send for each message send. Does that mean you don't compile a Smalltalk message send to a JavaScript function call, except maybe for primitives and bytecodes with type prediction (+, -, ...)?

Can't you improve your JIT with a simulated stack in order to generate direct function calls for message sends targeting methods already compiled to JS (with some Cog-like inline caching)?

Your way is nice because I believe with the labels you add you can stop the execution at any bytecode (and not only at interrupt points such as message sends), but most probably this adds quite some overhead. Right now it looks like the JIT removes only the bytecode decoding overhead.

It's an impressive piece of work anyway. Congratulations.

Clement
In reply to this post by Herby Vojčík
On 25.01.2015, at 20:47, Herby Vojčík <[hidden email]> wrote:
> The big perf hog in Amber is building contexts.

Yep, same in SqueakJS. That's the biggest problem for improving send performance.

> The problem probably is with creating two functions with outer lexical context set each time.

Are you saying Amber is creating a new function for every invocation of a method? Wouldn't that mean the JavaScript JIT could never optimize it, because it is thrown away after the first use?

- Bert -
Bert Freudenberg wrote:
> Are you saying Amber is creating a new function for every invocation of a method? Wouldn't that mean the JavaScript JIT could never optimize it, because it is thrown away after the first use?

No, it is not a new function. It is an inner function, so it is always the same function; just its executing context is always different. JS engines must have this optimized, I would say: creating functions within functions is normal in JS, so inner functions should be cheap to create.
In reply to this post by Clément Béra
(apologies - if someone thinks this is too off-topic for Amber development, please say so, we can easily continue on a Squeak list)
On 25.01.2015, at 21:52, Clément Bera <[hidden email]> wrote:
> Bert do you have an estimation of how much performance you earned with your Javascript JIT ?

SqueakJS does not have a high-performance JIT yet (it does no optimizations at all), but it helps a lot compared to the simple interpreter. Here are some numbers on Chrome's V8:

with JIT: 82,315,112 bytecodes/sec; 902,155 sends/sec
no JIT: 2,775,850 bytecodes/sec; 137,439 sends/sec

(on the same browser Amber reports '2214839.4241417497 bytecodes/sec; 229042.45283018867 sends/sec')

> Looking at the code, it looks like your JIT generates vm.send for each message send. Does it mean you don't compile a smalltalk message send to a javascript function call except maybe for primitives and bytecodes with type prediction (+, -, ...) ?

Exactly. There is no optimization of sends beyond what the pure interpreter does. Every send builds a full context object. The bytecodes check for SmallInteger operands, which are represented by JavaScript numbers instead of a full object (similar to the C VM using tagged oops). If the type check fails, it does a full send instead.

> Can't you improve your JIT with a simulated stack in order to generate direct function calls for message sends targeting methods already compiled to JS (with some cog-like inline caching ?)?

I hope so. In fact, there is a group of students working on that at HPI Potsdam right now. The difficulty is that at all interrupt points we must be able to return control to the browser (I described that in the aforementioned paper). This means we have to exit a possibly deeply nested call chain of functions, and we must be able to resume at the exact same point we left off once the browser calls us again.

> Your way is nice because I believe with the labels you add you can stop the execution at any bytecode (and not only at interrupt points such as message sends)

Almost. Only when I compile for single-stepping (in the Lively interface) does every bytecode get a label and a check of a global "break" flag after each bytecode. In regular execution, a label is only added for jump targets and after sends, and the "break" flag is only checked at interrupt points.

> but most probably this adds quite some overhead.

Yep. A jitted method never directly calls another jitted method. When a send happens, the current PC is stored in the context, a new context is allocated (in vm.send), and control returns to the main loop. The main loop then calls the jitted method for the new context, and after it returns, the previous method is called again. But since its stored PC is not 0, it jumps to the right place (after the send) and continues from there.

> Right now it looks like the JIT removes only the bytecode decoding overhead.

That is precisely what it does, and nothing more. However, distributing the control flow across many more specialized JS functions allows the JavaScript JIT (e.g. V8's Crankshaft) to better optimize the execution. With my JIT, even though it is very simplistic, the code is more distributed and less polymorphic. Excessive polymorphism kills performance because V8 "deoptimizes" an already optimized method if the PICs get too big. E.g. here are the numbers *without* JIT, but for a "cold run", before V8 deoptimization kicks in:

11,494,252 bytecodes/sec; 523,121 sends/sec

This is 3x as fast as the no-JIT numbers above!

Apologies again, but maybe this discussion could help finding bottlenecks in Amber performance, too.

- Bert -
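[Editor's note: the trampoline Bert describes — jitted methods that never call each other directly, but store their PC in the context and return to a main loop — could look roughly like this. All names are hypothetical; this is a sketch of the scheme, not SqueakJS's actual code.]

```javascript
// Each compiled method is a function over its context; a switch on
// ctx.pc lets re-entry jump straight past the send it suspended at.
function callerMethod(ctx, vm) {
    switch (ctx.pc) {
        case 0:
            ctx.pc = 1;                                  // resume point after the send
            vm.stack.push({ pc: 0, method: calleeMethod, caller: ctx });
            return;                                      // back to the main loop
        case 1:
            ctx.result = 'got ' + ctx.sentResult;        // use the send's answer
            vm.stack.pop();                              // method finished
            return;
    }
}

function calleeMethod(ctx, vm) {
    ctx.caller.sentResult = 42;                          // "answer" to the sender
    vm.stack.pop();
}

// The main loop repeatedly (re-)enters whatever context is on top
// of the stack; a suspended method resumes at its stored PC.
function mainLoop(vm) {
    while (vm.stack.length > 0) {
        const ctx = vm.stack[vm.stack.length - 1];
        ctx.method(ctx, vm);
    }
}

const top = { pc: 0, method: callerMethod };
const vm = { stack: [top] };
mainLoop(vm);                                            // top.result is now 'got 42'
```

Because every send unwinds through the main loop, the loop can check an interrupt flag between iterations and return control to the browser at any send, then later resume from the saved PCs exactly where execution left off.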
Oh Bert! This is amongst the most exciting threads ever!
I want to know more! o/