A little more info on V8.

I talked briefly with Lars Bak and Robert Griesemer today (both are on the V8 team, and Lars is the lead) and got a little bit of their perspective on using V8 for other languages. As was to be expected, the VM is targeted to JavaScript semantics, and given the gnarliness of those semantics, there are a few caveats to think about.

* V8 will get faster as it matures, of course, however:
* There will be issues around things like immediate object semantics, which don't exactly match up with any other language. Yucky JavaScript!
* A bigger long-term performance issue is that given the dynamic nature of JavaScript objects (i.e. adding/removing slots on the fly) there apparently isn't any way around adding an additional indirection to deal with the object size changing dynamically. That is something that will just have to be lived with. I was hoping they had some magic there, but apparently not.

I'm sure that these sorts of things can be worked around, but they do mean that V8 will never in its pure form quite reach the pinnacle of theoretical performance possible for a VM targeted specifically to Smalltalk etc. So it won't be as fast as Strongtalk, although it may get fairly close to VisualWorks performance.

Nonetheless, I still think it or some derivative will quickly become the dominant dynamic language VM, for the following reasons:

* Given who the developers are, and with Google behind it, it will be the fastest JavaScript VM for a long time to come.
* For the same reason, it will be reliable and secure (as much as it can be, anyway; nothing is perfect).
* It will be supported on the three major platforms (Windows, Linux, Mac).
* It can be used with other browsers, so I'm sure it will be ported to Firefox (if only as an option). Some or all of the other browsers may also adopt it, given that it will have a very hard-to-overcome performance advantage (these sorts of VMs can't be pulled out of a hat). Although MS and maybe Safari may have too much of a Not Invented Here problem with it, as well as standards war issues.
* Those things, plus the other architectural advantages it brings, will make it a primary target for serious web app development, esp. Google apps.
* So it will be ubiquitous.

So it will be an irresistible platform for other dynamic languages, even if they could theoretically run a bit faster on a custom VM. Remember it will still be a lot easier to run other dynamic languages on JavaScript than it is to run them on Java, since at least JavaScript is fully dynamic, unlike Java.

And remember, the bottom line is that it is a clean, supported, state-of-the-art multi-threaded design that is fully open-source. So as a last resort, there is always FORK!

-Dave
> state-of-the-art multi-threaded design
Can you say more about this? I can't find any information about V8's thread handling and concurrency options.

Cheers,
  - Andreas

David Griswold wrote:
> A little more info on V8.
> [...]
Hi Andreas,
No, I haven't had time yet to look at the google code site for V8 (I'm leaving early tomorrow morning on vacation), and as you may imagine they are too busy to talk much right now. I guess I should have said multi-process, since I was referring to the ability to run many separate, fully independent processes from a shared VM, which probably translates to some form of concurrency inside the VM.

But as I think you are pointing out, that isn't the same as JavaScript in-language multi-threading or concurrency; I don't know of anything new there. If they wanted to do fancy concurrency, they certainly know how, since we did all that for the JVM (unlike Strongtalk, which is still in 'yield' land), but of course that doesn't mean they did. I believe Lars will be giving a more detailed talk on the architecture at the JAOO conference in Denmark on Sept 30th.

I guess to be more precise, I think the sort of multi-process architecture they are using will be very well suited to apps that want to start, stop, and isolate lots of VM instances compactly and quickly. That sounds great for servers, and for desktops too, where each app can run (and maybe garbage collect?) independently in lightweight VM instances.

To me, that sounds quite a bit more OS-like than existing VMs. If each app runs in a different process, then they can't block each other during i/o or callouts (or whatever the plugin equivalent is), and they will be run concurrently on multi-core processors, which are capabilities I normally associate with a multi-threaded VM. But data sharing and communication is another matter, and I don't have an answer there.

There's going to be a lot to find out!

-Dave

On Tue, Sep 2, 2008 at 11:15 PM, Andreas Raab <[hidden email]> wrote:
> Can you say more about this? I can't find any information about V8's
> thread handling and concurrency options.
I read that Microsoft's new IE8 starts a new thread per tab, and that it was using more threads and memory than Windows XP. It was OK on multiprocessor machines but a pig on a single CPU.

Karl

On 9/3/08, David Griswold <[hidden email]> wrote:
> Hi Andreas,
>
> No, I haven't had time yet to look at the google code site for V8 [...]
In reply to this post by David Griswold-3
David Griswold <[hidden email]> wrote...

There's no doubt that V8 puts us on a new plateau, that the V8 team is good, and that the further attention being put on multiprocessing and security is critical. However, VMs are not black magic, and you might be interested to read some comparisons with TraceMonkey...

One thing is clear: JavaScript *is* the assembly language of the Internet, at least for a few years now.
In reply to this post by Andreas.Raab
After sleeping on it, I woke up and realized that V8 is almost certainly not multi-threaded at the client code level (within a single VM instance), since they have that indirection. The indirection sounds more like a traditional object table, like VisualWorks, and I don't see how they could possibly have made something like that multi-threaded, since performance would critically depend on caching the body pointer within a method activation. But they can't do that, because another method might be adding or removing properties while the method is running, and that is way too fine-grained to be doing any kind of check.
Having an object-table indirection isn't as bad for performance as it sounds, because you can cache the body pointer within a method activation, which I seem to remember L. Peter Deutsch pointing out a long time ago. But if the average method is very small, it will have very few instvar/property accesses, which reduces the effectiveness of the caching (although if there are *no* property accesses you can skip the indirection altogether, which helps compensate). So that is something that can benefit tremendously from inlining à la Strongtalk, which merges activations and thus increases the possible scope of the cached pointer. But if they are not inlining, the performance sounds like it would be more restricted to a sort of VisualWorks kind of range (which of course is not bad, but wouldn't threaten Strongtalk).

Plus, the additional word for the indirection is one more word of overhead in the header, so I don't see how they could get less than a two-word header. I thought Lars told me it was one word, but I don't see how they could do that; maybe he wasn't counting the indirection as part of the header.

-Dave
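Dave's caching point is easy to model in a few lines of JavaScript. This is a toy sketch of an object-table indirection (the names and layout are invented for illustration; it is not V8's or VisualWorks' actual scheme):

```javascript
// An "oop" is an index into the table; the table entry points at the
// object's body. Growing the object installs a new body and repoints
// the table entry, so the oop itself never changes.
const objectTable = [];

function allocate(nSlots) {
  const oop = objectTable.length;
  objectTable.push(new Array(nSlots).fill(0));
  return oop;
}

// Every slot access pays one extra load through the table...
function slotAt(oop, i) {
  return objectTable[oop][i];
}

// ...unless a method activation caches the body pointer once:
function sumSlots(oop) {
  const body = objectTable[oop]; // cached for the whole activation
  let sum = 0;
  for (let i = 0; i < body.length; i++) sum += body[i];
  return sum;
}

// Adding properties allocates a bigger body. Any body pointer cached
// before this point is now stale -- which is why caching across code
// that might resize the object (e.g. another thread) is unsafe.
function grow(oop, extraSlots) {
  objectTable[oop] = objectTable[oop].concat(new Array(extraSlots).fill(0));
}
```

Inlining à la Strongtalk widens the span over which the cached `body` pointer stays valid, which is exactly the benefit described above.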
In reply to this post by Dan Ingalls
Dan wrote:
> One thing is clear: JavaScript *is* the assembly language of the
> Internet, at least for a few years now.

For people working at assembly (low) level, a very good debugger is the most important tool. Any reference to a good debugger for JavaScript will be appreciated.

Are the reflective capabilities of the V8 platform enough to build a Smalltalk-like debugger? (e.g. reflection on, and change of, the running stack, with change of object shape & behavior during debugging)

Ale.
In reply to this post by David Griswold-3
On Wed, Sep 3, 2008 at 8:05 AM, David Griswold <[hidden email]> wrote:
Look at include/v8.h which explains the concurrency model which sounds very similar to Strongtalk:

 * Multiple threads in V8 are allowed, but only one thread at a time
 * is allowed to use V8. The definition of 'using V8' includes
 * accessing handles or holding onto object pointers obtained from V8
 * handles. It is up to the user of V8 to ensure (perhaps with
 * locking) that this constraint is not violated.
 *
 * If you wish to start using V8 in a thread you can do this by constructing
 * a v8::Locker object. After the code using V8 has completed for the
 * current thread you can call the destructor. This can be combined
 * with C++ scope-based construction as follows:
 *
 * \code
 * ...
 * {
 *   v8::Locker locker;
 *   ...
 *   // Code using V8 goes here.
 *   ...
 * }  // Destructor called here
 * \endcode
 *
 * If you wish to stop using V8 in a thread A you can do this by either
 * by destroying the v8::Locker object as above or by constructing a
 * v8::Unlocker object:
 *
 * \code
 * {
 *   v8::Unlocker unlocker;
 *   ...
 *   // Code not using V8 goes here while V8 can run in another thread.
 *   ...
 * }  // Destructor called here.
 * \endcode
 *
 * The Unlocker object is intended for use in a long-running callback
 * from V8, where you want to release the V8 lock for other threads to
 * use.
 *
 * The v8::Locker is a recursive lock. That is, you can lock more than
 * once in a given thread. This can be useful if you have code that can
 * be called either from code that holds the lock or from code that does
 * not. The Unlocker is not recursive so you can not have several
 * Unlockers on the stack at once, and you can not use an Unlocker in a
 * thread that is not inside a Locker's scope.
 *
 * An unlocker will unlock several lockers if it has to and reinstate
 * the correct depth of locking on its destruction. eg.:
 *
 * \code
 * // V8 not locked.
 * {
 *   v8::Locker locker;
 *   // V8 locked.
 *   {
 *     v8::Locker another_locker;
 *     // V8 still locked (2 levels).
 *     {
 *       v8::Unlocker unlocker;
 *       // V8 not locked.
 *     }
 *     // V8 locked again (2 levels).
 *   }
 *   // V8 still locked (1 level).
 * }
 * // V8 Now no longer locked.
 * \endcode

and

 * Start preemption.
 *
 * When preemption is started, a timer is fired every n milli seconds
 * that will switch between multiple threads that are in contention
 * for the V8 lock.
In reply to this post by Alejandro F. Reimondo
On Wed, Sep 3, 2008 at 10:31 AM, Alejandro F. Reimondo <[hidden email]> wrote:
> Any reference to a good debugger for Javascript will be appreciated.
> Are the reflexive capabilities of the V8 platform enough to build
> a smalltalk-like debugger?

It seems to use mirrors for reflection, and there is a FrameMirror which provides introspection into stack frames. But whether you can modify the stack, I'm not sure.

Avi
In reply to this post by Eliot Miranda-2
2008/9/3 Eliot Miranda <[hidden email]>:
> Look at include/v8.h which explains the concurrency model which sounds very
> similar to Strongtalk:
> [ snip ]

This indicates that the VM is written with no concurrency in mind. They did the simplest possible thing: use a global lock so that VM functions can be called from many threads, but only a single thread can use the VM at any point in time.
--
Best regards,
Igor Stasenko AKA sig.
In reply to this post by David Griswold-3
On Wed, Sep 3, 2008 at 7:56 AM, Dan Ingalls <[hidden email]> wrote:
Yes, it's becoming clear that V8 doesn't do anything radically new or magical, and that it was specifically designed for JavaScript, not as any kind of universal dynamic language VM. I'm especially disappointed by Eliot's observation that it doesn't have a bytecode intermediate form, although they may do mixed-mode execution by first interpreting the AST. I'm not sure that the TraceMonkey benchmarks are very definitive, but at the least it seems like V8 isn't blowing everything else away.

But you are right; at least now that there are multiple fast JavaScript implementations, a lot more stuff will target it.

-Dave
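The mixed-mode idea mentioned above, interpreting the AST directly rather than lowering to bytecode first, can be sketched in a few lines (the node shapes and names here are invented for the sketch; they are not V8's actual representation):

```javascript
// A minimal AST-walking evaluator: no bytecode, just recursive
// dispatch on (invented) node types.
function evalNode(node, env) {
  switch (node.type) {
    case "num": return node.value;
    case "var": return env[node.name];
    case "add": return evalNode(node.left, env) + evalNode(node.right, env);
    default:    throw new Error("unknown node: " + node.type);
  }
}

// (x + 2) with x bound to 40:
const ast = {
  type: "add",
  left:  { type: "var", name: "x" },
  right: { type: "num", value: 2 },
};
// evalNode(ast, { x: 40 }) → 42
```

A first tier like this trades peak speed for zero compile latency; a compiler can then replace hot methods with machine code.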
On Fri, Sep 5, 2008 at 12:17 PM, David Griswold
<[hidden email]> wrote:
> But you are right; at least now that there are multiple fast JavaScript
> implementations, a lot more stuff will target it.

One interesting (if odd) just-released language that targets JavaScript is Objective-J: http://cappuccino.org/ . It's a near clone of Objective-C (only without the C) that compiles to JavaScript on the fly in the browser. For example:

import <Foundation/CPString.j>

@implementation CPString (Reversing)

- (CPString)reverse
{
    var reversedString = "",
        index = [self length];

    while (index--)
        reversedString += [self characterAtIndex: index];

    return reversedString;
}

@end

I suppose that's as close to Smalltalk running on V8 as we're likely to see for a while...

Avi
Avi Bryant wrote:
> I suppose that's as close to Smalltalk running on V8 as we're likely
> to see for a while...

How about this?

http://www.squeaksource.com/ST2JS.html

It would be amazingly cool if someone could translate tinyBenchmarks through it (or some other benchmark) and see how that comes out ;-)

Cheers,
  - Andreas
On 05.09.2008, at 22:25, Andreas Raab wrote:
> How about this?
>
> http://www.squeaksource.com/ST2JS.html
> [...]

You are certainly aware of

http://www.cs.ucla.edu/~awarth/ometa/ometa-js

- Bert -
In reply to this post by David Griswold-3
On Fri, Sep 5, 2008 at 3:17 PM, David Griswold <[hidden email]> wrote:
Why is that a bad thing? I actually thought that was one of the most interesting aspects. Bytecodes can give you a concise, portable format, but you could also get that by compressing or otherwise condensing source code (and I guess one way of condensing is to map to bytecode). I'm not saying bytecodes wouldn't be desirable, but there's something appealing (to me) about a direct translation from source to machine code, and I'm curious what other advantages bytecodes might have.

Also, in a really pure OO VM and language, what would the bytecode set reduce to? Three instructions: push, pop and send?

- Stephen
In reply to this post by Bert Freudenberg
Bert Freudenberg wrote:
> You are certainly aware of
>
> http://www.cs.ucla.edu/~awarth/ometa/ometa-js

Oh, of course you are right! (Alex just sent his paper around too ;-) I'll give this a whirl with Chrome; this should be great fun!

- A.
In reply to this post by Avi Bryant-2
On Fri, Sep 5, 2008 at 4:04 PM, Avi Bryant <[hidden email]> wrote:
What is the point of Objective-J? I looked into it a while back and didn't get it. The only advantage I could imagine was being able to take some Objective-C code and readily port it to Objective-J. And perhaps the familiarity of the syntax to people who already know Objective-C is worth something. But in most respects, Objective-C is inferior to JavaScript as far as I can tell (for example, Objective-C lacks closures).

- Stephen
On 5-Sep-08, at 1:38 PM, Stephen Pair wrote:
> What is the point of Objective-J? I looked into it a while back and
> didn't get it. [...]

I've never used Objective-J, so I don't know for sure, but one thing that I find attractive is that it introduces message sends. It's easy to forget, but object.doSomething() is *not* a message send; it's a property access followed by a function call. This causes real problems; see, for example, all the advice against modifying the Object prototype, because it renders the for...in construct useless. If Objective-J solves that problem, it's valuable.

Colin
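The point about modifying the Object prototype can be demonstrated in a few lines of plain JavaScript: since obj.f() is a property lookup followed by a call, anything assigned to Object.prototype becomes visible as ordinary data to every for...in loop:

```javascript
// Extend Object.prototype -- the widely discouraged practice in question.
Object.prototype.greet = function () { return "hi"; };

const point = { x: 1, y: 2 };

// for...in walks own properties AND enumerable inherited ones,
// so the inherited 'greet' shows up next to the data:
const keys = [];
for (const k in point) keys.push(k);
// keys is ["x", "y", "greet"]

// The usual workaround is a hasOwnProperty guard on every loop:
const ownKeys = [];
for (const k in point) {
  if (Object.prototype.hasOwnProperty.call(point, k)) ownKeys.push(k);
}
// ownKeys is ["x", "y"]

delete Object.prototype.greet; // undo the global damage
```

A real message send dispatches through the receiver without the method becoming visible as a plain enumerable property, which is what Objective-J's bracket syntax restores.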
In reply to this post by Stephen Pair
On Fri, Sep 5, 2008 at 1:30 PM, Stephen Pair <[hidden email]> wrote:
push arg/temp
pop-store temp
push literal
push self
pop
dup
return top
block return top
send
send super/outer et al
push inst var (still need this even though it is only used in accessors)
pop-store inst var (ditto)

plus some form of closure support, e.g.:

create block
push new array
push non-local temp
pop-store non-local temp

which is not much fewer than there are now. Don't confuse encoding with semantics. My Squeak compiler (heavily derivative of the current Squeak compiler) currently has 34 opcodes distributed over 253 bytecodes. Of these, 7 are for optimizations other than inlining blocks, so they reduce to 27. That's essentially twice as many as the list above, which you get by dropping direct access to literals.

The number of opcodes for a pure OO language is around 15, slightly more than 3 :)
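A toy dispatch loop over a send-based opcode set like the one listed above might look like this (the opcode names and the encoding are invented for the sketch; they are not Squeak's actual bytecodes):

```javascript
// Each instruction is [opcode, operand, argCount]; a method is an
// array of instructions executed against an operand stack.
function interpret(method, self) {
  const stack = [];
  for (const [op, arg, argc] of method) {
    switch (op) {
      case "pushSelf":    stack.push(self); break;
      case "pushLiteral": stack.push(arg); break;
      case "send": {
        const args = stack.splice(stack.length - argc, argc);
        const receiver = stack.pop();
        stack.push(receiver[arg](...args)); // dispatch by selector name
        break;
      }
      case "returnTop":   return stack.pop();
      default: throw new Error("unknown opcode: " + op);
    }
  }
}

// "self plus: 4" where self wraps the number 3:
const three = { value: 3, plus(other) { return this.value + other; } };
const bytecodes = [
  ["pushSelf"],
  ["pushLiteral", 4],
  ["send", "plus", 1],
  ["returnTop"],
];
// interpret(bytecodes, three) → 7
```

Almost everything reduces to push and send; the long tail in real bytecode sets is encoding compactness and inlined control flow, not extra semantics.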
In reply to this post by Stephen Pair
It would be very interesting to compare Smee's performance to this new generation. I'm also thinking that the hidden-class strategy could be applied to Smee to make it incredibly fast. I'm not sure how this helps, but it is very interesting...

On Sep 5, 2008, at 3:38 PM, "Stephen Pair" <[hidden email]> wrote: