ECAP 2 Update...


Re: ECAP 2 Update...

Seth Berman
Hello Simon,

I just saw your statement "It would be interesting to see that in comparison to other programming languages, and also against 9.2 with the coming changes to the JIT compiler."
Did you see the benchmarks I did later in this thread?  Or did you mean something else?

- Seth

On Friday, June 14, 2019 at 3:34:16 PM UTC-4, Simon Franz wrote:
These are really good improvements!
Did you also run your code in VA 8.6.3 or 9.1 and measure the time? It would be interesting to see that in comparison to other programming languages, and also against 9.2 with the coming changes to the JIT compiler.

- Simon

On Monday, June 10, 2019 at 22:52:27 UTC+2, Seth Berman wrote:
Hi Lou,

No problem...fun exercise.

I think the canonical version of Python that everyone uses (CPython) is not that fast...it's basically a slow C interpreter.
However, there are other folks making faster versions of Python...but there are other trade-offs if you choose them over CPython.

I just installed and tried PyPy, which is Python with a JIT compiler, and this was the result:
Python - PyPy 7.1.1-beta0 with MSC v.1910 32-bit - 172ms   (vs 703ms for CPython)
That is clearly a big improvement, though still not as fast as VAST 9.x JIT 32-bit.
Perhaps we should have created a really slow interpreter so we could also claim 3x, 4x and beyond for our first-tier JIT :)

I use Visual Studio for VM development, but I really don't do a lot of C# development. So other than familiarity with the .NET CLR and C# syntax and semantics...I can't say I understand the "feel" of developing in it.
Java I understand better, since I spent a lot of time developing with it.  It's not that I felt unproductive with Java, but I'm certainly not as productive as I am with a live system like Smalltalk.

- Seth


On Monday, June 10, 2019 at 3:16:24 PM UTC-4, Louis LaBrunda wrote:
Hi Seth,

Thanks for doing this.  LOL about the kids watching Shrek 2.

A lot of people think that Python, which is becoming more and more popular, is compiled to machine code and therefore should be fast; it's not, and it isn't.

A lot of people use Ruby on Rails for web work, but if you have a choice for new work I don't see it beating Seaside on Smalltalk.  My oldest son's company uses Ruby on Rails.  From time to time he tells me about changes being made to Ruby to make it look more like Smalltalk.  I think most Smalltalk derivatives left things out (like blocks of code being objects, or some data types not being objects) only to learn later that Smalltalk had it right.

Java and C# look impressive but is development as easy in them as it is in Smalltalk?

Lou



On Monday, June 10, 2019 at 11:33:58 AM UTC-4, Seth Berman wrote:
Hi All,

I was looking more into Go, and the more I looked, the more I felt that result couldn't be right.
Looking further, I had been including startup costs in the Go measurement...which was not fair.
So I went back and made sure all the scripts had a milliseconds timer wrapped around just the code, rather than relying on a command-line timer.
I still need to profile the C version, but Go is now looking more like I think it should.
Python dropped around 50ms once startup costs were excluded.
I updated Ruby and ran it again...it did pretty horribly.  I had been looking at the wrong value from Measure-Command in PowerShell.
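For anyone checking along on the VAST side, here is a minimal sketch of that kind of timing wrapper (the SudokuSolver class and its selectors are hypothetical placeholders, not the attached code):

| puzzles ms |
puzzles := SudokuSolver loadPuzzles.   "hypothetical: answers the benchmark puzzle strings"
ms := Time millisecondsToRun: [puzzles do: [:each | SudokuSolver solve: each]].
Transcript show: 'solve time: ', ms printString, ' ms'; cr.

The point is that only the block passed to millisecondsToRun: is measured, so image startup and script overhead stay out of the number.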

Just to be sure, I'm going to include the Go, Ruby, and Python scripts in case anyone wants to check along with me.
I was trying to do these benchmarks with my kids watching Shrek 2 in the background...clearly a mistake.
Next I'm going to review Java and C# one more time...and I'll try to do C so we get a good optimal baseline.

Java - jdk 11.0.1 x64 - 63ms
C# - Visual Studio 2017 x64 - ~67ms  (lots of variance, between 40 and 140ms...didn't bother to see why)
Go - go1.12.5 windows/amd64 - 15ms
Ruby - cruby 2.1.5p273 x64 - 2437ms
Python - cpython 3.6.2 x64 - 703ms

VAST 9.x 32-bit (JIT) - 128ms
VAST 9.x 64-bit (JIT) - 156ms
VAST 9.x 32-bit (Interpreter) - 250ms
VAST 9.x 64-bit (Interpreter) - 266ms

On Sunday, June 9, 2019 at 11:11:38 PM UTC-4, Seth Berman wrote:
Hi All,

OK, for what it's worth, I ported the sudoku solver from the repository below (see attached) and then ran what is in the repo for Java, C#, Ruby, Python, and Go.
The port I did is mostly from the Julia version (sudoku_v1.jl) because that also uses 1-based indexing.
https://github.com/attractivechaos/plb

The results are pretty much what I expected (except I had no idea where Go would end up).  I figured Java and C# would be further ahead than they ended up being, so I was triple-checking whether I was doing anything wrong.  I'm still not sure; both were built in release mode, and if I did miss anything, I don't know what it is.  No big surprise for Python and Ruby.

I've attached a picture of the cachegrind profile of a partial sudoku run, just so I could get a sense of what kind of work we are doing...though the source code pretty much explains it.
Looking at the solver code, the areas of the VM we should be thinking about are:
- Iteration (i.e. 1 to: 374 do: [:i | ...]) and the efficiency of compare/branch bytecodes like BCincTempJumpLessEqualTOSB
- at: and at:put: for byte, word and pointer collections (i.e. ByteArray, String, SudokuWordArray, Array)
- Basic push/pop of temps and ivars (which is pretty much always the case).
- Primitive send machinery

What this isn't so good at showing is normal message-send machinery...most of the work happens in a few methods, in bytecodes and primitive sends.
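To make that concrete, here is a tiny sketch (just an illustration, not the actual solver code) of the sort of inner loop the profile is dominated by: an indexed loop doing at:/at:put: on a byte collection with simple arithmetic in between:

| counts |
counts := ByteArray new: 374.
1 to: 374 do: [:i |
    counts at: i put: (((counts at: i) + 1) bitAnd: 16rFF)].

Almost all the time in something like this goes to the loop bytecodes, the indexed access primitives, and temp pushes/pops rather than to full message-send dispatch.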

Results:
Machine - Intel(R) Core(TM) i7-4910MQ CPU @ 2.90GHz / 32 GB RAM / Windows 10 64-bit

Java - jdk 11.0.1 x64 - 63ms
C# - Visual Studio 2017 x64 - ~67ms  (lots of variance, between 40 and 140ms...didn't bother to see why)
Go - go1.12.5 windows/amd64 - 404ms
Ruby - cruby 2.1.5p273 x64 - 713ms
Python - cpython 3.6.2 x64 - 758ms

VAST 9.x 32-bit (JIT) - 128ms
VAST 9.x 64-bit (JIT) - 156ms
VAST 9.x 32-bit (Interpreter) - 250ms
VAST 9.x 64-bit (Interpreter) - 266ms

- Seth

On Saturday, June 8, 2019 at 9:30:46 AM UTC-4, Seth Berman wrote:
Hi Lou,

Well, that was kind of my point.  I'm sure there is a reasonable benchmark.  All we'll be showing is how the machinery involved in that benchmark compares to others.
It's kind of like cars, where we're comparing pounds of boost in turbochargers.  At the end of the day, "my turbocharger is bigger than your turbocharger" is interesting and a great claim we can all be proud of.  But if the turbocharger is mated to an in-line 2-cylinder on a vehicle weighing 5000 lbs...then it becomes less interesting.  I'm not saying we're a 2-cylinder tank, but it's important to set expectations about what we're going to be excited about when we have a fantastic showing.
But like I said, I’ll take a look.


Re: ECAP 2 Update...

Simon Franz
Hi Seth, 
I wasn't sure whether the noted performance of 9.x (interpreter) is the same speed as VA 8.6.3 (on 32-bit, of course). Is there also a speed increase from VA 8.6.x to VA 9.x that might be visible in a benchmark?
VAST 9.x 32-bit (Interpreter) - 250ms
VAST 9.x 64-bit (Interpreter) - 266ms

I've seen the benchmarks and I'm really excited about the new JIT compiler :-)

- Simon
 



Re: ECAP 2 Update...

Wayne Johnston
Very much looking forward to trying 9.2.
Is there a hint of what the contents of the migration guide will be for this release?
I thought I'd look at the web site - it has been down for at least a couple of hours.


Re: ECAP 2 Update...

Seth Berman
In reply to this post by Simon Franz
Hi Simon,

The basic bytecode processing engine and message-send speed of 9.x 32-bit (interpreter) is not as fast as that of 8.6.3 32-bit (JIT).
That will change with 9.2 32-bit (JIT), which will make both of these at least as fast.

Numeric operations and primitive code are much faster in 9.x 32-bit (interpreter) than in 8.6.3 32-bit (JIT).
9.2 32-bit (JIT) makes this even faster.
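If you want to see that difference on your own images, a trivial numeric micro-benchmark along these lines (just an illustration, not an official benchmark) run in both 8.6.3 and 9.x should make it visible:

| ms |
ms := Time millisecondsToRun: [
    1 to: 1000000 do: [:i | i * 3 + (i // 7)]].   "plain SmallInteger arithmetic in a tight loop"
Transcript show: 'numeric loop: ', ms printString, ' ms'; cr.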

Memory management (allocation/GC) is faster in 9.x in general.
9.2 32-bit (JIT) allocation will be faster still; the GC algorithms will be the same as in 9.x.

The Linux abt script always disabled the JIT (-mcd) because of some issue with the IBM JIT in a development environment (see the abt script at the bottom).
In 9.2, we remove that conditional...the JIT is always enabled in all contexts on all platforms.

At the ESUG and FAST conferences this year, Alexander Mitin (our lead VM engineer) will be presenting all the work that has been done over the last 8 months leading up to the 9.2 JIT, along with some benchmarks.
 



Re: ECAP 2 Update...

Seth Berman
In reply to this post by Wayne Johnston
Hi Wayne,

The site should be back up.
No, we don't have migration guides updated for ECAP previews.
But I actually can't think of many migration items off the top of my head.
The JIT VM, believe it or not, is pretty much just a copy/replace of esvm40.dll (the VM) and esvm40.bin (native templates) into an existing VA install.
I think I have copied them into some old 7.0 installs before and that worked (but it's not tested).
Obviously, you should make a backup of anything you replace :)  Just saying...

- Seth



Re: ECAP 2 Update...

Simon Franz
In reply to this post by Seth Berman
Thanks, Seth, this is exactly the info I wanted to know. I'm very curious about what's coming in 9.2.

