A speedcenter for Squeak

A speedcenter for Squeak

timfelgentreff
Hi,

I sent around a note earlier about a benchmarking tool that we're using internally to track RSqueak/VM performance on each commit. Every time Eliot releases a new set of Cog VMs, I also manually trigger the system to run benchmarks on Cog. (Once we move the proper VM to GitHub, I will set it up so we test each commit on the main development branch and the release branch, too, which will give us very detailed breakdowns.) We wanted to share this setup and the results with the community.

We're collecting results in a Codespeed website (just a frontend to present the data), which we moved to speed.squeak.org today; it is also linked from the squeak.org website (http://squeak.org/codespeed/).

We have some info about the setup on the about page: http://speed.squeak.org/about. On the Changes tab, you can see the most recent results per platform and environment, with details about the machines at the bottom. Note that we calculate all the statistics on the workers themselves and only send the time and standard deviation, so the min and max values you see on the website are bogus.
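
Roughly, the per-worker aggregation looks like this (a minimal Smalltalk sketch, not the actual worker code; 'suite' stands in for a hypothetical benchmark object whose #run method executes one iteration, and only the mean and std dev get sent to Codespeed):

    | times mean stdDev |
    times := (1 to: 10) collect: [:i | [suite run] timeToRun]. "milliseconds per iteration"
    mean := (times inject: 0 into: [:sum :t | sum + t]) / times size.
    stdDev := ((times inject: 0 into: [:sum :t | sum + (t - mean) squared]) / (times size - 1)) sqrt.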

Finally, the code for the workers is also on GitHub (https://github.com/HPI-SWA-Lab/RSqueak-Benchmarking) and the benchmarks are all organized on SqueakSource (http://www.hpi.uni-potsdam.de/hirschfeld/squeaksource/BenchmarkRunner.html). Right now I've just dumped benchmarks from various sources in there, which is why you see the same benchmark implemented multiple times in different ways, and some microbenchmarks don't make much sense as they are. We're happy to get comments, feedback, or updated versions of the benchmarking packages. Updating the benchmarking code is easy, and we hope this setup proves useful enough for the community to warrant continuously updating and extending the set of benchmarks.
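
To give an idea of what is in those packages: a benchmark class looks roughly like this (a sketch with made-up names, in the SMark style, relying on the usual convention that the runner discovers and times every method whose selector starts with 'bench'):

    SMarkSuite subclass: #BenchmarkFibonacci
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'BenchmarkRunner'

    "Selectors prefixed with 'bench' are picked up and timed by the runner."
    BenchmarkFibonacci >> benchRecursion
        self fib: 25

    BenchmarkFibonacci >> fib: n
        n < 2 ifTrue: [^ n].
        ^ (self fib: n - 1) + (self fib: n - 2)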

We are also planning to add more platforms. The setup should make this fairly painless; we just need the dedicated machines. We've been testing the standard Cog/Spur VM on an Ubuntu machine, and today we added a Raspberry Pi 1 that is still churning through the latest Cog and RSqueak/VM commits. We'd like to add a Mac and a Windows box, and maybe SqueakJS and other builds of the Squeak VM, too.

Cheers,
Tim



Re: A speedcenter for Squeak

David T. Lewis
This really looks very useful, and the graphs and trend lines are nice for
visualization.

Would there be any value in adding an interpreter VM as a baseline to show
Cog/Spur/RSqueak compared to a non-optimized VM?

Dave



Re: A speedcenter for Squeak

marcel.taeumel
David T. Lewis wrote
This really looks very useful, and the graphs and trend lines are nice for
visualization.

Would there be any value in adding an interpreter VM as a baseline to show
Cog/Spur/RSqueak compared to a non-optimized VM?

Dave


+1

Best,
Marcel

Re: A speedcenter for Squeak

timfelgentreff
I added the VM today, and then realized I haven't built in a variation point for choosing the image - and I only have a script to build a Spur image right now. So I regret that this wasn't (as I first assumed) a thing of 2 minutes. But I'll get to it.


Re: A speedcenter for Squeak

Stefan Marr-3
In reply to this post by timfelgentreff
Hi Tim:

Out of curiosity, where exactly did you take the GraphSearch and Json benchmarks from?
Are they consistent with the latest versions on https://github.com/smarr/are-we-fast-yet/tree/master/benchmarks/SOM?

Just wondering. Of course it would also be interesting to have a highly optimized Smalltalk like TruffleSOM on there, just to keep people motivated to reach some state-of-the-art performance ;)

Btw, I recently added the Collision Detection and Havlak benchmarks to AWFY. Those are additional larger benchmarks on the level of Richards and DeltaBlue. They should be a little more representative than microbenchmarks.

Best regards
Stefan


Re: A speedcenter for Squeak

David T. Lewis
In reply to this post by timfelgentreff
On Fri, Jun 10, 2016 at 06:54:24AM -0700, timfelgentreff wrote:
> I added the VM today, and then realized I haven't built in a variation point
> for choosing the image - and I only have a script to build a Spur image
> right now. So I regret that this wasn't (as I first assumed) a thing of 2
> minutes. But I'll get to it.
>

Let me know if I can help. Attached is a script for building the VM from
latest SVN sources (the install step is commented out).

Choosing the image might be harder. ckformat can be used to select a VM for an image, but the interpreter VM can only run a V3 image (not Spur), so maybe the comparison is not so meaningful. I have been maintaining a V3 mirror of Squeak trunk (http://build.squeak.org/job/FollowTrunkOnOldV3Image/), but I will not be maintaining it long term (only a few more months or so).
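
For reference, ckformat just reads the image format number from the file header. The same check can be sketched in Smalltalk (rough and untested; the format values are from memory, e.g. 6504 for a 32-bit V3 image with closures, 6521 for 32-bit Spur):

    | file bytes format |
    file := FileStream readOnlyFileNamed: 'squeak.image'.
    bytes := file binary next: 4.
    file close.
    "Interpret as a little-endian 32-bit integer; a big-endian image needs the bytes reversed."
    format := ((bytes at: 4) bitShift: 24) + ((bytes at: 3) bitShift: 16)
        + ((bytes at: 2) bitShift: 8) + (bytes at: 1).
    Transcript show: 'Image format: ', format printString; cr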

On balance, maybe it is better to use a Stack interpreter VM as the baseline. It should be similar enough to the context interpreter VM, and it will run Spur images. It would have been nice to say that "Cog is X times faster than the original interpreter VM", but comparing to the StackInterpreter may be close enough.

Dave


[Attachment: mvm (821 bytes)]

Re: A speedcenter for Squeak

timfelgentreff
In reply to this post by Stefan Marr-3

Hi Stefan,

the benchmarks are from an older version of SMark that was on SmalltalkHub. I will take a look at integrating your newer versions.

About also running TruffleSOM: while it might be interesting for some benchmarks, I am really interested in testing VMs that run the full image (including heartbeat and event sensor) while running those benchmarks.

cheers,
Tim


Re: A speedcenter for Squeak

timfelgentreff
In reply to this post by David T. Lewis

Hi David,

Sure, I can easily add the Stack VM once we have it built on every commit from GitHub :)

cheers,
Tim


Re: A speedcenter for Squeak

Stefan Marr-3
In reply to this post by timfelgentreff
Hi Tim:

> On 11 Jun 2016, at 07:39, Tim Felgentreff <[hidden email]> wrote:
>
> About also running TruffleSOM: while it might be interesting for some benchmarks, I am really interested in testing VMs that run the full image (including heartbeat and event sensor) while running those benchmarks.

The JVM generates safe points, yield points, and all the things that are necessary for Java semantics… (i.e., all the equivalents for heartbeat/event sensor overhead).
Also, such minor details don’t account for more than a few percent overhead on average.

No excuses :-P

Best regards
Stefan


--
Stefan Marr
Johannes Kepler Universität Linz
http://stefan-marr.de/research/





Re: A speedcenter for Squeak

timfelgentreff
Well Stefan, if you want to contribute the code and set up a hook on your end to trigger the benchmarks, I'm happy to include it in the benchmarking repository at https://github.com/HPI-SWA-Lab/RSqueak-Benchmarking/. You'll have to host the binaries somewhere ;)

:P



Re: A speedcenter for Squeak

Bert Freudenberg
In reply to this post by Stefan Marr-3
On Saturday, June 11, 2016, Stefan Marr <[hidden email]> wrote:
Hi Tim:

> On 11 Jun 2016, at 07:39, Tim Felgentreff <[hidden email]> wrote:
>
> About also running TruffleSOM: while it might be interesting for some benchmarks, I am really interested in testing VMs that run the full image (including heartbeat and event sensor) while running those benchmarks.

The JVM generates safe points, yield points, and all the things that are necessary for Java semantics… (i.e., all the equivalents for heartbeat/event sensor overhead).
Also, such minor details don’t account for more than a few percent overhead on average.

No excuses :-P
 
The point is that we want to benchmark a full Smalltalk system, not just a language runtime. There *is* a difference ;)

- Bert -






Re: A speedcenter for Squeak

Stefan Marr-3
Hi:

> On 11 Jun 2016, at 13:01, Bert Freudenberg <[hidden email]> wrote:
>  
> The point is that we want to benchmark a full Smalltalk system, not just a language runtime. There *is* a difference ;)

You measure what the benchmarks exercise.
If that’s the same as in another runtime, then it is still a comparison that can provide useful insights ;)

Best regards
Stefan


--
Stefan Marr
Johannes Kepler Universität Linz
http://stefan-marr.de/research/