Hi,
I propose collecting the existing benchmarks and putting them into the VMMaker repository.

The idea is simple: once we have an automated build server, we can run the benchmarks after each successful build and log the numbers to a file, so that we can compare them between builds or between different VMs and keep track of the history, as many other projects do.

--
Best regards,
Igor Stasenko AKA sig.
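The log-and-compare workflow Igor describes could look roughly like the minimal sketch below (in Python rather than Smalltalk, purely for illustration; the build ids, benchmark names, and record format are all invented, since the thread does not fix any concrete format):

```python
# Hypothetical sketch: append one record per benchmark run, then compare the
# timings of the same benchmark across two builds. Nothing here comes from
# VMMaker itself -- names and the record layout are made up for illustration.

def log_result(log, build_id, benchmark, seconds):
    """Record one benchmark timing for a given build."""
    log.append({"build": build_id, "benchmark": benchmark, "seconds": seconds})

def compare(log, benchmark, old_build, new_build):
    """Relative runtime change between two builds (negative means faster)."""
    times = {r["build"]: r["seconds"] for r in log if r["benchmark"] == benchmark}
    return (times[new_build] - times[old_build]) / times[old_build]

history = []
log_result(history, "build-101", "tinyBenchmarks", 2.00)
log_result(history, "build-102", "tinyBenchmarks", 1.80)
# compare(...) -> -0.10, i.e. build-102 is 10% faster on this benchmark
```

In a real setup the `history` list would live in a file per build so the numbers survive between runs.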
Hi Igor:
On 04 Jan 2011, at 19:41, Igor Stasenko wrote:
> i proposing to collect and put existing benchmarks into VMMaker repository.

It would be great if they were kept somewhere that is not VM specific. I also regularly need a good set of maintained benchmarks for the RoarVM.

In case someone is interested, the parallel versions of testCompiler, tinyBenchmarks, and the integer bucket sort from the NAS Parallel Benchmarks should be somewhere on my hard disk.

Best regards
Stefan

--
Stefan Marr
Software Languages Lab
Vrije Universiteit Brussel
Pleinlaan 2 / B-1050 Brussels / Belgium
http://soft.vub.ac.be/~smarr
Phone: +32 2 629 2974
Fax: +32 2 629 3525
I would love to have that kind of benchmark suite too, mostly to know how much overhead I add with my VM changes ;)

I don't care which repo; either is fine with me.

On Tue, Jan 4, 2011 at 7:41 PM, Igor Stasenko <[hidden email]> wrote:
> Hi,
+1

Let us learn from the PyPy guys.

> i proposing to collect and put existing benchmarks into VMMaker repository.
>
> The idea is simple:
> when we will have an automated build server, then we could run benchmarks
> on a successful build and log the numbers to a file(s), so we could
> compare them between different builds or different VMs,
> and keep track of history, like many other projects do.
>
> --
> Best regards,
> Igor Stasenko AKA sig.
Hi:
On 04 Jan 2011, at 21:54, stephane ducasse wrote:
> + 1
> Let us learn from the pypy guys.

The speed center they use is a nice thing. However, the Chrome browser people have real performance regression tests which give unit-test-like feedback. Their infrastructure is based on buildbot, though; maybe there is similar stuff for Hudson out there.

Best regards
Stefan
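The unit-test-like regression feedback Stefan mentions could be sketched as follows; the baseline value and the 10% tolerance are assumptions for illustration, not anything taken from the Chrome infrastructure:

```python
# Hypothetical sketch of a unit-test-style performance regression check:
# a benchmark run "fails like a test" when it is slower than a stored
# baseline by more than a tolerance. Baseline and tolerance are invented.

BASELINES = {"tinyBenchmarks": 2.0}  # seconds, from an earlier accepted build
TOLERANCE = 0.10                      # allow up to 10% slowdown before failing

def check_regression(benchmark, seconds):
    """True if the run is within tolerance, False if it counts as a regression."""
    baseline = BASELINES[benchmark]
    return seconds <= baseline * (1 + TOLERANCE)
```

A CI job would call this after each benchmark run and mark the build red on a `False`, exactly like a failing unit test.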
On 4 January 2011 20:00, Stefan Marr <[hidden email]> wrote:
> Hi Igor:
>
> On 04 Jan 2011, at 19:41, Igor Stasenko wrote:
>> i proposing to collect and put existing benchmarks into VMMaker repository.
>
> Would be great if they would be kept somewhere not to VM specific.

Okay, how about creating a separate VMBenchmarks repository and putting a VMBenchmarks package there?

> I also have always the need for a good set of maintained benchmarks for the RoarVM.
>
> In case someone is interested, the parallel versions of testCompiler, tinyBenchmarks, and the integer bucket sort from the NAS Parallel Benchmarks should be somewhere on my hard disk.

--
Best regards,
Igor Stasenko AKA sig.
Hi Igor:
On 04 Jan 2011, at 22:40, Igor Stasenko wrote:
> Okay, how about creating a separate
> VMBenchmarks repository
> and putting VMBenchmarks package there?

Sure, sounds good. There are also the Systems benchmarks at http://www.squeaksource.com/PharoBenchmarks.

One question is what to include in such a benchmark suite; another is how to design the benchmark harness. I need something which is scriptable from the command line (and if you are going to automate this, you probably do, too). My harness registers itself in the startup list and then looks at the command-line arguments to choose a benchmark class.

Best regards
Stefan
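The harness design Stefan describes (pick a benchmark class via a command-line argument) might be sketched like this; his actual harness is Smalltalk-based and hooks the image startup list, so the class names, registry, and argument convention below are all hypothetical:

```python
import sys

# Hypothetical sketch of a command-line-selectable benchmark harness:
# a registry maps a name given on the command line to a benchmark class
# whose run() method is invoked. Every name here is invented.

class BubbleSortBench:
    def run(self):
        data = list(range(200, 0, -1))   # worst-case input: reverse-sorted
        for i in range(len(data)):
            for j in range(len(data) - 1 - i):
                if data[j] > data[j + 1]:
                    data[j], data[j + 1] = data[j + 1], data[j]
        return data[0]                   # smallest element after sorting

REGISTRY = {"bubble": BubbleSortBench}

def main(argv):
    bench = REGISTRY[argv[1]]()          # argv[1] names the benchmark to run
    return bench.run()

if __name__ == "__main__":
    print(main(sys.argv))
```

Invoked as e.g. `python harness.py bubble`, which mirrors how a headless image could be started with the benchmark name on the VM command line.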