"0 tinyBenchmarks" results

"0 tinyBenchmarks" results

Michael Haupt-3
Dear all,

could someone please give me some insight on the meanings of the
results obtained from running "0 tinyBenchmarks"?

It reports results for bytecodes per second, and for sends per second.
Those should, if I understand that correctly, correlate in that a
larger bc/sec value should imply a larger sends/sec value.

That is not the case; for example, I have obtained the following
(format: [bc/sec, sends/sec]):

[20565552,1355113]
[20901371,1352909]

Apparently, the bytecodes/sec rate is larger in the second result, but
the sends/sec rate is about the same.

BTW the results were obtained running Dan Ingalls' SqueakOnJava on a
Java 5 VM, Linux/AMD64.

Best,

Michael

Re: "0 tinyBenchmarks" results

Andreas.Raab
Michael Haupt wrote:
> could someone please give me some insight on the meanings of the
> results obtained from running "0 tinyBenchmarks"?

tinyBenchmarks are micro-benchmarks measuring the speed of bytecode
execution and message sending. They have no meaning other than to
compare VMs; i.e., it is not valid to draw any larger conclusions from
particular tinyBenchmarks results.
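For context: the bytecodes/sec figure comes from a tight, bytecode-bound
arithmetic loop, while the sends/sec figure comes from the naive recursive
Fibonacci (#benchFib), which is dominated by message sends. Roughly, the
report is put together like this (a simplified sketch of the
Integer>>tinyBenchmarks method as found in Squeak images of that era;
constants and comments are from memory, so check your image for the exact
code):

    tinyBenchmarks
        "Answer a report string such as
         '20565552 bytecodes/sec; 1355113 sends/sec'."
        | t1 t2 r n1 n2 |
        n1 := 1.
        [t1 := Time millisecondsToRun: [n1 benchmark].
         t1 < 1000] whileTrue: [n1 := n1 * 2].
        "#benchmark is the bytecode-bound loop; keep doubling n1 until one
         run takes at least a second, so the timing is meaningful"

        n2 := 28.
        [t2 := Time millisecondsToRun: [r := n2 benchFib].
         t2 < 1000] whileTrue: [n2 := n2 + 1].
        "#benchFib is send-bound; its return value approximates the number
         of sends performed"

        ^ ((n1 * 500000 * 1000) // t1) printString, ' bytecodes/sec; ',
          ((r * 1000) // t2) printString, ' sends/sec'

Since the two loops are calibrated and timed independently, any disturbance
(GC, paging, or JIT warm-up on a Java-hosted VM) can move the two numbers
in different directions.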

> It reports results for bytecodes per second, and for sends per second.
> Those should, if I understand that correctly, correlate in that a
> larger bc/sec value should imply a larger sends/sec value.

Usually, yes this is the case. However...

> That is not the case; for example, I have obtained the following
> (format: [bc/sec, sends/sec]):
>
> [20565552,1355113]
> [20901371,1352909]
>
> Apparently, the bytecodes/sec rate is larger in the second result, but
> the sends/sec rate is about the same.

... these results are *way* too close to be compared meaningfully.
Benchmark results vary with the load on the machine, and just having
your email client check for mail in the background, or some memory
swapping, would easily explain the differences you see above.
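If you want a feel for the noise, just evaluate the benchmark several
times in a row and look at the spread; only differences well outside that
spread mean anything. A minimal do-it, assuming a Transcript is open:

    | results |
    "Run the benchmark ten times and print each report string"
    results := (1 to: 10) collect: [:i | 0 tinyBenchmarks].
    results do: [:each | Transcript show: each; cr].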

Cheers,
   - Andreas


Re: "0 tinyBenchmarks" results

stéphane ducasse-2
In reply to this post by Michael Haupt-3
Michael, where can we find the sources of SqueakOnJava?

On 9 Aug 06, at 07:17, Michael Haupt wrote:

> Dear all,
>
> could someone please give me some insight on the meanings of the
> results obtained from running "0 tinyBenchmarks"?
>
> It reports results for bytecodes per second, and for sends per second.
> Those should, if I understand that correctly, correlate in that a
> larger bc/sec value should imply a larger sends/sec value.
>
> That is not the case; for example, I have obtained the following
> (format: [bc/sec, sends/sec]):
>
> [20565552,1355113]
> [20901371,1352909]
>
> Apparently, the bytecodes/sec rate is larger in the second result, but
> the sends/sec rate is about the same.
>
> BTW the results were obtained running Dan Ingalls' SqueakOnJava on a
> Java 5 VM, Linux/AMD64.
>
> Best,
>
> Michael
>


Re: "0 tinyBenchmarks" results

Michael Haupt-3
In reply to this post by Andreas.Raab
Hi Andreas,

thanks for your explanations.

On 8/9/06, Andreas Raab <[hidden email]> wrote:
> tinyBenchmarks are micro-benchmarks measuring the speed of bytecode
> execution and message sending. They have no meaning other than to
> compare VMs, e.g., it is not valid to make any larger claims from
> certain results of tinyBenchmarks.

Sure; I had no such intention.

> > (format: [bc/sec, sends/sec]):
> > [20565552,1355113]
> > [20901371,1352909]
>
> ... these results are *way* too close to be able to compare them.
> Benchmarks results vary based on load of the machine and just having
> your email client check for mail in the background, or some memory
> swapping would perfectly explain the differences you see in the above.

Hm, I see... well, there was no e-mail client running when I ran those
measurements, but of course the Linux box was not running in
single-user mode either (which would not have eliminated swapping
overhead anyway).

All in all, I ran the benchmarks 10 times, and there are also
differences like [20565552,1355113] vs. [21024967,1269320], where the
bc/sec rate is 2.2 % larger and the sends/sec rate 6.7 % smaller in
the second pair. Those differences are also insignificant, given your
explanation.

Thanks again!

Are there more complete benchmarks available that would also run on
the SqueakOnJava VM, i.e., in a mini image?

Best,

Michael

Re: Re: "0 tinyBenchmarks" results

Michael Haupt-3
In reply to this post by stéphane ducasse-2
Hi Stéphane,

On 8/9/06, stéphane ducasse <[hidden email]> wrote:
> Michael, where can we find the sources of SqueakOnJava?

the sources aren't publicly available (short of using a decompiler), and
I didn't use the source code to run the measurements; it's all in the
mini image. I had just run the measurements several times in a row
and was surprised at the differences.

Best,

Michael