Hi all,
I am interested in energy efficiency metrics for Pharo (version >=8). Just now, I came across this research and related GitHub project:
Unfortunately, the paper mentions that Smalltalk was excluded from the results because the (VW) compiler was proprietary :-S However, the GitHub repository does contain Smalltalk code and results, but I haven't been able to evaluate those.

[1] Does anyone here have more information on this topic?

The benchmarks seem to be low-level algorithms. Although that is useful, I think a better argument for Pharo/Smalltalk efficiency is that a good OO design (e.g. one created using responsibility-driven design with behaviorally complete objects) will be a better fit, can be much simpler, and will thus be more efficient during development, as well as easier to maintain and evolve.

[2] Has anyone done any research in this area that can quantify this aspect?

Kind regards,
Jonathan van Alteren
Founding Member | Object Guild B.V.
Sustainable Software for Purpose-Driven Organizations
[hidden email]
Hi there,

We did a related experiment with VA Smalltalk and other languages like Python, Java, etc. The context was how a JIT compiler can also help you reduce energy consumption, which could be very important for IoT. We did the experiments on a Raspberry Pi: we ran some benchmarks and measured the wattage with a hardware tool. You can see the whole presentation here: https://youtu.be/2xO0ohUNnug

Hopefully this can get you started with Pharo experiments.

Cheers,

On Thu, Oct 1, 2020 at 8:47 AM Jonathan van Alteren <[hidden email]> wrote:
Mariano Martinez Peck
Email: [hidden email]
Twitter: @MartinezPeck
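For anyone wanting to reproduce the software side of such a setup in Pharo, here is a minimal sketch: a repeatable, deterministic workload whose runs can be lined up with readings from an external wattage meter (powerstat, a smart plug, a USB power meter). It assumes Pharo >= 8, where blocks answer timeToRun and Integer>>benchFib is available as part of the classic tinyBenchmarks; the iteration count and argument are arbitrary, and any other deterministic workload would do.

"Repeatable workload for correlating with external power readings.
 Run it a few times and note when each run starts and ends."
| runs durations |
runs := 5.
durations := (1 to: runs) collect: [ :each |
    [ 30 benchFib ] timeToRun ].    "send-heavy recursive micro-benchmark"
Transcript show: 'Durations: ', durations printString; cr.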
Hi Mariano,
Thanks for your response! I will take a look at the presentation. I was hoping for some more concrete experience with the Green Software Lab benchmark. It's not a priority at the moment, but hopefully I'll get back to this in the near future. I'll report back with any findings.

Kind regards,
Jonathan van Alteren
Founding Member | Object Guild B.V.
Sustainable Software for Purpose-Driven Organizations
[hidden email]

On 1 Oct 2020, 22:34 +0200, Mariano Martinez Peck <[hidden email]> wrote:
In reply to this post by jvalteren@objectguild.com
The problem is: what do you measure?
When you move computation from the CPU to a GPU, for example, does it consume less or more? I think that such analyses are totally stupid. Does a fast execution consume less? I have serious doubts about it. Now, if we measure how fast we drain a battery because of polling vs event-based code, then that is different.

S.
--------------------------------------------
Stéphane Ducasse
03 59 35 87 52
Assistant: Aurore Dalle
FAX 03 59 57 78 50
TEL 03 59 35 86 16
S. Ducasse - Inria
40, avenue Halley,
Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
Villeneuve d'Ascq 59650
France
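To make the polling vs event-based contrast concrete, here is a small illustrative sketch using only standard Pharo classes (OrderedCollection, Delay, Announcer, Announcement). It is a shape-of-the-code comparison, not a benchmark, and the polling loop deliberately never terminates.

"Polling: a background loop wakes the image every 100 ms,
 whether or not there is anything to do."
| sharedQueue announcer |
sharedQueue := OrderedCollection new.
[ [ sharedQueue isEmpty
        ifFalse: [ Transcript show: sharedQueue removeFirst printString; cr ].
    (Delay forMilliseconds: 100) wait ] repeat ] fork.

"Event-based: subscribe once; the handler only runs when something is announced."
announcer := Announcer new.
announcer when: Announcement do: [ :ann | Transcript show: ann printString; cr ].
announcer announce: Announcement new.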
Hi Stéphane,
Thanks for your feedback. I agree that the usefulness of these results is limited. However, if we (Object Guild) want to make a case for energy efficiency, it can help if the language itself can be shown to be efficient as well. For now, I think the efficiency will need to come from a good object design.

Kind regards,
Jonathan van Alteren
Founding Member | Object Guild B.V.
Sustainable Software for Purpose-Driven Organizations

On 11 Oct 2020, 16:49 +0200, Stéphane Ducasse <[hidden email]> wrote:
> The problem is: what do you measure?
> Hi Stéphane,
>
> Thanks for your feedback. I agree that the usefulness of these results is limited. However, if we (Object Guild) want to make a case for energy efficiency, it can help if the language itself can be shown to be efficient as well.

I do not know what "energy efficient" means nor how it is measurable. Now, our objective is that Pharo does not burn the battery when doing nothing, and we are starting to get that with the headless and idle VM.

> For now, I think the efficiency will need to come from a good object design.

This would presume that message passing is faster than branching. And I remember that we had arguments with hardware people about our reengineering pattern on conditionals to polymorphism.

> Kind regards,
>
> Jonathan van Alteren
>
> Founding Member | Object Guild B.V.
> Sustainable Software for Purpose-Driven Organizations
Hi Stéphane,
I dug around a little bit regarding this subject and found that people are working to create software that is aware of its energy consumption. There is a Dutch university research group actively involved with this and related topics: http://s2group.cs.vu.nl/mission/. This article might be a good read on the subject: https://research.vu.nl/en/publications/a-manifesto-for-energy-aware-software

Is this something that could be of interest to Inria or the Pharo project?

What do you mean exactly by your last comment? I think that when distinctions in a domain are successfully made explicit at design time, this will improve performance at runtime and thus should also improve energy efficiency. How does that relate to your comment about message passing/branching/polymorphism?

Kind regards,
Jonathan van Alteren
Founding Member | Object Guild B.V.
Sustainable Software for Purpose-Driven Organizations
[hidden email]

On 13 Oct 2020, 16:49 +0200, Stéphane Ducasse <[hidden email]> wrote:
Or maybe your speedy web browser consumes less.
I skimmed over it and it is wishful thinking. How can I engineer a system (besides using good algorithms instead of sloppy ones) to consume less energy, if we do not know what "energy aware" means? In the end, what is it?

- The number of instructions executed, the fewer the better?
- What about cache misses?
- What about missed instruction pipelining?

For example, on your Mac, when you unplug the cable you get a different graphics setup, because the video card can consume more. So we can degrade certain operations. But how to measure this seriously?
I'm sorry, but without serious measurement I do not buy that "when distinctions in a domain are successfully made explicit at design time, this will improve performance at runtime". Let us play the scientist here for a moment: do you have data? Did you measure it? What are the biases in your measurement/experiment?

Now, to reply to your question: in OORP (Object-Oriented Reengineering Patterns) we promote that case statements are bad and that it is better to use message passing. Except that in some domains, in some specific circumstances, a case statement is faster. Similarly, there are domains where the GC is a problem.
--------------------------------------------
Stéphane Ducasse
03 59 35 87 52
Assistant: Aurore Dalle
FAX 03 59 57 78 50
TEL 03 59 35 86 16
S. Ducasse - Inria
40, avenue Halley,
Parc Scientifique de la Haute Borne, Bât.A, Park Plaza
Villeneuve d'Ascq 59650
France
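For readers who do not know the reengineering pattern being referred to (the "Transform Conditionals to Polymorphism" chapter of OORP), here is a minimal Smalltalk sketch with made-up classes and selectors (Circle, Square, area, kind); whether the polymorphic version is actually faster, or cheaper in energy, is exactly the question being debated in this thread.

"Before: a case-like conditional on a type tag."
areaOf: aShape
    aShape kind = #circle
        ifTrue: [ ^ aShape radius squared * Float pi ].
    aShape kind = #square
        ifTrue: [ ^ aShape side squared ].
    ^ self error: 'unknown shape kind'

"After: each class answers the message itself; method dispatch replaces the branching."
Circle >> area
    ^ radius squared * Float pi

Square >> area
    ^ side squared

"Client code simply sends: aShape area"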
In reply to this post by jvalteren@objectguild.com
It doesn't make a whole lot of sense to talk about the energy efficiency of a programming language. For example, I've seen the run time of a C benchmark go from 50 seconds to 1 microsecond when the optimisation level was changed. It doesn't even make much sense to talk about the energy efficiency of the code generated by a specific compiler with specific options: the underlying hardware counts too. A colleague of mine, looking at text compression algorithms for an information retrieval engine, found that the fastest algorithm depended on just which x86-64 chip, even which motherboard, was in use. It's obviously going to be the same for energy efficiency.

So let's specify a particular physical machine, a particular compiler, and a particular set of compiler options. NOW does it make sense to talk about energy efficiency? Nope. It's going to depend on the problem as well. And the thing is that people tend to do different things in different programming languages, and different communities attract different support. There is no portable Smalltalk equivalent of NumPy, able to automatically take advantage of GPUs, for example.

You can get some real surprises. For example, just now while writing this message, I fired up powerstat(8). I had the browser open and power consumption was about 12.8 W. I then launched Squeak and ran some benchmarks. Power consumption went DOWN to 11.4 W. That is, Squeak was "costing" me -1.4 W. If you understand the kind of things modern CPUs get up to, that is not as surprising as it seems.

All it demonstrates is that getting MEANINGFUL answers is hard enough; getting GENERALISABLE answers is going to be, well, if anyone succeeded, I think they would have earned at least a Masters.

On Tue, 13 Oct 2020 at 23:38, Jonathan van Alteren <[hidden email]> wrote:
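For those who want a number from inside the image rather than from a wall meter: on Linux with an Intel CPU, one option is to read the RAPL energy counter around a workload. The following is a rough, assumption-laden probe, not a validated tool. It assumes /sys/class/powercap/intel-rapl:0/energy_uj exists and is readable (recent kernels often restrict it to root), it ignores counter wrap-around, and it measures the whole CPU package, not just Pharo.

"Read the package energy counter (microjoules) before and after a workload."
| counter before after joules |
counter := '/sys/class/powercap/intel-rapl:0/energy_uj' asFileReference.
before := counter contents trimmed asNumber.
30 benchFib.    "workload under test; substitute anything deterministic"
after := counter contents trimmed asNumber.
joules := (after - before) / 1000000.0.
Transcript show: 'Package energy: ', joules printString, ' J'; cr.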
Here is an interesting article that could help as a start:

Cheers,

On Thu, Oct 15, 2020 at 8:41 PM Richard O'Keefe <[hidden email]> wrote:
Mariano Martinez Peck
Email: [hidden email]
Twitter: @MartinezPeck
The energy comparison web site is a useful reference. However, it measures a combination of:
- hardware platform
- operating system (for example, FASTA does oodles of output)
- compiler
- runtime system (for example, garbage collector)
- algorithm.

Where there are multiple algorithms for a single language, we can see that that matters a LOT. For example, the fastest Rust code for FASTA is five times faster than the slowest, and we can expect a similar range in energy use. In the case of Smalltalk, do we expect Pharo and Amber to have the same time or energy costs?

One of my earliest papers examined a "language X vs language Y" paper where I pointed out that they had compared moderately bad language X code to appallingly bad language Y code, and when you improved both, the only real difference was the efficiency of the 'print' function in each language. For this reason, amongst others, if you want to compare *languages*, you need multiple implementations in each language; otherwise what you are measuring is as much programmer skill as anything else.

The one supremely useful thing in the language efficiency paper is that all the code they used is on GitHub, including the tool they used to measure energy use. (It's a software-only tool.) That means that you can do your own measurements, and that's what really matters.

On Wed, 21 Oct 2020 at 03:29, Mariano Martinez Peck <[hidden email]> wrote:
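As a small illustration of how much the implementation matters within a single language, here is a hypothetical Pharo comparison of two ways to build the same string, timed with the standard timeToRun. The sizes are arbitrary; the point is only that the naive version does far more work (and presumably burns more energy) for the same result.

"Build the digits 1..20000 as one string, two ways."
| naiveTime streamTime |
naiveTime := [ | s |
    s := String new.
    1 to: 20000 do: [ :i | s := s , i printString ].
    s ] timeToRun.    "copies the growing string on every concatenation"
streamTime := [
    String streamContents: [ :out |
        1 to: 20000 do: [ :i | out nextPutAll: i printString ] ] ] timeToRun.
Transcript
    show: 'naive: ', naiveTime printString; cr;
    show: 'streamed: ', streamTime printString; cr.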