Does anyone know whether Pharo 7 is as fast as VW 7.10 or later? Do we have recent comparative benchmarks? The comparison should ignore potential GPU-based improvements in my algo; that will happen later. The test should involve some math, file streaming, and the parsing that entails: an ordinary mix of macrobenchmarks. The comparison should be based on both Pharo 7 and VW each running a single Smalltalk Process (one time-slice of one OS thread in one OS process). I need Pharo 7's speed to be comparable to or better than VW's to justify the port. (A rough sketch of such a benchmark appears at the end of this page.)

Pharo is definitely looking and working better. I've spent more time with it in the last few weeks than during the previous decade. Thanks to everyone for the effort and improvements.

Shaping

From: Shaping [mailto:[hidden email]]

Hi Eliot.

Pharo (& Squeak & Cuis) Float subclass BoxedFloat64 maps exactly to VW's Double. In 64-bit images, SmallFloat64 maps exactly to SmallDouble. But I wonder whether there is any issue here. STON would use the print strings for (PSC) Float / (VW) Double, so deserialization on Pharo would automatically produce the right class. Going in the other direction might need some help.

APFs (arbitrary-precision floats) need support in PSC before one can port, but they are representable as suitably-sized word arrays. There is no support for __float128 anywhere in the VM (e.g. not even in the FFI) on PSC as yet.

I see Pharo's WordArray. I'll work on an APF for Pharo, as time permits. I'm using APFs in VW in the 300-bit range, and want to reduce the needed precision to 64 bits, to save space and time on large (5 million+) scalar time-series, both on the heap and during BOSSing (a 25-minute save time now). The problem is not so much an issue for the JulianDayNumber (JDN) precision, which is adequate in this app at 14 to 15 digits (even though my JDN class subclasses APF, for now). Other calculations need the more extreme precision. I think I can make 128-bit floats work, and would really like to see a small, fast, boxed 128-bit float implementation in Pharo or VW. The APFs are big and slow. Where in the queue of planned improvements to Pharo does such a task lie? I suspect it's not a very popular item.

Broadening the issue somewhat, I'm trying to find as many good reasons as possible to justify the work needed to port all my VW stuff to Pharo. I've seen the references to Cog's speed and coming speed-ups. Are there recent (say, in the last year) benchmarks comparing VW and Pharo? Any details here would be very much appreciated.

Having no namespaces in Pharo is, I think, the biggest impediment. I prefer not to prefix class names, but there may be fewer name collisions than I suppose; maybe none. Still, I need to know how VW and Pharo classes map in order to place overrides and extensions correctly (a quick collision check is sketched at the end of this page). Besides the float-class mappings mentioned above, is there a reference on this?

Object allSubclasses in 64-bit Pharo 7 produces 14946 classes. Pharo is a little bigger than it used to be. I suppose I don't need to check all unloaded packages, because all classes in each of those will have the same unique prefix. Is that correct? Or I could just load every package I can find before checking names. But that little experiment has never gone well in the past. Is the Pharo-with-namespaces issue dead or merely suspended, awaiting a more fitting idea than what VW currently offers?

Shaping

On Tue, Nov 6, 2018 at 12:56 AM Shaping <[hidden email]> wrote:
--
_,,,^..^,,,_
best, Eliot
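
A minimal sketch, for Pharo, of the kind of single-process macrobenchmark described at the top of this page: some floating-point math, file streaming, and parsing, all in one Smalltalk Process. The file name, loop counts, and workload mix are illustrative assumptions, not an agreed benchmark; the same shape can be reproduced in VW with its own file-stream API.

    | file ms |
    file := 'bench-numbers.txt' asFileReference.
    ms := Time millisecondsToRun: [
        | sum |
        "math: a tight floating-point loop"
        sum := 0.0.
        1 to: 1000000 do: [:i | sum := sum + i sqrt].
        "file streaming: write a column of numbers to disk"
        file ensureDelete.
        file writeStreamDo: [:out |
            1 to: 100000 do: [:i |
                out nextPutAll: i printString; nextPut: Character lf]].
        "parsing: read the file back and parse each line as a number"
        file readStreamDo: [:in |
            [in atEnd] whileFalse: [Number readFrom: (in upTo: Character lf)]]].
    file ensureDelete.
    Transcript show: 'elapsed: ', ms printString, ' ms'; cr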
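On the Float-class mapping in the quoted exchange: in a 64-bit Pharo image the concrete class of a Float depends on whether its exponent fits the immediate SmallFloat64 encoding, and STON round-trips floats through their print strings. A quick check (the literals are arbitrary; STON ships with Pharo 7):

    1.0 class.        "==> SmallFloat64 (immediate) in a 64-bit image"
    1.0e308 class.    "==> BoxedFloat64: exponent too large for the immediate encoding"
    (STON fromString: (STON toString: 1.0e308)) class.    "==> BoxedFloat64 again after a round trip"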
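And a sketch of the "suitably-sized word arrays" idea for a boxed 128-bit float: hold the raw IEEE binary128 bit pattern in four 32-bit words and build the arithmetic above it. The class name, package, and method are hypothetical; no arithmetic is shown.

    Object subclass: #Float128
        instanceVariableNames: 'bits'
        classVariableNames: ''
        package: 'QuadFloat'

    Float128 >> initialize
        super initialize.
        bits := WordArray new: 4    "four 32-bit words = 128 bits"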
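Finally, on the name-collision question: given the class names exported from the VW image, a stock Pharo image can be checked for clashes with something like the following (the VW names in the literal array are placeholders):

    | vwNames collisions |
    vwNames := #(Double JulianDayNumber TimeSeries).    "placeholders for names dumped from the VW image"
    collisions := vwNames select: [:each | Smalltalk globals includesKey: each].
    collisions    "globals (classes included) that already exist in Pharo"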