>
> "Thanks for the information. I have the impression that you stressed
> too much the speed of the new compiler.
Yes, sorry about that.
>
> I know that you use it a lot and that reflectivity is completely
> based on it and working great.
Indeed, one way to put it is that everything I needed works great. Some
things are not yet perfect, but they are fixable.
I will send a list of bugs and a plan for the next steps soon.
>
> "Is SmaCC the major bottleneck, or is it tree traversal? If
> scanning/parsing is the slowest part, perhaps it is worth evaluating
> Alex Warth's OMeta as a potential replacement
> (http://www.cs.ucla.edu/~awarth/ometa/).
My intuition would be that OMeta should be *slower* than SmaCC. I have
not read the paper and have not benchmarked it, but my impression is
that OMeta comes from the idea that with today's machines it makes
sense to revisit the paths abandoned in the '60s and '70s with respect
to parsing (like the work on Generalized LR parsing and scannerless
parsers done elsewhere). The idea here is that machines are now so
fast, and we have so much memory, that we can completely rethink the
algorithms used for parsing. E.g., the only useful property of LALR
parsing is that it is fast and does not use much memory. Everything
else sucks: e.g., the need to carefully massage a simple, understandable
LR grammar into LALR form, and the impossibility of composing
grammars. (This is *really* cool with Scannerless Generalized LR
parsing; very cool for DSLs.)
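To make the LALR-vs-GLR point concrete: a GLR-style parser simply keeps *all* derivations of an ambiguous input instead of rejecting the grammar with a shift/reduce conflict. Here is a minimal, hypothetical sketch in Python (not SmaCC or OMeta code) for the classic ambiguous grammar E ::= E '+' E | digit, enumerating every parse tree the way a generalized parser conceptually does:

```python
def parses(toks, i, j):
    """Return all parse trees for toks[i:j] under E ::= E '+' E | digit.

    An LALR generator would reject this grammar outright (shift/reduce
    conflict); a generalized parser just returns every derivation.
    """
    results = []
    # Base case: a single digit is an E.
    if j - i == 1 and toks[i].isdigit():
        results.append(toks[i])
    # Try every '+' as the top-level operator; each split yields
    # independent left/right sub-derivations, combined pairwise.
    for k in range(i + 1, j - 1):
        if toks[k] == '+':
            for left in parses(toks, i, k):
                for right in parses(toks, k + 1, j):
                    results.append(('+', left, right))
    return results

toks = list("1+2+3")
trees = parses(toks, 0, len(toks))
# Two trees: right-associative ('+', '1', ('+', '2', '3'))
# and left-associative ('+', ('+', '1', '2'), '3').
```

An LALR tool forces you to rewrite the grammar (or add precedence declarations) to kill the ambiguity up front; the generalized approach trades memory and time for keeping the grammar in its natural, composable form.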
>
> "Very interesting Markus, thank you very much. Do you think there is
> still space for optimizations while keeping the Visitor pattern?
>
Most of the performance loss is the IR buildup; the rest should be
SmaCC. The question is whether it's really enough of a problem to need
a fix now...
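For context on the Visitor question: the pattern's cost is one extra double-dispatch call per node on top of the traversal itself, which is usually cheap relative to building the IR. A minimal sketch (illustrative Python, with hypothetical node names, not the actual Smalltalk visitor classes):

```python
class Num:
    """Leaf node holding a literal value."""
    def __init__(self, value):
        self.value = value

    def accept(self, visitor):
        # Double dispatch: the node picks the visitor method.
        return visitor.visit_num(self)

class Add:
    """Binary addition node."""
    def __init__(self, left, right):
        self.left, self.right = left, right

    def accept(self, visitor):
        return visitor.visit_add(self)

class EvalVisitor:
    """Evaluates the tree; traversal logic lives in the visitor."""
    def visit_num(self, node):
        return node.value

    def visit_add(self, node):
        return node.left.accept(self) + node.right.accept(self)

tree = Add(Num(1), Add(Num(2), Num(3)))
result = tree.accept(EvalVisitor())  # evaluates 1 + (2 + 3)
```

Since the overhead is per-node dispatch, keeping the Visitor pattern while optimizing elsewhere (e.g. the IR buildup) seems plausible; dropping it would only win back those dispatch calls.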
Marcus
--
Marcus Denker --
[hidden email]
http://www.iam.unibe.ch/~denker