> Hi Marcel, Tobias,
>
> I perfectly understand what an Encoder is. OK, I said it transforms a stream instead of filtering a stream because I'm not academic enough ;).
> I agree that the pattern has a lot of potential, though parallelism is still a problem in the Smalltalk processing model.

For the potential, see my other message, which was delayed due to me sending it from the wrong account; it should have come before my answer to you. The model is mostly synchronous, so no parallelism, but some level of concurrent processing can be added and is useful for asynchronous programming (network stuff, for example).

> But:
> In a sense, the canvas did handle a stream of graphics instructions (draw a line, a circle, fill a rectangle etc.).

Yep.

> Even if we don't really reify those instructions and tell (canvas write: (Line from: ... to: ...)), but rather use a more direct (canvas drawLineFrom:to:...) send.
> By making it an Encoder, it now handles both a stream of graphics instructions and a stream of objects (that can appropriately convert themselves to a stream of graphics instructions through double dispatching).
> This is a metonymy.

Not really. Having both a message-based and an object-based interface is somewhat common in this model, with the double dispatch deconstructing objects into sets of message sends (with further object parameters) where necessary. But yes, that’s always a bit of a tension.

> I will repeat why it does not sound legitimate to me:
> First, a metonymy obscures responsibilities.
> Either that's an unnecessary indirection, because objects already know how to draw themselves, and it composes well already, because an object will ask the objects that it is composed of to render themselves.

Of course the Canvas (and presumably other parts of the system) already follow this kind of pattern, as a pattern. The “Encoder” (I have struggled with naming, because it combines the roles of a filter, a stream, and some visitor-ish-ness) formalizes this pattern.
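The double-dispatch point can be made concrete with a small sketch. Only the #writeObject: protocol and the drawLineFrom:to: example come from the discussion above; the class names and the other selectors are invented for illustration:

```smalltalk
"Object-based interface: the caller hands the canvas a whole object,
 and double dispatch deconstructs it into message sends.
 (Sketch: MyCanvas, Line, Circle and their selectors are hypothetical.)"
MyCanvas >> writeObject: aGraphic
	"let the graphic convert itself into the message-based interface"
	aGraphic drawOn: self

"Each graphic object knows how to express itself as canvas messages."
Line >> drawOn: aCanvas
	aCanvas drawLineFrom: start to: end

Circle >> drawOn: aCanvas
	aCanvas drawEllipseIn: (center - radius corner: center + radius)
```

Either style can then be used: `canvas writeObject: (Line from: 0@0 to: 100@100)` goes through the dispatch, while `canvas drawLineFrom: 0@0 to: 100@100` hits the message-based interface directly.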
The benefits are the usual ones: pluggability, documentation (if something is a subclass of X, I know what to expect), reuse, lower cognitive overhead, blah blah. But that would mean more widespread adoption, and considering the fact that this stuff has lingered in the image for 15+ years, the need may just not be there.

> Or once we want to use it as a filter for a stream of arbitrary objects, we get a problem of composability (understand composing as giving a specific layout to the objects we want to render).

Layout is not a responsibility of the canvas; the canvas needs to reproduce a layout that’s been created.

> So we have to give greater responsibilities to the dumb canvas for this composition to happen.
>
> I showed that the only place where we make use of the double dispatching technique exhibits this problem of composability (we can't render in PostScript a Morph composed of a BookMorph, because we can't put several pages into a page…).

When I left it, a BookMorph would behave appropriately when embedded: it would “print” the visible page. Is that no longer the case? And having the “encoder” stick around is the way of remembering the top-level context even while adapting to specific objects in a nested hierarchy. Really quite similar to how super and self interact.

> --------
>
> Note about text rendering: we generally have to recompose the layout for different targets (for example, if we want to render a BookMorph on A4 paper with specific margins...). For this composition to take place, we need to know the font metrics, and use some specific rules for margins, alignment, tab stops, etc... That's what a CompositionScanner does.
> I fail to see where those PostScript font metrics are in Squeak?

If you recompose/reflow a text when printing, that’s a serious bug. This was a really long time ago, but IIRC fonts and font metrics were a significant problem.

> Rendering on PostScript is not an exception.
If we are able to share some fonts, then we can omit the composition step for most simple cases (like generating an EPS figure). But if we start with a bitmap font, rendering in PostScript will be very rough indeed.

A “true WYSIWYG” approach would have been to encode the screen fonts as Type 3 bitmap fonts, or just dump the whole bitmap. But that would have sucked, and wasn’t what Dan wanted. Dan wanted a “nice” printed version of the paper that had been composed in a BookMorph. Another approach would have been to have true printer-compatible Type 1 fonts in Squeak. Yeah, right :-) So I hacked it: I chose a set of PostScript fonts that were as close an approximation of the look and metrics of the screen fonts as I could get, concentrating on the ones used in the paper. I also added a jshow command that would justify text on the printer, because the metrics obviously wouldn’t match perfectly. For justified text, that’s very noticeable.

The whole thing is decidedly “best effort” and was produced with a very specific goal and time constraints. To do it right would have required massive changes to Squeak’s graphics subsystem, changes that were very much out of scope, and probably still are. That said, I did produce a version of Squeak that rendered its screen via roughly this mechanism on NeXT's DisplayPostscript. It was *epic*, but also pretty wonky, because of course the metrics didn’t match.

> For this reason, generating the PostScript in VW Smalltalk goes through a higher-level indirection, the Document, which is somehow responsible for composing the layout for an arbitrary input stream of objects.
> It has to cooperate with a GraphicsContext (the equivalent of our Canvas) for transforming fonts to the nearest PostScript equivalent, measuring, etc…

Transforming fonts is done on the class side of PostscriptCanvas; measuring is done in the printer when needed.
Again, the additional infrastructure that would have been required to do this right would have been substantial, with an ongoing support burden and licensing headaches (AFM files etc.). And of course today the “correct” answer is to use the platform’s device-independent rendering API, which will take care of these sorts of problems. But that’s not the spirit of the system, last I checked.

> VW has a PostScriptContext, which is a specific GraphicsContext for low-level rendering instructions, but that's not where everything takes place. The Document would handle the DSC, for example (that sounds quite logical, no?).

Document handling DSC? That sounds wrong, but I am not familiar with the details.

> Also note that a Document is not conceptually bound to a PostScript target; it could be anything, even a LaTeX or Word backend, in which case it could eventually delegate the composition phase (which could work for a flow of text and small graphics, but would be more delicate for math expressions, tables and figures).

Yeah, again that’s at a whole different level.

Cheers,

Marcel
Hello Marcel, Nicolas, Tobias and Dave
Thank you, Marcel W., for the illustrations and for pointing out that the NullEncoder (figures now included [5]) implements a pipe-and-filter hierarchy.

The video about app architectures [6] in which you describe different architecture styles is helpful. In particular, you point out that the choice of a pipe/filter architecture has advantages in terms of memory consumption.

Tobias, you ask [about the fact that some canvas subclasses do not use the 'target' object]
<citation>
> That’s only technically true, and really an oversight. All canvases write their output somewhere, be it a bitmap, native display surface or other ‘DisplayMedium'. A canvas is a filter for converting morphs to this target ‘DisplayMedium’.
>
> And having filter-canvases would be really cool :-)

You mean like ColorMappingCanvas, AlphaBlendingCanvas, and ShadowDrawingCanvas? (Which, by your account, should not have their own myCanvas but rather reuse target…)?
</citation>

My answer is that I do not see any benefit of using 'target' in these classes. But Marcel Weiher also gave an example of a more elaborate NullEncoder/FlattenEncoder/Canvas hierarchy to be used in a pipe-and-filter architecture style. [7]

So let's try to add more output classes to see how we can benefit from this architecture style. Marcel Weiher noted in 1999 [8]
<citation>
[10 Sept 1999, MPW]
The current MorphicPostscript support includes both EPS and multi-page Postscript generation. BookMorphs generate multi-page files; all other Morphs generate an EPS ready for inclusion, though currently without a bitmap preview.

Postscript generation is split between a high-level class that maps Morphic drawing commands to Postscript imaging model commands and a low-level class for generating actual Postscript code for the commands, in order to facilitate drop-in replacements for SPDF, SVG or comparable formats.

I also think this is a good base for supporting other device-independent graphic models and even direct device-independent drawing.
</citation>

So the best way to see how the mechanism / architecture works is actually to try to add a few more "output encoders". For example, to render:

1. a BookMorph (or just a series of morphs serving as 'slides') as a sequence of web pages, or as a series of slides in a presentation program,
2. a morph as SVG code,
3. morphs as JSON descriptions to be used by web services,
4. a sequence of morphs as a presentation for LibreOffice Impress export (LO Impress offers a 'flat XML' format),
5. PowerPoint slides (zip archive generation needed).

I attach a demo / start of a MorphicHTMLCanvas implementation (Monticello mcz file). The description of the implementation steps is under [9].

The demo does not actually run, but it brings up the context which shows that it is useful to have access to a 'target' object -- an HTML/CSS encoder -- from within the morph hierarchy.

Run

    MorphicHTMLCanvas0Test new test01

The screen shot shows the method call hierarchy. The demo does not yet show at which abstraction level, in terms of "drawing commands",

    aMorph fullDrawHTMLOn: self.

should operate. A more elaborate version of MorphicHTMLCanvas, with a JSON example added, will follow.

Kind regards
Hannes

[5] NullEncoder http://wiki.squeak.org/squeak/5052

NullEncoder implements a filter:

    Object subclass: #NullEncoder
        instanceVariableNames: 'target filterSelector'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Morphic-Support'

[6] UIKonf 2017 – Day 1 – Marcel Weiher – High Performance App Architecture
https://www.youtube.com/watch?v=kHG_zw75SjE
pipe/filter architecture at minute 12:00, but 0:00..12:00 is useful to get the context - traditional call/return 'architecture' style

[7] Marcel Weiher -- http://forum.world.st/attachment/4974704/1/filterstream-hierarchy.pdf

[8] Squeak Postscript support http://wiki.squeak.org/squeak/753

--------------------------------------------------
[9] Implementation steps to create an HTML/CSS canvas for rendering morphs as web code.

"1. create a canvas for rendering Morphs as HTML"

    Canvas subclass: #MorphicHTMLCanvas0
        instanceVariableNames: 'morphLevel'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Add-Ons-HTML0'

"2. create a test class for it"

    TestCase subclass: #MorphicHTMLCanvas0Test
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Add-Ons-HTML0'

"3. Add an encoder to be used by MorphicHTMLCanvas0 for generating HTML/CSS"

    PrintableEncoder subclass: #HTMLCSSEncoder0
        instanceVariableNames: ''
        classVariableNames: ''
        poolDictionaries: ''
        category: 'Add-Ons-HTML0'

"4a. Attach the HTMLCSSEncoder0 to the MorphicHTMLCanvas0"

    MorphicHTMLCanvas0 class >> defaultTarget
        ^HTMLCSSEncoder0 stream

"4b. Add a creation method"

    MorphicHTMLCanvas0 class >> morphAsHTML: aMorph
        | htmlCanvas |
        htmlCanvas := self new.
        htmlCanvas reset.
        htmlCanvas fullDrawMorph: aMorph.
        ^htmlCanvas contents

"4c. Add the main method for rendering morphs as HTML"

    MorphicHTMLCanvas0 >> fullDraw: aMorph
        aMorph fullDrawHTMLOn: self.

"5. Within the Morph hierarchy, implement messages to 'draw' morphs as HTML"

    Object >> fullDrawHTMLOn: aCanvas
        "do not do anything yet"

"6. Morph"

    Morph >> fullDrawHTMLOn: aCanvas
        self halt.
        "Here I can access aCanvas target
         and directly generate HTML and CSS code"

"7a. Set up the test environment"

    MorphicHTMLCanvas0Test >> test01
        self assert: (MorphicHTMLCanvas0 morphAsHTML: self createTestMorph) notNil

"7b. An example morph for testing"

    MorphicHTMLCanvas0Test >> createTestMorph
        "MorphicHTMLCanvas0Test new createTestMorph openInWorld"
        | slide |
        slide := RectangleMorph new extent: 800 @ 600;
            position: 10 @ 50;
            color: Color blue.
        slide addMorph: (RectangleMorph new extent: 100 @ 100;
            position: 20 @ 60;
            color: Color yellow).
        ^ slide

On 10/2/17, Marcel Weiher <[hidden email]> wrote:
>> On Sep 30, 2017, at 7:34, Tobias Pape <[hidden email]> wrote:
>>
>> No, that's Marcel Weiher. He did quite a lot of Squeak/Postscript stuff.
>> I CC'ed him.
>>
>> Marcel, can you comment on the Encoder hierarchy?
>> (Full thread here)
>
> Hi Tobias et al,
>
> thanks for tagging me. :-)
>
> The “Encoder” classes are a Squeak version of an “object oriented pipes and filters” system I implemented in Objective-C in the late 90s:
> https://github.com/mpw/MPWFoundation/tree/master/Streams.subproj
>
> Why? Well, because the Postscript generation code was based on my Objective-C Postscript processing code (http://www.metaobject.com/Technology/#EGOS), which is heavily based on these filters.
>
> I have found the filters to be incredibly useful over the last 20 years, partly because they compose so well: just like Unix pipes and filters, they are symmetric, so that their input protocol (#writeObject:) is the same as their output protocol (#writeObject:). The filterSelector is there to allow filter-specific processing using double dispatch, but once the processing is done, the result is once again normalized to #writeObject:. You can therefore combine these filters any way you want.
>
> Another feature is that they are, like Unix filters, fundamentally incremental, so multiple processing steps are interleaved, and both input and output can stream directly to/from disk/network. When doing pre-press in the early 90s (file sizes >> memory sizes), this was a useful feature, and it is still helpful today; see for example the first part of my UIKonf talk:
> https://www.youtube.com/watch?v=kHG_zw75SjE
>
> More recently, the fact that filters are dataflow-oriented and therefore don’t care much about the control flow has made them useful in implementing asynchronous processing pipelines. Think “FRP”, just simpler, more readable and faster. The Microsoft To-Do network stack is implemented using this.
> With all the references to Unix P/F, it is probably no surprise that this also subsumes Unix I/O, just with the advantage of an OO hierarchy of shareable behavior. Oh, and also the interesting feature of subsuming both filters and (output) streams, so ‘stdout’ in Objective-Smalltalk is one of these, a ByteStream that knows how to serialize objects and output them to some target that expects bytes. And in ‘stsh’ it’s a slight variant of MPWByteStream that is more helpful for a human interacting with it.

Attachments: Demo_how_to_render_morphs_as_HTML_2017-10-02.png (76K), Add-Ons-HTML0-hjh.1.mcz (4K)
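Marcel's symmetric #writeObject: protocol can be sketched as a self-contained toy, without touching the real NullEncoder classes. Everything below except the #writeObject: protocol itself (the class names, the #target: setter, the sink) is invented for illustration:

```smalltalk
"A toy pipes-and-filters chain in the Encoder style: each stage consumes
 and produces via #writeObject:, so stages compose like Unix pipes.
 All names here except #writeObject: are hypothetical."
Object subclass: #UppercasingFilter
	instanceVariableNames: 'target'
	classVariableNames: ''
	poolDictionaries: ''
	category: 'Filters-Example'

UppercasingFilter >> target: anObject
	target := anObject

UppercasingFilter >> writeObject: aString
	"do the filter-specific work, then normalize the result
	 back to the downstream #writeObject: protocol"
	target writeObject: aString asUppercase

Object subclass: #CollectingSink
	instanceVariableNames: 'objects'
	classVariableNames: ''
	poolDictionaries: ''
	category: 'Filters-Example'

CollectingSink >> writeObject: anObject
	(objects ifNil: [objects := OrderedCollection new]) add: anObject

CollectingSink >> objects
	^objects

"Because input and output protocols match, a chain is just
 filters pointing at each other:"
| sink filter |
sink := CollectingSink new.
filter := UppercasingFilter new target: sink; yourself.
filter writeObject: 'hello'.
sink objects first	"'HELLO'"
```

Any number of such stages can be spliced in without either end of the chain noticing, which is the composability property the thread keeps coming back to.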
Attached is an update of the previous demo of an HTML export for Morphs.
Two classes, with a dozen methods in total, provide basic Morphic export by using the NullEncoder/Canvas hierarchy. So far it shows a subclass of Canvas which makes use of the 'target' instance variable, and where the combination of graphics commands and write commands in this pipe-and-filter architecture is useful.

I implement fullDrawHTMLOn: in the Morph hierarchy, following the fullDrawPostscriptOn: example. There is no support for borders and rounded corners yet; by making use of / overriding the existing drawOn: methods, this should be possible.

HTML and CSS are not yet separated. In fact, two encoders might be needed -- 1. an HTMLEncoder and 2. a CSSEncoder -- which should work together. I am not sure yet how to handle this.

--Hannes

Note: Another example would be a "JSONCanvas" which generates morph descriptions in JSON.

Attachments: Add-Ons-HTML1-hjh.2.mcz (6K), NullEncoder_FlattenEncoder_Canvas_MorphicHTMLCanvas1_Export_2017-10-05.png (37K)
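One possible shape for the two cooperating encoders Hannes mentions (HTMLEncoder and CSSEncoder) would be a pair object that keeps one stream per output language and assembles the page at the end. This is purely a design sketch: none of the class or selector names below are existing Squeak API, and the real encoders would presumably be PrintableEncoder subclasses rather than bare streams.

```smalltalk
"Hypothetical sketch: collect HTML and CSS side by side, combine at the end."
Object subclass: #HTMLCSSPair
	instanceVariableNames: 'html css count'
	classVariableNames: ''
	poolDictionaries: ''
	category: 'Add-Ons-HTML0'

HTMLCSSPair >> initialize
	html := WriteStream on: String new.
	css := WriteStream on: String new.
	count := 0

HTMLCSSPair >> writeMorph: aMorph
	"one div per morph, with a matching CSS rule for its position"
	| cls |
	count := count + 1.
	cls := 'm', count printString.
	html nextPutAll: '<div class="', cls, '"></div>'.
	css nextPutAll: '.', cls, ' { position: absolute; left: ',
		aMorph left printString, 'px; top: ', aMorph top printString, 'px; }'

HTMLCSSPair >> contents
	"assemble a complete page: CSS in the head, divs in the body"
	^'<html><head><style>', css contents, '</style></head><body>',
		html contents, '</body></html>'
```

A morph's fullDrawHTMLOn: could then talk to one object while the HTML/CSS separation stays an internal concern of the pair.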