Pharo 7 streams API


Pharo 7 streams API

Peter Uhnak
Hi,

I'm starting to familiarize myself with the new streams, and one thing I've noticed is the removal of #lineEndConvention (which I use all the time).

So a statement like this

aFile writeStreamDo: [ :stream |
stream lineEndConvention: #lf.
stream << '...'
].

has to be written like so

aFile writeStreamDo: [ :rawStream | |stream|
stream := (ZnNewLineWriterStream on: rawStream) forLf.
stream << '...'
].

which feels very messy because I am mixing writing with the configuration. And that doesn't even take into account the buffered/encoded decorators. Plus it increases the incidental complexity -- I need another variable, and I can accidentally write to the wrong stream, etc.

Would a method like #writeStream:do: (or #writeStreamTransform:do:) make sense? E.g.

aFile writeStreamTransform: [ :stream | (ZnNewLineWriterStream on: stream) ] do: [ :stream |
stream << '...'
]

To separate the composition from the usage?

Thanks,
Peter

Re: Pharo 7 streams API

Sven Van Caekenberghe-2
Peter,

> On 23 Jun 2018, at 15:39, Peter Uhnák <[hidden email]> wrote:
> [...]

The goal of the 'new' streams (they have existed for quite a while) is to go from single, big, complex do-it-all classes to a composition of much simpler single-purpose classes. Another goal is to reduce the API so that it becomes easier to create new stream classes. Of course, a consequence is that you need composition to get the functionality you want. But I think you understand the tradeoff.

If the mixing of configuration/setup with writing/reading bothers you, then I would suggest using two methods. One that does only the writing/reading assuming a limited API, and another that does the configuration/setup, using composition.
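
For example, sketched with illustrative names (MyExporter and its selectors are assumptions for the sake of the example, not an existing API):

```smalltalk
"Sketch of the suggested split; MyExporter and its selectors are
illustrative names, not an existing API."

MyExporter >> exportTo: aFile
	"Configuration/setup only: build the decorated stream, then delegate."
	aFile writeStreamDo: [ :rawStream |
		self writeContentsOn: (ZnNewLineWriterStream on: rawStream) forLf ]

MyExporter >> writeContentsOn: aWriteStream
	"Writing only, assuming nothing beyond a minimal write API."
	aWriteStream
		nextPutAll: 'first line';
		nextPut: Character cr;
		nextPutAll: 'second line'
```

The writing method never sees the raw stream, so the 'write to the wrong stream' mistake from the original example cannot happen there.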

Note that the streams that you get are already a composition (most often a BinaryFileStream wrapped in a buffered stream wrapped in an encoding/decoding stream). EOL translation is not a standard part of that, but there is quite some system code that uses EOL translation.
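
Spelled out by hand, that default composition looks roughly like this (a sketch from memory; the exact class and selector names may differ between Pharo versions):

```smalltalk
"Hand-built equivalent of the default write stream composition
(a sketch; exact class/selector names may differ per Pharo version)."
| binary buffered encoded |
binary := aFile binaryWriteStream.               "BinaryFileStream"
buffered := ZnBufferedWriteStream on: binary.    "buffering decorator"
encoded := ZnCharacterWriteStream on: buffered.  "encoding decorator"
[ encoded nextPutAll: 'héllo' ] ensure: [ encoded close ]
```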

Sven




Re: Pharo 7 streams API

Ben Coman


On 24 June 2018 at 00:53, Sven Van Caekenberghe <[hidden email]> wrote:
[...]

There may be performance implications for streams opened frequently, but perhaps something like this would be useful...??
  aFile writeStreamDo: [ :stream |
       stream becomeWrappedBy: (ZnNewLineWriterStream new forLf).
       stream << '...' ].
cheers -ben

Re: Pharo 7 streams API

Herby Vojčík
In reply to this post by Peter Uhnak


Peter Uhnák wrote on 23. 6. 2018 15:39:

> [...]

   aFile writeStreamDo: [ :rawStream |
     (ZnNewLineWriterStream on: rawStream) in: [ :stream |
       stream << '...' ] ].

As for transformation, I'd go for some more generic (functional?)
approach like:

   aFile writeStreamDo: ([:x | ZnNewLineWriterStream on: x] pipe: [
:stream |
     stream << '...' ]).

Herby



Re: Pharo 7 streams API

Richard O'Keefe
In reply to this post by Sven Van Caekenberghe-2
Experience with Java taught me to loathe the "do I/O
by composing lots of little wrappers" approach, for
several reasons:
 - the fact that the commonest case was not the
   default case, so that simple obvious code was
   somewhere between disgracefully slow and wrong
 - the extra complexity needed to get it right
   (having to know a plethora of wrapper classes
   instead of just getting sensible defaults)
 - the poor performance.

In my own Smalltalk system I hewed to the ANSI
Smalltalk standard and if you do
    FileStream read: aFileName
or  FileStream write: aFileName
you get a text file using the encoding set by
your environment's locale and the native line ending
convention. Simple, easy, and it leaves Java's
I/O performance in the dust.

Of course wrappers are available for the rare
cases when you need them, but you don't _have_
to use them.  One reason this matters is that
there are two ways to wrap a stream:

  s2 := WrapperClass on: s1
    -- closing s2 just closes s2, not s1.
  s2 := WrapperClass onOwn: s1
    -- closing s2 closes s1 as well

And yes, both of these *are* used.
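
A sketch of how those two constructors differ in their close behaviour (WrapperClass here is hypothetical, as above):

```smalltalk
"Hypothetical WrapperClass showing the two ownership policies."
WrapperClass class >> on: aStream
	"The wrapper does NOT own the underlying stream."
	^ self new setStream: aStream owns: false

WrapperClass class >> onOwn: aStream
	"The wrapper DOES own the underlying stream."
	^ self new setStream: aStream owns: true

WrapperClass >> close
	"Always finish this wrapper's own work; close the inner
	stream only when we own it."
	self flush.
	owns ifTrue: [ stream close ]
```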





Re: Pharo 7 streams API

Sven Van Caekenberghe-2
In reply to this post by Ben Coman


> On 24 Jun 2018, at 03:08, Ben Coman <[hidden email]> wrote:
> [...]
> There may be performance implications for streams opened frequently, but perhaps something like this would be useful...??
>   aFile writeStreamDo: [ :stream |
>        stream becomeWrappedBy: ( ZnNewLineWriterStream new forLf).
>        stream << '...'
>
> cheers -ben

#becomeWrappedBy: would (indeed) have to use #become: (or #becomeForward:), for little gain in reducing code density while adding complexity, IMHO.



Re: Pharo 7 streams API

Sven Van Caekenberghe-2
In reply to this post by Herby Vojčík


> On 25 Jun 2018, at 12:56, Herbert Vojčík <[hidden email]> wrote:
> [...]
>
>  aFile writeStreamDo: [ :rawStream |
>    (ZnNewLineWriterStream on: rawStream) in: [ :stream |
>      stream << '...' ] ].
>
> As for transformation, I'd go for some more generic (functional?) approach like:
>
>  aFile writeStreamDo: ([:x | ZnNewLineWriterStream on: x] pipe: [ :stream |
>    stream << '...' ]).

I like the first version with the (little known, but still standard and clear) #in: selector.
I can't see how the second is 'better', as it looks equally 'complex' but adds a new selector, #pipe:
All this, IMHO.




Re: Pharo 7 streams API

Herby Vojčík


Sven Van Caekenberghe wrote on 2. 7. 2018 16:00:

> [...]
>
> I like the first version with the (little known, but still standard and clear) #in: selector.
> I can't see how the second is 'better', as it looks equally 'complex' but adds a new selector, #pipe:
> All this, IMHO.

It's a bit more focused on the task, that being transforming the
incoming argument. Cf.

   #asString pipe: [ :aString | ... ]

instead of:

   [ :anObject | anObject asString in: [ :aString | ... ] ]

IOW, the scope of anObject is limited (and when using a #selector you
don't even need to come up with a name, like rawStream in the earlier examples).



Re: Pharo 7 streams API

Sven Van Caekenberghe-2


> On 3 Jul 2018, at 10:08, Herbert Vojčík <[hidden email]> wrote:
> [...]
>
> It's a bit more focused on the task, that being transform incoming argument. Cf.
>
>  #asString pipe: [ :aString | ... ]
>
> instead of:
>
>  [ :anObject | anObject asString in: [ :aString | ... ] ]
>
> IOW, scope of anObject is limited (and if using #selector you don't even need to come up with a name, like the rawStream in examples before).

OK, maybe, what is your definition for #pipe: then?




Re: Pharo 7 streams API

Herby Vojčík


Sven Van Caekenberghe wrote on 3. 7. 2018 10:55:

> [...]
>
> OK, maybe, what is your definition for #pipe: then ?

Basically something like this:

(BlockClosure | Symbol) >> pipe: aBlock
   ^ [ :arg | aBlock value: (self value: arg) ]
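
Given that definition, composing two one-argument blocks behaves like this (a sketch, assuming the #pipe: extension above is installed on BlockClosure):

```smalltalk
"With the #pipe: definition above installed:"
| doubledPlusOne |
doubledPlusOne := [ :n | n * 2 ] pipe: [ :m | m + 1 ].
doubledPlusOne value: 10   "first doubles 10 to 20, then adds 1, giving 21"
```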



Re: Pharo 7 streams API

Herby Vojčík


Herbert Vojčík wrote on 3. 7. 2018 11:21:

> [...]
>
> Basically something like this:
>
> (BlockClosure | Symbol) >> pipe: aBlock
>    ^ [ :arg | aBlock value: (self value: arg) ]

Now that I think of it, it is basically nothing more than function
composition, so it could have a binary selector instead. That would save
parentheses; the example would look like:

   aFile writeStreamDo:
     [ :x | ZnNewLineWriterStream on: x ] ++
     [ :stream | stream << '...' ].
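
A minimal definition of that binary selector could be (a sketch; #++ is not an existing Pharo selector, it just mirrors #pipe: above):

```smalltalk
"Sketch: function composition as a binary selector.
#++ is not an existing Pharo selector; it mirrors #pipe: above."
BlockClosure >> ++ aBlock
	^ [ :arg | aBlock value: (self value: arg) ]
```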



Re: Pharo 7 streams API

Sven Van Caekenberghe-2


> On 3 Jul 2018, at 11:28, Herbert Vojčík <[hidden email]> wrote:
> [...]
>
> Now that I think of, it is basically nothing more than function composition, so it could have binary selector instead. That would save parentheses, so the example would look like:
>
>  aFile writeStreamDo:
>    [ :x | ZnNewLineWriterStream on: x ] ++
>    [ :stream | stream << '...' ].

Interesting ;-)

Why not even #, then? (It is used to compose Exceptions into an ExceptionSet too, so why not blocks into blocks?)

An 'issue' that I see is that your #pipe: definition only works for single-argument blocks, something that is probably unavoidable. IIRC this can be done in Common Lisp, but I am not 100% sure.

Most probably this idea of functional composition has been discussed before ...




Re: Pharo 7 streams API

Damien Pollet-2
On Tue, 3 Jul 2018 at 11:36, Sven Van Caekenberghe <[hidden email]> wrote:
Most probably this idea of functional composition has been discussed before ...

Transducers (aka. Reducers) come to mind… 

Re: Pharo 7 streams API

Steffen Märcker
I think streams and functional composition match up nicely, and
transducers are a way to do this. I've introduced them earlier on this
list. (I hesitated to weigh into the discussion, as I won't have time to
work on the Pharo port of Transducers until October.)

Let me give a simplified example. I assume the basic messages are
#nextPut: and #close to write to aStream and close it.

   aString
     transduce: LineEndCrLf flatMap
     reduce: (#nextPut: completing: #close)
     init: aStream

* Let aString be the source, i.e., some object that yields a sequence of  
characters:
   a CR b
* Let LineEndCrLf be a function that maps CR to #(CR LF):
   a CR b -> a #(CR LF) b
* #flatMap embeds #(CR LF) into the sequence:
   a CR LF b
* (#nextPut: completing: #close) puts each character on the stream and  
calls #close at the end:
   aStream
     nextPut: $a;
     nextPut: CR;
     nextPut: LF;
     nextPut: $b;
     close;
     yourself.
* #transduce:reduce:init: actually starts the writing process.

First, (LineEndCrLf flatMap) is composable with other
transformations, e.g., encoding. The example above would change to:

   aString
     transduce: LineEndCrLf flatMap * EncodeUTF8 flatMap
     reduce: (#nextPut: completing: #close)
     init: aByteStream

LineEndCrLf and EncodeUTF8 only have to know how to process single  
characters. Hence, they are highly reusable.

Second, as the source, the transformations, the writing process, and the
data sink are distinct objects, we can freely interact with them and build
arbitrary pipelines. It is straightforward to come up with iteration
methods other than #reduce:init:, e.g., step-wise processing of
streams.
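
The flat-mapping step can be approximated today with plain blocks, without the Transducers library (a sketch; the real library's API differs, and crToCrLf is an illustrative name):

```smalltalk
"Plain-block approximation of the CR -> CR LF expansion while writing
(a sketch; not the Transducers API)."
| crToCrLf |
crToCrLf := [ :char |
	char = Character cr
		ifTrue: [ { Character cr. Character lf } ]
		ifFalse: [ { char } ] ].
aString do: [ :each |
	(crToCrLf value: each) do: [ :c | aStream nextPut: c ] ].
aStream close
```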

Best, Steffen

Reply | Threaded
Open this post in threaded view
|

Re: Pharo 7 streams API

Herby Vojčík
Solutions to different problems.

I proposed a simple generic thing that only composes functions, allowing for transformation of a block's argument.

Transducers seem like a streaming, data-flow-specific thing.

Maybe yours helps the original problem in the long run.

I just tried to find something to solve a more specific part of it, while being generic enough that it helps in other places as well.

Just pointing this out so there isn't a perception that they are competing to solve the same problem and only one should be selected.

Herby

On July 3, 2018 3:57:21 PM GMT+02:00, "Steffen Märcker" <[hidden email]> wrote:

>I think, streams and functional composition match up nicely and  
>transducers are a way to do this. I've introduced them earlier on this
>
>list. (I hesitated to weight into the discussion, as I won't have time
>to  
>work on the Pharo port of Transducers until October.)
>
>Let me give a simplified example. I assume the basic messages are  
>#nextPut: and #close: to write to aStream and close it.
>
>   aString
>     transduce: LineEndCrLf flatMap
>     reduce: (#nextPut: completing: #close)
>     init: aStream
>
>* Let aString be the source, i.e., some object that yields a sequence
>of  
>characters:
>   a CR b
>* Let LineEndCrLf be a function that maps CR to #(CR LF):
>   a CR b -> a #(CR LF) b
>* #flatMap embeds #(CR LF) into the sequence:
>   a CR LF b
>* (#nextPut: completing: #close) puts each character on the stream and
>
>calls #close at the end:
>   aStream
>     nextPut: $a;
>     nextPut: CR;
>     nextPut: LF;
>     nextPut: $b;
>     close;
>     yourself.
>* #transduce:reduce:init: actually starts the writing process.
>
>First, (LineEndCrLf flatMap) is composable with other  
>transformations, e.g., encoding. The example above would change to:
>
>   aString
>     transduce: LineEndCrLf flatMap * EncodeUTF8 flatMap
>     reduce: (#nextPut: completing: #close)
>     init: aByteStream
>
>LineEndCrLf and EncodeUTF8 only have to know how to process single  
>characters. Hence, they are highly reusable.
>
>Second, as the source the transformations, the writing process and the
>
>data sink are distinct objects, we can freely interact with them and
>build  
>arbitrary pipelines. It is straight-forward to come up with other  
>iteration methods than #reduce:init:, e.g., step-wise processing of  
>streams.
>
>Best, Steffen


Re: Pharo 7 streams API

Herby Vojčík
In reply to this post by Sven Van Caekenberghe-2


Sven Van Caekenberghe wrote on 3. 7. 2018 11:36:

>
>
>> On 3 Jul 2018, at 11:28, Herbert Vojčík <[hidden email]> wrote:
>>
>>
>>
>> Herbert Vojčík wrote on 3. 7. 2018 11:21:
>>> Sven Van Caekenberghe wrote on 3. 7. 2018 10:55:
>>>>
>>>>
>>>>> On 3 Jul 2018, at 10:08, Herbert Vojčík <[hidden email]> wrote:
>>>>>
>>>>>
>>>>>
>>>>> Sven Van Caekenberghe wrote on 2. 7. 2018 16:00:
>>>>>>> On 25 Jun 2018, at 12:56, Herbert Vojčík <[hidden email]> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Peter Uhnák wrote on 23. 6. 2018 15:39:
>>>>>>>> Hi,
>>>>>>>> I'm starting to familiarize myself with new streams, and one thing I've noticed is the removal of #lineEndConvention (which I use all the time).
>>>>>>>> So a statement like this
>>>>>>>> aFile writeStreamDo: [ :stream |
>>>>>>>> stream lineEndConvention: #lf.
>>>>>>>> stream << '...'
>>>>>>>> ].
>>>>>>>> has to be written like so
>>>>>>>> aFile writeStreamDo: [ :rawStream | |stream|
>>>>>>>> stream := (ZnNewLineWriterStream on: rawStream) forLf.
>>>>>>>> stream << '...'
>>>>>>>> ].
>>>>>>>> which feels very messy because I am mixing writing with the configuration. And I don't even take account for buffered/encoded decorators. Plus it increases the incidental complexity -- I need another variable, and I can accidentally write to the wrong stream, etc.
>>>>>>>> Would a method like #writeStream:do: (or #writeStreamTransform:do:) make sense? E.g.
>>>>>>>> aFile writeStreamTransform: [ :stream | (ZnNewLineWriterStream on: stream) ] do: [ :stream |
>>>>>>>> stream << '...'
>>>>>>>> ]
>>>>>>>
>>>>>>>   aFile writeStreamDo: [ :rawStream |
>>>>>>>     (ZnNewLineWriterStream on: rawStream) in: [ :stream |
>>>>>>>       stream << '...' ] ].
>>>>>>>
>>>>>>> As for transformation, I'd go for some more generic (functional?) approach like:
>>>>>>>
>>>>>>>   aFile writeStreamDo: ([:x | ZnNewLineWriterStream on: x] pipe: [ :stream |
>>>>>>>     stream << '...' ]).
>>>>>> I like the first version with the (little known, but still standard and clear) #in: selector.
>>>>>> I can't see how the second is 'better', as it looks equally 'complex' but adds a new selector, #pipe:
>>>>>> All this, IMHO.
>>>>>
>>>>> It's a bit more focused on the task, that being transforming the incoming argument. Cf.
>>>>>
>>>>>   #asString pipe: [ :aString | ... ]
>>>>>
>>>>> instead of:
>>>>>
>>>>>   [ :anObject | anObject asString in: [ :aString | ... ] ]
>>>>>
>>>>> IOW, scope of anObject is limited (and if using #selector you don't even need to come up with a name, like the rawStream in examples before).
>>>>
>>>> OK, maybe, what is your definition for #pipe: then ?
>>> Basically something like this:
>>> (BlockClosure | Symbol) >> pipe: aBlock
>>>    ^ [ :arg | aBlock value: (self value: arg) ]
>>
>> Now that I think of it, it is basically nothing more than function composition, so it could have a binary selector instead. That would save parentheses, so the example would look like:
>>
>>   aFile writeStreamDo:
>>     [ :x | ZnNewLineWriterStream on: x ] ++
>>     [ :stream | stream << '...' ].
>
> Interesting ;-)
>
> Why not even #, then ? (It is used to compose Exceptions into an ExceptionSet too, why not blocks into blocks).

Doesn't fit in my PoV. The spirit of #, is to produce some
collection-like thing. The function composition is not a collection,
it's not a BlockSet, it's a single block, welded together from parts,
but those are not important any more.

IMNSHO.

> An 'issue' that I see is that your #pipe: definition only works for single argument blocks, something that is probably unavoidable. IIRC this can be done in CommonLisp, but I am not 100% sure.
>
> Most probably this idea of functional composition has been discussed before ...
>
>>>>>>> Herby
>>>>>>>
>>>>>>>> To separate the composition from the usage?
>>>>>>>> Thanks,
>>>>>>>> Peter
>
>
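Herby's #pipe: (quoted above) amounts to plain function composition. A minimal Python sketch of the same idea, where pipe and NewlineWriter are hypothetical stand-ins for #pipe: and ZnNewLineWriterStream:

```python
# pipe: compose a transformation with a consumer block into a single
# one-argument function, so the raw argument never leaks into user code.
def pipe(transform, block):
    return lambda arg: block(transform(arg))

# Hypothetical stand-in for ZnNewLineWriterStream: rewrites LF to CR LF
# and appends the result to a raw sink (here, a plain list).
class NewlineWriter:
    def __init__(self, raw):
        self.raw = raw

    def write(self, text):
        self.raw.append(text.replace('\n', '\r\n'))

buffer = []
write_all = pipe(NewlineWriter, lambda stream: stream.write('a\nb'))
write_all(buffer)   # buffer now holds ['a\r\nb']
```

The consumer block only ever sees the decorated stream, which is exactly the scoping benefit discussed in the thread: the raw sink never needs a name inside the user's block.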


Re: Pharo 7 streams API

Steffen Märcker
In reply to this post by Herby Vojčík
No worries! =) Let me clarify the relation between Transducers and
function composition.

The basic component in the framework is the so-called ReducingFunction.
These are functions that take two arguments, an 'intermediate value' and a
'current element', and map them to a new intermediate value, i.e.,
rf : A x I -> A.
In the example, #nextPut: is a reducing function, since it takes a stream
and an element to put on the stream (I assume #nextPut: would return the
stream).

Basic operations like mapping, filtering, partitioning etc. are generic
and independent of streams/collections/whatsoever. Hence, they should be
reusable. This can be achieved by Transducers, which are objects that take
reducing functions and transform them to incorporate the additional
functionality, e.g., mapping. Their signature is similar to
xf : (A x I -> A) -> (A x I -> A).

Function composition of transducer objects chains multiple basic
operations and allows attaching them to a reducing function. In fact, the
implementation indeed uses function composition for this purpose. However,
it's up to the context how to make use of these functions, e.g., via
#reduce:init:.
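The two signatures can be made concrete with a small Python sketch (a transliteration of the idea, not the actual framework; next_put, mapping and filtering are hypothetical names):

```python
from functools import reduce

# rf : A x I -> A  -- a reducing function: new intermediate value
# from (intermediate value, element).
def next_put(acc, item):
    acc.append(item)
    return acc

# xf : (A x I -> A) -> (A x I -> A)  -- a transducer takes a reducing
# function and returns a new one with extra behaviour folded in.
def mapping(fn):
    return lambda rf: (lambda acc, item: rf(acc, fn(item)))

def filtering(pred):
    return lambda rf: (lambda acc, item: rf(acc, item) if pred(item) else acc)

# Chaining transducers is plain function composition: keep even numbers,
# then square them, with next_put as the final reducing function.
xform = lambda rf: filtering(lambda x: x % 2 == 0)(mapping(lambda x: x * x)(rf))
result = reduce(xform(next_put), range(6), [])
# result is [0, 4, 16]
```

The fold here is functools.reduce, but nothing in mapping or filtering depends on it; the same composed function could drive step-wise processing of a stream instead.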

Feel free to ask if anything remains unclear! =)

Best, Steffen




Re: Pharo 7 streams API

Steffen Märcker
In reply to this post by Herby Vojčík
No worries! =)

Let me clarify the relation between Transducers and function composition.

The main components in the framework are the so-called ReducingFunctions, which  
are the operations you want to perform. They are functions that take two  
arguments, an 'intermediate value' and a 'current element', and map them  
to a new intermediate value, i.e.,
rf : A x I -> A.
In the example, #nextPut: is a reducing function, since it takes a stream  
and an element to put to the stream (I assume #nextPut: returns the stream  
itself).

Basic operations like mapping, filtering, partitioning etc. are generic
and independent of streams/collections/whatsoever. Hence, they should be
reusable. This can be achieved by Transducers, which are objects that take
a reducing function and transform it to incorporate the additional
functionality, e.g., mapping. The transducer's signature is similar to
xf : (A x I -> A) -> (A x I -> A).
The classic approach adds these basic operations by wrapping the data  
(collections/streams). In contrast, transducers add them to the operations.

Function composition of transducer objects chains multiple basic
operations and allows attaching them to a reducing function. In fact, the
implementation indeed uses function composition for this purpose. However,
it's up to the context how to make use of these functions, e.g., via
#reduce:init:.

Feel free to ask if anything remains unclear! =)

Best, Steffen

