Porting Transducers to Pharo

Re: Porting Transducers to Pharo

Stephane Ducasse-3
We do not work with fileouts :)
You should produce packages, along with a configuration, and publish them on smalltalkhub or git and
in the MetaRepository.
You can also add package comments.

On Sat, Jun 3, 2017 at 10:29 PM, Steffen Märcker <[hidden email]> wrote:
Dear all,

attached are updated file-outs. I fixed a couple of annoyances that slipped through yesterday evening. Most notable:

1) Random generator now works.
2) Early termination via Reduced exception does not MNU anymore.
3) Printing a transducer holding a block does not MNU anymore.

Please give it a spin and tell me your impressions. At least the coin-flipping example from the package comment works now:

scale := [:x | (x * 2 + 1) floor] map.
sides := #(heads tails) replace.
count := 1000 take.
collect := [:bag :c | bag add: c; yourself].
experiment := (scale * sides * count) transform: collect.
"experiment cannot be re-used"
samples := Random new
              reduce: experiment
              init: Bag new.
"transform and reduce in one step"
samples := Random new
              transduce: scale * sides * count
              reduce: collect
              init: Bag new.
"assemble coin (eduction) and flip (reduction) objects"
coin := sides <~ scale <~ Random new.
flip := Bag <~ count.
"flip coin =)"
samples := flip <~ coin.
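For anyone who wants to play with the underlying idea outside Smalltalk, the same pipeline can be sketched in Python. All names here are made up for illustration; this is a rough analogue, not part of the Transducers package:

```python
import random

class Reduced(Exception):
    """Signals early termination of a reduction, carrying the result."""
    def __init__(self, value):
        self.value = value

def mapping(f):
    # a transducer: wraps a reducing function so each element is mapped first
    return lambda rf: (lambda acc, x: rf(acc, f(x)))

def taking(n):
    # a stateful transducer: lets n elements through, then raises Reduced
    def transducer(rf):
        state = {"left": n}
        def step(acc, x):
            if state["left"] <= 0:
                raise Reduced(acc)
            state["left"] -= 1
            return rf(acc, x)
        return step
    return transducer

def reduce_source(source, rf, init):
    # analogue of #reduce:init: that honors the Reduced signal
    acc = init
    try:
        for x in source:
            acc = rf(acc, x)
    except Reduced as r:
        return r.value
    return acc

def randoms():
    # an infinite source of floats in [0, 1)
    while True:
        yield random.random()

scale = mapping(lambda x: int(x * 2 + 1))             # [:x | (x * 2 + 1) floor] map
sides = mapping(lambda i: ("heads", "tails")[i - 1])  # #(heads tails) replace
count = taking(1000)                                  # 1000 take
collect = lambda bag, c: (bag.update({c: bag.get(c, 0) + 1}), bag)[1]

# (scale * sides * count) transform: collect - data flows scale, sides, count
experiment = scale(sides(count(collect)))
samples = reduce_source(randoms(), experiment, {})
```

Note the composition order: wrapping collect as scale(sides(count(collect))) means each element passes through scale first, mirroring scale * sides * count.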

Cheers!
Steffen



Am .06.2017, 23:08 Uhr, schrieb Steffen Märcker <[hidden email]>:

Thanks, this appears to work.  Attached you'll find the file-out from
VisualWorks and the file-out from Pharo (includes package comment).

Cheers!
Steffen


Am .06.2017, 20:06 Uhr, schrieb Yanni Chiu <[hidden email]>:

To get the extension methods into the Transducers package, the following
worked for me - edit the category to have the prefix '*Transducers-'

2710c2710

< !Number methodsFor: 'transforming' stamp: ' 2/6/17 15:38'!

---

!Number methodsFor: '*Transducers-transforming' stamp: ' 2/6/17 15:38'!


On Fri, Jun 2, 2017 at 11:05 AM, Steffen Märcker <[hidden email]> wrote:

Dear all,

thanks for the many suggestions. I didn't have time to test all
import/export ways yet. But for now, I can report on two:

1) NGFileOuter
Unfortunately, it raised several MNUs in my image. I'll investigate them
later.

2) FileOut30 (VW Contributed)
I was able to file out the code except for the package definition.
Replacing {category: ''} in the class definitions with {package:
'Transducers'} fixed that. However, methods that extend existing classes
did not end up in the Transducers package. Is there a similarly easy
change to the file-out to make that happen? Also, I'd like to add the
package comment if that's possible.

Most things appear to work as far as I can see. Two exceptions:
1) Random is a subclass of Stream in VW and in Pharo it is not. Hence,
I'll have to copy some methods from Stream to Random.
2) I used #beImmutable in VW but I couldn't yet figure out how to make
objects immutable in Pharo.

However, until the tests are ported, I cannot guarantee anything. Porting
the test suite will be another beast, since I rely on the excellent
mocking/stubbing library DoubleAgents by Randy Coulman. I am not sure how
I will handle that. In general, I think it would really be worth the
effort to port it to Pharo, too. DoubleAgents is pretty powerful and
produces mocking/stubbing code that is easy to read and understand.
Personally, I clearly prefer it, e.g., over Mocketry (no offence
intended!).

Attached you'll find the file-out that I loaded into Pharo. The issues
above are not addressed yet. However, the following example works:

| scale sides count collect experiment random samples coin flip |
scale := [:x | (x * 2 + 1) floor] map.
sides := #(heads tails) replace.
count := 1000 take.
collect := [:bag :c | bag add: c; yourself].
experiment := (scale * sides * count) transform: collect.
random := #(0.1 0.3 0.4 0.5 0.6 0.7 0.8 0.9).

samples := random
              reduce: experiment
              init: Bag new.

samples := random
              transduce: scale * sides * count
              reduce: collect
              init: Bag new.

coin := sides <~ scale <~ random.
flip := Bag <~ count.

samples := flip <~ coin.


Best, Steffen



Am .06.2017, 08:16 Uhr, schrieb Stephane Ducasse
<[hidden email]
>:

There is a package for that NGFileOuter or something like that on cincom
store.
We used it for mobydic code.

On Wed, May 31, 2017 at 6:35 PM, Alexandre Bergel <
[hidden email]>
wrote:

If I remember correctly, there is a parcel in VisualWorks to export a
file
out (Squeak format).

@Milton, can you give a hand to Steffen?

Alexandre
--
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.



On May 31, 2017, at 10:32 AM, Steffen Märcker <[hidden email]> wrote:

Thanks for the encouraging response! First question: Which is the
recommended (friction free) way to exchange code between VW and Pharo?

Cheers!
Steffen

Am .05.2017, 16:22 Uhr, schrieb Alexandre Bergel <
[hidden email]
>:

I second Sven. This is very exciting!

Let us know when you have something ready to be tested.

Alexandre



Re: Porting Transducers to Pharo

Stephane Ducasse-3
In reply to this post by Steffen Märcker
About NG: this is strange, but maybe we were working on an old VW version.

- I can help producing a nice document :)

Do you mean like the booklets published over the last weeks? This would be great.

Yes. Now I do not want to help promote a syntax that alienates me (and others, because other people reported the same to me). :)
 

Do you have an idea, how to add a package comment to the simple file-out it used? I think, a simple message send should suffice.

Produce a Monticello package, then select the package and add the comment.



Re: Porting Transducers to Pharo

Stephane Ducasse-3
In reply to this post by Steffen Märcker
Hi Steffen


> The short answer is that the compact notation turned out to work much better
> for me in my code, especially, if multiple transducers are involved. But
> that's my personal taste. You can choose which suits you better. In fact,
>
>   1000 take.
>
> just sits on top and simply calls
>
>   Take number: 1000.

To me this is much much better.


> If the need arises, we could of course factor the compact notation out into
> a separate package.
Good idea

> Btw, would you prefer (Take n: 1000) over (Take number: 1000)?

I tend to prefer explicit selectors :)


> Damien, you're right, I experimented with additional styles. Right now, we
> already have in the basic Transducer package:
>
>   collection transduce: #squared map * 1000 take. "which is equal to"
>   (collection transduce: #squared map) transduce: 1000 take.
>
> Basically, one can split #transduce:reduce:init: into single calls of
> #transduce:, #reduce:, and #init:, depending on the needs.
> I also have an (unfinished) extension that allows one to write:
>
>   (collection transduce map: #squared) take: 1000.

To me this is much more readable.
I cannot and do not want to use the other forms.


> This feels familiar, but becomes a bit hard to read if more than two steps
> are needed.
>
>   collection transduce
>                map: #squared;
>                take: 1000.

Why would this be hard to read? We do that all the time everywhere.


> I think this alternative would read nicely. But as the message chain has
> to modify the underlying object (an eduction), very sneaky side effects may
> occur. E.g., consider
>
>   eduction := collection transduce.
>   squared  := eduction map: #squared.
>   take     := squared take: 1000.
>
> Now, all three variables hold onto the same object, which first squares all
> elements and then takes the first 1000.

This is because the programmer did not understand what he did. No?



Stef

PS: I played with infinite stream and iteration back in 1993 in CLOS.
Now I do not like to mix things because it breaks my flow of thinking.


>
> Best,
> Steffen
>
>
>
>
>
> Am .06.2017, 21:28 Uhr, schrieb Damien Pollet
> <[hidden email]>:
>
>> If I recall correctly, there is an alternate protocol that looks more like
>> xtreams or the traditional select/collect iterations.
>>
>> On 2 June 2017 at 21:12, Stephane Ducasse <[hidden email]> wrote:
>>
>>> I have a design question
>>>
>>> why the library is implemented in functional style vs messages?
>>> I do not see why this is needed. To my eyes the compact notation
>>> goes against readability of code and it feels ad hoc in Smalltalk.
>>>
>>>
>>> I really prefer
>>>
>>> square := Map function: #squared.
>>> take := Take number: 1000.
>>>
>>> Because I know that I can read it and understand it.
>>> From that perspective I prefer Xtreams.
>>>
>>> Stef
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Wed, May 31, 2017 at 2:23 PM, Steffen Märcker <[hidden email]> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am the developer of the library 'Transducers' for VisualWorks. It was
>>>> formerly known as 'Reducers', but this name was a poor choice. I'd like
>>>> to
>>>> port it to Pharo, if there is any interest on your side. I hope to learn
>>>> more about Pharo in this process, since I am mainly a VW guy. And most
>>>> likely, I will come up with a bunch of questions. :-)
>>>>
>>>> Meanwhile, I'll cross-post the introduction from VWnc below. I'd be very
>>>> happy to hear your opinions and questions, and I hope we can start a
>>>> fruitful discussion - even if there is no Pharo port yet.
>>>>
>>>> Best, Steffen
>>>>
>>>>
>>>>
>>>> Transducers are building blocks that encapsulate how to process elements
>>>> of a data sequence independently of the underlying input and output
>>>> source.
>>>>
>>>>
>>>>
>>>> # Overview
>>>>
>>>> ## Encapsulate
>>>> Implementations of enumeration methods, such as #collect:, share the
>>>> logic of how to process a single element.
>>>> However, that logic is reimplemented each and every time. Transducers
>>>> make it explicit and facilitate re-use and coherent behavior.
>>>> For example:
>>>> - #collect: requires mapping: (aBlock1 map)
>>>> - #select: requires filtering: (aBlock2 filter)
>>>>
>>>>
>>>> ## Compose
>>>> In practice, algorithms often require multiple processing steps, e.g.,
>>>> mapping only a filtered set of elements.
>>>> Transducers are inherently composable, and thereby allow making the
>>>> combination of steps explicit.
>>>> Since transducers do not build intermediate collections, their
>>>> composition
>>>> is memory-efficient.
>>>> For example:
>>>> - (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map elements"
>>>>
>>>>
>>>> ## Re-Use
>>>> Transducers are decoupled from the input and output sources, and hence,
>>>> they can be reused in different contexts.
>>>> For example:
>>>> - enumeration of collections
>>>> - processing of streams
>>>> - communicating via channels
>>>>
>>>>
>>>>
>>>> # Usage by Example
>>>>
>>>> We build a coin flipping experiment and count the occurrence of heads
>>>> and
>>>> tails.
>>>>
>>>> First, we associate random numbers with the sides of a coin.
>>>>
>>>>     scale := [:x | (x * 2 + 1) floor] map.
>>>>     sides := #(heads tails) replace.
>>>>
>>>> Scale is a transducer that maps numbers x between 0 and 1 to 1 and 2.
>>>> Sides is a transducer that replaces the numbers with heads and tails by
>>>> lookup in an array.
>>>> Next, we choose a number of samples.
>>>>
>>>>     count := 1000 take.
>>>>
>>>> Count is a transducer that takes 1000 elements from a source.
>>>> We keep track of the occurrences of heads and tails using a bag.
>>>>
>>>>     collect := [:bag :c | bag add: c; yourself].
>>>>
>>>> Collect is a binary block (reducing function) that collects events in a
>>>> bag.
>>>> We assemble the experiment by transforming the block using the
>>>> transducers.
>>>>
>>>>     experiment := (scale * sides * count) transform: collect.
>>>>
>>>>   From left to right we see the steps involved: scale, sides, count and
>>>> collect.
>>>> Transforming assembles these steps into a binary block (reducing
>>>> function)
>>>> we can use to run the experiment.
>>>>
>>>>     samples := Random new
>>>>                   reduce: experiment
>>>>                   init: Bag new.
>>>>
>>>> Here, we use #reduce:init:, which is mostly similar to #inject:into:.
>>>> To execute a transformation and a reduction together, we can use
>>>> #transduce:reduce:init:.
>>>>
>>>>     samples := Random new
>>>>                   transduce: scale * sides * count
>>>>                   reduce: collect
>>>>                   init: Bag new.
>>>>
>>>> We can also express the experiment as data-flow using #<~.
>>>> This enables us to build objects that can be re-used in other
>>>> experiments.
>>>>
>>>>     coin := sides <~ scale <~ Random new.
>>>>     flip := Bag <~ count.
>>>>
>>>> Coin is an eduction, i.e., it binds transducers to a source and
>>>> understands #reduce:init: among others.
>>>> Flip is a transformed reduction, i.e., it binds transducers to a
>>>> reducing
>>>> function and an initial value.
>>>> By sending #<~, we draw further samples from flipping the coin.
>>>>
>>>>     samples := flip <~ coin.
>>>>
>>>> This yields a new Bag with another 1000 samples.
>>>>
>>>>
>>>>
>>>> # Basic Concepts
>>>>
>>>> ## Reducing Functions
>>>>
>>>> A reducing function represents a single step in processing a data
>>>> sequence.
>>>> It takes an accumulated result and a value, and returns a new
>>>> accumulated
>>>> result.
>>>> For example:
>>>>
>>>>     collect := [:col :e | col add: e; yourself].
>>>>     sum := #+.
>>>>
>>>> A reducing function can also be ternary, i.e., it takes an accumulated
>>>> result, a key and a value.
>>>> For example:
>>>>
>>>>     collect := [:dict :k :v | dict at: k put: v; yourself].
>>>>
>>>> Reducing functions may be equipped with an optional completing action.
>>>> After finishing processing, it is invoked exactly once, e.g., to free
>>>> resources.
>>>>
>>>>     stream := [:str :e | str nextPut: e; yourself] completing:
>>>> #close.
>>>>     absSum := #+ completing: #abs
>>>>
>>>> A reducing function can end processing early by signaling Reduced with a
>>>> result.
>>>> This mechanism also enables the treatment of infinite sources.
>>>>
>>>>     nonNil := [:res :e | e ifNil: [Reduced signalWith: res] ifNotNil:
>>>> [res]].
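The early-termination protocol above translates readily to other languages. A minimal Python analogue (hypothetical names, not part of the library), where an exception carries the final result out of the loop:

```python
class Reduced(Exception):
    """Carries the accumulated result out of a reduction."""
    def __init__(self, value):
        self.value = value

def reduce_with_early_exit(source, rf, init):
    # like #reduce:init:, stopping as soon as Reduced is raised
    acc = init
    try:
        for x in source:
            acc = rf(acc, x)
    except Reduced as r:
        return r.value
    return acc

def non_nil(res, e):
    # analogue of the nonNil block: stop at the first None, keep the rest
    if e is None:
        raise Reduced(res)
    return res + [e]

result = reduce_with_early_exit([1, 2, None, 3], non_nil, [])
```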
>>>>
>>>> The primary approach to process a data sequence is the reducing protocol
>>>> with the messages #reduce:init: and #transduce:reduce:init: if
>>>> transducers
>>>> are involved.
>>>> The behavior is similar to #inject:into: but in addition it takes care
>>>> of:
>>>> - handling binary and ternary reducing functions,
>>>> - invoking the completing action after finishing, and
>>>> - stopping the reduction if Reduced is signaled.
>>>> The message #transduce:reduce:init: just combines the transformation and
>>>> the reducing step.
>>>>
>>>> However, as reducing functions are step-wise in nature, an application
>>>> may
>>>> choose other means to process its data.
>>>>
>>>>
>>>> ## Reducibles
>>>>
>>>> A data source is called reducible if it implements the reducing
>>>> protocol.
>>>> Default implementations are provided for collections and streams.
>>>> Additionally, blocks without an argument are reducible, too.
>>>> This allows adapting to custom data sources without additional effort.
>>>> For example:
>>>>
>>>>     "XStreams adaptor"
>>>>     xstream := filename reading.
>>>>     reducible := [[xstream get] on: Incomplete do: [Reduced signal]].
>>>>
>>>>     "natural numbers"
>>>>     n := 0.
>>>>     reducible := [n := n+1].
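The "natural numbers" block can be mimicked with a zero-argument callable; a rough Python sketch (all helper names hypothetical) of treating such a producer as a reducible source:

```python
class Reduced(Exception):
    def __init__(self, value):
        self.value = value

def reduce_producer(producer, rf, init):
    # pull values from a zero-argument callable until Reduced is raised,
    # mirroring how an argument-less block serves as a reducible source
    acc = init
    try:
        while True:
            acc = rf(acc, producer())
    except Reduced as r:
        return r.value

def make_naturals():
    # closure analogue of "n := 0. reducible := [n := n+1]."
    state = {"n": 0}
    def nats():
        state["n"] += 1
        return state["n"]
    return nats

def take10(acc, x):
    # a reducing function that stops itself after ten elements
    if len(acc) >= 10:
        raise Reduced(acc)
    return acc + [x]

first_ten = reduce_producer(make_naturals(), take10, [])
```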
>>>>
>>>>
>>>> ## Transducers
>>>>
>>>> A transducer is an object that transforms a reducing function into
>>>> another.
>>>> Transducers encapsulate common steps in processing data sequences, such
>>>> as
>>>> map, filter, concatenate, and flatten.
>>>> A transducer transforms a reducing function into another via #transform:
>>>> in order to add those steps.
>>>> They can be composed using #* which yields a new transducer that does
>>>> both
>>>> transformations.
>>>> Most transducers require an argument, typically blocks, symbols or
>>>> numbers:
>>>>
>>>>     square := Map function: #squared.
>>>>     take := Take number: 1000.
>>>>
>>>> To facilitate compact notation, the argument types implement
>>>> corresponding
>>>> methods:
>>>>
>>>>     squareAndTake := #squared map * 1000 take.
>>>>
>>>> Transducers requiring no argument are singletons and can be accessed by
>>>> their class name.
>>>>
>>>>     flattenAndDedupe := Flatten * Dedupe.
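Composition via #* can be sketched in Python as ordinary function composition over reducing functions (names hypothetical, shown only to illustrate the order in which steps apply):

```python
from functools import reduce

def mapping(f):
    # transform each element before handing it to the next step
    return lambda rf: (lambda acc, x: rf(acc, f(x)))

def filtering(pred):
    # pass an element on only if the predicate holds
    return lambda rf: (lambda acc, x: rf(acc, x) if pred(x) else acc)

def compose(*transducers):
    # t1 * t2: data passes through t1's step first, then t2's,
    # so the reducing function is wrapped in reverse order
    def composed(rf):
        for t in reversed(transducers):
            rf = t(rf)
        return rf
    return composed

append = lambda acc, x: acc + [x]
square_odds = compose(filtering(lambda x: x % 2 == 1),
                      mapping(lambda x: x * x))
result = reduce(square_odds(append), range(1, 6), [])
```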
>>>>
>>>>
>>>>
>>>> # Advanced Concepts
>>>>
>>>> ## Data flows
>>>>
>>>> Processing a sequence of data can often be regarded as a data flow.
>>>> The operator #<~ allows defining a flow from a data source through
>>>> processing steps to a drain.
>>>> For example:
>>>>
>>>>     squares := Set <~ 1000 take <~ #squared map <~ (1 to: 1000).
>>>>     fileOut writeStream <~ #isSeparator filter <~ fileIn readStream.
>>>>
>>>> In both examples #<~ is only used to set up the data flow using reducing
>>>> functions and transducers.
>>>> In contrast to streams, transducers are completely independent from
>>>> input
>>>> and output sources.
>>>> Hence, we have a clear separation of reading data, writing data and
>>>> processing elements.
>>>> - Sources know how to iterate over data with a reducing function, e.g.,
>>>> via #reduce:init:.
>>>> - Drains know how to collect data using a reducing function.
>>>> - Transducers know how to process single elements.
>>>>
>>>>
>>>> ## Reductions
>>>>
>>>> A reduction binds an initial value or a block yielding an initial value
>>>> to
>>>> a reducing function.
>>>> The idea is to define a ready-to-use process that can be applied in
>>>> different contexts.
>>>> Reducibles handle reductions via #reduce: and #transduce:reduce:.
>>>> For example:
>>>>
>>>>     sum := #+ init: 0.
>>>>     sum1 := #(1 1 1) reduce: sum.
>>>>     sum2 := (1 to: 1000) transduce: #odd filter reduce: sum.
>>>>
>>>>     asSet := [:set :e | set add: e; yourself] initializer: [Set new].
>>>>     set1 := #(1 1 1) reduce: asSet.
>>>>     set2 := (1 to: 1000) transduce: #odd filter reduce: asSet.
>>>>
>>>> By combining a transducer with a reduction, a process can be further
>>>> modified.
>>>>
>>>>     sumOdds := sum <~ #odd filter
>>>>     setOdds := asSet <~ #odd filter
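A reduction as described above, binding a reducing function to a fresh initial value, might look like this in Python (hypothetical names; the init_fn callable plays the role of #initializer:):

```python
import operator

def reduction(rf, init_fn):
    # bind a reducing function to a block yielding a fresh initial value,
    # so the resulting process can be reused on different sources
    def run(source):
        acc = init_fn()
        for x in source:
            acc = rf(acc, x)
        return acc
    return run

total = reduction(operator.add, lambda: 0)        # like: #+ init: 0
as_set = reduction(lambda s, e: (s.add(e), s)[1], # like the asSet block
                   set)
sum1 = total([1, 1, 1])
set1 = as_set([1, 1, 2])
```

Using a callable for the initial value matters: each run starts from a fresh accumulator, so as_set can be applied to several sources without sharing state.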
>>>>
>>>>
>>>> ## Eductions
>>>>
>>>> An eduction combines a reducible data source with a transducer.
>>>> The idea is to define a transformed (virtual) data source that need not
>>>> be stored in memory.
>>>>
>>>>     odds1 := #odd filter <~ #(1 2 3) readStream.
>>>>     odds2 := #odd filter <~ (1 to: 1000).
>>>>
>>>> Depending on the underlying source, eductions can be processed once
>>>> (streams, e.g., odds1) or multiple times (collections, e.g., odds2).
>>>> Since no intermediate data is stored, transducer actions are lazy,
>>>> i.e.,
>>>> they are invoked each time the eduction is processed.
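The eduction idea, a transformed virtual view over a source, can be sketched in Python as follows (hypothetical names; a re-reducible source like a range corresponds to a collection, a one-shot generator to a stream):

```python
def filtering(pred):
    # pass an element to the next step only if the predicate holds
    return lambda rf: (lambda acc, x: rf(acc, x) if pred(x) else acc)

def eduction(transducer, source):
    # a transformed, virtual view over a source; nothing is materialized
    # until the eduction is actually reduced
    def reduce_with(rf, init):
        step = transducer(rf)
        acc = init
        for x in source:
            acc = step(acc, x)
        return acc
    return reduce_with

odds = eduction(filtering(lambda x: x % 2 == 1), range(1, 7))
collected = odds(lambda acc, x: acc + [x], [])  # first reduction
summed = odds(lambda acc, x: acc + x, 0)        # a range can be reduced again
```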
>>>>
>>>>
>>>>
>>>> # Origins
>>>>
>>>> Transducers is based on the same-named Clojure library and its ideas.
>>>> Please see:
>>>> http://clojure.org/transducers
>>>>
>>>>
>


Re: Porting Transducers to Pharo

philippeback
Coupling this with Olek's work on the DataFrame could really come in handy.

Phil

On Mon, Jun 5, 2017 at 9:14 AM, Stephane Ducasse <[hidden email]> wrote:
Hi Steffen


> The short answer is that the compact notation turned out to work much better
> for me in my code, especially, if multiple transducers are involved. But
> that's my personal taste. You can choose which suits you better. In fact,
>
>   1000 take.
>
> just sits on top and simply calls
>
>   Take number: 1000.

To me this is much much better.


> If the need arises, we could of course factor the compact notation out into
> a separate package.
Good idea

 Btw, would you prefer (Take n: 1000) over (Take number:
> 1000)?

I tend to prefer explicit selector :)


> Damien, you're right, I experimented with additional styles. Right now, we
> already have in the basic Transducer package:
>
>   (collection transduce: #squared map * 1000 take. "which is equal to"
>   (collection transduce: #squared map) transduce: 1000 take.
>
> Basically, one can split #transduce:reduce:init: into single calls of
> #transduce:, #reduce:, and #init:, depending on the needs.
> I also have an (unfinished) extension, that allows to write:
>
>   (collection transduce map: #squared) take: 1000.

To me this is much mre readable.
I cannot and do not want to use the other forms.


> This feels familiar, but becomes a bit hard to read if more than two steps
> are needed.
>
>   collection transduce
>                map: #squared;
>                take: 1000.

Why this is would hard to read. We do that all the time everywhere.


> I think, this alternative would reads nicely. But as the message chain has
> to modify the underlying object (an eduction), very snaky side effects may
> occur. E.g., consider
>
>   eduction := collection transduce.
>   squared  := eduction map: #squared.
>   take     := squared take: 1000.
>
> Now, all three variables hold onto the same object, which first squares all
> elements and than takes the first 1000.

This is because the programmer did not understand what he did. No?



Stef

PS: I played with infinite stream and iteration back in 1993 in CLOS.
Now I do not like to mix things because it breaks my flow of thinking.


>
> Best,
> Steffen
>
>
>
>
>
> Am .06.2017, 21:28 Uhr, schrieb Damien Pollet
> <[hidden email]>:
>
>> If I recall correctly, there is an alternate protocol that looks more like
>> xtreams or the traditional select/collect iterations.
>>
>> On 2 June 2017 at 21:12, Stephane Ducasse <[hidden email]> wrote:
>>
>>> I have a design question
>>>
>>> why the library is implemented in functional style vs messages?
>>> I do not see why this is needed. To my eyes the compact notation
>>> goes against readibility of code and it feels ad-hoc in Smalltalk.
>>>
>>>
>>> I really prefer
>>>
>>> square := Map function: #squared.
>>> take := Take number: 1000.
>>>
>>> Because I know that I can read it and understand it.
>>> From that perspective I prefer Xtreams.
>>>
>>> Stef
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Wed, May 31, 2017 at 2:23 PM, Steffen Märcker <[hidden email]> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am the developer of the library 'Transducers' for VisualWorks. It was
>>>> formerly known as 'Reducers', but this name was a poor choice. I'd like
>>>> to
>>>> port it to Pharo, if there is any interest on your side. I hope to learn
>>>> more about Pharo in this process, since I am mainly a VW guy. And most
>>>> likely, I will come up with a bunch of questions. :-)
>>>>
>>>> Meanwhile, I'll cross-post the introduction from VWnc below. I'd be very
>>>> happy to hear your optinions, questions and I hope we can start a
>>>> fruitful
>>>> discussion - even if there is not Pharo port yet.
>>>>
>>>> Best, Steffen
>>>>
>>>>
>>>>
>>>> Transducers are building blocks that encapsulate how to process elements
>>>> of a data sequence independently of the underlying input and output
>>>> source.
>>>>
>>>>
>>>>
>>>> # Overview
>>>>
>>>> ## Encapsulate
>>>> Implementations of enumeration methods, such as #collect:, have the
>>>> logic
>>>> how to process a single element in common.
>>>> However, that logic is reimplemented each and every time. Transducers
>>>> make
>>>> it explicit and facilitate re-use and coherent behavior.
>>>> For example:
>>>> - #collect: requires mapping: (aBlock1 map)
>>>> - #select: requires filtering: (aBlock2 filter)
>>>>
>>>>
>>>> ## Compose
>>>> In practice, algorithms often require multiple processing steps, e.g.,
>>>> mapping only a filtered set of elements.
>>>> Transducers are inherently composable, and hereby, allow to make the
>>>> combination of steps explicit.
>>>> Since transducers do not build intermediate collections, their
>>>> composition
>>>> is memory-efficient.
>>>> For example:
>>>> - (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map elements"
>>>>
>>>>
>>>> ## Re-Use
>>>> Transducers are decoupled from the input and output sources, and hence,
>>>> they can be reused in different contexts.
>>>> For example:
>>>> - enumeration of collections
>>>> - processing of streams
>>>> - communicating via channels
>>>>
>>>>
>>>>
>>>> # Usage by Example
>>>>
>>>> We build a coin flipping experiment and count the occurrence of heads
>>>> and
>>>> tails.
>>>>
>>>> First, we associate random numbers with the sides of a coin.
>>>>
>>>>     scale := [:x | (x * 2 + 1) floor] map.
>>>>     sides := #(heads tails) replace.
>>>>
>>>> Scale is a transducer that maps numbers x between 0 and 1 to 1 and 2.
>>>> Sides is a transducer that replaces the numbers with heads an tails by
>>>> lookup in an array.
>>>> Next, we choose a number of samples.
>>>>
>>>>     count := 1000 take.
>>>>
>>>> Count is a transducer that takes 1000 elements from a source.
>>>> We keep track of the occurrences of heads an tails using a bag.
>>>>
>>>>     collect := [:bag :c | bag add: c; yourself].
>>>>
>>>> Collect is binary block (reducing function) that collects events in a
>>>> bag.
>>>> We assemble the experiment by transforming the block using the
>>>> transducers.
>>>>
>>>>     experiment := (scale * sides * count) transform: collect.
>>>>
>>>>   From left to right we see the steps involved: scale, sides, count and
>>>> collect.
>>>> Transforming assembles these steps into a binary block (reducing
>>>> function)
>>>> we can use to run the experiment.
>>>>
>>>>     samples := Random new
>>>>                   reduce: experiment
>>>>                   init: Bag new.
>>>>
>>>> Here, we use #reduce:init:, which is mostly similar to #inject:into:.
>>>> To execute a transformation and a reduction together, we can use
>>>> #transduce:reduce:init:.
>>>>
>>>>     samples := Random new
>>>>                   transduce: scale * sides * count
>>>>                   reduce: collect
>>>>                   init: Bag new.
>>>>
>>>> We can also express the experiment as data-flow using #<~.
>>>> This enables us to build objects that can be re-used in other
>>>> experiments.
>>>>
>>>>     coin := sides <~ scale <~ Random new.
>>>>     flip := Bag <~ count.
>>>>
>>>> Coin is an eduction, i.e., it binds transducers to a source and
>>>> understands #reduce:init: among others.
>>>> Flip is a transformed reduction, i.e., it binds transducers to a
>>>> reducing
>>>> function and an initial value.
>>>> By sending #<~, we draw further samples from flipping the coin.
>>>>
>>>>     samples := flip <~ coin.
>>>>
>>>> This yields a new Bag with another 1000 samples.
>>>>
>>>>
>>>>
>>>> # Basic Concepts
>>>>
>>>> ## Reducing Functions
>>>>
>>>> A reducing function represents a single step in processing a data
>>>> sequence.
>>>> It takes an accumulated result and a value, and returns a new
>>>> accumulated
>>>> result.
>>>> For example:
>>>>
>>>>     collect := [:col :e | col add: e; yourself].
>>>>     sum := #+.
>>>>
>>>> A reducing function can also be ternary, i.e., it takes an accumulated
>>>> result, a key and a value.
>>>> For example:
>>>>
>>>>     collect := [:dic :k :v | dict at: k put: v; yourself].
>>>>
>>>> Reducing functions may be equipped with an optional completing action.
>>>> After finishing processing, it is invoked exactly once, e.g., to free
>>>> resources.
>>>>
>>>>     stream := [:str :e | str nextPut: each; yourself] completing:
>>>> #close.
>>>>     absSum := #+ completing: #abs
>>>>
>>>> A reducing function can end processing early by signaling Reduced with a
>>>> result.
>>>> This mechanism also enables the treatment of infinite sources.
>>>>
>>>>     nonNil := [:res :e | e ifNil: [Reduced signalWith: res] ifFalse:
>>>> [res]].
>>>>
>>>> The primary approach to process a data sequence is the reducing protocol
>>>> with the messages #reduce:init: and #transduce:reduce:init: if
>>>> transducers
>>>> are involved.
>>>> The behavior is similar to #inject:into: but in addition it takes care
>>>> of:
>>>> - handling binary and ternary reducing functions,
>>>> - invoking the completing action after finishing, and
>>>> - stopping the reduction if Reduced is signaled.
>>>> The message #transduce:reduce:init: just combines the transformation and
>>>> the reducing step.
>>>>
>>>> However, as reducing functions are step-wise in nature, an application
>>>> may
>>>> choose other means to process its data.
>>>>
>>>>
>>>> ## Reducibles
>>>>
>>>> A data source is called reducible if it implements the reducing
>>>> protocol.
>>>> Default implementations are provided for collections and streams.
>>>> Additionally, blocks without an argument are reducible, too.
>>>> This allows to adapt to custom data sources without additional effort.
>>>> For example:
>>>>
>>>>     "XStreams adaptor"
>>>>     xstream := filename reading.
>>>>     reducible := [[xstream get] on: Incomplete do: [Reduced signal]].
>>>>
>>>>     "natural numbers"
>>>>     n := 0.
>>>>     reducible := [n := n+1].
>>>>
>>>>
>>>> ## Transducers
>>>>
>>>> A transducer is an object that transforms a reducing function into
>>>> another.
>>>> Transducers encapsulate common steps in processing data sequences, such
>>>> as
>>>> map, filter, concatenate, and flatten.
>>>> A transducer transforms a reducing function into another via #transform:
>>>> in order to add those steps.
>>>> They can be composed using #* which yields a new transducer that does
>>>> both
>>>> transformations.
>>>> Most transducers require an argument, typically blocks, symbols or
>>>> numbers:
>>>>
>>>>     square := Map function: #squared.
>>>>     take := Take number: 1000.
>>>>
>>>> To facilitate compact notation, the argument types implement
>>>> corresponding
>>>> methods:
>>>>
>>>>     squareAndTake := #squared map * 1000 take.
>>>>
>>>> Transducers requiring no argument are singletons and can be accessed by
>>>> their class name.
>>>>
>>>>     flattenAndDedupe := Flatten * Dedupe.
>>>>
>>>>
>>>>
>>>> # Advanced Concepts
>>>>
>>>> ## Data flows
>>>>
>>>> Processing a sequence of data can often be regarded as a data flow.
>>>> The operator #<~ allows define a flow from a data source through
>>>> processing steps to a drain.
>>>> For example:
>>>>
>>>>     squares := Set <~ 1000 take <~ #squared map <~ (1 to: 1000).
>>>>     fileOut writeStream <~ #isSeparator filter <~ fileIn readStream.
>>>>
>>>> In both examples #<~ is only used to set up the data flow using reducing
>>>> functions and transducers.
>>>> In contrast to streams, transducers are completely independent from
>>>> input
>>>> and output sources.
>>>> Hence, we have a clear separation of reading data, writing data and
>>>> processing elements.
>>>> - Sources know how to iterate over data with a reducing function, e.g.,
>>>> via #reduce:init:.
>>>> - Drains know how to collect data using a reducing function.
>>>> - Transducers know how to process single elements.
>>>>
>>>>
>>>> ## Reductions
>>>>
>>>> A reduction binds an initial value or a block yielding an initial value
>>>> to
>>>> a reducing function.
>>>> The idea is to define a ready-to-use process that can be applied in
>>>> different contexts.
>>>> Reducibles handle reductions via #reduce: and #transduce:reduce:
>>>> For example:
>>>>
>>>>     sum := #+ init: 0.
>>>>     sum1 := #(1 1 1) reduce: sum.
>>>>     sum2 := (1 to: 1000) transduce: #odd filter reduce: sum.
>>>>
>>>>     asSet := [:set :e | set add: e; yourself] initializer: [Set new].
>>>>     set1 := #(1 1 1) reduce: asSet.
>>>>     set2 := #(1 to: 1000) transduce: #odd filter reduce: asSet.
>>>>
>>>> By combining a transducer with a reduction, a process can be further
>>>> modified.
>>>>
>>>>     sumOdds := sum <~ #odd filter
>>>>     setOdds := asSet <~ #odd filter
>>>>
>>>>
>>>> ## Eductions
>>>>
>>>> An eduction combines a reducible data source with a transducer.
>>>> The idea is to define a transformed (virtual) data source that need
>>>> not be stored in memory.
>>>>
>>>>     odds1 := #odd filter <~ #(1 2 3) readStream.
>>>>     odds2 := #odd filter <~ (1 to: 1000).
>>>>
>>>> Depending on the underlying source, eductions can be processed once
>>>> (streams, e.g., odds1) or multiple times (collections, e.g., odds2).
>>>> Since no intermediate data is stored, transducer actions are lazy,
>>>> i.e.,
>>>> they are invoked each time the eduction is processed.
>>>>
>>>>
>>>>
>>>> # Origins
>>>>
>>>> Transducers is based on the same-named Clojure library and its ideas.
>>>> Please see:
>>>> http://clojure.org/transducers
>>>>
>>>>
>



Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by Stephane Ducasse-3
I assumed so. I used file-outs only to get something working quickly. =)  
Properly published code will follow as soon as I am more familiar with  
code management in Pharo.


Am .06.2017, 09:05 Uhr, schrieb Stephane Ducasse <[hidden email]>:

> We do not work with fileouts :)
> You should produce packages, together with a configuration, and publish them
> on
> smalltalkhub or git and
> in the MetaRepository.
> You can also add package comments
>
> On Sat, Jun 3, 2017 at 10:29 PM, Steffen Märcker <[hidden email]> wrote:
>
>> Dear all,
>>
>> attached are updated file-outs. I fixed a couple of annoyances that
>> slipped through yesterday evening. Most notable:
>>
>> 1) Random generator now works.
>> 2) Early termination via the Reduced exception does not MNU anymore.
>> 3) Printing a transducer holding a block does not MNU anymore.
>>
>> Please, give it a spin and tell me your impressions. (At least) the
>> coin-flipping example from the package comment works now:
>>
>> scale := [:x | (x * 2 + 1) floor] map.
>> sides := #(heads tails) replace.
>> count := 1000 take.
>> collect := [:bag :c | bag add: c; yourself].
>> experiment := (scale * sides * count) transform: collect.
>> "experiment cannot be re-used"
>> samples := Random new
>>               reduce: experiment
>>               init: Bag new.
>> "transform and reduce in one step"
>> samples := Random new
>>               transduce: scale * sides * count
>>               reduce: collect
>>               init: Bag new.
>> "assemble coin (eduction) and flip (reduction) objects"
>> coin := sides <~ scale <~ Random new.
>> flip := Bag <~ count.
>> "flip coin =)"
>> samples := flip <~ coin.
>>
>> Cheers!
>> Steffen
>>
>>
>>
>> Am .06.2017, 23:08 Uhr, schrieb Steffen Märcker <[hidden email]>:
>>
>> Thanks, this appears to work.  Attached you'll find the file-out from
>>> VisualWorks and the file-out from Pharo (includes package comment).
>>>
>>> Cheers!
>>> Steffen
>>>
>>>
>>> Am .06.2017, 20:06 Uhr, schrieb Yanni Chiu <[hidden email]>:
>>>
>>> To get the extension methods into the Transducers package, the  
>>> following
>>>> worked for me - edit the category to have the prefix '*Transducers-'
>>>>
>>>> 2710c2710
>>>>
>>>> < !Number methodsFor: 'transforming' stamp: ' 2/6/17 15:38'!
>>>>
>>>> ---
>>>>
>>>> !Number methodsFor: '*Transducers-transforming' stamp: ' 2/6/17  
>>>> 15:38'!
>>>>>
>>>>
>>>>
>>>> On Fri, Jun 2, 2017 at 11:05 AM, Steffen Märcker <[hidden email]>  
>>>> wrote:
>>>>
>>>> Dear all,
>>>>>
>>>>> thanks for the many suggestions. I didn't have time to test all
>>>>> import/export ways yet. But for now, I can report on two:
>>>>>
>>>>> 1) NGFileOuter
>>>>> Unfortunately it raised several MNUs in my image. I'll investigate
>>>>> them
>>>>> later.
>>>>>
>>>>> 2) FileOut30 (VW Contributed)
>>>>> I was able to file out the code except for the package definition.
>>>>> Replacing {category: ''} in the class definitions with {package:
>>>>> 'Transducers'} fixed that. However, methods that extend existing  
>>>>> classes
>>>>> did not end up in the Transducers package. Is there a similar easy
>>>>> change
>>>>> to the file-out making that happen? Also I'd like to add the package
>>>>> comment if that's possible.
>>>>>
>>>>> Most things appear to work as far as I can see. Two exceptions:
>>>>> 1) Random is a subclass of Stream in VW and in Pharo it is not.  
>>>>> Hence,
>>>>> I'll have to copy some methods from Stream to Random.
>>>>> 2) I used #beImmutable in VW but I couldn't yet figure out how to  
>>>>> make
>>>>> objects immutable in Pharo.
>>>>>
>>>>> However, until the tests are ported, I cannot guarantee correctness. Porting the
>>>>> test
>>>>> suite will be another beast, since I rely on the excellent
>>>>> mocking/stubbing
>>>>> library DoubleAgents by Randy Coulman. I am not sure how I will  
>>>>> handle
>>>>> that. In general, I think it would be really worth the effort to be
>>>>> ported
>>>>> to Pharo, too. DoubleAgents is pretty powerful and produces easy to  
>>>>> read
>>>>> and understand mocking/stubbing code. Personally, I clearly prefer it,
>>>>> e.g., over Mocketry (no offence intended!).
>>>>>
>>>>> Attached you'll find the file-out that I loaded into Pharo. The  
>>>>> issues
>>>>> above are not addressed yet. However, the following example works:
>>>>>
>>>>> | scale sides count collect experiment random samples coin flip |
>>>>> scale := [:x | (x * 2 + 1) floor] map.
>>>>> sides := #(heads tails) replace.
>>>>> count := 1000 take.
>>>>> collect := [:bag :c | bag add: c; yourself].
>>>>> experiment := (scale * sides * count) transform: collect.
>>>>> random := #(0.1 0.3 0.4 0.5 0.6 0.7 0.8 0.9).
>>>>>
>>>>> samples := random
>>>>>               reduce: experiment
>>>>>               init: Bag new.
>>>>>
>>>>> samples := random
>>>>>               transduce: scale * sides * count
>>>>>               reduce: collect
>>>>>               init: Bag new.
>>>>>
>>>>> coin := sides <~ scale <~ random.
>>>>> flip := Bag <~ count.
>>>>>
>>>>> samples := flip <~ coin.
>>>>>
>>>>>
>>>>> Best, Steffen
>>>>>
>>>>>
>>>>>
>>>>> Am .06.2017, 08:16 Uhr, schrieb Stephane Ducasse
>>>>> <[hidden email]
>>>>> >:
>>>>>
>>>>> There is a package for that NGFileOuter or something like that on  
>>>>> cincom
>>>>>
>>>>>> store.
>>>>>> We used it for mobydic code.
>>>>>>
>>>>>> On Wed, May 31, 2017 at 6:35 PM, Alexandre Bergel <
>>>>>> [hidden email]>
>>>>>> wrote:
>>>>>>
>>>>>> If I remember correctly, there is a parcel in VisualWorks to export  
>>>>>> a
>>>>>> file
>>>>>>
>>>>>>> out (Squeak format).
>>>>>>>
>>>>>>> @Milton, can you give a hand to Steffen?
>>>>>>>
>>>>>>> Alexandre
>>>>>>> --
>>>>>>> _,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
>>>>>>> Alexandre Bergel  http://www.bergel.eu
>>>>>>> ^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On May 31, 2017, at 10:32 AM, Steffen Märcker <[hidden email]>  
>>>>>>> wrote:
>>>>>>>
>>>>>>> Thanks for the encouraging response! First question: Which is the
>>>>>>> recommended (friction free) way to exchange code between VW and  
>>>>>>> Pharo?
>>>>>>>
>>>>>>> Cheers!
>>>>>>> Steffen
>>>>>>>
>>>>>>> Am .05.2017, 16:22 Uhr, schrieb Alexandre Bergel <
>>>>>>> [hidden email]
>>>>>>> >:
>>>>>>>
>>>>>>> I second Sven. This is very exciting!
>>>>>>>
>>>>>>> Let us know when you have something ready to be tested.
>>>>>>>
>>>>>>> Alexandre
>>>>>>>


Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by Stephane Ducasse-3
Hi!

>> If the need arises, we could of course factor the compact notation out  
>> into
>> a separate package.
> Good idea
> [...] I do not want to help promote a syntax that alienates me (and
> others, because other people reported the same to me).

I understand. Btw, I'd really appreciate it if others posted their  
thoughts and feedback here as well. Discussion helps move things  
forward. =)


>>   (collection transduce map: #squared) take: 1000.
>
> To me this is much more readable.

Well, I'll provide that extension once it is finished.

> I cannot and do not want to use the other forms.


>>   collection transduce
>>                map: #squared;
>>                take: 1000.
>>
>> But as the message chain has to modify the underlying object
>> (an eduction), very sneaky side effects may occur. E.g., consider
>>
>>   eduction := collection transduce.
>>   squared  := eduction map: #squared.
>>   take     := squared take: 1000.
>>
>> Now, all three variables hold onto the same object, which first squares  
>> all elements and then takes the first 1000.
>
> This is because the programmer did not understand what he did. No?

Sure. ;-) Nevertheless, it would be very hard to debug. That is exactly  
why I wouldn't implement that variant. ;-)


> PS: I played with infinite stream and iteration back in 1993 in CLOS.
> Now I do not like to mix things because it breaks my flow of thinking.


I am not sure whether I understand what you mean by mixing. Concerning  
transducers, the ability to handle infinite sources is only a (natural)  
side effect of the ability to finish reductions before all elements are  
processed, much like #detect: does.
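To make this concrete, here is a minimal sketch assembled only from constructs  
shown in the package comment (the infinite source is a zero-argument block, as  
in the "natural numbers" reducible example). The take transducer signals  
Reduced after 1000 elements, so the reduction over an infinite source  
terminates:

    | n naturals squares |
    n := 0.
    naturals := [n := n + 1].   "infinite reducible source"
    squares := naturals
        transduce: #squared map * 1000 take
        reduce: [:col :e | col add: e; yourself]
        init: OrderedCollection new.
    "squares now holds the squares of 1 through 1000"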

Best, Steffen





Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by philippeback
Hi Phil,

> Coupling this with Olek's work on the DataFrame could really come handy.

I am new to this list. Could you please elaborate?

Cheers!
Steffen



> On Mon, Jun 5, 2017 at 9:14 AM, Stephane Ducasse  
> <[hidden email]>
> wrote:
>
>> Hi Steffen
>>
>>
>> > The short answer is that the compact notation turned out to work much
>> better
>> > for me in my code, especially, if multiple transducers are involved.  
>> But
>> > that's my personal taste. You can choose which suits you better. In  
>> fact,
>> >
>> >   1000 take.
>> >
>> > just sits on top and simply calls
>> >
>> >   Take number: 1000.
>>
>> To me this is much much better.
>>
>>
>> > If the need arises, we could of course factor the compact notation out
>> into
>> > a separate package.
>> Good idea
>>
>>  Btw, would you prefer (Take n: 1000) over (Take number:
>> > 1000)?
>>
>> I tend to prefer explicit selector :)
>>
>>
>> > Damien, you're right, I experimented with additional styles. Right  
>> now,
>> we
>> > already have in the basic Transducer package:
>> >
>> >   collection transduce: #squared map * 1000 take. "which is equal to"
>> >   (collection transduce: #squared map) transduce: 1000 take.
>> >
>> > Basically, one can split #transduce:reduce:init: into single calls of
>> > #transduce:, #reduce:, and #init:, depending on the needs.
>> > I also have an (unfinished) extension, that allows to write:
>> >
>> >   (collection transduce map: #squared) take: 1000.
>>
>> To me this is much more readable.
>> I cannot and do not want to use the other forms.
>>
>>
>> > This feels familiar, but becomes a bit hard to read if more than two
>> steps
>> > are needed.
>> >
>> >   collection transduce
>> >                map: #squared;
>> >                take: 1000.
>>
>> Why would this be hard to read? We do that all the time everywhere.
>>
>>
>> > I think this alternative would read nicely. But as the message chain
>> has
>> > to modify the underlying object (an eduction), very sneaky side effects
>> may
>> > occur. E.g., consider
>> >
>> >   eduction := collection transduce.
>> >   squared  := eduction map: #squared.
>> >   take     := squared take: 1000.
>> >
>> > Now, all three variables hold onto the same object, which first  
>> squares
>> all
>> > elements and then takes the first 1000.
>>
>> This is because the programmer did not understand what he did. No?
>>
>>
>>
>> Stef
>>
>> PS: I played with infinite stream and iteration back in 1993 in CLOS.
>> Now I do not like to mix things because it breaks my flow of thinking.
>>
>>
>> >
>> > Best,
>> > Steffen
>> >
>> >
>> >
>> >
>> >
>> > Am .06.2017, 21:28 Uhr, schrieb Damien Pollet
>> > <[hidden email]>:
>> >
>> >> If I recall correctly, there is an alternate protocol that looks more
>> like
>> >> xtreams or the traditional select/collect iterations.
>> >>
>> >> On 2 June 2017 at 21:12, Stephane Ducasse <[hidden email]>
>> wrote:
>> >>
>> >>> I have a design question
>> >>>
>> >>> why the library is implemented in functional style vs messages?
>> >>> I do not see why this is needed. To my eyes the compact notation
>> >>> goes against readability of code and it feels ad-hoc in Smalltalk.
>> >>>
>> >>>
>> >>> I really prefer
>> >>>
>> >>> square := Map function: #squared.
>> >>> take := Take number: 1000.
>> >>>
>> >>> Because I know that I can read it and understand it.
>> >>> From that perspective I prefer Xtreams.
>> >>>
>> >>> Stef
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>>
>> >>> On Wed, May 31, 2017 at 2:23 PM, Steffen Märcker <[hidden email]>
>> wrote:
>> >>>
>> >>>> Hi,
>> >>>>
>> >>>> I am the developer of the library 'Transducers' for VisualWorks. It
>> was
>> >>>> formerly known as 'Reducers', but this name was a poor choice. I'd
>> like
>> >>>> to
>> >>>> port it to Pharo, if there is any interest on your side. I hope to
>> learn
>> >>>> more about Pharo in this process, since I am mainly a VW guy. And  
>> most
>> >>>> likely, I will come up with a bunch of questions. :-)
>> >>>>
>> >>>> Meanwhile, I'll cross-post the introduction from VWnc below. I'd be
>> very
>> >>>> happy to hear your opinions, questions and I hope we can start a
>> >>>> fruitful
>> >>>> discussion - even if there is no Pharo port yet.
>> >>>>
>> >>>> Best, Steffen
>> >>>>
>> >>>>
>> >>>>
>> >>>> Transducers are building blocks that encapsulate how to process
>> elements
>> >>>> of a data sequence independently of the underlying input and output
>> >>>> source.
>> >>>>
>> >>>>
>> >>>>
>> >>>> # Overview
>> >>>>
>> >>>> ## Encapsulate
>> >>>> Implementations of enumeration methods, such as #collect:, have the
>> >>>> logic
>> >>>> how to process a single element in common.
>> >>>> However, that logic is reimplemented each and every time.  
>> Transducers
>> >>>> make
>> >>>> it explicit and facilitate re-use and coherent behavior.
>> >>>> For example:
>> >>>> - #collect: requires mapping: (aBlock1 map)
>> >>>> - #select: requires filtering: (aBlock2 filter)
>> >>>>
>> >>>>
>> >>>> ## Compose
>> >>>> In practice, algorithms often require multiple processing steps,  
>> e.g.,
>> >>>> mapping only a filtered set of elements.
>> >>>> Transducers are inherently composable, and hereby, allow to make  
>> the
>> >>>> combination of steps explicit.
>> >>>> Since transducers do not build intermediate collections, their
>> >>>> composition
>> >>>> is memory-efficient.
>> >>>> For example:
>> >>>> - (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map
>> elements"
>> >>>>
>> >>>>
>> >>>> ## Re-Use
>> >>>> Transducers are decoupled from the input and output sources, and
>> hence,
>> >>>> they can be reused in different contexts.
>> >>>> For example:
>> >>>> - enumeration of collections
>> >>>> - processing of streams
>> >>>> - communicating via channels
>> >>>>
>> >>>>
>> >>>>
>> >>>> # Usage by Example
>> >>>>
>> >>>> We build a coin flipping experiment and count the occurrence of  
>> heads
>> >>>> and
>> >>>> tails.
>> >>>>
>> >>>> First, we associate random numbers with the sides of a coin.
>> >>>>
>> >>>>     scale := [:x | (x * 2 + 1) floor] map.
>> >>>>     sides := #(heads tails) replace.
>> >>>>
>> >>>> Scale is a transducer that maps numbers x between 0 and 1 to 1 and  
>> 2.
>> >>>> Sides is a transducer that replaces the numbers with heads an  
>> tails by
>> >>>> lookup in an array.
>> >>>> Next, we choose a number of samples.
>> >>>>
>> >>>>     count := 1000 take.
>> >>>>
>> >>>> Count is a transducer that takes 1000 elements from a source.
>> >>>> We keep track of the occurrences of heads an tails using a bag.
>> >>>>
>> >>>>     collect := [:bag :c | bag add: c; yourself].
>> >>>>
>> >>>> Collect is binary block (reducing function) that collects events  
>> in a
>> >>>> bag.
>> >>>> We assemble the experiment by transforming the block using the
>> >>>> transducers.
>> >>>>
>> >>>>     experiment := (scale * sides * count) transform: collect.
>> >>>>
>> >>>>   From left to right we see the steps involved: scale, sides, count
>> and
>> >>>> collect.
>> >>>> Transforming assembles these steps into a binary block (reducing
>> >>>> function)
>> >>>> we can use to run the experiment.
>> >>>>
>> >>>>     samples := Random new
>> >>>>                   reduce: experiment
>> >>>>                   init: Bag new.
>> >>>>
>> >>>> Here, we use #reduce:init:, which is mostly similar to  
>> #inject:into:.
>> >>>> To execute a transformation and a reduction together, we can use
>> >>>> #transduce:reduce:init:.
>> >>>>
>> >>>>     samples := Random new
>> >>>>                   transduce: scale * sides * count
>> >>>>                   reduce: collect
>> >>>>                   init: Bag new.
>> >>>>
>> >>>> We can also express the experiment as data-flow using #<~.
>> >>>> This enables us to build objects that can be re-used in other
>> >>>> experiments.
>> >>>>
>> >>>>     coin := sides <~ scale <~ Random new.
>> >>>>     flip := Bag <~ count.
>> >>>>
>> >>>> Coin is an eduction, i.e., it binds transducers to a source and
>> >>>> understands #reduce:init: among others.
>> >>>> Flip is a transformed reduction, i.e., it binds transducers to a
>> >>>> reducing
>> >>>> function and an initial value.
>> >>>> By sending #<~, we draw further samples from flipping the coin.
>> >>>>
>> >>>>     samples := flip <~ coin.
>> >>>>
>> >>>> This yields a new Bag with another 1000 samples.
>> >>>>
>> >>>>
>> >>>>
>> >>>> # Basic Concepts
>> >>>>
>> >>>> ## Reducing Functions
>> >>>>
>> >>>> A reducing function represents a single step in processing a data
>> >>>> sequence.
>> >>>> It takes an accumulated result and a value, and returns a new
>> >>>> accumulated
>> >>>> result.
>> >>>> For example:
>> >>>>
>> >>>>     collect := [:col :e | col add: e; yourself].
>> >>>>     sum := #+.
>> >>>>
>> >>>> A reducing function can also be ternary, i.e., it takes an  
>> accumulated
>> >>>> result, a key and a value.
>> >>>> For example:
>> >>>>
>> >>>>     collect := [:dict :k :v | dict at: k put: v; yourself].
>> >>>>
>> >>>> Reducing functions may be equipped with an optional completing  
>> action.
>> >>>> After finishing processing, it is invoked exactly once, e.g., to  
>> free
>> >>>> resources.
>> >>>>
>> >>>>     stream := [:str :e | str nextPut: e; yourself] completing:
>> >>>> #close.
>> >>>>     absSum := #+ completing: #abs
>> >>>>
>> >>>> A reducing function can end processing early by signaling Reduced
>> with a
>> >>>> result.
>> >>>> This mechanism also enables the treatment of infinite sources.
>> >>>>
>> >>>>     nonNil := [:res :e | e ifNil: [Reduced signalWith: res]  
>> ifNotNil:
>> >>>> [res]].
>> >>>>
>> >>>> The primary approach to process a data sequence is the reducing
>> protocol
>> >>>> with the messages #reduce:init: and #transduce:reduce:init: if
>> >>>> transducers
>> >>>> are involved.
>> >>>> The behavior is similar to #inject:into: but in addition it takes  
>> care
>> >>>> of:
>> >>>> - handling binary and ternary reducing functions,
>> >>>> - invoking the completing action after finishing, and
>> >>>> - stopping the reduction if Reduced is signaled.
>> >>>> The message #transduce:reduce:init: just combines the  
>> transformation
>> and
>> >>>> the reducing step.
>> >>>>
>> >>>> However, as reducing functions are step-wise in nature, an  
>> application
>> >>>> may
>> >>>> choose other means to process its data.
>> >>>>
>> >>>>
>> >>>> ## Reducibles
>> >>>>
>> >>>> A data source is called reducible if it implements the reducing
>> >>>> protocol.
>> >>>> Default implementations are provided for collections and streams.
>> >>>> Additionally, blocks without an argument are reducible, too.
>> >>>> This allows to adapt to custom data sources without additional  
>> effort.
>> >>>> For example:
>> >>>>
>> >>>>     "XStreams adaptor"
>> >>>>     xstream := filename reading.
>> >>>>     reducible := [[xstream get] on: Incomplete do: [Reduced  
>> signal]].
>> >>>>
>> >>>>     "natural numbers"
>> >>>>     n := 0.
>> >>>>     reducible := [n := n+1].
>> >>>>
>> >>>>
>> >>>> ## Transducers
>> >>>>
>> >>>> A transducer is an object that transforms a reducing function into
>> >>>> another.
>> >>>> Transducers encapsulate common steps in processing data sequences,
>> such
>> >>>> as
>> >>>> map, filter, concatenate, and flatten.
>> >>>> A transducer transforms a reducing function into another via
>> #transform:
>> >>>> in order to add those steps.
>> >>>> They can be composed using #* which yields a new transducer that  
>> does
>> >>>> both
>> >>>> transformations.
>> >>>> Most transducers require an argument, typically blocks, symbols or
>> >>>> numbers:
>> >>>>
>> >>>>     square := Map function: #squared.
>> >>>>     take := Take number: 1000.
>> >>>>
>> >>>> To facilitate compact notation, the argument types implement
>> >>>> corresponding
>> >>>> methods:
>> >>>>
>> >>>>     squareAndTake := #squared map * 1000 take.
>> >>>>
>> >>>> Transducers requiring no argument are singletons and can be  
>> accessed
>> by
>> >>>> their class name.
>> >>>>
>> >>>>     flattenAndDedupe := Flatten * Dedupe.
>> >>>>
>> >>>>
>> >>>>
>> >>>> # Advanced Concepts
>> >>>>
>> >>>> ## Data flows
>> >>>>
>> >>>> Processing a sequence of data can often be regarded as a data flow.
>> >>>> The operator #<~ allows define a flow from a data source through
>> >>>> processing steps to a drain.
>> >>>> For example:
>> >>>>
>> >>>>     squares := Set <~ 1000 take <~ #squared map <~ (1 to: 1000).
>> >>>>     fileOut writeStream <~ #isSeparator filter <~ fileIn  
>> readStream.
>> >>>>
>> >>>> In both examples #<~ is only used to set up the data flow using
>> reducing
>> >>>> functions and transducers.
>> >>>> In contrast to streams, transducers are completely independent from
>> >>>> input
>> >>>> and output sources.
>> >>>> Hence, we have a clear separation of reading data, writing data and
>> >>>> processing elements.
>> >>>> - Sources know how to iterate over data with a reducing function,
>> e.g.,
>> >>>> via #reduce:init:.
>> >>>> - Drains know how to collect data using a reducing function.
>> >>>> - Transducers know how to process single elements.
>> >>>>
>> >>>>
>> >>>> ## Reductions
>> >>>>
>> >>>> A reduction binds an initial value or a block yielding an initial
>> value
>> >>>> to
>> >>>> a reducing function.
>> >>>> The idea is to define a ready-to-use process that can be applied in
>> >>>> different contexts.
>> >>>> Reducibles handle reductions via #reduce: and #transduce:reduce:
>> >>>> For example:
>> >>>>
>> >>>>     sum := #+ init: 0.
>> >>>>     sum1 := #(1 1 1) reduce: sum.
>> >>>>     sum2 := (1 to: 1000) transduce: #odd filter reduce: sum.
>> >>>>
>> >>>>     asSet := [:set :e | set add: e; yourself] initializer: [Set  
>> new].
>> >>>>     set1 := #(1 1 1) reduce: asSet.
>> >>>>     set2 := (1 to: 1000) transduce: #odd filter reduce: asSet.
>> >>>>
>> >>>> By combining a transducer with a reduction, a process can be  
>> further
>> >>>> modified.
>> >>>>
>> >>>>     sumOdds := sum <~ #odd filter
>> >>>>     setOdds := asSet <~ #odd filter
>> >>>>
>> >>>>
>> >>>> ## Eductions
>> >>>>
>> >>>> An eduction combines a reducible data sources with a transducer.
>> >>>> The idea is to define a transformed (virtual) data source that  
>> needs
>> not
>> >>>> to be stored in memory.
>> >>>>
>> >>>>     odds1 := #odd filter <~ #(1 2 3) readStream.
>> >>>>     odds2 := #odd filter <~ (1 to: 1000).
>> >>>>
>> >>>> Depending on the underlying source, eductions can be processed once
>> >>>> (streams, e.g., odds1) or multiple times (collections, e.g.,  
>> odds2).
>> >>>> Since no intermediate data is stored, transducers actions are lazy,
>> >>>> i.e.,
>> >>>> they are invoked each time the eduction is processed.
>> >>>>
>> >>>>
>> >>>>
>> >>>> # Origins
>> >>>>
>> >>>> Transducers is based on the same-named Clojure library and its  
>> ideas.
>> >>>> Please see:
>> >>>> http://clojure.org/transducers
>> >>>>
>> >>>>
>> >
>>
>>


Re: Porting Transducers to Pharo - Name Clash

Steffen Märcker
In reply to this post by Steffen Märcker
Hi,

I found a name clash with the message #reduce: in Pharo: it is already  
declared in SequenceableCollection. Additionally, Collection>>fold: just  
calls #reduce:, which makes the difference between folding and reducing a  
bit unclear.

How should I handle this situation? I see the following options:
- I could simply solve the situation at runtime depending on the argument  
(e.g., using double-dispatching).
- I could check whether it might be possible to separate #fold: and  
#reduce: with the semantics:
   - reduce: starts with an initial value and the first collection item.
   - fold: uses no initial value and starts with the first two items.

In the Transducers library, there are two variants of reduce:
- #reduce:init: reduces using a block and an explicit initial value.
- #reduce: reduces using a block that carries an initial value or an  
initializer block.
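
For illustration, a minimal sketch of the two variants, using only constructs  
from the package comment (#init: binds an initial value to a reducing  
function):

    "Variant 1: explicit initial value."
    sumA := #(1 2 3) reduce: #+ init: 0.

    "Variant 2: a reduction object carrying its initial value."
    sum := #+ init: 0.
    sumB := #(1 2 3) reduce: sum.

Both yield 6; the second form makes the ready-to-use reduction reusable  
across different sources.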

Ciao,
Steffen


Am .05.2017, 14:23 Uhr, schrieb Steffen Märcker <[hidden email]>:

> Hi,
>
> I am the developer of the library 'Transducers' for VisualWorks. It was  
> formerly known as 'Reducers', but this name was a poor choice. I'd like  
> to port it to Pharo, if there is any interest on your side. I hope to  
> learn more about Pharo in this process, since I am mainly a VW guy. And  
> most likely, I will come up with a bunch of questions. :-)
>
> Meanwhile, I'll cross-post the introduction from VWnc below. I'd be very  
> happy to hear your opinions, questions and I hope we can start a  
> fruitful discussion - even if there is no Pharo port yet.
>
> Best, Steffen
>
>
>
> Transducers are building blocks that encapsulate how to process elements
> of a data sequence independently of the underlying input and output  
> source.
>
>
>
> # Overview
>
> ## Encapsulate
> Implementations of enumeration methods, such as #collect:, have the logic
> how to process a single element in common.
> However, that logic is reimplemented each and every time. Transducers  
> make
> it explicit and facilitate re-use and coherent behavior.
> For example:
> - #collect: requires mapping: (aBlock1 map)
> - #select: requires filtering: (aBlock2 filter)
>
>
> ## Compose
> In practice, algorithms often require multiple processing steps, e.g.,
> mapping only a filtered set of elements.
> Transducers are inherently composable, and hereby, allow to make the
> combination of steps explicit.
> Since transducers do not build intermediate collections, their  
> composition
> is memory-efficient.
> For example:
> - (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map elements"
>
>
> ## Re-Use
> Transducers are decoupled from the input and output sources, and hence,
> they can be reused in different contexts.
> For example:
> - enumeration of collections
> - processing of streams
> - communicating via channels
>
>
>
> # Usage by Example
>
> We build a coin flipping experiment and count the occurrence of heads and
> tails.
>
> First, we associate random numbers with the sides of a coin.
>
>      scale := [:x | (x * 2 + 1) floor] map.
>      sides := #(heads tails) replace.
>
> Scale is a transducer that maps numbers x between 0 and 1 to 1 and 2.
> Sides is a transducer that replaces the numbers with heads and tails by
> lookup in an array.
> Next, we choose a number of samples.
>
>      count := 1000 take.
>
> Count is a transducer that takes 1000 elements from a source.
> We keep track of the occurrences of heads and tails using a bag.
>
>      collect := [:bag :c | bag add: c; yourself].
>
> Collect is a binary block (reducing function) that collects events in a  
> bag.
> We assemble the experiment by transforming the block using the  
> transducers.
>
>      experiment := (scale * sides * count) transform: collect.
>
>    From left to right we see the steps involved: scale, sides, count and
> collect.
> Transforming assembles these steps into a binary block (reducing  
> function)
> we can use to run the experiment.
>
>      samples := Random new
>                    reduce: experiment
>                    init: Bag new.
>
> Here, we use #reduce:init:, which is mostly similar to #inject:into:.
> To execute a transformation and a reduction together, we can use
> #transduce:reduce:init:.
>
>      samples := Random new
>                    transduce: scale * sides * count
>                    reduce: collect
>                    init: Bag new.
>
> We can also express the experiment as data-flow using #<~.
> This enables us to build objects that can be re-used in other  
> experiments.
>
>      coin := sides <~ scale <~ Random new.
>      flip := Bag <~ count.
>
> Coin is an eduction, i.e., it binds transducers to a source and
> understands #reduce:init: among others.
> Flip is a transformed reduction, i.e., it binds transducers to a reducing
> function and an initial value.
> By sending #<~, we draw further samples from flipping the coin.
>
>      samples := flip <~ coin.
>
> This yields a new Bag with another 1000 samples.
>
>
>
> # Basic Concepts
>
> ## Reducing Functions
>
> A reducing function represents a single step in processing a data  
> sequence.
> It takes an accumulated result and a value, and returns a new accumulated
> result.
> For example:
>
>      collect := [:col :e | col add: e; yourself].
>      sum := #+.
>
> A reducing function can also be ternary, i.e., it takes an accumulated
> result, a key and a value.
> For example:
>
>      collect := [:dict :k :v | dict at: k put: v; yourself].
>
> Reducing functions may be equipped with an optional completing action.
> After finishing processing, it is invoked exactly once, e.g., to free
> resources.
>
>      stream := [:str :e | str nextPut: e; yourself] completing:  
> #close.
>      absSum := #+ completing: #abs
>
> A reducing function can end processing early by signaling Reduced with a
> result.
> This mechanism also enables the treatment of infinite sources.
>
>      nonNil := [:res :e | e ifNil: [Reduced signalWith: res] ifNotNil:  
> [res]].
>
> The primary approach to process a data sequence is the reducing protocol
> with the messages #reduce:init: and #transduce:reduce:init: if  
> transducers
> are involved.
> The behavior is similar to #inject:into: but in addition it takes care  
> of:
> - handling binary and ternary reducing functions,
> - invoking the completing action after finishing, and
> - stopping the reduction if Reduced is signaled.
> The message #transduce:reduce:init: just combines the transformation and
> the reducing step.
>
> However, as reducing functions are step-wise in nature, an application  
> may
> choose other means to process its data.
>
>
> ## Reducibles
>
> A data source is called reducible if it implements the reducing protocol.
> Default implementations are provided for collections and streams.
> Additionally, blocks without an argument are reducible, too.
> This allows to adapt to custom data sources without additional effort.
> For example:
>
>      "XStreams adaptor"
>      xstream := filename reading.
>      reducible := [[xstream get] on: Incomplete do: [Reduced signal]].
>
>      "natural numbers"
>      n := 0.
>      reducible := [n := n+1].
>
>
> ## Transducers
>
> A transducer is an object that transforms a reducing function into  
> another.
> Transducers encapsulate common steps in processing data sequences, such  
> as
> map, filter, concatenate, and flatten.
> A transducer transforms a reducing function into another via #transform:
> in order to add those steps.
> They can be composed using #* which yields a new transducer that does  
> both
> transformations.
> Most transducers require an argument, typically blocks, symbols or  
> numbers:
>
>      square := Map function: #squared.
>      take := Take number: 1000.
>
> To facilitate compact notation, the argument types implement  
> corresponding
> methods:
>
>      squareAndTake := #squared map * 1000 take.
>
> Transducers requiring no argument are singletons and can be accessed by
> their class name.
>
>      flattenAndDedupe := Flatten * Dedupe.
>
>
>
> # Advanced Concepts
>
> ## Data flows
>
> Processing a sequence of data can often be regarded as a data flow.
> The operator #<~ allows defining a flow from a data source through
> processing steps to a drain.
> For example:
>
>      squares := Set <~ 1000 take <~ #squared map <~ (1 to: 1000).
>      fileOut writeStream <~ #isSeparator filter <~ fileIn readStream.
>
> In both examples #<~ is only used to set up the data flow using reducing
> functions and transducers.
> In contrast to streams, transducers are completely independent from input
> and output sources.
> Hence, we have a clear separation of reading data, writing data and
> processing elements.
> - Sources know how to iterate over data with a reducing function, e.g.,
> via #reduce:init:.
> - Drains know how to collect data using a reducing function.
> - Transducers know how to process single elements.
>
>
> ## Reductions
>
> A reduction binds an initial value or a block yielding an initial value  
> to
> a reducing function.
> The idea is to define a ready-to-use process that can be applied in
> different contexts.
> Reducibles handle reductions via #reduce: and #transduce:reduce:.
> For example:
>
>      sum := #+ init: 0.
>      sum1 := #(1 1 1) reduce: sum.
>      sum2 := (1 to: 1000) transduce: #odd filter reduce: sum.
>
>      asSet := [:set :e | set add: e; yourself] initializer: [Set new].
>      set1 := #(1 1 1) reduce: asSet.
>      set2 := (1 to: 1000) transduce: #odd filter reduce: asSet.
>
> By combining a transducer with a reduction, a process can be further
> modified.
>
>      sumOdds := sum <~ #odd filter
>      setOdds := asSet <~ #odd filter
>
>
> ## Eductions
>
> An eduction combines a reducible data source with a transducer.
> The idea is to define a transformed (virtual) data source that need not
> be stored in memory.
>
>      odds1 := #odd filter <~ #(1 2 3) readStream.
>      odds2 := #odd filter <~ (1 to: 1000).
>
> Depending on the underlying source, eductions can be processed once
> (streams, e.g., odds1) or multiple times (collections, e.g., odds2).
> Since no intermediate data is stored, transducer actions are lazy, i.e.,
> they are invoked each time the eduction is processed.
>
>
>
> # Origins
>
> Transducers is based on the same-named Clojure library and its ideas.
> Please see:
> http://clojure.org/transducers


Re: Porting Transducers to Pharo

philippeback
In reply to this post by Steffen Märcker
Hi Steffen,

I am willing to help you create the package in SmalltalkHub or Github based on your files/changeset.

Do you have a github and/or SmalltalkHub account?

Best,
Phil


On Tue, Jun 6, 2017 at 1:08 PM, Steffen Märcker <[hidden email]> wrote:
Hi!

If the need arises, we could of course factor the compact notation out into
a separate package.
Good idea
[...] I do not want to help promote a syntax that alienates me (and
others, because other people reported the same to me).

I understand. Btw, I'd really, really appreciate if others post their thoughts and feedback here as well. Discussion helps moving things forward. =)


  (collection transduce map: #squared) take: 1000.

To me this is much more readable.

Well, I'll provide that extension once it is finished.

I cannot and do not want to use the other forms.


  collection transduce
               map: #squared;
               take: 1000.

But as the message chain has to modify the underlying object
(an eduction), very sneaky side effects may occur. E.g., consider

  eduction := collection transduce.
  squared  := eduction map: #squared.
  take     := squared take: 1000.

Now, all three variables hold onto the same object, which first squares all elements and then takes the first 1000.

This is because the programmer did not understand what he did. No?

Sure. ;-) Nevertheless, it would be very hard to debug. All of which are the reasons I wouldn't implement that variant. ;-)
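For contrast, the composition protocol from the package comment builds the same pipeline without any shared, mutated eduction; each expression yields a fresh object (a sketch assembled from the documented API; `collection` stands for any reducible):

```smalltalk
"Compose the steps up front; nothing is modified in place, so no
 variable ever aliases a mutated eduction."
squareAndTake := #squared map * 1000 take.
result := collection
             transduce: squareAndTake
             reduce: [:col :e | col add: e; yourself]
             init: OrderedCollection new.
```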


PS: I played with infinite stream and iteration back in 1993 in CLOS.
Now I do not like to mix things because it breaks my flow of thinking.


I am not sure whether I understand what you mean by mixing. Concerning transducers, the ability to handle infinite sources is only a (natural) side-effect of the ability to finish reductions before all elements are processed, e.g., like #detect: and such.
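For illustration, a #detect:-like early exit can be sketched with the Reduced mechanism described in the package comment (names chosen for the example):

```smalltalk
"Stop the reduction at the first even element by signaling Reduced
 with the result; the remaining elements are never visited."
detectEven := [:res :e | e even
                  ifTrue: [Reduced signalWith: e]
                  ifFalse: [res]].
firstEven := #(1 3 4 5 6) reduce: detectEven init: nil.
```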

Best, Steffen







Re: Porting Transducers to Pharo

philippeback
In reply to this post by Steffen Märcker

On Tue, Jun 6, 2017 at 1:09 PM, Steffen Märcker <[hidden email]> wrote:
Hi Phil,

Coupling this with Olek's work on the DataFrame could really come in handy.

I am new to this list. Could you please elaborate?

Cheers!
Steffen



On Mon, Jun 5, 2017 at 9:14 AM, Stephane Ducasse <[hidden email]>
wrote:

Hi Steffen


> The short answer is that the compact notation turned out to work much
> better for me in my code, especially if multiple transducers are involved.
> But that's my personal taste. You can choose which suits you better. In fact,
>
>   1000 take.
>
> just sits on top and simply calls
>
>   Take number: 1000.

To me this is much much better.


> If the need arises, we could of course factor the compact notation out
> into a separate package.
Good idea

> Btw, would you prefer (Take n: 1000) over (Take number: 1000)?

I tend to prefer explicit selector :)


> Damien, you're right, I experimented with additional styles. Right now,
> we already have in the basic Transducer package:
>
>   collection transduce: #squared map * 1000 take. "which is equal to"
>   (collection transduce: #squared map) transduce: 1000 take.
>
> Basically, one can split #transduce:reduce:init: into single calls of
> #transduce:, #reduce:, and #init:, depending on the needs.
> I also have an (unfinished) extension that allows one to write:
>
>   (collection transduce map: #squared) take: 1000.

To me this is much more readable.
I cannot and do not want to use the other forms.


> This feels familiar, but becomes a bit hard to read if more than two
> steps are needed.
>
>   collection transduce
>                map: #squared;
>                take: 1000.

Why would this be hard to read? We do that all the time everywhere.


> I think this alternative would read nicely. But as the message chain has
> to modify the underlying object (an eduction), very sneaky side effects may
> occur. E.g., consider
>
>   eduction := collection transduce.
>   squared  := eduction map: #squared.
>   take     := squared take: 1000.
>
> Now, all three variables hold onto the same object, which first squares all
> elements and then takes the first 1000.

This is because the programmer did not understand what he did. No?



Stef

PS: I played with infinite stream and iteration back in 1993 in CLOS.
Now I do not like to mix things because it breaks my flow of thinking.


>
> Best,
> Steffen
>
>
>
>
>
> On .06.2017, 21:28, Damien Pollet <[hidden email]> wrote:
>
>> If I recall correctly, there is an alternate protocol that looks more
>> like xtreams or the traditional select/collect iterations.
>>
>> On 2 June 2017 at 21:12, Stephane Ducasse <[hidden email]> wrote:
>>
>>> I have a design question
>>>
>>> why the library is implemented in functional style vs messages?
>>> I do not see why this is needed. To my eyes the compact notation
>>> goes against readability of code and it feels ad-hoc in Smalltalk.
>>>
>>>
>>> I really prefer
>>>
>>> square := Map function: #squared.
>>> take := Take number: 1000.
>>>
>>> Because I know that I can read it and understand it.
>>> From that perspective I prefer Xtreams.
>>>
>>> Stef
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Wed, May 31, 2017 at 2:23 PM, Steffen Märcker <[hidden email]> wrote:
>>>
>>>> Hi,
>>>>
>>>> I am the developer of the library 'Transducers' for VisualWorks. It was
>>>> formerly known as 'Reducers', but this name was a poor choice. I'd like
>>>> to port it to Pharo, if there is any interest on your side. I hope to
>>>> learn more about Pharo in this process, since I am mainly a VW guy. And
>>>> most likely, I will come up with a bunch of questions. :-)
>>>>
>>>> Meanwhile, I'll cross-post the introduction from VWnc below. I'd be very
>>>> happy to hear your opinions and questions, and I hope we can start a
>>>> fruitful discussion - even if there is no Pharo port yet.
>>>>
>>>> Best, Steffen
>>>>
>>>>
>>>>
>>>> Transducers are building blocks that encapsulate how to process
>>>> elements of a data sequence independently of the underlying input and
>>>> output source.
>>>>
>>>>
>>>>
>>>> # Overview
>>>>
>>>> ## Encapsulate
>>>> Implementations of enumeration methods, such as #collect:, have the
>>>> logic how to process a single element in common.
>>>> However, that logic is reimplemented each and every time. Transducers
>>>> make it explicit and facilitate re-use and coherent behavior.
>>>> For example:
>>>> - #collect: requires mapping: (aBlock1 map)
>>>> - #select: requires filtering: (aBlock2 filter)
>>>>
>>>>
>>>> ## Compose
>>>> In practice, algorithms often require multiple processing steps, e.g.,
>>>> mapping only a filtered set of elements.
>>>> Transducers are inherently composable, and thereby, make the
>>>> combination of steps explicit.
>>>> Since transducers do not build intermediate collections, their
>>>> composition is memory-efficient.
>>>> For example:
>>>> - (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map elements"
>>>>
>>>>
>>>> ## Re-Use
>>>> Transducers are decoupled from the input and output sources, and hence,
>>>> they can be reused in different contexts.
>>>> For example:
>>>> - enumeration of collections
>>>> - processing of streams
>>>> - communicating via channels
>>>>
>>>>
>>>>
>>>> # Usage by Example
>>>>
>>>> We build a coin flipping experiment and count the occurrence of heads
>>>> and tails.
>>>>
>>>> First, we associate random numbers with the sides of a coin.
>>>>
>>>>     scale := [:x | (x * 2 + 1) floor] map.
>>>>     sides := #(heads tails) replace.
>>>>
>>>> Scale is a transducer that maps numbers x between 0 and 1 to 1 and 2.
>>>> Sides is a transducer that replaces the numbers with heads and tails by
>>>> lookup in an array.
>>>> Next, we choose a number of samples.
>>>>
>>>>     count := 1000 take.
>>>>
>>>> Count is a transducer that takes 1000 elements from a source.
>>>> We keep track of the occurrences of heads and tails using a bag.
>>>>
>>>>     collect := [:bag :c | bag add: c; yourself].
>>>>
>>>> Collect is a binary block (reducing function) that collects events in a
>>>> bag.
>>>> We assemble the experiment by transforming the block using the
>>>> transducers.
>>>>
>>>>     experiment := (scale * sides * count) transform: collect.
>>>>
>>>> From left to right we see the steps involved: scale, sides, count, and
>>>> collect.
>>>> Transforming assembles these steps into a binary block (reducing
>>>> function) we can use to run the experiment.
>>>>
>>>>     samples := Random new
>>>>                   reduce: experiment
>>>>                   init: Bag new.
>>>>
>>>> Here, we use #reduce:init:, which is mostly similar to #inject:into:.
>>>> To execute a transformation and a reduction together, we can use
>>>> #transduce:reduce:init:.
>>>>
>>>>     samples := Random new
>>>>                   transduce: scale * sides * count
>>>>                   reduce: collect
>>>>                   init: Bag new.
>>>>
>>>> We can also express the experiment as data-flow using #<~.
>>>> This enables us to build objects that can be re-used in other
>>>> experiments.
>>>>
>>>>     coin := sides <~ scale <~ Random new.
>>>>     flip := Bag <~ count.
>>>>
>>>> Coin is an eduction, i.e., it binds transducers to a source and
>>>> understands #reduce:init: among others.
>>>> Flip is a transformed reduction, i.e., it binds transducers to a
>>>> reducing function and an initial value.
>>>> By sending #<~, we draw further samples from flipping the coin.
>>>>
>>>>     samples := flip <~ coin.
>>>>
>>>> This yields a new Bag with another 1000 samples.
>>>>
>>>>
>>>>
>>>> # Basic Concepts
>>>>
>>>> ## Reducing Functions
>>>>
>>>> A reducing function represents a single step in processing a data
>>>> sequence.
>>>> It takes an accumulated result and a value, and returns a new
>>>> accumulated result.
>>>> For example:
>>>>
>>>>     collect := [:col :e | col add: e; yourself].
>>>>     sum := #+.
>>>>
>>>> A reducing function can also be ternary, i.e., it takes an accumulated
>>>> result, a key and a value.
>>>> For example:
>>>>
>>>>     collect := [:dict :k :v | dict at: k put: v; yourself].
>>>>
>>>> Reducing functions may be equipped with an optional completing action.
>>>> After finishing processing, it is invoked exactly once, e.g., to free
>>>> resources.
>>>>






Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by philippeback
Hi Phil,

that's great. I do have a GitHub account (merkste) but none at  
SmalltalkHub. Is there a recommendable doc on how to use Git from Pharo?

Best, Steffen





Re: Porting Transducers to Pharo

Damien Pollet-2
I wouldn't bother with SmalltalkHub at this point. Check Iceberg, it's the future: https://github.com/pharo-vcs/iceberg/

On 7 June 2017 at 11:29, Steffen Märcker <[hidden email]> wrote:
Hi Phil,

that's great. I do have a GitHub account (merkste) but none at SmalltalkHub. Is there a recommendable doc on how to use Git from Pharo?

Best, Steffen






Re: Porting Transducers to Pharo

philippeback
In reply to this post by Steffen Märcker
Use Iceberg with Pharo6.0

There are techtalks videos about it.


Phil


On Wed, Jun 7, 2017 at 11:29 AM, Steffen Märcker <[hidden email]> wrote:
Hi Phil,

that's great. I do have a GitHub account (merkste) but none at SmalltalkHub. Is there a recommendable doc on how to use Git from Pharo?

Best, Steffen







Re: Porting Transducers to Pharo

philippeback
In reply to this post by Steffen Märcker
Hi Steffen,

I have created a repo on Github with the code and added you as an admin on it as well.


I've put a README.md with your notes, and under packages/ you'll find 3 packages:

Transducers-Core
Transducers-Examples
Transducers-Tests

Tests is empty and Examples has the example from the mailing list.

I ran the Renraku quality assistant tool and there are a few issues in the package.

I did all of this in a Pharo 6 image using Iceberg.

[screenshot: the repository's Origin in Iceberg]

I guess that you can clone the repo using the Origin you see in the screenshot.
I used HTTPS out of laziness because I didn't want to deal with SSH keys etc.

Tell me how it goes for you.

I read your note about DoubleAgents for the tests, well, yeah this one we do not have. I read about it on the blog of its maker and it looked decent indeed.

For the critics browser, it goes like this:

[screenshot: opening the critics browser]

Then look at the warnings. A bunch of them are non-issues, but there are Undeclared things in need of a fix (e.g. IndexNotFoundError).

Best,
Phil


On Wed, Jun 7, 2017 at 11:29 AM, Steffen Märcker <[hidden email]> wrote:
Hi Phil,

that's great. I do have a GitHub account (merkste) but none at SmalltalkHub. Is there a recommendable doc on how to use Git from Pharo?

Best, Steffen





Re: Porting Transducers to Pharo

Denis Kudriashov

2017-06-07 22:04 GMT+02:00 [hidden email] <[hidden email]>:
I read your note about DoubleAgents for the tests, well, yeah this one we do not have. I read about it on the blog of its maker and it looked decent indeed.

I think Mocketry can easily replace DoubleAgents, but the API is different and most tests would need rewriting.

Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by philippeback
Hi Phil,

thanks a lot for your effort and valuable input. I am having a look at  
STIG for VW in the hope that I can set up a common repository for the VW  
and Pharo version. Though, I won't work on the port in the next few days,
because I am quite busy at the moment.

> Tell me how it goes for you.

I'll let you know soon.

> I read your note about DoubleAgents for the tests, well, yeah this one we
> do not have. I read about it on the blog of its maker and it looked  
> decent indeed.

I'll check whether a port is doable with reasonable effort and the  
author's blessing. =)

> Then look at the warnings. A bunch of them are non issues but there are
> Undeclared things in need of a fix (e.g. IndexNotFoundError)

Luckily, most of them are straightforward to resolve. However, I'd really
like to hear your opinion on the name clash with #reduce: in the base (see  
other thread). A solution could be to use double-dispatching (or similar)  
to distinguish between the two cases
- reduce: aBlock, and
- reduce: aReduction
However, I have a slight preference to redefine #fold: and #reduce:, since  
now (as far as I can see), they are redundant. E.g.,
- fold: aBlock         "reduce as defined right now"
- reduce: aBlock init: value   "reduce starting with an initial value"
- reduce: aReduction   "reduce with block and value from a reduction"
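A hypothetical sketch of how the three selectors could relate (receivers and results chosen for illustration; this is a proposal, not current library code):

```smalltalk
"fold: would keep today's #reduce: behavior (first element as seed)"
sum1 := #(1 2 3) fold: [:a :b | a + b].

"reduce:init: threads an explicit initial value through the function"
sum2 := #(1 2 3) reduce: #+ init: 0.

"reduce: would consume a Reduction bundling function and initial value"
sum3 := #(1 2 3) reduce: (#+ init: 0).
```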

Kind regards,
Steffen


> Best,
> Phil
>
>
> On Wed, Jun 7, 2017 at 11:29 AM, Steffen Märcker <[hidden email]> wrote:
>
>> Hi Phil,
>>
>> that's great. I do have a GitHub account (merkste) but none at
>> SmalltalkHub. Is there a recommendable doc on how to use Git from Pharo?
>>
>> Best, Steffen
>>
>>
>>
>> Am .06.2017, 14:09 Uhr, schrieb [hidden email] <[hidden email]>:
>>
>> Hi Steffen,
>>>
>>> I am willing to help you create the package in SmalltalkHub or Github
>>> based
>>> on your files/changeset.
>>>
>>> Do you have a github and/or SmalltalkHub account?
>>>
>>> Best,
>>> Phil
>>>
>>>
>>> On Tue, Jun 6, 2017 at 1:08 PM, Steffen Märcker <[hidden email]> wrote:
>>>
>>> Hi!
>>>>
>>>> If the need arises, we could of course factor the compact notation out
>>>> into
>>>>
>>>>> a separate package.
>>>>>>
>>>>>> Good idea
>>>>> [...] I do not want to help promoting a syntax that alienates me (and
>>>>> others because other people reported the saem to me).
>>>>>
>>>>>
>>>> I understand. Btw, I'd really, really appreciate if others post their
>>>> thoughts and feedback here as well. Discussion helps moving things
>>>> forward.
>>>> =)
>>>>
>>>>
>>>>   (collection transduce map: #squared) take: 1000.
>>>>
>>>>>
>>>>>>
>>>>> To me this is much more readable.
>>>>>
>>>>>
>>>> Well, I'll provide that extension once it is finished.
>>>>
>>>> I cannot and do not want to use the other forms.
>>>>
>>>>>
>>>>>
>>>>
>>>>   collection transduce
>>>>
>>>>>                map: #squared;
>>>>>>                take: 1000.
>>>>>>
>>>>>> But as the message chain has to modify the underlying object
>>>>>> (an eduction), very sneaky side effects may occur. E.g., consider
>>>>>>
>>>>>>   eduction := collection transduce.
>>>>>>   squared  := eduction map: #squared.
>>>>>>   take     := squared take: 1000.
>>>>>>
>>>>>> Now, all three variables hold onto the same object, which first  
>>>>>> squares
>>>>>> all elements and then takes the first 1000.
>>>>>>
>>>>>>
>>>>> This is because the programmer did not understand what he did. No?
>>>>>
>>>>>
>>>> Sure. ;-) Nevertheless, it would be very hard to debug. All of which  
>>>> are
>>>> the reasons I wouldn't implement that variant. ;-)
>>>>
>>>>
>>>> PS: I played with infinite stream and iteration back in 1993 in CLOS.
>>>>
>>>>> Now I do not like to mix things because it breaks my flow of  
>>>>> thinking.
>>>>>
>>>>>
>>>>
>>>> I am not sure whether I understand what you mean by mixing.  
>>>> Concerning
>>>> transducers, the ability to handle infinite sources is only a  
>>>> (natural)
>>>> side-effect of the ability to finish reductions before all elements  
>>>> are
>>>> processed, e.g., like #detect: and such.
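
The early-termination mechanism can be sketched with a plain exception.  
Note that the Reduced class here is assumed from the package description  
earlier in the thread, and the loop body is illustrative, not the actual  
Transducers implementation:

```smalltalk
"Illustrative sketch: abort a reduction once enough elements were seen."
| sum |
sum := 0.
[1 to: 1000000 do: [:each |
	sum := sum + each.
	sum >= 100 ifTrue: [Reduced signal]]]
	on: Reduced
	do: [:ex | ex return].
sum "holds the partial result accumulated before termination"
```

This is the same idea that lets #detect: stop at the first match instead of  
scanning the whole collection, and it is what makes infinite sources such  
as Random usable at all.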
>>>>
>>>> Best, Steffen
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>

Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by Denis Kudriashov
In fact, I moved from Mocketry to DoubleAgents quite some time ago... =)

Am .06.2017, 23:04 Uhr, schrieb Denis Kudriashov <[hidden email]>:

> 2017-06-07 22:04 GMT+02:00 [hidden email] <[hidden email]>:
>
>> I read your note about DoubleAgents for the tests, well, yeah this one  
>> we
>> do not have. I read about it on the blog of its maker and it looked  
>> decent
>> indeed.
>>
>
> I think Mocketry can easily replace DoubleAgents, but the API is different
> and most tests would need to be rewritten




Re: Porting Transducers to Pharo

Denis Kudriashov

2017-06-08 15:37 GMT+02:00 Steffen Märcker <[hidden email]>:
In fact, I moved from Mocketry to DoubleAgents quite some time ago... =)

It was greatly updated from the VW version. (Look for details here: http://dionisiydk.blogspot.fr/2016/04/new-version-of-mocketry-30.html)

Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by philippeback
Hi Phil,

> Tell me how it goes for you.

I played a bit with Iceberg in Pharo, managed to check out the repository,  
and load Transducers-Core. As a bit of a surprise, multiple packages show  
up in the SystemBrowser; one for each method category, e.g.,  
Transducers-accessing and Transducers-class initialization. Is this  
expected? Maybe I am just missing something obvious here since I am not  
familiar with the tools.

However, I think Iceberg will facilitate a nice way to exchange code  
between VW and Pharo. I managed to export the packages using STIG from VW and the  
differences seem to be sufficiently small.

> I've put a README. md with your notes, and under packages/ you'll find 3
> packages:
>
> Transducers-Core
> Transducers-Examples
> Transducers-Tests

Did you rename the main package from Transducers to Transducers-Core for a  
special reason? And is it a convention in Pharo to use the dash '-'  
instead of space to separate parts of a package name?

Cheers!
Steffen


Re: Porting Transducers to Pharo

Damien Pollet-2
On 14 June 2017 at 17:21, Steffen Märcker <[hidden email]> wrote:
I played a bit with Iceberg in Pharo, managed to check out the repository, and load Transducers-Core. As a bit of a surprise, multiple packages show up in the SystemBrowser; one for each method category, e.g., Transducers-accessing and Transducers-class initialization. Is this expected? Maybe I am just missing something obvious here since I am not familiar with the tools.

Looks like a bug… not sure if I've seen this one being discussed recently or if it's a new one, though.
 
Did you rename the main package from Transducers to Transducers-Core for a special reason? And is it a convention in Pharo to use the dash '-' instead of space to separate parts of a package name?

Yes, that's the Pharo naming convention.
