Porting Transducers to Pharo


Steffen Märcker
Hi,

I am the developer of the library 'Transducers' for VisualWorks. It was  
formerly known as 'Reducers', but this name was a poor choice. I'd like to  
port it to Pharo, if there is any interest on your side. I hope to learn  
more about Pharo in this process, since I am mainly a VW guy. And most  
likely, I will come up with a bunch of questions. :-)

Meanwhile, I'll cross-post the introduction from VWnc below. I'd be very
happy to hear your opinions and questions, and I hope we can start a
fruitful discussion - even if there is no Pharo port yet.

Best, Steffen



Transducers are building blocks that encapsulate how to process elements
of a data sequence independently of the underlying input and output source.



# Overview

## Encapsulate
Implementations of enumeration methods, such as #collect:, share the logic
of how to process a single element.
However, that logic is reimplemented each and every time. Transducers make
it explicit and facilitate re-use and coherent behavior.
For example:
- #collect: requires mapping: (aBlock1 map)
- #select: requires filtering: (aBlock2 filter)
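
For instance, mapping with #collect: and mapping with a transducer express
the same per-element logic. A minimal sketch, assuming the reducing
protocol described below:

     "classic enumeration"
     squares := #(1 2 3) collect: [:x | x * x].

     "the same logic as an explicit, reusable transducer"
     squares := #(1 2 3)
                   transduce: [:x | x * x] map
                   reduce: [:col :e | col add: e; yourself]
                   init: OrderedCollection new.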


## Compose
In practice, algorithms often require multiple processing steps, e.g.,
mapping only a filtered set of elements.
Transducers are inherently composable and thereby make the combination of
steps explicit.
Since transducers do not build intermediate collections, their composition
is memory-efficient.
For example:
- (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map elements"
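
Written out, the composed transducer performs both steps in a single pass.
A minimal sketch, again assuming the reducing protocol described below:

     "keep odd numbers, then square them - no intermediate collection"
     result := (1 to: 10)
                  transduce: #odd filter * #squared map
                  reduce: [:col :e | col add: e; yourself]
                  init: OrderedCollection new.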


## Re-Use
Transducers are decoupled from the input and output sources, and hence,
they can be reused in different contexts.
For example:
- enumeration of collections
- processing of streams
- communicating via channels
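
To illustrate, one and the same transducer works on collections and streams
alike. A minimal sketch, assuming the default reducible implementations for
collections and streams mentioned below:

     firstFive := 5 take.
     fromCollection := (1 to: 100)
                          transduce: firstFive
                          reduce: [:col :e | col add: e; yourself]
                          init: OrderedCollection new.
     fromStream := #(1 2 3 4 5 6) readStream
                      transduce: firstFive
                      reduce: [:col :e | col add: e; yourself]
                      init: OrderedCollection new.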



# Usage by Example

We build a coin flipping experiment and count the occurrence of heads and
tails.

First, we associate random numbers with the sides of a coin.

     scale := [:x | (x * 2 + 1) floor] map.
     sides := #(heads tails) replace.

Scale is a transducer that maps numbers x between 0 and 1 to 1 and 2.
Sides is a transducer that replaces the numbers with heads and tails by
lookup in an array.
Next, we choose a number of samples.

     count := 1000 take.

Count is a transducer that takes 1000 elements from a source.
We keep track of the occurrences of heads and tails using a bag.

     collect := [:bag :c | bag add: c; yourself].

Collect is a binary block (reducing function) that collects events in a bag.
We assemble the experiment by transforming the block using the transducers.

     experiment := (scale * sides * count) transform: collect.

From left to right we see the steps involved: scale, sides, count, and
collect.
Transforming assembles these steps into a binary block (reducing function)
we can use to run the experiment.

     samples := Random new
                   reduce: experiment
                   init: Bag new.

Here, we use #reduce:init:, which is mostly similar to #inject:into:.
To execute a transformation and a reduction together, we can use
#transduce:reduce:init:.

     samples := Random new
                   transduce: scale * sides * count
                   reduce: collect
                   init: Bag new.

We can also express the experiment as a data flow using #<~.
This enables us to build objects that can be re-used in other experiments.

     coin := sides <~ scale <~ Random new.
     flip := Bag <~ count.

Coin is an eduction, i.e., it binds transducers to a source and
understands #reduce:init: among others.
Flip is a transformed reduction, i.e., it binds transducers to a reducing
function and an initial value.
By sending #<~, we draw further samples from flipping the coin.

     samples := flip <~ coin.

This yields a new Bag with another 1000 samples.



# Basic Concepts

## Reducing Functions

A reducing function represents a single step in processing a data sequence.
It takes an accumulated result and a value, and returns a new accumulated
result.
For example:

     collect := [:col :e | col add: e; yourself].
     sum := #+.

A reducing function can also be ternary, i.e., it takes an accumulated
result, a key and a value.
For example:

     collect := [:dict :k :v | dict at: k put: v; yourself].

Reducing functions may be equipped with an optional completing action.
After processing finishes, the completing action is invoked exactly once,
e.g., to free resources.

     stream := [:str :e | str nextPut: e; yourself] completing: #close.
     absSum := #+ completing: #abs.

A reducing function can end processing early by signaling Reduced with a
result.
This mechanism also enables the treatment of infinite sources.

     nonNil := [:res :e | e ifNil: [Reduced signalWith: res] ifNotNil: [res]].
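
As a sketch of early termination, the following reduction draws from an
infinite block source (see Reducibles below) and stops via Reduced once a
value exceeds ten; the collecting block is made up for illustration:

     n := 0.
     upToTen := [n := n + 1]
                   reduce: [:col :e |
                              e > 10 ifTrue: [Reduced signalWith: col].
                              col add: e; yourself]
                   init: OrderedCollection new.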

The primary approach to processing a data sequence is the reducing protocol
with the messages #reduce:init: and, if transducers are involved,
#transduce:reduce:init:.
The behavior is similar to #inject:into: but in addition it takes care of:
- handling binary and ternary reducing functions,
- invoking the completing action after finishing, and
- stopping the reduction if Reduced is signaled.
The message #transduce:reduce:init: just combines the transformation and
the reducing step.

However, as reducing functions are step-wise in nature, an application may
choose other means to process its data.


## Reducibles

A data source is called reducible if it implements the reducing protocol.
Default implementations are provided for collections and streams.
Additionally, blocks without an argument are reducible, too.
This makes it possible to adapt to custom data sources without additional effort.
For example:

     "XStreams adaptor"
     xstream := filename reading.
     reducible := [[xstream get] on: Incomplete do: [Reduced signal]].

     "natural numbers"
     n := 0.
     reducible := [n := n+1].
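
A sketch of consuming such a source: paired with a take transducer, even
the infinite sequence of naturals terminates after 1000 elements:

     n := 0.
     naturals := [n := n + 1].
     firstThousand := naturals
                         transduce: 1000 take
                         reduce: [:col :e | col add: e; yourself]
                         init: OrderedCollection new.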


## Transducers

A transducer is an object that transforms a reducing function into another.
Transducers encapsulate common steps in processing data sequences, such as
map, filter, concatenate, and flatten.
A transducer transforms a reducing function into another via #transform:
in order to add those steps.
They can be composed using #*, which yields a new transducer that performs
both transformations.
Most transducers require an argument, typically blocks, symbols or numbers:

     square := Map function: #squared.
     take := Take number: 1000.

To facilitate compact notation, the argument types implement corresponding
methods:

     squareAndTake := #squared map * 1000 take.

Transducers requiring no argument are singletons and can be accessed by
their class name.

     flattenAndDedupe := Flatten * Dedupe.
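
A sketch of applying this composition, assuming Dedupe removes consecutive
duplicates as in the Clojure library:

     flat := #(#(1 1) #(2 2 3))
                transduce: Flatten * Dedupe
                reduce: [:col :e | col add: e; yourself]
                init: OrderedCollection new.
     "flat now contains 1, 2 and 3"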



# Advanced Concepts

## Data flows

Processing a sequence of data can often be regarded as a data flow.
The operator #<~ allows defining a flow from a data source through
processing steps to a drain.
For example:

     squares := Set <~ 1000 take <~ #squared map <~ (1 to: 1000).
     fileOut writeStream <~ #isSeparator filter <~ fileIn readStream.

In both examples #<~ is only used to set up the data flow using reducing
functions and transducers.
In contrast to streams, transducers are completely independent of input
and output sources.
Hence, we have a clear separation of reading data, writing data and
processing elements.
- Sources know how to iterate over data with a reducing function, e.g.,
via #reduce:init:.
- Drains know how to collect data using a reducing function.
- Transducers know how to process single elements.


## Reductions

A reduction binds an initial value or a block yielding an initial value to
a reducing function.
The idea is to define a ready-to-use process that can be applied in
different contexts.
Reducibles handle reductions via #reduce: and #transduce:reduce:.
For example:

     sum := #+ init: 0.
     sum1 := #(1 1 1) reduce: sum.
     sum2 := (1 to: 1000) transduce: #odd filter reduce: sum.

     asSet := [:set :e | set add: e; yourself] initializer: [Set new].
     set1 := #(1 1 1) reduce: asSet.
     set2 := (1 to: 1000) transduce: #odd filter reduce: asSet.

By combining a transducer with a reduction, a process can be further
modified.

     sumOdds := sum <~ #odd filter.
     setOdds := asSet <~ #odd filter.
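
Applying such a modified process mirrors the data-flow notation from the
usage example. A minimal sketch, assuming a transformed reduction consumes
a source via #<~ just like flip does above:

     oddSum := sumOdds <~ (1 to: 10).     "sums 1, 3, 5, 7 and 9, i.e., 25"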


## Eductions

An eduction combines a reducible data source with a transducer.
The idea is to define a transformed (virtual) data source that need not be
stored in memory.

     odds1 := #odd filter <~ #(1 2 3) readStream.
     odds2 := #odd filter <~ (1 to: 1000).

Depending on the underlying source, eductions can be processed once
(streams, e.g., odds1) or multiple times (collections, e.g., odds2).
Since no intermediate data is stored, transducer actions are lazy, i.e.,
they are invoked each time the eduction is processed.
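
A sketch of processing the collection-backed eduction odds2 repeatedly;
each reduction re-enumerates the underlying interval:

     sum1 := odds2 reduce: #+ init: 0.
     sum2 := odds2 reduce: #+ init: 0.
     "both yield 250000, the sum of all odd numbers below 1000"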



# Origins

Transducers is based on the Clojure library of the same name and its ideas.
Please see:
http://clojure.org/transducers


Re: Porting Transducers to Pharo

Sven Van Caekenberghe-2


Hi Steffen,

Looks like very interesting stuff. Would make a nice library/framework for Pharo.

Sven


Re: Porting Transducers to Pharo

Damien Pollet-2
As you know I experimented with that a while ago. My code is at http://smalltalkhub.com/#!/~cdlm/Experiments/source


Re: Porting Transducers to Pharo

abergel
In reply to this post by Sven Van Caekenberghe-2
I second Sven. This is very exciting!

Let us know when you have something ready to be tested.

Alexandre
-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.




Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by Damien Pollet-2
Hello Damien,

I remember very well. How far did you get? Did you kick off a discussion on
one of the Pharo lists? And did FileTree become a convenient way to  
exchange code between VW and Pharo?

Best,
Steffen


On 31.05.2017 at 16:16, Damien Pollet
<[hidden email]> wrote:

> As you know I experimented with that a while ago. My code is at
> http://smalltalkhub.com/#!/~cdlm/Experiments/source

Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by abergel
Thanks for the encouraging response! First question: what is the
recommended (friction-free) way to exchange code between VW and Pharo?

Cheers!
Steffen

On 31.05.2017 at 16:22, Alexandre Bergel <[hidden email]> wrote:

> I second Sven. This is very exciting!
>
> Let us know when you have something ready to be tested.
>
> Alexandre




Re: Porting Transducers to Pharo

Esteban A. Maringolo
FileTree is the Pharo implementation of the Cypress format for
packages; STIG is the implementation of Cypress for VW.

The latest changes are available in
<https://github.com/martinmcclure/STIG>, but it doesn't work out of
the box.

I'm interested in having a way to share code back and forth between
Pharo and VW, currently I'm moving code from Pharo to VW, but using a
modified version of the exporter available in Roassal, and the process
is cumbersome.

Regards!

Esteban A. Maringolo


2017-05-31 11:32 GMT-03:00 Steffen Märcker <[hidden email]>:

> Thanks for the encouraging response! First question: Which is the
> recommended (friction free) way to exchange code between VW and Pharo?
>
> Cheers!
> Steffen
>
>
> Am .05.2017, 16:22 Uhr, schrieb Alexandre Bergel <[hidden email]>:
>
>> I second Sven. This is very exciting!
>>
>> Let us know when you have something ready to be tested.
>>
>> Alexandre

Re: Porting Transducers to Pharo

Damien Pollet-2
In reply to this post by Steffen Märcker
I got stumped / life moved on…

Off the top of my head, my port was working for the most part (the tests rely on a mocking library that should be ported as well; translating to either BabyMock2 or Mocketry seems possible on the surface, but there is some impedance mismatch between the features and DSLs of those libs).

I had to do a fileout and fiddle in VW to do it in the Squeak format, then had to grep/sed/hand-rewrite some non-portable stuff (replacing class references with corresponding ones, removing namespaces…).

For FileTree, it was conforming to a different version of Cypress than STIG; I started exploring how to fix that but lost interest.


On 31 May 2017 at 16:29, Steffen Märcker <[hidden email]> wrote:
Hello Damien,

I remember very well. How far did you get? Did you kick of a discussion on one of the Pharo lists? And did FileTree become a convenient way to exchange code between VW and Pharo?

Best,
Steffen



Am .05.2017, 16:16 Uhr, schrieb Damien Pollet <[hidden email]>:

As you know I experimented with that a while ago. My code is at
http://smalltalkhub.com/#!/~cdlm/Experiments/source

On 31 May 2017 at 15:00, Sven Van Caekenberghe <[hidden email]> wrote:


> On 31 May 2017, at 14:23, Steffen Märcker <[hidden email]> wrote:
>
> Hi,
>
> I am the developer of the library 'Transducers' for VisualWorks. It was
formerly known as 'Reducers', but this name was a poor choice. I'd like to
port it to Pharo, if there is any interest on your side. I hope to learn
more about Pharo in this process, since I am mainly a VW guy. And most
likely, I will come up with a bunch of questions. :-)
>
> Meanwhile, I'll cross-post the introduction from VWnc below. I'd be very
happy to hear your optinions, questions and I hope we can start a fruitful
discussion - even if there is not Pharo port yet.
>
> Best, Steffen

Hi Steffen,

Looks like very interesting stuff. Would make an nice library/framework
for Pharo.

Sven

> Transducers are building blocks that encapsulate how to process elements
> of a data sequence independently of the underlying input and output
source.
>
>
>
> # Overview
>
> ## Encapsulate
> Implementations of enumeration methods, such as #collect:, have the logic
> how to process a single element in common.
> However, that logic is reimplemented each and every time. Transducers
make
> it explicit and facilitate re-use and coherent behavior.
> For example:
> - #collect: requires mapping: (aBlock1 map)
> - #select: requires filtering: (aBlock2 filter)
>
>
> ## Compose
> In practice, algorithms often require multiple processing steps, e.g.,
> mapping only a filtered set of elements.
> Transducers are inherently composable, and hereby, allow to make the
> combination of steps explicit.
> Since transducers do not build intermediate collections, their
composition
> is memory-efficient.
> For example:
> - (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map elements"
>
>
> ## Re-Use
> Transducers are decoupled from the input and output sources, and hence,
> they can be reused in different contexts.
> For example:
> - enumeration of collections
> - processing of streams
> - communicating via channels
>
>
>
> # Usage by Example
>
> We build a coin flipping experiment and count the occurrence of heads and
> tails.
>
> First, we associate random numbers with the sides of a coin.
>
>    scale := [:x | (x * 2 + 1) floor] map.
>    sides := #(heads tails) replace.
>
> Scale is a transducer that maps numbers x between 0 and 1 to 1 and 2.
> Sides is a transducer that replaces the numbers with heads an tails by
> lookup in an array.
> Next, we choose a number of samples.
>
>    count := 1000 take.
>
> Count is a transducer that takes 1000 elements from a source.
> We keep track of the occurrences of heads an tails using a bag.
>
>    collect := [:bag :c | bag add: c; yourself].
>
> Collect is binary block (reducing function) that collects events in a
bag.
> We assemble the experiment by transforming the block using the
transducers.
>
>    experiment := (scale * sides * count) transform: collect.
>
>  From left to right we see the steps involved: scale, sides, count and
> collect.
> Transforming assembles these steps into a binary block (reducing
function)
> we can use to run the experiment.
>
>    samples := Random new
>                  reduce: experiment
>                  init: Bag new.
>
> Here, we use #reduce:init:, which is mostly similar to #inject:into:.
> To execute a transformation and a reduction together, we can use
> #transduce:reduce:init:.
>
>    samples := Random new
>                  transduce: scale * sides * count
>                  reduce: collect
>                  init: Bag new.
>
> We can also express the experiment as data-flow using #<~.
> This enables us to build objects that can be re-used in other
experiments.
>
>    coin := sides <~ scale <~ Random new.
>    flip := Bag <~ count.
>
> Coin is an eduction, i.e., it binds transducers to a source and
> understands #reduce:init: among others.
> Flip is a transformed reduction, i.e., it binds transducers to a reducing
> function and an initial value.
> By sending #<~, we draw further samples from flipping the coin.
>
>    samples := flip <~ coin.
>
> This yields a new Bag with another 1000 samples.
>
>
>
> # Basic Concepts
>
> ## Reducing Functions
>
> A reducing function represents a single step in processing a data
sequence.
> It takes an accumulated result and a value, and returns a new accumulated
> result.
> For example:
>
>    collect := [:col :e | col add: e; yourself].
>    sum := #+.
>
> A reducing function can also be ternary, i.e., it takes an accumulated
> result, a key and a value.
> For example:
>
>    collect := [:dic :k :v | dict at: k put: v; yourself].
>
> Reducing functions may be equipped with an optional completing action.
> After finishing processing, it is invoked exactly once, e.g., to free
> resources.
>
>    stream := [:str :e | str nextPut: each; yourself] completing: #close.
>    absSum := #+ completing: #abs
>
> A reducing function can end processing early by signaling Reduced with a
> result.
> This mechanism also enables the treatment of infinite sources.
>
>    nonNil := [:res :e | e ifNil: [Reduced signalWith: res] ifFalse:
[res]].
>
> The primary approach to process a data sequence is the reducing protocol
> with the messages #reduce:init: and #transduce:reduce:init: if transducers
> are involved.
> The behavior is similar to #inject:into:, but in addition it takes care of:
> - handling binary and ternary reducing functions,
> - invoking the completing action after finishing, and
> - stopping the reduction if Reduced is signaled.
> The message #transduce:reduce:init: just combines the transformation and
> the reducing step, as sketched below.
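>
> For example, a sketch that combines these pieces; presumably Take ends the
> reduction early via Reduced, so even a very large source is fine:
>
>    firstTen := (1 to: 1000000)
>                  transduce: 10 take
>                  reduce: [:col :e | col add: e; yourself]
>                  init: OrderedCollection new.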
>
> However, as reducing functions are step-wise in nature, an application may
> choose other means to process its data.
>
>
> ## Reducibles
>
> A data source is called reducible if it implements the reducing protocol.
> Default implementations are provided for collections and streams.
> Additionally, blocks without an argument are reducible, too.
> This allows adapting custom data sources without additional effort.
> For example:
>
>    "XStreams adaptor"
>    xstream := filename reading.
>    reducible := [[xstream get] on: Incomplete do: [Reduced signal]].
>
>    "natural numbers"
>    n := 0.
>    reducible := [n := n+1].
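>
> A sketch of consuming the block source above; since it is infinite, a take
> transducer is needed to terminate the reduction:
>
>    n := 0.
>    firstFive := [n := n + 1]
>                   transduce: 5 take
>                   reduce: [:col :e | col add: e; yourself]
>                   init: OrderedCollection new.  "--> 1 2 3 4 5"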
>
>
> ## Transducers
>
> A transducer is an object that transforms a reducing function into another.
> Transducers encapsulate common steps in processing data sequences, such as
> map, filter, concatenate, and flatten.
> A transducer transforms a reducing function into another via #transform:
> in order to add those steps.
> They can be composed using #*, which yields a new transducer that does both
> transformations.
> Most transducers require an argument, typically blocks, symbols or numbers:
>
>    square := Map function: #squared.
>    take := Take number: 1000.
>
> To facilitate compact notation, the argument types implement corresponding
> methods:
>
>    squareAndTake := #squared map * 1000 take.
>
> Transducers requiring no argument are singletons and can be accessed by
> their class name.
>
>    flattenAndDedupe := Flatten * Dedupe.
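>
> A usage sketch, assuming (as in the Clojure library) that Dedupe drops
> consecutive duplicates:
>
>    #((1 1 2) (2 3)) transduce: Flatten * Dedupe
>                     reduce: [:col :e | col add: e; yourself]
>                     init: OrderedCollection new.  "--> 1 2 3"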
>
>
>
> # Advanced Concepts
>
> ## Data flows
>
> Processing a sequence of data can often be regarded as a data flow.
> The operator #<~ allows defining a flow from a data source through
> processing steps to a drain.
> For example:
>
>    squares := Set <~ 1000 take <~ #squared map <~ (1 to: 1000).
>    fileOut writeStream <~ #isSeparator filter <~ fileIn readStream.
>
> In both examples #<~ is only used to set up the data flow using reducing
> functions and transducers.
> In contrast to streams, transducers are completely independent from input
> and output sources.
> Hence, we have a clear separation of reading data, writing data, and
> processing elements (see the sketch after this list).
> - Sources know how to iterate over data with a reducing function, e.g.,
> via #reduce:init:.
> - Drains know how to collect data using a reducing function.
> - Transducers know how to process single elements.
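>
> For instance, the same transducer can be wired into different flows; a
> small sketch reusing the drains and sources shown above:
>
>    odd  := #odd filter.
>    set1 := Set <~ odd <~ #(1 2 3 4).
>    set2 := Set <~ odd <~ (1 to: 10).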
>
>
> ## Reductions
>
> A reduction binds an initial value or a block yielding an initial value to
> a reducing function.
> The idea is to define a ready-to-use process that can be applied in
> different contexts.
> Reducibles handle reductions via #reduce: and #transduce:reduce:.
> For example:
>
>    sum := #+ init: 0.
>    sum1 := #(1 1 1) reduce: sum.
>    sum2 := (1 to: 1000) transduce: #odd filter reduce: sum.
>
>    asSet := [:set :e | set add: e; yourself] initializer: [Set new].
>    set1 := #(1 1 1) reduce: asSet.
>    set2 := (1 to: 1000) transduce: #odd filter reduce: asSet.
>
> By combining a transducer with a reduction, a process can be further
> modified.
>
>    sumOdds := sum <~ #odd filter.
>    setOdds := asSet <~ #odd filter.
>
>
> ## Eductions
>
> An eduction combines a reducible data source with a transducer.
> The idea is to define a transformed (virtual) data source that need not
> be stored in memory.
>
>    odds1 := #odd filter <~ #(1 2 3) readStream.
>    odds2 := #odd filter <~ (1 to: 1000).
>
> Depending on the underlying source, eductions can be processed once
> (streams, e.g., odds1) or multiple times (collections, e.g., odds2).
> Since no intermediate data is stored, transducer actions are lazy, i.e.,
> they are invoked each time the eduction is processed.
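>
> Since odds2 is backed by a collection and, as an eduction, understands
> #reduce:init:, it can feed several reductions; a sketch:
>
>    total := odds2 reduce: #+ init: 0.
>    all   := odds2 reduce: [:col :e | col add: e; yourself]
>                   init: OrderedCollection new.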
>
>
>
> # Origins
>
> Transducers is based on the Clojure library of the same name and its ideas.
> Please see:
> http://clojure.org/transducers
>




Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

abergel
In reply to this post by Steffen Märcker
If I remember correctly, there is a parcel in VisualWorks to export a file out (Squeak format).

@Milton, can you give a hand to Steffen?

Alexandre
-- 
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.



On May 31, 2017, at 10:32 AM, Steffen Märcker <[hidden email]> wrote:

Thanks for the encouraging response! First question: Which is the recommended (friction free) way to exchange code between VW and Pharo?

Cheers!
Steffen

Am .05.2017, 16:22 Uhr, schrieb Alexandre Bergel <[hidden email]>:

I second Sven. This is very exciting!

Let us know when you have something ready to be tested.

Alexandre




Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Peter Uhnak
In reply to this post by Steffen Märcker
You can also give SIF (Smalltalk Interchange Format) a try:

https://github.com/peteruhnak/sif

There is a parcel file in the VW subfolder, and docs here: https://github.com/peteruhnak/sif/blob/master/docs.md

I've used it about a year ago for some small code exchanges; of course you will have some issues with regard to namespaces, but it is doable.

If you decide to try it, please let me know whether it worked for you (so I can update the GitHub page).

Peter


On Wed, May 31, 2017 at 04:32:53PM +0200, Steffen Märcker wrote:

> [...]

Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Stephane Ducasse-3
In reply to this post by abergel
There is a package for that, NGFileOuter or something like that, on the Cincom store.
We used it for mobydic code.

On Wed, May 31, 2017 at 6:35 PM, Alexandre Bergel <[hidden email]> wrote:
[...]





Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Steffen Märcker
Dear all,

thanks for the many suggestions. I didn't have time to test all
import/export ways yet. But for now, I can report on two:

1) NGFileOuter
Unfortunately, it raised several MNUs in my image. I'll investigate them
later.

2) FileOut30 (VW Contributed)
I was able to file out the code except for the package definition.
Replacing {category: ''} in the class definitions with {package:
'Transducers'} fixed that. However, methods that extend existing classes
did not end up in the Transducers package. Is there a similarly easy change
to the file-out that makes this happen? Also, I'd like to add the package
comment if that's possible.

Most things appear to work as far as I can see. Two exceptions:
1) Random is a subclass of Stream in VW, but not in Pharo. Hence,
I'll have to copy some methods from Stream to Random.
2) I used #beImmutable in VW but I couldn't yet figure out how to make  
objects immutable in Pharo.

However, until the tests are ported, I cannot guarantee that. Porting the test
suite will be another beast, since I rely on the excellent
mocking/stubbing library DoubleAgents by Randy Coulman. I am not sure how
I will handle that. In general, I think it would really be worth the
effort to port DoubleAgents to Pharo, too. It is pretty powerful and
produces easy-to-read and understandable mocking/stubbing code. Personally,
I clearly prefer it, e.g., over Mocketry (no offence intended!).

Attached you'll find the file-out that I loaded into Pharo. The issues  
above are not addressed yet. However, the following example works:

| scale sides count collect experiment random samples coin flip |
scale := [:x | (x * 2 + 1) floor] map.
sides := #(heads tails) replace.
count := 1000 take.
collect := [:bag :c | bag add: c; yourself].
experiment := (scale * sides * count) transform: collect.
random := #(0.1 0.3 0.4 0.5 0.6 0.7 0.8 0.9).

samples := random
               reduce: experiment
               init: Bag new.

samples := random
               transduce: scale * sides * count
               reduce: collect
               init: Bag new.

coin := sides <~ scale <~ random.
flip := Bag <~ count.

samples := flip <~ coin.


Best, Steffen


Am .06.2017, 08:16 Uhr, schrieb Stephane Ducasse <[hidden email]>:

> [...]

transducers.pharo.st (111K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Yanni Chiu-2

To get the extension methods into the Transducers package, the following worked for me: edit the category to have the prefix '*Transducers-'

2710c2710

< !Number methodsFor: 'transforming' stamp: ' 2/6/17 15:38'!

---

> !Number methodsFor: '*Transducers-transforming' stamp: ' 2/6/17 15:38'!



On Fri, Jun 2, 2017 at 11:05 AM, Steffen Märcker <[hidden email]> wrote:
[...]





Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Stephane Ducasse-3
In reply to this post by Steffen Märcker
Hi steffen

This is great news. We need cool frameworks.
- There is a package on the Cincom store to support the migration from VW to Pharo. FileOuter something; the name escapes my mind now. We updated it last year to help port one application to Pharo.
- I can help produce a nice document :)

On Wed, May 31, 2017 at 2:23 PM, Steffen Märcker <[hidden email]> wrote:
[...]


Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Stephane Ducasse-3
In reply to this post by Steffen Märcker
I have a design question.

Why is the library implemented in functional style rather than with messages?
I do not see why this is needed. To my eyes the compact notation
goes against the readability of the code, and it feels ad hoc in Smalltalk.


I really prefer

square := Map function: #squared.
take := Take number: 1000.

Because I know that I can read it and understand it. 
From that perspective I prefer Xtreams. 

Stef









On Wed, May 31, 2017 at 2:23 PM, Steffen Märcker <[hidden email]> wrote:
[...]


Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Damien Pollet-2
If I recall correctly, there is an alternate protocol that looks more like Xtreams or the traditional select/collect iterations.

On 2 June 2017 at 21:12, Stephane Ducasse <[hidden email]> wrote:
[...]



Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by Stephane Ducasse-3
Hi Stephane!

> This is great news. We need cool frameworks.

I am really curious how well it will work for others. =)

> - There is a package on the Cincom store to support the migration from VW to
> Pharo. FileOuter something; the name escapes my mind now. We updated it
> last year to help port one application to Pharo.

I think it is FileOuterNG (at least your name appears quite often in the
commits ;-) ).
Unfortunately, I didn't get it to work straight away and got some MNUs. But
it is very likely that this is my fault and I missed something
important. I'll try it again later.

> - I can help produce a nice document :)

Do you mean like the booklets published over the last weeks? This would be  
great.

Do you have an idea how to add a package comment to the simple file-out
I used? I think a simple message send should suffice.

Cheers!
Steffen



Am .06.2017, 21:06 Uhr, schrieb Stephane Ducasse <[hidden email]>:

> [...]

Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by Damien Pollet-2
Hi Stephane & Damien!

The short answer is that the compact notation turned out to work much
better for me in my code, especially if multiple transducers are
involved. But that's my personal taste. You can choose whichever suits you
better. In fact,

   1000 take.

just sits on top and simply calls

   Take number: 1000.

If the need arises, we could of course factor the compact notation out  
into a separate package. Btw, would you prefer (Take n: 1000) over (Take  
number: 1000)?

Damien, you're right, I experimented with additional styles. Right now, we  
already have in the basic Transducer package:

   collection transduce: #squared map * 1000 take. "which is equal to"
   (collection transduce: #squared map) transduce: 1000 take.

Basically, one can split #transduce:reduce:init: into single calls of  
#transduce:, #reduce:, and #init:, depending on the needs.
I also have an (unfinished) extension, that allows to write:

   (collection transduce map: #squared) take: 1000.

This feels familiar, but becomes a bit hard to read if more than two steps  
are needed.

   collection transduce
                map: #squared;
                take: 1000.

I think this alternative would read nicely. But as the message chain has
to modify the underlying object (an eduction), very sneaky side effects may
occur. E.g., consider

   eduction := collection transduce.
   squared  := eduction map: #squared.
   take     := squared take: 1000.

Now, all three variables hold onto the same object, which first squares
all elements and then takes the first 1000.

Best,
Steffen




Am .06.2017, 21:28 Uhr, schrieb Damien Pollet  
<[hidden email]>:

> If I recall correctly, there is an alternate protocol that looks more  
> like
> xtreams or the traditional select/collect iterations.
>
> On 2 June 2017 at 21:12, Stephane Ducasse <[hidden email]>  
> wrote:
>
>> I have a design question
>>
>> why the library is implemented in functional style vs messages?
>> I do not see why this is needed. To my eyes the compact notation
>> goes against readibility of code and it feels ad-hoc in Smalltalk.
>>
>>
>> I really prefer
>>
>> square := Map function: #squared.
>> take := Take number: 1000.
>>
>> Because I know that I can read it and understand it.
>> From that perspective I prefer Xtreams.
>>
>> Stef
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Wed, May 31, 2017 at 2:23 PM, Steffen Märcker <[hidden email]> wrote:
>>
>>> [introduction snipped; it is quoted in full at the top of this thread]
>>>
>>> We can also express the experiment as data-flow using #<~.
>>> This enables us to build objects that can be re-used in other  
>>> experiments.
>>>
>>>     coin := sides <~ scale <~ Random new.
>>>     flip := Bag <~ count.
>>>
>>> Coin is an eduction, i.e., it binds transducers to a source and
>>> understands #reduce:init: among others.
>>> Flip is a transformed reduction, i.e., it binds transducers to a  
>>> reducing
>>> function and an initial value.
>>> By sending #<~, we draw further samples from flipping the coin.
>>>
>>>     samples := flip <~ coin.
>>>
>>> This yields a new Bag with another 1000 samples.
>>>
>>>
>>>
>>> # Basic Concepts
>>>
>>> ## Reducing Functions
>>>
>>> A reducing function represents a single step in processing a data
>>> sequence.
>>> It takes an accumulated result and a value, and returns a new  
>>> accumulated
>>> result.
>>> For example:
>>>
>>>     collect := [:col :e | col add: e; yourself].
>>>     sum := #+.
>>>
>>> A reducing function can also be ternary, i.e., it takes an accumulated
>>> result, a key and a value.
>>> For example:
>>>
>>>     collect := [:dict :k :v | dict at: k put: v; yourself].
>>>
>>> Reducing functions may be equipped with an optional completing action.
>>> After finishing processing, it is invoked exactly once, e.g., to free
>>> resources.
>>>
>>>     stream := [:str :e | str nextPut: e; yourself] completing: #close.
>>>     absSum := #+ completing: #abs.
>>>
>>> A reducing function can end processing early by signaling Reduced with  
>>> a
>>> result.
>>> This mechanism also enables the treatment of infinite sources.
>>>
>>>     nonNil := [:res :e | e ifNil: [Reduced signalWith: res] ifNotNil: [res]].
>>>
>>> The primary approach to process a data sequence is the reducing  
>>> protocol
>>> with the messages #reduce:init: and #transduce:reduce:init: if  
>>> transducers
>>> are involved.
>>> The behavior is similar to #inject:into: but in addition it takes care  
>>> of:
>>> - handling binary and ternary reducing functions,
>>> - invoking the completing action after finishing, and
>>> - stopping the reduction if Reduced is signaled.
>>> The message #transduce:reduce:init: just combines the transformation  
>>> and
>>> the reducing step.
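>>>
>>> For example, summing the odd numbers up to 10 in a single send (a
>>> sketch using the protocol described above):
>>>
>>>     (1 to: 10) transduce: #odd filter reduce: #+ init: 0.  "25"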
>>>
>>> However, as reducing functions are step-wise in nature, an application  
>>> may
>>> choose other means to process its data.
>>>
>>>
>>> ## Reducibles
>>>
>>> A data source is called reducible if it implements the reducing  
>>> protocol.
>>> Default implementations are provided for collections and streams.
>>> Additionally, blocks without an argument are reducible, too.
>>> This allows adapting to custom data sources without additional effort.
>>> For example:
>>>
>>>     "XStreams adaptor"
>>>     xstream := filename reading.
>>>     reducible := [[xstream get] on: Incomplete do: [Reduced signal]].
>>>
>>>     "natural numbers"
>>>     n := 0.
>>>     reducible := [n := n+1].
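>>>
>>> Such a block can be used like any other source, e.g. (a sketch,
>>> assuming the reducing protocol shown above):
>>>
>>>     first5 := [n := n + 1]
>>>                  transduce: 5 take
>>>                  reduce: [:col :e | col add: e; yourself]
>>>                  init: OrderedCollection new.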
>>>
>>>
>>> ## Transducers
>>>
>>> A transducer is an object that transforms a reducing function into
>>> another.
>>> Transducers encapsulate common steps in processing data sequences,  
>>> such as
>>> map, filter, concatenate, and flatten.
>>> A transducer transforms a reducing function into another via  
>>> #transform:
>>> in order to add those steps.
>>> They can be composed using #* which yields a new transducer that does  
>>> both
>>> transformations.
>>> Most transducers require an argument, typically blocks, symbols or
>>> numbers:
>>>
>>>     square := Map function: #squared.
>>>     take := Take number: 1000.
>>>
>>> To facilitate compact notation, the argument types implement  
>>> corresponding
>>> methods:
>>>
>>>     squareAndTake := #squared map * 1000 take.
>>>
>>> Transducers requiring no argument are singletons and can be accessed by
>>> their class name.
>>>
>>>     flattenAndDedupe := Flatten * Dedupe.
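>>>
>>> Assuming Dedupe, like its Clojure counterpart, drops consecutive
>>> duplicates, this composition could be used as follows (a sketch):
>>>
>>>     #((1 1) (1 2 2)) transduce: flattenAndDedupe
>>>                      reduce: [:col :e | col add: e; yourself]
>>>                      init: OrderedCollection new.  "(1 2)"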
>>>
>>>
>>>
>>> # Advanced Concepts
>>>
>>> ## Data flows
>>>
>>> Processing a sequence of data can often be regarded as a data flow.
>>> The operator #<~ allows defining a flow from a data source through
>>> processing steps to a drain.
>>> For example:
>>>
>>>     squares := Set <~ 1000 take <~ #squared map <~ (1 to: 1000).
>>>     fileOut writeStream <~ #isSeparator filter <~ fileIn readStream.
>>>
>>> In both examples #<~ is only used to set up the data flow using  
>>> reducing
>>> functions and transducers.
>>> In contrast to streams, transducers are completely independent from  
>>> input
>>> and output sources.
>>> Hence, we have a clear separation of reading data, writing data and
>>> processing elements.
>>> - Sources know how to iterate over data with a reducing function, e.g.,
>>> via #reduce:init:.
>>> - Drains know how to collect data using a reducing function.
>>> - Transducers know how to process single elements.
>>>
>>>
>>> ## Reductions
>>>
>>> A reduction binds an initial value or a block yielding an initial  
>>> value to
>>> a reducing function.
>>> The idea is to define a ready-to-use process that can be applied in
>>> different contexts.
>>> Reducibles handle reductions via #reduce: and #transduce:reduce:.
>>> For example:
>>>
>>>     sum := #+ init: 0.
>>>     sum1 := #(1 1 1) reduce: sum.
>>>     sum2 := (1 to: 1000) transduce: #odd filter reduce: sum.
>>>
>>>     asSet := [:set :e | set add: e; yourself] initializer: [Set new].
>>>     set1 := #(1 1 1) reduce: asSet.
>>>     set2 := (1 to: 1000) transduce: #odd filter reduce: asSet.
>>>
>>> By combining a transducer with a reduction, a process can be further
>>> modified.
>>>
>>>     sumOdds := sum <~ #odd filter.
>>>     setOdds := asSet <~ #odd filter.
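>>>
>>> Applying such a modified reduction (a sketch):
>>>
>>>     #(1 2 3 4 5) reduce: sumOdds.  "1 + 3 + 5 = 9"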
>>>
>>>
>>> ## Eductions
>>>
>>> An eduction combines a reducible data source with a transducer.
>>> The idea is to define a transformed (virtual) data source that need not
>>> be stored in memory.
>>>
>>>     odds1 := #odd filter <~ #(1 2 3) readStream.
>>>     odds2 := #odd filter <~ (1 to: 1000).
>>>
>>> Depending on the underlying source, eductions can be processed once
>>> (streams, e.g., odds1) or multiple times (collections, e.g., odds2).
>>> Since no intermediate data is stored, transducer actions are lazy, i.e.,
>>> they are invoked each time the eduction is processed.
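>>>
>>> For example, each reduction of odds2 runs the filter anew (a sketch):
>>>
>>>     odds2 reduce: [:col :e | col add: e; yourself] init: Set new.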
>>>
>>>
>>>
>>> # Origins
>>>
>>> Transducers is based on the same-named Clojure library and its ideas.
>>> Please see:
>>> http://clojure.org/transducers
>>>
>>>

Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Steffen Märcker
In reply to this post by Yanni Chiu-2
Thanks, this appears to work.  Attached you'll find the file-out from
VisualWorks and the file-out from Pharo (includes package comment).

Cheers!
Steffen


On .06.2017 at 20:06, Yanni Chiu <[hidden email]> wrote:

> To get the extension methods into the Transducers package, the following
> worked for me - edit the category to have the prefix '*Transducers-'
>
> 2710c2710
> < !Number methodsFor: 'transforming' stamp: ' 2/6/17 15:38'!
> ---
> > !Number methodsFor: '*Transducers-transforming' stamp: ' 2/6/17 15:38'!
>
>
> On Fri, Jun 2, 2017 at 11:05 AM, Steffen Märcker <[hidden email]> wrote:
>
>> Dear all,
>>
>> thanks for the many suggestions. I haven't had time to test all
>> import/export ways yet. But for now, I can report on two:
>>
>> 1) NGFileOuter
>> Unfortunately it raised several MNUs in my image. I'll investigate them
>> later.
>>
>> 2) FileOut30 (VW Contributed)
>> I was able to file out the code except for the package definition.
>> Replacing {category: ''} in the class definitions with {package:
>> 'Transducers'} fixed that. However, methods that extend existing classes
>> did not end up in the Transducers package. Is there a similar easy  
>> change
>> to the file-out making that happen? Also I'd like to add the package
>> comment if that's possible.
>>
>> Most things appear to work as far as I can see. Two exceptions:
>> 1) Random is a subclass of Stream in VW and in Pharo it is not. Hence,
>> I'll have to copy some methods from Stream to Random.
>> 2) I used #beImmutable in VW but I couldn't yet figure out how to make
>> objects immutable in Pharo.
>>
>> However, until the tests are ported, I cannot give any guarantees.
>> Porting the test suite will be another beast, since I rely on the
>> excellent mocking/stubbing library DoubleAgents by Randy Coulman. I am
>> not sure how I will handle that. In general, I think it would really be
>> worth the effort to port it to Pharo, too. DoubleAgents is pretty
>> powerful and produces mocking/stubbing code that is easy to read and
>> understand. Personally, I clearly prefer it, e.g., over Mocketry (no
>> offence intended!).
>>
>> Attached you'll find the file-out that I loaded into Pharo. The issues
>> above are not addressed yet. However, the following example works:
>>
>> | scale sides count collect experiment random samples coin flip |
>> scale := [:x | (x * 2 + 1) floor] map.
>> sides := #(heads tails) replace.
>> count := 1000 take.
>> collect := [:bag :c | bag add: c; yourself].
>> experiment := (scale * sides * count) transform: collect.
>> random := #(0.1 0.3 0.4 0.5 0.6 0.7 0.8 0.9).
>>
>> samples := random
>>               reduce: experiment
>>               init: Bag new.
>>
>> samples := random
>>               transduce: scale * sides * count
>>               reduce: collect
>>               init: Bag new.
>>
>> coin := sides <~ scale <~ random.
>> flip := Bag <~ count.
>>
>> samples := flip <~ coin.
>>
>>
>> Best, Steffen
>>
>>
>>
>> On .06.2017 at 08:16, Stephane Ducasse <[hidden email]> wrote:
>>
>>> There is a package for that, NGFileOuter or something like that, on the
>>> Cincom store. We used it for mobydic code.
>>>
>>> On Wed, May 31, 2017 at 6:35 PM, Alexandre Bergel <
>>> [hidden email]>
>>> wrote:
>>>
>>>> If I remember correctly, there is a parcel in VisualWorks to export a
>>>> file-out (Squeak format).
>>>>
>>>> @Milton, can you give a hand to Steffen?
>>>>
>>>> Alexandre
>>>> --
>>>> _,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
>>>> Alexandre Bergel  http://www.bergel.eu
>>>> ^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.
>>>>
>>>>
>>>>
>>>> On May 31, 2017, at 10:32 AM, Steffen Märcker <[hidden email]> wrote:
>>>>
>>>> Thanks for the encouraging response! First question: Which is the
>>>> recommended (friction-free) way to exchange code between VW and Pharo?
>>>>
>>>> Cheers!
>>>> Steffen
>>>>
>>>> On .05.2017 at 16:22, Alexandre Bergel <[hidden email]> wrote:
>>>>
>>>> I second Sven. This is very exciting!
>>>>
>>>> Let us know when you have something ready to be tested.
>>>>
>>>> Alexandre
>>>>
>>>>
>>>>
>>>>

Transducers.pharo.st (155K) Download Attachment
Transducers.vw.st (114K) Download Attachment
Reply | Threaded
Open this post in threaded view
|

Re: Porting Transducers to Pharo

Steffen Märcker
Dear all,

attached are updated file-outs. I fixed a couple of annoyances that  
slipped through yesterday evening. Most notably:

1) The random generator now works.
2) Early termination via the Reduced exception does not MNU anymore.
3) Printing a transducer holding a block does not MNU anymore.

Please give it a spin and tell me your impressions. At least the  
coin-flipping example from the package comment works now:

scale := [:x | (x * 2 + 1) floor] map.
sides := #(heads tails) replace.
count := 1000 take.
collect := [:bag :c | bag add: c; yourself].
experiment := (scale * sides * count) transform: collect.
"experiment cannot be re-used"
samples := Random new
               reduce: experiment
               init: Bag new.
"transform and reduce in one step"
samples := Random new
               transduce: scale * sides * count
               reduce: collect
               init: Bag new.
"assemble coin (eduction) and flip (reduction) objects"
coin := sides <~ scale <~ Random new.
flip := Bag <~ count.
"flip coin =)"
samples := flip <~ coin.
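
A quick sanity check on the result (the exact counts will vary):

samples size.                   "1000"
samples occurrencesOf: #heads.  "about 500"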

Cheers!
Steffen


On .06.2017 at 23:08, Steffen Märcker <[hidden email]> wrote:

> [previous message quoted in full; snipped]

Transducers.pharo.st (150K) Download Attachment
Transducers.vw.st (118K) Download Attachment