Memoization Question

Memoization Question

Torsten Bergmann
2398 posts
Phil wrote:
>How can I do memoization in Pharo?
 
See here   https://gist.github.com/jcromartie/2705526

Re: Memoization Question

Torsten Bergmann
2398 posts
Phil wrote:
>thanks for the pointer.
>Now, how do I use that?


First, create a method BlockClosure>>#memoized in your Pharo image (see https://gist.github.com/jcromartie/2705526):

memoized
  "Return a memoized version of a unary function (a one-argument block)"
  | cache |
  cache := Dictionary new.
  ^ [ :x | cache at: x ifAbsentPut: [ self value: x ] ]
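
If you prefer the playground to the system browser, something like the following should also work to install
the method (just a sketch, using the standard #compile: message to add the source to BlockClosure):

BlockClosure compile: 'memoized
  "Return a memoized version of a unary function (a one-argument block)"
  | cache |
  cache := Dictionary new.
  ^ [ :x | cache at: x ifAbsentPut: [ self value: x ] ]'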


So the idea of #memoized is to return a new "outer" block that evaluates the original block only when
the result for that argument is not yet in the cache.


As you can see from the comment, it works for "unary functions", i.e. for one-argument blocks.
A simple example:

| square memo |
square := [ :x | Transcript show: 'Processing'. x * x ].
memo := square memoized.
memo value: 2.
memo value: 3.
memo value: 2.

The Transcript shows "Processing" only two times instead of three, so x * x is only calculated for the
first two #value: calls.

For the last evaluation of the memo block (with argument 2 again) the result is already known from the cache,
so the square block is not evaluated again. The cached result is returned instead, which can be much faster
than doing the computation again.
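
To make the effect measurable you can give the block an artificially high cost and time the calls,
roughly like this (a sketch; the Delay just simulates expensive work):

| slow memo |
slow := [ :x | (Delay forSeconds: 1) wait. x * x ].
memo := slow memoized.
Time millisecondsToRun: [ memo value: 2 ].  "roughly 1000 - the block really runs"
Time millisecondsToRun: [ memo value: 2 ].  "close to 0 - answered from the cache"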

x * x is only an example - memoization usually pays off when a block is computationally expensive and
is often called with the same arguments. A precondition for this caching is that the block is pure, i.e. it
always returns the same result for the same input argument.
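
As a small (hypothetical) illustration of that precondition: a block that reads or changes outside state
will silently answer stale results once memoized:

| counter impure memo |
counter := 0.
impure := [ :x | counter := counter + 1. x + counter ].
memo := impure memoized.
memo value: 10.  "11 - computed, counter is now 1"
memo value: 10.  "still 11 from the cache - without memoization this would be 12"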


You can also try the two prepared workspaces in Pharo:

Non-memoized version: http://ws.stfx.eu/5ZE87XSFL9IH
Memoized Version: http://ws.stfx.eu/QLQLSLT7PZ4C

Hope this makes it clearer. This simple example is a nice performance trick and once again shows the power
of Smalltalk and block closures.

I don't know about the license - but we could ask the author, John Cromartie, to release this simple
method under the MIT license so it can be included in the standard image. I will try to contact him.

Bye
T.


Re: Memoization Question

Sven Van Caekenberghe-2
5697 posts
This is cool indeed.

I find an unlimited cache a bit coarse; it would not really work in practice. What about:

memoizedUsing: cache
  ^ [ :x | cache at: x ifAbsentPut: [ self value: x ] ]

then you could pass in a tuned LRUCache instance, among others.
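
A usage sketch (assuming an LRUCache class that understands #maximumWeight: and #at:ifAbsentPut:, as in
Pharo's caching support; a plain Dictionary would work as well):

| square memo |
square := [ :x | x * x ].
memo := square memoizedUsing: (LRUCache new maximumWeight: 100; yourself).
memo value: 2.  "4 - computed and cached"
memo value: 2.  "4 - from the cache; least recently used entries are evicted beyond the configured limit"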

Sven


Re: Memoization Question

wernerk
198 posts
Hi,
now this is a nice idea, especially the memoizedUsing: cache variant. I would
really appreciate it if that could be MIT licensed.
werner