[pharo-project/pharo-core] 06d05b: 40165


[pharo-project/pharo-core] 06d05b: 40165

Eliot Miranda-3
  Branch: refs/heads/4.0
  Home:   https://github.com/pharo-project/pharo-core
  Commit: 06d05bd822deee4a79736d9f99d4a666ca1637eb
      https://github.com/pharo-project/pharo-core/commit/06d05bd822deee4a79736d9f99d4a666ca1637eb
  Author: Jenkins Build Server <[hidden email]>
  Date:   2014-08-11 (Mon, 11 Aug 2014)

  Changed paths:
    A Athens-Morphic.package/AthensMorphicGradientPaint.class/instance/private/convertGradientToPaintOn_.st
    A Athens-Morphic.package/AthensMorphicGradientPaint.class/instance/rendering/athensFillPath_on_.st
    M Athens-Morphic.package/AthensMorphicGradientPaint.class/instance/rendering/athensFillRectangle_on_.st
    A Athens-Morphic.package/extension/MenuItemMorph/instance/drawBackgroundOnAthensCanvas_.st
    A Athens-Morphic.package/extension/MenuItemMorph/instance/drawIconAndLabelOnAthensCanvas_.st
    A Athens-Morphic.package/extension/MenuItemMorph/instance/drawIconOnAthensCanvas_.st
    A Athens-Morphic.package/extension/MenuItemMorph/instance/drawLabelOnAthensCanvas_.st
    A Athens-Morphic.package/extension/MenuItemMorph/instance/drawOnAthensCanvas_.st
    A Athens-Morphic.package/extension/MenuItemMorph/instance/drawSubMenuMarkerOnAthensCanvas_.st
    A Collections-Strings.package/String.class/instance/converting/asComment.st
    R Collections-Strings.package/String.class/instance/converting/asIdentifier_.st
    R Collections-Strings.package/String.class/instance/converting/asLegalSelector.st
    R Collections-Strings.package/String.class/instance/converting/asPathName.st
    R Collections-Strings.package/String.class/instance/converting/asSmalltalkComment.st
    A Collections-Strings.package/String.class/instance/converting/asUncommentedCode.st
    R Collections-Strings.package/String.class/instance/converting/asUncommentedSmalltalkCode.st
    R Collections-Strings.package/String.class/instance/converting/convertToSystemString.st
    M Collections-Strings.package/String.class/instance/converting/findSelector.st
    R Collections-Strings.package/String.class/instance/internet/withInternetLineEndings.st
    R Collections-Strings.package/String.class/instance/internet/withLineEndings_.st
    R Collections-Strings.package/String.class/instance/internet/withSqueakLineEndings.st
    R Collections-Strings.package/String.class/instance/internet/withUnixLineEndings.st
    R Collections-Strings.package/String.class/instance/internet/withoutQuoting.st
    R Collections-Strings.package/String.class/instance/paragraph support/encompassLine_.st
    R Collections-Strings.package/String.class/instance/paragraph support/encompassParagraph_.st
    R Collections-Strings.package/String.class/instance/paragraph support/endOfParagraphBefore_.st
    R Collections-Strings.package/String.class/instance/paragraph support/indentationIfBlank_.st
    A Collections-Strings.package/String.class/instance/platform conventions/withInternetLineEndings.st
    A Collections-Strings.package/String.class/instance/platform conventions/withLineEndings_.st
    A Collections-Strings.package/String.class/instance/platform conventions/withSqueakLineEndings.st
    A Collections-Strings.package/String.class/instance/platform conventions/withUnixLineEndings.st
    A Collections-Strings.package/String.class/instance/platform conventions/withoutQuoting.st
    R Collections-Strings.package/String.class/instance/translating/translated.st
    R Collections-Strings.package/String.class/instance/translating/translatedIfCorresponds.st
    R Collections-Strings.package/String.class/instance/translating/translatedTo_.st
    M CollectionsTests.package/StringTest.class/instance/tests - converting/testAsSmalltalkComment.st
    A Compression.package/extension/String/instance/convertToSystemString.st
    A Morphic-Base.package/MenuLineMorph.class/instance/drawing/drawOnAthensCanvas_.st
    M Morphic-Base.package/MenuLineMorph.class/instance/drawing/drawOn_.st
    A Morphic-Base.package/MenuLineMorph.class/instance/private/baseColor.st
    A Morphic-Base.package/extension/String/instance/indentationIfBlank_.st
    M Morphic-Core.package/WorldState.class/instance/stepping/runStepMethodsIn_.st
    A ScriptLoader40.package/ScriptLoader.class/instance/pharo - scripts/script165.st
    A ScriptLoader40.package/ScriptLoader.class/instance/pharo - updates/update40165.st
    M ScriptLoader40.package/ScriptLoader.class/instance/public/commentForCurrentUpdate.st
    A System-Localization.package/extension/String/instance/translated.st
    A System-Localization.package/extension/String/instance/translatedIfCorresponds.st
    A System-Localization.package/extension/String/instance/translatedTo_.st
    A Text-Core.package/extension/String/instance/encompassLine_.st
    A Text-Core.package/extension/String/instance/encompassParagraph_.st
    A Text-Core.package/extension/String/instance/endOfParagraphBefore_.st
    M Text-Edition.package/SmalltalkEditor.class/instance/shortcuts/toggleCommentOnSelectionOrLine.st
    R Tool-Finder.package/FinderUI.class/class/accessing/searchedTextListMaxSize.st
    R Tool-Finder.package/FinderUI.class/class/settings/finderUISettingsOn_.st
    M Tool-Finder.package/FinderUI.class/instance/accessing/searchedTextListMaxSize.st
    M Tool-Finder.package/MethodFinder.class/instance/initialize/initialize2.st
    R Tool-Transcript.package/ThreadSafeTranscriptPluggableTextMorph.class/README.md
    R Tool-Transcript.package/ThreadSafeTranscriptPluggableTextMorph.class/definition.st
    R Tool-Transcript.package/ThreadSafeTranscriptPluggableTextMorph.class/instance/drawing/drawSubmorphsOn_.st
    R Tool-Transcript.package/ThreadSafeTranscriptPluggableTextMorph.class/instance/initialization/initialize.st
    R Tool-Transcript.package/ThreadSafeTranscriptPluggableTextMorph.class/instance/transcript/update_.st
    M Tool-Transcript.package/extension/ThreadSafeTranscript/instance/openLabel_.st
    M Transcript.package/ThreadSafeTranscript.class/README.md
    M Transcript.package/ThreadSafeTranscript.class/definition.st
    M Transcript.package/ThreadSafeTranscript.class/instance/initialization/initialize.st
    M Transcript.package/ThreadSafeTranscript.class/instance/protected low level/contents.st
    M Transcript.package/ThreadSafeTranscript.class/instance/streaming/clear.st
    M Transcript.package/ThreadSafeTranscript.class/instance/streaming/endEntry.st
    A Transcript.package/ThreadSafeTranscript.class/instance/updating/stepGlobal.st

  Log Message:
  -----------
  40165
13773 implement drawOnAthensCanvas on MenuItemMorph and MenuLineMorph
        https://pharo.fogbugz.com/f/cases/13773

13754 cleaning strings API
        https://pharo.fogbugz.com/f/cases/13754

13815 Finder: remove non-needed preference
        https://pharo.fogbugz.com/f/cases/13815

13826 Extended search... fooled by block arguments
        https://pharo.fogbugz.com/f/cases/13826

13789 AthensWrapMorph can not paint gradient fill
        https://pharo.fogbugz.com/f/cases/13789

13806 Remove ThreadSafeTranscriptPluggableTextMorph
        https://pharo.fogbugz.com/f/cases/13806

http://files.pharo.org/image/40/40165.zip



Faster ThreadSafeTranscript Re: [pharo-project/pharo-core] 06d05b: 40165

Ben Coman
GitHub wrote:
  Branch: refs/heads/4.0
  Home:   https://github.com/pharo-project/pharo-core
  Commit: 06d05bd822deee4a79736d9f99d4a666ca1637eb
      https://github.com/pharo-project/pharo-core/commit/06d05bd822deee4a79736d9f99d4a666ca1637eb
  Author: Jenkins Build Server [hidden email]
  Date:   2014-08-11 (Mon, 11 Aug 2014)


13806 Remove ThreadSafeTranscriptPluggableTextMorph
	https://pharo.fogbugz.com/f/cases/13806


  

For anyone concerned about the performance of writing to Transcript from higher-priority threads: just reporting that altering ThreadSafeTranscript to be safe for Morphic without ThreadSafeTranscriptPluggableTextMorph had the side effect of improving performance by 25x. With two runs of the following script...

    Smalltalk garbageCollect.
    Transcript open. "close after each run"
    [ Transcript crShow:
        [ | string |
          string := '-'.
          1 to: 2000 do: [ :n |
              string := string , '-' , n printString.
              Transcript show: string ].
          (Delay forMilliseconds: 10) wait ] timeToRun.
    ] forkAt: 41.



Build 40162 reports timeToRun of 0:00:00:02.483 & 0:00:00:02.451
Build 40165 reports timeToRun of 0:00:00:00.037 & 0:00:00:00.099


Now I had meant to ask... I notice that FLFuelCommandLineHandler installs ThreadSafeTranscript, so I wonder if it is affected by this change. Can some Fuel experts comment?


Also, I am looking for some advice on a minor downside I just noticed.  The whole script above can complete between steps, so the entire output ends up in the PluggableTextMorph without its size being culled, which makes creating a selection really slow.  Normally the excess text shown by Transcript is culled in half [1] by PluggableTextMorph>>appendEntry each time #changed: is called.

PluggableTextMorph>>appendEntry
    "Append the text in the model's writeStream to the editable text. "
    textMorph asText size > model characterLimit ifTrue:   "<---[0]"
        ["Knock off first half of text"
        self selectInvisiblyFrom: 1 to: textMorph asText size // 2.   "<---[1]"
        self replaceSelectionWith: Text new].
    self selectInvisiblyFrom: textMorph asText size + 1 to: textMorph asText size.
    self replaceSelectionWith: model contents asText.  "<----[2]"
    self selectInvisiblyFrom: textMorph asText size + 1 to: textMorph asText size

That works fine when #appendEntry is being called with lots of small changes, but for a single large change the entire change ends up in PluggableTextMorph via [2]. In this case
    model characterLimit  "--> 20,000"     [0]
    model contents size "--> 5,671,343"    [2]
where model == Transcript.

So what behaviour would you like when too much is sent to Transcript?
a. Show all content, however briefly.
b. Only the last 20,000 characters are put into the PluggableTextMorph, and the earlier data is thrown away.

I see a few ways to deal with this:
1. Limit the stream inside Transcript to a maximum of 20,000 characters by basing it on some circular buffer.
2. Have "Transcript contents" return only the last 20,000 characters of its stream (rough sketch below).
3. Limit the text sent to #replaceSelectionWith: [2] to 20,000 characters.
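
For (2.), a rough sketch of the kind of override I have in mind. Treat it as pseudocode: accessSemaphore and stream are guesses at whatever ThreadSafeTranscript actually uses internally, and characterLimit is assumed to answer the same limit the morph consults.

    ThreadSafeTranscript>>contents
        "Answer only the most recent characterLimit characters, so that a
         burst written by a high priority process cannot flood the
         PluggableTextMorph in a single #changed: cycle."
        ^ accessSemaphore critical: [
            | all limit |
            all := stream contents.
            limit := self characterLimit.
            all size <= limit
                ifTrue: [ all ]
                ifFalse: [ all last: limit ] ]

(3.) would just be the analogous guard on the morph side, truncating model contents before it is handed to #replaceSelectionWith:.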

Thoughts anyone?

cheers -ben

Re: Faster ThreadSafeTranscript Re: [pharo-project/pharo-core] 06d05b: 40165

Eliot Miranda-2



On Tue, Aug 12, 2014 at 5:27 AM, Ben Coman <[hidden email]> wrote:

So what behaviour would you like when too much is sent to Transcript?
a. Show all content, however briefly.
b. Only the last 20,000 characters are put into the PluggableTextMorph, and the earlier data is thrown away.
I see a few ways to deal with this:
1. Limit the stream inside Transcript to a maximum of 20,000 characters by basing it on some circular buffer.
2. Have "Transcript contents" return only the last 20,000 characters of its stream.
3. Limit the text sent to #replaceSelectionWith: [2] to 20,000 characters.

Thoughts anyone?

IMO 20,000 characters is way too little.  I routinely set my limit to 200,000.   So b) but with a higher limit.  a) is pointlessly slow.

In your implementation list, surely 2. is the only thing that makes sense? If the limit can remain soft, then anyone who wants to see more simply raises their limit.  1. & 3. are implementation details left up to you.  As long as the transcript is reasonably fast and has the same protocol as WriteStream (it needs to support printOn: and storeOn: et al.), implement it as you will.  BTW, the Squeak transcript is appallingly slow.
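
To be concrete (an illustration only, not an exhaustive list), code like the following should keep working against whatever you end up with:

    "objects printing themselves straight onto the transcript"
    3 @ 4 printOn: Transcript.
    Transcript cr.
    #(1 2 3) storeOn: Transcript.
    Transcript cr.

    "plain stream-style writes"
    Transcript
        nextPutAll: 'done in ';
        print: 42;
        nextPutAll: ' ms';
        cr;
        flush.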

--
best,
Eliot

Re: Faster ThreadSafeTranscript Re: [pharo-project/pharo-core] 06d05b: 40165

stepharo
In reply to this post by Ben Coman
Hi Ben
It is really cool that you are spending time on this! Thanks thanks thanks!
Really thanks!

If you need a circular list, I'm coding one for the systemLogger.
Just ask.

(I'm about to change the API of the new linked list I wrote; I never
liked the Link and ValueLink that we have in the system,
but Link for a LinkedList is as good as Cell (even if I do not like Link :)),
so I'm about to stop insisting that I prefer Cell, and I will make sure
that both classes use the same vocabulary as our ancient LinkedList. :)
Maybe I'm becoming wise after all :)

Stef



Re: Faster ThreadSafeTranscript Re: [pharo-project/pharo-core] 06d05b: 40165

Max Leske
In reply to this post by Ben Coman

On 12.08.2014, at 14:27, Ben Coman <[hidden email]> wrote:

Now I had meant to ask... I notice that FLFuelCommandLineHandler installs ThreadSafeTranscript, so I wonder if it is affected by this change. Can some Fuel experts comment?

Hi Ben

I tried to figure this out with Mariano. Apparently the Transcript parts come from an improvement that was suggested by Pavel Krivanek (see this issue: http://code.google.com/p/pharo/issues/detail?id=6428). The code was presumably tailored to the needs that Hazel (today called Seed, http://smalltalkhub.com/#!/~Guille/Seed) had. We couldn’t find any other reason why we would do *anything* Transcript-related, especially since the command line handlers print to stdout anyway.

Mariano suggested I CC Ben and Guille, so I’m doing that. Maybe one of the two can shed some light on that method. From my point of view I don’t see why we should keep that part of the code.

Cheers,
Max





Re: Faster ThreadSafeTranscript Re: [pharo-project/pharo-core] 06d05b: 40165

Guillermo Polito
Well, I have no idea why that line is there :). It's old (so I do not remember), a tweak (read 'hack') I probably did to make the bootstrap work at the time, and I don't think it's correct that the line is there, so maybe we should remove it? :)


On Tue, Aug 12, 2014 at 10:03 PM, Max Leske <[hidden email]> wrote:

I tried to figure this out with Mariano. Apparently the Transcript parts come from an improvement that was suggested by Pavel Krivanek (see this issue: http://code.google.com/p/pharo/issues/detail?id=6428). The code presumably was tailored to the needs that Hazel (today called Seed http://smalltalkhub.com/#!/~Guille/Seed) had. We couldn’t find any other reason why we would do *anything* Transcript related, especially since the command line handlers print to stdout anyway.

Mariano suggested I CC Ben and Guille, so I’m doing that. Maybe one of the two can shed some light on that method. From my point of view I don’t see why we should keep that part of the code.

Cheers,
Max





Re: Faster ThreadSafeTranscript Re: [pharo-project/pharo-core] 06d05b: 40165

Ben Coman
In reply to this post by Eliot Miranda-2
Eliot Miranda wrote:
On Tue, Aug 12, 2014 at 5:27 AM, Ben Coman <[hidden email]> wrote:

So what behaviour would you like when too much is sent to Transcript?
a. Show all content, however briefly.
b. Only the last 20,000 characters are put into the PluggableTextMorph, and the earlier data is thrown away.
I see a few ways to deal with this:
1. Limit the stream inside Transcript to a maximum of 20,000 characters by basing it on some circular buffer.
2. Have "Transcript contents" return only the last 20,000 characters of its stream.
3. Limit the text sent to #replaceSelectionWith: [2] to 20,000 characters.

Thoughts anyone?

IMO 20,000 characters is way too little.  I routinely set my limit to 200,000.   So b) but with a higher limit.  a) is pointlessly slow.
Thanks Eliot. It's good to know what is expected from Transcript.

In your implementation list, surely 2. is the only thing that makes sense?

I agree, (2.) would be efficient.  It just would have been precluded if (a) was required.

If the limit can remain soft then anyone that wants to see more simply raises their limit. 

Do you mean you'd like Transcript characterLimit to be a Preference?

1. & 3. are implementation details left up to you. 

I can do (1.) with Stef's new circular list.  Doing (2.) means (3.) won't be required for this, but it's probably beneficial, so I'll log it as a separate issue.

As long as the transcript is reasonably fast and has the same protocol as WriteStream (it needs to support printOn: and storeOn: et al) implement it as you will.  BTW, the Squeak transcript is appallingly slow.

--
best,
Eliot


Re: Faster ThreadSafeTranscript Re: [pharo-project/pharo-core] 06d05b: 40165

Ben Coman
In reply to this post by stepharo
stepharo wrote:
> HI Ben
> This is really cool that you spend time on it! Thanks thanks thanks!
> Really thanks!
>
> if you need a circularList I'm coding one for the systemLogger.
> Just ask.

This would be useful to limit the size of the underlying stream if a high
priority thread fills it too much before the UI thread can clear it.
Can you point me at it? Will I be able to do something like...
    stream := WriteStream on: CircularList new.
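
To pin down what I'm after, here is roughly the shape I have in mind (CircularCharacterBuffer and its selectors are made up for illustration, not your actual class):

    Object subclass: #CircularCharacterBuffer
        instanceVariableNames: 'characters next full'
        classVariableNames: ''
        category: 'Sketch-Transcript'

    CircularCharacterBuffer class>>new: aSize
        ^ self new setCapacity: aSize

    CircularCharacterBuffer>>setCapacity: aSize
        characters := String new: aSize.
        next := 1.
        full := false

    CircularCharacterBuffer>>nextPutAll: aString
        "Append aString, overwriting the oldest characters once the buffer is full."
        aString do: [ :each |
            characters at: next put: each.
            next := next + 1.
            next > characters size
                ifTrue: [ next := 1. full := true ] ]

    CircularCharacterBuffer>>contents
        "Answer the buffered characters, oldest first, at most the capacity of them."
        ^ full
            ifFalse: [ characters copyFrom: 1 to: next - 1 ]
            ifTrue: [ (characters copyFrom: next to: characters size)
                        , (characters copyFrom: 1 to: next - 1) ]

With something like that the Transcript could write into the buffer directly (rather than via WriteStream on:) and contents would never exceed the limit.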

cheers -ben




Re: Faster ThreadSafeTranscript Re: [pharo-project/pharo-core] 06d05b: 40165

Ben Coman
In reply to this post by Guillermo Polito
Guillermo Polito wrote:
Well, I have no idea why that line is there :). It's old (so I do not remember), a tweak (read 'hack') I probably did to make the bootstrap work at the time, and I don't think it's correct that the line is there, so maybe we should remove it? :)

Okay.  I'll leave that to you guys.  I'm happy to proceed on the basis that it's not "expected" to break anything.
cheers -ben







Re: Faster ThreadSafeTranscript Re: [pharo-project/pharo-core] 06d05b: 40165

Eliot Miranda-2
In reply to this post by Ben Coman



On Wed, Aug 13, 2014 at 8:24 AM, Ben Coman <[hidden email]> wrote:
Do you mean you'd like Transcript characterLimit to be a Preference?

I'm not sure.  If it was a preference I'd use it, but few others may find it useful.  As I say, I need at least 20,000 chars on the transcript to effectively develop Cog; the VM simulator produces lots of output, and realistically a transcript is the only convenient place to view it.  So whether it was a preference or not, I'd make sure the limit was >= 20k.

--
best,
Eliot

Re: Faster ThreadSafeTranscript Re: [pharo-project/pharo-core] 06d05b: 40165

Max Leske
I opened a case for removing ThreadSafeTranscript from the FLCommandlineHandler. A proposed slice is in the inbox.

Cheers,
Max
