Slow compilation on one of my Windows PCs


Re: Slow compilation on one of my Windows PCs

Adventurer


On 2015/06/30 07:42 PM, Peter Uhnák wrote:
> Debian (host) "'4,855 per second'"
>
> Ubuntu (vbox) "'4,709 per second'"
>
> Win XP (vbox) "'504.099 per second'"
My Win 7 (host) (i7 CPU @ 2.10GHz)
=> 862.000 per second

Still a far cry from what the Linux boxes do.  I'm watching with interest.

Craig


Re: Slow compilation on one of my Windows PCs

Jan Blizničenko
In reply to this post by Mariano Martinez Peck
I'm gonna try to reply to all of you and try your ideas

Stephan Eggermont wrote
On 30-06-15 14:37, Jan Blizničenko wrote:
> Unfortunately no - all benchmarks I made with antivirus disabled.

Including the Microsoft stuff itself? (Security Essentials/Windows
Defender) Does your desktop have a drive > 2TB?
Yes, I have disabled / removed all such things.


Mariano Martinez Peck wrote
Can you think of any other Windows process that could be detecting changes
all the time in .changes and therefore have an impact on performance?
Do you have the Pharo image in Dropbox or a similar service?
I don't think so. I tried even running Windows in diagnostic startup (safe mode / running with only core processes and services) and it had no effect.


Mariano Martinez Peck wrote
I would check the processes running and CPU of Windows while loading the
code...
I looked at it a while ago and the processor didn't seem very busy, but now that I think about it, it is suspiciously almost not busy at all ... ~1 % of processor usage. On the laptop it fluctuates between 5 and 10 %. I'm talking about usage while benchmarking the "store" code.


philippeback wrote
Silly question: do you have a couple of Nautilus windows open?

Loading stuff generates announcements, and the Nautilus updates are killing
performance.
No, the only thing open is the Playground from which I run the benchmark code.


Sven Van Caekenberghe-2 wrote
> On 30 Jun 2015, at 18:36, Peter Uhnák <[hidden email]> wrote:
>
> I think we've safely established that the bottleneck is disk operations.

Let's take it one level down then,

[ 'foo.txt' asFileReference in: [ :file |
    file writeStreamDo: [ :out |
      3 timesRepeat: [ out << String loremIpsum ] ].
    file ensureDelete ] ] bench

  => "'512.595 per second'"

We could experiment with variants, calling #flush, writing by character, adding buffering, etc., but this is a start. At least this takes the source code stuff out of the equation.
"654.738 per second" on desktop
"1,271 per second" on laptop


Thank you all for the ideas.
Jan

Re: Slow compilation on one of my Windows PCs

Mariano Martinez Peck
Then the only thing I can think of is a VM primitive that is implemented differently in the Linux/Mac VMs than in the Windows VM...

On Tue, Jun 30, 2015 at 7:07 PM, Jan Blizničenko <[hidden email]> wrote:
> [...]

Re: Slow compilation on one of my Windows PCs

Jan Blizničenko
In reply to this post by Sven Van Caekenberghe-2
I tried the Time Profiler on:

| code method |
code := 'a'.

method := Object compiler
    source: code;
    requestor: nil;
    failBlock: [ ^ nil ];
    compile.

method
    putSource: code
    inFile: 2
    withPreamble: [ :f | f cr; nextPut: $!; nextChunkPut: 'Behavior method'; cr ].


desktop:
37.0% {38ms} MultiByteFileStream(WriteStream)>>nextChunkPut:
34.0% {35ms} MultiByteFileStream(WriteStream)>>cr
29.0% {30ms} InMidstOfFileinNotification class(Exception class)>>signal

laptop:
60.0% {4ms} MultiByteFileStream(WriteStream)>>cr
40.0% {2ms} InMidstOfFileinNotification class(Exception class)>>signal

Jan

Sven Van Caekenberghe-2 wrote
> On 30 Jun 2015, at 19:42, Peter Uhnák <[hidden email]> wrote:
>
> Debian (host) "'4,855 per second'"
>
> Ubuntu (vbox) "'4,709 per second'"
>
> Win XP (vbox) "'504.099 per second'"
>
> The difference here is just one order of magnitude and not two... so maybe I should take apart CompiledMethod>>putSource:inFile:withPreamble: statement by statement.
>
> Is there some more convenient way to do it than wrapping each statement in a block? Some way to profile a method or something?

Open the Time Profiler and enter your expression there; that is how you get an analysis of the whole call tree. It is not perfect, but it should help you find the bottleneck.

> Peter
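
For reference, a sketch of how such a profile can be taken directly on the expression (TimeProfiler class>>#spyOn: is assumed to be available in this Pharo version; MessageTally class>>#spyOn: is an alternative). Note that this appends dummy chunks to the .changes file, so run it in a throwaway image:

TimeProfiler spyOn: [
    50 timesRepeat: [
        | code method |
        code := 'a'.
        method := Object compiler
            source: code;
            requestor: nil;
            failBlock: [ nil ];
            compile.
        "writes the source to the .changes file, as in the snippet above"
        method
            putSource: code
            inFile: 2
            withPreamble: [ :f | f cr; nextPut: $!; nextChunkPut: 'Behavior method'; cr ] ] ]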

Re: Slow compilation on one of my Windows PCs

Nicolai Hess
In reply to this post by Mariano Martinez Peck


2015-07-01 0:39 GMT+02:00 Mariano Martinez Peck <[hidden email]>:
Then the only thing I can think of is a VM primitive that is implemented differently in the Linux/Mac VMs than in the Windows VM...

Yes, the VM primitives, as I already said a few messages above.
FilePrimitives ARE slow on Windows.

We may get better performance if we disable the Windows file cache/buffering, call CreateFile and WriteFile in slightly different ways, and do the buffering on our own, but this is not easy.

Luckily (?), this may already work if we use the standard file API (?)

A small dirty test:

store bench
latest vm ->  '22.098 per second'
modified vm -> '31,027 per second'

(in the modified VM I replaced all the file operations in sqWin32FilePrims.c with the code from sqFilePluginBasicPrims.c)

The result may seem strange, because both implementations ultimately use Win32's CreateFile/WriteFile functions, but maybe the second one uses better caching/buffering.

nicolai





 



Re: Slow compilation on one of my Windows PCs

David T. Lewis
On Wed, Jul 01, 2015 at 01:39:25AM +0200, Nicolai Hess wrote:
> 2015-07-01 0:39 GMT+02:00 Mariano Martinez Peck <[hidden email]>:
>
> > Then the only thing I can think of is a vm primitive that is implemented
> > differently in linux/mac than windows vm...
> >
>
> Yes, the vm primitives, like I already told some messages above.
> FilePrimitives ARE slow on windows.

Is that actually true? I have not done any work on Windows in a while, but
I recall that file primitives on Windows were quite efficient. They work
differently because they operate at the level of direct file handles rather
than stdio streams, but this may be faster in many cases. So unless you
have measurable evidence that the Windows IO is slower, don't assume it
to be true.

Dave



Re: Slow compilation on one of my Windows PCs

Jan Blizničenko
In reply to this post by Nicolai Hess
Sounds good. Could I try it and run all those benchmarks etc. on my PCs?

Jan

Nicolai Hess wrote
Yes, the VM primitives, as I already said a few messages above.
FilePrimitives ARE slow on Windows.
[...]

Re: Slow compilation on one of my Windows PCs

Nicolai Hess


2015-07-01 7:46 GMT+02:00 Jan Blizničenko <[hidden email]>:
Sounds good, could I try it and run all those benchmarks etc. on it on my
PCs?

Jan
With this:
http://stackoverflow.com/questions/14290337/is-fwrite-faster-than-writefile-in-windows
it seems this all comes down to calling FlushFileBuffers.

You can check what would happen if we just disable the explicit flushing
(comment out
self primFlush: fileID
in StandardFileStream>>#flush)
and run the benchmarks.


(Of course, I don't know what side effects this has; *I* think WriteFile flushes on its own, and of course, *if* this solves the problem, we cannot just disable the call to flush in the image, but have to change the Windows file plugin.)
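
To spell out what that experiment looks like, a sketch of StandardFileStream>>#flush with the primitive call commented out (if the original method body contains anything besides the primitive call, keep that part; and remember the data then sits only in OS buffers until #close or the OS flushes them):

flush
    "EXPERIMENT ONLY: skip the explicit flush primitive (FlushFileBuffers on Windows).
     #close still performs an implicit flush."
    "self primFlush: fileID"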




 





Re: Slow compilation on one of my Windows PCs

Jan Blizničenko
I tried commenting out primFlush: fileID in StandardFileStream>>#flush on my desktop PC, and the "store" benchmark's speed improved significantly.

Original result on Windows 7: 11 per sec
Result without flushing on Windows 7: 9 430 per sec
Original result on Linux Mint 17: 26 590 per sec
Result without flushing on Linux Mint 17: 34 879 per sec

The mentioned Linux Mint runs in VirtualBox on the same PC.

Also, loading Roassal2 now takes 58 seconds instead of 386.

Jan

Nicolai Hess wrote
[...]

Re: Slow compilation on one of my Windows PCs

Jan Blizničenko
I'm experimenting with commenting out the flush automatically via a startup script ( StandardFileStream compile: 'flush'. ), and loading now takes a reasonable amount of time.
I haven't found any drawbacks so far, but that doesn't mean much; the "manual" flushing is probably there for a reason. What is that reason?
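
A sketch of the do-it behind that line (where you hook it in is whatever your startup setup already uses; the essential part is the #compile: call, which replaces the method with an empty body, experiment only):

"e.g. in the image's startup script, or run once from a Playground"
StandardFileStream compile: 'flush
    "Intentionally left empty: skips the explicit flush primitive (see this thread)."'.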

Jan

Jan Blizničenko wrote
[...]

Re: Slow compilation on one of my Windows PCs

Sven Van Caekenberghe-2
#flush on a stream means pushing all data to the final destination, clearing buffers, doing actual network transfers.

What can happen when you disable that?

That some data does not arrive where it should, I guess.

Mind that #close most of the time does an automatic/implicit #flush.

Anyway, I don't think disabling #flush is a real solution.
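
A small sketch of that last point: because #writeStreamDo: closes (and therefore implicitly flushes) the stream, the data still reaches the file even when no explicit #flush is sent (hypothetical file name):

| file contents |
file := 'close-flush-demo.txt' asFileReference.
file writeStreamDo: [ :out | out << 'hello' ].    "no explicit #flush"
file readStreamDo: [ :in | contents := in upToEnd ].
file ensureDelete.
contents    "=> 'hello' - the implicit flush on close wrote it out"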

> On 02 Jul 2015, at 17:46, Jan Blizničenko <[hidden email]> wrote:
> [...]



Re: Slow compilation on one of my Windows PCs

Nicolai Hess


2015-07-02 19:01 GMT+02:00 Sven Van Caekenberghe <[hidden email]>:
[...]
Anyway, I don't think disabling #flush is a real solution.

+1

Maybe it is enough if we remove the call to "self flush" in WriteStream>>#nextChunkPut:?

I see that Squeak does not call flush, and :) in Squeak WriteStream>>#flush is empty (!).
(But I think Squeak and Pharo differ quite a bit in their stream and source/changes handling.)

 

> On 02 Jul 2015, at 17:46, Jan Blizničenko <[hidden email]> wrote:
> [...]




Re: Slow compilation on one of my Windows PCs

Jan Blizničenko
In the store benchmark I used, flush is called from MultiByteFileStream(WriteStream)>>#nextChunkPut: and from CompiledMethod>>#putSource:inFile:withPreamble:. When I comment out the flush calls in these two methods, it is as fast as when the whole body of the flush method is commented out. However, if I keep these two flush calls commented out but call flush once manually at the end of the "store" code, it is slow again (well, instead of 8-18 runs per second it does 40-50 per second, but that is far from the 9000 when flush does not happen at all). It seems to me that the problem is not how many times flush is called, but rather what actually happens when flush is called (how the primitive primFlush: is implemented on Windows).
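
A file-level sketch of the comparison Jan describes (hypothetical file name; assumes the write stream returned by #writeStreamDo: responds to #flush, as MultiByteFileStream does):

| file |
file := 'flush-compare.txt' asFileReference.
{
    'flush after every chunk' -> ([ file writeStreamDo: [ :out |
            3 timesRepeat: [ out << String loremIpsum. out flush ] ].
        file ensureDelete ] bench).
    'one flush at the end' -> ([ file writeStreamDo: [ :out |
            3 timesRepeat: [ out << String loremIpsum ].
            out flush ].
        file ensureDelete ] bench).
    'no explicit flush' -> ([ file writeStreamDo: [ :out |
            3 timesRepeat: [ out << String loremIpsum ] ].
        file ensureDelete ] bench)
}

If FlushFileBuffers is the bottleneck, both flushing variants should come out far below the third, matching what Jan observes.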

Jan

Nicolai Hess wrote
[...]

Re: Slow compilation on one of my Windows PCs

Sven Van Caekenberghe-2
Jan,

I find it rather hard to believe that you are the only Windows user for whom #flush is slow. This is such a fundamental operation that it should have been caught earlier. Remember that the Pharo code itself is identical across platforms. Anyway, that is how I feel about it.

Just an idea: does your source code contain non-Latin-1 characters (i.e. WideStrings), even a single one? Maybe your name itself (the č).

Sven

> On 02 Jul 2015, at 23:39, Jan Blizničenko <[hidden email]> wrote:
> [...]



Re: Slow compilation on one of my Windows PCs

philippeback
In reply to this post by Jan Blizničenko


On Thu, Jul 2, 2015 at 11:39 PM, Jan Blizničenko <[hidden email]> wrote:
[...]

In pharo-vm\platforms\win32\plugins\FilePlugin\sqWin32FilePrims.c

sqInt sqFileFlush(SQFile *f) {
  if (!sqFileValid(f))
    FAIL();
  /* note: ignores the return value in case of read-only access */
  FlushFileBuffers(FILE_HANDLE(f));
  return 1;
}

"If you perform one of these explicit flush operations, you aren't letting the disk cache do its job."

I guess we do just that.

More on that issue:


Side question: is write caching enabled on that drive?

Phil





Re: Slow compilation on one of my Windows PCs

Jan Blizničenko
In reply to this post by Sven Van Caekenberghe-2
Well, Peter posted his results on his Windows XP earlier in this thread, and it is slow on his PC too.

About those non-Latin-1 characters... most of these benchmarks compile only ASCII characters as far as I know, without our project ever loaded. I also tested the whole loading on the Roassal2 package, where I don't expect non-ASCII characters (is there any way to find out for sure?).
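
A sketch of one way to check: scan the method sources of the Roassal2 packages for WideStrings (the package-access selectors are an assumption for this Pharo version, and it only looks at instance-side methods):

| roassalClasses |
roassalClasses := (RPackageOrganizer default packages
    select: [ :package | package name beginsWith: 'Roassal2' ])
    flatCollect: [ :package | package definedClasses ].
"classes with at least one method whose source is a WideString, i.e. contains non-Latin-1 characters"
roassalClasses select: [ :class |
    class methods anySatisfy: [ :method | method sourceCode isWideString ] ]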

Sven Van Caekenberghe-2 wrote
[...]
Just an idea: does your source code contain non-Latin-1 characters (i.e. WideStrings), even a single one? Maybe your name itself (the č).

Re: Slow compilation on one of my Windows PCs

Jan Blizničenko
In reply to this post by philippeback
I tried running the store benchmark with different write-caching settings on Windows, which look like this: http://www.windows7library.com/blog/wp-content/uploads/2011/07/Write-Caching-Policy.jpg

both options checked: desktop 690 per sec, laptop 490 per sec
caching checked (enabled), second one not checked (which is default setting): desktop 18 per sec, laptop 480 per sec
neither checked: desktop 17 per sec, laptop 9 per sec

All of this is still very far from the more than 20,000 per second on Linux in VirtualBox on the same PC.

It is also curious how differently it affects the two machines. I should note that the desktop HDD is much older: the laptop is 3 years old, but the desktop HDD is a WD Blue WD6400AAKS made almost 8 years ago. However, disk monitoring utilities report that it is in perfect health (but it is still old; technologies have improved since then, I believe).

Jan

philippeback wrote
Side question: is write caching enabled on that drive?

Phil