Hi Nicolas,
"Nicolas Cellier"<[hidden email]> wrote: > Hi again, > I now have ported two more packages > - Xtreams-Transforms > - Xtreams-Substreams > and their tests This is great! I loaded your code into Pharo 1.1 and things seem to be working quite well. There was a complaint about missing SharedQueue2, I just created a dummy subclass of SharedQueue with that name and things seemed to load fine. XTRecyclingCenter seems to be subclass of XTWriteStream, it should be Object subclass, maybe a typo ? > I did not have any portability problem with those... > But that's because I did not handle the Character encoder/decoder. > Consequently, I have 8 tests failing (the Base64 related tests) I was thinking, we could implement just 'encoding: #ascii' quite easily, to make this reasonably usable at least for applications that are fine with that. We're actually contemplating about implementing our own encoders for Xtreams too. The VW ones are tied to the classic streams more than we like. You might have noticed some rather hairy parts in the encoding streams yourself, where we're trying to work around some of the issues it creates. The advantage of reusing the existing encoders was that there are quite a few of those available, so reimplementing all that would be a drag. But we can come up with a scheme where we can reimplement at least the common ones and in VW we can still preserve hooking into the old ones for the rest. I can give that a try on VW side in the meantime so you could get those for free. > Plus 4 other tests failing because of my poor implementation of > #after:do: (forking processes in a SUnit TestCase can't be that > obvious). I looked at this, and I think this is how #after:do: should look like: after: aDelay do: aBlock "Evaluate the argument block delayed after the specified duration." | watchdog | watchdog := [ aDelay wait. aBlock value. ] newProcess. watchdog priority: Processor userInterruptPriority. watchdog resume. 
This would assume that the 2 tests calling #timeout:server:client: would use a Delay instance instead of a Duration, which I'd be fine with. However, making that change doesn't quite get the tests running. It's blowing up with a DNU on the 'output close' bit in #terminate:server:client:, with output being nil, which I'm having trouble figuring out. I can't find who could possibly be nilling it out. I'm somewhat struggling finding my way around the Pharo tools. The test seems to otherwise pass, but the DNUs from the background process aren't nice. Also, when I just click on the test in the TestRunner, I actually get four DNUs, not just one as I would expect. So I'm kinda stuck, not sure how to move forward without help from someone who knows his way around Pharo.
I also get an odd failure from #testWriteCollectingMultipleBufferSize, which seems to run fine (against a collection) when I run the equivalent in a workspace, but strangely fails when running via the #timeout:server:client: construct, i.e. when client and server run in separate processes. Hm, now that I think of it, they sure could fail if something preempts the client and server processes at the right moment. I'll have to rethink that again.

> Now, the easy part of the port (copy/paste) is almost ended.
> Once we manage a compatible way to handle pragmas, PEG Parser should
> port quite easily too.

I wouldn't worry about the Parser stuff at this point.

> Then, the harder work begins:
> - File/Socket/Pipe
> - Pointers (in External Heap)

I wouldn't worry about the external heap stuff either. It's neat, but probably not something many people will miss.

> - Character encoding/decoding

I'll see if I can help with this from the VW side.

> - Compression/Decompression

Is Zlib linked into the VM in Squeak too? The compression streams are written directly against the ZLib API, so there aren't any VW specific dependencies other than how those calls are made.
Similarly, the crypto streams go directly against the EVP API in LibCrypto in OpenSSL, so as long as we can abstract over how those are called, the stream implementation should work as is.

> If you think you can help in any of these, please tell.

If you could compile a list of changes that you'd like us to adopt on the VW side, I'd certainly look at that. I did read your posts, but it's not entirely clear what you'd like to handle on the Squeak side and what on the VW side. A fileout would be best to avoid any confusion, but a description is fine too.

I'm also unclear about the become: discussion. Don't write streams in Squeak become: the underlying collection when they grow it?

Cheers,

Martin
On Sun, 10 Oct 2010, [hidden email] wrote:
> Hi Nicolas,
>
> I also get odd failure from #testWriteCollectingMultipleBufferSize, which
> seems to run fine (against collection) when I run the equivalent in a
> workspace, but strangely fails when running via the #timeout:server:client:
> construct, i.e. when client and server run in separate processes. Hm, now
> that I think of it, they sure could fail if something preempts the client,
> server processes at the right moment. I'll have to rethink that again.

These problems should be solved with the latest version of CoreTests.

> I'm also unclear about the become: discussion. Don't write streams in
> Squeak become: the underlying collection when they grow it ?

No. Squeak uses direct pointers, so it doesn't have an object table; therefore #become: is very expensive. See the Storage Management section of http://ftp.squeak.org/docs/OOPSLA.Squeak.html for details.

Levente

_______________________________________________
Pharo-project mailing list
[hidden email]
http://lists.gforge.inria.fr/cgi-bin/mailman/listinfo/pharo-project
On Oct 10, 2010, at 5:18 PM, [hidden email] wrote:

> "Nicolas Cellier"<[hidden email]> wrote:
>> Hi again,
>> I now have ported two more packages
>> - Xtreams-Transforms
>> - Xtreams-Substreams
>> and their tests

If you get Interpreting and Marshaling going we'll have ST-ST communication between Pharo <-> Squeak <-> VisualWorks with a fast binary protocol. I suspect some trickery will be required to deal with namespaces, as the protocol assumes 'full names' for classes when matching between the ends.

Michael
"Nicolas Cellier"<[hidden email]> wrote:
> Then, the harder work begins:
> - File/Socket/Pipe
> - Pointers (in External Heap)
> - Character encoding/decoding

I just published XtreamsDevelopment(387) with the following changes:

* abstracted Encoder out of encoded streams to allow implementing our own encodings
* Encoder.Encoders registers known encoders
* Encoder.DialectEncoder provides a hook to access dialect specific encoders
* added ASCIIEncoder implementing ASCII encoding portably
* added VWEncoder wrapping VisualWorks StreamEncoders

Obviously VWEncoder won't work in Squeak and should eventually move to a VW specific package. But at the moment it might be useful as an example of how to take advantage of the existing encoders in Squeak. I briefly looked at the Multilingual categories in Pharo and there seem to be quite a few encodings available through the TextConverter hierarchy. You've expressed some reservations about that, but it might be worth taking advantage of those at least initially. Anyway, with the above there should be at least an #ascii encoder available for now.
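For the Squeak/Pharo side, a portable ASCII encoder along these lines could be sketched as follows. This is illustrative only; the selector names and shape are assumptions, not the actual code published in XtreamsDevelopment(387). ASCII is trivial to implement portably because code points 0-127 map one-to-one onto single bytes:

```smalltalk
"Hypothetical sketch of an ASCII encoder's core methods; names are illustrative."

encode: aCharacter on: aByteWriteStream
	"Write the single byte for an ASCII character, rejecting anything above 127."
	| code |
	code := aCharacter asInteger.
	code > 127 ifTrue: [ ^self error: 'not encodable as ASCII' ].
	aByteWriteStream put: code

decode: aByteReadStream
	"Read one byte and answer the corresponding character, rejecting invalid bytes."
	| byte |
	byte := aByteReadStream get.
	byte > 127 ifTrue: [ ^self error: 'invalid ASCII byte' ].
	^Character value: byte
```

Since both directions are partial functions (bytes above 127 and non-Latin characters have no ASCII form), signaling an error is the simplest portable behavior; a real encoder might instead substitute a replacement character.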
2010/10/11 Levente Uzonyi <[hidden email]>:
> These problems should be solved with the latest version of CoreTests.
>
> Levente

Thanks!
On Mon, 11 Oct 2010, Nicolas Cellier wrote:
> 2010/10/11 Levente Uzonyi <[hidden email]>:
>> These problems should be solved with the latest version of CoreTests.
>>
>> Levente
>
> Thanks!

I just realized that sometimes the 100 ms timeout is not enough even on Cog and the test fails; 200 ms works well. It typically happens when running #testReadWriteLargeAmount. I'd suggest increasing it to 1000 ms or more, to make sure it doesn't fail with SqueakVM or on slower machines.

Levente
2010/10/11 Levente Uzonyi <[hidden email]>:
>>> These problems should be solved with the latest version of CoreTests.
>>>
>>> Levente
>>
>> Thanks!
>
> I just realized that sometimes the 100 ms timeout is not enough even on Cog
> and the test fails, 200 ms works well. It typically happens when running
> #testReadWriteLargeAmount. I'd suggest increasing it to 1000 ms or more, to
> make sure it doesn't fail with SqueakVM or on slower machines.
>
> Levente

Just proceed (in http://www.squeaksource.com/Xtreams) if you want to.
> I just realized that sometimes the 100 ms timeout is not enough even on Cog
> and the test fails, 200 ms works well. It typically happens when running
> #testReadWriteLargeAmount. I'd suggest increasing it to 1000 ms or more, to
> make sure it doesn't fail with SqueakVM or on slower machines.
>
> Levente

Hmm, I increased it to 1000 ms, but still get some random failures...

Nicolas
On Tue, 12 Oct 2010, Nicolas Cellier wrote:
>> I just realized that sometimes the 100 ms timeout is not enough even on Cog
>> and the test fails, 200 ms works well.
>
> Hmm, I increased it to 1000 ms, but still get some random failures...

Yes, it's a bit strange. If I change it to 200 ms, every test passes, no random failures. If I increase it to 1000 ms, I get random failures.

Levente
On Tue, 12 Oct 2010, Levente Uzonyi wrote:
> On Tue, 12 Oct 2010, Nicolas Cellier wrote:
>> Hmm, I increased it to 1000 ms, but still get some random failures...
>
> Yes, it's a bit strange. If I change it to 200 ms, every test passes, no
> random failures. If I increase it to 1000 ms, I get random failures.

Okay, I really tracked down the cause of the problem. The tests perform simple producer-consumer scenarios, but the consumers won't wait at all. If there's nothing to consume, the test fails. The randomness comes from the scheduler. The server process is started first, the client second. If the server process can produce enough input for the client process, the test will pass.
If you decrease the priority of the client process by one in #timeout:server:client:, the randomness will be gone and the tests will reliably pass, because the client won't be able to starve the server. To avoid false timeouts I had to increase the timeout value to 2000 milliseconds using SqueakVM.

I also found an issue: the process in XTTransformWriteStream doesn't terminate. If you run the tests, you'll get 12 lingering processes.

Levente
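The priority adjustment Levente describes could be sketched roughly like this inside #timeout:server:client:. This is illustrative only; the variable names and the surrounding test harness are assumptions, not the actual CoreTests code:

```smalltalk
"Hypothetical sketch: run the client one priority level below the server,
so the consuming process can never starve the producing one."
| serverProcess clientProcess |
serverProcess := [ serverBlock value ] newProcess.
serverProcess priority: Processor userSchedulingPriority.
clientProcess := [ clientBlock value ] newProcess.
clientProcess priority: Processor userSchedulingPriority - 1.
"Resume the producer first; the lower-priority consumer only runs
when the producer is blocked or done."
serverProcess resume.
clientProcess resume
```

With equal priorities the scheduler is free to let the client run before the server has produced anything, which is exactly the race behind the random failures; the one-level gap makes the interleaving deterministic on a single-core scheduler.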
2010/10/12 Levente Uzonyi <[hidden email]>:
> If you decrease the priority of the client process by one in
> #timeout:server:client:, the randomness will be gone and the tests will
> reliably pass, because the client won't be able to starve the server. To
> avoid false timeouts I had to increase the timeout value to 2000
> milliseconds using SqueakVM.

Good!

> I also found an issue: the process in XTTransformWriteStream doesn't
> terminate. If you run the tests, you'll get 12 lingering processes.

Ah yes, the #drainBuffer process...

Nicolas
"Levente Uzonyi"<[hidden email]> wrote:
> Okay, I really tracked down the cause of the problem. The tests perform
> simple producer-consumer scenarios, but the consumers won't wait at all.
> If there's nothing to consume, the test fails. The randomness comes from
> the scheduler. The server process is started first, the client second.
> If the server process can produce enough input for the client process,
> the test will pass.
> If you decrease the priority of the client process by one in
> #timeout:server:client:, the randomness will be gone and the tests will
> reliably pass, because the client won't be able to starve the server. To
> avoid false timeouts I had to increase the timeout value to 2000
> milliseconds using SqueakVM.

An alternative solution I was thinking about would be to dispense with the processes in the default case and simply run the server and client blocks sequentially. The cases where we really need the processes are channels with limited capacity (e.g. Socket or Pipe), where the writer needs to be able to block (when the OS buffers fill up) and let the reader consume some of the output before it continues writing. In these cases there aren't any shared resources, so process contention should not be an issue. The other advantage would be that cases with the default behavior would be easier to debug and would not be subject to timeouts. As it is, I still get timeouts sporadically even when the timeout is set to 1 second.

> I also found an issue: the process in XTTransformWriteStream doesn't
> terminate. If you run the tests, you'll get 12 lingering processes.

I don't see that happening in Pharo 1.1. Normally the process should end when the stream is closed (the close synchronizes with the process via the closeReady semaphore). Maybe there's a missing close somewhere in the test suite?
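The sequential default Martin proposes might look something like the following. This is only a sketch of the idea; the selectors #channelHasLimitedCapacity and #forkTimeout:server:client: are made up for illustration and are not part of the Xtreams test harness:

```smalltalk
"Hypothetical sketch: only fork processes when the channel has limited
capacity (Socket, Pipe); otherwise run producer, then consumer, in sequence."
timeout: aDelay server: serverBlock client: clientBlock
	self channelHasLimitedCapacity
		ifFalse: [
			serverBlock value.	"produce everything up front"
			clientBlock value ]	"then consume it; no scheduling involved"
		ifTrue: [ self forkTimeout: aDelay server: serverBlock client: clientBlock ]
```

The sequential branch works only because an unbounded channel (e.g. a growing collection) never makes the writer block, so the server block can run to completion before the client starts; a socket or pipe would deadlock that way once the OS buffers fill up, which is why those cases keep the two processes.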
On 12 Oct 2010, at 16:29, [hidden email] wrote:

> An alternative solution I was thinking about would be to dispense with the
> processes in the default case and simply run the server and client blocks
> sequentially.

Yes, having strangely behaving unit tests is really annoying, especially when all this process stuff would not really be needed.

> I don't see that happening in Pharo 1.1. Normally the process should end
> when the stream is closed (the close synchronizes with the process via the
> closeReady semaphore). Maybe there's a missing close somewhere in the test
> suite ?

I have these hanging processes as well (you probably know that the Process Browser does not auto update by default?). I've never seen unit tests that did that...
"Sven Van Caekenberghe"<[hidden email]> wrote:
>>> I also found an issue: the process in XTTransformWriteStream doesn't
>>> terminate. If you run the tests, you'll get 12 lingering processes.
>>
>> I don't see that happening in Pharo 1.1. Normally the process should end
>> when the stream is closed (the close synchronizes with the process via the
>> closeReady semaphore). Maybe there's a missing close somewhere in the test
>> suite ?
>
> I have these hanging processes as well (you probably know that the Process
> Browser does not auto update by default ?), I've never seen unit tests that
> did that...

OK, I can see this on the VW side too. The following test cases fail to close the transform write stream (against our own advice, sic):

Xtreams.ReadingWritingTest>>#testWriteRejecting
Xtreams.ReadingWritingTest>>#testWriteSelecting
Xtreams.ReadingWritingTest>>#testWriteTransformHexToByte
Xtreams.ReadingWritingTest>>#testWriteTransforming1into2
Xtreams.ReadingWritingTest>>#testWriteTransforming2into1

I'll make the fixes on the VW side.
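The fix for those tests amounts to guaranteeing that the transform write stream is closed when the test finishes. An illustrative sketch (the stream-construction expression is a placeholder; the exact Xtreams selectors may differ):

```smalltalk
"Hypothetical sketch: #ensure: guarantees the close runs even if the
assertion fails, letting the stream's background process terminate via
the closeReady semaphore instead of lingering."
| stream |
stream := destination transforming: [ :in :out | out put: in get ].
[ stream write: testData ] ensure: [ stream close ]
```

Wrapping the body in #ensure: rather than closing at the end of the test method is the safer pattern, since a failing assertion would otherwise skip the close and leave the #drainBuffer process hanging.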