Sometimes, OSProcess hangs even though the underlying command has completed.

For example, on Mac OS X 10.7.3:

    p := PipeableOSProcess waitForCommand: 'ps -ax -o pid,command'. "never returns"

even though "p upToEnd" immediately returns the output if I use #command: instead of waiting.

How best to handle this? Thanks.
Sean
Cheers,
Sean
On Wed, Apr 18, 2012 at 02:47:38PM -0700, Sean P. DeNigris wrote:
> Sometimes, OSProcess hangs even though the underlying command has completed.
>
> For example, on Mac 10.7.3:
>     p := PipeableOSProcess waitForCommand: 'ps -ax -o pid,command' "never returns"
> even though "p upToEnd" immediately returns the output if I use #command:
> instead of waiting.
>
> How best to handle this?
>
> Thanks.
> Sean

PipeableOSProcess class>>waitForCommand: is just a loop that waits for the command (running in an external process) to complete. If the command does not complete, it keeps looping.

The program that you are running (/bin/ps) will try to exit after it has written all of its output to its standard output, which in this case is a pipe connected back to your VM. But if the pipe fills up (the operating system will put some limit on this), the /bin/ps program will block trying to write to its output until somebody reads some of that data from the pipe. If nobody reads the output, the /bin/ps program will just block forever, and it will seem to be "stuck" and will never exit.

What to do? Read the data from the output of the PipeableOSProcess. The /bin/ps program will then be unblocked, and it will finish writing its output and then exit normally.

Dave
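[Editor's illustration] Dave's point about the pipe filling up can be sketched in a few lines of standalone C. This is not OSProcess or VM code, and the function name and 4096-byte chunk size are invented for the demo; the write end is made non-blocking so the moment a blocking writer like /bin/ps would go to sleep shows up as a failed write instead of a hang.

```c
#include <fcntl.h>
#include <unistd.h>

/* A pipe's kernel buffer is finite. With the write end in non-blocking
   mode, write() fails with EAGAIN once the buffer is full -- a blocking
   writer such as /bin/ps would sleep at exactly that point until a
   reader drains the pipe. */
long pipe_capacity(void)
{
    int fds[2];
    char chunk[4096] = {0};
    long total = 0;
    ssize_t n;

    if (pipe(fds) != 0)
        return -1;
    if (fcntl(fds[1], F_SETFL, O_NONBLOCK) != 0)
        return -1;

    while ((n = write(fds[1], chunk, sizeof chunk)) > 0)
        total += n;       /* keep stuffing bytes in              */

    close(fds[0]);        /* n < 0 here: the buffer is full      */
    close(fds[1]);
    return total;         /* the OS limit Dave mentions          */
}
```

On Linux the default pipe buffer is 64 KiB, so this commonly returns 65536; the exact limit is OS-specific.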
Hi. I use OSProcess on Windows (XP and 7), and I notice that it hangs when I open an image with OSProcess already loaded. I tried it in Seaside-3.0.7-OneClick and Pharo-1.4-14438-OneClick, and I get the same error.

Steps to reproduce:
1. Load OSProcess in a clean image. I use this snippet:

    Gofer new squeaksource: 'MetacelloRepository'; package: 'ConfigurationOfOSProcess'; load.
    ((Smalltalk at: #ConfigurationOfOSProcess) project version: #stable) load.

2. Save and quit.
3. Open the same image, with OSProcess already loaded.
4. Inspect this:

    OSProcess command: 'dir'.

You will notice that the process never completes, even though the operating system window where it was running has already closed. If you inspect the same code after step 1 (before save and quit), the process completes.

On another note, I notice that ConfigurationOfOSProcess>>baseline44 lists an OSProcess-Tests package, but that package isn't in the repository, and the test classes are loaded just by loading the OSProcess package. I don't want to load the tests in deployed images. Maybe I am doing something wrong.
Regards. On 18 April 2012 at 21:07, David T. Lewis <[hidden email]> wrote:
I forgot to mention: if I unload and reload the OSProcess package, it starts working well again. On 19 April 2012 at 09:27, Gastón Dall' Oglio <[hidden email]> wrote:
In reply to this post by David T. Lewis
David,

It doesn't seem like that's the problem, because it's intermittent. I can run the exact same command with the same output, and sometimes it recognizes that it's complete and sometimes not.

I don't know if it's related, but when I do:

    p := PipeableOSProcess command: '/usr/bin/expect /path/to/expect.exp'.

where expect.exp is a wrapper to supply login credentials to ssh, I get a similar problem...

    p next: 100. "This returns the welcome text from the server... ssh has successfully connected"
    p nextPutAll: 'ls', Character cr asString. "I send the ls command to the remote server"
    p atEnd. "Primitive failed <primitive: 'primitiveFileSetPosition' module: 'FilePlugin'> in AttachableFileStream(StandardFileStream)>>primSetPosition:to:" "weird, no?"
    p upToEnd. "This hangs, even though there is output in the pipe"
    p next: 5. "In fact, if I get the output a little at a time, I can see the result of the ls command"

What do you think? Thanks.
Sean
In reply to this post by David T. Lewis
Here are the OSProcess test results:
* Pharo 1.3
* Mac Lion 10.7.3
* recent Cog VM from the Pharo CI server
* latest version of OSProcess (the all-in-one package, not the broken-up ones)

192 run, 110 passes, 0 expected failures, 79 failures, 3 errors, 0 unexpected passes

Is that what you would expect?!

3 errors:
#('AioEventHandlerTestCase>>#testPrimAioModuleName' 'UnixProcessAccessorTestCase>>#testDupTo' 'UnixProcessTestCase>>#testEightLeafSqueakTree')

79 failures:
#('AioEventHandlerTestCase>>#testEnableHandleAndDisable' 'AioEventHandlerTestCase>>#testFileReadableEvent' 'AioEventHandlerTestCase>>#testFileWritableEvent' 'AioEventHandlerTestCase>>#testHandleForFile' 'AioEventHandlerTestCase>>#testHandleForSocket' 'AioEventHandlerTestCase>>#testPrimAioModuleVersionString' 'AioEventHandlerTestCase>>#testSocketReadableEvent' 'AioEventHandlerTestCase>>#testSuspendAioForSocketReadableEvent' 'UnixProcessAccessorTestCase>>#testRedirectStdOutTo' 'UnixProcessTestCase>>#testCatAFile' 'UnixProcessTestCase>>#testClassForkHeadlessSqueakAndDo' 'UnixProcessTestCase>>#testClassForkHeadlessSqueakAndDoThenQuit' 'UnixProcessTestCase>>#testClassForkSqueak' 'UnixProcessTestCase>>#testClassForkSqueakAndDo' 'UnixProcessTestCase>>#testClassForkSqueakAndDoThenQuit' 'UnixProcessTestCase>>#testForkHeadlessSqueakAndDo' 'UnixProcessTestCase>>#testForkHeadlessSqueakAndDoThenQuit' 'UnixProcessTestCase>>#testForkSqueak' 'UnixProcessTestCase>>#testForkSqueakAndDo' 'UnixProcessTestCase>>#testForkSqueakAndDoThenQuit' 'UnixProcessTestCase>>#testHeadlessChild' 'UnixProcessTestCase>>#testRunCommand' 'UnixProcessTestCase>>#testSpawnTenHeadlessChildren' 'UnixProcessUnixFileLockingTestCase>>#testCooperatingProcesses01' 'UnixProcessUnixFileLockingTestCase>>#testCooperatingProcesses02' 'UnixProcessUnixFileLockingTestCase>>#testCooperatingProcesses03' 'UnixProcessUnixFileLockingTestCase>>#testCooperatingProcesses04' 'UnixProcessUnixFileLockingTestCase>>#testCooperatingProcesses05' 
'UnixProcessUnixFileLockingTestCase>>#testFailFileLockOnLockedFile' 'UnixProcessUnixFileLockingTestCase>>#testFailLockOnLockedOverlappedRegion' 'UnixProcessUnixFileLockingTestCase>>#testFailLockOnLockedRegion' 'UnixProcessUnixFileLockingTestCase>>#testFailLockOnLockedSupersetRegion' 'UnixProcessUnixFileLockingTestCase>>#testFailRegionLockOnLockedFile' 'UnixProcessUnixFileLockingTestCase>>#testLockEntireFileForWrite01' 'UnixProcessUnixFileLockingTestCase>>#testLockEntireFileForWrite02' 'UnixProcessUnixFileLockingTestCase>>#testLockEntireFileForWrite03' 'UnixProcessUnixFileLockingTestCase>>#testLockEntireFileForWrite04' 'UnixProcessUnixFileLockingTestCase>>#testLockEntireFileForWrite05' 'UnixProcessUnixFileLockingTestCase>>#testLockEntireFileForWrite06' 'UnixProcessUnixFileLockingTestCase>>#testLockRegionForRead01' 'UnixProcessUnixFileLockingTestCase>>#testLockRegionForRead02' 'UnixProcessUnixFileLockingTestCase>>#testLockRegionForWrite01' 'UnixProcessUnixFileLockingTestCase>>#testLockRegionForWrite02' 'UnixProcessUnixFileLockingTestCase>>#testLockRegionForWrite03' 'UnixProcessUnixFileLockingTestCase>>#testLockRegionForWrite04' 'UnixProcessUnixFileLockingTestCase>>#testLockRegionForWrite05' 'UnixProcessUnixFileLockingTestCase>>#testLockRegionForWrite06' 'UnixProcessUnixFileLockingTestCase>>#testLockRegionForWrite07' 'UnixProcessUnixFileLockingTestCase>>#testLockRegionForWrite08' 'UnixProcessUnixFileLockingTestCase>>#testNoFailLockOnAdjacentLockedRegions' 'UnixProcessUnixFileLockingTestCase>>#testNoFailLockOnDifferentLockedRegion' 'UnixProcessWin32FileLockingTestCase>>#testCooperatingProcesses01' 'UnixProcessWin32FileLockingTestCase>>#testCooperatingProcesses02' 'UnixProcessWin32FileLockingTestCase>>#testCooperatingProcesses03' 'UnixProcessWin32FileLockingTestCase>>#testCooperatingProcesses04' 'UnixProcessWin32FileLockingTestCase>>#testCooperatingProcesses05' 'UnixProcessWin32FileLockingTestCase>>#testFailFileLockOnLockedFile' 
'UnixProcessWin32FileLockingTestCase>>#testFailLockOnLockedOverlappedRegion' 'UnixProcessWin32FileLockingTestCase>>#testFailLockOnLockedRegion' 'UnixProcessWin32FileLockingTestCase>>#testFailLockOnLockedSupersetRegion' 'UnixProcessWin32FileLockingTestCase>>#testFailRegionLockOnLockedFile' 'UnixProcessWin32FileLockingTestCase>>#testLockEntireFileForWrite01' 'UnixProcessWin32FileLockingTestCase>>#testLockEntireFileForWrite02' 'UnixProcessWin32FileLockingTestCase>>#testLockEntireFileForWrite03' 'UnixProcessWin32FileLockingTestCase>>#testLockEntireFileForWrite04' 'UnixProcessWin32FileLockingTestCase>>#testLockEntireFileForWrite05' 'UnixProcessWin32FileLockingTestCase>>#testLockEntireFileForWrite06' 'UnixProcessWin32FileLockingTestCase>>#testLockRegionForRead01' 'UnixProcessWin32FileLockingTestCase>>#testLockRegionForRead02' 'UnixProcessWin32FileLockingTestCase>>#testLockRegionForWrite01' 'UnixProcessWin32FileLockingTestCase>>#testLockRegionForWrite02' 'UnixProcessWin32FileLockingTestCase>>#testLockRegionForWrite03' 'UnixProcessWin32FileLockingTestCase>>#testLockRegionForWrite04' 'UnixProcessWin32FileLockingTestCase>>#testLockRegionForWrite05' 'UnixProcessWin32FileLockingTestCase>>#testLockRegionForWrite06' 'UnixProcessWin32FileLockingTestCase>>#testLockRegionForWrite07' 'UnixProcessWin32FileLockingTestCase>>#testLockRegionForWrite08' 'UnixProcessWin32FileLockingTestCase>>#testNoFailLockOnAdjacentLockedRegions' 'UnixProcessWin32FileLockingTestCase>>#testNoFailLockOnDifferentLockedRegion')
In reply to this post by Sean P. DeNigris
I debugged this with the Stack VM from Jenkins. It's hanging in sqFileBasicPluginPrims.c on line 360:

    bytesRead = fread(dst, 1, count, file);

This line seems never to return, although the docs suggest that it should just return an error or less data if the requested data is not available.

HTH,
Sean
On 25 April 2012 19:48, Sean P. DeNigris <[hidden email]> wrote:
> Sean P. DeNigris wrote:
>> p upToEnd. "This hangs, even though there is output in the pipe"
>
> I debugged this with the Stack VM from Jenkins. It's hanging in
> sqFileBasicPluginPrims.c on line 360:
>     bytesRead = fread(dst, 1, count, file);
>
> This line seems to never return, although the docs suggest that it should
> just return an error or less data if the requested data is not available.

fread() is just a thin wrapper around the read() call; it blocks the caller until the operation completes. You can set a file handle to non-blocking mode using the fcntl() call with the O_NONBLOCK flag, which I think is the first and foremost thing the VM should do to prevent blocking like that :)

> HTH,
> Sean
>
> --
> View this message in context: http://forum.world.st/OSProcess-is-process-done-tp4569088p4587479.html
> Sent from the Pharo Smalltalk mailing list archive at Nabble.com.

--
Best regards,
Igor Stasenko.
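[Editor's illustration] The fcntl()/O_NONBLOCK behaviour Igor describes can be sketched in standalone C. This is demo code with invented names, not VM source; it shows that after setting O_NONBLOCK, a read() on an empty pipe returns immediately with EAGAIN instead of blocking the caller.

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Put the read end of a pipe into non-blocking mode; a read() with no
   data then fails with EAGAIN rather than parking the caller in the
   kernel, which is the hang observed in the image. */
int nonblocking_read_demo(void)
{
    int fds[2];
    char buf[16];
    ssize_t n;

    if (pipe(fds) != 0)
        return -1;
    fcntl(fds[0], F_SETFL, fcntl(fds[0], F_GETFL, 0) | O_NONBLOCK);

    n = read(fds[0], buf, sizeof buf);   /* pipe is empty...          */
    if (n != -1 || errno != EAGAIN)      /* ...so EAGAIN, not a hang  */
        return -1;

    write(fds[1], "ok", 2);              /* now there is data         */
    n = read(fds[0], buf, sizeof buf);   /* returns it immediately    */

    close(fds[0]);
    close(fds[1]);
    return (int)n;                       /* 2 on success              */
}
```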
In reply to this post by Sean P. DeNigris
    p := PipeableOSProcess command: '/usr/bin/expect /path/to/expect_file.exp ', password, ' ', ipAddress.
    p setNonBlockingOutput. "This was added. Otherwise, #upToEnd hangs the image"
    p nextPutAll: 'ls', Character cr asString.
    p upToEnd.
    p nextPutAll: 'pwd', Character cr asString.
    p upToEnd.

The reason that I didn't think to do this before is that I thought the whole point of #upToEnd was that it was *inherently* non-blocking...
Yes, they all passed except the PipeableOSProcessTestCase ones...

The other problem ("p := PipeableOSProcess waitForCommand: 'ps -ax -o pid,command'" hangs forever) is not happening right now. It's returning as soon as the process is finished, so I'll have to get back to you if it pops up again...

Thanks for all the help! And is a fix required for #upToEnd based on the above?

Sean
Dave, thanks for being so patient and generous with the explanations! I'm still a little confused about blocking and output. I'm exploring some of your suggestions from the last message, but one thing is bothering me...

PipeableOSProcess>>#upToEnd eventually calls AttachableFileStream>>#upToEnd, which tries to perform a buffered read by "self nextInto: 1000" (which eventually calls primitiveFileRead, which calls sqFileReadIntoAt, which calls fread with a count arg of 1000). I'm pretty sure this sets up a race condition. In http://forum.world.st/Standard-input-in-Pharo-td2173080.html#a4607687 , I found out that fread blocks forever under the following conditions:
1. there is less data available than was asked for
2. no EOF is encountered

I think this is why my (and others') reports of hanging OSP images are inconsistent. If the process has generated at least 1000 characters of output before the call to #upToEnd, or if less output is asked for than is available (e.g. "p pipeFromOutput next: 10"), there is no problem. But as soon as one more character is asked for than is available, the VM hangs in fread forever.

I don't know what the solution is, but the current behavior doesn't seem workable. I'd be happy to work with you as much as I can to fix it... Thanks again.

Sean
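[Editor's illustration] The difference between the raw system call and the buffered fread() at the heart of this race can be shown with a short standalone C sketch (demo code with invented names, not the plugin source): read() returns whatever is in the pipe right now, while fread() keeps going until it has the full count or sees end-of-file.

```c
#include <unistd.h>

/* A raw read() on a pipe returns available data at once (a "short
   read"). fread(dst, 1, 1000, f), by contrast, loops over read() until
   it has 1000 bytes or hits EOF -- so with 5 bytes in the pipe and no
   EOF, fread() blocks, which matches the hang seen from #upToEnd. */
long short_read_demo(void)
{
    int fds[2];
    char buf[1000];
    long n;

    if (pipe(fds) != 0)
        return -1;
    write(fds[1], "hello", 5);          /* only 5 bytes available    */

    n = read(fds[0], buf, sizeof buf);  /* returns 5 immediately;    */
                                        /* fread would wait for 995  */
                                        /* more bytes that never come */
    close(fds[0]);
    close(fds[1]);
    return n;
}
```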
After further investigation, it seems to me that blocking #upToEnd is functionally the same as #upToEndOfFile, because its test to stop reading data is StandardFileStream>>atEnd, which calls feof(). Therefore, if there is no EOF, it will keep reading until the pipe is out of data, and then hang in fread on the following iteration.
On 5 May 2012 00:21, Sean P. DeNigris <[hidden email]> wrote:
> Sean P. DeNigris wrote:
>> PipeableOSProcess>>#upToEnd eventually calls
>> AttachableFileStream>>#upToEnd, which tries to perform a buffered read by
>> "self nextInto: 1000" (which eventually calls primitiveFileRead which
>> calls sqFileReadIntoAt which calls fread with count arg of 1000).
>
> After further investigation, it seems to me that blocking #upToEnd is
> functionally the same as #upToEndOfFile, because its test to stop reading
> data is StandardFileStream>>atEnd, which calls feof(). Therefore, if there
> is no EOF, it will keep reading until the pipe is out of data, and then hang
> in fread on the following iteration.

IMO, "stdin upToEnd" makes no sense. One should not expect that data arriving on stdin has any notion of "end", so one should never use this method on such a stream. stdin/stdout are unbounded (endless) streams, and using things like feof() and others of that sort should be discouraged, since it is the same as asking "infinity atEnd".

--
Best regards,
Igor Stasenko.
Yes! It is nonsensical!! Well, #upToEndOfFile makes no sense, but as I understood Dave's description, #upToEnd means "return all available data". However, eof is still getting checked, which I think is a bug. Right now, OSP delegates to the same stream primitives as all other files, which rely on eof checks. Maybe that is the problem...
In reply to this post by Igor Stasenko
On Fri, May 4, 2012 at 3:28 PM, Igor Stasenko <[hidden email]> wrote:
Um, no. See below.

> stdin/out are unbound (endless) streams, and use things like eof(), and
> other of such sort should be discouraged.. since it is same as asking
> infinity atEnd.

Um, no. One can redirect a file to stdin. One can type EOF to stdin. EOF definitely *does* make sense for stdin. stdin, stdout and stderr are merely well-defined stream names; they can be bound to arbitrary streams, infinite or otherwise. In unix shells, piping is built using dup() with fork & exec to arrange that some program reads and writes to specific pipe files in a full pipeline.
best,
Eliot
Ah, good points. Let me rephrase... in the most common cases, I think waiting for EOF on stdin is not what one wants/expects.
In reply to this post by Eliot Miranda-2
On 5 May 2012 00:56, Eliot Miranda <[hidden email]> wrote:
>> Um, no. One can redirect a file to stdin. One can type EOF to stdin. EOF
>> definitely *does* make sense for stdin.

What? Like putc(EOF)? But that is a convention between the two ends. If the other end does not recognize the EOF character as an "end of input" signal, it will keep waiting for more data. Since these streams are naturally binary, I would be really surprised if some character were reserved for special purposes.

>> stdin, stdout and stderr are merely well-defined stream names. They can
>> be bound to arbitrary streams, infinite or otherwise. In unix shells
>> piping is built using dup with fork & exec to arrange that some program
>> reads and writes to specific pipe files in a full pipe.

Right. But, as in the example you gave, imagine that I used dup() and one fork keeps writing to the stream while the other closes its own copy: does the receiving side receive any "EOF" signal? I doubt it.

Here is the excerpt from the feof man page:

    The function feof() tests the end-of-file indicator for the stream
    pointed to by stream, returning non-zero if it is set. The end-of-file
    indicator may be cleared by explicitly calling clearerr(), or as a
    side-effect of other operations, e.g. fseek().

which means that eof is actually nothing more than an error captured while attempting to read more from a stream, nicely converted to eof() by the higher-level functions (the f* C functions). But if you look at the basic infrastructure supported by the kernel (read(), write()), there is no notion of "eof" for descriptors. All you can have is an error while attempting to read from or write to a descriptor, and then you can decide how to handle that error, either treating it as end-of-file or signaling an exception, etc.

--
Best regards,
Igor Stasenko.
On Fri, May 4, 2012 at 5:24 PM, Igor Stasenko <[hidden email]> wrote:
> what? like putc(EOF)?

No. One types an EOF character to the shell (see stty), and the shell responds by closing the pipe to the process. The process then detects an eof condition once it has read all the data from the pipe. There are no EOF characters in a stream on unix.
> but that is a convention between two ends. if another end does not
> recognize the EOF character as an "end of input" signal, it will keep
> waiting for more data.

Um, that's not how it is used. One process (e.g. parent) holds the write end of the pipe and can close that end. The other process (e.g. child) holds the read end and can detect eof when it consumes all available input.
Right.
That's so for files, where eof is an attempt to read beyond end-of-file, but not so for pipes and socket streams: for those there *is* a notion of eof, which is when all data has been read and the write side of the pipe/socketstream is closed. See pipe(2) & socketpair(2).
best,
Eliot
On 5 May 2012 02:36, Eliot Miranda <[hidden email]> wrote:
>> No. One types an EOF character to the shell (see stty) and the shell
>> responds by closing the pipe to the process. The process then detects an
>> eof condition once it has read all the data from the pipe. There are no
>> EOF characters in a stream on unix.

That is the shell's convention for handling input from the user. If you look at the general case (between two unfamiliar processes), you cannot assume anything.

>> Um, that's not how it is used. One process (e.g. parent) holds the write
>> end of the pipe and it can close that end. The other process (e.g. child)
>> holds the read end and it can detect eof when it consumes all available
>> input.

But what if the writer never closes its own end? It is free to do so. That's why I say that using upToEnd on stdin is bad practice: I can always redirect an endless stream as input to stdin, and then your program will run in an endless loop.

>> So for files, where eof is an attempt to read beyond end-of-file, but not
>> so for pipes and socket streams there is a notion of eof, which is when
>> all data has been read and the write side of the pipe/socketstream is
>> closed. See pipe(2) & socketpair(2).

But it's again about handling an error status. It even says so in the man page:

    A pipe whose read or write end has been closed is considered widowed.
    Writing on such a pipe causes the writing process to receive a SIGPIPE
    signal. Widowing a pipe is the only way to deliver end-of-file to a
    reader: after the reader consumes any buffered data, reading a widowed
    pipe returns a zero count.

But as you can see, for the system there is still no notion of end-of-file. It is, again, a convention: you can assume that upon receiving SIGPIPE you have met the end of input, except in cases where such a condition is unexpected, like reading from /dev/random :) One application can treat SIGPIPE as end of input, while another can treat it as "there is something wrong with the process delivering data to me", and so it will attempt to reconnect.

--
Best regards,
Igor Stasenko.
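[Editor's illustration] The "widowed pipe" rule quoted from the man page above is easy to demonstrate in standalone C (demo code with invented names): once the write end is closed, the reader drains the buffered data and then read() returns 0, and that zero count is the only end-of-file signal a pipe reader ever gets.

```c
#include <unistd.h>

/* Widow a pipe by closing the write end: the first read() returns the
   buffered data, the next returns 0 (end-of-file). No EOF byte ever
   travels through the stream itself. */
int widowed_pipe_demo(void)
{
    int fds[2];
    char buf[16];
    long first, second;

    if (pipe(fds) != 0)
        return -1;
    write(fds[1], "done", 4);
    close(fds[1]);                           /* widow the pipe        */

    first  = read(fds[0], buf, sizeof buf);  /* 4: buffered data      */
    second = read(fds[0], buf, sizeof buf);  /* 0: end-of-file        */
    close(fds[0]);

    return (int)(first * 10 + second);       /* encodes both results  */
}
```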