Hi
I have a case where the #& in Socket >> #waitForSendDoneFor: shows up in
a tally (see attachment). I'm able to push about 1 Mbyte/s more when I
replace that with #and:. So my question is: is this really needed, or
should I file a bug and send a patch?

This is Pharo 1.1.1 BTW.

Cheers
Philippe

Attachment: tally.txt (18K)
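(For context: #& is an ordinary binary message, so Smalltalk evaluates both of
its operands before the send, whereas #and: evaluates its block argument only
when the receiver is true. A minimal illustration, with a made-up
#expensiveCheck selector:

	false & self expensiveCheck.		"expensiveCheck runs regardless"
	false and: [ self expensiveCheck ].	"the block is never evaluated"

In a polling loop such as #waitForSendDoneFor:, short-circuiting can skip a
primitive call on every iteration where the first test already decides the
result.)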
On Tue, 9 Nov 2010, Philippe Marschall wrote:
> Hi
>
> I have a case where the #& in Socket >> #waitForSendDoneFor: shows up in
> a tally (see attachment). I'm able to push about 1 Mbyte/s more when I
> replace that with #and:. So my question is this really needed or should
> I file a bug and send a patch?

I "fixed" this once for Squeak, but I didn't push the change. Here's the
boolean expression:

	(sendDone := self primSocketSendDone: socketHandle) not and: [
		self isConnected and: [
			"Connection end and final data can happen fast, so test in this order"
			Time millisecondClockValue < deadline ] ]

This refactoring saves one #isConnected send, which may not be significant.

The appearance of #& in MessageTally may just be a false measurement. IIRC
with the current VMs the time spent in primitives is added to the next real
message send (for performance reasons), which means that the
#primSocketConnectionStatus: from #isConnected may be showing up there. Let
me know if this change helps.


Levente
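(That expression is the loop condition of the send-wait loop. The following
sketch shows how it might sit in the full method; everything around Levente's
expression, including the deadline arithmetic and the #writeSemaphore
selector, is an approximation rather than the verbatim Squeak/Pharo source:

	waitForSendDoneFor: timeout
		"Sketch: wait until the current send completes or the deadline
		passes. Answer whether the send completed."
		| deadline sendDone |
		deadline := Time millisecondClockValue + (timeout * 1000) truncated.
		[ (sendDone := self primSocketSendDone: socketHandle) not and: [
			self isConnected and: [
				"Connection end and final data can happen fast, so test in this order"
				Time millisecondClockValue < deadline ] ] ]
			whileTrue: [ self writeSemaphore waitTimeoutMSecs:
				deadline - Time millisecondClockValue ].
		^ sendDone

With #& in the condition, #isConnected and its #primSocketConnectionStatus:
primitive would run on every iteration even after the send has already
completed.)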
In reply to this post by Philippe Marschall
What does your patch do? At a minimum, it deserves a little attention. Things that come to mind are that one version does less work due to some type of optimization (and runs faster as a result) or that one is too quick to detect a loss of connection and sends less data per opportunity, appearing to run slower as a result.
Can you elaborate on "I'm able to push about 1 Mbyte/s more"? I guess I'm
asking how that manifests itself? Are there a bunch of connections that form,
send and fail? Do they each get a little farther, or do they go faster?

Also, my standard objection to timeouts enters into this. IMHO, the socket
should do what it is asked to do, blocking only the calling thread, and let
other threads and/or the user decide when that is taking too long. Could you
be getting timeouts that are causing unexpected behavior?
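(A sketch of the timeout-free style Bill describes, where the send blocks
indefinitely and a separate process imposes the caller's own limit; aSocket
and aBuffer are placeholders, and error handling is omitted:

	| watchdog |
	watchdog := [ (Delay forSeconds: 30) wait.
		"The caller-chosen limit: give up by tearing down the connection."
		aSocket closeAndDestroy ] fork.
	[ aSocket sendData: aBuffer ] ensure: [ watchdog terminate ].

The socket itself never decides that the operation took too long; that policy
lives entirely with the caller.)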
In reply to this post by Levente Uzonyi
On Tue, 9 Nov 2010, Levente Uzonyi wrote:
> I "fixed" this once for Squeak, but I didn't push the change. Here's the
> boolean expression:
>
> 	(sendDone := self primSocketSendDone: socketHandle) not and: [
> 		self isConnected and: [
> 			"Connection end and final data can happen fast, so test in this order"
> 			Time millisecondClockValue < deadline ] ]

I just realized that you already replaced the #& with #and:, so never
mind.


Levente
In reply to this post by Schwab,Wilhelm K
On 09.11.2010 07:58, Schwab,Wilhelm K wrote:
> What does your patch do?

It replaces the #& with an #and: and swaps receiver and argument to
preserve the same semantics. That saves a primitive call if the data is
already sent. It's basically the same as posted by Levente.

> At a minimum, it deserves a little attention. Things that come to mind
> are that one version does less work due to some type of optimization
> (and runs faster as a result) or that one is too quick to detect a loss
> of connection and sends less data per opportunity, appearing to run
> slower as a result.
>
> Can you elaborate on "I'm able to push about 1 Mbyte/s more"? I guess
> I'm asking how that manifests itself? Are there a bunch of connections
> that form, send and fail? Do they each get a little farther or do they
> go faster?

Throughput outgoing from the Pharo image was about 1 Mbyte/s higher. Now
I can't reproduce it anymore, so it was probably a measuring inaccuracy.
That was making 100000 HTTP requests with Apache Bench, keep-alive and a
concurrency level of 10. Apache (the server, not the bench) opens 10
persistent worker connections to the Pharo image, each having its own
process and socket, so the load should be pretty well spread across
them. Then it's Seaside and writing a constant 16 Kbyte byte array to
the response without encoding or rendering. It's exactly the same
incoming and outgoing traffic, it just finished quicker.

> Also, my standard objection to timeouts enters into this.

It's on localhost, the longest request takes about 20 ms to finish, and
#sendSomeData:startIndex:count: has a timeout of 20 sec. Sending the
entire response is four sends to #sendData:count:, which is at least
four #waitForSendDoneFor: sends.

> IMHO, the socket should do what it is asked to do, blocking only the
> calling thread, and let other threads and/or the user decide when that
> is taking too long.

What does this have to do with replacing a #& with an #and:?

> Could you be getting timeouts that are causing unexpected behavior?

I don't think so, see above. Since I can't reproduce it anymore, it was
probably a measuring inaccuracy (~3 %). However, I think it's still
worthwhile replacing that #&.

Cheers
Philippe
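(To make the receiver/argument swap concrete: #& always evaluates both
operands, so when converting to #and: the cheap test moves into receiver
position and the expensive one into the lazily evaluated block. A schematic
example with made-up selectors, where #cheapTest stands for the send-done
primitive check:

	self expensiveTest & self cheapTest not.		"both tests always run"
	self cheapTest not and: [ self expensiveTest ].	"expensiveTest runs only when still needed"

Because both tests are side-effect free, the two forms answer the same
Boolean.)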
Philippe,
The *possible* connection is that pushing limits has a way of making
otherwise benign problems show themselves.

Bill

> > IMHO, the socket should do what it is asked to do, blocking only the
> > calling thread, and let other threads and/or the user decide when
> > that is taking too long.
>
> What does this have to do with replacing a #& with an #and:?
In reply to this post by Philippe Marschall
On Tue, 9 Nov 2010, Philippe Marschall wrote:
> On 09.11.2010 07:58, Schwab,Wilhelm K wrote:
>> Can you elaborate on "I'm able to push about 1 Mbyte/s more"? I guess
>> I'm asking how that manifests itself? Are there a bunch of connections
>> that form, send and fail? Do they each get a little farther or do they
>> go faster?
>
> Throughput outgoing from the Pharo image was about 1 Mbyte/s higher. Now

1 MB/s throughput sounds pretty low. On Windows I could transfer 160 MB/s
using two processes in the same image by directly using the Socket
primitives in 4k chunks. With high-level methods (#sendData:,
#receiveData:) it went down to 110 MB/s. With a two-image setup
(client-server) I got 66 MB/s (one CPU/image).

Though our machines are different (yours is probably faster), still
machine/OS differences + concurrent access + ab + apache + AJP + Seaside
caused a 66x slowdown. That's more than acceptable IMHO.


Levente
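(A rough loopback measurement in the spirit of what Levente describes can be
set up like this; the port and transfer size are arbitrary, and error
handling is omitted:

	| server client conn buffer total received time |
	buffer := ByteArray new: 4096.
	total := 64 * 1024 * 1024.	"64 MB in 4k chunks"
	server := Socket newTCP.
	server listenOn: 54321 backlogSize: 4.
	client := Socket newTCP.
	client connectTo: NetNameResolver localHostAddress port: 54321.
	conn := server waitForAcceptFor: 10.
	[ total // buffer size timesRepeat: [ conn sendData: buffer ].
	  conn closeAndDestroy ] fork.
	received := 0.
	time := Time millisecondsToRun: [
		[ received < total ] whileTrue: [
			received := received + (client receiveDataInto: buffer) ] ].
	Transcript show: (total // 1024 // 1024 * 1000 / time) asFloat printString , ' MB/s'; cr.
	client closeAndDestroy.
	server closeAndDestroy.

The gap between a raw loopback number like this and a full HTTP stack is
where the Apache, AJP and Seaside overhead shows up.)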
On Thu, 11 Nov 2010, Levente Uzonyi wrote:
>> Throughput outgoing from the Pharo image was about 1 Mbyte/s higher. Now
>
> 1 MB/s throughput sounds pretty low. On Windows I could transfer 160 MB/s
> using two processes in the same image by directly using the Socket
> primitives in 4k chunks. With high-level methods (#sendData:,
> #receiveData:) it went down to 110 MB/s. With a two-image setup
> (client-server) I got 66 MB/s (one CPU/image).

Looks like I missed the word "higher" in your mail. Anyway, I'm
interested in your absolute numbers too.


Levente
On 11.11.2010 00:39, Levente Uzonyi wrote:
> Looks like I missed the word "higher" in your mail. Anyway, I'm
> interested in your absolute numbers too.

The absolute numbers are attached. Don't expect to see this performance
in actual production code. It's a simple Seaside request handler that
just writes a 16 Kbyte byte array to the response. In fact it's the
source of www.seaside.st. No rendering, no encoding, no sessions, no
backtracking, no continuations. You can find the code in AJP-Tests-Core
in the ajp project on SqueakSource.

This is done on an old Intel(R) Core(TM)2 CPU 6600 @ 2.40GHz, 64bit
Linux 2.6.35, Cog r2316 and Pharo 1.1.1.

Cheers
Philippe

Attachment: apache-bench.txt (2K)
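(For reference, the benchmark described earlier, 100000 requests with
keep-alive at concurrency 10, corresponds to an Apache Bench invocation along
these lines; the URL is a placeholder:

	ab -n 100000 -c 10 -k http://localhost/

The -k flag enables HTTP keep-alive, which is what drives the persistent
worker connections from Apache to the image.)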
In reply to this post by Philippe Marschall
so philippe
let us know what we should integrate.
On 11.11.2010 10:22, Stéphane Ducasse wrote:
> so philippe
>
> let us know what we should integrate.

I created issue 3277 and committed a fix to the inbox. I'd welcome a
review.

Cheers
Philippe
In reply to this post by Stéphane Ducasse
tx!
> I created issue 3277 and committed a fix to the inbox. I'd welcome a
> review.