tim Rowledge uploaded a new version of Monticello to project The Trunk:
http://source.squeak.org/trunk/Monticello-tpr.693.mcz

==================== Summary ====================

Name: Monticello-tpr.693
Author: tpr
Time: 18 January 2019, 4:22:04.378573 pm
UUID: bad22387-6176-41c3-9059-5a81199a26c7
Ancestors: Monticello-tpr.692

Allow up to 3 attempts to access the repository, to allow for network slowness etc. Pass the exception up if we still don't get what we want.

=============== Diff against Monticello-tpr.692 ===============

Item was changed:
  ----- Method: MCHttpRepository>>readStreamForFileNamed:do: (in category 'private') -----
  readStreamForFileNamed: aString do: aBlock

+ 	| contents attempts |
+ 	attempts := 0.
+ 	self displayProgress: 'Downloading ', aString during: [
+ 		[attempts := attempts + 1.
+ 		contents := self httpGet: (self urlForFileNamed: aString) arguments: nil] on: NetworkError do: [:ex |
+ 			attempts >= 3 ifTrue: [ex pass].
+ 			ex retry]].
- 	| contents |
- 	contents := self displayProgress: 'Downloading ', aString during: [
- 		self httpGet: (self urlForFileNamed: aString) arguments: nil].
  	^contents ifNotNil: [aBlock value: contents]!
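[A hedged sketch of the variant this thread's concerns point toward: pausing between attempts so a retry does not immediately fire a duplicate request at a slow server. This is illustrative only, reusing the method and names from the diff above; the backoff via Delay forSeconds: and the chosen delay lengths are my assumptions, not part of the committed code.]

  readStreamForFileNamed: aString do: aBlock
  	"Sketch: as tpr.693, but back off briefly before each retry so the
  	 server gets breathing room instead of immediate duplicate requests."
  	| contents attempts |
  	attempts := 0.
  	self displayProgress: 'Downloading ', aString during: [
  		[attempts := attempts + 1.
  		contents := self httpGet: (self urlForFileNamed: aString) arguments: nil] on: NetworkError do: [:ex |
  			attempts >= 3 ifTrue: [ex pass].
  			(Delay forSeconds: 2 * attempts) wait.	"assumed backoff: 2s, then 4s"
  			ex retry]].
  	^contents ifNotNil: [aBlock value: contents]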
The problem is that you have your timeout set too low, not that the server is not receiving the request. Retrying doesn't make the initial request come back any sooner; you're simply hammering the server with duplicate requests. This code has today's date on it, did you test it?

On Fri, Jan 18, 2019 at 6:22 PM <[hidden email]> wrote:
>
> tim Rowledge uploaded a new version of Monticello to project The Trunk:
> http://source.squeak.org/trunk/Monticello-tpr.693.mcz
>
> ==================== Summary ====================
>
> Name: Monticello-tpr.693
> Author: tpr
> Time: 18 January 2019, 4:22:04.378573 pm
> UUID: bad22387-6176-41c3-9059-5a81199a26c7
> Ancestors: Monticello-tpr.692
>
> Allow up to 3 attempts to access the repository, to allow for network slowness etc. Pass the exception up if we still don't get what we want.
>
> =============== Diff against Monticello-tpr.692 ===============
>
> Item was changed:
>   ----- Method: MCHttpRepository>>readStreamForFileNamed:do: (in category 'private') -----
>   readStreamForFileNamed: aString do: aBlock
>
> + 	| contents attempts |
> + 	attempts := 0.
> + 	self displayProgress: 'Downloading ', aString during: [
> + 		[attempts := attempts + 1.
> + 		contents := self httpGet: (self urlForFileNamed: aString) arguments: nil] on: NetworkError do: [:ex |
> + 			attempts >= 3 ifTrue: [ex pass].
> + 			ex retry]].
> - 	| contents |
> - 	contents := self displayProgress: 'Downloading ', aString during: [
> - 		self httpGet: (self urlForFileNamed: aString) arguments: nil].
>   	^contents ifNotNil: [aBlock value: contents]!
> On 2019-01-19, at 10:27 AM, Chris Muller <[hidden email]> wrote:
>
> The problem is you have your timeout set too low,

Well, strictly speaking, the #httpGet... method has that problem, not any code I touched. A longer timeout does sound like a better solution. It looks like WebClient is using the Socket standardTimeout by default, and that is 45. I guessed that would be milliseconds, though there is no comment saying so in Socket class>>#standardTimeout; but Socket class>>#standardDeadline makes it look more like 45 seconds. And SocketStream>>#timeout: refers to it as seconds, so that would mean a really long-seeming timeout. I'm pretty sure that in my initial problem email there was no delay anywhere near that long before the first notifier opened.

> not the server
> not receiving the request. Retrying doesn't make the initial request
> come back any sooner, you're simply hammering the server with
> duplicate requests.

Good point. It would be nice to cleanly handle all the possible errors with properly error-specific responses. One would need to know more about network handling than I do to do it comprehensively. And this is why I was asking for input from people with more familiarity with network stuff.

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
The halfway point between right and wrong is still damn wrong. Compromise isn't always a solution
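[Since the timeout units matter to this exchange, a small workspace sketch. SocketStream>>#timeout: treats its argument as seconds, so one could lengthen the timeout explicitly instead of retrying. The host, port, and 120-second value below are hypothetical illustrations for this thread, not code anyone posted.]

  "Workspace doodle: check the default, then open a stream with a longer,
   explicit timeout. Socket standardTimeout answers 45, which
   SocketStream>>#timeout: interprets as seconds."
  Socket standardTimeout.

  stream := SocketStream openConnectionToHostNamed: 'source.squeak.org' port: 80.
  stream timeout: 120.	"hypothetical: allow two minutes for a slow repository"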
> On 2019-01-19, at 11:04 AM, tim Rowledge <[hidden email]> wrote:
>
>> On 2019-01-19, at 10:27 AM, Chris Muller <[hidden email]> wrote:
>>
>> The problem is you have your timeout set too low,
>
> Well, strictly speaking, the #httpGet... method has that problem, not any code I touched. A longer timeout does sound like a better solution.

I think, maybe, perhaps, possibly, that since the error I saw was a 504, the too-short timeout might have been in the context of the gateway, rather than "our" code? Is that plausible?

tim
--
tim Rowledge; [hidden email]; http://www.rowledge.org/tim
If you think C++ is not overly complicated, just what is a protected abstract virtual base pure virtual private destructor and when was the last time you needed one? -- Tom Cargin