I'm trying to do a wget style download with Zinc but running into a few
problems when a server uses chunked transfer encoding. Eventually it ends up in ZnUtils streamFrom:to:size: with an empty totalSize (as Content-Length is unset) and then produces an empty file. However, the download works fine if I simply do an #upToEnd on the ZnChunkedReadStream.

This can be seen by executing:

    ZnClient new url: 'http://www.google.com'; downloadTo: '/tmp/foo.txt'.

Thanks
Chris
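In the meantime, a stop-gap along these lines works for me. It is only a sketch: it buffers the whole response body in memory via #contents instead of streaming it, and it assumes the plain ZnClient API plus FileReference>>#writeStreamDo::

```smalltalk
"Workaround sketch: bypass the broken streaming copy by reading the
 entity contents in one go and writing them to the file ourselves.
 Unlike #downloadTo:, this holds the entire body in memory."
| client |
client := ZnClient new.
client
    url: 'http://www.google.com';
    get.
'/tmp/foo.txt' asFileReference
    writeStreamDo: [ :out | out nextPutAll: client contents ]
```

Obviously this is no good for very large downloads, which is why the streaming path needs fixing.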
Chris,
On 19 Jul 2012, at 20:21, Chris wrote:
> [... problem report snipped ...]

Yes, this seems to be a problem: your analysis is correct. Since the size is unknown upfront - that is the whole idea of chunked transfer - #streamFrom:to:size: in its current version cannot work. I will look at this tomorrow; I am pretty sure this can quite easily be fixed (minus the progress bar).

Thanks for reporting this.

Regards,

Sven

--
Sven Van Caekenberghe
http://stfx.eu
Smalltalk is the Red Pill
In reply to this post by Sven Van Caekenberghe
On 19 Jul 2012, at 21:09, Sven Van Caekenberghe wrote:
> Yes, this seems to be a problem: [...] I am pretty sure this can quite easily be fixed (minus the progress bar).

I made some changes and commits. You can load the latest version of Zn with the following load script:

    Gofer it
        squeaksource: 'ZincHTTPComponents';
        package: 'Zinc-HTTP';
        package: 'Zinc-FileSystem';
        package: 'Zinc-Tests';
        load

If you are not on Pharo 2.0, use the following:

    Gofer it
        squeaksource: 'ZincHTTPComponents';
        package: 'Zinc-HTTP';
        package: 'Zinc-FileSystem-Legacy';
        package: 'Zinc-Tests';
        load

Now, your example should work fine:

    '/tmp/foo.txt' asFileReference ensureDeleted.
    ZnClient new url: 'http://www.google.com'; downloadTo: '/tmp/foo.txt'.

Sven

--
Sven Van Caekenberghe
http://stfx.eu
Smalltalk is the Red Pill
On 20/07/2012 12:45, Sven Van Caekenberghe wrote:
> I made some changes and commits. You can load the latest version of Zn with the following load script [...]

Thanks for that. I'm actually still having a bit of trouble when the file is bigger than the chunk size. ZnChunkedReadStream>>#readInto:startingAt:count: uses a limit variable which is bigger than the collection and requestedCount.

Regards,
Chris
On 20 Jul 2012, at 18:22, Chris wrote:
> Thanks for that. I'm actually still having a bit of trouble when the file is bigger than the chunk size. ZnChunkedReadStream>>#readInto:startingAt:count: uses a limit variable which is bigger than the collection and requestedCount.

I won't be able to do anything until the beginning of August.

It would be really helpful if you could provide me with an actual test case that fails.

Thanks,

Sven
On 20/07/2012 18:30, Sven Van Caekenberghe wrote:
> I won't be able to do anything until the beginning of August.
>
> It would be really helpful if you could provide me with an actual test case that fails.

Okay, thanks Sven, I'll see what I can do. I am just working with a third-party server which is sending chunks of 1 MB (which I assume is valid) and I think Zinc just needs to support chunks bigger than the 16K in the above method.

Regards,
Chris
Hi Chris,
On 21 Jul 2012, at 17:52, Chris <[hidden email]> wrote:
> I am just working with a third-party server which is sending chunks of 1 MB (which I assume is valid) and I think Zinc just needs to support chunks bigger than the 16K in the above method

The 16K buffer used in ZnUtils>>#streamFrom:to:[size:] is independent from the chunk buffer used in ZnChunkedReadStream, BUT you did find a serious problem. Actually, the code of ZnChunkedReadStream>>#readInto:startingAt:count: was embarrassingly bad and wrong, although it did work in all cases thrown at it up until now.

I added an extra test for the case where the buffer being used is smaller than the chunk size:

    ZnChunkedReadStreamTests>>#testReadingBuffered
        | data chunked plain buffer readStream |
        data := String withAll: ($a to: $z), ($A to: $Z).
        chunked := String streamContents: [ :stream |
            ZnUtils nextPutAll: data on: stream chunked: 16 ].
        readStream := ZnChunkedReadStream on: chunked readStream.
        buffer := String new: 11.
        plain := String streamContents: [ :output | | readCount |
            [ readStream atEnd ] whileFalse: [
                readCount := readStream readInto: buffer startingAt: 1 count: buffer size.
                output next: readCount putAll: buffer ] ].
        self assert: plain equals: data

As usual, load the latest code to get the fix. I hope it now works for your particular case.

Thanks again for reporting this, being persistent about it and for pointing me in the right direction!

Regards,

Sven

Here are the actual commits:

===
Name: Zinc-HTTP-SvenVanCaekenberghe.291
Author: SvenVanCaekenberghe
Time: 2 August 2012, 11:26:02 am
UUID: 3d8c50cd-2d7b-459f-89f3-b77a23dccfdd
Ancestors: Zinc-HTTP-SvenVanCaekenberghe.290

various fixes to ZnChunkedReadStream>>#readInto:startingAt:count: (thx Chris Bailey for reporting the problem);
added ZnLimitedReadStream>>#nextInto: as it is used by Fuel
===
Name: Zinc-Tests-SvenVanCaekenberghe.151
Author: SvenVanCaekenberghe
Time: 2 August 2012, 11:27:58 am
UUID: 3da15e83-c0ca-4066-a496-71d91393db01
Ancestors: Zinc-Tests-SvenVanCaekenberghe.150

added new ZnChunkedReadStreamTests>>#testReadingBuffered to validate various fixes to ZnChunkedReadStream>>#readInto:startingAt:count: (thx Chris Bailey for reporting the problem)
===

--
Sven Van Caekenberghe
http://stfx.eu
Smalltalk is the Red Pill
Sven, should we do an update in Zinc for 2.0?

Stef

On Aug 2, 2012, at 11:39 AM, Sven Van Caekenberghe wrote:
> [... fix announcement and commit messages snipped ...]
If Chris confirms it fixes his problem, we probably should.
On 02 Aug 2012, at 19:43, Stéphane Ducasse <[hidden email]> wrote:
> sven should we do an update in zinc for 2.0?
>
> Stef
In reply to this post by Sven Van Caekenberghe
Almost!
I had to change ZnUtils class>>#streamFrom:to: to say

    readCount := inputStream readInto: buffer startingAt: 1 count: buffer size

rather than startingAt: 0. Otherwise, ZnChunkedReadStream>>#readInto:startingAt:count: had an offset and read of 0 first time in and failed on the ByteArray replace. Seems a bit fundamental, but it worked fine after I changed it.

> If Chris confirms it fixes his problem, we probably should.
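For context, the copy loop in question presumably looks something like the following. This is a reconstruction for illustration, not the actual ZnUtils source (inputStream and outputStream stand in for the method's arguments); the point is that Smalltalk collections are 1-indexed, so a startingAt: 0 makes the very first #readInto:startingAt:count: a zero-length read at an invalid offset:

```smalltalk
"Reconstructed sketch of the streaming copy loop with the 1-based start
 index. The 16384-element buffer matches the 16K buffer mentioned
 earlier in the thread; inputStream/outputStream are placeholders."
| buffer readCount |
buffer := ByteArray new: 16384.
[ inputStream atEnd ] whileFalse: [
    readCount := inputStream
        readInto: buffer
        startingAt: 1    "must be 1, not 0: Smalltalk indices are 1-based"
        count: buffer size.
    readCount > 0
        ifTrue: [ outputStream next: readCount putAll: buffer startingAt: 1 ] ]
```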
Hi Chris,
Thanks again for testing and for suggesting the fix.

ZnUtils>>#streamFrom:to: without the size is a new method that was not yet covered by a unit test; I added one now.

Stef,

I will update the issue: Zn can be updated now in Pharo 2.0 (and maybe in 1.4 as well).

Sven

On 03 Aug 2012, at 18:19, Chris <[hidden email]> wrote:
> Almost!
>
> I had to change ZnUtils class>>#streamFrom:to: to say
>
>     readCount := inputStream readInto: buffer startingAt: 1 count: buffer size
>
> rather than startingAt: 0.
Excellent!

I'm slowly getting back to life and I will start to integrate more, so that Esteban can do something more exciting.

Stef

On Aug 3, 2012, at 10:55 PM, Sven Van Caekenberghe wrote:
> Thanks again for testing and for suggesting the fix.
>
> [... rest snipped ...]