Sven,

thanks for the tip. Actually the UTF problem won't appear with Zinc, but I also need to do large file uploads. Zinc resets the connection somewhere between 10-19 MB file size. Is there a built-in limitation?

Anyway, an idea on how to break through this limit may also help. I need to upload 150-300 MB files regularly with this service. KomHttp did it out-of-the-box.

thanks
Robert

2016-01-18 12:45 GMT+01:00 Sven Van Caekenberghe <[hidden email]>:
> Robert,
>
> If you are using Pharo, it seems more logical that you would use ZnZincServerAdaptor.
>
> But of course, it might not necessarily have to do with the adaptor.
>
> HTH,
>
> Sven
> On 18 Jan 2016, at 13:17, Robert Kuszinger <[hidden email]> wrote:
>
> thanks for the tip. Actually the UTF problem won't appear with Zinc, but I also need to do large file uploads. Zinc resets the connection somewhere between 10-19 MB file size. Is there a built-in limitation?

Yes, one of several used by Zn to protect itself from resource abuse (DOS). The default limit is 16Mb.

> Anyway, an idea on how to break through this limit may also help. I need to upload 150-300 MB files regularly with this service. KomHttp did it out-of-the-box.

The limit you are running into is called #maximumEntitySize and can be set with ZnServer>>#maximumEntitySize: - the default limit is 16Mb.

Given an adaptor instance, you can access the server using #server.

So together this would be

    ZnZincServerAdaptor default server maximumEntitySize: 300*1024*1024.

Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory.

Let me know how that goes.

Sven
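Putting Sven's pieces together as a single snippet to evaluate in a playground (a sketch: it assumes the default Zinc adaptor is the one actually serving your Seaside app, and the port number is illustrative):

    "Start Seaside's Zinc adaptor and raise the entity-size limit to 300 MB,
     the upper end of the file sizes mentioned above."
    ZnZincServerAdaptor startOn: 8080.
    ZnZincServerAdaptor default server
        maximumEntitySize: 300 * 1024 * 1024.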
Sven,

thanks for the comments. I understand all.

Could you please clarify this:

"Technically, it would be possible to write a Zn handler that can accept a large upload in a streaming fashion (and save it to a file for example), but I don't think that will happen with Seaside - so you will pull that all in memory."

Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside setup or design limitation?

Answering on how it goes:

20 - 40 - 10MB uploads in seconds. Now it seems to be stuck on a ~ 120 MB upload. Pharo memory (windows OS) seemed to grow to ~ 441 MB. A "Space is low" warning window appeared. I've clicked on "Proceed" just out of curiosity but there is no reaction in the Pharo gui... hmmm...

thanks
Robert

2016-01-18 13:27 GMT+01:00 Sven Van Caekenberghe <[hidden email]>:
Robert,
This is not such an easy problem, you have to really understand HTTP.

BTW, such huge uploads don't seem a very good idea anyway, you will get annoying timeouts as well. I am curious, what is in those files ?

Now, here is the key idea (pure Zn, no Seaside, quick hack):

    (ZnServer startOn: 1701)
        reader: [ :stream | ZnRequest readStreamingFrom: stream ];
        maximumEntitySize: 100*1024*1024;
        onRequestRespond: [ :req |
            '/tmp/upload.bin' asFileReference writeStreamDo: [ :out |
                out binary.
                ZnUtils streamFrom: req entity stream to: out ].
            ZnResponse ok: (ZnEntity text: 'done') ];
        yourself.

You would use it like this:

    $ echo one two three > data.bin
    $ curl -X POST -d @data.bin http://localhost:1701
    $ cat /tmp/upload.bin
    one two three

With a 1Mb data file generated from Pharo:

    '/tmp/data.txt' asFileReference writeStreamDo: [ :out |
        1 * 1024 timesRepeat: [
            1 to: 32 do: [ :each |
                out << Character alphabet << (each printStringPadded: 5); lf ] ] ]

    $ curl -v -X POST --data-binary @data2.bin http://localhost:1701
    * Rebuilt URL to: http://localhost:1701/
    *   Trying ::1...
    * connect to ::1 port 1701 failed: Connection refused
    *   Trying 127.0.0.1...
    * Connected to localhost (127.0.0.1) port 1701 (#0)
    > POST / HTTP/1.1
    > Host: localhost:1701
    > User-Agent: curl/7.43.0
    > Accept: */*
    > Content-Length: 1048576
    > Content-Type: application/x-www-form-urlencoded
    > Expect: 100-continue
    >
    * Done waiting for 100-continue
    * We are completely uploaded and fine
    < HTTP/1.1 200 OK
    < Content-Type: text/plain;charset=utf-8
    < Content-Length: 4
    < Date: Mon, 18 Jan 2016 14:56:53 GMT
    < Server: Zinc HTTP Components 1.0
    <
    * Connection #0 to host localhost left intact
    done

    $ diff data2.bin /tmp/upload.bin

This code is totally incomplete, you need lots of error handling. Furthermore, working with streaming requests is dangerous, because you are responsible for reading the bodies correctly.

Also, if you want an upload in a form, you will have to parse that form (see ZnApplicationFormUrlEncodedEntity and ZnMultiPartFormDataEntity), you will then again take it in memory. These things are normally done for you by Zn and/or Seaside.

I also tried with 100Mb, it worked, but it took several minutes, like 10 to 15. The #streamFrom:to: above uses a 16Kb buffer, which is probably too small for this use case. Maybe curl doesn't upload very aggressively. Performance is another issue.

That is why I asked what is in the files, what you eventually want to do with it. Is the next processing step in Pharo too ?

Maybe all that is needed is giving Pharo more memory. What platform are you on ?

Sven

> On 18 Jan 2016, at 13:39, Robert Kuszinger <[hidden email]> wrote:
>
> Is there a chance to create a streaming solution? Is it documented somewhere? Why do you think it won't happen with Seaside? Is there a Seaside setup or design limitation?
>
> Answering on how it goes:
>
> 20 - 40 - 10MB uploads in seconds. Now it seems to be stuck on a ~ 120 MB upload. Pharo memory (windows OS) seemed to grow to ~ 441 MB. A "Space is low" warning window appeared. I've clicked on "Proceed" just out of curiosity but there is no reaction in the Pharo gui... hmmm...
>
> thanks
> Robert
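Following up on Sven's remark that the 16Kb ZnUtils copy buffer is probably too small for files this size: below is a variant of his handler with a hand-rolled copy loop and a larger buffer. An untested sketch only - it assumes the streaming entity's stream answers the number of bytes read from #readInto:startingAt:count:, and the 256 KB buffer size is an arbitrary illustrative choice.

    (ZnServer startOn: 1701)
        reader: [ :stream | ZnRequest readStreamingFrom: stream ];
        maximumEntitySize: 300*1024*1024;
        onRequestRespond: [ :req |
            '/tmp/upload.bin' asFileReference writeStreamDo: [ :out |
                | buffer bytesRead |
                out binary.
                buffer := ByteArray new: 256 * 1024.  "16x the ZnUtils default"
                "copy the streaming request body to disk, one buffer at a time"
                [ (bytesRead := req entity stream
                        readInto: buffer startingAt: 1 count: buffer size) > 0 ]
                    whileTrue: [ out next: bytesRead putAll: buffer startingAt: 1 ] ].
            ZnResponse ok: (ZnEntity text: 'done') ];
        yourself.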
Sven,

thanks for the demo. Zn without Seaside is just fine if it could work. A one-field form with only the uploaded file could work also. Some javascript addition on the client side is acceptable - I'll see then. I understand that a simple file upload is also "composite" data with filename and binary content...

Usage: An office needs to receive large map / digital survey data files from clients. Now they post them on CD or DVD disks; the typical amount is 100-200 MB in one, two or more files (depending on who has heard about ZIP and who hasn't :) - really!). So we are trying to create an upload portal where they can log in and then upload files to folders whose names contain their ID and date. That's it.

No, SSH/SFTP or FTP with OS auth is not acceptable. They want pure browser upload as clients know it from their everyday life. And they could also add metadata about their uploads.

Login and auth against the existing client database was done in Seaside/Pharo in a few hours and works nicely.

It would be great to create the upload-receiving part with Pharo as well.

All this stuff is behind an IIS/ARR reverse proxy - tested for large uploads, it worked well after extending the timeout limits in IIS (with Kom, but eating memory - maybe not as much as Zinc now - and it had the codepage problem I wanted to debug earlier). OS is Windows Server 2008 R2 Datacenter Edition, IIS 7.5.

I'm developing on Linux and testing on Windows Server 2008 configured with the same setup (IIS, ARR, etc.)

This is the scenario.

Robert

Sven Van Caekenberghe <[hidden email]> wrote (on 18 Jan 2016, Mon, 16:30):
> Robert,
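As an aside, the per-client folder naming Robert describes (client ID plus date) could look like this in Pharo. A sketch only: the upload root and the client ID are made-up placeholders, not names from this thread.

    | clientId uploadRoot folder |
    clientId := 'ACME42'.  "hypothetical; would come from the authenticated session"
    uploadRoot := '/srv/uploads' asFileReference.  "illustrative location"
    folder := uploadRoot / (clientId , '-' , Date today yyyymmdd).
    folder ensureCreateDirectory.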
Hey all
just my 2ct while skimming the thread.

I have upload problems with my seaside app and plan to tackle them by utilizing the reverse proxy. In my scenario, that is nginx, which ships an "upload module":
https://www.nginx.com/resources/wiki/modules/upload/

Given that, the upload is handled by the reverse proxy, and only when the file is already on the file system does the backend (seaside in this case) get a notification request.

I plan to implement this within the next 6 weeks, so if I get something usable going, I'll probably hand it back to the seaside community :)
Remind me if I forget ;)

best regards
-Tobias

On 18.01.2016, at 18:00, Robert Kuszinger <[hidden email]> wrote:
>
> Sven,
>
> thanks for the demo. Zn without Seaside is just fine if it could work. A one-field form with only the uploaded file could work also. [...]
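For concreteness, a configuration along the lines Tobias describes might look roughly like this. An untested sketch: the location path, store directory, and backend notification URL are illustrative assumptions; the directive names come from the upload module's documentation linked above.

    location /upload {
        # nginx itself receives the body and writes it to disk
        upload_pass /seaside/uploadDone;       # backend only gets a small notification request
        upload_store /var/tmp/nginx-uploads;
        # tell the backend the original file name and where the bytes landed
        upload_set_form_field $upload_field_name.name "$upload_file_name";
        upload_set_form_field $upload_field_name.path "$upload_tmp_path";
        # remove the stored file if the backend answers an error
        upload_cleanup 400 404 499 500-505;
    }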
I quickly read through the docs of the nginx module: that looks like a very good solution.
Is the plugin good and available for the open source version of nginx ?

> On 18 Jan 2016, at 18:56, Tobias Pape <[hidden email]> wrote:
>
> I have upload problems with my seaside app and plan to tackle them by utilizing the reverse proxy. In my scenario, that is nginx, which ships an "upload module"
> https://www.nginx.com/resources/wiki/modules/upload/
>
> Given that, the upload is handled by the reverse proxy and only when the file is already on the file system, the backend (seaside in this case) would get a notification request.
> [...]
Hi Tobias,
This is what we have been doing for years :)
There was a blog post online describing all the details but I don’t find it anymore.
The only thing I can find is Nick Ager’s reply from when we had a little trouble setting it up [1]

I might try to separate off the code to spare you some time.
It’s quite simple, actually.

[1] http://forum.world.st/Using-nginx-file-upload-module-td3591666.html

Johan

> On 18 Jan 2016, at 18:56, Tobias Pape <[hidden email]> wrote:
>
> I plan to implement this within the next 6 weeks, so if I get something usable going, I'll probably hand it back to the seaside community :)
> Remind me if I forget ;)
Hi Johan
On 18.01.2016, at 19:18, Johan Brichau <[hidden email]> wrote:
> This is what we have been doing for years :)
> There was a blog post online describing all the details but I don’t find it anymore.
>
> I might try to separate off the code to spare you some time.
> It’s quite simple, actually.

Oh that would be just great. My students would rejoice to no longer see the spurious "503 gateway timed out" messages ;)

Best regards
-Tobias

> [1] http://forum.world.st/Using-nginx-file-upload-module-td3591666.html
Hmmm. Putting the upload one step out, into the front line... I'll also ask my office about nginx and do a test install for myself. Anyway, a pure Pharo streaming solution is still interesting for me.

R

Tobias Pape <[hidden email]> wrote (on 18 Jan 2016, 19:20):
> On 18 Jan 2016, at 19:19, Tobias Pape <[hidden email]> wrote:
>
> Oh that would be just great. My students would rejoice to no longer see the spurious "503 gateway timed out" messages ;)

Are you referring to file uploads on SS3? There, the file is stored inside the Gemstone db, right?

Johan
On 18.01.2016, at 19:24, Johan Brichau <[hidden email]> wrote:
> Are you referring to file uploads on SS3?

It is not SS3, it is another system :)

> There, the file is stored inside the Gemstone db, right?

For SS3 on GS, that's right. It actually still works well that way, but we don't have many files over 6MB, and more and more packages are over at github, so I don't see any need to implement upload improvements there.

On the other system, however, we had students trying to upload some 300 MB files. In principle that was fine, but Nginx+FCGI+Seaside took way too long, hence my desire to use nginx upload.

Best regards
-Tobias
> On 18 Jan 2016, at 19:34, Tobias Pape <[hidden email]> wrote:
>
> For SS3 on GS, that's right. It actually still works well that way, but we don't have many files over 6MB, and more and more packages are over at github, so I don't see any need to implement upload improvements there.

True. I was just going to say it would not really help there, since you need the file inside the db anyway.

> On the other system, however, we had students trying to upload some 300 MB files. In principle that was fine, but Nginx+FCGI+Seaside took way too long, hence my desire to use nginx upload.

Ok, I’m currently heads-down in some other things, so let me take a look at this tomorrow. This will work for Robert as well, of course :)

Johan
On 18.01.2016, at 19:43, Johan Brichau <[hidden email]> wrote:
> Ok, I’m currently heads-down in some other things, so let me take a look at this tomorrow.
> This will work for Robert as well, of course :)

Take your time :)
And, thank you.

Best regards
-Tobias
Actually,
I was quickly trying to find Nick’s blog post on the wayback machine but it’s not archived :(

I did find this: http://www.squeaksource.com/fileupload/

I did not check what’s in there but until I can take a look, here it is already ;)

cheers
Johan
On 18.01.2016, at 19:58, Johan Brichau <[hidden email]> wrote:
> I was quickly trying to find Nick’s blog post on the wayback machine but it’s not archived :(
>
> I did find this: http://www.squeaksource.com/fileupload/
>
> I did not check what’s in there but until I can take a look, here it is already ;)

:D

Best regards
-Tobias
On Mon, 2016-01-18 at 13:39 +0100, Robert Kuszinger wrote:
> Answering on how it goes:
>
> 20 - 40 - 10MB uploads in seconds. Now it seems to be stuck on a ~ 120 MB upload. Pharo memory (windows OS) seemed to grow to ~ 441 MB. A "Space is low" warning window appeared. I've clicked on "Proceed" just out of curiosity but there is no reaction in the Pharo gui... hmmm...

Assuming you're running the Cog VM, you're likely hitting its memory limits. I don't recall the exact number, but once you get to the 400-500 Meg range you're hitting the absolute limit of how much RAM Cog can deal with. Also, as you've noticed, once you get over about 300 Meg things start to slow down rather dramatically.

From what I've read, Spur roughly doubles the amount of RAM that the VM can work with and should perform much better at large image sizes.

You might want to consider handling large file uploads like this outside of the image (i.e. still have your front end in Seaside, but handle the actual upload via an external mechanism).
On 18/01/16 21:04, Phil (list) wrote:
> On Mon, 2016-01-18 at 13:39 +0100, Robert Kuszinger wrote:
>>
>> 20 - 40 - 10MB uploads in seconds. Now it seems to be stuck on a ~ 120 MB upload. Pharo memory (windows OS) seemed to grow to ~ 441 MB. A "Space is low" warning window appeared.
>
> Assuming you're running the Cog VM, you're likely hitting its memory limits. I don't recall the exact number, but once you get to the 400-500 Meg range you're hitting the absolute limit of how much RAM Cog can deal with.

That is just the default limit. On a mac I've worked with about 2GB. There used to be some limitation on windows; I think there was an issue in 2011 where there was a limit closer to 512 MB, but AFAIK that was fixed.

Stephan
On Tue, 2016-01-19 at 02:14 +0100, Stephan Eggermont wrote:
> On 18/01/16 21:04, Phil (list) wrote:
>> Assuming you're running the Cog VM, you're likely hitting its memory limits. I don't recall the exact number, but once you get to the 400-500 Meg range you're hitting the absolute limit of how much RAM Cog can deal with.
>
> That is just the default limit. On a mac I've worked with about 2GB. There used to be some limitation on windows; I think there was an issue in 2011 where there was a limit closer to 512 MB, but AFAIK that was fixed.

Is that something that can be changed without a custom build? If so, I'd love to learn how. I was under the impression that this was a hard limit in Cog (that varies a bit by platform, but still well below 1G).
On the mac you can change the limit in <my-vm-dir>/Pharo.app/Contents/Info.plist by adjusting the value for the SqueakMaxHeapSize setting and restarting the image.
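For example, the relevant fragment of Info.plist would look something like this (a sketch; the 1 GB value is an illustrative assumption - set whatever your machine can spare):

    <key>SqueakMaxHeapSize</key>
    <integer>1073741824</integer>   <!-- 1 GB -->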