Hi list,
We need to let users upload and download files securely, and we've been trying to do it straight through Swazoo, storing the files on disk at upload time and removing their ILFileProxy instance so that the image doesn't grow huge. It seemed to work well enough for a while, but it is neither elegant nor well performing: files over 150 MB can't be uploaded and, sometimes, Swazoo freezes.

We know Swazoo is meant to host Smalltalk web frameworks, not files, which is why we'd like to know how people serve files securely in Smalltalk web apps. We've considered using Apache and storing only the public files' URLs in Iliad, but that would be a huge security hole, since any user who found such a URL could access the file.
So how does SmalltalkHub, for instance, handle secure file storage? Thanks a lot!

p.s. We have massive storage space, in case that helps tip the balance.
Bernat Romagosa
I've done something like this with nginx and the Ruby framework Merb. Merb has a tiny piece of code that (I think) sets an HTTP header for nginx to see, and then nginx handles the download. I think you can also handle uploads with an nginx upload module. The directories where nginx reads/writes these files are not accessible through a public URL.

Bottom line: regardless of the language of your app server, handling large files should not be its job.

~Jon
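The hand-off Jon describes is usually done with nginx's `X-Accel-Redirect` header. A minimal sketch of the idea as a plain WSGI app (not Smalltalk, but the pattern is language-agnostic; the paths and the auth check are hypothetical placeholders, not anything from Merb or Iliad):

```python
# Sketch of the X-Accel-Redirect hand-off: the app does only the cheap
# part (authorization), then tells nginx which internal file to stream.

def serve_protected_file(environ, start_response):
    # Placeholder for a real session/authorization check.
    user = environ.get("HTTP_X_USER")
    if user is None:
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"forbidden"]

    # Instead of streaming the file itself, point nginx at an
    # internal-only location, configured roughly like:
    #     location /protected/ { internal; alias /var/files/; }
    filename = environ["PATH_INFO"].rsplit("/", 1)[-1]
    start_response("200 OK", [
        ("X-Accel-Redirect", "/protected/" + filename),
        ("Content-Type", "application/octet-stream"),
    ])
    return [b""]  # nginx replaces the (empty) body with the file contents
```

Because the `/protected/` location is marked `internal`, clients can never fetch it directly; only responses carrying the header reach it, so the app server stays in control of who downloads what without ever touching the bytes.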
Hmm, I'll take a look at that, but I'm not sure how to approach it. Do you have any specific demo, or pointers to documentation? We're looking for some sort of system that lets users upload files and sends an HTTP redirect to our Iliad app when the upload is done, so that we know which file was uploaded by whom. Then we should be able to build a download link that triggers a redirect from Pharo to a temporary URL pointing to the file, and release that URL once the download is complete.

Something like this is what we've come up with so far, but we really hope there's a better solution, because it would mean building some sort of servlet, which we'd rather avoid as we don't even know where to start... :(
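The temporary-URL scheme sketched above doesn't strictly need a servlet or any "release" bookkeeping: a common trick is to sign the URL itself with an expiry time, so whoever serves the file can verify the link statelessly and the link simply dies on its own. A minimal sketch (the secret and paths are hypothetical; this is an illustration of the idea, not code from any of the projects mentioned):

```python
import hashlib
import hmac
import time

SECRET = b"change-me"  # hypothetical shared secret, kept out of the image

def make_download_url(path, ttl=300, now=None):
    """Build a temporary URL valid for `ttl` seconds."""
    expires = int(now if now is not None else time.time()) + ttl
    msg = ("%s:%d" % (path, expires)).encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return "%s?expires=%d&sig=%s" % (path, expires, sig)

def check_download_url(path, expires, sig, now=None):
    """True if the signature matches and the link has not expired."""
    now = now if now is not None else time.time()
    msg = ("%s:%d" % (path, expires)).encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, sig) and now < expires
```

Since expiry is baked into the signature, nothing has to remember which links are outstanding: an expired or tampered-with link just fails the check.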
Cheers,
2012/2/6 Jon Hancock <[hidden email]>

Bernat Romagosa
Hi!
On 02/06/2012 11:36 AM, Jon Hancock wrote:
> Bottom line is, without regard to the language of your app server,
> handling large files should not be its job.

That's a fair way to deal with it, but:

On 02/06/2012 06:22 PM, Bernat Romagosa wrote:
>> It seemed to work well enough for a while, but of course it is not an
>> elegant or very well performing solution, files over 150Mb can't be
>> uploaded and, sometimes, Swazoo freezes.

I find this odd - Janko and I competed a while back on improving file upload speed (I implemented SocketStream and added support for "chunkwise reading" to KomHttpServer, and Janko used Swazoo, IIRC), and we both ended up with proper "write to disk as it streams" implementations that easily handled 1 GB files without growing the image and without slowing down.

So you should definitely ask Janko. :)

regards, Göran
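The "write to disk as it streams" approach Göran mentions boils down to never holding more than one buffer of the upload in memory. The actual Smalltalk implementations live in SocketStream/KomHttpServer and Swazoo; this is just a rough language-agnostic illustration in Python (the chunk size is an arbitrary choice):

```python
def save_upload(source, dest_path, chunk_size=64 * 1024):
    """Copy an incoming stream to disk chunk by chunk, so memory use
    stays bounded at one chunk regardless of the file's total size."""
    total = 0
    with open(dest_path, "wb") as out:
        while True:
            chunk = source.read(chunk_size)
            if not chunk:  # empty read means end of stream
                break
            out.write(chunk)
            total += len(chunk)
    return total
```

The key property is that `total` can be gigabytes while resident memory never exceeds `chunk_size`, which is why the implementations Göran describes could take 1 GB uploads without growing the image.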
Hi!

I'm forwarding this to Janko.

Cheers,
Nico
Also, if you intend to use Kom, there's now a package for Iliad named IliadKom in the dev repository.

Cheers,
Nico
On 02/06/2012 10:53 PM, Göran Krampe wrote:
> I implemented SocketStream and added support for "chunkwise reading" +
> KomHttpServer, and Janko using Swazoo IIRC [...] implementations that
> easily handled 1Gb files without growing the image and without going slower.

Did that ever end up in a stable release?

Paolo
http://forum.world.st/SocketStream-changes-for-4-1-too-late-td1680213.html
--
Sent from my HP TouchPad

On Feb 7, 2012 10:44, Paolo Bonzini <[hidden email]> wrote:
> Did that ever end up in a stable release?