Hi all,

I work with Pharo 2.0 and Seaside-REST on top of the Zinc server. I need to rebuild a file that is uploaded through POST requests via Seaside-REST, but I don't really use WASession.

On the JavaScript side, I slice the file with this function:

    document.querySelector('input[type="file"]').addEventListener('change', function(e) {
        var name = this.files.name;
        var blob = this.files[0];
        const BYTES_PER_CHUNK = 1024 * 1024; // 1MB chunk size.
        const SIZE = blob.size;
        var start = 0;
        var end = BYTES_PER_CHUNK;
        while (start < SIZE) {
            upload(name, blob.slice(start, end));
            start = end;
            end = start + BYTES_PER_CHUNK;
        }
    }, false);

and I send the data with this function:

    function upload(aFileName, files) {
        var formData = new FormData();
        for (var i = 0, file; file = files[i]; ++i) {
            formData.append(aFileName, file);
        }
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/myPath', true);
        xhr.onload = function(e) { ... };
        xhr.send(formData); // multipart/form-data
    }

I have a POST method on my handler, but I receive multiple requests. This way, I don't have access to the requestContext to get the WAFile from the request. If I don't slice the file, it works and I can store it on disk, but not in the sliced case. Of course, my goal is to store the file on disk correctly.

So, how do I get the file content of each request? And how do I rebuild and store the file correctly if the requests aren't ordered?

Any help? Thanks a lot.
Okay, I dropped the idea of slicing the file. One solution (not tested yet): ZnConstants maximumEntitySize: 104857600 (that is, 100 * 1024 * 1024), to raise the maximum size of incoming entities to 100 MB, for example. But if we find a solution for the sliced case, we could upload files faster. Happy Smalltalk ;)

2015-03-12 15:21 GMT-10:00 Sebastien Audier <[hidden email]>:
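For reference, a minimal sketch of that workaround, assuming the class-side ZnConstants setter named above is available in the image; it would be evaluated once, in a workspace or in the image start-up script:

    "Raise Zinc's limit on incoming request entities to 100 MB,
    so the whole file can arrive in one un-sliced POST."
    ZnConstants maximumEntitySize: 100 * 1024 * 1024.   "= 104857600 bytes"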
Not really sure I understand the question, but it sounds like you would need to include some sort of position information in each POST. That way the receiving end knows where in the file to put the data.
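A sketch of that idea, under a few assumptions: the JavaScript side would append the Blob slice plus two extra form fields to each FormData before sending, here called 'filename' and 'offset' (the byte position where the slice starts); both field names, and the method name below, are hypothetical. Note also that in the slicing code above, upload(name, blob.slice(start, end)) hands a single Blob to a function that loops over it like a FileList (files[i]), so the FormData it builds stays empty; the slice itself would need to be appended directly. On the Pharo side, a chunk-aware handler modeled on the one shown further down the thread could then store each slice under a name that encodes its offset, so the order in which the requests arrive no longer matters:

    uploadChunk
        <post>
        <produces: 'application/json; charset=utf-8'>
        | fields chunk offset dir partName |
        fields := self requestContext request postFields.
        "The uploaded slice arrives as a WAFile, just as in the non-sliced case."
        chunk := fields values detect: [ :each | each isKindOf: WAFile ].
        offset := (fields at: 'offset') asInteger.
        dir := FileSystem disk workingDirectory / 'ressources' / 'upload'.
        dir ensureDirectory.
        "One part file per slice, e.g. 'report.pdf.0000000000.part', 'report.pdf.0001048576.part', ..."
        partName := (fields at: 'filename') , '.' , (offset printPaddedWith: $0 to: 10) , '.part'.
        ((dir / partName) ensureFile)
            writeStreamDo: [ :stream | stream nextPutAll: chunk rawContents ].
        ^ ''

Once the client signals that every slice has been sent, the part files can be sorted by name and concatenated into the final file; because the offset is zero-padded into the file name, a plain alphabetical sort yields the original order.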
On 3/19/15 8:42 PM, Sebastien Audier wrote:
Hi Bob,

Thanks for your answer. You're right, but even before that problem, I fail to recover the file content from the request. If I don't slice the file, I can recover a WAFile instance with:

    | file |
    file := self requestContext request postFields values first.
    ...

and I stream it to disk after that. If I slice the file on the web client, I can't recover the WAFile, because self requestContext returns an error.

What's wrong in my process? Thanks again.

2015-03-19 15:02 GMT-10:00 Bob Arning <[hidden email]>:
Sounds like the sliced and non-sliced versions are posting in somewhat different ways.

Does the sample code below work OK if the data is less than BYTES_PER_CHUNK? IOW, is the code OK for a single-chunk POST, but failing for multiple chunks? What does the receiving code look like? Is it different for the sliced and non-sliced versions? Can you say more about what "return an error" is? Is there a debugger stack you could include?

On 3/19/15 9:31 PM, Sebastien Audier wrote:
2015-03-19 15:46 GMT-10:00 Bob Arning <[hidden email]>:
Yes, it sounds like the sliced version is posting in a different format, and maybe Zinc doesn't handle it correctly.

Not only multiple chunks: even if the file is smaller than the BYTES_PER_CHUNK limit, it doesn't work. It fails with just one request.

No, the receiving method is the same.
Okay, this is my POST method on the handler:

    test
        <post>
        <produces: 'application/json; charset=utf-8'>
        | uploadedFile disk file |
        uploadedFile := self requestContext request postFields values first.
        "here, uploadedFile is a WAFile if we don't slice the file"
        [ disk := FileSystem disk workingDirectory / 'ressources' / 'upload'.
          disk ensureDirectory.
          file := (disk / uploadedFile fileName) ensureFile.
          file writeStreamDo: [ :str | str nextPutAll: uploadedFile rawContents ] ] fork.
        ^ ''

When I slice the file, self requestContext raises a MessageNotUnderstood signal.
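A small, low-tech way to see exactly which message is not understood, before reaching for the debug log Bob asks about below: wrap the suspicious expression and log the failure to the Transcript. This is only a debugging aid, not a fix, and it assumes the error is actually raised while this expression runs rather than earlier in the request handling; the rest of the method stays as above:

    uploadedFile := [ self requestContext request postFields values first ]
        on: MessageNotUnderstood
        do: [ :error |
            "Print the receiver and the selector it did not understand, then resignal."
            Transcript
                show: error receiver printString;
                show: ' does not understand #';
                show: error message selector;
                cr.
            error pass ]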
So, how do you POST in the non-sliced version?

If there is a SqueakDebug.log (or similar), that would tell us exactly what was not understood and by whom, which might help us in determining what's wrong with the POST.

On 3/20/15 12:48 AM, Sebastien Audier wrote:
Just like this:

    function upload(files) {
        var formData = new FormData();
        for (var i = 0, file; file = files[i]; ++i) {
            formData.append(files.name, file);
        }
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/myPath', true);
        xhr.onload = function(e) { };
        xhr.send(formData); // multipart/form-data
    }
Okay, I am at the office and I don't have access to my Pharo image, but as soon as I can reproduce this log, I will send it to you. Thanks.