How do you use Dropzone.js with chunked file uploads (PHP only)? - javascript

I can find plenty of documentation on how to use chunked file uploads with various APIs and libraries, but I am unable to find how to use Dropzone.js's chunked file upload with plain PHP.
The documentation is very minimal. I cannot add any libraries or APIs other than jQuery on the client side (JavaScript).
My question is: how do you use Dropzone.js's new chunked file upload feature with only PHP on the server side? (Client-side code for the setup is appreciated.)
Here is the code I've attempted so far:
Client side .js file:
var myDropzone = new Dropzone("div#formDiv", {
    url: "uploadform.php",
    params: function (files, xhr, chunk) {
        if (chunk) {
            return {
                dzUuid: chunk.file.upload.uuid,
                dzChunkIndex: chunk.index,
                dzTotalFileSize: chunk.file.size,
                dzCurrentChunkSize: chunk.dataBlock.data.size,
                dzTotalChunkCount: chunk.file.upload.totalChunkCount,
                dzChunkByteOffset: chunk.index * this.options.chunkSize,
                dzChunkSize: this.options.chunkSize,
                dzFilename: chunk.file.name
            };
        }
    },
    method: "post",
    timeout: 600000,
    maxFilesize: 1024,
    parallelUploads: 1,
    chunking: true,
    forceChunking: true,
    chunkSize: 1000000,
    parallelChunkUploads: true,
    retryChunks: true,
    retryChunksLimit: 3,
    chunksUploaded: function (file, done) {
        // All chunks have uploaded successfully
        done();
    },
    error: function (file, msg, xhr) {
        alert(xhr ? xhr.responseText : msg);
    }
});
Here is the PHP (server):
foreach ($_POST as $key => $value)
{
    _log('key:' . $key . '; Value:' . $value);
}
The above code shows nothing (_log() just echoes its argument to the screen and appends it to a text file).
I looked at the request/response headers, and only a single call is sent to the server.
I've verified via the browser's developer console that the file drag/drop zone is set up correctly by Dropzone.js.
Edit: Here is the documentation for chunked uploads: https://gitlab.com/meno/dropzone/wikis/faq#chunked-uploads
Chunked uploads
Dropzone offers the possibility to upload files in chunks. The
relevant configuration options for this feature are:
chunking which should be set to true
forceChunking, if true, will always send a file in chunks, even if it
is only one chunk
chunkSize in bytes
parallelChunkUploads, if true, the chunks will be uploaded
simultaneously
retryChunks, if true, the library will retry to upload a chunk if it
fails
retryChunksLimit defaults to 3
Then there are two important callbacks. The first one is params, which can be a function that receives files, xhr and chunk as arguments. If chunking is enabled, you know that files only contains that one file, and chunk is the object holding all the information about the current chunk. Example:
var chunk = {
    file: file,
    index: 0,
    status: Dropzone.UPLOADING,
    progress: 0.4
}
See the documentation for that parameter for more information or look
at the source code for the default implementation.
The second important callback is chunksUploaded, which gets the file that finished uploading and the done function as its second argument. Do whatever you need to do in that function to tell the server that the file has finished uploading, and invoke done() when ready.

I found that I had a number of problems. One was that I wasn't using the most recent version of Dropzone.js.
Another was that I wasn't checking $_FILES and $_POST for the variables and file parts.
Once I got those fixed, I changed the JavaScript to:
var myDropzone = new Dropzone("div#formDiv", {
    url: "uploadform.php",
    method: "post",
    timeout: 180000,
    maxFilesize: 1024,
    parallelUploads: 1,
    chunking: true,
    forceChunking: true,
    chunkSize: 256000,
    parallelChunkUploads: true,
    retryChunks: true,
    retryChunksLimit: 3
});
By removing the params function, I get the default values. These include the chunk index and the total number of chunks.
From there I was able to get the chunked files in the PHP script using:
$_FILES['file']['name']
$_FILES['file']['tmp_name']
$_POST['dzchunkindex']
$_POST['dztotalchunkcount']
After that, it was just a matter of checking in the PHP script that all of the file parts had been uploaded and then re-assembling them, which many easily found tutorials also cover. A minimal sketch of that handling follows.
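Here is one way the server side could look, as a minimal PHP sketch. It assumes the default field names listed above plus dzuuid (also a Dropzone default), a writable uploads/ directory (a placeholder path), and that chunks arrive one at a time; with parallelChunkUploads: true you would also need locking around the completion check:

// uploadform.php -- sketch of the chunk handler described above.
$fileName   = basename($_FILES['file']['name']);          // strip any path component
$chunkIndex = (int) $_POST['dzchunkindex'];
$chunkTotal = (int) $_POST['dztotalchunkcount'];
$chunkDir   = sys_get_temp_dir() . '/chunks_' . basename($_POST['dzuuid']);

if (!is_dir($chunkDir)) {
    mkdir($chunkDir, 0755, true);
}

// Store this chunk under its index.
move_uploaded_file($_FILES['file']['tmp_name'], $chunkDir . '/' . $chunkIndex);

// Once every chunk has arrived, stitch them together in order.
if (count(glob($chunkDir . '/*')) >= $chunkTotal) {
    $out = fopen('uploads/' . $fileName, 'wb');
    for ($i = 0; $i < $chunkTotal; $i++) {
        $in = fopen($chunkDir . '/' . $i, 'rb');
        stream_copy_to_stream($in, $out);
        fclose($in);
        unlink($chunkDir . '/' . $i);
    }
    fclose($out);
    rmdir($chunkDir);
}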

Related

Is it possible to upload files *sequentially* via AJAX?

I've successfully implemented the uploading of directory structures with Fine Uploader using the dragAndDrop: reportDirectoryPaths option. Each uploaded file has its qqPath property to signify the path from which it came.
var exampleUploader = new qq.FineUploader({
    element: document.getElementById('fine-uploader'),
    template: 'qq-template-manual-trigger',
    request: {
        endpoint: '/api/UploadDocuments',
        params: {
            param1: "Test1",
            param2: "Test2"
        }
    },
    callbacks: {
        onComplete: function (id, name, responseJSON, xhr) {
            console.log("onComplete");
        },
        onAllComplete: function (id, name, responseJSON, xhr) {
            console.log("onAllComplete");
        }
    },
    multiple: true,
    dragAndDrop: {
        reportDirectoryPaths: true
    },
    autoUpload: true,
    debug: false
});
There is, however, one problem: the files are uploaded one by one, but the AJAX requests sometimes fire at the same time. If two files with the same directory structure reach the service at the exact same moment, the directories might be created twice. Is there a way to fire each AJAX request only on success of the previous one? In other words, is there a way to upload the files sequentially rather than passing a whole bunch of files into the service at once?
Thanks in advance,
aj
The problem you are seeing is an issue with your server, and not with anything client/browser-side. It's shortsighted to limit your frontend to one request at a time. This presents a completely unnecessary bottleneck. Your server should sort all of this out.
One approach is to key uploaded files by the UUID assigned by Fine Uploader and sort out the storage hierarchy later. And if you don't want to trust the UUID supplied by Fine Uploader, you can always generate your own server-side and return it with the response; Fine Uploader will then use that for all other requests related to that specific file.
Another approach is for your server to simply check whether the directory for the target file already exists before creating it, as in the sketch below.
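A minimal sketch of that check in PHP (the question's server stack isn't stated, so this is illustrative only; the qqpath field name comes from Fine Uploader's reported path, and the uploads/ base directory is an assumption):

// Illustrative only: derive the target directory from the reported path.
$path = isset($_POST['qqpath']) ? $_POST['qqpath'] : '';
$dir  = 'uploads/' . ltrim(dirname($path), '/');

// Two parallel uploads may both see the directory as missing. The "@"
// plus the final re-check turns the resulting "already exists" race
// into a harmless no-op instead of a duplicate-creation error.
if (!is_dir($dir) && !@mkdir($dir, 0755, true) && !is_dir($dir)) {
    http_response_code(500);
    exit('could not create target directory');
}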
Keep in mind that this "qqPath" property you are depending on only exists in Chrome/Opera, and only when a directory is dropped.
If you want to upload files one by one with Fine Uploader, the easiest way is to limit the maximum number of connections to 1.
var exampleUploader = new qq.FineUploader({
    element: document.getElementById('fine-uploader'),
    template: 'qq-template-manual-trigger',
    request: {
        endpoint: '/api/UploadDocuments',
        params: {
            param1: "Test1",
            param2: "Test2"
        }
    },
    callbacks: {
        onComplete: function (id, name, responseJSON, xhr) {
            console.log("onComplete");
        },
        onAllComplete: function (id, name, responseJSON, xhr) {
            console.log("onAllComplete");
        }
    },
    maxConnections: 1,
    multiple: true,
    dragAndDrop: {
        reportDirectoryPaths: true
    },
    autoUpload: true,
    debug: false
});
By default the parameter maxConnections is 3.

Unable to send base64 video through Ajax Post to PHP

When I try to upload a 16.9 MB MP4 video using an async AJAX POST to a PHP file, the console shows an error: POST http://website.com/proc_vids.php net::ERR_EMPTY_RESPONSE
I know for a fact that this problem is related to PHP's memory_limit: when I set it to 200 MB everything is fine, but when I change it back to 100 MB the error returns.
I can't even assign the POST data to a PHP variable; the error is triggered as soon as the AJAX POST is made, before anything happens on the server side. Here is the AJAX code:
var proc = 1;
var video = document.getElementById('preview_video').src;
$.ajax({
    async: true,
    type: "POST",
    global: false,
    dataType: 'json',
    url: "proc_vids.php",
    data: { proc: proc, video: video }
}).done(function () {
    // Do something
});
PHP code:
$proc = $_POST['proc'];
if ($proc == 1) {
    //$video = $_POST['video'];
}
As you can see I commented the line where I pass the POST to a variable and still triggering the error.
What can I do to stop the base64-encoded video from consuming so much memory when it is POSTed?
Is there any alternative to raising the memory_limit?
Problem solved thanks to cmorrissey!
I used the same method as described in this thread: Convert HTML5 Canvas into File to be uploaded?
Sending the AJAX POST as FormData, with the base64 data converted to a Uint8Array and then into a Blob, is the key to avoiding the PHP memory allocation when the POST is made. Be careful, though: older browsers may not support Blob.
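For reference, here is a sketch of the receiving side in PHP. The "video" field name is an assumption and must match whatever the client appends to the FormData (e.g. formData.append('video', blob, 'video.mp4')); the point is that PHP streams a multipart upload to a temp file on disk, so no giant base64 string ever has to fit inside memory_limit:

// proc_vids.php -- sketch of the multipart receiving side.
if (isset($_POST['proc']) && $_POST['proc'] == 1 && isset($_FILES['video'])) {
    // The upload is already on disk at tmp_name; just move it into place.
    move_uploaded_file($_FILES['video']['tmp_name'],
                       'videos/' . basename($_FILES['video']['name']));
}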
Thank you guys ;)

Multipart or base64 for AJAX file uploads?

I'm writing a single page application with EmberJS and need to upload a few files.
I wrote a special view that wraps the file input field and extracts the first file selected. This lets me bind the File object to a model attribute.
Now I have to choose.
I can write a special file transform that serialises the File object to base64 and simply PUT/POST this.
Or I can intercept the RESTAdapter methods createRecord and updateRecord to check every model for File objects and switch the PUT/POST requests to multipart/form-data, sent with the help of FormData.
Does one of these directions pose significant problems?
I've had to evaluate the same concern for a RESTful API I'm developing. In my opinion, the ideal method would be to just use the RESTAdapter with base64-encoded data.
That being said, I had to use the multipart/form-data method in my case, because data transfer is roughly 30% higher when you base64-encode the file data. Since my API would have to accept large (100 MB+) files, I opted to have the POST method of the API receive multipart form data, with the file as an upload and the JSON data as one of the POST variables.
So, unless you need to upload large files like in my case, I'd recommend always sticking to the REST methods.
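To make that layout concrete, here is a sketch of the receiving side in PHP (the original API's stack and field names are not given; "file" and "record" are illustrative):

// Sketch only: "record" carries the model's JSON, "file" the raw upload.
$record = json_decode($_POST['record'], true);   // decoded model attributes
$name   = basename($_FILES['file']['name']);     // strip any path component
move_uploaded_file($_FILES['file']['tmp_name'], 'uploads/' . $name);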
Just ran into this myself, and ended up using a simple jQuery AJAX call using the FormData object. My multi-select implementation (where one can drop multiple files at once) looks like this:
filesDidChange: function() {
    // Get FileList
    var $input = this.$('input'),
        fileList = $input.get(0).files;

    // Iterate files
    for (var i = 0; i < fileList.length; i++) {
        var file = fileList[i],
            formData = new FormData();

        // Append information to FormData instance
        formData.append('attachment[title]', file.name);
        formData.append('attachment[file]', file);
        formData.append('attachment[post_id]', this.get('post.id'));

        // Send upload request
        Ember.$.ajax({
            method: 'POST',
            url: '/attachments',
            cache: false,
            contentType: false,
            processData: false,
            data: formData,
            success: makeSuccessHandler(this),
            error: makeErrorHandler(this)
        });
    }

    // Notify
    this.container.lookup('util:notification').notify('Uploading file, please wait...');

    // Clear FileList
    $input.val(null);
},

trouble showing already existing files jQuery File Uploader

I've been beating my head over this for quite a while now, so I think it's time I reach out for help. I have some already existing code that uses the jQuery File Uploader plugin, allowing me to upload files to my webserver. The trouble I am having is listing files that already exist on the web server.
Here is my initialization code that runs at the client:
$('#fileupload').fileupload({
    disableImageResize: false,
    url: '/api/upload',
    done: function (e, data) { /* data is checked here */ }
});

// Load existing files:
$('#fileupload').addClass('fileupload-processing');
$.ajax({
    url: $('#fileupload').fileupload('option', 'url'),
    dataType: 'json',
    context: $('#fileupload')[0],
    data: { action: "FileList", blob: "uts", path: "Unit 14/Binaries/" }
}).always(function (e, data) {
    $(this).removeClass('fileupload-processing');
}).done(function (result) {
    $(this).fileupload('option', 'done')
        .call(this, $.Event('done'), { result: result });
});
Now, I am trying to return a list of pre-existing files on the server side that matches the JSON response akin to the documentation. My ASP.NET code on the server side is as follows (with two bogus files called "Something" and "SomethingElse" using my FilesStatus class).
// Get a list of files
private void FileList(HttpContext hc)
{
    var serializer = new System.Web.Script.Serialization.JavaScriptSerializer();
    List<FilesStatus> fs_list = new List<FilesStatus>();
    fs_list.Add(new FilesStatus("Something", 124));
    fs_list.Add(new FilesStatus("SomethingElse", 124));

    HttpContext.Current.Response.AddHeader("Pragma", "no-cache");
    HttpContext.Current.Response.AddHeader("Cache-Control", "private, no-cache");
    hc.Response.AddHeader("Content-Disposition", "inline; filename=\"files.json\"");

    var result = new { files = fs_list.ToArray() };
    hc.Response.Write(serializer.Serialize(result));
    hc.Response.ContentType = "application/json";
    HttpContext.Current.Response.StatusCode = 200;
}
In the "done" function of the AJAX code, I see what I believe is the proper response on the client side. Here, you can see the format in my Javascript debugger (i.e., a top level "files" that is an array):
These files do not get populated into the file list, though. The code I marked "// data is checked here" in the main done() function shows that the array can be accessed as data.result.files, NOT data.files. I can change ".call(this, $.Event('done'), { result: result });" to ".call(this, $.Event('done'), { files: result.files });" so that data.files is the location of the file array, but that does not solve the problem. I can't seem to get any pre-existing files to load into the list.
Does anyone see what I am doing wrong? Happy holidays.
What happens when you change the line:
hc.Response.Write(serializer.Serialize(result));
into
hc.Response.Write(serializer.Serialize(fs_list.ToArray()));
It looks like the serializer is taking the variable name into account when you are serializing your file descriptions. The 'result' JSON object should disappear from the response.
I was overwriting the done() method where I have:
done: function (e, data) { /* data is checked here */ }
I did this for debugging, but apparently it blocks the pre-existing file list from being loaded and calling the download template.

Using jQuery and iFrame to Download a File

I have the following code to download a .csv file:
$.ajax({
    url: urlString,
    contentType: "application/json; charset=utf-8",
    dataType: "json",
    cache: false,
    success: function(data) {
        if (data) {
            var iframe = $("<iframe/>").attr({
                src: data,
                style: "visibility:hidden;display:none"
            }).appendTo(buttonToDownloadFile);
        } else {
            alert('Something went wrong');
        }
    }
});
The urlString points to a RESTful service that generates the .csv file and returns the file path, which is assigned to the src attribute of the iFrame. This works for .csv files, but I'm having problems with .xml files.
When I use the same code but change the contentType to text/xml to download .xml files, it doesn't work.
Can I use the same approach here for .xml files?
UPDATE:
Thanks to Ben for pointing me in the right direction. It turns out I don't need the AJAX call at all. Instead, I can point the iFrame's src directly at the web service, which generates the content, adds the Content-Disposition header, and returns the stream.
You can also offer it as a download from a virtual anchor element, even if the data is client-side:
/*
* Create an anchor to some inline data...
*/
var url = 'data:application/octet-stream,Testing%20one%20two%20three';
var anchor = document.createElement('a');
anchor.setAttribute('href', url);
anchor.setAttribute('download', 'myNote.txt');
/*
* Click the anchor
*/
// Chrome can do anchor.click(), but let's do something that Firefox can handle too
// Create event
var ev = document.createEvent("MouseEvents");
ev.initMouseEvent("click", true, false, self, 0, 0, 0, 0, 0, false, false, false, false, 0, null);
// Fire event
anchor.dispatchEvent(ev);
http://jsfiddle.net/D572L/
I'm guessing that the problem is that most browsers will try to render XML in the browser itself, whereas they tend to have no handler for CSV, so they automatically prompt the user to download the file. Try modifying the headers the XML is served with to force the download. Something like (PHP example):
header("Content-Type: application/force-download");
header("Content-Type: application/octet-stream");
header("Content-Type: application/download");
header('Content-Disposition: attachment; filename="some filename"');
That should tell most browsers not to attempt to open the file, but instead to have the user download the file and let the OS determine what to do with it.
If you have no power to control the headers of the XML file itself, you can try a work-around using a server-side script. Use JS to pass the URL to that script:
//build the new URL
var my_url = 'http://example.com/load_file_script?url=' + encodeURIComponent(path_to_file);

//load it into a hidden iframe
var iframe = $("<iframe/>").attr({
    src: my_url,
    style: "visibility:hidden;display:none"
}).appendTo(buttonToDownloadFile);
and on the server side (your http://example.com/load_file_script script) you use cURL / file_get_contents / wget / some other mechanism for fetching remote files to grab the contents of the remote file, add the Content-Disposition: attachment header, and output the original file. A minimal sketch follows.
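Here is one way that script could look in PHP. This is a sketch only: in real code, validate $_GET['url'] against an allow-list first, since blindly proxying user-supplied URLs is a security hole.

// load_file_script -- fetch a remote file and re-serve it as a download.
$url  = $_GET['url'];
$path = parse_url($url, PHP_URL_PATH);
$name = $path ? basename($path) : 'download.xml';   // fallback name is an assumption

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $name . '"');

readfile($url); // requires allow_url_fopen; cURL works just as well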
