Running into the following error when I try to upload files larger than 1.7 MB:
"Request failed with error message - The request message is too big. The server does not allow messages larger than 2097152 bytes. . Stack Trace - undefined"
function uploadFile(arrayBuffer, fileName) {
    // Get client context, web and list objects.
    var clientContext = new SP.ClientContext();
    var oWeb = clientContext.get_web();
    var oList = oWeb.get_lists().getByTitle('CoReTranslationDocuments');

    // Convert the ArrayBuffer to a Base64 string.
    var bytes = new Uint8Array(arrayBuffer);
    var i, length, out = '';
    for (i = 0, length = bytes.length; i < length; i += 1) {
        out += String.fromCharCode(bytes[i]);
    }
    var base64 = btoa(out);

    // Create the file in the list's root folder.
    var createInfo = new SP.FileCreationInformation();
    createInfo.set_content(base64);
    createInfo.set_url(fileName);
    var uploadedDocument = oList.get_rootFolder().get_files().add(createInfo);

    clientContext.load(uploadedDocument);
    clientContext.executeQueryAsync(QuerySuccess, QueryFailure);
}
We just switched from SharePoint 2013 to SharePoint Online. This code previously worked well, even with larger files. Does the 2 MB limit refer to the file being uploaded or to the size of the REST request?
I also read about a possible solution using a file stream - is that something I can use in this scenario?
Any suggestions or modifications to the code will be much appreciated.
SharePoint has its own limits for CSOM. Unfortunately, these limits cannot be configured in Central Administration and also cannot be set using CSOM for obvious reasons.
Most solutions you find when googling for this issue suggest setting the ClientRequestServiceSettings.MaxReceivedMessageSize property to the desired size.
Call the following PowerShell script from the SharePoint Management Shell:
$ws = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$ws.ClientRequestServiceSettings.MaxReceivedMessageSize = 209715200
$ws.Update()
This will set the limit to 200 MB.
However, in SharePoint 2013 Microsoft apparently added another configuration setting that also limits the amount of data the server will process from a CSOM request (why anyone would configure this one differently is beyond me...). After reading a very, very long SharePoint log file and crawling through some disassembled SharePoint server code, I found that this parameter can be set via the ClientRequestServiceSettings.MaxParseMessageSize property.
We are now using the following script with SharePoint 2013 and it works great:
$ws = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$ws.ClientRequestServiceSettings.MaxReceivedMessageSize = 209715200
$ws.ClientRequestServiceSettings.MaxParseMessageSize = 209715200
$ws.Update()
Hope that saves some people a headache!
I'm working on file encryption for my messenger and I'm struggling with uploading the file after the encryption is done.
The encryption itself seems fine in terms of performance, but when I try to upload, the browser hangs completely. The profiler logs "small GC" events endlessly, and the yellow "unresponsive script" bar appears every 10 seconds.
What I already tried:
Read the file with FileReader into an ArrayBuffer, turn it into a plain Array, encrypt it, then create a FormData object, create a File from the data, append it to the FormData and send. This was fast with the original, untouched file of around 1.3 MB when I skipped the encryption, but with the encrypted "fake" File object the uploaded file came out at 4.7 MB and was not usable.
Send the data as a plain POST field (multipart form-data encoding). The data is corrupted after PHP saves it this way.
Send the data as a Base64-encoded POST field. This finally started working after I found a fast conversion function from a binary array to a Base64 string (btoa() gave wrong results after encode/decode). But when I tried a file of 8.5 MB, it hung again.
I tried moving the extra data into the URL string and sending the file as a Blob, as described here. No success, the browser still hangs.
I tried passing the Blob constructor a plain Array, a Uint8Array made from it, and finally sending the File itself as suggested in the docs - still the same result, even with a small file.
What is wrong with the code? HDD load is 0% while the hang happens, and the files in question are really very small.
Here is what I get as output from my server script when I forcibly terminate the JS script by pressing the button:
Warning: Unknown: POST Content-Length of 22146226 bytes exceeds the limit of 8388608 bytes in Unknown on line 0
Warning: Cannot modify header information - headers already sent in Unknown on line 0
Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent in D:\xmessenger\upload.php on line 2
Array ( )
Here is my JavaScript:
function uploadEncryptedFile(nonce) {
    if (typeof window['FormData'] !== 'function' || typeof window['File'] !== 'function') return
    var file_input = document.getElementById('attachment')
    if (!file_input.files.length) return
    var file = file_input.files[0]
    var reader = new FileReader();
    reader.addEventListener('load', function() {
        var data = Array.from(new Uint8Array(reader.result))
        var encrypted = encryptFile(data, nonce)
        //return //Here it never hangs
        var form_data = new FormData()
        form_data.append('name', file.name)
        form_data.append('type', file.type)
        form_data.append('attachment', arrayBufferToBase64(encrypted))
        /* form_data.append('attachment', btoa(encrypted)) // Does not help */
        form_data.append('nonce', nonce)
        var req = getXmlHttp()
        req.open('POST', 'upload.php?attachencryptedfile', true)
        req.onload = function() {
            var data = req.responseText.split(':')
            document.getElementById('filelist').lastChild.realName = data[2]
            document.getElementById('progress2').style.display = 'none'
            document.getElementById('attachment').onclick = null
            encryptFilename(data[0], data[1], data[2])
        }
        req.send(form_data)
        /* These lines also fail when the file is larger */
        /* req.send(new Blob(encrypted)) */
        /* req.send(new Blob(new Uint8Array(encrypted))) */
    })
    reader.readAsArrayBuffer(file)
}

function arrayBufferToBase64(buffer) {
    var binary = '';
    var bytes = new Uint8Array(buffer);
    var len = bytes.byteLength;
    for (var i = 0; i < len; i++) {
        binary += String.fromCharCode(bytes[i]);
    }
    return window.btoa(binary);
}
Here is my PHP handler code:
if (isset($_GET['attachencryptedfile'])) {
    $entityBody = file_get_contents('php://input');
    if ($entityBody == '') exit(print_r($_POST, true));
    else exit($entityBody);
    // Note: both branches above exit, so the code below never runs while this debug block is left in.
    if (!isset($_POST["name"])) exit("Error");
    $name = @preg_replace("/[^0-9A-Za-z._-]/", "", $_POST["name"]);
    $nonce = @preg_replace("/[^0-9A-Za-z+\\/]/", "", $_POST["nonce"]);
    if ($name == ".htaccess") exit();
    $data = base64_decode($_POST["attachment"]);
    //print_r($_POST);
    //exit();
    if (strlen($data) > 1024*15*1024) exit('<script type="text/javascript">parent.showInfo("The file is too large"); parent.document.getElementById(\'filelist\').removeChild(parent.document.getElementById(\'filelist\').lastChild); parent.document.getElementById(\'progress2\').style.display = \'none\'; parent.document.getElementById(\'attachment\').onclick = null</script>');
    $uname = uniqid()."_".str_pad($_SESSION['xm_user_id'], 6, "0", STR_PAD_LEFT).substr($name, strrpos($name, "."));
    file_put_contents("upload/".$uname, $data);
    mysql_query("ALTER TABLE `attachments` AUTO_INCREMENT=0");
    mysql_query("INSERT INTO `attachments` VALUES('0', '".$uname."', '".$name."', '0', '".$nonce."')");
    exit(mysql_insert_id().":".$uname.":".$name);
}
HTML form:
<form name="fileForm" id="fileForm" method="post" enctype="multipart/form-data" action="upload.php?attachfile" target="ifr">
    <div id="fileButton" title="Attach file" onclick="document.getElementById('attachment').click()"></div>
    <input type="file" name="attachment" id="attachment" title="Attach file" onchange="addFile()" />
</form>
UPD: unfortunately, the issue is not solved; my answer is only partially correct. I made a silly mistake in the code (I forgot to update the server side) and found another possible cause of the hang. If I submit a basic POST form (x-www-form-urlencoded) and the PHP script tries to execute this line ($uname is defined, $_FILES is an empty array):
if (!copy($_FILES['attachment']['tmp_name'], "upload/".$uname)) exit("Error");
then the whole thing hangs again. If I terminate the script, the server responds with code 200 and the body contents are just fine (I have error output turned on on my dev machine). I know it is a bad thing to call copy() with a first argument that is undefined, but even a server error 500 should not hang the browser like this (by the way, the newest version of Firefox is also affected).
I have Apache 2.4 on Windows 7 x64 and PHP 5.3. Can someone please verify this? Maybe a bug should be filed with the Apache/Firefox teams?
Oh my God. This terrible behavior was caused by... post_max_size = 8M set in php.ini. As it turned out, files smaller than 8 MB did not actually hang the browser.
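For anyone hitting the same wall, the relevant php.ini directives look like this (example values, not what my server uses; upload_max_filesize matters too when real file uploads are involved, and PHP/Apache must be restarted afterwards):
; php.ini - allow larger request bodies (example values)
post_max_size = 32M
upload_max_filesize = 32M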
The last question is: why? Why can't Apache/PHP (I have Apache 2.4, by the way, it is not old) somehow gracefully abort the connection and tell the browser that the limit was exceeded? Or maybe it is a bug in the XHR implementation that does not apply to a basic form submit. Anyway, this will be useful for people who stumble upon it.
By the way, I tried it in Chrome with the same POST size limit, and it does not hang there completely like in Firefox (the request is still stuck in some hung-up state with "no response available", but the JS engine and the UI are not blocked).
I have to map a lot of different files with different structures to a DB. There are a lot of different tables in those xlsx files, so I thought about a schemaless NoSQL approach, but I'm quite a newbie in this field.
It should be a microservice with a client interface for choosing the tables/cells to parse from the xlsx files. I'm not tied to a particular technology; it could be Java, Groovy, Python or even a JavaScript engine.
Do you know any working solution for doing this?
Here is an example xlsx (but I've also got other files, some in xls format): http://stat.gov.pl/download/gfx/portalinformacyjny/pl/defaultaktualnosci/5502/11/13/1/wyniki_finansowe_podmiotow_gospodarczych_1-6m_2015.xlsx
The work you have to do is called ETL (Extract, Transform, Load). You need to either find a good ETL tool (here is a discussion about open source ETL) or script your own solution in a language you are used to.
The advantage of a ready-made GUI tool is that you just have to drag and drop data, but if you have some custom logic or semi-structured data like in your xlsx example, support is limited.
The advantage of writing your own script is that you have all the freedom you need.
I have done some ETL work and successfully used Groovy for writing my own solutions with custom logic; for GUI work I used Altova MapForce when I had to import some exotic file types.
If you decide to write your own solution you have to (a rough sketch of these steps follows the list):
Convert all data to an easy-to-load format. In your case that means converting each xls or xlsx tab to CSV with a naming convention.
Load the files in the language you chose for the transformation
Apply your logic to put the data into the desired format
Save it in a database (SQL or NoSQL)
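A minimal Node.js sketch of the load/transform/save steps, only to illustrate the shape of the work - the CSV file name, the semicolon separator and the JSON output are assumptions for the example, and the final write stands in for whatever database insert you end up using:
// Minimal ETL sketch: load a CSV produced in step 1, reshape each row,
// and dump the result as JSON (stand-in for the actual DB load).
var fs = require('fs');

// Load: read one of the per-tab CSV files (semicolon-separated here, adjust as needed).
var lines = fs.readFileSync('wyniki_finansowe_tab1.csv', 'utf8')
    .split(/\r?\n/)
    .filter(function (line) { return line.trim() !== ''; });

var header = lines[0].split(';');

// Transform: turn each data row into a keyed object using the header row.
var records = lines.slice(1).map(function (line) {
    var cells = line.split(';');
    var record = {};
    header.forEach(function (name, i) {
        record[name] = cells[i];
    });
    return record;
});

// Save: writing JSON here stands in for an SQL INSERT or a MongoDB
// collection.insertMany(records) call.
fs.writeFileSync('tab1.json', JSON.stringify(records, null, 2));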
Maybe you should try Google Sheets to display the Excel files and Google Apps Script (https://developers.google.com/apps-script/overview) to write a custom add-on that parses the data to JSON.
The Spreadsheet Service (https://developers.google.com/apps-script/reference/spreadsheet/) has plenty of methods for accessing data in sheets.
Then you can send that JSON over an API (https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app) or put it directly into a database (https://developers.google.com/apps-script/guides/jdbc).
Maybe it isn't clean, but it is a fast solution.
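A small Apps Script sketch of that idea; the spreadsheet ID, sheet name and endpoint URL are placeholders, not anything from the question:
// Reads a sheet imported into Google Sheets and posts its contents as JSON.
function exportSheetAsJson() {
  var sheet = SpreadsheetApp.openById('SPREADSHEET_ID').getSheetByName('Sheet1');
  var rows = sheet.getDataRange().getValues();   // 2D array: first row is the header
  var header = rows[0];
  var records = rows.slice(1).map(function (row) {
    var record = {};
    header.forEach(function (name, i) { record[name] = row[i]; });
    return record;
  });
  UrlFetchApp.fetch('https://example.com/import', {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify(records)
  });
}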
I had a project that did almost the same work as your problem, but it seemed easier since I had a fixed structure for the xlsx files.
For xlsx parsing I experimented with Python and openpyxl and had no struggles working with them; they are simple, fast and easy to use.
For the database I recommend MongoDB: you can deal with documents and collections in MongoDB as simply as working with JSON objects or sets of JSON objects. PyMongo is, I think, the best and recommended way to work with MongoDB from Python.
The problem is that you have different files with different structures. I cannot recommend anything deeper without seeing your data, but you should find their general structure, or figure out a way to classify them into common sets, with each set parsed by an appropriate algorithm.
A JavaScript (Windows Script Host) solution, a kind of xlsx2csv (you can redirect the output anywhere):
var def = "1.xlsx";
if (WScript.Arguments.length > 0) def = WScript.Arguments(0);

var col = [];
var objShell = new ActiveXObject("Shell.Application");
var fs = new ActiveXObject("Scripting.FileSystemObject");

// Print the current row buffer as one CSV line.
function flush() {
    WScript.Echo(col.join(';'));
}

function import_xlsx(file) {
    var strZipFile = file;   // name of zip file
    var outFolder = ".";     // destination folder of unzipped files (must exist)
    var pwd = WScript.ScriptFullName.replace(WScript.ScriptName, "");
    var i, j, k;

    // Copy the .xlsx to a .zip so the Shell can open it as an archive.
    var strXlsFile = strZipFile;
    strZipFile = strXlsFile.replace(".xlsx", ".zip").replace(".XLSX", ".zip");
    fs.CopyFile(strXlsFile, strZipFile, true);

    // Extract only the "xl" folder from the archive.
    var objSource = objShell.NameSpace(pwd + strZipFile).Items();
    var objTarget = objShell.NameSpace(pwd + outFolder);
    for (i = 0; i < objSource.Count; i++)
        if (objSource.item(i).Name == "xl") {
            if (fs.FolderExists("xl")) fs.DeleteFolder("xl");
            objTarget.CopyHere(objSource.item(i), 256);
        }

    // Shared strings are referenced by index from the sheet cells.
    var xml = new ActiveXObject("Msxml2.DOMDocument.6.0");
    xml.load("xl\\sharedStrings.xml");
    var sel = xml.selectNodes("/*/*/*");
    var vol = [];
    for (i = 0; i < sel.length; i++) vol.push(sel[i].text);

    // Walk the first worksheet cell by cell.
    xml.load("xl\\worksheets\\sheet1.xml");
    var line = xml.selectNodes("/*/*/*");
    var li, line2 = 0, line3 = 0, row, r, t;
    for (li = 0; li < line.length; li++) {
        if (line[li].nodeName == "row")
            for (row = 0; row < line[li].childNodes.length; row++) {
                r = line[li].childNodes[row].selectSingleNode("@r").text;
                line2 = parseInt(r.replace(r.substring(0, 1), ""), 10);
                if (line2 != line3) {
                    line3 = line2;
                    if (line3 != 0) {
                        // flush the previous row
                        flush();
                        for (i = 0; i < col.length; i++) col[i] = "";
                    }
                }
                try {
                    t = line[li].childNodes[row].selectSingleNode("@t").text;
                    // column index from the letter part of the cell reference, e.g. "B3" -> 1
                    i = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ").indexOf(r.charAt(0));
                    while (i > col.length) col.push("");
                    if (t == "s") {
                        t = parseInt(line[li].childNodes[row].firstChild.text, 10);
                        col[i] = vol[t];
                    } else col[i] = line[li].childNodes[row].firstChild.text;
                } catch (e) {}
            }
        flush();
    }

    // Clean up the extracted folder and the temporary zip copy.
    if (fs.FolderExists("xl")) fs.DeleteFolder("xl");
    if (fs.FileExists(strZipFile)) fs.DeleteFile(strZipFile);
}

import_xlsx(def);
I'm developing a web app that can upload large files into Azure Blob Storage.
As a backend I am using Windows Azure Mobile Services in Node.js (the web app will generate content for mobile devices).
My client can successfully send chunks of data to the backend and everything looks fine, but at the end the uploaded file is empty. The data upload was prepared by following this tutorial: it works perfectly when the file is small enough to be uploaded in a single request, but the process fails when the file needs to be broken into chunks. It uses the ReadableStreamBuffer from the tutorial.
Can somebody help me?
Here is the code:
Back-end: createBlobBlockFromStream
[...]
//Get references
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;

var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";

//console.log(request.body);
var blobName = request.body.file;
var blobExt = request.body.ext;
var blockId = request.body.blockId;
var data = new Buffer(request.body.data, "base64");
var stream = new ReadableStreamBuffer(data);
var streamLen = stream.size();
var blobFull = blobName + "." + blobExt;

console.log("BlobFull: " + blobFull + "; id: " + blockId + "; len: " + streamLen + "; " + stream);

var blobService = azure.createBlobService(accountName, accountKey, host);

//console.log("blockId: "+blockId+"; container: "+container+";\nblobFull: "+blobFull+"streamLen: "+streamLen);
blobService.createBlobBlockFromStream(blockId, container, blobFull, stream, streamLen,
    function(error, response) {
        if (error) {
            request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
        } else {
            request.respond(statusCodes.OK, {message : "block created"});
        }
    });
[...]
Back-end: commitBlobBlock
[...]
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;

var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";

var blobName = request.body.file;
var blobExt = request.body.ext;
var blobFull = blobName + "." + blobExt;
var blockIdList = request.body.blockList;

console.log("blobFull: " + blobFull + "; blockIdList: " + JSON.stringify(blockIdList));

var blobService = azure.createBlobService(accountName, accountKey, host);

blobService.commitBlobBlocks(container, blobFull, blockIdList, function(error, result) {
    if (error) {
        request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
    } else {
        request.respond(statusCodes.OK, result);
        blobService.listBlobBlocks(container, blobFull)
    }
});
The second method returns the correct list of blockIds, so I think that the second part of the process works fine. I think it is the first method that fails to write the data inside the blocks, as if it creates empty blocks.
On the client side, I read the file as an ArrayBuffer using the FileReader JS API.
Then I convert it into a Base64-encoded string using the following code. This approach works perfectly if I create the blob in a single call, which is fine for small files.
[...]
//data contains the ArrayBuffer read by the FileReader API
var requestData = new Uint8Array(data);
var binary = "";
for (var i = 0; i < requestData.length; i++) {
    binary += String.fromCharCode(requestData[i]);
}
[...]
Any idea?
Thank you,
Ric
Which version of the Azure Storage Node.js SDK are you using? It looks like you might be using an older version; if so I would recommend upgrading to the latest (0.3.0 as of this writing). We’ve improved many areas with the new library, including blob upload; you might be hitting a bug that has already been fixed. Note that there may be breaking changes between versions.
Download the latest Node.js Module (code is also on Github)
https://www.npmjs.org/package/azure-storage
Read our blog post: Microsoft Azure Storage Client Module for Node.js v. 0.2.0 http://blogs.msdn.com/b/windowsazurestorage/archive/2014/06/26/microsoft-azure-storage-client-module-for-node-js-v-0-2-0.aspx
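For reference, pulling the current module from npm is just:
npm install azure-storage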
If that’s not the issue, can you check a Fiddler trace (or equivalent) to see if the raw data blocks are being sent to the service?
Not too sure if you're still suffering from this problem, but I was experiencing exactly the same thing and came across this while looking for a solution. Well, I found one and thought I'd share.
My problem was not with how I pushed the blocks but with how I committed them. My little proxy server had no knowledge of prior commits; it just pushes the data it is sent and commits it. The trouble is that I wasn't providing the commit message with the previously committed blocks, so it was overwriting them with the current commit each time.
So my solution:
var opts = {
    UncommittedBlocks: [IdOfJustCommitedBlock],
    CommittedBlocks: [IdsOfPreviouslyCommittedBlocks]
}
blobService.commitBlobBlocks('containerName', 'blobName', opts, function(e, r){});
For me the bit that broke everything was the format of the opts object: I wasn't providing an array of previously committed block names. It's also worth noting that I had to base64-decode the existing block names, because:
blobService.listBlobBlocks('containerName', 'fileName', 'type IE committed', fn)
returns an object for each block with the name being base64 encoded.
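A sketch of that decoding step (the CommittedBlocks/Name property names and the 'committed' filter string are my assumptions about the SDK's result shape and may differ between versions):
// List the committed blocks and decode their base64 names so they can be passed
// back as CommittedBlocks on the next commit.
blobService.listBlobBlocks('containerName', 'blobName', 'committed', function (err, result) {
    if (err) { return console.log(err); }
    var committedNames = result.CommittedBlocks.map(function (block) {
        return new Buffer(block.Name, 'base64').toString();
    });
    // committedNames now holds the ids to put into opts.CommittedBlocks above
});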
Just for completeness, here's how I push my blocks; req comes from the Express route:
var blobId = blobService.getBlockId('blobName', 'lengthOfPreviouslyCommittedArray + 1 as Int');
var length = req.headers['content-length'];
blobService.createBlobBlockFromStream(blobId, 'containerName', 'blobName', req, length, fn);
Also, with the upload I had a strange issue where the content-length header caused it to break, so I had to delete it from the req.headers object.
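In code that amounted to something like this (a sketch, not the exact lines from my proxy):
// grab the length first, then drop the header before handing the request
// stream to createBlobBlockFromStream
var length = req.headers['content-length'];
delete req.headers['content-length'];
blobService.createBlobBlockFromStream(blobId, 'containerName', 'blobName', req, length, fn);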
Hope this helps and is detailed enough.
Sorry about the vague title, but I'm a bit lost so it's hard to be specific. I've started playing around with Firefox extensions using the Add-on SDK. What I'm trying to do is watch a page for changes - a Twitch.tv chat window in this case - and save those changes to a file.
I've gotten this to work: every time something changes on the page it gets saved. But "unusual" characters, for example Korean ones, don't get saved properly. I think this has to do with the encoding of the file/string? I tried saving the same characters by copy-pasting them into Notepad; it asked me to save in Unicode, and when I did, everything worked fine. So I figured, OK, I'll change the encoding of the log file to Unicode as well before writing to it. That didn't exactly work... now all the characters are in some kind of foreign language.
The code I'm using to write to the file is this:
var {Cc, Ci, Cu} = require("chrome");
var {FileUtils} = Cu.import("resource://gre/modules/FileUtils.jsm");
var file = FileUtils.getFile("Desk", ["mylogfile.txt"]);
var stream = FileUtils.openFileOutputStream(file, FileUtils.MODE_WRONLY | FileUtils.MODE_CREATE | FileUtils.MODE_APPEND);
stream.write(data, data.length);
stream.close();
I looked at the description of FileUtils.jsm over at MDN and as far as I can tell there's no way to tell it which encoding I want to use.
If you don't know a fix, could you give me some good search terms? I seem to be coming up short on that front. Since I know basically nothing about the subject, I'm flailing around in the dark a bit at the moment.
edit:
This is what I ended up with (for now) to get this thing working:
var {Cc, Ci, Cu} = require("chrome");
var {FileUtils} = Cu.import("resource://gre/modules/FileUtils.jsm");
var file = Cc['@mozilla.org/file/local;1']
    .createInstance(Ci.nsILocalFile);
file.initWithPath('C:\\temp\\temp.txt');
if (!file.exists()) {
    file.create(file.NORMAL_FILE_TYPE, 0666);
}

var charset = 'UTF-8';
var fileStream = Cc['@mozilla.org/network/file-output-stream;1']
    .createInstance(Ci.nsIFileOutputStream);
fileStream.init(file, FileUtils.MODE_WRONLY | FileUtils.MODE_CREATE | FileUtils.MODE_APPEND, 0x200, false);

// Wrap the raw file stream in a converter stream so the data is written as UTF-8.
var converterStream = Cc['@mozilla.org/intl/converter-output-stream;1']
    .createInstance(Ci.nsIConverterOutputStream);
converterStream.init(fileStream, charset, data.length,
    Ci.nsIConverterInputStream.DEFAULT_REPLACEMENT_CHARACTER);
converterStream.writeString(data);
converterStream.close();
fileStream.close();
Dumping just the raw bytes (well, raw jschars actually) won't work. You need to first convert the data into some sensible encoding.
See e.g. the File I/O Snippets. Here are the crucial bits of creating a converter output stream wrapper:
var converter = Components.classes["@mozilla.org/intl/converter-output-stream;1"].
                createInstance(Components.interfaces.nsIConverterOutputStream);
converter.init(foStream, "UTF-8", 0, 0);
converter.writeString(data);
converter.close(); // this closes foStream
Another way is to use OS.File + TextEncoder:
let encoder = new TextEncoder(); // This encoder can be reused for several writes
let array = encoder.encode("This is some text"); // Convert the text to an array
let promise = OS.File.writeAtomic("file.txt", array, // Write the array atomically to "file.txt", using as temporary
{tmpPath: "file.txt.tmp"}); // buffer "file.txt.tmp".
It might even be possible to mix both. OS.File has the benefit that it writes data and accesses files off the main thread (so it won't block the UI while the file is being written).
I have asked this question on the Appcelerator forum as well, but as I often get better answers from you lovely people here on Stack Overflow, I am also asking it here just in case anyone can shed some light.
I have created a downloadQueue of URLs and am using it to download files with the HTTPClient. Each file in the downloadQueue is sent to the HTTPClient one at a time, with the next download being initiated only after the previous one has completed.
When I start the download it seems to work correctly and manages to download several files before it simply freezes and I get an "out of memory" error in the DDMS error log.
I tried implementing suggestions found in other posts, a sample of which are:
http://developer.appcelerator.com/question/28911/httpclient-leaks-easily-or-can-we-have-a-close-method#answer-104241
http://developer.appcelerator.com/question/35041/large-file-download-on-mobile
http://developer.appcelerator.com/question/120129/httpclient-and-setfile
http://developer.appcelerator.com/question/95521/httpclient---save-response-directly-to-file
I tried all of the following:
- moving larger file downloads directly from the nativePath rather than simply saving to file, in order to ensure that tmp files are not kept longer than necessary
- using the undocumented setFile method of the HTTPClient (this stopped my code dead without any error message, and as it is undocumented I have no idea if it was ever implemented on Android anyway)
- using a setTimeout in httpclient.onload after the file has been downloaded, to pause for 1 second before requesting the next file (I have no idea how this would help, but I am clutching at straws now)
Below are the relevant parts of my code (which is complete except for the GetFileUrls function, which I excluded for simplicity's sake as all it does is return an array of URLs).
Can anyone spot anything that might be causing my memory issue? Does anyone have any ideas, as I have tried everything I can think of? (HELP!)
var count = 0;
var downloadQueue = [];
var rootDir = Ti.Filesystem.getExternalStorageDirectory();

downloadQueue = GetFileUrls(); /* this function is not included in order to keep my post as short as possible, but it returns an array of urls */

var downloader = Ti.Network.createHTTPClient({timeout: 10000});

downloader.onerror = function(){
    Ti.API.info(this.responseData);
}

downloader.onload = function(){
    SaveFile(this.folderName, this.fileName, this.responseData);
    count += 1;
    setTimeout( function(){ DownloadFile(); }, 1000);
}

// kick off the first download
DownloadFile(downloadQueue[count]);

function DownloadFile(){
    if (count < downloadQueue.length){
        var fileUrl = downloadQueue[count];
        var fileName = fileUrl.substring(fileUrl.lastIndexOf('/') + 1);
        downloader.fileName = fileName;
        downloader.folderName = rootDir;
        downloader.open('GET', fileUrl);
        downloader.send();
    }
}

function SaveFile(foldername, filename, response){
    if (response.type == 1){
        var f = Ti.Filesystem.getFile(response.nativePath);
        var dest = Ti.Filesystem.getFile(foldername, filename);
        if (dest.exists()){
            dest.deleteFile();
        }
        f.move(dest.nativePath);
    } else {
        var dest = Ti.Filesystem.getFile(foldername, filename);
        dest.write(response);
    }
}
Try to use events instead of the nested recursion that you are using; Android does not seem to like that too much.
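A rough sketch of what that could look like, reusing the SaveFile and GetFileUrls from the question; the 'app:downloadNext' event name is made up, and creating a fresh HTTPClient per file (then dropping the reference) is an assumption on my part, not something from the original answer:
// Event-driven queue: each completed download fires an app-level event instead
// of scheduling the next download from inside onload.
var count = 0;
var downloadQueue = GetFileUrls(); // same helper as in the question
var rootDir = Ti.Filesystem.getExternalStorageDirectory();

Ti.App.addEventListener('app:downloadNext', function () {
    if (count >= downloadQueue.length) { return; }
    var fileUrl = downloadQueue[count];
    var fileName = fileUrl.substring(fileUrl.lastIndexOf('/') + 1);
    var downloader = Ti.Network.createHTTPClient({ timeout: 10000 });
    downloader.onload = function () {
        SaveFile(rootDir, fileName, this.responseData);
        downloader = null;                    // drop the reference so it can be collected
        count += 1;
        Ti.App.fireEvent('app:downloadNext'); // schedule the next file outside onload
    };
    downloader.onerror = function (e) {
        Ti.API.info(e.error);
    };
    downloader.open('GET', fileUrl);
    downloader.send();
});

Ti.App.fireEvent('app:downloadNext'); // start the queue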