I have a problem in my PhoneGap app: I would like to write a 15 MB file. When I try, the OS pulls in more and more memory and the app crashes without any message. I can reproduce this on Android and BlackBerry tablets. Is there a way to implement the writing more efficiently?
Best regards
fe.createWriter(
    (fw: any) => {
        fw.onwriteend = (e) => {
            fw.onwriteend = (e) => {
                callback();
            }
            fw.write(data);
        }
        // write BOM (dead for now)
        fw.write("");
    },
    (error: any) => {
        alert("FileWriter Failed: " + error.code);
    });
It's TypeScript, I hope JS developers won't struggle with this ;)
I found the answer.
Crash reason:
PhoneGap's FileWriter.write cannot handle a buffer that is too big (I don't know the exact size). I think the issue is that PhoneGap transfers the data to iOS through a URL scheme, and it somehow crashes when the "URL" is too long.
How to fix it: write a small block each time, as in the code below:
function gotFileWriter(writer) {
    function writeFinish() {
        // ... your done code here...
    }
    var written = 0;
    var BLOCK_SIZE = 1 * 1024 * 1024; // write 1 MB on each call to write()
    function writeNext(cbFinish) {
        var sz = Math.min(BLOCK_SIZE, data.byteLength - written);
        var sub = data.slice(written, written + sz);
        writer.write(sub);
        written += sz;
        writer.onwrite = function(evt) {
            if (written < data.byteLength)
                writeNext(cbFinish);
            else
                cbFinish();
        };
    }
    writeNext(writeFinish);
}
UPDATE Aug 12, 2014:
In my experience, the performance of saving files through the Cordova FileSystem is not good, especially for large files (>5 MB) on a phone, where it takes a few seconds. If you are downloading a file from a server to local disk, you may want a more efficient and direct way: try the cordova-plugin-file-transfer plugin.
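For reference, a minimal download with that plugin looks roughly like this (a sketch, not code from this thread: the URL and file name are placeholders, and cordova.file.dataDirectory assumes cordova-plugin-file is also installed):
// Sketch of cordova-plugin-file-transfer usage; URL and file name are placeholders.
var fileTransfer = new FileTransfer();
fileTransfer.download(
    encodeURI("https://example.com/big.bin"), // hypothetical source URL
    cordova.file.dataDirectory + "big.bin",   // target path via cordova-plugin-file
    function (entry) {
        console.log("download complete: " + entry.toURL());
    },
    function (error) {
        console.log("download error: " + error.code);
    });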
@Imskull's answer is the correct one. I just want to add the version for a Blob (make sure it is a Blob and not an ArrayBuffer), updated from the one above. What I also added is a line to make sure I am appending to the end of the file. It is more than enough to make your app stop crashing (mainly on iOS :P).
function gotFileWriter(writer) {
    function writeFinish() {
        // ... your done code here...
    }
    var written = 0;
    var BLOCK_SIZE = 1 * 1024 * 1024; // write 1 MB on each call to write()
    function writeNext(cbFinish) {
        writer.onwrite = function(evt) {
            if (written < data.size)
                writeNext(cbFinish);
            else
                cbFinish();
        };
        if (written) writer.seek(writer.length); // make sure we append to the end of the file
        writer.write(data.slice(written, written + Math.min(BLOCK_SIZE, data.size - written)));
        written += Math.min(BLOCK_SIZE, data.size - written);
    }
    writeNext(writeFinish);
}
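For completeness, here is a rough sketch of how this function might be wired up with cordova-plugin-file; the file name and the data Blob are placeholders, not values from this thread:
// Sketch: obtain a FileWriter via cordova-plugin-file and start the chunked write.
// "bigfile.bin" and the Blob held in `data` are hypothetical.
window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, function (fs) {
    fs.root.getFile("bigfile.bin", { create: true }, function (fileEntry) {
        fileEntry.createWriter(gotFileWriter, function (error) {
            alert("FileWriter Failed: " + error.code);
        });
    });
});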
I'm trying to run an open-source tool that uses sbt on localhost (port 9000). However, when I try to upload a file (.arff) to the tool, which is the first thing the tool requires, I get a FileNotFound error (full log below).
I have no experience with Scala and limited experience with JS, but as far as I understand there is an issue when uploading the file to the server. I tried files of different sizes as well as files used by the original author. I looked into the code; file upload is handled by a FileUploadService.scala class, which is as follows:
class FileUploadService(serviceSavePath: String) {
  val basePath = if (serviceSavePath.endsWith("/")) {
    serviceSavePath
  } else {
    serviceSavePath + "/"
  }

  val uploadedParts: ConcurrentMap[String, Set[FileUploadInfo]] = new ConcurrentHashMap(8, 0.9f, 1)

  def fileNameFor(fileInfo: FileUploadInfo) = {
    s"${basePath}${fileInfo.resumableIdentifier}-${fileInfo.resumableFilename}"
  }

  def isLast(fileInfo: FileUploadInfo): Boolean = {
    (fileInfo.resumableTotalSize - (fileInfo.resumableChunkSize * fileInfo.resumableChunkNumber)) < fileInfo.resumableChunkSize
  }

  def savePartialFile(filePart: Array[Byte], fileInfo: FileUploadInfo) {
    if (filePart.length != fileInfo.resumableChunkSize & !isLast(fileInfo)) {
      println("error uploading part")
      return
    }
    val partialFile = new RandomAccessFile(fileNameFor(fileInfo), "rw")
    val offset = (fileInfo.resumableChunkNumber - 1) * fileInfo.resumableChunkSize
    try {
      partialFile.seek(offset)
      partialFile.write(filePart, 0, filePart.length)
    } finally {
      partialFile.close()
    }
    val key = fileNameFor(fileInfo)
    if (uploadedParts.containsKey(key)) {
      val partsUploaded = uploadedParts.get(key)
      uploadedParts.put(key, partsUploaded + fileInfo)
    } else {
      uploadedParts.put(key, Set(fileInfo))
    }
  }
As far as I understand from the error, it occurs in the savePartialFile function where it tries to create a new RandomAccessFile. Here are the details of the last request before the error occurs; I also added the error log below. I included quite a lot of output and detail because I'm very inexperienced in Scala and web dev, but I hope everything is clear.
Cheers!
2022-01-05 13:56:15,972 [ERROR] from application in application-akka.actor.default-dispatcher-86 -
! #7m99nng77 - Internal server error, for (POST) [/upload?resumableChunkNumber=3&resumableChunkSize=1048576&resumableCurrentChunkSize=1048576&resumableTotalSize=17296294&resumableType=&resumableIdentifier=17296294-mixedDriftarff&resumableFilename=mixedDrift.arff&resumableRelativePath=mixedDrift.arff&resumableTotalChunks=16] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[FileNotFoundException: .\tmp\arff\17296294-mixedDriftarff-mixedDrift.arff (The system cannot find the path specified)]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:255)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:182)
at play.core.server.AkkaHttpServer$$anonfun$$nestedInanonfun$executeHandler$1$1.applyOrElse(AkkaHttpServer.scala:230)
at play.core.server.AkkaHttpServer$$anonfun$$nestedInanonfun$executeHandler$1$1.applyOrElse(AkkaHttpServer.scala:229)
at scala.concurrent.Future.$anonfun$recoverWith$1(Future.scala:412)
at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:37)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at play.api.libs.streams.Execution$trampoline$.executeScheduled(Execution.scala:109)
at play.api.libs.streams.Execution$trampoline$.execute(Execution.scala:71)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:284)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
at scala.concurrent.Promise.complete(Promise.scala:49)
at scala.concurrent.Promise.complete$(Promise.scala:48)
at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:183)
at scala.concurrent.Promise.failure(Promise.scala:100)
at scala.concurrent.Promise.failure$(Promise.scala:100)
at scala.concurrent.impl.Promise$DefaultPromise.failure(Promise.scala:183)
at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:41)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:38)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:43)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.io.FileNotFoundException: .\tmp\arff\17296294-mixedDriftarff-mixedDrift.arff (The system cannot find the path specified)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:124)
at services.FileUploadService.savePartialFile(FileUploadService.scala:30)
at controllers.OverviewController.$anonfun$upload$3(OverviewController.scala:161)
at play.api.data.Form.fold(Form.scala:144)
at controllers.OverviewController.$anonfun$upload$1(OverviewController.scala:156)
at scala.Function1.$anonfun$andThen$1(Function1.scala:52)
at play.api.mvc.ActionBuilderImpl.invokeBlock(Action.scala:482)
at play.api.mvc.ActionBuilderImpl.invokeBlock(Action.scala:480)
at play.api.mvc.ActionBuilder$$anon$9.invokeBlock(Action.scala:331)
at play.api.mvc.ActionBuilder$$anon$9.invokeBlock(Action.scala:326)
at play.api.mvc.ActionBuilder$$anon$2.apply(Action.scala:419)
at play.api.mvc.Action.$anonfun$apply$2(Action.scala:96)
at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:302)
at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:37)
... 12 common frames omitted
Apparently, the author of the code does not create the folders programmatically. So creating the two nested empty directories "tmp/arff" in the base directory of the sbt project (matching the .\tmp\arff path in the error) worked.
Ey! I recently made a small framework to build my small webpages and projects on top of.
Today I'm reading a lot of articles about tricks to speed things up, and I'm interested in improving XHR speed.
I've been reading and found that some file extensions usually get cached by default and others don't.
I use a special filename.ff extension in my framework to know which files I want to fetch when accessing a resource.
As a live example:
https://bugs.stringmanolo.ga/#projects/fastframework is downloaded via XHR from https://github.com/StringManolo/bugWriteups/blob/master/projects/fastframework/fastframework.ff when you click the fastframework link on the page https://bugs.stringmanolo.ga/#projects
My question is:
If I change the extension from fastframework.ff to fastframework.ff.js, will the file be cached by the browser and then download faster? Will it also work offline, or is it already cached? Or is changing the framework code to use .ff.js not going to make any difference at all?
I finally solved it in a better way, using service workers and the Cache API.
I'll leave the code I used here; maybe it will be helpful to someone in the future.
ff.js (ff is a plain object: ff = {})
/*** Cache Service Workers Code */
ff.cache = {};
ff.cache.resources = [];

ff.cache.start = function(swName, ttl) {
    let tl = 0;
    tl = localStorage.cacheTTL;
    if (+tl) {
        const now = new Date();
        if (now.getTime() > +localStorage.cacheTTL) {
            localStorage.cacheTTL = 0;
            caches.delete("cachev1").then(function() {
            });
        }
    } else {
        navigator.serviceWorker.register(swName, {
            scope: './'
        })
        .then(function(reg) {
            caches.open("cachev1")
            .then(function(cache) {
                cache.addAll(ff.cache.resources)
                .then(function() {
                    localStorage.cacheTTL = +(new Date().getTime()) + +ttl;
                });
            });
        })
        .catch(function(err) {
        });
    }
};

ff.cache.clean = function() {
    caches.delete("cachev1").then(function() {
    });
};
/* End Cache Service Workers Code ***/
cache.js (this is the service worker intercepting the requests)
self.addEventListener('fetch', (e) => {
    e.respondWith(caches.match(e.request).then((response) => {
        if (response)
            return response
        else
            return fetch(e.request)
    }))
})
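As an aside, the more common pattern is to populate the cache from the service worker's own install event rather than from the page; a minimal sketch for comparison (the cache name matches the one above, and the resource list is abbreviated):
// Sketch: install-time caching, shown for comparison with the page-driven
// approach above. The resource list here is abbreviated.
self.addEventListener('install', (e) => {
    e.waitUntil(caches.open("cachev1").then((cache) => {
        return cache.addAll(["./index.html", "./main.js", "./main.css"])
    }))
})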
main.js (this is the main file included in the index.html file)
ff.cache.resources = [
"./logs/dev/historylogs.ff",
"./blogEntries/xss/xss1.ff",
"./blogEntries/xss/w3schoolsxss1.ff",
"./blogEntries/csrf/w3schoolscsrf1.ff",
"./projects/fastframework/fastframework.ff",
"./projects/jex/jex.ff",
"./ff.js",
"./main.js",
"./main.css",
"./index.html",
"./resources/w3schoolspayload.png",
"./resources/w3schoolsxsslanscape.png",
"./resources/w3schoolsxss.png"];
ff.cache.start("./cache.js", 104800000);
/* 104800000 ms is roughly 29 hours; a full week would be 604800000 ms */
You can test it live at https://bugs.stringmanolo.ga/index.html. It is hosted from the GitHub repo in case you need to see more code.
I am trying to speed up an upload, so I tried different solutions on both the back end and the front end:
1) I uploaded a tar file (already compressed).
2) I tried chunked upload (sequential): if the response is a success, the next API call is triggered, and on the back-end side the content is appended to the same file.
3) I tried chunked upload in parallel: at a single time I make 50 requests to upload the chunk content (I know a browser only handles 6 requests at a time). On the back-end side we store all the chunk files separately and, after receiving the final request, append all those chunks into a single file.
But what I observed is that I am not seeing much difference in any of these cases.
The following is my service file:
export class largeGeneUpload {
    chromosomeFile: any;
    options: any;
    chunkSize = 1200000;
    activeConnections = 0;
    threadsQuantity = 50;
    totalChunkCount = 0;
    chunksPosition = 0;
    failedChunks = [];

    sendNext() {
        if (this.activeConnections >= this.threadsQuantity) {
            return;
        }
        if (this.chunksPosition === this.totalChunkCount) {
            console.log('all chunks are done');
            return;
        }
        const i = this.chunksPosition;
        const url = 'gene/human';
        const chunkIndex = i;
        const start = chunkIndex * this.chunkSize;
        const end = Math.min(start + this.chunkSize, this.chromosomeFile.size);
        const currentchunkSize = this.chunkSize * i;
        const chunkData = this.chromosomeFile.webkitSlice ? this.chromosomeFile.webkitSlice(start, end) : this.chromosomeFile.slice(start, end);
        const fd = new FormData();
        const binar = new File([chunkData], this.chromosomeFile.upload.filename);
        console.log(binar);
        fd.append('file', binar);
        fd.append('dzuuid', this.chromosomeFile.upload.uuid);
        fd.append('dzchunkindex', chunkIndex.toString());
        fd.append('dztotalfilesize', this.chromosomeFile.upload.total);
        fd.append('dzchunksize', this.chunkSize.toString());
        fd.append('dztotalchunkcount', this.chromosomeFile.upload.totalChunkCount);
        fd.append('isCancel', 'false');
        fd.append('dzchunkbyteoffset', currentchunkSize.toString());

        this.chunksPosition += 1;
        this.activeConnections += 1;

        this.apiDataService.uploadChunk(url, fd)
            .then(() => {
                this.activeConnections -= 1;
                this.sendNext();
            })
            .catch((error) => {
                this.activeConnections -= 1;
                console.log('error here');
                // chunksQueue.push(chunkId);
            });
        this.sendNext();
    }

    uploadChunk(resrc: string, item) {
        return new Promise((resolve, reject) => {
            this._http.post(this.baseApiUrl + resrc, item, {
                headers: this.headers,
                withCredentials: true
            }).subscribe(r => {
                console.log(r);
                resolve();
            }, err => {
                console.log('err', err);
                reject();
            });
        });
    }
But the thing is, if I upload the same file to Google Drive, it does not take much time. Consider a 700 MB file: uploading it to Google Drive took 3 minutes, but uploading the same 700 MB file with my Angular code to our back-end server took 7 minutes. How do I improve the performance of the file upload?
Forgive me, it may seem a silly answer, but this depends on your hosting infrastructure.
A lot of variables can cause this, but from your story it has nothing to do with your front-end code. Splitting the file into chunks is not going to help, because browsers have their own optimized algorithms for uploading files. The most likely culprit is your back-end server or the connection from your client to the server.
You say that Google Drive is fast, but you should also know that Google has a very widespread global infrastructure with top-of-the-line cloud servers. If you are using, for example, a 2-euro-per-month fixed-place hosting provider, you cannot expect the same processing and network power as Google.
I am trying to write a JXA script in Apple's Script Editor that compresses a string using the LZ algorithm and writes it to a text (JSON) file:
var story = "Once upon a time in Silicon Valley..."
var storyC = LZString.compress(story)
var data_to_write = "{\x22test\x22\x20:\x20\x22"+storyC+"\x22}"
app.displayAlert(data_to_write)
var desktopString = app.pathTo("desktop").toString()
var file = `${desktopString}/test.json`
writeTextToFile(data_to_write, file, true)
Everything works, except that the LZ-compressed string has been transformed into a string of "?" characters by the time it reaches the output file, test.json.
It should look like:
{"test" : "㲃냆Њޱᐈ攀렒삶퓲ٔ쀛䳂䨀푖㢈Ӱນꀀ"}
Instead it looks like:
{"test" : "????????????????????"}
I have a feeling the conversion is happening in the app.write command used by the writeTextToFile() function (which I pulled from an example in Apple's Mac Automation Scripting Guide):
var app = Application.currentApplication()
app.includeStandardAdditions = true

function writeTextToFile(text, file, overwriteExistingContent) {
    try {
        // Convert the file to a string
        var fileString = file.toString()
        // Open the file for writing
        var openedFile = app.openForAccess(Path(fileString), { writePermission: true })
        // Clear the file if content should be overwritten
        if (overwriteExistingContent) {
            app.setEof(openedFile, { to: 0 })
        }
        // Write the new content to the file
        app.write(text, { to: openedFile, startingAt: app.getEof(openedFile) })
        // Close the file
        app.closeAccess(openedFile)
        // Return a boolean indicating that writing was successful
        return true
    }
    catch(error) {
        try {
            // Close the file
            app.closeAccess(file)
        }
        catch(error) {
            // Report the error if closing failed
            console.log(`Couldn't close file: ${error}`)
        }
        // Return a boolean indicating that writing failed
        return false
    }
}
Is there a substitute for the app.write command that preserves the LZ-compressed string, or a better way to accomplish what I am trying to do?
In addition, I am using the readFile() function (also from the Scripting Guide) to load the LZ string back into the script:
function readFile(file) {
    // Convert the file to a string
    var fileString = file.toString()
    // Read the file and return its contents
    return app.read(Path(fileString))
}
But rather than returning:
{"test" : "㲃냆Њޱᐈ攀렒삶퓲ٔ쀛䳂䨀푖㢈Ӱນꀀ"}
It is returning:
"{\"test\" : \"㲃냆੠Њޱᐈ攀렒삶퓲ٔ쀛䳂䨀푖㢈Ӱນꀀ\"}"
Does anybody know a fix for this too?
I know that it is possible to use Cocoa in JXA scripts, so maybe the solution lies therein?
I am just getting to grips with JavaScript so I'll admit trying to grasp Objective-C or Swift is way beyond me right now.
I look forward to any solutions and/or pointers that you might be able to provide me. Thanks in advance!
After some further Googling, I came across these two posts:
How can I write UTF-8 files using JavaScript for Mac Automation?
read file as class utf8
I have thus altered my script accordingly.
writeTextToFile() now looks like:
function writeTextToFile(text, file) {
    // source: https://stackoverflow.com/a/44293869/11616368
    var nsStr = $.NSString.alloc.initWithUTF8String(text)
    var nsPath = $(file).stringByStandardizingPath
    var successBool = nsStr.writeToFileAtomicallyEncodingError(nsPath, false, $.NSUTF8StringEncoding, null)
    if (!successBool) {
        throw new Error("function writeFile ERROR:\nWrite to File FAILED for:\n" + file)
    }
    return successBool
};
While readFile() looks like:
ObjC.import('Foundation')

const readFile = function (path, encoding) {
    // source: https://github.com/JXA-Cookbook/JXA-Cookbook/issues/25#issuecomment-271204038
    pathString = path.toString()
    !encoding && (encoding = $.NSUTF8StringEncoding)
    const fm = $.NSFileManager.defaultManager
    const data = fm.contentsAtPath(pathString)
    const str = $.NSString.alloc.initWithDataEncoding(data, encoding)
    return ObjC.unwrap(str)
};
Both use Objective-C to overcome app.write and app.read's inability to handle UTF-8.
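For completeness, a quick round-trip sketch using the two functions above, reusing the data_to_write and file variables defined in the question:
// Sketch: write the compressed JSON, read it back, and display it.
writeTextToFile(data_to_write, file)
var roundTripped = readFile(file)
app.displayAlert(roundTripped)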
I am running node.js on Raspbian and trying to save/update a file every 2 to 3 seconds using the following code:
var saveFileSaving = false;

function loop() {
    mainLoop = setTimeout(function() {
        // update data
        saveSaveFile(data, function() {
            //console.log("Saved data to file");
            loop();
        });
    }, 1500);
}

function saveSaveFile(data, callback) {
    if (!saveFileSaving) {
        saveFileSaving = true;
        var wstream = fs.createWriteStream(path.join(__dirname, 'save.json'));
        wstream.on('finish', function () {
            saveFileSaving = false;
            callback(data);
        });
        wstream.on('error', function (error) {
            console.log(error);
            saveFileSaving = false;
            wstream.end();
            callback(null);
        });
        wstream.write(JSON.stringify(data));
        wstream.end();
    } else {
        callback(null);
    }
}
When I run this it works fine for an hour then starts spitting out:
[25/May/2016 11:3:4 am] { [Error: EROFS, open '<path to file>']
errno: 56,
code: 'EROFS',
path: '<path to file>' }
I have tried the jsonfile plugin, which also emits a similar write error after an hour.
I have tried both fileSystem.writeFile and fileSystem.writeFileSync; both give the same error after an hour.
I was thinking it had to do with the handle not being released before a new save occurs, which is why I started using the saveFileSaving flag.
Resetting the system via hard reset fixes the issue (a soft reset does not work, as the system seems to be locked up).
Any suggestions? I have searched the web and only found one other, slightly similar question from 4 years ago which was left in limbo.
Note: I am using the callback function from the code to continue with the main loop.
I was able to get this working by unlinking the file and re-creating it on every save. While it is not pretty, it works and shouldn't cause too much overhead.
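For illustration, the unlink-then-rewrite version of saveSaveFile might look like this (a sketch based on the question's code; fs and path are the modules already required there):
// Sketch: delete the old save file, then write a fresh one, as described above.
function saveSaveFile(data, callback) {
    var file = path.join(__dirname, 'save.json');
    fs.unlink(file, function () {
        // Ignore unlink errors (e.g. the file does not exist yet) and rewrite.
        fs.writeFile(file, JSON.stringify(data), function (err) {
            if (err) console.log(err);
            callback(err ? null : data);
        });
    });
}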
I also added a backup solution which saves a backup every 5 minutes, in case the save file has issues.
Thank you for everyone's help.
Here are my ideas:
1) Check the free space when this problem happens by typing in a terminal:
df -h
2) Also check whether the file is editable when the problem occurs, e.g. with nano or vim.
3) Your code is too complicated for simply scheduling data manipulation and writing it to a file, and whenever the file is busy (saveFileSaving) you lose data until the next iteration. Try this code instead:
var
    async = require('async'),
    fs = require('fs'),
    path = require('path');

async.forever(function(next) {
    // some data manipulation
    try {
        fs.writeFileSync(path.join(__dirname, 'save.json'), JSON.stringify(data));
    }
    catch(ex) {
        console.error('Error writing data to file:', ex);
    }
    setTimeout(next, 2000);
});
4) How about keeping the file descriptor open?
var
    async = require('async'),
    fs = require('fs'),
    path = require('path');

var file = fs.createWriteStream(path.join(__dirname, 'save.json'));

async.forever(function(next) {
    // some data manipulation
    file.write(JSON.stringify(data));
    setTimeout(next, 2000);
});

var handleSignal = function (exc) {
    // close file
    file.end();
    if (exc) {
        console.log('STOPPING PROCESS BECAUSE OF:', exc);
    }
    process.exit(-1);
};

process.on('uncaughtException', handleSignal);
process.on('SIGHUP', handleSignal);
5) Hardware or software problems (maybe OS drivers) with the Raspberry Pi's storage controller. Note that EROFS means "read-only file system": Raspbian typically remounts the SD card read-only when the kernel detects filesystem errors, which would also explain why only a hard reset clears it.