cordova-plugin-file clean overwrite of file - javascript

I'm looking for a clean and safe way to fully overwrite existing files when using the cordova-plugin-file Cordova plugin. What I've found is that even when using the exclusive: false option, if the new file contents are shorter than the existing file, the remainder of the existing file persists at the end of the new file.
Example: I have an existing file with the contents 0123456789 and want to replace it with abcd. When using exclusive: false, the file I end up with afterwards has the contents abcd456789.
Obviously this causes complications when reading back, especially when I expect these files to be valid JSON.
I've been unable to find other answers that don't simply say "use exclusive: false".
So far, I can work around this by manually deleting the file first, then writing to it, but this leaves a point where I'm at risk of losing the entire file's data if the app closes at the wrong moment.
Another option may be to write to a temp file, then remove the existing one, then copy the temp over, then remove the temp. And when reading, check for the file I want; if it's not there, check for a temp file for it, then copy and clean up if it exists. This feels like a very long-winded workaround for something that should be an option.
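For the sake of discussion, a rough, untested sketch of that temp-file dance (the replaceFileViaTemp name and the .tmp suffix are made up here, and it assumes path is a plain file name with no stale temp file left over):
private replaceFileViaTemp<T>(path: string, data: T, done?: () => void): void {
  const dir = FileService.map.Settings
  const tempPath = `${path}.tmp` // hypothetical temp name
  dir.getFile(tempPath, { create: true }, tempEntry => {
    tempEntry.createWriter(writer => {
      writer.onwriteend = () => {
        // moveTo replaces an existing entry with the same name, so the
        // original file is swapped for the fully written temp in one step
        tempEntry.moveTo(dir, path, () => done && done())
      }
      writer.write(new Blob([JSON.stringify(data)], { type: 'application/json' }))
    })
  })
}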
Am I missing something here?
This is my existing workaround, though it does not handle the potential of the app closing at the wrong moment yet. Is there a nicer way before I have to go down that rabbit hole?
private replaceFileAtPath<T>(path: string, data: T): void {
  FileService.map.Settings.getFile(path, { create: true }, fileEntry => {
    // remove the existing file first, then re-create it and write the new data
    fileEntry.remove(() => {
      FileService.map.Settings.getFile(path, { create: true }, fe =>
        this.writeFile(fe, data)
      )
    })
  })
}
private writeFile<T>(file: FileEntry, data: T, cb?: () => void): void {
  file.createWriter(writer => {
    const blob = new Blob([JSON.stringify(data)], { type: 'application/json' })
    // invoke the optional callback once the write has finished
    writer.onwriteend = () => cb && cb()
    writer.write(blob)
  })
}

I guess I found the way to fix this.
You can use the HTML5 FileWriter truncate() method.
file.createWriter(writer => {
  const blob = new Blob([JSON.stringify(data)], { type: 'application/json' })
  let truncated = false;
  writer.onwriteend = function() {
    if (!truncated) {
      truncated = true;
      // cut the file off at the end of the new data, discarding any old tail
      this.truncate(this.position);
      return;
    }
  };
  writer.write(blob)
})

Related

Using Node's 'Docx' to append to an existing Word doc with JS

I'm curious if it is possible to append to a generated Word document that I've created with JS and the docx module. Currently I can generate the document and format it, yet I'm not seeing anything in their documentation about appending or adding a paragraph to an existing document (there is a section on exporting, but it always creates a new file, even if it's the same name). Is this a limitation in JavaScript? I'm assuming that since I'd want to look for the document in the system files, this creates a security issue (JS from a browser looking for a file on an end user's system), and hence why docx doesn't do it. Any guidance on this is greatly appreciated. Also, if it's not possible, would using the Word APIs solve this?
P.S. I can share the code that generates the document, but that part is running fine, I just need to know if what I want is possible or if I'm wasting time.
This is a function I've tried and found on Stack Overflow, but it was for a web app; this is a Chrome extension. I've looked around and can't find anything else to try. Ideally I'd like to write to the generated doc and add to it.
// pretty sure I can't utilize a function like this unfortunately
// all code is functional before this is called
function writeToDocument(doc, text) {
  let paragraph = new docx.Paragraph();
  // invalid code
  // paragraph.addRun(new docx.TextRun(text));
  // doc.addParagraph(paragraph);
  let packer = new docx.Packer();
  packer.toBuffer(doc).then((buffer) => {
    fs.writeFileSync(docName + ".docx", buffer);
  });
}
It looks like this is a limitation of the library. There is addSection(), but it is private. Also, there are no methods to open a previously generated file.
The only way is to first create the content, and later create the document and save it:
let paragraphs = [];

// any way of adding an element to the array works, e.g.
paragraphs[paragraphs.length] = new Paragraph({
  children: [
    new TextRun("Hello World"),
    new TextRun({
      text: "Foo Bar",
      bold: true,
    }),
    new TextRun({
      text: "\tGithub is the best",
      bold: true,
    }),
  ],
});
// paragraphs[paragraphs.length] = addAnotherParagraph()

// create the document
const doc = new Document({
  sections: [
    {
      properties: {},
      children: paragraphs,
    },
  ],
});

// and save it in your favourite way
Packer.toBuffer(doc).then((buffer) => {
  // why does the `docx` documentation use the sync version?... You should avoid it
  fs.writeFileSync("My Document.docx", buffer);
});

How to check for unique values in chrome.storage array

I am really struggling with this:
What I am trying to do is create an array, stored in chrome.storage.sync, of the URLs of unique pages a user visits. Adding the URL to the array works fine; what I am struggling with is how to check if that URL already exists in the array. This is what I have currently:
function getURL(array) {
  chrome.tabs.query({active: true, lastFocusedWindow: true, currentWindow: true}, tabs => {
    let tabUrl = tabs[0].url;
    addArr(tabUrl, array);
  });
}

function addArr(tabUrl, array) {
  array.map((x) => {
    let urlCheck = x.url;
    console.log(urlCheck);
    if (urlCheck != tabUrl) {
      array.push(tabUrl);
      chrome.storage.sync.set({data: array}, function() {
        console.log("added")
      });
    }
  })
}
The annoying thing I can't work out is that it works when I remove anything to do with the map and only keep the:
array.push(tabUrl);
chrome.storage.sync.set({data: array}, function() {
  console.log("added")
});
This makes even less sense because when I inspect the array by inspecting the popup and then looking at the console, there is no error, only an empty data array. The error that it does give me, however, when I inspect the popup (actually right-click on the extension) is
Error handling response: TypeError: Cannot read properties of
undefined (reading 'url') at
chrome-extension://jkhpfndgmiimdbmjbajgdnmgembbkmgf/getURL.js:31:30
That getURL.js line 31? That is part of the code that gets the active tab, i.e.
let tabUrl = tabs[0].url;
which isn't even in the same function.
Any ideas? I am totally stumped.
Use the includes method:
function addArr(tabUrl, array) {
  if (!array.includes(tabUrl)) {
    array.push(tabUrl);
    chrome.storage.sync.set({data: array});
  }
}
However, chrome.storage.sync has a limit of 8192 bytes per value, which can only hold in the ballpark of 100 URLs, so you'll have to limit the number of elements in the array (e.g. by using array.splice), use several keys in the storage, or use chrome.storage.local. There are also other limits in the sync storage, such as the frequency of write operations, which you can easily exceed if the user does a lot of navigation. An even better solution is to use IndexedDB with each URL as a separate key, so you don't have to rewrite the entire array each time.
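For example, capping the array before writing might look roughly like this (the cap of 100 entries is only a ballpark guess at what fits in the 8192-byte quota; tune it for your average URL length):
function addArr(tabUrl, array) {
  if (!array.includes(tabUrl)) {
    array.push(tabUrl);
    // drop the oldest entries so the stored value stays under the sync quota
    if (array.length > 100) {
      array.splice(0, array.length - 100);
    }
    chrome.storage.sync.set({data: array});
  }
}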

Replace JS object by reference [refactor working code]

tl;dr: Working code is at the bottom; can it be made more elegant?
I am building a metalsmith (static site generator) plugin. Metalsmith plugins always take the form:
const myPlugin = options => (files, metalsmith, done) => {
  // Mutate `files` or `metalsmith` in place.
  done()
}
I have written my plugin in a functional (immutable) style (with Ramda.js) and would like to completely overwrite files with the new value. The following is conceptually what I want, but it won't work, because it reassigns files to updated rather than manipulating the files object on the heap.
const myPlugin = options => (files, metalsmith, done) => {
  const updated = { foo: "foo" }
  files = updated
  done()
}
I have achieved the desired functionality with the following, but it seems inelegant.
const myPlugin = options => (files, metalsmith, done) => {
  const updated = { foo: "foo" }
  deleteMissingKeys(files, updated)
  Object.assign(files, updated)
  done()
}

const deleteMissingKeys = (old, updated) => {
  Object.keys(old).forEach(key => {
    if (!updated.hasOwnProperty(key)) {
      delete old[key]
    }
  })
}
Is there a better way to achieve these ends?
There is no super elegant way to do this in JavaScript, but this is not a bad thing. The truth is, well-written JavaScript should not need a behavior like this. There is no such thing as "pass-by-reference" in JavaScript, so it is natural that attempts like yours will be inelegant.
Should a library need a behavior like this, instead of trying to work out a "hack" for pass-by-reference, there is a much more "javascriptonic" way to do it, which is to pass around a wrapper object for the desired object:
// instead of trying to use a "hack" to pass-by-reference
var myObj = { /* ... */ };

function myFunc(obj) {
  // your hack here to modify obj, since
  // obj = { /* ... */ }
  // won't work, of course
}

myFunc(myObj);

// you should use a wrapper object
var myWrapper = {
  myObj: { /* ... */ }
}

function myFunc(wrapper) {
  wrapper.myObj = { /* ... */ };
}

myFunc(myWrapper);
I strongly suggest you reconsider why you really want to do this in the first place.
But if you insist, your solution isn't that bad, I like how you used Object.assign() instead of a clunky for loop to add the fields.
I should add, though, that depending on the situation you might also want to set the prototype of the object to the intended value (if, for example, the old object was an instance of Date and you want to make it a plain object, you certainly need to call Object.setPrototypeOf(old, Object.prototype)).
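A tiny illustration of that last point, assuming old started out as a Date and should end up as a plain object:
const old = new Date();
deleteMissingKeys(old, {});                   // strip any own keys, as above
Object.assign(old, { foo: "foo" });           // copy the new fields in place
Object.setPrototypeOf(old, Object.prototype); // drop the Date prototype
console.log(old instanceof Date);             // false
console.log(old.foo);                         // "foo"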

How to update the local path with a commit version?

I'm trying to clone a repo to the local file system and then checkout a specific commit.
This is what I have:
Git.Clone(GIT_REPO_URL, localPath, CLONE_OPTIONS).then((repo) => {
  return repo.getCommit(version).then((commit) => {
    // use the local tree
  });
}).catch((error) => {
  // handle clone failure
});
This clones the repo just fine, but the local version I end up with is the current HEAD of master and not the commit I requested (version).
How do I update the local tree to match this commit?
Thanks.
getCommit does not modify the local tree; it merely retrieves the commit in memory in the script, so that you can work with it without modifying the local tree. (This would, for example, be useful if you wanted to walk through the git history and do some analysis, but didn't need to actually update the local tree to access the files themselves.)
To actually check out a specific branch or commit, you want to use the checkoutBranch and checkoutRef functions, which actually do update the local working tree. (Notice how their descriptions explicitly say so, unlike getCommit's, which doesn't say it modifies the working tree because it doesn't actually do that.)
If you want to check out an arbitrary commit, you need to use the Checkout.tree() function instead.
var repository;

NodeGit.Clone(GIT_REPO_URL, localPath, CLONE_OPTIONS)
  .then(function(repo) {
    repository = repo;
    return repo.getCommit(version);
  })
  .then(function(commit) {
    return NodeGit.Checkout.tree(repository, commit, {
      checkoutStrategy: NodeGit.Checkout.STRATEGY.FORCE
    });
  })
  .catch((error) => {
    // handle clone failure
  });
You may or may not also want to detach your HEAD.
Edit:
This will check out and set HEAD to the commit in question.
var repository;

NodeGit.Clone(GIT_REPO_URL, localPath, CLONE_OPTIONS)
  .then(function(repo) {
    repository = repo;
    return repo.getCommit(version);
  })
  .then(function(commit) {
    repository.setHeadDetached(commit.id());
  })
  .catch((error) => {
    // handle clone failure
  });
I haven't managed to make it work, and as it turned out to be way too complicated to get something this trivial working, I've decided to just do this:
import * as process from "child_process";

const command = `git clone ${ GIT_REPO_URL } ${ repoPath } && cd ${ repoPath } && git checkout ${ version }`;

process.exec(command, error => {
  ...
});

Read a bunch of JSON files, transform them, and save them

I'm trying to achieve this with Gulp.
Read every .json file in a given directory including subdirectories.
Transform them in some way, for example add a new root level, etc.
Save them into a new directory, keeping the original structure.
The point where I'm lost is how to pipe reading/writing JSON to src.
I have the following skeleton now.
gulp.task("migratefiles", function () {
return gulp.src("files/**/*.json")
.pipe(/* WHAT HERE? */)
.pipe(gulp.dest("processed"));
});
There are a number of ways you can do this:
(1) Use the gulp-json-transform plugin:
var jsonTransform = require('gulp-json-transform');

gulp.task("migratefiles", function () {
  return gulp.src("files/**/*.json")
    .pipe(jsonTransform(function(json, file) {
      var transformedJson = {
        "newRootLevel": json
      };
      return transformedJson;
    }))
    .pipe(gulp.dest("processed"));
});
Pros:
Easy to use
Supports asynchronous processing (if you return a Promise)
Gives access to path of each file
Cons:
Only rudimentary output formatting
(2) Use the gulp-json-editor plugin:
var jeditor = require('gulp-json-editor');

gulp.task("migratefiles", function () {
  return gulp.src("files/**/*.json")
    .pipe(jeditor(function(json) {
      var transformedJson = {
        "newRootLevel": json
      };
      return transformedJson;
    }))
    .pipe(gulp.dest("processed"));
});
Pros:
Easy to use
Automatically recognizes the indentation your input files use (two spaces, four spaces, tabs etc.) and formats your output files accordingly
Supports various js-beautify options
Cons:
Doesn't seem to support asynchronous processing
Doesn't seem to have a way to access path of each file
(3) Do it manually (by directly accessing the vinyl file object using map-stream):
var map = require('map-stream');

gulp.task("migratefiles", function () {
  return gulp.src("files/**/*.json")
    .pipe(map(function(file, done) {
      var json = JSON.parse(file.contents.toString());
      var transformedJson = {
        "newRootLevel": json
      };
      file.contents = Buffer.from(JSON.stringify(transformedJson));
      done(null, file);
    }))
    .pipe(gulp.dest("processed"));
});
Pros:
Full control/access over everything
Supports asynchronous processing (through a done callback)
Cons:
Harder to use
