I'd like to write a Thunderbird add-on that encrypts stuff. For this, I have already extracted all the data from the compose window. Now I have to save it into files and run a local executable for encryption. But I have found no way to save the files and execute an executable on the local machine. How can I do that?
I found the File and Directory Entries API documentation, but it seems not to work. I always get undefined when trying to get the object with this code:
var filesystem = FileSystemEntry.filesystem;
console.log(filesystem); // --> undefined
Failing that, is there a working add-on that I can examine to find out how this works, and maybe which permissions I have to request in the manifest.json?
NOTE: Must work cross-platform (Windows and Linux).
The answer is that WebExtensions are currently not able to execute local files. Saving to an arbitrary local folder on the disk is not possible either.
Instead, you need to add a WebExtension Experiment to your project and use the legacy APIs there. In the experiment you can use the IOUtils and FileUtils modules to reach your goal:
Execute a file:
In your background JS file:
var ret = await browser.experiment.execute("/usr/bin/executable", [ "-v" ]);
In the experiment you can execute it like this:
var { ExtensionCommon } = ChromeUtils.import("resource://gre/modules/ExtensionCommon.jsm");
var { FileUtils } = ChromeUtils.import("resource://gre/modules/FileUtils.jsm");
// Services is used below for the alert window
var { Services } = ChromeUtils.import("resource://gre/modules/Services.jsm");
var { XPCOMUtils } = ChromeUtils.import("resource://gre/modules/XPCOMUtils.jsm");
XPCOMUtils.defineLazyGlobalGetters(this, ["IOUtils"]);
async execute(executable, arrParams) {
var fileExists = await IOUtils.exists(executable);
if (!fileExists) {
Services.wm.getMostRecentWindow("mail:3pane")
.alert("Executable [" + executable + "] not found!");
return false;
}
var progPath = new FileUtils.File(executable);
let process = Cc["@mozilla.org/process/util;1"].createInstance(Ci.nsIProcess);
process.init(progPath);
process.startHidden = false;
process.noShell = true;
process.run(true, arrParams, arrParams.length);
return true;
},
Save an attachment to disk:
In your background JS file you can do it like this:
var f = await messenger.compose.getAttachmentFile(attachment.id);
var blob = await f.arrayBuffer();
var t = await browser.experiment.writeFileBinary(tempFile, blob);
In the experiment you can then write the file like this:
async writeFileBinary(filename, data) {
// convert the ArrayBuffer to a Uint8Array, which is what IOUtils.write() expects
var uint8 = new Uint8Array(data);
// then we can save it
var ret = await IOUtils.write(filename, uint8);
return ret;
},
IOUtils documentation:
https://searchfox.org/mozilla-central/source/dom/chrome-webidl/IOUtils.webidl
FileUtils documentation:
https://searchfox.org/mozilla-central/source/toolkit/modules/FileUtils.jsm
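For completeness: such an experiment also has to be declared in the manifest.json and wrapped in an ExtensionAPI subclass. The following is only a minimal sketch of that wiring; the file names and the "experiment" namespace are illustrative, and the schema.json must declare the execute and writeFileBinary functions accordingly. In manifest.json:
"experiment_apis": {
  "experiment": {
    "schema": "experiment/schema.json",
    "parent": {
      "scopes": ["addon_parent"],
      "paths": [["experiment"]],
      "script": "experiment/implementation.js"
    }
  }
}
In experiment/implementation.js, the methods shown above live inside the API class (the variable name must match the API namespace):
var experiment = class extends ExtensionCommon.ExtensionAPI {
  getAPI(context) {
    return {
      experiment: {
        // async execute(executable, arrParams) { ... see above ... },
        // async writeFileBinary(filename, data) { ... see above ... },
      },
    };
  }
};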
Related
I am trying to download multiple files from a OneDrive folder. My code is below, but it only downloads the last file and not all of them.
for(const f in files)
{
var fileURL = (files[f]["@microsoft.graph.downloadUrl"]);
var fileName = (JSON.stringify(files[f].name)).slice(1,-1);
var request = https.get(fileURL, function(response){
console.log(fileURL);
if(response.statusCode == 200){
var file = fs.createWriteStream(`./temp/${userId}/${fileName}`);
response.pipe(file);
}
request.setTimeout(60000,function(){
request.destroy();
});
});
}
i.e. the console log would print
FILE_URL1
FILE_URL1
FILE_URL1
rather than
FILE_URL1
FILE_URL2
FILE_URL3
Note that if the console.log(fileURL) is placed before var request = https.get..., it prints out the three file URLs. I'm not sure if it's a problem with the loop or if there is something else. I am quite new at JavaScript, so I don't know a lot.
Replace the var with const or let and you will see a different result.
See this question: What's the difference between using "let" and "var"?
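To see why, here is a minimal illustration of the difference (nothing OneDrive-specific is assumed):
// with var there is one function-scoped binding shared by every callback,
// so by the time the callbacks run, the loop has already finished
for (var i = 0; i < 3; i++) {
  setTimeout(function () { console.log(i); }, 10); // prints 3, 3, 3
}
// with let each iteration gets its own block-scoped binding
for (let j = 0; j < 3; j++) {
  setTimeout(function () { console.log(j); }, 10); // prints 0, 1, 2
}
The same thing happens to fileURL and fileName above: declared with var, all three https.get callbacks see the values of the last iteration.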
I am fairly new to JS/WinAppDriver.
The application I am trying to test is a Windows-based "Click Once" application from .NET, so I have to go to a website in IE and click "Install". This opens the application.
Once the application is running, I have no way to connect to it to perform my UI interactions from JavaScript.
Using C#, I looped through the processes looking for a process name, got the window handle, converted it to hex, added that as a capability and created the driver; it worked. Sample code below:
public Setup_TearDown()
{
string TopLevelWindowHandleHex = null;
IntPtr TopLevelWindowHandle = new IntPtr();
foreach (Process clsProcess in Process.GetProcesses())
{
if (clsProcess.ProcessName.StartsWith($"SomeName-{exec_pob}-{exec_env}"))
{
TopLevelWindowHandle = clsProcess.Handle;
TopLevelWindowHandleHex = clsProcess.MainWindowHandle.ToString("x");
}
}
var appOptions = new AppiumOptions();
appOptions.AddAdditionalCapability("appTopLevelWindow", TopLevelWindowHandleHex);
appOptions.AddAdditionalCapability("ms:experimental-webdriver", true);
appOptions.AddAdditionalCapability("ms:waitForAppLaunch", "25");
AppDriver = new WindowsDriver<WindowsElement>(new Uri(WinAppDriverUrl), appOptions);
AppDriver.Manage().Timeouts().ImplicitWait = TimeSpan.FromSeconds(60);
}
How do I do this in JavaScript? I can't seem to find any code examples.
Based on an example from this repo, I tried the following in JS to find the process to latch on to, but without luck.
import {By2} from "selenium-appium";
// this.appWindow = this.driver.element(By2.nativeAccessibilityId('xxx'));
// this.appWindow = this.driver.element(By2.nativeXpath("//Window[starts-with(@Name,\"xxxx\")]"));
// this.appWindow = this.driver.elementByName('WindowsForms10.Window.8.app.0.13965fa_r11_ad1');
// this.appWindow = this.driver.elementByName('xxxxxxx');
async connectAppDriver(){
await this.waitForAppWindow();
var appWindow = await this.appWindow.getAttribute("NativeWindowHandle");
let hex = (Number(appWindow)).toString(16);
var currentAppCapabilities =
{
"appTopLevelWindow": hex,
"platformName": "Windows",
"deviceName": "WindowsPC",
"newCommandTimeout": "120000"
}
let driverBuilder = new DriverBuilder();
await driverBuilder.stopDriver();
this.driver = await driverBuilder.createDriver(currentAppCapabilities);
return this.driver;
}
I keep getting this error from WinAppDriver:
{"status":13,"value":{"error":"unknown error","message":"An unknown error occurred in the remote end while processing the command."}}
I've also opened this ticket here.
It seems like such an easy thing to do, but I couldn't figure this one out.
Are there any Node packages I could use to get the top-level window handle easily?
I am open to suggestions on how to tackle this issue while using JavaScript for Winappdriver.
Hope this helps someone out there.
I got around this by creating an exe in C# that outputs the hex handle of the app to connect to, based on the process name. It looks something like this:
public string GetTopLevelWindowHandleHex()
{
string TopLevelWindowHandleHex = null;
IntPtr TopLevelWindowHandle = new IntPtr();
foreach (Process clsProcess in Process.GetProcesses())
{
if (clsProcess.ProcessName.StartsWith(_processName))
{
TopLevelWindowHandle = clsProcess.Handle;
TopLevelWindowHandleHex = clsProcess.MainWindowHandle.ToString("x");
}
}
if (!String.IsNullOrEmpty(TopLevelWindowHandleHex))
return TopLevelWindowHandleHex;
else
throw new Exception($"Process: {_processName} cannot be found");
}
I called it from JS to get the hex of the top-level window handle, like this:
async getHex () {
    // path.join is synchronous, so no await is needed here
    const pathToExe = path.join(process.cwd(), "features\\support\\ProcessUtility\\GetWindowHandleHexByProcessName.exe");
    const pathToDir = path.join(process.cwd(), "features\\support\\ProcessUtility");
    // execFileSync (from child_process) is synchronous and returns stdout
    // directly; it does not take a callback
    const result = execFileSync(pathToExe, [this.processName],
        {cwd: pathToDir, encoding: 'utf-8'});
    return result.toString().trim();
}
I used the hex to connect to the app like this:
async connectAppDriver(hex) {
console.log(`Hex received to connect to app using hex: ${hex}`);
const currentAppCapabilities=
{
"browserName": '',
"appTopLevelWindow": hex.trim(),
"platformName": "Windows",
"deviceName": "WindowsPC",
"newCommandTimeout": "120000"
};
const appDriver = await new Builder()
.usingServer("http://localhost:4723/wd/hub")
.withCapabilities(currentAppCapabilities)
.build();
// driver here is the selenium-appium driver singleton imported elsewhere
await driver.startWithWebDriver(appDriver);
return driver;
}
Solution:
In WebDriverJS (used by selenium / appium), use getDomAttribute instead of getAttribute. Took several hours to find :(
Calling
element.getAttribute("NativeWindowHandle")
is translated by WebDriverJS into a script-execution command that WinAppDriver does not implement:
POST: /session/270698D2-D93B-4E05-9FC5-3E5FBDA60ECA/execute/sync
Command not implemented: POST: /session/270698D2-D93B-4E05-9FC5-3E5FBDA60ECA/execute/sync
HTTP/1.1 501 Not Implemented
Using getDomAttribute instead:
let topLevelWindowHandle = await element.getDomAttribute('NativeWindowHandle')
topLevelWindowHandle = parseInt(topLevelWindowHandle).toString(16)
sends a plain attribute request, which succeeds:
GET /session/DE4C46E1-CC84-4F5D-88D2-35F56317E34D/element/42.3476754/attribute/NativeWindowHandle HTTP/1.1
HTTP/1.1 200 OK
{"sessionId":"DE4C46E1-CC84-4F5D-88D2-35F56317E34D","status":0,"value":"3476754"}
and topLevelWindowHandle now holds the hex value :)
I am trying to write a JXA script in Apple's Script Editor that compresses a string using the LZ algorithm and writes it to a text (JSON) file:
var story = "Once upon a time in Silicon Valley..."
var storyC = LZString.compress(story)
var data_to_write = "{\x22test\x22\x20:\x20\x22"+storyC+"\x22}"
app.displayAlert(data_to_write)
var desktopString = app.pathTo("desktop").toString()
var file = `${desktopString}/test.json`
writeTextToFile(data_to_write, file, true)
Everything works, except that the LZ-compressed string has been reduced to a series of "?" characters by the time it reaches the output file, test.json.
It should look like:
{"test" : "㲃냆Њޱᐈ攀렒삶퓲ٔ쀛䳂䨀푖㢈Ӱນꀀ"}
Instead it looks like:
{"test" : "????????????????????"}
I have a feeling the conversion is happening in the app.write command used by the writeTextToFile() function (which I pulled from an example in Apple's Mac Automation Scripting Guide):
var app = Application.currentApplication()
app.includeStandardAdditions = true
function writeTextToFile(text, file, overwriteExistingContent) {
try {
// Convert the file to a string
var fileString = file.toString()
// Open the file for writing
var openedFile = app.openForAccess(Path(fileString), { writePermission: true })
// Clear the file if content should be overwritten
if (overwriteExistingContent) {
app.setEof(openedFile, { to: 0 })
}
// Write the new content to the file
app.write(text, { to: openedFile, startingAt: app.getEof(openedFile) })
// Close the file
app.closeAccess(openedFile)
// Return a boolean indicating that writing was successful
return true
}
catch(error) {
try {
// Close the file
app.closeAccess(file)
}
catch(error) {
// Report the error if closing failed
console.log(`Couldn't close file: ${error}`)
}
// Return a boolean indicating that writing failed
return false
}
}
Is there a substitute command for app.write that maintains the LZ compressed string / a better way to accomplish what I am trying to do?
In addition, I am using the readFile() function (also from the Scripting Guide) to load the LZ string back into the script:
function readFile(file) {
// Convert the file to a string
var fileString = file.toString()
// Read the file and return its contents
return app.read(Path(fileString))
}
But rather than returning:
{"test" : "㲃냆Њޱᐈ攀렒삶퓲ٔ쀛䳂䨀푖㢈Ӱນꀀ"}
It is returning:
"{\"test\" : \"㲃냆੠Њޱᐈ攀렒삶퓲ٔ쀛䳂䨀푖㢈Ӱນꀀ\"}"
Does anybody know a fix for this too?
I know that it is possible to use Cocoa in JXA scripts, so maybe the solution lies therein?
I am just getting to grips with JavaScript so I'll admit trying to grasp Objective-C or Swift is way beyond me right now.
I look forward to any solutions and/or pointers that you might be able to provide me. Thanks in advance!
After some further Googling, I came across these two posts:
How can I write UTF-8 files using JavaScript for Mac Automation?
read file as class utf8
I have altered my script accordingly.
writeTextToFile() now looks like this:
function writeTextToFile(text, file) {
// source: https://stackoverflow.com/a/44293869/11616368
var nsStr = $.NSString.alloc.initWithUTF8String(text)
var nsPath = $(file).stringByStandardizingPath
var successBool = nsStr.writeToFileAtomicallyEncodingError(nsPath, false, $.NSUTF8StringEncoding, null)
if (!successBool) {
throw new Error("function writeFile ERROR:\nWrite to File FAILED for:\n" + file)
}
return successBool
};
And readFile() now looks like this:
ObjC.import('Foundation')
const readFile = function (path, encoding) {
// source: https://github.com/JXA-Cookbook/JXA-Cookbook/issues/25#issuecomment-271204038
const pathString = path.toString()
!encoding && (encoding = $.NSUTF8StringEncoding)
const fm = $.NSFileManager.defaultManager
const data = fm.contentsAtPath(pathString)
const str = $.NSString.alloc.initWithDataEncoding(data, encoding)
return ObjC.unwrap(str)
};
Both use Objective-C to overcome app.write and app.read's inability to handle UTF-8.
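For anyone trying this, a quick round-trip check using the two functions above (it assumes the same app object with standard additions from earlier; the file name is arbitrary):
var sample = '{"test" : "㲃냆Њޱᐈ"}'
var outFile = app.pathTo("desktop").toString() + "/test.json"
writeTextToFile(sample, outFile)
var roundTripped = readFile(outFile)
console.log(roundTripped === sample) // -> true if the UTF-8 text survived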
I have a large json file that looks like that:
[
{"name": "item1"},
{"name": "item2"},
{"name": "item3"}
]
I want to stream this file (pretty easy so far) and, for each line, run an asynchronous function (that returns a promise); when it resolves or rejects, edit that line.
The resulting file could look like this:
[
{"name": "item1", "response": 200},
{"name": "item2", "response": 404},
{"name": "item3"} // not processed yet
]
I do not wish to create another file; I want to edit the SAME FILE in place (if possible!).
Thanks :)
I don't really answer the question here, but I don't think it can be answered in a satisfactory way anyway, so here are my 2 cents.
I assume that you know how to stream line by line, and run the function, and that the only problem you have is editing the file that you are reading from.
Consequences of inserting
It is not possible to natively insert data into the middle of any file (which is what you want to do by changing the JSON live). A file can only grow at its end.
So inserting 10 bytes of data at the beginning of a 1 GB file means that you need to rewrite the whole 1 GB to disk (to move all the data 10 bytes further).
Your filesystem does not understand JSON; it just sees that you are inserting bytes in the middle of a big file, so this is going to be very slow.
So, yes, it is possible to do:
Write a wrapper over the file API in Node.js with an insert() method.
Then write some more code to work out where to insert bytes into a JSON file without loading the whole file and without producing invalid JSON at the end.
But I would not recommend it :)
=> Read this question: Is it possible to prepend data to a file without rewriting?
Why do it then?
I assume that you want to either:
Be able to kill your process at any time, and easily resume work by reading the file again.
Retry partially treated files to fill in only the missing bits.
First solution: Use a database
Abstracting the work that needs to be done to live-edit files at random places is the sole reason databases exist.
They all exist only to abstract the magic behind UPDATE mytable SET name = 'a_longer_name_than_the_one_that_was_there_before' WHERE name = 'short_name'.
Have a look at LevelUP/Down, sqlite, etc...
They will abstract all the magic that needs to be done in your JSON file!
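As a rough sketch of what that buys you, using the classic callback API of the level package mentioned above (the key layout is just an illustration):
var level = require('level');
var db = level('./progress-db'); // one key per item, updated in place

// store or update a record without caring where its bytes live on disk
db.put('item2', JSON.stringify({ name: 'item2', response: 404 }), function (err) {
  if (err) throw err;
  db.get('item2', function (err, value) {
    if (err) throw err;
    console.log(value); // {"name":"item2","response":404}
  });
});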
Second solution: Use multiple files
When you stream your file, write two new files:
One that contains the current position in the input file and the lines that need to be retried.
The other one contains the expected result.
You will also be able to kill your process at any time and restart it; a rough sketch of this bookkeeping follows below.
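This sketch keeps a checkpoint file next to the output; for brevity it reads the input eagerly, and processLine() is a stand-in for whatever async work you run per line (both are assumptions; a real version would stream with readline and handle a crash between the two writes):
var fs = require('fs');

async function run() {
  var lines = fs.readFileSync('./data.json', 'utf8').split('\n');
  // resume from the last checkpoint if the process was killed
  var start = fs.existsSync('./data.progress')
    ? parseInt(fs.readFileSync('./data.progress', 'utf8'), 10)
    : 0;
  var out = fs.createWriteStream('./data.out', { flags: 'a' });
  for (var i = start; i < lines.length; i++) {
    var result = await processLine(lines[i]); // hypothetical async worker
    out.write(result + '\n');
    fs.writeFileSync('./data.progress', String(i + 1)); // checkpoint
  }
  out.end();
}

run().catch(console.error);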
According to this answer, writing to the same file while reading is not reliable. As a commenter there says, it is better to write to a temporary file, then delete the original and rename the temp file over it.
To create a stream of lines you can use byline. Then for each line, apply some operation and pipe it out to the output file.
Something like this:
var fs = require('fs');
var stream = require('stream');
var util = require('util');
var LineStream = require('byline').LineStream;
function Modify(options) {
stream.Transform.call(this, options);
}
util.inherits(Modify, stream.Transform);
Modify.prototype._transform = function(chunk, encoding, done) {
var self = this;
setTimeout(function() {
// your modifications here, note that the exact regex depends on
// your json format and is probably the most brittle part of this
var modifiedChunk = chunk.toString();
if (modifiedChunk.search('response:[^,}]+') === -1) {
modifiedChunk = modifiedChunk
.replace('}', ', response: ' + new Date().getTime() + '}') + '\n';
}
self.push(modifiedChunk);
done();
}, Math.random() * 2000 + 1000); // to simulate an async modification
};
var inPath = './data.json';
var outPath = './out.txt';
fs.createReadStream(inPath)
.pipe(new LineStream())
.pipe(new Modify())
.pipe(fs.createWriteStream(outPath))
.on('close', function() {
// replace input with output
fs.unlink(inPath, function() {
// fs.rename requires a callback in modern Node
fs.rename(outPath, inPath, function(err) {
if (err) throw err;
});
});
});
Note that the above results in only one async operation happening at a time. You could also save the modifications to an array and once all of them are done write the lines from the array to a file, like this:
var fs = require('fs');
var stream = require('stream');
var LineStream = require('byline').LineStream;
var modifiedLines = [];
var modifiedCount = 0;
var inPath = './data.json';
var allModified = new Promise(function(resolve, reject) {
fs.createReadStream(inPath).pipe(new LineStream()).on('data', function(chunk) {
// reserve a slot so the results keep the input order
modifiedLines.length++;
var index = modifiedLines.length - 1;
setTimeout(function() {
// your modifications here
var modifiedChunk = chunk.toString();
if (modifiedChunk.search('response:[^,}]+') === -1) {
modifiedChunk = modifiedChunk
.replace('}', ', response: ' + new Date().getTime() + '}');
}
modifiedLines[index] = modifiedChunk;
modifiedCount++;
if (modifiedCount === modifiedLines.length) {
resolve();
}
}, Math.random() * 2000 + 1000);
});
}).then(function() {
fs.writeFile(inPath, modifiedLines.join('\n'), function(err) {
if (err) console.error(err);
});
}).catch(function(reason) {
console.error(reason);
});
If, instead of lines, you wish to stream chunks of valid JSON, which would be a more robust approach, take a look at JSONStream.
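For reference, a minimal sketch of that approach, assuming the JSONStream package is installed ('*' selects every element of the top-level array):
var fs = require('fs');
var JSONStream = require('JSONStream');

// each 'data' event delivers one fully parsed array element,
// so there is no brittle per-line regex work at all
fs.createReadStream('./data.json')
  .pipe(JSONStream.parse('*'))
  .on('data', function (obj) {
    console.log(obj); // e.g. { name: 'item1' }
  });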
As mentioned in the comments, the file you have is not proper JSON, although it is valid JavaScript. In order to generate proper JSON, JSON.stringify() could be used. I also think nonstandard JSON would make life difficult for anyone else who has to parse it, so I would recommend producing a new output file instead of keeping the original format.
However, it is still possible to parse the original file as JavaScript. This can be done via eval('(' + procline + ')');, although it is not secure to feed external data into Node.js like this.
const fs = require('fs');
const readline = require('readline');
const fr = fs.createReadStream('file1');
const rl = readline.createInterface({
input: fr
});
rl.on('line', function (line) {
if (line.match(/\{\s*"name"/)) { // match object lines such as {"name": ...}
var procline = "";
if (line.trim().split('').pop() === ','){
procline = line.trim().substring(0,line.trim().length-1);
}
else{
procline = line.trim();
}
var lineObj = eval('(' + procline + ')');
lineObj.response = 200;
console.log(JSON.stringify(lineObj));
}
});
The output would be like this:
{"name":"item1","response":200}
{"name":"item2","response":200}
{"name":"item3","response":200}
This is line-delimited JSON (LDJSON) and could be useful for streaming, without the need for leading and trailing [, ], or ,. There is an ldjson-stream package for it as well; a small example of reading it back follows.
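Reading LDJSON back needs no special tooling, since each line is a complete JSON document (the file name here is just an illustration):
var fs = require('fs');
var readline = require('readline');

readline.createInterface({ input: fs.createReadStream('./out.ldjson') })
  .on('line', function (line) {
    if (line.trim()) console.log(JSON.parse(line)); // one object per line
  });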
I get a PKCS#7 crypto package from a 3rd party system.
The package is not compressed and not encrypted; it is PEM-encoded and signed with an X.509 certificate.
I also have a PEM cert file from the provider.
The data inside is XML
I need to do the following in Node.JS:
extract the data
verify the signature
A sample package (no sensitive info; the data refers to our QA system): http://pastebin.com/7ay7F99e
OK, finally got it.
First of all, PKCS messages are complex structures binary-encoded using ASN.1.
Second, they can be serialized to binary files (DER encoding) or to text PEM files using Base64 encoding.
Third, the PKCS#7 format specifies several package types, of which mine is called Signed Data. These types are distinguished by the OBJECT IDENTIFIER value at the beginning of the ASN.1 object (the 1st element of the wrapper sequence); you can go to http://lapo.it/asn1js/ and paste in the package text to see the fully parsed structure.
Next, we need to parse the package (Base64 -> ASN.1 -> some object representation). Unfortunately, there's no npm package for that. I found quite a good project, forge, that is not published to the npm registry (though it is npm-compatible). It parses the PEM format, but the resulting tree is quite an unpleasant thing to traverse. Based on their Encrypted Data and Enveloped Data implementations I created a partial implementation of Signed Data in my own fork. UPD: my pull request was later merged into the forge project.
Now finally we have the whole thing parsed.
At that point I found a great (and probably the only one on the whole web) explanatory article on signed PKCS#7 verification: http://qistoph.blogspot.com/2012/01/manual-verify-pkcs7-signed-data-with.html
I was able to extract and successfully decode the signature from the file, but the hash inside differed from the data's hash. God bless Chris, who explained what actually happens.
The data signing process has two steps:
the original content's hash is calculated
a set of "Authorized Attributes" is constructed, including: the type of the data signed, the signing time, and the data hash
Then the set from step 2 is signed using the signer's private key.
Due to PKCS#7 specifics, this set of attributes is stored inside a context-specific constructed type (class=0x80, type=0) but must be signed and validated as a normal SET (class=0, type=17).
As Chris mentions (https://stackoverflow.com/a/16154756/108533) this only verifies that the attributes in the package are valid. We should also validate the actual data hash against the digest attribute.
So finally, here's the code doing the validation (cert.pem is the certificate file that the provider sent me; package is a PEM-encoded message I got from them over HTTP POST):
var fs = require('fs');
var crypto = require('crypto');
var forge = require('forge');
var pkcs7 = forge.pkcs7;
var asn1 = forge.asn1;
var oids = forge.pki.oids;
var folder = '/a/path/to/files/';
var pkg = fs.readFileSync(folder + 'package').toString();
var cert = fs.readFileSync(folder + 'cert.pem').toString();
var res = true;
try {
var msg = pkcs7.messageFromPem(pkg);
var attrs = msg.rawCapture.authenticatedAttributes;
var set = asn1.create(asn1.Class.UNIVERSAL, asn1.Type.SET, true, attrs);
var buf = Buffer.from(asn1.toDer(set).data, 'binary');
var sig = msg.rawCapture.signature;
var v = crypto.createVerify('RSA-SHA1');
v.update(buf);
if (!v.verify(cert, sig)) {
console.log('Wrong authorized attributes!');
res = false;
}
var h = crypto.createHash('SHA1');
var data = msg.rawCapture.content.value[0].value[0].value;
h.update(data);
var attrDigest = null;
for (var i = 0, l = attrs.length; i < l; ++i) {
if (asn1.derToOid(attrs[i].value[0].value) === oids.messageDigest) {
attrDigest = attrs[i].value[1].value[0].value;
}
}
var dataDigest = h.digest('binary'); // compare as a binary string, which is how forge captures the attribute value
if (dataDigest !== attrDigest) {
console.log('Wrong content digest');
res = false;
}
}
catch (_e) {
console.dir(_e);
res = false;
}
if (res) {
console.log("It's OK");
}
Your answer is a big step in the right direction. You are, however, missing an essential part of the validation!
You should verify the hash of the data against the digest contained in the signed attributes. Otherwise it would be possible for someone to replace the content with malicious data. Try for example validating the following 'package' with your code (and have a look at the content): http://pastebin.com/kaZ2XQQc
I'm not much of a NodeJS developer (this is actually my first try :p), but here's a suggestion to help you get started.
var fs = require('fs');
var crypto = require('crypto');
var pkcs7 = require('./js/pkcs7'); // forge from my own fork
var asn1 = require('./js/asn1');
var folder = '';
var pkg = fs.readFileSync(folder + 'package').toString();
var cert = fs.readFileSync(folder + 'cert.pem').toString();
try {
var msg = pkcs7.messageFromPem(pkg);
var attrs = msg.rawCapture.authenticatedAttributes; // got the list of auth attrs
var set = asn1.create(asn1.Class.UNIVERSAL, asn1.Type.SET, true, attrs); // packed them inside of the SET object
var buf = new Buffer(asn1.toDer(set).data, 'binary'); // DO NOT forget 'binary', otherwise it tries to interpret bytes as UTF-8 chars (modern Node would use Buffer.from instead of the deprecated new Buffer)
var sig = msg.rawCapture.signature;
var shasum = crypto.createHash('sha1'); // better be based on msg.rawCapture.digestAlgorithms
shasum.update(msg.rawCapture.content.value[0].value[0].value);
for(var n in attrs) {
var attrib = attrs[n].value;
var attrib_type = attrib[0].value;
var attrib_value = attrib[1].value[0].value;
if(attrib_type == "\x2a\x86\x48\x86\xf7\x0d\x01\x09\x04") { // better would be to use the OID (1.2.840.113549.1.9.4)
if(shasum.digest('binary') == attrib_value) {
console.log('hash matches');
var v = crypto.createVerify('RSA-SHA1');
v.update(buf);
console.log(v.verify(cert, sig)); // -> should print true
} else {
console.log('hash mismatch');
}
}
}
}
catch (_e) {
console.dir(_e);
}
Based on inspiration from this answer, I've implemented a sample for signing and verifying PDF files using node-signpdf and node-forge.
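For anyone looking for a starting point, this is roughly how the documented node-signpdf flow looks; the file names and the passphrase-less PKCS#12 bundle are assumptions of this sketch:
var fs = require('fs');
var signer = require('node-signpdf').default;
var plainAddPlaceholder = require('node-signpdf/dist/helpers').plainAddPlaceholder;

// add a signature placeholder to the PDF, then sign it with a PKCS#12 bundle
var pdf = plainAddPlaceholder({
  pdfBuffer: fs.readFileSync('./document.pdf'),
  reason: 'Approved',
  signatureLength: 8192,
});
var signed = signer.sign(pdf, fs.readFileSync('./certificate.p12'));
fs.writeFileSync('./document.signed.pdf', signed);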