ActiveXObject to download directly to HDD - javascript

Is there a native ActiveX object or similar that I can use to download a source file straight to my HDD? Currently I'm using the following:
function downloadToFile(url, file) {
    var xhr = new ActiveXObject("msxml2.xmlhttp"),
        ado = new ActiveXObject("ADODB.Stream");
    xhr.open("GET", url, false); // synchronous request
    xhr.send();
    if (xhr.status === 200) {
        ado.type = 1; // adTypeBinary
        ado.open();
        ado.write(xhr.responseBody); // the whole response is buffered in memory here
        ado.saveToFile(file);
        ado.close();
    }
}
But this feels a bit inefficient for a few reasons:
I'm currently using two objects in place of what could possibly be a single object.
The entire response is stored in memory until it's written to file. This isn't an issue for the most part, until I use it to download fairly large files.
Notes/Edits:
I'm working from within Microsoft's MSScriptControl.ScriptControl, so many web-based libraries will not help.
I'm not necessarily looking for a single object if an answer is able to write the data to file as it is received.

BITSAdmin
BITSAdmin is a Windows command-line tool for downloading and uploading files using the Background Intelligent Transfer Service (BITS).
Note: In Windows 7, BITSAdmin states that it is deprecated in favor of the PowerShell BITS cmdlets and may not be included in future versions of Windows.
Syntax:
bitsadmin /transfer job_name url local_name
JScript version:
var oShell = new ActiveXObject("WScript.Shell");
oShell.Run("bitsadmin /transfer myDownloadJob http://upload.wikimedia.org/wikipedia/en/b/bc/Wiki.png C:\\Work\\wikipedia-logo.png");
.NET System.Net.WebClient Class
If you have the .NET Framework, you can register the System.Net.WebClient class for COM access:
C:\Windows\Microsoft.NET\Framework\v4.0.30319> regasm System.dll
and then use it like this:
var strURL = "http://upload.wikimedia.org/wikipedia/en/b/bc/Wiki.png";
var strFilePath = "C:\\Work\\wikipedia-logo.png";
var oWebClient = new ActiveXObject("System.Net.WebClient");
oWebClient.DownloadFile(strURL, strFilePath);
Chilkat HTTP Library
Chilkat HTTP ActiveX library (commercial) lets you download files directly:
var strURL = "http://upload.wikimedia.org/wikipedia/en/b/bc/Wiki.png";
var strFilePath = "C:\\Work\\wikipedia-logo.png";
var oHTTP = new ActiveXObject("Chilkat_9_5_0.Http");
// Any string unlocks the component for the first 30 days.
var success = oHTTP.UnlockComponent("Anything for 30-day trial");
if (success != 1) {
    WScript.Echo(oHTTP.LastErrorText);
    WScript.Quit();
}
success = oHTTP.Download(strURL, strFilePath);
if (success != 1)
    WScript.Echo(oHTTP.LastErrorText);
cURL
Or how about shelling out to cURL (free, MIT/X derivative license)? Though I guess it counts as two objects because of WScript.Shell:
var oShell = new ActiveXObject("WScript.Shell");
oShell.Run("curl -o C:\\Work\\wikipedia-logo.png http://upload.wikimedia.org/wikipedia/en/b/bc/Wiki.png");

Related

How to read a LOCAL binary file using JavaScript while in HTML

Currently I am building a local (non-internet) application that launches a Chromium browser from Visual Basic .NET.
It uses CefSharp to achieve this.
When the HTML launches, I need to read multiple files in order to plot graphs using Plotly.
The problem: I can't read binary files.
I have succeeded in reading ASCII and other non-binary files by disabling security on CefSharp. I tried using the FolderSchemeHandlerFactory class, but that didn't work.
In order to read ASCII files I have resorted to using XMLHttpRequest, which works for ASCII but not binary. I have tried changing the response type to arraybuffer, but that doesn't work either.
function readTextFile(url) {
    var array = [];
    var xhr = new XMLHttpRequest(); // note: don't shadow the url parameter
    xhr.open("GET", url, false); // synchronous
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            if (xhr.status === 200 || xhr.status === 0) {
                var text = xhr.responseText;
                array = text.split("\n");
            }
        }
    };
    xhr.send(null);
    return array;
}
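Note that responseType = "arraybuffer" is only honored on asynchronous requests; setting it on a synchronous XMLHttpRequest like the one above is rejected by the browser. A minimal asynchronous sketch, assuming the same file URL is reachable:
function readBinaryFile(url, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true); // must be asynchronous for responseType to apply
    xhr.responseType = "arraybuffer";
    xhr.onload = function () {
        if (xhr.status === 200 || xhr.status === 0) {
            callback(new Uint8Array(xhr.response)); // raw bytes of the file
        }
    };
    xhr.send(null);
}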

File Upload using Selenium HtmlUnitDriver-headless webdriver

I'm trying to upload a local file (C:\sample.txt) to my server. I have tried to implement this with the Chrome web driver and it's working absolutely fine.
But when implementing the same with HtmlUnitDriver, I couldn't browse to the file item on my local disk. I tried the two methods below as well:
1) Send keys:
WebElement inputFile = driver.findElement(By.id("file"));
System.out.println(driver.getCurrentUrl());
LocalFileDetector detector = new LocalFileDetector();
String path = "C:\\UploadSample1.txt";
File f = detector.getLocalFile(path);
inputFile.sendKeys(f.getAbsolutePath());
2) Using a Robot:
WebElement browseFile = fluentWait(By.id("browseFile"), driver);
browseFile.click();
File file = new File("C:\\UploadSample1.txt");
driver.switchTo().activeElement();
StringSelection fileNameToWrite = new StringSelection(file.getAbsolutePath());
Toolkit.getDefaultToolkit().getSystemClipboard().setContents(fileNameToWrite, null);
Robot robot = new Robot();
robot.keyPress(KeyEvent.VK_ENTER);
robot.keyRelease(KeyEvent.VK_ENTER);
robot.keyPress(KeyEvent.VK_CONTROL);
robot.keyPress(KeyEvent.VK_V);
robot.keyRelease(KeyEvent.VK_V);
robot.keyRelease(KeyEvent.VK_CONTROL);
robot.keyPress(KeyEvent.VK_ENTER);
robot.keyRelease(KeyEvent.VK_ENTER);
I need the file item to be browsed; only then can I save it to my server, because just sending the file path makes it search for the file on the server's disk. Now I'm really stuck and can't move further.
Any help is highly appreciated. Thank you!
If you need to browse to the file first, that isn't possible IMHO; for that you will need AutoIt (the Robot class is not recommended). So your best bet would be sending the file path using sendKeys.
formInput.setValueAttribute(formValue); worked fine for me.
Code snippet:
Iterator<String> formValueIterator = formValues.keySet().iterator();
while (formValueIterator.hasNext()) {
    String formKey = formValueIterator.next();
    String formValue = formValues.get(formKey);
    HtmlInput formInput = form.getInputByName(formKey);
    if (formInput != null) {
        if (formInput instanceof HtmlPasswordInput) {
            ((HtmlPasswordInput) formInput).setValueAttribute(formValue);
        } else {
            formInput.setValueAttribute(formValue);
        }
    }
}

Solution to map different excel files to db

I have to map a lot of different files with different structures to a DB. There are a lot of different tables in those xlsx files, so I thought about a schemaless NoSQL approach, but I'm quite a newbie in this field.
It should be a microservice with a client interface for choosing the tables/cells when parsing the xlsx files. I'm not tied to a specific technology; it could be Java, Groovy, Python or even a JavaScript engine.
Do you know of any working solution for doing this?
Here is an example xlsx (but I've also got other files, some in xls format): http://stat.gov.pl/download/gfx/portalinformacyjny/pl/defaultaktualnosci/5502/11/13/1/wyniki_finansowe_podmiotow_gospodarczych_1-6m_2015.xlsx
The work you have to do is called ETL (Extract, Transform, Load). You need to either find a good ETL tool (here is a discussion about open source ETL) or script your own solution in a language you are used to.
The advantage of ready-made GUI software is that you just have to drag and drop data, but if you have custom logic or semi-structured data, as in your xlsx example, support is limited.
The advantage of writing your own script is that you have all the freedom you need.
I have done some ETL work, and I successfully used Groovy to write my own solutions with custom logic; on the GUI side I used Altova MapForce when I had to import some exotic file types.
If you decide to write your own solution you have to (a minimal sketch of the last three steps follows the list):
Convert all data to an easy-to-load format. In your case you have to convert each xls or xlsx tab to CSV with a naming convention.
Load your files in your chosen language for transforming.
Apply your logic to put the data in the desired format.
Save it in a database (SQL or NoSQL).
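For instance, in Node.js the load/transform/save steps might look like this. This is a sketch only: data.csv, the etl database and the rows collection are placeholder names, and it assumes the 3.x mongodb driver and a MongoDB instance on localhost.
var fs = require('fs');
var MongoClient = require('mongodb').MongoClient;

// Load: read one CSV produced in step 1 and split it into lines
var lines = fs.readFileSync('data.csv', 'utf8').split('\n').filter(Boolean);
var header = lines[0].split(';');

// Transform: turn each line into a document keyed by the header row
var docs = lines.slice(1).map(function (line) {
    var cells = line.split(';');
    var doc = {};
    for (var i = 0; i < header.length; i++) doc[header[i]] = cells[i];
    return doc;
});

// Save: insert the documents into MongoDB
MongoClient.connect('mongodb://localhost:27017', function (err, client) {
    if (err) throw err;
    client.db('etl').collection('rows').insertMany(docs, function (err) {
        if (err) throw err;
        client.close();
    });
});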
Maybe you should try Google Sheets to display the Excel files and Google Apps Script (https://developers.google.com/apps-script/overview) to write a custom add-on for parsing the data to JSON.
The Spreadsheet Service (https://developers.google.com/apps-script/reference/spreadsheet/) has plenty of methods for accessing data in sheets.
Next you can send this JSON over an API (https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app) or put it directly into a database (https://developers.google.com/apps-script/guides/jdbc).
Maybe it isn't clean, but it's a fast solution.
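For example, a minimal Apps Script sketch along those lines, assuming the workbook has been imported as a Google Sheet and with SHEET_ID as a placeholder for its ID:
function sheetToJson() {
  var sheet = SpreadsheetApp.openById('SHEET_ID').getSheets()[0];
  var rows = sheet.getDataRange().getValues(); // 2D array; first row is the header
  var header = rows.shift();
  return rows.map(function (row) {
    var doc = {};
    header.forEach(function (key, i) { doc[key] = row[i]; });
    return doc;
  });
}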
I had a project that did almost the same work as your problem, though it seemed easier because I had a fixed structure of xlsx files.
For xlsx parsing, I experimented with Python and openpyxl and had no trouble working with them; they are simple, fast and easy to use.
For the database, I recommend MongoDB: you can deal with documents and collections in MongoDB just as simply as working with JSON objects or sets of JSON objects. PyMongo is, I think, the best and recommended way to work with MongoDB from Python.
The problem is you have different files with different structures. I cannot recommend anything deeper without seeing your data, but you should find the general structure of the files, or figure out a way to classify them into common sets, each of which can be parsed with an appropriate algorithm.
A JavaScript (WSH/JScript) solution that works like xlsx2csv (you can redirect the export anywhere):
var def = "1.xlsx";
if (WScript.Arguments.length > 0) def = WScript.Arguments(0);
var col = [];
var objShell = new ActiveXObject("Shell.Application");
var fs = new ActiveXObject("Scripting.FileSystemObject");

function flush() {
    WScript.Echo(col.join(';'));
}

function import_xlsx(file) {
    var pwd = WScript.ScriptFullName.replace(WScript.ScriptName, "");
    var i, li, row, r, t;
    var outFolder = "."; // destination folder of unzipped files (must exist)
    var strXlsFile = file;
    // an xlsx file is a zip archive; copy it under a .zip name so Shell.Application can open it
    var strZipFile = strXlsFile.replace(".xlsx", ".zip").replace(".XLSX", ".zip");
    fs.CopyFile(strXlsFile, strZipFile, true);
    var objSource = objShell.NameSpace(pwd + strZipFile).Items();
    var objTarget = objShell.NameSpace(pwd + outFolder);
    // extract only the "xl" folder (worksheets and shared strings)
    for (i = 0; i < objSource.Count; i++) {
        if (objSource.item(i).Name == "xl") {
            if (fs.FolderExists("xl")) fs.DeleteFolder("xl");
            objTarget.CopyHere(objSource.item(i), 256);
        }
    }
    // shared strings are stored once and referenced from cells by index
    var xml = new ActiveXObject("Msxml2.DOMDocument.6.0");
    xml.load("xl\\sharedStrings.xml");
    var sel = xml.selectNodes("/*/*/*");
    var vol = [];
    for (i = 0; i < sel.length; i++) vol.push(sel[i].text);
    // walk the cells of the first worksheet
    xml.load("xl\\worksheets\\sheet1.xml");
    var line = xml.selectNodes("/*/*/*");
    var line2 = 0, line3 = 0;
    for (li = 0; li < line.length; li++) {
        if (line[li].nodeName == "row") {
            for (row = 0; row < line[li].childNodes.length; row++) {
                r = line[li].childNodes[row].selectSingleNode("@r").text; // cell reference, e.g. "B3"
                line2 = parseInt(r.replace(r.substring(0, 1), ""), 10); // row number (single-letter columns only)
                if (line2 != line3) {
                    if (line3 != 0) {
                        flush(); // a row is complete: emit it
                        for (i = 0; i < col.length; i++) col[i] = "";
                    }
                    line3 = line2;
                }
                try {
                    t = line[li].childNodes[row].selectSingleNode("@t").text; // cell type
                    i = "ABCDEFGHIJKLMNOPQRSTUVWXYZ".indexOf(r.charAt(0)); // column index from the letter
                    while (i > col.length) col.push("");
                    if (t == "s") { // shared string: the cell stores an index into vol
                        t = parseInt(line[li].childNodes[row].firstChild.text, 10);
                        col[i] = vol[t];
                    } else col[i] = line[li].childNodes[row].firstChild.text;
                } catch (e) {}
            }
        }
    }
    flush(); // emit the last row
    if (fs.FolderExists("xl")) fs.DeleteFolder("xl");
    if (fs.FileExists(strZipFile)) fs.DeleteFile(strZipFile);
}

import_xlsx(def);
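Assuming the script is saved as xlsx2csv.js, you can run it under the Windows Script Host and redirect the semicolon-separated output to a file:
cscript //nologo xlsx2csv.js 1.xlsx > out.csv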

createBlockBlob and commitBlobBlocks create empty files in BlobStorage

I'm developing a web app that can upload large files into Azure Blob Storage.
As a backend, I am using Windows Azure Mobile Services in Node.js (the web app will generate content for mobile devices).
My client can successfully send chunks of data to the backend and everything looks fine but, at the end, the uploaded file is empty. The data upload has been prepared by following this tutorial: it works perfectly when the file is small enough to be uploaded in a single request. The process fails when the file needs to be broken into chunks. It uses the ReadableStreamBuffer from the tutorial.
Can somebody help me?
Here the code:
Back-end : createBlobBlockFromStream
[...]
// Get references
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;

var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";

//console.log(request.body);
var blobName = request.body.file;
var blobExt = request.body.ext;
var blockId = request.body.blockId;
var data = new Buffer(request.body.data, "base64");
var stream = new ReadableStreamBuffer(data);
var streamLen = stream.size();
var blobFull = blobName + "." + blobExt;

console.log("BlobFull: " + blobFull + "; id: " + blockId + "; len: " + streamLen + "; " + stream);

var blobService = azure.createBlobService(accountName, accountKey, host);
//console.log("blockId: "+blockId+"; container: "+container+";\nblobFull: "+blobFull+"streamLen: "+streamLen);
blobService.createBlobBlockFromStream(blockId, container, blobFull, stream, streamLen,
    function(error, response) {
        if (error) {
            request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
        } else {
            request.respond(statusCodes.OK, { message: "block created" });
        }
    });
[...]
Back-end: commitBlobBlock
[...]
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;

var accountName = appSettings.STORAGE_NAME;
var accountKey = appSettings.STORAGE_KEY;
var host = accountName + '.blob.core.windows.net';
var container = "zips";

var blobName = request.body.file;
var blobExt = request.body.ext;
var blobFull = blobName + "." + blobExt;
var blockIdList = request.body.blockList;

console.log("blobFull: " + blobFull + "; blockIdList: " + JSON.stringify(blockIdList));

var blobService = azure.createBlobService(accountName, accountKey, host);
blobService.commitBlobBlocks(container, blobFull, blockIdList, function(error, result) {
    if (error) {
        request.respond(statusCodes.INTERNAL_SERVER_ERROR, error);
    } else {
        request.respond(statusCodes.OK, result);
        blobService.listBlobBlocks(container, blobFull);
    }
});
[...]
The second method returns the correct list of blockIds, so I think the second part of the process works fine. I think it is the first method that fails to write the data inside the blocks, as if it creates empty blocks.
On the client side, I read the file as an ArrayBuffer using the FileReader JS API.
Then I convert it into a Base64-encoded string using the following code. This approach works perfectly if I create the blob in a single call, which is good for small files.
[...]
// data contains the ArrayBuffer read by the FileReader API
var requestData = new Uint8Array(data);
var binary = "";
for (var i = 0; i < requestData.length; i++) {
    binary += String.fromCharCode(requestData[i]);
}
[...]
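In outline, the chunking loop looks like this (simplified; pushBlock stands in for the code that POSTs the base64 data and block id to the backend above, and the 256 KB block size is just an example):
var BLOCK_SIZE = 256 * 1024; // bytes per block (example value)
for (var offset = 0, id = 0; offset < file.size; offset += BLOCK_SIZE, id++) {
    (function (chunk, blockId) {
        var reader = new FileReader();
        reader.onload = function () {
            var bytes = new Uint8Array(reader.result);
            var binary = "";
            for (var i = 0; i < bytes.length; i++) binary += String.fromCharCode(bytes[i]);
            pushBlock(btoa(binary), blockId); // becomes request.body.data on the backend
        };
        reader.readAsArrayBuffer(chunk);
    })(file.slice(offset, offset + BLOCK_SIZE), id);
}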
Any idea?
Thank you,
Ric
Which version of the Azure Storage Node.js SDK are you using? It looks like you might be using an older version; if so I would recommend upgrading to the latest (0.3.0 as of this writing). We’ve improved many areas with the new library, including blob upload; you might be hitting a bug that has already been fixed. Note that there may be breaking changes between versions.
Download the latest Node.js Module (code is also on Github)
https://www.npmjs.org/package/azure-storage
Read our blog post: Microsoft Azure Storage Client Module for Node.js v. 0.2.0 http://blogs.msdn.com/b/windowsazurestorage/archive/2014/06/26/microsoft-azure-storage-client-module-for-node-js-v-0-2-0.aspx
If that’s not the issue, can you check a Fiddler trace (or equivalent) to see if the raw data blocks are being sent to the service?
Not too sure if you're still suffering from this problem, but I was experiencing the exact same thing and came across this while looking for a solution. Well, I found one and thought I'd share.
My problem was not with how I pushed the blocks but with how I committed them. My little proxy server had no knowledge of prior commits; it just pushed the data it was sent and committed it. The trouble is, I wasn't providing the commit request with the previously committed blocks, so it was overwriting them with the current commit each time.
So my solution:
var opts = {
    UncommittedBlocks: [IdOfJustCommitedBlock],
    CommittedBlocks: [IdsOfPreviouslyCommittedBlocks]
};
blobService.commitBlobBlocks('containerName', 'blobName', opts, function(e, r) {});
For me, the bit that broke everything was the format of the opts object: I wasn't providing an array of previously committed block names. It's also worth noting that I had to base64-decode the existing block names, as
blobService.listBlobBlocks('containerName', 'fileName', 'type IE committed', fn)
returns an object for each block whose name is base64 encoded.
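Something along these lines (a sketch; it assumes the listBlobBlocks result exposes the committed blocks as a CommittedBlocks array of entries with a Name property, as in the 0.x SDK):
// decode the base64-encoded block names returned by listBlobBlocks
var names = result.CommittedBlocks.map(function (block) {
    return new Buffer(block.Name, 'base64').toString();
});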
Just for completeness, here's how I push my blocks; req is from the Express route:
var blobId = blobService.getBlockId('blobName', 'lengthOfPreviouslyCommittedArray + 1 as Int');
var length = req.headers['content-length'];
blobService.createBlobBlockFromStream(blobId, 'containerName', 'blobName', req, length, fn);
Also, with the upload I had a strange issue where the content-length header caused it to break, so I had to delete it from the req.headers object.
Hope this helps and is detailed enough.

Create a platform independent path string

I'm using the Mozilla Add-on SDK for development and need to create a file on the local system.
Currently I use the statement below, but feel it may not cover all platforms.
Running the statement on Windows 7 and Windows XP returns:
console.log(system.platform);
winnt
Running it on Linux returns:
console.log(system.platform);
linux
Is there a more reliable way to create the fullPath string, without having to check the contents of system.platform?
pathToFile = Cc["@mozilla.org/file/directory_service;1"]
    .getService(Ci.nsIProperties).get("Home", Ci.nsIFile).path;
if (system.platform.indexOf("win") == 0) {
    fileSeparator = "\\"; // the backslash must be escaped
} else {
    fileSeparator = "/";
}
fullPath = pathToFile + fileSeparator + 'myFile.txt';
Just a little modification to your code should do the trick:
var file = Cc["@mozilla.org/file/directory_service;1"]
    .getService(Ci.nsIProperties).get("Home", Ci.nsIFile);
file.append("myFile.txt");
var fullPath = file.path;
I'd like to point out an alternative to @Kashif's answer.
Use FileUtils.getFile(), which is just a convenience function that essentially does multiple .append()s, one per item in the parts array.
Cu.import("resource://gre/modules/FileUtils.jsm");
var file = FileUtils.getFile("Home", ["myFile.txt"]);
var path = file.path;
The SDK has an 'fs/path' module that has parity with Node's path API.
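A minimal sketch of that route, assuming the SDK's 'sdk/fs/path' and 'sdk/system' modules (system.pathFor resolves the same "Home" key as the directory service):
var path = require('sdk/fs/path');
var system = require('sdk/system');

// join() uses the correct separator for the current platform
var fullPath = path.join(system.pathFor('Home'), 'myFile.txt');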
