Protractor e2e test case for downloading pdf file - javascript

Can anyone tell me how to write a test case for a link that downloads a PDF file, using the Jasmine framework?
Thanks in advance.

You can set the download path location through the browser configuration.
Chrome
capabilities: {
  'browserName': 'chrome',
  'platform': 'ANY',
  'version': 'ANY',
  'chromeOptions': {
    // Get rid of the --ignore-certificate yellow warning
    args: ['--no-sandbox', '--test-type=browser'],
    // Set the download path and avoid prompting for download, even though
    // this is already the default on Chrome, for completeness
    prefs: {
      'download': {
        'prompt_for_download': false,
        'default_directory': '/e2e/downloads/',
      }
    }
  }
}
For remote testing you would need a more complex infrastructure, like setting up a Samba share or a network-shared directory as the destination.
Firefox
var FirefoxProfile = require('firefox-profile');
var q = require('q');
[...]
getMultiCapabilities: getFirefoxProfile,
framework: 'jasmine2',
[...]
function getFirefoxProfile() {
  "use strict";
  var deferred = q.defer();

  var firefoxProfile = new FirefoxProfile();
  firefoxProfile.setPreference("browser.download.folderList", 2);
  firefoxProfile.setPreference("browser.download.manager.showWhenStarting", false);
  firefoxProfile.setPreference("browser.download.dir", '/tmp');
  // List of MIME types to save to disk without asking; for a PDF download
  // you would use "application/pdf" here instead.
  firefoxProfile.setPreference("browser.helperApps.neverAsk.saveToDisk", "application/vnd.openxmlformats-officedocument.wordprocessingml.document");
  firefoxProfile.encoded(function(encodedProfile) {
    var multiCapabilities = [{
      browserName: 'firefox',
      firefox_profile: encodedProfile
    }];
    deferred.resolve(multiCapabilities);
  });

  return deferred.promise;
}
Finally, and maybe obviously, to trigger the download you just click on the download link, e.g.
$('a.some-download-link').click();

I needed to check the contents of the downloaded file (a CSV export in my case) against an expected result, and found the following to work:
var filename = '/tmp/export.csv';
var fs = require('fs');

if (fs.existsSync(filename)) {
  // Make sure the browser doesn't have to rename the download.
  fs.unlinkSync(filename);
}

$('a.download').click();

browser.driver.wait(function() {
  // Wait until the file has been downloaded.
  // We need to wait like this because otherwise protractor has a nasty
  // habit of trying to run any following tests while the file is still
  // being downloaded and hasn't been moved to its final location.
  return fs.existsSync(filename);
}, 30000).then(function() {
  // Do whatever checks you need here. This is a simple comparison;
  // for a larger file you might want to calculate the file's MD5
  // hash and see if it matches what you expect.
  expect(fs.readFileSync(filename, { encoding: 'utf8' })).toEqual(
    "A,B,C\r\n"
  );
});
I found Leo's configuration suggestion helpful for allowing the download to be saved somewhere accessible.
The 30000ms timeout is the default, so it could be omitted, but I'm leaving it in as a reminder in case someone wants to change it.
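For reference, here is a rough sketch of the MD5 approach mentioned in the comment above, reusing fs and filename from the snippet and Node's built-in crypto module (the expected digest below is purely illustrative):
var crypto = require('crypto');

// Hash the downloaded file and compare it against a known-good digest.
var md5 = crypto.createHash('md5')
  .update(fs.readFileSync(filename))
  .digest('hex');
expect(md5).toEqual('0123456789abcdef0123456789abcdef'); // hypothetical digest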

It could be a test checking the href attribute, like so:
var link = element(by.css("a.pdf"));
expect(link.getAttribute('href')).toEqual('someExactUrl');

The solutions above would not work for remote browser testing, e.g. via BrowserStack. An alternative solution, just for Chrome, could look like this:
if ((await browser.getCapabilities()).get('browserName') === 'chrome') {
  await browser.driver.get('chrome://downloads/');
  const items =
    await browser.executeScript('return downloads.Manager.get().items_') as any[];
  expect(items.length).toBe(1);
  expect(items[0].file_name).toBe('some.pdf');
}

One thing I've done in the past is to use an HTTP HEAD command. Basically, it's the same as a 'GET', but it only retrieves the headers.
Unfortunately, the web server needs to support 'HEAD' explicitly. If it does, you can actually try the URL and then check for 'application/pdf' in the Content-Type, without having to actually download the file.
If the server isn't set up to support HEAD, you can probably just check the link as was suggested above.
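For reference, a rough sketch of that check using Node's built-in https module (the host and path are illustrative):
var https = require('https');

// A HEAD request transfers only the response headers, never the body.
var req = https.request({
  host: 'example.com',
  path: '/files/some.pdf', // hypothetical download URL
  method: 'HEAD'
}, function (res) {
  expect(res.headers['content-type']).toContain('application/pdf');
});
req.end();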

Related

Does anyone know how to define navigator online in main process in electron?

I know you can use navigator.onLine inside the renderer process because it's rendered inside a browser. But what I'm trying to do is something like this in the main process:
if (navigator.onLine) {
  mainWindow.loadURL("https://google.com")
} else {
  mainWindow.loadFile(path.join(__dirname, 'index.html'));
}
So basically, if the user is offline, just load a local HTML file, and if they're online, take them to a webpage. But, as expected, I keep getting the error that 'navigator is not defined'. Does anyone know how I can somehow use the navigator API in the main process? Thanks!
TL;DR: The easiest thing to do is to just ask Electron. You can do this via the net module from within the Main Process:
const { net } = require("electron");

const isInternetAvailable = () => net.isOnline();

// To check:
if (isInternetAvailable()) { /* do something... */ }
See Electron's documentation on the method; notably, this approach doesn't tell you whether your own service is reachable over the internet, only that some connection can be made (or not even that, as the documentation mentions link-level checks which don't involve any HTTP request at all).
However, this is not a reliable measurement, and you might want to increase its hit rate by manually checking whether a certain connection can be made.
In order to check whether an internet connection is available, you'll have to make a connection yourself and see if it fails. This can be done from the Main Process using plain NodeJS:
// HTTP code basically from the NodeJS HTTP tutorial at
// https://nodejs.dev/learn/making-http-requests-with-nodejs/
const https = require('https');

const REMOTE_HOST = "google.com"; // Or your domain
const REMOTE_EP = "/"; // Or your endpoint
const REMOTE_PAGE = "https://" + REMOTE_HOST + REMOTE_EP;

function checkInternetAvailability() {
  return new Promise((resolve, reject) => {
    const options = {
      hostname: REMOTE_HOST,
      port: 443,
      path: REMOTE_EP,
      method: 'GET',
      // (Added) Without a socket timeout, the 'timeout' event below never
      // fires; 5000 ms is an arbitrary example value.
      timeout: 5000,
    };

    // Try to fetch the given page
    const req = https.request(options, res => {
      // Yup, that worked. Tell the depending code.
      resolve(true);
      req.destroy(); // This is no longer needed.
    });

    req.on('error', error => {
      reject(error);
    });

    req.on('timeout', () => {
      // No, the connection timed out.
      resolve(false);
      req.destroy();
    });

    req.end();
  });
}
// ... Your window initialisation code ...

checkInternetAvailability().then(internetAvailable => {
  if (internetAvailable) mainWindow.loadURL(REMOTE_PAGE);
  else mainWindow.loadFile(path.join(__dirname, 'index.html'));
  // Call any code that needs to run after this here!
}).catch(error => {
  console.error("Oops, couldn't initialise!", error);
  app.exit(1); // Note: app.quit() takes no exit code; app.exit(1) does.
});
Please note that this code might not be the most desirable, since it just "crashes" your app with exit code 1 if there is any error other than a connection timeout.
It also makes your startup asynchronous, which means you need to pay attention to the execution order of your app's startup code. And startup may be really slow when the timeout is reached; the NodeJS http module documentation may be worth consulting for tuning the timeout.
Also, it makes sense to actually try to retrieve the page you want to load in the BrowserWindow (the constants REMOTE_HOST and REMOTE_EP), because that also gives you an indication of whether your server is up or not, although it means the page will be fetched twice (in the best case: once when the connection test succeeds and once when Electron loads the page into the window). However, that should not be that big of a problem, since no external assets (images, CSS, JS) will be loaded.
One last note: this is not a good metric of whether any internet connection is available; it just tells you whether your server answered within the timeout window. It might very well be that other services work or that the connection is just very slow (i.e., expect false negatives). It should be "good enough" for your use case though.

Open browser and reload on source code changes

Basically I want to open the default browser (which I have handled already) and, when the development folder has any changes (which I have also handled), use the browser reference to reload the URL.
This is what I have done so far:
var open = require('opn');
var fs = require('fs');

var promise = open('http://localhost:80/my-development-path/');
var browser;
promise.then((cp) => {
  // get a reference to the browser from the child process cp
  browser = cp; // ...??
});

fs.watch('my-development-path', {recursive: true}, (eventType, filename) => {
  console.log(`event type is: ${eventType}`);
  if (filename) {
    console.log(`filename provided: ${filename}`);
    browser && browser.location.reload();
  } else {
    console.log('filename not provided');
  }
});
So how do I get the browser reference out of the child process, and how can I use it to force a reload?
CLARIFICATIONS
I am not using Express or any other particular framework, just a common web app running on Apache.
I am using nodejs just to open a browser window and monitor file changes under the working dir.
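Note that the cp returned by opn is the browser's OS process, not a scriptable page, so there is no browser.location on it. A common workaround is to have the page reload itself when a WebSocket server broadcasts a change; a minimal sketch, assuming the ws package is installed (the port is arbitrary):
var WebSocketServer = require('ws').Server;
var fs = require('fs');

var wss = new WebSocketServer({ port: 35729 });

fs.watch('my-development-path', { recursive: true }, function () {
  // Tell every connected page to reload itself.
  wss.clients.forEach(function (client) {
    if (client.readyState === 1 /* OPEN */) client.send('reload');
  });
});

// In the served pages, during development only:
// <script>
//   new WebSocket('ws://localhost:35729').onmessage = function () { location.reload(); };
// </script>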

Electron - Invalid package on unzip

For around 3 weeks I've been working on an Electron app and finally decided to get around to adding update checking. From my research, the standard way to do this in Electron (using Squirrel) requires the user to physically install the application onto their computer. I would rather not do this and keep everything as portable as possible. I then decided to try making my own update script by having the program download update.zip and extract it, overwriting the existing files. This works well up until the very end: at the end of the extraction I receive an Invalid package error, and the actual app.asar file is missing, rendering the application useless.
I am using this to download and extract the updates:
// Modules implied by the snippet:
var request = require('request');
var fs = require('fs');
var path = require('path');
var unzipper = require('unzipper');
var { app, dialog } = require('electron');

function downloadFile(url, target, fileName, cb) { // Downloads
  var req = request({
    method: 'GET',
    uri: url
  });
  var out = fs.createWriteStream(target + '/' + fileName);
  req.pipe(out);
  req.on('end', function() {
    unzip(target + '/' + fileName, target, function() {
      if (cb) {
        cb();
      }
    });
  });
}
function unzip(file, target, cb) { // Unzips
  var out = fs.createReadStream(file);
  out.pipe(unzipper.Extract({ path: target })).on('finish', function () {
    dialog.showMessageBox({
      type: 'question',
      message: 'Finished extracting to `' + target + '`'
    });
    if (cb) {
      cb();
    }
  });
}
And call it with:
downloadFile('http://example.com/update.zip', path.join(__dirname, './'), 'update.zip', function() { // http://example.com/update.zip is not the real source
  app.relaunch();
  app.quit();
});
And I use the unzipper NPM package (https://www.npmjs.com/package/unzipper).
The code works perfectly for all other zips, but it fails when trying to extract a zip containing an Electron app.
Is there anything I'm doing wrong, or maybe a different package that properly supports extracting zips with .asar files?
Edit 1
I just found https://www.npmjs.com/package/electron-basic-updater, which does not throw the same JavaScript error; however, it still does not extract the .asar files correctly and throws its own error. Since the .asar is still missing, the app is still useless after the "update".
Thanks to your link to electron-basic-updater, I have found this issue mentioned there: https://github.com/TamkeenLMS/electron-basic-updater/issues/4.
They refer to the issue in the electron app: https://github.com/electron/electron/issues/9304.
Finally, at the end of the second thread, there's a solution:
This is due to the electron fs module treating asar files as directories rather than files. To make the unzip process work you need to do one of two things:
Set process.noAsar = true
Use original-fs instead of fs
I have seen people working with original-fs, but it looked like too much trouble to me.
So I tried setting process.noAsar = true (and then process.noAsar = false after unzipping), and that worked like a charm.
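Applied to the unzip helper from the question, the workaround would look roughly like this (a sketch, not tested against your exact setup):
function unzip(file, target, cb) { // Unzips
  process.noAsar = true; // make Electron's fs treat .asar archives as plain files
  fs.createReadStream(file)
    .pipe(unzipper.Extract({ path: target }))
    .on('finish', function () {
      process.noAsar = false; // restore normal asar handling
      if (cb) {
        cb();
      }
    });
}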

Retrieve html content of a page several seconds after it's loaded

I'm coding a script in nodejs to automatically retrieve data from an online directory.
Since I had never done this before, I chose JavaScript because it is a language I use every day.
Following the few tips I could find on Google, I used request with cheerio to easily access the components of the page's DOM.
I found and retrieved all the necessary information; the only missing step is to recover the link to the next page, except that link is generated 4 seconds after the page loads and contains a hash, so this step is unavoidable.
What I would like to do is to recover the DOM of the page 4-5 seconds after it loads, so that I can recover the link.
I looked on the internet, and much of the advice was to use PhantomJS for this, but I cannot get it to work after many attempts with node.
This is my code:
#!/usr/bin/env node
require('babel-register');
import request from 'request'
import cheerio from 'cheerio'
import phantom from 'node-phantom'

phantom.create(function(err, ph) {
  return ph.createPage(function(err, page) {
    return page.open(url, function(err, status) {
      console.log("opened site? ", status);
      page.includeJs('http://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js', function(err) {
        // jQuery loaded.
        // Wait for a bit for AJAX content to load on the page. Here, we are waiting 5 seconds.
        setTimeout(function() {
          return page.evaluate(function() {
            var tt = cheerio.load($this.html())
            console.log(tt)
          }, function(err, result) {
            console.log(result);
            ph.exit();
          });
        }, 5000);
      });
    });
  });
});
but i get this error :
return ph.createPage(function (page) {
^
TypeError: ph.createPage is not a function
Is what I am about to do the best way to do what I want? If not, what is the simplest way? If so, where does my error come from?
If you don't have to use phantomjs, you can use nightmare instead.
It is a pretty neat library for solving problems like yours: it uses Electron as the web browser, and you can run it with or without showing the window (you can also open the developer tools like in Google Chrome).
It has only one flaw: if you want to run it on a server without a graphical interface, you must install at least a framebuffer.
Nightmare has a method wait(cssSelector) that will wait until some element appears on the website.
Your code would be something like:
const Nightmare = require('nightmare');
const nightmare = Nightmare({
  show: true, // will show the browser window
  openDevTools: true // will open the dev tools in the browser window
});
const url = 'http://hakier.pl';
const selector = '#someElementSelectorWitchWillAppearAfterSomeDelay';

nightmare
  .goto(url)
  .wait(selector)
  .evaluate(selector => {
    // This callback runs in a different (browser) scope, so it has no access
    // to Node.js variables; anything it needs (like `selector` here) must be
    // injected as an extra argument to evaluate().
    return {
      nextPage: document.querySelector(selector).getAttribute('href')
    };
  }, selector)
  .then(extracted => {
    console.log(extracted.nextPage); // your extracted data from evaluate
  });
Happy hacking!

ng-webworker - IE 11 errors

So I'm using a library called ng-webworker and attempting to run a very simple long-running task.
$scope.onParallelDownload = function() {
  function doubler(num) {
    return num * 2;
  }

  var myWorker = webWorker.create(doubler);

  myWorker.run(3).then(function(result) {
    alert("Answer: " + result);
  }, function(error) {
    var err = error;
  });
}
This works perfectly in Chrome and shows the alert, but when run in Internet Explorer 11, where I am debugging it, the error function is hit. That was still promising; however, there is no data in the error payload, which is problematic because I've absolutely no idea what is causing the web worker to fail on that particular browser.
Most likely you did not set the path to the file worker_wrapper.min.js (or worker_wrapper.js). This file is required for IE (see below). Adjust your app config to the following:
angular.module('myApp', [
  // your dependencies
  'ngWebworker'
])
.config(['WebworkerProvider', function (WebworkerProvider) {
  WebworkerProvider.setHelperPath("./bower_components/ng-webworker/src/worker_wrapper.min.js"); // adjust path
}]);
This code assumes you installed ngWebworker with bower. You might still have to adjust the path, depending on the path you are in.
If you've already set the helper path but it still does not work, check in the developer tools whether the helper file is actually being loaded (you might have set a wrong path and be getting a 404).
Details
When you pass a function to Webworker, it transforms that function into a blob which is then executed by the web worker as if it were an independent file. However, Internet Explorer treats these blobs as cross-domain, so this does not work. The workaround ngWebworker uses is to run an independent JavaScript file (the worker_wrapper.min.js we set above): the web worker runs that file, and ngWebworker passes your stringified function to it, where it is evaluated.
Note that if you're not on IE, this file will not be used.
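To make the mechanism concrete, here is a generic sketch of the blob-worker technique described above (illustrative only, not ngWebworker's actual internals):
// Build a worker from an in-memory blob instead of a separate file.
function createInlineWorker(fn) {
  var source = 'onmessage = function (e) { postMessage((' + fn.toString() + ')(e.data)); };';
  var blob = new Blob([source], { type: 'application/javascript' });
  // IE 11 treats blob: URLs as cross-domain for workers, which is why
  // ngWebworker falls back to the wrapper file there.
  return new Worker(URL.createObjectURL(blob));
}

var worker = createInlineWorker(function (num) { return num * 2; });
worker.onmessage = function (e) { alert('Answer: ' + e.data); };
worker.postMessage(3);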
