I have a JS file that is served by a Node.js server to a web browser.
When running in development, I want the client-side JS to send data to localhost so I can log the payload on my local Node.js server.
But when we deploy to production, I of course want the client-side JS to send data from the browser to my production URL.
Right now I've been manually editing the URL in the served JS file, toggling between localhost and the public URL before I do my Gulp build, but I know that isn't the right way, and it's prone to the "whoops, I forgot" problem.
What is the correct approach? Or best practice? Is there some gulp package I should be using?
In case you haven't solved this by now, you can use gulp-replace. For example, say you have a build task that reads from /src, minifies your JavaScript, and outputs it to /dist. You can pipe your JavaScript source through replace() (the first argument is the development URL, the second argument is your production URL):
var gulp = require('gulp');
var path = require('path');
var jsmin = require('gulp-jsmin');
var replace = require('gulp-replace');

var SOURCE = 'src';
var BUILD = 'dist';
var URL = 'http://www.whatever.com/api';

gulp.task('build', function () {
    return gulp.src(path.join(SOURCE, '**/*.js'))
        .pipe(jsmin())
        .pipe(replace('http://localhost:3000/api', URL))
        .pipe(gulp.dest(BUILD));
});
If you have a file ./src/script.js that does a simple jQuery AJAX request, see the before and after effect below.
Before
$.get('http://localhost:3000/api', function (data) {
    console.log(data);
});
After (ignoring minification)
$.get('http://www.whatever.com/api', function (data) {
    console.log(data);
});
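As a follow-up, to avoid the "whoops, I forgot" problem entirely, you can pick the URL from an environment variable at build time instead of hard-coding it in the task. A minimal sketch, assuming you run e.g. NODE_ENV=production gulp build (the NODE_ENV convention and the URLs are assumptions; adapt them to your setup):

var gulp = require('gulp');
var path = require('path');
var jsmin = require('gulp-jsmin');
var replace = require('gulp-replace');

var SOURCE = 'src';
var BUILD = 'dist';

// Resolve the target URL once, from the environment the build runs in.
var URL = process.env.NODE_ENV === 'production'
    ? 'http://www.whatever.com/api'
    : 'http://localhost:3000/api';

gulp.task('build', function () {
    return gulp.src(path.join(SOURCE, '**/*.js'))
        .pipe(jsmin())
        .pipe(replace('http://localhost:3000/api', URL))
        .pipe(gulp.dest(BUILD));
});

This way the dev build leaves the URL untouched and the production build swaps it, with no manual editing step.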
Related
I use FayeJS, and the latest version has been modified to use RequireJS, so there is no longer a single file to link into the browser. Instead the structure is as follows:
/adapters
/engines
/mixins
/protocol
/transport
/util
faye_browser.js
I am using the following Node.js build script to try to end up with all of the above minified into a single file:
var fs = require('fs-extra'),
    requirejs = require('requirejs');

var config = {
    baseUrl: 'htdocs/js/dev/faye/',
    name: 'faye_browser',
    out: 'htdocs/js/dev/faye/dist/faye.min.js',
    paths: {
        dist: "empty:"
    },
    findNestedDependencies: true
};

requirejs.optimize(config, function (buildResponse) {
    // buildResponse is just a text output of the modules
    // included. Load the built file for the contents.
    // Use config.out to get the optimized file contents.
    var contents = fs.readFileSync(config.out, 'utf8');
}, function (err) {
    // optimization error callback
    console.log(err);
});
The content of faye_browser.js is:
'use strict';

var constants = require('./util/constants'),
    Logging = require('./mixins/logging');

var Faye = {
    VERSION: constants.VERSION,
    Client: require('./protocol/client'),
    Scheduler: require('./protocol/scheduler')
};

Logging.wrapper = Faye;

module.exports = Faye;
As I understand it, the optimizer should pull in the required files, and then if those files have their own requires, pull in those too, and so on, outputting a single minified faye.min.js that contains the whole lot, refactored so that no additional server-side calls are necessary.
What happens is faye.min.js gets created, but it only contains the content of faye_browser.js, none of the other required files are included.
I have searched all over the web, and looked at a heap of different examples and none of them work for me.
What am I doing wrong here?
For anyone else trying to do this, I missed that the download page says:
The Node.js version is available through npm. This package contains a
copy of the browser client, which is served up by the Faye server when
running.
So to get it you have to pull down the package via npm, then go into the npm install directory; the pre-built browser client is in the "client" dir...
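For reference, the quoted note means you often don't need to copy that file at all: when you attach Faye's NodeAdapter to an HTTP server, it serves the browser client itself. A minimal sketch (the mount point and port here are arbitrary choices, not requirements):

var http = require('http');
var faye = require('faye');

var server = http.createServer();

// The adapter serves the bundled browser client at <mount>/client.js,
// i.e. http://localhost:8000/faye/client.js in this sketch.
var bayeux = new faye.NodeAdapter({ mount: '/faye', timeout: 45 });
bayeux.attach(server);
server.listen(8000);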
I want to know how I can verify if a file was downloaded using Selenium Webdriver after I click the download button.
Your question doesn't say whether you want to confirm this locally or remotely (like BrowserStack). If it is remote, then my answer is "NO": you can see that the file is getting downloaded, but you cannot access the download folder, so you won't be able to assert that the file has been downloaded.
If you want to achieve this locally (in Chrome), then the answer is "YES"; you can do it something like this:
In wdio.conf.js (to control where the file gets downloaded):
var path = require('path');
const pathToDownload = path.resolve('chromeDownloads');
// chromeDownloads above is the name of the folder in the root directory

exports.config = {
    capabilities: [{
        maxInstances: 1,
        browserName: 'chrome',
        os: 'Windows',
        chromeOptions: {
            args: [
                'user-data-dir=./chrome/user-data',
            ],
            prefs: {
                // tell Chrome to save downloads into our known folder
                "download.default_directory": pathToDownload,
            }
        }
    }],
    // ... rest of your wdio config
};
And your spec file (to check whether the file is downloaded or not):
const fsExtra = require('fs-extra');
const pathToChromeDownloads = './chromeDownloads';

describe('User can download and verify a file', () => {
    before(() => {
        // Clean up the chromeDownloads folder and create a fresh one
        fsExtra.removeSync(pathToChromeDownloads);
        fsExtra.mkdirsSync(pathToChromeDownloads);
    });

    it('Download the file', () => {
        // Code to download
    });

    it('Verify the file is downloaded', () => {
        // Code to verify
        // Get the name of file and assert it with the expected name
    });
});
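To make that last it block concrete, here is a minimal sketch of a possible verification body (the file name "expected-file.csv" is a made-up example; substitute whatever your app actually serves):

const fs = require('fs');
const assert = require('assert');
const pathToChromeDownloads = './chromeDownloads';

it('Verify the file is downloaded', () => {
    // List everything Chrome dropped into our download folder
    const files = fs.readdirSync(pathToChromeDownloads);

    // "expected-file.csv" is a hypothetical name; use your real one
    assert.ok(files.includes('expected-file.csv'),
        'Expected download not found; folder contains: ' + files.join(', '));
});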
More about fs-extra: https://www.npmjs.com/package/fs-extra
Hope this helps.
TL;DR: Unless your web-app has some kind of visual/GUI trigger once the download finishes (some text, an image/icon-font, push-notification, etc.), then the answer is a resounding NO.
WebDriver can't go outside the scope of your browser, but your underlying framework can, especially if you're using Node.js. :)
Off the top of my head I can think of a few ways I've been able to do this in the past. Choose as applicable:
1. Verify if the file has been downloaded using Node's File System (aka fs)
Since you're running WebdriverIO in a Node.js environment, you can make use of its powerful standard library. I would use fs.exists, or fs.existsSync, to verify that the file is in the expected folder.
If you want to be diligent, also use fs.statSync in conjunction with fs.exists and poll the file until it has the expected size (e.g. > 2560 bytes).
There are multiple examples online that can help you put together such a script. Use the fs documentation, as well as other resources. Lastly, you can add said script inside your it/describe statement (I remember you were using Mocha).
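A minimal sketch of such a poll (the path, minimum size, and timeout values are assumptions; wire it into your Mocha it via the done callback):

var fs = require('fs');
var path = require('path');

// Poll until the file exists and has at least minSizeBytes, or time out.
function waitForDownload(filePath, minSizeBytes, timeoutMs, done) {
    var start = Date.now();
    (function poll() {
        fs.stat(filePath, function (err, stats) {
            if (!err && stats.size >= minSizeBytes) {
                return done(); // downloaded and large enough
            }
            if (Date.now() - start > timeoutMs) {
                return done(new Error('Timed out waiting for ' + filePath));
            }
            setTimeout(poll, 250); // check again in 250 ms
        });
    })();
}

// Hypothetical usage inside a Mocha test:
it('waits for the download to finish', function (done) {
    var expected = path.resolve('chromeDownloads', 'report.csv'); // assumed name
    waitForDownload(expected, 2560, 30000, done);
});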
2. Use child_process's exec command to launch third-party scripts
Though this method requires more work to set up, I find it more useful in the long run.
!!! Caution: apart from launching the script, you also need to write the script itself in a third-party framework.
Using an AutoIT script;
Using a Sikuli script;
Using a TestComplete (not linking it, I don't like it that much), or [insert GUI verification script here] script;
Note: All the above frameworks can generate an .exe file that you can trigger from your WebdriverIO test-cases in order to check if your file has been downloaded, or not.
Steps to take:
create one of the stand-alone scripts like mentioned above;
place the script's .exe file inside your project in a known folder;
use child_process.exec to launch the script and assert its result after it finishes its execution;
Example:
var exec = require('child_process').exec;

// pathToScript and scriptName are placeholders for wherever you keep
// the compiled script. Make sure you also remove the .exe from scriptName.
var yourScript = pathToScript + scriptName;

var child = exec(yourScript);
child.on('close', function (code, signal) {
    if (code !== 0) {
        // non-zero exit code: fail the step (callback here is your test
        // framework's completion callback)
        callback.fail(online.online[module][code]);
    } else {
        callback();
    }
});
Finally: I'm sure there are other ways to do it. But your main take-away from such a vague question should be: YES, you can verify if the file has been downloaded if you absolutely must, especially if this test-case is CRITICAL to your regression-run.
I am writing tests in Protractor, which is a JS-based framework, with a Selenium test stack for running tests. I am facing an issue where I have to test file upload.
The problem I am having is that the file I am trying to upload is in the test package, whereas the Selenium node is a separate server, so it will not have the file.
I tried using a file detector; although the file name gets set, the contents don't get uploaded.
Below is the code snippet that I have.
var path = require('path');
var remote = require('selenium-webdriver/remote');

browser.setFileDetector(new remote.FileDetector());

var absolutePath = path.resolve(__dirname, "../specs/data/baseProducts.csv");
$('input[type="file"]').sendKeys(absolutePath);
Do you have any inputs for the same?
Or do you know anyone who has written file upload tests in JS using selenium?
Your help will be much appreciated.
First of all, for file upload to work with remote Selenium servers, you need the latest Protractor (currently 3.0.0), which has the latest selenium-webdriver Node.js package as a dependency.
Then, these two lines are crucial to be able to send files over the wire to the selenium node:
var remote = require('selenium-webdriver/remote');
browser.setFileDetector(new remote.FileDetector());
And now you should be able to upload files as if you were running tests locally.
Complete working test (tested on BrowserStack, works for me perfectly):
var path = require('path'),
    remote = require('selenium-webdriver/remote');

describe("File upload test", function () {
    beforeEach(function () {
        browser.setFileDetector(new remote.FileDetector());
        browser.get("https://angular-file-upload.appspot.com/");
    });

    it("should upload an image", function () {
        var input = element(by.model("picFile")),
            uploadedThumbnail = $("img[ngf-src=picFile]");

        // no image displayed yet
        expect(uploadedThumbnail.isDisplayed()).toBe(false);

        // assuming you have "test.jpg" right near the spec itself
        input.sendKeys(path.resolve(__dirname, "test.jpg"));

        // there is a little uploaded image displayed
        expect(uploadedThumbnail.isDisplayed()).toBe(true);
    });
});
Also see relevant issues:
setFileDectector unable to set remote file detector
Protractor file uploads - Support remote uploads with webdriver setFileDetector & LocalFileDetector
Thanks to @alecxe for his answer!
I just had this situation, trying to upload some files to BrowserStack. In my case I'm using Cucumber - Protractor - Node.js - BrowserStack. This code is already tested, working in the local env and on BrowserStack.
let path = require('path');
let remote = require('selenium-webdriver/remote');

this.When(/^I upload a file$/, () => {
    browser.setFileDetector(new remote.FileDetector());
    var fileToUpload = '../image_with_title.jpg';
    var absolutePath = path.join(__dirname, fileToUpload);
    page.fileupload.sendKeys(absolutePath);
});
The magic line is:
let remote = require('selenium-webdriver/remote');
This solution worked for me.
The below two lines of code did the trick.
var remote = require('selenium-webdriver/remote');
browser.setFileDetector(new remote.FileDetector());
I am now able to upload the file to the remote server.
Is there a way in a Node.js Jake build to wait until a certain file has been copied, and to advance to some operation only after the destination file can be found? I think this question pretty much comes down to "is there a way to copy files synchronously in Node.js/Jake?" (Perhaps something other than writing it from scratch using a combination of fs.readSync and fs.writeSync.)
Background:
I'm developing a web app that is run on Node.js (with Express) during development, but will be deployed on a Java server in production. (We use Jade and Stylus in the client and Express enables us to run the app without generating all the HTML files etc. and deploying it after every change.)
I use Jake for making the build, i.e. generating HTML files from Jade files and CSS from Stylus files etc. Now I'm also trying to concatenate all of the app's JavaScript files into one minimized file and change all the HTML files to use that instead of all the separate JS files that are used in "raw" form during development.
However, I now have a problem with that last step. My idea was to copy all of my Jade files into a temporary directory for the deployment build and to replace the reference (in a Jade file used as a header on all HTML pages) to the list of separate JS files with one pointing at the file just generated by concatenating and minimizing the whole bunch. But since I first copy all of the Jade files to another location (which happens asynchronously) and then try to edit one of the files, opening the file always fails, since the copy operation hasn't actually finished yet.
This is what I have now (in a simplified form) in my jakefile:
var fs = require('fs');
var fse = require('fs-extra');
var path = require('path');
var glob = require('glob');
var Snockets = require('snockets');
var snockets = new Snockets();

// generating the minimized JS file
snockets.getConcatenation(baseDir + '/scripts/all.js', { minify: true }, function (err, allJs) {
    if (err) {
        throw err;
    }
    fs.writeFileSync(generatedJsFileName, allJs);
});

// copying all the Jade files to a temp dir
glob.sync('**/*.*', {
    cwd: srcDir
}).forEach(function (file) {
    var loadPath = srcDir + '/' + file;
    var savePath = targetDir + '/' + file;
    fse.mkdirsSync(path.dirname(savePath));
    fse.copy(loadPath, savePath);
});

// trying to read one of the copied files (which fails, since the file cannot be found yet)
fs.readFile(targetDir + '/views/includes/head.jade', 'utf8', function (err, data) {
    ...
});
This might be a stupid question, and a stupid way to try to solve the problem in the first place, so suggestions for a better approach are also very welcome.
Update:
I also tried using Parseq, putting each operation (creating the JS file, copying the Jade files, reading one file) in its own function, but even that gives me the same error. If I run the script several times without deleting the target directory of the copy operation in between, the file can be found. So e.g. the path is correct and the problem really seems to be about timing.
I didn't really find an answer to the main question, so I don't know if this helps anyone else facing the same problem, but I did find a way to work around it.
I ended up using the same original Jade files for the two different conversions, but in the second conversion I use a custom js function to change the script tag reference to point to the minified file.
I.e.
var data = jade.compile(str, { filename: file, pretty: true })({
    css: function (path) {
        return '<link rel="stylesheet" href="/styles/' + path + '.css" />';
    },
    js: function (path) {
        var name = '<script src="/scripts/';
        if (path == 'all') {
            name += generatedJsFileName;
        } else {
            name += path + '.js';
        }
        name += '"></script>';
        return name;
    }
});
It might not be the prettiest workaround but it works.
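As for the original "copy synchronously" question: fs-extra also ships a synchronous copy, fse.copySync, which removes the race in the snippet from the question. A minimal sketch under that assumption (check that your fs-extra version exposes copySync; srcDir and targetDir are the same placeholders as in the question):

var fs = require('fs');
var fse = require('fs-extra');
var path = require('path');
var glob = require('glob');

// Same copy loop as in the question, but blocking on each file,
// so the read below can no longer run before the copies finish.
glob.sync('**/*.*', { cwd: srcDir }).forEach(function (file) {
    var loadPath = srcDir + '/' + file;
    var savePath = targetDir + '/' + file;
    fse.mkdirsSync(path.dirname(savePath));
    fse.copySync(loadPath, savePath);
});

var data = fs.readFileSync(targetDir + '/views/includes/head.jade', 'utf8');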
I have an application with a Node.js backend and a require.js/Backbone frontend.
My backend has a config/settings system which, depending on the environment (dev, production, beta), can do different things. I would like to propagate some of the variables to the client as well, and have them affect some template rendering (e.g. change the title or the URL of the pages).
What is the best way to achieve that?
I came up with a way to do it, and it seems to be working, but I don't think it's the smartest thing to do, and I can't figure out how to make it work with the RequireJS optimizer anyway.
What I do is expose an /api/config endpoint (through GET) on the backend, and on the client
I have the following module, config.js:
// This module loads an environment config
// from the server through an API
define(function (require) {
    var cfg = require('text!/api/config');
    return $.parseJSON(cfg);
});
Any page/module that needs the config will just do:
var cfg = require('config');
As I said, I am having problems with this approach: I can't compile/optimize my client code
with the RequireJS optimizer, since the /api/config file doesn't exist offline during optimization. And I am sure there are many other reasons my approach is a bad idea.
If you use a module bundler such as webpack to bundle JavaScript files for usage in a browser, you can reuse your Node.js modules in the client running in the browser. In other words, put your settings or configuration in Node.js modules and share them between the backend and the client.
For example, you have the following settings in config.js:
Normal Node.js module: config.js
const MY_THIRD_PARTY_URL = 'https://a.third.party.url'
module.exports = { MY_THIRD_PARTY_URL }
Use the module in Node.js backend
const config = require('path-to-config.js')
console.log('My third party URL: ', config.MY_THIRD_PARTY_URL)
Share it in the client
import config from 'path-to-config.js'
console.log('My third party URL: ', config.MY_THIRD_PARTY_URL)
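If the values differ per environment, the shared module can branch on NODE_ENV before exporting. A minimal sketch (the environment names and URLs are assumptions; bundlers like webpack typically substitute process.env.NODE_ENV at build time):

// config.js - resolves environment-specific settings once, at load time
const env = process.env.NODE_ENV || 'development';

const settings = {
  development: { MY_THIRD_PARTY_URL: 'http://localhost:3000' },
  production:  { MY_THIRD_PARTY_URL: 'https://a.third.party.url' }
};

module.exports = settings[env];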
I do the following (note that this is Jade; I have never used require.js or Backbone, but as long as you can pass variables from Express into your templating language, you should be able to place JSON in data-* attributes on any element you want):
// app.js
app.get('/', function (req, res) {
    var bar = {
        a: "b",
        c: Math.floor(Math.random() * 5),
    };
    res.locals.foo = JSON.stringify(bar);
    res.render('some-jade-template');
});
// some-jade-template.jade
!!!
html
  head
    script(type="text/javascript"
      , src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js")
    script(type="text/javascript").
      $(document).ready(init);
      function init(){
        var json = $('body').attr('data-stackoverflowquestion');
        var obj = JSON.parse(json);
        console.log(obj);
      };
  body(data-stackoverflowquestion=locals.foo)
    h4 Passing data with data-* attributes example