I have written some tests for my website using Selenium and JavaScript. I want to know the standard way of using this script in production. Locally I'm running ChromeDriver and testing my script. What I have tried: in the start script of my package.json I run node test.js && react-scripts start. What is the standard way of doing the same in production?
If there is no difference between running the tests on your local machine and in the production environment, there is no reason to run your tests differently in production.
However, it is common to run the tests against production with Jenkins or another CI/CD tool on a Unix server, in headless mode or with Selenium Grid, etc.
In that case you will have to adjust your tests to run with Selenium Grid or in headless mode respectively, to run on Unix, and so on.
All of this depends on YOUR actual configuration and how YOU will use it.
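For illustration, here is a minimal sketch of what that adjustment can look like with the selenium-webdriver npm package (assuming that is what your test.js uses; the SELENIUM_GRID_URL variable name is my own placeholder). The same test code runs against a local headless Chrome or a remote Selenium Grid, depending on the environment:

    // build-driver.js -- a sketch, not a drop-in replacement for your test.js
    const { Builder } = require('selenium-webdriver');
    const chrome = require('selenium-webdriver/chrome');

    async function buildDriver() {
      // Headless Chrome is what you typically want on a Unix CI box with no display.
      const options = new chrome.Options().addArguments('--headless=new');

      let builder = new Builder().forBrowser('chrome').setChromeOptions(options);

      // If a Grid hub URL is configured, point the same tests at the Grid instead
      // of a locally installed chromedriver.
      if (process.env.SELENIUM_GRID_URL) {
        builder = builder.usingServer(process.env.SELENIUM_GRID_URL);
      }
      return builder.build();
    }

    (async () => {
      const driver = await buildDriver();
      try {
        await driver.get('https://example.com'); // replace with the page you actually test
        console.log('Title:', await driver.getTitle());
      } finally {
        await driver.quit();
      }
    })();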
There are multiple different ways of handling Selenium in production. For example, if you have an open source project, you may consider using GitHub Actions. Here's an example of a JavaScript workflow from the Selenium project: https://github.com/SeleniumHQ/selenium/actions/workflows/javascript.yml
That's probably a good place to start, since it is open source and you can see how they run tests. Once you've learned that, you can try out some of the other popular solutions if you want (e.g. Jenkins, Azure Pipelines, AWS, Google Cloud, CircleCI, GitLab, Travis CI, etc.).
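For a rough idea of the shape of such a workflow, here is a minimal sketch (the action versions, Node version and node test.js command are assumptions based on the question; the Selenium project's own workflow linked above is far more elaborate):

    # .github/workflows/selenium-tests.yml -- a minimal sketch, not the Selenium project's setup
    name: selenium-tests
    on: [push, pull_request]

    jobs:
      test:
        runs-on: ubuntu-latest   # GitHub-hosted Ubuntu runners ship with Chrome preinstalled
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 20
          - run: npm ci
          - run: node test.js    # the test entry point from the question, run headlessly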
There is no special way to use Selenium in production. Assuming you actually need Selenium in production (for example as a crawler), what may differ are your dependencies: in development you will have the testing dependencies, and Selenium will use whichever driver is available, whether in production or in development/testing.
Perhaps the arguments you pass to the chosen browser differ between the environments used (production, development, etc.), although that doesn't make much sense to me, because your tests should reproduce the same scenario as production.
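If you do end up with per-environment arguments, this is a small sketch of what that might look like with selenium-webdriver (the NODE_ENV values and flag sets below are assumptions, not anything from the question):

    const chrome = require('selenium-webdriver/chrome');

    // Hypothetical per-environment flag sets; ideally these would be identical,
    // for exactly the reason given above.
    const argsByEnv = {
      development: ['--window-size=1920,1080'],
      production: ['--headless=new', '--no-sandbox', '--disable-dev-shm-usage'], // common flags on Linux CI hosts
    };

    function chromeOptionsFor(env = process.env.NODE_ENV || 'development') {
      return new chrome.Options().addArguments(...(argsByEnv[env] || []));
    }

    module.exports = { chromeOptionsFor };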
I am using Webpack to build a webapp that is designed for the browser.
After my build process I have two files: index.html and app.bundle.js
To my dismay, I have found that although the development Webpack configuration works, the production configuration does not, because of errors during minification.
I am looking for the most basic way to run a test that does the following:
Opens the index.html file (which contains <script defer="defer" src="app.bundle.js">) and sees if there are any errors when the script contained in app.bundle.js runs.
That is it.
I need a full browser environment (requestAnimationFrame, fetch, etc.).
I have tried JSDOM and I get Error: Uncaught [ReferenceError: fetch is not defined], which looks like it has a bunch of issues, and then there is a tremendous number of overlapping libraries and tools like PhantomJS, Zombie.js, Puppeteer, headless Chrome, etc., and I honestly can't figure out how to just open my app. I have tried to explore all of these tools but they are all complicated.
Ideally I wouldn't have to create a local webserver, but I'd be fine with that if this is what is required.
The 'best' way to do this, IMHO, would be to have a full test environment that mirrors your production environment, which you can deploy to and then run tests against using a framework like WebdriverIO or Playwright. But that can be prohibitively expensive and requires a fair amount of DevOps work.
Your second best option is probably to configure a local webserver on whatever machine you run your tests on and spin up Selenium or Puppeteer tests there. Leveraging a framework like WebdriverIO or Playwright, a basic sanity-check test shouldn't take more than an hour or two to set up.
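For the narrower "just open index.html and fail if anything throws" case from the question, a sketch along the following lines is usually enough. It assumes Puppeteer (Playwright's page.on('pageerror') works the same way); the dist folder name and port 8080 are placeholders for your actual build output and a free port:

    // sanity-check.js -- a sketch; "dist" and port 8080 are placeholders
    const http = require('http');
    const fs = require('fs');
    const path = require('path');
    const puppeteer = require('puppeteer');

    const BUILD_DIR = path.resolve(__dirname, 'dist');
    const TYPES = { '.html': 'text/html', '.js': 'text/javascript', '.css': 'text/css' };

    // Tiny static server so fetch() and relative URLs behave as they would in production.
    const server = http.createServer((req, res) => {
      const file = path.join(BUILD_DIR, req.url === '/' ? 'index.html' : req.url);
      fs.readFile(file, (err, data) => {
        if (err) { res.writeHead(404); res.end(); return; }
        res.writeHead(200, { 'Content-Type': TYPES[path.extname(file)] || 'application/octet-stream' });
        res.end(data);
      });
    });

    (async () => {
      await new Promise((resolve) => server.listen(8080, resolve));

      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      const errors = [];
      page.on('pageerror', (err) => errors.push(err)); // uncaught exceptions thrown by app.bundle.js
      page.on('console', (msg) => {
        if (msg.type() === 'error') errors.push(new Error(msg.text()));
      });

      await page.goto('http://localhost:8080/', { waitUntil: 'networkidle0' });

      await browser.close();
      server.close();

      if (errors.length > 0) {
        console.error('Errors while loading index.html:', errors.map((e) => e.message));
        process.exit(1);
      }
      console.log('index.html loaded without errors');
    })();

Run it with node sanity-check.js after your production build; the non-zero exit code makes it easy to wire into npm scripts or CI.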
What are the reasons that people install Node.js on a PC, and can a Node.js website be developed without installing Node.js on the PC? If yes, what are the disadvantages?
There can be many reasons, but some common ones:
When developing in any language, you'll need to be able to run and test your code, and running it locally makes it easier to do so.
Additionally, although you can use remote debugging, debugging is faster and easier to set up locally.
Many of the tools used by web developers are themselves written in JavaScript, and to run those locally an engine capable of doing so is required. It makes sense that these tools are developed in JavaScript, not just because their primary user group will be able to understand and extend their code, but also because many of these tools need to perform tasks and integrate with components that require something like a JavaScript engine to begin with.
In many situations, running a local environment on your development system will be cheaper and easier to maintain than having a separate test server; and you don't want to run your untested code on a production server.
Not directly related, but npm and Node.js have many uses beyond serving as a back-end to JavaScript-driven websites. Many people that have Node.js installed have nothing to do with web development, but have it for one of the many other reasons.
To answer your 2nd question: "can a Node.js website be developed without installing Node.js on PC?" Yes, but I can see very little reason to want to do so, unless you must. The advantage might be that you avoid having a complicated piece of software with a large footprint and possibly some security concerns on your development machine. But the disadvantages for the average developer likely far outweigh that, even more so since you could just sandbox the entire development environment if security is your main concern.
If you are writing everything from scratch, you don't need a package manager, but there is so much great stuff out there that you can use rather than writing it yourself! If you want to use it, you need a package manager that allows you to download specific packages (optionally at specific versions), which may use packages themselves. YOUR source code repository doesn't need to store a copy of every package you use, or of every package used by the packages you use: as long as your source code specifies which packages it uses in a way your package manager understands, all you need to do is keep a manifest of the specific packages (optionally at specific versions) your code uses directly.
"NPM" is one such package manager. "bower" is another but that uses NPM under the hood. Maven is a package manager I've seen in Java projects, and NuGet for MS projects, but for JavaScript projects it's usually NPM. And NPM uses node.
I have built a web automation programme with Selenium & JavaScript. Now I want to make it usable for everyone, so that anyone can use it without any dependencies or coding environment; I mean, any non-technical person should be able to use it easily.
How can I do it?
One option is to use: https://www.npmjs.com/package/pkg
This command line interface enables you to package your Node.js project into an executable that can be run even on devices without Node.js installed.
The most useful part will be dependencies:
During packaging process pkg parses your sources, detects calls to require, traverses the dependencies of your project and includes them into executable.
However, I expect this will not manage the WebDriver executable. You'll most likely need to ship chromedriver/geckodriver/etc. alongside your resulting exe.
Looking forward, I expect you'll need to manage your distributable as new browser versions roll out. Different users will be on different versions at different times, and the relevant drivers will need to be updated and re-shipped.
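As a sketch of the driver part, selenium-webdriver can be pointed at a chromedriver binary shipped next to the packaged executable instead of one on the PATH (the file name and location here are assumptions; on Windows you'd ship chromedriver.exe, and geckodriver works analogously):

    const path = require('path');
    const { Builder } = require('selenium-webdriver');
    const chrome = require('selenium-webdriver/chrome');

    // When packaged with pkg, process.execPath is the generated executable, so a
    // chromedriver copied into the same folder can be located relative to it.
    const driverPath = path.join(path.dirname(process.execPath), 'chromedriver');

    async function createDriver() {
      return new Builder()
        .forBrowser('chrome')
        .setChromeService(new chrome.ServiceBuilder(driverPath)) // use the bundled driver
        .build();
    }

    module.exports = { createDriver };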
I am just trying to get my head around unit testing in JavaScript and RequireJS. I am building a web app and obviously only want tests to run in development builds, not production builds.
Questions:
Do you just test when you want to, or do you have JS tests running on every page load when in development?
If tests are only on demand, then how do you trigger your tests to run? Query strings (e.g. ?testing=true) or something like that?
I just need an idea of how people go about testing in development. I am using BackboneJS, RequireJS and jQuery on the front end with a NodeJS/ExpressJS server on the backend.
For a Backbone project at work we have a Maven build process that runs our automated JavaScript tests through JsTestDriver, and we read the results with Sonar. I usually run the tests manually (with 'mvn test'), but I could easily tell Maven to run them every time I save a file, for example. I wrote a post that shows how to integrate QUnit, RequireJS, and code coverage with JSTD that is independent of Maven: js-test-driver+qunit+coverage+requirejs. It also contains links to a QUnitAdapter that is way more up-to-date and developed than the one on the JsTestDriver site. I'll update this post when I manage to write about how I got JsTestDriver working with Maven and Sonar. Hope it helps.
Grunt is a popular JS build tool. There's something called grunt-watch that can monitor certain files for changes and execute tasks accordingly. You could easily run unit tests with something like this on every save (see the Gruntfile sketch at the end of this answer).
Usually end-to-end tests take longer, and we use the CI for that. I've seen a presentation on Meteor TDD that does end-to-end tests after every save though.
There are many end-to-end test frameworks, and they can run in a headless browser like PhantomJS using a build tool like Grunt. Some frameworks open an actual browser to run the tests, but run via the command line and report results using XML.
If you break out your components enough, the tests could have a small enough scope to run on each save.
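Here is a minimal Gruntfile sketch of the "run unit tests on every save" idea, assuming grunt-contrib-jasmine as the test task and grunt-contrib-watch for the file watching (the src/ and spec/ paths are placeholders):

    // Gruntfile.js -- a sketch; adjust paths and the test task to your project
    module.exports = function (grunt) {
      grunt.initConfig({
        jasmine: {
          app: {
            src: 'src/**/*.js',
            options: { specs: 'spec/**/*Spec.js' },
          },
        },
        watch: {
          tests: {
            files: ['src/**/*.js', 'spec/**/*.js'],
            tasks: ['jasmine'], // re-run the unit tests on every save
          },
        },
      });

      grunt.loadNpmTasks('grunt-contrib-jasmine');
      grunt.loadNpmTasks('grunt-contrib-watch');

      grunt.registerTask('default', ['jasmine', 'watch']);
    };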
For some core code I use JsUnit + Rhino on the build server. For more complex bits (usually interface) I use Selenium (it also runs on the build server). I don't test anything on page load; I only use uncompressed versions of scripts.
I don't have any solution for integration tests.
We have a rich web client. Our controllers and service facades are written in CoffeeScript (JavaScript) and jQuery. In the past they would have been Java.
To run our JavaScript Jasmine tests from Jenkins/Hudson, we use Java's JUnit and HtmlUnit to load a test-oriented JSP page which includes the Jasmine specs.
When HtmlUnit tries to run, it blows up in getPage(), probably because of an XML parser classpath issue which is extremely challenging to track down in our world.
We just want to be able to run our JavaScript tests from Jenkins and have it report failure if a JavaScript test does not pass. We are only using JSP and HtmlUnit in order to run the JavaScript tests. Can we load the JavaScript tests and JavaScript code into a JavaScript engine, with Jenkins as the thing that kicks it off? If so, how?
Sounds like you're in a Java environment. My jasmine-maven-plugin might be a good fit.
Jasmine Reporters would also be a solution. It has instructions for running headlessly via PhantomJS for example, and it can generate JUnit XML so Jenkins can understand the test results natively, graphing test count, duration, and failure over time.
Also, the "xvfb-run" wrapper often provided with xvfb is a great help here, so you can do "xvfb-run phantomjs.runner.sh ..." in a truly headless environment.
I've previously solved this problem by running the tests with a Node.js package called jasmine-node.
This solution of course requires Node.js and a few Node modules to properly run the Jasmine tests. There is no real browser running the tests, but an emulated one using a module called jsdom, which basically creates a headless browser, or more specifically a DOM, which the tests can interact with.
There are Node modules for jQuery, Underscore and probably others too, so these can be tested as well. You can even skip the whole browser emulation if you'd rather run the tests in a browser, though I find that too cumbersome compared to automated Jenkins testing.
jasmine-node generates JUnit test reports, which Jenkins can interpret just fine.
I just realized there is also a jenkins-jasmine-node plugin that might ease this process.
Grunt is your friend
use Grunt http://gruntjs.com/
with grunt-contrib-jasmine https://github.com/gruntjs/grunt-contrib-jasmine
with Node.js http://nodejs.org/
on Jenkins using https://wiki.jenkins-ci.org/display/JENKINS/NodeJS+Plugin
I've got this setup and it's really nice, plus it gives you a place to start making your build server do other nice things such as deployment, unit testing, and so on.
Can you use Selenium? That would actually use a real browser and get as close to the real environment as possible.