How can I avoid serving too many files on Karma server

I am using Karma as my unit test runner. Generally speaking, it is awesome, but I am facing some annoying problem.
The website I am developing is built on a quite large UI library with about 40,000 files. All the resources from the library are loaded on demand, so I have to list them all as 'served' files on the Karma server. As a result I frequently hit the EMFILE error, which means too many files are open at the same time.
Of course I can raise the open-file maximum through ulimit -n, but this is not as graceful as I would like, and loading tens of thousands of files is really time-consuming. Furthermore, I am going to work on a shared machine on which I do not have root access.
I am currently running a separate HTTP server and letting the Karma server proxy all the lib files to it, but this is only a temporary solution.
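For reference, the kind of karma.conf.js proxy setup I mean looks roughly like this (the /base/lib/ prefix and the port are placeholders for wherever the library is requested from and wherever the extra static server runs):

    // karma.conf.js (sketch) - forward library requests to a separate static server
    module.exports = function (config) {
      config.set({
        frameworks: ['jasmine'],
        files: ['test/**/*.spec.js'],
        // instead of listing ~40000 library files in the files/served list,
        // requests for them are proxied to another HTTP server
        proxies: {
          '/base/lib/': 'http://localhost:9877/lib/'
        }
      });
    };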
So I am wondering if there is any way to avoid including all the library files as 'served'?

Related

Basic Automated Test of a Javascript Browser Library

I am using Webpack to build a webapp that is designed for the browser.
After my build process I have two files: index.html and app.bundle.js
To my dismay I have found that although the development Webpack configuration works, the production configuration does not, because of errors during minification.
I am looking for the most basic way to run a test that does the following:
Opens the index.html file (which contains <script defer="defer" src="app.bundle.js"></script>) and sees if there are any errors when the script contained in app.bundle.js runs.
That is it.
I need a full browser environment (requestAnimationFrame, fetch, etc.).
I have tried JSDOM, but I get Error: Uncaught [ReferenceError: fetch is not defined], which looks like a known set of issues. Then there is a tremendous number of overlapping libraries and tools (PhantomJS, Zombie.js, Puppeteer, headless Chrome, etc.) and I honestly can't figure out how to just open my app. I have tried to explore all of these tools, but they are all complicated.
Ideally I wouldn't have to create a local webserver, but I'd be fine with that if this is what is required.
The 'best' way to do this, IMHO, would be to have a full test environment that mirrors your production environment, which you can deploy to and then run tests against using a framework like WebdriverIO or Playwright. But that can be prohibitively expensive and requires a fair amount of DevOps work.
Your second best option is probably to configure a local webserver on whatever machine you run your tests on and spin up Selenium or Puppeteer tests there. Leveraging a framework like WebdriverIO or Playwright, a basic sanity-check test shouldn't take more than an hour or two to set up.
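For illustration, a bare-bones Puppeteer sanity check might look something like this sketch. It assumes the built index.html and app.bundle.js sit in a dist folder, uses the serve-handler package as a throwaway static server, and fails the process if the page throws any uncaught errors; adjust names and ports to your setup.

    // sanity-check.js (rough sketch, not a drop-in solution)
    const http = require('http');
    const handler = require('serve-handler');   // small static file handler
    const puppeteer = require('puppeteer');

    (async () => {
      // throwaway static server so the bundle loads over http rather than file://
      const server = http.createServer((req, res) => handler(req, res, { public: 'dist' }));
      await new Promise(resolve => server.listen(8080, resolve));

      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      const errors = [];
      page.on('pageerror', err => errors.push(err.message));   // uncaught exceptions in the page
      page.on('console', msg => {
        if (msg.type() === 'error') errors.push(msg.text());   // console.error output
      });

      await page.goto('http://localhost:8080/index.html', { waitUntil: 'networkidle0' });

      await browser.close();
      server.close();

      if (errors.length > 0) {
        console.error('Errors while running app.bundle.js:', errors);
        process.exit(1);
      }
      console.log('index.html loaded and app.bundle.js ran without errors');
    })();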

HMR For Production

My team is building a large React application. I want to know whether what we are trying to accomplish with regard to build and deployment is possible with Webpack.
Let's say our team is building Google Admin. There are 4 modules/apps within the admin that 4 different teams are focused on. There is then a console application that is the entry point to these 4 modules/apps. We want to be able to work on each of the modules independently and be able to deploy them independently.
The way we have it set up right now, there are 4 separate applications that act as dev harnesses to build these modules. We build them and copy the distribution .js and .js.map files to the console's ./modules folder. We reference these modules lazily using System.import.
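For context, the lazy reference looks roughly like this (a sketch; the mount method is made up just to illustrate handing control to the loaded module):

    // console app (sketch) - lazily load a module build that was copied into ./modules
    System.import('./modules/module-one')
      .then(function (moduleOne) {
        // hand the module a DOM node to render into (mount is a hypothetical API)
        moduleOne.default.mount(document.getElementById('module-one-root'));
      })
      .catch(function (err) {
        console.error('Failed to load module-one', err);
      });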
Is it possible, while the console app is built and in production, to “swap out” the module-one.js and module-one.js.map files that the console depends on without having to rebuild and redeploy the entire console app?
Goals:
Do not package these apps for npm. This would definitely require the console app to update and rebuild.
Build any module and deploy just that specific module to production without having to rebuild the console application.
Do not redirect to separate SPAs.
I tried my best to explain the goal. Any input would be much appreciated. I have found nothing in my search.
Webpack loads the modules into memory and watches the filesystem for changes, so as long as webpack is running you shouldn't have an issue replacing any given module. However, webpack will attempt to rebuild the entire in-memory bundle with each module change (as it has no way of knowing that your module is truly independent). The only thing I can think of would be to write a shim between the console app and the modules that watches the files (like webpack) but only replaces the in-memory version of the local file that was changed. Reading this back, I'm not even sure it makes sense to me...

Organising a client & server SPA/Node.js application

I am writing an application with a Node.js backend and a single-page web app front-end.
I am keeping the client and server logic in the same project for simplicity and speed of development.
I am considering how best to organise the artifacts.
The Node.js part is straightforward because it doesn't need to go through a battery of pre-processors (transpilation, minification, concatenation, etc.).
The front-end needs to be transformed per the above, and I guess placed in a dist folder.
The current hierarchy of files is like so:
my-app
- src
  - client
  - server
Should I put the dist folder for the client artifacts under src/client?
Has anyone tried this and found problems with this approach?
I am using Heroku (a deployment system that uses git).
Committing the built artifacts for the client feels wrong, but if I want to deploy it by pushing to Heroku I think I need to commit them. Is this correct?
This question, as is, invites opinionated answers, so I'll start by saying this is by no means the only way to go, but in my opinion, it is the easiest to work with and makes the most sense.
The production client code, after pre-processing, should be located in my-app/dist or my-app/dst, which could either mean distribution or destination, depending on how you look at it. Either way, my recommendation is to commit this folder, as it saves you a lot of hassle debugging remotely.
For example, if your code works locally but not remotely, using something like the postinstall hook to generate your dist folder adds yet another suspect to check when trying to determine what the issue is with your program.
Another advantage of committing the dist folder is that it allows you to specify all the packages you use for your build process as devDependencies rather than dependencies. This is a huge plus: it makes deployment a lot faster and reduces memory usage on your Heroku process.
That being said, I still recommend (as you already probably plan to do) using an automated watch task to build your dist folder for ease of development, even if you decide you don't want to use that same build process remotely and opt for committing the dist directory instead. You could add that as a custom npm command, e.g. npm run build and have that invoke your gulp task.
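For example, a minimal gulpfile along these lines (gulp 4 style; the globs and the task body are placeholders for your real pre-processing steps) can back both the build command and a watch task:

    // gulpfile.js (sketch)
    const { src, dest, watch, series } = require('gulp');

    function buildClient() {
      // replace this passthrough with your transpile/minify/concat pipeline
      return src('src/client/**/*').pipe(dest('dist'));
    }

    function watchClient() {
      watch('src/client/**/*', buildClient);
    }

    exports.build = series(buildClient);
    exports.watch = series(buildClient, watchClient);

Wiring "build": "gulp build" into the scripts section of package.json then lets npm run build kick it off.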
One last thing. For those of you using templating languages like pug or dust or ejs instead of a framework like react or angular, I recommend determining whether you can run any of your templates to build static HTML files that will be served in production.
If not, you should at least compile your templates (not to be confused with running them) by following the recommendations provided by your particular templating language. Typically, they'll suggest using their command line utility to generate the compiled templates, so that they don't have to be compiled every time they're invoked in production. This will make your node.js server respond faster to requests at the expense of using more memory to cache the compiled templates.
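As a rough illustration of the compile-once-then-run idea using pug's programmatic API (the path and locals are made up; the CLI route your templating language's docs suggest achieves the same thing at build time):

    // server.js (sketch) - compile the template once at startup, run it per request
    const express = require('express');
    const pug = require('pug');

    const app = express();
    const renderHome = pug.compileFile('views/home.pug');   // compiled once, cached in memory

    app.get('/', (req, res) => {
      res.send(renderHome({ title: 'Home' }));               // only runs the compiled function
    });

    app.listen(process.env.PORT || 3000);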
If you're planning to go this route, I would edit the Node.js app.js/index.js to serve static files and point the directory at dist/.
Also, you would need to tell Express to forward all non-API requests to the front end.
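A minimal sketch of that wiring, assuming the client build ends up in dist/ and the API lives under /api (the /api/health route is just a stand-in for your real routes):

    // app.js (sketch)
    const path = require('path');
    const express = require('express');

    const app = express();

    const apiRouter = express.Router();                       // stand-in for your real API routes
    apiRouter.get('/health', (req, res) => res.json({ ok: true }));
    app.use('/api', apiRouter);

    // serve the built client artifacts
    app.use(express.static(path.join(__dirname, 'dist')));

    // any other GET request falls through to the SPA entry point
    app.get('*', (req, res) => {
      res.sendFile(path.join(__dirname, 'dist', 'index.html'));
    });

    app.listen(process.env.PORT || 3000);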

How to deploy ReactJS/NodeJS together onto Amazon Web Services (AWS)?

I currently have ReactJS + NodeJS/ExpressJS + Webpack on EC2 (Amazon Web Services) under one project and would like to get it deployed together at once, as one project.
What are some suggestions on how to go about doing so? I've done the research and have only seen tutorials on deploying one or the other, whether it be just ReactJS or just NodeJS. Any insights or leads would be greatly appreciated.
Will accept/upvote answer. Thank you in advance
You don't "deploy" ReactJS, it's just a static file or files like any other JS libraries in your applications. You also don't deploy Webpack. Webpack should run on a developer machine (or in CI/CD stack or build system).
As for the NodeJS part just use Elastic Beanstalk.
I do not commit builds to source control. I see that a lot and it can make things easier, but you can also forget to rebuild as you have to do it manually, and it adds a lot of bloat to your repo.
I believe builds should be run as part of the deployment process. Assuming you are using git, you can add a hooks/post-receive script in a remote repo there. When you push to that remote, the script will run; this is where I do my webpack build.
You may want to look into https://github.com/git-deploy/git-deploy for context, but I do this manually.
In my projects, on the deployment machine I do git init --bare /var/git/myproject.git, then add the script at /var/git/myproject.git/hooks/post-receive. The hook checks out the code into /var/www/myproject and runs the build, which fills /var/www/myproject/build. Then it removes the old /var/www/myproject/public and renames build to public. And done.
I'm coming from more of an operations background and would say that if your goal includes keeping that site up as much as possible, then use Packer to generate AMIs and CloudFormation to build an Application Load Balancer (the newer, cheaper sibling of the ELB) in front of an Auto Scaling group that keeps the EC2 instances up and running.
I'm currently working on a large scale project doing exactly what you describe. First off, there are so many different ways to do this, so what you really need is some general guidelines to get started, then we can dig a little deeper into details when some initial decisions are made, if you'd like. If you've already got the app deploying and running in two separate steps, but are just looking to combine those, I can definitely help. I'd just need to know how you're currently building/deploying. If you're just getting started on building your pipeline and need to set up the process from scratch, then read on:
First off, you'll want to set up some kind of build server that will install your npm dependencies and run your webpack build. Most likely you'll want a separate webpack config that's just for your build server; this will give you a build optimized for production or qa/staging environments. This config should split out vendor files that you won't update all the time, pull out separate css files with the extract-text plugin, and uglify the files. If you have an isomorphic React app, or are using es6 features not supported in your version of node, then you'll need a webpack build for your server code as well. This is really different from the hot-reloading build you'll want on your local machine while you're actually coding the app. I'll be happy to show examples of our webpack config files for both local development and our CI build if you'd like. You may also need a build.sh or makefile to do something with the compiled .js files that your webpack build creates, but that'll depend on your deployment, which I'll cover later. You can run your production build locally as you're getting your config just right, and fire up the app from those files to test that it's all working.
Additionally, since you'll likely want to be able to automate all of this, you probably want to run your tests and linting right before you build your app; we run eslint and mocha/jsdom to run our enzyme/expect specs as part of our build.
Once that's all working nicely, you'll most likely want to set up a build server that can run your builds automatically. My team is using Jenkins for this, which is a little more work to set up, but it's free (aside from the ec2 box we run it on). There are also a ton of subscription-based build/continuous-integration servers, such as Travis and CodeShip. There are plenty of articles on the pros and cons of these different products and how to set them up. The bottom line is you'll want a build server that can pull down your code from source control, install npm deps, lint, test and build your app. If anything fails it should fail your build, and if your build succeeds you'll have some sort of archive that you'll later deploy to an ec2 instance.
In our shop we use a build.sh file to tarball up our build archive (basically a folder with our node server files as well as our minified client files, css files and any fonts or images needed to run the app) and upload it to an S3 bucket that we deploy from. We like this fairly old-school method because the tarball will never change, so we have ultra-reliable rollbacks.
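To make the separate production config idea concrete, a rough sketch might look like the following (not our actual config; these are webpack 2/3-era APIs since that's where extract-text and CommonsChunk live, the entry names and paths are assumptions, and newer webpack versions replace these pieces with optimization.splitChunks and mini-css-extract-plugin):

    // webpack.config.prod.js (sketch)
    const webpack = require('webpack');
    const ExtractTextPlugin = require('extract-text-webpack-plugin');

    module.exports = {
      entry: {
        app: './src/client/index.js',
        vendor: ['react', 'react-dom'],            // long-lived libraries split into their own chunk
      },
      output: {
        path: __dirname + '/dist',
        filename: '[name].[chunkhash].js',
      },
      module: {
        rules: [
          { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
          { test: /\.css$/, use: ExtractTextPlugin.extract({ use: 'css-loader' }) },
        ],
      },
      plugins: [
        new webpack.optimize.CommonsChunkPlugin({ name: 'vendor' }),   // vendor split
        new ExtractTextPlugin('[name].css'),                           // separate css file
        new webpack.DefinePlugin({ 'process.env.NODE_ENV': JSON.stringify('production') }),
        new webpack.optimize.UglifyJsPlugin(),                         // minification
      ],
    };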
What you do with your build archive will depend on how you want to do deployments. We have a custom deployment system using Puppet, but there are plenty of products that do this, such as Elastic Beanstalk, that would be much easier to set up. You'll want some kind of process supervisor to actually run your node app, so unless you have a devops team that wants to build custom pipelines, using AWS's built-in features will probably be the easiest way to get started. As usual, there are many ways to do this, but the basic principle is that you need something to download your build archive and run/supervise your node process. You also may want to be able to create and configure ec2 boxes on the fly (Puppet, Chef, etc.), or even use containers (Docker), which allow you to move complete stacks around as single units. Using automation to create and configure servers is crucial if you need to scale your app, but it is complicated and may not be necessary for smaller projects. This is definitely an area where you can start simply and add complexity later on, as long as you have good long-term goals and take the necessary steps to prepare for future complexity.
All of this can get you pretty far into the weeds, so it's best to find the simplest thing that will serve your needs as you get started and then add complexity as real-life situations demand. I'll be happy to elaborate on any of these details if you provide a little more context about how big and well-funded a project you're working on. If it's a little side project to learn the tech, I'd have very different advice than if you're trying to build an app that will have a lot of traffic and/or complex features.
This could get 100 different answers and they could all end up being good ideas. First, you mention React + Node.js: keep in mind that these solve different tasks. React is going to be the front end, served out via static files. Node.js is focused more on the server side and would be the code that serves data. They can easily work together; you might use Express (Node.js) as the web server to serve the HTML/React pages.
Unfortunately, I saw that you mentioned webpack, so you are going to have to 'build' your application with something - either via webpack, gulp, grunt, etc. This is where source control and build servers are great - but if you're new to it, it might be more complex than you need.
If you just have basic EC2 instances as web servers, and only one or two of them, then the biggest hurdle is just pushing up your code. Something like https://deploybot.com/ could work, as it can push your git repo down to multiple hosts via ftp, etc. If you wanted to get a bit fancier, you could look at something like Jenkins or some of the other options.
Docker is a great choice, and if you are going to be dealing with multiple developers, server environments and deployments, it's worth the time. Otherwise, keep it simple and just get your code onto the EC2 instance ;).

Why use module loading in the backend?

I have been trying to find an answer to why webpack cares about module loading on the backend. Is there a reason why this may be needed?
Does JSPM do backend module loading as well?
Assuming your first question is along the lines of "Why pre-bundle JavaScript code for the client?"
There are many reasons for module bundling. A few:
Simple file aggregation: Bundling related code makes many tasks easier and more intuitive. Instead of deploying a large directory tree of files, after bundling those files may instead be a single bundle file.
Loading performance: Individually loading dependencies that are in separate files on the client side has historically been very slow. Each file must be parsed and evaluated separately and, depending on the module system used, may incur considerable delay while waiting for dependencies to be discovered and loaded.
Media type abstraction: Bundlers typically allow methods for bundling non-JavaScript content. Including assets like images and stylesheets is convenient and encourages explicit/clear dependency by parts of your application that use them.
Tree shaking: By analyzing dependency among modules and code it's often possible to selectively include what is needed by the application and reduce the size of your overall code base. This isn't inherently a characteristic of bundling, but is commonly done because there is some notion of a build step.
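A tiny illustration of the tree-shaking point (file names are made up): when only add is imported, a bundler that understands ES module syntax can drop multiply from the output entirely.

    // math.js
    export function add(a, b) { return a + b; }
    export function multiply(a, b) { return a * b; }

    // app.js - only add is imported, so multiply can be eliminated from the bundle
    import { add } from './math.js';
    console.log(add(2, 3));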
Regarding your second question:
JSPM does offer this functionality. It can be done on the command line with the jspm bundle command.
The simplest reason is performance. Opening and closing a file is a slower process than the time it takes to send (stream) the file, so the fewer open and close operations needed, the faster the server can send the files requested. By reducing the number of files that make up a JavaScript/web project, the browser finishes getting the files sooner and can start processing them for the end user.
What a good build process can do for your web project goes beyond simply concatenating all your JS files: tools such as JSPM can also bring css and html files together into one bundle.js file, further improving the end-user experience.
