Automated swapping out of JavaScript library URLs on deploy - javascript

I'm building a purely client-side, JavaScript-based web app, and I'm looking to optimize the workflow for switching to CDN URLs for the JavaScript libraries I use on the production server.
To be able to work offline, my laptop development machine loads all the libraries from a /js folder on a local web server. When I deploy the app, I want to substitute those URLs with CDN versions, e.g. the jQuery library hosted on Google. Since there's no server-side logic, I can't check for something like Rails.env.production? there, as I would if this were a Rails app.
I'm deploying by pushing to a git repo on the production machine and running a post-receive hook. I imagine I could run some kind of sed routine that switches the URLs over during the update in the same post-receive script, but I'm curious whether there's a more elegant solution.
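For reference, a minimal sketch of what that sed step might look like in a post-receive hook (the work-tree path, file name, and jQuery version are hypothetical):
#!/bin/sh
# post-receive: check out the pushed code, then rewrite the local
# library URL to its CDN equivalent (paths and versions are assumptions).
GIT_WORK_TREE=/var/www/app git checkout -f
sed -i 's|src="/js/jquery.js"|src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"|' \
    /var/www/app/index.html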
The easiest thing would be to simply put client-side logic into the app to check which hostname it was called from, but I'd like to keep that as a last resort.
There is a previous discussion on fallback loading here, but in a broader sense my question is about automatically swapping out one block of text for another when deploying to a production machine.

You want to commit the change that points to the CDN version on the branch you will be publishing from.
Commit the change with the development version to your development branch.
Assuming this is the only way your branches differ, merge with the "ours" strategy on both branches:
git merge -s ours mainbranch
This will ensure that you don't get that code change from now on wherever you merge or rebase.
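A sketch of that setup, assuming the deploy branch is called production and the development branch master (branch names are hypothetical):
# One-time setup: each branch keeps its own version of the URL.
git checkout production        # branch that gets deployed
# ...edit index.html to point at the CDN, commit, then:
git merge -s ours master       # record master's version as already resolved
git checkout master            # day-to-day development branch
git merge -s ours production   # and the reverse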
Hope this helps.

You could simply make the CDN host work locally by adding it to your /etc/hosts.
If that's not an option, use inline JavaScript to add the <script src="..."></script> tags to the document, and put the logic that decides whether to use the CDN into that script.
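A minimal sketch of that approach, assuming development happens on localhost (the hostname check and the jQuery version are placeholders):
<script>
  // Load jQuery from the local /js folder during development,
  // and from Google's CDN everywhere else (hostname is an assumption).
  var isDev = location.hostname === 'localhost';
  var src = isDev
    ? '/js/jquery.js'
    : 'https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js';
  document.write('<script src="' + src + '"><\/script>');
</script>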

Related

Changing web environment URLs automatically

I'm currently working on a web project (JS, HTML, PHP with no frameworks) using AWS technology, including AWS version control. I work locally with XAMPP, and when I push my code to the master branch a trigger deploys the code to the production environment.
My main problem is that I need to manually change the development/production URLs, or use an HTML tag and relative URLs, and it's a mess. When working in development my URL is something like "192.168.64.2/web", and in production it's "myweburl.com". So I need something that checks the actual URL and changes all of those across the project. Something like a global variable with the URL to use.
I've read about using "dotenv" and "dotenv-webpack" for environment management, but I would need to install them on the server, along with Node.js, and configure .gitignore, and I'm hoping for an easier solution. I would like to avoid Node.js. I've searched all around the internet, but I only find approaches that don't convince me; if this question is a duplicate, just redirect me there.
Is there any secure approach using just JavaScript, PHP or a config file?
The objective is to have something that, depending on my URL (DEV, TEST, PROD), changes all the URLs of my project, and protects the other environments' URLs from being seen.
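For illustration, something like this is what I have in mind, as a hypothetical sketch (hostnames taken from my examples above; note that anything shipped in client-side JavaScript is visible to visitors, so this alone doesn't hide the other environments' URLs):
// config.js - hypothetical: pick the base URL from the current hostname.
var BASE_URL = (function () {
  switch (window.location.hostname) {
    case '192.168.64.2': return 'http://192.168.64.2/web'; // DEV
    case 'myweburl.com': return 'https://myweburl.com';    // PROD
    default:             return 'https://myweburl.com';
  }
})();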
Thank you

Dynamic site regeneration GatsbyJS

I have a website in GatsbyJS that fetches huge datasets of dynamic data on page load via fetch in React. The displayed data needs to be semi-live (e.g. refreshed every 5 minutes).
I am wondering how I can achieve SSR speeds with this, because fetching dynamically doesn't cut it. Is the answer a cron-scheduled rebuild, and if so, what happens while the build folder is being replaced?
You can use a service like Netlify. They provide a webhook URL that triggers a build when requested. You could then have a cron job hit this URL every 5 minutes to trigger a rebuild. Netlify handles the build for you, replacing your site only if the build succeeded.
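For instance, a crontab entry along these lines (the build-hook URL is a placeholder you would copy from the Netlify UI):
# Trigger a Netlify rebuild every 5 minutes (hook URL is hypothetical).
*/5 * * * * curl -s -X POST https://api.netlify.com/build_hooks/your-hook-id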
If you want to do it yourself, you can maybe use Caddy, a web server that has a plugin for deploying from git, similar to how Netlify works (the site is only updated if the build succeeds). Note: this is not yet supported by Caddy 2 (the current version).
Another option is PM2, which also builds your site for you, handling failures gracefully (your site is always up, and only replaced when the build succeeds).
I think a cron-scheduled rebuild is probably your best bet. You may want to check out Gatsby Cloud. They have recently added incremental builds, which means that only certain pages get rebuilt; if only a subset of your pages needs to be rebuilt, that could significantly speed things up. I think what happens during the replacement of the build folder depends on where you host your site. Some hosting services like Netlify will probably use some sort of URL swap to instantaneously replace your old deployment with the new one. If you host it on a regular VPS, there will probably be some inconsistencies as the files are copied over/regenerated. Aside from the newly added incremental-build feature, I think what you're looking for is precisely Gatsby's Achilles heel.

Client-Side templating with nodejs and pug

I have a web app that I'm building with dynamic widgets constructed on the client side. Currently I am using Node.js with pug as my server-side templating library, and I like the simplicity of pug.
I would like to have a series of small pug files on the server that the client side can use as building blocks to construct the user desired widget.
I tried using a previous solution found here:
client side server side templating nodejs
However, that solution looks like overkill for what I want. Moreover, the ezel project looks like it is no longer maintained; it hasn't been updated in 2 years and it still uses jade (for which npm gives me a lot of errors).
I just want to be able to construct my dynamic widgets in pug in the browser. This page seems to have exactly what I want:
https://pugjs.org/api/reference.html
Specifically, the pug.renderFile('path/to/file.pug', options); function seems like exactly what I want to use to dynamically build my widgets (the user has full control over how the widgets are constructed/displayed, so the browser needs to construct the HTML views dynamically).
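For context, on the server that call would look something like this (the template path and locals are hypothetical):
// Render a pug file to an HTML string (Node.js side).
const pug = require('pug');
const html = pug.renderFile('path/to/file.pug', { title: 'My Widget' });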
My issue is the dependence on:
https://pugjs.org/js/pug.js
And the need to do require('pug') in the browser. I already have pug installed as part of my package.json. Is there a more robust way of getting pug.js directly? I am still new to web development (my background is in C++/Java), so I'm not entirely sure whether using pug.js in the browser directly is the best solution or there is a better standard one. The Stack Overflow question I linked is the only post I came across that is remotely similar.
I researched and tested a solution that I really like. NPM has a cool package called pug-cli.
https://www.npmjs.com/package/pug-cli
I modified my npm start script to do the following:
pug -c -w --name-after-file -o public/js/views views/client/
What this allows me to do is write my client views in pug in the views/client folder. A task runs in the background and monitors changes in views/client/. Upon changes, it compiles the .pug files from the views/client/ folder into JavaScript and saves them into public/js/views/. Then in the client code you just include the generated template .js file and call the template function in your JS. There is no need for pug.js on the client side. This runs with debugging on; to turn debugging off, run with -D.
For instance, views/client/example.pug will automatically get compiled to public/js/views/exampleTemplate.js.
Then all you have to do in the client is include this JS file and call exampleTemplate(params) to get your templated string (you can call it arbitrarily with different parameters to get different views). This lets me arbitrarily/dynamically compose and construct views from the client when the views are unknown on the server side.
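A sketch of what that looks like in the browser, assuming the example.pug template above (the element ID and locals are hypothetical):
<script src="/js/views/exampleTemplate.js"></script>
<script>
  // exampleTemplate() returns an HTML string rendered from example.pug;
  // pass whatever locals the template expects.
  document.getElementById('widget').innerHTML =
      exampleTemplate({ title: 'Hello', items: [1, 2, 3] });
</script>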
I like this approach for my workflow, but I am open to better suggestions.
If you use webpack:
https://github.com/pugjs/pug-loader and
https://github.com/willyelm/pug-html-loader serve well.
In case of rollup:
https://www.npmjs.com/package/rollup-plugin-pug + https://www.npmjs.com/package/rollup-plugin-pug-html seem to be a good solution (currently testing how they work; we are now experimenting with native ES6 modules, with rollup bundles as a backup).
In case of browserify:
https://www.npmjs.com/package/jadeify (I have never tried it)
Also, pug-cli has a -c flag, so you can just run any watcher and generate JS files as mentioned above, but that seems a bit too bare-bones given the variety of bundling tools we have in 2017.

Best way to keep files that are used in multiple projects in sync?

I have a few files called "helpers.scss", "helpers.js" and "consolerules.js" that I use in every one of my projects. When I'm working on a project I modify one of the files; for example, I will add a function for replacing all occurrences of a string within a string to "helpers.js", but then when I open another project I don't have that function.
Or I will add a helper CSS class to helpers.scss in one project, and then I don't have it in the others.
What is the best way to always keep them in sync when I edit them in one of the projects? I was thinking of Bower, gists, git, Dropbox, Google Drive or something like that...
I have used two ways to handle this:
Get a CDN-like server
Have a single version of those files and place them on a server. For example you could have URLs such as:
https://cdn.example.com/css/helpers.css
https://cdn.example.com/js/helpers.js
If you want to support versions (maybe you should?), you can add that to the filename:
https://cdn.example.com/css/helpers-1.3.css
https://cdn.example.com/js/helpers-1.2.js
Or to the path if you view all your files as having one common version:
https://cdn.example.com/1.2/css/helpers.css
https://cdn.example.com/1.2/js/helpers.js
Versioning is useful if you want to test a website with the newest version before using that version on your live site.
This is most certainly the easiest way if you can implement it that way. Now all your other websites will use those URLs instead of local versions of the files:
<link rel="stylesheet" type="text/css" href="https://cdn.example.com/1.2/css/helpers.css"/>
Pull those files at build time
Depending on how you organize your websites (it is really not clear from your question), and assuming you have folders on your machine with the original source, you can bring in those files as required with a script that you run before you upload your sites.
In my case, I like to do that in three steps:
I write the files
I copy the files to a .../build/... folder
I send the .../build/... folder to my test or production server
One reason for this is to generate a build folder that includes exactly what you want, verify it, then send it to your server. The verification effort happens only once, when you write your script. Once done, it should not require any additional work.
So... one reason to have such a script is that I can compile my files. For example, if you write PHP code, the servers only need the most compressed version of your code (unless you are debugging and need to find line numbers...). The script that generates the build folder could do:
for p in php/*.php
do
    php -w "$p" > "build/$p"
done
Now your PHP code on your server may be something like 20% smaller.
Similarly, you could copy your helper.css file as in:
cp ../helper-project/css/helpers.css build/public_html/css/.
This copies the helpers.css file to your build folder. Since it grabs that file from your unique ../helper-project folder, you will always end up with the latest.
And instead of a simple cp command, you could also minify that file at the same time:
cleancss --remove-empty ../helper-project/css/helpers.css > build/public_html/css/helpers.css
The only problem here is that if you make changes to the helper-project, it won't automatically update all the projects. You still have to go into each project, run the script(s) that generate the build folder, and copy the result to your servers. Yet I find this a practical way of doing things, because that way I know when I do the update, I can test the resulting website(s) before going to production, and once I update a production site, I can verify that it's still all working just fine.
You can do this with git (or any modern VCS); I assume you are using some sort of VCS for your code.
If you have a project being managed in git, you can even add multiple remotes, such that you can pull in code from multiple sources.
If you are using a VCS like git, then it is just a matter of doing a git pull <remote ref> <branch ref> whenever you want to sync up.
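A minimal sketch of that, assuming the shared helper files live in their own repository (the URL and names are hypothetical):
# In each project, add the shared helpers repo as an extra remote...
git remote add helpers https://example.com/you/helpers.git
# ...then sync up whenever the helpers change.
git pull helpers master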
Otherwise, the comments to your question offer some alternatives.

How best to share a QooXDoo project using Git?

I'm developing a little project with QooXDoo and want to share the source with some friends. Should I just check in the whole project folder?
You should add the build and cache directories to the relevant .gitignore files (these directories are the equivalent of 'object' files, so they should not be stored in version control unless you have a very good reason).
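For example, a minimal .gitignore at the project root (the directory names assume qooxdoo's default generator output):
# qooxdoo generator output - regenerated on demand, keep out of git
build/
cache/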
As for Qooxdoo itself, I usually place it next to the project so it's easy to duplicate the setup. I end up with something like this:
/
    tmp
    qooxdoo-sdk-xxx
    my-app-directory
HTH
I guess the best approach would be to just check in your application plus a little instruction file on how to set up a qooxdoo SDK.
That way, developers can work with your application locally, using a qooxdoo SDK to work against.
As long as you do not need to work against the current trunk, it's better to work with the latest SDK.
See the download page at qooxdoo.org
You may want to have a look at remOcular, a qooxdoo front end with a Perl backend, complete with an autoconf setup.
