I'm using Meteor 1.5, FlowRouter and Blaze.
I've been working on an app which requires offline support. Unfortunately, it also has several (large) areas that are only available to a small subset of users, so to avoid bloating the initial JS download with content most users won't need, I'm using dynamic imports at the FlowRouter.route({action}) level.
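For context, that pattern looks roughly like this (the route, module path, and template names are placeholders):

FlowRouter.route('/admin', {
  name: 'admin',
  action() {
    // The page's module is only fetched (over the websocket) when the route is visited.
    import('/imports/ui/pages/admin.js').then(() => {
      BlazeLayout.render('mainLayout', { content: 'adminPage' });
    });
  },
});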
For offline mode (in addition to handling the data, etc.) I'm using a service worker to cache the JS, CSS, and HTML. Unfortunately, because dynamic imports are delivered over the websocket, it isn't possible to cache them as they are loaded.
Fortunately, the user has to notify the server of their intent to work offline (so relevant data, files, videos, etc. can be downloaded), which gives me the opportunity to load these dynamic imports before the client goes offline.
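That preload step could be as simple as eagerly running every lazy import once the user opts in; a minimal sketch, with hypothetical module paths:

// Run when the user declares their intent to work offline.
const offlineModules = [
  () => import('/imports/ui/pages/admin.js'),
  () => import('/imports/ui/pages/reports.js'),
];

export function prefetchForOffline() {
  // Each import() resolves once its module has arrived over the
  // websocket, so awaiting them all ensures the code is on the client.
  return Promise.all(offlineModules.map((load) => load()));
}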
What are my options for caching dynamic imports? What I've considered so far:
writing a simple package with all the dynamic imports loaded statically, and with {lazy: true} defined in the package.js (see the sketch after this list)
requires quite a lot of restructuring
lazy: true means the package isn't actually available via a URL; it seems to be available only as a dynamic import itself.
writing a server side "fetcher" that will take a package name as an argument and serve the content from the file system
I don't know how the client-side packages are stored on the server, or whether the server has access to those files at all.
using browserify (or something similar) to manually generate a static js bundle in /public which can be downloaded when the client states their intent to go offline
It's very manual, and a change to the dynamic imports could easily be missed.
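For reference, the package.js for the first option might look something like this (the package name and main module are placeholders):

Package.describe({
  name: 'myapp:offline-modules',
  version: '0.0.1',
});

Package.onUse(function (api) {
  api.use('ecmascript');
  // lazy: true keeps the package out of the eager initial bundle;
  // it is only loaded when something import()s it.
  api.mainModule('client.js', 'client', { lazy: true });
});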
Has anyone attempted this before? I know Meteor doesn't officially support service workers, but as far as I can tell, with the exception of dynamic imports, it plays nicely with them.
In the Gatsby documentation it says that the default build mode is SSG:
SSG is the default rendering mode in Gatsby. While the name has the word “static” in it, it doesn’t at all mean boring or lifeless. It simply means the entire site is pre-rendered into HTML, CSS, and JavaScript at build time, which then get served as static assets to the browser.
But it seems that when you build it, the components and libraries have to be SSR-friendly, and you need to use workarounds when using client-only libraries.
From the documentation it seems like there are three options for rendering:
SSG - Static Site Generation
DSG - Deferred Static Generation
SSR - Server Side Rendering
What if I'm not interested in using SSR and just want to serve the static SSG version of a Gatsby site? Is there an option to build a purely static, client-side site, as with Vite or Create React App, and not have it complain about server-side rendering errors like this?
failed Building static HTML for pages - 1.639s
ERROR #95312 HTML.COMPILATION
"window" is not available during server-side rendering. Enable "DEV_SSR" to debug this during "gatsby develop".
Building a static site and doing server-side rendering are very nearly the same thing. The primary difference is when it is done (at build time instead of on demand).
The code to generate the HTML to be delivered to the client still has to be executed, and it still has to run in an environment where window is not available.
So no. You still need to do the workaround so that the code which can only run on the client is only run on the client.
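For reference, the usual workaround is a guard along these lines (a minimal sketch):

// Skip browser-only code during the build-time HTML render.
const isBrowser = typeof window !== 'undefined';

export function getViewportWidth() {
  if (!isBrowser) {
    return 0; // fallback used while building static HTML
  }
  return window.innerWidth;
}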
In my project I load JSON files asynchronously via import().then.
Occasionally, when new content is available but hasn't yet been applied, the old cached script bundle tries to load the JSON files by their old names (the bundler generates new hashed names on every build), but those files are no longer available.
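For illustration, the loading pattern is roughly this (the path is a placeholder):

// The hashed URL for this chunk is baked into the old bundle; after a
// new deploy, that file may no longer exist on the server.
import('./content/posts.json')
  .then((mod) => console.log('loaded', mod.default))
  .catch((err) => console.error('chunk no longer available', err));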
I saw that many apps use a toast message to inform their users about the update so they can refresh, but is there another way to solve this issue?
No. There's no automatic way to handle this.
Based on your question, I take it your SW configuration uses a cache-first strategy. Because it's cache-first, the situation you describe can and will happen sometimes.
You have two options:
Precache all the JSON files too. This way, when the old JS code tries to fetch the JSON files asynchronously, they will be served from the SW's cache instead of going to the network. The client gets an old version of the whole app, including the JSON files.
Implement some sort of complex(ish) custom logic that tries to get the files and, in an error situation, talks to your server, fetches the correct filenames, and tries again with the new names. You could easily generate a file that lists all the current JSON filenames (see the sketch below).
Both options have different gotchas and they might or might not work depending on the application.
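A minimal sketch of the second option, assuming the server publishes a manifest at /asset-manifest.json mapping logical names to the current hashed filenames (both the URL and the manifest shape are assumptions):

async function loadContent(name) {
  try {
    // Normal path: the hashed chunk referenced by this bundle still exists.
    const mod = await import(`./content/${name}.json`);
    return mod.default;
  } catch (err) {
    // The old filename 404'd: fetch the current names and retry once.
    const res = await fetch('/asset-manifest.json');
    const manifest = await res.json();
    const fresh = await fetch(manifest[`content/${name}.json`]);
    return fresh.json();
  }
}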
I'm feeling pretty lost and stupid in the brave new world of "modern" web applications with node.js, module bundlers, task runners, and such.
I have a working Zend Framework (ZF 1) PHP application (which also embeds WordPress multi-site allowing users to create their own blog sites). It is hosted on an Apache server with mod_php. It uses html tables a fair bit for forms and data displays, but thankfully not for entire page layouts, though the css is based on a fixed-width page of 1000px.
The application began development under the notion that javascript should be used only for "progressive enhancement", though eventually we succumbed to requiring that javascript be enabled in order to get correct behavior. We support signup and login using OAuth2 authentication through several providers (Facebook, Google, LinkedIn, Twitter and others), but only via the server flow not the javascript SDKs. We use jQuery and limited amounts of Zend_Dojo javascript libraries plus a handful of homegrown javascript functions (in addition to whatever WordPress uses).
We fairly recently added an Nginx reverse proxy in front of the original Apache webserver. It hosts our ssl certificate and serves static file assets.
Now we're looking to move toward a more responsive design to better accommodate mobile and tablet users, and musing about progressive web apps. So major changes to css and increased use of javascript are in the cards. Although the Nginx serving of static assets gives us ETags, Google PageSpeed Insights tells us that we have render-blocking downloads of javascript and css resources, and that we don't take advantage of browser caching.
It appears from various articles I've seen that the Webpack bundling tool can provide major help in addressing all of these performance bottlenecks. But for the life of me I don't see how it fits into this ecosystem. My mental model of how our site works is that an HTTP request is analyzed by PHP code, dispatched to a PHP action routine that accesses session data and our MySQL database, and then outputs html via phtml templates (ZF1 view scripts) that contain embedded PHP tags. The phtml templates may contain <script>, <style>, and <img> tags, either directly or by being injected into the html by other PHP functions that manage the overall page layout and the content of its <head> section.
But when I look at Webpack, it seems to expect some sort of top-level javascript file from which it can build a dependency tree of other javascript and css files via import or require directives, or something. And it somehow supports cache busting by hashing the contents of static asset files, creating new files from them with the hash value embedded in the file name, and editing the references to those files to include the hash. But for this application, all of the references to javascript, css, and image files appear either in .phtml files (usually within embedded <?php> tags) or in pure .php files. Yet webpack doesn't appear to process PHP files at all, so I don't see how it can either find the references to javascript, css, and image files, or edit them to include a hash! The articles I've seen about using webpack in PHP projects don't seem to mention this issue at all. There's an HTML loader, but not one for PHP. Is there some sort of standard practice for using javascript in an independent, modular fashion within PHP sites instead of using <script> tags that I just don't know about?
And finally, different web pages have different requirements for javascript and styles, while webpack seems to want a single javascript main entry point from which all dependencies can be found. Does using webpack in this ecosystem imply making a separate webpack project for each page? I've read lots of articles about webpack, but they all seem to be dealing with web apps that are not structured at all like mine!
I did read this answer here on Stack Overflow, which I expected would enlighten me. It came fairly close: it explained that I do need to create a top-level index.js file that requires all of the other javascript files. But since different pages use different javascript, I deduced that I would need to create a different index.js file for each page (and thus treat each page as a different project). Can that be true? Many articles talk about "single page apps", so perhaps that's just the assumption in these kinds of descriptions. Or maybe I need to understand "code splitting". Maybe if I keep reading that answer over and over again I'll eventually get the gist. It talks about CSS and style-loader and css-loader, but it's not clear to me how the <style> tags present in my .phtml files get processed by them (not to mention styles enqueued in WordPress code). I've attempted the SurviveJS and official Webpack documentation, but again, they seem to be talking about a different universe than the one I live in. I'm thinking that a Rosetta Stone exists somewhere that would map this new world back to traditional PHP apps! Any pointers?
It's an old question, but I'll attempt to give some pointers, as I've just gone through a similar hurdle: integrating Webpack with a legacy ZF1 app to do:
Asset bundling
Cache busting via appending version hash to filename
Catering for different bundles for different app routes
My approach:
First I checked whether newer versions of Zend_View provide some solutions for versioning front-end assets. I found this:
https://docs.zendframework.com/zend-view/helpers/asset/
and really liked the idea of encapsulating versioning concerns in a separate config file. Obviously, to be able to use this format I either have to use this Zend_View helper in the legacy app, or simply extend the legacy Zend_View class and add an ->asset() method that reads in a resource map of this format:
'view_helper_config' => [
    'asset' => [
        'resource_map' => [
            'css/style.css' => 'css/style-3a97ff4ee3.css',
            'js/vendor.js'  => 'js/vendor-a507086eba.js',
        ],
    ],
],
The additional advantage of sticking to this format is that once you upgrade your app to a newer version of Zend Framework or Zend Expressive, you won't need to change anything; just start using the Asset helper of the modern Zend_View.
Once we have a map like that, we need to make webpack write it. HtmlWebpackPlugin is not restricted to HTML files: we can supply our own template and have full control over how the file is written, using webpack variables (such as asset name and hash). The big advantage here is that webpack doesn't need to overwrite the (typically numerous) view templates, which can turn into a mess and has its own problems (e.g. what if we include scripts in controllers via headScript calls?); it only writes the map. This solves issue #2, cache busting. Issues #1 and #3 (asset bundling and creating different bundles) can now be solved the native webpack way: by creating multiple bundles and then writing the config file using our custom template:
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  mode: 'development',
  entry: {
    'js/vendor.js': './frontend/src/js/vendor.js',
    'css/style.css': './frontend/src/css/style.js',
    // and so on...
  },
  output: {
    filename: '[name]-[hash].js',
    path: path.resolve(__dirname, 'public'),
  },
  plugins: [
    // (Ab)use HtmlWebpackPlugin to emit the PHP resource map
    // from our own template instead of an HTML page.
    new HtmlWebpackPlugin({
      filename: 'view-helper-config.php',
      template: 'view-helper-config.tpl',
    }),
  ],
};
And the view-helper-config.tpl would be:
'view_helper_config' => [
    'asset' => [
        'resource_map' => [
            <% for (var chunk in htmlWebpackPlugin.files.chunks) { %>
            '<%= chunk %>' => '<%= htmlWebpackPlugin.files.chunks[chunk].entry %>',
            <% } %>
        ],
    ],
],
I am setting up a new site using angular, mvc and web api. The static content (js, css, images, etc) will be in Site A, the MVC site will be in Site B and the api will be in Site C. These are all separate sites, not virtual directories. I'm trying to use bundling in the MVC site to bundle the js and css files from the static site for use in the MVC site.
I've set up a Virtual Path Provider but when I load the site angular doesn't work and also doesn't throw any errors. I'm assuming that the angular.js file is not being loaded from the bundle because if I include a local javascript file angular works.
Is what I want to do possible? If so, how?
Virtual Path Providers only apply to views, not things like CSS and JS. Unfortunately, there's not really a good way to handle this scenario. The bundler can only act on files within the same project, not those in a separate project.

If you want a separate site to handle your static assets, then you pretty much just have to resort to referencing them directly. You can use the Web.config's app settings section to set the base URL for your static site (that way you have just one place to go if you need to change it later, and you can do things like run transforms on it to have a different value in production).

This also means you're somewhat on your own for bundling and minification. However, you can make your static site an MVC site as well just to get the bundling infrastructure, and then use that site to handle bundling. All your bundles should be at the standard location of /Content/[style bundle name].css or /bundles/[script bundle name].js. There's a cache-busting string added to the path, but you can somewhat handle that manually.
I would like to use angular.js for my Image Editing Tool in my website. Do I need node.js also?
I don't understand the scenario. If I don't need it, then when do we use both nodejs and angularjs together?
I feel your pain.
For someone new to Angular 2 development, I understand the pain of having to learn server-side technologies for something that is essentially a client-side technology. From what I understand:
node.js is only used to manage the dependencies of an Angular 2 application. If you can somehow get those dependencies without using node.js, npm, or jspm, then you can run and develop your application offline. However, doing it manually takes an inordinate amount of time: you have to download files by hand, and those files may have dependencies of their own, which require yet more downloads (yes, I've been there). node.js, or npm or jspm for that matter, automates this process, as well as taking all the necessary steps to configure the files (jspm), so that whenever you use a particular dependency in your application, that dependency's own dependencies are also present on your system.
Some browsers, particularly Google Chrome, restrict files loaded locally for security purposes, so certain HTML5 technologies used by Angular 2 will produce an error when loaded via the file: protocol. So you need a server from which to serve your application, so that all the available HTML5 technologies are available for Angular 2 to run.
node.js is also needed for the hot-module-reload capability for rapid application development, since it provides a file-watcher API to detect changes to source code.
But there is a way to develop an Angular 2 application offline without node.js.
Remember when I said that if you can manage to get all the required dependencies, you can run and develop your application offline? If you can somehow find or create a package that has all the required dependencies your application will need, then you do not need npm or jspm to manage the dependencies for you.
For the file-access-restriction problem, you can load your project as an extension. Extensions can use all the available HTML5 technologies as well as some powerful APIs (not available even to applications served from a server), while at the same time being local to your development environment. So you do not need to fire up a web server to access HTML5 technologies if you serve your application as an extension.
For the hot-module-reload capability, you can approach it from the other way around. Instead of having a file watcher in the web server monitor changes to files in the local system, you can do it from the application itself. Since the application can fetch or XMLHttpRequest the resources it needs, you can periodically re-fetch those resources and compare them to some cache. But how do you know which files to check? You can look for link, script, or img tags within the page. If you use SystemJS as the module loader, then you can use its registry to look for the files needed by your application but not loaded in the page, since they have been transpiled or something. While doing all this can be a performance drain on your system, along with the added overhead of transpiling or preprocessing non-native code, this job can be outsourced to a web worker, which frees up the main execution thread for your application code.
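A bare-bones sketch of that polling approach running inside a web worker (the file list and interval are arbitrary):

// worker.js: poll the watched files and notify the page when one changes.
const watched = ['app.js', 'styles.css', 'index.html'];
const lastSeen = new Map();

async function checkForChanges() {
  for (const url of watched) {
    const res = await fetch(url, { cache: 'no-store' });
    const body = await res.text();
    if (lastSeen.has(url) && lastSeen.get(url) !== body) {
      postMessage({ changed: url }); // the main thread decides how to reload
    }
    lastSeen.set(url, body);
  }
}

setInterval(checkForChanges, 2000);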
Don't believe me? Here's proof.
The Angular in Chrome project on GitHub contains a zipped package with the required dependencies for developing a minimal Angular 2 application (by minimal, I mean the Tour of Heroes tutorial referenced on the quickstart page). So if you are on a system not supported by node.js (yes, they exist; ChromeOS, for instance) or just on a restricted system where node.js isn't available, all the required dependencies are there, and you do not need npm or jspm to manage them for you.
There is a proof-of-concept extension which serves the Tour of Heroes tutorial (the development files, TypeScript and all) locally as a Chrome extension.
The extension also implements hot-module-reload functionality by hooking into the hmr-primitives developed by Alexis Vincent for SystemJS. The hot-module-reload functionality is enabled by a single JavaScript file, so if this functionality is not needed or is taking up too many resources, you can just remove the offending line of code.
Be warned, though.
If you are using this system, then you need a way to update your development package as technology moves forward, and it moves at a rapid pace (what with talk of Angular 3 when Angular 2 has just been released), or the technologies you are using to develop your application may become obsolete, or an API change somewhere along the line may break your application in the future. You are also not guaranteed up-to-date repositories for the dependencies, since these kinds of packages are maintained manually.
Bundling your application as a Chrome extension, as in Angular in Chrome, will introduce performance bottlenecks. Since code is transpiled and modules are lazy-loaded, you lose JIT compilation and the other performance enhancements modern JavaScript engines use to optimize code running in the browser. However, what you lose in performance, you gain in the flexibility to develop with the technology you prefer. There is always a tradeoff. Moreover, the performance hit occurs only at the beginning, as code is loaded; once it has been loaded by the application, the engine will know how to apply its optimizations. When you distribute your application, you really should compile the needed resources to take advantage of the performance enhancements of modern JavaScript engines.
The hot-module-reload capability is currently a hackish implementation of a file watcher that relies on common project naming conventions (temp1.ts, temp1.css, temp1.htm), since there is no way (I might be wrong on this) to get a definitive list of all the resources needed by the application but not loaded on the main page (the transpiled or pre-processed resources).
You don't need NodeJS for creating a client side image editing tool.
AngularJS is a web application framework, maintained by Google and the community, that assists with creating single-page applications, which consist of one HTML page with CSS and JavaScript on the client side.
But if someday you want to upload and store those images on a server and make them accessible to multiple clients, then yes, you will also need a server. This server could be built with NodeJS.
node.js is used to write Javascript on the server side.
angular.js is a client side framework.
You don't need node.js to use angular.js, but you can install npm (node package manager) to get some awesome tools that will make your life as an Angular developer much easier.
For example, Yeoman, which is a great scaffolding tool.
There are many other tools available on npm; here is a link to their site.
Learn more about Angular at the official Angular website or on the Angular YouTube channel.
No. Angular is used at the client side and Node for the server side.
They often go together as the MEAN stack, but it's not necessary.
You don't need Node.JS for AngularJS to work. NodeJS is server side, AngularJS is client side.
If you are new to AngularJS, I'd suggest this AngularJS tutorial.
In the tutorial you will use NodeJS, and you will understand why the two work together even though Node is not necessary.
It's hard to answer without knowing how your image editing tool works. But to answer your question: no, you do not need Node.js to use AngularJS.
Angular is a front-end JavaScript framework which operates in the client's web browser.
Node is a runtime which can execute JavaScript and is often used on a server, perhaps as a replacement for PHP (as in the MEAN stack).
Also, because Node can execute JavaScript, it can be used on your local computer when developing Angular applications to do background tasks such as minifying CSS and JavaScript and running tests.
So if your image editing tool is developed in JavaScript and your application uses Angular and Node (as a web server), the code could be executed on either the client side or the server side.
Have a read about the MEAN stack to see where Node and Angular fit in. You don't need Node at all, but it's nice to develop everything in the same language.
Reason for installing Node.js
Since web browsers such as Chrome, Firefox, etc. understand only JavaScript, we have to transpile our TypeScript to JavaScript. The TypeScript transpiler itself runs on Node.js, which is why Node.js is required to turn TypeScript code into JavaScript.