I have an application that consists of N modules, almost all of which will be loaded on demand.
Is there any good way to organize an AngularJS application with dynamically loaded and unloaded modules?
Why do we need to unload modules?
The number of modules (N) can grow arbitrarily large and I can't guarantee any maximum, so I try to avoid excessive memory use;
I don't think it is best practice to leave code in the browser that we are not going to use (I don't like the idea that the tab with my webapp consumes all available memory and hangs the browser);
I think Google is going that way too. You can work with Gmail all day and it still runs properly (Google I/O 2013 - A Trip Down Memory Lane with Gmail and DevTools: http://www.youtube.com/watch?v=x9Jlu_h_Lyw).
As of Angular 1.2.16 and 1.3.0 beta, the bootstrap() method attaches a collection of modules to a specific DOM element, which becomes the root of the application. There is no corresponding method to unbind them. The ng-app directive is just a shortcut for bootstrap(), so the same limitation applies.
The angular-requirejs-seed project Artemis linked to does not answer your question. It uses the NG_DEFER_BOOTSTRAP! property to suspend the bootstrap process and dynamically define which modules to add at the time the page is loaded. It does not unload anything.
The most straightforward way to overcome Angular’s inherent inability to unload modules is to destroy the DOM element where your modules are running. Unfortunately, by destroying the element, you also lose whatever markup you had. One solution to that problem is to put your app’s markup into an HTML5 <template> element and clone its contents. Here’s an example I wrote in JSBin where an ambiguously named someGuyInASuit directive loads a picture of Dr. Jekyll or Mr. Hyde depending on which module is loaded.
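In case it's useful, here is a minimal sketch of that destroy-and-rebootstrap pattern (the element IDs and module names are hypothetical, and it assumes the <template> holds a single root element):

var template = document.querySelector('#app-template');
var container = document.querySelector('#app-container');
var currentRoot = null;

function swapModule(moduleName) {
  // Removing the old root element via jqLite triggers $destroy on its scopes.
  if (currentRoot) {
    angular.element(currentRoot).remove();
  }
  // Clone fresh markup from the <template> and bootstrap the new module on it.
  var clone = document.importNode(template.content, true);
  currentRoot = clone.firstElementChild;
  container.appendChild(clone);
  angular.bootstrap(currentRoot, [moduleName]);
}

swapModule('jekyllModule'); // later: swapModule('hydeModule');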
This work-around is not well suited to an app consisting of many modules, especially if you intend to swap them in and out frequently as the user interacts with it. For one, all your models will be destroyed. Also, all the config() and run() blocks will be re-run. You may want to either fork Angular and add your own un-bootstrap method or have a look at another framework such as React, which has a method for unloading components built in.
Maybe you'll find this post helpful: http://rarabaolaza.tumblr.com/post/56707155391/a-plugin-based-architecture-for-angularjs-apps
It describes an approach that uses RequireJS and some metadata to create a plugin-based Angular app where every plugin is a module; maybe you can adapt it to your needs or get some ideas from it. But no module unloading, I'm afraid.
If you believe this could help you, I'm currently revising it and (I believe) improving the ideas, so feel free to ask.
HTH
Look at this project. You can use Angular alongside RequireJS: https://github.com/tnajdek/angular-requirejs-seed
Regarding your project, I wouldn't worry about having numerous Angular modules; they are just lightweight objects that let you split your code into logical parts.
If you run into memory problems, consider refactoring the code to get rid of performance killers.
We have a large application spread across multiple teams, built with JavaServer Pages (JSP). The goal is to migrate to Angular. A monolithic migration/launch was deemed impractical, so a gradual migration is preferred.
The idea is to use a Webpack 5 Module Federation app shell to load Angular micro frontend remotes into the existing JSP app. The question is whether to load the remotes as Angular apps or Web Components. The thought is that Web Components might allow them to embed a reusable microfrontend fragment into the JSPs in cases where they can't migrate an entire page at once, or they have components that will exist in both the unmigrated JSPs and the new Angular pages.
After the migration, they'll either keep the micro frontend architecture if it's justified, or abandon it and merge the remotes into one Angular app.
Another alternative might be lazy-loaded modules, rather than opening the Pandora's box of micro frontend architecture: just informally split the app into lazy-loaded modules per team. The downside here is possibly more teams stepping on each other's toes in the repository, but that's no different from how they've been operating. Their concern about lazy-loaded modules is that they don't think they'll be able to do something like this:
<!-- my ancient JSP site. LOL page load with every click -->
<JSP-header></JSP-header>
<myAngularComponent></myAngularComponent>
<script type="text/javascript" src="https://lawlcats.com/myAngularComponent.js"></script>
<JSP-footer></JSP-footer>
All in all, the proposed solution is incredibly complex. These teams are brand new to Angular and are already considering combining different frameworks within a micro frontend architecture, AND implementing web components. Sounds like a huge lift to me. I'm also unsure if they've considered how they'll manage the repository across teams.
Does anyone see room for improvement or flaws in this plan? I'd love suggestions for the micro frontend remotes being Angular vs Web Components, vs abandoning micro frontends altogether in favor of lazy loaded modules.
Well, it would be good if your team first fully grasped the implications of MFEs and WebComponents as tools.
Micro Frontends are self-contained, stateful, full-fledged, and fully black-box applications. Maybe it would help your team to think about them as iframes.
You stick an MFE app somewhere on your page, and that's it. It does its own thing. The host app just hosts all the MFE apps from their different HTTP ports, but it doesn't know anything about what's going on inside them.
In the simplest classic example, there is no communication between the host and the apps, and the apps never talk to each other. AFAIR, if you want them to communicate, it's theoretically possible via HTTP (since they do live on specific ports), or via wild shenanigans like LocalStorage. But it's generally not easy.
WebComponents, on the other hand, are just raw components that do one specific thing. They are also black boxes, but usually super tiny and thin. You can think of them as something like a native <input>.
An <input> knows how it should be styled, it knows which raw browser Web APIs it should talk to, it knows that it should render text in response to the user typing on their keyboard, and how to expose a couple of values and events to the external world. But ultimately, an <input> in itself is pretty dumb and can hardly be called an "application". It's just a small building block of the actual modern JS app.
<input>s also don't talk to each other - why would they - but they expose clear, native HTML APIs for input and output, so the host app can very easily talk to them and make use of them. The host is still responsible for actually knowing how to do that, though.
As for your use case, your options depend on your team's technical and business needs.
MFE Federation is a pretty strictly defined type of architecture. It might work in your use case, but you'll need to consider that trying to organize any kind of communication between these separate apps later on is asking for trouble.
On the other hand, if you really want to just have a bunch of modern JS components that you could stick anywhere in your existing JSP code, then WebComponents might be your best bet. IIRC, Angular components can be built and used as WebComponents like in your example, so it could be viable to write them in Angular (and then possibly migrate the entire app to an actual Angular SPA sometime later). The problem is that these components won't do anything by themselves, they still need the host page to actually use them.
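As a hedged sketch of that option, this is roughly how an Angular component can be registered as a custom element with @angular/elements (the component, module, and tag names here are hypothetical):

import { DoBootstrap, Injector, NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { createCustomElement } from '@angular/elements';
import { MyAngularComponent } from './my-angular.component';

@NgModule({
  imports: [BrowserModule],
  declarations: [MyAngularComponent],
})
export class AppModule implements DoBootstrap {
  constructor(private injector: Injector) {}

  ngDoBootstrap(): void {
    // Register the component as <my-angular-component> so plain HTML
    // (including the existing JSP pages) can use it like any other element.
    const element = createCustomElement(MyAngularComponent, { injector: this.injector });
    customElements.define('my-angular-component', element);
  }
}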
It might also be viable to write just one application with a bunch of lazy-loaded modules - which is the standard, straightforward case in modern Angular - and let it live under some specific routes. You would then just start rewriting pages in Angular one by one. Nobody likes that, but sometimes it's just what has to be done. As an upside, in that case you would at least have an actual modern Angular app as the host for all the JS components, instead of whatever JSP thing you currently have.
In theory you can mix and match approaches 2 and 3, with some pages only partially using the new shiny WebComponents and some pages fully rewritten. That's probably what will eventually have to happen, but I'd try to initially stick to just 2 or just 3, to make the first steps of the migration simpler for your inexperienced devs.
Whether you choose Webpack MFE Federation, JS/Angular WebComponents, or a simple Angular SPA, I would strongly recommend looking into Nx, a very powerful, framework-agnostic JS monorepo toolset. Among other features, it should help you solve the problem of managing the code between teams.
With Nx, you can add arbitrary tags to all of your modules and make the linter track and ban dependencies between specific tags. As a simple example, you could say that modules tagged team-a cannot be used by team-b or team-c, so that Team A can safely do whatever they want in them without breaking anything that other teams actively use.
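As a rough illustration, those tag rules live in the lint config. This sketch assumes an ESLint-based Nx workspace (the tag names are hypothetical, and the exact rule package name varies by Nx version):

// .eslintrc.js (sketch)
module.exports = {
  overrides: [
    {
      files: ['*.ts'],
      rules: {
        '@nx/enforce-module-boundaries': [
          'error',
          {
            depConstraints: [
              // Team A's modules may depend only on their own code and shared libs.
              { sourceTag: 'team-a', onlyDependOnLibsWithTags: ['team-a', 'shared'] },
              { sourceTag: 'team-b', onlyDependOnLibsWithTags: ['team-b', 'shared'] },
            ],
          },
        ],
      },
    },
  ],
};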
Imagine a web app that has dozens of page groups (pg1, pg2, ...), where some of these page groups need some JavaScript code specific only to them, not to the entire app. For example, some visual fixes on window.resize() might be relevant only in pg2 and nowhere else.
Here are some possible solutions:
1/ Centralized: having one script file for the entire app that deals with all page groups. It's quite easy to check whether the relevant DOM object is present, so all irrelevant pages simply run a minor extra if().
The biggest advantage is that all JS is loaded once for the entire web app and no modification of the HTML code is needed. The disadvantage is that additional checks are added to irrelevant pages.
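As a small sketch of that centralized check (using jQuery for brevity; the element ID and handler name are hypothetical):

$(window).on('resize', function () {
  // Only pg2 pages contain this element, so other page groups skip the work.
  if ($('#pg2-gallery').length) {
    fixPg2GalleryLayout();
  }
});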
2/ Mixed: the centralized script checks for the existence of a specific function on a page and launches it if it exists. For example we could add a
if (typeof page_specific_resize === 'function') page_specific_resize();
The specific page group in this case will have:
<script>
function page_specific_resize() {
//....
}
</script>
The advantage is that the code exists only on the relevant pages and so isn't tested on every page. The disadvantage is the additional HTML size across the entire page group. If there are more than a few lines of code, the page group could load an additional script specific to it, but then we're adding an HTTP call there to possibly save a few kilobytes in the centralized script.
Which is the best practice? Please comment on these solutions or suggest your own. Adding some resources to support your claims about what's better (consider performance and ease of maintenance) would be great. The most detailed answer will be selected. Thanks.
It's a bit tough to pick a best solution, since this is a hypothetical scenario and we don't have the numbers to crunch: which pages are loaded most, how many there are, the total script size, etc.
That said, I didn't find the one specific answer, Best Practice™, but here are some general points people agree on.
1) Caching
According to this:
https://developer.yahoo.com/performance/rules.html
"Using external files in the real world generally produces faster
pages because the JavaScript and CSS files are cached by the browser"
2) Minification
Still according to the Yahoo link:
"[minification] improves response time performance because the size of the downloaded file is reduced"
3) HTTP Requests
It's best to reduce HTTP calls (based on community answer).
One big javascript file or multiple smaller files?
Best practice to optimize javascript loading
4) Do you need that specific script at all?
According to: https://github.com/stevekwan/best-practices/blob/master/javascript/best-practices.md
"JavaScript should be used to decorate your site with additional functionality, and should not be required for your site to be operational."
It depends on the resources you have to load, on how frequently a specific page group is loaded, and on how frequently you expect it to be requested. Is the web app a single page? What does each specific script do?
If the script loads a form, the user will not need to visit the page more than once. The user will need an internet connection to post the data later anyway.
But if it's a script to resize a page and the user has some connection hiccups (e.g. visiting your web app on a mobile while taking the subway), it may be better to have the code already loaded so the user can navigate freely. According to the GitHub link I posted earlier:
"Events that get fired all the time (for example, resizing/scrolling)"
is one thing that should be optimized, because it's related to performance.
Minifying all the code into one JS file to be cached early will reduce the number of requests made. Also, it may take a few seconds for a connection to be established, but only milliseconds to process a bunch of if statements.
However, if you have a heavy JS file for just one feature which is not the core of your app (say, this one file is almost n% of the size of all the other scripts combined), then there is no need to make the user wait for that specific feature.
"If your page is media-heavy, we recommend investigating JavaScript techniques to load media assets only when they are required."
This is the holy grail of JS, and hopefully what modules in ES6/ES7 will solve!
There are module loaders for the client such as JSPM, and there are now hot-swapping JS code compilers in Node.js that can be used on the client with chunks via webpack.
I experimented successfully last year with creating a simple application that only loaded the required chunk of code/file as needed at runtime:
It was a simple test project I called praetor.js on github: github.com/magnumjs/praetor.js
The gh-pages-code branch can show you how it works:
https://github.com/magnumjs/praetor.js/tree/gh-pages-code
In the main.js file you can see how this is done with webpackJsonp once compiled:
JS:
module.exports = {
  getPage: function (name, callback) {
    // Is there a better way to do this with webpack for chunk files?
    if (name.indexOf("demo") !== -1) {
      // require([...]) makes webpack split this module into a separate
      // chunk that is fetched over the network only when first requested.
      require(["./modules/demo/app.js"], function (require) {
        callback(name, require("./modules/demo/app.js"));
      });
    }
    // ... more page groups elided ...
  }
};
To see it working live, go to http://magnumjs.github.io/praetor.js/ and open the network panel and view the chunks being loaded at runtime when navigating via the select menu on the left.
Hope that helps!
It is generally better to have fewer HTTP requests, but it depends on your web app.
I usually use requirejs to include the right models and views I need in each file.
While in development it saves me considerable debugging time (because I know which files are present), in production this could be problematic considering the number of requests.
So I use r.js, which was conceived especially for requirejs, to compile my JS files when it's time for deployment.
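For reference, a minimal r.js build profile looks roughly like this sketch (the paths and module names are assumptions); it is typically run with node r.js -o build.js:

({
  baseUrl: "js",                 // where the AMD modules live
  mainConfigFile: "js/main.js",  // reuse the runtime requirejs config
  name: "main",                  // the entry module to trace
  out: "dist/main-built.js"      // the single optimized output file
})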
Hope this helps
I've been looking at using a new workflow process for web development. Yeoman, Grunt, and Bower with AngularJS seem like a great solution for front-end development. The only downside is that the SEO is absolutely horrible. This seems like a HUGE component of the business decision driving adoption of these tools, yet I can't find any solutions.
What's a solid solution for making SEO-friendly javascript apps?
The current standard practice for making AJAX-heavy sites/apps SEO-friendly is to use snapshots. See the Google tutorials on this here: https://developers.google.com/webmasters/ajax-crawling/docs/html-snapshot and here: https://developers.google.com/webmasters/ajax-crawling/docs/specification
To summarize, you add the tag <meta name="fragment" content="!"> to your DOM. When the crawler sees this, it re-requests www.example.com as www.example.com?_escaped_fragment_=, where it expects the snapshot of the page.
You could manually copy the html from your site after all ajax is finished, and create your snapshot files yourself. However, this could be quite a nuisance. Instead, you could use PhantomJS to automate this process for you. Personally, I am going to use .htaccess to send the escaped_fragment requests to a single php file which has cached markup created from the content manager when the edits were made. This allows it to recreate the markup for crawlers to view (but no functionality for humans).
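A hedged sketch of the PhantomJS approach (the URL, wait time, and output path are assumptions):

// snapshot.js - run with: phantomjs snapshot.js
var page = require('webpage').create();
var fs = require('fs');

page.open('http://www.example.com/#!/some-route', function (status) {
  // Give the app a moment to finish its AJAX calls before snapshotting.
  setTimeout(function () {
    fs.write('snapshots/some-route.html', page.content, 'w');
    phantom.exit();
  }, 2000);
});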
Here's a relevant piece of info from Debunking 10 common KnockoutJS myths. I assume it applies more or less equally to Angular.
Graceful degradation in the absence of javascript depends on the way your application has been architectured. Although KO, being a pure javascript library, does not offer any support for graceful degradation in the absence of javascript, nevertheless unlike many of the competing technologies it does not hinder graceful degradation.
To create a KO application that degrades gracefully, just ensure that the initial state of the page that is rendered by the server suffices to convey the information that a user should see in the absence of javascript. Fallback mechanisms (eg simple forms and links) should be available that provide the complete (or partial) application functionality in the absence of javascript. Then when you create your view models you can instantiate them from the data already available in the DOM, and future data can be loaded via ajax without refreshing the page.
A good example of this functionality can be a grid. The basic HTML page served by the server can contain a simple HTML table with support for traditional links for pagination. Then you can create your view models from the data present in the table (or ajax if a bit of redundant data load does not matter for you) and utilize KO for interactive bindings.
Since KO does not use special inline markup or custom html tags, but rather simple data-bind attributes which are anyway not visible in the absence of javascript, it does not hinder graceful degradation.
I'm currently writing an article covering tips/tricks/best practices for working with JavaScript within an Enterprise environment. "Enterprise" can be a bit ambiguous, so for the purpose of this article we will define it as: supporting multiple web-based applications within a network that is not necessarily connected to the Internet.
Here are just a few of the thoughts I've had, to get your creative juices flowing:
Ensure all libraries are maintained in a central, web-accessible location and that all applications reference those libraries (rather than maintaining independent copies).
Reference libraries by version, guaranteeing new releases won't break your applications (no jquery-latest, use jquery-#.#.# instead).
Proper namespacing of application code
What tips can you provide to help me out?
Test your javascript on the largest DOM size possible. IE6/7/8 will hang based on the number of executed VM statements, as opposed to actual run time. Regexes and regex-based jQuery selectors are particularly bad.
Write less. Javascript in particular becomes very hard to manage and debug beyond a certain size. Breaking functionality up into different external source files can help, but always consider a better method to do what you're doing (for example, a jQuery plugin).
If you're writing a common pattern over and over, STOP. Either create a global method, or if the method acts on a jQuery selector, consider writing your own jQuery plugin instead (see the sketch after these tips).
Don't make methods take DOM objects or IDs. Pass in the jQuery object itself, and operate on that. In this manner, you don't force arbitrary DOM constraints on your method (the object passed in might not even be on the DOM yet, or it might not have an ID).
Don't modify prototypes. This breaks libraries/jQuery. Write a plugin or new datatype if you have to.
Don't modify libraries; this breaks upgradability. You can often achieve a similar effect by wrapping the jQuery library with your own plugin and forwarding/intercepting calls, kind of like AOP.
Don't have code execute while the DOM is still loading. This leads to race conditions that you'll only catch on the machines where the breakage occurs, and even then it won't be consistent.
Don't style the page with jQuery. It's tempting, but a FOUC gets worse as the DOM grows. Build .first-child, .last-child etc. into your server pages, as opposed to hacking them in with jQuery.
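As a hedged illustration of the plugin tip above (the plugin name and usage are hypothetical):

// A minimal jQuery plugin: return `this` to stay chainable, and use
// .each() so it works on selections containing multiple elements.
$.fn.highlight = function (color) {
  return this.each(function () {
    $(this).css('background-color', color || 'yellow');
  });
};

// Usage: $('.search-hit').highlight('#ffee88');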
Maybe I'll come back and add more, but for now I have just a few in mind:
1) Caching strategies. Enterprise servers are heavily loaded; to serve HTTP requests it is important to know how you can deal with that. E.g. JS can be cached on the client side, but you should know how to 'tell' a client that a new version is available.
2) There are libraries that reduce the number of requests for JS files by concatenating them (based on configuration). E.g. for Java there is Jawr (just one of them). It's better to load 1-3 scripts (read 'files') instead of 100 (and such numbers are becoming normal today, in the era of RIA). One more nice trick Jawr does: it creates zipped bundles, so when a client asks for a script the server does not need to zip it on the fly.
3) Your business logic can be processed by an application server (such as JBoss or GlassFish, when we talk about Java), but JavaScript is static, so it can be served by an HTTP server (like Apache, or better, lighttpd or nginx). Again, this way you reduce server load (critical for enterprise).
4) Libraries like jQuery can be loaded from the Google CDN (or any other reliable source).
5) Use YSlow, PageSpeed, or Ajax DynaTrace to check performance, get ideas for improvements, etc.
6) Try mod_pagespeed; it can 'eliminate' Jawr, or be a powerful companion to it.
7) One more technique used today is on-demand JavaScript loading (see the sketch after this list).
8) Offline storage
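As a sketch of point 7 (the script URL and entry function are hypothetical), on-demand loading can be as simple as injecting a script tag when a feature is first needed:

function loadScript(src, onLoad) {
  var s = document.createElement('script');
  s.src = src;
  s.onload = onLoad;
  document.head.appendChild(s);
}

loadScript('/static/js/reports-module.js', function () {
  // The feature's entry point is now available (hypothetical name).
  initReportsModule();
});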
Well, although you've specified the topics you are interested in, the area still looks unlimited...
I am in the process of converting an internal application to use more Ajax via jQuery. I am moving away from the standard code-behind of the current ASP.NET application and incorporating JavaScript methods that will run client side. My concern is: what is the best method for keeping this application maintainable for those who come after me?
I've found that build scripts are the key to success in maintainable Javascript. Essentially in order to have something fast and concise on the final page you need a script to make that maintainable in development.
If you use a build process you can break your JS into files based on the functionality. You can use the debug versions of libraries during development, and leave glorious code comments smattered around your code with impunity.
A build script should:
Lint all your code
Strip comments
Concatenate the files together
Minify/Obfuscate/Compress the code
Version files for deployment
Push the version files to the server ready to be "turned on" in production
This leaves you free to work on files that make logical sense, and lets the build tidy up after you for production.
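As a rough, hedged sketch of such a build script in Node (the file names, the jshint/uglifyjs CLI tools, and the version-stamping scheme are assumptions; Ant or MSBuild scripts were the common equivalent at the time):

var fs = require('fs');
var execSync = require('child_process').execSync;

var sources = ['src/core.js', 'src/widgets.js', 'src/pages.js'];

// 1. Lint every source file (a non-zero exit fails the build).
sources.forEach(function (f) {
  execSync('jshint ' + f, { stdio: 'inherit' });
});

// 2. Concatenate the files together.
var bundle = sources.map(function (f) {
  return fs.readFileSync(f, 'utf8');
}).join('\n');
fs.writeFileSync('build/app.js', bundle);

// 3. Minify (which also strips comments) and version the output for deployment.
var version = Date.now();
execSync('uglifyjs build/app.js --compress --mangle -o build/app-' + version + '.min.js');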
I would say that applying the same sort of OOP principles to your javascript that you apply to server-side code is one of the best ways to enhance maintainability. Instead of global functions, think about creating classes that encapsulate your javascript methods and data. Use javascript unit testing frameworks to test this code to make sure that it is robust.
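For illustration, a minimal sketch of that kind of encapsulation (the widget name and markup are hypothetical):

// Encapsulate state and behavior instead of using loose global functions.
function CartWidget($root) {
  this.$root = $root;  // the jQuery element this widget owns
  this.items = [];
}

CartWidget.prototype.add = function (item) {
  this.items.push(item);
  this.render();
};

CartWidget.prototype.render = function () {
  this.$root.find('.count').text(this.items.length);
};

// Usage:
var cart = new CartWidget($('#cart'));
cart.add({ sku: 'A-1' });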
Run JSLint as part of your build and make sure that if it doesn't pass, it fails your build.
Ensuring that your JS is valid is not in itself a key to maintainability, but it will flush out some browser issues that might otherwise have squeaked by.
If possible, start writing unit tests using a framework like Rhinounit, JSUnit or QUnit.
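For example, a unit test might look like this sketch (QUnit 1.x-era global API; formatPrice is a hypothetical function under test):

test('formatPrice adds the currency symbol', function () {
  equal(formatPrice(1000), '$1,000.00');
  equal(formatPrice(0), '$0.00');
});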
Create a script that runs during your build that removes all comments from javascript.
Then comment liberally, and encourage everyone else to also, safe with the knowledge that comments won't make it into production.
I would suggest putting as much of the javascript code as you can into external script files, to add some modularity. But beyond that, comments and keeping the code and markup as clean as possible are the best that can be done in any programming project, regardless of language.
Also, if you can, try to use a bit of an object-oriented design. I was once on a project that involved a lot of jQuery/Javascript and did not have an object-oriented design, which made it difficult for us to develop and maintain for a while. Over time, we actually ended up rewriting parts of it to make it better.
So my best advice is to think ahead, comment (like others have said), and make sure that you code your Javascript the same way you'd code anything else (i.e., follow all the good design principles that you'd use in a server-side language).
Nicholas Zakas: "Maintainable JavaScript" on Yahoo! Video