How to abort RequireJS 'require' request?

Is it possible (and how) to abort RequireJS require request?
I want to be able to do something like this:
define([
    'Backbone.View'
], function (BackboneView) {
    var View = BackboneView.extend({
        initialize: function () {
            // store require deferred
            // (I know this is not possible in this way!)
            // Is there a way to capture the require deferred?
            this.deferred = require(['someModule'], function () {
                // do something
            });
        },
        close: function () {
            // abort require request if view is closed before request resolves
            this.deferred.abort();
            this.remove();
        }
    });
    return View;
});
So, I want to be able to abort the require request if the Backbone view is closed before the request resolves.
I have checked the RequireJS documentation and their GitHub page.
Nothing there describes how to handle this.
Any help is welcome :)
Thanks

RequireJS does not provide a means to cancel a require call.
I suppose in theory it would be possible to implement, but it would complicate RequireJS's internals considerably for rather rare benefits. Suppose you require foo; consider:
How often will your code actually be in a position where cancelling the require call is desirable? Your view would have to be initialized and closed in quick succession.
If foo has already been fetched and initialized before the require call is issued the only job require has to do is return the already existing reference to foo. So there is essentially nothing to cancel. Note here that if the view is ever allowed to completely initialize even just once in your application, the scenario considered here applies.
If RequireJS is fast enough to complete the require call before cancellation, then cancellation has no effect. Once an application is optimized, it is quite likely that RequireJS will complete the call before cancellation is possible. In an optimized app, the code that runs require is in the same bundle as the code being required, so RequireJS does not have to fetch anything from the network to satisfy the require call.
So in many cases the cancellation would have no effect, but for the cases where RequireJS might actually have something to cancel, it would have to keep close accounting of who is requiring what. Maybe the require call in your view is the only one requiring foo, but foo may in turn require bar and baz. If foo is the only module requiring them, then cancelling your require call would let RequireJS cancel fetching and initializing foo, bar and baz. But there is always the possibility that after your require is issued, and before it completes, another part of your application needs baz. So now RequireJS has to remember that it is okay to cancel foo and bar but not baz, because baz was required somewhere else.
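The bookkeeping described above amounts to reference counting. A toy sketch of it (purely illustrative; RequireJS exposes no such API, and all names here are made up):

```javascript
// Toy reference counter: cancelling a require() could only drop
// modules whose reference count falls to zero.
const refs = new Map();

function acquire(mod) { refs.set(mod, (refs.get(mod) || 0) + 1); }
function release(mod) {
    const n = refs.get(mod) - 1;
    if (n === 0) { refs.delete(mod); return true; }  // safe to cancel
    refs.set(mod, n);
    return false;                                    // still needed elsewhere
}

// The view requires foo, which pulls in bar and baz...
['foo', 'bar', 'baz'].forEach(acquire);
// ...then another part of the app also needs baz.
acquire('baz');

// Cancelling the view's require: foo and bar could be dropped, baz not.
console.log(release('foo'), release('bar'), release('baz')); // true true false
```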

Your question is a little strange, seeing as you will deploy only one minified file and there will be virtually no delay.
For science: note that require simply adds a script element to the DOM with certain properties like data-requiremodule=[MODULE_NAME],
so you can basically do something like:
function abortModuleLoad(moduleName) {
    var scripts = document.getElementsByTagName('script');
    for (var i = 0; i < scripts.length; i++) {
        if (scripts[i].dataset.requiremodule === moduleName) document.head.removeChild(scripts[i]);
    }
}
close: function () {
    // abort require request if view is closed before request resolves
    abortModuleLoad("someModule");
    this.remove();
}


NodeJS and Electron - request-promise in back-end freezes CSS animation in front-end

Note: Additional information appended to end of original question as Edit #1, detailing how request-promise in the back-end is causing the UI freeze. Keep in mind that a pure CSS animation is hanging temporarily, and you can probably just skip to the edit (or read all for completeness)
The setup
I'm working on a desktop webapp, using Electron.
At one point, the user is required to enter and submit some data. When they click "submit", I use JS to show this css loading animation (bottom-right loader), and send data asynchronously to the back-end...
- HTML -
<button id="submitBtn" type="submit" disabled="true">Go!</button>
<div class="submit-loader">
    <div class="loader _hide"></div>
</div>
- JS -
form.addEventListener('submit', function(e) {
    e.preventDefault();
    loader.classList.remove('_hide');
    setTimeout(function() {
        ipcRenderer.send('credentials:submit', credentials);
    }, 0);
});
where ._hide is simply
._hide {
    visibility: hidden;
}
and where ipcRenderer.send() is an async method, with no option to make it otherwise.
The problem
Normally, the 0ms delay is sufficient to allow the DOM to be changed before the blocking event takes place. But not here. Whether using the setTimeout() or not, there is still a delay.
So, add a tiny delay...
loader.classList.remove('_hide');
setTimeout(function() {
ipcRenderer.send('credentials:submit', credentials);
}, 100);
Great! The loader displays immediately upon submitting! But... after 100ms, the animation stops dead in its tracks, for about 500ms or so, and then gets back to chooching.
This working -> not working -> working pattern happens regardless of the delay length. As soon as the ipcRenderer starts doing stuff, everything is halted.
So... Why!?
This is the first time I've seen this kind of behavior. I'm pretty well-versed in HTML/CSS/JS, but am admittedly new to NodeJS and Electron. Why is my pure CSS animation being halted by the ipcRenderer, and what can I do to remedy this?
Edit #1 - Additional Info
In the back-end (NodeJS), I am using request-promise to make a call to an external API. This happens when the back-end receives the ipcRenderer message.
var rp = require('request-promise');
ipcMain.on('credentials:submit', function(e, credentials) {
    var options = {
        headers : {
            ... api-key...
        },
        json: true,
        url : url,
        method : 'GET'
    };
    return rp(options).then(function(data) {
        ... send response to callback...
    }).catch(function(err) {
        ... send error to callback...
    });
});
The buggy freezing behavior only happens on the first API call. Successive API calls (i.e. refreshing the desktop app without restarting the NodeJS backend) do not cause the hang-up. Even if I call a different API method, there are no issues.
For now, I've implemented the following hacky workaround:
First, initialize the first BrowserWindow with show:false...
window = new BrowserWindow({
    show: false
});
When the window is ready, send a ping to the external API, and only display the window after a successful response...
window.on('ready-to-show', function() {
    apiWrapper.ping(function(response) {
        if (response.error) {
            app.quit();
        } else {
            window.show(true);
        }
    });
});
This extra step means that there is about 500ms delay before the window appears, but then all successive API calls (whether .ping() or otherwise) no longer block the UI. We're getting to the verge of callback hell, but this isn't too bad.
So... this is a request-promise issue (which is asynchronous, as far as I can tell from the docs). Not sure why this behavior is only showing-up on the first call, so please feel free to let me know if you know! Otherwise, the little hacky bit will have to do for now.
(Note: I'm the only person who will ever use this desktop app, so I'm not too worried about displaying a "ping failed" message. For a commercial release, I would alert the user to a failed API call.)
It's worth checking how request-promise sets up module loading internally. Reading the source, it seems there is a kind of lazy loading (https://github.com/request/request-promise/blob/master/lib/rp.js#L10-L12) that happens when request is first called. A quick experiment:
const convertHrtime = require('convert-hrtime');
const a = require('request-promise');
const start = process.hrtime();
a({uri: 'https://requestb.in/17on4me1'});
const end = process.hrtime(start);
console.log(convertHrtime(end));
const start2 = process.hrtime();
a({uri: 'https://requestb.in/17on4me1'});
const end2 = process.hrtime(start2);
console.log(convertHrtime(end2));
returns value like below:
{ seconds: 0.00421092,
milliseconds: 4.21092,
nanoseconds: 4210920 }
{ seconds: 0.000511664,
milliseconds: 0.511664,
nanoseconds: 511664 }
The first call obviously takes longer than subsequent ones. (The numbers may of course vary; I ran this in bare node.js on a relatively fast CPU.) If module loading is the major cost of the first call, it will block the main process until the module is loaded (because node.js's require resolution is synchronous).
I can't say for certain this is the cause, but it's worth checking. As suggested in the comments, try another library or a bare internal module (like Electron's net) to rule it out.

Understanding execute async script in Selenium

I've been using Selenium (with Python bindings, and through Protractor mostly) for a rather long time, and every time I needed to execute JavaScript code I've used the execute_script() method. For example, for scrolling the page (python):
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
Or, for infinite scrolling inside an another element (protractor):
var div = element(by.css('div.table-scroll'));
var lastRow = element(by.css('table#myid tr:last-of-type'));
browser.executeScript("return arguments[0].offsetTop;", lastRow.getWebElement()).then(function (offset) {
    browser.executeScript('arguments[0].scrollTop = arguments[1];', div.getWebElement(), offset).then(function() {
        // assertions
    });
});
Or, for getting a dictionary of all element attributes (python):
driver.execute_script('var items = {}; for (index = 0; index < arguments[0].attributes.length; ++index) { items[arguments[0].attributes[index].name] = arguments[0].attributes[index].value }; return items;', element)
But, WebDriver API also has execute_async_script() which I haven't personally used.
What use cases does it cover? When should I use execute_async_script() instead of the regular execute_script()?
The question is selenium-specific, but language-agnostic.
When should I use execute_async_script() instead of the regular execute_script()?
When it comes to checking conditions on the browser side, all checks you can perform with execute_async_script can be performed with execute_script. Even if what you are checking is asynchronous. I know because once upon a time there was a bug with execute_async_script that made my tests fail if the script returned results too quickly. As far as I can tell, the bug is gone now so I've been using execute_async_script but for months beforehand, I used execute_script for tasks where execute_async_script would have been more natural. For instance, performing a check that requires loading a module with RequireJS to perform the check:
driver.execute_script("""
// Reset in case it's been used already.
window.__selenium_test_check = undefined;
require(["foo"], function (foo) {
    window.__selenium_test_check = foo.computeSomething();
});
""")
result = driver.wait(lambda driver:
    driver.execute_script("return window.__selenium_test_check;"))
The require call is asynchronous. The problem with this though, besides leaking a variable into the global space, is that it multiplies the network requests. Each execute_script call is a network request. The wait method works by polling: it runs the test until the returned value is true. This means one network request per check that wait performs (in the code above).
When you test locally it is not a big deal. If you have to go through the network because you are having the browsers provisioned by a service like Sauce Labs (which I use, so I'm talking from experience), each network request slows down your test suite. So using execute_async_script not only allows writing a test that looks more natural (call a callback, as we normally do with asynchronous code, rather than leak into the global space) but it also helps the performance of your tests.
result = driver.execute_async_script("""
var done = arguments[0];
require(["foo"], function (foo) {
    done(foo.computeSomething());
});
""")
The way I see it now is that if a test is going to hook into asynchronous code on the browser side to get a result, I use execute_async_script. If it is going to do something for which there is no asynchronous method available, I use execute_script.
Here's the reference to the two APIs (well it's Javadoc, but the functions are the same), and here's an excerpt from it that highlights the difference
[executeAsyncScript] Execute an asynchronous piece of JavaScript in
the context of the currently selected frame or window. Unlike
executing synchronous JavaScript, scripts executed with this method
must explicitly signal they are finished by invoking the provided
callback. This callback is always injected into the executed function
as the last argument.
Basically, executeScript blocks further actions performed by the Selenium browser, while executeAsyncScript does not block; it calls a callback when it's done.
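That contract can be sketched in plain JavaScript. This is a simulation of the idea, not Selenium's actual implementation, and the names are illustrative:

```javascript
// Simulates the executeAsyncScript contract: the driver appends a
// callback as the last argument and waits for it to be invoked.
function executeAsyncScript(script) {
    var args = Array.prototype.slice.call(arguments, 1);
    return new Promise(function (resolve) {
        script.apply(null, args.concat([resolve])); // callback injected last
    });
}

// The page-side script must signal completion explicitly, per the excerpt.
executeAsyncScript(function (x, done) {
    setTimeout(function () { done(x * 2); }, 10);   // async work, then signal
}, 21).then(function (result) {
    console.log(result); // 42
});
```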
Since you've worked with protractor, I'll use that as example.
Protractor uses executeAsyncScript in both get and waitForAngular
In waitForAngular, protractor needs to wait until angular announces that all events settled. You can't use executeScript because that needs to return a value at the end (although I guess you can implement a busy loop that polls angular constantly until it's done). The way it works is that protractor provides a callback, which Angular calls once all events settled, and that requires executeAsyncScript. Code here
In get, protractor needs to poll the page until the global window.angular is set by Angular. One way to do it is driver.wait(function() { return driver.executeScript('return window.angular'); }, 5000), but that way protractor would pound at the browser every few ms. Instead, we do this (simplified):
functions.testForAngular = function(attempts, callback) {
    var check = function(n) {
        if (window.angular) {
            callback('good');
        } else if (n < 1) {
            callback('timedout');
        } else {
            setTimeout(function() { check(n - 1); }, 1000);
        }
    };
    check(attempts);
};
Again, that requires executeAsyncScript because we don't have a return value immediately. Code here
All in all, use executeAsyncScript when you care about a return value in a calling script, but that return value won't be available immediately. This is especially necessary if you can't poll for the result, but must get the result using a callback or promise (which you must translate to callback yourself).

Open RequireJS Application with PhantomJS

I'm trying to load a single page application that uses a heavy amount of async code execution involving RequireJS and jQuery deferreds. The application loads as expected inside the browser, but not in PhantomJS.
For instance, I spent some time trying to figure out how to make the following snippet work:
# index.html
<body>
    <script>
        require.config({
            baseUrl: '.',
            paths: {
                main: 'main'
            }
        });
        require(['main'], function() {
            window.myglobal = {
                something: 'foo'
            };
        });
    </script>
</body>
# phantomjs
page.evaluateAsync(function() {
    console.log(window.myglobal.something); // Should print out 'foo'.
}, 100);
I consider that using evaluateAsync with a fixed timeout that has to be determined by trial and error is not really satisfactory. Perhaps someone can suggest a better pattern.
The documentation for evaluateAsync does not say much so I'm going to take it at face value and assume that it just executes the code asynchronously, without any further constraint regarding what may or may not have loaded already. (The source code does not indicate any further constraints either.)
The problem I'm seeing is that you have two asynchronous functions that may execute in any order. When require(['main'], ...) is called, this tells RequireJS to start loading the module but you don't know when the module will actually load. Similarly, when you execute page.evaluateAsync you are telling PhantomJS to execute a piece of code asynchronously. So it will execute but you don't know when.
So the following can happen:
The module finishes loading: window.myglobal is set.
console.log is called, which outputs the correct value.
Or:
console.log is called, which fails.
The module finishes loading: window.myglobal is set.
Setting a timeout that delays the execution of console.log will make it more likely that the first case happens but it does not guarantee it.
What you could do is change your HTML like this:
<body>
    <script>
        require.config({
            baseUrl: '.',
            paths: {
                main: 'main'
            }
        });
        define('init', ['main'], function () {
            window.myglobal = {
                something: 'foo'
            };
        });
        require(['init']);
    </script>
</body>
Your PhantomJS script:
page.evaluateAsync(function() {
    require(['init'], function () {
        console.log(window.myglobal.something); // Should print out 'foo'.
    });
});
What this does is define a module called init. (This is a rare case where explicitly naming your module with define is okay. Usually you just start the define call with the list of dependencies.) Then when evaluateAsync is called it asks for the init module, which guarantees that the assignment to window.myglobal will have happened before console.log runs.
It would also be possible to use promises to get the desired results but I've preferred to show a solution that uses only RequireJS.
PhantomJS is a headless browser that is used for all kinds of stuff. A big part of it is the testing/automation of websites, which means you generally don't have the opportunity to change the site code. Most of the time that is not necessary, as in this case.
You simply need to wait until the page script/DOM is in the state you want for further processing. This is usually done using waitFor from the PhantomJS examples.
In your case, you can add the waitFor definition to the beginning of the script and wait for window.myglobal to be defined:
page.open(url, function() {
    waitFor(function check() {
        return page.evaluate(function() {
            return !!window.myglobal;
        });
    }, function then() {
        // do something useful
    }, 10000); // upper bound on acceptable wait timeout
});
check is a function which is called periodically to test whether a certain condition is met. As soon as the condition is met, you can do something useful, including acting on the page with page.evaluate from the then callback.
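waitFor is not part of PhantomJS itself; it comes from the examples repository. A minimal version with the signature used above (a simplified sketch, not the exact example code) could look like:

```javascript
// Polls check() until it returns true, then calls then(); if timeoutMs
// elapses first, calls then() with an error instead.
function waitFor(check, then, timeoutMs) {
    var start = Date.now();
    var interval = setInterval(function () {
        if (check()) {
            clearInterval(interval);
            then();                                   // condition met
        } else if (Date.now() - start > timeoutMs) {
            clearInterval(interval);
            then(new Error('waitFor timed out'));     // give up
        }
    }, 100);                                          // poll every 100 ms
}
```

The real example script also accepts a configurable polling interval; this sketch hardcodes 100 ms for brevity.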
There are also ways to wait not for specific variables/DOM nodes but for network activity to end in general, as in this answer.

How to use browser parallel loading but still maintain the order of scripts using RequireJS

I have a friend who hates RequireJS. I recently started using it and I like it, but his arguments have me worried that I might end up like him and give up on RequireJS.
require(['one'], function(){
    require(['two'], function(){
        require(['three'], function(){
        });
    });
});
The above code is fairly straightforward: three depends on two, and two depends on one. That makes the browser load one first, then two, then three, in sequence; that's how RequireJS handles it, which makes the site really slow when there are lots of dependencies like this.
The browser's parallel loading feature is not used at all.
I want to know if there is any way for require to load all these files asynchronously but still maintain their execution order, so that browser parallelism can be used.
RequireJS is a powerful tool that lets us load scripts asynchronously (meaning we start loading each one and don't wait until it has actually loaded) while still managing dependencies (if one file depends on another, we want to make sure the dependency is loaded beforehand). The way you use RequireJS is not what it is made for. The callback function inside require is called as soon as the dependency module is loaded ('one', 'two', 'three'), so you are just loading all the modules sequentially, not asynchronously (one is loaded -> callback is called -> two is loaded -> callback is called -> three is loaded -> callback is called). That makes no sense. The way it is supposed to work:
in your HTML file:
<script data-main='main.js' src='require.js'></script>
in your main.js file (the file you referenced in your script tag as data-main):
require(['one'], function(one){
    one.function1();
});
in one.js:
define(['two', 'three'], function(two, three) {
    return {
        function1 : function() {
            two.function2();
            three.function3();
            console.log('one');
        }
    };
});
in two.js:
define([], function() {
    return {
        function2 : function() {
            console.log('two');
        }
    };
});
in three.js:
define([], function() {
    return {
        function3 : function() {
            console.log('three');
        }
    };
});
In my example 'one' depends on 'two' and 'three' (it requires function2() from 'two' and function3() from 'three'), 'two' and 'three' have no dependencies. This example code assumes all the files are in one folder (including require.js). As a result, we see 'two', 'three', 'one' printed (in that order).
Although RequireJS uses the AMD model for loading scripts, we can still manage module evaluation order ourselves. If you don't want to use define(), you can use a special plugin called order!. It works with the RequireJS 1.0 API. It allows files to be fetched asynchronously but evaluated in a specific order: http://requirejs.org/docs/1.0/docs/api.html#order.
The accepted solution will work nicely for most use-cases, especially because it will usually make sense to use r.js to bundle everything, which makes parallel loading a moot point. However, this solution does not actually allow for loading all modules in parallel, instead either loading in a 3-step sequence that looks like: [main.js] -> [one.js] -> [two.js, three.js] (if you don't use r.js to package the files all to one module) or a single load of one packaged file (if you do use r.js to package all the files to one module).
If you do in fact want to make the files load in a single parallel step like: [one.js, two.js, three.js], you have a couple of options:
A. Use RequireJS 1.0 + order plugin
This one is covered in gthacoder's other answer.
B. Wrap the scripts so that Require can load them, then execute them in a separate stage
This introduces some complexity, but is very reliable. The key thing is that every module that you want to load in parallel should contain a named module inside it that does not match the name you use to load the file. This will prevent the module from executing until you explicitly tell it to:
one.js
define('one-inner', ['two-inner'], function () {
    ...
});
two.js
define('two-inner', ['three-inner'], function () {
    ...
});
three.js
define('three-inner', function () {
    ...
});
Your page or main.js file
// 1. Require loads one.js, two.js, and three.js in parallel,
// but does not execute the modules, because nothing has called
// them by the correct name yet.
require(['one', 'two', 'three'], function () {
    // 2. Kickstart execution of the modules
    require(['one-inner'], function () {
        ....
    });
});
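The effect option B is after, downloads in parallel, evaluation strictly in order, can be simulated with plain promises. This is only an analogy (fetchModule and the delays below are made up, not RequireJS APIs):

```javascript
// Start all "downloads" at once, then evaluate in list order once
// everything has arrived, regardless of arrival order.
function fetchModule(name, ms) {
    return new Promise(function (resolve) {
        setTimeout(function () { resolve(name + ' evaluated'); }, ms);
    });
}

// All three requests are in flight simultaneously...
var inFlight = [fetchModule('one', 30), fetchModule('two', 10), fetchModule('three', 20)];

// ...and Promise.all preserves list order, not completion order.
Promise.all(inFlight).then(function (results) {
    console.log(results.join(' | ')); // one evaluated | two evaluated | three evaluated
});
```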

How Async really works and How to use it properly with node.js (node-webkit)

For this problem I am using Node-Webkit (node.js) and Async, loading a Windows app.
The point of this question is to definitively answer:
What does asynchronous execution really mean in JavaScript and Node.js?
My own code problem is at the end of the question: "The Case".
I am going to explain the whole problem with a schematic summary. (I will update the info as you help me understand it.)
The Concept (theory)
Imagine a Primary Screen (JS, HTML, CSS, ... Node.js frameworks) and a Background Procedure (JS execution every 10 min, internal JS checks, background database optimization, ...).
Whatever you do in the Primary Screen won't affect background execution (except in some important cases), and the Background can even change the Screen if it needs to (screen timers, info about online web status, ...).
Then the behaviour would be like:
Thread 1: your actions inside the app framework. Thread 2: background app routines.
Any action, as it finishes, gives its output to the screen, regardless of the other actions running async in parallel.
The Interpretation (For me)
I think this is something that "Async" will handle without problems, as a parallel execution.
async.parallel([
    function(){ ... },
    function(){ ... }
], callback); // optional callback
So Thread 1 and Thread 2 can work together correctly as long as they don't touch the same code or instruction.
The content will keep changing as either thread requests something from it.
The Implementation (Reality)
The code is not fully asynchronous during execution; there are sync parts with common actions that call the async code when they need it.
Sync: startup with containers -> Async: load multiple contents and do general stuff -> Sync: do an action on the screen -> ...
The Case
So here is my code that isn't working properly:
win.on('loaded', function() {
    $("#ContentProgram").load("view/launcherWorkSpace.html", function() {
        $("#bgLauncher").hide();
        win.show();
        async.parallel([
            function() // **Background Process: access the DB and return HTML content**
            {
                var datacontent = new data.GetActiveData();
                var exeSQL = new data.conn(datacontent);
                if (exeSQL.Res)
                {
                    var r = exeSQL.Content;
                    if (r.Found)
                    {
                        logSalon = new data.activeSData(r);
                        $('#RelativeInfo').empty();
                        $("#RelativeInfo").html("<h4 class='text-success'>Data found: <b>" + logData.getName + "</b></h4>");
                    }
                }
            },
            function() // **Foreground Process: see an effect on screen during load.**
            {
                $("#bgLauncher").fadeIn(400);
                $("#centralAccess").delay(500).animate({bottom: 0}, 200);
            }
        ]);
    });
});
As you can see, I'm not using callback() because I don't need to (and it behaves the same either way).
I want the Foreground Process to run even if the Background Process hasn't finished, but both only produce their output once both requests have finished...
If I disconnect the DB manually, the first function takes 3 seconds until it throws an exception (which I don't handle). Until then, neither process outputs (shows on screen) anything. (The Foreground Process should be launched whatever happens to the Background Process.)
Thanks, and sorry for so much explanation for something that looks trivial.
EDITED
This is starting to get annoying... I tried without Async, with just a JavaScript callback like this:
launchEffect(function () {
    var datacontent = new data.GetActiveData();
    var exeSQL = new data.conn(datacontent);
    if (exeSQL.Res)
    {
        var r = exeSQL.Content;
        if (r.Found)
        {
            logData = new data.activeData(r);
            $('#RelativeInfo').empty();
            $("#RelativeInfo").html("<h4 class='text-success'>Salón: <b>" + log.getName + "</b></h4>");
        }
    }
});
function launchEffect(callback)
{
    $("#bgLauncher").fadeIn(400);
    $("#centralAccess").delay(500).animate({bottom: 0}, 200);
    callback();
}
Even with this... jQuery doesn't update the screen until the callback answers...
node-webkit lets you run code written like node.js code, but it is ultimately just a shim running in WebKit's JavaScript runtime, and it only has one thread, which means that most 'asynchronous' code will still block the execution of any other code.
If you were running node.js itself, you'd see different behavior, because it can do genuinely asynchronous work behind the scenes. If you want more threads, you'll need to supply them in your host app.
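The single-thread point can be seen in a few lines: a 0 ms timer that is already scheduled still cannot fire until the synchronous work on the thread has finished, which is exactly why the screen effect waits for the DB call. (Standalone sketch; the busy-wait stands in for the synchronous DB work.)

```javascript
// One thread: scheduled timers wait for synchronous code to finish.
var order = [];

setTimeout(function () { order.push('timer'); }, 0); // scheduled first

var start = Date.now();
while (Date.now() - start < 50) { /* busy-wait: simulated sync DB work */ }
order.push('sync');

// Only when the event loop turns does the overdue timer fire.
setTimeout(function () { console.log(order.join(',')); }, 10); // sync,timer
```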
