I have a Vue 2.0 app in which I use this line to call this.refreshState() every minute:
this.scheduler = setInterval(() => this.refreshState(), 60 * 1000)
Later in the code I need to make sure the execution loop is stopped, and also that any instance of this.refreshState() currently running (started by the setInterval scheduler) is stopped as well, even if it's in the middle of doing stuff.
So far I'm using:
clearInterval(this.scheduler)
as per (https://developer.mozilla.org/en-US/docs/Web/API/clearInterval)
The question I have is: does clearInterval block the current execution, if any? Unfortunately, I can't find the answer in the docs.
FYI the code of refreshState:
refreshState: function () {
  // API call to the backend
  axios.get("/api/refreshState")
    .then(response => {
      this.states = response.data.states
    })
    .catch((err) => console.log(err))
}
Here's my use case:
alterState: function (incremental_state) {
  clearInterval(this.scheduler) // ???
  axios.post("/api/alterState", incremental_state)
    .then(() => {
      this.refreshState()
      this.scheduler = setInterval(() => this.refreshState(), 60 * 1000)
    })
    .catch((err) => { console.log(err) })
}
I want to make sure that when I exit alterState, the variable this.states takes into account the addition of incremental_state.
From...
I want to make sure that when I exit alterState, the variable this.states takes into account the addition of incremental_state.
...I understand you're performing a change on the backend and you want it reflected on the frontend, and that currently this doesn't happen, even though you're calling this.refreshState() right after getting a successful response from /api/alterState. 1
To achieve this, it's not enough to call this.refreshState(), because by default your browser caches the result (it remembers recent calls and their results, and serves the previous result from cache instead of calling the server again), unless the endpoint is specifically configured to disable caching.
To disable caching for a particular endpoint, you could either
configure the endpoint (server side) to tell browsers: "Hey, my stuff is time sensitive, don't cache it!" (I won't go into detail on how, as I have no idea what technology you're using on the backend, and it varies). Roughly, it means setting the appropriate response headers, as sketched below.
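For illustration, a minimal sketch of that, assuming an Express backend (purely an assumption; adapt to your actual stack):

app.get('/api/refreshState', (req, res) => {
  // tell browsers (and proxies) not to cache this time-sensitive response
  res.set('Cache-Control', 'no-store');
  res.json({ states: loadStates() }); // loadStates() is a hypothetical data source
});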
or call the endpoint with a unique param each time. This makes the URL "change" from the browser's point of view, so it will always request from the server:
axios
.get(`/api/refreshState?v=${Date.now()}`)
.then...
I recommend the second option: it's reliable, predictable, and does not depend on server configuration.
And, unless something other than the current app instance (some other user, another server script, etc.) makes changes to the data, you don't actually need a setInterval. I suggest removing it.
But if you do have other sources changing the server-side data (and you want to refresh it regardless of user interactions with the app), then what you have works perfectly fine; there's no need to even cancel the existing interval when you make a change + refreshState(). 2
1 - if I misunderstood your question and that is not your problem, please clarify your question; right now it's a bit unclear
2 - as a side note and personal preference, I suggest renaming refreshState() to getState()
I'm writing E2E tests in Cypress (version 12.3.0). I have a page with a table in a multi-step creation process that requires some data from a back-end application. In some cases (rarely, but it occurs) the request gets stuck and the loader never disappears. The solution is simple: go back to the previous step and return to the "stuck" table. The request is sent anew and most likely receives a response, so the process and the tests can proceed further. If the loader is not present, going back and forth should be skipped (most of the time).
I managed to work around this with the code below, but I'm wondering if it could be done with some built-in Cypress functions and without explicit waiting. Unfortunately, I didn't find anything in the Cypress docs or on StackOverflow. I thought that maybe I could use the then function to work on a "conditionally present" element, but it fails on get, which is why I've used find on the jQuery ancestor element.
waitForTableData() {
  return cy.get('.data-table')
    .should('exist')
    .then(table => {
      if (this.loaderNotPresent(table)) {
        return;
      }
      cy.wait(200)
        .then(() => {
          if (this.loaderNotPresent(table)) {
            return;
          }
          // The loader is stuck: go back and forth to re-trigger the request
          cy.get('button')
            .contains('Back')
            .click()
            .get('button')
            .contains('Next')
            .click()
            .then(() => this.waitForTableData());
        });
    });
}

loaderNotPresent(table: JQuery) {
  return !table.find('.loader')?.length;
}
Your code looks to me like about the best you can do at present.
The cy.wait(200) is about the right size; maybe a bit smaller would be better (50 - 100 ms). The recursive call is going to give you behaviour similar to Cypress' built-in retry (which also waits internally, in order not to hammer the test runner).
Another approach would be to cy.intercept() and mock the backend, presuming it's the backend that gets stuck.
It's also worth trying a simple test retry, if the loading only fails a small percentage of the time.
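A minimal sketch of both suggestions (the URL pattern, fixture name, and alias are assumptions about your app):

// Stub the table data so the loader can never get stuck:
cy.intercept('GET', '**/table-data*', { fixture: 'table-data.json' }).as('tableData');
// ...navigate to the step that renders the table...
cy.wait('@tableData');

// Or retry the whole test, configured in cypress.config.ts (Cypress 12+):
// e2e: { retries: { runMode: 2, openMode: 0 } }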
I have an SPA and for technical reasons I have different elements potentially firing the same fetch() call pretty much at the same time.[1]
Rather than going insane trying to orchestrate loading across multiple unrelated elements, I am thinking about creating a globalFetch() call where:
the init argument is serialised (along with the resource parameter) and used as hash
when a request is made, it's queued and its hash is stored
when another request comes in and the hash matches (which means it's in-flight), another request will NOT be made; it will piggyback on the previous one
async function globalFetch(resource, init) {
  const sigObject = { ...init, resource }
  const sig = JSON.stringify(sigObject)

  // If it's already happening, return that one
  if (globalFetch.inFlight[sig]) {
    // NOTE: I know I don't yet have sig.timeStamp, this is just to show
    // the logic
    if (Date.now - sig.timeStamp < 1000 * 5) {
      return globalFetch.inFlight[sig]
    } else {
      delete globalFetch.inFlight[sig]
    }
  }

  const ret = globalFetch.inFlight[sig] = fetch(resource, init)
  return ret
}
globalFetch.inFlight = {}
It's obviously missing a way to record the requests' timestamps. Plus, it's missing a way to delete old requests in batch. Other than that... is this a good way to go about it?
Or, is there something already out there, and I am reinventing the wheel...?
[1] If you are curious, I have several location-aware elements which will reload data independently based on the URL. It's all nice and decoupled, except that it's a little... too decoupled. Nested elements (with partially matching URLs) needing the same data potentially end up making the same request at the same time.
Your concept will generally work just fine.
Some things missing from your implementation:
Failed responses should either not be cached in the first place or removed from the cache when you see the failure. And failure is not just rejected promises, but also any request that doesn't return an appropriate success status (probably a 2xx status).
JSON.stringify(sigObject) is not a canonical representation of the exact same data, because properties might not be stringified in the same order depending upon how the sigObject was built. If you grabbed the properties, sorted them, inserted them in sorted order onto a temporary object, and then stringified that, it would be more canonical.
I'd recommend using a Map object instead of a regular object for globalFetch.inFlight because it's more efficient when you're adding/removing items regularly and will never have any name collision with property names or methods (though your hash would probably not conflict anyway, but it's still a better practice to use a Map object for this kind of thing).
Items should be aged from the cache (as you apparently know already). You can just use a setInterval() that runs every so often (it doesn't have to run very often - perhaps every 30 minutes) that just iterates through all the items in the cache and removes any that are older than some amount of time. Since you're already checking the time when you find one, you don't have to clean the cache very often - you're just trying to prevent non-stop build-up of stale data that isn't going to be re-requested - so it isn't getting automatically replaced with newer data and isn't being used from the cache.
If you have any case insensitive properties or values in the request parameters or the URL, the current design would see different case as different requests. Not sure if that matters in your situation or not or if it's worth doing anything about it.
When you write the real code, you need Date.now(), not Date.now.
Here's a sample implementation that implements all of the above (except for case sensitivity because that's data-specific):
function makeHash(url, obj) {
    // put properties in sorted order to make the hash canonical
    // the canonical sort is top level only,
    // does not sort properties in nested objects
    let items = Object.entries(obj).sort((a, b) => b[0].localeCompare(a[0]));
    // add URL on the front
    items.unshift(url);
    return JSON.stringify(items);
}

async function globalFetch(resource, init = {}) {
    const key = makeHash(resource, init);

    const now = Date.now();
    const expirationDuration = 5 * 1000;
    const newExpiration = now + expirationDuration;

    const cachedItem = globalFetch.cache.get(key);
    // if we found an item and it expires in the future (not expired yet)
    if (cachedItem && cachedItem.expires >= now) {
        // update expiration time
        cachedItem.expires = newExpiration;
        return cachedItem.promise;
    }

    // couldn't use a value from the cache
    // make the request
    let p = fetch(resource, init);
    p.then(response => {
        if (!response.ok) {
            // if response not OK, remove it from the cache
            globalFetch.cache.delete(key);
        }
    }, err => {
        // if promise rejected, remove it from the cache
        globalFetch.cache.delete(key);
    });
    // save this promise (will replace any expired value already in the cache)
    globalFetch.cache.set(key, { promise: p, expires: newExpiration });
    return p;
}

// initialize cache
globalFetch.cache = new Map();

// clean up interval timer to remove expired entries
// does not need to run that often because .expires is already checked above
// this just cleans out old expired entries to avoid memory increasing
// indefinitely
globalFetch.interval = setInterval(() => {
    const now = Date.now();
    for (const [key, value] of globalFetch.cache) {
        if (value.expires < now) {
            globalFetch.cache.delete(key);
        }
    }
}, 10 * 60 * 1000); // run every 10 minutes
Implementation Notes:
Depending upon your situation, you may want to customize the cleanup interval time. This is set to run a cleanup pass every 10 minutes just to keep it from growing unbounded. If you were making millions of requests, you'd probably run that interval more often or cap the number of items in the cache. If you aren't making that many requests, this can be less frequent. It is just to clean up old expired entries sometime so they don't accumulate forever if never re-requested. The check for the expiration time in the main function already keeps it from using expired entries - that's why this doesn't have to run very often.
This looks at response.ok from the fetch() result, and at promise rejection, to determine a failed request. There could be some situations where you want to customize what is and isn't a failed request with different criteria. For example, it might be useful to cache a 404 to prevent repeating it within the expiration time, if you don't think the 404 is likely to be transitory. This really depends upon your specific use of the responses and the behavior of the specific host you are targeting. The reason not to cache failed results is for cases where the failure is transitory (either a temporary hiccup or a timing issue) and you want a new, clean request to go out if the previous one failed.
There is a design question of whether you should or should not update the .expires property in the cache when you get a cache hit. If you do update it (like this code does), then an item could stay in the cache a long time if it keeps getting requested over and over before it expires. But if you really want it to be cached only for a maximum amount of time and then force a new request, you can just remove the update of the expiration time and let the original result expire. I can see arguments for either design depending upon the specifics of your situation. If this is largely invariant data, then you can just let it stay in the cache as long as it keeps getting requested. If it is data that can change regularly, then you may want it cached for no more than the expiration time, even if it's being requested regularly.
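For illustration, a quick usage sketch (the URL is made up): two callers hitting the same endpoint concurrently share a single network request.

async function demo() {
    const [a, b] = await Promise.all([
        globalFetch('/api/locations?city=ny'),
        globalFetch('/api/locations?city=ny') // identical key: piggybacks on the in-flight fetch
    ]);
    console.log(a === b); // true - both callers got the same Response object
}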
Consider using a ServiceWorker or Workbox to separate caching logic from your application. The Stale-While-Revalidate strategy could apply here.
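A minimal sketch of what that could look like in a service worker, assuming Workbox v6 and that your GET endpoints under /api/ are safe to serve stale:

// sw.js - serve cached /api/ GET responses immediately,
// while refreshing them in the background
import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';

registerRoute(
  ({ url, request }) => request.method === 'GET' && url.pathname.startsWith('/api/'),
  new StaleWhileRevalidate({ cacheName: 'api-cache' })
);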
Note: Additional information was appended to the end of the original question as Edit #1, detailing how request-promise in the back-end is causing the UI freeze. Keep in mind that a pure CSS animation is hanging temporarily; you can probably just skip to the edit (or read it all for completeness).
The setup
I'm working on a desktop webapp, using Electron.
At one point, the user is required to enter and submit some data. When they click "submit", I use JS to show this css loading animation (bottom-right loader), and send data asynchronously to the back-end...
- HTML -

<button id="submitBtn" type="submit" disabled="true">Go!</button>
<div class="submit-loader">
  <div class="loader _hide"></div>
</div>

- JS -

form.addEventListener('submit', function(e) {
  e.preventDefault();
  loader.classList.remove('_hide');
  setTimeout(function() {
    ipcRenderer.send('credentials:submit', credentials);
  }, 0);
});
where ._hide is simply
._hide {
  visibility: hidden;
}
and where ipcRenderer.send() is an async method, with no option to set it otherwise.
The problem
Normally, the 0 ms delay is sufficient to allow the DOM to be updated before the blocking event takes place. But not here. Whether I use setTimeout() or not, there is still a delay.
So, add a tiny delay...
loader.classList.remove('_hide');
setTimeout(function() {
  ipcRenderer.send('credentials:submit', credentials);
}, 100);
Great! The loader displays immediately upon submitting! But... after 100ms, the animation stops dead in its tracks, for about 500ms or so, and then gets back to chooching.
This working -> not working -> working pattern happens regardless of the delay length. As soon as the ipcRenderer starts doing stuff, everything is halted.
So... Why!?
This is the first time I've seen this kind of behavior. I'm pretty well-versed in HTML/CSS/JS, but am admittedly new to NodeJS and Electron. Why is my pure CSS animation being halted by the ipcRenderer, and what can I do to remedy this?
Edit #1 - Additional Info
In the back-end (NodeJS), I am using request-promise to make a call to an external API. This happens when the back-end receives the ipcRenderer message.
var rp = require('request-promise');

ipcMain.on('credentials:submit', function(e, credentials) {
  var options = {
    headers : {
      ... api-key...
    },
    json: true,
    url : url,
    method : 'GET'
  };
  return rp(options).then(function(data) {
    ... send response to callback...
  }).catch(function(err) {
    ... send error to callback...
  });
});
The buggy freezing behavior only happens on the first API call. Successive API calls (i.e. refreshing the desktop app without restarting the NodeJS backend), do not cause the hang-up. Even if I call a different API method, there are no issues.
For now, I've implemented the following hacky workaround:
First, initialize the first BrowserWindow with show:false...
window = new BrowserWindow({
  show: false
});
When the window is ready, send a ping to the external API, and only display the window after a successful response...
window.on('ready-to-show', function() {
  apiWrapper.ping(function(response) {
    if (response.error) {
      app.quit();
    } else {
      window.show(true);
    }
  });
});
This extra step means there is about a 500 ms delay before the window appears, but then all successive API calls (whether .ping() or otherwise) no longer block the UI. We're getting to the verge of callback hell, but this isn't too bad.
So... this is a request-promise issue (which is asynchronous, as far as I can tell from the docs). I'm not sure why this behavior only shows up on the first call, so please feel free to let me know if you know! Otherwise, the little hacky bit will have to do for now.
(Note: I'm the only person who will ever use this desktop app, so I'm not too worried about displaying a "ping failed" message. For a commercial release, I would alert the user to a failed API call.)
It's worth checking how request-promise sets up module loading internally. Reading the source, it seems there is a kind of lazy loading (https://github.com/request/request-promise/blob/master/lib/rp.js#L10-L12) that happens when request is first called. A quick try-out:
const convertHrtime = require('convert-hrtime');
const a = require('request-promise');
const start = process.hrtime();
a({uri: 'https://requestb.in/17on4me1'});
const end = process.hrtime(start);
console.log(convertHrtime(end));
const start2 = process.hrtime();
a({uri: 'https://requestb.in/17on4me1'});
const end2 = process.hrtime(start2);
console.log(convertHrtime(end2));
returns values like those below:
{ seconds: 0.00421092,
milliseconds: 4.21092,
nanoseconds: 4210920 }
{ seconds: 0.000511664,
milliseconds: 0.511664,
nanoseconds: 511664 }
The first call obviously takes longer than subsequent ones. (The numbers may of course vary; I ran this on bare Node.js on a relatively fast CPU.) If module loading is a major cost for the first call, it will block the main process until the module is loaded (because Node.js's require resolution is synchronous).
I can't say for sure that this is the concrete reason, but it's worth checking. As suggested in the comments, try another lib, or a bare internal module (like Electron's net), to rule it out.
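For example, a minimal sketch using Electron's built-in net module in place of request-promise (the header name and JSON handling are assumptions about your API):

const { net } = require('electron');

function apiGet(url, apiKey) {
  return new Promise(function (resolve, reject) {
    const request = net.request(url);
    request.setHeader('x-api-key', apiKey); // hypothetical header name
    request.on('response', function (response) {
      let body = '';
      response.on('data', function (chunk) { body += chunk; });
      response.on('end', function () { resolve(JSON.parse(body)); });
    });
    request.on('error', reject);
    request.end();
  });
}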
I have an app that loads several resources when it's first run, which are stored in localStorage. I have a function that checks whether all the local storage variables are set, so that part is working okay.
My method of working is like this:
Display a loading message.
Initialize the AJAX requests.
Start a timer interval to check if everything has loaded.
When the data has loaded, initialize the application etc.
If the data did not load, display an error message.
The problem is with #5 - how to detect whether there was an error? For example, if there was a connection problem, or the server sent back invalid data for whatever reason. Here is my current code - downloadData just performs a basic AJAX request:
// check local storage and download if any missing
if ( !checkLocalStorage() )
{
    $('#content').before( '<div class="notice" id="downloading">Downloading data, please wait...</div>' );
    for ( var i in db_tables )
    {
        if ( localStorage[db_tables[i]] == null )
            downloadData( db_tables[i] );
    }
}

// check progress
var timer = setInterval( function() {
    if ( checkLocalStorage() )
    {
        // everything is downloaded
        $('#downloading').hide();
        clearInterval(timer);
        initApp();
    }
}, 500 );
Could you turn it around a bit? Something like this (with sensible variable names and a "real" API) would simplify things:
Display a loading message.
Instantiate an application initializer, ai.
Crank up the AJAX requests:
Success handlers call ai.finished(task).
Error handlers call ai.error(task).
Register with the initializer, ai.register(task), in case a "you're taking too long" check is desired.
Once all the AJAX requests have called ai.finished, initialize the application etc.
If any of the AJAX tasks called ai.error, then display an error message and start cleaning things up.
This way you wouldn't need setInterval(), and the individual AJAX tasks would tell you when they have finished or fallen over. You might still want the interval to deal with tasks that take too long, but most of the logic would be notification-based rather than polling-based.
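A minimal sketch of such an initializer, assuming downloadData can report success/failure (all names are illustrative):

function createInitializer(onReady, onError) {
    var pending = 0, failed = false;
    return {
        register: function (task) { pending += 1; },
        finished: function (task) {
            pending -= 1;
            if (pending === 0 && !failed) onReady();
        },
        error: function (task) {
            failed = true;
            onError(task);
        }
    };
}

var ai = createInitializer(initApp, function (task) {
    $('#downloading').text('Failed to download ' + task + '.');
});

for (var i in db_tables) {
    if (localStorage[db_tables[i]] == null) {
        ai.register(db_tables[i]);
        // downloadData would call ai.finished(name) on success, ai.error(name) on failure
        downloadData(db_tables[i]);
    }
}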
Seeing your actual AJAX calls in downloadData would help, but I suggest you look over the jQuery AJAX API again. AJAX calls have callbacks not just for overall completion, but specifically for success and failure, including errors. Try retrying when there is an error, and if it continues to fail, warn the user. You can also use these callbacks to notify your application when the loading is done, instead of using an interval timer.
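For instance, downloadData might look roughly like this (the endpoint and retry count are assumptions):

function downloadData(table, retriesLeft) {
    retriesLeft = (retriesLeft === undefined) ? 2 : retriesLeft;
    $.ajax({
        url: '/api/data/' + table, // hypothetical endpoint
        dataType: 'json',
        success: function (data) {
            localStorage[table] = JSON.stringify(data);
            if (checkLocalStorage()) {
                // everything is downloaded - no interval timer needed
                $('#downloading').hide();
                initApp();
            }
        },
        error: function () {
            if (retriesLeft > 0) {
                downloadData(table, retriesLeft - 1); // retry a couple of times
            } else {
                $('#downloading').text('Download failed, please try again later.');
            }
        }
    });
}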
I have a web application where there are number of Ajax components which refresh themselves every so often inside a page (it's a dashboard of sorts).
Now, I want to add functionality to the page so that when there is no Internet connectivity, the current content of the page doesn't change and a message appears on the page saying that the page is offline (currently, as these various gadgets on the page try to refresh themselves and find that there is no connectivity, their old data vanishes).
So, what is the best way to go about this?
navigator.onLine
That should do what you're asking.
You probably want to check that in whatever code you have that updates the page. Eg:
if (navigator.onLine) {
  updatePage();
} else {
  displayOfflineWarning();
}
It seems like you've answered your own question. If the gadgets send an async request and it times out, don't update them. If enough of them do so, display the "page is offline" message.
See the HTML 5 draft specification. You want navigator.onLine. Not all browsers support it yet. Firefox 3 and Opera 9.5 do.
It sounds as though you are trying to cover up the problem rather than solve it. If a failed request causes your widgets to clear their data, then you should fix your code so that it doesn't attempt to update the widgets unless it receives a response, rather than attempting to figure out ahead of time whether the request will succeed.
One way to handle this might be to extend the XMLHttpRequest object with an explicit timeout method, and then use that to determine whether you're working in offline mode (that is, for browsers that don't support navigator.onLine). Here's how I implemented AJAX timeouts on one site (a site that uses the Prototype library). After 10 seconds (10,000 milliseconds), it aborts the call and calls the onFailure method.
/**
 * Monitor AJAX requests for timeouts
 * Based on the script here: http://codejanitor.com/wp/2006/03/23/ajax-timeouts-with-prototype/
 *
 * Usage: If an AJAX call takes more than the designated amount of time to return, we call the onFailure
 * method (if it exists), passing an error code to the function.
 *
 */
var xhr = {
    errorCode: 'timeout',
    callInProgress: function (xmlhttp) {
        switch (xmlhttp.readyState) {
            case 1: case 2: case 3:
                return true;
            // Case 4 and 0
            default:
                return false;
        }
    }
};

// Register global responders that will occur on all AJAX requests
Ajax.Responders.register({
    onCreate: function (request) {
        request.timeoutId = window.setTimeout(function () {
            // If we have hit the timeout and the AJAX request is active, abort it and let the user know
            if (xhr.callInProgress(request.transport)) {
                var parameters = request.options.parameters;
                request.transport.abort();
                // Run the onFailure method if we set one up when creating the AJAX object
                if (request.options.onFailure) {
                    request.options.onFailure(request.transport, xhr.errorCode, parameters);
                }
            }
        },
        // 10 seconds
        10000);
    },
    onComplete: function (request) {
        // Clear the timeout, the request completed ok
        window.clearTimeout(request.timeoutId);
    }
});
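With those responders registered, an ordinary Ajax.Request picks up the timeout behaviour automatically. A usage sketch (the URL and handler bodies are illustrative):

new Ajax.Request('/dashboard/gadget-data', {
    method: 'get',
    onSuccess: function (transport) {
        updateGadget(transport.responseText); // hypothetical gadget update
    },
    onFailure: function (transport, errorCode) {
        if (errorCode === xhr.errorCode) {
            displayOfflineWarning(); // hypothetical "page is offline" message
        }
    }
});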
Hmm, actually, now that I look into it a bit, it's more complicated than that. Have a read of these links on John Resig's blog and the Mozilla site. The above poster may also have a good point: you're making requests anyway, so you should be able to work out when they fail. That might be a much more reliable way to go.
Make a call to a reliable destination, or perhaps a series of calls - ones that should go through and return if the user has an active net connection - even something as simple as a token ping to Google, Yahoo, and MSN. If at least one comes back green, you know you're connected.
I think Google Gears has such functionality; maybe you could check how they did it.
Use the relevant HTML5 API: online/offline status/events.
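A minimal sketch of both the property and the events (the handler names are illustrative):

// synchronous check
if (!navigator.onLine) {
    displayOfflineWarning();
}

// event-driven updates
window.addEventListener('offline', function () { displayOfflineWarning(); });
window.addEventListener('online', function () { hideOfflineWarning(); });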
One possible solution: if the live page and the cached page have different URLs, just look at which URL you are on. If you are on the cached page's URL, then you are in offline mode. This blog makes a good point about why navigator.onLine is broken.