Infinite jasmine timeout - javascript

This is basically a follow-up to the "Remove timeout for single jasmine spec" GitHub issue.
The question:
Is it possible to make a single test never timeout?
The problem:
It is possible to set a timeout value globally via jasmine.DEFAULT_TIMEOUT_INTERVAL, for every describe with beforeEach/afterEach, or on a single it() block:
it('Has a custom timeout', function() {
    expect(true).toBeTruthy();
}, value in msec)
I'm interested in having a single spec never time out. I've tried to follow the advice proposed in the mentioned GitHub issue and use Infinity:
it('Has a custom timeout', function() {
    expect(true).toBeTruthy();
}, Infinity)
but I got the following error immediately after the tests got into the it() block:
Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL
I guess I cannot use Infinity as a timeout value, or I'm doing something wrong.
As a workaround, I can use a hardcoded large number instead, but I'd like to avoid that.

Jasmine internally uses setTimeout to wait for specs to finish for a defined period of time.
According to this Q/A - Why does setTimeout() "break" for large millisecond delay values?:
setTimeout uses a 32-bit int to store the delay
...
Timeout values too big to fit into a signed 32-bit integer may cause
overflow in FF, Safari, and Chrome, resulting in the timeout being
scheduled immediately. It makes more sense simply not to schedule
these timeouts, since 24.8 days is beyond a reasonable expectation for
the browser to stay open.
Since Infinity is greater than any other number, the overflow occurs. The max safe timeout in this case is 2^31 - 1 = 2147483647. This value is finite, so the test won't actually run infinitely long, but as said, I think 24.8 days is long enough.
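A quick way to see the overflow for yourself (a minimal sketch; the exact behavior may vary by browser, per the quote above):

// Delays that don't fit in a signed 32-bit int overflow, so the
// callback fires almost immediately instead of waiting.
setTimeout(function () {
    console.log('fires right away despite the "infinite" delay');
}, Infinity);

// The largest delay that still behaves as expected (~24.8 days):
setTimeout(function () {
    console.log('fires after about 24.8 days');
}, Math.pow(2, 31) - 1);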
You can define a constant to store this value:
jasmine.DEFAULT_TIMEOUT_INTERVAL = 2000;
var MAX_SAFE_TIMEOUT = Math.pow(2, 31) - 1;

describe('suite', function () {
    it('should work infinitely long', function (done) {
        setTimeout(function () {
            expect(true).toBe(true);
            done();
        }, 3000);
    }, MAX_SAFE_TIMEOUT);
});
See working sample here


Error: 10 ABORTED: Too much contention on these documents. Please try again

What does this error mean? In particular, what do they mean by "Please try again"?
Does it mean that the transaction failed and I have to re-run it manually?
From what I understood from the documentation,
The transaction read a document that was modified outside of the
transaction. In this case, the transaction automatically runs again.
The transaction is retried a finite number of times.
If so, on which documents?
The error does not indicate which documents it is talking about. I just get this stack trace:
{ Error: 10 ABORTED: Too much contention on these documents. Please try again.
    at Object.exports.createStatusError (node_modules\grpc\src\common.js:87:15)
    at ClientReadableStream._emitStatusIfDone (node_modules\grpc\src\client.js:235:26)
    at ClientReadableStream._receiveStatus (node_modules\grpc\src\client.js:213:8)
    at Object.onReceiveStatus (node_modules\grpc\src\client_interceptors.js:1256:15)
    at InterceptingListener._callNext (node_modules\grpc\src\client_interceptors.js:564:42)
    at InterceptingListener.onReceiveStatus (node_modules\grpc\src\client_interceptors.js:614:8)
    at C:\Users\Tolotra Samuel\PhpstormProjects\CryptOcean\node_modules\grpc\src\client_interceptors.js:1019:24
  code: 10,
  metadata: Metadata { _internal_repr: {} },
  details: 'Too much contention on these documents. Please try again.' }
To recreate this error, just run a for loop on the db.runTransaction method as indicated in the documentation.
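For illustration, a minimal reproduction along those lines might look like this (a sketch assuming firebase-admin in Node.js and a hypothetical counters/demo document):

const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();
const ref = db.collection('counters').doc('demo');

// Fire many transactions at the same document in parallel; once the
// internal retry limit is exhausted, some reject with ABORTED (code 10).
const runs = [];
for (let i = 0; i < 50; i++) {
    runs.push(db.runTransaction(async tx => {
        const snap = await tx.get(ref);
        tx.update(ref, { count: ((snap.data() || {}).count || 0) + 1 });
    }));
}
Promise.all(runs).catch(err => console.error(err.code, err.message));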
We ran into the same problem with the Firebase Firestore database. Even small counters with fewer than 30 items to count were running into this issue.
Our solution was not to distribute the counter, but to increase the number of tries for the transaction and to add a delay between those retries.
The first step was to save the transaction action as a const which could be passed to another function.
const taskCountTransaction = async transaction => {
    const taskDoc = await transaction.get(taskRef)
    if (taskDoc.exists) {
        let increment = 0
        if (change.after.exists && !change.before.exists) {
            increment = 1
        } else if (!change.after.exists && change.before.exists) {
            increment = -1
        }
        let newCount = (taskDoc.data()['itemsCount'] || 0) + increment
        return await transaction.update(taskRef, { itemsCount: newCount > 0 ? newCount : 0 })
    }
    return null
}
The second step was to create two helper functions. One waits a specific amount of time, and the other one runs the transaction and catches errors. If the abort error with code 10 occurs, we just run the transaction again, up to a specific number of retries.
const wait = ms => { return new Promise(resolve => setTimeout(resolve, ms)) }

const runTransaction = async (taskCountTransaction, retry = 0) => {
    try {
        await fs.runTransaction(taskCountTransaction)
        return null
    } catch (e) {
        console.warn(e)
        if (e.code === 10) {
            console.log(`Transaction abort error! Running it again after ${retry} retries.`)
            if (retry < 4) {
                await wait(1000)
                return runTransaction(taskCountTransaction, ++retry)
            }
        }
    }
}
Now that we have all we need, we can just call our helper function with await; our transaction call will run longer than a default one, and the retries will be deferred in time.
await runTransaction(taskCountTransaction)
What I like about this solution is that it doesn't mean more or more complicated code, and most of the already-written code can stay as it is. It also uses more time and resources only if the counter gets to the point where it has to count more items. Otherwise the time and resources are the same as with the default transactions.
To scale up to large numbers of items, we can increase either the number of retries or the waiting time. Both also affect the costs for Firebase. For the waiting part, we also need to increase the timeout for our function.
DISCLAIMER: I have not stress-tested this code with thousands of items or more. In our specific case the problems started at 20+ items, and we need up to 50 items for a task. I tested it with 200 items and the problem did not appear again.
The transaction does run several times if needed, but if the values read continue to be updated before the write or writes can occur it will eventually fail, thus the documentation noting the transaction is retried a finite number of times. If you have a value that is updating frequently like a counter, consider other solutions like distributed counters. If you'd like more specific suggestions, I recommend you include the code of your transaction in your question and some information about what you're trying to achieve.
Firestore re-runs the transaction only a finite number of times. As of writing, this number is hard-coded as 5, and cannot be changed. To avoid congestion/contention when many users are using the same document, normally we use the exponential back-off algorithm (but this will result in transactions taking longer to complete, which may be acceptable in some use cases).
However, as of writing, this has not been implemented in the Firebase SDK yet — transactions are retried right away. Fortunately, we can implement our own exponential back-off algorithm in a transaction:
const createTransactionCollisionAvoider = () => {
    let attempts = 0
    return {
        async avoidCollision() {
            attempts++
            await require('delay')(Math.pow(2, attempts) * 1000 * Math.random())
        }
    }
}
…which can be used like this:
// Each time we run a transaction, create a collision avoider.
const collisionAvoider = createTransactionCollisionAvoider()

db.runTransaction(async transaction => {
    // At the very beginning of the transaction run,
    // introduce a random delay. The delay increases each time
    // the transaction has to be re-run.
    await collisionAvoider.avoidCollision()

    // The rest goes as normal.
    const doc = await transaction.get(...)
    // ...
    transaction.set(...)
})
Note: The above example may cause your transaction to take up to 1.5 minutes to complete. This is fine for my use case. You might have to adjust the backoff algorithm for your use case.
I have implemented a simple back-off solution to share: maintain a global variable that assigns a different "retry slot" to each failed connection. For example, if 5 connections came in at the same time and 4 of them got a contention error, each would get a delay of 500ms, 1000ms, 1500ms, or 2000ms before trying again. So they could potentially all resolve at the same time without any more contention.
My transaction runs in response to Firebase Functions calls. Each Functions compute instance can have a global variable nextRetrySlot that is preserved until it is shut down. So if error.code === 10 is caught for a contention issue, the delay time can be (nextRetrySlot + 1) * 500, and then, for example, nextRetrySlot = (nextRetrySlot + 1) % 10, so subsequent connections get a different delay, round-robin, in the 500ms ~ 5000ms range.
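A minimal sketch of that scheme (the db handle and transaction function are assumed; nextRetrySlot is module-level state that lives as long as the Functions instance does):

// Preserved for the lifetime of this Functions compute instance.
let nextRetrySlot = 0;

async function runWithSlotBackoff(db, transactionFn) {
    try {
        return await db.runTransaction(transactionFn);
    } catch (e) {
        if (e.code !== 10) throw e; // only retry on contention (ABORTED)
        // Claim a retry slot: delays rotate round-robin through 500ms..5000ms.
        const delay = (nextRetrySlot + 1) * 500;
        nextRetrySlot = (nextRetrySlot + 1) % 10;
        await new Promise(resolve => setTimeout(resolve, delay));
        return runWithSlotBackoff(db, transactionFn);
    }
}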
Below are some benchmarks:
My situation is that I would like each new Firebase Auth registration to get a much shorter ID derived from the unique Firebase UID, so there is a risk of collision.
My solution is simply to check all registered short IDs, and if the query returns something, generate another one until it does not. Then we register this new short ID in the database. So the algorithm cannot rely on only the Firebase UID, but it is able to "move to the next one" in a deterministic way (not just randomly again).
This is my transaction: it first reads the database of all used short IDs, then writes a new one atomically, to prevent the extremely unlikely event that 2 new registrations come in at the same time with different Firebase UIDs that derive into the same short ID, and both see that the short ID is vacant at the same time.
I ran a test that intentionally registers 20 different Firebase UIDs which all derive into the same short ID (an extremely unlikely situation), all running in a burst at the same time. First I tried using the same delay on each retry, so I expected them to clash with each other again and again while slowly resolving some connections.
Same 500ms delay on retry : 45000ms ~ 60000ms
Same 1000ms delay on retry : 30000ms ~ 49000ms
Same 1500ms delay on retry : 43000ms ~ 49000ms
Then with distributed delay times in slots:
500ms * 5 slots on retry : 20000ms ~ 31000ms
500ms * 10 slots on retry : 22000ms ~ 23000ms
500ms * 20 slots on retry : 19000ms ~ 20000ms
1000ms * 5 slots on retry : ~29000ms
1000ms * 10 slots on retry : ~25000ms
1000ms * 20 slots on retry : ~26000ms
This confirms that different delay times definitely help.
I found maxAttempts in the runTransaction code, which should modify the default of 5 attempts (but I haven't tested it yet).
Anyway, I think that the random wait (plus possibly the queue) is still the better option.
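For reference, a sketch of how that option is passed in the Node.js Admin SDK, reusing the taskCountTransaction from the earlier answer (untested here, as noted above):

// Allow up to 10 attempts instead of the default 5.
db.runTransaction(taskCountTransaction, { maxAttempts: 10 });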
Firestore now supports atomic server-side increments via FieldValue.increment() (pass a negative value to decrement).
You can increment or decrement by any amount. See their blog post for full details. In many cases, this will remove the need for a client-side transaction.
Example:
document("fitness_teams/Team_1").
updateData(["step_counter" : FieldValue.increment(500)])
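That snippet is the Swift client API; with the Node.js Admin SDK the equivalent would look roughly like this (document path taken from the example above):

const admin = require('firebase-admin');
const db = admin.firestore();

db.doc('fitness_teams/Team_1').update({
    step_counter: admin.firestore.FieldValue.increment(500)
});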
This is still limited to a sustained write limit of 1 QPS per document so if you need higher throughput, consider using distributed counters. This will increase your read cost (as you'll need to read all the shard documents and compute a total) but allow you to scale your throughput by increasing the number of shards. Now, if you do need to increment a counter as part of a transaction, it's much less likely to fail due to update contention.

How to run a protractor browser.sleep for 15 minutes

I have to run some tests against a live site.
I pretty much just have to make tasks that wait for a website to time out (15 minutes), then run another task once that has passed.
The longest I got it to wait is 26.6 seconds (26600 ms) on Firefox, and about 30 on Chrome.
I get the following error:
Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
So basically I need to adjust the timeout specified by Jasmine to run this:
browser.get('www.page.com');
browser.sleep(900000);
browser.doSomethingElse();
This is a Jasmine timeout happening in your case. You need to tell Jasmine that it's okay for the spec to take time. You can set the timeout globally in jasmineNodeOpts in your config:
jasmineNodeOpts: {
    defaultTimeoutInterval: 200000,
}
Or, you can also set it on a spec level (example here).
beforeEach(function () {
    browser.waitForAngular();
    jasmine.DEFAULT_TIMEOUT_INTERVAL = 1000000;
});
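You can also pass the timeout as the third argument to a single it() in Jasmine 2.x (a sketch using the question's calls; the spec name is made up):

it('waits out the 15-minute session timeout', function () {
    browser.get('www.page.com');
    browser.sleep(900000);
    browser.doSomethingElse();
}, 1000000); // per-spec timeout in ms, overrides DEFAULT_TIMEOUT_INTERVAL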

jquery setTimeout too much recursion

I have read from multiple places that setTimeout() is preferable to setInterval() when setting something up to basically run forever. The code below works fine, but after about an hour of running, Firefox (38.0.1) throws a "too much recursion" error.
Essentially I have it grabbing a very small amount of text from counts.php and updating a table with that information. The whole call and return takes about 50ms according to the inspectors. I'm trying to have it do this every x seconds as directed by t.
I suspect if I switch to setInterval() this would probably work, but I wasn't sure what the current state of the setTimeout() vs setInterval() mindset is as everything I've been finding is about 3-5 years old.
$(document).ready(function () {
    t = 3000;
    $.ajaxSetup({ cache: false });

    function countsTimer(t) {
        setTimeout(function () {
            $.getJSON("counts.php", function (r) {
                $(".count").each(function (i, v) {
                    if ($(this).html() != r[i]) {
                        $(this).fadeOut(function () {
                            $(this)
                                .css("color", ($(this).html() < r[i]) ? "green" : "red")
                                .html(r[i])
                                .fadeIn()
                                .animate({ color: '#585858' }, 10000);
                        });
                    }
                });
                t = $(".selected").html().slice(0, -1) * ($(".selected").html().slice(-1) == "s" ? 1000 : 60000);
                countsTimer(t);
            });
        }, t);
    }

    countsTimer(t);
});
Update: This issue was resolved by adding .stop(true, true) before the .fadeOut() animation. The issue only occurred in Firefox; testing in other browsers didn't cause any problems. I have marked the answer as correct in spite of it not being the solution in this particular case, because it offers a good explanation in a more general sense.
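Based on that update, the fix amounts to clearing any queued animations before starting new ones, roughly:

$(this).stop(true, true).fadeOut(function () {
    // ...same .css/.html/.fadeIn/.animate chain as in the question...
});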
You should indeed switch to setInterval() in this case. The problem with setInterval() is that you have to keep a reference if you ever want to clear the timer, and if the operation (possibly) takes longer to perform than the interval itself, the operation could be running twice.
For example, if you have a function running every 1s using setInterval, but the function itself takes 2s to complete due to a slow XHR request, that function will at some point be running twice at the same time. This is often undesirable. By using setTimeout and calling it at the end of the original function, the calls never overlap and the timeout you set is always the time between two function calls.
However, in your case you seem to have a long-running application; because your function runs every 3 seconds, the function call stack will increase by one every three seconds. This cannot be avoided unless you break the recursion loop. For example, you could do the request only when receiving a browser event like a click on the document, checking the elapsed time:
(function () {
    var lastCheck = Date.now(), alreadyRunning = false;
    document.addEventListener(
        "click",
        function () {
            if (!alreadyRunning && Date.now() - lastCheck > 3000) {
                alreadyRunning = true;
                /* Do your request here! */
                // Code below should run after your request has finished
                lastCheck = Date.now();
                alreadyRunning = false;
            }
        }
    );
}());
This doesn't have the drawback setInterval has, because you always check whether the code is already running; however, the check only runs when receiving a browser event (which is normally not a problem), and this method requires a lot more boilerplate.
So if you're sure the XHR request won't take longer than 3s to complete, just use setInterval().
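A sketch of the question's loop rewritten with setInterval (the variable delay driven by .selected is omitted for brevity, so this assumes a fixed 3s interval):

$(document).ready(function () {
    $.ajaxSetup({ cache: false });
    setInterval(function () {
        $.getJSON("counts.php", function (r) {
            // ...update the .count cells exactly as in the question...
        });
    }, 3000); // fixed interval; requests are assumed to finish well within it
});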
Edit: Answer above is wrong in some aspects
As pointed out in the comments, setTimeout() does indeed not increase the call stack size, since it returns before the function in the timeout is called. Also, the function in the question does not contain any actual recursion. I'll keep this answer because part of the question is about setTimeout() vs setInterval(). However, the problem causing the recursion error is probably in some other piece of code, since there is no function calling itself, directly or indirectly, anywhere in the sample code.

Execution Affecting Already Set Timeouts

I am pretty experienced in Javascript but I have come across a weird issue in developing a web app where the script just isn't behaving as it should be. Oddly (inexplicably, in fact) I don't experience this issue when I am using my offline development copy or after logging in. However, when the user is already logged in and they refresh the page, they come across this issue.
Here is the affected method (this is literally the entire thing):
_setLiveSyncTimeout: function (firstTestCall) {
    if (firstTestCall) {
        console.log('set next')
        window.setTimeout(function () {
            console.log(200, 'a');
            window.setTimeout(function () {
                console.log(400, 'b');
            }, 200);
        }, 200);
        window.setTimeout(function () {
            console.log(400, 'c');
        }, 400);
    }
    this._liveSyncTimeout = setTimeout(_(this.liveSyncNow).bind(this), this._liveSyncPeriod);
}
I noticed that this.liveSyncNow didn't seem to be getting called every time I would have expected it to, so I added the if statement above to debug. Weirdly, when you pass in true, the first timeout ('a') will run; however, 'b' and 'c' do not. Since the result of setTimeout is not stored in these cases, it should be literally impossible to cancel them, but for no apparent reason they simply do not run. In the same case that 'b' and 'c' do not run, the timeout for this.liveSyncNow does not run either.
From tests it appears that when this function runs, all of the timeouts previously created in it get cancelled. The reason that the 'a' timeout does run is that there is a 200ms gap between the call where it is created and the next one.
Edit
This occurs in at least Chrome and Firefox.
Edit 2
Here is a simpler example of the issue. This is part of the definition of a Backbone model (you don't need to worry about that, though). This is literally the entire function.
initialize: function () {
    var a = setInterval(function () {
        console.log('test')
    }, 50);
}
The setInterval runs three times ('test' appears in the console three times) and then just stops.
It turned out, unsurprisingly, that a totally unrelated piece of code was incorrectly cancelling my timeouts. This happened because I was using a modded version of setTimeout which returned its own incrementally-assigned integer, but I did not use the matching modded version of clearTimeout and instead used the original window.clearTimeout. As a result, my good timers were being incorrectly cancelled.
I found this and tracked down the dodgy piece of code by overriding clearTimeout and finding out who called it, when, and with what id.
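A minimal version of that instrumentation might look like this:

var realClearTimeout = window.clearTimeout;
window.clearTimeout = function (id) {
    // Log the id and a stack trace so the offending caller can be identified.
    console.log('clearTimeout(' + id + ')', new Error().stack);
    return realClearTimeout.call(window, id);
};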
I have attempted to prevent this from happening in the future by assigning ids in my modded setTimeout decrementally, below 0, so they can never collide with the browser's own (positive) timer ids.

poll in javascript

I'm using the YouTube API to find out when a video is fully buffered, via player.getVideoLoadedFraction().
When the fraction is 1, the video is fully buffered, but I have to poll this function to check whether it is 1 and then get the time, like:
setInterval(display_fraction, 1);
since a video could be tens of minutes long.
Will this polling create a heavy load on the browser/client and thus affect the video streaming? Are there any better polling methods or ways to detect when YouTube finishes buffering?
BTW, the link for youtube api is:
https://developers.google.com/youtube/flash_api_reference#Playback_controls
Humans start perceiving time intervals somewhere between a 20th and a 10th of a second, so trying to poll with a value of 1ms is neither necessary nor desirable (any modern browser will round that up to 5ms or 10ms anyway). Values like 50 or 100 would be more appropriate.
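Applied to the question's call, that would simply be:

setInterval(display_fraction, 100); // poll every 100ms instead of every 1ms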
I would also strongly recommend using a chained series of setTimeout calls rather than a setInterval call, something like this:
function onVideoReady(callback) {
    // Do first check as soon as the JavaScript engine is available to do it
    setTimeout(checkVideoReady, 0);

    // Our check function
    function checkVideoReady() {
        if (videoIsReady) {
            // The video is ready, notify calling code
            callback();
        }
        else {
            // Not ready yet, wait a 10th of a second
            setTimeout(checkVideoReady, 100);
        }
    }
}
...which you then use like this:
onVideoReady(function () {
    // The video is ready, do something
});
The reasons I advocate a chained series of setTimeout instead of setInterval are:
You can change the delay easily from iteration to iteration. For instance in the above, I fire the check as soon as possible the first time, and then after 100ms each subsequent time. You can do more complex things with timing than that; the flexibility is there.
It's much, much harder to inadvertently end up with more than one running, since code has to explicitly trigger the next loop.
setInterval varies amongst browsers as to whether it measures the interval from the start of the last call or the end of it. If you use a pattern like the above, you're always sure it's from the end of the last check.
If your code is still running when the next interval is meant to occur, it just gets skipped. This can cause gaps (e.g., if you're doing something every 100ms and your previous loop takes 102ms to complete, the next one doesn't start as soon as possible, it waits the remaining 98ms), at least on some browsers.
But it's up to you, of course, the above can be done just as easily with setInterval and clearInterval calls as with a chain of setTimeout calls.
An alternative to chained timeouts is chained Promises. The following implements periodic polling along with a timeout.
var Promise = require('bluebird');

/**
 * Periodically poll a signal function until either it returns true or a timeout is reached.
 *
 * @param signal function that returns true when the polled operation is complete
 * @param interval time interval between polls in milliseconds
 * @param timeout period of time before giving up on polling
 * @returns true if the signal function returned true, false if the operation timed out
 */
function poll(signal, interval, timeout) {
    function pollRecursive() {
        return signal() ? Promise.resolve(true) : Promise.delay(interval).then(pollRecursive);
    }

    return pollRecursive()
        .cancellable()
        .timeout(timeout)
        .catch(Promise.TimeoutError, Promise.CancellationError, function () {
            return false;
        });
}
You call it like so:
poll(isVideoReady, pollingInterval, timeout).then(console.log);
See Javascript polling with promises.
