The cost of using Firebase Cloud Functions grows with the number of invocations. I was wondering whether that means I can reduce the cost by running my code x times within a single invocation. I've tested this, and the billed invocations were indeed reduced.
This is what I mean:
exports.functionName = functions.region("europe-west2").pubsub.schedule('every 1 minutes')
    .onRun((context) => { // counts as an invocation
        console.log("Running...");
        var timesRun = 0;
        var interval = setInterval(() => {
            timesRun += 1;
            if (timesRun === 6) {
                console.log("Stopping interval...");
                clearInterval(interval);
            }
            console.log("Executing...");
            // code... for example fetching JSON
        }, 10000); // the interval ticks don't count as invocations
    });
With this, I can run my code five times a minute while the billed invocation count stays at 1.
Is this really more efficient, or am I missing something?
With Cloud Functions you pay for both invocation count and CPU/memory usage duration. So while your approach reduces the number of invocations, it increases the amount of time you're using the CPU/memory.
Which one comes out cheaper should be a matter of putting the data into the pricing calculator.
Note that while the calculator shows only full seconds, you're actually billed for compute time per 100ms:
Compute time is measured in 100ms increments, rounded up to the nearest increment. For example, a function executing for 260ms would be billed as 300ms.
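The rounding rule itself is easy to model when plugging numbers into the calculator. A minimal sketch (billableMs is my own illustrative helper; this ignores the free tier and the per-GB-s/GHz-s rates):

// Round a measured execution time up to the next 100 ms billing increment.
function billableMs(actualMs) {
    return Math.ceil(actualMs / 100) * 100;
}

console.log(billableMs(260));       // 300, as in the quoted example
console.log(billableMs(50));        // 100: even very short runs bill a full increment
console.log(billableMs(60 * 1000)); // 60000: one invocation kept alive for a whole minute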
According to the documentation, your function times out after 1 minute by default, which can be extended up to 9 minutes. Whatever you do within that time counts as a single execution.
However, if you start interacting with other services like Firestore, those operations are billed separately.
This code recursively calls the same function with a setTimeout of 1 millisecond, which in theory should call the function 1000 times per second. However, it's only called about 200 times per second:
This behavior happens on different machines and in different browsers. I checked whether it has something to do with the maximum call stack size, but that limit is actually far higher than 200 in every browser.
const info = document.querySelector("#info");
let start = performance.now();
let iterations = 0;

function run() {
    if (performance.now() - start > 1000) {
        info.innerText = `${iterations} function calls per second`;
        start = performance.now();
        iterations = 0;
    }
    iterations++;
    setTimeout(run, 1);
}
run();
<div id="info"></div>
There's a limitation on how often nested timers can run. The HTML standard says: "If nesting level is greater than 5, and timeout is less than 4, then set timeout to 4."
This is true only for client-side engines (browsers); in Node.js this limitation does not exist.
HTML Standard Timers Section
Very similar example from javascript.info
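If you want to see the clamp directly, you can log the real gaps between nested zero-delay timeouts. In a browser, the first few fire almost immediately and the rest settle around 4 ms. A quick sketch (exact numbers vary by engine and load):

const gaps = [];
let prev = performance.now();

function tick() {
    const now = performance.now();
    gaps.push(Math.round((now - prev) * 10) / 10); // gap in ms, one decimal
    prev = now;
    if (gaps.length < 10) {
        setTimeout(tick, 0);
    } else {
        console.log(gaps); // e.g. [0.1, 0.2, 0.1, 0.1, 4.3, 4.2, 4.1, ...] in a browser
    }
}

setTimeout(tick, 0);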
The delay argument passed to setTimeout and setInterval is not a guaranteed amount of time. It's the minimum amount of time you could expect to wait before the callback function is executed. No matter how much of a delay you've asked for, if the JavaScript call stack is busy, anything in the event queue will have to wait.
Also, there is an absolute minimum delay you can reasonably expect before a callback is invoked, and it depends on the internals of the client.
From the HTML5 Spec:
This API does not guarantee that timers will run exactly on schedule.
Delays due to CPU load, other tasks, etc, are to be expected.
I once read somewhere that it was around 16 ms, so setting a delay of anything less than that shouldn't really change the timing at all.
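The call-stack point is easy to demonstrate: schedule a short timeout, then block the stack with synchronous work; the callback can only fire once the stack is free. A minimal sketch:

const scheduled = performance.now();

setTimeout(() => {
    // We asked for 10 ms, but this can only run after the busy-loop
    // below has finished and released the call stack.
    console.log(`fired after ${Math.round(performance.now() - scheduled)} ms`);
}, 10);

// Keep the call stack busy for ~500 ms.
const blockUntil = performance.now() + 500;
while (performance.now() < blockUntil) { /* spin */ }
// Logs something like "fired after 502 ms", not 10 ms.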
This question has already been answered for the browser here, but window.performance.now() is obviously not available in Node.js.
Some applications need a steady clock, i.e., a clock that monotonically increases through time, not subject to system clock drifts. For instance, Java has System.nanoTime() and C++ has std::chrono::steady_clock. Is such clock available in Node.js?
Turns out the equivalent in Node.js is process.hrtime(). As per the documentation:
[The time returned from process.hrtime() is] relative to an arbitrary time in the past, and not related to the time of day and therefore not subject to clock drift.
Example
Let's say we want to periodically call some REST endpoint once a second, process its outcome and print something to a log file. Consider the endpoint may take a while to respond, e.g., from hundreds of milliseconds to more than one second. We don't want to have two concurrent requests going on, so setInterval() does not exactly meet our needs.
One good approach is to call our function a first time, do the request, process it, and then call setTimeout() to reschedule another run. But we want to do that once a second, taking into account the time we spent making the request. Here's one way to do it using our steady clock (which guarantees we won't be fooled by system clock drift):
function time() {
    const nanos = process.hrtime.bigint();
    return Number(nanos / 1_000_000n);
}

async function run() {
    const startTime = time();
    const response = await doRequest();
    await processResponse(response);
    const endTime = time();
    // wait just the right amount of time so we run once per second;
    // if we took more than one second, run again immediately
    const nextRunInMillis = Math.max(0, 1000 - (endTime - startTime));
    setTimeout(run, nextRunInMillis);
}

run();
I made this helper function time() which converts the value returned by process.hrtime.bigint() to a timestamp with milliseconds resolution; just enough resolution for this application.
Node.js 10.7.0 added process.hrtime.bigint().
You can then do this:
function monotimeRef() {
    return process.hrtime.bigint();
}

function monotimeDiff(ref) {
    return Number(process.hrtime.bigint() - ref) / 10 ** 9;
}
Demonstrating the usage in a Node REPL:
// Measure reference time.
> let t0 = monotimeRef();
undefined
[ ... let some time pass ... ]
// Measure time passed since reference time,
// in seconds.
> monotimeDiff(t0)
12.546663115
Note:
Number() converts a BigInt to a regular Number, allowing the translation from nanoseconds to seconds with the normal division operator.
monotimeDiff() returns the elapsed wall time with nanosecond resolution as a floating point number (since it converts to Number before doing the division).
This assumes that the measured duration does not grow beyond 2^53 ns, which is about 104 days (2**53 ns / 10**9 ns/s / 86400 s/day ≈ 104.25 days).
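That 2^53 cutoff is where Number stops representing every integer exactly, which is easy to verify:

// Beyond 2^53, adjacent integers collapse to the same Number value.
console.log(Number(2n ** 53n) === Number(2n ** 53n + 1n)); // true

// And 2^53 nanoseconds is roughly 104 days:
console.log(2 ** 53 / 1e9 / 86400); // ≈ 104.25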
I have created a script in JavaScript that is injected into our Ext JS application during automated browser testing. The script measures the amount of time taken to load the data in our grids.
Specifically, the script polls each grid, looks to see if there is a first row or a 'no data' message, and once all grids have satisfied this condition the script records the value between Date.now() and performance.timing.fetchStart, and treats this as the time the page took to load.
This script works more or less as expected, however when compared with human measured timings (Google stopwatch ftw), the time reported by this test is consistently around 300 milliseconds longer than when measured by stopwatch.
My questions are these:
Is there a hole in this logic that would lead to incorrect results?
Are there any alternative and accurate ways to achieve this measurement?
The script is as follows:
function loadPoll() {
    var i, duration,
        dataRow = '.firstRow', noDataRow = '.noData',
        grids = ['.grid1', '.grid2', '.grid3', '.grid4', '.grid5', '.grid6', '.grid7'];
    for (i = 0; i < grids.length; ++i) {
        var data = grids[i] + ' ' + dataRow,
            noData = grids[i] + ' ' + noDataRow;
        if (!(document.querySelector(data) || document.querySelector(noData))) {
            window.setTimeout(loadPoll, 100);
            return;
        }
    }
    duration = Date.now() - performance.timing.fetchStart;
    window.loadTime = duration;
}
loadPoll();
Some considerations:
Although I am aware that human response time can be slow, I am sure that the 300 millisecond inconsistency is not introduced by the human factor of using Google stopwatch.
Looking at the code it might appear that the polling of multiple elements could lead to the 300 ms inconsistency; however, when I change the number of elements being monitored from 7 to 1, there still appears to be a 300 ms surplus in the time reported by the automated test.
Our automated tests are executed in a framework controlled by Selenium and Protractor.
Thanks in advance if you are able to provide any insight to this!
If you use performance.now() the time should be accurate to 5 microseconds. According to MDN:
The performance.now() method returns a DOMHighResTimeStamp, measured in milliseconds, accurate to five thousandths of a millisecond (5 microseconds).
The returned value represents the time elapsed since the time origin (the PerformanceTiming.navigationStart property).
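Applied to the script from the question, this would mean reading performance.now() once the grids are ready, instead of subtracting fetchStart from Date.now(). A sketch of that change (keep in mind performance.now() is relative to navigationStart, which is close to, but not identical to, fetchStart):

function loadPoll() {
    var grids = ['.grid1', '.grid2', '.grid3', '.grid4', '.grid5', '.grid6', '.grid7'];
    for (var i = 0; i < grids.length; ++i) {
        if (!(document.querySelector(grids[i] + ' .firstRow') ||
              document.querySelector(grids[i] + ' .noData'))) {
            window.setTimeout(loadPoll, 100);
            return;
        }
    }
    // performance.now() is already relative to the time origin,
    // so no subtraction is needed.
    window.loadTime = performance.now();
}
loadPoll();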
If I were you, I would revise how the actual time measurement is captured. Rather than evaluating the time for each loadPoll() call, you can evaluate how many calls you can perform in a given period of time. In other words, you can count the number of function iterations over a longer period, e.g., 1000 milliseconds. Here's how this can be done:
var timeout = 1000;
var startTime = new Date().getTime();
var elapsedTime = 0;

for (var iterations = 0; elapsedTime < timeout; iterations++) {
    loadPoll();
    elapsedTime = new Date().getTime() - startTime;
}

// output the number of achieved iterations
console.log(iterations);
This approach will give you more consistent and accurate time estimates. Faster systems will simply achieve a greater number of iterations. Keep in mind that setInterval()/setTimeout() are not perfectly precise, and for really small intervals these functions may give you invalid results due to garbage collection, event handling and the many other things that can run in parallel while your code is being executed.
Can anyone please say what these numbers are? They are increasing so fast. Is that the number of times the function executes?
var time = setInterval(function() {
    var b = document.getElementsByTagName('a')[22].innerHTML;
    if (b == "name") {
        document.getElementsByTagName('a')[22].click();
        clearInterval(time);
    } else {
        console.log("script started");
    }
}, 10);
Those are the number of times the console.log("script started") message has been triggered. Chrome automatically groups consecutive identical log messages rather than writing each one out on a new line. This makes it easier to see previous messages that would normally get scrolled off the top of the console too quickly.
In your case, the interval's callback function triggers the log message every 10 milliseconds, so that count increments very quickly; it fires about 100 times a second.
EDIT: In a comment on another answer you asked why setting the interval value to 10000000000 caused the interval to fire extremely quickly, rather than once every ~115 days.
This is because the number exceeds the maximum value a signed 32-bit integer can hold, which is approximately 2.1 billion (2,147,483,647). Once it exceeds that amount, it "wraps" around to the negative numbers. When setInterval() receives a negative number for the interval milliseconds, it simply rounds the value up to the 4 millisecond minimum. This results in the interval firing as quickly as the browser allows, roughly 250 times a second given that 4 ms floor, though there is no guarantee it will go that quickly on slower hardware.
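The ~115 days figure itself is easy to verify:

console.log(10_000_000_000 / 1000 / 86400); // ≈ 115.74 days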
It's the number of times the console.log() output has repeated. Logging the same message once shows 1; logging it twice shows 2.
I'm trying to make a simple JavaScript game. Basically, you get pancakes at a certain rate per second. In the code, I call this rate (the rate at which you get pancakes) pps. I want the HTML span tag that shows the total number of pancakes to count up smoothly as pancakes come in (at the rate of pps), so it looks nicer.
For example, if I get pancakes at 5 pps, right now it just goes 0, 5, 10, etc... every second. I want it to go 0, 1, 2, 3, 4, 5 in the first second, then 6, 7, 8, 9, 10 the next second, and so on.
Here is the code that I have so far, for the pancake counter:
pps = 100;
tp = 0;

window.setInterval(function() {
    tp += parseInt(pps);
    document.getElementById("test").innerHTML = tp;
}, 1000);
Anyone know how to do this?
This is a problem common to all games, and one that you need to solve correctly for your game to work outside of your own computer.
The correct solution is that you need to measure the elasped time between each iteration of your game loop, or between each frame render. This is, in practice, going to be a very small number; you can think of this number as a "scaling factor".
If your game was about moving a space ship, and you wanted it to move 5 screen units per second, your game loop would:
Find the time elapsed since the last interval, in seconds. In a game rate-limited to 60 frames-per-second, this would be around 1/60th of a second
Multiply the ship's speed (5 units per second) by 1/60; the ship would move 0.0833... units this tick
move the ship by that amount.
By the time 1 full second has passed, the ship will have moved 5 units.
The exact same principle applies to your PPS.
The important part is that, in the real world, it will not be exactly 1/60th of a second between frames. If you're not computing the "scaling factor" each iteration of your loop, your game will slowly accrue error. setInterval is particularly bad for this, and not at all suitable as a source for time in a game.
The implementation in JavaScript is simple: each game loop, record the current time from whatever source of time is available to you; in your case, you can use new Date().getTime(), which returns the time since the UNIX epoch in milliseconds. Record this value.
In the subsequent redraw, you will again call new Date().getTime(), then subtract the previous value to get the elapsed time. That is your scaling factor, in milliseconds. You can multiply pps by that value (divided by 1000) to determine how many pancakes to add.
It's important that you still follow this approach, even if you're using setInterval. You might think you can simply setInterval(..., 1000 / 60) to invoke your callback 60 times per second, but setInterval (and setTimeout) are not accurate - they invoke your callback at least that far in the future, but potentially much further. You still need to scale pps by the elapsed times since the last redraw.
Here's a simple implementation:
var PPS = 5;
var lastTime = new Date().getTime();
var cakes = 0;

setInterval(function () {
    var currentTime = new Date().getTime();
    var elapsedTime = currentTime - lastTime;
    lastTime = currentTime;
    // elapsedTime is in milliseconds; divide by 1000 to get fractional seconds
    cakes += PPS * (elapsedTime / 1000);
    document.getElementById('pps').innerText = cakes;
}, 10);
<div id="pps"></div>
As an aside, the incorrect solution is one you find in a lot of old games: Increment things as fast as you can. On old computers this was a viable solution; the game redrew slowly enough that the game would advance smoothly. As computers got faster, the game would run faster, until it became unplayable.
A simple interval timer would do the trick. Something like this:
function incrementToNumber(tag, currentNumber, targetNumber) {
    var numTicks = targetNumber - currentNumber;
    var interval = setInterval(function() {
        currentNumber++;
        tag.innerText = currentNumber;
        if (currentNumber == targetNumber) {
            clearInterval(interval);
        }
    }, 1000 / numTicks);
}
That particular function increments over the course of one second. To change the time it takes to increment, swap out the 1000 with whatever milliseconds you want it to take.
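Wired up to the counter from the question (assuming the same #test span and the tp/pps variables), each one-second tick could hand its new total to this function:

// Animate from the old total up to the new total over the next second.
var span = document.getElementById('test');
incrementToNumber(span, tp, tp + pps);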
For a version that increases forever:
function incrementForever(tag, currentPancakes, pancakesPerSecond) {
    setInterval(function() {
        currentPancakes++;
        tag.innerText = currentPancakes;
    }, 1000 / pancakesPerSecond);
}
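Usage would look something like this (5 pancakes per second, as in the question):

incrementForever(document.getElementById('test'), 0, 5);

Keep in mind that for large pancakesPerSecond values the computed interval drops below the ~4 ms timer clamp discussed earlier, so the displayed count will fall behind; the elapsed-time approach in the other answer doesn't suffer from this.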