I'm creating a bot for an online dynamic game. In this case, dynamic means that the in-game hero can move around and the background changes while moving. Global variables like monsters change dynamically while moving as well.
My bot uses puppeteer. Since I need this monsters object, I have a function which fetches those monsters from the page context every 2-3 seconds (randomized for anti-detection).
This solution is far from perfect. It has two main downsides:
After my bot kills a monster and wants to move on to the next one, it still sees the monster it just killed, because the next refresh is, for example, 1500 ms away at that point in time.
Fetching the monsters is slow.
To solve the first one I could simply re-fetch the monsters every time after killing one. On the other hand, the second downside would then bite even harder, because I would perform the already-slow fetch far more often.
It all comes down to the second issue, which is performance. You may ask how I know that performance is bad.
When the hero is moving it's relatively smooth, but when the monsters are being downloaded I see a micro lag: for a fraction of a second the hero stops. It's really maybe 100 ms of lag, but I can see it with the naked eye, and if I fetched monsters more frequently this lag would get stronger (to be clear: the lag would not get longer, but more frequent).
Downloading the object from the global window takes long. The reason is that the game maintainers designed it so that monsters sit inside a big npc object, which contains everything on the dashboard, even empty elements; in total this npc object has between 100k and 200k entries. I do lots of filtering to get to the final monster data I care about.
I'll show how I'm actually getting these monsters. I execute 3 async functions:
const allNpc = await getAllNpc(page);
let monsters = await filterMonsters(page, allNpc, monstersToHunt);
monsters.hunt = await deleteAvoidedMonsters(page, monsters.hunt, monstersToOmit);
The first one, getAllNpc, just gets the entire npc object (the big one I mentioned above):
return await page.evaluate(() => {
  if (window.g) return g.npc;
});
The second function filters the actual monsters, and the monsters I want to kill, out of the npc:
return new Promise((resolve, reject) => {
  const validNpc = allNpc.filter(el => !!el);
  const allMonsters = validNpc.filter(e => e.lvl !== 0);
  const names = new Set(monstersNames);
  const huntMonsters = validNpc
    .filter(it => names.has(it.nick))
    .map(({ nick, x, y, grp, id }) => ({ nick, x, y, grp, id }));
  resolve({ all: allMonsters, hunt: huntMonsters });
});
I'm using a Set here so the nick lookup is O(1) rather than O(n) (avoiding an O(n^2) filter overall), and I think this is the fastest I can achieve with JavaScript. The third function is the same as this one, but it additionally filters out special monsters which I want to avoid; a sketch of it follows below.
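For illustration, that third function could look something like this (a sketch, assuming monstersToOmit is a plain array of nicks; the page argument is kept only to match the call site above):
function deleteAvoidedMonsters(page, huntMonsters, monstersToOmit) {
  const omit = new Set(monstersToOmit);
  // Same Set-based O(1) lookup as above, just inverted: keep non-omitted nicks.
  return Promise.resolve(huntMonsters.filter(m => !omit.has(m.nick)));
}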
Now my questions are:
Is there a way to get this object whenever it, and only it, changes? I know there is a function in puppeteer that can watch for DOM changes, but is there something to watch a global window object?
Can I do anything more to speed it up? I read about worker_threads in Node.js; can they help get rid of this micro lag, or is there something else? Clustering?
What I realized after a while is that the bot was "lagging" because I was passing this huge g.npc array as an argument between functions: I resolved it from getAllNpc and then passed it to filterMonsters.
When I changed it so that the .filter runs in getAllNpc, inside the evaluated script within the page context, and only an array with hundreds rather than millions of elements is resolved and passed on, it is much faster and does not cause any lag or freeze.
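A minimal sketch of that change, collapsing the three steps into a single evaluate call so that only the small result array crosses puppeteer's serialization boundary (the helper name and argument shapes are assumptions based on the code above):
async function getHuntableMonsters(page, monstersToHunt, monstersToOmit) {
  return await page.evaluate((huntNames, omitNames) => {
    if (!window.g) return [];
    const hunt = new Set(huntNames);
    const omit = new Set(omitNames);
    return g.npc
      .filter(el => !!el)                // drop empty slots
      .filter(el => el.lvl !== 0)        // keep actual monsters
      .filter(el => hunt.has(el.nick) && !omit.has(el.nick))
      .map(({ nick, x, y, grp, id }) => ({ nick, x, y, grp, id })); // small payload
  }, monstersToHunt, monstersToOmit);
}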
I am trying to understand how to prevent memory leaks when using timers. I believe the Chrome DevTools Performance tab (I am still learning about this feature) memory graph is showing me a bad pattern in terms of memory management: what looks like a "sawtooth" pattern each time a timer is fired.
I have a simple test case (1. right-click 'timer-adder.html', 2. click 'Preview' to get a preview of the loaded HTML) that involves using a constructor function to create objects that work as timers; in other words, for each update, the DOM is changed inside a setInterval callback.
//...
start: function (initialTime, prefixedId) {
    let $display = document.getElementById(prefixedId + '-display');
    if ($display.style.display === 'none') { $display.style.display = 'block'; }
    $display.textContent = TimerHandler.cycle(initialTime);
    $display = null;
    return setInterval(function () {
        this.initialTime--;
        // short-circuit: pause the timer once the countdown reaches zero
        (this.initialTime === 0 && this.pause());
        document.getElementById(prefixedId + '-display').textContent = TimerHandler.cycle(this.initialTime);
    }.bind(this), 1000);
},
//...
Attempts to minimize leakage, although not directly related to what I believe to be the problem at hand:
assigning $display to null to make it collectible (garbage collection);
clearing the interval.
What I identify as a bad pattern:
[Screenshot: DevTools memory graph showing the sawtooth pattern; originally a zoomable imgur link.]
Would fair memory usage translate to DevTools showing a horizontal line? What would be a safer approach to this side effect? Because, as the test is right now, I think that in the long run multiple active timers would overload memory and thus result in a noticeable decrease in performance.
PS: As a newcomer, I hope this is a fair presentation of the problem. Thank you.
As you can see from the last part, the garbage collector was able to free that memory. Thus there is no memory leak.
Garbage collection is a complicated task; that's why V8 tries to minimize the number of stop-the-world collections. In other words: it only cleans up memory if it needs to.
In your case it doesn't need to. There is still enough space available.
If a noticeable amount of memory persists across GC calls, then you do have a memory leak.
assigning $display to null to make it collectible (garbage collection).
That's just unnecessary. $display is not accessed in the inner function, therefore it is eligible for garbage collection after start executes. Reassigning it to null doesn't change that.
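What does matter with timers is keeping the interval id around and clearing it when a timer finishes; otherwise the callback, and everything it closes over, stays reachable forever, which is a genuine leak. A minimal sketch of that pattern (the names here are hypothetical, not taken from the test case):
const timer = {
  intervalId: null,
  initialTime: 0,
  start(initialTime, prefixedId) {
    this.initialTime = initialTime;
    this.intervalId = setInterval(() => {
      this.initialTime--;
      document.getElementById(prefixedId + '-display').textContent = String(this.initialTime);
      if (this.initialTime === 0) this.stop(); // done: release the interval
    }, 1000);
  },
  stop() {
    clearInterval(this.intervalId); // without this, the closure is never collectible
    this.intervalId = null;
  }
};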
I am writing a Whack-A-Mole game for class using HTML5, CSS3 and JavaScript. I have run into a very interesting bug where, at seemingly random intervals, my moles will stop changing their "onBoard" variables and, as a result, will stop being assigned to the board. Something similar has also happened with the holes, but not as often in my testing. All of this is completely independent of user interaction.
You guys and gals are my absolute last hope before I scrap the project and start completely from scratch. This has frustrated me to no end. Here is the Codepen, and my GitHub if you prefer to have the images.
Since Codepen links apparently require accompanying code, here is the function where I believe the problem is occurring.
// Run the game
function run() {
  var interval = (Math.floor(Math.random() * 7) * 1000);
  if (firstRound) {
    renderHole(mole(), hole(), lifeSpan());
    firstRound = false;
  }
  setTimeout(function() {
    renderHole(mole(), hole(), lifeSpan());
    run();
  }, interval);
}
What I believe is happening is this: the function runs at random intervals between 0 and 6 seconds. If the function runs too quickly, the data that is passed to my renderHole() function gets overwritten with the new data, thus causing the previous hole and mole to never be taken off the board (variable-wise, at least).
EDIT: It turns out that my issue came from my not having returns on my recursive function calls. Having come from a different language, I was not aware that, in JavaScript, functions return "undefined" if nothing else is indicated. I am, however, marking GameAlchemist's answer as the correct one due to the fact that my original code was convoluted and confusing, as well as redundant in places. Thank you all for your help!
You have made some design mistakes here and there in your code that, one after another, make the code hard to read and follow, and quite impossible to debug.
The mole() function might return a mole... or not... or create a timeout to call itself later. What will be done with the result when mole calls itself again? Nothing, so that mole will just be marked as onBoard, never to be seen again.
--->>> Have a clear definition and a single responsibility for mole(): for instance, "returns an available non-displayed mole character or null". And that's all; no count, no marking of the objects, just KISS (Keep It Simple, Stupid): it should always return a value and never trigger a timeout.
Quite the same goes for hole(): return a free hole or null; no marking, no timeout set.
render should be simplified: get a mole, get a hole; if either couldn't be found, bye bye. If a mole+hole pair was found, just set up the new mole/hole couple plus its event handler (in a separate function). Your main run function will keep trying, again and again, to spawn moles, as in the sketch below.
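A minimal sketch of that structure (the data model and helper names here are made up for illustration; each mole and hole is assumed to be a plain object with an onBoard flag):
// Pick a random free item, or null: single responsibility, no timeouts, no marking.
function pickFree(items) {
  const free = items.filter(it => !it.onBoard);
  return free.length ? free[Math.floor(Math.random() * free.length)] : null;
}

function run() {
  const mole = pickFree(moles); // moles/holes: your arrays of game objects
  const hole = pickFree(holes);
  if (mole && hole) renderHole(mole, hole, lifeSpan()); // renderHole marks them onBoard
  setTimeout(run, Math.floor(Math.random() * 7) * 1000); // try again, as in the question
}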
Years ago, I heard about a nice 404 page and implemented a copy.
In working with ReactJS, I intended to implement the same idea, but it is slow and jerky in its motion, and after a while Chrome gives it an "unresponsive script" warning, pinpointed to line 1226, "var position = index % repeated_tokens.length;", with a few hundred milliseconds' delay between successive calls. The script consistently goes beyond an unresponsive page to bringing a computer to its knees.
Obviously, they're not the same implementation, although the ReactJS version is derived from the "I am not using jQuery yet" version. But beyond that, why is it bogging down? Am I making a deep stack of closures? Why is the ReactJS port slower than the bare JavaScript original?
In both cases the work is driven by minor arithmetic and there is nothing particularly interesting about the code or what it is doing.
--UPDATE--
I see I've gotten a downvote and three close votes...
This appears to have gotten responses from people who are (a) saying something sensible and (b) contradicting what Pete Hunt and other people have said.
What is claimed, among other things, by Hunt and Facebook's ReactJS video, is that the synthetic DOM is lightning-fast, enough to pull 60 frames per second on a non-JIT iPhone. And they've left an optimization hook to say "Ignore this portion of the DOM in your fast comparison," which I've used elsewhere to disclaim jurisdiction of a non-ReactJS widget.
@EdBallot's suggestion is that it's "an extreme (and unnecessary) amount of work to create and render an element, and do a single document.getElementById." Now I'm factoring out that last bit; DOM manipulation is slow. But the responses here are hard to reconcile with what Facebook has been saying about performant ReactJS. There is a "Crunch all you want; we'll make more" attitude about (theoretically) throwing away the DOM and making a new one, which is lightning-fast because it's done in memory without talking to the real DOM.
In many cases I want something more surgical and can attempt to change the smallest area possible, but the letter and spirit of ReactJS videos I've seen is squarely in the spirit of "Crunch all you want; we'll make more."
Off to trying suggested edits to see what they will do...
I didn't look at all the code, but for starters, this is rather inefficient:
var update = function() {
  React.render(React.createElement(Pragmatometer, null),
    document.getElementById('main'));
  for (var instance in CKEDITOR.instances) {
    CKEDITOR.instances[instance].updateElement();
  }
  save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
It is doing an extreme (and unnecessary) amount of work and it is being done every 100ms. Among other things, it is calling:
React.createElement
React.render
document.getElementById
Probably, with the number of JS objects being created and released, your update function plus garbage collection is taking longer than 100 ms, effectively bringing the computer to its knees and lower.
At the very least, I'd recommend caching as much as you can outside of the interval callback. Also, there is no need to call React.render multiple times; once the component is rendered into the DOM, use setProps or forceUpdate to make it render changes.
Here's an example of what I mean:
// Note: React.render returns the mounted component instance, which is what
// forceUpdate has to be called on (React.createElement only returns a description).
var mainComponent = React.render(
  React.createElement(Pragmatometer, null),
  document.getElementById('main'));
var update = function() {
  mainComponent.forceUpdate();
  for (var instance in CKEDITOR.instances) {
    CKEDITOR.instances[instance].updateElement();
  }
  save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
Beyond that, I'd also recommend moving the setInterval code into whatever React component is rendering that stuff (the Scratchpad component?).
A final comment: one of the downsides of using setInterval is that it doesn't wait for the callback function to complete before queuing up the next callback. An alternative is to use setTimeout, with the callback setting up the next setTimeout, like this:
var update = function() {
  // do some stuff
  // when the work is done, schedule the next run
  setTimeout(update, 100);
};
setTimeout(update, 100);
I'm currently creating a program using the CreateJS suite and have hit a roadblock. I'm "spawning" items on the stage, and I wondered if there is a way to count how many currently exist on the stage.
So, for example:
if (spawnedItemCount <= 1) {
spawnItem();
}
spawnedItemCount would return the number of a particular object currently being displayed on the stage. If there is only 1 (or fewer) of these objects, then run the spawnItem function. Is this possible at all?
Thank you.
You are looking for getNumChildren():
http://createjs.com/Docs/EaselJS/classes/Container.html#method_getNumChildren
Every container has this method, but it will only return the number of direct children, not children of children; for that you will have to make a recursive call.
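For counting only a particular kind of object, a recursive sketch along those lines (checking the name property is just one hypothetical way to tag spawned items; stage is your EaselJS stage):
// Count the children of a container that match a predicate, descending into
// nested containers so that children of children are included.
function countItems(container, predicate) {
  let count = 0;
  for (let i = 0; i < container.getNumChildren(); i++) {
    const child = container.getChildAt(i);
    if (predicate(child)) count++;
    if (child instanceof createjs.Container) count += countItems(child, predicate);
  }
  return count;
}

if (countItems(stage, c => c.name === 'spawnedItem') <= 1) {
  spawnItem();
}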
The problem is as such:
In a js and asm.js based multiplayer game I've got two loops.
One handles the actual game ticks, like unit position, velocity and combat.
The other handles rendering of this world onto the canvas for the user to see.
What I'd like to happen is this: when the processor/GPU (they made those the same thing on some machines now; can't say I'm happy about that) gets encumbered too much, the rendering loop should skip and thus stop changing the canvas, i.e. freeze the game screen during a lag spike.
Meanwhile, the little processing power left is used to successfully complete the actual game tick, preventing desynchronisation with the other game clients.
(It's an RTS-like game when it comes to load, so user input, rather than the positions of all objects, is sent over the net.)
Failing this, the client would have to be kicked by the other clients, or all clients would have to pause for him to reconnect and resync, i.e. bad, bad, bad!
A sloppy, makeshift way to do this would be to use timestamps and terminate the graphics loop if it won't complete by a certain time. One would presumably do this by determining a max execution time for the packet types on the loop's stack, and immediately terminating the loop if the combined "time to execute" of all packets is too great to handle within the resource capacity that the timestamps indicate through slowdown measurement. Maybe that's radical, but one could perhaps even skip-terminate the graphics loop whenever any slowdown is detected, just to be sure to avoid desync.
So: prioritizing one loop over another (both handling ticks), and making the second one skip if a resource shortage is detected, to ensure the first one always completes its tick within each timeframe (10 ticks per second here).
Any possibilities or best-practice methods you can inform me of?
EDIT: Please focus on the ability to measure the availability of CPU resources, and on skipping/terminating one tick of the graphics loop if those resources are not sufficient to finish both loops (i.e. if the loops won't finish within the 100 ms timeframe after which the next tick should already be firing, don't start the graphics loop).
One solution would be to use a web worker to run your world-update loop, with the normal JavaScript loop doing the render. You would need to hand the state back and forth to/from the web worker, but the render loop would only draw from the updated data.
An advantage is that you could still have responsive display code on the main UI loop.
This also has the advantage that the web worker can use a different core, and with multiple web workers you could use multiple extra cores.
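A minimal sketch of that split, assuming the world state is a plain structured-cloneable object (worker.js, drawWorld and updateWorld are hypothetical names):
// main.js: rendering stays on the UI thread and only draws the latest state.
const worker = new Worker('worker.js');
let latestState = null;
worker.onmessage = (e) => { latestState = e.data; };

function render() {
  if (latestState) drawWorld(latestState); // your canvas drawing code
  requestAnimationFrame(render);
}
requestAnimationFrame(render);

// worker.js: the game tick runs here at 10 ticks per second, unaffected by rendering.
// setInterval(() => {
//   updateWorld(state);  // unit positions, velocity, combat
//   postMessage(state);  // structured-clone the state back to the UI thread
// }, 100);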
For the logical loop, I would take setInterval, and for the painting, requestAnimationFrame. Even more: the requestAnimationFrame callback also receives a timestamp, so you can track timestamps and skip a single frame if some lag appears.
the processor is able to handle other tasks while also rendering the animation
This statement is wrong: the processor can handle only one task at a time, and requestAnimationFrame is not actually the rendering; it is your callback, generic JavaScript. You can think about it like a setTimeout. The only difference is that it tries to run the callback on the framerate's next free frame; that's why it is much better than setTimeout for animations. Another good part about it: when the webpage is in the background (another tab is open), the callback won't be called until it comes back to the foreground. This saves processor time, as nothing is calculated in that callback.
Going back to your question: you now know that only one callback can be processed at a time, so if the processor is busy with the logical function at a particular moment, the animation loop's callback won't be fired. That is what gets called "lag". But as I understood it, that is actually the desired behavior: to give the logical callback function more time. There is another side, though: what if your animation function is busy when the time comes for the logical function to fire? In that case it will be fired only when the animation function ends. There is nothing to do about that. If your animation function is "heavy", you could only try to split it across 2 frames: one frame to prepare everything for the render, the second one to render.
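A minimal sketch of the timestamp-based frame skipping mentioned above (the 100 ms budget matches the 10 ticks per second from the question; drawWorld is a hypothetical render function):
var LOGIC_TICK_MS = 100;
var lastPaint = performance.now();

function paint(timestamp) {
  // If this frame arrived late, we are running behind: skip the draw
  // entirely and leave the CPU to the pending logic tick.
  if (timestamp - lastPaint <= LOGIC_TICK_MS) {
    drawWorld();
  }
  lastPaint = timestamp;
  requestAnimationFrame(paint);
}
requestAnimationFrame(paint);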
But anyway, you will never get a millisecond-perfect interval or timeout in JavaScript, as the callback won't be called until the event loop is free. To get the idea:
var seconds = 0;
setInterval(function(){ seconds++; var x = 10e8; while(--x); }, 1000);
Depending on your CPU, after 10 seconds the variable 'seconds' will be much less than 10.
And one more thing: if you really rely on time, then it is safer to use Date.now() to synchronize the next logical tick:
var setLogicalLoop = (function(){
    var _startedAt,
        _stop,
        _ms,
        _callback;

    function frame(){
        if (_stop === true)
            return;
        _callback();
        // measure the drift from the ideal schedule and correct the next delay
        var dt = Date.now() - _startedAt,
            diff = dt % _ms;
        setTimeout(frame, _ms - diff);
    }

    return function (callback, ms){
        _startedAt = Date.now();
        _stop = false;
        _ms = ms;
        _callback = callback;
        setTimeout(frame, ms);
        return function(){
            _stop = true;
        };
    };
})();
// -> start
var stopLoop = setLogicalLoop(myFunction, ms);
// -> stop
stopLoop();