Ok fellas, so we employ a multi-phase technique for pre-fetching images through dynamic JavaScript and for checking their progress:
we declare an Image object,
assign an onload handler to it,
assign a source to it through its src property,
and check the 'complete' flag to see if the image is already available from cache (in which case onload would never fire).
The thing is... sometimes... it simply doesn't work, and nobody knows why.
The image might be in cache, but the 'complete' flag is not set; or Chromium simply refuses to issue a GET request... nobody knows why. And then, all of a sudden, if I wipe the network pane in Chrome DevTools, MAYBE the image gets fetched...
sample code:
this.mImageURLs[i].lastTry = now;
let item = new Image();
// attach the handler before setting src, so a fast load can't slip past it
item.onload = this.assetLoaded.bind(this, this.mImageURLs[i]);
item.src = this.wrapURIAroundDataSource(this.mImageURLs[i].data);
// if the image came straight from cache, onload may never fire, so check now
if (item.complete && ((item.width + item.height) > 0)) {
    this.assetLoaded(this.mImageURLs[i]);
}
It really is a pain in the ass that Chromium 'misbehaves' every now and then.
It is NOT as if it suffices to schedule downloads of 100 objects and trust that fetching them would (at least!) be attempted no matter what - that is not the case, and we are 100% positive of that.
One has to employ a variety of fancy techniques - resuming failed attempts, sliding windows to minimize the number of concurrent requests, etc. - to maximize the likelihood that all the assets arrive WHEN we want them to, before proceeding any further with our logic; and we won't proceed unless we've verified that the assets have been fetched.
And even then, it is really difficult to keep track of things, as described in this very thread.
Of course, we've triple-checked that there's no issue on the server side; we monitor things with Wireshark... GET requests are not being issued, and the 'complete' flag is not set.
And every now and then everything works just as expected. Other times, Chromium decides to fire the GET requests 60+ seconds after they were requested, even though there is no heavy load on its engine... multiple times for the same asset, over a couple of HTTP/1.1 connections...
ideas?
The MDN documentation for the complete flag says this:
It's worth noting that due to the image potentially being received asynchronously, the value of complete may change while your script is running.
While I have not used the complete flag for exactly what you're trying to accomplish, I have made extensive use of onload for images and of testing to see whether they have already been loaded.
What appears to be happening is that setting the item.src attribute kicks off the image loading process (where Chromium will determine whether the asset is in cache, needs to be refreshed from the server, or is a new, never-before-seen asset, and react accordingly). That loading process is asynchronous and may or may not set .complete immediately.
I believe you want your onload function to query the .complete bit and do the additional check for height and width there. Moving the check into onload, instead of checking .complete immediately after setting .src, will eliminate the race condition you're currently seeing.
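For illustration, here's a minimal sketch of that pattern. It's an assumption about how your loader could be restructured, not your actual code; handleAssetLoaded and someURL are placeholders:

let item = new Image();
item.onload = function () {
    // by the time onload fires the data is in, so .complete and the
    // dimensions are safe to inspect here
    if (item.complete && (item.width + item.height) > 0) {
        handleAssetLoaded(item); // hypothetical stand-in for your assetLoaded
    }
};
item.src = someURL; // setting src starts the (possibly asynchronous) load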
Related
I'm using setInterval (within 3 Tampermonkey scripts) to check three different public websites every few seconds, so I can be alerted when specific text appears. These alerts are for freelance work offers, which can expire within seconds so I have to be quick.
It all works correctly, except when I'm working in a different tab or app: after about 6 minutes, setInterval starts to "trigger" for the background tab once per minute instead of once every few seconds.
Any suggestions how to fix this? Is it possible to use Date.now() in some way?
Note, I'm a complete beginner, willing to learn but need to keep things as simple as possible.
I've tried reloading the page every 3 minutes using window.location.reload(), but that doesn't work. I guess I could create a script to activate and focus the tab every few minutes, but that would interrupt anything I was working on. In case something else in my script was causing a problem, I tested with the following barebones script against https://www.google.co.uk/, but the same happens:
var i = 0;
setInterval(function() {
    console.log("log: i:" + i);
    i = i + 1;
    if (i == 15) {
        i = 0;
        window.location.reload();
        console.log("reloaded window");
    }
}, 10000);
After a few minutes, i is incremented only once per minute - even following the window reload.
I've looked at this question
It mentions "workers", but can these be used within Tampermonkey on a public website I don't own? It also provides a link which suggests a workaround of playing an almost inaudible audio file - but I don't know whether playing that within my Tampermonkey script would work.
I see there are a number of workarounds here but I'm not sure if I can use any of them.
For example, can MutationObserver be used within a Tampermonkey script to detect changes on a public website? Even if it can, presumably I'd have to reload the webpage every time I needed to check? Currently I'm using XMLHttpRequest instead of loading the webpage (far quicker, and uses less CPU).
Interestingly, the above link seems to suggest that setInterval and setTimeout are specifically targeted for throttling; I wonder if that means I could use some other function instead.
I've also seen this but I guess I can only use that for a website I own?
I can think of a few options.
Instead of having three scripts, use a single script, and run that single script on every site (with // #match *://*/*). Then, with that single script, set the interval. Whenever the interval callback runs, use Tampermonkey's GM_setValue and GM_getValue for cross-domain storage to coordinate executions - a callback will first check with getValue whether the last check was more than 3 minutes ago. If so, it calls setValue with the current date and performs the check. This way, even if the script is running on 100 different tabs (some in active tabs, some in background tabs), it'll still run the check once every few minutes.
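A minimal sketch of that coordination, assuming @grant GM_getValue and @grant GM_setValue in the script header (the storage key and performCheck are placeholders, and the 3-minute window is illustrative):

setInterval(function () {
    var last = GM_getValue('lastCheck', 0);
    var now = Date.now();
    if (now - last > 3 * 60 * 1000) {   // only act if 3+ minutes since the last check
        GM_setValue('lastCheck', now);  // claim the slot before doing the work
        performCheck();                 // hypothetical function holding your actual check
    }
}, 10000); // even throttled to once a minute, some tab will land in the window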
To perform the check, use Tampermonkey's GM_xmlhttpRequest to get around same-origin restrictions; make a request to the three sites, and parse the responses into documents using DOMParser so you can programmatically search through them for the elements you're looking for.
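Something along these lines, assuming @grant GM_xmlhttpRequest and an @connect line for each site (url, phrase and onFound are placeholders):

function checkSite(url, phrase, onFound) {
    GM_xmlhttpRequest({
        method: 'GET',
        url: url,
        onload: function (response) {
            // parse the raw HTML into a searchable document
            var doc = new DOMParser().parseFromString(response.responseText, 'text/html');
            if (doc.body.textContent.indexOf(phrase) !== -1) {
                onFound(url); // e.g. play a sound or show a notification
            }
        }
    });
}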
Perform the checks from a backend app instead of from a browser - for example, with Node and Puppeteer, which won't have throttling issues. To have the results communicated to you, you could either have a userscript open a websocket to your local webserver, or you could integrate the webserver into an Electron app instead. This is the approach I'd prefer; I think it's the most robust.
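A rough sketch of the Node/Puppeteer side (the URL and search phrase are placeholders, and getting the result back to you - websocket, notification, etc. - is left out):

const puppeteer = require('puppeteer');

async function check() {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto('https://example.com/offers'); // placeholder site
    const text = await page.evaluate(() => document.body.innerText);
    await browser.close();
    if (text.includes('offer')) {                  // placeholder phrase
        console.log('match found');                // push this to yourself somehow
    }
}

setInterval(check, 10000); // Node timers are not throttled like background tabs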
Use workers, which have worked for some.
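If you go the worker route, a minimal sketch looks like this; the worker body is inlined via a Blob so no separate file is needed, and performCheck is again a placeholder:

var src = 'setInterval(function () { postMessage(0); }, 10000);';
var worker = new Worker(URL.createObjectURL(new Blob([src], { type: 'text/javascript' })));
worker.onmessage = function () {
    performCheck(); // runs every tick; workers are reportedly not throttled
};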
A web application has certain timeliness constraints. How can I check the time from invocation of a JS function to having the information visible in the browser?
Clearly I can start a stopwatch, but on what event should I stop it?
Modern browsers offer the Navigation Timing API, which you can use to get this kind of information. Which information from it you use is up to you, probably domComplete or loadEventStart (or, of course, loadEventEnd if you want to know when everything is fully loaded, but you could do that with window.onload). This tutorial may be useful.
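For example, with the entry-based form of the API (a sketch; which metric you pick depends on what "visible" means for your page):

window.addEventListener('load', function () {
    var nav = performance.getEntriesByType('navigation')[0];
    // values are milliseconds relative to the start of navigation
    console.log('domComplete:', nav.domComplete);
    console.log('loadEventStart:', nav.loadEventStart);
});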
If you're talking about requesting something via ajax after page load (you've said "...from invocation of a JS function to having the information visible in the browser..."), adding that to the page, and seeing how long that took, you'd stop the timer after you were done appending the elements to the DOM, immediately before returning from your ajax onreadystatechange handler callback. Or, if you want to be really sure the information has been rendered, use setTimeout(function() { /*...end the timer...*/ }, 0); from that callback instead, which yields back to the browser for the minimum possible time, giving it a chance to render (it doesn't render while JS is running).
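A sketch of that ajax case (the endpoint URL and appendToDom are placeholders):

var start = performance.now();
var xhr = new XMLHttpRequest();
xhr.open('GET', '/data'); // placeholder endpoint
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        appendToDom(xhr.responseText); // hypothetical DOM-update helper
        setTimeout(function () {
            // we yielded to the browser first, giving it a chance to render
            console.log('elapsed:', performance.now() - start, 'ms');
        }, 0);
    }
};
xhr.send();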
I am trying to measure the time it takes for an image in WebGL to load.
I was thinking about using gl.finish() to get a timestamp before and after the image has loaded and subtracting the two to get an accurate measurement, but I couldn't find a good example of this kind of usage.
Is this sort of thing possible, and if so, can someone provide sample code?
It is now possible to time WebGL2 executions with the EXT_disjoint_timer_query_webgl2 extension.
const ext = gl.getExtension('EXT_disjoint_timer_query_webgl2');
const query = gl.createQuery();
gl.beginQuery(ext.TIME_ELAPSED_EXT, query);
/* gl.draw*, etc */
gl.endQuery(ext.TIME_ELAPSED_EXT);
Then sometime later, you can get the elapsed time for your query:
const available = gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE);
if (available) {
    const elapsedNanos = gl.getQueryParameter(query, gl.QUERY_RESULT);
}
A couple of things to be aware of:
only one timing query may be in progress at once.
results may become available asynchronously. If you have more than one call to time per frame, you may consider using a query pool.
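One way to handle the "sometime later" part is to poll each frame until the result is ready, also checking the disjoint flag, which signals that the timing result should be discarded (a sketch, reusing gl, ext and query from above):

function pollQuery() {
    var available = gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE);
    var disjoint = gl.getParameter(ext.GPU_DISJOINT_EXT);
    if (available && !disjoint) {
        var elapsedNanos = gl.getQueryParameter(query, gl.QUERY_RESULT);
        console.log('GPU time:', elapsedNanos / 1e6, 'ms');
    } else if (!disjoint) {
        requestAnimationFrame(pollQuery); // not ready yet; try again next frame
    }
}
requestAnimationFrame(pollQuery);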
No it is not.
In fact, in Chrome gl.finish is just a gl.flush. See the code and search for "::finish".
Because Chrome is multi-process and actually implements security in depth, the actual GL calls are issued in another process from your JavaScript. So even if Chrome did call gl.finish, it would happen in another process and, from the POV of JavaScript, would not be accurate for timing in any way, shape, or form. Firefox is apparently in the process of doing something similar, for similar reasons.
Even outside of Chrome, every driver handles gl.finish differently. Using gl.finish for timing is not useful because it's not representative of actual speed: it includes stalling the GPU pipeline. In other words, timing with gl.finish includes lots of overhead that wouldn't happen in real use, and so is not an accurate measurement of how fast something would execute under normal circumstances.
There are GL extensions on some GPUs to get timing info. Unfortunately they (a) are not available in WebGL and (b) will likely never be, as they are not portable: they can't really work on tiled GPUs like those found in many mobile phones.
Instead of asking how to time GL calls: what specifically are you trying to achieve by timing them? Maybe people can suggest a solution to that.
Being client-based, WebGL event timings depend on the current load on the client machine (CPU load), the GPU load, and the implementation of the client itself. One way to get a very rough estimate is to measure the round-trip latency from server to client using an XMLHttpRequest (http://en.wikipedia.org/wiki/XMLHttpRequest). By finding the delay from server-measured time to local time, a possible measure of loading can be obtained.
I have a simple web-page (PHP, JS and HTML) that is displayed to illustrate that a computation is in process. This computation is triggered by a pure JavaScript AJAX-request of a PHP-script doing the actual computations.
For details, please see
here
What the actual computation is, does not play a role, so for simplicity, it is just a sleep()-command.
When I execute the same code locally (the browser calls the website under localhost: Linux, Apache, php-mod) it works fine, independent of the sleep time.
However, when I let it run on a different machine (not localhost, but also Linux, Apache, php-mod), the PHP script does run through (results are created), but the AJAX request does not get any response - there is no onreadystatechange - if the sleep time is >= 900 seconds. When the sleep time is < 900 seconds it works nicely and the AJAX request is correctly terminated (readyState == 4 and status == 200).
The Apache and PHP configurations are more or less default, and I have already verified the crucial options there (max_execution_time etc.), but none seems to apply here, as they are either shorter (< 1 min.) or much bigger, e.g. for the garbage collector (24 min.).
So I am absolutely confused about what may cause this. I am thinking it might be network-related, although I didn't find any appropriate option in my router or elsewhere.
Also, no error is reported in the Apache logs or by PHP (error logging to file).
Letting the JavaScript with the AJAX request display the request.status upon successful return, I noticed something surprising: when I hit "Esc" in the browser window after the sleep is over, I do get the status 200 displayed - just not automatically, as it should happen.
In any case, I am hoping that you may have an idea of how to circumvent this problem.
Maybe some dummy communication between client and server every 10 minutes or so might do the trick, but I don't have an idea how best to do something like this, especially making it transparent to the user and not interfering with the actual work of doing the computations/sleep.
Best,
Shadow
P.S. The post that I am referencing was written by me, but it seems to transmit the idea that the problem might be related to some config option, which seems not to be the case. This is why I am writing this post here, basically asking for a way to circumvent such an issue regardless of its origin.
I'm from the other post you mentioned!
Now that I know more about what you are trying to do - monitor a possibly long-running server job - I can recommend something which should turn out a lot better. It's not a direct answer to your question, but it's a design consideration which, by its nature, includes a more suitable solution.
Basically, unlink the action of "starting" the server-side task from monitoring its progress.
execute.php kicks off your background job on the server, and immediately returns.
Another script/URL (let's call it status.php) is available to check the progress of the task execute.php is performing.
When status.php is requested, it won't return until it has something to report, UNLESS 30 seconds (or some other fixed amount of time) passes, at which point it returns a value that you know means "check again". Do this in a loop, and you can be notified almost immediately when the background task has completed; a client-side sketch follows below.
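A hedged sketch of that client-side loop, using the script names from the steps above (the response convention - the literal string "check again" - and showResult are assumptions):

function pollStatus() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'status.php'); // long-polls: returns on progress or after ~30s
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            if (xhr.responseText === 'check again') {
                pollStatus();                 // nothing yet; ask again
            } else {
                showResult(xhr.responseText); // hypothetical display helper
            }
        }
    };
    xhr.send();
}

var start = new XMLHttpRequest(); // kick off the job, then begin polling
start.open('GET', 'execute.php');
start.onload = pollStatus;
start.send();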
More details on an approach similar to this: http://billhiggins.us/blog/2011/04/27/resty-long-ops
I hope this helps give you some design ideas to address your problem!
I have created a recursive script that checks what address you're on, and then checks in another file hierarchy whether the folder you're in also exists in that place. For example, say you're on somerandomsite.com/example/folder/folder1/folder1_1; then you might want to redirect the user to somerandomsite.com/another/example/folder/folder1/folder1_1 if that folder exists, and otherwise just redirect him to somerandomsite.com/another. (Of course I have some special cases as well: if folder/folder1/ exists but not folder/folder1/folder1_1, then redirect to somerandomsite.com/another/example/folder/folder1/, etc.)
Now to my problem: I have a really slow recursive implementation. Say there are 50 folders in folder "example", 100 folders in "folder", more in "folder1", and another 100 at the last level; then my implementation takes a long time to "match" all the names.
So some browsers display an error message that "some script has stopped working", since it is taking too long to execute. My question is: is there some way to tell the browsers to let the script finish?
You can find the code for the script here.
And for those who wonder how I perform the directory searches: I'm creating XMLHttpRequests to the folders, getting back an HTML page listing all the folders, and then doing a simple pattern match for each folder level. In the example above I do 4 XMLHttpRequests:
One to somerandomsite.com, pattern match for "example"
One to somerandomsite.com/example/, pattern match for "folder"
One to somerandomsite.com/example/folder/, pattern match for "folder1"
One to somerandomsite.com/example/folder/folder1/, pattern match for "folder1_1"
No. That feature is designed explicitly for slow scripts, as you've admitted yours is. It's up to the user to decide whether to continue. If there were an escape hatch, all sorts of harmful scripts would use it.
I doubt that would occur with AJAX calls, since there is no actual script running while the request is made. Are you positive you don't have any infinite loops in your code?
Oh, there is a problem with your code:
handleXML is your event handler for readystatechange, and it calls checkState, which spawns a timeout to call itself every second. So every time the state changes, you spawn another repeating checkState.
You don't need checkState to call itself, because handleXML already calls it and is itself called every time the state changes anyway. Also, if the status returned isn't 200, checkState will call itself forever. I think browsers will be much happier with you if you change your checkState function to:
var checkState = function(xmlhttp, callback) {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
        callback();
    }
};
Good lord, do that on the server. You're already "getting httprequest lists of directories within a directory" - just do the whole thing on the server.
No. Look into using Web Workers to get the job done. But be wary of compatibility with older browsers.
The script is doing that because too much is happening at once.
Wait to start a new XMLHttpRequest until the previous one has finished, for example as in the sketch below.
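A sketch of what that serialization could look like (listOfDirectoryURLs and handleDir are placeholders for your own data and matching logic):

function fetchSequentially(urls, index) {
    if (index >= urls.length) return;
    var xhr = new XMLHttpRequest();
    xhr.open('GET', urls[index]);
    xhr.onload = function () {
        handleDir(xhr.responseText);        // hypothetical per-directory handler
        fetchSequentially(urls, index + 1); // only now start the next request
    };
    xhr.send();
}
fetchSequentially(listOfDirectoryURLs, 0);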
Other than using a faster browser, you can't force a script to keep running.