I wonder if it is possible to handle all missing resources of a web page in one place, including missing images referenced from CSS via background: url(...).
I have checked document.addEventListener("error", eventHandler, true);, window.onerror and also performance.getEntries(), but none of them reports missing images referenced from CSS, even from inline styles.
Ideally it would also handle missing resources from other origins, e.g. broken fonts referenced from CDNs.
A possible solution could be to list all resources using window.performance.getEntriesByType("resource"), then try to download each of them using ajax to see if it produces a 404. However, there are three major drawbacks to consider:
this isn't an event listener, so you have to execute the check after a reasonable amount of time to let the browser load all resources, or execute it periodically;
you must avoid checking dynamic resources, which could return a different result than the previous call, or the more dangerous case where a call to a resource updates data on the remote server (a trivial example could be a view counter on an image);
there is a chance that the ajax call results in a different status code even if the resource is static, for example if the client is on an unstable network and the connection goes down between the two calls.
Anyway here is a starting point:
window.onload = function() {
  setTimeout(function() {
    var resources = window.performance.getEntriesByType("resource");
    resources.forEach(function (resource) {
      $.get(resource.name).fail(function (e) {
        var msg = 'Failed to load resource ' + resource.name + ' requested by ' + resource.initiatorType;
        var div = $('<div/>').text(msg);
        $('body').append(div);
      });
    });
  }, 1000);
};
body {
  background-image: url('bar.jpg');
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<img src="foo.jpg"/>
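As a variation on the timeout-based check above, a PerformanceObserver can react to resource entries as the browser records them instead of waiting a fixed delay. This is only a sketch (the function names are mine), and since it still re-fetches each URL, the caveats about dynamic resources and flaky networks still apply:

```javascript
// Sketch: observe resource entries as they arrive, then re-fetch each
// URL to see whether it actually loads. fetchFn is injectable so the
// check can be exercised without a network.
function checkEntry(entry, report, fetchFn) {
  return fetchFn(entry.name).then(function (res) {
    if (!res.ok) {
      report("Failed to load resource " + entry.name +
             " requested by " + entry.initiatorType);
    }
  });
}

function watchForBrokenResources(report, fetchFn) {
  var observer = new PerformanceObserver(function (list) {
    list.getEntries().forEach(function (entry) {
      checkEntry(entry, report, fetchFn);
    });
  });
  // buffered: true also delivers entries recorded before observe() ran.
  observer.observe({ type: "resource", buffered: true });
  return observer;
}
```

In a page you would call something like `watchForBrokenResources(console.warn, function (u) { return fetch(u); });`.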
I'm trying to write a web extension that stops the requests from a url list provided locally, fetches the URL's response, analyzes it in a certain way and based on the analysis results, blocks or doesn't block the request.
Is that even possible?
The browser doesn't matter.
If it's possible, could you provide some examples?
I tried doing it with Chrome extensions, but it seems it's not possible.
I've heard it is possible in Firefox, though.
I think that this is only possible using the old webRequestBlocking API which Chrome is removing as a part of Manifest v3. Fortunately, Firefox is planning to continue supporting blocking web requests even as they transition to manifest v3 (read more here).
In terms of implementation, I would highly recommend referring to the MDN documentation for webRequest, in particular their section on modifying responses and their documentation for the filterResponseData method.
Mozilla have also provided a great example project that demonstrates how to achieve something very close to what I think you want to do.
Below I've modified their background.js code slightly so it is a little closer to what you want to do:
function listener(details) {
  if (mySpecialUrls.indexOf(details.url) === -1) {
    // Ignore this url, it's not on our list.
    return {};
  }

  let filter = browser.webRequest.filterResponseData(details.requestId);
  let decoder = new TextDecoder("utf-8");
  let encoder = new TextEncoder();

  filter.ondata = event => {
    let str = decoder.decode(event.data, {stream: true});
    // Just change any instance of Example in the HTTP response
    // to WebExtension Example.
    str = str.replace(/Example/g, 'WebExtension Example');
    filter.write(encoder.encode(str));
    filter.disconnect();
  }

  // This is a BlockingResponse object; you can set parameters here to e.g. cancel the request if you want to.
  // See: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest/BlockingResponse#type
  return {};
}

browser.webRequest.onBeforeRequest.addListener(
  listener,
  // 'main_frame' means this will only affect requests for the main frame of the browser
  // (e.g. the HTML for a page rather than the images, CSS, etc. that are loaded afterwards).
  // You might want to look into whether you want to expand this.
  {urls: ["*://*/*"], types: ["main_frame"]},
  ["blocking"]
);
Correction:
The above example only works properly if the response data fits in one chunk. If it is larger (and you still want to inspect the entirety of the response data), you would need to put all of the data into a buffer, and then work on it once all data has been received. See the document here for more information: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest/StreamFilter/ondata#webextension_examples (the code section titled "This example combines all buffers into a single buffer" would be of most interest to you I think).
In terms of using this API to block responses: response data is only delivered to the page if you call filter.write(), so if you don't like the response, you can simply not call it (just call filter.close()) and an empty response will be returned. You can also return only part of the full response body by filter.write()ing only the bits that you want to return.
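For the buffering approach, here is a minimal sketch (the function names are mine; `filter` is the StreamFilter returned by filterResponseData(), and `shouldBlock` stands in for whatever analysis you want to run on the full body):

```javascript
// Sketch: buffer every chunk, then inspect the whole body once the
// stream ends, and either forward it or suppress it.
function inspectWholeResponse(filter, shouldBlock) {
  const decoder = new TextDecoder("utf-8");
  const chunks = [];

  filter.ondata = event => {
    // event.data is an ArrayBuffer; stash a view of it for later.
    chunks.push(new Uint8Array(event.data));
  };

  filter.onstop = () => {
    const body = concatChunks(chunks);
    const text = decoder.decode(body);
    if (shouldBlock(text)) {
      // Close without writing anything: the page gets an empty response.
      filter.close();
    } else {
      filter.write(body);
      filter.close();
    }
  };
}

// Combine an array of Uint8Arrays into one.
function concatChunks(chunks) {
  const total = chunks.reduce((n, c) => n + c.byteLength, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset);
    offset += c.byteLength;
  }
  return out;
}
```

Note the trade-off: buffering the whole body delays the response until the stream finishes, which the chunk-by-chunk version in the example above avoids.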
I let users on my VanillaForums forum choose whether or not to use the https protocol and I want to test if I can change image sources on the client's side using jQuery.
I want this code to change the protocol in the image source links to // instead of http:// and load before the images have loaded, so I used .ready():
$(document).ready(function () {
  if (window.location.protocol == "https:") {
    var $imgs = $("img");
    $imgs.each(function () {
      var img_src = $(this).prop("src");
      if (img_src.indexOf("http://") < 0) return;
      var new_img_src = img_src.replace("http:", "");
      $(this).prop("src", new_img_src);
    });
  }
});
While it does work in changing the image sources, the URL bar still shows this:
And the console gives a warning that http://someimageurl... is not secure.
Do I need to move the code to the top of the page or will that not make a difference?
It needs to be done server side for the browser not to throw an insecure connection warning. The file with the responsible code is /library/core/functions.render.php, which you can see here.
$PhotoURL is the variable that needs to be changed. Using the following makes sure all images are loaded over the https: protocol: str_replace('http://', 'https://', $PhotoURL).
I usually don't mind global scope on smaller software but in something as big as Vanilla it's like finding a needle in a haystack.
I couldn't find any other fixes for Vanilla in particular so I hope this helps people.
Scenario
A page invokes a remote script available at this url: http://url.to.script/myScript?ScriptParamsList. Let's assume that:
Async execution is not required.
Displaying output is not required.
The script is called on a button click event. Let Handler() be the javascript event handler:
function Handler()
{
  //invoke the remote script
}
Several methods are available to implement Handler() function:
script vs img tag:
document.write('<script src="http://url.to.script/myScript?ScriptParamsList" type="text/javascript"></script>');
document.write('<img src="http://url.to.script/myScript?ScriptParamsList" />');
jQuery .html() vs .load():
$('#TargetDiv').html('<img src="http://url.to.script/myScript?ScriptParamsList" />');
$('#TargetDiv').load('http://url.to.script/myScript?ScriptParamsList');
Question
What are the advantages and disadvantages of each?
document.write will replace your current document when it's called after the document is loaded. Never use this method.
Using <script> allows you to fetch a request from an external domain, without being hindered by the same origin policy. Additionally, in the server's response, you can add and execute JavaScript, which might be useful.
Using .html('<img ...>') makes no sense, unless your server returns an image with meaningful data. If you intend to only trigger a server request, the following would be better:
new Image().src = 'http://url.to.script/myScript?...';
$('..').load is not going to work if the URL is located at a different domain, because of the same origin policy.
I vote for the new Image().src = '..'; method. If you dislike this syntax, and want more jQuery, use:
$('<img>').attr('src', 'http://...');
Note: The result may be cached. If you don't want this to happen, append a random query string, to break the cache (eg. url = url + '&_t=' + new Date().getTime()).
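Putting that together, a small helper (the function names are mine) that appends the cache-busting parameter and fires the request:

```javascript
// Append a cache-busting query parameter, handling URLs that do or
// don't already have a query string.
function cacheBustUrl(url) {
  var sep = url.indexOf("?") === -1 ? "?" : "&";
  return url + sep + "_t=" + new Date().getTime();
}

// Trigger the request without caring about the response.
function pingScript(url) {
  new Image().src = cacheBustUrl(url);
}
```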
In my webapp I need to check whether another server (on a different IP address) is up. I know I can't use direct ajax requests for this because of cross-domain restrictions. I cannot use JSONP because I don't have any control over the other server. I also need to check repeatedly, since the server may be unavailable some of the time. None of this makes it easy.
However since I only care about a request to the remote server succeeding (and I don't care about any data that the server might return), I was hoping that it would be possible.
I can't do this on the server side because the server side may or may not have visibility to the other server, what I really care about is that the client (web browser) have visibility.
I can append a new iframe to the page with the src equal to the remote server address but then how can I check when (IF) an iframe contents have been loaded successfully? Also how do I make periodic requests? By creating new iframes repeatedly? That seems quite unclean.
Is there any way to do this cleanly and reliably?
function testServer(server, cb) {
  var img = new Image();
  img.onload = function () {
    cb.call(this, img);
  };
  // Without an error handler, a failed load would never trigger a retry.
  img.onerror = function () {
    cb.call(this, img);
  };
  img.src = server;
}

var url = "http://placekitten.com/200/300/";
testServer(url, function _self(img) {
  // placekitten.com/200/300 serves a 200x300 image, so a successful
  // load has naturalWidth 200; a failed load reports 0.
  if (img.naturalWidth === 200) {
    alert("win");
  } else {
    // Retry after 10 seconds; setTimeout rather than setInterval,
    // so retries don't pile up on repeated failures.
    setTimeout(function () {
      testServer(url, _self);
    }, 10000);
  }
});
You want to load an Image from the remote server, then check that the image dimensions are what you expect.
If you can't place an image on the remote server, then you will need to use CORS.
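With CORS in place, the image trick can also be replaced by fetch. This sketch treats any network-level failure or blocked response as "down", which is an assumption: a CORS rejection does not necessarily mean the server is unreachable.

```javascript
// Sketch: probe the server directly. Cross-origin, this only works if
// the remote server sends CORS headers (Access-Control-Allow-Origin).
async function isServerUp(url) {
  try {
    const res = await fetch(url);
    return res.ok; // true for any 2xx status
  } catch (e) {
    // Network failure, DNS error, or the browser blocked the response.
    return false;
  }
}
```

For periodic checks you can call this from setTimeout, the same way as the image-based version above.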
A similar solution to Raynos' using jQuery:
$('<img src="http://domain.tld/path/to/a.png">').on("load", function () {
  console.log("domain.tld is up.");
});
I know I can't use ajax for this because of cross domain restrictions.
Not true. See JSONP.
I'm using slightly modified sample code provided by the YUI team. When my source responds with something other than JSON (or just has a JSON syntax error) my browser (Safari) aborts script processing, preventing me from notifying the user there was a problem.
I'm definitely no JS guru, so this code may be a lot uglier than it has to be. The code is, roughly:
YUI().use("dump", "node", "datasource-get", "datasource-jsonschema", function (Y) {
  var myDataSource = new Y.DataSource.Get({
        source: "/some/json/source/?"
      }),
      myCallback = {
        success: function (e) {
          myResponse = e.response;
          doSomething(myDataSource);
        },
        failure: function (e) {
          Y.get("#errors").setContent("<li>Could not retrieve data: " + e.error.message + "</li>");
        }
      };

  myDataSource.plug(Y.Plugin.DataSourceJSONSchema, {
    schema: {
      resultListLocator: "blah.list",
      resultFields: ["user", "nickname"]
    }
  });

  myDataSource.sendRequest("foo=bar", myCallback);
});
I've tried wrapping the "var myDataSource" block in a try/catch, and I've also tried wrapping the whole YUI().use() block.
Is it possible to catch syntax errors? Do I have to replace the all-in-one DataSource.Get call with separate IO and parse calls?
Since you are requesting a local script, you can use Y.io + Y.JSON.parse inside a try/catch or Y.DataSource.IO + Y.DataSchema.JSON (+ Y.JSON).
The benefit of DataSource.Get is that it avoids the Same Origin Policy. However, it is less secure and less flexible. If it is not necessary, you should avoid using it.
The contract of DataSource.Get is that the server supports JSONP. The way this works is that Get adds a script node to the page with a src=(the url you provided)&callback=someDataSourceFunction.
The browser will request the resource at that url and one of two things will happen:
the server will respond with a JavaScript string in the form of someDataSourceFunction({"all":"your data"}); or
the server will return some text that can't be parsed as JavaScript.
In either event, that string is treated as the contents of a script node--it is parsed and executed. If it cannot be parsed, the browser will throw an error. There's no stopping this. While JSONP is technically not under the spec constraints of true JSON (even invalid JSON should parse and execute), you should always use pure JSON, and always use a server side lib to generate the JSON output (look on http://json.org for a list of libs in every conceivable language). Don't hand-roll JSON. It only leads to hours of debugging.
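To make that concrete, here is a sketch of what the browser effectively does with a JSONP response (the callback name is hypothetical; eval() stands in for the script node being parsed and executed, and with a real script node a parse error at this step cannot be caught):

```javascript
// The page defines the callback named in the request's ?callback= parameter.
let received = null;
function someDataSourceFunction(data) {
  received = data;
}

// The server's entire response body is a single call wrapping the JSON.
const serverResponse = 'someDataSourceFunction({"all": "your data"});';

// The browser parses and executes the response as script; eval() stands
// in for that step here.
eval(serverResponse);
// received is now { all: "your data" }
```

If the server instead returned text that is not valid JavaScript, the parse step itself fails, which is exactly the uncatchable error described above.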
The problem is probably that the error happens at some level in the browser (JavaScript parsing) before YUI even gets a chance to report a failure.
It is notoriously hard to catch this kind of error in Safari, which does not implement window.onerror. In order to catch more errors with my Javascript library, bezen.org, I added try/catch in places where asynchronous code is triggered:
dynamic script loading (equivalent to your JSON download)
setTimeout/setInterval: I wrapped and replaced these browser functions to insert a try/catch which logs errors
You may be interested in having a look at the source code of the corresponding modules, which may be useful to you as is or as hints for the resolution of your problem:
bezen.dom.js Look for safelistener in appendScript method
bezen.error.js Check safeSetTimeout/safeSetInterval and catchError
Maybe try this before you "doSomething":
try {
  var test = YAHOO.lang.JSON.parse(jsonString);
  ...
}
catch (e) {
  alert('invalid json');
}